I'm encountering errors with an XFS filesystem on a couple of production
servers. I can't reproduce it on demand, but I have hit the same issue on
multiple machines running identical configurations. I'm on kernel 2.4.28.
Basically, even though new data is being written to the filesystem, the Use%
actually goes DOWN! After a while this ends up with "Used" showing -64Z, and
attempting a repair usually lands everything in lost+found :(
server2 root # df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/root             8.0G  924M  7.1G  12% /
/dev/md0               46M  9.6M   34M  23% /boot
/dev/md3              224G  -64Z  282G 101% /Storage
none                  251M     0  251M   0% /dev/shm
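In case it helps anyone trying to reproduce this, a trivial loop like the one
below is how I would catch the counter going backwards (mount point as in the
df above; the interval and log path are just examples):

  while true; do
      date
      df -k /Storage | tail -1
      sleep 60
  done >> /root/storage-df.log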
When I do a du -k on a directory on /Storage, it gives back strange values
that are simply impossible if you add up the results of an 'ls -al'. While
doing this, I also notice immediately that the filesystem has gotten really slow...
bacardi data # du -k
When I then cd into 0/0/0 and do a 'du -sk *':
bacardi 0 # ls -al 000fe1c2b17a7b4b4d2c4eea341cfb08.65536.db
-rw-------    1 root     root    28 Oct 30 18:53 000fe1c2b17a7b4b4d2c4eea341cfb08.65536.db
The correct file size is indeed 28 bytes! The file mentioned here is just an
example; there are actually quite a few files like that :(
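For what it's worth, this is roughly how I compare what du claims against the
apparent sizes from ls (the path is only an example based on the 0/0/0
directory above; adjust as needed):

  cd /Storage/0/0/0
  du -sk .
  # sum the size column of ls -l for comparison (result in KB)
  ls -l | awk '{ sum += $5 } END { printf "%.1f KB\n", sum/1024 }'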
Unmounting/remounting the filesystem makes the issue go away temporarily, but
it comes back after a couple of hours of operation.
I did an xfs_check / xfs_repair before, but that just dumped (ALMOST)
EVERYTHING into lost+found, so I'm losing data :(
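If anyone wants to suggest a different repair approach: the sequence I have in
mind for the next attempt is roughly the one below (device name taken from the
df output above), with a no-modify pass first so nothing gets touched until
the report has been looked at:

  umount /Storage
  xfs_check /dev/md3
  xfs_repair -n /dev/md3   # -n: no-modify mode, only reports problems
  # only after reviewing the report:
  # xfs_repair /dev/md3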
The fact that I'm having this on multiple systems is what worries me. The
filesystems are created with default options, but are mounted with
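If the exact option list matters, it can be read back from the kernel on the
affected box, e.g.:

  grep /Storage /proc/mounts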
Does this sound familiar to any of you? Thanks a bunch!