On a server I recently built, the XFS file system became corrupted twice,
and I had to abandon XFS on it.
During file write operations, the system would completely lock up, requiring a
reboot, and upon rebooting the XFS file system could not be mounted.
Dmesg output:
XFS mounting filesystem sd(8,1)
Starting XFS recovery on filesystem: sd(8,1) (dev: 8/1)
XFS: xlog_recover_process_data: bad clientid
XFS: log mount/recovery failed
XFS: log mount failed
Starting XFS recovery on filesystem: sd(8,1) (dev: 8/1)
XFS: xlog_recover_process_data: bad clientid
XFS: log mount/recovery failed
XFS: log mount failed
XFS: bad magic number
XFS: SB validate failed
xfs_check came up with tons of errors, xfs_repair could not repair the
filesystem without being forced, and I could not afford the time required to
revalidate all the files that had been written, so I just started over with
EXT2 instead. I don't have the option to continue testing XFS on this system;
I was under pressure to ship it. XFS did this twice: a lockup, then loss of
the file system on reset.
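For reference, the kind of check/repair sequence involved was roughly the
following; this is a sketch rather than a transcript, and the exact
invocations may have differed:

mount -t xfs /dev/sda1 /mnt     # fails with the log mount errors shown above
xfs_check /dev/sda1             # read-only consistency check; reported many errors
xfs_repair /dev/sda1            # refuses to proceed while the log is dirty
xfs_repair -L /dev/sda1         # forces repair by zeroing the log

Zeroing the log with xfs_repair -L discards whatever metadata updates were
still sitting in it, which is why everything written around the time of the
crash would have needed revalidation afterwards.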
The filesystem was about 980 GB on /dev/sda1; /dev/sda is a 3ware 8-port RAID card.
The system is a Tyan Tiger MP (dual Athlon MP) booted with the noapic and nopentium options.
The distro is RH 7.2 with an XFS CVS kernel based on 2.4.18.
mkfs was:
mkfs.xfs -b size=4096 -d agsize=1048575b,su=64k,sw=7 -i maxpct=5 -f /dev/sda1
The reason for the strange mkfs.xfs options is that without them, mkfs.xfs
creates an invalid XFS file system: it creates one allocation group (AG) too
many, which leaves the last AG with no blocks. Apparently telling it a bogus
agsize causes it to round correctly.
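To illustrate the rounding problem with made-up numbers (not this array's
exact geometry): with 4096-byte blocks, a device of 256,901,120 blocks split
into AGs of 1,048,576 blocks works out to exactly 245 full AGs; if mkfs.xfs
nonetheless rounds up and allocates a 246th AG, that AG is left with no blocks
and the resulting filesystem is invalid. Passing agsize=1048575b makes the
division come out inexact, which apparently pushes mkfs.xfs down the code path
that rounds the AG count correctly. Running the same command with -N should
print the geometry it would create (agcount, agsize, etc.) without writing
anything, which is a quick way to sanity-check the result:

mkfs.xfs -N -b size=4096 -d agsize=1048575b,su=64k,sw=7 -i maxpct=5 -f /dev/sda1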
Please CC: me personally on replies, as I don't subscribe to the list. I will,
however, CC: the list on any replies to replies.