We are running a 20TB XFS filesystem on top of LVM2 and SAN storage (HP
Open-V) with multipathd, on Ubuntu Lucid. The disk write cache is enabled
and we mount with the rw option.
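One thing worth checking with that setup: since the drive write cache is enabled, XFS relies on write barriers to flush the cache at the right moments, and (if I recall correctly) on 2.6.32-era kernels such as Lucid's, the device-mapper layers used by LVM2 and multipath did not always pass barriers through, so XFS would silently fall back to running without them, which is a classic recipe for zero-length files after a crash. A minimal sketch for spotting the risky nobarrier option in the live mount table (the /proc/mounts fields are standard; the loop is just an illustration):

```shell
# has_nobarrier: flag the risky "nobarrier" option in a comma-separated
# mount-options string (same format as field 4 of /proc/mounts).
has_nobarrier() {
    case ",$1," in
        *,nobarrier,*) return 0 ;;  # barriers explicitly disabled: risky with write cache on
        *)             return 1 ;;
    esac
}

# Example usage against the live system:
# awk '$3 == "xfs" { print $2, $4 }' /proc/mounts | while read mnt opts; do
#     has_nobarrier "$opts" && echo "WARNING: $mnt is mounted nobarrier"
# done
```

Also watch the boot logs for XFS announcing that it disabled barriers because the underlying device did not support them; if that appears, either get barriers working or turn the write cache off on the array.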
What follows is a timeline of events reconstructed from memory, so some details may be missing.
The system panicked and automatically restarted after 30 seconds.
It seemed to be OK, but after a while users started getting files with
zero length. We ran xfs_check on the filesystem, but it couldn't find
any problems. After we restarted the system, the files (even the ones
that had zero length) seemed OK again. But then we got messages like
this (short version):
Sep 16 06:40:34 seldlnx034 kernel: [54607.977261] XFS internal error
XFS_WANT_CORRUPTED_RETURN at line 381 of
file /build/buildd/linux-2.6.32/fs/xfs/xfs_alloc.c. Caller
Sep 16 06:40:34 seldlnx034 kernel: [54607.996676] [<ffffffffa0215383>]
Sep 16 06:40:34 seldlnx034 kernel: [54607.996689]
... and files written during this period became corrupt (zero length).
We ran xfs_repair on the filesystem (short version):
entry "fw-radmp_all.deb" at block 0 offset 944 in directory inode
157891962 references free inode 195983876
clearing inode number in entry at offset 944...
Phase 5 - rebuild AG headers and trees...
- reset superblock...
Phase 6 - check inode connectivity...
- resetting contents of realtime bitmap and summary inodes
- traversing filesystem ...
bad hash table for directory inode 13786 (no data entry): rebuilding
rebuilding directory inode 13786
bad hash table for directory inode 2130829772 (no data entry): rebuilding
rebuilding directory inode 2130829772
- traversal finished ...
- moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
We have now verified the files. There are no known problems with the
file system anymore, but the files created while it was broken had to
be recreated.
How can I avoid this in the future, and how can I ensure that I get
informed when a problem occurs? Is there anything wrong with my setup
that could explain this?
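On getting informed: the kernel messages above do end up in syslog, so one low-tech option is a periodic scan for the XFS error signature that alerts on a hit. A sketch, assuming cron and a syslog file path; the alert mechanism (mail, Nagios, logcheck) is a placeholder you would wire up yourself:

```shell
# check_xfs_errors: scan a log file for the XFS internal-error signature
# seen above; print an alert line and return nonzero on a hit so the
# caller (cron job, monitoring agent) can act on it.
check_xfs_errors() {
    if grep -q 'XFS internal error' "$1"; then
        echo "ALERT: XFS internal error found in $1"
        return 1
    fi
    return 0
}

# e.g. from cron (mail address is a placeholder):
# check_xfs_errors /var/log/syslog || mail -s 'XFS alert' admin@example.com </dev/null
```

A real deployment would also want to avoid re-alerting on old hits (track the log offset, or scan only since the last run), but the grep pattern is the essential part.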