On 02/25/2012 10:35pm, Stan Hoeppner wrote:
>Can you run xfs_check on the filesystem to determine if a freespace
>tree is corrupted (post the output if it is), then run xfs_repair
>to rebuild them?
Thank you for responding. This is a 24/7 production server and I did not
anticipate getting a response this late on a Saturday, so, quite frankly, I
panicked and went ahead and ran "xfs_repair -L" on both volumes. I can now
mount the volumes and everything looks okay as far as I can tell. There
were only 2 files in the "lost+found" directory after the repair. Does that
mean only two files were lost? Is there any way to tell how many files were
affected?
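In case it helps anyone reading the archive, this is roughly how I looked at the recovered entries. The mount point below is a placeholder, and the helper function is just illustrative; the entries in lost+found are named by their former inode number, so file(1) is about the only quick way to guess what they were.

```shell
#!/bin/sh
# Sketch: inspect what xfs_repair placed in lost+found after a repair.
# /mnt/volume is a hypothetical mount point; adjust to your system.
inspect_lostfound() {
    mnt=$1
    if [ -d "$mnt/lost+found" ]; then
        # Entries are named by inode number, so list them and let
        # file(1) guess each one's type from its contents.
        ls -l "$mnt/lost+found"
        file "$mnt/lost+found"/*
    else
        echo "no lost+found under $mnt"
    fi
}

inspect_lostfound /mnt/volume
```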
>This corruption could have happened a long time ago in the past, and
>it may simply be coincidental that you've tripped over this at
>roughly the same time you upgraded the kernel.
It would be nice to find out why this happened. I suspect it is as you
suggested, pre-existing corruption rather than a hardware issue, because I
have other volumes mounted on other VMs attached to the same SAN
controller / RAID6 array, and they did not have any issues - only this one did.
>So, run "xfs_check /dev/sde1" and post the output here. Then await [...]
Can I still do this (or anything else) to help uncover the cause, or is it too
late? I have also run "yum update" on the server because it was out of date.
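If a re-check is still worth doing, a no-modify pass should be safe on the unmounted device. This is only a sketch, assuming the /dev/sde1 device name from earlier in the thread; the wrapper function is illustrative, not anything from xfsprogs.

```shell
#!/bin/sh
# Sketch: read-only re-check of an XFS volume after repair.
# Assumes the device name /dev/sde1 from the thread; the volume
# must be unmounted before checking.
check_xfs() {
    dev=$1
    if [ -b "$dev" ]; then
        # -n = no-modify mode: report inconsistencies without
        # changing anything on disk.
        xfs_repair -n "$dev"
    else
        echo "block device $dev not present; skipping check"
    fi
}

check_xfs /dev/sde1
```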
Sent from the Xfs - General mailing list archive at Nabble.com.