On Tue, Jan 24, 2012 at 09:59:09AM -0800, Christopher Evans wrote:
> I made a mistake by recreating a RAID 6 array instead of taking the
> proper steps to rebuild it. Is there a way I can find out which
> directories and files are/might be corrupted if 64k blocks of data are
> offset every 21 times for an unknown count? Unfortunately I've already
> mounted the RAID array and have gotten XFS errors because of the
> corrupted data beneath it.
Write a script that walks the filesystem, runs xfs_bmap on every file
and directory, and works out which ones have extents that fall into
the bad range. If you walk into a corrupted directory, then you're
likely to see errors in dmesg, too.
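A minimal sketch of such a walker, in Python. The bad block ranges here
are hypothetical placeholders (you'd substitute the ranges you worked out
from your array geometry), and the parsing assumes the `xfs_bmap -v`
output format, which reports BLOCK-RANGE in 512-byte basic blocks:

```python
#!/usr/bin/env python3
# Sketch: walk a filesystem, run `xfs_bmap -v` on each file and directory,
# and flag paths whose extents overlap a known-bad block range.
import os
import re
import subprocess

# Hypothetical bad ranges, as (start, end) in 512-byte basic blocks.
# Replace with the ranges computed from your RAID geometry.
BAD_RANGES = [(1000000, 1001000)]

# Matches extent lines like "   0: [0..7]:          96..103 ..."
# (hole entries have no block range and are skipped).
EXTENT_RE = re.compile(r'^\s*\d+:\s*\[\d+\.\.\d+\]:\s*(\d+)\.\.(\d+)')

def parse_extents(bmap_output):
    """Return a list of (start_block, end_block) from `xfs_bmap -v` output."""
    extents = []
    for line in bmap_output.splitlines():
        m = EXTENT_RE.match(line)
        if m:
            extents.append((int(m.group(1)), int(m.group(2))))
    return extents

def overlaps(extent, bad):
    """True if the inclusive [start, end] extent intersects the bad range."""
    return extent[0] <= bad[1] and bad[0] <= extent[1]

def scan(root):
    """Walk `root`, printing paths with extents inside a bad range."""
    for dirpath, dirnames, filenames in os.walk(root):
        paths = [dirpath] + [os.path.join(dirpath, f) for f in filenames]
        for path in paths:
            try:
                out = subprocess.run(['xfs_bmap', '-v', path],
                                     capture_output=True, text=True,
                                     check=True).stdout
            except (OSError, subprocess.CalledProcessError):
                continue  # special files, permission errors, etc.
            for ext in parse_extents(out):
                if any(overlaps(ext, bad) for bad in BAD_RANGES):
                    print(path)
                    break

if __name__ == '__main__':
    import sys
    scan(sys.argv[1] if len(sys.argv) > 1 else '.')
```

Note the block ranges from `xfs_bmap -v` are relative to the filesystem,
so if XFS sits on a partition or LVM volume you need to account for the
offset of that device within the array when computing the bad ranges.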
In future we'll have a reverse mapping tree that will enable us to
avoid the tree walk to find the owners of corrupted regions like
this. I wrote half the code for it while I was at LCA last week ;)