Hi again all,
I thought about this a bit more over the past few days, and did
some more testing this morning. I am currently thinking that I
really don't have as many paths to follow as I originally thought.
It seems like, whether or not I modify sb 0 with xfs_db, xfs_repair
still wants to see an 11TB filesystem. I did an mdrestore and a mount
of the metadump image, which showed a 21TB filesystem, then did a
umount and an xfs_repair, which modified the superblock; on mounting
again, the filesystem was back to 11TB. So I think there is a real
risk of data loss if I mount what the latest kernel thinks is a 21TB
filesystem and then need to run a repair at a later date, which means
I have to run an xfs_repair before trying to use the new free space.
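For reference, what I ran against the image was roughly the following
(the image path and mount point here are just placeholders for mine):

  xfs_mdrestore fs.metadump fs.img
  mount -o loop fs.img /mnt/test    # mount reports ~21TB
  umount /mnt/test
  xfs_repair fs.img                 # this rewrites the superblock
  mount -o loop fs.img /mnt/test    # now back to ~11TB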
So, here is what I think is my plan for the actual filesystem (rough
command sketch after the list):
--take another backup
--umount all XFS filesystems (the OS filesystems are ext3)
--remove the kmod-xfs CentOS package
--update to the latest CentOS kernel and reboot, making sure
  nothing attempts to mount the target XFS fs along the way
--run xfs_repair from xfsprogs-3.1.5
--cross fingers :)
--mount and check what's in lost+found
--if all seems well, attempt another xfs_growfs using xfsprogs-3.1.5
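In command terms, that works out to roughly the following (the device
name, mount point, and xfsprogs-3.1.5 path are placeholders, and the
backup step is just our usual procedure):

  umount /data
  yum remove kmod-xfs
  yum update kernel
  reboot                                    # without /data in fstab/auto-mount
  /path/to/xfsprogs-3.1.5/xfs_repair /dev/sdb1
  mount /dev/sdb1 /data
  ls /data/lost+found
  /path/to/xfsprogs-3.1.5/xfs_growfs /data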
Does this seem like a reasonable plan of attack? If so, is there
a way to estimate how long the actual xfs_repair will take from my
xfs_repair sessions on the metadump image? Obviously the hardware
isn't the same, but I'm only hoping for a back-of-the-envelope
estimate, not anything terribly accurate.
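(For what it's worth, my timings on the image so far have just been
from something like

  time xfs_repair fs.img

run against the restored image, watching how long each phase takes.)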
Finally, are there other things I can try on the metadump image first
to give me more information on what'll happen on the live filesystem?
Thanks again!
--keith
--
kkeller@xxxxxxxxxxxxxxxxxxxxxxxxxx