On 13/02/2013 22:38, Eric Sandeen wrote:
On 2/13/13 2:12 PM, Eric Sandeen wrote:
On 2/13/13 1:50 PM, Eric Sandeen wrote:
On 2/13/13 12:09 PM, Rémi Cailletaud wrote:
On 13/02/2013 18:52, Eric Sandeen wrote:
<snip>
Did the filesystem grow actually work?
# xfs_db -c "sb 0" -c "p" /dev/vg0/tomo-201111
magicnum = 0x58465342
blocksize = 4096
dblocks = 10468982745
That looks like it's (still?) a 38TiB/42TB filesystem, with:
sectsize = 512
512 sectors.
How big was it before you tried to grow it, and how much did you try to grow it
by? Maybe the size never changed.
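A quick way to check (just a sketch, using the numbers from your xfs_db output
above): compare dblocks * blocksize against the current size of the LV.
# echo $((10468982745 * 4096))
42880953323520
# blockdev --getsize64 /dev/vg0/tomo-201111
If blockdev reports something noticeably larger, the LV grew but the filesystem
probably did not.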
It was 39, growing to 44. Testdisk says 48 TB / 44 TiB... There is some chance
that it was never really grown.
At mount time it tries to set the sector size of the device; it's a hard-4k
device, so setting it to 512 fails.
This may be as much of an LVM issue as anything; how do you get the LVM device
back to something with 512-byte logical sectors? I have no idea...
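You can at least see where the 4k is coming from by checking the queue limits of
each underlying disk (a sketch; sdX is just a placeholder for your real PVs):
# cat /sys/block/sdX/queue/logical_block_size
# cat /sys/block/sdX/queue/physical_block_size
As far as I understand it, the device-mapper LV advertises the largest logical
block size of its components, which is why adding the 4k PV flipped the whole LV
to 4k.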
*if* the fs didn't actually grow, and if the new 4k-sector space is not used by
the filesystem, and if you can somehow remove that new space from the device
and set the LV back to 512 sectors, you might be in good shape.
I don't know how to either see or set the LV sector size. I'm 100% sure that
nothing was copied onto the 4k-sector space, and pretty sure that the fs did not
really grow.
I think the same blockdev command will tell you.
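Something like this, against the LV itself (a sketch; --getss prints the logical
sector size, --getpbsz the physical one):
# blockdev --getss /dev/vg0/tomo-201111
# blockdev --getpbsz /dev/vg0/tomo-201111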
Proceed with extreme caution here, I wouldn't start just trying random things
unless you have some other way to get your data back (backups?). I'd check
with LVM folks as well, and maybe see if dchinner or the sgi folks have other
suggestions.
Sigh... No backup (44 TB is too large for us...)! I'm running a testdisk
recovery, but I'm not very confident about success...
Thanks for digging deeper into this...
First let's find out if the filesystem actually thinks it's living on the new
space.
What is the way to make it tell us that?
well, you have 10468982745 4k blocks in your filesystem, so 42880953323520
bytes of xfs filesystem.
Look at your lvm layout, does that extend into the new disk space or is it
confined to the original disk space?
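Something along these lines should show which PV each segment of the LV sits on
(a sketch; the exact fields vary a bit between lvm versions):
# lvs --segments -o +devices --units b vg0/tomo-201111
# lvdisplay --maps /dev/vg0/tomo-201111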
It seems it does not: the lvm map shows 48378494844928 bytes, with 1304432738304
on the 4K device.
The lvm folks I talked to say that if you remove the 4k device from the lvm volume it
should switch back to 512 sectors.
so if you can convince yourself that 42880953323520 bytes does not cross
into the newly added disk space, just remove it again, and everything should be
happy.
Unless your rash decision to start running "testdisk" made things worse ;)
I tested this. I had a PV on a normal 512 device, then used scsi_debug to
create a 4k device.
I created an LV on the 512 device & mounted it, then added the 4k device as you
did. growfs failed immediately, and the device won't remount due to the sector
size change.
I verified that removing the 4k device from the LV changes the LV back to a 512
sector size.
However, I'm not 100% sure how to remove just the 4K PV; when I did it, I did
something wrong and it reduced the size of my LV to the point where it
corrupted the filesystem. :) Perhaps you are a better lvm admin than I am.
How did you remove the PV? I would tend to use vgreduce, but I'm a bit
(a lot, in fact) scared of fs corruption. That's why I was wondering
about pvmove'ing the extents onto a 512-byte-sector device
rémi
But in any case - if you know how to safely remove ONLY the 4k device from the
LV, you should be in good shape again.
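Roughly, the sequence would be something like the following - only a sketch, not
verified on your setup, and it assumes the 4k PV holds the *last* logical extents
of the LV (lvreduce frees space from the end of the LV; if the 4k extents are not
at the end, you'd need to pvmove them off first). lvm only works in whole
extents, so double-check the exact size of the segment on the 4k PV (your lvm map
showed 1304432738304 bytes) before running anything; <4k-pv> below is just a
placeholder for the actual 4k device:
# lvreduce --size -1304432738304b vg0/tomo-201111
# vgreduce vg0 /dev/<4k-pv>
lvreduce also accepts --test, which lets you dry-run it without touching metadata.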
-Eric
--
Rémi Cailletaud - IE CNRS
3SR - Laboratoire Sols, Solides, Structures - Risques
BP53, 38041 Grenoble CEDEX 0
FRANCE
remi.cailletaud@xxxxxxxxxxxxxxx
Tél: +33 (0)4 76 82 52 78
Fax: +33 (0)4 76 82 70 43