To: Rémi Cailletaud <remi.cailletaud@xxxxxxxxxxxxxxx>
Subject: Re: problem after growing
From: Eric Sandeen <sandeen@xxxxxxxxxxx>
Date: Wed, 13 Feb 2013 15:38:14 -0600
Cc: xfs-oss <xfs@xxxxxxxxxxx>
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <511BF3B6.9040502@xxxxxxxxxxx>
References: <511BC78B.6070205@xxxxxxxxxxxxxxx> <511BCB41.4060804@xxxxxxxxxxx> <511BCD11.20907@xxxxxxxxxxxxxxx> <511BCFB6.8000309@xxxxxxxxxxx> <511BD0FB.2070401@xxxxxxxxxxxxxxx> <511BD2D1.9010906@xxxxxxxxxxx> <511BD6ED.6030006@xxxxxxxxxxxxxxx> <511BEE8B.8000400@xxxxxxxxxxx> <511BF3B6.9040502@xxxxxxxxxxx>
User-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:17.0) Gecko/20130107 Thunderbird/17.0.2
On 2/13/13 2:12 PM, Eric Sandeen wrote:
> On 2/13/13 1:50 PM, Eric Sandeen wrote:
>> On 2/13/13 12:09 PM, Rémi Cailletaud wrote:
>>> On 13/02/2013 18:52, Eric Sandeen wrote:
>> <snip>
>>>> Did the filesystem grow actually work?
>>>> # xfs_db -c "sb 0" -c "p" /dev/vg0/tomo-201111
>>>> magicnum = 0x58465342
>>>> blocksize = 4096
>>>> dblocks = 10468982745
>>>> That looks like it's (still?) a 39 TiB / 42.9 TB filesystem, with:
>>>> sectsize = 512
>>>> 512 sectors.
>>>> How big was it before you tried to grow it, and how much did you try to 
>>>> grow it by?  Maybe the size never changed.
>>> Was 39, growing to 44. Testdisk says 48 TB / 44 TiB... There is some chance 
>>> that it was never really grown.
>>>> At mount time it tries to set the sector size of the device; it's a 
>>>> hard-4k device, so setting it to 512 fails.
>>>> This may be as much of an LVM issue as anything; how do you get the LVM 
>>>> device back to something with 512-byte logical sectors?  I have no idea...
>>>> *if* the fs didn't actually grow, and if the new 4k-sector space is not 
>>>> used by the filesystem, and if you can somehow remove that new space from 
>>>> the device and set the LV back to 512 sectors, you might be in good shape.
>>> I don't know how to either see or set the LV sector size.  It's 100% sure that 
>>> nothing was copied onto the 4k-sector space, and pretty sure that the fs did not 
>>> really grow.
>> I think the same blockdev command will tell you.
>>>> Proceed with extreme caution here, I wouldn't start just trying random 
>>>> things unless you have some other way to get your data back (backups?).  
>>>> I'd check with LVM folks as well, and maybe see if dchinner or the sgi 
>>>> folks have other suggestions.
>>> Sigh... No backup (44 TB is too large for us...)!  I'm running a testdisk 
>>> recovery, but I'm not very confident about success...
>>> Thanks for digging deeper into this...
>>>> First let's find out if the filesystem actually thinks it's living on the 
>>>> new space.
>>> What is the way to make it talk about that ?
>> well, you have 10468982745 4k blocks in your filesystem, so 42880953323520 
>> bytes of xfs filesystem.
>> Look at your lvm layout, does that extend into the new disk space or is it 
>> confined to the original disk space?
> lvm folks I talk to say that if you remove the 4k device from the lvm volume 
> it should switch back to 512 sectors.
> so if you can convince yourself that 42880953323520 bytes does not cross 
> into the newly added disk space, just remove it again, and everything should 
> be happy.
> Unless your rash decision to start running "testdisk" made things worse ;)

I tested this.  I had a PV on a normal 512 device, then used scsi_debug to 
create a 4k device.
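FWIW, here's roughly how I set it up, in case anyone wants to reproduce it.  
Device names and sizes are whatever scsi_debug and your system hand you, so 
treat this as a sketch, not a recipe:

# modprobe scsi_debug dev_size_mb=1024 sector_size=4096
# pvcreate /dev/sdb /dev/sdc     (sdb = normal 512 disk, sdc = the 4k scsi_debug disk)
# vgcreate vgtest /dev/sdb
# lvcreate -l 100%FREE -n test vgtest
# mkfs.xfs /dev/vgtest/test && mount /dev/vgtest/test /mnt
# vgextend vgtest /dev/sdc
# lvextend -l +100%FREE vgtest/test
# xfs_growfs /mnt                (this is where it falls over)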

I created an LV on the 512 device & mounted it, then added the 4k device as you 
did.  growfs failed immediately, and the device won't remount due to the sector 
size change.

I verified that removing the 4k device from the LV changes the LV back to a 512 
sector size.

However, I'm not 100% sure how to remove just the 4K PV; when I did it, I did 
something wrong and it reduced the size of my LV to the point where it 
corrupted the filesystem.  :)  Perhaps you are a better lvm admin than I am.

But in any case - if you know how to safely remove ONLY the 4k device from the 
LV, you should be in good shape again.

