
Re: problem after growing

To: Rémi Cailletaud <remi.cailletaud@xxxxxxxxxxxxxxx>
Subject: Re: problem after growing
From: Eric Sandeen <sandeen@xxxxxxxxxxx>
Date: Wed, 13 Feb 2013 11:52:17 -0600
Cc: xfs-oss <xfs@xxxxxxxxxxx>
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <511BD0FB.2070401@xxxxxxxxxxxxxxx>
References: <511BC78B.6070205@xxxxxxxxxxxxxxx> <511BCB41.4060804@xxxxxxxxxxx> <511BCD11.20907@xxxxxxxxxxxxxxx> <511BCFB6.8000309@xxxxxxxxxxx> <511BD0FB.2070401@xxxxxxxxxxxxxxx>
User-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:17.0) Gecko/20130107 Thunderbird/17.0.2
On 2/13/13 11:44 AM, Rémi Cailletaud wrote:
> Le 13/02/2013 18:39, Eric Sandeen a écrit :
>> On 2/13/13 11:27 AM, Rémi Cailletaud wrote:
>>> Le 13/02/2013 18:20, Eric Sandeen a écrit :
>>>> On 2/13/13 11:04 AM, Rémi Cailletaud wrote:
>>>>> Hi,
> >>>>> I face a strange and scary issue. I just grew an XFS filesystem (44 TB), 
> >>>>> and there is no way to mount it anymore:
>>>>> XFS: device supports only 4096 byte sectors (not 512)
>>>> Did you expand an LV made of 512-sector physical devices by adding 
>>>> 4k-sector physical devices?
> >>> The three devices are on an ARECA 1880 card, but the last one was added 
> >>> later, and I never checked the physical sector configuration in the card 
> >>> setup. But yes, running fdisk, it seems that sda and sdb are 512 and sdc 
> >>> is 4k... :(
>>>> that's probably not something we anticipate or check for....
>>>> What sector size(s) are the actual lowest level disks under all the lvm 
>>>> pieces?
>> (re-cc'ing xfs list)
>>> What command to run to get this info ?
>> IIRC,
>> # blockdev --getpbsz --getss  /dev/sda
> >> to print the physical & logical sector sizes.
> >> You can also look at, e.g.:
>> /sys/block/sda/queue/hw_sector_size
>> /sys/block/sda/queue/physical_block_size
>> /sys/block/sda/queue/logical_block_size
> ouch... nice guess :
> #  blockdev --getpbsz --getss  /dev/sda
> 512
> 512
> #  blockdev --getpbsz --getss  /dev/sdb
> 512
> 512
> #  blockdev --getpbsz --getss  /dev/sdc
> 4096
> 4096
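[The mismatch above can be checked mechanically. A minimal sketch, with the sector sizes hardcoded from the blockdev output above rather than read from the real devices:]

```shell
# Logical sector sizes as reported by `blockdev --getss` for sda, sdb, sdc
# (values copied from the thread; on a live system you would collect them
# with a loop over /sys/block/*/queue/logical_block_size)
sizes="512 512 4096"

# Count distinct values; more than one means the PVs under the LV are
# mixed, which is exactly the situation that breaks the mount here.
distinct=$(printf '%s\n' $sizes | sort -u | wc -l)
if [ "$distinct" -gt 1 ]; then
    echo "WARNING: mixed logical sector sizes among devices"
fi
```

[Running the same check before adding a PV to a volume group would have flagged the problem before the grow.]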
>> I wonder what the recovery steps would be here.  I wouldn't do anything yet; 
>> I wish you hadn't already cleared the log, but oh well.
> I tried an xfs_repair -L (as mentioned by xfs_check), but it failed early, as 
> shown in my first post...

Ah, right.

>> So you grew it, that all worked ok, you were able to copy new data into the 
>> new space, you unmounted it, but now it won't mount, correct?
> I was never able to copy data to the new space; I got an input/output error 
> just after growing.
> Could pvmove-ing extents from the 4k device onto a 512-byte device be a solution?

Did the filesystem grow actually work?

# xfs_db -c "sb 0" -c "p" /dev/vg0/tomo-201111
magicnum = 0x58465342
blocksize = 4096
dblocks = 10468982745 

That looks like it's (still?) a ~39TiB/~43TB filesystem, with:

sectsize = 512 

512 sectors.

How big was it before you tried to grow it, and how much did you try to grow it 
by?  Maybe the size never changed.

At mount time XFS tries to set the sector size of the device; it's a hard-4k 
device, so setting it to 512 fails.

This may be as much of an LVM issue as anything; how do you get the LVM device 
back to something with 512-byte logical sectors?  I have no idea...

*if* the fs didn't actually grow, and if the new 4k-sector space is not used by 
the filesystem, and if you can somehow remove that new space from the device 
and set the LV back to 512 sectors, you might be in good shape.

Proceed with extreme caution here; I wouldn't start trying random things 
unless you have some other way to get your data back (backups?).  I'd check 
with the LVM folks as well, and maybe see if dchinner or the SGI folks have 
other ideas.

First let's find out if the filesystem actually thinks it's living on the 
new space.

> rémi
