Re: xfs_growfs: XFS_IOC_FSGROWFSDATA xfsctl failed: Invalid argument

To: richard.ems@xxxxxxxxxxxxxxxxx
Subject: Re: xfs_growfs: XFS_IOC_FSGROWFSDATA xfsctl failed: Invalid argument
From: Eric Sandeen <sandeen@xxxxxxxxxxx>
Date: Sat, 23 May 2009 14:25:32 -0500
Cc: xfs@xxxxxxxxxxx
In-reply-to: <4A1844AF.7030906@xxxxxxxxxxx>
References: <4A180FCD.9080905@xxxxxxxxxxxxxxxxx> <4A181B40.9080608@xxxxxxxxxxx> <20090523180721.94212hyfjppuupmo@xxxxxxxxxxxxx> <4A1833D7.30608@xxxxxxxxxxx> <20090523194552.66062w3zquwvms00@xxxxxxxxxxxxx> <4A1844AF.7030906@xxxxxxxxxxx>
User-agent: Thunderbird (Macintosh/20090302)
Eric Sandeen wrote:
> richard.ems@xxxxxxxxxxxxxxxxx wrote:
>> Quoting Eric Sandeen <sandeen@xxxxxxxxxxx>:
>>> Not sure ... how big is the current fs and how big is the device?  Can
>>> you provide:
>>> # xfs_info /mnt
>>> # grep sda1 /proc/partitions
>> It is a 16 TB FS, and I added 4 x 1 TB HDDs to the RAID 6 array, so the
>> device went from 16 TB to 20 TB.
>> c3m:~ # xfs_info /backup/IFT
>> meta-data=/dev/sda1              isize=256    agcount=52, agsize=76288719 
>> blks
>>           =                       sectsz=512   attr=1
>> data     =                       bsize=4096   blocks=3905982455, imaxpct=25
>>           =                       sunit=0      swidth=0 blks
>> naming   =version 2              bsize=4096   ascii-ci=0
>> log      =internal               bsize=4096   blocks=32768, version=1
>>           =                       sectsz=512   sunit=0 blks, lazy-count=0
>> realtime =none                   extsz=4096   blocks=0, rtextents=0
>> c3m:~ # grep sda1 /proc/partitions
>>     8     1 19529912286 sda1
> thanks, with that info I can reproduce it, I'll look into it soon... but
> not today.

Actually I lied, I looked at it ;)

If you growfs to a number of blocks that is about 55 blocks less than the
actual device size, it should succeed for you.  There's a case where the
last AG would be too small; the code tries to compensate, but there's an
overflow.  I'll send a patch.
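The workaround above can be sketched as a shell snippet. The device size and mount point are taken from the quoted `xfs_info` / `/proc/partitions` output; the commented `xfs_growfs -D` invocation is an assumption about how one would apply the workaround, not a command taken from the thread:

```shell
# Hypothetical workaround sketch: grow explicitly to a block count a bit
# below the device size, rather than letting xfs_growfs use the full device.

# /proc/partitions reports sizes in 1 KiB blocks; this fs uses 4 KiB blocks.
DEV_KIB=19529912286                     # from "grep sda1 /proc/partitions" above
TARGET=$(( DEV_KIB / 4 - 55 ))          # ~55 fs blocks short of the device end
echo "$TARGET"                          # prints 4882478016

# Then grow the (mounted) filesystem to that explicit size:
# xfs_growfs -D "$TARGET" /backup/IFT
```

`xfs_growfs -D size` sets the data section to `size` filesystem blocks instead of the default (the full device), which is what lets you stop short of the overflowing last-AG case.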
