Great!
That is exactly what I needed to know.
One follow-up question:
Can I assume that the bug:
TAKE 959978 - growing an XFS filesystem by more than 2TB is broken
is a problem only with the xfs_growfs code? The reason I ask is that
when I first made the original filesystem, I created it with mkfs.xfs and it
succeeded without any problems at 10.5 TB.
# mkfs.xfs /dev/VolGroupNAS200/LogVolNAS200
meta-data=/dev/VolGroupNAS200/LogVolNAS200 isize=256    agcount=32, agsize=83886080 blks
         =                       sectsz=512   attr=0
data     =                       bsize=4096   blocks=2684354560, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal log           bsize=4096   blocks=32768, version=1
         =                       sectsz=512   sunit=0 blks, lazy-count=0
realtime =none                   extsz=4096   blocks=0, rtextents=
so I am assuming the rest of the XFS code can handle large filesystems fine.
I am just trying to confirm that the problem is TAKE 959978 and that growing
in less than 2 TB increments should be fine.
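Just to make the plan concrete, here is a rough sketch of what I intend to run;
the mount point /nas200 is only a placeholder, and the block counts assume the
4096 byte block size from the mkfs.xfs output above, stepping by 1 TiB
(268435456 blocks) at a time so each increment stays well under 2 TB:

# xfs_growfs -D 2952790016 /nas200
# xfs_growfs -D 3221225472 /nas200
# xfs_growfs -D 3489660928 /nas200

starting from the current 2684354560 blocks and repeating until the filesystem
fills the logical volume.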
Thanks again to the list for all the assistance.
Lance
-----Original Message-----
From: Eric Sandeen [mailto:sandeen@xxxxxxxxxxx]
Sent: Tuesday, April 29, 2008 12:55 PM
To: Lance Reed
Cc: markgw@xxxxxxx; xfs@xxxxxxxxxxx
Subject: Re: Problems with xfs_grow on large LVM + XFS filesystem 20TB size
check 2 failed
Lance Reed wrote:
> Thanks,
>
> Sorry, I am a bit confused on the "data section" vs. the "real-time section"?
>
> Is it enough to just run "xfs_growfs -D XXX /mntpoint" and the rest should
> fall into place?
>
> Again, sorry for being dense.
> I really appreciate the rapid feedback.
>
Unless you specifically made a filesystem with a realtime subvol, just
ignore it; it's not created by default.
So yes, -D <size> (in blocks) is what you want. A little cumbersome but
not too bad :)
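If you want to double-check the current block count and block size before
working out a value for -D, xfs_info on the mount point (the path below is
just an example) prints both in its data section:

# xfs_info /nas200

The bsize= and blocks= fields in the data line are the units -D is expressed in.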
-Eric