On 7/13/13 7:11 PM, aurfalien wrote:
> Hello again,
> I have a RAID 6 x16 disk array with a 128k stripe size and a 512 byte block size.
> So I do:
> mkfs.xfs -f -l size=512m -d su=128k,sw=14 /dev/mapper/vg_doofus_data-lv_data
> And I get;
> meta-data=/dev/mapper/vg_doofus_data-lv_data isize=256 agcount=32, agsize=209428640 blks
> = sectsz=512 attr=2, projid32bit=0
> data = bsize=4096 blocks=6701716480, imaxpct=5
> = sunit=32 swidth=448 blks
> naming =version 2 bsize=4096 ascii-ci=0
> log =internal log bsize=4096 blocks=131072, version=2
> = sectsz=512 sunit=32 blks, lazy-count=1
> realtime =none extsz=4096 blocks=0, rtextents=0
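The geometry mkfs reports is internally consistent with the options given. A quick sanity check (a sketch, not part of the original thread; the helper name is hypothetical) converting the su/sw options, given in bytes and stripe units, into the sunit/swidth values mkfs prints back in filesystem blocks:

```python
# Hypothetical helper: convert mkfs.xfs stripe options into the
# sunit/swidth values (in filesystem blocks) that mkfs reports.
def stripe_geometry(su_bytes, sw_units, bsize=4096):
    sunit = su_bytes // bsize    # stripe unit in fs blocks
    swidth = sunit * sw_units    # stripe width in fs blocks
    return sunit, swidth

# su=128k, sw=14 (16 RAID-6 disks minus 2 parity), 4 KiB blocks:
print(stripe_geometry(128 * 1024, 14))   # (32, 448)
```

That matches the `sunit=32 swidth=448 blks` line in the output above.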
> All is fine but I was recently made aware of tweaking agsize.
Made aware by what? For what reason?
> So I would like to mess around and iozone any diffs between the above
> agcount of 32 and whatever agcount changes I may do.
Unless iozone is your machine's normal workload, that will probably prove to be of little value.
> I didn't see any mention of agsize/agcount on the XFS FAQ and would
> like to know, based on the above, why does XFS think I have 32
> allocation groups with the corresponding size?
It doesn't think so, it _knows_ so, because it made them itself. ;)
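mkfs.xfs picks an agcount based on the device size and then divides the device into allocation groups of (roughly) equal size. The reported numbers above bear this out; a minimal check, assuming the block counts from the mkfs output:

```python
# mkfs.xfs derives agsize from the device size and its chosen agcount.
total_blocks = 6701716480   # from the mkfs output above (bsize=4096)
agcount = 32
agsize = total_blocks // agcount
print(agsize)   # 209428640 blks, matching the reported geometry
```

So the 32 AGs of 209428640 blocks each are simply the device carved up evenly by mkfs's own heuristics.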
> And are these optimal?
How high is up?
Here's the appropriate FAQ entry:
> Thanks in advance,
> - aurf
> xfs mailing list