On 7/13/2013 11:20 PM, aurfalien wrote:
>>> mkfs.xfs -f -l size=512m -d su=128k,sw=14 /dev/mapper/vg_doofus_data-lv_data
>>> meta-data=/dev/mapper/vg_doofus_data-lv_data isize=256 agcount=32,
>>> agsize=209428640 blks
>>> = sectsz=512 attr=2, projid32bit=0
>>> data = bsize=4096 blocks=6701716480, imaxpct=5
>>> = sunit=32 swidth=448 blks
>>> naming =version 2 bsize=4096 ascii-ci=0
>>> log =internal log bsize=4096 blocks=131072, version=2
>>> = sectsz=512 sunit=32 blks, lazy-count=1
>>> realtime =none extsz=4096 blocks=0, rtextents=0
> Autodesk has this software called Flame which requires very very fast local
> storage using XFS.
If "Flame" does any random writes then you probably shouldn't be using
a stripe this wide: stripe alignment (su=128k, sw=14) only pays off for
large sequential I/O, and random writes smaller than the 1.75MB full
stripe gain nothing from it.
> They have an entire write up on how to calc proper agsize for optimal
> performance.
I think you're confused. Maximum agsize is 1TB. Making your AGs
smaller than that won't decrease application performance, so it's
literally impossible to tune agsize to increase performance. agcount on
the other hand can potentially have an effect if the application is
sufficiently threaded. But agcount doesn't mean anything in isolation.
It's tied directly to the characteristics of the RAID level and
hardware. For example, mkfs.xfs gave you 32 AGs for this 14 spindle
array. One could also make 32 AGs on a single 4TB SATA disk, and the
performance of the two would be radically different.
> Well, it will give me a baseline comparison of non-tweaked agsize vs tweaked
> agsize.
No, it won't. See above.
> Yea but based on what?
Based on the fact that your XFS is ~26TB.
mkfs.xfs could have given you 26 AGs of ~1TB each. But it chose to give
you 32 AGs of ~815GB each. Whether you run bonnie, iozone, or your
Flame application, you won't be able to measure a meaningful difference,
if any, between 26 and 32 AGs.
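To see where those AG numbers come from, here is the arithmetic spelled
out. Every input below is copied straight from the mkfs output quoted at
the top of this mail; this is just a sanity check, not a tuning recipe:

```shell
#!/bin/sh
# Arithmetic check of the AG geometry mkfs.xfs printed above.
blocks=6701716480                 # total 4 KiB data blocks from mkfs output
agcount=32                        # AG count mkfs chose
agsize=$((blocks / agcount))
echo "agsize = $agsize blocks"    # matches the reported agsize=209428640
bytes_per_ag=$((agsize * 4096))
echo "bytes per AG = $bytes_per_ag"   # ~799 GiB, well under the 1 TiB agsize ceiling
```

The point stands either way: every AG is already far below the 1TB
maximum, so there is nothing left to "tune" here.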
> Problem is I run Centos so the line;
> "As of kernel 3.2.12, the default i/o scheduler, CFQ, will defeat much of the
> parallelization in XFS. "
> ... doesn't really apply.
This makes no sense. What doesn't apply?
You can change to noop or deadline with a single echo command:
echo noop > /sys/block/sdX/queue/scheduler
where sdX is the name of your RAID device.
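To check what you're running now and make the change survive a reboot
(sdX is a placeholder for your device; on a CentOS-era grub the usual
way to persist it is the elevator= kernel parameter):

```shell
# Show the available schedulers; the active one appears in [brackets].
cat /sys/block/sdX/queue/scheduler
# Switch at runtime; takes effect immediately, no remount needed.
echo deadline > /sys/block/sdX/queue/scheduler
# To persist across reboots, append elevator=deadline to the kernel
# line in /boot/grub/grub.conf, or put the echo in /etc/rc.local.
```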