
Re: specify agsize?

To: stan@xxxxxxxxxxxxxxxxx
Subject: Re: specify agsize?
From: aurfalien <aurfalien@xxxxxxxxx>
Date: Sun, 14 Jul 2013 09:56:10 -0700
Cc: Eric Sandeen <sandeen@xxxxxxxxxxx>, xfs@xxxxxxxxxxx
In-reply-to: <51E24E03.8010609@xxxxxxxxxxxxxxxxx>
References: <6A14EB72-A699-47AF-937D-D6DA1CF12ACB@xxxxxxxxx> <51E2092D.7090409@xxxxxxxxxxx> <9AB8D1D3-29D7-4C43-A624-37024CA4EFD9@xxxxxxxxx> <51E24E03.8010609@xxxxxxxxxxxxxxxxx>
On Jul 14, 2013, at 12:06 AM, Stan Hoeppner wrote:

> On 7/13/2013 11:20 PM, aurfalien wrote:
> ...
>>>> mkfs.xfs -f -l size=512m -d su=128k,sw=14 
>>>> /dev/mapper/vg_doofus_data-lv_data
> ...
>>>> meta-data=/dev/mapper/vg_doofus_data-lv_data isize=256    agcount=32, 
>>>> agsize=209428640 blks
>>>>        =                       sectsz=512   attr=2, projid32bit=0
>>>> data     =                       bsize=4096   blocks=6701716480, imaxpct=5
>>>>        =                       sunit=32     swidth=448 blks
>>>> naming   =version 2              bsize=4096   ascii-ci=0
>>>> log      =internal log           bsize=4096   blocks=131072, version=2
>>>>        =                       sectsz=512   sunit=32 blks, lazy-count=1
>>>> realtime =none                   extsz=4096   blocks=0, rtextents=0
> ...
>> Autodesk has this software called Flame which requires very very fast local 
>> storage using XFS.  
> If "Flame" does any random writes then you probably shouldn't be using
> RAID6.
>> They have an entire write up on how to calc proper agsize for optimal 
>> performance.
> I think you're confused.  Maximum agsize is 1TB.  Making your AGs
> smaller than that won't decrease application performance, so it's
> literally impossible to tune agsize to increase performance.  agcount on
> the other hand can potentially have an effect if the application is
> sufficiently threaded.  But agcount doesn't mean anything in isolation.
> It's tied directly to the characteristics of the RAID level and
> hardware.  For example, mkfs.xfs gave you 32 AGs for this 14 spindle
> array.  One could make 32 AGs on a single 4TB SATA disk, and the
> performance of the two would be radically different.
> ...
>> Well, it will give me a base line comparison of non tweaked agsize vs 
>> tweaked agsize.
> No, it won't.  See above.
>> Yea but based on what?
> Based on the fact that your XFS is ~26TB.
> mkfs.xfs could have given you 26 AGs of ~1TB each.  But it chose to give
> you 32 AGs of ~815GB each.  Whether you run bonnie, iozone, or your
> Flame application, you won't be able to measure a meaningful difference,
> if any difference, between 26 and 32 AGs.
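Stan's numbers line up with the mkfs.xfs output quoted above: agsize is just the data block count divided by agcount (the last AG may come out slightly smaller).  A quick check:

```shell
#!/bin/sh
# Numbers taken from the mkfs.xfs output quoted above.
blocks=6701716480   # data blocks (bsize=4096)
agcount=32
echo $((blocks / agcount))   # prints 209428640, the reported agsize in blks
```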
> ...
>> Problem is I run Centos so the line;
>> "As of kernel 3.2.12, the default i/o scheduler, CFQ, will defeat much of 
>> the parallelization in XFS. "
>> ... doesn't really apply.
> This makes no sense.  What doesn't apply?

Well, I had assumed it meant Linux kernel version 3.2.12, whereas CentOS is at 
whatever RHEL is at, which is 2.6.32.

At any rate, what I'm getting from you all is to leave agcount alone, as 
agsize maxes out at 1TB and agcount will adjust depending on volume size.

This volume will encounter a lot of random IO, so 32 AGs will suffice at any 
rate.  Unsure if increasing it to Autodesk's 128 will really help my 
environment.  I'm assuming they want a lot of parallelism, which again doesn't 
apply in my case.
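If I do end up testing Autodesk's recommendation, agcount can be forced explicitly at mkfs time.  A sketch, printing the command rather than running it (device path taken from the original command in this thread; agcount=128 is Autodesk's figure, not a recommendation):

```shell
#!/bin/sh
# Sketch only: print the mkfs.xfs invocation with an explicit agcount
# override instead of executing it.  Device path as in the thread.
dev=/dev/mapper/vg_doofus_data-lv_data
echo mkfs.xfs -f -l size=512m -d su=128k,sw=14,agcount=128 "$dev"
```

mkfs.xfs also accepts -N, which prints the geometry it would create without writing anything, handy for comparing agcount choices before committing.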

> You can change to noop or deadline with a single echo command in a
> startup script:
> echo noop > /sys/block/sdX/queue/scheduler
> where sdX is the name of your RAID device.
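To make that stick across reboots, one option is a udev rule instead of a startup script (a sketch; the rule filename and the sd[a-z] match are assumptions, narrow them to the actual RAID device):

```
# /etc/udev/rules.d/60-io-scheduler.rules (hypothetical filename)
# Set the noop elevator when the block device appears; restrict the
# KERNEL match to the actual RAID device rather than all sd* disks.
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="noop"
```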
> -- 
> Stan

- aurf