
To: stan@xxxxxxxxxxxxxxxxx
Subject: Re: XFS: Abysmal write performance because of excessive seeking (allocation groups to blame?)
From: Stefan Ring <stefanrin@xxxxxxxxx>
Date: Mon, 9 Apr 2012 13:02:27 +0200
Cc: Linux fs XFS <xfs@xxxxxxxxxxx>
In-reply-to: <4F8055E4.1000808@xxxxxxxxxxxxxxxxx>
References: <CAAxjCEwBMbd0x7WQmFELM8JyFu6Kv_b+KDe3XFqJE6shfSAfyQ@xxxxxxxxxxxxxx> <20350.9643.379841.771496@xxxxxxxxxxxxxxxxxx> <20350.13616.901974.523140@xxxxxxxxxxxxxxxxxx> <CAAxjCEzkemiYin4KYZX62Ei6QLUFbgZESdwS8krBy0dSqOn6aA@xxxxxxxxxxxxxx> <4F7F7C25.8040605@xxxxxxxxxxxxxxxxx> <CAAxjCEyJW1b4dbKctbrgdWjykQt8Hb4Sw1RKdys3oUsehNHCcQ@xxxxxxxxxxxxxx> <4F8055E4.1000808@xxxxxxxxxxxxxxxxx>
> Not at all.  You can achieve this performance with the 6 300GB spindles
> you currently have, as Christoph and I both mentioned.  You simply lose
> one spindle of capacity, 300GB, vs your current RAID6 setup.  Make 3
> RAID1 pairs in the p400 and concatenate them.  If the p400 can't do this,
> concat the mirror pair devices with md --linear.  Format the resulting
> Linux block device with the following and mount with inode64.
>
> $ mkfs.xfs -d agcount=3 /dev/[device]
>
> That will give you 1 AG per spindle, 3 horizontal AGs total instead of 4
> vertical AGs as you get with default striping setup.  This is optimal
> for your high IOPS workload as it eliminates all 'extraneous' seeks
> yielding a per disk access pattern nearly identical to EXT4.  And it
> will almost certainly outrun EXT4 on your RAID6 due mostly to the
> eliminated seeks, but also to elimination of parity calculations.
> You've wiped the array a few times in your testing already right, so one
> or two more test setups should be no sweat.  Give it a go.  The results
> will be pleasantly surprising.

Well, I had to move quite a bit of data around, but for the sake of
completeness I gave it a try.
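
For the record, the setup ended up looking roughly like this (the device
names for the three RAID1 pairs are placeholders, not my actual ones):

$ # concatenate the three RAID1 pair devices into one linear md device
$ mdadm --create /dev/md0 --level=linear --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
$ # one AG per mirror pair, as suggested
$ mkfs.xfs -d agcount=3 /dev/md0
$ mount -o inode64 /dev/md0 /mnt/xfs-test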

With a nice and tidy fresh XFS file system, performance is indeed
impressive – about 16 sec for the same task that took 2 min 25 sec
before. That works out to about 150 MB/sec, which is not great, but for
many tiny files it would perhaps be unreasonable to expect more. A
simple copy of the tar onto the XFS file system yields the same linear
rate, just as on ext4, by the way. So 150 MB/sec seems to be the best
these disks can do, meaning that theoretically, with 3 AGs being
written in parallel, it should be possible to reach 450 MB/sec under
optimal conditions.
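
In case it matters, the comparison above amounts to something along
these lines (the paths and tar file name are just examples):

$ # the many-small-files workload: ~16 sec on the fresh agcount=3 XFS vs ~2 min 25 sec before
$ time tar xf /scratch/smallfiles.tar -C /mnt/xfs-test
$ # the linear baseline: a plain copy of the tar, ~150 MB/sec, same as on ext4
$ time cp /scratch/smallfiles.tar /mnt/xfs-test/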

I will still run the test with the free space fragmentation priming on
the concatenated AG=3 volume, because that case seems to be rather slow
as well.
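
When I do that, I'll probably also check the free space layout before
and after with xfs_db's freesp command (a summary histogram of free
extents by size), e.g.:

$ xfs_db -r -c "freesp -s" /dev/md0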

But then I guess I’m back to ext4 land. XFS just doesn’t offer enough
benefits in this case to justify the hassle.
