
Re: Bad performance with XFS + 2.6.38 / 2.6.39

To: Dave Chinner <david@xxxxxxxxxxxxx>
Subject: Re: Bad performance with XFS + 2.6.38 / 2.6.39
From: Xupeng Yun <xupeng@xxxxxxxxx>
Date: Mon, 12 Dec 2011 08:40:15 +0800
Cc: XFS group <xfs@xxxxxxxxxxx>
In-reply-to: <20111211233929.GI14273@dastard>
References: <CACaf2aYZ=k=x8sPFJs4f-4vQxs+qNyoO1EUi8X=iBjWjRhy99Q@xxxxxxxxxxxxxx> <20111211233929.GI14273@dastard>
Sender: recordus@xxxxxxxxx


On Mon, Dec 12, 2011 at 07:39, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
>
> > ====== XFS + 2.6.29 ======
>
> Read 21GB @ 11k iops, 210MB/s, av latency of 1.3ms/IO
> Wrote 2.3GB @ 1250 iops, 20MB/s, av latency of 0.27ms/IO
> Total 1.5m IOs, 95% @ <= 2ms
>
> > ====== XFS + 2.6.39 ======
>
> Read 6.5GB @ 3.5k iops, 55MB/s, av latency of 4.5ms/IO
> Wrote 700MB @ 386 iops, 6MB/s, av latency of 0.39ms/IO
> Total 460k IOs, 95% @ <= 10ms, 4ms > 50% < 10ms
>
> Looking at the IO stats there, this doesn't look to me like an XFS
> problem. The IO times are much, much longer on 2.6.39, so that's the
> first thing to understand. If the two tests are doing identical IO
> patterns, then I'd be looking at validating raw device performance
> first.
>

Thank you Dave.

I also ran raw device and ext4 performance tests with 2.6.39. All of these tests
use identical IO patterns (non-buffered IO, 16 IO threads, 16KB block size,
mixed random reads and writes, r:w = 9:1):
====== raw device + 2.6.39 ======
Read 21.7GB @ 11.6k IOPS, 185MB/s, av latency of 1.37ms/IO
Wrote 2.4GB @ 1.3k IOPS, 20MB/s, av latency of 0.095ms/IO
Total 1.5M IOs, 96% @ <= 2ms

====== ext4 + 2.6.39 ======
Read 21.7GB @ 11.6k IOPS, 185MB/s, av latency of 1.37ms/IO
Wrote 2.4GB @ 1.3k IOPS, 20MB/s, av latency of 0.1ms/IO
Total 1.5M IOs, 96% @ <= 2ms

====== XFS + 2.6.39 ======
Read 6.5GB @ 3.5k IOPS, 55MB/s, av latency of 4.5ms/IO
Wrote 700MB @ 386 IOPS, 6MB/s, av latency of 0.39ms/IO
Total 460k IOs, 95% @ <= 10ms (50% between 4ms and 10ms)
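
For reference, the IO pattern described above can be expressed as a fio job
file. This is only a sketch assuming a fio-style tool was used; the actual
benchmark tool, target filename, and data size are not stated in this thread
and are placeholders here:

```ini
; Mixed random read/write workload matching the pattern above:
; non-buffered IO, 16 IO threads, 16KB blocks, r:w = 9:1.
[global]
ioengine=libaio   ; asynchronous direct IO on Linux
direct=1          ; non-buffered IO (O_DIRECT)
bs=16k            ; 16KB block size
rw=randrw         ; mixed random reads and writes
rwmixread=90      ; 90% reads, 10% writes -> r:w = 9:1
numjobs=16        ; 16 parallel jobs
thread            ; run jobs as threads rather than processes
group_reporting   ; aggregate statistics across all jobs

[mixed-rw]
filename=/data/fio.test   ; placeholder path, not from the original test
size=4g                   ; placeholder size, not from the original test
```

Run with `fio mixed-rw.fio` against the filesystem (or raw device) under test.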

Here are the detailed test results:
== 2.6.39 ==
http://blog.xupeng.me/wp-content/uploads/ext4-xfs-perf/2.6.39-xfs.txt
http://blog.xupeng.me/wp-content/uploads/ext4-xfs-perf/2.6.39-ext4.txt
http://blog.xupeng.me/wp-content/uploads/ext4-xfs-perf/2.6.39-raw.txt

== 2.6.29 ==

>
> > I tried different XFS format options and different mount options, but
> > it did not help.
>
> It won't if the problem is in the layers below XFS.
>
> e.g. IO scheduler behavioural changes could be the cause (esp. if
> you are using CFQ), the SSD could be in different states or running
> garbage collection intermittently and slowing things down, the
> filesystem could be in different states (did you use a fresh
> filesystem for each of these tests?), and recent mkfs.xfs will trim
> the entire device if the kernel supports it.


I ran all the tests on the same server with the deadline scheduler, and the
xfsprogs version is 3.1.4. I also ran the tests with the noop scheduler, but
saw no big difference.

--
Xupeng Yun
http://about.me/xupeng