
Re: Bad performance with XFS + 2.6.38 / 2.6.39

To: Xupeng Yun <xupeng@xxxxxxxxx>
Subject: Re: Bad performance with XFS + 2.6.38 / 2.6.39
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Mon, 12 Dec 2011 10:39:29 +1100
Cc: XFS group <xfs@xxxxxxxxxxx>
In-reply-to: <CACaf2aYZ=k=x8sPFJs4f-4vQxs+qNyoO1EUi8X=iBjWjRhy99Q@xxxxxxxxxxxxxx>
References: <CACaf2aYZ=k=x8sPFJs4f-4vQxs+qNyoO1EUi8X=iBjWjRhy99Q@xxxxxxxxxxxxxx>
User-agent: Mutt/1.5.21 (2010-09-15)
On Sun, Dec 11, 2011 at 08:45:14PM +0800, Xupeng Yun wrote:
> Hi,
> 
> I am using XFS + 2.6.29 on my MySQL servers, they perform great.
> 
> I am testing XFS on SSD these days. Since FITRIM support for XFS
> shipped with Linux kernel 2.6.38, I tested XFS + 2.6.38 and
> XFS + 2.6.39, but it surprises me how much performance drops with
> these two kernel versions.
> 
> Here are the results of my tests with fio. Both tests were run on
> the same hardware in the same testing environment (only the kernel
> version differed).
> 
> ====== XFS + 2.6.29 ======

Read 21GB @ 11k iops, 210MB/s, av latency of 1.3ms/IO
Wrote 2.3GB @ 1250 iops, 20MB/s, av latency of 0.27ms/IO
Total 1.5m IOs, 95% @ <= 2ms

> ====== XFS + 2.6.39 ======

Read 6.5GB @ 3.5k iops, 55MB/s, av latency of 4.5ms/IO
Wrote 700MB @ 386 iops, 6MB/s, av latency of 0.39ms/IO
Total 460k IOs, 95% <= 10ms, more than 50% between 4ms and 10ms

Looking at the IO stats there, this doesn't look to me like an XFS
problem. The IO times are much, much longer on 2.6.39, so that's the
first thing to understand. If the two tests are doing identical IO
patterns, then I'd be looking at validating raw device performance
first.
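
As a sketch, a raw-device check of a similar random-read pattern can
be run with fio directly against the block device, taking the
filesystem out of the picture entirely. The device name, block size
and iodepth below are placeholders, not the values from the tests
quoted above (WARNING: only run read-only tests against a device that
holds data you care about):

```ini
; Hypothetical fio job for raw-device validation.
; /dev/sdX, bs and iodepth are examples -- substitute your own values.
[global]
filename=/dev/sdX      ; raw block device, bypasses the filesystem
direct=1               ; O_DIRECT, avoids the page cache
ioengine=libaio
runtime=60
time_based

[randread]
rw=randread
bs=16k
iodepth=16
```

If the raw-device numbers also differ between the two kernels, the
regression is below the filesystem and XFS format/mount options won't
change anything.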

> I tried different XFS format options and different mount options, but
> it did not help.

It won't if the problem is in the layers below XFS.

e.g. IO scheduler behavioural changes could be the cause (especially
if you are using CFQ); the SSD could be in a different state or
running garbage collection intermittently and slowing things down;
the filesystem could be in a different state (did you use a fresh
filesystem for each of these tests?); and recent mkfs.xfs will trim
the entire device if the kernel supports it.
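
On the scheduler point, the active elevator for each device can be
read from sysfs; a minimal sketch (the device name in the comment is
only an example):

```shell
#!/bin/sh
# Print the active I/O scheduler (shown in brackets) for each block device.
for f in /sys/block/*/queue/scheduler; do
    [ -e "$f" ] || continue            # skip if the glob matched nothing
    printf '%s: %s\n' "${f#/sys/block/}" "$(cat "$f")"
done

# To try a different scheduler on one device (sda is only an example):
#   echo noop > /sys/block/sda/queue/scheduler
```

Comparing the scheduler (and its tunables) between the two kernels is
a quick way to rule this cause in or out.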

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
