
Re: Consistent throughput challenge -- fragmentation?

To: Brian Cain <brian.cain@xxxxxxxxx>
Subject: Re: Consistent throughput challenge -- fragmentation?
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Tue, 26 Feb 2013 09:16:39 +1100
Cc: xfs@xxxxxxxxxxx
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <CAEWpfG_DKJt1MmWS1tARH4OmYwpSt=A-DzwKkGcD67LuR6k=Bg@xxxxxxxxxxxxxx>
References: <CAEWpfG_DKJt1MmWS1tARH4OmYwpSt=A-DzwKkGcD67LuR6k=Bg@xxxxxxxxxxxxxx>
User-agent: Mutt/1.5.21 (2010-09-15)
On Mon, Feb 25, 2013 at 10:01:53AM -0600, Brian Cain wrote:
> All,
> I have been observing some odd behavior regarding write throughput to an
> XFS partition (the baseline kernel version is ). I see
> consistently high write throughput (close to the performance of the raw
> block device) to the filesystem immediately after a mkfs, but after a few
> test cycles, performance is sporadically poor.
> The test mechanism is like so:
> [mkfs.xfs <blockdev>] (no flags/options, xfsprogs ver 3.1.1-0.1.36)
> ...
> 1. remove a previous test cycle's directory
> 2. create a new directory
> 3. open/write/close a small file (4kb) in this directory
> 4. open/read/close this same small file (by the local NFS server)
> 5. open[O_DIRECT]/write/write/write/.../close a large file (anywhere from
> ~100MB to 200GB)
> Step #5 is where the high-throughput metric is measured; it becomes an
> order of magnitude worse several test cycles after a mkfs.  Omitting
> steps 1-3 avoids the poor performance behavior.
> Can anyone provide any suggestions as to an explanation for the behavior or
> a way to mitigate it?  Running xfs_fsr didn't seem to improve the results.
> I'm happy to share benchmarks, specific results data, or describe the
> hardware being used for the measurements if it's helpful.
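For reference, the test cycle described in the quoted message could be sketched as a shell script along these lines. The mount point, file names, and the exact large-file size are assumptions (the original report only gives a 4 KB small file and a large file of ~100 MB to 200 GB), and the NFS read in step 4 is approximated by a local read:

```shell
#!/bin/sh
# Sketch of the reported test cycle -- paths and sizes are assumptions.
MNT=${MNT:-$(mktemp -d)}        # assumed mount point of the XFS filesystem
CYCLE=${CYCLE:-1}
DIR="$MNT/cycle-$CYCLE"

rm -rf "$MNT/cycle-$((CYCLE - 1))"                # 1. remove previous cycle's dir
mkdir -p "$DIR"                                   # 2. create a new directory
dd if=/dev/zero of="$DIR/small.dat" \
   bs=4096 count=1 2>/dev/null                    # 3. write a 4 KiB file
cat "$DIR/small.dat" > /dev/null                  # 4. read it back (NFS in the report)
# 5. large sequential write with O_DIRECT (dd's oflag=direct maps to
#    open(O_DIRECT)); fall back to a buffered write on filesystems that
#    reject O_DIRECT (e.g. tmpfs), so the sketch runs anywhere.
dd if=/dev/zero of="$DIR/big.dat" bs=1M count=100 oflag=direct 2>/dev/null ||
dd if=/dev/zero of="$DIR/big.dat" bs=1M count=100 conv=fsync 2>/dev/null
echo "cycle $CYCLE complete: $(stat -c %s "$DIR/big.dat") bytes written"
```

Timing step 5 (e.g. with `time` or dd's own throughput report) across repeated cycles would reproduce the measurement pattern described above.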

Post your benchmark script, along with the results you see, and all
the other information listed here:



Dave Chinner
