
Re: xfs performance problem

To: Dave Chinner <david@xxxxxxxxxxxxx>
Subject: Re: xfs performance problem
From: Christoph Hellwig <hch@xxxxxxxxxxxxx>
Date: Sun, 1 May 2011 12:55:46 -0400
Cc: Martin Steigerwald <Martin@xxxxxxxxxxxx>, xfs@xxxxxxxxxxx
In-reply-to: <20110501085246.GF13542@dastard>
References: <4DB72084.8020205@xxxxxxxxxxx> <20110427023534.GF12436@dastard> <201104291827.35801.Martin@xxxxxxxxxxxx> <20110501085246.GF13542@dastard>
User-agent: Mutt/1.5.21 (2010-09-15)
On Sun, May 01, 2011 at 06:52:46PM +1000, Dave Chinner wrote:
> > > more than likely your problem is that barriers have been enabled for
> > > MD/DM devices on the new kernel, and they aren't on the old kernel.
> > > XFS uses barriers by default, ext3 does not. Hence XFS performance
> > > will change while ext3 will not. Check dmesg output when mounting
> > > the filesystems on the different kernels.
> > 
> > But didn't 2.6.38 replace barriers by explicit flushes the filesystem
> > has to wait for - mitigating most of the performance problems with
> > barriers?
> 
> IIRC, it depends on whether the hardware supports FUA or not. If it
> doesn't then device cache flushes are used to emulate FUA and so
> performance can still suck. Christoph will no doubt correct me if I
> got that wrong ;)

Mitigating most of the barrier performance issues is putting it a bit
strongly.  Yes, it removes the useless ordering requirements, but
fundamentally you still have to flush the disk cache to the physical
medium, which is always going to be slower than just filling up a DRAM
cache, as ext3's default behaviour in mainline does (interestingly,
both SLES and RHEL have patched it to provide safe behaviour by
default).
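
To make the mechanics concrete, here is a minimal sketch of how a
filesystem might submit a log write through the 2.6.37+ flush
machinery.  The function names, the single-page payload, and the
completion plumbing are made up for illustration; this is not XFS's
actual log code:

#include <linux/bio.h>
#include <linux/blkdev.h>
#include <linux/completion.h>
#include <linux/fs.h>           /* WRITE_FLUSH_FUA */

static void log_write_end_io(struct bio *bio, int error)
{
        /* log write is on stable storage (or failed); wake the waiter */
        complete(bio->bi_private);
        bio_put(bio);
}

static void submit_log_write(struct block_device *bdev, struct page *page,
                             sector_t sector, struct completion *done)
{
        struct bio *bio = bio_alloc(GFP_NOIO, 1);

        bio->bi_bdev = bdev;
        bio->bi_sector = sector;
        bio->bi_end_io = log_write_end_io;
        bio->bi_private = done;
        bio_add_page(bio, page, PAGE_SIZE, 0);

        /*
         * WRITE_FLUSH_FUA = write + empty pre-flush (REQ_FLUSH) + REQ_FUA.
         * The pre-flush makes previously completed writes stable before
         * the log write; FUA makes the log write itself stable before
         * the bio completes.
         */
        submit_bio(WRITE_FLUSH_FUA, bio);
}

If the queue does not advertise FUA, the block layer transparently
turns the REQ_FUA into a full post-flush, which is where the remaining
performance hit comes from.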

Both the old barrier code and the new flush code will use the FUA bit
if it is available, which optimizes away the post-flush for log writes.
Note that libata currently disables FUA support by default, even if
the disk supports it, so you'll need a SAS/FC/iSCSI/etc device to
actually see FUA requests.  That is quite sad, as FUA should provide a
nice speedup especially for SATA, where the cache flush command is not
queueable and thus still requires us to drain any outstanding I/O, at
least for a short duration.
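
For reference, the capability advertisement lives on the driver side.
A sketch of the 2.6.37+ interface, with a made-up setup function for
illustration:

#include <linux/blkdev.h>

static void example_setup_queue(struct request_queue *q, bool has_fua)
{
        if (has_fua)
                /* device honors the FUA bit; no post-flush needed */
                blk_queue_flush(q, REQ_FLUSH | REQ_FUA);
        else
                /*
                 * cache flush only: the block layer emulates each
                 * REQ_FUA write as write + full cache flush.  This is
                 * what you get on SATA today, since libata does not
                 * pass FUA through even for disks that support the
                 * command.
                 */
                blk_queue_flush(q, REQ_FLUSH);
}

So even with the new flush code, a SATA disk behind libata ends up
doing write + FLUSH CACHE for every log write, which is why the
non-queueable flush still forces the drain of outstanding I/O
described above.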
