To: Szabolcs Szakacsits <szaka@xxxxxxxxxxx>, Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>, linux-fsdevel@xxxxxxxxxxxxxxx, linux-kernel@xxxxxxxxxxxxxxx, xfs@xxxxxxxxxxx
Subject: Re: XFS vs Elevators (was Re: [PATCH RFC] nilfs2: continuous snapshotting file system)
From: Matthew Wilcox <matthew@xxxxxx>
Date: Thu, 21 Aug 2008 05:53:10 -0600
In-reply-to: <20080821060418.GC5706@disturbed>
References: <20080820004326.519405a2.akpm@xxxxxxxxxxxxxxxxxxxx> <200808201613.AA00212@xxxxxxxxxxxxxxxxxxxxxx> <Pine.LNX.4.61.0808202352450.4532@dhcppc2> <20080820143916.1a7eddab.akpm@xxxxxxxxxxxxxxxxxxxx> <20080821021259.GA5706@disturbed> <Pine.LNX.4.62.0808210535450.25448@xxxxxxxxxxxxxxxxxxx> <20080821051508.GB5706@disturbed> <20080821060418.GC5706@disturbed>
Sender: xfs-bounce@xxxxxxxxxxx
User-agent: Mutt/1.5.13 (2006-08-11)
On Thu, Aug 21, 2008 at 04:04:18PM +1000, Dave Chinner wrote:
> One thing I just found out - my old *laptop* is 4-5x faster than the
> 10krpm scsi disk behind an old cciss raid controller.  I'm wondering
> if the long delays in dispatch are caused by an interaction with CTQ
> but I can't change it on the cciss raid controllers. Are you using
> ctq/ncq on your machine?  If so, can you reduce the depth to
> something less than 4 and see what difference that makes?
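
(Aside: on disks driven by the plain sd layer the tag depth can usually be
lowered through sysfs by writing to /sys/block/<disk>/device/queue_depth; as
Dave notes, the cciss logical drives don't expose that knob.  A minimal C
sketch of doing just that -- the /dev/sda path and the depth of 4 are
assumptions for illustration, and it needs root:

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
	/* path and depth default to illustrative values; override on the command line */
	const char *path = argc > 1 ? argv[1] : "/sys/block/sda/device/queue_depth";
	const char *depth = argc > 2 ? argv[2] : "4";
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return 1;
	}
	fprintf(f, "%s\n", depth);
	if (fclose(f)) {
		perror(path);
		return 1;
	}
	printf("set %s to %s\n", path, depth);
	return 0;
}
)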

I don't think that's going to make a difference when using CFQ.  I did
some tests which showed that CFQ would never issue more than one I/O at a
time to a drive.  This was using sixteen userspace threads, each doing a
4k direct I/O to the same location.  With noop I would get 70k IOPS;
with CFQ I'd get around 40k IOPS.
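
Something along these lines reproduces that setup (a minimal sketch, not the
exact program used; /dev/sdb, the read-only access and the 10 second run are
assumptions for illustration):

#define _GNU_SOURCE		/* for O_DIRECT */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define NR_THREADS	16
#define BLOCK_SIZE	4096
#define RUN_SECONDS	10

static const char *device = "/dev/sdb";	/* placeholder target */
static volatile int stop;
static unsigned long completed[NR_THREADS];

static void *worker(void *arg)
{
	long id = (long)arg;
	void *buf;
	int fd;

	fd = open(device, O_RDONLY | O_DIRECT);
	if (fd < 0) {
		perror("open");
		return NULL;
	}
	/* O_DIRECT needs an aligned buffer */
	if (posix_memalign(&buf, BLOCK_SIZE, BLOCK_SIZE)) {
		close(fd);
		return NULL;
	}

	/* every thread hammers the same 4k block at offset 0 */
	while (!stop) {
		if (pread(fd, buf, BLOCK_SIZE, 0) != BLOCK_SIZE)
			break;
		completed[id]++;
	}

	free(buf);
	close(fd);
	return NULL;
}

int main(void)
{
	pthread_t threads[NR_THREADS];
	unsigned long total = 0;
	long i;

	for (i = 0; i < NR_THREADS; i++)
		pthread_create(&threads[i], NULL, worker, (void *)i);

	sleep(RUN_SECONDS);
	stop = 1;

	for (i = 0; i < NR_THREADS; i++) {
		pthread_join(threads[i], NULL);
		total += completed[i];
	}

	printf("%lu IOPS\n", total / RUN_SECONDS);
	return 0;
}

Build with something like "gcc -O2 -o dio16 dio16.c -lpthread", point it at an
otherwise idle disk, and switch the elevator via
/sys/block/<disk>/queue/scheduler between runs to compare.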

-- 
Matthew Wilcox                          Intel Open Source Technology Centre
"Bill, look, we understand that you're interested in selling us this
operating system, but compare it to ours.  We can't possibly take such
a retrograde step."

