
Re: XFS vs Elevators (was Re: [PATCH RFC] nilfs2: continuous snapshotting file system)

To: david@xxxxxxx
Subject: Re: XFS vs Elevators (was Re: [PATCH RFC] nilfs2: continuous snapshotting file system)
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Wed, 27 Aug 2008 11:20:13 +1000
Cc: Jamie Lokier <jamie@xxxxxxxxxxxxx>, Nick Piggin <nickpiggin@xxxxxxxxxxxx>, gus3 <musicman529@xxxxxxxxx>, Szabolcs Szakacsits <szaka@xxxxxxxxxxx>, Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>, linux-fsdevel@xxxxxxxxxxxxxxx, linux-kernel@xxxxxxxxxxxxxxx, xfs@xxxxxxxxxxx
In-reply-to: <alpine.DEB.1.10.0808252041300.29665@xxxxxxxxxxxxxx>
Mail-followup-to: david@xxxxxxx, Jamie Lokier <jamie@xxxxxxxxxxxxx>, Nick Piggin <nickpiggin@xxxxxxxxxxxx>, gus3 <musicman529@xxxxxxxxx>, Szabolcs Szakacsits <szaka@xxxxxxxxxxx>, Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>, linux-fsdevel@xxxxxxxxxxxxxxx, linux-kernel@xxxxxxxxxxxxxxx, xfs@xxxxxxxxxxx
References: <20080821051508.GB5706@disturbed> <200808211933.34565.nickpiggin@xxxxxxxxxxxx> <20080821170854.GJ5706@disturbed> <200808221229.11069.nickpiggin@xxxxxxxxxxxx> <20080825015922.GP5706@disturbed> <20080825120146.GC20960@xxxxxxxxxxxxx> <20080826030759.GY5706@disturbed> <alpine.DEB.1.10.0808252041300.29665@xxxxxxxxxxxxxx>
Sender: xfs-bounce@xxxxxxxxxxx
User-agent: Mutt/1.5.18 (2008-05-17)
On Mon, Aug 25, 2008 at 08:50:14PM -0700, david@xxxxxxx wrote:
> it sounds as if the various flag definitions have been evolving, would it 
> be worthwhile to step back and try to get the various filesystem folks to
> brainstorm together on what types of hints they would _like_ to see  
> supported?

Three types:

        1. immediate dispatch - merge first with adjacent requests
           then dispatch
        2. delayed dispatch - queue for a short while to allow
           merging of requests from above
        3. bulk data - queue and merge. dispatch is completely
           controlled by the elevator
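
As a rough sketch of what that could look like from the filesystem side
(the names below, like IO_HINT_* and struct io_request, are made up for
illustration and are not the current bio/request API):

    #include <stdio.h>

    /* Hypothetical dispatch hints matching the three categories above. */
    enum io_dispatch_hint {
            IO_HINT_IMMEDIATE,      /* 1. merge with adjacent requests, dispatch now */
            IO_HINT_DELAYED,        /* 2. hold briefly so requests from above can merge */
            IO_HINT_BULK,           /* 3. queue and merge; the elevator owns dispatch */
    };

    /* Hypothetical request structure carrying the hint down to the elevator. */
    struct io_request {
            unsigned long long      sector;
            unsigned int            nr_sectors;
            enum io_dispatch_hint   hint;
    };

    static void submit_io(struct io_request *rq)
    {
            printf("sector %llu, %u sectors, hint %d\n",
                   rq->sector, rq->nr_sectors, rq->hint);
    }

    int main(void)
    {
            /* Metadata/log write: category 2 by default. */
            struct io_request log  = { 1024, 8, IO_HINT_DELAYED };
            /* Bulk file data: category 3, scheduled entirely by the elevator. */
            struct io_request data = { 409600, 256, IO_HINT_BULK };

            submit_io(&log);
            submit_io(&data);
            return 0;
    }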

Basically, most metadata and log writes would fall into category 2,
with every logbufs/2 log writes, or every log force, using category 1
to prevent log I/O from being stalled too long by other I/O.
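
To make that logbufs/2 rule concrete, a tiny illustration reusing the
hypothetical hint enum from the sketch above (log_write_hint() is made
up for this example, not an XFS function; logbufs >= 2 assumed):

    /*
     * Log writes default to delayed dispatch (category 2), but every
     * logbufs/2 writes, or any explicit log force, goes out as
     * immediate dispatch (category 1) so log I/O can't be starved by
     * other queued I/O.
     */
    static enum io_dispatch_hint log_write_hint(unsigned int write_count,
                                                unsigned int logbufs,
                                                int log_force)
    {
            if (log_force || (write_count % (logbufs / 2)) == 0)
                    return IO_HINT_IMMEDIATE;       /* category 1 */
            return IO_HINT_DELAYED;                 /* category 2 */
    }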

Data I/O from the filesystem (both reads and writes) would fall into
category 3 and would be subject to the elevator's specific scheduling.
That is, things like CFQ ionice throttling would work on the bulk data
queue, but not on the other queues that the filesystem is using for
metadata.

Tagging the I/O as a sync I/O can still be done, but that only
affects category 3 scheduling - category 1 or 2 would do the same
thing whether sync or async....
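In the same hypothetical model, the sync tag would only be consulted
for category 3 requests, something along these lines (elevator_expedites()
is made up for illustration):

    /*
     * The sync flag only changes scheduling for bulk (category 3)
     * requests; categories 1 and 2 dispatch the same way either way.
     */
    static int elevator_expedites(const struct io_request *rq, int is_sync)
    {
            if (rq->hint != IO_HINT_BULK)
                    return 0;       /* sync is ignored for categories 1 and 2 */
            return is_sync;         /* elevator may prioritise sync bulk I/O */
    }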

> it sounds like you are using 'sync' for things where you really should be 
> saying 'metadata' (or 'journal contents'), it's happened to work well  
> enough in the past, but it's forcing you to keep tweaking the 
> filesystems.

Right, because there was no 'metadata' tagging, and 'sync' happened
to do exactly what we needed on all elevators at the time.

> it may be better to try and define things from the 
> filesystem point of view and let the elevators do the tweaking.
>
> basically I'm proposing a complete rethink of the filesystem <-> elevator  
> interface.

Yeah, I've been saying that for a while w.r.t. the filesystem/block
layer interfaces, esp. now with discard requests, data integrity,
device alignment information, barriers, etc. being exposed by the
layers below the filesystem, but with no interface for filesystems
to access that information...

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx

