
Re: tuning XFS for tiny files

To: David Chinner <dgc@xxxxxxx>
Subject: Re: tuning XFS for tiny files
From: Andi Kleen <andi@xxxxxxxxxxxxxx>
Date: Thu, 19 Jul 2007 15:54:59 +0200
Cc: Andi Kleen <andi@xxxxxxxxxxxxxx>, timotheus <timotheus@xxxxxxxxxxx>, linux-xfs@xxxxxxxxxxx
In-reply-to: <20070719131401.GW31489@xxxxxxx>
References: <m23azlbpl1.fsf@xxxxxxxxxxx> <p73wswwwm1o.fsf@xxxxxxxxxxxxxx> <20070719131401.GW31489@xxxxxxx>
Sender: xfs-bounce@xxxxxxxxxxx
User-agent: Mutt/1.4.2.1i
On Thu, Jul 19, 2007 at 11:14:01PM +1000, David Chinner wrote:
> On Thu, Jul 19, 2007 at 03:38:43PM +0200, Andi Kleen wrote:
> > timotheus <timotheus@xxxxxxxxxxx> writes:
> > 
> > > Hi. Is there a way to tune XFS filesystem parameters to better address
> > > the usage pattern of 10000s of tiny files in directories such as:
> > >     maildir directory
> > >     mh mail directory
> > >     ccache directory
> > > 
> > > My understanding is that XFS will always be much slower than reiserfs
> > > with respect to deleting 10000s of files; but that it might be possible
> > > to tune XFS toward more rapid read access of 10000s of tiny files.
> > 
> > -d agcount=1 at mkfs time might help (unless you have a lot of CPUs) 
> 
> Yeah, might help, but it's not good for being able to repair the
> filesystem - repair will be unable to find a secondary superblock
> to compare the primary against and abort.....

Any reason why it aborts? It could just continue with a warning, couldn't it?
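
For reference, the scenario under discussion looks roughly like the
following sketch (/dev/sdb1 is just a placeholder device):

    # single allocation group, as suggested above
    mkfs.xfs -d agcount=1 /dev/sdb1

    # with only one AG there is no secondary superblock for
    # repair to cross-check the primary superblock against
    xfs_repair -n /dev/sdb1    # -n = no-modify check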

> -d agcount is only good for science experiments, not production
> systems ;)

XFS small file performance needs a lot of science.

-Andi

