
To: Steve Lord <lord@xxxxxxx>
Subject: Re: [PATCH] Give logbufs a better default
From: Andi Kleen <ak@xxxxxxx>
Date: Wed, 11 Jun 2003 23:28:23 +0200
Cc: Andi Kleen <ak@xxxxxxx>, linux-xfs@xxxxxxxxxxx, mason@xxxxxxxx
In-reply-to: <1055363594.6614.44.camel@jen.americas.sgi.com>
References: <20030611093525.GA2329@wotan.suse.de> <1055363594.6614.44.camel@jen.americas.sgi.com>
Sender: linux-xfs-bounce@xxxxxxxxxxx
On Wed, Jun 11, 2003 at 03:33:14PM -0500, Steve Lord wrote:
> On Wed, 2003-06-11 at 04:35, Andi Kleen wrote:
> > A long-standing problem in XFS is that in the default configuration
> > metadata performance is not that great, because it does not use enough
> > log buffers. There are FAQs around on how to fix this, but it would be
> > better if the kernel did the right thing by default.
> > 
> > The main problem is probably that XFS still uses the defaults from the
> > early '90s, which are probably not that good anymore for today's machines.
> > 
> > This patch changes the logbufs= default based on the available memory.
> > If you have 128MB or less it uses 3 logbufs (normally 96K per file system).
> > For 400MB or less it uses 5 (160K).
> > For anything bigger, 8 (256K).
> 
> Hi Andi,
> 
> Just wondering why you picked odd numbers? 

Usual handwaving. Of course I should have picked powers of two just to
make it look more scientific, but the range was a bit too small for that ;)

3 was the old default, which seems OK for small systems. A small system
is arbitrarily defined here as <= 128MB of memory.

8 is the current maximum (is there a reason for that, btw? Could it simply
be raised?)

I wanted to get 8 on my 512MB test box. And the kernel's memory count
usually comes out a bit below the nominal size, so I chose 400MB as the
boundary.

5 is somewhere between 3 and 8 for the boxes in between.
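
Spelled out, the whole heuristic is just this (a standalone sketch; the
function name and passing the memory size in as an argument are my own
invention for illustration, the real patch hooks into the kernel's
memory accounting):

    #include <stdio.h>

    /* Pick a logbufs default from the physical memory size in MB. */
    static int pick_logbufs(unsigned long mem_mb)
    {
            if (mem_mb <= 128)
                    return 3;       /* old default, fine for small boxes */
            if (mem_mb <= 400)
                    return 5;       /* mid-range, ~160K per file system */
            return 8;               /* current maximum, ~256K per fs */
    }

    int main(void)
    {
            unsigned long mb[] = { 64, 128, 400, 512, 1024 };

            for (int i = 0; i < 5; i++)
                    printf("%4luMB -> %d logbufs\n",
                           mb[i], pick_logbufs(mb[i]));
            return 0;
    }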

> 
> > 
> > It is still a kind of bandaid. I think the better solution would be to 
> > dynamically allocate new log buffers as needed until some limit
> > (and block if the memory cannot be allocated). This should not be that
> > bad because vmalloc/vfree are not that expensive anymore and with some
> > luck you can even get a physically contiguous buffer (e.g. on a 16byte
> > page size ia64 system)

(16 KByte page size, of course)
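
As a toy model of that "grow until some limit, otherwise block" idea
(plain userspace C with invented names; the real thing would live in
the iclog code, use vmalloc, and have to deal with locking and the
state machine Steve mentions below):

    #include <stdlib.h>
    #include <stddef.h>

    #define MAX_LOGBUFS 8   /* whatever the upper limit ends up being */

    struct logbuf_pool {
            void   *bufs[MAX_LOGBUFS];
            size_t  bufsize;        /* e.g. 32K per buffer */
            int     count;
    };

    /* Add one buffer to the pool on demand.  Returns NULL once the
     * limit is reached; at that point the caller has to wait for an
     * in-flight buffer to be written out and recycled.  In the kernel,
     * malloc() becomes vmalloc(), which may sleep while memory is
     * reclaimed -- that gives you the "block" part for free. */
    static void *pool_grow(struct logbuf_pool *p)
    {
            void *buf;

            if (p->count >= MAX_LOGBUFS)
                    return NULL;
            buf = malloc(p->bufsize);
            if (buf)
                    p->bufs[p->count++] = buf;
            return buf;
    }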
> 
> Interesting idea, one issue is that during recovery, the maximum amount
> of outstanding I/O there might have been (i.e. the number of iclog buffers)
> is a factor in how much work there is to do. Adding new ones dynamically

Hmm, I thought recovery work was bounded by the on-disk log size? How do
the pending buffers come into play? They look more likely to make
you lose a bit more data in case of a crash, but your new sync daemon
with a timer should take care of that (it will still be much better
with this than it ever was before).

Processing 10MB (the new minimum with the mkfs patch for 1GB+ file systems)
or even 64MB at mount shouldn't be a big issue on a modern box.


> might be possible, but there is this 'interesting' state machine on the
> log buffers to deal with there.

I have not really looked into the state machine yet. Are you saying
it has scalability problems with more buffers, or are the data structures
just nasty enough that adding more buffers dynamically could be difficult?

Thanks for your comments,
-Andi

