
Re: TAKE - Locking fixes for the xfs I/O path

To: Steve Lord <lord@xxxxxxx>
Subject: Re: TAKE - Locking fixes for the xfs I/O path
From: Marcelo Tosatti <marcelo@xxxxxxxxxxxxxxxx>
Date: Wed, 24 Jan 2001 12:28:54 -0200 (BRST)
Cc: linux-xfs@xxxxxxxxxxx
In-reply-to: <200101241607.f0OG7mQ10240@jen.americas.sgi.com>
Sender: owner-linux-xfs@xxxxxxxxxxx
On Wed, 24 Jan 2001, Steve Lord wrote:

> > 
> > On Tue, 23 Jan 2001, Steve Lord wrote:
> > 
> > > Finally change the flags on memory allocations which happen under
> > > filesystem locks (usually the xfs inode lock) to use GFP_BUFFER rather
> > > than GFP_KERNEL. This stops the memory reclaim threads from pushing
> > > back into the filesystem again to free memory and deadlocking.
> > > 
> > > I have not yet managed to deadlock a system due to memory pressure with
> > > these changes. dbench throughput also appears to improve.
> > 
> > Steve, 
> > 
> > We have to check whether, under 2.4.1, XFS allocations fail on
> > low-memory machines under heavy IO.
> > 
> > We are not waiting for kswapd anymore, so the !__GFP_IO allocations are
> > more fragile.
> > 
> 
> Oh Joy! So even though the request flags say it is ok to sleep for memory, it
> can still fail? I was hoping we had found a way out of this hole. OK, I do
> see that GFP_BUFFER users are expected to cope with failure, looks like it is
> back to the drawing board here - xfs cannot cope with a non-robust memory
> allocator. What we really need is an interface which says go get some memory,
> and don't return until you have some, but do not bug me to free memory. The
> problem with the GFP flags is that they are an extremely large hammer - i.e.
> do not ask any filesystem for any memory is a bit over the top.

Wait. We can call flush_dirty_buffers(0) for !__GFP_IO allocations.

This will block them at ll_rw_block(), which is OK since you're not going
through the filesystem codepath anymore.
