
Re: grab_cache_page deadlock | was Re: set_buffer_dirty_uptodate

To: Andrea Arcangeli <andrea@xxxxxxx>
Subject: Re: grab_cache_page deadlock | was Re: set_buffer_dirty_uptodate
From: Marcelo Tosatti <marcelo@xxxxxxxxxxxxxxxx>
Date: Fri, 29 Dec 2000 13:59:05 -0200 (BRST)
Cc: Andi Kleen <ak@xxxxxxx>, Rajagopal Ananthanarayanan <ananth@xxxxxxx>, linux-xfs@xxxxxxxxxxx
In-reply-to: <20001229184444.H12791@athlon.random>
Sender: owner-linux-xfs@xxxxxxxxxxx

On Fri, 29 Dec 2000, Andrea Arcangeli wrote:

> Hello!
> 
> On Fri, Dec 29, 2000 at 01:07:37PM -0200, Marcelo Tosatti wrote:
> > Basically what Andrea did in 2.2 was to add a "has_io_locks" counter
> > to the task_struct structure which indicates whether the current
> > process holds any fs lock (the counter is incremented whenever an fs
> > lock is taken, and decremented when it is released).
> > 
> > With this scheme it's possible to avoid sleeping on kswapd when
> > "current->has_io_locks" > 0, and so avoid the deadlock.
> 
> Correct.
> 
> However, I don't see why somebody is waiting for kswapd in the first place ;). 
  
From mm/page_alloc.c (the __alloc_pages() function) in 2.4:

                /*
                 * When we arrive here, we are really tight on memory.
                 *
                 * We wake up kswapd and sleep until kswapd wakes us
                 * up again. After that we loop back to the start.
                 *
                 * We have to do this because something else might eat
                 * the memory kswapd frees for us and we need to be
                 * reliable. Note that we don't loop back for higher
                 * order allocations since it is possible that kswapd
                 * simply cannot free a large enough contiguous area
                 * of memory *ever*.
                 */
                if ((gfp_mask & (__GFP_WAIT|__GFP_IO)) == (__GFP_WAIT|__GFP_IO)) {
                        wakeup_kswapd(1);
                        memory_pressure++;
                        if (!order)
                                goto try_again;
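
For illustration, here is a minimal sketch of how the 2.2-style check could
be wired into the path above. The lock wrapper and its struct are made-up
names, and I'm assuming wakeup_kswapd()'s argument selects whether the
caller blocks, as the snippet above suggests; this is not the actual 2.2
patch:

        /* task_struct grows a counter of fs locks held by the task: */
        struct task_struct {
                /* ... existing fields ... */
                int has_io_locks;       /* > 0 while any fs lock is held */
        };

        /* Filesystems bump it around their locking primitives
         * (my_fs_lock is a stand-in for whatever lock the fs uses): */
        static inline void fs_lock(struct my_fs_lock *l)
        {
                current->has_io_locks++;
                down(&l->sem);
        }

        static inline void fs_unlock(struct my_fs_lock *l)
        {
                up(&l->sem);
                current->has_io_locks--;
        }

        /* __alloc_pages() then refuses to sleep on kswapd while the
         * caller holds an fs lock, since kswapd may need that very
         * lock to free memory (that is the deadlock): */
                if ((gfp_mask & (__GFP_WAIT|__GFP_IO)) == (__GFP_WAIT|__GFP_IO)) {
                        /* wake kswapd, but block only when deadlock-safe */
                        wakeup_kswapd(current->has_io_locks == 0);
                        memory_pressure++;
                        if (!order)
                                goto try_again;
                }

The point is just that the allocator can cheaply tell whether the current
task is allowed to wait for kswapd at all.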


