
Re: [PATCH] mm/vmscan: Do not block forever at shrink_inactive_list().

To: david@xxxxxxxxxxxxx
Subject: Re: [PATCH] mm/vmscan: Do not block forever at shrink_inactive_list().
From: Tetsuo Handa <penguin-kernel@xxxxxxxxxxxxxxxxxxx>
Date: Fri, 6 Jun 2014 21:19:22 +0900
Cc: rientjes@xxxxxxxxxx, Motohiro.Kosaki@xxxxxxxxxxxxxx, riel@xxxxxxxxxx, kosaki.motohiro@xxxxxxxxxxxxxx, fengguang.wu@xxxxxxxxx, kamezawa.hiroyu@xxxxxxxxxxxxxx, akpm@xxxxxxxxxxxxxxxxxxxx, hch@xxxxxxxxxxxxx, linux-kernel@xxxxxxxxxxxxxxx, xfs@xxxxxxxxxxx
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <20140605131753.GD4523@dastard>
References: <6B2BA408B38BA1478B473C31C3D2074E31D59D8673@xxxxxxxxxxxxxxxxxxxxxxxxxx> <201405262045.CDG95893.HLFFOSFMQOVOJt@xxxxxxxxxxxxxxxxxxx> <alpine.DEB.2.02.1406031442170.19491@xxxxxxxxxxxxxxxxxxxxxxxxx> <201406052145.CIB35534.OQLVMSJFOHtFOF@xxxxxxxxxxxxxxxxxxx> <20140605131753.GD4523@dastard>
Dave Chinner wrote:
> On Thu, Jun 05, 2014 at 09:45:26PM +0900, Tetsuo Handa wrote:
> > This means that, under rare circumstances, all processes other than
> > kswapd can become trapped in the too_many_isolated()/congestion_wait()
> > loop while kswapd is sleeping, because this loop assumes that somebody
> > else will wake kswapd and that kswapd will then make too_many_isolated()
> > return 0. However, we can guarantee neither that kswapd is woken by
> > somebody nor that kswapd avoids blocking operations inside shrinker
> > functions (e.g. mutex_lock()).
> 
> So what you are saying is that kswapd is having problems with
> getting blocked on locks held by processes in direct reclaim?
> 
> What are the stack traces that demonstrate such a dependency loop?

If a normal task's GFP_KERNEL allocation calls a shrinker function, and that
shrinker performs a GFP_WAIT-able allocation while holding a mutex, then kswapd
may be woken by that allocation, call the same shrinker, and block trying to
acquire the mutex the first task already holds, doesn't it?

Since ttm_dma_pool_shrink_count()/ttm_dma_pool_shrink_scan() hold a mutex,
and ttm_dma_pool_shrink_scan() performs a GFP_WAIT-able allocation, I think
such a dependency loop is possible.
