
Re: 2.6.39-rc4+: oom-killer busy killing tasks

To: Dave Chinner <david@xxxxxxxxxxxxx>
Subject: Re: 2.6.39-rc4+: oom-killer busy killing tasks
From: Christian Kujau <lists@xxxxxxxxxxxxxxx>
Date: Wed, 27 Apr 2011 00:46:51 -0700 (PDT)
Cc: LKML <linux-kernel@xxxxxxxxxxxxxxx>, xfs@xxxxxxxxxxx
In-reply-to: <20110427022655.GE12436@dastard>
References: <alpine.DEB.2.01.1104211841510.18728@xxxxxxxxxxxxxx> <20110424234655.GC12436@dastard> <alpine.DEB.2.01.1104242245090.18728@xxxxxxxxxxxxxx> <alpine.DEB.2.01.1104250015480.18728@xxxxxxxxxxxxxx> <20110427022655.GE12436@dastard>
User-agent: Alpine 2.01 (DEB 1266 2009-07-14)
On Wed, 27 Apr 2011 at 12:26, Dave Chinner wrote:
> What this shows is that VFS inode cache memory usage increases until
> about the 550 sample mark before the VM starts to reclaim it with
> extreme prejudice. At that point, I'd expect the XFS inode cache to
> then shrink, and it doesn't. I've got no idea why either the

Do you remember any XFS changes past 2.6.38 that could be related to 
something like this?

Bisecting is pretty slow on this machine. Could I somehow run 2.6.39-rc4, 
but without the XFS changes merged after 2.6.38? (Does anyone know how to 
do this via git?)
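One rough way to do that with git, assuming the XFS updates for 2.6.39 came 
in via merge commits that still revert cleanly on top of -rc4 (the actual 
merge ids would have to be dug out of the log first):

```shell
# From a linux.git checkout with v2.6.39-rc4 checked out:
# list the merges between v2.6.38 and v2.6.39-rc4 that touched fs/xfs
git log --oneline --merges v2.6.38..v2.6.39-rc4 -- fs/xfs

# then revert a given XFS merge relative to its first parent (-m 1),
# i.e. undo everything that merge brought in
git revert -m 1 <merge-commit-id>
```

Whether the reverts apply cleanly depends on how much later work touched 
the same files, so a bisect limited to the XFS merges may end up being 
necessary anyway.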

> Can you check if there are any blocked tasks nearing OOM (i.e. "echo
> w > /proc/sysrq-trigger") so we can see if XFS inode reclaim is
> stuck somewhere?

Will do, tomorrow.

Should I open a regression bug, so we don't lose track of this thing?

BOFH excuse #425:

stop bit received
