
Re: 2.6.39-rc4+: oom-killer busy killing tasks

To: Dave Chinner <david@xxxxxxxxxxxxx>
Subject: Re: 2.6.39-rc4+: oom-killer busy killing tasks
From: Christian Kujau <lists@xxxxxxxxxxxxxxx>
Date: Sun, 1 May 2011 21:59:35 -0700 (PDT)
Cc: Markus Trippelsdorf <markus@xxxxxxxxxxxxxxx>, LKML <linux-kernel@xxxxxxxxxxxxxxx>, xfs@xxxxxxxxxxx, minchan.kim@xxxxxxxxx
In-reply-to: <20110501080149.GD13542@dastard>
References: <alpine.DEB.2.01.1104242245090.18728@xxxxxxxxxxxxxx> <alpine.DEB.2.01.1104250015480.18728@xxxxxxxxxxxxxx> <20110427022655.GE12436@dastard> <alpine.DEB.2.01.1104270042510.18728@xxxxxxxxxxxxxx> <20110427102824.GI12436@dastard> <alpine.DEB.2.01.1104281008320.18728@xxxxxxxxxxxxxx> <20110428233751.GR12436@dastard> <alpine.DEB.2.01.1104291250480.18728@xxxxxxxxxxxxxx> <20110429201701.GA13166@xxxxxxxxxxxxxx> <alpine.DEB.2.01.1104291710340.18728@xxxxxxxxxxxxxx> <20110501080149.GD13542@dastard>
User-agent: Alpine 2.01 (DEB 1266 2009-07-14)
On Sun, 1 May 2011 at 18:01, Dave Chinner wrote:
> I really don't know why the xfs inode cache is not being trimmed. I
> really, really need to know if the XFS inode cache shrinker is
> getting blocked or not running - do you have those sysrq-w traces
> when near OOM I asked for a while back?

I tried to generate those via /proc/sysrq-trigger (I don't have an F13/Print 
Screen key), but the OOM killer kicks in pretty fast - so fast that my 
debug script, which tried to generate sysrq-w every second, was too late 
and the machine was already dead:

   http://nerdbynature.de/bits/2.6.39-rc4/oom/
   * messages-10.txt.gz
   * slabinfo-10.txt.bz2

Timeline:
  - du(1) started at 12:25:16 (and immediately listed
    as "blocked" task)
  - the last sysrq-w succeeded at 12:38:05, listing kswapd0
  - du invoked oom-killer at 12:38:06

I'll keep trying...
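Something like the following capture loop might catch the last trace 
before the OOM hits (a minimal sketch - the log path is made up for 
illustration, and it needs root to write to /proc/sysrq-trigger):

```shell
#!/bin/sh
# Sketch: dump blocked-task traces (sysrq-w) once a second until the
# machine dies, syncing after each write so the tail of the log
# survives the hang. LOG is a hypothetical path.
LOG=/var/log/sysrq-w.log
[ -w /proc/sysrq-trigger ] || { echo "need root" >&2; exit 1; }
while :; do
    echo w > /proc/sysrq-trigger   # traces land in the kernel ring buffer
    dmesg -c >> "$LOG"             # append ring buffer to log, then clear it
    sync                           # flush the log to disk
    sleep 1
done
```

Clearing the ring buffer with `dmesg -c` keeps each iteration's output 
from being re-appended on the next pass.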

> scan only scanned 516 pages. I can't see it freeing many inodes
> (there's >600,000 of them in memory) based on such a low page scan
> number.

Not sure if this is related... the XFS filesystem I'm running du(1) on is 
~1 TB in size, with 918K allocated inodes, if df(1) is correct:

# df -hi /mnt/backup/
Filesystem            Inodes   IUsed   IFree IUse% Mounted on
/dev/mapper/wdc1         37M    918K     36M    3% /mnt/backup

> Maybe you should tweak /proc/sys/vm/vfs_cache_pressure to make it
> reclaim vfs structures more rapidly. It might help

/proc/sys/vm/vfs_cache_pressure is currently set to '100'. You mean I 
should increase it? To... 150? 200? 1000?
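(For reference: 100 is the default, and values above 100 bias reclaim 
toward the dentry/inode caches relative to the page cache. A sketch of 
trying a higher value - the 200 here is just a guess, not a known-good 
setting for this workload; needs root:)

```shell
# Show the current value, then raise the reclaim bias toward
# dentry/inode caches. 200 is an arbitrary trial value.
sysctl vm.vfs_cache_pressure
sysctl -w vm.vfs_cache_pressure=200
```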

Thanks,
Christian.
-- 
BOFH excuse #347:

The rubber band broke
