
Re: Kernel 2.6.9 Multiple Page Allocation Failures

To: Lukas Hejtmanek <xhejtman@xxxxxxxxxxxxxxxxx>
Subject: Re: Kernel 2.6.9 Multiple Page Allocation Failures
From: Andrew Morton <akpm@xxxxxxxx>
Date: Thu, 2 Dec 2004 16:18:39 -0800
Cc: zaphodb@xxxxxxxxxxx, marcelo.tosatti@xxxxxxxxxxxx, piggin@xxxxxxxxxxxxxxx, linux-kernel@xxxxxxxxxxxxxxx, linux-xfs@xxxxxxxxxxx
In-reply-to: <20041202231837.GB15185@xxxxxxxxxxxx>
References: <20041113144743.GL20754@xxxxxxxxxxx> <20041116093311.GD11482@xxxxxxxxxx> <20041116170527.GA3525@xxxxxxxxxxxx> <20041121014350.GJ4999@xxxxxxxxxxx> <20041121024226.GK4999@xxxxxxxxxxx> <20041202195422.GA20771@xxxxxxxxxxxx> <20041202122546.59ff814f.akpm@xxxxxxxx> <20041202210348.GD20771@xxxxxxxxxxxx> <20041202223146.GA31508@xxxxxxxxxxx> <20041202145610.49e27b49.akpm@xxxxxxxx> <20041202231837.GB15185@xxxxxxxxxxxx>
Sender: linux-xfs-bounce@xxxxxxxxxxx
Lukas Hejtmanek <xhejtman@xxxxxxxxxxxxxxxxx> wrote:
>
> On Thu, Dec 02, 2004 at 02:56:10PM -0800, Andrew Morton wrote:
> > It's quite possible that XFS is performing rather too many GFP_ATOMIC
> > allocations and is depleting the page reserves.  Although increasing
> > /proc/sys/vm/min_free_kbytes should help there.
> 
> Btw, how does min_free_kbytes work?

The page reclaim code and the page allocator will aim to keep that amount
of memory free for emergency, IRQ and atomic allocations.
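
Roughly: each zone gets a pages_min watermark derived from min_free_kbytes,
and GFP_ATOMIC callers are allowed to dip below it, since they cannot sleep
and wait for kswapd.  Here is a toy user-space model of that check (loosely
based on the 2.6 allocator's watermark logic in mm/page_alloc.c; simplified
and illustrative, not the actual kernel code):

#include <stdio.h>

/*
 * Toy model of the 2.6 page allocator's watermark check.
 * min_free_kbytes is distributed across the zones as their
 * pages_min watermarks.
 */
struct zone {
        long free_pages;        /* pages currently free in this zone */
        long pages_min;         /* zone's share of min_free_kbytes, in pages */
};

/* gfp_high models GFP_ATOMIC: such callers may eat into the reserve. */
static int watermark_ok(const struct zone *z, int order, int gfp_high)
{
        long min = z->pages_min;

        if (gfp_high)
                min -= min / 2; /* atomic callers may dip below pages_min */

        return z->free_pages - (1L << order) >= min;
}

int main(void)
{
        /* A zone already below its watermark. */
        struct zone z = { .free_pages = 3000, .pages_min = 4096 };

        printf("GFP_ATOMIC order-0 ok: %d\n", watermark_ok(&z, 0, 1)); /* 1 */
        printf("GFP_KERNEL order-0 ok: %d\n", watermark_ok(&z, 0, 0)); /* 0 */
        return 0;
}

So raising min_free_kbytes enlarges the cushion that atomic allocations
(such as a driver refilling its Rx ring at interrupt time) can draw on.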

> I have up to 1MB TCP windows. If I'm running out of memory, then kswapd (or
> bdflush) should try to free some memory.

Yes, there's some latency involved, especially on uniprocessor: if the CPU is
stuck in an interrupt handler refilling a huge network Rx ring, waking kswapd
won't do anything and you will run out of memory.

> But on GE I can receive data faster than the disk is able to swap or flush
> buffers. So I should keep min_free big enough to give the disk time to
> flush/swap data?

All I can say is "experiment with it".

It might be useful to renice kswapd so that userspace processes do not
increase its latency.
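
For instance (an illustrative sketch using setpriority(2); -10 is an
arbitrary example value, and you would pass kswapd's pid, e.g. the one
reported by `pidof kswapd0`; renice(8) from the shell does the same thing):

#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

/*
 * Raise a process's scheduling priority (e.g. kswapd0's) so that
 * userspace load does not delay page reclaim.  Lowering the nice
 * value requires root.
 */
int main(int argc, char **argv)
{
        if (argc != 2) {
                fprintf(stderr, "usage: %s <kswapd-pid>\n", argv[0]);
                return 1;
        }
        if (setpriority(PRIO_PROCESS, (id_t)atoi(argv[1]), -10) != 0) {
                perror("setpriority");
                return 1;
        }
        return 0;
}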

