
Re: xfsdump stuck in io_schedule

To: Andi Kleen <ak@xxxxxxx>
Subject: Re: xfsdump stuck in io_schedule
From: Zlatko Calusic <zlatko.calusic@xxxxxxxx>
Date: Fri, 15 Nov 2002 16:28:38 +0100
Cc: linux-xfs@xxxxxxxxxxx
In-reply-to: <20021115140635.A31836@wotan.suse.de> (Andi Kleen's message of "Fri, 15 Nov 2002 14:06:35 +0100")
References: <dnfzu3yw8u.fsf@magla.zg.iskon.hr> <20021115135233.A13882@oldwotan.suse.de> <dnlm3v9ebk.fsf@magla.zg.iskon.hr> <20021115140635.A31836@wotan.suse.de>
Reply-to: zlatko.calusic@xxxxxxxx
Sender: linux-xfs-bounce@xxxxxxxxxxx
User-agent: Gnus/5.090005 (Oort Gnus v0.05) XEmacs/21.4 (Honest Recruiter, i386-debian-linux)

Andi Kleen <ak@xxxxxxx> writes:

>> I agree. What brought me to this list is that I've observed this
>> behavior only with xfsdump. Although 2.5 is still in its development
>> phase, the VM has been really stable recently. But yes, it's possible
>> that this is a genuine kernel VM bug and xfsdump just triggers it.
>
> I guess xfsdump pins both a lot of pages and a lot of inodes/dentries
> (similar to a program that opens a few thousand files and mlocks
> significant parts of its address space). It may not be the best-tested
> scenario in the world. It's apparently called the "google workload"
> (because all the hundreds of Google cluster boxes run in a similar
> setup), and it was only fixed in the 2.4 VM a short time ago.

Oh yes, you're completely right, of course. It's the pinned part of the
page cache that puts heavy pressure on memory. A whole lot of inactive
page cache pages (~700MB in my case) is not really a good indicator of
recyclable memory when (probably) a big part of it is pinned and can't
be thrown out. So it is the VM, after all.

Maybe pinned pages should be (temporarily?) activated, to better
reflect the fact that they're not easily freeable? That might help
kswapd and friends make better decisions, at least in theory.
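
Something along these lines, maybe (a purely hypothetical sketch, not
actual kernel code; activate_pinned_pages() and the pin heuristic are
made up here just for illustration):

    /*
     * Hypothetical: while scanning the inactive list, spot pages held
     * by extra references and move them to the active list, so they
     * stop looking like reclaimable memory to the scanner.
     */
    static void activate_pinned_pages(struct list_head *inactive_list)
    {
            struct page *page, *next;

            list_for_each_entry_safe(page, next, inactive_list, lru) {
                    /*
                     * A reference count above what the LRU and the
                     * mapping account for suggests the page is pinned
                     * and can't be freed right now.
                     */
                    if (page_count(page) > 1 + !!page->mapping)
                            activate_page(page); /* to the active list */
            }
    }

The accounting itself wouldn't be hard; the tricky part would be
noticing when the pin goes away, so the page can drift back to the
inactive list.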

Hm, to tell the truth, all that doesn't sound like an easy problem to
fix, but I'm sure we'll think of something. :)
-- 
Zlatko

