Re: xfsdump stuck in io_schedule

To: Andrew Morton <akpm@xxxxxxxxx>
Subject: Re: xfsdump stuck in io_schedule
From: Steve Lord <lord@xxxxxxx>
Date: 15 Nov 2002 16:42:35 -0600
Cc: zlatko.calusic@xxxxxxxx, Andi Kleen <ak@xxxxxxx>, linux-xfs@xxxxxxxxxxx
In-reply-to: <3DD57602.857AD62D@digeo.com>
References: <dnfzu3yw8u.fsf@magla.zg.iskon.hr> <20021115135233.A13882@oldwotan.suse.de> <dnlm3v9ebk.fsf@magla.zg.iskon.hr> <20021115140635.A31836@wotan.suse.de> <dnr8dmj1i1.fsf@magla.zg.iskon.hr> <3DD57602.857AD62D@digeo.com>
Sender: linux-xfs-bounce@xxxxxxxxxxx
On Fri, 2002-11-15 at 16:32, Andrew Morton wrote:
> Zlatko Calusic wrote:
> > 
> > Oh, yes, you're completely right, of course. It's the pinned part of
> > page cache that makes big pressure on the memory. Whole lot of
> > inactive page cache pages (~700MB in my case) is not really good
> > indicator of recyclable memory, when (probably) a big part of it is
> > pinned and can't be thrown out. So it is VM, after all.
> Does xfs_dump actually pin 700 megs of memory??
> If someone could provide a detailed description of what xfs_dump
> is actually doing internally, that may help me shed some light.
> xfs_dump is actually using kernel support for coherency reasons,
> is that not so?   How does it work?

Hmm, a detailed description of xfsdump would take a long time. In short,
it reads the filesystem via system calls: it uses an ioctl to read
inodes from disk in chunks rather than doing a directory walk, and
file data is read via the read system call. It does keep some state
around in memory, but 700 MB sounds like far too much and is not at
all normal.

> Does the machine have highmem?
> What was the backtrace into the page allocation failure?

And what does the slabcache look like? It is always possible we are
leaking inode references in the 2.5 version of the bulkstat code.



Steve Lord                                      voice: +1-651-683-3511
Principal Engineer, Filesystem Software         email: lord@xxxxxxx
