
Re: xfsdump stuck in io_schedule

To: zlatko.calusic@xxxxxxxx
Subject: Re: xfsdump stuck in io_schedule
From: Andrew Morton <akpm@xxxxxxxxx>
Date: Sun, 17 Nov 2002 12:10:51 -0800
Cc: Stephen Lord <lord@xxxxxxx>, Andi Kleen <ak@xxxxxxx>, linux-xfs@xxxxxxxxxxx
References: <dnfzu3yw8u.fsf@xxxxxxxxxxxxxxxxx> <20021115135233.A13882@xxxxxxxxxxxxxxxx> <dnlm3v9ebk.fsf@xxxxxxxxxxxxxxxxx> <20021115140635.A31836@xxxxxxxxxxxxx> <dnr8dmj1i1.fsf@xxxxxxxxxxxxxxxxx> <20021115164012.A28685@xxxxxxxxxxxxx> <87u1ih4x29.fsf@xxxxxxxxxxxxxx> <1037539697.1240.30.camel@xxxxxxxxxxxxxxxxxxxxxxx> <877kfcqmy5.fsf@xxxxxxxxxxxxxx> <3DD7EB2C.C20F312E@xxxxxxxxx> <87n0o8c7g5.fsf@xxxxxxxxxxxxxx>
Sender: linux-xfs-bounce@xxxxxxxxxxx
Zlatko Calusic wrote:
> ...
> Call Trace:
>  [bounce_end_io+33/120] __alloc_pages+0x249/0x274
>  [move_vma+433/912] find_or_create_page+0x3d/0x98
>  [xfs_parseargs+1623/1844] _pagebuf_lookup_pages+0x19b/0x3cc
>  [xfs_initialize_vnode+188/508] pagebuf_get+0x90/0x110
>  [xfs_initialize_vnode+455/508] pagebuf_readahead+0x23/0x28

#define GFP_READAHEAD   0

That's an atomic, low-priority allocation: with none of the reclaim
bits set it cannot block or start I/O, so it is expected to fail, and
can easily do so.

So there's your reason: these readahead allocations can quite easily
outrun kswapd.

If we really want to do it this way (and I suspect we don't)
then the allocation attempt should be wrapped in PF_NOWARN
to suppress the allocation-failure messages.

And it should also pass __GFP_HIGHMEM so XFS can perform
readahead into highmem pages.

However it is probably best to change this to just use 
mapping->gfp_mask.  I vaguely recall that the nonblocking allocation
improved performance in some situations, but it's quite possible
that the VM problem which made that a good thing got fixed.

And you really should run page reclaim for readahead - the system
is more likely to use readahead pages in the near future than it
is to use pages at the tail of the inactive list.
