
To: Andrew Morton <akpm@xxxxxxxxx>
Subject: Re: xfsdump stuck in io_schedule
From: Zlatko Calusic <zlatko.calusic@xxxxxxxx>
Date: Sun, 17 Nov 2002 21:49:44 +0100
Cc: Stephen Lord <lord@xxxxxxx>, Andi Kleen <ak@xxxxxxx>, linux-xfs@xxxxxxxxxxx
In-reply-to: <3DD7F7CB.F292C9C7@xxxxxxxxx> (Andrew Morton's message of "Sun, 17 Nov 2002 12:10:51 -0800")
References: <dnfzu3yw8u.fsf@xxxxxxxxxxxxxxxxx> <20021115135233.A13882@xxxxxxxxxxxxxxxx> <dnlm3v9ebk.fsf@xxxxxxxxxxxxxxxxx> <20021115140635.A31836@xxxxxxxxxxxxx> <dnr8dmj1i1.fsf@xxxxxxxxxxxxxxxxx> <20021115164012.A28685@xxxxxxxxxxxxx> <87u1ih4x29.fsf@xxxxxxxxxxxxxx> <1037539697.1240.30.camel@xxxxxxxxxxxxxxxxxxxxxxx> <877kfcqmy5.fsf@xxxxxxxxxxxxxx> <3DD7EB2C.C20F312E@xxxxxxxxx> <87n0o8c7g5.fsf@xxxxxxxxxxxxxx> <3DD7F7CB.F292C9C7@xxxxxxxxx>
Reply-to: zlatko.calusic@xxxxxxxx
Sender: linux-xfs-bounce@xxxxxxxxxxx
User-agent: Gnus/5.090005 (Oort Gnus v0.05) XEmacs/21.4 (Honest Recruiter, i386-debian-linux)
Andrew Morton <akpm@xxxxxxxxx> writes:
>
> #ifndef GFP_READAHEAD
> #define GFP_READAHEAD   0
> #endif
>
> That's an atomic, low-priority allocation.  It is expected to
> fail, and can easily do so.

Oops, you're right, of course. I totally misinterpreted gfp_mask 0x0;
I forgot it means an atomic allocation.
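
Just to spell it out for myself -- a rough sketch, and I'm going from
memory on the 2.5-era flag names:

        /* gfp_mask 0: no __GFP_WAIT bit, so the allocator must not
         * sleep or run reclaim; it grabs a free page immediately or
         * fails.  This is what GFP_READAHEAD expands to above. */
        page = alloc_page(0);

        /* GFP_KERNEL includes __GFP_WAIT, so the allocator is allowed
         * to block and run page reclaim before giving up. */
        page = alloc_page(GFP_KERNEL);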

>
> So there's your reason - this can quite easily outrun kswapd.
>

Of course.

> If we really want to do it this way (and I suspect we don't)
> then the allocation attempt should be wrapped in PF_NOWARN
> to keep the messages away.
>

That won't be of much help: when xfsdump gets stuck on the page
allocation failure, it is unkillable, some partitions can't be cleanly
unmounted afterwards, and other things get messed up...

Messages in the kernel log are the least of our problems (there aren't
that many of them anyway, typically 1-10).
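
(For completeness, I assume the wrapping Andrew means is the usual
per-task flag dance -- a sketch only, I haven't checked how PF_NOWARN
is actually used in current 2.5:)

        /* Suppress the "page allocation failure" printk for this
         * one attempt; PF_NOWARN is a task flag the page allocator
         * checks before warning. */
        current->flags |= PF_NOWARN;
        page = alloc_page(GFP_READAHEAD);
        current->flags &= ~PF_NOWARN;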

> And it should be changed to __GFP_HIGHMEM so XFS can perform
> readahead into highmem pages.
>
> However it is probably best to change this to just use 
> mapping->gfp_mask.  I vaguely recall that the nonblocking allocation
> improved performance in some situations, but it's quite possible
> that the VM problem which made that a good thing got fixed.
>
> And you really should run page reclaim for readahead - the system
> is more likely to use readahead pages in the near future than it
> is to use pages at the tail of the inactive list.

I think I'll leave it to the XFS guys to fix this properly now; I don't
even understand why allocating pages for readahead needs to be atomic.
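
If I understand the suggestion, the readahead path would change
roughly like this (a sketch only -- I haven't looked at the actual
pagebuf code, so the surrounding names are made up):

        /* before: atomic allocation, expected to fail under load */
        page = alloc_page(GFP_READAHEAD);          /* gfp_mask 0 */

        /* after: honour the address_space allocation policy, which
         * for the page cache is normally GFP_HIGHUSER -- it can
         * block, run page reclaim, and hand back highmem pages */
        page = alloc_page(mapping->gfp_mask);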

Thanks for the explanation.

Regards,
-- 
Zlatko

