Zlatko Calusic wrote:
>
> ...
> Call Trace:
> [bounce_end_io+33/120] __alloc_pages+0x249/0x274
> [move_vma+433/912] find_or_create_page+0x3d/0x98
> [xfs_parseargs+1623/1844] _pagebuf_lookup_pages+0x19b/0x3cc
> [xfs_initialize_vnode+188/508] pagebuf_get+0x90/0x110
> [xfs_initialize_vnode+455/508] pagebuf_readahead+0x23/0x28
#ifndef GFP_READAHEAD
#define GFP_READAHEAD 0	/* gfp_mask of zero: no __GFP_WAIT, __GFP_IO or __GFP_HIGHMEM */
#endif
That's an atomic, low-priority allocation. It is expected to
fail, and can easily do so.
So there's your reason - this can quite easily outrun kswapd.
If we really want to do it this way (and I suspect we don't)
then the allocation attempt should be wrapped in PF_NOWARN
to suppress the allocation-failure messages.
And it should be changed to __GFP_HIGHMEM so XFS can perform
readahead into highmem pages.
However, it is probably best to change this to just use
mapping->gfp_mask. I vaguely recall that the nonblocking allocation
improved performance in some situations, but it's quite possible
that the VM problem which made that a good thing got fixed.
And you really should run page reclaim for readahead - the system
is more likely to use readahead pages in the near future than it
is to use pages at the tail of the inactive list.