On Fri 20-02-15 00:29:29, Tetsuo Handa wrote:
> Michal Hocko wrote:
> > On Thu 19-02-15 13:29:14, Michal Hocko wrote:
> > [...]
> > > Something like the following.
> > __GFP_HIGH doesn't seem to be sufficient so we would need something
> > slightly different, but the idea is still the same:
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 8d52ab18fe0d..2d224bbdf8e8 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -2599,6 +2599,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> > enum migrate_mode migration_mode = MIGRATE_ASYNC;
> > bool deferred_compaction = false;
> > int contended_compaction = COMPACT_CONTENDED_NONE;
> > + int oom = 0;
> > /*
> > * In the slowpath, we sanity check order to avoid ever trying to
> > @@ -2635,6 +2636,15 @@ retry:
> > alloc_flags = gfp_to_alloc_flags(gfp_mask);
> > /*
> > + * __GFP_NOFAIL allocations cannot fail but yet the current context
> > + * might be blocking resources needed by the OOM victim to terminate.
> > + * Allow the caller to dive into memory reserves to succeed the
> > + * allocation and break out from a potential deadlock.
> > + */
> We don't know how many callers will pass __GFP_NOFAIL. But if 1000
> threads are doing the same operation which requires a __GFP_NOFAIL
> allocation while holding a lock, wouldn't the memory reserves be depleted?
We shouldn't have an unbounded number of GFP_NOFAIL allocations in
flight at the same time. That would be even more broken. If a workload
is known to use such allocations excessively then the administrator can
enlarge the memory reserves.
> This heuristic can't continue if the memory reserves are depleted or
> contiguous pages of the requested order cannot be found.
Once memory reserves are depleted we are screwed anyway and we might