On Fri 20-02-15 09:03:55, Dave Chinner wrote:
[...]
> Converting the code to use GFP_NOFAIL takes us in exactly the
> opposite direction to our current line of development w.r.t. to
> filesystem error handling.
Fair enough. If there are plans to have a failure policy rather than
GFP_NOFAIL-like behavior then I have, of course, no objections. Quite
the opposite: this is exactly what I would like to see. GFP_NOFAIL
should be rarely used, really.
The whole point of this discussion, and I am sorry if I didn't make it
clear, is that _if_ there really is a GFP_NOFAIL requirement hidden
from the allocator then it should be changed to use GFP_NOFAIL so that
the allocator knows about this requirement.
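To make the contrast concrete, here is a made-up sketch (not from any
particular call site; ptr and size are just local placeholders, and the
flag is spelled __GFP_NOFAIL in the allocator API):

	/*
	 * Hidden no-fail requirement - the allocator has no idea the
	 * caller cannot cope with failure, so it can neither account
	 * for it nor grant access to reserves.
	 */
	do {
		ptr = kmalloc(size, GFP_NOFS);
	} while (!ptr);

	/*
	 * Explicit variant - the requirement is visible to the
	 * allocator (and greppable for the rest of us).
	 */
	ptr = kmalloc(size, GFP_NOFS | __GFP_NOFAIL);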
> > The reason I care about GFP_NOFAIL is that there are apparently code
> > paths which do not tell the allocator they are basically GFP_NOFAIL
> > without any fallback. This leads to two main problems: 1) we do not
> > have a good overview of how many code paths have such strong
> > requirements and so cannot estimate e.g. how big memory reserves
> > should be and
>
> Right, when GFP_NOFAIL got deprecated we lost the ability to document
> such behaviour and find it easily. People just put retry loops in
> instead of using GFP_NOFAIL. Good luck finding them all :/
That will be a PITA, all right, but I guess the deprecation was a
mistake and we should reverse that trend.
> > 2) the allocator
> > cannot help those paths (e.g. by giving them access to reserves to
> > break out of the livelock).
>
> Allocator should not help. Global reserves are unreliable - make the
> allocation context reserve the amount it needs before it enters the
> context where it can't back out....
Sure, pre-allocation is preferable. But once somebody asks for
GFP_NOFAIL it is already too late and the allocator has only memory
reclaim and, potentially, the reserves to fall back on.
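To illustrate what I mean by pre-allocation, a minimal made-up sketch
(struct foo, insert_foo and tree_lock are just placeholder names):

	struct foo *new;

	/* Allocate while we can still fail and back out cleanly. */
	new = kmalloc(sizeof(*new), GFP_KERNEL);
	if (!new)
		return -ENOMEM;

	spin_lock(&tree_lock);
	/*
	 * No allocations in here - we cannot back out anymore, so we
	 * only consume what was reserved up front.
	 */
	insert_foo(new);
	spin_unlock(&tree_lock);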
[...]
--
Michal Hocko
SUSE Labs