On Sat, Feb 28, 2015 at 05:15:58PM -0500, Johannes Weiner wrote:
> On Sat, Feb 28, 2015 at 11:41:58AM -0500, Theodore Ts'o wrote:
> > On Sat, Feb 28, 2015 at 11:29:43AM -0500, Johannes Weiner wrote:
> > >
> > > I'm trying to figure out if the current nofail allocators can get
> > > their memory needs figured out beforehand. And reliably so - what
> > > good are estimates that are right 90% of the time, when failing the
> > > allocation means corrupting user data? What is the contingency plan?
> > In the ideal world, we can figure out the exact memory needs
> > beforehand. But we live in an imperfect world, and given that block
> > devices *also* need memory, the answer is "of course not". We can't
> > be perfect. But we can at least give some kind of hint, and we can offer
> > to wait before we get into a situation where we need to loop in
> > GFP_NOWAIT --- which is the contingency/fallback plan.
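
To make that contingency concrete: the loop being talked about here is
essentially an open-coded retry of an atomic allocation. A minimal
illustrative sketch, not lifted from any existing filesystem:

#include <linux/slab.h>
#include <linux/backing-dev.h>

static void *alloc_retry_nowait(size_t size)
{
	void *buf;

	/*
	 * GFP_NOWAIT never blocks or enters direct reclaim, so on
	 * failure we back off briefly and retry.  This only makes
	 * forward progress if somebody else frees memory - which is
	 * exactly the lockup risk under discussion.
	 */
	do {
		buf = kmalloc(size, GFP_NOWAIT);
		if (!buf)
			congestion_wait(BLK_RW_ASYNC, HZ/50);
	} while (!buf);

	return buf;
}
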
> Overestimating should be fine; the result would be a bit of false memory
> pressure. But underestimating and looping can't be an option or the
> original lockups will still be there. We need to guarantee forward
> progress or the problem is somewhat mitigated at best - only now with
> quite a bit more complexity in the allocator and the filesystems.
The additional complexity in XFS is actually quite minor, and initial
"rough worst case" memory usage estimates are not that hard to come up
with.
> The block code would have to be looked at separately, but doesn't it
> already use mempools etc. to guarantee progress?
Yes, it does. I'm not concerned about the block layer.
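
For reference, the mempool model gives exactly the forward progress
guarantee being asked about: a minimum number of elements is reserved
at init time, and a sleeping allocation dips into that reserve and
then waits for an element to be freed rather than fail. A minimal
sketch using the stock mempool API:

#include <linux/mempool.h>
#include <linux/slab.h>

static mempool_t *pool;

static int pool_init(struct kmem_cache *cache)
{
	/* pre-allocate a reserve of 16 elements up front */
	pool = mempool_create_slab_pool(16, cache);
	return pool ? 0 : -ENOMEM;
}

static void *pool_get(void)
{
	/*
	 * Under memory pressure this falls back to the reserved
	 * elements; with a sleeping gfp mask it then waits for a
	 * mempool_free() instead of returning NULL.
	 */
	return mempool_alloc(pool, GFP_NOIO);
}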