On Mon, Jul 18, 2011 at 01:49:47PM +1000, Dave Chinner wrote:
> From: Dave Chinner <dchinner@xxxxxxxxxx>
>
> We currently have significant issues with the amount of stack that
> allocation in XFS uses, especially in the writeback path. We can
> easily consume 4k of stack between mapping the page, manipulating
> the bmap btree and allocating blocks from the free list. Not to
> mention btree block readahead and other functionality that issues IO
> in the allocation path.
>
> As a result, we can no longer fit allocation in the writeback path
> in the stack space provided on x86_64. To alleviate this problem,
> introduce an allocation workqueue and move all allocations to a
> separate context. This can be easily added as an interposing layer
> into xfs_alloc_vextent(), which takes a single argument structure
> and does not return until the allocation is complete or has failed.
I've mentioned before that I really don't like it, but I suspect there's
not much of a way around it given the small stacks, and the significant
amount of stack that's already used above and below XFS.
Can we at least have a sysctl knob or mount option to switch back to
direct allocator calls, so that we can still debug any performance
or other issues with this one?