
Re: [PATCH] xfs: reduce stack usage in xfs_bmap_btalloc()

To: David Chinner <dgc@xxxxxxx>
Subject: Re: [PATCH] xfs: reduce stack usage in xfs_bmap_btalloc()
From: David Chinner <dgc@xxxxxxx>
Date: Mon, 28 Apr 2008 13:32:49 +1000
Cc: Denys Vlasenko <vda.linux@xxxxxxxxxxxxxx>, xfs@xxxxxxxxxxx, Eric Sandeen <sandeen@xxxxxxxxxxx>, Adrian Bunk <bunk@xxxxxxxxxx>, linux-kernel@xxxxxxxxxxxxxxx
In-reply-to: <20080427234056.GA108924158@xxxxxxx>
References: <200804261651.02078.vda.linux@xxxxxxxxxxxxxx> <20080427234056.GA108924158@xxxxxxx>
Sender: xfs-bounce@xxxxxxxxxxx
User-agent: Mutt/
On Mon, Apr 28, 2008 at 09:40:56AM +1000, David Chinner wrote:
> On Sat, Apr 26, 2008 at 04:51:02PM +0200, Denys Vlasenko wrote:
> > Hi David,
> > 
> > This patch reduces xfs_bmap_btalloc() stack usage by 50 bytes
> > by moving part of its body into a helper function.
> Can you please attach your patches inline, Denys (see
> Documentation/SubmittingPatches)?
> > This results in some variables not taking stack space in
> > xfs_bmap_btalloc() anymore.
> > 
> > The helper itself does not call anything stack-deep.
> > The stack-deep call to xfs_alloc_vextent() happens
> > in xfs_bmap_btalloc(), as before.
> I have a set of patches that introduces new functionality into the
> allocator (dynamic allocation policies) that reduces
> xfs_bmap_btalloc() function by 36 bytes (just by chance, I didn't
> design it for this purpose). It breaks it down on functional
> boundaries like Christoph's patch. I'm going to revisit that patch
> w.r.t both these patches and see what falls out the bottom...

44 bytes are saved in xfs_bmap_btalloc() with the same factoring as
Christoph's patch. Given that most of the remaining stack usage is
now the struct xfs_alloc_arg, I don't think this can be reduced a
whole lot more. I think I might be able to kill the tryagain and
isaligned variables as well, which would save another 8 bytes, but
I'll leave that for later....

Good progress, folks.


Dave Chinner
Principal Engineer
SGI Australian Software Group
