
Re: [PATCH 3/3] xfs: allocate xfs_da_args to reduce stack footprint

To: Dave Chinner <david@xxxxxxxxxxxxx>
Subject: Re: [PATCH 3/3] xfs: allocate xfs_da_args to reduce stack footprint
From: Christoph Hellwig <hch@xxxxxxxxxxxxx>
Date: Thu, 20 Feb 2014 06:56:01 -0800
Cc: xfs@xxxxxxxxxxx
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <1392783402-4726-4-git-send-email-david@xxxxxxxxxxxxx>
References: <1392783402-4726-1-git-send-email-david@xxxxxxxxxxxxx> <1392783402-4726-4-git-send-email-david@xxxxxxxxxxxxx>
User-agent: Mutt/1.5.21 (2010-09-15)
On Wed, Feb 19, 2014 at 03:16:42PM +1100, Dave Chinner wrote:
> From: Dave Chinner <dchinner@xxxxxxxxxx>
> 
> The struct xfs_da_args used to pass directory/attribute operation
> information to the lower layers is 128 bytes in size and is
> allocated on the stack. Dynamically allocate them to reduce the
> stack footprint of directory operations.

Are we having stack space problems in the directory code as well,
without all the VM code above it?  I'm definitely a bit scared about
adding another memory allocation to every single directory operation.

> +     args = kmem_zalloc(sizeof(*args), KM_SLEEP | KM_NOFS);
> +     if (!args)
> +             return ENOMEM;

KM_SLEEP is the default when KM_NOFS is set.  Also, a KM_SLEEP allocation
will never return a NULL pointer, so either remove the NULL check or make
it a KM_MAYFAIL allocation.

> +     /*
> +      * If we don't use KM_NOFS here, lockdep will through false positive
> +      * deadlock warnings when we come through here of the non-transactional
> +      * lookup path because the allocation can recurse into inode reclaim.
> +      * Doing this avoids having to add a bunch of lockdep class
> +      * annotations into the reclaim patch for the ilock.
> +      */
> +     args = kmem_zalloc(sizeof(*args), KM_SLEEP | KM_NOFS);

I don't understand that comment.  We do use KM_NOFS here, unlike what the
comment claims, and as far as I can tell the comment explains why we
actually need KM_NOFS.
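
If the intent is just to document why KM_NOFS is used, maybe something
along these lines (only a suggested rewording of the existing comment, not
verified against the actual lockdep splat):

	/*
	 * Use KM_NOFS so the allocation cannot recurse into inode reclaim
	 * from the non-transactional lookup path; without it lockdep
	 * throws false positive deadlock warnings on the ilock, and
	 * silencing those would need a bunch of lockdep class annotations
	 * in the reclaim path.
	 */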
