
Re: XFS status update for May 2012

To: Andreas Dilger <adilger@xxxxxxxxx>
Subject: Re: XFS status update for May 2012
From: Eric Sandeen <sandeen@xxxxxxxxxxx>
Date: Mon, 18 Jun 2012 16:11:52 -0500
Cc: Christoph Hellwig <hch@xxxxxxxxxxxxx>, "linux-fsdevel@xxxxxxxxxxxxxxx Devel" <linux-fsdevel@xxxxxxxxxxxxxxx>, xfs@xxxxxxxxxxx
In-reply-to: <AD997E9D-2C1E-4EE4-80D7-2A5C998B6E9E@xxxxxxxxx>
References: <20120618120853.GA15480@xxxxxxxxxxxxx> <AD997E9D-2C1E-4EE4-80D7-2A5C998B6E9E@xxxxxxxxx>
User-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:12.0) Gecko/20120428 Thunderbird/12.0.1
On 6/18/12 1:25 PM, Andreas Dilger wrote:
> On 2012-06-18, at 6:08 AM, Christoph Hellwig wrote:
>> May saw the release of Linux 3.4, including a decent sized XFS update.
>> Remarkable XFS features in Linux 3.4 include moving over all metadata
>> updates to use transactions, the addition of a work queue for the
>> low-level allocator code to avoid stack overflows due to extreme stack
>> use in the Linux VM/VFS call chain,
> 
> This is essentially a workaround for too-small stacks in the kernel,
> which we've had to do at times as well, by doing work in a separate
> thread (with a new stack) and waiting for the results.  This is a
> generic problem that any reasonably complex filesystem will hit when
> running under memory pressure on a complex storage stack (e.g. LVM +
> iSCSI), but it causes unnecessary context switching.
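(For illustration only - a minimal sketch of the punt-to-workqueue pattern
being described here: hand the deep call chain to a kworker, which starts
on a nearly empty stack, and have the original thread sleep on a completion
until it finishes.  All names below are invented for the example; the
actual XFS allocation workqueue code uses its own argument structure.)

/*
 * Sketch: defer stack-hungry work to a workqueue and wait for the result.
 */
#include <linux/kernel.h>
#include <linux/workqueue.h>
#include <linux/completion.h>

struct punt_args {
	struct work_struct	work;
	struct completion	done;
	int			result;
	/* ... parameters for the real operation would go here ... */
};

static void punt_worker(struct work_struct *work)
{
	struct punt_args *args = container_of(work, struct punt_args, work);

	/* Runs on a kworker thread, i.e. on a fresh stack. */
	args->result = 0;	/* the deep, stack-hungry work happens here */
	complete(&args->done);
}

static int punt_to_worker(struct workqueue_struct *wq)
{
	struct punt_args args;

	INIT_WORK_ONSTACK(&args.work, punt_worker);
	init_completion(&args.done);

	queue_work(wq, &args.work);		/* hand off to the new stack */
	wait_for_completion(&args.done);	/* caller just sleeps meanwhile */
	destroy_work_on_stack(&args.work);

	return args.result;
}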
> 
> Any thoughts on a better way to handle this, or will there continue
> to be a 4kB stack limit and hack around this with repeated kmalloc

Well, 8k on x86_64 (not 4k), right?  But still...

Maybe it's still a partial hack, but it's more generic - should we have
IRQ stacks like x86 has?  (I think I'm right that those only exist on
32-bit x86.)  Is there any downside to that?

We could still get into trouble, I'm sure, but usually we seem to see
these stack overflows when we take an IRQ while we're already fairly
deep in the stack.

-Eric

> on callpaths for any struct over a few tens of bytes, implementing
> memory pools all over the place, and "forking" over to other threads
> to continue the stack consumption for another 4kB to work around
> the small stack limit?
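(Again purely illustrative - the sort of mempool-backed allocation being
described above as the alternative to large on-stack structures.  The
struct, cache, and pool names are invented for the sketch.)

#include <linux/init.h>
#include <linux/slab.h>
#include <linux/mempool.h>

struct big_args {
	char payload[512];	/* stands in for a large argument block */
};

static struct kmem_cache *big_args_cache;
static mempool_t *big_args_pool;

static int __init big_args_pool_init(void)
{
	big_args_cache = KMEM_CACHE(big_args, 0);
	if (!big_args_cache)
		return -ENOMEM;
	/* Reserve a few objects so allocation can't fail under memory pressure. */
	big_args_pool = mempool_create_slab_pool(4, big_args_cache);
	if (!big_args_pool) {
		kmem_cache_destroy(big_args_cache);
		return -ENOMEM;
	}
	return 0;
}

static int do_operation(void)
{
	/* Instead of "struct big_args args;" consuming ~512 bytes of stack: */
	struct big_args *args = mempool_alloc(big_args_pool, GFP_NOFS);
	int error = 0;

	/* ... do the real work with *args ... */

	mempool_free(args, big_args_pool);
	return error;
}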
> 
> Cheers, Andreas
> 
> _______________________________________________
> xfs mailing list
> xfs@xxxxxxxxxxx
> http://oss.sgi.com/mailman/listinfo/xfs
> 
