
Re: XFS, 4K stacks, and Red Hat

To: Steve Lord <lord@xxxxxxx>
Subject: Re: XFS, 4K stacks, and Red Hat
From: Andi Kleen <ak@xxxxxxx>
Date: Wed, 13 Jul 2005 06:10:41 +0200
Cc: Andi Kleen <ak@xxxxxxx>, Nathan Scott <nathans@xxxxxxx>, linux-xfs@xxxxxxxxxxx, axboe@xxxxxxx
In-reply-to: <42D48780.2030500@xfs.org>
References: <Pine.LNX.4.58.0507071102460.4766@chaos.egr.duke.edu> <42CD4D38.1090703@xfs.org> <Pine.LNX.4.58.0507071142550.4766@chaos.egr.duke.edu> <20050708043740.GB1679@frodo> <42D3F44B.308@strike.wu-wien.ac.at> <20050713015626.GD980@frodo> <p73eka31mkv.fsf@bragg.suse.de> <42D48780.2030500@xfs.org>
Sender: linux-xfs-bounce@xxxxxxxxxxx
> In a previous life I actually had to resort to allocating a chunk of
> memory, linking it into the stack, then carrying on down the call
> chain (not on linux). The memory was freed on the way up the stack
> again. I am not saying that would be a viable solution, but there needs
> to be something done about stack overflow and nested subsystems, before
> someone tries iSCSI over IPv6 or some other bizarre combo.
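
That trick is easy to sketch in userspace. The following is just an
illustration with ucontext(3), not the original non-Linux code; deep_work()
and the sizes are made up. The idea is to link a chunk of heap memory in as
the stack for the deep part of the call chain and free it again once control
comes back up:

#define _XOPEN_SOURCE 600
#include <stdio.h>
#include <stdlib.h>
#include <ucontext.h>

static ucontext_t caller_ctx, worker_ctx;

/* Stand-in for the deep part of the call chain that needs more stack. */
static void deep_work(void)
{
        char big[64 * 1024];            /* would be fatal on a 4K stack */

        big[sizeof(big) - 1] = 0;
        printf("running on the linked-in stack\n");
}

/*
 * Run fn on a freshly allocated chunk of memory used as its stack,
 * then free the chunk once fn has returned ("on the way up").
 */
static void call_on_new_stack(void (*fn)(void), size_t stack_size)
{
        void *chunk = malloc(stack_size);

        if (!chunk)
                return;

        getcontext(&worker_ctx);
        worker_ctx.uc_stack.ss_sp = chunk;
        worker_ctx.uc_stack.ss_size = stack_size;
        worker_ctx.uc_link = &caller_ctx;       /* resume here when fn ends */
        makecontext(&worker_ctx, fn, 0);

        swapcontext(&caller_ctx, &worker_ctx);  /* descend on the new stack */
        free(chunk);                            /* freed on the way back up */
}

int main(void)
{
        call_on_new_stack(deep_work, 256 * 1024);
        return 0;
}

In the kernel this would also need a non-sleeping way to get the memory in
the paths where blocking isn't allowed, which is part of the problem below.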

iSCSI over something else would be difficult again because that layering
is invisible to the block layer. Maybe the iSCSI block driver would
need to declare how much stack it needs, or do similar checks
by itself. At least at the network driver interface
the technique doesn't really work because blocking is not allowed
at that point, so it would need to happen at a higher level.
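
For the "check it by itself" variant, a rough userspace approximation of
such a headroom check could look like the sketch below (glibc's
pthread_getattr_np() is used only for illustration; in the kernel you would
compare the stack pointer against the thread_info at the bottom of the task
stack, and the reserve size here is a made-up number):

#define _GNU_SOURCE
#include <pthread.h>
#include <stddef.h>

/*
 * Rough estimate of how many bytes of stack remain below the current
 * frame, assuming the usual downward-growing stack.
 */
static size_t stack_headroom(void)
{
        pthread_attr_t attr;
        void *stack_base;
        size_t stack_size;
        char marker;            /* a local's address marks the current depth */

        pthread_getattr_np(pthread_self(), &attr);
        pthread_attr_getstack(&attr, &stack_base, &stack_size);
        pthread_attr_destroy(&attr);
        (void)stack_size;

        return (size_t)(&marker - (char *)stack_base);
}

/* Hypothetical use in a driver-like path before calling into a lower layer. */
#define LOWER_LAYER_RESERVE     (2 * 1024)

static int submit_lower(void (*lower)(void))
{
        if (stack_headroom() < LOWER_LAYER_RESERVE)
                return -1;      /* punt: defer to a helper thread instead */
        lower();
        return 0;
}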

BTW I doubt IPv6 uses much more stack than IPv4. But e.g. InfiniBand
is probably pretty bad when it sits underneath the block layer.

-Andi

