| To: | Andi Kleen <ak@xxxxxxx> |
|---|---|
| Subject: | Re: XFS, 4K stacks, and Red Hat |
| From: | Steve Lord <lord@xxxxxxx> |
| Date: | Tue, 12 Jul 2005 22:16:16 -0500 |
| Cc: | Nathan Scott <nathans@xxxxxxx>, linux-xfs@xxxxxxxxxxx, axboe@xxxxxxx |
| In-reply-to: | <p73eka31mkv.fsf@xxxxxxxxxxxxx> |
| References: | <Pine.LNX.4.58.0507071102460.4766@xxxxxxxxxxxxxxxxxx> <42CD4D38.1090703@xxxxxxx> <Pine.LNX.4.58.0507071142550.4766@xxxxxxxxxxxxxxxxxx> <20050708043740.GB1679@frodo> <42D3F44B.308@xxxxxxxxxxxxxxxxxxxx> <20050713015626.GD980@frodo> <p73eka31mkv.fsf@xxxxxxxxxxxxx> |
| Sender: | linux-xfs-bounce@xxxxxxxxxxx |
| User-agent: | Mozilla Thunderbird 1.0.2-1.3.3 (X11/20050513) |
Andi Kleen wrote:

> Eventually even 8k stack systems might run into problems. A generic
> way to solve this would be to let the block layer, which calls into
> the various stacking layers, check how much stack is left first, and
> when it is too low, push the work to another thread using a workqueue.
>
> Jens, do you think that would be feasible?
>
> -Andi

Quick, before Adrian Bunk gets his patch to completely kill 8K stacks
into Linus's tree!

In a previous life I actually had to resort to allocating a chunk of
memory, linking it into the stack, then carrying on down the call chain
(not on Linux). The memory was freed on the way back up the stack. I am
not saying that would be a viable solution, but something needs to be
done about stack overflow and nested subsystems before someone tries
iSCSI over IPv6 or some other bizarre combo.

Steve