| To: | "David S. Miller" <davem@xxxxxxxxxx> |
|---|---|
| Subject: | Re: System crash in tcp_fragment() |
| From: | Andi Kleen <ak@xxxxxxx> |
| Date: | Tue, 21 May 2002 11:49:22 +0200 |
| Cc: | george@xxxxxxxxxx, niv@xxxxxxxxxx, kuznet@xxxxxxxxxxxxx, ak@xxxxxxx, netdev@xxxxxxxxxxx, linux-net@xxxxxxxxxxxxxxx, ak@xxxxxx, pekkas@xxxxxxxxxx |
| In-reply-to: | <20020520.230021.29510217.davem@xxxxxxxxxx> |
| References: | <Pine.LNX.4.33.0205201836160.9301-100000@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx> <3CE9E466.AC2358EE@xxxxxxxxxx> <20020520.230021.29510217.davem@xxxxxxxxxx> |
| Sender: | owner-netdev@xxxxxxxxxxx |
| User-agent: | Mutt/1.3.22.1i |
> That's not the problem. We use per-cpu values for each counter (and
> when the user asks for the value, we add together the values from
> each processor).

At least on x86 gcc usually seems to just generate an incl, which should
be ok because it is atomic enough (even when a reschedule happens it
will act as a full memory barrier). So it'll likely just be a problem
for load-store architectures.

-Andi