
RE: Route cache performance under stress

To: Jamal Hadi <hadi@xxxxxxxxxxxxxxxx>
Subject: RE: Route cache performance under stress
From: Ralph Doncaster <ralph@xxxxxxxxx>
Date: Mon, 9 Jun 2003 20:32:48 -0400 (EDT)
Cc: CIT/Paul <xerox@xxxxxxxxxx>, "'Simon Kirby'" <sim@xxxxxxxxxxxxx>, "'David S. Miller'" <davem@xxxxxxxxxx>, "fw@xxxxxxxxxxxxx" <fw@xxxxxxxxxxxxx>, "netdev@xxxxxxxxxxx" <netdev@xxxxxxxxxxx>, "linux-net@xxxxxxxxxxxxxxx" <linux-net@xxxxxxxxxxxxxxx>
In-reply-to: <20030609195652.E35696@shell.cyberus.ca>
References: <008001c32eda$56760830$4a00000a@badass> <20030609195652.E35696@shell.cyberus.ca>
Reply-to: ralph+d@xxxxxxxxx
Sender: netdev-bounce@xxxxxxxxxxx
On Mon, 9 Jun 2003, Jamal Hadi wrote:

> On Mon, 9 Jun 2003, CIT/Paul wrote:
>
> > NAPI despises SMP.. Any SMP box we run NAPI on has major packet loss
> > under high load.. So I find that the e1000 ITR works just as well
> > And there is no reason for NAPI at this point.
> >
>
> Foo, you on cheap crack again?
> Please just try the tests as described if you want to help. It doesnt help
> anyone when you wildly wave your hands like that.

Speaking from personal experience, after trying numerous things for over a year one
can get very frustrated.  Although your contribution has been useful, you
are also guilty of wildly waving your hands around.  Many moons ago,
when I lamented that the performance of my 2.2.19-kernel, 750MHz Duron, 3c59x
core router was poor, you told me NAPI would solve the performance problems.
It didn't.  And Rob's latest numbers seem to show that even with the
latest and greatest patches, 148kpps is still a dream.  It's good to see
that people are finally doing tests that simulate real-world routing
(instead of pretending the problem doesn't exist because they were
able to hit 148kpps in some contrived test).

Here are the CPU graphs for the box; it's only doing routing, and firewalling
isn't even built into the kernel (2.4.20 with the 3c59x NAPI patches):
http://66.11.168.198/mrtg/tbgp/tbgp_usrsys.html

eth1 and eth2 are each sending and receiving ~30Mbps of traffic (at
8-10kpps in and out on each interface).
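For context, those two numbers together imply mid-sized packets, well away from the 64-byte worst case that wire-rate 148kpps tests assume. A quick back-of-the-envelope check (my arithmetic, not a figure from the thread):

```python
def avg_packet_bytes(mbps, kpps):
    """Average packet size in bytes for a given throughput and packet rate."""
    return (mbps * 1_000_000 / 8) / (kpps * 1_000)

# ~30 Mbps at 8-10 kpps works out to roughly 375-470 bytes per packet.
for kpps in (8, 10):
    print(f"{kpps} kpps -> {avg_packet_bytes(30, kpps):.0f} bytes/packet")
```

So per-packet overhead here is far lighter than in a minimum-sized-packet stress test, which is part of why the contrived 148kpps numbers don't transfer to this workload.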

The other variable I haven't seen people discuss, but which I have anecdotal
evidence measurably impacts performance, is the motherboard used
(chipset and chipset configuration/timing).

Lastly, on the software side, Linux doesn't seem to have anything like
BSD's parameter for controlling the user/system CPU split.  Once my CPU load
reaches 70-80%, I'd rather drop some packets than let the CPU hit
100% and have my BGP sessions drop.
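There's no single user/system share knob, but a couple of /proc tunables do bound how much work the receive softirq does per pass, which indirectly leaves CPU for user space. A hedged sketch (the values are illustrative, and the knob names are what I believe the 2.4 NAPI-era kernels expose; verify on your kernel):

```shell
# Lower the per-device packet budget per NAPI poll round;
# a smaller dev_weight makes the softirq yield the CPU sooner.
echo 16 > /proc/sys/net/core/dev_weight

# Shrink the input backlog queue so excess packets are dropped
# early instead of piling up work for an already-saturated CPU.
echo 100 > /proc/sys/net/core/netdev_max_backlog
```

That trades earlier packet drops under overload for headroom to keep the BGP daemon scheduled, which is roughly the behavior described above.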

-Ralph
