> Alexey pointed me to your extension a few months back. What kind of
> results do you get ? Mind sharing them?
Not at all!
We did a lot of experimenting with this about two years ago. New ideas came
every day, most of them from Russia. :-)
From the top of my head I can give you some performance numbers for Linux
routing between two tulip NICs with smallest (64-byte) packets.
(PII ~400 MHz, ~100 MHz bus, genuine fast ethernet tulip chips)
Normal path:           ~40 KPPS
Fast switching patch: ~147 KPPS
In the fast switching path HW_FLOWCONTROL is active (it shouldn't have much
to do here), and skb recycling is used as well, which seems to work well
together with fast switching.
Expect multiport boards to have slightly less performance. PCI bridge?
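For context, ~147 KPPS is very close to the theoretical wire rate for
minimum-size frames on fast ethernet. A back-of-the-envelope check (not from
the measurements above, just standard Ethernet framing overhead):

```python
# Theoretical max packet rate for 64-byte frames on 100 Mbps Ethernet.
# Each frame costs 64 B on the wire plus 8 B preamble/SFD and a 12 B
# inter-frame gap, i.e. 84 B = 672 bits per packet.
LINK_BPS = 100_000_000          # fast ethernet link speed
FRAME_BYTES = 64                # minimum Ethernet frame
OVERHEAD_BYTES = 8 + 12         # preamble/SFD + inter-frame gap

bits_per_packet = (FRAME_BYTES + OVERHEAD_BYTES) * 8
max_pps = LINK_BPS / bits_per_packet

print(round(max_pps))  # -> 148810
```

So the fast switching patch is essentially saturating the link with
minimum-size packets.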
As a comparison I can give a real-life example from last week, when a popular
package was released. Two of our Linux routers, which connect one of the major
archives via load sharing to our university ISP, were filling 155 Mbps on
their own. These routers run the normal path, plus some ipchains filters and
full BGP (~75000 routes). CPU is a PII 350 MHz, with HW_FLOWCONTROL.
The ISP Cisco said:
Output queue 0/40, 0 drops; input queue 0/75, 682 drops
5 minute input rate 6268000 bits/sec, 9094 packets/sec
5 minute output rate 149098000 bits/sec, 16210 packets/sec
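A quick sanity check on those counters: dividing bits/sec by packets/sec gives
the average packet size in each direction (this is just arithmetic on the
Cisco figures above, not additional measurement):

```python
# Average packet size implied by the 5-minute Cisco counters.
def avg_packet_bytes(bits_per_sec, packets_per_sec):
    """bits/sec divided by packets/sec gives bits per packet; /8 for bytes."""
    return bits_per_sec / packets_per_sec / 8

print(round(avg_packet_bytes(6_268_000, 9_094)))      # input:  ~86 bytes
print(round(avg_packet_bytes(149_098_000, 16_210)))   # output: ~1150 bytes
```

Which matches what you'd expect for a busy download mirror: near full-size
frames going out, mostly small ACKs coming back.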
The two Linux routers' CPU load was:
R1 ~20%  (75 Mbps out)
R2 ~45%  (75 Mbps out, plus the incoming traffic)
You seem to have the equipment to verify the results above and run your own
experiments.
Still impressive, but with a load around ~45-50% I have to start worrying. :-)