> > > Any comments regarding the following patch?
> I think it will make any use of a "raw" dev->hard_start_xmit impossible.
> Which is what pktgen uses.
> > > I understand why it is valid, etc., but why do we even want to do
> > > this? It is not like this dead-loop detection stuff is a hot-path or
> > > anything like that.
> > I've implemented a prototype that uses per-CPU kernel threads for
> > processing packets coming in from a single interface. The idea is to
> > apply multiple CPUs to a single network interface to be able to have
> > multiple CPUs simultaneously pumping data into the network. So in my
> > case, I have lots of cpu_collisions, and running the tx softirq only
> > to do nothing may hurt performance. Anyway, even though my patch may
> > help me, it may indeed be irrelevant to the stock kernel.
> Sounds like a project at least having packet reordering and cache bouncing
> in mind.
Let me explain a bit more.
I developed a kernel module that basically implements per-cpu kernel
threads, each being bound to a particular cpu. I also modified the
Myrinet NIC driver and firmware so that they implement per-cpu rx rings.
The NIC makes sure that packets of the same connection are always
deposited in the same ring. Here's how it does it: for each incoming
packet, the NIC computes the index of the ring into which the packet
must be placed [*], passes this index to the driver, and DMAs the
packet into the appropriate ring. The driver uses the ring index to
wake up the appropriate kernel thread. Each kernel thread behaves in a
NAPI manner.