
RE: Intel and TOE in the news

To: "'Jeff Garzik'" <jgarzik@xxxxxxxxx>, "'Andi Kleen'" <ak@xxxxxx>, "'Leonid Grossman'" <leonid.grossman@xxxxxxxxxxxx>, "'rick jones'" <rick.jones2@xxxxxx>, <netdev@xxxxxxxxxxx>
Subject: RE: Intel and TOE in the news
From: "Alex Aizman" <alex@xxxxxxxxxxxx>
Date: Mon, 21 Feb 2005 11:34:08 -0800
Importance: Normal
In-reply-to: <42194972.8060303@xxxxxxxxx>
Sender: netdev-bounce@xxxxxxxxxxx
> > Alternative: wait until Xframe II adapter w/MSI-X..
> 
> How does that help with MSI?

It does not help with MSI; it helps to scale.

Btw, there's one alternative to the MSI/MSI-X idea that in theory should
help to scale with the number of CPUs. In a month or so I might get time
to try it out.

> The infiniband Linux driver is already using multi-MSI.  You 
> are behind the times :)

That's just great. Where, and in which kernel?

2.6.11-rc4 MSI-HOWTO still says "Due to the non-contiguous fashion in
vector assignment of the existing Linux kernel, this version does not
support multiple messages regardless of a device function is capable of
supporting more than one vector." 

The 2.6.11-rc4 MTHCA driver still calls request_irq() just once for MSI
(note: MSI, not MSI-X).
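
For illustration, here's a rough sketch of the multi-vector case - one
request_irq() per MSI-X vector instead of a single one for the whole
device. Names like xf_setup_msix() and XF_NUM_RX_QUEUES are made up,
and I'm assuming the 2.6.11-era pci_enable_msix()/request_irq()
prototypes:

#include <linux/pci.h>
#include <linux/interrupt.h>

#define XF_NUM_RX_QUEUES 4      /* hypothetical: one vector per RX queue */

struct xf_adapter {
        struct pci_dev *pdev;
        struct msix_entry msix[XF_NUM_RX_QUEUES];
};

/* 2.6.11-era handler prototype (still takes struct pt_regs *) */
static irqreturn_t xf_rx_intr(int irq, void *dev_id, struct pt_regs *regs)
{
        /* dev_id identifies the adapter; schedule the queue's poll here */
        return IRQ_HANDLED;
}

static int xf_setup_msix(struct xf_adapter *xf)
{
        int i, err;

        for (i = 0; i < XF_NUM_RX_QUEUES; i++)
                xf->msix[i].entry = i;  /* ask for vectors 0..N-1 */

        err = pci_enable_msix(xf->pdev, xf->msix, XF_NUM_RX_QUEUES);
        if (err)
                return err;     /* > 0 means fewer vectors were available */

        /* one request_irq() per vector, so each queue can be bound to a CPU */
        for (i = 0; i < XF_NUM_RX_QUEUES; i++) {
                err = request_irq(xf->msix[i].vector, xf_rx_intr, 0,
                                  "xframe-rx", xf);
                if (err)
                        goto undo;
        }
        return 0;

undo:
        while (--i >= 0)
                free_irq(xf->msix[i].vector, xf);
        pci_disable_msix(xf->pdev);
        return err;
}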

> Despite Andi's assertion that there's value in amortizing the 
> locks, the benefits are highly missing on a generic level, 
> unfortunately. Locking overhead is like the 50th item on the list 
> of things you have to worry about
> - so I wouldn't even start worrying about this. 

That's probably true. Not the 50th item, more like the 5th, but still.

>  Yes, when the queue length/batch increases, you risk loading the L2 
>  twice for the same skb. Which is the most expensive operation.... 
>  Forwarding profiles show the functions where most cache misses occur.

I wonder whether alloc_skb_from_cache() would help relieve memory
pressure at multi-Gbps receive rates.
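
If it does, I'd expect something along these lines in the RX refill
path. A sketch only: xf_rx_cache, XF_RX_BUF_SIZE and xf_alloc_rx_skb()
are made-up names, and I'm assuming the
alloc_skb_from_cache(kmem_cache_t *, unsigned int, int) prototype, with
each slab object leaving room for skb_shared_info:

#include <linux/skbuff.h>
#include <linux/slab.h>
#include <linux/errno.h>

#define XF_RX_BUF_SIZE 2048     /* hypothetical per-buffer size */

static kmem_cache_t *xf_rx_cache;

static int xf_rx_cache_init(void)
{
        /* each object must also hold the skb_shared_info that lives
         * at the end of the skb data area */
        xf_rx_cache = kmem_cache_create("xf_rx_buf",
                                        SKB_DATA_ALIGN(XF_RX_BUF_SIZE) +
                                        sizeof(struct skb_shared_info),
                                        0, SLAB_HWCACHE_ALIGN, NULL, NULL);
        return xf_rx_cache ? 0 : -ENOMEM;
}

static struct sk_buff *xf_alloc_rx_skb(void)
{
        struct sk_buff *skb;

        /* skb head from the usual cache, data area from xf_rx_cache */
        skb = alloc_skb_from_cache(xf_rx_cache, XF_RX_BUF_SIZE, GFP_ATOMIC);
        if (skb)
                skb_reserve(skb, 2);    /* align the IP header */
        return skb;
}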

Alex

