On Wed, Mar 30, 2005 at 06:02:55PM +0200, Andi Kleen wrote:
> On Wed, Mar 30, 2005 at 05:44:18PM +0200, Andrea Arcangeli wrote:
> > On Wed, Mar 30, 2005 at 05:39:48PM +0200, Andi Kleen wrote:
> > > An unsolvable one IMHO. You can just try to be good enough. For that
> > I think it's solvable with an algorithm I outlined several emails ago.
> The problem with your algorithm is that you cannot control
> how the NIC puts incoming packets into RX rings (and, for that
> matter, whether the packets you are interested in actually arrive
> from the net at all ;-)
All I care about is to assign a mempool ID to the skb (the ID being a
unique identifier for the TCP connection; I don't care how it is
implemented). If, while moving up the stack, the skb's ID doesn't match
the sock->mempool ID, we just free the packet and put it back into its
originating mempool.

This of course only triggers for skbs marked with a mempool ID; all
skbs allocated with GFP_ATOMIC will have a null ID, they won't check
anything, and nothing will change for them.
After GFP fails you pick an skb from a random mempool every time, so you
need all the mempools belonging to sockets that route somehow through a
certain NIC driver instance to be quickly reachable from the NIC device.
I don't see any problem with this algorithm. I don't need to control how
the NIC processes the incoming packets: after GFP fails I allocate from
a random mempool, I set the skb's mempool ID to the ID of the mempool we
picked from, and I let the stack process it. Then you need a check, as
soon as you have finished processing the TCP header, to release the skb
back into its originating mempool immediately if the sock's mempool ID
doesn't match the skb's mempool ID, but that's easy.
All that matters is that this skb can't get stuck in the middle of
nowhere in an unfreeable state, but I don't see how it could get stuck
between netif_rx and the sock identification via the TCP and IP headers.
It just can't get stuck: either it's freed prematurely, or it's freed by
us with the new mempool ID check. It could get stuck if we let it go
ahead into some out-of-order queues, but not before our new check for
the mempool ID after TCP header decode.
This is all going to be complex to code, but I think it's technically
feasible.
> While some NICs have hardware support to get high priority
> packets into different queues these tend to add nasty limits
> on the max number of connections. Which IMHO is not acceptable.
> "We have an enterprise class OS with iSCSI which can only
> support four swap devices"
;) I agree the hardware solution isn't appealing.