To: Dave Hansen <haveblue@xxxxxxxxxx>, jamal <hadi@xxxxxxxxxx>, netdev@xxxxxxxxxxx, linux-kernel@xxxxxxxxxxxxxxx
Subject: Re: Early SPECWeb99 results on 2.5.33 with TSO on e1000
From: Manfred Spraul <manfred@xxxxxxxxxxxxxxxx>
Date: Fri, 06 Sep 2002 20:35:08 +0200
Sender: netdev-bounce@xxxxxxxxxxx
User-agent: Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 4.0)
> The real question is why NAPI causes so much more work for the client.
[Just a summary of my results from last year. All testing was done with a simple NIC without hardware interrupt mitigation, on a Cyrix P150.]


My assumption was that NAPI increases the cost of receiving a single packet: instead of one hardware interrupt with one device access (to ack the interrupt) plus the softirq processing, the hardware interrupt handler must ack and disable the interrupt, the actual packet processing happens in softirq context, and the interrupt is re-enabled from softirq context once the ring is drained.
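A minimal sketch of that interrupt/poll split under the 2.5 NAPI interface; all mydev_* names, the register layout, and the ring helpers are hypothetical, not taken from any real driver:

	/* Hypothetical NAPI rx path: the irq handler only acks and masks
	 * the interrupt, all packet work happens in dev->poll(), and the
	 * interrupt is unmasked again once the ring is drained. */
	static void mydev_interrupt(int irq, void *dev_id, struct pt_regs *regs)
	{
		struct net_device *dev = dev_id;
		struct mydev_priv *priv = dev->priv;

		/* reading the status register acks the interrupt (assumed) */
		if (!(readl(priv->regs + MYDEV_INTR_STATUS) & MYDEV_INTR_RX))
			return;

		if (netif_rx_schedule_prep(dev)) {
			writel(0, priv->regs + MYDEV_INTR_MASK);	/* mask rx irq */
			__netif_rx_schedule(dev);			/* poll from softirq */
		}
	}

	static int mydev_poll(struct net_device *dev, int *budget)
	{
		struct mydev_priv *priv = dev->priv;
		int limit = min(*budget, dev->quota);
		int done = 0;

		while (done < limit && mydev_rx_pending(priv)) {
			netif_receive_skb(mydev_next_skb(priv));
			done++;
		}
		*budget -= done;
		dev->quota -= done;

		if (mydev_rx_pending(priv))
			return 1;			/* stay on the poll list */

		netif_rx_complete(dev);			/* back to interrupt mode */
		writel(MYDEV_INTR_RX, priv->regs + MYDEV_INTR_MASK);
		return 0;
	}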

The second point was that hardware interrupt mitigation must remain enabled, even with NAPI: the automatic mitigation doesn't work for loads limited by process-context processing (e.g. TCP: the backlog queue is drained quickly, but the system is busy processing the prequeue or receive queue, so interrupts are re-enabled and fire for nearly every packet anyway).

jamal, would it be possible for a driver to use both NAPI and the normal interface, or would that break fairness?
Use netif_rx() until it returns NET_RX_DROP. If that happens, disable the interrupt and call netif_rx_schedule().
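A sketch of that hybrid scheme (hypothetical mydev_* names again): the driver stays in classic interrupt mode while the backlog keeps up and only drops into polled mode when netif_rx() signals congestion.

	/* Hybrid rx: feed packets through netif_rx() from the irq handler;
	 * once the backlog overflows (NET_RX_DROP), mask the interrupt and
	 * switch to NAPI polling until mydev_poll() re-enables it. */
	static void mydev_interrupt(int irq, void *dev_id, struct pt_regs *regs)
	{
		struct net_device *dev = dev_id;
		struct mydev_priv *priv = dev->priv;
		struct sk_buff *skb;

		while ((skb = mydev_next_skb(priv)) != NULL) {
			if (netif_rx(skb) != NET_RX_DROP)
				continue;
			/* backlog congested: fall back to polled mode
			 * (netif_rx() already freed the dropped skb) */
			if (netif_rx_schedule_prep(dev)) {
				writel(0, priv->regs + MYDEV_INTR_MASK);
				__netif_rx_schedule(dev);
			}
			break;
		}
	}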


Is it possible to determine the average number of packets that are processed for each netif_rx_schedule()?
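For what it's worth, a crude way to measure that inside a driver (hypothetical counter and function names; a sketch, not a patch):

	struct mydev_priv {
		/* ...existing fields... */
		unsigned long rx_schedules;	 /* one per netif_rx_schedule() */
		unsigned long rx_packets_polled; /* sum of packets handled in poll */
	};

	/* Bump priv->rx_schedules next to __netif_rx_schedule(dev) in the
	 * irq handler, add "priv->rx_packets_polled += done;" in
	 * mydev_poll(), then dump the ratio now and then: */
	static void mydev_dump_rx_stats(struct mydev_priv *priv)
	{
		if (priv->rx_schedules)
			printk(KERN_DEBUG "mydev: %lu packets / %lu schedules\n",
			       priv->rx_packets_polled, priv->rx_schedules);
	}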

--
        Manfred

