Re: [PATCH] [RFT] 2.6.4 - epic100 napi

To: OGAWA Hirofumi <hirofumi@xxxxxxxxxxxxxxxxxx>
Subject: Re: [PATCH] [RFT] 2.6.4 - epic100 napi
From: Francois Romieu <romieu@xxxxxxxxxxxxx>
Date: Tue, 23 Mar 2004 19:51:19 +0100
Cc: Jeff Garzik <jgarzik@xxxxxxxxx>, netdev@xxxxxxxxxxx
In-reply-to: <>; from hirofumi@xxxxxxxxxxxxxxxxxx on Tue, Mar 23, 2004 at 11:29:49PM +0900
References: <> <> <>
Sender: netdev-bounce@xxxxxxxxxxx
User-agent: Mutt/
OGAWA Hirofumi <hirofumi@xxxxxxxxxxxxxxxxxx> :
> Umm.. the above code is part of ->poll(). I think xxx_interrupt() needs
> netif_running() instead. The driver must clear the __LINK_STATE_RX_SCHED
> flag...
> BTW, ->napi_lock is unneeded because netif_schedule() is already
> atomic; it needs only local_irq_enable/disable().
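
If I read the suggestion right, the interrupt handler would end up looking
something like this (untested sketch, irq acking and the rest of the handler
elided; no ->napi_lock, just the netif_running() check):

	static irqreturn_t epic_interrupt(int irq, void *dev_instance,
					  struct pt_regs *regs)
	{
		struct net_device *dev = dev_instance;
		struct epic_private *ep = dev->priv;
		long ioaddr = dev->base_addr;
		int status = inl(ioaddr + INTSTAT);

		/* only schedule polling while the interface is up */
		if ((status & EpicNapiEvent) && netif_running(dev) &&
		    netif_rx_schedule_prep(dev)) {
			epic_napi_irq_off(dev, ep);
			__netif_rx_schedule(dev);
		}
		[...]
	}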

Color me confused. The lock is supposed to protect against:

CPU1                         CPU2
epic_napi_irq_on(dev, ep);
                             [irq handler]
                             if (netif_rx_schedule_prep(dev)) {
                                     epic_napi_irq_off(dev, ep);

-> napi irqs are disabled and the device is removed from the poll list,
   so nothing will ever poll it again. What will prevent this?
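
What I want the lock to buy is that the end of ->poll() and the napi path
of the irq handler serialize on ->napi_lock, i.e. something like (untested
sketch):

	/* end of ->poll(), once the rx work is done: */
	spin_lock_irqsave(&ep->napi_lock, flags);
	epic_napi_irq_on(dev, ep);
	__netif_rx_complete(dev);
	spin_unlock_irqrestore(&ep->napi_lock, flags);

	/* irq handler, napi path: */
	spin_lock(&ep->napi_lock);
	if (netif_rx_schedule_prep(dev)) {
		epic_napi_irq_off(dev, ep);
		__netif_rx_schedule(dev);
	}
	spin_unlock(&ep->napi_lock);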

> After __netif_rx_complete() it must not do "goto rx_action", otherwise it
> may cause scheduling twice; it should move before spin_lock().

I understand the previous statement as:

+               status = inl(ioaddr + INTSTAT);
+               if (status & EpicNapiEvent) {
+                       epic_napi_irq_off(dev, ep);
+                       goto rx_action;
+               }
+               spin_lock_irqsave(&ep->napi_lock, flags);
+               epic_napi_irq_on(dev, ep);
+               __netif_rx_complete(dev);
+               spin_unlock_irqrestore(&ep->napi_lock, flags);

Afaiu, if some data comes in after the INTSTAT read but just before the
spin_lock, it may wait for ages.
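
The only way I see to close that window is to recheck INTSTAT once the lock
is held, something like (completely untested):

	spin_lock_irqsave(&ep->napi_lock, flags);
	/* an event may have slipped in since the INTSTAT read above */
	status = inl(ioaddr + INTSTAT);
	if (status & EpicNapiEvent) {
		spin_unlock_irqrestore(&ep->napi_lock, flags);
		goto rx_action;
	}
	epic_napi_irq_on(dev, ep);
	__netif_rx_complete(dev);
	spin_unlock_irqrestore(&ep->napi_lock, flags);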

Can you reformulate? I feel I still did not get it right.

