Francois Romieu <romieu@xxxxxxxxxxxxx> writes:
> OGAWA Hirofumi <hirofumi@xxxxxxxxxxxxxxxxxx> :
> [...]
> > Umm.. the above code is part of ->poll(). I think xxx_interrupt() needs
> > netif_running() instead. The driver must clear the __LINK_STATE_RX_SCHED
> > flag...
> >
> > BTW, ->napi_lock is unneeded because netif_schedule() is already
> > atomic; it needs only local_irq_enable/disable().
>
> Color me confused. The lock is supposed to protect against:
>
> CPU1 CPU2
> [poll]
> epic_napi_irq_on(dev, ep);
> [irq handler]
> if (netif_rx_schedule_prep(dev)) {
> epic_napi_irq_off(dev, ep);
> __netif_rx_schedule(dev);
> }
> __netif_rx_complete(dev);
>
> -> napi irq are disabled and device is removed from poll list. What will
> prevent it ?
The __LINK_STATE_RX_SCHED flag stays set until __netif_rx_complete() is called,
so netif_rx_schedule_prep() returns 0.
> > After __netif_rx_complete(), it must not do "goto rx_action", otherwise it
> > may cause double scheduling; it should move before spin_lock().
>
> I understand the previous statement as:
>
> + status = inl(ioaddr + INTSTAT);
> + if (status & EpicNapiEvent) {
> + epic_napi_irq_off(dev, ep);
> + goto rx_action;
> +
> + spin_lock_irqsave(&ep->napi_lock, flags);
> + epic_napi_irq_on(dev, ep);
> + __netif_rx_complete(dev);
> + spin_unlock_irqrestore(&ep->napi_lock, flags);
> +
>
> Afaiu, if some data comes in just before the spin_lock, it may wait for ages.
Yes, maybe. But if it loops back there after the spin_lock, it may call
__netif_rx_schedule() twice. And netif_rx_complete() doesn't call dev_put(),
so it will leak dev->refcnt, I think.
> + __netif_rx_complete(dev);
> + spin_unlock_irqrestore(&ep->napi_lock, flags);
-- interrupt and call __netif_rx_schedule() --
> + status = inl(ioaddr + INTSTAT);
> + if (status & EpicNapiEvent) {
> + epic_napi_irq_off(dev, ep);
> + goto rx_action;
Thanks.
--
OGAWA Hirofumi <hirofumi@xxxxxxxxxxxxxxxxxx>