On Thu, Oct 07, 2004 at 03:07:56PM -0700, David S. Miller wrote:
> On Thu, 7 Oct 2004 16:50:26 -0500
> Matt Mackall <mpm@xxxxxxxxxxx> wrote:
>
> > > The only drawback is that there won't be a reply when the driver
> > > trylock fails, but netpoll doesn't have a queue for that anyway. You
> > > could probably poll then, but I'm not sure it's a good idea.
> >
> > But your meaning here is not entirely clear.
>
> If another thread on another cpu is in the dev->hard_start_xmit() routine,
> then it will have its tx device lock held, and netpoll will simply get an
> immediate return from ->hard_start_xmit() with error NETDEV_TX_LOCKED.
>
> The packet will thus not be sent, and because netpoll does not have a
> backlog queue for tx packets of any kind, the packet is lost forever.
>
> NETDEV_TX_LOCKED is a transient condition. It works for the rest of the
> kernel because whoever holds the tx lock on the device will recheck the
> device packet transmit queue when it drops that lock and returns from
> ->hard_start_xmit().
>
> Andi is merely noting how netpoll's design does not have such a model,
> which is why the NETIF_F_LLTX semantics don't mesh very well.
>
> It is unclear whether it is wise that netpoll_send_skb() currently spins
> on ->hard_start_xmit() returning NETDEV_TX_LOCKED. That could
> result in some kind of deadlock.
Deadlocks from recursion, presumably? We could probably throw in a max
retry count, as ugly as that is..
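
Something along these lines, perhaps? A rough, hypothetical sketch only,
not the netpoll_send_skb() that's in the tree; the function name, the
NETPOLL_TX_RETRIES bound and the udelay() interval are made up for
illustration, and the xmit_lock handling for non-LLTX drivers is left out:

#include <linux/netdevice.h>
#include <linux/netpoll.h>
#include <linux/skbuff.h>
#include <linux/delay.h>

/* Arbitrary bound -- purely illustrative. */
#define NETPOLL_TX_RETRIES	20000

static void netpoll_send_skb_sketch(struct netpoll *np, struct sk_buff *skb)
{
	struct net_device *dev = np->dev;
	int tries, status;

	/* An LLTX driver takes its own tx lock with a trylock and returns
	 * NETDEV_TX_LOCKED when another CPU already holds it; retry only
	 * in that case, up to the bound. */
	for (tries = 0; tries < NETPOLL_TX_RETRIES; tries++) {
		status = dev->hard_start_xmit(skb, dev);
		if (status == NETDEV_TX_OK)
			return;		/* driver accepted and consumed the skb */
		if (status != NETDEV_TX_LOCKED)
			break;		/* busy ring or hard error: give up */

		/* Transient condition: let the lock holder on the other
		 * CPU make some progress before trying again. */
		udelay(1);
	}

	/* Bound exhausted (or driver error).  netpoll has no tx backlog
	 * queue, so the packet is simply dropped rather than spinning
	 * forever. */
	kfree_skb(skb);
}

That keeps the common case (lock briefly held by another CPU) working
while putting an upper bound on how long netpoll can spin, possibly with
interrupts disabled; whether the bound should be a count or a timeout is
another question.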
--
Mathematics is the supreme nostalgia of our time.