
Re: Tx queueing

To: jamal <hadi@xxxxxxxxxx>
Subject: Re: Tx queueing
From: Jeff Garzik <jgarzik@xxxxxxxxxxxxxxxx>
Date: Sun, 21 May 2000 21:08:35 -0400
Cc: Andrew Morton <andrewm@xxxxxxxxxx>, Donald Becker <becker@xxxxxxxxx>, "netdev@xxxxxxxxxxx" <netdev@xxxxxxxxxxx>
Fake-sender: owner-netdev@xxxxxxxxxxx
Organization: MandrakeSoft
References: <Pine.GSO.4.20.0005201750240.18029-100000@xxxxxxxxxxxxxxxx>
Sender: Majordomo List Manager <majordomo@xxxxxxxxxxx>
jamal wrote:
> Jeff, when you say some modern PCI hardware has
> problems with the described semantics: can you provide more details?

I was referring to PCI drivers, not PCI hardware.  What I meant was that,
in my experience, some of the early softnet conversions (Example A below)
caused transmit timeouts quite easily until they were updated to look
like Example B.

Example A:

        drv_start_xmit() {
                netif_stop_queue()              /* stop the queue on every xmit */
                /* queue packet for xmit */
                if (!tx_full)
                        netif_start_queue()     /* restart it if the ring still has room */
        }
        interrupt() {
                /* Tx'd a packet */
                if (tx_full)
                        netif_stop_queue()      /* keep the queue stopped while the ring is full */
                else
                        netif_wake_queue()
        }

Example B:

        drv_start_xmit() {
                /* queue packet for xmit */
                if (tx_full)
                        netif_stop_queue()      /* stop only once the ring actually fills */
        }
        interrupt() {
                /* Tx'd a packet */
                if (!tx_full)
                        netif_wake_queue()      /* wake once there is room again */
        }

As a further note, since many PCI drivers do multiple iterations of
"work" in their interrupt handlers, I wonder if it would be useful to
postpone the netif_wake_queue() call until after the work loop
completes.  A rough sketch of what I mean is below.
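
For concreteness, here is a sketch of that idea in the same style as the
examples above; the helper names (tx_done(), free_tx_skb(), dirty_tx) are
just placeholders for whatever a given driver uses to track its Tx ring,
not any real driver's fields:

        interrupt() {
                int freed = 0;

                /* first reap every completed Tx descriptor */
                while (tx_done(dirty_tx)) {
                        free_tx_skb(dirty_tx);
                        dirty_tx++;
                        freed++;
                }

                /* then wake the queue once, after the work loop */
                if (freed && !tx_full)
                        netif_wake_queue();
        }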

        Jeff



-- 
Jeff Garzik              | Liberty is always dangerous, but
Building 1024            | it is the safest thing we have.
MandrakeSoft, Inc.       |      -- Harry Emerson Fosdick
