On Sat, 2003-09-13 at 07:52, Robert Olsson wrote:
> >
> > I spoke with Alexey once about this, actually tx_queue_len can
> > be arbitrarily large but it should be reasonable nonetheless.
> >
> > Our preliminary conclusions were that values of 1000 for 100Mbit and
> > faster were probably appropriate. Maybe something larger for 1Gbit,
> > who knows.
If you recall, we saw that even for the gent who was trying to do 100K
TCP sockets on a 4-way SMP box, 1000 was sufficient and no packets were
dropped.
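For anyone wanting to experiment, the queue length can already be bumped
from userspace without touching the kernel; a quick sketch, with eth0
just as an example interface name:

```shell
# Raise the device transmit queue length to 1000 packets
ifconfig eth0 txqueuelen 1000

# Or, equivalently, with the iproute2 tooling:
ip link set dev eth0 txqueuelen 1000
```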
> >
> > We also determined that the only connection between TX descriptor
> > ring size and dev->tx_queue_len was that the latter should be large
> > enough to handle, at a minimum, the number of TX descriptor
> > completions that can be pending considering mitigation et al.
> >
> > So if TX irq mitigation can defer up to N TX descriptor completions
> > then dev->tx_queue_len must be at least that large.
> >
> > Back to the main topic, maybe we should set dev->tx_queue_len to
> > 1000 by default for all ethernet devices.
>
> Hello!
>
> Yes, sounds like an adequate setting for GigE. This is what we use in
> production and in the lab, but rather than increasing dev->tx_queue_len
> to 1000 we replace pfifo_fast with the pfifo qdisc, setting a qlen of 1000.
>
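For the archives, the replacement Robert describes would look something
like this (eth0 as an example interface):

```shell
# Replace the default pfifo_fast root qdisc with a plain
# single-band packet FIFO, limited to 1000 packets:
tc qdisc add dev eth0 root pfifo limit 1000
```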
I think this may not be good for QoS reasons. You want BGP packets to be
given priority over FTP; a single queue kills that.
The current default 3-band queue is good enough, the only challenge
being that no one sees stats for it. I have a patch for the kernel at:
http://www.cyberus.ca/~hadi/patches/restore.pfifo.kernel
and for tc at:
http://www.cyberus.ca/~hadi/patches/restore.pfifo.tc
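With those applied, the per-band stats should then show up in the usual
qdisc dump, e.g. (eth0 as an example):

```shell
# Dump qdisc statistics for the device:
tc -s qdisc show dev eth0
```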
cheers,
jamal