
To: Thomas Graf <tgraf@xxxxxxx>
Subject: Re: dummy as IMQ replacement
From: Andy Furniss <andy.furniss@xxxxxxxxxxxxx>
Date: Tue, 01 Feb 2005 15:03:49 +0000
Cc: jamal <hadi@xxxxxxxxxx>, netdev@xxxxxxxxxxx, Nguyen Dinh Nam <nguyendinhnam@xxxxxxxxx>, Remus <rmocius@xxxxxxxxxxxxxx>, Andre Tomt <andre@xxxxxxxx>, syrius.ml@xxxxxxxxxx, Damion de Soto <damion@xxxxxxxxxxxx>
In-reply-to: <20050201133138.GM31837@postel.suug.ch>
References: <1107123123.8021.80.camel@jzny.localdomain> <20050131135810.GC31837@postel.suug.ch> <1107181169.7840.184.camel@jzny.localdomain> <20050131151532.GE31837@postel.suug.ch> <41FED514.7060702@dsl.pipex.com> <20050201133138.GM31837@postel.suug.ch>
Sender: netdev-bounce@xxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.3a) Gecko/20021212
Thomas Graf wrote:
                                  s
X =  ----------------------------------------------------------
      R*sqrt(2*b*p/3) + (t_RTO * (3*sqrt(3*b*p/8) * p * (1+32*p^2)))

Where:

   X is the transmit rate in bytes/second.
   s is the packet size in bytes.
   R is the round trip time in seconds.
   p is the loss event rate, between 0 and 1.0, of the number of loss
     events as a fraction of the number of packets transmitted.
   t_RTO is the TCP retransmission timeout value in seconds.
   b is the number of packets acknowledged by a single TCP
     acknowledgement.
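
As a quick sanity check, the equation drops into a few lines of C (a
minimal sketch; the parameter names are just the symbols defined above,
not taken from any real implementation):

#include <math.h>

/*
 * Sketch of the TFRC throughput equation above.
 *
 *   s     packet size in bytes
 *   R     round trip time in seconds
 *   p     loss event rate (0 < p <= 1.0)
 *   t_RTO TCP retransmission timeout in seconds
 *   b     packets acknowledged per TCP acknowledgement
 *
 * Returns the transmit rate X in bytes/second.
 */
static double tfrc_rate(double s, double R, double p, double t_RTO,
                        double b)
{
        double f = R * sqrt(2.0 * b * p / 3.0)
                 + t_RTO * (3.0 * sqrt(3.0 * b * p / 8.0) * p
                            * (1.0 + 32.0 * p * p));

        return s / f;
}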

WRT policers, I never figured out how you would fit the effects of playing with the burst size parameter, its interaction with few/many connections, and any burstiness it causes into an equation like that.


A burst buffer has an impact on R for later packets; it can "smooth" R
and X and thus results in more stable rates. Depending on the actual
burst, it can avoid retransmits, which stabilizes the rate as well.
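
A minimal token-bucket sketch in C (hypothetical names, not the actual
policer code) shows where the burst parameter enters and why a deeper
bucket lets a packet train through unpoliced:

#include <stdbool.h>

/*
 * Toy token bucket: "burst" is the bucket depth in bytes. A larger
 * burst absorbs a packet train without marking it non-conformant,
 * which is what smooths R and X as described above.
 */
struct tbucket {
        double rate;    /* refill rate in bytes per second */
        double burst;   /* bucket depth in bytes */
        double tokens;  /* current fill in bytes */
        double last;    /* timestamp of last update in seconds */
};

static bool tb_conform(struct tbucket *tb, double now, unsigned int len)
{
        tb->tokens += (now - tb->last) * tb->rate;
        if (tb->tokens > tb->burst)
                tb->tokens = tb->burst;
        tb->last = now;

        if (tb->tokens < len)
                return false;   /* exceed: drop or mark */
        tb->tokens -= len;
        return true;            /* conform: pass */
}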

But it's not a real rate-limiting buffer in the policer case, is it?



This sounds cool. In some ways I think it could be nicer (in the case of shaping from the wrong end of a slow link) to delay the real packets - that way the clients' TCPs get to see the smoothed version of the traffic, and you can delay UDP as well.


It's impossible to never drop anything. For UDP we can either drop
it, or use ECN and hope the other IP stack takes care of it or that
the application implements its own cc algorithm. Basically you can
already do that with (G)RED. Most UDP users which provide a continuous
stream, such as video streams, implement some kind of key datagram
which carries the number of datagrams received since the last key
datagram, and the application throttles down based on that, so
dropping is often the only way to achieve a generally working
solution. Delaying UDP packets and then dropping them if the buffer
is full is very dangerous; protocols based on UDP often rely on the
assumption that datagrams get lost randomly rather than successively.
We can think about precise policing for UDP again once the current
poor application-level cc algorithms have failed and the industry has
accepted ECN as the right thing. For now most of them still suffer
from the NIH syndrome in this area.
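
A rough C sketch of the application-level feedback described above
(the structure and names are made up for illustration):

#include <stdint.h>

/*
 * The receiver periodically sends a "key datagram" carrying the count
 * of datagrams it saw since the last one; the sender compares that
 * with what it sent and throttles when loss shows up.
 */
struct key_datagram {
        uint32_t seq;       /* key datagram sequence number */
        uint32_t received;  /* datagrams seen since the last key */
};

/* Sender side: crude multiplicative back-off on reported loss. */
static unsigned int throttle(unsigned int rate_bps,
                             uint32_t sent, uint32_t received)
{
        if (received < sent)            /* some datagrams were lost */
                return rate_bps / 2;    /* back off */
        return rate_bps;                /* no loss seen, keep rate */
}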

Interesting stuff. I was thinking of game UDP, where just dropping would simulate what the user should have done anyway, but costs you bandwidth. If a lot of gamers share a slow link, then if you lag them out they know it's time to turn the rate down.




How intelligent is it, and how much per-connection state, if any, do you/could you keep?


I keep a rate estimator for every flow on ingress in a hash table and
look it up on egress with the flow parameters reversed. It gets
pretty expensive on huge numbers of connections; usually one doesn't
want to do per-connection policing on such boxes. ;->
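
A rough C sketch of that lookup scheme (the types and hash are made up
for illustration): the estimator is keyed by the 4-tuple on ingress,
and egress probes the same table with the tuple reversed, so both
directions of a connection map to one estimator:

#include <stdint.h>

struct flow_key {
        uint32_t saddr, daddr;
        uint16_t sport, dport;
};

static uint32_t flow_hash(const struct flow_key *k)
{
        /* toy hash for illustration only */
        return (k->saddr ^ k->daddr) ^
               ((uint32_t)k->sport << 16 | k->dport);
}

/* Egress probes the ingress table with the tuple reversed. */
static struct flow_key reverse_key(const struct flow_key *k)
{
        struct flow_key r = {
                .saddr = k->daddr, .daddr = k->saddr,
                .sport = k->dport, .dport = k->sport,
        };
        return r;
}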


Nice - are you planning to add anything to tweak things for the wrong-end-of-the-bottleneck problems?


Andy.

