netdev

Re: [RFC] TCP congestion schedulers

To: Stephen Hemminger <shemminger@xxxxxxxx>
Subject: Re: [RFC] TCP congestion schedulers
From: Andi Kleen <ak@xxxxxx>
Date: 29 Mar 2005 17:25:38 +0200
Cc: John Heffner <jheffner@xxxxxxx>, baruch@xxxxxxxxx, netdev@xxxxxxxxxxx
In-reply-to: <20050328155117.7c5de370@xxxxxxxxxxxxxxxxx>
References: <20050309210442.3e9786a6.davem@xxxxxxxxxxxxx> <4230288F.1030202@xxxxxxxxx> <20050310182629.1eab09ec.davem@xxxxxxxxxxxxx> <20050311120054.4bbf675a@xxxxxxxxxxxxxxxxx> <20050311201011.360c00da.davem@xxxxxxxxxxxxx> <20050314151726.532af90d@xxxxxxxxxxxxxxxxx> <m13bur5qyo.fsf@xxxxxx> <Pine.LNX.4.58.0503211605300.6729@xxxxxxxxxxxxxx> <20050322074122.GA64595@xxxxxx> <20050328155117.7c5de370@xxxxxxxxxxxxxxxxx>
Sender: netdev-bounce@xxxxxxxxxxx
User-agent: Mutt/1.4.1i
> Running netperf in loopback mode on a 2-CPU Opteron shows that the change is
> very small when averaged over 10 runs. Overall there is a .28% decrease in
> CPU usage and a .96% loss in throughput. But both of those values are less
> than twice the standard deviation, which was .4% for the CPU measurements
> and .8% for the throughput measurements. I can't see it as worth bothering
> with unless there is some big-money benchmark on the line, in which case it
> would make more sense to look at other optimizations of the loopback path.

Opteron has no problems with indirect calls; IA64 seems to be different,
though.

But if you see noticeable differences even on an Opteron, I find that
somewhat worrying.

-Andi
