
Re: Question about QOS

To: Thomas Graf <tgraf@xxxxxxx>
Subject: Re: Question about QOS
From: Nicolas DICHTEL <nicolas.dichtel@xxxxxxxxx>
Date: Tue, 26 Apr 2005 16:57:32 +0200
Cc: netdev@xxxxxxxxxxx, linux-net@xxxxxxxxxxxxxxx
In-reply-to: <20050426125955.GT577@postel.suug.ch>
References: <426E06F1.9000105@6wind.com> <20050426125955.GT577@postel.suug.ch>
Reply-to: nicolas.dichtel@xxxxxxxxx
Sender: netdev-bounce@xxxxxxxxxxx
User-agent: Mozilla Thunderbird 1.0 (Windows/20041206)
Thomas Graf wrote:

> * Nicolas DICHTEL <426E06F1.9000105@xxxxxxxxx> 2005-04-26 11:16
>
>> I set CONFIG_NET_SCH_CLK_GETTIMEOFDAY in my kernel. The macro
>> PSCHED_TDIFF_SAFE calculates the difference between two timestamps and
>> uses the function psched_tod_diff() to do this. If the clock is
>> readjusted (due to ntp, for example), this function can return a
>> negative number (if bound > 1000000), and then the flow is blocked by
>> the kernel. Am I right?
>
> do_gettimeofday takes care of ntp adjustments, so we _should_ be safe.
> However, it might be wise to enforce a range of 0..bound instead of
> INT_MIN..bound, because qdiscs like red rely on this. Assuming we have
> a delta of -4 seconds and return -4e6, red will crash horribly when
> accessing the array with idle_time>>cell_log.
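
To illustrate the failure mode Thomas describes, here is a simplified
user-space sketch of the current (unpatched) logic; it is not the kernel
code itself, and the bound, cell_log and table values are made up:

#include <stdio.h>

/* Simplified copy of the unpatched psched_tod_diff() logic
 * (CONFIG_NET_SCH_CLK_GETTIMEOFDAY case). */
static int tod_diff(int delta_sec, int bound)
{
    int delta;

    if (bound <= 1000000 || delta_sec > (0x7FFFFFFF/1000000)-1)
        return bound;
    delta = delta_sec * 1000000;
    if (delta > bound)          /* a negative delta is not clamped here */
        delta = bound;
    return delta;
}

int main(void)
{
    /* Clock stepped back by 4 seconds while the qdisc was idle;
     * bound is larger than 1000000 (made-up value). */
    int idle_time = tod_diff(-4, 3000000);
    int cell_log = 10;                  /* made-up cell_log value */

    printf("idle_time = %d\n", idle_time);              /* -4000000 */
    printf("index     = %d\n", idle_time >> cell_log);  /* large negative */

    /* A red-style lookup such as stab[idle_time >> cell_log] on a
     * 256-entry table would read far outside the array here. */
    return 0;
}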



You can have the same kind of problem with an ingress filter. I propose
the following patch to fix the range to 0..bound.

[SCHED] Fix range in psched_tod_diff() to 0..bound

Signed-off-by: Nicolas Dichtel <nicolas.dichtel@xxxxxxxxx>

diff -Nru linux-2.6-a/include/net/pkt_sched.h linux-2.6-b/include/net/pkt_sched.h
--- linux-2.6-a/include/net/pkt_sched.h 2005-04-26 15:45:07.074124664 +0200
+++ linux-2.6-b/include/net/pkt_sched.h 2005-04-26 15:47:26.215971888 +0200
@@ -140,7 +140,7 @@
        if (bound <= 1000000 || delta_sec > (0x7FFFFFFF/1000000)-1)
                return bound;
        delta = delta_sec * 1000000;
-       if (delta > bound)
+       if (delta > bound || delta < 0)
                delta = bound;
        return delta;
 }
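
With this change, a backwards clock adjustment of, say, 4 seconds gives
delta = -4 * 1000000 = -4000000, which now matches the new "delta < 0"
test and is clamped to bound, so callers like red (or the ingress filter
case above) never see a negative time difference.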