netdev

Re: recent TCP changes adverse on slow links

To: kuznet@xxxxxxxxxxxxx
Subject: Re: recent TCP changes adverse on slow links
From: Aki M Laukkanen <amlaukka@xxxxxxxxxxxxxx>
Date: Thu, 1 Jun 2000 14:17:09 +0300 (EET DST)
Cc: netdev@xxxxxxxxxxx, iwtcp@xxxxxxxxxxxxxx
In-reply-to: <200005311813.WAA23306@ms2.inr.ac.ru>
Sender: owner-netdev@xxxxxxxxxxx
On Wed, 31 May 2000 kuznet@xxxxxxxxxxxxx wrote:
> It is difficult to believe that the problem is here, even if this
> change kills the effect. Actually, I am even not sure, that this

In retrospect this report was a wee bit hasty, sorry. Pre3 didn't
exhibit this behaviour, and the only relevant change seemed to be this
one; I was a bit too quick and didn't think it through thoroughly.

go to sleep check:
        return atomic_read(&sk->wmem_alloc) < sk->sndbuf;

wakeup condition:
        if (sock_wspace(sk) >= tcp_min_write_space(sk) &&

These heuristics were masked by the over-scheduling, but I don't see
how this could account for what was seen in the tcpdumps. It seems, as
you remark, as if the transmit queue length were huge. But we used ppp,
which has a tx_queue_len of three.

So it seems something further down is affecting the behaviour
indirectly. That seems to be our own fault (will do tests). The link
used was really an emulated link (sorry, the software is not publicly
available and is still being beta-tested). The basic principle is to
catch the PPP stream with a pseudo-tty, and from there on we can mess
with the packets (in principle like dummynet). We are aware of the
gotchas this implies, and the emulator implements flow control, but
apparently in those tests it wasn't turned on.

The only thing I can't put my finger on is why this change made the
dumps look the way they did.



