
Re: Slow TCP connection between linux and wince

To: kuznet@xxxxxxxxxxxxx
Subject: Re: Slow TCP connection between linux and wince
From: Aki M Laukkanen <amlaukka@xxxxxxxxxxxxxx>
Date: Fri, 9 Jun 2000 18:22:03 +0300 (EET DST)
Cc: netdev@xxxxxxxxxxx, iwtcp@xxxxxxxxxxxxxx
In-reply-to: <200006071608.UAA21333@xxxxxxxxxxxxx>
Sender: owner-netdev@xxxxxxxxxxx
On Wed, 7 Jun 2000 kuznet@xxxxxxxxxxxxx wrote:
> Also, the fact that truesize exceeds mss not more than twice
> is really crucial. When it is not true, linux tcp used to fail
> miserably. This phenomenon is visible only on high rtt networks
> with small losses, though.

"used" - as in past tense?

Remember that truesize is not the whole story. The cloned skbs show up in
wmem_alloc too, which is why we got bitten by the burstiness. I see the
heuristics are on the conservative side, though.
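
To put rough numbers on the double charging (everything below is an
assumed, back-of-the-envelope figure, not a measured value):

    /* Sketch only: when TCP transmits a segment, the skb sitting on
     * the retransmit queue is cloned and the clone is charged to the
     * socket's wmem_alloc as well, so while the clone waits in the
     * device queue the segment costs roughly twice its truesize. */
    #include <stdio.h>

    #define MSS       256
    #define TRUESIZE  2048            /* assumed per-skb charge     */
    #define SNDBUF    (64 * 1024)     /* common default send buffer */

    int main(void)
    {
        printf("segments admitted: %d\n", SNDBUF / (2 * TRUESIZE)); /* 16 */
        return 0;
    }

So with small segments, far fewer packets fit in flight than SNDBUF/MSS
alone would suggest.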

> > I don't think this a highly non-standard situation. Even if we ignore
> > slow wireless links (e.g. GSM data), to some extent every modem-link
> > exhibits this behaviour. Have you read draft-ietf-pilc-slow-03.txt or
> > rfc2757.txt? Good read for a little perspective to the problem space.
> 
> You missed the point. Network should have large _packet_ power_
> i.e. (rtt*bandwidth)/mss to hit this problem. This situation never occurred
> in real life earlier. I have no idea how you reached a cwnd of 192. 8)

Valid examples are wireless and satellite links. The congestion window
could grow freely because the delay was constant in this test.
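
For a sense of scale, here is the packet-power arithmetic with assumed
but plausible satellite-link figures:

    #include <stdio.h>

    int main(void)
    {
        /* packet power = (rtt * bandwidth) / mss; all figures assumed */
        double bandwidth = 2e6 / 8;   /* 2 Mbit/s link, in bytes/s */
        double rtt       = 0.5;       /* 500 ms round trip         */
        double mss       = 256;
        printf("packet power: %.0f\n", rtt * bandwidth / mss); /* ~488 */
        return 0;
    }

A few hundred packets of power on a loss-free path is exactly the
setting where cwnd can climb into the 192 range.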

> Of course. The question is how to make this. I proposed one solution.

Ok, tested with 0608. Hmm, what can I say: the window never grows past 8kB
with an mss of 256. Both the ofo-queue pruning and the burstiness are
masked by this behaviour (obviously). The latter, of course, only if the
receiver is a Linux with 0608 too.

Tested with an mss of 536 and 1024 too; this results in a max window of
~16kB and ~24kB respectively. I can't say I'm satisfied though. This
penalises connections with smaller MTUs. Think of an MTU of 576, which I
believe is pretty common on the Internet as a whole. With larger RTTs you
cannot use the whole available bandwidth because the window is just too
small. Those tests were done with PPP, which only allocates MRU bytes per
skb, but your average ethernet driver has to allocate 1500+ bytes per skb
regardless of what the actual packet size is.
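
To put assumed numbers on that (the per-skb overhead here is a guess,
not a measured value):

    #include <stdio.h>

    #define RCVBUF    (64 * 1024)   /* typical default receive buffer */
    #define OVERHEAD  256           /* assumed per-skb bookkeeping    */

    int main(void)
    {
        /* PPP allocates only MRU bytes per skb. */
        int ppp_pkts = RCVBUF / (576 + OVERHEAD);   /* ~78 packets */

        /* An ethernet driver allocates a full 1536-byte buffer even
         * for a 256-byte segment. */
        int eth_pkts = RCVBUF / (1536 + OVERHEAD);  /* ~36 packets */

        printf("ppp: %d pkts, eth: %d pkts\n", ppp_pkts, eth_pkts);
        return 0;
    }

With an mss of 256 the ethernet case caps the usable window near
36 * 256 ~= 9kB, which is the same ballpark as the ~8kB I saw above.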

You can enlarge the socket buffer size to get a bigger window, but how many
people will? Also, many applications try to enforce a certain advertised
window by setting the socket buffer size themselves. That no longer has
the effect they want.
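
For reference, the pattern such applications rely on is the plain
setsockopt() call, done before connect() so the buffer size is reflected
in the window offered at SYN time (the 128kB figure is just an example):

    #include <sys/socket.h>

    /* Hypothetical helper: ask for a receive buffer large enough to
     * cover the bandwidth-delay product, and with it a matching
     * advertised window. */
    static int set_big_rcvbuf(int fd)
    {
        int bufsize = 128 * 1024;   /* example target, not a magic value */
        return setsockopt(fd, SOL_SOCKET, SO_RCVBUF,
                          &bufsize, sizeof(bufsize));
    }

Under the new heuristics, the window actually offered can stay well
below what this asks for.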



