
Re: Slow TCP connection between linux and wince

To: kuznet@xxxxxxxxxxxxx
Subject: Re: Slow TCP connection between linux and wince
From: Aki M Laukkanen <amlaukka@xxxxxxxxxxxxxx>
Date: Wed, 7 Jun 2000 14:05:00 +0300 (EET DST)
Cc: netdev@xxxxxxxxxxx, iwtcp@xxxxxxxxxxxxxx
In-reply-to: <200006061924.XAA07893@ms2.inr.ac.ru>
Sender: owner-netdev@xxxxxxxxxxx
On Tue, 6 Jun 2000 kuznet@xxxxxxxxxxxxx wrote:
> It is not hackish, it is rather buggish. 8)
> You cannot mangle truesize, even with good intention.

Yes, that was just to test the theory.

> Try better to tune tcp_min_write_space(). I want to think it is tunable.

To me this seems to be fixing the symptom rather than the cause. There
seems to be an underlying assumption here that the default socket buffer
of 65536 bytes is good for all connections regardless of MTU. This is
only true if we don't do meta-data accounting in {wmem|rmem}_alloc.

> Also, you could select larger sndbuf for such funny links.

Funny links, well I wouldn't say so. I don't think many people recognize
the problem, and as such they will not select larger socket buffer sizes.
Perhaps an option to auto-tune the socket buffer size is needed, so that
the condition that the send buffer is always large enough to hold two
windows' worth of segments would hold true.
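
A rough userspace sketch of that condition, just to put numbers on it.
The per-segment overhead (sk_buff plus headers plus allocator rounding)
and the target window size are assumptions picked for illustration, not
values measured from any particular kernel or test case:

#include <stdio.h>

/* sndbuf needed so that two windows' worth of segments fit, counted at
 * the size wmem_alloc is charged for, not just the payload bytes. */
static unsigned sndbuf_needed(unsigned window, unsigned mss, unsigned per_skb)
{
        unsigned segs = (window + mss - 1) / mss;   /* segments per window */
        return 2 * segs * (mss + per_skb);
}

int main(void)
{
        const unsigned window  = 16384; /* target window in payload bytes (assumed) */
        const unsigned per_skb = 384;   /* assumed sk_buff + header + rounding cost */

        printf("mss 1460: sndbuf needed %6u\n", sndbuf_needed(window, 1460, per_skb));
        printf("mss  256: sndbuf needed %6u (default is 65536)\n",
               sndbuf_needed(window, 256, per_skb));
        return 0;
}

With a 1460-byte MSS the default 65536-byte buffer is enough here, but
with a 256-byte MSS most of the buffer goes to meta-data and two windows
no longer fit.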

> Combination of ridiculously low MSS with utterly high cwnd
> is highly non-standard situation.

I don't think this is a highly non-standard situation. Even if we ignore
slow wireless links (e.g. GSM data), to some extent every modem link
exhibits this behaviour. Have you read draft-ietf-pilc-slow-03.txt or
rfc2757.txt? They are a good read for a little perspective on the
problem space.

> > Our second problem with this disparity is on the receive side. The scenario
> > is essentially the same but with an unreliable link (read wireless) which
> > drops packets. In case of packet drop receiver keeps building an 
> > out-of-order queue which grows to the limit of the receive buffer 
> > quite quickly. However sender keeps sending more because of the difference
> > between advertised window and the actual allocated space. This triggers
> > tcp_input.c:prune_queue() which purges the whole out-of-order queue to
> > free up space, thus killing the TCP performance quite effectively.
> 
> TCP performance is killed not by pruning, but rather by packet drop. 8)

Everything is relative. Retransmitting a single packet versus having
to retransmit the whole window amounts to 112 seconds versus 160 seconds
when transmitting 100KB in this particular test case. But this is beside
the point. I think I can argue that the receiver should never advertise a
window bigger than it is prepared to receive.

This does not just affect lossy links. A single packet drop due to
congestion is quite a valid scenario, with routers deploying active
congestion management schemes. You can calculate a threshold point for
the MTU at which the window calculation starts to break.

/* 
 * How much of the receive buffer do we advertize 
 * (the rest is reserved for headers and driver packet overhead)
 * Use a power of 2.
 */
#define TCP_WINDOW_ADVERTISE_DIVISOR 2

For this divisor the threshold is something like (1536+130)/2 bytes for 
typical ethernet drivers. In the case of PPP it is a bit different.
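
A small userspace illustration of where that figure comes from. If every
received segment is charged roughly a full-MTU driver buffer plus sk_buff
overhead (the 1536+130 above) against rmem_alloc no matter how small its
payload is, then a full advertised window of rcvbuf/2 payload bytes
over-commits the receive buffer once the MSS drops below about half of
that per-packet cost. The rcvbuf and MSS values here are just examples:

#include <stdio.h>

int main(void)
{
        const unsigned rcvbuf  = 65536;        /* default receive buffer       */
        const unsigned advwin  = rcvbuf / 2;   /* TCP_WINDOW_ADVERTISE_DIVISOR */
        const unsigned per_skb = 1536 + 130;   /* driver buffer + sk_buff      */
        const unsigned mss[]   = { 1460, 512, 256 };
        unsigned i;

        for (i = 0; i < sizeof(mss) / sizeof(mss[0]); i++) {
                unsigned segs = (advwin + mss[i] - 1) / mss[i];
                unsigned rmem = segs * per_skb;
                printf("mss %4u: %3u segments fill the window, charging "
                       "%6u bytes of a %u byte rcvbuf%s\n",
                       mss[i], segs, rmem, rcvbuf,
                       rmem > rcvbuf ? " <- over-committed" : "");
        }
        return 0;
}

Below an MSS of roughly (1536+130)/2 = 833 bytes the queued segments can
pin more memory than the whole rcvbuf even though only half of it was
advertised, which is exactly the situation that ends in prune_queue
being triggered.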

> Yes, pruning should be a bit less aggressive. I will repair this.

Does this mean the changes described in the comment below? Doing it the
other way (not killing the whole ofo queue) would still cause further
packet loss.

                /* THIS IS _VERY_ GOOD PLACE to play window clamp.
                 * if free_space becomes suspiciously low
                 * verify ratio rmem_alloc/(rcv_nxt - copied_seq),
                 * and if we predict that when free_space will be lower mss,
                 * rmem_alloc will run out of rcvbuf*2, shrink window_clamp.
                 * It will eliminate most of prune events! Very simple,
                 * it is the next thing to do.                  --ANK
                 */
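
For what it's worth, here is my reading of that comment as a userspace
sketch. The function, its parameters and the quarter-of-the-buffer
"suspiciously low" trigger are hypothetical, paraphrasing the comment
rather than reproducing actual kernel code:

#include <stdio.h>

static unsigned shrink_window_clamp(unsigned window_clamp, unsigned rcvbuf,
                                    unsigned rmem_alloc,
                                    unsigned queued_payload, /* rcv_nxt - copied_seq */
                                    unsigned free_space)
{
        unsigned cost;

        /* Only act once free space looks suspiciously low (arbitrarily,
         * a quarter of the receive buffer). */
        if (queued_payload == 0 || free_space > rcvbuf / 4)
                return window_clamp;

        /* Memory charged per payload byte queued so far. */
        cost = rmem_alloc / queued_payload;

        /* Predict rmem_alloc at the point the advertised space is used
         * up; if it would overrun rcvbuf*2, clamp the window so that it
         * cannot. */
        if (cost && (queued_payload + free_space) * cost > 2 * rcvbuf)
                window_clamp = (2 * rcvbuf) / cost;

        return window_clamp;
}

int main(void)
{
        /* 32 queued segments of 256 bytes, each charged ~1666 bytes. */
        unsigned clamp = shrink_window_clamp(32768, 65536,
                                             32 * 1666, 32 * 256, 16384);
        printf("window clamp: 32768 -> %u bytes\n", clamp);
        return 0;
}

With these numbers the clamp drops from 32768 to 21845 bytes, roughly
what a rcvbuf*2 budget can actually hold at about six bytes of memory
per payload byte.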

> In any case, try net-xxyyzz.dif.gz from ftp://ftp.inr.ac.ru/ip-routing/.
> It will not be better, I think, but at least you will discover when

Will do.




