
Re: Slow TCP connection between linux and wince

To: kuznet@xxxxxxxxxxxxx
Subject: Re: Slow TCP connection between linux and wince
From: Aki M Laukkanen <amlaukka@xxxxxxxxxxxxxx>
Date: Sat, 10 Jun 2000 18:01:35 +0300 (EET DST)
Cc: netdev@xxxxxxxxxxx, iwtcp@xxxxxxxxxxxxxx
In-reply-to: <200006091725.VAA23767@xxxxxxxxxxxxx>
Sender: owner-netdev@xxxxxxxxxxx
On Fri, 9 Jun 2000 kuznet@xxxxxxxxxxxxx wrote:
> Hello!
> > connections with smaller MTUs. Think of an MTU of 576, which I think is pretty
> > common on the Internet as a whole. With larger RTTs you cannot use the

> TCP calculates _maximal_ window possible with current mss and device.
> If the window converged to 8K, it cannot be larger for this connection.
> If it were >8K, it would prune. It is a law of nature, rather than
> something determined by our choice.

Yes, I understood, although the truesize/len ratio might suggest that a
slightly larger window was possible. Maybe I forgot my own argument. See below.
 
> If you want a larger window (you do not want this, in your case
> 8K is enough), you have to increase rcvbuf. But:

For that particular test case I agree 8 kB is enough. I was arguing the
case of a smaller MTU, because some link on the route or the other host
might not support an MTU larger than, say, the default 576. Nevertheless
there can be plenty of bandwidth available which cannot be utilized
because of the smaller window.
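
To put a rough number on it: a connection can move at most one window of
data per round trip, so throughput is bounded by window/RTT. A quick
back-of-the-envelope sketch (the RTT figure is only an assumption for
illustration, not taken from the test above):

/* Back-of-the-envelope bound, not a measurement: a single TCP
 * connection moves at most one window of data per round trip. */
#include <stdio.h>

int main(void)
{
    double window = 8 * 1024;  /* bytes; the 8K window discussed above     */
    double rtt    = 0.5;       /* seconds; assumed long-delay wireless RTT */

    printf("max throughput: %.0f bytes/s (about %.0f kbit/s)\n",
           window / rtt, window / rtt * 8.0 / 1000.0);
    /* prints roughly 16384 bytes/s, i.e. ~131 kbit/s; any extra
     * bandwidth on the path simply goes unused. */
    return 0;
}

So with a long delay even a modest link ends up window-limited rather than
bandwidth-limited.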

> > You can enlarge the socket buffer size to get a bigger window but how many
> > people will? 
 
> It does not matter. People should not increase rcvbuf per socket.
> Essentially, this number is determined by amount of available
> RAM and by number of active connections, rather than by network conditions.
> Even current value of 64K is too large for common server configuration.
> 
> Certainly, one day we will have to do more smart "fair" memory management,
> which will allow to correlate memory consumption to network conditions.
> For now it is impossible, existing algorithms (f.e. the work made in PSC)
> are too coarse to be useful for a production OS, which Linux is.

Thanks for the PSC reference. Looks interesting. I agree for server
configurations, but this is really not a problem for your average
workstation. It does not have a large number of simultaneous connections
and has plenty of spare memory.
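
For reference, the per-socket knob being discussed is just a setsockopt()
call; a minimal sketch (the 256 kB figure is an arbitrary example, and the
kernel clips the request to the rmem_max sysctl):

/* Sketch only: ask for a larger receive buffer on one socket instead
 * of raising the system-wide default.  256 kB is an arbitrary example
 * value; the kernel limits it to the rmem_max sysctl. */
#include <stdio.h>
#include <sys/socket.h>

static int enlarge_rcvbuf(int fd)
{
    int bufsize = 256 * 1024;

    if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF,
                   &bufsize, sizeof(bufsize)) < 0) {
        perror("setsockopt(SO_RCVBUF)");
        return -1;
    }
    return 0;
}

Which is exactly the step most applications will never take, hence the
argument about the defaults.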

Current FreeBSD is happy (judging by a glance at uipc_socket2.c, so don't
kill me if I'm wrong) to waste memory, which I'd like as an option for
Linux too. Their sockbuf structure has sb_cc and sb_mbcnt fields. The
latter seems equivalent to {rmem|wmem}_alloc while the former only counts
actual data bytes. I can see why keeping account of actual data bytes
would be beneficial too. Although both are used in the sbspace() macro,
it seems sb_mbmax is pretty high. Incidentally they seem to have some of
the infrastructure (soreserve) in place to guarantee enough memory for
mbufs, but it is unused.
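
To make the distinction concrete, here is a simplified sketch of the idea
as I read it (not the actual FreeBSD structures or macros): one counter
for payload bytes, one for the buffer memory actually consumed, with free
space judged against both limits:

/* Simplified sketch of the two accounting notions; my reading of the
 * idea, not the real FreeBSD code. */
struct sockbuf_sketch {
    unsigned long data_bytes;  /* like sb_cc: payload bytes queued       */
    unsigned long mem_bytes;   /* like sb_mbcnt: buffer memory consumed  */
    unsigned long data_limit;  /* like sb_hiwat: cap on payload bytes    */
    unsigned long mem_limit;   /* like sb_mbmax: cap on buffer memory,
                                  typically much higher than data_limit  */
};

/* Free space is whichever limit is closer; as long as mem_limit is
 * generous, accounting for real memory use does not shrink the
 * advertised window. */
static unsigned long sb_space_sketch(const struct sockbuf_sketch *sb)
{
    unsigned long by_data = sb->data_limit > sb->data_bytes ?
                            sb->data_limit - sb->data_bytes : 0;
    unsigned long by_mem  = sb->mem_limit > sb->mem_bytes ?
                            sb->mem_limit - sb->mem_bytes : 0;
    return by_data < by_mem ? by_data : by_mem;
}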

> Cloned skbs are not counted, because their number is limited by tx_queue_len.
> For slow links it must be small number, sort of 4.

Hmm, this escaped me, so forget about cloned skbs.

> > Valid examples are wireless and satellite links. Congestion window can
> > grow freely because the delay was constant in this test.

> "thin" link is link with small power and small window.
> "thick" link is link with large power and large window.

This seems to be a matter of differing definitions of thin/{thick|fat}.
The writers of said RFC take it to mean the bandwidth only; "thin" and
"long" together produce the bandwidth-delay product.
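
Plugging assumed figures into that definition, just to illustrate: filling
a 384 kbit/s wireless link with a 600 ms round trip needs a window of
about 48000 bytes/s * 0.6 s, roughly 28 kB, so a path can be "thin" and
still overflow an 8K window once it is long enough.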

> Large power links must have large mtu (>= 1500, at least), no questions,
> and you will have 32K default window then.

I'll back off a bit and say that the MTU requirements of current GSM data
are indeed imposed by the small bandwidth. However, with current and
forthcoming higher-bandwidth wireless links there are entirely valid
reasons for choosing a smaller MTU. The higher bit error rate, for one:
with smaller packets you don't need to retransmit as much. Link-layer
retransmission schemes might seem attractive at a glance but are not in
practice.
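
A rough way to quantify the bit error rate point (the BER value below is
purely an assumed example): the probability that a packet arrives
undamaged drops with its size, so smaller packets waste less capacity on
retransmissions.

/* Illustration only; the BER is an assumed figure.  The chance that an
 * N-byte packet arrives with no bit errors is (1 - BER)^(8*N). */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double ber = 1e-5;            /* assumed residual bit error rate */
    int sizes[] = { 576, 1500 };  /* the MTUs compared above         */
    int i;

    for (i = 0; i < 2; i++) {
        double p_ok = pow(1.0 - ber, 8.0 * sizes[i]);
        printf("MTU %4d: delivered intact with probability %.3f\n",
               sizes[i], p_ok);
    }
    /* about 0.955 for 576 bytes vs about 0.887 for 1500 bytes. */
    return 0;
}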

