
Re: Linux TCP over Satellite

To: BERND.STURM@xxxxxxxxxxxx
Subject: Re: Linux TCP over Satellite
From: Matti Aarnio <matti.aarnio@xxxxxxxxxxx>
Date: Thu, 22 Feb 2001 21:09:02 +0200
Cc: netdev@xxxxxxxxxxx
In-reply-to: <CF62A5793E91D411BA7A0002A5131736745538@SATCOMNT11>; from BERND.STURM@xxxxxxxxxxxx on Thu, Feb 22, 2001 at 07:20:09PM +0100
References: <CF62A5793E91D411BA7A0002A5131736745538@SATCOMNT11>
Sender: owner-netdev@xxxxxxxxxxx
On Thu, Feb 22, 2001 at 07:20:09PM +0100, BERND.STURM@xxxxxxxxxxxx wrote:
> At the moment I'm doing my diploma thesis at Nortel Networks, and we're
> testing the performance of TCP over satellite with both Linux kernel
> versions 2.2.16 and 2.4.1.

  May I suggest that you pick up the fresh issue of USENIX ;login: magazine
  and read its TCP-tuning article (2001, number 1).

> After long reading about which TCP parameters and extensions are significant
> for the satellite performance of Linux, I did a test today with the
> following parameters enabled in the proc filesystem:
> tcp_sack enabled, tcp_window_scaling enabled, tcp_timestamps enabled (I
> suppose a value equal to 1 denotes that an option is enabled, right?)
> Furthermore for 2.4.1: tcp_dsack enabled, tcp_fack enabled, tcp_ecn enabled.

  ECN you don't need there.  "1" is "enabled", "0" is "disabled".

> The window sizes were set as follows:
> for 2.2.16: /proc/sys/net/core/rmem_default, rmem_max, wmem_default,
> wmem_max    all of them: 262140
> for 2.4.1: /proc/sys/net/ipv4/tcp_wmem, tcp_rmem   all of them: 262140
> 262140 262140 (min, default, max) (with echo 262140 262140 262140 >
> /proc/sys...., I hope this is right) and /proc/sys/net/core/rmem_default,
> rmem_max, wmem_default, wmem_max    all of them: 262140
> 
> Nevertheless when we did ftp-sessions between the 2 Linux-machines we never
> achieved a transfer rate of considerably more than 40 kByte over a 2
> Mbit-satellite channel !!!

  2 Mbit/s over how long a delay ?

  You need the buffering to approach 2 * bandwidth * delay BYTES.
  The delay here is the round-trip time, e.g. as reported by ping.  The
  bandwidth is given as 2 000 000 bits/sec, roughly 256 000 bytes/sec.

  Let's presume a 300 ms delay (of which some 250 ms is the speed-of-light
  delay from earth to geosynchronous orbit and back).

  The delay-bandwidth product is thus some 77 000 bytes.

  Buffering at the sending _and_at_the_receiving_ system must thus be at
  least some 77 kB per socket at the TCP level, and taking Linux's socket
  space accounting rules into account, 154 000 bytes is the minimum value.

  Now *both* of those values are over 64 kB, which is the maximum window
  size (outstanding unacknowledged data) of the original TCP, and indeed
  there are reasons why Linux conservatively limits itself to a mere 32 kB.

  It may be that the software you used set the window too low with an
  explicit

    int sndsize = 8192;
    setsockopt(skt, SOL_SOCKET, SO_SNDBUF, &sndsize, sizeof(sndsize));

  call -- or that the defaults really didn't take hold that easily.

  If you take a tcpdump of the traffic flowing at each end of the link,
  you might get some additional insight into what is going on.


> A colleague of mine who did the same transfers with a Windows 2000 machine
> was able to achieve transfer rates of almost 150 kByte.
> So what is wrong with my setup? Actually, with SACK enabled, window
> scaling enabled, and the huge TCP window sizes I've specified, performance
> should be much better. I would have expected about 1...1.5 Mbit instead
> of 300 kbit!

  With a 2M link and Linux at both ends I would expect 2 Mbit/sec speeds,
  presuming WSCALE really is active and the sender and receiver are in
  agreement about the modes.

> I hope you can spot my mistake and will be able to help me with my
> problem.
> Thank you very much in advance !
> 
> Yours sincerely,
> 
> Bernd Sturm
> ND Satcom
> phone: 0049-7545 / 96-8847
> mailto:Bernd.Sturm@xxxxxxxxxxxx

/Matti Aarnio
