I was rereading old text files and found the old description of the Solaris 2.5
TCP changes for slow links from Sun. I think one of them affects 2.4 (and
probably 2.2) too:
When the initial RTT estimate is too low, no segment will get through without
retransmits. On a connection without timestamps, all RTT estimates from
retransmitted packets are ignored due to Karn's rule. Unfortunately this means
that when the initial RTT is too low it'll never get a new estimate, because all
packets that arrive at the other end were already retransmitted. The connection
keeps generating faulty retransmits forever. Oops.
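A toy model makes the failure mode concrete. This is an illustrative sketch,
not kernel code: the update constants are the usual Jacobson/Karels ones, and
resetting the timer to the base RTO for each new segment is a simplification.

```python
# Toy model (hypothetical, not kernel code): fixed-RTT path, sender
# whose RTO update obeys Karn's rule, no timestamp option.
def run(real_rtt, init_rto, segments=5):
    srtt, rttvar, rto = init_rto, 0.0, init_rto
    spurious = 0
    for _ in range(segments):
        retransmitted = real_rtt > rto  # timer fires before the ACK arrives
        if retransmitted:
            spurious += 1
            # Karn's rule: the ACK is ambiguous, so no RTT sample is
            # taken and srtt/rto stay exactly where they were.
        else:
            # Jacobson/Karels update on a valid sample
            err = real_rtt - srtt
            srtt += err / 8
            rttvar += (abs(err) - rttvar) / 4
            rto = srtt + 4 * rttvar
    return rto, spurious

rto, spurious = run(real_rtt=5.0, init_rto=1.0)
print(rto, spurious)  # → 1.0 5: the RTO never moves, every segment retransmits
```

With a real RTT of 5s and a saved/initial RTO of 1s, every single segment is
retransmitted, so no valid sample ever reaches the estimator and the RTO is
stuck at 1s forever.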
On connections with timestamps the problem does not happen, because we use all
ACKs, retransmitted or not, for the RTT estimates. Connections without
timestamps lose badly though (and I think Windows 9x still defaults to no
timestamps).
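The reason timestamps sidestep Karn's rule is that the echoed TSval identifies
which transmission the ACK measures. A minimal sketch (names are made up for
illustration):

```python
# With the timestamp option, each (re)transmission carries its own TSval
# and the ACK echoes the TSval of the copy that reached the receiver,
# so the sample is unambiguous even for a retransmitted segment.
send_times = {}  # TSval -> send time

def on_send(tsval, now):
    send_times[tsval] = now

def on_ack(echoed_tsval, now):
    # Valid RTT sample regardless of retransmissions: the echoed TSval
    # names the exact transmission being acknowledged.
    return now - send_times[echoed_tsval]

on_send(tsval=100, now=0.0)   # original transmission
on_send(tsval=103, now=3.0)   # retransmission carries a fresh TSval
print(on_ack(echoed_tsval=103, now=8.0))  # → 5.0
```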
Normally that only affects links with an RTT > 3s, but with a saved RTT the
threshold could be much lower, when the RTT suddenly increases after it was
saved.
Sun's solution was described as:
Our solution is to keep the RTO RTT update still conservative, but now
update the RTO after no more than one receive window's worth of valid
RTT's. Further, when an invalid RTT is seen--an ACK of a retransmitted
segment, for example--any valid RTT information is fed into the RTO
I am not sure what they mean by "any valid RTT information", because in this
situation all RTTs are invalid due to Karn's rule. Updating the RTO earlier
does not work either, because of the same problem (so either this description
is wrong or Solaris still has the bug ;)
The only recovery is to ignore Karn's rule after some time and feed the
measured RTT into the RTO estimator, even when you're not sure if the ACK was
for the retransmitted packet or not. After a few retransmits the backoff is
already long enough that you can probably assume that the previously
retransmitted packet has already left the network.
So one way to solve it would be to turn off the Karn filter after a few
retransmits in tcp_ack_no_tstamp(). The hard thing to tune is how many
retransmits to wait. It depends on what good practical upper bounds for packet
lifetimes in the real Internet are.
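A toy model of that escape hatch (purely illustrative: KARN_LIMIT and all the
names are made up, this is not the actual tcp_ack_no_tstamp() logic, and the
per-segment backoff loop is a simplification):

```python
# After KARN_LIMIT retransmits of a segment, accept the measured time as
# an RTT sample despite Karn's rule: the exponential backoff has grown
# long enough that the earlier copies have probably left the network.
KARN_LIMIT = 3  # hypothetical tuning knob, the hard-to-pick number

def run_with_escape(real_rtt, init_rto, segments=5):
    srtt, rttvar, rto = init_rto, 0.0, init_rto
    spurious = 0
    for _ in range(segments):
        retrans, timer = 0, rto
        # exponential backoff until a timer finally covers the real RTT
        while real_rtt > timer:
            retrans += 1
            spurious += 1
            timer *= 2
        if retrans == 0 or retrans >= KARN_LIMIT:
            # Either a clean sample, or the Karn filter is deliberately
            # switched off after enough backoffs: feed the estimator.
            err = real_rtt - srtt
            srtt += err / 8
            rttvar += (abs(err) - rttvar) / 4
            rto = srtt + 4 * rttvar
    return rto, spurious

rto, spurious = run_with_escape(real_rtt=5.0, init_rto=1.0)
print(rto, spurious)  # only the first segment's 3 retransmits are wasted
```

In this toy run the first segment pays three spurious retransmits, then the
forced sample pushes the RTO above the real RTT and the connection recovers,
instead of retransmitting every segment forever.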
Any suggestions on that? Any other ideas?