Hello, we just stumbled across this discussion as part of a search on a
problem we are having.
We have a monitor called Supermon. It allows us to pull status data out of
the kernel at 5100 Hz. It includes a server tool which allows processes to
connect via TCP and read the status data. We wrote it up at ALS 2001 if
you want to see the paper; see http://www.acl.lanl.gov/supermon/, with the
paper linked at the bottom.
Our goal is to have remote processes read the data over a TCP connection
at somewhere between 100 and 500 Hz, depending on the requirements.
The TCP delayed-ACK support in Linux is killing us: our maximum sample
rate is 25 Hz. It is so slow, in fact, that we may have to return to UDP,
which we hate to think about doing.
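For what it's worth, one well-known partial mitigation, which addresses the
Nagle/delayed-ACK interaction rather than delayed ACKs themselves, is to
disable Nagle on the sending socket with TCP_NODELAY. A minimal sketch in C,
assuming an already-open socket descriptor fd (the helper name is ours):

```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>
#include <stdio.h>

/* Disable Nagle on a TCP socket. This does NOT stop the peer from
 * delaying its ACKs; it only stops the local sender from holding
 * small segments while it waits for those delayed ACKs, which is
 * the classic lockstep that caps small periodic writes. */
int disable_nagle(int fd)
{
    int one = 1;
    if (setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one)) < 0) {
        perror("setsockopt(TCP_NODELAY)");
        return -1;
    }
    return 0;
}
```

Again, this is a sender-side workaround only; the receiver still delays its
ACKs, so it may or may not help a given traffic pattern.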
To recap, Nivedita proposed a TCP_DELACK sockopt to allow users to
completely disable delayed ACKs. Alexey thought this a bad idea. We're
not convinced it is a bad idea, and it has a lot of attraction for us.
Alexey's comments are plain text, and my comments are in <<< >>>.
Further comments welcome. I don't subscribe to netdev, however, so if you
want to include me please cc: me.
In-Reply-To: <200108280207.f7S275J18467@xxxxxxxxxxxxxxxxxxxxxx> from
"Nivedita Singhvi" at Aug 28, 2001 06:15:02 am
Well, actually, I have one question: how is the application supposed
to determine when it is in this "certain environment"?
<<< It's easy for us. We run, see 25 Hz sample rates, and know that
we need to turn OFF delayed ACKs.
Another consideration for you: what happens if NFS ever runs over
TCP in this environment? 25 Hz packet rates are really a bad thing.
NFS running under delayed-ACK constraints could limit NFS performance
to 25*8192 bytes/second. What should NFS do to fix this? >>>
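To make that ceiling concrete, the arithmetic is trivial (8192 bytes is the
typical NFS rsize/wsize, used here purely for illustration):

```c
/* Throughput cap when each fixed-size transfer is gated by one
 * delayed-ACK-limited round trip: rate_hz transfers of xfer_bytes each. */
long delack_throughput(long rate_hz, long xfer_bytes)
{
    return rate_hz * xfer_bytes;
}

/* delack_throughput(25, 8192) is 204800 bytes/s, i.e. 200 KB/s. */
```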
I see no way other than what is already implemented in TCP.
In any case, if you can propose another way to guess when
delayed ACKs harm performance, it should be added to the set of heuristics.
<<< I think heuristics are nice, but you can take them too far.
Heuristics are not a universal solution to every problem.
TCP heuristics are causing us a great deal of grief. I'd like
to be able to tell the kernel what to do in some circumstances. >>>
Actually, the opposite option would make sense; 2.4 really generates
too many ACKs in some circumstances. :-)
<<< which is OK for some environments; what's bad is low packet rates. >>>
I would rather expect that applications using this funny option should be
repaired so they do not need it.
<<< Not always true. Sometimes the app has a good reason to need this. >>>
You can obtain the required effect by blocking the setting of tp->ack.pingpong.
Keeping tp->ack.pingpong frozen at zero gives the maximal reasonable ACK frequency.
<<< The pingpong setting seems more like a hack to me than TCP_DELACK.
Looking at net/ipv4/tcp.c in 2.4.17, how pingpong can get set is
very unclear. Also, we have tried this (protonum obtained via
getprotobyname("tcp")):

    int quickack = 1;
    setsockopt(fd, protonum, TCP_QUICKACK, &quickack, sizeof(quickack));

and it has had NO effect; ACKs still come 40 ms apart.
Are we doing something wrong? >>>
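One plausible answer, offered as an assumption based on how TCP_QUICKACK is
described rather than something verified against 2.4.17: TCP_QUICKACK is not
a permanent mode. The kernel can fall back into delayed-ACK behavior after
the option is processed, so it has to be re-armed around every read rather
than set once at connect time. A sketch (quickack_read is a hypothetical
helper name):

```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>
#include <unistd.h>
#include <stdio.h>

/* TCP_QUICKACK is documented as non-permanent: it only toggles
 * quickack mode for the moment, and the kernel may drop back to
 * delayed ACKs afterwards. So re-assert it on every read instead
 * of setting it once at startup. */
ssize_t quickack_read(int fd, void *buf, size_t len)
{
    int one = 1;
    ssize_t n = read(fd, buf, len);
    /* Re-arm quickack after each read; a single setsockopt at
     * connect time is silently undone by the kernel. */
    if (setsockopt(fd, IPPROTO_TCP, TCP_QUICKACK, &one, sizeof(one)) < 0)
        perror("setsockopt(TCP_QUICKACK)");
    return n;
}
```

If the option really is being cleared like this, setting it once and then
reading for seconds at a time would show exactly the "NO effect" behavior
described above.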