
Re: TOE brain dump

To: "David S. Miller" <davem@xxxxxxxxxx>
Subject: Re: TOE brain dump
From: ebiederm@xxxxxxxxxxxx (Eric W. Biederman)
Date: 05 Aug 2003 11:25:57 -0600
Cc: Werner Almesberger <werner@xxxxxxxxxxxxxxx>, jgarzik@xxxxxxxxx, niv@xxxxxxxxxx, netdev@xxxxxxxxxxx, linux-kernel@xxxxxxxxxxxxxxx
In-reply-to: <20030804122632.65ba2122.davem@xxxxxxxxxx>
References: <20030802140444.E5798@xxxxxxxxxxxxxxx> <3F2BF5C7.90400@xxxxxxxxxx> <3F2C0C44.6020002@xxxxxxxxx> <20030802184901.G5798@xxxxxxxxxxxxxxx> <m1fzkiwnru.fsf@xxxxxxxxxxxxxxxxxxx> <20030804162433.L5798@xxxxxxxxxxxxxxx> <20030804122632.65ba2122.davem@xxxxxxxxxx>
Sender: netdev-bounce@xxxxxxxxxxx
User-agent: Gnus/5.09 (Gnus v5.9.0) Emacs/21.1
"David S. Miller" <davem@xxxxxxxxxx> writes:

> On Mon, 4 Aug 2003 16:24:33 -0300
> Werner Almesberger <werner@xxxxxxxxxxxxxxx> wrote:
> > Eric W. Biederman wrote:
> > > There is one place in low latency communications that I can think
> > > of where TCP/IP is not the proper solution.  For low latency
> > > communication the checksum is at the wrong end of the packet.
> > 
> > That's one of the few things ATM's AAL5 got right.
> Let's recall how long the IFF_TRAILERS hack from BSD lasted :-)

Putting the variable length headers on the end of a packet?  Or
was that something other than RFC893?

I think IPv6 solves that much more cleanly by simply deleting them.
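To make the checksum-placement point concrete: with the Internet checksum (RFC 1071) carried in the header, the sender has to walk the entire payload before the first byte of the packet can be finalized, whereas a trailer checksum (as in AAL5) can be accumulated while the data streams out. A minimal sketch of the RFC 1071 one's-complement sum, written for illustration rather than as the kernel's actual implementation:

```c
#include <stddef.h>
#include <stdint.h>

/* RFC 1071 Internet checksum: one's-complement sum of 16-bit
 * big-endian words, carries folded back, result complemented.
 * Illustrative only -- the kernel's csum routines are optimized
 * per-arch and incremental. */
static uint16_t inet_checksum(const uint8_t *data, size_t len)
{
	uint32_t sum = 0;

	while (len > 1) {
		sum += (uint32_t)data[0] << 8 | data[1];
		data += 2;
		len -= 2;
	}
	if (len)			/* odd trailing byte, zero-padded */
		sum += (uint32_t)data[0] << 8;

	while (sum >> 16)		/* fold carries into low 16 bits */
		sum = (sum & 0xffff) + (sum >> 16);

	return (uint16_t)~sum;		/* one's complement of the sum */
}
```

Note the single full pass over `data`: a header checksum forces this pass to complete before transmission starts, which is exactly the latency cost being discussed.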

> > But in the end, I think it doesn't really matter.
> I tend to agree on this one.
> And on the transmit side if you have more than 1 pending TX frame, you
> can always be prefetching the next one into the fifo so that by the
> time the medium is ready all the checksum bits have been done.

For large data transmissions that happens.
> In fact I'd be surprised if current generation 1g/10g cards are not
> doing something like this.

Well, at this point, before I propose anything concrete, I suspect I need
to profile some actual applications and see how things go.  But from a
very latency-sensitive perspective, I would be surprised if the
problem went away with faster technology.

For now I am happy just to insert the peculiar thought that latency
across the entire cluster/LAN is of great importance to some
applications.
