On Mon, 2004-08-23 at 05:39, sandr8 wrote:
Ok, in this case, retransmissions have to be unbilled.
To rewind to what I said a few emails ago:
The best place to bill is by looking at what comes out of the box ;->
We don't have that luxury in this case, so the next best place
is the qdisc level, because only at that level do you
know for sure whether packets made it out or not.
Since conntracking already does the job of marking the flow,
that covers the second part of your requirement, "on behalf of each flow".
What we are doing now is hacking around to try and reduce the injustice.
Conclusion: the current way of billing is _wrong_. The better way is to
have conntracking just mark, and the qdisc decide on billing or unbilling.
Have a billing table somewhere, indexed by flow, that increments these
counters.
For now I think that focusing on just sch.drops++ in case of a full
queue will help.
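To make the idea concrete, here is a minimal userspace sketch of a flow-indexed billing table driven from the two points where the qdisc knows a packet's fate (the names, sizes, and the mark-as-index scheme are all made up for illustration; this is not kernel code):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch: a billing table indexed by flow, updated where
 * the qdisc knows whether a packet made it out.  flow_bill, QLEN_MAX
 * and friends are invented names, not a real kernel API. */

#define FLOWS    16      /* toy table size */
#define QLEN_MAX 4       /* toy queue limit */

struct flow_bill {
    uint64_t bytes_billed;   /* bytes that actually left the box */
    uint64_t drops;          /* mirrors sch.drops++, but per flow */
};

static struct flow_bill table[FLOWS];
static int qlen;             /* current queue occupancy */

/* enqueue: returns 1 if accepted, 0 if the queue is full (drop). */
int toy_enqueue(uint32_t mark, int len)
{
    (void)len;                         /* billing deferred to dequeue */
    if (qlen >= QLEN_MAX) {
        table[mark % FLOWS].drops++;   /* dropped: never billed */
        return 0;
    }
    qlen++;
    return 1;
}

/* dequeue: the packet is really going out, so bill it now. */
void toy_dequeue(uint32_t mark, int len)
{
    qlen--;
    table[mark % FLOWS].bytes_billed += len;
}
```

With this split, a dropped packet never touches bytes_billed at all; only its drop counter moves.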
Let me cut email here for readability.
so, maybe we are saying the same thing but in different words :)
if we blindly look at layer 3 and unbill when a packet is dropped,
then the retransmission is already unbilled :)
it will be billed when it takes place, but the first transmission,
which was dropped, has been unbilled, so we are square.
all this without looking at layer 4.
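The "we are square" arithmetic can be shown with a toy bill/unbill counter (bill() and unbill() are hypothetical names, not an existing API):

```c
#include <assert.h>
#include <stdint.h>

/* Toy illustration: bill every packet blindly at layer 3, unbill on
 * drop, and the retransmission's later bill leaves the net total equal
 * to what really left the box.  No layer 4 knowledge needed. */

static int64_t billed;

static void bill(int len)   { billed += len; }
static void unbill(int len) { billed -= len; }
```

The drop and the later retransmission cancel out, without the biller ever knowing the two packets were related.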
what i was thinking about was mimicking the conntracking at
the device level: each device would have a singleton object
with the same buckets as the connection tracking. it could
store a lot of interesting information that would let queuing
disciplines better share the pain of drops and also perform
per-connection head drops instead of connection-unaware
tail drops. this would improve fairness and shorten the time
tcp sources need to get the congestion feedback, in a better
way than random early detection does.
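A minimal sketch of what per-connection head drop could look like, assuming a shared packet budget and per-flow backlog counters (everything here is illustrative):

```c
#include <assert.h>

/* Hypothetical sketch of per-connection head drop: when the shared
 * queue is full, drop the oldest packet of the *longest* flow rather
 * than the arriving packet, so the heaviest sender feels the loss and
 * feels it early.  All names and sizes are invented. */

#define NFLOWS 4
#define CAP    8            /* shared queue capacity, in packets */

static int qlen[NFLOWS];    /* per-flow backlog */
static int total;

/* returns the flow that lost a packet, or -1 if nothing was dropped */
int head_drop_enqueue(int flow)
{
    int victim = -1;

    if (total >= CAP) {
        /* pick the flow with the longest backlog as the victim */
        victim = 0;
        for (int f = 1; f < NFLOWS; f++)
            if (qlen[f] > qlen[victim])
                victim = f;
        qlen[victim]--;     /* head drop: oldest packet of that flow */
        total--;
    }
    qlen[flow]++;
    total++;
    return victim;
}
```

Dropping from the head means the gap appears in the oldest in-flight data, so the sender's duplicate ACKs start sooner than with a tail drop.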
having this structure at the device level would also be an
answer to the issue of packets cloned to multiple interfaces,
as we could keep a separate accounting for each interface
(which seems, afaik, reasonable... in most cases we would
account on a single interface, and we should also likely get
fewer hash collisions... no more than in the global conntrack
table, anyway).
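Here is a rough userspace sketch of that per-device singleton, assuming conntrack-style 5-tuple buckets (the structure, the toy hash, and all names are assumptions for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch of the per-device singleton: each device owns
 * its own flow hash, bucketed by the same 5-tuple the connection
 * tracker uses, so a packet cloned to several interfaces is counted
 * once per interface, independently. */

#define BUCKETS 64

struct tuple {
    uint32_t saddr, daddr;
    uint16_t sport, dport;
    uint8_t  proto;
};

struct dev_flows {
    uint64_t bytes[BUCKETS];   /* per-bucket byte accounting */
};

static uint32_t hash_tuple(const struct tuple *t)
{
    /* toy hash, a stand-in for the real conntrack hash */
    return (t->saddr ^ t->daddr ^ t->sport ^ t->dport ^ t->proto)
           % BUCKETS;
}

void account(struct dev_flows *dev, const struct tuple *t, int len)
{
    dev->bytes[hash_tuple(t)] += len;
}
```

Because each device carries its own table, the same flow can accumulate different totals on different interfaces without any shared state.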
furthermore, the per-bucket lock you suggested, which should
be a good compromise, would also not "interfere" from one
interface to the other. well... maybe, as long as enqueues
and dequeues on the same device stay serialized (thanks to
dev->queue_lock), we should not need that further lock at all.
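For completeness, a sketch of what the per-bucket lock could look like in userspace (pthread mutexes stand in for kernel spinlocks; if dev->queue_lock already serializes the path, these would indeed be redundant):

```c
#include <pthread.h>
#include <stdint.h>

/* Hypothetical sketch of per-bucket locking: one lock per hash bucket
 * and one whole structure per device, so writers on different
 * interfaces, or on different buckets of the same interface, never
 * contend with each other.  All names are illustrative. */

#define BUCKETS 64

struct dev_flows {
    pthread_mutex_t lock[BUCKETS];
    uint64_t        bytes[BUCKETS];
};

void dev_flows_init(struct dev_flows *d)
{
    for (int i = 0; i < BUCKETS; i++)
        pthread_mutex_init(&d->lock[i], NULL);
}

void account_locked(struct dev_flows *d, uint32_t bucket, int len)
{
    pthread_mutex_lock(&d->lock[bucket % BUCKETS]);
    d->bytes[bucket % BUCKETS] += len;
    pthread_mutex_unlock(&d->lock[bucket % BUCKETS]);
}
```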
does it make sense?