Simon Kirby writes:
> This reminds me of the situation we experienced with the dst cache
> overflowing in early 2.2 kernels. This was a long time ago, when our
> traffic was only about 10 Mbits/second. We had recently upgraded from a
> 2.0 kernel. The dst cache was overflowing due to a bug in the garbage
> collector, and at the time, no messages were printed. It took me a
> _long_ time to figure out why connections to a server I hadn't
> connected to in a while would only work every so often, and not
> immediately like they should. I'm afraid this approach will have a
> similar effect, albeit (hopefully) only under an attack.
We are given more work than we have resources for (max_size); what can we
do but refuse? Though yes, by the time we refuse, we have already invested
quite a bit of work. (See the sketch below.)
Also remember we are looking at runs where 100% of incoming traffic brings
one new dst for every packet. How does that compare to the situation in
"real life"?
In the case of multiple devices, NAPI at least gives every device its share.
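
To make the refusal policy concrete, here is a minimal userspace sketch.
It is NOT the actual route.c; the names (dst_alloc, dst_gc, CACHE_MAX)
and the array-backed cache are illustrative only:

#include <stdio.h>
#include <stdlib.h>

struct dst_entry {
	int in_use;			/* still referenced by a flow? */
};

#define CACHE_MAX 4			/* stand-in for the max_size limit */
static struct dst_entry *cache[CACHE_MAX];
static int cache_count;

/* Garbage collector: reclaim every entry no longer referenced. */
static int dst_gc(void)
{
	int freed = 0;

	for (int i = 0; i < CACHE_MAX; i++) {
		if (cache[i] && !cache[i]->in_use) {
			free(cache[i]);
			cache[i] = NULL;
			cache_count--;
			freed++;
		}
	}
	return freed;
}

/* Allocate a new entry; if the cache is full and GC reclaims nothing,
 * refuse -- all the caller can do is drop the work. */
static struct dst_entry *dst_alloc(void)
{
	if (cache_count >= CACHE_MAX && dst_gc() == 0)
		return NULL;

	for (int i = 0; i < CACHE_MAX; i++) {
		if (!cache[i]) {
			struct dst_entry *e = calloc(1, sizeof(*e));

			if (!e)
				return NULL;
			e->in_use = 1;
			cache[i] = e;
			cache_count++;
			return e;
		}
	}
	return NULL;
}

int main(void)
{
	/* Attack-like pattern: every packet wants a brand-new dst and
	 * keeps it referenced, so GC can never reclaim anything. */
	for (int i = 0; i < 6; i++)
		printf("packet %d: %s\n", i,
		       dst_alloc() ? "new dst" : "refused (cache full)");
	return 0;
}

Under the attack pattern above, the first CACHE_MAX packets get entries
and every later one is refused -- which is exactly the behavior being
discussed.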
> Is it possible to have a dst LRU or a simpler approximation of such and
> recycle dst entries rather than deallocating/reallocating them? This
> would relieve a lot of work from the garbage collector and avoid the
> periodic large garbage collection latency. It could be tuned to only
> occur in an attack (I remember Alexey saying that the deferred garbage
> collection was implemented to reduce latency in normal operation).
I don't see how this can be done. Maybe others do?
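
For concreteness, here is a rough userspace sketch of what I take Simon
to mean, using a fixed pool with per-entry timestamps as the "simpler
approximation" of an LRU. All names are hypothetical, and it glosses
over exactly the parts that worry me: hash-chain removal, locking and
refcounting against the live dst cache.

#include <stdio.h>
#include <string.h>

struct dst_entry {
	unsigned int daddr;	/* lookup key: destination address */
	int refcnt;		/* >0: in use, must not be recycled */
	unsigned long last_use;	/* logical clock, 0 = slot empty */
};

#define POOL_SIZE 4
static struct dst_entry pool[POOL_SIZE];
static unsigned long clock_tick;

/* Find the entry for daddr, or recycle the least recently used
 * unreferenced slot in place -- no free()/alloc() churn, so there is
 * nothing left for a deferred garbage collector to do. */
static struct dst_entry *dst_lookup(unsigned int daddr)
{
	struct dst_entry *victim = NULL;

	clock_tick++;
	for (int i = 0; i < POOL_SIZE; i++) {
		if (pool[i].last_use && pool[i].daddr == daddr) {
			pool[i].last_use = clock_tick;	/* hit: touch */
			return &pool[i];
		}
		if (pool[i].refcnt == 0 &&
		    (!victim || pool[i].last_use < victim->last_use))
			victim = &pool[i];
	}
	if (!victim)
		return NULL;	/* every slot referenced: refuse */

	/* Miss: reinitialize the coldest free slot for the new flow. */
	memset(victim, 0, sizeof(*victim));
	victim->daddr = daddr;
	victim->last_use = clock_tick;
	return victim;
}

int main(void)
{
	unsigned int flows[] = { 1, 2, 3, 4, 5, 1, 6 };

	for (unsigned int i = 0; i < sizeof(flows) / sizeof(flows[0]); i++) {
		struct dst_entry *d = dst_lookup(flows[i]);

		printf("flow %u -> slot %ld\n", flows[i],
		       d ? (long)(d - pool) : -1L);
	}
	return 0;
}

The appeal, as I read it, is that recycling in place turns the periodic
large collection into constant small work per miss.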
Cheers.
--ro