On Mon, 4 Oct 2004 22:15:25 +0200
Harald Welte <laforge@xxxxxxxxxxxx> wrote:
> If we go back to the initial reason for all those modifications, it was
> deployment in large networks. The gc_thresh3 default of 1k is very
> small, even if you have only 4 interfaces with /24, you will already max
> out this default.
> Now that the hash distribution is better, and the table resized
> dynamically, we scale better with large neighbour caches.
> I was thinking of registering with some notifiers and tuning gc_thresh3
> automatically to be at least as large as the theoretical number of
> immediate neighbours. At least for multiple /24 and /16 I think this is
> still reasonable.
> As soon as we go further up (/8 or ipv6) the limit basically becomes
> 'unlimited', since we won't be able to receive 2^24 arp-causing packets
> per second anyway.
> Or be even less conservative and say: there is no limit for neighbour
> cache entries? You mentioned that *BSD didn't have a limit, IIRC.
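For concreteness, the arithmetic behind the prefix sizes mentioned above works out as follows. This is a quick sketch, not kernel code; the 1024 figure is the gc_thresh3 default under discussion, and the helper name is just for illustration:

```python
# Theoretical on-link neighbour count for an IPv4 prefix, compared with
# the gc_thresh3 default of 1024 mentioned in the thread.

GC_THRESH3_DEFAULT = 1024  # historical default under discussion

def max_neighbours(prefix_len: int) -> int:
    """Usable host addresses in an IPv4 /prefix_len
    (network and broadcast addresses excluded)."""
    return (1 << (32 - prefix_len)) - 2

# Four interfaces, each on a /24, already brush against the 1k default:
assert 4 * max_neighbours(24) == 1016      # 4 * 254
# A single /16 dwarfs it:
assert max_neighbours(16) == 65534
# A /8 is ~2^24 hosts -- effectively unlimited in practice, since no box
# will receive that many distinct arp-causing packets per second anyway:
assert max_neighbours(8) == 16777214
```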
I think tying it to the subnet size is wrong, because the real
cap is the routing cache size. Remember the email where I was
talking about that?
I'm nearly ambivalent about a boundless neighbour cache. I hate
to think about having tons of crap sitting in the neighbour cache
unused and sucking up memory. BSD's scheme works because when routing
entries die, so do the neighbour entries they point to, so they have no
need for garbage collection like we do.
So right now I think a better idea is to have the routing cache tell
the neigh table how big it could get, or something like that.
It is a thorny issue, since usually routing cache --> neighbour cache
associations are VERY many to one, except in these weird corner cases.
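To illustrate that many-to-one point, here is a toy sketch (hypothetical addresses and resolver, nothing like the real kernel data structures): many routing-cache entries, one per destination, typically all resolve to the same next-hop neighbour, so a routing-cache-sized bound on the neighbour cache is a very loose cap.

```python
# Toy next-hop resolution: on-link hosts are their own next hop,
# everything else goes via the default gateway (addresses are the
# RFC 5737 documentation ranges, chosen arbitrarily).

def next_hop(dst: str,
             local_prefix: str = "192.0.2.",
             gateway: str = "192.0.2.1") -> str:
    """Hypothetical resolver for illustration only."""
    return dst if dst.startswith(local_prefix) else gateway

# 100 distinct off-link destinations -> 100 routing-cache entries...
routes = {f"198.51.100.{i}": next_hop(f"198.51.100.{i}") for i in range(100)}

# ...but they all share a single neighbour entry (the gateway):
neighbours = set(routes.values())
assert len(routes) == 100
assert len(neighbours) == 1
```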