Robert Olsson wrote:
> Did you check for performance changes too? From what I understand
> we can add a new lookup and a cache miss in the fast packet path.
Performance is better because under stress (lots of incoming packets
per second), the 1024 bytes of locks all stay in cache.
Since the hash table size is halved, rt_check_expire() and/or
rt_garbage_collect() have to touch fewer cache lines.
According to oprofile, an unpatched kernel was spending more than 15% of its
time in the route.c routines; with the patch I now see ip_route_input() at 1.88%.
> > Anyways, I think perhaps you should dynamically allocate this lock table.
> Maybe I should make a static sizing, (replace the 256 constant by something based on MAX_CPUS) ?
> IMO we should be careful about adding new complexity to the route hash.
> Also, was this dynamic gc_interval behavior needed to fix the overflow?
In my case, yes, because I have a huge route cache.
> gc_interval is only a sort of last-resort timer.
Actually, no: gc_interval controls rt_check_expire(), which cleans the hash
table of entries that are no longer in use.
All sufficiently old entries can be deleted smoothly, from a timer tick (so
network interrupts can still occur).
I found it was better to set gc_interval to 1 (so the timer fires every second
and examines 1/300 of the table slots, or more if the dynamic behavior
triggers), and to adjust the other parameters so that rt_garbage_collect()
doesn't run at all: rt_garbage_collect() can take forever to complete, blocking
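On a 2.6-era kernel, the tuning described above would have been done through the /proc knobs of the IPv4 route cache (removed along with the route cache itself in later kernels); the values here are illustrative examples, not recommendations:

```shell
# Fire the expiry timer every second:
echo 1 > /proc/sys/net/ipv4/route/gc_interval
# Entries idle longer than this (seconds) become expirable:
echo 300 > /proc/sys/net/ipv4/route/gc_timeout
# Raise the threshold high enough (value is an example) that
# rt_garbage_collect() is effectively never triggered:
echo 524288 > /proc/sys/net/ipv4/route/gc_thresh
```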