
Re: [PATCH + RFC] neighbour/ARP cache scalability

To: YOSHIFUJI Hideaki / 吉藤英明 <yoshfuji@xxxxxxxxxxxxxx>
Subject: Re: [PATCH + RFC] neighbour/ARP cache scalability
From: Pekka Savola <pekkas@xxxxxxxxxx>
Date: Tue, 21 Sep 2004 18:58:05 +0300 (EEST)
Cc: laforge@xxxxxxxxxxxx, <netdev@xxxxxxxxxxx>
In-reply-to: <20040922.001448.73843048.yoshfuji@linux-ipv6.org>
Sender: netdev-bounce@xxxxxxxxxxx
On Wed, 22 Sep 2004, YOSHIFUJI Hideaki / 吉藤英明 wrote:
> > > It's worse in the sense that there is more space in each subnet for
> > > doing aggressive probing -- but this may not be a big issue with a
> > > good algorithm and a threshold.
> > 
> > So what is that 'good algorithm'?  The current Linux algorithm is,
> > from my point of view (only tested with IPv4), not very good when it
> > comes to high numbers of neighbours.
> 
> Well, of course, we should limit number of total entries.
> 
> If we have enough memory for usual use,
> we should not purge the "probably reachable" neighbour entries
> (REACHABLE, STALE, DELAY, and PROBE).
> Probably, we should split neighbour entries to two parts.
>  - INCOMPLETE
>  - REACHABLE, STALE, DELAY and PROBE
> Which means, we should NOT let unknown entries purge "known" entries.
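
The two-class split proposed above could be sketched roughly as follows.
This is a user-space illustration only -- the state names mirror the idea
but are not the kernel's NUD_* values, and the table layout is made up:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative neighbour states, mirroring the two classes above;
 * these are NOT the kernel's NUD_* bit values. */
enum nud_state { INCOMPLETE, REACHABLE, STALE, DELAY, PROBE };

struct neigh {
    enum nud_state state;
    int in_use;              /* slot occupied */
};

/* Pick a victim when the table is over threshold: prefer INCOMPLETE
 * (unresolved, attacker-fillable) entries, and fall back to a "known"
 * entry only if no INCOMPLETE entry exists.  Returns index or -1. */
static int pick_victim(struct neigh *tbl, size_t n)
{
    int fallback = -1;
    for (size_t i = 0; i < n; i++) {
        if (!tbl[i].in_use)
            continue;
        if (tbl[i].state == INCOMPLETE)
            return (int)i;       /* unknown entries go first */
        if (fallback < 0)
            fallback = (int)i;   /* known entry, last resort */
    }
    return fallback;
}
```

With this policy a flood of unresolvable addresses can only churn the
INCOMPLETE class; established neighbours survive until nothing else is
left to evict.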

This still doesn't take a stance on rate-limiting the ND/ARP packets
for the case where there is still enough memory, but some kind of
attack is clearly underway.  Should it still be done?  Consider 100Kpps
of router-generated ARP/ND probes -- not good!
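One way to cap router-generated probes, independent of the memory limit,
would be a token bucket per interface.  A minimal sketch, with made-up
rate and burst numbers and a seconds-granularity clock for illustration:

```c
#include <assert.h>

/* Hypothetical per-interface limiter for outgoing ARP/ND probes;
 * rate and burst values here are illustrative only. */
struct probe_limit {
    long tokens;       /* probes we may still send */
    long max_burst;    /* bucket depth */
    long rate;         /* tokens refilled per second */
    long last_refill;  /* time of last refill, in seconds */
};

/* Refill by elapsed time, then consume one token per probe.
 * Returns 1 if the probe may be sent, 0 if it must be dropped. */
static int probe_allowed(struct probe_limit *pl, long now)
{
    long elapsed = now - pl->last_refill;
    if (elapsed > 0) {
        pl->tokens += elapsed * pl->rate;
        if (pl->tokens > pl->max_burst)
            pl->tokens = pl->max_burst;
        pl->last_refill = now;
    }
    if (pl->tokens <= 0)
        return 0;      /* over rate: drop the probe silently */
    pl->tokens--;
    return 1;
}
```

Dropped probes simply delay resolution for the (likely bogus) target,
so legitimate traffic only sees extra latency under attack, not loss of
existing neighbour state.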

-- 
Pekka Savola                 "You each name yourselves king, yet the
Netcore Oy                    kingdom bleeds."
Systems. Networks. Security. -- George R.R. Martin: A Clash of Kings

