
Re: [RFC] Limit the size of the IPV4 route hash.

To: Robin Holt <holt@xxxxxxx>
Subject: Re: [RFC] Limit the size of the IPV4 route hash.
From: Robin Holt <holt@xxxxxxx>
Date: Fri, 10 Dec 2004 17:40:37 -0600
Cc: Andrew Morton <akpm@xxxxxxxx>, davem@xxxxxxxxxxxxx, yoshfuji@xxxxxxxxxxxxxx, hirofumi@xxxxxxxxxxxxx, torvalds@xxxxxxxx, dipankar@xxxxxxx, laforge@xxxxxxxxxxxx, bunk@xxxxxxxxx, herbert@xxxxxxxxxxxx, paulmck@xxxxxxx, netdev@xxxxxxxxxxx, linux-kernel@xxxxxxxxxxxxxxx, gnb@xxxxxxx
In-reply-to: <20041210233700.GA25582@lnx-holt.americas.sgi.com>
References: <20041210190025.GA21116@lnx-holt.americas.sgi.com> <20041210114829.034e02eb.davem@davemloft.net> <20041210210006.GB23222@lnx-holt.americas.sgi.com> <20041210130947.1d945422.akpm@osdl.org> <20041210232722.GC24468@lnx-holt.americas.sgi.com> <20041210153848.5acacd0a.akpm@osdl.org> <20041210233700.GA25582@lnx-holt.americas.sgi.com>
Sender: netdev-bounce@xxxxxxxxxxx
User-agent: Mutt/1.4.1i
On Fri, Dec 10, 2004 at 05:37:00PM -0600, Robin Holt wrote:
> On Fri, Dec 10, 2004 at 03:38:48PM -0800, Andrew Morton wrote:
> > Robin Holt <holt@xxxxxxx> wrote:
> > >
> > > > The big risk is that someone has a too-small table for some specific
> > > > application and their machine runs more slowly than it should, but
> > > > they never notice.  I wonder if it would be possible to put a little
> > > > once-only printk into the routing code: "warning: route-cache chain
> > > > exceeded 100 entries; consider using the rhash_entries boot option".
> > > 
> > > Since the hash gets flushed every 10 seconds, what if we kept track
> > > of the maximum depth reached and, once it crosses a certain threshold,
> > > allocated a larger hash and replaced the old one with the new?  I do
> > > like the printk idea, so the admin can head off inconsistent
> > > performance early in the system's run.  We could even scale the hash
> > > size up based upon demand.
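
Concretely, something like the sketch below is what I had in mind.
rt_hash_table, the rtable chaining, and the 100-entry threshold are
stand-ins for whatever the real code would use, and locking is
hand-waved:

	/* needs <linux/kernel.h>; sketch only: walk one chain, remember the
	 * deepest chain seen so far, and warn once when it crosses a
	 * threshold so the admin can react with the rhash_entries option */
	static int rt_max_depth;

	static void rt_check_chain_depth(unsigned int hash)
	{
		static int warned;
		struct rtable *rth;
		int depth = 0;

		for (rth = rt_hash_table[hash].chain; rth; rth = rth->u.rt_next)
			depth++;

		if (depth > rt_max_depth)
			rt_max_depth = depth;

		if (depth > 100 && !warned) {
			warned = 1;
			printk(KERN_WARNING "route cache: hash chain %d entries "
			       "deep; consider the rhash_entries boot option\n",
			       depth);
		}
	}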
> > 
> > Once the system has been running for a while, the possibility of
> > allocating a decent number of physically-contiguous pages is basically
> > zero.
> > 
> > If we were to dynamically size it, we'd need to either use a new data
> > structure (slower) or use vmalloc() (slower, and it can fragment
> > vmalloc space).
> 
> Why do they need to be physically contiguous?  It is a hash, correct?

Sorry, I was asleep at the wheel; I failed even to grok your second
paragraph.  I'll fall back to agreeing with the printk, so the admin at
least knows that something is amiss.
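
To make sure I have it now: the table is one flat array, so growing it
means one allocation of the full new size, roughly like the sketch
below (alloc_rt_hash() and the fallback policy are made up for
illustration, not what the kernel actually does):

	/* needs <linux/slab.h> and <linux/vmalloc.h>; sketch only */
	static struct rt_hash_bucket *alloc_rt_hash(unsigned int entries)
	{
		size_t sz = entries * sizeof(struct rt_hash_bucket);
		struct rt_hash_bucket *t;

		t = kmalloc(sz, GFP_KERNEL);	/* wants physically contiguous pages */
		if (!t)
			t = vmalloc(sz);	/* virtually contiguous fallback:
						 * slower, fragments vmalloc space */
		return t;
	}

(And the caller would have to remember which allocator succeeded, since
kfree() and vfree() are not interchangeable.)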

Should we perhaps modify the output of /proc/net/rt_cache (or whatever
its name is) to include the hash bucket index, so people can see how
many bucket collisions their system is hitting?
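
Something like this, say -- hand-waving the real seq_file plumbing in
net/ipv4/route.c; the struct and field names below are approximations,
and the output format is invented:

	/* needs <linux/seq_file.h>; sketch only */
	static int rt_cache_seq_show(struct seq_file *seq, void *v)
	{
		struct rt_cache_iter_state *st = seq->private;
		struct rtable *rth = v;

		/* print the device plus the bucket this entry hashed to, so
		 * "sort | uniq -c" on the bucket column exposes long chains */
		seq_printf(seq, "%-16s %5d\n",
			   rth->u.dst.dev ? rth->u.dst.dev->name : "*",
			   st->bucket);
		return 0;
	}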
