
Re: [RFC] Limit the size of the IPV4 route hash.

To: Robin Holt <holt@xxxxxxx>
Subject: Re: [RFC] Limit the size of the IPV4 route hash.
From: Andrew Morton <akpm@xxxxxxxx>
Date: Fri, 10 Dec 2004 15:38:48 -0800
Cc: holt@xxxxxxx, davem@xxxxxxxxxxxxx, yoshfuji@xxxxxxxxxxxxxx, hirofumi@xxxxxxxxxxxxx, torvalds@xxxxxxxx, dipankar@xxxxxxx, laforge@xxxxxxxxxxxx, bunk@xxxxxxxxx, herbert@xxxxxxxxxxxx, paulmck@xxxxxxx, netdev@xxxxxxxxxxx, linux-kernel@xxxxxxxxxxxxxxx, gnb@xxxxxxx
In-reply-to: <20041210232722.GC24468@xxxxxxxxxxxxxxxxxxxxxxxxx>
References: <20041210190025.GA21116@xxxxxxxxxxxxxxxxxxxxxxxxx> <20041210114829.034e02eb.davem@xxxxxxxxxxxxx> <20041210210006.GB23222@xxxxxxxxxxxxxxxxxxxxxxxxx> <20041210130947.1d945422.akpm@xxxxxxxx> <20041210232722.GC24468@xxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: netdev-bounce@xxxxxxxxxxx
Robin Holt <holt@xxxxxxx> wrote:
>
> > The big risk is that someone has a too-small table for some specific
> > application and their machine runs more slowly than it should, but they
> > never notice.  I wonder if it would be possible to put a little once-only
> > printk into the routing code: "warning route-cache chain exceeded 100
> > entries: consider using the rhash_entries boot option".
> 
> Since the hash gets flushed every 10 seconds, what if we kept track of
> the maximum chain depth reached and, once it crosses a certain threshold,
> allocated a larger hash and replaced the old one with the new?  I do like
> the printk idea, since it lets the admin head off inconsistent performance
> early in the system's run cycle.  We could even scale the hash size up
> based on demand.
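
Roughly, the once-only warning could look like the sketch below.  This is
not the actual net/ipv4/route.c code: rt_hash_table[], the u.rt_next
chaining and the RT_CHAIN_WARN_DEPTH threshold are only assumed here.

/*
 * Sketch only -- assumes a 2.6-style rt_hash_table[] whose buckets are
 * chained through u.rt_next; RT_CHAIN_WARN_DEPTH is invented for this
 * example.
 */
#define RT_CHAIN_WARN_DEPTH	100

static int rt_chain_warned;		/* fire the warning only once */

static void rt_check_chain_depth(unsigned int hash)
{
	struct rtable *rth;
	int depth = 0;

	/* Count the entries hanging off this bucket. */
	for (rth = rt_hash_table[hash].chain; rth; rth = rth->u.rt_next)
		depth++;

	if (depth > RT_CHAIN_WARN_DEPTH && !rt_chain_warned) {
		rt_chain_warned = 1;
		printk(KERN_WARNING "route cache: chain %u has %d entries; "
		       "consider the rhash_entries= boot option\n",
		       hash, depth);
	}
}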

Once the system has been running for a while, the chance of allocating a
decent number of physically-contiguous pages is basically zero.

If we were to size it dynamically, we'd need either a new data structure
(slower) or vmalloc() (slower, and it can fragment vmalloc space).
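
To illustrate the trade-off, here is a hedged sketch of the allocation
fallback a late resize would force (names are illustrative, not the
actual route.c path): try for physically-contiguous pages first, and
drop back to vmalloc() when fragmentation defeats that.

#include <linux/mm.h>
#include <linux/gfp.h>
#include <linux/vmalloc.h>

/*
 * Sketch only: first choice is physically-contiguous pages, which are
 * scarce once memory has fragmented; the fallback is vmalloc(), which
 * is only virtually contiguous and can fragment vmalloc space.
 */
static void *alloc_hash_table(unsigned long size, int *used_vmalloc)
{
	void *table;

	table = (void *)__get_free_pages(GFP_KERNEL | __GFP_NOWARN,
					 get_order(size));
	if (table) {
		*used_vmalloc = 0;
		return table;
	}

	*used_vmalloc = 1;
	return vmalloc(size);
}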
