
Re: [RFC] High Performance Packet Classification for tc framework

To: hadi@xxxxxxxxxx
Subject: Re: [RFC] High Performance Packet Classification for tc framework
From: "David S. Miller" <davem@xxxxxxxxxx>
Date: Thu, 7 Aug 2003 13:05:02 -0700
Cc: nf@xxxxxxxxx, linux-net@xxxxxxxxxxxxxxx, netdev@xxxxxxxxxxx
In-reply-to: <1060286331.1025.73.camel@xxxxxxxxxxxxxxxx>
References: <200307141045.40999.nf@xxxxxxxxx> <1058328537.1797.24.camel@xxxxxxxxxxxxxxxx> <3F16A0E5.1080007@xxxxxxxxx> <1059934468.1103.41.camel@xxxxxxxxxxxxxxxx> <3F2E5CD6.4030500@xxxxxxxxx> <1060012260.1103.380.camel@xxxxxxxxxxxxxxxx> <3F302E04.1090503@xxxxxxxxx> <1060286331.1025.73.camel@xxxxxxxxxxxxxxxx>
Sender: netdev-bounce@xxxxxxxxxxx
On 07 Aug 2003 15:58:51 -0400
jamal <hadi@xxxxxxxxxx> wrote:

> > Yes, it does. Still the question is how to solve this
> > generally. Consider the following example ruleset:
> > 
> > 1) src ip 10.0.0.0/30 dst ip 20.0.0.0/20
> > 2) src ip 10.0.0.0/28 dst ip 20.0.0.0/22
> > 3) src ip 10.0.0.0/26 dst ip 20.0.0.0/24
> > 4) src ip 10.0.0.0/24 dst ip 20.0.0.0/26
> > 5) src ip 10.0.0.0/22 dst ip 20.0.0.0/28
> > 6) src ip 10.0.0.0/20 dst ip 20.0.0.0/30
> > 
> > So you have 1 src ip hash and #buckets(src ip hash) many
> > dst ip hashes. In order to achieve maximum performance
> > you have to minimize the number of collisions in the
> > hash buckets. How would you choose the hash function
> > and what would the construction look like?
> > 
> 
> It can be done by using the masks - but it would look really ugly. I
> suppose just providing a good user interface is valuable.

If you input all the keys into the Jenkins hash, how does
it perform?  Has anyone even tried that and compared it
to all of these fancy multi-level, tree-like hash schemes?

I think Jenkins would work very well for exactly this kind
of application.  And it's fully available to the entire kernel
via linux/jhash.h and already in use by other things such
as the routing cache and the netfilter conntrack code.

