
Re: 802.1q Was (Re: Plans for 2.5 / 2.6 ???

To: jamal <hadi@xxxxxxxxxx>
Subject: Re: 802.1q Was (Re: Plans for 2.5 / 2.6 ???
From: Gleb Natapov <gleb@xxxxxxxxxxx>
Date: Sun, 04 Jun 2000 15:22:16 +0000
Cc: Ben Greear <greearb@xxxxxxxxxxxxxxx>, rob@xxxxxxxxxxx, buytenh@xxxxxxx, netdev@xxxxxxxxxxx, gleb@xxxxxxxxxxxxxxxxxxxxx
Organization: NBase-Xyplex
References: <Pine.GSO.4.20.0006032106220.16434-100000@shell.cyberus.ca>
Sender: owner-netdev@xxxxxxxxxxx
jamal wrote:
> 
> On Sat, 3 Jun 2000, Ben Greear wrote:
> 
> > jamal wrote:
> >
> > > In fact I have never seen a single switch blade with more than 48 ports,
> > > but even that is beside the point. The point really is the design
> > > abstraction.
> >
> > I had a cisco with two FrameRelay 'ports' on it.  I added 200 PVC
> > 'devices' to the cisco setup.  Last time I'll mention it, so remember it!
> 
> We are talking about two different things. 'Ports' are _physical_. So you
> only had two ports; and unless you really understand Cisco's internal
> structuring, there is no point in making references to their 200 'devices'
> (that could be just the user interface showing stuff that
> people like to see).

In Linux there is no one-to-one mapping between physical ports and
network devices; tunneling (IP-in-IP encapsulation) and bridging are two
examples, and I am sure there are more.

> 
> >
> > > I will argue that you _cannot_ write a generic search algorithm for all
> > > these protocols. Unfortunately, if you enforce one, then the device search
> > > algorithm will have to be the same across the board.
> >
> > I see no need to even have a generic search algorithm; each protocol
> > implementation (ATM, FR, VLAN) can do whatever makes the most sense for it.
> >
> 
> I don't follow.
> 
> > > It goes without any argument that we have a very good worst-case estimate
> > > today, given the practical limits. You try adding all those thousands of
> > > VLANs as devices and I can _guarantee you_ that you are not optimizing for
> > > the common case.
> >
> > Ok, the question is where is the lookup 'hit' you are talking about.
> > Where is this searching that is slowing everything down?  Don't just
> > say there is a hit; show me the specific code or logic where this hit
> > takes place.
> >
> 
> You register_netdevice() each VLAN device (because you have a device for
> each VLAN).
> 
> > For incoming pkts, the packet is detected in eth.c as it comes off
> > the hardware.  I can immediately hash to find the VLAN device.
> > Constant time; worst case O(n), where n is the number of physical
> > Ethernet ports, and that only when configured to allow 4096 VLANs
> > per Ethernet device, which is fairly non-standard.
> >
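A per-port table indexed directly by VLAN id would indeed make the
receive-side lookup constant time. A minimal sketch in plain C (the struct
names and layout here are illustrative, not taken from the actual VLAN
patch):

```c
#include <stddef.h>

#define VLAN_N_VID 4096

struct net_device { int ifindex; };

/* One of these would hang off each physical Ethernet device. */
struct vlan_group {
    struct net_device *vlan_devices[VLAN_N_VID];
};

/* Direct index by the 12-bit VLAN id from the 802.1q tag: O(1),
 * independent of how many VLAN devices are registered. */
static struct net_device *vlan_find_dev(const struct vlan_group *grp,
                                        unsigned short vid)
{
    return grp->vlan_devices[vid & (VLAN_N_VID - 1)];
}
```

The cost is 4096 pointers of memory per physical port, which is the usual
time-for-space trade.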
> 
> I am not gonna bitch about how many devices you have, but in most cases
> 1024 per device is already overkill (including cross-port VLANs).
> So in the worst case, for 4 Ethernet ports, you have grown dev_base to over
> 15000 structures. Now go look at the associated code
> and tell me you don't see the repercussions.

If you really want to, you can create 15000 tunneling devices. The question
is why anybody would ever want to do such a thing.

> If you know of a smart way to optimize that please post it. You might get
> me to support you.

If you are talking about looking up a device by its name, then we could,
for instance, store the devices in a hash table instead of a linked list.
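Something along these lines (a sketch only: the hash function and bucket
count are made up, and the real dev_base code would also need locking):

```c
#include <stddef.h>
#include <string.h>

#define DEV_HASH_BUCKETS 256

struct net_device {
    char name[16];
    struct net_device *next_hash;   /* chain within one bucket */
};

static struct net_device *dev_hash[DEV_HASH_BUCKETS];

/* Toy string hash; anything with decent spread would do. */
static unsigned int dev_name_hash(const char *name)
{
    unsigned int h = 0;
    while (*name)
        h = h * 31 + (unsigned char)*name++;
    return h & (DEV_HASH_BUCKETS - 1);
}

static void dev_hash_insert(struct net_device *dev)
{
    unsigned int b = dev_name_hash(dev->name);
    dev->next_hash = dev_hash[b];
    dev_hash[b] = dev;
}

/* Walk only one bucket's chain instead of the whole device list. */
static struct net_device *dev_hash_find(const char *name)
{
    struct net_device *dev = dev_hash[dev_name_hash(name)];
    while (dev && strcmp(dev->name, name) != 0)
        dev = dev->next_hash;
    return dev;
}
```

With 15000 devices spread over 256 buckets, a by-name lookup walks ~60
entries on average instead of thousands.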

> Most of the manipulating code doesn't run in the critical data path, but
> you are adding unnecessary noise; and besides, my point is that _you don't_
> need to have a device per VLAN. I might convert if you optimize it for
> everyone else.
> 
> > I do; DHCP uses packet filters with hard-coded offsets into the
> > raw packet.  The 4 extra bytes throw it off by 4, and so it never thinks
> > it gets a packet on the right port.  See the patch on my web site if you
> > want to learn more.
> 
> Ok, so they use BPF. Either out of coolness or madness. Would a packet
> socket have sufficed here for Linux?
> 
> cheers,
> jamal
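For reference, the reason those hard-coded offsets break: the 4-byte 802.1q
tag (TPID 0x8100 plus TCI) sits between the source MAC and the original
EtherType, so every network-layer field moves down by 4. A minimal sketch
of the offset arithmetic (illustrative only, not the DHCP client's actual
filter code):

```c
#include <stdint.h>

/* Return the offset of the network-layer header in an Ethernet frame,
 * skipping a single 802.1q tag when one is present.  A filter that
 * always assumes offset 14 reads 4 bytes too early on tagged frames. */
static int network_offset(const uint8_t *frame)
{
    uint16_t ethertype = (uint16_t)((frame[12] << 8) | frame[13]);

    if (ethertype == 0x8100)     /* 802.1q TPID: tagged frame */
        return 14 + 4;           /* MAC header plus the VLAN tag */
    return 14;                   /* plain Ethernet II header */
}
```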

--
                        Gleb.
