To: Ben Greear <greearb@xxxxxxxx>
Subject: Re: [Vlan-devel] Some more questions on multicast & VLAN.
From: Gleb Natapov <gleb@xxxxxxxxxxx>
Date: Tue, 10 Oct 2000 18:17:28 +0200
Cc: Ben Greear <greearb@xxxxxxxxxxxxxxx>, vlan-devel@xxxxxxxxxxxxxxxxxxxxx, VLAN Mailing List <vlan@xxxxxxxxxxxxxxxx>, "netdev@xxxxxxxxxxx" <netdev@xxxxxxxxxxx>
In-reply-to: <39E33CAF.1F9C51E2@agcs.com>; from greearb@agcs.com on Tue, Oct 10, 2000 at 08:58:39AM -0700
References: <39E0E2C0.559EE3F3@candelatech.com> <20001010104223.A8550@nbase.co.il> <39E33C3D.873C5D3D@candelatech.com> <20001010174806.A967@nbase.co.il> <39E33CAF.1F9C51E2@agcs.com>
Sender: owner-netdev@xxxxxxxxxxx
On Tue, Oct 10, 2000 at 08:58:39AM -0700, Ben Greear wrote:
> Gleb Natapov wrote:
> 
> > On Tue, Oct 10, 2000 at 08:56:45AM -0700, Ben Greear wrote:
> > >
> > [...]
> > > You mentioned that there was a race condition when using SMP, could you
> > > explain that one a bit more?  We could probably put a lock around it if
> > > we need to, in order to make it safe.
> >
> >  Not a race condition, but a deadlock. Look at linux/net/core/dev_mcast.c.
> > The VLAN set_multicast_list() function is called from dev_mc_upload()
> > while holding dev_mc_lock. When you try to add or delete an MC address
> > to/from the underlying device from your set_multicast_list(), you call
> > dev_mc_{delete,add}, and they also try to grab the same lock: oops,
> > deadlock ;)
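
To make that call chain concrete, here is a tiny userspace model of it
(plain pthreads; mc_upload(), vlan_set_multicast_list(), mc_add() and
mc_lock are just stand-ins for dev_mc_upload(), the VLAN handler,
dev_mc_add() and dev_mc_lock; this is not the real dev_mcast.c code):

/*
 * Simplified model of the deadlock, not kernel code.  A default Linux
 * pthread mutex is not recursive, just like the lock in dev_mcast.c,
 * so the second acquisition below never succeeds.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t mc_lock = PTHREAD_MUTEX_INITIALIZER;

static void mc_add(void)
{
        pthread_mutex_lock(&mc_lock);   /* second lock: blocks forever */
        /* ... add the address to the underlying device's MC list ... */
        pthread_mutex_unlock(&mc_lock);
}

/* The VLAN device's set_multicast_list() handler. */
static void vlan_set_multicast_list(void)
{
        mc_add();                       /* propagate the address downwards */
}

static void mc_upload(void)
{
        pthread_mutex_lock(&mc_lock);   /* first acquisition */
        vlan_set_multicast_list();      /* called with mc_lock still held */
        pthread_mutex_unlock(&mc_lock);
}

int main(void)
{
        mc_upload();                    /* never returns */
        printf("not reached\n");
        return 0;
}
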
> 
> Make the dev_mc_lock acquisition conditional:
> if (global_vlan_dev_mc_lock_already_grabbed == 1) {
>   /* don't get it again, VLAN code already obtained it... */
> }
> else {
>   /* go get the lock.. */
>  ....
> }

This way you'll get a race condition :) (see below). Anyway, I've written a
patch that gets rid of dev_mc_lock, and I hope it will be applied to the
kernel really soon.
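
The race: the flag test and the lock acquisition in that sketch are two
separate, unlocked steps, so on SMP one CPU can set the flag while holding
the lock, and a second CPU, seeing the flag, will skip the lock and walk
the MC list at the same time.  A rough userspace model of that window
(again with made-up names, not a proposed patch):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t dev_mc_lock = PTHREAD_MUTEX_INITIALIZER;
static int lock_already_grabbed;        /* the proposed global flag */
static int mc_list_touches;             /* stand-in for walking the MC list */

static void *cpu(void *arg)
{
        int i_locked = 0;

        if (!lock_already_grabbed) {            /* step 1: test the flag  */
                pthread_mutex_lock(&dev_mc_lock);
                lock_already_grabbed = 1;       /* step 2: claim the lock */
                i_locked = 1;
        }
        /*
         * If the flag was set by the *other* CPU (the one actually holding
         * dev_mc_lock), we reach this point without holding the lock
         * ourselves, and both CPUs touch the list concurrently.  A global
         * flag cannot tell "my call chain already locked it" apart from
         * "somebody else locked it".
         */
        mc_list_touches++;

        if (i_locked) {
                lock_already_grabbed = 0;
                pthread_mutex_unlock(&dev_mc_lock);
        }
        return arg;
}

int main(void)
{
        pthread_t a, b;

        pthread_create(&a, NULL, cpu, NULL);
        pthread_create(&b, NULL, cpu, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("mc_list_touches = %d\n", mc_list_touches);
        return 0;
}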

> 
> Sure, it isn't beautiful, but it might work until the subsystem is
> re-worked a bit...
> 
--
                        Gleb.
