
Re: neighbour cache vs. invalid addresses

To: Werner Almesberger <almesber@xxxxxxxxxxx>, netdev@xxxxxxxxxxx
Subject: Re: neighbour cache vs. invalid addresses
From: "James R. Leu" <jleu@xxxxxxxxxxxxxx>
Date: Sat, 29 Apr 2000 14:57:25 -0500
In-reply-to: <200004291700.TAA03871@xxxxxxxxxxxxxxxx>; from Werner Almesberger on Sat, Apr 29, 2000 at 07:00:37PM +0200
Organization: none
References: <200004291700.TAA03871@xxxxxxxxxxxxxxxx>
Reply-to: jleu@xxxxxxxxxxxxxx
Sender: owner-netdev@xxxxxxxxxxx
On Sat, Apr 29, 2000 at 07:00:37PM +0200, Werner Almesberger wrote:
> In non-broadcast multiple-access networks (NBMA) such as Classical IP
> over ATM (CLIP), neither broadcast nor multicast have any useful
> semantics. Right now, I catch this in neigh_table->constructor and
> return -EINVAL.

I hope you mean that it is not a trivial mapping onto the current neigh_table
setup.  Broadcast and multicast do have defined meanings on CLIP interfaces;
mapping those meanings onto the neigh_table is where the problem comes in.

> Is this the right approach ? Or should I return success, accept the
> bogus neighbour entry (could this upset the neighbour cache ?), and
> blackhole the entire mess afterwards via neigh->ops and
> neigh->output ?
>
> (Background: some applications seem to insist on sending broadcast
> or multicast even on interfaces that have neither IFF_BROADCAST nor
> IFF_MULTICAST set. According to people who have such applications,
> the current approach makes the stack believe that there is a memory
> shortage, and shrink the neighbour cache, which is undesirable.
> Furthermore, the offending packets get killed before they show up
> on tcpdump, which makes it harder to debug the "network" problem.)

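The blackhole alternative would look roughly like this -- again only a
sketch with made-up names, to make the trade-off concrete.  The entry then
sits in the cache and the offending packets die in one known place, instead
of the failed constructor being mistaken for memory pressure:

#include <linux/types.h>
#include <linux/errno.h>
#include <linux/socket.h>        /* AF_INET */
#include <linux/skbuff.h>
#include <linux/rtnetlink.h>
#include <net/neighbour.h>
#include <net/route.h>

/* Every output path of a bogus entry ends up here. */
static int nbma_blackhole_output(struct sk_buff *skb)
{
        kfree_skb(skb);
        return -ENETDOWN;
}

static struct neigh_ops nbma_blackhole_ops = {
        .family           = AF_INET,
        .output           = nbma_blackhole_output,
        .connected_output = nbma_blackhole_output,
        .hh_output        = nbma_blackhole_output,
        .queue_xmit       = nbma_blackhole_output,
};

/* In the constructor, instead of returning -EINVAL: */
static int nbma_neigh_construct_accepting(struct neighbour *neigh)
{
        u32 ip = *(u32 *) neigh->primary_key;

        neigh->type = inet_addr_type(ip);
        if (neigh->type != RTN_UNICAST) {
                neigh->nud_state = NUD_NOARP;   /* never solicited */
                neigh->ops = &nbma_blackhole_ops;
                neigh->output = neigh->ops->output;
                return 0;       /* entry is accepted but inert */
        }
        /* ... normal unicast setup ... */
        return 0;
}

One caveat: if I remember the output path right, a drop in neigh->output
still happens before the packet tap in dev_queue_xmit(), so tcpdump would
still not see the packets; that part would need separate handling.
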
The reason I want multicast on CLIP (or another ATM interface type) is
an application I maintain that uses it for neighbor discovery.

Werner, is there a discussion on the ATM list about how multicast and
broadcast will (or could) work on ATM?

-- 
James R. Leu
