
Re: bugs in net/ipv6/mcast.c (fwd)

To: davem@xxxxxxxxxx
Subject: Re: bugs in net/ipv6/mcast.c (fwd)
From: YOSHIFUJI Hideaki / 吉藤英明 <yoshfuji@xxxxxxxxxxxxxx>
Date: Mon, 27 Oct 2003 11:07:10 +0900 (JST)
Cc: netdev@xxxxxxxxxxx, niteowl@xxxxxxxxxxxxx, pekkas@xxxxxxxxxx, yoshfuji@xxxxxxxxxxxxxx
In-reply-to: <Pine.LNX.4.44.0310252226470.12162-100000@netcore.fi>
Organization: USAGI Project
References: <Pine.LNX.4.44.0310252226470.12162-100000@netcore.fi>
Sender: netdev-bounce@xxxxxxxxxxx
In article <Pine.LNX.4.44.0310252226470.12162-100000@xxxxxxxxxx> (at Sat, 25 Oct 2003 22:27:27 +0300 (EEST)), Pekka Savola <pekkas@xxxxxxxxxx> says:

> Probably fixed already, but just in case...

Not yet in the bk tree.

> Hi.  In the latest linux-2.6.0-test8 source there are 2 bugs in 
> net/ipv6/mcast.c.
> In function inet6_mc_check() the if statements on lines 607 and 609 have
> extra semicolons that will cause the code to fail.
> 
>                 if (mc->sfmode == MCAST_INCLUDE && i >= psl->sl_count);
>                         rv = 0;
>                 if (mc->sfmode == MCAST_EXCLUDE && i < psl->sl_count);
>                         rv = 0;

Exactly. Patch follows.

===== net/ipv6/mcast.c 1.39 vs edited =====
--- 1.39/net/ipv6/mcast.c       Fri Oct 17 17:05:20 2003
+++ edited/net/ipv6/mcast.c     Mon Oct 27 11:01:27 2003
@@ -604,9 +604,9 @@
                        if (ipv6_addr_cmp(&psl->sl_addr[i], src_addr) == 0)
                                break;
                }
-               if (mc->sfmode == MCAST_INCLUDE && i >= psl->sl_count);
+               if (mc->sfmode == MCAST_INCLUDE && i >= psl->sl_count)
                        rv = 0;
-               if (mc->sfmode == MCAST_EXCLUDE && i < psl->sl_count);
+               if (mc->sfmode == MCAST_EXCLUDE && i < psl->sl_count)
                        rv = 0;
        }
        read_unlock(&ipv6_sk_mc_lock);
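
For readers unfamiliar with this pitfall, here is a minimal standalone
sketch (not part of the patch; the variable names are hypothetical and
only mirror the code above) of why the stray semicolon defeats the check:
the ';' ends the if statement with an empty body, so the indented
assignment below it runs unconditionally.

        #include <stdio.h>

        int main(void)
        {
                int sl_count = 4;
                int i = 2;      /* i < sl_count: the INCLUDE check should not fire */
                int rv = 1;

                /* Buggy form: the trailing ';' is an empty statement, so the
                 * assignment below executes regardless of the condition. */
                if (i >= sl_count);
                        rv = 0;
                printf("buggy rv = %d\n", rv);  /* prints 0 */

                /* Fixed form: the assignment is guarded by the condition. */
                rv = 1;
                if (i >= sl_count)
                        rv = 0;
                printf("fixed rv = %d\n", rv);  /* prints 1 */

                return 0;
        }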


-- 
Hideaki YOSHIFUJI @ USAGI Project <yoshfuji@xxxxxxxxxxxxxx>
GPG FP: 9022 65EB 1ECF 3AD1 0BDF  80D8 4807 F894 E062 0EEA
