I just tested this and it does appear to improve things. I have to
think a little about how else to break it. But overall it seems to be
a good change to make.
On Sat, 2004-10-16 at 07:30, Herbert Xu wrote:
> Before we start please bear in mind that netlink is fundamentally
> an *unreliable* protocol. This is the price we pay in order to use
> it in all the contexts that we do. So what we're looking for here
> is not how to make netlink 100% reliable, but what we can do to
> improve the quality of its implementation.
> I have a proposal for the specific case of overruns with netlink
> broadcast messages generated in a synchronous context typified
> by the ifconfig example that Jamal gave.
> In such contexts it is possible for the sender to sleep. However, we
> don't want to delay them indefinitely since the system must progress
> even in the presence of idle multicast listeners. I also have strong
> reservations about introducing any additional queues since all the
> ones I've seen don't deliver anything over and above what you can
> achieve by increasing the receive queue of the listener itself.
> Now I noticed that on SMP machines Jamal's case works successfully.
> That is, ip monitor is able to keep up with the flood generated by ifconfig.
> In fact, what's happening on UP is that in the time slice given to
> the sender --- ifconfig, the kernel is able to generate a lot more
> messages than what the average netlink receive queue can accommodate.
> So here is my proposal: if we detect signs of impending congestion
> in netlink_broadcast(), and that we're in a sleepable context, then
> we yield().
> This gives the receivers a chance to pull down the messages without
> having the sender spinning indefinitely. I've tested it on my UP
> machine and it does resolve the problem for ip monitor.
> Comments anyone?