On Mon, 2004-09-27 at 23:59, Herbert Xu wrote:
> On Mon, Sep 27, 2004 at 11:45:25PM -0400, jamal wrote:
> > fixing the NLM_GOODSIZE issue is a very good first step.
>
> Well I'm afraid that it doesn't help in your interface address example
> because rtmsg_ifa() already allocates a buffer of (approximately) the
> right size. That is, it doesn't use NLM_GOODSIZE at all.
Er, what about the host-scope route messages generated by the same
script? ;->
I am not sure if the IFAs cause any issues by themselves, but they
definitely contribute.
> > Adding congestion control would be harder but not out of question.
>
> But the question is who are you going to throttle? If you throttle
> the source of the messages then you're going to stop people from adding
> or deleting IP addresses which can't be right.
The state is per socket. You may need an intermediate queue which feeds
each user socket registered for the event. The socket queue essentially
acts as a retransmitQ for broadcast state. Just waving my hands and
throwing ideas around here, of course.
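To make the handwaving slightly more concrete, here is a toy sketch of
that idea - all names, sizes and the queue discipline are made up for
illustration, and none of this is kernel code. A shared ring absorbs
every broadcast event, and each registered reader drains it through its
own cursor, so the ring doubles as the retransmitQ:

#define QLEN 128

struct evq {
	unsigned long ev[QLEN];
	unsigned long seq;	/* total events ever produced */
};

struct reader {
	unsigned long seq;	/* next event this reader expects */
};

/* Producer never blocks; old slots are simply overwritten. */
static void evq_post(struct evq *q, unsigned long e)
{
	q->ev[q->seq % QLEN] = e;
	q->seq++;
}

/*
 * Consumer: 1 = got an event, 0 = caught up, -1 = overrun
 * (the reader was lapped and must reread the full state).
 */
static int evq_get(struct evq *q, struct reader *r, unsigned long *e)
{
	if (r->seq == q->seq)
		return 0;
	if (q->seq - r->seq > QLEN) {
		r->seq = q->seq;	/* skip ahead; caller redumps */
		return -1;
	}
	*e = q->ev[r->seq % QLEN];
	r->seq++;
	return 1;
}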
> If you move the netlink sender into a different execution context and
> throttle that then that's just extending the receive queue length by
> stealth.
>
> So I'm afraid I don't see how congestion control could be applied in
> *this* particular context.
>
We can't control the user-space script, for example, that caused those
events - in fact, we shouldn't. Congestion control in this context
equates to a desire not to overload the reader (of events). In other
words, if you know the reader's capacity for swallowing events, then
you don't exceed that rate when sending to said reader. Knowing the
capacity requires even more state.
So on that thought, let's continue with the handwaving approach of an
intermediate queue. Congestion control would mean the puck stops at
this queue and user space doesn't get bogged down reading meaningless
state.
The problem is, as I said earlier, that your rate of consumption of
events is going to be bottlenecked by the slowest, most
socket-buffer-deprived reader. If you create the intermediate queue I
described above, it gets to absorb the massive bursts of messages
before any socket sees them. If this queue gets full, there is no point
in sending any message to any waiting socket: just overrun them all
immediately, and the readers get forced to reread the state. Of course,
this queue will have to be larger than any of the active sockets' recv
queues.
You will need one such queue per event type - and yes, there may be
scalability issues.
The moral of this is: you could do it if you wanted to - it just ain't
trivial.
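For the record, the "overrun them all" policy could look something like
this - again a made-up sketch, with a hypothetical fixed reader table
(real code would track registered sockets dynamically):

#include <limits.h>

#define QLEN		256
#define NREADERS	8

struct evq2 {
	unsigned long ev[QLEN];
	unsigned long seq;		/* events produced so far */
	unsigned long rseq[NREADERS];	/* per-reader consume points */
	int overrun[NREADERS];		/* forces a reread of state */
};

static void evq2_post(struct evq2 *q, unsigned long e)
{
	unsigned long slowest = ULONG_MAX;
	int i;

	for (i = 0; i < NREADERS; i++)
		if (q->rseq[i] < slowest)
			slowest = q->rseq[i];

	if (q->seq - slowest >= QLEN) {
		/*
		 * Queue is full: no point queueing for anyone.
		 * Overrun all readers at once and skip them ahead,
		 * so each one rereads the state instead of chewing
		 * through stale events.
		 */
		for (i = 0; i < NREADERS; i++) {
			q->overrun[i] = 1;
			q->rseq[i] = q->seq;
		}
	}
	q->ev[q->seq % QLEN] = e;
	q->seq++;
}

Resetting every cursor at once is exactly the early drop: once the
queue is full, stale events are worthless to everybody.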
> > > So just bite the bullet and reread the system state by issuing dump
> > > operations.
> >
> > We may as well choose to document it as such, mostly because of the
> > issue I described above. We shouldn't give up so easily though ;->
>
> Well IMHO this is not giving up at all.
>
> Think of it another way. Monitoring routes is like maintaining a
> program. Normally you just fix the bugs as they come. But if the
> bug reports are coming in so fast that you can't keep up, perhaps
> it's time to throw it away and rewrite it from scratch :)
Except that you can drop incoming bug reports, and drop them early -
which is the point of that intermediate queue.
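On the reader side, rereading the state is something netlink already
supports: when recv() fails with ENOBUFS because the socket was
overrun, you throw the event stream away and issue a dump. A minimal
sketch, with error handling trimmed and only the address table redumped
(a real monitor would redump routes with RTM_GETROUTE too):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>

/* Resync: ask for the full address table with a dump request. */
static void redump(int fd)
{
	struct {
		struct nlmsghdr nlh;
		struct ifaddrmsg ifa;
	} req;

	memset(&req, 0, sizeof(req));
	req.nlh.nlmsg_len = NLMSG_LENGTH(sizeof(struct ifaddrmsg));
	req.nlh.nlmsg_type = RTM_GETADDR;
	req.nlh.nlmsg_flags = NLM_F_REQUEST | NLM_F_DUMP;
	req.ifa.ifa_family = AF_UNSPEC;
	send(fd, &req, req.nlh.nlmsg_len, 0);
}

int main(void)
{
	struct sockaddr_nl snl;
	char buf[8192];
	int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);

	memset(&snl, 0, sizeof(snl));
	snl.nl_family = AF_NETLINK;
	snl.nl_groups = RTMGRP_IPV4_IFADDR | RTMGRP_IPV4_ROUTE;
	bind(fd, (struct sockaddr *)&snl, sizeof(snl));

	for (;;) {
		ssize_t n = recv(fd, buf, sizeof(buf), 0);
		if (n < 0 && errno == ENOBUFS) {
			/* We were overrun and lost events; resync. */
			fprintf(stderr, "overrun, redumping\n");
			redump(fd);
			continue;
		}
		if (n <= 0)
			break;
		/* ... walk the nlmsghdr chain in buf ... */
	}
	close(fd);
	return 0;
}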
BTW, Davem gets away with this congestion control algorithm all the
time. Heck, I think his sanity survives because of it - I bet you he's
got this thread under congestion control right now ;->
cheers,
jamal