On Tue, Sep 28, 2004 at 10:52:05PM -0400, jamal wrote:
>
> > You're right. rtmsg_info() is using GOODSIZE unnecessarily. I'll
> > write up a patch.
>
> But why? ;->
So that the alloc_skb() is slightly less likely to fail. The dumpers
gain a lot by using GOODSIZE since they can fill it up. As rtmsg_info
has no chance of getting anywhere near GOODSIZE we should provide a
more accurate estimate.
It also means that with netlink_trim() you'll save a realloc/copy.
Signed-off-by: Herbert Xu <herbert@xxxxxxxxxxxxxxxxxxx>
Dave, you can stop reading now :)
> Well, if you are gonna overrun the socket anyways, is there a point
> in delivering all that incomplete info?
>
> If you go ahead and deliver it anyways, you will be crossing
> kernel->userspace. Is it a worthy cause to do so?
Hang on a second, if we're going to overrun anyway, then we *can't*
deliver it to user-space. If we could deliver then we wouldn't be
having an overrun.
> > Can you please elaborate on "crossing into user space"? I don't think
> > I understand where these cycles are being wasted.
>
> delivering messages which get obsoleted by an overrun from kernel to user
> space uses up unnecessary cycles.
You'll have to spell it out for me I'm afraid :)
If we're overrunning then we can't deliver the message at hand. If you
are referring to the messages afterwards then the only way we can deliver
them is if the application lets us by clearing the queue. If you are
referring to the messages that are already on the queue then we've done
the work already so why shouldn't they stay?
> A dump may be harder given it keeps state. A get item which generates
> a huge dataset (don't ask me to give you an example) is going to cause
> overruns. Think multi-message formatted data.
That's a completely different story. For that problem I'd suggest that
we extend the dump paradigm to cover get as well. However, to design the
interface, we need to look at potential users of this. So please give
me an example :)
> > Not quite. Overrun errors are reported immediately.
>
> Yes, except they get reported sooner (by virtue of queue getting filled
> sooner) if you have a 4K sock buffer vs 1M if you are registered for the
> same events. I know it's a digression - just making a point.
The one with the 1M buffer may not overrun at all if it can process the
events fast enough.
> congestion - especially in a local scenario. Of course such queues have
> finite buffers - you just engineer it so the queue doesn't overflow and
> head of line blocking is tolerable. Either of those concerns not
> addressed, shit will hit the fan.
I don't see how you can engineer it so that it doesn't overflow. In
the example that you gave with interface address, the number of messages
generated is practically unbounded.
Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <herbert@xxxxxxxxxxxxxxxxxxx>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt