| To: | Gleb Natapov <gleb@xxxxxxxxxxx> |
|---|---|
| Subject: | Re: netlink drops messages. |
| From: | Chris Wedgwood <cw@xxxxxxxx> |
| Date: | Fri, 19 Jan 2001 01:39:04 +1300 |
| Cc: | Andi Kleen <ak@xxxxxx>, kuznet@xxxxxxxxxxxxx, netdev@xxxxxxxxxxx |
| In-reply-to: | <20010118111843.A21503@nbase.co.il>; from gleb@nbase.co.il on Thu, Jan 18, 2001 at 11:18:43AM +0200 |
| References: | <200101161828.VAA31502@ms2.inr.ac.ru> <20010117101720.F5122@nbase.co.il> <20010117120652.A1830@fred.local> <20010117133932.B16180@nbase.co.il> <20010117141900.A3308@fred.local> <20010117155035.C16180@nbase.co.il> <20010117171438.B5589@fred.local> <20010117191811.E16180@nbase.co.il> <20010117185057.B7146@fred.local> <20010118111843.A21503@nbase.co.il> |
| Sender: | owner-netdev@xxxxxxxxxxx |
| User-agent: | Mutt/1.2.5i |
On Thu, Jan 18, 2001 at 11:18:43AM +0200, Gleb Natapov wrote:
> Exactly. And currently the buffer fills very quickly. Alexey says
> that there is no difference between 16 and 116 messages, but I
> disagree; if the queue is bigger, R will have a chance to empty it
> before W runs next time and adds more routes to the kernel. Fewer
> resyncs would be needed. If we can considerably enlarge the queue
> size for free, why not do it?
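(For concreteness: on Linux the depth of the netlink receive queue is bounded by the socket receive buffer, so "enlarging the queue" in practice means asking for a bigger SO_RCVBUF before binding to the routing multicast groups. The sketch below is illustrative only; the function name and sizes are not from this thread, and the kernel clamps the request to net.core.rmem_max.)

```c
/*
 * Minimal sketch: open an rtnetlink socket with a deeper receive queue
 * before subscribing to IPv4 route notifications.  The requested size
 * is silently clamped to /proc/sys/net/core/rmem_max.
 */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>

int open_route_listener(int rcvbuf_bytes)
{
    int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
    if (fd < 0) {
        perror("socket");
        return -1;
    }

    /* Ask for a deeper receive queue so the reader can fall behind
     * briefly without the kernel dropping notifications. */
    if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF,
                   &rcvbuf_bytes, sizeof(rcvbuf_bytes)) < 0)
        perror("setsockopt(SO_RCVBUF)");

    struct sockaddr_nl addr;
    memset(&addr, 0, sizeof(addr));
    addr.nl_family = AF_NETLINK;
    addr.nl_groups = RTMGRP_IPV4_ROUTE;   /* route add/del notifications */

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        return -1;
    }
    return fd;
}
```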
What about something like the mmap'd AF_PACKET code? Basically, each
application can register a user-land buffer for these sockets, and
optionally a signal for overflow. Messages get written to this buffer;
in the case of overflow the signal is sent and writing stops, and the
application can then manually resync and start reading again...
Routing daemons could register larger buffers to prevent overflow, or
at least reduce how often it happens.
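For reference, a rough sketch of the PACKET_RX_RING setup being used as the model here. The ring geometry below is arbitrary, and any netlink equivalent (plus the overflow signal) is hypothetical; with AF_PACKET the kernel marks each frame's tp_status as it fills the ring, and the reader hands frames back by clearing the status word.

```c
/*
 * Sketch of an AF_PACKET mmap'd receive ring (PACKET_RX_RING).
 * The application gives the kernel a ring of frames; the kernel sets
 * TP_STATUS_USER in each frame header as it fills it, and the reader
 * clears the status when it is done with the frame.
 */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <linux/if_ether.h>
#include <linux/if_packet.h>

void *map_rx_ring(int *fd_out)
{
    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (fd < 0) {
        perror("socket");
        return NULL;
    }

    struct tpacket_req req;
    memset(&req, 0, sizeof(req));
    req.tp_block_size = 4096;        /* one page per block          */
    req.tp_block_nr   = 64;          /* 256 KB of ring              */
    req.tp_frame_size = 2048;        /* two frames per block        */
    req.tp_frame_nr   = req.tp_block_nr *
                        (req.tp_block_size / req.tp_frame_size);

    if (setsockopt(fd, SOL_PACKET, PACKET_RX_RING,
                   &req, sizeof(req)) < 0) {
        perror("setsockopt(PACKET_RX_RING)");
        return NULL;
    }

    void *ring = mmap(NULL, (size_t)req.tp_block_size * req.tp_block_nr,
                      PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (ring == MAP_FAILED) {
        perror("mmap");
        return NULL;
    }

    *fd_out = fd;
    return ring;   /* frame n lives at ring + n * tp_frame_size */
}
```

Because the kernel only writes into frames the application has already handed back, overflow becomes an explicit, observable condition rather than a silent drop, which is what would make a resync-on-signal scheme workable for routing daemons.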
--cw