Results:

References: [ +subject:/^(?:^\s*(re|sv|fwd|fw)[\[\]\d]*[:>-]+\s*)*Info\:\s+NAPI\s+performance\s+at\s+\"low\"\s+loads\s*$/: 44 ]

Total 44 documents matching your query.

1. Info: NAPI performance at "low" loads (score: 1)
Author: Manfred Spraul <manfred@xxxxxxxxxxxxxxxx>
Date: Tue, 17 Sep 2002 21:54:31 +0200
NAPI network drivers mask the rx interrupts in their interrupt handler, and reenable them in dev->poll(). In the worst case, that happens for every packet. I've tried to measure the overhead of that
/archives/netdev/2002-09/msg00154.html (10,114 bytes)
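
A minimal sketch of the mask-in-the-interrupt-handler / re-enable-in-poll bracketing Manfred describes, written against the current napi_struct API rather than the 2002-era dev->poll() interface; foo_priv, foo_rx() and the FOO_* register names are hypothetical, and the two writel() calls are the per-wakeup register accesses whose cost the thread is measuring:

    #include <linux/netdevice.h>
    #include <linux/interrupt.h>
    #include <linux/io.h>

    #define FOO_INTR_STATUS 0x00            /* hypothetical register offsets */
    #define FOO_INTR_MASK   0x04
    #define FOO_RX_INTR     0x01

    struct foo_priv {
        struct napi_struct napi;
        void __iomem *regs;
        u32 intr_mask;                      /* software copy of the enable mask */
    };

    static int foo_rx(struct foo_priv *priv, int budget);  /* hypothetical RX-ring drain */

    static irqreturn_t foo_interrupt(int irq, void *dev_id)
    {
        struct foo_priv *priv = dev_id;
        u32 status = readl(priv->regs + FOO_INTR_STATUS);

        if (!(status & FOO_RX_INTR))
            return IRQ_NONE;

        /* first extra register access per wakeup: mask RX interrupts ... */
        writel(priv->intr_mask & ~FOO_RX_INTR, priv->regs + FOO_INTR_MASK);
        napi_schedule(&priv->napi);
        return IRQ_HANDLED;
    }

    static int foo_poll(struct napi_struct *napi, int budget)
    {
        struct foo_priv *priv = container_of(napi, struct foo_priv, napi);
        int work_done = foo_rx(priv, budget);

        if (work_done < budget && napi_complete_done(napi, work_done))
            /* ... second extra access: re-enable RX interrupts */
            writel(priv->intr_mask, priv->regs + FOO_INTR_MASK);

        return work_done;
    }

At low load, with roughly one packet per interrupt, both writes happen for every packet, which is the overhead measured in the messages that follow.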

2. Re: Info: NAPI performance at "low" loads (score: 1)
Author: Andrew Morton <akpm@xxxxxxxxx>
Date: Tue, 17 Sep 2002 14:45:08 -0700
It was due to additional inl()'s and outl()'s in the driver fastpath. Testcase was netperf Tx and Rx. Just TCP over 100bT. AFAIK, this overhead is intrinsic to NAPI. Not to say that its costs outweig
/archives/netdev/2002-09/msg00167.html (9,879 bytes)

3. Re: Info: NAPI performance at "low" loads (score: 1)
Author: "David S. Miller" <davem@xxxxxxxxxx>
Date: Tue, 17 Sep 2002 14:39:47 -0700 (PDT)
It was due to additional inl()'s and outl()'s in the driver fastpath. How many? Did the implementation cache the register value in a software state word or did it read the register each time to writ
/archives/netdev/2002-09/msg00168.html (9,807 bytes)

4. Re: Info: NAPI performance at "low" loads (score: 1)
Author: Jeff Garzik <jgarzik@xxxxxxxxxxxxxxxx>
Date: Tue, 17 Sep 2002 17:54:42 -0400
David S. Miller wrote: Any driver should be able to get the NAPI overhead to max out at 2 PIOs per packet. Just to pick nits... my example went from 2 or 3 IOs [depending on the presence/absence of a
/archives/netdev/2002-09/msg00169.html (10,011 bytes)

5. Re: Info: NAPI performance at "low" loads (score: 1)
Author: "David S. Miller" <davem@xxxxxxxxxx>
Date: Tue, 17 Sep 2002 14:49:11 -0700 (PDT)
Just to pick nits... my example went from 2 or 3 IOs [depending on the presence/absence of a work loop] to 6 IOs. I mean "2 extra PIOs" not "2 total PIOs". I think it's doable for just about every d
/archives/netdev/2002-09/msg00170.html (10,040 bytes)

6. Re: Info: NAPI performance at "low" loads (score: 1)
Author: Andrew Morton <akpm@xxxxxxxxx>
Date: Tue, 17 Sep 2002 14:58:52 -0700
Looks like it cached it: - outw(SetIntrEnb | (inw(ioaddr + 10) & ~StatsFull), ioaddr + EL3_CMD); vp->intr_enable &= ~StatsFull; + outw(vp->intr_enable, ioaddr + EL3_CMD); Yup. But deltas are interest
/archives/netdev/2002-09/msg00171.html (10,851 bytes)
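
The delta Andrew quotes answers Dave's question in message 3: the driver caches the interrupt-enable value in software (vp->intr_enable), so updating the mask costs a single outw() rather than an inw() + outw() read-modify-write on the I/O port. A generic sketch of that shadow-register idiom, with hypothetical names (foo_priv and FOO_INTR_ENB stand in for the real 3c59x symbols):

    #include <linux/io.h>
    #include <linux/types.h>

    #define FOO_INTR_ENB 0x0e               /* hypothetical command/enable register */

    struct foo_priv {
        unsigned long ioaddr;               /* I/O-port base */
        u16 intr_enable;                    /* software copy of the hardware mask */
    };

    static inline void foo_mask_irq(struct foo_priv *vp, u16 bits)
    {
        vp->intr_enable &= ~bits;
        outw(vp->intr_enable, vp->ioaddr + FOO_INTR_ENB);   /* one PIO, no read-back */
    }

    static inline void foo_unmask_irq(struct foo_priv *vp, u16 bits)
    {
        vp->intr_enable |= bits;
        outw(vp->intr_enable, vp->ioaddr + FOO_INTR_ENB);
    }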

7. Re: Info: NAPI performance at "low" loads (score: 1)
Author: jamal <hadi@xxxxxxxxxx>
Date: Tue, 17 Sep 2002 20:57:58 -0400 (EDT)
Manfred, could you please turn MMIO (you can select it via kernel config) and see what the new difference looks like? I am not so sure with that 6% difference there is no other bug lurking there; 6%
/archives/netdev/2002-09/msg00172.html (10,096 bytes)

8. Re: Info: NAPI performance at "low" loads (score: 1)
Author: "David S. Miller" <davem@xxxxxxxxxx>
Date: Tue, 17 Sep 2002 18:00:14 -0700 (PDT)
{in,out}{b,w,l}() operations have a fixed timing, therefore his results doesn't sound that far off. It is also one of the reasons I suspect Andrew saw such bad results with 3c59x, but probably that i
/archives/netdev/2002-09/msg00173.html (10,500 bytes)

9. Re: Info: NAPI performance at "low" loads (score: 1)
Author: Jeff Garzik <jgarzik@xxxxxxxxxxxxxxxx>
Date: Tue, 17 Sep 2002 22:11:14 -0400
Just to pick nits... my example went from 2 or 3 IOs [depending on the presence/absence of a work loop] to 6 IOs. I mean "2 extra PIOs" not "2 total PIOs". I think it's doable for just about every d
/archives/netdev/2002-09/msg00177.html (10,999 bytes)

10. Re: Info: NAPI performance at "low" loads (score: 1)
Author: "David S. Miller" <davem@xxxxxxxxxx>
Date: Tue, 17 Sep 2002 19:06:41 -0700 (PDT)
From: Jeff Garzik <jgarzik@xxxxxxxxxxxxxxxx> Date: Tue, 17 Sep 2002 22:11:14 -0400 You're looking at at least one extra get-irq-status too, at least in the classical 10/100 drivers I'm used to seeing
/archives/netdev/2002-09/msg00179.html (10,081 bytes)

11. Re: Info: NAPI performance at "low" loads (score: 1)
Author: Andrew Morton <akpm@xxxxxxxxx>
Date: Tue, 17 Sep 2002 19:16:25 -0700
They weren't "very bad", iirc. Maybe a 5% increase in CPU load. It was all a long time ago. Will retest if someone sends URLs.
/archives/netdev/2002-09/msg00180.html (10,735 bytes)

12. Re: Info: NAPI performance at "low" loads (score: 1)
Author: Jeff Garzik <jgarzik@xxxxxxxxxxxxxxxx>
Date: Tue, 17 Sep 2002 22:36:36 -0400
David S. Miller wrote: From: Jeff Garzik <jgarzik@xxxxxxxxxxxxxxxx> Date: Tue, 17 Sep 2002 22:11:14 -0400 You're looking at at least one extra get-irq-status too, at least in the classical 10/100 dri
/archives/netdev/2002-09/msg00182.html (11,243 bytes)

13. Re: Info: NAPI performance at "low" loads (score: 1)
Author: ebiederm@xxxxxxxxxxxx (Eric W. Biederman)
Date: 18 Sep 2002 11:27:34 -0600
???? I don't see why they should be. If it is a pci device the cost should the same as a pci memory I/O. The bus packets are the same. So things like increasing the pci bus speed should make it take
/archives/netdev/2002-09/msg00191.html (11,256 bytes)

14. Re: Info: NAPI performance at "low" loads (score: 1)
Author: Alan Cox <alan@xxxxxxxxxxxxxxxxxxx>
Date: 18 Sep 2002 18:50:53 +0100
port 0x80 isnt going to PCI space. x86 generally posts mmio write but not io write. Thats quite measurable.
/archives/netdev/2002-09/msg00192.html (10,767 bytes)
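
A rough sketch of the distinction Alan is drawing, and of what jamal's "turn on MMIO via the kernel config" suggestion in message 7 amounts to inside a driver: on x86 a write to a mapped memory BAR is generally posted, so the CPU continues immediately, while an outw() to I/O-port space is not posted and stalls until the bus cycle completes at PCI speed. The BAR choice, FOO_INTR_MASK offset and foo_priv layout are assumptions for illustration only:

    #include <linux/pci.h>
    #include <linux/io.h>

    #define FOO_INTR_MASK 0x04              /* hypothetical register offset */

    struct foo_priv {
        void __iomem *mmio;                 /* pci_iomap() of the memory BAR, or NULL */
        unsigned long io_base;              /* pci_resource_start() of the I/O BAR */
    };

    static void foo_set_intr_mask(struct foo_priv *priv, u16 mask)
    {
        if (priv->mmio)
            writew(mask, priv->mmio + FOO_INTR_MASK);    /* posted write */
        else
            outw(mask, priv->io_base + FOO_INTR_MASK);   /* waits for the bus cycle */
    }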

15. Re: Info: NAPI performance at "low" loads (score: 1)
Author: "David S. Miller" <davem@xxxxxxxxxx>
Date: Wed, 18 Sep 2002 13:23:34 -0700 (PDT)
???? I don't see why they should be. If it is a pci device the cost should the same as a pci memory I/O. The bus packets are the same. So things like increasing the pci bus speed should make it take
/archives/netdev/2002-09/msg00193.html (10,476 bytes)

16. Re: Info: NAPI performance at "low" loads (score: 1)
Author: Alan Cox <alan@xxxxxxxxxxxxxxxxxxx>
Date: 18 Sep 2002 21:43:09 +0100
Earth calling Dave Miller The inb timing depends on the PCI bus. If you want proof set a Matrox G400 into no pci retry mode, run a large X load at it and time some inbs you should be able to get to a
/archives/netdev/2002-09/msg00194.html (10,470 bytes)

17. Re: Info: NAPI performance at "low" loads (score: 1)
Author: "David S. Miller" <davem@xxxxxxxxxx>
Date: Wed, 18 Sep 2002 13:46:30 -0700 (PDT)
From: Alan Cox <alan@xxxxxxxxxxxxxxxxxxx> Date: 18 Sep 2002 21:43:09 +0100 The inb timing depends on the PCI bus. If you want proof set a Matrox G400 into no pci retry mode, run a large X load at it
/archives/netdev/2002-09/msg00195.html (10,147 bytes)

18. Re: Info: NAPI performance at "low" loads (score: 1)
Author: Alan Cox <alan@xxxxxxxxxxxxxxxxxxx>
Date: 18 Sep 2002 22:15:27 +0100
It doesnt matter what XFree86 is doing. Thats just to load the PCI bus and jam it up to prove the point. It'll change your inb timing
/archives/netdev/2002-09/msg00196.html (10,550 bytes)

19. Re: Info: NAPI performance at "low" loads (score: 1)
Author: "David S. Miller" <davem@xxxxxxxxxx>
Date: Wed, 18 Sep 2002 14:22:50 -0700 (PDT)
From: Alan Cox <alan@xxxxxxxxxxxxxxxxxxx> Date: 18 Sep 2002 22:15:27 +0100 It doesnt matter what XFree86 is doing. Thats just to load the PCI bus and jam it up to prove the point. It'll change your i
/archives/netdev/2002-09/msg00197.html (9,764 bytes)

20. Re: Info: NAPI performance at "low" loads (score: 1)
Author: ebiederm@xxxxxxxxxxxx (Eric W. Biederman)
Date: 19 Sep 2002 08:58:49 -0600
dance.c b/drivers/net/sundance.c ... Please don't change the semantics of module parameters. All of my PCI network drivers have used this name for years with the same seman
/archives/netdev/2002-09/msg00220.html (11,153 bytes)


This search system is powered by Namazu