| To: | netdev <netdev@xxxxxxxxxxx> |
|---|---|
| Subject: | Re: [RFC] netif_rx: receive path optimization |
| From: | Rick Jones <rick.jones2@xxxxxx> |
| Date: | Thu, 31 Mar 2005 13:24:40 -0800 |
| In-reply-to: | <1112303431.1073.67.camel@xxxxxxxxxxxxxxxx> |
| References: | <20050330132815.605c17d0@xxxxxxxxxxxxxxxxx> <20050331120410.7effa94d@xxxxxxxxxxxxxxxxx> <1112303431.1073.67.camel@xxxxxxxxxxxxxxxx> |
| Sender: | netdev-bounce@xxxxxxxxxxx |
| User-agent: | Mozilla/5.0 (X11; U; HP-UX 9000/785; en-US; rv:1.6) Gecko/20040304 |
> The repercussions of going from a per-CPU-for-all-devices queue (introduced by softnet) to a per-device-for-all-CPUs queue may be huge in my opinion, especially on SMP. A closer view of what's there now may be a per-device-per-CPU backlog queue. I think performance will be impacted on all devices. IMO, whatever needs to go in needs some experimental data to back it.

Indeed. At the risk of again chewing on my toes (yum), if multiple CPUs are pulling packets from the per-device queue there will be packet reordering. HP-UX 10.0 did just that, and it was quite nasty even at low CPU counts (<= 4). It was changed in HP-UX 10.20 (ca. 1995) to per-CPU queues with queue selection computed from the packet headers (hash the IP and TCP/UDP headers to pick a CPU); that was called IPS, for Inbound Packet Scheduling. 11.0 (ca. 1998) later changed that to "find where the connection last ran and queue to that CPU"; that was called TOPS, Thread Optimized Packet Scheduling.

fwiw,

rick jones
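To make the IPS-style selection concrete, here is a minimal standalone C sketch of the idea described above: hash the flow tuple (IP addresses and TCP/UDP ports) to pick a per-CPU backlog queue, so every packet of a given flow lands on the same CPU and ordering within the flow is preserved. This is not HP-UX or Linux kernel code; the struct and function names are made up for illustration.

```c
/*
 * Illustration of IPS-style inbound packet scheduling: hash the flow
 * tuple to pick a per-CPU queue, so packets of the same flow always
 * land on the same CPU and are not reordered relative to each other.
 *
 * Standalone sketch only; all names here are hypothetical.
 */
#include <stdint.h>
#include <stdio.h>

#define NR_CPUS 4

struct flow_tuple {
    uint32_t saddr;   /* source IPv4 address */
    uint32_t daddr;   /* destination IPv4 address */
    uint16_t sport;   /* TCP/UDP source port */
    uint16_t dport;   /* TCP/UDP destination port */
};

/* Simple mixing hash over the flow tuple. */
static uint32_t flow_hash(const struct flow_tuple *ft)
{
    uint32_t h = ft->saddr ^ ft->daddr ^
                 (((uint32_t)ft->sport << 16) | ft->dport);

    h ^= h >> 16;
    h *= 0x45d9f3b;
    h ^= h >> 16;
    return h;
}

/* Map a packet's flow to one of the per-CPU backlog queues. */
static unsigned int pick_cpu(const struct flow_tuple *ft)
{
    return flow_hash(ft) % NR_CPUS;
}

int main(void)
{
    /* Two lookups for the same flow give the same CPU;
     * a different flow may be steered elsewhere. */
    struct flow_tuple a = { 0x0a000001, 0x0a000002, 12345, 80 };
    struct flow_tuple b = { 0x0a000003, 0x0a000002, 55555, 80 };

    printf("flow a -> cpu %u\n", pick_cpu(&a));
    printf("flow a -> cpu %u (same flow, same CPU)\n", pick_cpu(&a));
    printf("flow b -> cpu %u\n", pick_cpu(&b));
    return 0;
}
```

The TOPS scheme mentioned above differs in that, instead of a stateless hash, the selection would look up which CPU the connection last ran on and queue the packet there.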