| To: | jdmason@xxxxxxxxxx |
|---|---|
| Subject: | Re: RFC: NAPI packet weighting patch |
| From: | "David S. Miller" <davem@xxxxxxxxxxxxx> |
| Date: | Tue, 31 May 2005 15:14:43 -0700 (PDT) |
| Cc: | mitch.a.williams@xxxxxxxxx, hadi@xxxxxxxxxx, shemminger@xxxxxxxx, netdev@xxxxxxxxxxx, Robert.Olsson@xxxxxxxxxxx, john.ronciak@xxxxxxxxx, ganesh.venkatesan@xxxxxxxxx, jesse.brandeburg@xxxxxxxxx |
| In-reply-to: | <200505311707.54487.jdmason@us.ibm.com> |
| References: | <1117241786.6251.7.camel@localhost.localdomain> <Pine.CYG.4.58.0505311029510.2128@mawilli1-desk2.amr.corp.intel.com> <200505311707.54487.jdmason@us.ibm.com> |
| Sender: | netdev-bounce@xxxxxxxxxxx |
From: Jon Mason <jdmason@xxxxxxxxxx>
Date: Tue, 31 May 2005 17:07:54 -0500

> Of course some performance analysis would have to be done to determine the
> optimal numbers for each speed/duplexity setting per driver.

Per CPU speed, per memory bus speed, per I/O bus speed, and add in other complications such as NUMA.

My point is that whatever experimental number you come up with will be good for that driver on your systems, but not necessarily for others. Even within a system, whatever number you select will be the wrong thing to use if one starts a continuous I/O stream to the SATA controller in the next PCI slot, for example.

We keep getting bitten by this, as the Altix perf data continually shows, and we need to absolutely stop thinking this way. The way to go is to make selections based upon observed events and measurements.
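To illustrate the shape of what is being advocated here, below is a minimal userspace sketch of measurement-driven weight selection, as opposed to a hard-coded per-driver constant. This is not the actual NAPI implementation: the names (`adaptive_weight`, `observe_poll`) and the grow/shrink thresholds are hypothetical, chosen only to show a feedback loop driven by observed per-poll work.

```c
/*
 * Hypothetical sketch: pick a packets-per-poll budget from observed
 * behavior instead of a compile-time per-driver constant.  Names and
 * thresholds are illustrative only, not the real NAPI code.
 */
#include <stdio.h>

struct adaptive_weight {
    int weight;      /* current packets-per-poll budget */
    int min, max;    /* clamp bounds */
};

/*
 * Feed back one observation: how many packets the last poll actually
 * processed against the budget it was given.  Sustained budget
 * exhaustion means the device is busier than the weight assumes;
 * consistently coming up short means the budget is oversized.
 */
static void observe_poll(struct adaptive_weight *aw, int processed)
{
    if (processed >= aw->weight && aw->weight < aw->max)
        aw->weight += 4;            /* sustained load: grow budget */
    else if (processed < aw->weight / 2 && aw->weight > aw->min)
        aw->weight -= 4;            /* mostly idle: shrink budget */
}

int main(void)
{
    struct adaptive_weight aw = { .weight = 16, .min = 4, .max = 64 };
    /* Simulated per-poll packet counts: a burst, then a quiet period. */
    int observed[] = { 16, 16, 16, 24, 32, 40, 8, 4, 2, 2 };

    for (unsigned i = 0; i < sizeof(observed) / sizeof(observed[0]); i++) {
        observe_poll(&aw, observed[i]);
        printf("poll %u: processed %2d -> weight %d\n",
               i, observed[i], aw.weight);
    }
    return 0;
}
```

In a driver the equivalent observation would come from what the poll routine actually processed each pass; the point of the message above is that the numbers come from runtime measurement on the system at hand, not from a table of experimentally derived per-driver constants.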