| To: | akepner@xxxxxxx |
|---|---|
| Subject: | Re: [PATCH] use mmiowb in tg3_poll |
| From: | Lennert Buytenhek <buytenh@xxxxxxxxxxxxxx> |
| Date: | Sun, 29 May 2005 01:12:10 +0200 |
| Cc: | netdev@xxxxxxxxxxx, jbarnes@xxxxxxxxxxxx, gnb@xxxxxxx |
| In-reply-to: | <Pine.LNX.4.33.0410221345400.392-100000@localhost.localdomain> |
| References: | <200410211628.06906.jbarnes@engr.sgi.com> <Pine.LNX.4.33.0410221345400.392-100000@localhost.localdomain> |
| Sender: | netdev-bounce@xxxxxxxxxxx |
| User-agent: | Mutt/1.4.1i |
On Fri, Oct 22, 2004 at 01:51:01PM -0700, akepner@xxxxxxx wrote:
> Returning from tg3_poll() without flushing the PIO write which
> reenables interrupts can result in lower cpu utilization and higher
> throughput.
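(For context, the two flavors of "making the interrupt-reenable write take effect" being weighed here look roughly like the sketch below. This is not the actual tg3 patch or its register names -- HYPOTHETICAL_INTR_REG is made up -- it only illustrates the cost difference between a read-back flush and mmiowb():)

```c
#include <linux/io.h>           /* writel(), readl(), mmiowb() */

#define HYPOTHETICAL_INTR_REG   0x0204  /* made-up offset, not a real tg3 register */

/* Flush the posted write with a read back: correct everywhere, but the
 * MMIO read itself can be very expensive on a large NUMA machine. */
static void reenable_irq_readflush(void __iomem *regs, u32 val)
{
	writel(val, regs + HYPOTHETICAL_INTR_REG);
	readl(regs + HYPOTHETICAL_INTR_REG);
}

/* mmiowb() only guarantees that the write is ordered with respect to
 * MMIO writes issued by other CPUs; no read goes out on the bus, so
 * it is much cheaper than a read-back flush. */
static void reenable_irq_mmiowb(void __iomem *regs, u32 val)
{
	writel(val, regs + HYPOTHETICAL_INTR_REG);
	mmiowb();
}
```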
I'm quite curious what kind of MMIO read latency you see on your
Altix boxen. This app is quite useful for determining those figures
on x86{,_64} machines:
http://svn.gnumonks.org/trunk/mmio_test/mmio_test.c
I'm not sure if it'll run on Itanium out of the box (it currently
assumes PAGE_SIZE <= 4096, you'd probably need to tweak rdtscl(),
and if there's a weird phys:pci address correspondence you might
have to teach it about that as well), but it would be nice if you
could give an approximate indication of how expensive an
MMIO read is on your platform.
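For what it's worth, the core of the measurement boils down to the sketch
below. This is not the linked mmio_test.c, just a minimal illustration
assuming an x86/x86_64 box, a 4 KiB page, root privileges, and a
hypothetical sysfs resource path for the BAR to probe; the ia64 caveats
above (no rdtsc, different page size) apply here as well:

```c
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

static inline uint64_t rdtsc(void)
{
	uint32_t lo, hi;

	/* x86/x86_64 only; on ia64 you'd read ar.itc instead */
	__asm__ __volatile__("rdtsc" : "=a" (lo), "=d" (hi));
	return ((uint64_t)hi << 32) | lo;
}

int main(int argc, char **argv)
{
	/* Hypothetical example path -- point it at a BAR of your own
	 * device, e.g. /sys/bus/pci/devices/0000:01:00.0/resource0 */
	const char *path = argc > 1 ? argv[1] :
		"/sys/bus/pci/devices/0000:01:00.0/resource0";
	volatile uint32_t *bar;
	uint64_t start, end;
	int i, fd;

	fd = open(path, O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Map one 4 KiB page of the BAR (same PAGE_SIZE assumption as above). */
	bar = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0);
	if (bar == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	start = rdtsc();
	for (i = 0; i < 1000; i++)
		(void)bar[0];		/* one uncached MMIO read per iteration */
	end = rdtsc();

	printf("avg MMIO read: %llu cycles\n",
	       (unsigned long long)((end - start) / 1000));

	munmap((void *)bar, 4096);
	close(fd);
	return 0;
}
```

Pick a register that is harmless to read repeatedly, and divide the cycle
count by your CPU frequency to get an approximate latency in nanoseconds.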
cheers,
Lennert