| To: | Christoph Lameter <christoph@xxxxxxxxxxx> |
|---|---|
| Subject: | Re: [PATCH] NUMA aware allocation of transmit and receive buffers for e1000 |
| From: | Andrew Morton <akpm@xxxxxxxx> |
| Date: | Tue, 17 May 2005 21:58:45 -0700 |
| Cc: | davem@xxxxxxxxxxxxx, linux-kernel@xxxxxxxxxxxxxxx, netdev@xxxxxxxxxxx, shai@xxxxxxxxxxxx |
| In-reply-to: | <Pine.LNX.4.62.0505172125210.22920@graphe.net> |
| References: | <Pine.LNX.4.62.0505171854490.20408@graphe.net> <20050517190343.2e57fdd7.akpm@osdl.org> <Pine.LNX.4.62.0505171941340.21153@graphe.net> <20050517.195703.104034854.davem@davemloft.net> <Pine.LNX.4.62.0505172125210.22920@graphe.net> |
| Sender: | netdev-bounce@xxxxxxxxxxx |
Christoph Lameter <christoph@xxxxxxxxxxx> wrote:
>
> On Tue, 17 May 2005, David S. Miller wrote:
>
> > > Because physically contiguous memory is usually better than virtually
> > > contiguous memory? Any reason that physically contiguous memory will
> > > break the driver?
> >
> > The issue is whether size can end up being too large for
> > kmalloc() to satisfy, whereas vmalloc() would be able to
> > handle it.
>
> Oww.. We need a NUMA aware vmalloc for this?

I think the e1000 driver is being a bit insane there.  I figure that
sizeof(struct e1000_buffer) is 28 on 64-bit, so even with 4k pagesize we'll
always succeed in being able to support a 32k/32 = 1024-entry Tx ring.

Is there any real-world reason for wanting larger ring sizes than that?
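For context, a minimal sketch of the allocation policy being debated above: try a node-local, physically contiguous allocation for the ring's bookkeeping array, and fall back to virtually contiguous memory only when the ring is too large for the slab allocator. This is an illustrative assumption, not the actual e1000 patch; the helper names and the `used_vmalloc` flag plumbing are made up here, while kmalloc_node(), kfree(), vmalloc() and vfree() are real kernel interfaces.

```c
#include <linux/slab.h>
#include <linux/vmalloc.h>

/*
 * Hypothetical sketch only (not the submitted patch): allocate the
 * per-descriptor bookkeeping array on the NIC's home node if the slab
 * allocator can satisfy the request, otherwise fall back to vmalloc().
 */
static void *e1000_alloc_ring_bufinfo(size_t size, int node, int *used_vmalloc)
{
	void *buf;

	/* Preferred: physically contiguous memory, local to "node". */
	buf = kmalloc_node(size, GFP_KERNEL, node);
	if (buf) {
		*used_vmalloc = 0;
		return buf;
	}

	/* Fallback: virtually contiguous memory for oversized rings. */
	*used_vmalloc = 1;
	return vmalloc(size);
}

static void e1000_free_ring_bufinfo(void *buf, int used_vmalloc)
{
	/* The caller must remember which allocator was used. */
	if (used_vmalloc)
		vfree(buf);
	else
		kfree(buf);
}
```

With the per-entry size from the mail rounded up to 32 bytes, a 1024-entry Tx ring needs about 32KB, which kmalloc() can normally serve, so the vmalloc() branch would only matter for rings well beyond that size.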