
To: Christoph Lameter <christoph@xxxxxxxxxxx>
Subject: Re: [PATCH] NUMA aware allocation of transmit and receive buffers for e1000
From: Andrew Morton <akpm@xxxxxxxx>
Date: Tue, 17 May 2005 21:58:45 -0700
Cc: davem@xxxxxxxxxxxxx, linux-kernel@xxxxxxxxxxxxxxx, netdev@xxxxxxxxxxx, shai@xxxxxxxxxxxx
In-reply-to: <Pine.LNX.4.62.0505172125210.22920@xxxxxxxxxx>
References: <Pine.LNX.4.62.0505171854490.20408@xxxxxxxxxx> <20050517190343.2e57fdd7.akpm@xxxxxxxx> <Pine.LNX.4.62.0505171941340.21153@xxxxxxxxxx> <20050517.195703.104034854.davem@xxxxxxxxxxxxx> <Pine.LNX.4.62.0505172125210.22920@xxxxxxxxxx>
Sender: netdev-bounce@xxxxxxxxxxx
Christoph Lameter <christoph@xxxxxxxxxxx> wrote:
>
> On Tue, 17 May 2005, David S. Miller wrote:
> 
> > > Because physically contiguous memory is usually better than virtually 
> > > contiguous memory? Any reason that physically contiguous memory will 
> > > break the driver?
> > 
> > The issue is whether size can end up being too large for
> > kmalloc() to satisfy, whereas vmalloc() would be able to
> > handle it.
> 
> Oww.. We need a NUMA aware vmalloc for this?  

I think the e1000 driver is being a bit insane there.  I figure
sizeof(struct e1000_buffer) is 28 bytes on 64-bit, so even with a 4k page
size a kmalloc() there will always be able to support a 32k/32 = 1024-entry
Tx ring.
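For reference, a small user-space sketch of that arithmetic.  The struct
below only approximates the driver's per-descriptor bookkeeping entry of
that era (field names and types are an assumption, not the actual e1000
source); the point is the size math: 28 bytes of fields pad to 32 on
64-bit, and a 32k allocation then holds 1024 entries.

	#include <stdio.h>
	#include <stdint.h>

	/* Rough stand-in for the driver's per-descriptor entry. */
	struct e1000_buffer_like {
		void *skb;              /* struct sk_buff * in the driver */
		uint64_t dma;           /* DMA address of the mapped buffer */
		unsigned long time_stamp;
		uint16_t length;
		uint16_t next_to_watch;
	};                              /* 28 bytes of fields, padded to 32 */

	int main(void)
	{
		size_t alloc = 32 * 1024;   /* 32k = order-3 with 4k pages */

		printf("entry size: %zu bytes\n",
		       sizeof(struct e1000_buffer_like));
		printf("entries per 32k allocation: %zu\n",
		       alloc / sizeof(struct e1000_buffer_like));
		return 0;
	}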

Is there any real-world reason for wanting larger ring sizes than that?
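As an aside on the kmalloc-vs-vmalloc question quoted above: in today's
kernels the pattern being groped toward here is available generically as
kvmalloc_node()/kvfree(), i.e. try a node-local physically contiguous
allocation first and fall back to vmalloc if the request is too large.
Those helpers did not exist in 2005; the sketch below is illustrative of
the idea, not the e1000 patch itself, and the function names are made up.

	#include <linux/mm.h>
	#include <linux/slab.h>
	#include <linux/vmalloc.h>

	/* Hypothetical helper: allocate the ring's bookkeeping array on a
	 * given NUMA node, preferring physically contiguous memory and
	 * falling back to vmalloc when kmalloc cannot satisfy the size. */
	static void *alloc_ring_buffers(size_t count, size_t entry_size, int node)
	{
		return kvmalloc_node(count * entry_size,
				     GFP_KERNEL | __GFP_ZERO, node);
	}

	static void free_ring_buffers(void *buffers)
	{
		kvfree(buffers);    /* handles both kmalloc and vmalloc cases */
	}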
