
Re: Directories > 2GB

To: Christoph Hellwig <hch@xxxxxxxxxxxxx>, Steve Lord <lord@xxxxxxx>, David Chinner <dgc@xxxxxxx>, linux-fsdevel@xxxxxxxxxxxxxxx, linux-ext4@xxxxxxxxxxxxxxx, linux-kernel@xxxxxxxxxxxxxxx, xfs@xxxxxxxxxxx
Subject: Re: Directories > 2GB
From: David Chinner <dgc@xxxxxxx>
Date: Wed, 11 Oct 2006 09:31:24 +1000
In-reply-to: <20061010091904.GA395@infradead.org>
References: <20061004165655.GD22010@schatzie.adilger.int> <452AC4BE.6090905@xfs.org> <20061010015512.GQ11034@melbourne.sgi.com> <452B0240.60203@xfs.org> <20061010091904.GA395@infradead.org>
Sender: xfs-bounce@xxxxxxxxxxx
User-agent: Mutt/1.4.2.1i
On Tue, Oct 10, 2006 at 10:19:04AM +0100, Christoph Hellwig wrote:
> On Mon, Oct 09, 2006 at 09:15:28PM -0500, Steve Lord wrote:
> > Hi Dave,
> > 
> > My recollection is that it used to default to on; it was disabled
> > because it needs to map the buffer into a single contiguous chunk
> > of kernel memory.  This was placing a lot of pressure on the memory
> > remapping code, so we made it not default to on, as reworking the
> > code to deal with non-contiguous memory was looking like a major
> > effort.
> 
> Exactly.  The code works but tends to go OOM pretty fast at least
> when the dir block size is bigger than the page size.  I should
> give the code a spin on my ppc box with 64k pages to see if it works
> better there.

The pagebuf code doesn't use high-order allocations anymore; it uses
scatter lists and remapping to allow physically discontiguous pages
in a multi-page buffer. That is, the pages are sourced via
find_or_create_page() from the address space of the backing device,
and then mapped via vmap() to provide a virtually contiguous mapping
of the multi-page buffer.
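
As a rough illustration of that technique (not the actual pagebuf
code; the helper name demo_map_buffer() is made up for this sketch,
though find_or_create_page(), vmap(), unlock_page() and
page_cache_release() are the real kernel interfaces involved):

    #include <linux/fs.h>
    #include <linux/mm.h>
    #include <linux/pagemap.h>
    #include <linux/vmalloc.h>

    /*
     * Gather nr_pages physically discontiguous pages from the backing
     * device's address space and vmap() them into one virtually
     * contiguous region.  Illustrative sketch only.
     */
    static void *demo_map_buffer(struct address_space *mapping,
                                 pgoff_t first_index,
                                 unsigned int nr_pages,
                                 struct page **pages)
    {
            unsigned int i;
            void *addr;

            for (i = 0; i < nr_pages; i++) {
                    /* Pages can come from anywhere in physical memory. */
                    pages[i] = find_or_create_page(mapping,
                                                   first_index + i,
                                                   GFP_NOFS);
                    if (!pages[i])
                            goto fail;
                    unlock_page(pages[i]);
            }

            /* One virtually contiguous view over discontiguous pages. */
            addr = vmap(pages, nr_pages, VM_MAP, PAGE_KERNEL);
            if (!addr)
                    goto fail;
            return addr;

    fail:
            while (i--)
                    page_cache_release(pages[i]);
            return NULL;
    }

Teardown is the mirror image: vunmap() the virtual mapping, then drop
the page references.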

So I don't think this problem exists anymore...

Cheers,

Dave.
-- 
Dave Chinner
Principal Engineer
SGI Australian Software Group

