
Re: inode size benchmarking

To: "David Chinner" <dgc@xxxxxxx>
Subject: Re: inode size benchmarking
From: "Christian Røsnes" <christian.rosnes@xxxxxxxxx>
Date: Tue, 12 Feb 2008 14:51:15 +0100
Cc: xfs@xxxxxxxxxxx
In-reply-to: <20080212121518.GD155407@sgi.com>
References: <1a4a774c0802120337x55fa2eb6qb7d52511fba3d11c@mail.gmail.com> <20080212121518.GD155407@sgi.com>
Sender: xfs-bounce@xxxxxxxxxxx
On Feb 12, 2008 1:15 PM, David Chinner <dgc@xxxxxxx> wrote:
> On Tue, Feb 12, 2008 at 12:37:36PM +0100, Christian Røsnes wrote:
> > The test server used:
> >
> >  * Debian 4 (Etch)
> >  * Kernel: Debian 2.6.18-6-amd64 #1 SMP Wed Jan 23 06:27:23 UTC 2008
> > x86_64 GNU/Linux
> >  * CPU: Intel(R) Xeon(R) CPU           E5405  @ 2.00GHz
> >  * MEM: 4GB RAM
> >  * DISK: DELL MD1000 7 disks (1TB SATA) in RAID5. PERC6/E controller
> >  * The test partition is 6TB.
>                            ^^^
>
> This does, though.
>
> With 256 byte inodes, the allocator changes behaviour at filesystem
> sizes > 1TB to keep inode numbers smaller than 32 bits. This change
> means that data is no longer close to the inodes, so the disks seek
> more as the workload moves between writing data and writing inodes.
>
> With 2k inodes, that change doesn't occur until the filesystem is 8TB
> in size (as that is the 32bit inode number limit with 2k inodes), so
> the allocator is still keeping inodes and data as close together as
> possible on a 6TB filesystem.
>
> I suggest re-running the 256 byte inode tests with the "inode64"
> mount option (so the allocator behaves the same as it does for 2k
> inodes) and seeing how much difference remains....
>

Yes, with the inode64 mount option the inode=256 tests now run as fast
as the inode=2048 tests.
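
For anyone following along, the 1TB and 8TB thresholds David mentions
fall straight out of the 32-bit inode number limit: the inode number
effectively encodes the filesystem block number plus the inode's index
within that block, so smaller inodes mean more inodes per block and
fewer bits left over for the block number. A rough sketch of the
arithmetic (my own simplification, assuming the default 4k block size;
the helper name is just for illustration):

# Rough sketch: why 32-bit XFS inode numbers limit where inodes can live.
# Assumption (not from the thread): 4 KiB filesystem blocks, and the
# simplified view that an inode number is roughly
#   (filesystem block number << log2(inodes per block)) | (index in block)

def inode32_limit_bytes(inode_size, block_size=4096):
    """Approximate highest byte offset at which inodes can sit while
    their inode numbers still fit in 32 bits."""
    inodes_per_block = block_size // inode_size
    inopblog = inodes_per_block.bit_length() - 1   # log2(inodes per block)
    block_bits = 32 - inopblog                     # bits left for the block number
    return (1 << block_bits) * block_size

for isize in (256, 512, 1024, 2048):
    tib = inode32_limit_bytes(isize) / 2**40
    print(f"inode size {isize:4d} bytes -> inodes must sit below ~{tib:.0f} TiB")

# Expected output:
#   inode size  256 bytes -> inodes must sit below ~1 TiB
#   inode size  512 bytes -> inodes must sit below ~2 TiB
#   inode size 1024 bytes -> inodes must sit below ~4 TiB
#   inode size 2048 bytes -> inodes must sit below ~8 TiB

Which matches what we saw: on the 6TB partition, 256 byte inodes cross
the 1TB line (so without inode64 the allocator loses inode/data
locality), while 2k inodes stay under their 8TB limit.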

Thanks

Christian

