
Re: 2.6.39 and 3.0 scalability measurement results

To: Eric Whitney <eric.whitney@xxxxxx>
Subject: Re: 2.6.39 and 3.0 scalability measurement results
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Tue, 2 Aug 2011 10:50:12 +1000
Cc: Ext4 Developers List <linux-ext4@xxxxxxxxxxxxxxx>, xfs@xxxxxxxxxxx
In-reply-to: <4E361630.9060907@xxxxxx>
References: <4E361630.9060907@xxxxxx>
User-agent: Mutt/1.5.21 (2010-09-15)
On Sun, Jul 31, 2011 at 10:57:52PM -0400, Eric Whitney wrote:
> I've posted the results of my 2.6.38/2.6.39 and 2.6.39/3.0 ext4
> scalability measurements and comparisons on a 48 core x86_64 server
> at:
> http://free.linux.hp.com/~enw/ext4/2.6.39
> http://free.linux.hp.com/~enw/ext4/3.0
> The results include throughput and CPU efficiency graphs for five
> simple workloads, the raw data for same, and lockstats as well.
> The data cover ext4 filesystems with and without journals.  For
> reference, ext3, xfs, and btrfs are included as well.

Can you include the output of the mkfs programs so that we can see
what the structure of the filesystems is? That makes a big
difference when interpreting the XFS results.
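Something along these lines would capture it (device names here are
placeholders, to be substituted with the actual test devices):

```shell
# Placeholder devices -- substitute the real test devices.
# mkfs.xfs reports the AG count/size, log size, block size, etc.
mkfs.xfs -f /dev/sdX1 | tee mkfs-xfs.log

# mkfs.ext4 reports block/inode counts, journal size, features, etc.
mkfs.ext4 /dev/sdY1 | tee mkfs-ext4.log
```

Having those logs alongside the graphs would let readers match the
results to the on-disk geometry.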

And FWIW, I'd be really interested to see the XFS results using the
inode64 mount option, rather than the not-really-ideal-for-multi-TB-
compatibility-reasons default of inode32.

inode64 drastically changes the layout of files and directories in
the filesystems, so I'd expect to see significant differences (good
and bad!) in the workloads using that option. We've been considering
changing it to be the default, so having some idea of how it
compares on your workloads would be an interesting discussion.
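For reference, the option is applied per-mount; a sketch, again with
placeholder device and mount-point names:

```shell
# Placeholder device/mount point. With inode64, XFS can allocate
# inodes in any allocation group, keeping files close to their
# parent directory rather than packing all inodes into the low AGs.
mount -o inode64 /dev/sdX1 /mnt/scratch

# Or persistently, via an /etc/fstab entry:
#   /dev/sdX1  /mnt/scratch  xfs  inode64  0 0
```

No mkfs change is needed; it's purely a mount-time allocation policy.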

BTW, seeing as you are running against multiple different
filesystems, can you cc these emails to linux-fsdevel rather than
just the ext4 list? There is wider interest in your results than
just ext4 developers...


Dave Chinner
