Re: [Jfs-discussion] benchmark results

To: xfs@xxxxxxxxxxx, reiserfs-devel@xxxxxxxxxxxxxxx, linux-ext4@xxxxxxxxxxxxxxx, linux-btrfs@xxxxxxxxxxxxxxx, jfs-discussion@xxxxxxxxxxxxxxxxxxxxx, ext-users <ext3-users@xxxxxxxxxx>, linux-nilfs@xxxxxxxxxxxxxxx
Subject: Re: [Jfs-discussion] benchmark results
From: pg_jf2@xxxxxxxxxxxxxxxxxx (Peter Grandi)
Date: Thu, 24 Dec 2009 13:05:39 +0000
In-reply-to: <alpine.DEB.2.01.0912240205510.3483@xxxxxxxxxxxxxxxxxx>
References: <alpine.DEB.2.01.0912240205510.3483@xxxxxxxxxxxxxxxxxx>
> I've had the chance to use a testsystem here and couldn't
> resist

Unfortunately there seems to be an overproduction of rather
meaningless file system "benchmarks"...

> running a few benchmark programs on them: bonnie++, tiobench,
> dbench and a few generic ones (cp/rm/tar/etc...) on ext{234},
> btrfs, jfs, ufs, xfs, zfs. All with standard mkfs/mount options
> and +noatime for all of them.

> Here are the results, no graphs - sorry: [ ... ]

After a glance, I suspect that your tests could be enormously
improved, and doing so would make the results rather less
pointless.

A couple of hints:

* In the "generic" test the 'tar' test bandwidth is exactly the
  same ("276.68 MB/s") for nearly all filesystems, which suggests
  that something other than the filesystems is being measured.

* There are read transfer rates higher than the one reported by
  'hdparm' which is "66.23 MB/sec" (comically enough *all* the
  read transfer rates your "benchmarks" report are higher).
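When every "measured" read rate exceeds the raw device rate, the
data is almost certainly coming from the page cache rather than the
disk. A rough sketch of a sanity check with GNU dd (the file name
'fsbench.img' and the 64 MiB size are arbitrary choices of mine,
not anything from the tests above):

```shell
# Write a 64 MiB test file and flush it to stable storage.
dd if=/dev/zero of=fsbench.img bs=1M count=64 status=none
sync

# Read it back with O_DIRECT (iflag=direct), which bypasses the
# page cache, so the reported rate reflects the disk; it should
# not exceed what "hdparm -t" reports for the underlying device.
# (Alternatively, as root, "sync; echo 3 > /proc/sys/vm/drop_caches"
# empties the cache before an ordinary buffered read.)
dd if=fsbench.img of=/dev/null bs=1M iflag=direct

rm -f fsbench.img
```

Note that 'iflag=direct' requires a filesystem that supports
O_DIRECT; on one that does not, dropping the caches first is the
fallback.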

BTW the use of Bonnie++ is also usually a symptom of a poor
understanding of file system benchmarking.

On the plus side, test setup context is provided in the "env"
directory, which is rare enough to be commendable.

> Short summary, AFAICT:
>     - btrfs, ext4 are the overall winners
>     - xfs too, but creating/deleting many files was *very* slow

Maybe, and these conclusions are sort of plausible (though I prefer
JFS and XFS, for different reasons); however they are not supported
by your results, which seem to me to lack much meaning: what is
being measured is far from clear, and in particular it does not
seem to be file system performance, or at least not an aspect of
file system performance that relates to common usage.

I think that it is rather better to run a few simple operations
(like the "generic" test) properly (unlike the "generic" test), to
give a feel for how well the basic operations of the file system
design are implemented.

Profiling a file system's performance with a meaningful, full-scale
benchmark is a rather difficult task requiring great intellectual
fortitude and lots of time.

>     - if you need only fast but no cool features or
>       journaling, ext2 is still a good choice :)

That is a generally valid conclusion, but with one very, very
important qualification: it holds for freshly loaded filesystems.
There are several other important qualifications too, but "freshly
loaded" is a pet peeve of mine :-).
