On Fri, 25 Dec 2009 at 11:14, tytso@xxxxxxx wrote:
> Did you include the "sync" in part of what you timed?
In my "generic" tests I do a "sync" after each of the cp/tar/rm operations.
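To make the methodology concrete, a minimal sketch of such a timed run, with the sync included in the measured interval (paths and the small stand-in file size are illustrative, not the actual ~2.7GB test content):

```shell
#!/bin/sh
# Hypothetical sketch: time cp/tar/rm with the flush to disk included,
# so dirty pages don't hide in the page cache outside the measurement.
SRC=/tmp/benchsrc
DST=/tmp/benchdst
mkdir -p "$SRC" "$DST"
# Small stand-in data set; the real test used ~2.7GB of content.
dd if=/dev/zero of="$SRC/data" bs=1M count=8 2>/dev/null

time ( cp -a "$SRC/." "$DST/"              && sync )   # copy + flush
time ( tar cf /tmp/bench.tar -C "$SRC" .   && sync )   # tar + flush
time ( rm -rf "$DST" /tmp/bench.tar        && sync )   # remove + flush
```

Without the trailing sync inside the timed subshell, the clock would stop while most of the data is still only in RAM.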
> Peter was quite
> right --- the fact that the measured bandwidth in your "cp" test is
> five times faster than the disk bandwidth as measured by hdparm, and
> many file systems had exactly the same bandwidth, makes me very
> suspicious that what was being measured was primarily memory bandwidth
That's right, and that's what I replied to Peter on jfs-discussion:
>> * In the "generic" test the 'tar' test bandwidth is exactly the
>> same ("276.68 MB/s") for nearly all filesystems.
True, because I'm tarring up ~2.7GB of content while the box is equipped
with 8GB of RAM. So it *should* be the same for all filesystems, as
Linux could easily hold all this in its caches. Still, jfs and zfs
manage to be slower than the rest.
> --- and not very useful when trying to measure file system
For the bonnie++ tests I chose an explicit filesize of 16GB, twice the
size of the machine's RAM, to make sure it tests the *disk's*
performance. To be consistent across one benchmark run, I should have
copied/tarred/removed 16GB as well. However, I decided not to do that,
but to *use* the filesystem caches instead of ignoring them. After all,
it's not about disk performance (that's what hdparm could be for) but
filesystem performance (or comparison, more exactly). And I'm not excited
about the fact that almost all filesystems are copying with ~276MB/s;
I'm wondering why zfs is 13 times slower when copying data, or why xfs takes
200 seconds longer than other filesystems, while it's handling the same
size as all the others. So no, please don't compare the bonnie++ results
against my "generic" results - as they're (obviously, I thought) taken
with different parameters/content sizes.
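For reference, a bonnie++ invocation along the lines described (16GB working set on the 8GB box) would look roughly like this; the mount point and user are assumptions, not taken from the actual test setup:

```shell
# Hypothetical bonnie++ run: -s sets the file size to 16GB, twice the
# machine's 8GB of RAM, so the I/O phases exceed the page cache.
# -d is the directory on the filesystem under test; -n 0 skips the
# small-file creation tests; -u is needed when running as root.
bonnie++ -d /mnt/testfs -s 16g -n 0 -u nobody
```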
BOFH excuse #85:
Windows 95 undocumented "feature"