
Re: [Jfs-discussion] benchmark results

To: Chris Mason <chris.mason@xxxxxxxxxx>, tytso@xxxxxxx, Evgeniy Polyakov <zbr@xxxxxxxxxxx>, Peter Grandi <pg_jf2@xxxxxxxxxxxxxxxxxx>, xfs@xxxxxxxxxxx, reiserfs-devel@xxxxxxxxxxxxxxx, linux-ext4@xxxxxxxxxxxxxxx, linux-btrfs@xxxxxxxxxxxxxxx, jfs-discussion@xxxxxxxxxxxxxxxxxxxxx, ext-users <ext3-users@xxxxxxxxxx>, linux-nilfs@xxxxxxxxxxxxxxx
Subject: Re: [Jfs-discussion] benchmark results
From: Michael Rubin <mrubin@xxxxxxxxxx>
Date: Mon, 4 Jan 2010 10:57:49 -0800
In-reply-to: <20100104162748.GA11932@think>
References: <alpine.DEB.2.01.0912240205510.3483@xxxxxxxxxxxxxxxxxx> <19251.26403.762180.228181@xxxxxxxxxxxxxxxxxx> <20091224212756.GM21594@xxxxxxxxx> <20091224234631.GA1028@xxxxxxxxxxx> <20091225161146.GC32757@xxxxxxxxx> <20100104162748.GA11932@think>
Google is currently in the middle of upgrading from ext2 to a more up
to date file system. We ended up choosing ext4. This thread touches
upon many of the issues we wrestled with, so I thought it would be
interesting to share. We should be sending out more details soon.

The driving performance reason to upgrade is that while ext2 had been "good
enough" for a very long time, the metadata arrangement on a stale file
system was leading to what we call "read inflation": we would
end up doing many seeks to read one block of data. In general, latency
from poor block allocation was causing performance hiccups.
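To make "read inflation" concrete, here is a toy sketch (my own illustration, not Google's actual tooling): count how many disk seeks are needed to serve one logical read, given the on-disk offsets touched along the way. Scattered metadata (inode, indirect block, data block far apart, as on an aged ext2 volume) costs several seeks per block; a contiguous extent-style layout costs one.

```python
def seeks_for_read(offsets, block_size=4096):
    """Count head movements for one logical read.

    offsets: on-disk byte offsets touched in order, e.g.
    [inode_block, indirect_block, data_block]. Each jump to a
    non-contiguous offset costs one seek.
    """
    seeks = 1  # initial positioning
    for prev, cur in zip(offsets, offsets[1:]):
        if cur != prev + block_size:
            seeks += 1
    return seeks

# Aged ext2-style layout: metadata and data scattered across the disk.
scattered = [0, 8 * 4096, 1000 * 4096]
# Extent-style layout: everything nearly contiguous.
contiguous = [0, 4096, 2 * 4096]

print(seeks_for_read(scattered))   # 3 seeks to return one data block
print(seeks_for_read(contiguous))  # 1 seek
```

The "inflation" is the ratio of seeks to useful data blocks returned; on a fragmented file system serving small random reads, that ratio dominates latency.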

We spent a lot of time with standard Unix benchmarks (dbench, compilebench,
et al.) on xfs, ext4, and jfs to try to see which one was going to
perform the best. In the end we mostly ended up using the benchmarks
to validate our assumptions and do functional testing. Larry is
completely right IMHO. These benchmarks were instrumental in helping
us understand how the file systems worked in controlled situations and
in gaining confidence from our customers.

For our workloads we saw ext4 and xfs as "close enough" in performance
in the areas we cared about. The fact that we had a much smoother
upgrade path with ext4 clinched the deal. The only upgrade option we
have is online. ext4 is already moving the bottleneck away from the
storage stack for some of our most intensive applications.

It was not until we moved from benchmarks to customer workload that we
were able to make detailed performance comparisons and find bugs in
our implementation.

"Iterate often" seems to be the winning strategy for software
development, but when it involves rebooting a cloud of systems and
making a one-way conversion of their data, it can get messy. That said,
I see benchmarks as tools to build confidence before running traffic on
redundant live systems.


PS: for some reason "dbench" holds mythical power over many folks I
have met. They just believe it's the most trusted and standard
benchmark for file systems. In my experience it often acts as a random
number generator. That said, it has found some bugs in our code, as it
exercises the VFS layer very well.
