
Re: lmdd performance results XFS vs. Ext2

To: Andi Kleen <ak@xxxxxxx>
Subject: Re: lmdd performance results XFS vs. Ext2
From: Rajagopal Ananthanarayanan <ananth@xxxxxxx>
Date: Thu, 08 Jun 2000 01:57:34 -0700
Cc: linux-xfs@xxxxxxxxxxx
References: <393EFB9A.E34181FD@xxxxxxx> <20000608104130.A4168@xxxxxxxxxxxxxxxxxxx>
Sender: owner-linux-xfs@xxxxxxxxxxx
Andi Kleen wrote:
> On Wed, Jun 07, 2000 at 06:49:14PM -0700, Rajagopal Ananthanarayanan wrote:
> > Results of write performance tests:
> > ----------------------------------
> >
> > Sequential write using lmdd, file size is ~209MB on a 2 CPU system with
> > total memory of 64M. The experiment is run over varying write-size from
> > 1K to 1024K, 3 times for each block-size. This shows
> >
> > 1. Ext2 is well tuned (hardly any variation in 3 runs of each blocksize)!
> Please note that 2.3 itself has significant performance regressions for
> huge bulk writes (there were several threads on linux-kernel about that).
> Partly the still-broken page cache balancing is probably to blame; for
> other things, the elevator (Jens Axboe's per-device elevator patches seem
> to cause a huge speedup).

Yep, I lurk in linux-mm to garner some of this news; but thanks for the heads up.

> With tuning 2.3.99pre2, a very old kernel, you might be duplicating
> work that others already did.

No, the recent changes were to tune XFS itself rather than the Linux VM.
Hopefully all the recent tuning in 2.4.0+ will be as beneficial for XFS
as it is for ext2 ...

We have one more set of changes to go in the write path so that
pagebuf/kiobufs are used to really cluster the writes onto disk;
right now this is done through kflushd/ll_rw_block/elevator. Some of
these clusters can be large (thousands of pages long), mapping to a single
extent (contiguous blocks on disk). So (a) the clusters don't need to be
"discovered" by an elevator-like algorithm, and (b) kiobuf-based I/O will
avoid processing thousands of buffer-heads.

Rajagopal Ananthanarayanan ("ananth")
Member Technical Staff, SGI.
