On Mon, 2002-03-04 at 13:46, Andi Kleen wrote:
> On Fri, Feb 15, 2002 at 02:55:04PM -0600, Steve Lord wrote:
> > This was long thought to be the sole cause of slower delete speed in
> > XFS, this turns out not to be the case, it does help delete speed, but
> > if you go and delete 30000 files in one go it will not make a large
> > difference for you. That issue is the next thing being tackled.
> [just noticing some big slow rm -rfs on a xfs disk]
> Could you expand a bit on that issue? I would like to understand it.
Well, the removal of synchronous transactions means we get to fill the
internal log buffers before we write them. So we manage to do larger
writes to the log, but we still max out at 32K per write; we used to
manage a lot less than this. Use the xfsstats script from
cmds/xfsmisc/xfsstats.pl and compare xs_log_writes with xs_log_blocks
(xs_log_blocks is counted in 512 byte chunks).
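A quick sketch of the arithmetic those two counters give you (the counter names come from the script above; the sample values here are invented, not real output):

```python
# Compute the average size of each log write from the xfsstats
# counters: xs_log_blocks is counted in 512-byte chunks, so total
# bytes written to the log = xs_log_blocks * 512. Sample numbers
# below are made up for illustration.
def avg_log_write_bytes(xs_log_writes, xs_log_blocks):
    return (xs_log_blocks * 512) / xs_log_writes

# e.g. 1000 log writes covering 14000 512-byte blocks:
print(avg_log_write_bytes(1000, 14000))  # 7168.0 -> about 7K per write
```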
If you run bonnie++ in the normal configuration, it creates 30000
files in one directory. Once a directory gets that big, XFS is, on
average, writing 7K to the log for each file create and remove, so
we do one log write for every 4 or 5 files created or removed.
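A back-of-the-envelope check of those numbers (both constants are taken from the figures above):

```python
# With a 32K ceiling on each log write and roughly 7K of log traffic
# per file create or remove, each log write covers 4-5 file operations.
MAX_LOG_WRITE = 32 * 1024  # current 32K per-write cap
PER_FILE_LOG = 7 * 1024    # ~7K of log data per create/remove

print(MAX_LOG_WRITE / PER_FILE_LOG)  # ~4.57 -> one write per 4 or 5 files
```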
The next chunk of code is to support larger log writes - and to
support better alignment of those writes. Delete performance then
basically increases linearly with the size of the log buffers used.
The alignment thing should help software raid5 with an internal log,
as in that case the non-aligned log writes caused continuous tossing
of the cache in the raid5 code.
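To make the alignment point concrete: raid5 can only write a chunk without rereading when the write covers whole, chunk-aligned regions; anything partial forces a read-modify-write to recompute parity. A minimal sketch, where the stripe unit is a made-up example value, not what XFS or the md driver actually uses:

```python
# A write that starts or ends mid-chunk forces raid5 to read the rest
# of the chunk back in to recompute parity, which is what keeps tossing
# the raid5 cache. Chunk size here is hypothetical.
STRIPE_UNIT = 64 * 1024  # hypothetical raid5 chunk size in bytes

def is_stripe_aligned(offset, length, unit=STRIPE_UNIT):
    # True only when the write covers whole, aligned chunks.
    return offset % unit == 0 and length % unit == 0

print(is_stripe_aligned(0, 64 * 1024))         # True: full chunk, no reread
print(is_stripe_aligned(32 * 1024, 7 * 1024))  # False: partial chunk, parity reread
```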
There is no date for when we can get this code into linux XFS yet.
The other thing which seems to help a lot is the change I just put in
pagebuf today. XFS metadata was getting pushed out of cache too easily
and we ended up rereading a lot. This change on my box can make the
removal of a kernel tree drop from 10 seconds to 5 seconds, but will
not make a difference if you are removing stuff which has not been
recently accessed.
Steve Lord voice: +1-651-683-3511
Principal Engineer, Filesystem Software email: lord@xxxxxxx