

To: Linux XFS <xfs@xxxxxxxxxxx>, Linux RAID <linux-raid@xxxxxxxxxxxxxxx>
Subject: Re: 12x performance drop on md/linux+sw raid1 due to barriers [xfs]
From: pg_xf2@xxxxxxxxxxxxxxxxxx (Peter Grandi)
Date: Sat, 6 Dec 2008 18:42:30 +0000
In-reply-to: <alpine.DEB.1.10.0812060928030.14215@xxxxxxxxxxxxxxxx>
References: <alpine.DEB.1.10.0812060928030.14215@xxxxxxxxxxxxxxxx>
> Someone should write a document with XFS and barrier support,
> if I recall, in the past, they never worked right on raid1 or
> raid5 devices, but it appears now they work on RAID1,
> which slows down performance ~12 times!!

Of the many confused and misleading posts to the XFS and RAID
mailing lists, this comparison is a particularly bad one:

> l1:~# /usr/bin/time tar xf linux-2.6.27.7.tar 
> 0.15user 1.54system 0:13.18elapsed 12%CPU (0avgtext+0avgdata 0maxresident)k
> 0inputs+0outputs (0major+325minor)pagefaults 0swaps
> l1:~#

> l1:~# /usr/bin/time tar xf linux-2.6.27.7.tar
> 0.14user 1.66system 2:39.68elapsed 1%CPU (0avgtext+0avgdata 0maxresident)k
> 0inputs+0outputs (0major+324minor)pagefaults 0swaps
> l1:~#

In the first case 'linux-2.6.27.7.tar' is in effect being
extracted to volatile memory (depending on memory size, flusher
parameters, etc., which are gleefully unreported); in the second
it is extracted to persistent disk. Even worse, this particular
case is a fairly metadata-intensive test (about 25k inodes), and
writing lots of metadata to disk (twice, as with RAID1) rather
than to memory is of course going to be slow.
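
To make the two runs comparable at all, the elapsed time has to
include the flush to stable storage, so that the "fast" run is
not just measuring writes into the page cache. A minimal sketch
(the /mnt/test mount point is hypothetical):

  # Count how many inodes the extraction actually creates
  # (files plus directories in the tarball).
  tar tf linux-2.6.27.7.tar | wc -l

  # Time the extraction *and* the flush to stable storage.
  /usr/bin/time sh -c 'tar xf linux-2.6.27.7.tar -C /mnt/test && sync'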

Comparing the two makes no sense and imparts no useful
information. It would be more interesting to see an analysis,
with data and argument, of whether the metadata layout of XFS
is good or bad and how it could be improved; the issue here is
metadata policies, not barriers.
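
For anyone who actually wants to measure the cost of barriers
themselves, the honest comparison is the same on-disk target with
barriers on and off, not disk versus page cache. A sketch, again
with a hypothetical mount point ('barrier'/'nobarrier' being the
XFS mount options in kernels of this era):

  # Same filesystem, barriers enabled (the XFS default):
  mount -o remount,barrier /mnt/test
  /usr/bin/time sh -c 'tar xf linux-2.6.27.7.tar -C /mnt/test && sync'

  # Same filesystem, barriers disabled:
  mount -o remount,nobarrier /mnt/test
  /usr/bin/time sh -c 'tar xf linux-2.6.27.7.tar -C /mnt/test && sync'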
