
Re: 12x performance drop on md/linux+sw raid1 due to barriers [xfs]

To: Linux RAID <linux-raid@xxxxxxxxxxxxxxx>, Linux XFS <xfs@xxxxxxxxxxx>
Subject: Re: 12x performance drop on md/linux+sw raid1 due to barriers [xfs]
From: pg_mh@xxxxxxxxxx (Peter Grandi)
Date: Sun, 21 Dec 2008 19:16:32 +0000
In-reply-to: <494A07BA.1080008@xxxxxxxxxxx>
References: <alpine.DEB.1.10.0812060928030.14215@xxxxxxxxxxxxxxxx> <1229225480.16555.152.camel@localhost> <18757.4606.966139.10342@xxxxxxxxxxxxxxxxxx> <200812141912.59649.Martin@xxxxxxxxxxxx> <18757.33373.744917.457587@xxxxxxxxxxxxxxxxxx> <494971B2.1000103@xxxxxxx> <494A07BA.1080008@xxxxxxxxxxx>
[ ... ]

>> What really bothers me is that there's no obvious need for
>> barriers at the device level if the file system is just a bit
>> smarter and does its own async I/O (like aio_*), because you
>> can track writes outstanding on a per-fd basis,

> The drive itself may still re-order writes, thus can cause
> corruption if halfway the power goes down. [ ... ] Barriers need
> to travel all the way down to the point where-after everything
> remains in-order. [ ... ] Whether the data has made it to the
> drive platters is not really important from a barrier point of
> view, however, iff part of the data made it to the platters, then
> we want to be sure it was in-order. [ ... ]

But this discussion is backwards, as usual: the *purpose* of any
kind of barrier cannot be just to guarantee consistency, but also
stability, because ordered commits are not that useful without
commitment to stable storage.

If barriers guarantee transaction stability, then consistency
also follows, as a consequence of the serial dependencies among
transactions (and for that purpose per-device barriers are a
coarse and very suboptimal design).

Anyhow, ordering-only barriers have been astutely patented
quite recently:

http://www.freshpatents.com/Transforming-flush-queue-command-to-memory-barrier-command-in-disk-drive-dt20070719ptan20070168626.php

Amazing news from the patent office.
