12x performance drop on md/linux+sw raid1 due to barriers [xfs]

Bill Davidsen davidsen at tmr.com
Wed Dec 17 15:40:02 CST 2008


Peter Grandi wrote:
> Unfortunately that seems the case.
>
> The purpose of barriers is to guarantee that relevant data is
> known to be on persistent storage (kind of hardware 'fsync').
>
> In effect write barrier means "tell me when relevant data is on
> persistent storage", or less precisely "flush/sync writes now
> and tell me when it is done". Properties as to ordering are just
> a side effect.
>   

I don't get that sense from the barriers material in Documentation; in fact,
I think it's essentially a pure ordering mechanism. I don't even see that it
forces the data to be written to the device, other than by holding back other
writes until the drive has written everything. So we read the intended use
differently.

What really bothers me is that there's no obvious need for barriers at the
device level if the file system is just a bit smarter and does its own async
i/o (like aio_*). You can track outstanding writes on a per-fd basis, so
instead of stopping the flow of data to the drive, you can just block a file
descriptor and wait for the count of outstanding i/o to drop to zero. As far
as I can see, that provides the ordering semantics of barriers, having
tirelessly thought about it for ten minutes or so. Oh, and I did something
very similar decades ago in a long-gone mainframe OS.
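A minimal sketch of that per-fd scheme, assuming POSIX aio (this is
illustrative user-space code, not anything from XFS or md): queue several
async writes, then "barrier" by waiting until the count of outstanding i/o
on that one descriptor drops to zero before issuing the ordering-dependent
write. The function name and record layout here are made up for the example.

```c
#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define NWRITES 4
#define RECSZ   5           /* each record below is 5 bytes: "blkN\n" */

/* Hypothetical helper: write NWRITES records asynchronously, then wait
 * for all of them before writing an ordering-dependent "commit" record.
 * Link with -lrt on older glibc. Returns 0 on success, -1 on error. */
static int write_then_barrier(const char *path)
{
    int fd = open(path, O_CREAT | O_TRUNC | O_RDWR, 0600);
    if (fd < 0)
        return -1;

    struct aiocb cbs[NWRITES];
    const struct aiocb *list[NWRITES];
    char bufs[NWRITES][RECSZ + 1];
    memset(cbs, 0, sizeof cbs);

    /* Queue async writes; they may be retired in any order. */
    for (int i = 0; i < NWRITES; i++) {
        snprintf(bufs[i], sizeof bufs[i], "blk%d\n", i);
        cbs[i].aio_fildes = fd;
        cbs[i].aio_buf    = bufs[i];
        cbs[i].aio_nbytes = RECSZ;
        cbs[i].aio_offset = (off_t)i * RECSZ;
        if (aio_write(&cbs[i]) != 0)
            return -1;
        list[i] = &cbs[i];
    }

    /* The "barrier": block this fd only, until the outstanding count
     * drops to zero, instead of stalling every request queued to the
     * device. */
    int pending = NWRITES;
    while (pending > 0) {
        if (aio_suspend(list, NWRITES, NULL) != 0 && errno != EINTR)
            return -1;
        pending = 0;
        for (int i = 0; i < NWRITES; i++)
            if (aio_error(&cbs[i]) == EINPROGRESS)
                pending++;
    }

    /* All earlier writes have completed; the ordering-dependent record
     * (think: journal commit block) can now be written safely. */
    if (pwrite(fd, "commit\n", 7, (off_t)NWRITES * RECSZ) != 7)
        return -1;
    return close(fd);
}
```

Of course this only orders completions as seen by this process; it says
nothing about the drive's volatile write cache, which is the property
Peter is talking about.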

-- 
Bill Davidsen <davidsen at tmr.com>
  "Woe unto the statesman who makes war without a reason that will still
  be valid when the war is over..." Otto von Bismarck


More information about the xfs mailing list