
Re: Write barriers and hardware RAID

To: xfs@xxxxxxxxxxx
Subject: Re: Write barriers and hardware RAID
From: Michael Monnerie <michael.monnerie@xxxxxxxxxxxxxxxxxxx>
Date: Mon, 20 Jul 2009 13:01:01 +0200
In-reply-to: <alpine.LRH.1.10.0907171259590.8586@xxxxxxx>
Organization: it-management http://it-management.at
References: <alpine.LRH.1.10.0907171259590.8586@xxxxxxx>
User-agent: KMail/1.10.3 (Linux/; KDE/4.1.3; x86_64; ; )
I wrote that section of the FAQ, so I should answer:

On Friday, 17 July 2009, J Pälve wrote:
> - The XFS FAQ states that with battery backup'd RAID hardware, both
> write barriers and individual disk cache should be turned off.
> However, I'm getting better benchmark results with both turned on.

I guess it's only the "hard disk cache" being turned on that leads to 
better performance. But that is a very, very dangerous setup: if you use 
a RAID with 16 hard disks, each with 32 MB of cache, a power failure can 
lose up to 16*32 = 512 MB of data, because on a power outage hard disks 
simply drop their caches. And chances are *very* high that a significant 
amount of filesystem metadata is in there, trashing your filesystem 
badly. Never turn this on if you care about your data.
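For reference, both caches can be toggled from Linux; a sketch, assuming 
a directly attached SATA disk at /dev/sda and an LSI-based controller 
(like the PERC) managed via MegaCli (device paths and logical-drive / 
adapter numbers are examples, not tested here):

```shell
# Query and disable the on-disk write cache of a directly attached disk:
hdparm -W /dev/sda        # show current write-cache setting
hdparm -W0 /dev/sda       # disable the disk's write cache

# Behind an LSI/PERC controller, the per-disk cache is set through the
# controller instead, e.g. with MegaCli (logical drive 0, adapter 0):
MegaCli -LDGetProp -DskCache -L0 -a0
MegaCli -LDSetProp -DisDskCache -L0 -a0
```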

For write barriers, performance should be slightly lower with them ON 
instead of OFF.
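Barriers are controlled per mount; a sketch of the relevant XFS mount 
option, assuming a kernel where barriers are on by default (the device 
name is an example):

```shell
# Barriers are the XFS default on recent kernels; to turn them off
# (only safe with a battery-backed controller cache and disk caches off):
mount -o nobarrier /dev/sdb1 /data

# or permanently in /etc/fstab:
# /dev/sdb1  /data  xfs  nobarrier  0  2
```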

> What I'm wondering is, will write barriers work as intended when used
> with hardware RAID controller (PERC 6/E)? Googling only turned up
> results relating to software RAID.

No. RAID controllers simulate written data by telling the host that a 
disk block has been written while it is only in the controller's cache. 
The controller writes it out later, when it has time. So barriers 
basically only generate extra I/O there. This applies if the 
controller's cache policy is set to "write back". If set to "write 
through", the RAID controller simply does not cache writes: it writes 
them directly to disk and only afterwards tells the host that the data 
has been written. This drops your write performance very significantly; 
on a server with much I/O you don't want to use write through (i.e. 
write cache off).
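The guarantee a barrier provides can be sketched at the application 
level with fsync(): data must reach stable storage before a dependent 
"commit" write is issued. A minimal Python sketch (not XFS internals; 
the journal file name is made up), and of course fsync() itself is only 
as trustworthy as the cache layers below that must honour the flush:

```python
import os
import tempfile

def journaled_write(path, payload):
    # Write the payload to a journal file first.
    log = path + ".journal"          # hypothetical journal file name
    with open(log, "wb") as f:
        f.write(payload)
        f.flush()
        os.fsync(f.fileno())         # "barrier": journal reaches media first
    # Only now write the real file -- the "commit".
    with open(path, "wb") as f:
        f.write(payload)
        f.flush()
        os.fsync(f.fileno())
    os.unlink(log)                   # journal no longer needed

d = tempfile.mkdtemp()
target = os.path.join(d, "data.bin")
journaled_write(target, b"hello")
```

If the ordering is honoured all the way down, a crash leaves either the 
journal, or the committed file, but never a torn write of both.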

> - The XFS FAQ also states that virtualization products prevent write
> barriers from working correctly. Is this still the case (specifically
> with ESXi 3.5 and later) and is there anything that can be done about
> it? Does VMFS somehow work around this, or is the problem then just
> "out of sight, out of mind"?

I found an entry for the ".vmx" config file:
scsi0:0.writeThrough = "TRUE"

That should do the desired "do not cache this disk", but I haven't 
tested it so far.
I wonder if someone knows of such a setting for XenServer?

If someone has a solution to "VM disk writes cached", I'd be happy to 
hear how to do that.

mfg zmi
// Michael Monnerie, Ing.BSc    -----      http://it-management.at
// Tel: 0660 / 415 65 31                      .network.your.ideas.
// PGP Key:         "curl -s http://zmi.at/zmi.asc | gpg --import"
// Fingerprint: AC19 F9D5 36ED CD8A EF38  500E CE14 91F7 1C12 09B4
// Keyserver: wwwkeys.eu.pgp.net                  Key-ID: 1C1209B4

