Re: XFS and XEN

To: xfs@xxxxxxxxxxx
Subject: Re: XFS and XEN
From: Michael Monnerie <michael.monnerie@xxxxxxxxxxxxxxxxxxx>
Date: Wed, 25 Feb 2009 07:40:33 +0100
In-reply-to: <20090224163823.GA19811@xxxxxxxxxxxxx>
Organization: it-management http://it-management.at
References: <200902170959.55077@xxxxxx> <200902241604.29566@xxxxxx> <20090224163823.GA19811@xxxxxxxxxxxxx>
User-agent: KMail/1.10.3 (Linux/; KDE/4.1.3; x86_64; ; )
On Tuesday, 24 February 2009, Christoph Hellwig wrote:
> It's the usual BS.  The difference is just that you actually see the
> corruption on XFS while it's pretty silent on extN.  If your Hardware
> (or Hypervisor) is not reliable you _will_ lose data.  Either
> silently or with a spectacular blowup if the filesystem actually has
> consistency checking (which XFS has a lot).

Thank you for the explanation. So to clarify: it was not XFS's fault, 
but XEN's? Can I put it in the FAQ like this?:

Q: Which settings are best with virtualization like VMware, XEN, qemu?

The biggest problem is that these products seem to virtualize disk 
writes in a way that even barriers no longer work, which means even an 
fsync is not reliable. Tests confirm that unplugging the power from 
such a system can destroy a database inside the virtual machine 
(client, domU, or whatever you call it), even with a RAID controller 
with battery-backed cache and the hard disk caches turned off (a 
configuration that is safe on a normal host).

In qemu you can specify cache=off on the -drive line that defines the 
virtual disk. For the other products I have no information on what to do.
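For reference, a hedged example of such an invocation (the image path 
is a placeholder; later qemu releases spell the option cache=none 
rather than cache=off):

```shell
# Bypass the host page cache for the guest disk, so that flush
# requests from the guest reach real storage instead of host RAM.
# (cache=off in older qemu; cache=none in later versions)
qemu-system-x86_64 \
    -m 1024 \
    -drive file=/path/to/guest.img,cache=none
```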

mfg zmi
// Michael Monnerie, Ing.BSc    -----      http://it-management.at
// Tel: 0660 / 415 65 31                      .network.your.ideas.
// PGP Key:         "curl -s http://zmi.at/zmi.asc | gpg --import"
// Fingerprint: AC19 F9D5 36ED CD8A EF38  500E CE14 91F7 1C12 09B4
// Keyserver: wwwkeys.eu.pgp.net                  Key-ID: 1C1209B4
