
Re: XFS corruption with failover

To: Emmanuel Florac <eflorac@xxxxxxxxxxxxxx>
Subject: Re: XFS corruption with failover
From: John Quigley <jquigley@xxxxxxxxxxxx>
Date: Thu, 13 Aug 2009 19:50:38 -0500
Cc: XFS Development <xfs@xxxxxxxxxxx>
In-reply-to: <20090813231739.5c7db91d@xxxxxxxxxxxxxx>
References: <4A8474D2.7050508@xxxxxxxxxxxx> <20090813231739.5c7db91d@xxxxxxxxxxxxxx>
User-agent: Thunderbird 2.0.0.22 (Windows/20090605)

Emmanuel Florac wrote:
> By abruptly killing the primary server while doing I/O, you're probably
> pushing the envelope... You may have somewhat better luck with a cluster
> FS; OCFS2 works very well for me usually (GFS is a complete PITA to set
> up).

Acknowledged; we've looked at GFS, and I've been meaning to read up on OCFS2.  
For various reasons, particularly performance, ease of deployment, and flexible 
growth, XFS has been the clear winner in our particular case (and our case is 
fairly unusual, as our volume is backed by a distributed storage device).

> You can get it to flush extremely often by playing with
> /proc/sys/vm/dirty_expire_centisecs and
> /proc/sys/vm/dirty_writeback_centisecs, though. Safer settings generally
> imply terrible performance; you've been warned.

Okay, interesting; I wasn't aware of these and will look into them.
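
For anyone following along, here is a minimal sketch of tightening those
writeback knobs from a script (assuming a typical Linux host and root
privileges; the interval values below are illustrative, not tuned
recommendations):

    #!/usr/bin/env python3
    # Sketch: shorten the kernel's dirty-page writeback intervals so
    # dirty data is flushed toward the iSCSI target sooner. Values are
    # in centiseconds (1/100 s) and are illustrative only; smaller
    # values shrink the window of unflushed data at a throughput cost.
    # Must run as root.

    SETTINGS = {
        # Expire dirty pages after 5 s instead of the usual 30 s.
        "/proc/sys/vm/dirty_expire_centisecs": "500",
        # Wake the writeback threads every 1 s instead of every 5 s.
        "/proc/sys/vm/dirty_writeback_centisecs": "100",
    }

    def apply_writeback_settings(settings=SETTINGS):
        for path, value in settings.items():
            with open(path) as f:
                print(f"{path}: {f.read().strip()} -> {value}")
            with open(path, "w") as f:
                f.write(value)

    if __name__ == "__main__":
        apply_writeback_settings()

The same effect can be had with sysctl or an /etc/sysctl.conf entry; the
point is only that both knobs need to come down together for frequent
flushing.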

> Ah, another thing may be some cache option in the iSCSI target. What
> target are you using?

No caching on the target side; I can speak definitively on that, because I wrote 
it (it integrates with our data dispersal stack [1]).  Also, we're using the same 
target when failing over; it's only the iSCSI initiator (i.e., the NFS server) 
that changes.

Thank you kindly for the quick response.

- John Quigley
