Re: Questions about XFS

To: Stefan Ring <stefanrin@xxxxxxxxx>
Subject: Re: Questions about XFS
From: Ric Wheeler <rwheeler@xxxxxxxxxx>
Date: Tue, 11 Jun 2013 13:31:35 -0400
Cc: Steve Bergman <sbergman27@xxxxxxxxx>, Linux fs XFS <xfs@xxxxxxxxxxx>
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <CAAxjCEyne63XH1Uk6_7jzjaxDbsSopO9E+=6oo3xE=PvjBFcjA@xxxxxxxxxxxxxx>
References: <loom.20130611T112155-970@xxxxxxxxxxxxxx> <51B72D3D.5010206@xxxxxxxxxx> <CAO9HMNGjdikgX+_434aGVJ2NAJ0hxDNLo+Vsa46GH3psXr4sKQ@xxxxxxxxxxxxxx> <51B75C39.3030306@xxxxxxxxxx> <CAAxjCEyne63XH1Uk6_7jzjaxDbsSopO9E+=6oo3xE=PvjBFcjA@xxxxxxxxxxxxxx>
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130514 Thunderbird/17.0.6
On 06/11/2013 01:27 PM, Stefan Ring wrote:
>> Let's take a simple example - a database app that does say 30
>>
>> In your example, you are extremely likely to lose up to just shy of 5
>> seconds of "committed" data - way over 100 transactions!  That can be
>> *really* serious amounts of data and translate into large financial loss.
> Every database software will do the flushing correctly.

Stefan, you are making my point: precisely because every database will do the right thing itself, it won't rely on ext3's magic every-5-second fsync :)
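For reference, the "right thing" amounts to forcing each transaction to stable storage before acknowledging the commit, rather than trusting the filesystem's periodic writeback. A minimal sketch (the file name, record format, and helper name here are made up for illustration):

```python
import os

def durable_append(path, record):
    """Append a record and push it to stable storage before the
    transaction is reported as committed."""
    # O_APPEND so records are not interleaved mid-write
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
    try:
        os.write(fd, record)
        # fsync() is the commit point: only after it returns may the
        # client be told the data is durable.  Without it, a power cut
        # can lose everything still sitting in the page cache.
        os.fsync(fd)
    finally:
        os.close(fd)

durable_append("/tmp/txn.log", b"txn 1: debit 100\n")
```

The cost of the fsync() per transaction is exactly why databases batch commits, but skipping it and hoping the kernel flushes in time is not an option for them.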


>> In a second example, let's say you are copying data to disk (say a movie) at
>> a rate of 50 MB/second.  When the power cut hits at just the wrong time, you
>> will have lost a large chunk of that data that has been "written" to disk
>> (over 200MB).
> But why would anyone care about that? I know that the system went down
> while copying this large movie, so I'll just copy it again.