David J N Begley <d.begley@xxxxxxxxxx> writes:
> On Fri, 11 Feb 2005, Andi Kleen wrote:
>> The only good "fix" probably would be to make XFS flush metadata less
>> aggressively. If the metadata was always flushed at roughly the same time
>> as the file data is written you would rarely see this.
> Are you talking here about implementing an "ordered journal" (similar to ext3)
> where data is written before metadata updates,
Ordered data just guarantees that there is no window in which a machine
crash can leave "raw" disk blocks visible after recovery.
"Raw" means blocks that are not under the control of the file system
and can contain arbitrarily old data. This can be a theoretical security
hole (although in practice you usually only see some garbage).
As far as I know XFS guarantees this already, so it supports "ordered
data" in the JBD sense.
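To make the guarantee concrete, here is a toy model (all names and the
in-memory "disk" are illustrative, not XFS or JBD internals): file data is
flushed before the metadata transaction that references it is committed, so
replaying the journal after a crash can never expose a block whose data was
not written.

```python
class OrderedJournalFS:
    """Toy sketch of "ordered data": data hits disk before metadata commits."""

    def __init__(self):
        self.disk = {}      # block number -> data actually on disk
        self.journal = []   # committed metadata records
        self.free = list(range(8))

    def write_file(self, name, data, crash_before_commit=False):
        block = self.free.pop(0)   # allocate a block (a metadata change)
        self.disk[block] = data    # 1. flush the file data first
        if crash_before_commit:
            return                 # crash: the metadata was never committed
        self.journal.append((name, block))  # 2. then commit the metadata

    def recover(self):
        # Replay the journal: every block a replayed record points to
        # already holds the file's data, never stale "raw" blocks.
        return {name: self.disk[block] for name, block in self.journal}

fs = OrderedJournalFS()
fs.write_file("a.txt", b"hello")
fs.write_file("b.txt", b"world", crash_before_commit=True)
print(fs.recover())   # only "a.txt" is visible after recovery
```

The worst case after a crash is a lost file ("b.txt" above), never a file
whose blocks contain another user's old data.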
> or simply reducing the time
> between data/metadata flushes without imposing any ordering?
There is a defined ordering, but the whole thing is not atomic
because the data is not written as part of a transaction (although the
data write usually depends on one when new blocks are allocated).
I meant "simply" reducing the time between the transactional metadata flush
and the non-transactional data flush.