To: xfs@xxxxxxxxxxx, Russell Cattelan <cattelan@xxxxxxxxxxx>
Subject: Re: State of ext4 auto_da_alloc-like workarounds in XFS
From: Eric Sandeen <sandeen@xxxxxxxxxxx>
Date: Mon, 21 Dec 2015 13:47:47 -0600
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <20151221193724.GD482@xxxxxxxxxxxxx>
References: <20151221183726.GC482@xxxxxxxxxxxxx> <5678497F.1020104@xxxxxxxxxxx> <20151221193724.GD482@xxxxxxxxxxxxx>
User-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.11; rv:38.0) Gecko/20100101 Thunderbird/38.4.0

On 12/21/15 1:37 PM, Vallo Kallaste wrote:
> On Mon, Dec 21, 2015 at 12:48:31PM -0600, Eric Sandeen
> <sandeen@xxxxxxxxxxx> wrote:
> 
> [...]
>>> I'd like to know the current state of ext4 auto_da_alloc-like
>>> workarounds in XFS, particularly for RHEL7. Considering the two cases in
>>> https://en.wikipedia.org/wiki/Ext4#Delayed_allocation_and_potential_data_loss,
>>> is XFS behaving the same as ext4, both mounted with default options?
>>
>> The sync-on-close-after-file-got-truncated case has been handled
>> since 2007; see
>>
>> https://git.kernel.org/cgit/linux/kernel/git/dgc/linux-xfs.git/commit/?id=ba87ea699ebd9dd577bf055ebc4a98200e337542
>>
>> The sync-after-rename behavior was suggested and rejected for XFS; see
>>
>> http://marc.info/?t=139845506300002&r=1&w=2
>>
>> If you'd like to add this information to the XFS wiki, please do so!
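
For the archives: independent of either heuristic, the application-side
fix for the rename case is to fsync() the new file before renaming it
over the old one; that sequence leaves either the old or the new
contents after a crash on any POSIX filesystem. A minimal C sketch (the
file names here are hypothetical):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	const char *data = "new contents\n";
	int fd = open("config.tmp", O_WRONLY | O_CREAT | O_TRUNC, 0644);

	if (fd < 0) {
		perror("open");
		return EXIT_FAILURE;
	}
	if (write(fd, data, strlen(data)) != (ssize_t)strlen(data)) {
		perror("write");
		close(fd);
		return EXIT_FAILURE;
	}
	/*
	 * Push the data to stable storage *before* the rename, so a
	 * crash leaves either the old or the new file, never a
	 * zero-length one.
	 */
	if (fsync(fd) < 0) {
		perror("fsync");
		close(fd);
		return EXIT_FAILURE;
	}
	close(fd);
	/*
	 * rename(2) atomically replaces the target; fsync() the
	 * containing directory afterwards if the rename itself must
	 * also survive a crash.
	 */
	if (rename("config.tmp", "config") < 0) {
		perror("rename");
		return EXIT_FAILURE;
	}
	return EXIT_SUCCESS;
}

The workarounds discussed above only paper over applications that skip
that fsync.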
> 
> Thanks, this is an exemplary answer, not to mention lightning fast!
> 
> One thing I noticed: the account-request page at
> http://xfs.org/index.php/Special:RequestAccount references both
> * the Privacy policy: http://xfs.org/index.php/XFS.org:Privacy_policy
> * the Terms of Service:
> http://xfs.org/index.php?title=XFS.org:Terms_of_Service&action=edit&redlink=1
> 
> The latter is something I must read and agree to before I can request
> a user account, yet that page does not exist.

Hohum...

Russell, can you either remove the requirement for reading a
nonexistent page, or add that page?

Thanks,
-Eric

> BR,
> 
