
Re: XFS and nobarrier with SSDs

To: Martin Steigerwald <martin@xxxxxxxxxxxx>, "xfs@xxxxxxxxxxx" <xfs@xxxxxxxxxxx>
Subject: Re: XFS and nobarrier with SSDs
From: Georg Schönberger <g.schoenberger@xxxxxxxxxx>
Date: Mon, 14 Dec 2015 06:43:48 +0000
In-reply-to: <3496214.YTSKClH6pV@merkaba>
References: <E127700EFE58FD45BD6298EAC813FA42020D8173@xxxxxxxxxxxxxxxxxxxxxx> <3496214.YTSKClH6pV@merkaba>
On 2015-12-12 13:26, Martin Steigerwald wrote:
> On Saturday, 12 December 2015, 10:24:25 CET, Georg Schönberger wrote:
>> We are using a lot of SSDs in our Ceph clusters with XFS. Our SSDs have
>> Power Loss Protection via capacitors, so is it safe in all cases to run XFS
>> with nobarrier on them? Or is there indeed a need for a specific I/O
>> scheduler?
> I do think that using nobarrier would be safe with those SSDs as long as there
> is no other caching happening on the hardware side, for example inside the
> controller that talks to the SSDs.
Hi Martin, thanks for your response!

We are using HBAs rather than a RAID controller, so there is no other 
cache in the I/O stack.
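
As a quick sanity check on our side, something like the following sketch 
(the device name is only an example, and it assumes a kernel new enough 
to expose the write_cache sysfs attribute) shows whether the kernel 
treats the device cache as volatile:

import sys
from pathlib import Path

def write_cache_mode(dev: str) -> str:
    # "write back" means the kernel treats the device cache as volatile,
    # "write through" means it does not issue cache flushes for it.
    return Path(f"/sys/block/{dev}/queue/write_cache").read_text().strip()

if __name__ == "__main__":
    dev = sys.argv[1] if len(sys.argv) > 1 else "sda"
    print(f"{dev}: {write_cache_mode(dev)}")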

>
> I always thought barrier/nobarrier acts independently of the I/O scheduler
> thing, but I can understand the thought from the bug report you linked to
> below. As for I/O schedulers, with recent kernels and block multiqueue I see
> it being set to "none".
What do you mean by "none" here? Do you think I would be more on the safe 
side with the noop scheduler?
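
For reference, a small sketch of how I would read which scheduler is 
currently active (the device name is just an example); the kernel marks 
the active one in /sys/block/<dev>/queue/scheduler:

from pathlib import Path

def active_scheduler(dev: str = "sda") -> str:
    # The kernel marks the active scheduler with square brackets,
    # e.g. "noop deadline [cfq]"; blk-mq devices may just report "none".
    tokens = Path(f"/sys/block/{dev}/queue/scheduler").read_text().split()
    for tok in tokens:
        if tok.startswith("["):
            return tok.strip("[]")
    return tokens[0] if tokens else "unknown"

if __name__ == "__main__":
    print(active_scheduler("sda"))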

>
>> I have found a recent discussion on the Ceph mailing list; can anyone from
>> XFS help us?
>>
>> *http://www.spinics.net/lists/ceph-users/msg22053.html
> Also see:
>
> http://xfs.org/index.php/XFS_FAQ#Q._Should_barriers_be_enabled_with_storage_which_has_a_persistent_write_cache.3F
I've already read that XFS wiki entry and also found some Intel 
presentations suggesting the use of nobarrier with
their enterprise SSDs. But confirmation from a block layer 
specialist would be a good thing!
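
In the meantime, a rough sketch of how we could verify on a running box 
whether nobarrier is actually in effect for a given XFS mount (the Ceph 
OSD mount point is only an example):

from pathlib import Path

def has_nobarrier(mountpoint: str) -> bool:
    # /proc/mounts lists device, mount point, fstype and options;
    # nobarrier shows up in the options field when it is in effect.
    for line in Path("/proc/mounts").read_text().splitlines():
        fields = line.split()
        if len(fields) >= 4 and fields[1] == mountpoint and fields[2] == "xfs":
            return "nobarrier" in fields[3].split(",")
    raise ValueError(f"no xfs filesystem mounted at {mountpoint}")

if __name__ == "__main__":
    print(has_nobarrier("/var/lib/ceph/osd/ceph-0"))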

>
>> *https://bugzilla.redhat.com/show_bug.cgi?id=1104380
> Interesting. Never thought of that one.
>
> So would it be safe to interrupt the flow of data towards the SSD at any point
> of time with reordering I/O schedulers in place? And how about blk-mq, which
> has multiple software queues?
Maybe we should ask the block layer mailing list about that?

>
> I like to think that they are still independent of the barrier thing and the
> last bug comment by Eric, where he quoted from Jeff, supports this:
>
>> Eric Sandeen 2014-06-24 10:32:06 EDT
>>
>> As Jeff Moyer says:
>>> The file system will manually order dependent I/O.
>>> What I mean by that is the file system will send down any I/O for the
>>> transaction log, wait for that to complete, issue a barrier (which will
>>> be a noop in the case of a battery-backed write cache), and then send
>>> down the commit block along with another barrier.  As such, you cannot
>>> have the I/O scheduler reorder the commit block and the log entry with
>>> which it is associated.
If it truly works that way, then I do not see any problems using nobarrier 
with SSDs that have power loss protection.
I have already seen some people say that enterprise SSDs with PLP simply 
ignore the sync call. If that's the case,
then using nobarrier would bring no performance improvement...
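
A rough sketch of how we could test that on our drives (not a proper 
benchmark; path and iteration count are just examples): time small 
write+fsync cycles on the SSD, once on a barrier mount and once on a 
nobarrier mount. If the numbers are basically the same, the flushes cost 
nothing on this hardware and nobarrier has nothing left to gain:

import os
import time

def avg_fsync_latency(path: str = "/mnt/ssd/fsync_test.dat",
                      iterations: int = 1000) -> float:
    # Repeatedly write a small block and fsync it; with barriers enabled
    # each fsync also triggers a cache flush on the device.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    try:
        start = time.perf_counter()
        for _ in range(iterations):
            os.write(fd, b"x" * 4096)
            os.fsync(fd)
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
        os.unlink(path)
    return elapsed / iterations

if __name__ == "__main__":
    print(f"average write+fsync latency: {avg_fsync_latency() * 1e3:.3f} ms")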

Cheers, Georg
