On Saturday, 12 December 2015, 10:24:25 CET, Georg Schönberger wrote:
> Hi folks!
> We are using a lot of SSDs in our Ceph clusters with XFS. Our SSDs have
> Power Loss Protection via capacitors, so is it safe in all cases to run XFS
> with nobarrier on them? Or is there indeed a need for a specific I/O
I do think that using nobarrier would be safe with those SSDs, as long as there
is no other volatile caching happening on the hardware side, for example inside
the RAID/HBA controller that talks to the SSDs.
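
For completeness, disabling barriers on such hardware is just the usual mount
option; the device and OSD mountpoint below are made-up examples, adjust for
your own layout:

```
# /etc/fstab entry (hypothetical device and mountpoint):
/dev/sdb1  /var/lib/ceph/osd/ceph-0  xfs  noatime,nobarrier  0 0
```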
I always thought barrier/nobarrier acts independently of the I/O scheduler,
but I can understand the concern in the bug report you linked to below. As for
I/O schedulers: with recent kernels and block multiqueue (blk-mq) I see the
scheduler being set to "none".
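
You can check (and change) what a given disk uses through sysfs; "sdb" here is
just a placeholder for whatever device backs your OSD:

```
cat /sys/block/sdb/queue/scheduler
# e.g. prints: noop deadline [cfq]  -- or "none" on blk-mq devices
echo noop > /sys/block/sdb/queue/scheduler   # changing it needs root
```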
> I have found a recent discussion on the Ceph mailing list, anyone from XFS
> that can help us?
Interesting. Never thought of that one.
So would it be safe to interrupt the flow of data towards the SSD at any point
in time with a reordering I/O scheduler in place? And what about blk-mq, which
has multiple software queues?
I like to think that they are still independent of the barrier mechanism, and
the last bug comment by Eric, where he quotes Jeff, supports this:
> Eric Sandeen 2014-06-24 10:32:06 EDT
> As Jeff Moyer says:
> > The file system will manually order dependent I/O.
> > What I mean by that is the file system will send down any I/O for the
> > transaction log, wait for that to complete, issue a barrier (which will
> > be a noop in the case of a battery-backed write cache), and then send
> > down the commit block along with another barrier. As such, you cannot
> > have the I/O scheduler reorder the commit block and the log entry with
> > which it is associated.
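
The sequence Jeff describes can be mimicked in userspace with plain files, with
sync(1) standing in for the bios and cache-flush requests the filesystem would
actually issue; the path below is invented for the demo:

```shell
# Sketch of the filesystem's manual ordering of dependent I/O.
demo=/tmp/journal_order_demo

# 1. send down the I/O for the transaction log
printf 'log: update inode 42\n' > "$demo"

# 2. wait for it to be durable before anything else is issued
#    (the "barrier" step; effectively a no-op cost-wise when the
#    write cache is protected by capacitors or a battery)
sync

# 3. only now send down the commit record, then flush again
printf 'commit\n' >> "$demo"
sync

cat "$demo"
```

Because step 3 is not issued until step 2 completes, no scheduler ever sees the
commit record and its log entries in flight at the same time, so it has nothing
to reorder.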