
To: Dave Chinner <david@xxxxxxxxxxxxx>
Subject: Re: mount options question
From: Greg Freemyer <greg.freemyer@xxxxxxxxx>
Date: Fri, 29 Aug 2014 23:43:25 -0400
Cc: Stefan Ring <stefanrin@xxxxxxxxx>, Xfs <xfs@xxxxxxxxxxx>, "Johannes B. Kernel" <weber@xxxxxxxxxx>
In-reply-to: <20140829234505.GE20518@dastard>
References: <3dc9caf6f9b415f6e4c0ebac1f1626d3@xxxxxxxxxx> <20140827230732.GN20518@dastard> <CAAxjCEy9UyjDVrTZ4GdRyDy330wUqLGWurzrLAE6-7Q0KdYzvA@xxxxxxxxxxxxxx> <20140829083738.GD20518@dastard> <6306cfa5-457d-4794-8fc0-1768f7f1deec@xxxxxxxxxxxxxxxxx> <20140829234505.GE20518@dastard>
On Fri, Aug 29, 2014 at 7:45 PM, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
> On Fri, Aug 29, 2014 at 07:26:59AM -0400, Greg Freemyer wrote:
>>
>>
>> On August 29, 2014 4:37:38 AM EDT, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
>> >On Fri, Aug 29, 2014 at 08:31:43AM +0200, Stefan Ring wrote:
>> >> On Thu, Aug 28, 2014 at 1:07 AM, Dave Chinner <david@xxxxxxxxxxxxx>
>> >wrote:
>> >> > On Wed, Aug 27, 2014 at 12:14:21PM +0200, Marko Weber|8000 wrote:
>> >> >>
>> >> >> sorry dave and all other,
>> >> >>
>> >> >> can you guys recommend the most stable / best mount options
>> >> >> for my new server with SSDs and an XFS filesystem?
>> >> >>
>> >> >> at the moment I would set:
>> >> >> defaults,nobarrier,discard,logbsize=256k,noikeep
>> >> >> or is just "defaults" the best solution, with xfs itself
>> >> >> detecting what's best?
>> >> >>
>> >> >> can you guide me a bit?
>> >> >>
>> >> >> as elevator I set elevator=noop
>> >> >>
>> >> >> I set up the disks with Linux software RAID (RAID1). On top of
>> >> >> the RAID is LVM (for some data partitions).
>> >> >>
>> >> >>
>> >> >> would be nice to hear some tips from you
>> >> >
>> >> > Unless you have specific requirements or have the knowledge to
>> >> > understand how the different options affect behaviour, then just
>> >> > use the defaults.
>> >>
>> >> Mostly agreed, but using "discard" would be a no-brainer for me. I
>> >> suppose XFS does not automatically switch it on for non-rotational
>> >> storage.
>> >
>> >Yup, you're not using your brain. :P
>> >
>> >mount -o discard *sucks* on so many levels it is not funny. I don't
>> >recommend that anybody *ever* use it, on XFS, ext4 or btrfs.  Just
>> >use fstrim if you ever need to clean up an SSD.
>>
>
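[To make that concrete for anyone reading along: fstrim is the
userspace tool, run on demand or from cron. A minimal sketch --
the mount point and schedule here are only examples:

    # trim all free space on the filesystem mounted at /data
    fstrim -v /data

    # or weekly, via a two-line /etc/cron.weekly/fstrim:
    #!/bin/sh
    fstrim /data

That batches the discards into one pass that you schedule, instead
of paying for them on every extent free. -Greg]
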
>> In particular, TRIM is a synchronous command in many SSDs; I don't
>> know about the impact on the kernel block stack.
>
> blkdev_issue_discard() is synchronous as well, which is a big
> problem for something that needs to iterate (potentially) thousands
> of regions for discard when a journal checkpoint completes....
>
>> For the SSD
>> itself that means it basically flushes its write cache on
>> every TRIM call.
>
> Oh, it's worse than that, usually. TRIM is one of the slowest
> operations you can run on many drives, so it can take hundreds of
> milliseconds to execute....
>
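[This is easy to measure with blkdiscard(8) from util-linux --
destructive, so only on a scratch device; the device name and range
below are just examples:

    # time a single 1 GiB discard on an empty test partition
    time blkdiscard --offset 0 --length 1073741824 /dev/sdX1

On drives with slow TRIM the elapsed time makes the cost obvious,
and -o discard pays something like it for every batch of freed
extents. -Greg]
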
>> I often tell people to do performance testing with and without it
>> and report back to me if they see no degradation caused by -o
>> discard.  To date no one has ever reported back.  I think -o
>> discard should never have been introduced, and certainly not 5
>> years ago.
>
> It was introduced into XFS as a checkbox feature. We resisted as
> long as we could, but too many people were shouting at us that we
> needed realtime discard because ext4 and btrfs had it. Of course,
> all those people shouting for it realised we were right the moment
> they tried to use it and found that performance was woeful. Not to
> mention that SSD TRIM implementations were so bad that they caused
> random data corruption by trimming the wrong regions; drives would
> hang at random; and in a couple of cases too many TRIMs issued too
> fast would brick the drive...
>
> So, yeah, it was implemented because lots of people demanded it, not
> because it was a good idea.
>
>> In theory, SSDs that handle TRIM as an asynchronous
>> command are now available, but I don't know any specifics.
>
> Requires SATA 3.1 for the queued TRIM, and I'm not sure that there
> is any hardware out there that uses this end-to-end yet. And the
> block layer can't make use of it yet, either...
>
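[If anyone wants to check what their drive advertises, hdparm -I
dumps the ATA identify data; the device name is an example:

    # look for the "Data Set Management TRIM supported" lines
    hdparm -I /dev/sdX | grep -i trim

Whether drive, controller and kernel all do *queued* TRIM
end-to-end is another matter, as Dave says. -Greg]
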
>> In any case, fstrim works for almost all workloads and doesn't
>> have the potential continuous negative impact of -o discard.
>
> Precisely my point - you just gave some more detail. :)
>
Yes, I was only attempting to elaborate on your answer, but thanks for
elaborating on mine.
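
For Marko's original question, "just use the defaults" really does
mean a plain fstab entry, something like this (device and mount
point are of course examples):

    /dev/vg0/data  /data  xfs  defaults  0  2

plus an occasional fstrim, rather than nobarrier/discard/etc.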

Greg
