Re: Tuning XFS for real time audio on a laptop with encrypted LVM

To: Pedro Ribeiro <pedrib@xxxxxxxxx>
Subject: Re: Tuning XFS for real time audio on a laptop with encrypted LVM
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Fri, 21 May 2010 14:14:15 +1000
Cc: xfs@xxxxxxxxxxx
In-reply-to: <AANLkTilJD2Jfr4W97BiAlyc_7C9jLEhEzuEWSDyVKXYP@xxxxxxxxxxxxxx>
References: <AANLkTilJD2Jfr4W97BiAlyc_7C9jLEhEzuEWSDyVKXYP@xxxxxxxxxxxxxx>
User-agent: Mutt/1.5.20 (2009-06-14)
On Fri, May 21, 2010 at 03:16:15AM +0100, Pedro Ribeiro wrote:
> Hi all,
> I was wondering what is the best scheduler for my use case given my
> current hardware.
> I have a laptop with a fast Core 2 duo at 2.26 and a nice amount of
> ram (4GB) which I use primarily for real time audio (though without a
> -rt kernel). All my partitions are XFS under LVM which itself is
> contained on a LUKS partition (encrypted with AES 128).
> CFQ currently does not perform very well and causes a lot of thrashing
> and high latencies when I/O usage is high. Changing it to the noop
> scheduler solves some of the problems and makes it more responsive.
> Still, performance is a bit of a letdown: it takes 1m30s to unpack the
> linux-2.6.34 tarball and a massive 2m30s to rm -r it.
> I have lazy-count=1, noatime, logbufs=8, logbsize=256k and a 128m log.
> Is there any tunable I should mess with to solve this?

Depends if you value your data or not. If you don't care about
corruption or data loss on sudden power loss (e.g. battery runs
flat), then add nobarrier to your mount options. Otherwise, you're
close to the best performance you are going to get on that hardware
with XFS.
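
For reference, that looks something like the following (the device and
mountpoint here are placeholders, not taken from your setup):

```shell
# Remount with write barriers disabled. ONLY do this if you can
# tolerate losing recently written data on a sudden power loss:
mount -o remount,nobarrier /dev/mapper/vg-home /home

# To make it persistent, add nobarrier to the existing options in
# /etc/fstab, e.g.:
# /dev/mapper/vg-home  /home  xfs  noatime,logbufs=8,logbsize=256k,nobarrier  0 0
```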

> And what do you
> think of my scheduler change (I haven't tested it that much to be
> honest)?

I only ever use the noop scheduler with XFS these days. CFQ has been
a steaming pile of ever-changing regressions for the past 4 or 5
kernel releases, so I stopped using it. Besides, XFS is often 10-15%
faster on noop for the same workload, anyway...
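
If it helps, switching schedulers at runtime is just a sysfs write
(sda is an example device; adjust for your disk):

```shell
# Select the noop elevator for one disk; this does not survive a reboot:
echo noop > /sys/block/sda/queue/scheduler

# The active scheduler is shown in [brackets]:
cat /sys/block/sda/queue/scheduler

# To make noop the default for all disks, boot with "elevator=noop"
# on the kernel command line.
```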


Dave Chinner
