
Re: xfs: very slow after mount, very slow at umount

To: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
Subject: Re: xfs: very slow after mount, very slow at umount
From: david@xxxxxxx
Date: Thu, 27 Jan 2011 12:11:54 -0800 (PST)
Cc: Mark Lord <kernel@xxxxxxxxxxxx>, Justin Piszcz <jpiszcz@xxxxxxxxxxxxxxx>, Christoph Hellwig <hch@xxxxxxxxxxxxx>, Alex Elder <aelder@xxxxxxx>, Linux Kernel <linux-kernel@xxxxxxxxxxxxxxx>, xfs@xxxxxxxxxxx
In-reply-to: <4D41CA16.8070001@xxxxxxxxxxxxxxxxx>
References: <4D40C8D1.8090202@xxxxxxxxxxxx> <20110127033011.GH21311@dastard> <4D40EB2F.2050809@xxxxxxxxxxxx> <4D418B57.1000501@xxxxxxxxxxxx> <alpine.DEB.2.00.1101271040000.31246@xxxxxxxxxxxxxxxx> <4D419765.4070805@xxxxxxxxxxxx> <4D41CA16.8070001@xxxxxxxxxxxxxxxxx>
User-agent: Alpine 2.00 (DEB 1167 2008-08-23)
On Thu, 27 Jan 2011, Stan Hoeppner wrote:

>> Rather than hundreds or thousands of "tiny" MB-sized extents.
>> I wonder what the best mkfs.xfs parameters might be to encourage that?

> You need to use the mkfs.xfs defaults for any single-drive filesystem, and trust
> the allocator to do the right thing.  XFS uses variable-size extents and the
> size is chosen dynamically--you don't have direct or indirect control of the
> extent size chosen for a given file or set of files, AFAIK.

> As Dave Chinner is fond of pointing out, it's those who don't know enough about
> XFS and choose custom settings that most often get themselves into trouble (as
> you've already done once).  :)

> The defaults exist for a reason, and they weren't chosen willy-nilly.  The vast
> bulk of XFS's configurability exists for tuning maximum performance on large to
> very large RAID arrays.  There isn't much, if any, additional performance to be
> gained with parameter tweaks on a single-drive XFS filesystem.
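
In that spirit, the single-drive recipe really is just the bare command; the
device and mount point names below are illustrative, not from this thread:

```shell
# Single-drive case: take every default, then look at what mkfs picked.
mkfs.xfs /dev/sdb       # hypothetical device; no agcount/su/sw overrides
mount /dev/sdb /mnt
xfs_info /mnt           # prints the agcount, agsize, bsize, etc. mkfs chose
```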

How do I work out how to set things up on multi-disk systems? The documentation I've found online is not that helpful, and in some ways contradictory.

If there really are good rules for how to do this, it would be very helpful if you could just give mkfs.xfs the information about your system ("this partition is on a 16-drive RAID6 array") and have it do the right thing.
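
For what it's worth, mkfs.xfs does accept geometry hints via su (stripe unit)
and sw (stripe width).  A rough sketch of the arithmetic for a 16-drive RAID6
-- the 64 KiB chunk size and the /dev/md0 name are assumptions for
illustration, not details from this thread:

```shell
# Stripe geometry for a hypothetical 16-drive RAID6 with 64 KiB chunks.
NDISKS=16
PARITY=2                     # RAID6 dedicates two disks' worth of space to parity
CHUNK_KB=64                  # per-disk chunk size reported by the RAID layer
SU="${CHUNK_KB}k"            # stripe unit = one chunk
SW=$((NDISKS - PARITY))      # stripe width counts only the data-bearing disks
echo "mkfs.xfs -d su=${SU},sw=${SW} /dev/md0"
```

The point of su/sw is that the allocator can then align extents to full RAID
stripes, avoiding read-modify-write cycles on the parity disks.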

David Lang

> A brief explanation of agcount:  the filesystem is divided into agcount regions
> called allocation groups, or AGs.  The allocator writes to all AGs in parallel
> to increase performance.  With extremely fast storage (SSDs, large high-RPM RAID)
> this increases throughput, as the storage can often sink writes faster than a
> serial writer can push data.  In your case, you have a single slow spindle with
> over 7,000 AGs.  The allocator is therefore writing to over 7,000 locations on
> that single disk simultaneously--or at least trying to--so the poor head on
> that drive is being whipped all over the place without actually getting much
> writing done.  To add insult to injury, this is one of those low-RPM "green"
> drives with poor seek performance, correct?
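
For a sense of scale, the back-of-envelope arithmetic behind that looks like
this; the 1 TB disk size is an assumption for illustration (only the ~7,000 AG
count comes from the thread), and 4 AGs is the typical mkfs.xfs default for a
single disk:

```shell
# Compare AG sizes on a hypothetical 1 TB single disk.
DISK_MB=$((1000 * 1024))    # ~1 TB, illustrative
BAD_AGCOUNT=7000            # roughly the pathological count described above
DEFAULT_AGCOUNT=4           # typical mkfs.xfs default for a single disk
echo "AG size at 7000 AGs: $((DISK_MB / BAD_AGCOUNT)) MB each"
echo "AG size at 4 AGs:    $((DISK_MB / DEFAULT_AGCOUNT)) MB each"
```

Thousands of ~146 MB regions means the head seeks constantly between tiny
allocation targets, instead of streaming into a handful of large ones.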

> Trust the defaults.  If they give you problems (unlikely) then we can talk. ;)
