
Re: Failure growing xfs with linux 3.10.5

To: Michael Maier <m1278468@xxxxxxxxxxx>
Subject: Re: Failure growing xfs with linux 3.10.5
From: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
Date: Wed, 14 Aug 2013 17:20:46 -0500
Cc: Eric Sandeen <sandeen@xxxxxxxxxxx>, xfs@xxxxxxxxxxx
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <520BC8B1.9060106@xxxxxxxxxxx>
References: <52073905.8010608@xxxxxxxxxxx> <5207D9C4.7020102@xxxxxxxxxxx> <52090C6C.6060604@xxxxxxxxxxx> <20130813000453.GQ12779@dastard> <520A5132.6090608@xxxxxxxxxxx> <520B1B4F.9070800@xxxxxxxxxxxxxxxxx> <520B9CCF.1040908@xxxxxxxxxxx> <520BBEFB.9030002@xxxxxxxxxxxxxxxxx> <520BC8B1.9060106@xxxxxxxxxxx>
Reply-to: stan@xxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (Windows NT 5.1; rv:17.0) Gecko/20130801 Thunderbird/17.0.8
On 8/14/2013 1:13 PM, Michael Maier wrote:
> Stan Hoeppner wrote:
>> If you keep growing until you consume the disk, you'll have ~100
>> allocation groups.  Typically you'd want to have no more than 4 AGs per
>> spindle.  You already have 42 (or 45) which will tend to seek the disk
>> to death with many workloads, driving latency through the roof and
>> decreasing throughput substantially.  Do you notice any performance
>> problems yet?
> What are expected rates for copying e.g. a 10GB file? It's a Seagate
> Barracuda 3000GB (Model ST3000DM001) connected to a SATA 6 Gb/s
> controller. Both the source and the destination FS are LUKS encrypted.
> About 3 GB usable RAM (cache), AMD FX-8350 processor @ max. 3800MHz.

Too many variables really to hazard a guess.  If you put a gun to my
head, I'd say strictly looking at the ingest rate of the Seagate, at a
little less than half capacity, writing about 1/3rd of the way down the
platters, optimum throughput should be 80-100 MB/s or so in the last 3
AGs, if free space isn't too heavily fragmented.
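If you want a concrete number for your own setup rather than my guess, a quick-and-dirty sequential write test is easy to run. A sketch (the scratch filename is my own invention; run it in a directory on the filesystem you want to measure, so the LUKS overhead is included):

```shell
# Write 256 MiB sequentially and let dd report the effective rate.
# conv=fsync forces the data to disk before dd prints its summary,
# so the number isn't just the page cache absorbing the write.
tf=./xfs_write_test.bin
dd if=/dev/zero of="$tf" bs=1M count=256 conv=fsync 2>"$tf.log"
bytes=$(stat -c %s "$tf")   # confirm the full 256 MiB actually landed
tail -n1 "$tf.log"          # dd's summary line includes the MB/s figure
rm -f "$tf" "$tf.log"
```

A larger count (a few GiB) gives a more stable figure on a 3 TB drive, at the cost of a longer run.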

> It gets slower as the free space on the fs is reduced (beginning at
> about the last GB).

This is due to writing into fragmented free space in the 40+ AGs.  It
occurs once the last large free space extents, those created in the
last 3 AGs by the most recent xfs_growfs, have been consumed.

> Resizing it makes the problem
> disappear again.

After adding another ~90 GB of free space, XFS will preferentially write
large files into the new large free extents, avoiding the fragmented
free space in the preexisting AGs.
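The arithmetic behind the AG counts in this thread can be sketched quickly (assuming, from the numbers above, a ~3000 GB disk grown in ~90 GB steps with 3 new AGs per grow):

```shell
# Each grow adds ~90 GB as 3 AGs => ~30 GB per AG.
grow_gb=90; ags_per_grow=3
ag_gb=$(( grow_gb / ags_per_grow ))
# Growing until the whole 3000 GB disk is consumed:
disk_gb=3000
echo $(( disk_gb / ag_gb ))   # => 100 AGs, matching the ~100 estimate
```

This is why defaulting to a handful of large AGs at mkfs time, rather than accreting small ones via repeated grows, matters on a single spindle.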

>> Or is this XFS strictly being used as a WORM like backup
>> silo?
> yes

With so many smallish AGs, so many grow ops, and this backup workload,
I'm curious as to what your free space map looks like.  Would you mind
posting the output of the following command, if you can?

$ xfs_db -r -c freesp <dev>
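For anyone reading along without a spare device to poke at, xfs_db also works against a filesystem image, and `freesp -s` appends summary totals below the histogram. A sketch (scratch filename assumed; requires xfsprogs, and the block skips itself if the tools are absent):

```shell
# Build a small throwaway XFS image and dump its free space histogram.
if command -v mkfs.xfs >/dev/null 2>&1 && command -v xfs_db >/dev/null 2>&1; then
    truncate -s 512M scratch.img      # sparse backing file, no root needed
    mkfs.xfs -q -f scratch.img
    # -r = read-only open; 'freesp -s' prints the extent-size histogram
    # plus totals (free extents, free blocks, average extent size)
    out=$(xfs_db -r -c 'freesp -s' scratch.img)
    echo "$out"
    rm -f scratch.img
else
    out="xfsprogs not installed; skipping demo"
fi
```

On a freshly made image the histogram is nearly all one huge extent per AG; a heavily grown, nearly full backup silo like yours should instead show many small extents piled up in the low size buckets.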


