
Re: Performance decrease over time

To: Dave Chinner <david@xxxxxxxxxxxxx>
Subject: Re: Performance decrease over time
From: aurfalien <aurfalien@xxxxxxxxx>
Date: Fri, 2 Aug 2013 16:00:09 -0700
Cc: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>, Markus Trippelsdorf <markus@xxxxxxxxxxxxxxx>, xfs@xxxxxxxxxxx
In-reply-to: <20130802223007.GY13468@dastard>
References: <20130801202108.GA355@x4> <20130802022518.GZ7118@dastard> <51FB6A4C.5040103@xxxxxxxxxxxxxxxxx> <20130802223007.GY13468@dastard>
On Aug 2, 2013, at 3:30 PM, Dave Chinner wrote:

> On Fri, Aug 02, 2013 at 03:14:04AM -0500, Stan Hoeppner wrote:
>> On 8/1/2013 9:25 PM, Dave Chinner wrote:
>> ...
>> 
>>> So really, the numbers only reflect a difference in layout of the
>>> files being tested. And using small direct IO means that the
>>> filesystem will tend to fill small free spaces close to the
>>> inode first, and so will fragment the file based on the locality of
>>> fragmented free space to the owner inode. In the case of the new
>>> filesystem, there is only large, contiguous free space near the
>>> inode....
>> ...
>>>> What can be
>>>> done (as a user) to mitigate this effect? 
>>> 
>>> Buy faster disks ;)
>>> 
>>> Seriously, all filesystems age and get significantly slower as they
>>> get used. XFS is not really designed for single spindles - its
>>> algorithms are designed to spread data out over the entire device
>>> and so be able to make use of many, many spindles that make up the
>>> device. This behaviour works extremely well for that sort of
>>> large-scale scenario, but it's close to the worst-case aging
>>> behaviour for a single, very slow spindle like you are using.  Hence
>>> once the filesystem is over the "we have pristine, contiguous
>>> freespace" hump on your hardware, it's all downhill and there's not
>>> much you can do about it....
>> 
>> Wouldn't the inode32 allocator yield somewhat better results with this
>> direct IO workload?
> 
> What direct IO workload? Oh, you mean the IOzone test?
> 
> What's the point of trying to optimise IOzone throughput? It matters
> nothing to Markus - he's just using it to demonstrate the point that
> free space is not as contiguous as it once was...
> 
> As it is, inode32 will do nothing to speed up performance on a
> single spindle - it spreads all files out across the entire disk, so
> locality between the inode and the data is guaranteed to be worse
> than on an aged inode64 filesystem. inode32 intentionally spreads data
> across the disk without caring about access locality so the average
> seek from inode read to data read is half the spindle. That's why
> inode64 is so much faster than inode32 on general workloads - the
> seek between inode and data is closer to the track-to-track seek
> time than the average seek time.

Totally concur 100%.

In fact, I've either retired our 32-bit apps or moved them to local 
storage, since we run almost everything off a NAS-type setup using XFS.
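
For anyone wondering why the 32-bit apps matter: with inode64 the inode
numbers can exceed 32 bits, and a 32-bit binary built without
-D_FILE_OFFSET_BITS=64 gets EOVERFLOW back from stat(2) on such files.
Here's a rough sketch that flags them (just illustrative, not anything
we run in production):

/* Print each file's inode number; flag the ones that don't fit in
 * 32 bits. A 32-bit app without large-file support would see
 * EOVERFLOW from stat(2) on these. */
#include <stdio.h>
#include <stdint.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
    for (int i = 1; i < argc; i++) {
        struct stat st;

        if (stat(argv[i], &st) != 0) {
            perror(argv[i]);
            continue;
        }
        printf("%s: inode %ju%s\n", argv[i], (uintmax_t)st.st_ino,
               st.st_ino > UINT32_MAX ? " (64-bit inode)" : "");
    }
    return 0;
}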

The benefits of inode64 in our environment were just too great to 
sideline, especially with our older SATA 2 disks in use.

For slower disks, I'd say inode64 is a must.  But I'm talking several 
disks in a RAID config, as I don't do single-disk XFS.
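
If anyone wants to watch the aging Dave describes, xfs_bmap(8) and
filefrag(1) will show per-file extent counts as the filesystem fills
up. Below is a rough sketch of the same check using the FIEMAP ioctl
(a minimal sketch, untested here, just to show the mechanism):

/* Count a file's extents via FIEMAP. With fm_extent_count == 0 the
 * kernel just reports how many extents the file has in
 * fm_mapped_extents, without copying any extent records out. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>
#include <linux/fiemap.h>

int main(int argc, char **argv)
{
    struct fiemap fm;
    int fd;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    fd = open(argv[1], O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    memset(&fm, 0, sizeof(fm));
    fm.fm_start = 0;
    fm.fm_length = FIEMAP_MAX_OFFSET;   /* map the whole file */
    fm.fm_flags = FIEMAP_FLAG_SYNC;     /* flush delalloc first */
    fm.fm_extent_count = 0;             /* count only, no records */

    if (ioctl(fd, FS_IOC_FIEMAP, &fm) < 0) {
        perror("FS_IOC_FIEMAP");
        close(fd);
        return 1;
    }
    printf("%s: %u extent(s)\n", argv[1], fm.fm_mapped_extents);
    close(fd);
    return 0;
}

The more a file's extent count grows over time, the more fragmented
the free space it was allocated from had already become.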

- aurf