
Re: Fragmentation Issue We Are Having

To: David Fuller <dfuller@xxxxxxxxx>
Subject: Re: Fragmentation Issue We Are Having
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Thu, 12 Apr 2012 12:16:26 +1000
Cc: xfs@xxxxxxxxxxx
In-reply-to: <CADrkzimg891ZBGK7-UzhGeey16KwH-ZXpEqFr=O3KwD3qA9LwQ@xxxxxxxxxxxxxx>
References: <CADrkzimg891ZBGK7-UzhGeey16KwH-ZXpEqFr=O3KwD3qA9LwQ@xxxxxxxxxxxxxx>
User-agent: Mutt/1.5.21 (2010-09-15)
On Wed, Apr 11, 2012 at 06:04:25PM -0700, David Fuller wrote:
> We seem to be having an issue whereby our database server
> gets to 90% or higher fragmentation.  When it gets to this point
> we need to remove it from production and defrag it using the
> xfs_fsr tool.

Bad assumption.

> The server does get a lot of writes and reads.  Is
> there something we can do to reduce the fragmentation or could
> this be a result of hard disk tweaks we use or mount options?
> 
> Here are some of the tweaks we apply:
> 
> /bin/echo "512" > /sys/block/sda/queue/read_ahead_kb
> /bin/echo "10000" > /sys/block/sda/queue/nr_requests
> /bin/echo "512" > /sys/block/sdb/queue/read_ahead_kb
> /bin/echo "10000" > /sys/block/sdb/queue/nr_requests
> /bin/echo "noop" > /sys/block/sda/queue/scheduler
> /bin/echo "noop" > /sys/block/sdb/queue/scheduler

They have no effect on filesystem fragmentation.

> And here are the mount options on one of our servers:
> 
>  xfs     rw,noikeep,allocsize=256M,logbufs=8,sunit=128,swidth=2304
> 
> the sunit and swidth vary on each server based on disk drives.
> 
> We do use LVM on the volume where the mysql data is stored
> as we need this for snapshotting.  Here is an example of the current state:
>
> xfs_db -c frag -r /dev/mapper/vgmysql-lvmysql
> actual 42586, ideal 3134, fragmentation factor 92.64%

Read this first:

http://xfs.org/index.php/XFS_FAQ#Q:_The_xfs_db_.22frag.22_command_says_I.27m_over_50.25.__Is_that_bad.3F

Then decide whether an average of roughly 14 extents per file is really
a problem or not.
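To make the FAQ's point concrete: the "fragmentation factor" is just
(actual - ideal) / actual, so it approaches 100% even when files average
only a handful of extents. A minimal sketch, using the numbers quoted
above (the variable names are illustrative, not from any XFS tool):

```python
# How xfs_db's "fragmentation factor" is derived from its "actual" and
# "ideal" extent counts, per the XFS FAQ:
#   factor = (actual - ideal) / actual * 100
# Figures below are from the xfs_db output quoted in this thread.

actual_extents = 42586  # extents currently allocated across all files
ideal_extents = 3134    # extents needed if every file were contiguous

frag_factor = (actual_extents - ideal_extents) / actual_extents * 100
extents_per_file = actual_extents / ideal_extents  # rough average

print(f"fragmentation factor: {frag_factor:.2f}%")          # 92.64%
print(f"average extents per file: {extents_per_file:.1f}")  # 13.6
```

Note how a mere ~14 extents per file already reports as "92.64%
fragmented" — the percentage saturates quickly and is a poor measure of
whether defragmentation is actually needed.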

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
