
Re: Performance question

To: Joshua Baker-LePain <jlb17@xxxxxxxx>, Linux xfs mailing list <linux-xfs@xxxxxxxxxxx>
Subject: Re: Performance question
From: Seth Mos <knuffie@xxxxxxxxx>
Date: Wed, 18 Feb 2004 19:41:45 +0100
In-reply-to: <Pine.LNX.4.58.0402181131210.25541@xxxxxxxxxxxxxxxxxx>
Sender: linux-xfs-bounce@xxxxxxxxxxx
At 12:57 18-2-2004 -0500, Joshua Baker-LePain wrote:
> I've pretty much ruled out hardware.  I've swapped the 3ware and rebuilt
> the array, and the disks all show good SMART data.

Have you considered a RAID 10 configuration? It roughly halves the usable storage space, but it can improve I/O performance several times over.

You could flip the cover and check whether the controller is pushing the disks hard (the LEDs on the 3ware controller). Although, from what you describe above, it's probably a directory growing a bit on the large side.
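If you want to confirm which directory is the offender, a quick sketch (the path below is just an example):

  # count entries without sorting, which matters with millions of files
  ls -f /data/bigdir | wc -l
  # total space used by the directory tree
  du -sh /data/bigdir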

> In narrowing down the problem, it seems that one particular (large)
> directory is the main culprit.  This dir is 471,401,788 KB big and has
> 3,377,520 files (~140KB/file average).  Is the large number of files the
> entire culprit?  If so, is there anything I can do to alleviate the
> problem?  I already 'mount -o logbufs=8'.  Here's xfs_info on that
> partition:
>
> meta-data=/data                  isize=256    agcount=227, agsize=1048576 blks
>          =                       sectsz=512

Perhaps try creating the filesystem with a larger inode size, such as 512 bytes. You could also use version 2 logs instead of version 1; that mostly helped software RAID 5, although it might help hardware RAID as well.
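For what it's worth, a rough sketch of how that could look when recreating the filesystem (the device name and mount point are just placeholders, and mkfs would of course destroy the existing data):

  mkfs.xfs -i size=512 -l version=2 /dev/sdb1
  mount -t xfs -o logbufs=8,logbsize=32768 /dev/sdb1 /data

Version 2 logs allow a larger in-memory log buffer (logbsize), which is where most of the gain comes from.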

Cheers

--
Seth
I don't make sense, I don't pretend to either. Questions?

