
Re: Performance question

To: Seth Mos <knuffie@xxxxxxxxx>
Subject: Re: Performance question
From: Christian Guggenberger <christian.guggenberger@xxxxxxxxxxxxxxxxxxxxxxxx>
Date: Wed, 18 Feb 2004 19:47:15 +0100
Cc: Joshua Baker-LePain <jlb17@xxxxxxxx>, Linux xfs mailing list <linux-xfs@xxxxxxxxxxx>
In-reply-to: <4.3.2.7.2.20040218193615.035c24c8@xxxxxxxxxxxxx>
References: <4.3.2.7.2.20040218193615.035c24c8@xxxxxxxxxxxxx>
Reply-to: christian.guggenberger@xxxxxxxxxxxxxxxxxxxxxxxx
Sender: linux-xfs-bounce@xxxxxxxxxxx
On Wed, 2004-02-18 at 19:41, Seth Mos wrote:
> At 12:57 18-2-2004 -0500, Joshua Baker-LePain wrote:
> >I've pretty much ruled out hardware.  I've swapped the 3ware and rebuilt
> >the array, and the disks all show good SMART data.
> 
> Have you considered a RAID 10 configuration? It would roughly halve the 
> storage space but increase IO severalfold.
> 
> You could flip the cover and see if the disks are being pushed hard by the 
> controller (the LEDs on the 3ware controller). Although from what you 
> described above, it's probably a directory growing a bit on the large side.
> 
> >In narrowing down the problem, it seems that one particular (large)
> >directory is the main culprit.  This dir is 471,401,788 KB big and has
> >3,377,520 files (~140KB/file average).  Is the large number of files the
> >entire culprit?  If so, is there anything I can do to alleviate the
> >problem?  I already 'mount -o logbufs=8'.  Here's xfs_info on that
> >partition:
> >
> >meta-data=/data                  isize=256    agcount=227, agsize=1048576 blks
> >         =                       sectsz=512
> 
> Perhaps try creating the filesystem with a larger inode size, like 512. You 
> could also use version logs instead of version which mostly helped software 

That should read 'version 2 logs instead of version 1 logs', shouldn't it? :-)
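
For the archives, here is roughly what those two suggestions would look like
on the command line. This is only a sketch: the device and mount point are
made-up examples, and mkfs.xfs destroys any data already on the device.

    # create the filesystem with 512-byte inodes and a version 2 log
    mkfs.xfs -i size=512 -l version=2 /dev/sda1

    # mount with more in-core log buffers (as Joshua already does)
    mount -t xfs -o logbufs=8 /dev/sda1 /data

With a version 2 log you can also raise the log buffer size at mount time
via the logbsize= option, which may help a metadata-heavy workload like
this one.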

 - Christian



