On Sun, 30 Dec 2007, Brad Langhorst wrote:
I have this system
- 3ware 9650 controller
- 4 disk raid 10
- 64k stripe size
- this is a VMware host, so lots of r/w on a few big files.
I'm not entirely satisfied with its performance.
Typical throughput from iostat during large file movements is about
100 MB/s read and 80 MB/s write.
When I set this up, I did not fully understand all the details... so I
want to check a few things.
- Is the partition aligned correctly? I fear not...
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *            1          24      192748+  83  Linux
/dev/sda2               25       19449   156031312+  83  Linux
Is this where I'm losing performance?
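A back-of-the-envelope check (assuming fdisk's usual 255-head/63-sector
geometry, i.e. 16065 sectors per cylinder):

    /dev/sda2 starts at cylinder 25  ->  sector 24 * 16065 = 385560
    385560 sectors * 512 bytes       =   197406720 bytes
    197406720 / 65536 (64k stripe)   =   3012.1875  (not a whole number)

So if that geometry holds, the partition start is not stripe-aligned.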
- What should the sunit and swidth settings be during mount?
I guess with raid 10 the width is 2, so...
sunit = 128 (64k/512) and swidth = 256 (2*64k/512)
Or maybe I should use a width of 1?
Remounting (mount -o remount) with these options does not lead
to a noticeable change in performance. Must I recreate the fs or
unmount and remount?
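For the record, the remount I tried was along these lines (options as
computed above; / is the XFS filesystem per xfs_info below):

    mount -o remount,sunit=128,swidth=256 /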
Here's the output of xfs_info in case it's relevant.
xfs_info /
meta-data=/dev/sda2              isize=256    agcount=16, agsize=2437989 blks
         =                       sectsz=512   attr=0
data     =                       bsize=4096   blocks=39007824, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=19046, version=1
         =                       sectsz=512   sunit=0 blks
realtime =none                   extsz=65536  blocks=0, rtextents=0
#1 What type of performance do you expect with a 4-disk raid10?
#2 You should be able to umount/mount with the new sizes, although I have
not tested it myself b/c I typically use sw raid here (sunit/etc is
optimized for sw raid).
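Untested sketch, but since /dev/sda2 is your root fs, a real
umount/mount probably means putting the options in /etc/fstab and
rebooting, e.g.:

    /dev/sda2   /   xfs   defaults,sunit=128,swidth=256   0  1

And if you do end up recreating the fs anyway, you can bake the
geometry in at mkfs time instead (su is the stripe unit, sw the number
of data-bearing disks, so 2 for your 4-disk raid10):

    mkfs.xfs -d su=64k,sw=2 /dev/sda2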
Justin.