On Wed, 22 Dec 2010, Chris Wedgwood wrote:
On Wed, Dec 22, 2010 at 12:10:06PM -0500, Justin Piszcz wrote:
Do you have an example of what you found?
i don't have the numbers anymore, they are with a previous employer.
basically using dbench (these were cifs NAS machines, so dbench seemed
as good or bad as anything to test with) the performance was about 3x
better between 'old' and 'new' with a small number of workers and
about 10x better with a large number
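For reference, runs like the ones compared above might look something like this. This is a sketch only: the mount point and worker counts are assumptions, not values from the thread; `-D` selects the target directory and `-t` the run time in seconds.

```shell
# Hypothetical dbench runs at a small and a large worker count,
# mirroring the "small number of workers" vs "large number" comparison.
for clients in 4 48; do
    echo "dbench -D /mnt/nas -t 60 $clients"   # /mnt/nas is a placeholder
done
```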
Is this by specifying the sunit/swidth?
Can you elaborate on which parameters you modified?
i don't know how much difference inode64 and getting the geometry
right each made, but both were quite measurable in the graphs i made
at the time
from memory the machines are raid50 (4x (5+1)) with 2TB drives, so
about 38TB usable on each one
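As a rough sanity check on that figure (my arithmetic, not from the thread): parity costs one drive per (5+1) group, leaving 4 x 5 data drives, and the decimal-vs-binary unit gap plus filesystem overhead accounts for the rest:

```shell
# 4 x (5+1) RAID-50 with 2 TB drives: raw data capacity and TiB equivalent.
data_tb=$((4 * 5 * 2))                               # 40 TB (decimal) of data capacity
tib=$((data_tb * 1000000000000 / 1099511627776))     # convert TB -> TiB
echo "${data_tb} TB ~= ${tib} TiB"
```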
initially these machines had 3ware controllers and later on LSI (the
two product lines have since merged, so it's not clear how much
difference that makes now)
in testing 16GB for xfs_repair wasn't enough, so they were upped to
64GB; that's likely largely a result of there being hundreds of
millions of small files (as well as some large ones)
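A sketch of how one might probe this without risking the filesystem: `-n` makes xfs_repair report-only, and newer xfsprogs accept `-m` to cap memory use in MB (verify against your version's man page before relying on it; the device name is a placeholder):

```shell
# No-modify repair pass with a memory cap; nothing here touches the disk
# because the command is only echoed, not executed.
dev=/dev/sdX
cmd="xfs_repair -n -m 16384 $dev"   # -n: check only; -m: cap memory at ~16 GB
echo "$cmd"
```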
Yikes =) Hopefully it's better now?
Is it dependent on the RAID card?
perhaps, do you have a BBU and the write cache enabled? certainly we
found the LSI cards to be faster in most cases than the (now old) 3ware
Yes and have it set to perform(ance).
Going to be using 19HDD x 3TB Hitachi 7200RPMs (18HDD RAID-6 + 1 hot spare).
where i am now i use larger chassis and no hw raid cards; using sw
raid on these works spectacularly well with the exception of bursts of
small seeky writes (which a BBU + wc soaks up quite well)
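A sketch of the software-RAID route for the 18+1 layout mentioned above. Device names are placeholders; note that on md devices mkfs.xfs reads the array geometry and sets sunit/swidth itself, so the manual su/sw step isn't needed:

```shell
# Hypothetical md RAID-6 matching the 18-drive + 1 hot spare plan;
# commands are echoed/commented rather than run, since they are destructive.
md_dev=/dev/md0
echo "mdadm --create $md_dev --level=6 --raid-devices=18 --spare-devices=1 /dev/sd[b-t]"
# mkfs.xfs /dev/md0   # geometry (sunit/swidth) is auto-detected from md
```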