
Re: Anyone using XFS in production on > 20TiB volumes?

To: Justin Piszcz <jpiszcz@xxxxxxxxxxxxxxx>
Subject: Re: Anyone using XFS in production on > 20TiB volumes?
From: Chris Wedgwood <cw@xxxxxxxx>
Date: Wed, 22 Dec 2010 09:32:09 -0800
Cc: xfs@xxxxxxxxxxx
In-reply-to: <alpine.DEB.2.00.1012221209150.5245@xxxxxxxxxxxxxxxx>
References: <alpine.DEB.2.00.1012221128440.5245@xxxxxxxxxxxxxxxx> <20101222170620.GA29117@xxxxxxxxxxxxxxxxxx> <alpine.DEB.2.00.1012221209150.5245@xxxxxxxxxxxxxxxx>
On Wed, Dec 22, 2010 at 12:10:06PM -0500, Justin Piszcz wrote:

> Do you have an example/of what you found?

i don't have the numbers anymore, they are with a previous employer.

basically, using dbench (these were CIFS NAS machines, so dbench seemed
as good or bad as anything to test with), performance was about 3x
better between 'old' and 'new' with a small number of workers and
about 10x better with a large number

i don't know how much difference inode64 and getting the geometry
right each made, but both were quite measurable in the graphs i made
at the time
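as a sketch of what "getting the geometry right" means in practice: the
chunk size and disk counts below are illustrative assumptions, not the
actual values from these machines; read the real chunk size from your
controller.

```shell
# Hypothetical 5+1 RAID5 leg with a 64 KiB chunk size (assumed values).
# su = stripe unit (controller chunk), sw = number of *data* disks (5 of 6).
su_kib=64
data_disks=5
echo "mkfs.xfs -d su=${su_kib}k,sw=${data_disks} -> stripe width $((su_kib * data_disks)) KiB"

# on older kernels inode64 was a mount option rather than the default:
#   mount -o inode64 /dev/sdX /mnt
```

the point is that xfs aligns allocation to the stripe width (su x sw),
so a wrong su or sw means many writes straddle stripe boundaries and
force read-modify-write cycles on the raid5 legs.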

from memory the machines were raid50 (4x (5+1)) with 2TB drives, so
about 38TB usable on each one

initially these machines used 3ware controllers and later on LSI (the
two product lines have since merged so it's not clear how much
difference that makes now)

in testing, 16GB of RAM wasn't enough for xfs_repair, so the machines
were upped to 64GB; that's likely largely a result of there being 100s
of millions of small files (as well as some large ones)
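a back-of-the-envelope check of why 64GB was plausible: xfs_repair's
memory use grows with the inode count. both numbers below are purely
assumed figures for illustration; the real per-inode cost depends on the
xfs_repair version and the filesystem's state.

```shell
# Rough sketch only; inode count and per-inode cost are assumptions.
inodes=300000000        # "100s of millions of small files"
bytes_per_inode=200     # assumed working memory per inode
echo "$((inodes * bytes_per_inode / 1024 / 1024 / 1024)) GiB"   # -> 55 GiB
```

at that scale a 16GB box falls well short, and 64GB lands in the right
ballpark.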

> Is it dependent on the RAID card?

perhaps; do you have a BBU and write cache (WC) enabled?  certainly we
found the LSI cards to be faster in most cases than the (now old) 3ware

where i am now i use larger chassis and no hw raid cards; using sw
raid on these works spectacularly well, with the exception of bursts
of small seeky writes (which a BBU + WC soaks up quite well)
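a minimal sketch of the sw raid equivalent of one 5+1 leg; the device
names and chunk size are hypothetical, not taken from the original
setup, and this is a destructive admin command, not something to run
as-is.

```shell
# Hypothetical md RAID5 leg matching a 5+1 layout (illustrative devices).
mdadm --create /dev/md0 --level=5 --raid-devices=6 --chunk=64 /dev/sd[b-g]

# mkfs.xfs reads the md geometry itself, so su/sw need not be passed here.
mkfs.xfs /dev/md0
```

one nice property of md over hw raid is exactly this: the kernel exposes
the stripe geometry, so mkfs.xfs picks up su/sw automatically instead of
relying on the admin to query the controller.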
