Correct. But I've been using similar methods for evaluation and benchmarking
purposes at my previous employers as well - I guess it's hard to part with the
tools you learn to use. It's no secret that we use xfs at Facebook. Xfs is
the filesystem of choice for the media infrastructure group at the moment.
Different projects are free to pick anything they want, but I think the
database tier uses xfs as well. Our usage tends to create large files
(databases, haystacks), so limited fragmentation, preallocation and
as-close-to-raw performance are important features for us - all aspects where
xfs excels. When I talk about the multiple variables which affect the testing,
I mean RAID level, RAID stripe size (the same stripe size sometimes produces
different results on controllers from different vendors), IO scheduler,
available memory, number of threads, readahead size, and other external and
application tuning variables.
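To make the preallocation point concrete, here is a minimal sketch (not Facebook's actual tooling; the path and size are illustrative) of reserving space for a large file up front with fallocate(1), which calls fallocate(2) so that xfs can allocate extents immediately without writing zeroes and keep fragmentation low as the file fills:

```shell
# Illustrative only: preallocate 16 MiB so the filesystem reserves
# (ideally contiguous) extents before any data is written.
f=/tmp/prealloc_demo.dat

fallocate -l 16M "$f"

# The file reports its full size immediately, even though no data
# blocks were written.
stat -c '%s bytes' "$f"

rm -f "$f"
```

On an xfs filesystem, `xfs_bmap -v <file>` (from xfsprogs) can then be used to confirm the extent layout is contiguous.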
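A few of the external tuning variables above (IO scheduler, readahead, available memory, thread count) can be inspected directly from sysfs and procfs. This is a hedged sketch, not the actual benchmark harness; device names and values vary per machine, so it just enumerates whatever block devices are present:

```shell
# Enumerate per-device IO scheduler and readahead settings.
for q in /sys/block/*/queue; do
    dev=$(basename "$(dirname "$q")")
    sched=$(cat "$q/scheduler" 2>/dev/null)
    ra=$(cat "$q/read_ahead_kb" 2>/dev/null)
    echo "$dev: scheduler=$sched readahead=${ra}kB"
done

# Memory and CPU count also bound the benchmark matrix.
grep MemTotal /proc/meminfo
nproc
```

RAID level and stripe size live on the controller side and need vendor tools (e.g. the controller's CLI) rather than sysfs.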
> -----Original Message-----
> From: Stan Hoeppner [mailto:stan@xxxxxxxxxxxxxxxxx]
> Sent: Tuesday, February 01, 2011 12:12 PM
> To: Peter Vajgel
> Cc: Dave Chinner; Jef Fox; xfs@xxxxxxxxxxx
> Subject: Re: XFS Preallocation
> Peter Vajgel put forth on 2/1/2011 1:20 PM:
> > At the scale we operate it does. We have multiple variables, so the number
> > of combinations is large. We have hit every single possible hardware and
> > software problem, and problem resolution can take months if it takes days
> > to reproduce a problem. Hardware vendors (disk, controller, motherboard
> > manufacturers) are more responsive when you can reproduce a problem on the
> > fly in seconds (in comparative benchmarking). The tests usually run only a
> > couple of minutes. With 12x3TB (possibly multiplied by a factor of X with
> > our new platform) it would be unacceptable to wait for writes to finish.
> Hi Peter,
> When you mention scale, you're referring to the storage back end at
> your employer, correct?