XFS/Linux Sanity check
Emmanuel Florac
eflorac at intellique.com
Mon May 2 12:13:23 CDT 2011
On Mon, 2 May 2011 11:47:48 -0400,
Paul Anderson <pha at umich.edu> wrote:
> We are deploying five Dell 810s, 192GiB RAM, 12 core, each with three
> LSI 9200-8E SAS controllers, and three SuperMicro 847 45 drive bay
> cabinets with enterprise grade 2TB drives.
I have very little experience with these RAID controllers. However I
have a 9212-4i4e (same card generation, same chipset) in test, and so
far I must say it looks like _utter_ _crap_. The performance is abysmal
(it's been busy rebuilding a 20TB array for... 6 days!); the server
regularly freezes and crashes for no apparent reason (it's a pure dev
system with virtually zero load and zero IO); and I've seen lots of
filesystem corruption. I'm running a plain vanilla 64-bit 2.6.32.25
kernel that poses no problem whatsoever with any other configuration.
> In isolated testing, I see around 5GiBytes/second raw (135 parallel dd
> reads), and with a benchmark test of 10 simultaneous 64GiByte dd
> commands, I can see just shy of 2 GiBytes/second reading, and around
> 1.4GiBytes/second writing through XFS. The benchmark is crude, but
> fairly representative of our expected use.
I don't understand why there's such a gap between raw and XFS
performance. In my experience XFS normally delivers 90% or more of the
raw throughput.
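For what it's worth, a crude version of that kind of benchmark looks
like this (a sketch only; /dev/sd[b-z] and /mnt/xfs stand in for your
actual data drives and mount point):

  #!/bin/sh
  # Raw test: one sequential dd reader per block device, in parallel.
  for dev in /dev/sd[b-z]; do
      dd if=$dev of=/dev/null bs=1M count=4096 &
  done
  wait

  # XFS test: 10 simultaneous 64GiB sequential writes through the fs.
  for i in $(seq 1 10); do
      dd if=/dev/zero of=/mnt/xfs/test$i bs=1M count=65536 &
  done
  wait

Summing the per-dd throughput figures gives the aggregate rates you're
comparing.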
> md apparently does not support barriers, so we are badly exposed in
> that manner, I know. As a test, I disabled write cache on all drives,
> performance dropped by 30% or so, but since md is apparently the
> problem, barriers still didn't work.
Frankly, I'd stay away from md at this array size. I'm pretty sure
you're exploring uncharted territory here.
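If you want to see whether barriers are actually being honoured, the
kernel log tells you at mount time; XFS enables barriers by default and
complains when the device rejects them (a sketch, assuming the md array
is /dev/md0 mounted on /mnt/test):

  mount -o barrier /dev/md0 /mnt/test
  dmesg | tail -20 | grep -i barrier
  # A message like "Disabling barriers, not supported by the underlying
  # device" means barriers are being silently dropped, as you observed.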
> Ideally, I'd firstly be able to find informed opinions about how I can
> improve this arrangement - we are mildly flexible on RAID controllers,
> very flexible on versions of Linux, etc, and can try other OS's as a
> last resort (but the leading contender here would be "something"
> running ZFS, and though I love ZFS, it really didn't seem to work well
> for our needs).
I can't be sure yet because I plan more testing with this card, but I'd
ditch the LSI controllers for LSI/3ware or Adaptec (or possibly Areca),
and I'd stay away from md RAID and use hardware RAID. Call me a
hardware RAID freak, but hardware RAID gives you a proper, safe write
cache, for a start (because of the battery-backed cache unit).
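If you must stay on md in the meantime, the only safe setting is to
leave the drives' volatile write caches off, exactly as you tested
(sketch; /dev/sd[b-z] again stands in for your data drives):

  # Without working barriers or a BBU, disable each drive's write cache.
  for dev in /dev/sd[b-z]; do
      hdparm -W 0 $dev
  done

You pay the ~30% you measured, but you stop risking corruption on power
loss.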
--
------------------------------------------------------------------------
Emmanuel Florac | Technical Director
                | Intellique
| <eflorac at intellique.com>
| +33 1 78 94 84 02
------------------------------------------------------------------------