
A little RAID experiment

To: Linux fs XFS <xfs@xxxxxxxxxxx>
Subject: A little RAID experiment
From: Stefan Ring <stefanrin@xxxxxxxxx>
Date: Wed, 25 Apr 2012 10:07:37 +0200
This grew out of the discussion in my other thread ("Abysmal write
performance because of excessive seeking (allocation groups to
blame?)") -- that should in fact have been called "Free space
fragmentation causes excessive seeks".

Could someone with a good hardware RAID setup (5 or 6, though mirrored
setups would also be interesting) please conduct a little experiment
for me?

I've put up a modified sysbench here:
<https://github.com/Ringdingcoder/sysbench>. This tries to simulate
the write pattern I've seen with XFS. It would be really interesting
to know how different RAID controllers cope with this.
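
The idea, roughly: 8 KiB direct writes issued round-robin across four
regions of the file, as if they were four allocation groups. Here is a
minimal C sketch of that pattern (illustrative only -- the actual ag4
mode in the modified sysbench may differ in its details):

#define _GNU_SOURCE
#define _FILE_OFFSET_BITS 64
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

enum { BLOCK = 8192, AGS = 4 };
static const long long FILESZ = 8LL << 30;        /* 8 GiB */

int main(void)
{
    long long agsz = FILESZ / AGS;

    /* test_file.0 must already exist, e.g. created with fallocate */
    int fd = open("test_file.0", O_WRONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    /* O_DIRECT requires aligned buffers */
    void *buf;
    if (posix_memalign(&buf, 4096, BLOCK)) { close(fd); return 1; }
    memset(buf, 0xab, BLOCK);

    /* one sequential write cursor per "allocation group" */
    long long off[AGS];
    for (int i = 0; i < AGS; i++)
        off[i] = i * agsz;

    for (long long n = 0; ; n++) {
        int ag = n % AGS;             /* hop to the next region each write */
        if (pwrite(fd, buf, BLOCK, off[ag]) != BLOCK) {
            perror("pwrite");
            break;
        }
        off[ag] += BLOCK;
        if (off[ag] >= (ag + 1) * agsz)
            break;                    /* region exhausted: stop */
    }

    free(buf);
    close(fd);
    return 0;
}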

- Check out the source, or download a tarball:
  https://github.com/Ringdingcoder/sysbench/tarball/master
- ./configure --without-mysql && make
- fallocate -l 8g test_file.0
- ./sysbench/sysbench --test=fileio --max-time=15 \
    --max-requests=10000000 --file-num=1 --file-extra-flags=direct \
    --file-total-size=8G --file-block-size=8192 --file-fsync-all=off \
    --file-fsync-freq=0 --file-fsync-mode=fdatasync --num-threads=1 \
    --file-test-mode=ag4 run
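
For reference, the corresponding "prepare" invocation, which creates
the test file, is:

./sysbench/sysbench --test=fileio --max-time=15 \
    --max-requests=10000000 --file-num=1 --file-extra-flags=direct \
    --file-total-size=8G --file-block-size=8192 --file-fsync-all=off \
    --file-fsync-freq=0 --file-fsync-mode=fdatasync --num-threads=1 \
    --file-test-mode=ag4 prepare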

If you don't have fallocate, you can use that "prepare" invocation to
create the file instead. Run the benchmark a few times to check
whether the numbers are reasonably stable. When doing a few runs in
direct succession, the first one will likely be faster because the
controller cache has not filled up yet. The interesting part of the
output is this:

Read 0b  Written 64.516Mb  Total transferred 64.516Mb  (4.301Mb/sec)
  550.53 Requests/sec executed

That's a measurement from my troubled RAID 6 volume (SmartArray P400,
6x 10k disks).

From the other controller in this machine (RAID 1, SmartArray P410i,
2x 15k disks), I get:

Read 0b  Written 276.85Mb  Total transferred 276.85Mb  (18.447Mb/sec)
 2361.21 Requests/sec executed

The better result might be caused by the better controller or by the
RAID 1 layout, with the latter being the more likely reason: every
sub-stripe random write on RAID 6 incurs a parity read-modify-write
cycle, whereas RAID 1 simply writes the block to both disks.

Regards,
Stefan
