
Re: Performance problem - reads slower than writes

To: xfs@xxxxxxxxxxx
Subject: Re: Performance problem - reads slower than writes
From: Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>
Date: Wed, 01 Feb 2012 01:29:30 -0600
In-reply-to: <20120131202526.GJ9090@dastard>
References: <20120130220019.GA45782@xxxxxxxx> <20120131020508.GF9090@dastard> <20120131103126.GA46170@xxxxxxxx> <20120131141604.GB46571@xxxxxxxx> <20120131202526.GJ9090@dastard>
Reply-to: stan@xxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (Windows NT 5.1; rv:9.0) Gecko/20111222 Thunderbird/9.0.1
On 1/31/2012 2:25 PM, Dave Chinner wrote:
> On Tue, Jan 31, 2012 at 02:16:04PM +0000, Brian Candler wrote:

>> Here we appear to be limited by real seeks. 225 seeks/sec is still very good
> That number indicates 225 IOs/s, not 225 seeks/s.

Yeah, the voice coil actuator and spindle rotation limit the peak
random seek rate of a good 7.2k drive/controller combo to about 150/s.
15k drives do about 250-300 seeks/s max.  A simple tool to test max
random seeks/sec for a device:

32bit binary:  http://www.hardwarefreak.com/seekerb
source:        http://www.hardwarefreak.com/seeker_baryluk.c

I'm not the author.  The original seeker program is single-threaded;
Baryluk did the thread hacking.  Background info:

Usage:   ./seekerb device [threads]

Results for a single WD 7.2K drive, no NCQ, deadline elevator:

  1 threads Results: 64 seeks/second, 15.416 ms random access time
 16 threads Results: 97 seeks/second, 10.285 ms random access time
128 threads Results: 121 seeks/second, 8.208 ms random access time

Actual output:
$ seekerb /dev/sda 128
Seeker v3.0, 2009-06-17,
Benchmarking /dev/sda [976773168 blocks, 500107862016 bytes, 465 GB, 476940 MB, 500 GiB, 500107 MiB]
[512 logical sector size, 512 physical sector size]
[128 threads]
Wait 30 seconds.............................
Results: 121 seeks/second, 8.208 ms random access time (52614775 < offsets < 499769984475)

Targeting array devices (mdraid or hardware, or FC SAN LUN) with lots of
spindles, and/or SSDs should yield some interesting results.

