
Re: A little RAID experiment

To: stan@xxxxxxxxxxxxxxxxx
Subject: Re: A little RAID experiment
From: Stefan Ring <stefanrin@xxxxxxxxx>
Date: Wed, 18 Jul 2012 14:32:35 +0200
Cc: xfs@xxxxxxxxxxx
In-reply-to: <50068EC5.5020704@xxxxxxxxxxxxxxxxx>
> Given the LSI 1078 based RAID card with 1 thread runs circles around the
> P2000 with 4, 8, or 16 threads, and never stalls, with responses less
> than 1ms, meaning all writes hit cache, it would seem other workloads
> are hitting the P2000 simultaneously with your test, limiting your
> performance.  Either that or some kind of quotas have been set on the
> LUNs to prevent one host from saturating the controllers.  Or both.

A load quota of some kind may exist, as you suggest, but I don't
recall seeing anything like that in the screenshots in the
installation manuals.

> This is why I asked about exclusive access.  Without it your results for
> the P2000 are literally worthless.  Lacking complete configuration info
> puts you in the same boat.  You simply can't draw any realistic
> conclusions about the P2000 performance without having complete control
> of the device for dedicated testing purposes.

That's a reasonable suggestion. Alas, I don't expect to get that
level of access to the device. I do know for a fact, though, that it
is connected to only a single machine, which is otherwise completely
idle and controlled by "us" (the company I work for). Even so, I
cannot set up XFS there on a whim, because the machine is being
prepared for production use.

> You have such control of the P400 and LSI do you not?  Concentrate your
> testing and comparisons on those.

My period of full control over the P400 is over, but at least I know
how it is configured. The LSI is in production (meaning: untouchable),
but it appears to be reasonably configured.

At least I have some multi-threaded results from the other two machines:

LSI:

4 threads

[   2s] reads: 0.00 MB/s writes: 63.08 MB/s fsyncs: 0.00/s response time: 0.452ms (95%)
[   4s] reads: 0.00 MB/s writes: 34.26 MB/s fsyncs: 0.00/s response time: 1.660ms (95%)
[   6s] reads: 0.00 MB/s writes: 33.92 MB/s fsyncs: 0.00/s response time: 1.478ms (95%)
[   8s] reads: 0.00 MB/s writes: 36.34 MB/s fsyncs: 0.00/s response time: 1.589ms (95%)
[  10s] reads: 0.00 MB/s writes: 34.99 MB/s fsyncs: 0.00/s response time: 1.621ms (95%)
[  12s] reads: 0.00 MB/s writes: 36.41 MB/s fsyncs: 0.00/s response time: 1.639ms (95%)

8 threads

[   2s] reads: 0.00 MB/s writes: 45.34 MB/s fsyncs: 0.00/s response time: 2.749ms (95%)
[   4s] reads: 0.00 MB/s writes: 32.15 MB/s fsyncs: 0.00/s response time: 4.579ms (95%)
[   6s] reads: 0.00 MB/s writes: 33.64 MB/s fsyncs: 0.00/s response time: 4.644ms (95%)
[   8s] reads: 0.00 MB/s writes: 35.20 MB/s fsyncs: 0.00/s response time: 4.131ms (95%)
[  10s] reads: 0.00 MB/s writes: 33.88 MB/s fsyncs: 0.00/s response time: 3.876ms (95%)
[  12s] reads: 0.00 MB/s writes: 33.65 MB/s fsyncs: 0.00/s response time: 4.929ms (95%)

16 threads

[   2s] reads: 0.00 MB/s writes: 36.90 MB/s fsyncs: 0.00/s response time: 3.510ms (95%)
[   4s] reads: 0.00 MB/s writes: 35.36 MB/s fsyncs: 0.00/s response time: 8.629ms (95%)
[   6s] reads: 0.00 MB/s writes: 32.27 MB/s fsyncs: 0.00/s response time: 10.091ms (95%)
[   8s] reads: 0.00 MB/s writes: 34.79 MB/s fsyncs: 0.00/s response time: 9.499ms (95%)
[  10s] reads: 0.00 MB/s writes: 35.62 MB/s fsyncs: 0.00/s response time: 8.801ms (95%)
[  12s] reads: 0.00 MB/s writes: 34.64 MB/s fsyncs: 0.00/s response time: 9.488ms (95%)

... and so on. Nothing noteworthy after that.

Response time goes up with the thread count, while throughput stays the same.
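To compare runs more easily, the interval reports can be aggregated with a small script. This is my own quick sketch, not part of any benchmark tool; the log format is taken from the output above, and the first interval is skipped because it is usually inflated by the controller cache:

```python
import re
from statistics import mean

# Matches interval report lines like:
#   [   4s] reads: 0.00 MB/s writes: 34.26 MB/s fsyncs: 0.00/s response time: 1.660ms (95%)
LINE = re.compile(
    r"\[\s*(\d+)s\]\s+reads:\s+([\d.]+)\s+MB/s\s+writes:\s+([\d.]+)\s+MB/s"
    r"\s+fsyncs:\s+([\d.]+)/s\s+response\s+time:\s+([\d.]+)ms\s+\(95%\)"
)

def summarize(report: str):
    """Return (mean write MB/s, mean 95th-percentile ms), dropping the
    first interval as cache-absorbed warm-up."""
    rows = [m for m in (LINE.search(l) for l in report.splitlines()) if m]
    writes = [float(m.group(3)) for m in rows[1:]]
    p95s = [float(m.group(5)) for m in rows[1:]]
    return mean(writes), mean(p95s)

# First three intervals of the LSI 16-thread run above, as a smoke test.
lsi_16 = """\
[   2s] reads: 0.00 MB/s writes: 36.90 MB/s fsyncs: 0.00/s response time: 3.510ms (95%)
[   4s] reads: 0.00 MB/s writes: 35.36 MB/s fsyncs: 0.00/s response time: 8.629ms (95%)
[   6s] reads: 0.00 MB/s writes: 32.27 MB/s fsyncs: 0.00/s response time: 10.091ms (95%)
"""
mb, ms = summarize(lsi_16)
print(f"writes ~ {mb:.2f} MB/s, p95 ~ {ms:.2f} ms")
```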

P400:

4 threads

[   2s] reads: 0.00 MB/s writes: 33.59 MB/s fsyncs: 0.00/s response time: 0.255ms (95%)
[   4s] reads: 0.00 MB/s writes: 5.11 MB/s fsyncs: 0.00/s response time: 12.853ms (95%)
[   6s] reads: 0.00 MB/s writes: 5.45 MB/s fsyncs: 0.00/s response time: 0.677ms (95%)
[   8s] reads: 0.00 MB/s writes: 5.16 MB/s fsyncs: 0.00/s response time: 0.902ms (95%)
[  10s] reads: 0.00 MB/s writes: 4.56 MB/s fsyncs: 0.00/s response time: 58.242ms (95%)
[  12s] reads: 0.00 MB/s writes: 5.30 MB/s fsyncs: 0.00/s response time: 0.669ms (95%)
[  14s] reads: 0.00 MB/s writes: 5.22 MB/s fsyncs: 0.00/s response time: 0.743ms (95%)
[  16s] reads: 0.00 MB/s writes: 4.73 MB/s fsyncs: 0.00/s response time: 57.877ms (95%)
[  18s] reads: 0.00 MB/s writes: 4.39 MB/s fsyncs: 0.00/s response time: 58.417ms (95%)
[  20s] reads: 0.00 MB/s writes: 4.56 MB/s fsyncs: 0.00/s response time: 57.704ms (95%)
[  22s] reads: 0.00 MB/s writes: 4.81 MB/s fsyncs: 0.00/s response time: 57.429ms (95%)
[  24s] reads: 0.00 MB/s writes: 4.53 MB/s fsyncs: 0.00/s response time: 57.895ms (95%)

Some response time fluctuation at first, but it settles quickly.

8 threads

[   2s] reads: 0.00 MB/s writes: 38.61 MB/s fsyncs: 0.00/s response time: 0.969ms (95%)
[   4s] reads: 0.00 MB/s writes: 4.98 MB/s fsyncs: 0.00/s response time: 59.886ms (95%)
[   6s] reads: 0.00 MB/s writes: 4.69 MB/s fsyncs: 0.00/s response time: 60.300ms (95%)
[   8s] reads: 0.00 MB/s writes: 4.57 MB/s fsyncs: 0.00/s response time: 60.246ms (95%)
[  10s] reads: 0.00 MB/s writes: 4.46 MB/s fsyncs: 0.00/s response time: 60.626ms (95%)
[  12s] reads: 0.00 MB/s writes: 4.46 MB/s fsyncs: 0.00/s response time: 60.445ms (95%)
[  14s] reads: 0.00 MB/s writes: 4.61 MB/s fsyncs: 0.00/s response time: 60.662ms (95%)
[  16s] reads: 0.00 MB/s writes: 4.35 MB/s fsyncs: 0.00/s response time: 60.571ms (95%)
[  18s] reads: 0.00 MB/s writes: 4.87 MB/s fsyncs: 0.00/s response time: 60.156ms (95%)
[  20s] reads: 0.00 MB/s writes: 4.77 MB/s fsyncs: 0.00/s response time: 60.210ms (95%)
[  22s] reads: 0.00 MB/s writes: 4.58 MB/s fsyncs: 0.00/s response time: 60.463ms (95%)
[  24s] reads: 0.00 MB/s writes: 4.65 MB/s fsyncs: 0.00/s response time: 60.264ms (95%)

16 threads

[   2s] reads: 0.00 MB/s writes: 17.35 MB/s fsyncs: 0.00/s response time: 7.764ms (95%)
[   4s] reads: 0.00 MB/s writes: 5.17 MB/s fsyncs: 0.00/s response time: 62.655ms (95%)
[   6s] reads: 0.00 MB/s writes: 5.15 MB/s fsyncs: 0.00/s response time: 62.749ms (95%)
[   8s] reads: 0.00 MB/s writes: 4.89 MB/s fsyncs: 0.00/s response time: 63.258ms (95%)
[  10s] reads: 0.00 MB/s writes: 4.98 MB/s fsyncs: 0.00/s response time: 62.862ms (95%)
[  12s] reads: 0.00 MB/s writes: 5.26 MB/s fsyncs: 0.00/s response time: 63.032ms (95%)
[  14s] reads: 0.00 MB/s writes: 5.27 MB/s fsyncs: 0.00/s response time: 62.599ms (95%)
[  16s] reads: 0.00 MB/s writes: 4.80 MB/s fsyncs: 0.00/s response time: 63.088ms (95%)
[  18s] reads: 0.00 MB/s writes: 4.84 MB/s fsyncs: 0.00/s response time: 63.239ms (95%)
[  20s] reads: 0.00 MB/s writes: 5.24 MB/s fsyncs: 0.00/s response time: 62.712ms (95%)
[  22s] reads: 0.00 MB/s writes: 4.25 MB/s fsyncs: 0.00/s response time: 63.619ms (95%)
[  24s] reads: 0.00 MB/s writes: 4.90 MB/s fsyncs: 0.00/s response time: 63.202ms (95%)

Pretty boring.
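To put numbers on the contrast, here is my own quick calculation over the steady-state 16-thread intervals above (first interval dropped as warm-up; the values are copied from the reports):

```python
from statistics import mean

# 95th-percentile response times (ms) from the 16-thread runs, steady state.
lsi_p95 = [8.629, 10.091, 9.499, 8.801, 9.488]
p400_p95 = [62.655, 62.749, 63.258, 62.862, 63.032, 62.599,
            63.088, 63.239, 62.712, 63.619, 63.202]

# Steady-state write throughput (MB/s) of the P400, same intervals.
p400_writes = [5.17, 5.15, 4.89, 4.98, 5.26, 5.27,
               4.80, 4.84, 5.24, 4.25, 4.90]

print(f"LSI p95 ~ {mean(lsi_p95):.1f} ms, P400 p95 ~ {mean(p400_p95):.1f} ms")
print(f"P400 steady writes ~ {mean(p400_writes):.2f} MB/s")
```

So the P400 sits around 5 MB/s with a roughly 63 ms 95th percentile, versus roughly 9 ms on the LSI. Going by the earlier reasoning in this thread that sub-millisecond responses mean writes are absorbed by the controller cache, latencies of this magnitude would suggest the P400's writes are actually reaching the disks, though I can't verify the cache configuration.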
