
Re: A little RAID experiment

To: Linux fs XFS <xfs@xxxxxxxxxxx>
Subject: Re: A little RAID experiment
From: Stefan Ring <stefanrin@xxxxxxxxx>
Date: Tue, 11 Sep 2012 18:37:38 +0200
In-reply-to: <20120726083242.GA2877@dastard>
References: <20120717052621.GB23387@dastard> <50061CEA.4070609@xxxxxxxxxxxxxxxxx> <CAAxjCEwgDKLF=RY0aCCNTMsc1oefXWfyHKh+morYB9zVUrnH-A@xxxxxxxxxxxxxx> <50066115.7070807@xxxxxxxxxxxxxxxxx> <CAAxjCExFUJOKaD-LMPfZvCrS34V1VHgtrhgvPP0jZ3Hm1YV=6g@xxxxxxxxxxxxxx> <50068EC5.5020704@xxxxxxxxxxxxxxxxx> <CAAxjCEy2Yj=XWctNg2gACbFy81aTu70YJ13Ee8G6-E3Tqvvs7g@xxxxxxxxxxxxxx> <CAAxjCEzF3nTFoedyKf1o5Nv4yPUJkgvC8nCJcx_2dDx8xqWtWA@xxxxxxxxxxxxxx> <50077A34.5070304@xxxxxxxxxxxxxxxxx> <CAAxjCEy=N9ceAA5V6bnrcMc3961gs-Z2NgNyenPJ+gjE2mYUXQ@xxxxxxxxxxxxxx> <20120726083242.GA2877@dastard>
On Thu, Jul 26, 2012, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
>> 10001
>> 20001
>> 30001
>> 40001
>> 10002
>> 20002
>> 30002
>> 40002
>> 10003
>> 20003
>> ...
>
> That's the problem you should have reported.

I did, but then I got bashed for using RAID 5/6 and quizzed about the
specifics of the hardware and everything else, none of which should
even matter, and I let myself get dragged into that discussion.
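
For reference, the pattern quoted above is exactly what you would
expect if the allocator were round-robinning small allocations across
the allocation groups. A toy sketch (Python; the 4 AGs and the
10000-block AG offsets are invented numbers for illustration, not
values from the trace):

    NUM_AGS = 4        # assumed: four allocation groups
    AG_OFFSET = 10000  # assumed: each AG starts 10000 blocks apart

    def allocations(n):
        # Rotate through the AGs; the per-AG cursor advances by one
        # block after each full round.
        for i in range(n):
            ag = i % NUM_AGS
            block_in_ag = i // NUM_AGS + 1
            yield (ag + 1) * AG_OFFSET + block_in_ag

    print(list(allocations(10)))
    # -> [10001, 20001, 30001, 40001, 10002, 20002, 30002, 40002,
    #     10003, 20003]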

Anyway, in the meantime I have had a closer look at the actual block
trace, and it looks a bit different from how I interpreted it at
first. It issues runs of 30-50 writes with holes in them, like so:

2, 4-5, 7, 10-12, 14, 16-17

and so on. These holes seem to be caused by free space fragmentation.
Every once in a while -- somewhat frequently, after 30 or so blocks,
as mentioned -- it switches to another allocation group. If these
blocks were contiguous, the elevator should be able to merge them,
but the tiny holes make that impossible. So I guess there's nothing
that can be substantially improved here. The frequent AG switches are
a bit difficult for the controller to handle, but different
controllers struggle under different workloads, and there's nothing
that can be done about that. Just today I noticed that the HP
SmartArray controllers handle truly random writes better than the
MegaRAID variety I praised so much in my earlier postings.
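
To make the merging point concrete, here is a rough sketch (Python,
not real blktrace parsing) of elevator-style back merging: adjacent
block numbers coalesce into one request, and every hole forces a new
one. Fed the example run above, it produces six small requests where
a fully contiguous run would have produced a single large one:

    def merge_into_requests(blocks):
        # Coalesce adjacent block numbers into [start, end] requests;
        # any gap in the sequence starts a new request.
        requests = []
        for b in sorted(blocks):
            if requests and b == requests[-1][1] + 1:
                requests[-1][1] = b       # back-merge into last request
            else:
                requests.append([b, b])   # hole -> new request
        return requests

    run = [2, 4, 5, 7, 10, 11, 12, 14, 16, 17]
    print(merge_into_requests(run))
    # -> [[2, 2], [4, 5], [7, 7], [10, 12], [14, 14], [16, 17]]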
