
Re: interesting MD-xfs bug

To: Dave Chinner <david@xxxxxxxxxxxxx>
Subject: Re: interesting MD-xfs bug
From: Roman Mamedov <rm@xxxxxxxxxxx>
Date: Fri, 10 Apr 2015 09:43:36 +0500
Cc: NeilBrown <neilb@xxxxxxx>, Joe Landman <joe.landman@xxxxxxxxx>, linux-raid <linux-raid@xxxxxxxxxxxxxxx>, xfs <xfs@xxxxxxxxxxx>
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <20150410013156.GH15810@dastard>
References: <5526E8E9.3030805@xxxxxxxxx> <20150409221846.GG13731@dastard> <5526FB2A.8060704@xxxxxxxxx> <20150409225322.GH13731@dastard> <20150409231035.GI13731@dastard> <20150410093652.73204748@xxxxxxxxxxxxxx> <20150410013156.GH15810@dastard>
On Fri, 10 Apr 2015 11:31:57 +1000
Dave Chinner <david@xxxxxxxxxxxxx> wrote:

> RAID 0 on different sized devices should result in a device that is
> twice the size of the smallest devices

> Oh, "RAID0" is not actually RAID 0 - that's the size I'd expect from
> a linear mapping.

> it's actually a stripe for the first 10GB, then some kind of
> concatenated mapping of the remainder of the single device.

It might not be what you expected, but it's also not a bug of any kind; it's just
the regular behavior of mdadm RAID0 with different-sized devices (man md):

       If devices in the array are not all the same size, then once the small-
       est device has been exhausted, the RAID0 driver starts collecting
       chunks into smaller stripes that only span the drives which still have
       remaining space.
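The zone layout described above can be sketched roughly as follows. This is an illustrative model only, not mdadm's actual code; the function name and the GB-sized inputs are made up for the example:

```python
def raid0_zones(sizes):
    """Model md RAID0 zones for member devices of differing sizes.

    Each zone stripes across every device that still has space left;
    when the smallest remaining device is exhausted, a new, narrower
    zone begins.  Returns (zone_capacity, stripe_width) pairs.
    """
    zones = []
    remaining = sorted(sizes)
    used = 0  # space already consumed from every surviving device
    while remaining:
        n = len(remaining)
        smallest = remaining[0]
        depth = smallest - used        # per-device length of this zone
        zones.append((depth * n, n))   # zone capacity, stripe width
        used = smallest
        remaining = [s for s in remaining if s > smallest]
    return zones

# Dave's case: a 10 GB and a 20 GB device.
print(raid0_zones([10, 20]))  # [(20, 2), (10, 1)]
```

For [10, 20] this yields a 20 GB zone striped across both devices followed by a 10 GB single-device zone, i.e. all 30 GB are usable, matching the "stripe first, then concatenated remainder" layout observed in the thread.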

Once or twice this came in VERY handy for me in real-life usage.

-- 
With respect,
Roman

