| To: | stan@xxxxxxxxxxxxxxxxx |
|---|---|
| Subject: | Re: Using xfs_growfs on SSD raid-10 |
| From: | Alexey Zilber <alexeyzilber@xxxxxxxxx> |
| Date: | Thu, 10 Jan 2013 11:50:42 +0800 |
| Cc: | xfs@xxxxxxxxxxx |
| In-reply-to: | <50EE33BC.8010403@xxxxxxxxxxxxxxxxx> |
| References: | <CAGdvdE3VnYKg8OXFZ-0eALuhK=Qdt-Apj0uwrB8Yfs=4Uun3UA@xxxxxxxxxxxxxx> <50EE33BC.8010403@xxxxxxxxxxxxxxxxx> |
Hi Stan, please see my replies in-line:
On Thu, Jan 10, 2013 at 11:21 AM, Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx> wrote:
Only the sw=3 is no longer valid, correct? There's no way to add sw=5?

> 1. Mount with "noalign", but that only affects data, not journal writes

Is "noalign" the default when no sw/su option is specified at all?
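For reference, this is roughly how I've been checking and setting the alignment on my side (the device and mount point names below are placeholders, not our actual layout):

```sh
# Show the alignment the filesystem was created with; sunit/swidth are
# reported in filesystem blocks in xfs_info output.
xfs_info /srv/db

# Alignment hints are fixed at mkfs time, e.g. 256k strip unit, 3 data spindles:
mkfs.xfs -d su=256k,sw=3 /dev/vg0/db

# "noalign" only affects data allocation alignment at mount time; it does not
# rewrite the on-disk sunit/swidth values:
mount -o noalign /dev/vg0/db /srv/db
```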
Not a possible solution. The space is for a database and must be contiguous.

> 3. Add 2 SSDs to the new array and rebuild it as a 6 drive RAID10 to

How is this the obvious and preferred path when I still can't modify the sw value? Same problem. Data loss or reformatting is not the preferred path; it defeats the purpose of using LVM. Also, the potential for data loss by enlarging the RAID array is huge.
>> Stripesize 1MB
>
> assuming you actually mean 1MB STRIP above, not 1MB stripe. If you
> actually mean 1MB hardware RAID stripe, then the controller would have

I don't believe you're correct here. The SSD erase block size for the drives we're using is 1MB. Why does being divisible by 3 matter? Because of the number of drives? Nowhere online have I seen anything about a 768MB+256MB stripe. All the performance info I've seen points to it being the fastest. I'm sure that wouldn't be the case if the controller had to deal with two stripes.
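Just so I'm sure I follow the arithmetic, here is my own guess at what the 768+256 split would mean, assuming sw=3 (3 data spindles), a 256k strip, and units of KB rather than MB:

```sh
# Back-of-envelope only, under the assumptions above:
echo $(( 3 * 256 ))           # 768 -> full stripe width in KB (3 strips of 256k)
echo $(( 1024 - 3 * 256 ))    # 256 -> what is left of a 1MB I/O after one full
                              #        stripe, i.e. the partial second stripe
```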
So essentially, my take-away here is that xfs_growfs doesn't work properly when adding more logical RAID drives? What kind of a performance hit am I looking at if sw is wrong? How about this: if I know that the maximum number of drives I can add to a RAID-10 is, say, 20, can I format with sw=10 (even though sw should be 3) in the eventual expectation of expanding it? What would be the downside of doing that?
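For clarity, the sequence I'm trying to make work is just the usual LVM grow, nothing exotic (names below are placeholders):

```sh
# Extend the logical volume onto the newly added array, then grow XFS online:
lvextend -L +500G /dev/vg0/db      # or: lvextend -l +100%FREE /dev/vg0/db
xfs_growfs /srv/db                 # grows the mounted filesystem to fill the LV

# The geometry recorded at mkfs time does not change with the grow:
xfs_info /srv/db | grep -E 'sunit|swidth'
```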
Thanks!