| To: | stan@xxxxxxxxxxxxxxxxx |
|---|---|
| Subject: | Re: Using xfs_growfs on SSD raid-10 |
| From: | Alexey Zilber <alexeyzilber@xxxxxxxxx> |
| Date: | Thu, 10 Jan 2013 15:19:12 +0800 |
| Cc: | xfs@xxxxxxxxxxx |
| In-reply-to: | <50EE5649.60608@xxxxxxxxxxxxxxxxx> |
| References: | <CAGdvdE3VnYKg8OXFZ-0eALuhK=Qdt-Apj0uwrB8Yfs=4Uun3UA@xxxxxxxxxxxxxx> <50EE33BC.8010403@xxxxxxxxxxxxxxxxx> <CAGdvdE3eeQY1xX0Zdskr461D6ag+JC4tWEozhK32108G3y_=9A@xxxxxxxxxxxxxx> <50EE5649.60608@xxxxxxxxxxxxxxxxx> |
Hi Stan, Thanks for the details btw, really appreciate it. Responses inline below:
That's correct, but I was going with the description of the sw option as the "number of data disks", which keeps increasing as you add disks. I realize that block striping occurs independently within each array, but I don't know how that squares with the way XFS works with the logical disks. How badly does alignment get messed up when you have sw=3 but 6 disks? Or vice versa, if you specify sw=6 but only have 3 disks?
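For reference, here's roughly how I've been checking what alignment an existing filesystem ended up with (just a sketch; the mount point and the numbers in the sample output are illustrative, not from this box):

```
# Report the geometry XFS was created with (hypothetical mount point).
# sunit/swidth are shown in filesystem blocks, typically 4 KiB each.
xfs_info /data
#   data = ... sunit=256  swidth=768 blks
# 256 blks * 4 KiB = 1 MiB stripe unit; swidth/sunit = 3, i.e. sw=3.
```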
It's mostly correct. We're using MySQL with innodb_file_per_table, so there are maybe a hundred files or so spread across a few directories, some quite big.
I'm guessing, though, that that's still not going to be a huge issue. I've actually been running LVM and XFS on SSD RAID without aligning for a while on a few other databases, and the performance has been exceptional. For this round I've decided to dig deeper into actual alignment to see if I can get extra performance (and life) out of the drives.
But what if the two arrays are not identical, i.e. the second array has fewer (or more) drives? How does the sw value affect things then?
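My understanding is that the stripe hints can also be overridden at mount time if the geometry changes, something like the sketch below (device and mount point are placeholders; the sunit/swidth mount options take values in 512-byte sectors):

```
# Override the stripe geometry recorded at mkfs time:
# 2048 sectors = 1 MiB stripe unit, 6144 sectors = 3 MiB stripe width.
mount -o sunit=2048,swidth=6144 /dev/vg_db/lv_mysql /var/lib/mysql
```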
    # /usr/StorMan/arcconf getconfig 1 LD

    Logical device number 1
       Logical device name                      : RAID10-B
       RAID level                               : 10
       Status of logical device                 : Optimal
       Size                                     : 1142774 MB
       Stripe-unit size                         : 1024 KB
       Read-cache mode                          : Enabled
       MaxCache preferred read cache setting    : Disabled
       MaxCache read cache setting              : Disabled
       Write-cache mode                         : Enabled (write-back)
       Write-cache setting                      : Enabled (write-back) when protected by battery/ZMM
       Partitioned                              : No
       Protected by Hot-Spare                   : No
       Bootable                                 : No
       Failed stripes                           : No
       Power settings                           : Disabled
       --------------------------------------------------------
       Logical device segment information
       --------------------------------------------------------
       Group 0, Segment 0 : Present (Controller:1,Enclosure:0,Slot:2) FG001MMV
       Group 0, Segment 1 : Present (Controller:1,Enclosure:0,Slot:3) FG001MNW
       Group 1, Segment 0 : Present (Controller:1,Enclosure:0,Slot:4) FG001MMT
       Group 1, Segment 1 : Present (Controller:1,Enclosure:0,Slot:5) FG001MNY
       Group 2, Segment 0 : Present (Controller:1,Enclosure:0,Slot:6) FG001DH8
       Group 2, Segment 1 : Present (Controller:1,Enclosure:0,Slot:7) FG001DKK

The controller itself is an Adaptec 5805Z.
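Based on that output, here's a minimal sketch of what I think the matching mkfs.xfs geometry would be, assuming the 3 mirror pairs and the 1024 KB stripe-unit shown above (the device name is a placeholder):

```
# Align XFS to this array: 1024 KB stripe unit, 3 data members (3 RAID-1 pairs).
mkfs.xfs -d su=1024k,sw=3 /dev/sdX     # /dev/sdX is a placeholder device
# Equivalent form in 512-byte sectors: -d sunit=2048,swidth=6144
```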
According to this article: http://www.pcguide.com/ref/hdd/perf/raid/concepts/perfStripe-c.html "The second important parameter is the stripe size of the array, sometimes also referred to by terms such as block size, chunk size, stripe length or granularity. This term refers to the size of the stripes written to each disk. RAID arrays that stripe in blocks typically allow the selection of block sizes in kiB ranging from 2 kiB to 512 kiB (or even higher) in powers of two (meaning 2 kiB, 4 kiB, 8 kiB and so on.) Byte-level striping (as in RAID 3) uses a stripe size of one byte or perhaps a small number like 512, usually not selectable by the user."
So they're talking about powers of 2, not powers of 3. 1MB would definitely work then.
Please educate me, then. Where can I find more information on stripes being calculated as a power of 3? The article above only references powers of 2.
Right, but say that 1MB stripe is a single stripe. I'm guessing it would fit within a single erase block? Or should I just use 512k stripes to be safe?
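One thing I figure I can check is what I/O topology the controller exports for the logical drive, as in the sketch below (placeholder device name; as far as I know the SSD erase-block size itself isn't reported anywhere, so these hints only describe the RAID geometry):

```
# I/O topology hints the kernel picked up for the logical drive
# (placeholder device; mkfs.xfs consults these for automatic alignment):
cat /sys/block/sdX/queue/minimum_io_size   # typically the stripe-unit (chunk) size
cat /sys/block/sdX/queue/optimal_io_size   # typically the full stripe width
```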
OK, so with the original 6 drives, if it's a RAID 10, that would give 3 mirror pairs striped into one logical drive.
Is this where the power of 3 comes from?
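Writing out the arithmetic I'm going by, using the numbers from the arcconf output above (a rough sketch):

```
# RAID-10 over 6 drives: each RAID-1 pair acts as one striped member.
drives=6; su_kb=1024
pairs=$((drives / 2))                 # 3 striped members -> sw=3
full_stripe_kb=$((pairs * su_kb))     # 3 * 1024 KB = 3072 KB (3 MB) full stripe
echo "sw=$pairs, full stripe = ${full_stripe_kb} KB"
```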
That's a good idea, though I would use LVM for the concatenation. I just don't trust the hardware to concatenate existing disks onto more disks; I'd rather leave that to LVM, and also be able to take snapshots, etc.
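Roughly the sequence I have in mind for growing that way (a sketch only; the volume group, logical volume, device, and mount point names are placeholders):

```
pvcreate /dev/sdY                          # new RAID-10 logical drive from the controller (placeholder)
vgextend vg_db /dev/sdY                    # concatenate it into the existing volume group
lvextend -l +100%FREE /dev/vg_db/lv_mysql  # grow the LV over the new space
xfs_growfs /var/lib/mysql                  # grow the mounted XFS filesystem to match
# and snapshots remain possible, e.g.:
lvcreate -s -L 10G -n mysql_snap /dev/vg_db/lv_mysql
```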
Thanks Stan, very informative!
-Alex