| To: | xfs@xxxxxxxxxxx |
|---|---|
| Subject: | Re: 128TB filesystem limit? |
| From: | Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx> |
| Date: | Fri, 26 Mar 2010 01:09:08 -0500 |
| In-reply-to: | <alpine.DEB.2.00.1003252152350.16138@xxxxxxxxxxxxxx> |
| References: | <alpine.DEB.2.00.1003251609160.12435@xxxxxxxxxxxxxx> <20100325235433.GM3335@dastard> <alpine.DEB.2.00.1003251702190.12435@xxxxxxxxxxxxxx> <20100326003511.GN3335@dastard> <alpine.DEB.2.00.1003251900110.12435@xxxxxxxxxxxxxx> <4BAC3990.30403@xxxxxxxxxxx> <alpine.DEB.2.00.1003252152350.16138@xxxxxxxxxxxxxx> |
| User-agent: | Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.9.1.8) Gecko/20100227 Thunderbird/3.0.3 |
david@xxxxxxx put forth on 3/25/2010 11:56 PM:

>>> the next fun thing is figuring out what sort of stride, etc. parameters I
>>> should have used for this filesystem.
>>
>> mkfs.xfs should suss that out for you automatically based on talking
>> to md; of course you'd want to configure md to line up well with the
>> hardware alignment.
>
> in this case md thinks it's working with 10 12.8TB drives, I really
> doubt that it's going to do the right thing.
>
> I'm not exactly sure what the right thing is in this case. The hardware
> RAID is using 64K chunks across 16 drives (so 14 * 64K worth of data
> per stripe), but there are 10 of these stripes before you get back to
> hitting the same drive again.

It would be helpful if you told us the primary application(s) that will
be writing to this large multi-level RAID setup. Primarily large files
or small? Database?

--
Stan
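For reference, the stripe geometry under discussion can also be handed to mkfs.xfs explicitly with the -d su= (stripe unit) and sw= (stripe width, in units) options. The invocation below is only a sketch that aligns to a single underlying hardware RAID stripe as described above (64K chunk, 14 data disks per array); /dev/md0 is a placeholder device name, and whether this is the right geometry for the layered md-over-hardware-RAID setup is exactly the open question in this thread.

```sh
# Hypothetical sketch only: align XFS to one hardware RAID stripe
# (64K chunk size, 16 drives, 14 of them carrying data).
# /dev/md0 is a placeholder, not a device named in the thread.
mkfs.xfs -d su=64k,sw=14 /dev/md0

# Equivalent using sunit/swidth, which are given in 512-byte sectors:
# 64K = 128 sectors, 14 * 128 = 1792 sectors.
# mkfs.xfs -d sunit=128,swidth=1792 /dev/md0
```

After mounting, xfs_info on the filesystem will report the sunit/swidth values that mkfs actually used, which is a quick way to check what it auto-detected from md.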