| To: | rthompson@xxxxxxxxxxxxxxxxxxx |
|---|---|
| Subject: | Re: RHEL ES 4 |
| From: | Eric Sandeen <sandeen@xxxxxxx> |
| Date: | Fri, 18 Nov 2005 09:14:37 -0600 |
| Cc: | linux-xfs@xxxxxxxxxxx |
| In-reply-to: | <1132326431.12165.9.camel@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx> |
| References: | <32927.68.52.44.223.1132279914.squirrel@xxxxxxxxxxxxx> <437D6935.2090905@xxxxxxx> <1132326431.12165.9.camel@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx> |
| Sender: | linux-xfs-bounce@xxxxxxxxxxx |
| User-agent: | Mozilla Thunderbird 1.0.6-1.1.fc4 (X11/20050720) |
Rob Thompson wrote:

> This is in fact a 120 TB (not GB) filesystem that I am trying to build.
> What I am attempting to do is to take 80 1.6 TB arrays (each an 8 x 250 GB
> RAID 5 array; 10 arrays from each of 8 separate SANs), use LVM to make one
> large volume, then use XFS to format and mount this as a single filesystem.
> Any advice or gotchas would be appreciated.

One piece of advice would be to use an Opteron, EM64T, or IA64 box - the ia32 kernels from RHEL4 have 4k stacks enabled, which may not play well with xfs on top of lvm on top of whatever your other drivers are, under any possible nfs sharing, etc. If 4k stacks don't cut it for you (testing would be in order...) and you must use ia32 kernels, then you would have to rebuild the kernel with 8k stacks - patches have floated around this list previously for that.

-Eric

> Thanks,
> Rob
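For anyone following the thread, a minimal sketch of the LVM-plus-XFS steps Rob describes might look like the following. The device paths, the volume group name "sanvg", the logical volume name "sanlv", and the mount point /mnt/bigfs are all hypothetical placeholders; in practice all 80 array devices would be listed.

```sh
# Hypothetical sketch: one large XFS filesystem across many SAN array devices.
# /dev/sdb .. /dev/sdd stand in for the 80 x 1.6 TB arrays.

# Initialize each array block device as an LVM physical volume.
pvcreate /dev/sdb /dev/sdc /dev/sdd

# Collect the physical volumes into a single volume group.
vgcreate sanvg /dev/sdb /dev/sdc /dev/sdd

# Create one logical volume spanning all free extents in the group.
# (Older LVM2 releases may need an explicit extent count from 'vgdisplay'
# instead of the 100%FREE shorthand.)
lvcreate -l 100%FREE -n sanlv sanvg

# Format the logical volume with XFS and mount it.
mkfs.xfs /dev/sanvg/sanlv
mkdir -p /mnt/bigfs
mount -t xfs /dev/sanvg/sanlv /mnt/bigfs
```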