
Re: RHEL ES 4

To: Eric Sandeen <sandeen@xxxxxxx>
Subject: Re: RHEL ES 4
From: Rob Thompson <rthompson@xxxxxxxxxxxxxxxxxxx>
Date: Fri, 18 Nov 2005 09:23:13 -0600
Cc: linux-xfs@xxxxxxxxxxx
In-reply-to: <437DEFDD.7070706@xxxxxxx>
References: <32927.68.52.44.223.1132279914.squirrel@xxxxxxxxxxxxx> <437D6935.2090905@xxxxxxx> <1132326431.12165.9.camel@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx> <437DEFDD.7070706@xxxxxxx>
Reply-to: rthompson@xxxxxxxxxxxxxxxxxxx
Sender: linux-xfs-bounce@xxxxxxxxxxx
Eric,
Thanks for the advice. I am running the 64-bit kernel:
uname -a
2.6.9-22.ELsmp #1 SMP 
x86_64 x86_64 x86_64 GNU/Linux


Thanks,
Rob



On Fri, 2005-11-18 at 09:14 -0600, Eric Sandeen wrote:
> Rob Thompson wrote:
> > This is in fact a 120 TB (not GB) filesystem that I am trying to build.
> > 
> > What I am attempting to do is take 80 1.6 TB arrays (each an 8 x 250 GB
> > RAID 5 array; 10 arrays from each of 8 separate SANs), use LVM to make
> > one large volume, then use XFS to format and mount this as a single
> > filesystem.
> > 
> > Any advice - or gotchas - would be appreciated.
> 
> one piece of advice would be to use an Opteron, EM64T, or ia64 box - the 
> ia32 kernels from RHEL4 have 4k stacks enabled, which may not play well 
> with xfs on top of lvm on top of whatever your other drivers are, under 
> any possible nfs sharing, etc.
> 
> If 4k stacks don't cut it for you (testing would be in order...) and you must 
> use ia32 kernels, then you would have to rebuild the kernel with 8k stacks - 
> patches have floated around this list previously for that.
> 
> -Eric
> 
> > Thanks,
> > Rob

