| To: | "David Chinner" <dgc@xxxxxxx> |
|---|---|
| Subject: | Re: raid50 and 9TB volumes |
| From: | Raz <raziebe@xxxxxxxxx> |
| Date: | Mon, 3 Sep 2007 17:24:21 +0300 |
| Cc: | linux-xfs@xxxxxxxxxxx |
| In-reply-to: | <5d96567b0708070220u23c895ffk54849fca947b5100@xxxxxxxxxxxxxx> |
| References: | <5d96567b0707160542t2144c382mbfe3da92f0990694@xxxxxxxxxxxxxx> <5d96567b0707160653m5951fac9v5a56bb4c92174d63@xxxxxxxxxxxxxx> <20070716221831.GE31489@xxxxxxx> <18076.1449.138328.66699@xxxxxxxxxxxxxx> <20070717001205.GI31489@xxxxxxx> <18076.4940.845633.149160@xxxxxxxxxxxxxx> <20070717005854.GL31489@xxxxxxx> <5d96567b0707222309y61480271xa8220a0b179764e0@xxxxxxxxxxxxxx> <20070724010105.GN31489@xxxxxxx> <5d96567b0708070220u23c895ffk54849fca947b5100@xxxxxxxxxxxxxx> |
| Sender: | xfs-bounce@xxxxxxxxxxx |
Dave hello.

What is the current status of this problem? If you recall, xfs in 32bit
over a 10 TB md device (raid50 in this case) sees only 8TB (and no more).
The disks I am using are 750GB Hitachi. The kernel is 2.6.17.

thank you
raz

On 8/7/07, Raz <raziebe@xxxxxxxxx> wrote:
> On 7/24/07, David Chinner <dgc@xxxxxxx> wrote:
> > On Mon, Jul 23, 2007 at 09:09:03AM +0300, Raz wrote:
> > > My QA re-installed the system. Same kernel, different results. Now,
> > > /proc/partitions reports:
> > >    9     1   5114281984 md1
> > >    9     2   5128001536 md2
> > >    9     3  10242281472 md3
> > >
> > > blockdev --getsize64 /dev/md3
> > > 10488096227328
> > >
> > > but xfs keeps on crashing. When formatting it to 6.3 TB we're OK. When
> > > letting xfs's mkfs choose the
> >
> > So at 6.3TB everything is ok. At what point does it start having
> > problems? 6.4TB, 6.8TB, 8TB, 9TB?
> over 8 TB. We checked several times. At 8.5 it crashes.
>
> > I know Neil pointed out that you shouldn't have 10TB but closer to
> > 7TB - is this true?
> the drives are 750 GB each.
>
> > Cheers,
> >
> > Dave.
> > --
> > Dave Chinner
> > Principal Engineer
> > SGI Australian Software Group
>
> --
> Raz

--
Raz
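For readers comparing the figures in the quoted report: /proc/partitions counts 1 KiB blocks while blockdev --getsize64 reports bytes, so the two md3 numbers actually agree (10242281472 KiB × 1024 = 10488096227328 bytes, roughly 9.5 TiB) and the block layer sees the whole array; the truncation only appears once XFS addresses the device on the 32-bit kernel. The snippet below is a minimal sketch of that cross-check, not anything posted to the list; the device name /dev/md3 comes from the thread, and it assumes a shell with 64-bit arithmetic (e.g. bash).

```sh
#!/bin/bash
# Sketch: confirm the block layer and blockdev agree on an md device's size.
# /proc/partitions column 3 is the size in 1 KiB blocks, column 4 the name;
# blockdev --getsize64 prints the size in bytes.
DEV=md3   # taken from the report above; adjust for your setup

kib=$(awk -v d="$DEV" '$4 == d {print $3}' /proc/partitions)
bytes=$(blockdev --getsize64 "/dev/$DEV")

echo "/proc/partitions: ${kib} KiB (= $((kib * 1024)) bytes)"
echo "blockdev:         ${bytes} bytes"

# If these differ, the problem is below the filesystem; if they match but
# mkfs.xfs or the mounted filesystem reports less, the limit is being hit
# higher up (as in this thread, on a 32-bit kernel above ~8 TB).
if [ $((kib * 1024)) -eq "$bytes" ]; then
    echo "block-layer sizes agree"
else
    echo "size mismatch at the block layer"
fi
```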