Re: raid50 and 9TB volumes

To: David Chinner <dgc@xxxxxxx>
Subject: Re: raid50 and 9TB volumes
From: Neil Brown <neilb@xxxxxxx>
Date: Tue, 17 Jul 2007 09:56:25 +1000
Cc: Raz <raziebe@xxxxxxxxx>, xfs-oss <xfs@xxxxxxxxxxx>
In-reply-to: message from David Chinner on Tuesday July 17
References: <5d96567b0707160542t2144c382mbfe3da92f0990694@mail.gmail.com> <20070716130140.GC31489@sgi.com> <5d96567b0707160653m5951fac9v5a56bb4c92174d63@mail.gmail.com> <20070716221831.GE31489@sgi.com>
Sender: xfs-bounce@xxxxxxxxxxx
On Tuesday July 17, dgc@xxxxxxx wrote:
> On Mon, Jul 16, 2007 at 04:53:22PM +0300, Raz wrote:
> > 
> > Well, you are right.  /proc/partitions says:
> > ....
> >   8   241  488384001 sdp1
> >   9     1 3404964864 md1
> >   9     2 3418684416 md2
> >   9     3 6823647232 md3
> > 
> > while xfs formats md3 as 9 TB.
> > If I am using LBD, what is the biggest size I can use on i386?
> 
> Supposedly 16TB: a 32-bit page index x 4k page size = 16TB (the
> arithmetic is sketched below).  Given that the size is not being
> reported correctly, I'd say that this is probably not an XFS
> issue. The next thing to check is how large an MD device you can
> create correctly.
> 
> Neil, do you know of any problems with > 8TB md devices on i386?
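
To make that arithmetic concrete, here is a minimal sketch in Python,
assuming the i386 limit quoted above: a 32-bit page cache index over
4k pages.

    # Back-of-the-envelope check of the 16TB figure (assumption:
    # 32-bit page index, 4 KiB pages on i386).
    PAGE_SIZE = 4096              # bytes per page
    MAX_PAGES = 2 ** 32           # 32-bit page cache index

    max_bytes = MAX_PAGES * PAGE_SIZE
    print(max_bytes)                       # 17592186044416
    print(max_bytes / 2 ** 40, "TiB")      # 16.0 TiB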

Should work, but the amount of testing has been limited, and bugs
have existed.

Each component of a raid5 is limited to 2^32 K by the metadata, so
that is 4TB.  At 490GB, you are well under that.
There should be no problem with a 3TB raid5, providing LBD has been
selected.
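
As a sketch of where that per-component cap comes from (assuming the
version-0.90 superblock, which records each component's usable size
as a 32-bit count of 1K blocks):

    # Per-component raid5 limit, assuming the md 0.90 superblock's
    # 32-bit size field counted in 1 KiB blocks.
    KIB = 1024
    max_component = (2 ** 32) * KIB              # bytes
    print(max_component / 2 ** 40, "TiB")        # 4.0 TiB

    # e.g. sdp1 from the /proc/partitions listing above:
    component_kib = 488384001
    print(component_kib * KIB / 2 ** 30, "GiB")  # ~465 GiB, well under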

raid0 over 3TB devices should also be fine.  There was a bug fixed in
May this year that caused problems when md/raid0 was used over
components larger than 4TB on a 32bit host, but that shouldn't affect
you, and it does suggest that someone had success with a very large
raid0 once this bug was fixed.

If XFS is given a 6.8TB device and formats it as 9TB, then I would be
looking at mkfs.xfs(??).
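
One way to narrow that down is to compare the sizes the kernel itself
reports against what mkfs.xfs prints.  A sketch in Python, assuming
the /proc/partitions format shown above (sizes in 1K blocks):

    # Cross-check kernel-reported md device sizes before blaming
    # mkfs.xfs.  /proc/partitions lists sizes in 1 KiB blocks.
    with open("/proc/partitions") as f:
        for line in f:
            fields = line.split()
            # skip the header line ("major minor #blocks name")
            if len(fields) != 4 or not fields[2].isdigit():
                continue
            major, minor, blocks, name = fields
            if name.startswith("md"):
                tib = int(blocks) * 1024 / 2 ** 40
                print(f"{name}: {tib:.2f} TiB")

    # md3 should print ~6.4 TiB; if mkfs.xfs then reports 9TB on it,
    # the discrepancy is above the block layer.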

NeilBrown

