On Tuesday July 17, dgc@xxxxxxx wrote:
> On Mon, Jul 16, 2007 at 04:53:22PM +0300, Raz wrote:
> >
> > Well you are right. /proc/partitions says:
> > ....
> > 8 241 488384001 sdp1
> > 9 1 3404964864 md1
> > 9 2 3418684416 md2
> > 9 3 6823647232 md3
> >
> > while xfs formats md3 as 9 TB.
> > If i am using LBD , what is the biggest size I can use on i386 ?
>
> Supposedly 16TB: a 32-bit page index x 4k page size = 16TB. Given that the size is
> not being reported correctly, I'd say that this is probably not an
> XFS issue. The next thing to check is how large an MD device you
> can create correctly.
>
> Neil, do you know of any problems with > 8TB md devices on i386?
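(For reference, the 16TB figure above is just arithmetic on the 32-bit page
index; a quick sketch, not from the original mail:)

```python
# Sketch: the i386 page-cache limit referred to above.
# On i386 a page index is a 32-bit unsigned long, so the largest
# addressable device offset is 2^32 pages of 4 KiB each.
PAGE_SIZE = 4096          # 4 KiB pages on i386
MAX_PAGES = 2 ** 32       # 32-bit page index

limit_bytes = MAX_PAGES * PAGE_SIZE
print(limit_bytes)                     # 17592186044416
print(limit_bytes // 2 ** 40, "TiB")   # 16 TiB
```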
Should work, but the amount of testing has been limited, and bugs
have existed.
Each component of a raid5 is limited to 2^32 K by the metadata, so
that is 4TB. At 490GB, you are well under that.
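(A back-of-the-envelope check of that per-component limit against the sdp1
size from /proc/partitions above; plain arithmetic, my sketch:)

```python
# Sketch: the raid5 per-component limit from the md metadata.
# The superblock stores the component size as a 32-bit count of
# 1 KiB blocks, so the ceiling is 2^32 KiB = 4 TiB per component.
KIB = 1024
component_limit = 2 ** 32 * KIB   # bytes
sdp1_kib = 488384001              # sdp1 size from /proc/partitions above

print(component_limit)                    # 4398046511104 (4 TiB)
print(sdp1_kib * KIB < component_limit)   # True: well under the limit
```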
There should be no problem with a 3TB raid5, provided LBD has been
selected.
raid0 over 3TB devices should also be fine. There was a bug fixed in
May this year that caused problems when md/raid0 was used over
components larger than 4TB on a 32bit host, but that shouldn't affect
you, and it does suggest that someone had success with a very large
raid0 once this bug was fixed.
If XFS is given a 6.8TB device and formats it as 9TB, then I would be
looking at mkfs.xfs(??).
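(The discrepancy is easy to check by hand from the /proc/partitions figures,
which are in 1 KiB blocks; a quick sketch, not from the original mail:)

```python
# Sketch: sanity-check md3's size against the 9TB mkfs.xfs reported.
# /proc/partitions reports sizes in 1 KiB blocks.
KIB = 1024
md3_blocks = 6823647232       # md3 line from /proc/partitions above

md3_bytes = md3_blocks * KIB
print(md3_bytes)              # 6987414765568
print(md3_bytes / 10 ** 12)   # ~6.99 TB -- nowhere near 9 TB
```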
NeilBrown