On Fri, 26 Mar 2010, Dave Chinner wrote:
On Thu, Mar 25, 2010 at 05:03:52PM -0700, david@xxxxxxx wrote:
On Fri, 26 Mar 2010, Dave Chinner wrote:
On Thu, Mar 25, 2010 at 04:15:42PM -0700, david@xxxxxxx wrote:
I'm working with a raid 0 (md) array on top of 10 16x1TB raid 6
hardware arrays.
....
I then did mkfs.xfs /dev/md0
but a df is showing me 128TB
What is in /proc/partitions?
# cat /proc/partitions
major minor  #blocks  name
   8     0    292542464 sda
   8     1      2048287 sda1
   8     2      2048287 sda2
   8     3      2048287 sda3
   8     4    286390755 sda4
   8    16  13671874048 sdb
   8    17  13671874014 sdb1
   8    32  13671874048 sdc
   8    33  13671874014 sdc1
....
   8   160  13671874048 sdk
   8   161  13671874014 sdk1
   9     0 136718739840 md0
Is there any reason for putting partitions on these block devices?
You could just use the block devices without partitions, and that
will avoid potential alignment problems....
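(For reference, a minimal sketch of building the stripe directly on the whole
disks; the device names are the ones from the /proc/partitions listing above,
but the chunk size is only an assumed example, not a recommendation:)

  # illustrative only: create the RAID0 straight on the whole disks,
  # with no partition tables; --chunk is in KiB and 512 is an assumed value
  mdadm --create /dev/md0 --level=0 --raid-devices=10 --chunk=512 /dev/sd[b-k]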
I would like the raid to auto-assemble, and I can't do that without
partitions, can I?
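(For what it's worth, auto-assembly does not strictly need partitions: mdadm
can assemble whole-disk arrays from their superblocks via mdadm.conf, and it
is only the old in-kernel autodetect path that wants type-0xfd partitions.
A rough sketch, using the standard config path:)

  # record the arrays so boot scripts / the initramfs can reassemble them by UUID
  mdadm --examine --scan >> /etc/mdadm.conf

  # assemble everything listed in mdadm.conf (normally done automatically at boot)
  mdadm --assemble --scan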
is this just rounding error combined with the 1000=1k vs 1024=1k
marketing stuff,
Probably.
or is there some limit I am bumping into here.
Unlikely to be an XFS limit - I was doing some "what happens if"
testing on multi-PB sized XFS filesystems hosted on sparse files
a couple of days ago....
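(As an aside, that sort of sparse-file experiment is easy to reproduce; the
size and paths below are made up for illustration, not Dave's actual procedure:)

  # sparse backing file: the filesystem holding it must support files this
  # large (XFS does); only metadata that actually gets written consumes space
  truncate -s 2P /scratch/fake.img
  mkfs.xfs -f /scratch/fake.img
  mkdir -p /mnt/test
  mount -o loop /scratch/fake.img /mnt/test
  df -h /mnt/test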
Ok, 128TB is a suspiciously round (in computer terms) number,
especially when the math is 10 sets of 14 drives (each 1TB), so I
figured I'd double check.
136718739840 / 10^9 = 136.72TB <==== marketing number
136718739840 / 2^30 = 127.33TiB <==== what df shows
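(For the record, the #blocks column in /proc/partitions is in 1 KiB units,
so the same sums on the command line look like:)

  # 136718739840 one-KiB blocks, taken from the md0 line above
  echo '136718739840 / 10^9' | bc -l    # ~136.72, the power-of-ten "TB" figure
  echo '136718739840 / 2^30' | bc -l    # ~127.33 TiB, which df -h shows as 128T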
Thanks.
the next fun thing is figuring out what sort of stride, etc. parameters I
should have used for this filesystem.
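(A rough sketch of the usual geometry options, purely as a starting point:
the 256 KiB stripe unit is a made-up value, and whether sw should describe
the outer md raid 0 or the full data width of the hardware arrays is exactly
the judgement call in question:)

  # hypothetical numbers: 256 KiB chunk on the md raid 0, 10 members wide
  mkfs.xfs -f -d su=256k,sw=10 /dev/md0

  # check what geometry the filesystem actually got (sunit/swidth in fs blocks)
  xfs_info /dev/md0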
David Lang