On Tuesday July 17, dgc@xxxxxxx wrote:
> On Tue, Jul 17, 2007 at 09:56:25AM +1000, Neil Brown wrote:
> > On Tuesday July 17, dgc@xxxxxxx wrote:
> > > On Mon, Jul 16, 2007 at 04:53:22PM +0300, Raz wrote:
> > > >
> > > > Well you are right. /proc/partitions says:
> > > > ....
> > > > 8 241 488384001 sdp1
> > > > 9 1 3404964864 md1
> > > > 9 2 3418684416 md2
> > > > 9 3 6823647232 md3
> > > >
> > > > while xfs formats md3 as 9 TB.
..
> >
> > If XFS is given a 6.8TB device and formats it as 9TB, then I would be
> > looking at mkfs.xfs(??).
>
> mkfs.xfs tries to read the last block of the device that it is given
> and proceeds only if that read is successful. IOWs, mkfs.xfs has been
> told the size of the device is 9TB, it's successfully read from offset
> 9TB, so the device must be at least 9TB.
Odd.
Given that the drives are 490GB, and there are 8 in a raid5 array,
the raid5 arrays are really under 3.5TB each. And two of them together
are less than 7TB. So there definitely are not 9TB worth of bytes..
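(Roughly, working from the /proc/partitions figures above, which are
1K blocks: an 8-drive raid5 gives 7 data drives x 488384001 KB ~= 3.4TB,
which matches md1 and md2, and md1 + md2 ~= 6.8TB, which matches md3.
Nothing there comes close to 9TB.)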
mkfs.xfs uses the BLKGETSIZE64 ioctl, which returns
bdev->bd_inode->i_size, whereas /proc/partitions uses get_capacity,
which uses disk->capacity, so there is some room for them to return
different values... Except that on open, the kernel calls
  bd_set_size(bdev, (loff_t)get_capacity(disk)<<9);
which makes sure the two have the same value.
I cannot see where the size difference comes from.
What does
/sbin/blockdev --getsize64
report for each of the different devices, as compared to what
/proc/partitions reports?
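(If it is easier to gather in one go, the same comparison can be done
with a small program; this is only a rough sketch, not an existing tool.
It prints what BLKGETSIZE64 returns for each device, in bytes and in 1K
blocks, so the second column lines up with the #blocks column of
/proc/partitions.)

    /* getsize64.c -- hypothetical sketch.  For each device named on
     * the command line, print the BLKGETSIZE64 value in bytes and in
     * 1K blocks for comparison with /proc/partitions.
     */
    #define _FILE_OFFSET_BITS 64
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/fs.h>

    int main(int argc, char **argv)
    {
            int i;

            for (i = 1; i < argc; i++) {
                    unsigned long long bytes = 0;
                    int fd = open(argv[i], O_RDONLY);

                    if (fd < 0) {
                            perror(argv[i]);
                            continue;
                    }
                    if (ioctl(fd, BLKGETSIZE64, &bytes) < 0)
                            perror(argv[i]);
                    else
                            printf("%-12s %20llu bytes %15llu 1K-blocks\n",
                                   argv[i], bytes, bytes >> 10);
                    close(fd);
            }
            return 0;
    }

Run against each of /dev/md1, /dev/md2 and /dev/md3 and compare.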
NeilBrown
>
> However, internal to the kernel there appears to be some kind of
> wrapping bug, and typically that shows up with /proc/partitions
> showing an inconsistent size for the partition compared to other
> utilities.