I recently bought a large disk (250 GB; my previous largest was 150 GB, each a
single xfs partition made with default params). I made this one with
-i size=2048 and -b size=8192 and got this output:
# mkfs.xfs -b size=8192 -i size=2048 -L Backups /dev/hdg1
meta-data=/dev/hdg1              isize=2048   agcount=59, agsize=524288 blks
         =                       sectsz=512
data     =                       bsize=8192   blocks=30638963, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=8192
log      =internal log           bsize=8192   blocks=14960, version=1
         =                       sectsz=512   sunit=0 blks
realtime =none                   extsz=65536  blocks=0, rtextents=0
--- but when I tried to mount it:
ishtar:var/log# mount /dev/hdg1 /mnt
mount: Function not implemented
ishtar:var/log# mount -t xfs /dev/hdg1 /mnt
mount: Function not implemented
---
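(Aside, in case anyone wants to reproduce this: "Function not implemented" is
ENOSYS coming back from the mount(2) call, and the kernel may log the actual
reason. Assuming plain dmesg and strace are available, something like this
should show it:)

# dmesg | tail                                         (if xfs printed a complaint about the block size, it should show up here)
# strace -e trace=mount mount -t xfs /dev/hdg1 /mnt    (shows the mount(2) call itself failing with ENOSYS)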
If the size is 250 billion bytes, then the number of blocks is at most ~509
million. That's not even close to overflowing an unsigned or signed 32-bit
block value, so that shouldn't be a factor; oops... this seems to be an
xfs-specific bug.
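(Back-of-envelope for the block count, assuming the disk is roughly 250 * 10^9
bytes:)

# echo '250 * 10^9 / 8192' | bc        (about 30.5 million 8K blocks, roughly matching the blocks=30638963 above)
# echo '250 * 10^9 / 512' | bc         (about 488 million even counting 512-byte sectors; still way under 2^31)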
I just remade the partition with default params... oh,
hmmm... I thought the Linux page size was 8K(?). Shouldn't an 8K block size
also work?
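(A quick way to check the page size on the box, assuming getconf is installed
-- on i386 it's 4K, not 8K, though other architectures differ:)

# getconf PAGESIZE        (prints the kernel page size in bytes; 4096 on i386)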
If that is the problem, any idea when xfs will be able to use block
sizes > page size?
If that isn't the problem, is this an edge case that isn't being checked
correctly?
Sorry for the bother... I should have just stuck with the defaults, but that
seemed so wasteful since the average file size on my backup disk is 3.6
megabytes...
A 64K block size would likely be more efficient on such a disk... sigh.
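(Back-of-envelope on the 64K idea: on average each file wastes about half a
block of slack, so with 3.6 MB files even 64K blocks would cost well under 1%:)

# echo 'scale=6; (64 * 1024 / 2) / (3.6 * 1024 * 1024) * 100' | bc    (average slack as a percent of a 3.6 MB file; comes out under 1)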
-l