Hi,
I have the same problem on an XFS filesystem on a RAID 1 device (mdraid), which is almost
full (97%),
but not on other, non-RAID devices (which, unfortunately, are not almost full, so I cannot tell whether the RAID or the fill level makes the difference).
The problem appeared under both Fedora 3 and SuSE 9.3.
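In case it is useful, this is how the fill level and the free-space layout on /data can be checked (just a sketch, assuming xfs_db from xfsprogs is installed; the freesp summary shows how fragmented the remaining free space is):
> df -h /data
> xfs_db -r -c "freesp -s" /dev/md0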
Here's the output of xfs_info for the problematic one:
> xfs_info /dev/md0
meta-data=/data        isize=256    agcount=16, agsize=10051648 blks
         =             sectsz=512
data     =             bsize=512    blocks=160826368, imaxpct=25
         =             sunit=64     swidth=128 blks, unwritten=1
naming   =version 2    bsize=512
log      =internal     bsize=512    blocks=65536, version=1
         =             sectsz=512   sunit=0 blks
realtime =none         extsz=65536  blocks=0, rtextents=0
And the others:
> xfs_info /dev/hda3
meta-data=/            isize=256    agcount=16, agsize=7484281 blks
         =             sectsz=512
data     =             bsize=512    blocks=119748496, imaxpct=25
         =             sunit=0      swidth=0 blks, unwritten=1
naming   =version 2    bsize=4096
log      =internal     bsize=512    blocks=58470, version=1
         =             sectsz=512   sunit=0 blks
realtime =none         extsz=65536  blocks=0, rtextents=0
> xfs_info /dev/hdd1
meta-data=/unimportant_data  isize=256    agcount=16, agsize=4889780 blks
         =                   sectsz=512
data     =                   bsize=512    blocks=78236480, imaxpct=25
         =                   sunit=0      swidth=0 blks, unwritten=1
naming   =version 2          bsize=4096
log      =internal           bsize=512    blocks=38201, version=1
         =                   sectsz=512   sunit=0 blks
realtime =none               extsz=65536  blocks=0, rtextents=0
I've also tried with 4k blocks and the result was the same.
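(That is, re-creating the filesystem with roughly the command below; this is only a sketch, the exact mkfs.xfs options I used may have differed:)
> mkfs.xfs -b size=4096 /dev/md0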
Hope this helps,
Mathieu