> >I have the same problem on an xfs filesystem on a raid 1 device (mdraid),
> >which is full (97%), and not on other non-raid devices (which, unfortunately,
> >are not almost full).
I have raid 1 over raid 0 (together, raid 10). It's 55% full now:
/dev/md9 56G 31G 25G 55% /export
but it was 99% full a few months ago, and there were errors in the dmesg
log. Unfortunately I don't have them any more :-(
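Next time I'll save a copy before it scrolls away or the box reboots,
e.g. (the destination file is arbitrary):

# dmesg > /root/dmesg-xfs-errors.txt

and keep /var/log/messages around too, since klogd usually writes the
same kernel errors there.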
> >This problem appeared under Fedora 3 and SuSE 9.3.
I have this problem under Red Hat 7.2.
> >Here's the output of xfs_info for the problematic one:
Here is my xfs_info:
# xfs_info /dev/md9
meta-data=/              isize=256    agcount=8, agsize=163856 blks
data     =               bsize=4096   blocks=1310752, imaxpct=25
         =               sunit=16     swidth=32 blks, unwritten=0
naming   =version 2      bsize=4096
log      =internal       bsize=4096   blocks=1200, version=1
         =               sectsz=512   sunit=0 blks
realtime =none           extsz=131072 blocks=0, rtextents=0
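In case it helps, the overall fragmentation can be checked read-only
with xfs_db (assuming the frag command is available in this xfsprogs
version):

# xfs_db -r -c frag /dev/md9

The -r flag opens the device read-only, so it should be safe to run on
the mounted filesystem, though the numbers may be slightly stale.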
> It would also be interesting to see the xfs_repair output, and xfs_bmap
> (-v and -a) output of the problematic files prior to running xfs_fsr, if
The trouble is I don't know in advance which files xfs_fsr will have
problems with.
I'm sorry, I won't have time until Friday. Then I'll try to experiment
with the problem.
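When I do, I'll capture the extent maps before defragmenting, roughly
like this (the filename is only a placeholder for whatever turns out to
be fragmented):

# xfs_bmap -v /export/somefile > /tmp/somefile.bmap
# xfs_bmap -a -v /export/somefile >> /tmp/somefile.bmap

The first call prints the data fork extents verbosely, the second the
attribute fork, so there's a "before" picture to compare against what
xfs_fsr produces.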