On Tue, Aug 26, 2008 at 02:53:23PM +0200, Sławomir Nowakowski wrote:
> 2008/8/26 Dave Chinner <david@xxxxxxxxxxxxx>:
> run under 2.6.17.17 and 2.6.25.13 kernels?
>
> Here is the situation on the 2.6.17.13 kernel:
>
> xfs_io -x -c 'statfs' /mnt/point
>
> fd.path = "/mnt/sda"
> statfs.f_bsize = 4096
> statfs.f_blocks = 487416
> statfs.f_bavail = 6
> statfs.f_files = 160
> statfs.f_ffree = 154
> geom.bsize = 4096
> geom.agcount = 8
> geom.agblocks = 61247
> geom.datablocks = 489976
> geom.rtblocks = 0
> geom.rtextents = 0
> geom.rtextsize = 1
> geom.sunit = 0
> geom.swidth = 0
> counts.freedata = 6
> counts.freertx = 0
> counts.freeino = 58
> counts.allocino = 64
The counts.* numbers are the real numbers, not the statfs numbers,
which are somewhat made up - the inode count, for example, is
influenced by the amount of free space....
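As a rough illustration of how "made up" they are (just a sketch with
illustrative names, not the actual kernel code, and assuming the default
256 byte inodes): the reported inode total is essentially the allocated
inodes plus the inodes that could still be created in the free data
blocks. Plugging in the counts.* values from the 2.6.17 output above:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	/* values from the 2.6.17 output above */
	uint64_t allocino = 64;		/* counts.allocino */
	uint64_t freeino  = 58;		/* counts.freeino */
	uint64_t freedata = 6;		/* counts.freedata */
	uint64_t inodes_per_block = 16;	/* 4096 byte blocks / 256 byte inodes */

	/* inodes that could still be created in the free space */
	uint64_t fakeinos = freedata * inodes_per_block;

	uint64_t f_files = allocino + fakeinos;            /* 160 */
	uint64_t f_ffree = f_files - (allocino - freeino); /* 154 */

	printf("f_files = %llu, f_ffree = %llu\n",
	       (unsigned long long)f_files,
	       (unsigned long long)f_ffree);
	return 0;
}

which gives the f_files = 160 / f_ffree = 154 reported above - and the
same arithmetic with freedata = 30 gives the 544/538 you see on 2.6.25
further down.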
> xfs_io -x -c 'resblks' /mnt/point
>
> reserved blocks = 0
> available reserved blocks = 0
....
>
> But under the 2.6.25.13 kernel the situation looks different:
>
> xfs_io -x -c 'statfs' /mnt/point:
>
> fd.path = "/mnt/-sda4"
> statfs.f_bsize = 4096
> statfs.f_blocks = 487416
> statfs.f_bavail = 30
> statfs.f_files = 544
> statfs.f_ffree = 538
More free space, therefore more inodes....
> geom.bsize = 4096
> geom.agcount = 8
> geom.agblocks = 61247
> geom.datablocks = 489976
> geom.rtblocks = 0
> geom.rtextents = 0
> geom.rtextsize = 1
> geom.sunit = 0
> geom.swidth = 0
> counts.freedata = 30
> counts.freertx = 0
> counts.freeino = 58
> counts.allocino = 64
But the counts.* values show that the inode counts are the same.
However, the free space is different, partially due to a different
set of ENOSPC deadlock fixes, which required different calculations
of space usage....
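As a very rough sketch of the idea (the names are illustrative and the
exact reservation has changed between kernel versions, so don't read
this as the real code): the kernel holds back a small set-aside of
blocks so metadata allocation can't deadlock at ENOSPC, and free space
is reported with that set-aside already subtracted:

#include <stdint.h>

/* illustrative only - report free space minus the ENOSPC set-aside */
static uint64_t reported_free(uint64_t fdblocks, uint64_t set_aside)
{
	return fdblocks > set_aside ? fdblocks - set_aside : 0;
}

Change how set_aside is calculated between versions and the same
on-disk image will report different free space.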
> xfs_io -x -c 'resblks' /mnt/point:
>
> reserved blocks = 18446744073709551586
> available reserved blocks = 18446744073709551586
Well, that is wrong - that's a large negative number.
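Just to illustrate what that number is (nothing XFS-specific here):
it's 2^64 - 30, i.e. a counter that has been decremented below zero
and is being printed as an unsigned 64-bit value:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t resblks = 18446744073709551586ULL;

	/* reinterpreting the unsigned value as signed prints -30 */
	printf("%lld\n", (long long)(int64_t)resblks);
	return 0;
}

So somewhere the reserved block accounting has gone negative.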
FWIW, I can't reproduce this on a pure 2.6.24 kernel on ia32 or on a
2.6.27-rc4 kernel on x86_64-UML:
# mount /mnt/xfs2
# df -k /mnt/xfs2
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/ubd/2 2086912 1176 2085736 1% /mnt/xfs2
# xfs_io -x -c 'resblks 0' /mnt/xfs2
reserved blocks = 0
available reserved blocks = 0
# df -k /mnt/xfs2
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/ubd/2 2086912 160 2086752 1% /mnt/xfs2
# xfs_io -f -c 'truncate 2g' -c 'resvsp 0 2086720k' /mnt/xfs2/fred
# df -k /mnt/xfs2
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/ubd/2 2086912 2086880 32 100% /mnt/xfs2
# xfs_io -x -c statfs /mnt/xfs2
fd.path = "/mnt/xfs2"
statfs.f_bsize = 4096
statfs.f_blocks = 521728
statfs.f_bavail = 8
statfs.f_files = 192
statfs.f_ffree = 188
....
counts.freedata = 8
counts.freertx = 0
counts.freeino = 60
counts.allocino = 64
death:/mnt# umount /mnt/xfs2
death:/mnt# mount /mnt/xfs2
# xfs_io -x -c statfs /mnt/xfs2
fd.path = "/mnt/xfs2"
statfs.f_bsize = 4096
statfs.f_blocks = 521728
statfs.f_bavail = 0
statfs.f_files = 64
statfs.f_ffree = 60
....
counts.freedata = 0
counts.freertx = 0
counts.freeino = 60
counts.allocino = 64
# df -k /mnt/xfs2
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/ubd/2 2086912 2086912 0 100% /mnt/xfs2
# xfs_io -x -c resblks /mnt/xfs2
reserved blocks = 8
available reserved blocks = 8
Can you produce a metadump of the filesystem image that you have produced
on 2.6.17 and that results in bad behaviour on later kernels, so I can see
if I can reproduce the same results here? If you've only got a handful of
files, the image will be small enough to mail to me....
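Something along these lines should do it (the device name is just a
placeholder for wherever that filesystem actually lives, and it's best
run with the filesystem unmounted):

# xfs_metadump /dev/sdXN /tmp/xfs-2.6.17.metadump
# bzip2 /tmp/xfs-2.6.17.metadump

xfs_metadump only captures metadata, so with a handful of files the
compressed image should be tiny.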
Cheers,
Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx