
Re: XFS issue under 2.6.25.13 kernel

To: "Sławomir Nowakowski" <nailman23@xxxxxxxxx>, xfs@xxxxxxxxxxx
Subject: Re: XFS issue under 2.6.25.13 kernel
From: "Sławomir Nowakowski" <nailman23@xxxxxxxxx>
Date: Wed, 27 Aug 2008 20:09:18 +0200
In-reply-to: <20080827005243.GB5706@disturbed>
References: <50ed5c760808220303p37e03e8dge5b868a572374e0b@xxxxxxxxxxxxxx> <20080823010524.GM5706@disturbed> <50ed5c760808250408o44aeaf07me262eab8da8340ba@xxxxxxxxxxxxxx> <20080826014133.GS5706@disturbed> <50ed5c760808260553i7def5e93qb0bcb4d2206a4a38@xxxxxxxxxxxxxx> <20080827005243.GB5706@disturbed>
Sender: xfs-bounce@xxxxxxxxxxx
Dear Dave,

We really appreciate your help.

Following up on our previous correspondence about the differences
between the 2.6.17.13 and 2.6.25.13 kernel implementations, we would
like to ask some questions.

Our work is based on the git repository:

git://git.kernel.org

We have reverted some XFS changes in the 2.6.25.13 kernel, using
these 3 commits:

- 94E1E99F11... (SGI-PV: 964468)
- 4BE536DEBE... (SGI-PV: 955674)
- 4CA488EB4...  (SGI-PV: 971186)

With these changes we have created a patch for the 2.6.25.13 kernel.
This patch should eliminate the additional reservation of disk space
in the XFS file system. Our intention was to make the available disk
space similar between the 2.6.17.13 and 2.6.25.13 kernels.
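For reference, the revert sequence we followed was roughly as follows. The commit IDs below are placeholders standing in for the abbreviated hashes listed above, and the stable-tree URL is indicative only, so please treat this as a sketch of the workflow rather than the exact commands:

```shell
# Sketch of the revert workflow; <commit-for-...> are placeholders
# for the abbreviated commit hashes listed above, not real IDs.
git clone git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-2.6.25.y.git
cd linux-2.6.25.y
git checkout v2.6.25.13

# Revert each change without committing, so all three combine
# into a single working-tree diff
git revert --no-commit <commit-for-SGI-PV-964468>
git revert --no-commit <commit-for-SGI-PV-955674>
git revert --no-commit <commit-for-SGI-PV-971186>

# Capture the combined reverts as one patch
git diff HEAD > d10.diff
```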

Does the patch attached to this mail do everything properly? Is it
100% compatible with the XFS API?

If you want anything more from us, just ask. We will deliver it.

Thank you very much for your help.

Roland

2008/8/27, Dave Chinner <david@xxxxxxxxxxxxx>:
> On Tue, Aug 26, 2008 at 02:53:23PM +0200, Sławomir Nowakowski wrote:
> > 2008/8/26 Dave Chinner <david@xxxxxxxxxxxxx>:
> > run under 2.6.17.17 and 2.6.25.13 kernels?
> >
> > Here is a situation on 2.6.17.13 kernel:
> >
> > xfs_io -x -c 'statfs' /mnt/point
> >
> > fd.path = "/mnt/sda"
> > statfs.f_bsize = 4096
> > statfs.f_blocks = 487416
> > statfs.f_bavail = 6
> > statfs.f_files = 160
> > statfs.f_ffree = 154
> > geom.bsize = 4096
> > geom.agcount = 8
> > geom.agblocks = 61247
> > geom.datablocks = 489976
> > geom.rtblocks = 0
> > geom.rtextents = 0
> > geom.rtextsize = 1
> > geom.sunit = 0
> > geom.swidth = 0
> > counts.freedata = 6
> > counts.freertx = 0
> > counts.freeino = 58
> > counts.allocino = 64
>
> The counts.* numbers are the real numbers, not the statfs numbers
> which are somewhat made up - the inode count for example is
> influenced by the amount of free space....
>
> > xfs_io -x -c 'resblks' /mnt/point
> >
> > reserved blocks = 0
> > available reserved blocks = 0
> ....
>
> >
> > But under 2.6.25.13 kernel the situation looks different:
> >
> > xfs_io -x -c 'statfs' /mnt/point:
> >
> > fd.path = "/mnt/-sda4"
> > statfs.f_bsize = 4096
> > statfs.f_blocks = 487416
> > statfs.f_bavail = 30
> > statfs.f_files = 544
> > statfs.f_ffree = 538
>
> More free space, therefore more inodes....
>
> > geom.bsize = 4096
> > geom.agcount = 8
> > geom.agblocks = 61247
> > geom.datablocks = 489976
> > geom.rtblocks = 0
> > geom.rtextents = 0
> > geom.rtextsize = 1
> > geom.sunit = 0
> > geom.swidth = 0
> > counts.freedata = 30
> > counts.freertx = 0
> > counts.freeino = 58
> > counts.allocino = 64
>
> but the counts.* values show that the inode counts are the same.
> However, the free space is different, partially due to a different
> set of ENOSPC deadlock fixes that were done that required different
> calculations of space usage....
>
> > xfs_io -x -c 'resblks' /mnt/point:
> >
> > reserved blocks = 18446744073709551586
> > available reserved blocks = 18446744073709551586
>
> Well, that is wrong - that's a large negative number.
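Indeed, reinterpreting that value as a signed 64-bit integer confirms it is -30; a quick sanity check in Python:

```python
import struct

# resblks value reported by xfs_io on the 2.6.25.13 kernel
raw = 18446744073709551586

# Reinterpret the unsigned 64-bit value as signed (two's complement)
signed = struct.unpack("<q", struct.pack("<Q", raw))[0]

print(signed)       # -30
print(2**64 - raw)  # 30, i.e. the value is 2**64 - 30
```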
>
> FWIW, I can't reproduce this on a pure 2.6.24 on ia32 or 2.6.27-rc4 kernel
> on x86_64-UML:
>
> # mount /mnt/xfs2
> # df -k /mnt/xfs2
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/ubd/2             2086912      1176   2085736   1% /mnt/xfs2
> # xfs_io -x -c 'resblks 0' /mnt/xfs2
> reserved blocks = 0
> available reserved blocks = 0
> # df -k /mnt/xfs2
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/ubd/2             2086912       160   2086752   1% /mnt/xfs2
> # xfs_io -f -c 'truncate 2g' -c 'resvsp 0 2086720k' /mnt/xfs2/fred
> # df -k /mnt/xfs2
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/ubd/2             2086912   2086880        32 100% /mnt/xfs2
> # xfs_io -x -c statfs /mnt/xfs2
> fd.path = "/mnt/xfs2"
> statfs.f_bsize = 4096
> statfs.f_blocks = 521728
> statfs.f_bavail = 8
> statfs.f_files = 192
> statfs.f_ffree = 188
> ....
> counts.freedata = 8
> counts.freertx = 0
> counts.freeino = 60
> counts.allocino = 64
> death:/mnt# umount /mnt/xfs2
> death:/mnt# mount /mnt/xfs2
> # xfs_io -x -c statfs /mnt/xfs2
> fd.path = "/mnt/xfs2"
> statfs.f_bsize = 4096
> statfs.f_blocks = 521728
> statfs.f_bavail = 0
> statfs.f_files = 64
> statfs.f_ffree = 60
> ....
> counts.freedata = 0
> counts.freertx = 0
> counts.freeino = 60
> counts.allocino = 64
> # df -k /mnt/xfs2
> Filesystem           1K-blocks      Used Available Use% Mounted on
> /dev/ubd/2             2086912   2086912         0 100% /mnt/xfs2
> # xfs_io -x -c resblks /mnt/xfs2
> reserved blocks = 8
> available reserved blocks = 8
>
> Can you produce a metadump of the filesystem image that you have produced
> on 2.6.17 that results in bad behaviour on later kernels so I can see if
> I can reproduce the same results here? If you've only got a handful of files
> the image will be small enough to mail to me....
>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@xxxxxxxxxxxxx
>

Attachment: d10.diff.txt
Description: Text document
