
Re: strange behavior of a larger xfs directory

To: Dave Chinner <david@xxxxxxxxxxxxx>
Subject: Re: strange behavior of a larger xfs directory
From: Hans-Peter Jansen <hpj@xxxxxxxxx>
Date: Wed, 06 Mar 2013 00:48:25 +0100
Cc: xfs@xxxxxxxxxxx
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <20130305222941.GP26081@dastard>
References: <4300208.uZ6HVTycB6@xrated> <8026381.3dEJ1E4pzL@xrated> <20130305222941.GP26081@dastard>
User-agent: KMail/4.9.5 (Linux/3.4.28-2.20-desktop; KDE/4.9.5; x86_64; ; )
On Wednesday 6 March 2013 09:29:41, Dave Chinner wrote:
> On Tue, Mar 05, 2013 at 09:32:02PM +0100, Hans-Peter Jansen wrote:
> > On Tuesday 5 March 2013 10:05:27, Dave Chinner wrote:
> > > On Mon, Mar 04, 2013 at 05:40:13PM +0100, Hans-Peter Jansen wrote:
> > > Second solution: Run 3.8.1, make sure you mount with inode32, and
> > > then run the xfs_reno tool mentioned on this page:
> > > 
> > > http://xfs.org/index.php/Unfinished_work
> > > 
> > > to find all the inodes with inode numbers larger than 32
> > > bits and move them to locations with smaller inode numbers.
> > 
> > Okay, I would like to take that route.
> > 
> > I've updated the xfsprogs, xfsdump and xfstests packages in my openSUSE
> > build service repo home:frispete:tools to current versions today, and
> > plan to submit them to Factory. openSUSE is always lagging in this area.
> > 
> > I've tried to include a build of the xfs_reno tool in xfsprogs, since, as
> > you mentioned, others might have a similar need soon. Unfortunately I
> > failed so far, because it is using some attr_multi and attr_list
> > interfaces, that aren't part of the xfsprogs visible API anymore. Only
> > the handle(3) man page refers to them.
> > 
> > Attached is my current state: I've relocated the patch to xfsprogs 3.1.9,
> > because it already carries all the necessary headers (apart from
> > attr_multi
> > and attr_list). The attr interfaces seem to be collected in libhandle now,
> > hence I've added it to the build.
> 
> attr_list and attr_multi are supplied by libattr, you should not
> need the *by_handle variants at all - they are special sauce used by
> xfsdump, not xfs_reno....

Ahh, I see. These interfaces can't be getting much exercise, given that Google 
doesn't relate them to libattr very prominently..
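
Just to have it in the archives: those calls live in libattr's
<attr/attributes.h> and only need -lattr at link time. A minimal sketch of
their use, written from memory of the attr_list(3) man page, untested and
with error handling trimmed:

/* list_attrs.c - enumerate extended attributes of a path via libattr's
 * IRIX-compatible API; build with: gcc -o list_attrs list_attrs.c -lattr */
#include <stdio.h>
#include <string.h>
#include <attr/attributes.h>

int main(int argc, char **argv)
{
    char buf[65536];
    attrlist_t *list = (attrlist_t *)buf;
    attrlist_cursor_t cursor;
    int i;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <path>\n", argv[0]);
        return 1;
    }

    memset(&cursor, 0, sizeof(cursor));
    do {
        /* fetch one buffer full of attribute names */
        if (attr_list(argv[1], buf, sizeof(buf), ATTR_DONTFOLLOW,
                      &cursor) < 0) {
            perror("attr_list");
            return 1;
        }
        for (i = 0; i < list->al_count; i++) {
            attrlist_ent_t *ent = ATTR_ENTRY(buf, i);
            printf("%s (%u bytes)\n", ent->a_name,
                   (unsigned)ent->a_valuelen);
        }
    } while (list->al_more);    /* more attrs than fit in one buffer */

    return 0;
}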

> .....
> 
> > +TOPDIR = ..
> > +include $(TOPDIR)/include/builddefs
> > +
> > +LTCOMMAND = xfs_reno
> > +CFILES = xfs_reno.c
> > +LLDLIBS = $(LIBATTR)
> 
> The patch assumes that libattr has been found by autoconf and set up
> in $(LIBATTR), but xfsprogs does not currently use libattr and hence
> that variable isn't set up. Therefore this line is a no-op:
> 
> +LLDLIBS = $(LIBATTR)
> 
> Change it to:
> 
> LLDLIBS = -lattr
> 
> And the xfs_reno should then link.
> 
> BTW, if you want extra points and add the autoconf magic to the
> patch, copy it from the xfsdump tree. The places you need to copy
> from are:
> 
> $ git grep -l LIBATTR |grep -v Makefile
> configure.ac
> include/builddefs.in
> m4/package_attrdev.m4
> $

Done all of that up to this point ;-) Nice and easy.
Committed, attached, and a public build is on the way.
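
For anyone replaying this without the attachments, the moving parts boil down
to roughly the following (a simplified sketch; the macro and variable names
here are illustrative, the real ones are lifted from the xfsdump tree as Dave
suggested):

# m4/package_attrdev.m4: probe for attr_list() in libattr
AC_DEFUN([AC_PACKAGE_NEED_ATTRLIST_LIBATTR],
  [ AC_CHECK_LIB(attr, attr_list, [ libattr=-lattr ], [
        echo 'FATAL ERROR: could not find a valid attr library.'
        exit 1
    ])
    AC_SUBST(libattr)
  ])

# configure.ac: invoke the probe
AC_PACKAGE_NEED_ATTRLIST_LIBATTR

# include/builddefs.in: hand the result to the Makefiles
LIBATTR = @libattr@

With something like that in place, the LLDLIBS = $(LIBATTR) line in the
xfs_reno Makefile finally resolves to -lattr instead of expanding to nothing.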

Dave, you made my day, ahem, night. Thank you very much.

To a casual consumer of this thread: go grab the xfs_reno.patch from the 
preceding mail plus the patch attached here, apply both to a current version of 
xfsprogs, build, and read man xfs_reno (rough steps below). Seriously.
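
Rough steps, for the impatient (untested as written; adjust paths, patch
names and -p levels to whatever you actually saved):

tar xf xfsprogs-3.1.9.tar.gz && cd xfsprogs-3.1.9
patch -p1 < ../xfs_reno.patch       # from the preceding mail
patch -p1 < ../xfs_reno_fix.diff    # attached to this mail
make configure                      # or rerun "aclocal -I m4 && autoconf"
                                    # so the new m4 bits get picked up
./configure && make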

> And for even more bonus points, you could write a basic xfstest that
> creates a bunch of 64 bit inodes and then runs xfs_reno on it and
> checks that they get moved to 32 bit inodes. At that point, we could
> probably pull the xfs_reno patch into xfsprogs and ship it....

Hmm, any idea how xfs can be tricked into generating 64 bit inodes without 
having to create an excessively big test fs, or is that simply the accepted practice?
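
The only trick that comes to mind is a big-but-sparse loopback image, so the
host fs only has to carry the metadata. Something along these lines (untested,
sizes and paths illustrative):

# ~2 TB sparse image: large enough that inodes allocated in the upper AGs
# need more than 32 bits
truncate -s 2T /scratch/reno_test.img
mkfs.xfs -f /scratch/reno_test.img
mount -o loop,inode64 /scratch/reno_test.img /mnt/test

# with inode64 new directories get rotored across the AGs, so a handful of
# mkdirs should already land some inodes above 2^32
for i in $(seq 0 63); do
    mkdir /mnt/test/dir$i
    touch /mnt/test/dir$i/file
done
ls -id /mnt/test/dir* | awk '$1 > 2^32'    # the candidates for xfs_reno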

Note to self: xfs_reno could use a mount option check. I forgot to remount one 
partition with inode32 and, consequently, merely moved the offending inodes to 
other 64 bit inode numbers..

Cheers,
Pete

Attachment: xfs_reno_fix.diff
Description: Text Data
