Fwd: xfs_reno

Mark Tinguely tinguely at sgi.com
Tue Mar 12 08:48:46 CDT 2013


On 03/12/13 04:02, Hans-Peter Jansen wrote:
> On Monday, March 11, 2013, 16:48:03, Mark Tinguely wrote:
>> On 03/06/13 08:55, Hans-Peter Jansen wrote:
>>> Hi Dave,
>>>
>>> I tried to gather Barry's SOB, but have failed so far. His trace ends
>>> in 2009, Google-wise.
>>>
>>> How is this case usually handled?
>>>
>>> Here's the current state of things.
>>>
>>> Cheers,
>>> Pete
>>>
>>>
>>> ----------  Forwarded Message  ----------
>>>
>>> Subject: xfs_reno
>>> Date: Wednesday, March 6, 2013, 12:52:19
>>> From: Hans-Peter Jansen <hpj at urpla.net>
>>> To: bnaujok at sgi.com
>>>
>>> Hi Barry,
>>>
>>> attached is a slightly mangled version of your xfs_reno tool, which I
>>> badly needed recently. While at it, I plan to submit it, as it saved my
>>> *ss. Thanks.
>>>
>>> Apart from relocating it into xfsprogs, I just changed this
>>>
>>> +       log_message(LOG_DEBUG, "%s: %llu %lu %s", msg, node->ino,
>>> +                       node->numpaths, node->paths[0]);
>>>
>>> from %llu to %lu for the node->numpaths argument. It might still be
>>> wrong, as numpaths is defined as nlink_t, which is a __u32 type, but
>>> with %llu the %s printed garbage like this:
>>>
>>> Scanning directory tree...
>>> xfs_reno: add_node_path: ino 8611163235, path
>>> /work/dlbase/hosts/11.2/pico/var/run/screens
>>> xfs_reno: add_node_path: ino 8611163233, path
>>> /work/dlbase/hosts/11.2/pico/var/run/pcscd/pcscd.events
>>> xfs_reno: add_node_path: ino 8611163234, path
>>> /work/dlbase/hosts/11.2/pico/var/run/uscreens
>>> xfs_reno: nodehash: 8611163233 692488159933497345 ��]��f�e�
>>> xfs_reno: nodehash: 8611163234 692366801337581569 ��]��f�e�
>>> xfs_reno: nodehash: 8611163235 692223830466232321 ��]��f�e�
>>>
>>> I guess the mismatched specifier makes printf pull the arguments at the
>>> wrong offsets, so the paths[0] pointer gets consumed as part of the
>>> %llu value. What do you think?
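>>>
>>> To make the mismatch concrete, here is a minimal, untested sketch;
>>> the struct layout and field names are my guesses from the snippet
>>> above, so treat them as assumptions. Casting to a fixed width
>>> sidesteps the question of what nlink_t really is on a given ABI:
>>>
>>>     #include <stdio.h>
>>>     #include <sys/types.h>   /* ino_t, nlink_t */
>>>
>>>     /* Hypothetical stand-in for the real node structure. */
>>>     struct node {
>>>         ino_t    ino;
>>>         nlink_t  numpaths;   /* 32 or 64 bits, depending on ABI */
>>>         char    *paths[1];
>>>     };
>>>
>>>     int main(void)
>>>     {
>>>         struct node n = { 8611163235ULL, 1, { "/work/example" } };
>>>
>>>         /* Mismatched: on ABIs that pass varargs on the stack, a
>>>          * 64-bit %llu read of a 32-bit numpaths shifts every later
>>>          * argument, so the %s picks up garbage bytes. */
>>>         /* printf("%llu %llu %s\n", n.ino, n.numpaths, n.paths[0]); */
>>>
>>>         /* Safe: cast the integers to a known width so the
>>>          * specifiers always match the promoted arguments. */
>>>         printf("%llu %llu %s\n",
>>>                (unsigned long long)n.ino,
>>>                (unsigned long long)n.numpaths,
>>>                n.paths[0]);
>>>         return 0;
>>>     }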
>>>
>>> Anyway, I will revise this during the course of creating an xfstests
>>> test for xfs_reno...
>>>
>>> May I add your Signed-off-by to this patch?
>>>
>>> If you want to build this, apply both patches to xfsprogs.
>>>
>>> TIA,
>>> Pete
>>
>> Have you been getting "Out of memory" warnings on your runs? I am.
>
> No, I would have mentioned them. But I guess my file systems are tiny
> compared to yours. The affected FSes have 2.8 TB and 4.1 TB, the former
> with many small files and directories, the latter with many 2 GB files.
>
>> Compiling, I get warnings about "\r" appearing in the strings. For example:
>>
>> reno/xfs_reno.c:1415: internationalized messages should not contain the
>> `\r' escape sequence
>
> Well, that's for the spin wheel; that might have interesting effects when
> localized to right-to-left languages...
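>
> For illustration, a rough sketch of how the wheel could keep the \r
> out of any translatable string (the message text here is made up), so
> gettext has nothing to complain about:
>
>     #include <stdio.h>
>     #include <libintl.h>
>     #define _(s) gettext(s)
>
>     static void spin(unsigned long count)
>     {
>         static const char wheel[] = "|/-\\";
>
>         /* The '\r' stays in the untranslated format string; only
>          * the plain message is handed to gettext. */
>         printf("%c %s %lu\r", wheel[count % 4],
>                _("inodes scanned:"), count);
>         fflush(stdout);
>     }
>
>     int main(void)
>     {
>         for (unsigned long i = 0; i < 4; i++)
>             spin(i);
>         putchar('\n');
>         return 0;
>     }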
>
>>                       ----------
>> I wonder if we should add a temp directory option. It seems to want to
>> use the parent directory of the target directory as a temporary.
>
> Without digging into this, I can only guess, but the whole point of xfs_reno
> is relocating the inodes on the FS in question *without* copying files
> around. Using a separate TEMP defeats this purpose, doesn't it?
>
> It might be in order to add a note to the man page not to try to use it in
> "cross mount operation" scenarios.
>
>> Below is the
>> result of running xfs_reno with the target directory "/mnt/xxx"
>> (changing the \r to <^M>\n for the email):
>>
>> xfs_reno: directory: 128 1 /mn<^M>
>> xfs_reno: /mnt/xfs_reno_epdaJc: Cannot set target extended attributes<^M>
>> xfs_reno: failed to rename: '/mnt/xxx/origin' to
>> '/mnt/xfs_reno_NXQLWI/origin'
>> <^M>
>> xfs_reno: unable to move directory contents: /mnt/xxx to
>> /mnt/xfs_reno_NXQLWI
>> <^M>
>> xfs_reno: Cannot stat /mnt/xfs_reno_epdaJc: Inappropriate ioctl for device
>>
>> <^M>
>> xfs_reno: unable to duplicate directory attributes: /mnt/xfs_reno_epdaJc
>> t/xxx
>>                       ------
>> /mnt is not an XFS filesystem. When mounting on the root, say /mnt, the
>> messages look like:
>>
>> xfs_reno: Cannot stat //xfs_reno_epdaJc: Inappropriate ioctl for device
>
> You lost me here. What I can say is that using e.g. "xfs_reno -vpn /work",
> hence directly on the mount point, did as advertised.
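>
> FWIW, "Inappropriate ioctl for device" is ENOTTY, which is what I'd
> expect an XFS-specific ioctl to return on a non-XFS filesystem. An
> untested guess at how an up-front check could look (whether xfs_reno
> should grow one is another question):
>
>     #include <stdio.h>
>     #include <sys/vfs.h>      /* statfs(2) */
>     #include <linux/magic.h>  /* XFS_SUPER_MAGIC, 0x58465342 "XFSB" */
>
>     static int is_xfs(const char *path)
>     {
>         struct statfs sfs;
>
>         if (statfs(path, &sfs) != 0)
>             return 0;
>         return sfs.f_type == XFS_SUPER_MAGIC;
>     }
>
>     int main(int argc, char **argv)
>     {
>         const char *path = argc > 1 ? argv[1] : "/mnt";
>
>         if (!is_xfs(path))
>             fprintf(stderr, "%s: not an XFS filesystem\n", path);
>         return 0;
>     }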
>
> Cheers,
> Pete

Sorry, the biggest problem is the US daylight saving time change and my
simple brain. Waking up this morning, it all mentally clicked. It is
working as intended. Sorry for the noise.

--Mark.


