
Re: agi unlinked bucket

To: Christian Kujau <lists@xxxxxxxxxxxxxxx>
Subject: Re: agi unlinked bucket
From: Timothy Shimmin <tes@xxxxxxx>
Date: Mon, 25 Aug 2008 13:26:17 +1000
Cc: Dave Chinner <david@xxxxxxxxxxxxx>, xfs@xxxxxxxxxxx
In-reply-to: <alpine.DEB.1.10.0808250254380.26780@xxxxxxxxxxxxxxxxxx>
References: <alpine.DEB.1.10.0808230017150.20126@xxxxxxxxxxxxxxxxxx> <alpine.DEB.1.10.0808231412230.20126@xxxxxxxxxxxxxxxxxx> <20080825003929.GN5706@disturbed> <alpine.DEB.1.10.0808250254380.26780@xxxxxxxxxxxxxxxxxx>
Sender: xfs-bounce@xxxxxxxxxxx
User-agent: Thunderbird 2.0.0.16 (Macintosh/20080707)
Christian Kujau wrote:
> On Mon, 25 Aug 2008, Dave Chinner wrote:
>> If you do a mount then unmount then rerun xfs-check, does it go
>> away?
> 
> Did that a few times already, and the fs is getting mounted during boot
> anyway, but xfs_check still complains:
> 
> --------------------------------------
> # xfs_check /dev/mapper/md3 2>&1 | tee fsck_md3.log
> agi unlinked bucket 26 is 20208090 in ag 0 (inode=20208090)
> link count mismatch for inode 128 (name ?), nlink 335, counted 336
> link count mismatch for inode 20208090 (name ?), nlink 0, counted 1
> # mount /mnt/md3
> # dmesg | tail -2
>  XFS mounting filesystem dm-3
>  Ending clean XFS mount for filesystem: dm-3
> # grep xfs /proc/mounts
> /dev/mapper/md3 /mnt/md3 xfs ro,nosuid,nodev,noexec,nobarrier,noquota 0 0
> --------------------------------------
> 
> 
> The fs is ~138 GB in size. I shall run a backup and then just let
> xfs_repair have its way. I just thought you guys might have an idea what
> these messages are about and why mounting the fs (thanks, Dave) does not
> seem to fix them.
> 
The filesystem is divided up into allocation groups (AGs).
In each AG we have an unlinked list, which is a hash table array
whose elements (often called buckets) can point to a linked list
of inodes; each inode carries a next-unlinked pointer. The list is
used to represent unlinked inodes (inodes removed from directories)
that are still referenced by processes. If we don't have a clean
unmount, then the unlinked lists may not be empty, and we have to
remove the inodes on the next mount (done at the same stage as log
replay) by traversing the lists.
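Roughly, in C (just a toy sketch: the 64-bucket count and NULLAGINO
match the on-disk format if I recall the constants right, but
fake_next_unlinked is a made-up stand-in for reading di_next_unlinked
off disk, not the real kernel code):

#include <stdio.h>

#define XFS_AGI_UNLINKED_BUCKETS 64   /* buckets per AG in the AGI */
#define NULLAGINO 0xffffffffU         /* terminates an unlinked chain */

/*
 * Toy stand-in: the real code reads the inode off disk and follows
 * its di_next_unlinked field; here every chain has just one entry.
 */
static unsigned int fake_next_unlinked(unsigned int agino)
{
    (void)agino;
    return NULLAGINO;
}

int main(void)
{
    unsigned int agi_unlinked[XFS_AGI_UNLINKED_BUCKETS];
    unsigned int b, agino;

    for (b = 0; b < XFS_AGI_UNLINKED_BUCKETS; b++)
        agi_unlinked[b] = NULLAGINO;          /* all buckets empty */

    /* Hash an unlinked inode onto its chain the way the AGI does. */
    agino = 20208090U;
    agi_unlinked[agino % XFS_AGI_UNLINKED_BUCKETS] = agino;

    /* Mount-time recovery then walks every non-empty chain. */
    for (b = 0; b < XFS_AGI_UNLINKED_BUCKETS; b++)
        for (agino = agi_unlinked[b]; agino != NULLAGINO;
             agino = fake_next_unlinked(agino))
            printf("bucket %u: would free agino %u\n", b, agino);

    return 0;
}

Run it and it prints "bucket 26: would free agino 20208090", i.e. the
same bucket your xfs_check output names.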
So in your case, it looks like bucket 26 of the array in AG#0 is
pointing to inode #20208090, which would imply that inode #20208090
was unlinked but still had references to it at the time the
filesystem was not cleanly unmounted (power loss, crash, etc.).
It looks like the root directory inode #128 has a link count of 335,
but xfs_check is finding 336 references. And inode #20208090 has a
link count of 0, yet there is still 1 directory entry for it. It's
as if the inode was deleted (its link count decremented to zero, the
parent directory's count decremented, and the unlinked list updated)
but the directory entry itself wasn't removed properly.
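(A quick sanity check on the bucket number, assuming the usual 64
buckets per AG: 20208090 = 64 * 315751 + 26, so 20208090 mod 64 = 26,
exactly the bucket reported; and since the inode is in AG 0, its
AG-relative number is the same as the full inode number.)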

Hence Dave's comments:
> Ok, so if you do a 'ls -i /' do you see an inode numbered 20208090?
> i.e. is it the unlinked bucket that is incorrect, or the root
> directory.
> You are not using barriers. Are you using write caching? The
> problems with filesystem corruption on powerloss when using volatile
> write caching have traditionally shown up in directory
> corruptions...


--Tim

