
Re: XFS umount issue

To: Dave Chinner <david@xxxxxxxxxxxxx>
Subject: Re: XFS umount issue
From: Nuno Subtil <subtil@xxxxxxxxx>
Date: Wed, 25 May 2011 01:14:41 -0700
Cc: xfs-oss <xfs@xxxxxxxxxxx>
In-reply-to: <20110524233943.GI32466@dastard>
References: <BANLkTikNMrFzxJF4a86ZM55r3D=ThPFmOw@xxxxxxxxxxxxxx> <20110524000243.GB32466@dastard> <BANLkTinJecB+CB-n0Au=yaUFLDiDUwhzwg@xxxxxxxxxxxxxx> <20110524075404.GG32466@dastard> <BANLkTikj_ZY9g3mSmKAAv=qRaSvNQN=B3A@xxxxxxxxxxxxxx> <20110524233943.GI32466@dastard>
On Tue, May 24, 2011 at 16:39, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
> On Tue, May 24, 2011 at 03:18:11AM -0700, Nuno Subtil wrote:
>> On Tue, May 24, 2011 at 00:54, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
>>
>> ...
>>
>> >> > Ok, so there's nothing here that actually says it's an unmount
>> >> > error. More likely it is a vmap problem in log recovery resulting in
>> >> > aliasing or some other stale data appearing in the buffer pages.
>> >> >
>> >> > Can you add a 'xfs_logprint -t <device>' after the umount? You
>> >> > should always see something like this telling you the log is clean:
>> >>
>> >> Well, I just ran into this again even without using the script:
>> >>
>> >> root@howl:/# umount /dev/md5
>> >> root@howl:/# xfs_logprint -t /dev/md5
>> >> xfs_logprint:
>> >>     data device: 0x905
>> >>     log device: 0x905 daddr: 488382880 length: 476936
>> >>
>> >>     log tail: 731 head: 859 state: <DIRTY>
>> >>
>> >>
>> >> LOG REC AT LSN cycle 1 block 731 (0x1, 0x2db)
>> >>
>> >> LOG REC AT LSN cycle 1 block 795 (0x1, 0x31b)
>> >
>> > Was there any other output? If there were valid transactions between
>> > the head and tail of the log, xfs_logprint should have decoded them.
>>
>> There was no more output here.
>
> That doesn't seem quite right. Does it always look like this, even
> if you do a sync before unmount?

Not always, but almost. Sometimes there are a number of transactions in
the log as well, but this is by far the most common output I've seen.
I'll try to capture the output for that case as well.

>> >> I see nothing in dmesg at umount time. Attempting to mount the device
>> >> at this point, I got:
>> >>
>> >> [  764.516319] XFS (md5): Mounting Filesystem
>> >> [  764.601082] XFS (md5): Starting recovery (logdev: internal)
>> >> [  764.626294] XFS (md5): xlog_recover_process_data: bad clientid 0x0
>> >
>> > Yup, that's got bad information in a transaction header.
>> >
>> >> [  764.632559] XFS (md5): log mount/recovery failed: error 5
>> >> [  764.638151] XFS (md5): log mount failed
>> >>
>> >> Based on your description, this would be an unmount problem rather
>> >> than a vmap problem?
>> >
>> > Not clear yet. I forgot to mention that you need to do
>> >
>> > # echo 3 > /proc/sys/vm/drop_caches
>> >
>> > before you run xfs_logprint, otherwise it will see stale cached
>> > pages and give erroneous results.
>>
>> I added that before each xfs_logprint and ran the script again. Still
>> the same results:
>>
>> ...
>> + mount /store
>> + cd /store
>> + tar xf test.tar
>> + sync
>> + umount /store
>> + echo 3
>> + xfs_logprint -t /dev/sda1
>> xfs_logprint:
>>     data device: 0x801
>>     log device: 0x801 daddr: 488384032 length: 476936
>>
>>     log tail: 2048 head: 2176 state: <DIRTY>
>>
>>
>> LOG REC AT LSN cycle 1 block 2048 (0x1, 0x800)
>>
>> LOG REC AT LSN cycle 1 block 2112 (0x1, 0x840)
>> + mount /store
>> mount: /dev/sda1: can't read superblock
>>
>> Same messages in dmesg at this point.
>>
>> > You might want to find out if your platform needs to (and does)
>> > implement these functions:
>> >
>> > flush_kernel_dcache_page()
>> > flush_kernel_vmap_range()
>> > invalidate_kernel_vmap_range()
>> >
>> > as these are what XFS relies on platforms to implement correctly to
>> > avoid cache aliasing issues on CPUs with virtually indexed caches.
>>
>> Is this what /proc/sys/vm/drop_caches relies on as well?
>
> No, drop_caches frees the page cache and slab caches so future reads
> need to be looked up from disk.
>
>> flush_kernel_dcache_page() is empty; the others are not, but are
>> conditional on the type of cache present. I wonder if that is
>> somehow not being detected properly. Wouldn't that cause other
>> areas of the system to misbehave as well?
>
> vmap is not widely used throughout the kernel, and as a result
> people porting linux to a new arch/CPU type often don't realise
> there's anything to implement there because their system seems to be
> working. That is, of course, until someone tries to use XFS...
>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@xxxxxxxxxxxxx
>
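For anyone hitting the same thing: below is a rough sketch of the shape
those three hooks take on an architecture with an aliasing, virtually
indexed D-cache (see Documentation/cachetlb.txt for the contract). The
low-level helper names here (cache_is_vipt_aliasing(),
__flush_dcache_area(), __invalidate_dcache_area()) are illustrative
stand-ins, not any particular port's real code; each port wires these
to its own cache maintenance primitives:

/*
 * Illustrative sketch only: the helper names below are made up.
 * A real port calls its own cache maintenance primitives here.
 */

/* Write back dirty lines for a page the kernel wrote through one
 * mapping, so other mappings (e.g. a vmap alias) see the new data. */
static inline void flush_kernel_dcache_page(struct page *page)
{
        if (cache_is_vipt_aliasing())
                __flush_dcache_area(page_address(page), PAGE_SIZE);
}

/* Write back the vmap alias before I/O reads from the range, so the
 * device sees the CPU's latest stores rather than stale memory. */
static inline void flush_kernel_vmap_range(void *vaddr, int size)
{
        if (cache_is_vipt_aliasing())
                __flush_dcache_area(vaddr, size);
}

/* Invalidate the vmap alias before the CPU reads a range a device
 * just wrote, so reads aren't satisfied from stale cache lines. */
static inline void invalidate_kernel_vmap_range(void *vaddr, int size)
{
        if (cache_is_vipt_aliasing())
                __invalidate_dcache_area(vaddr, size);
}

On a physically indexed (or non-aliasing) cache these can legitimately
be no-ops, which is exactly why a port that leaves them empty can look
perfectly healthy until something like XFS starts doing I/O through
vmapped buffers.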
