
Re: XFS umount issue

To: Nuno Subtil <subtil@xxxxxxxxx>
Subject: Re: XFS umount issue
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Tue, 24 May 2011 10:02:43 +1000
Cc: xfs-oss <xfs@xxxxxxxxxxx>
In-reply-to: <BANLkTikNMrFzxJF4a86ZM55r3D=ThPFmOw@xxxxxxxxxxxxxx>
References: <BANLkTikNMrFzxJF4a86ZM55r3D=ThPFmOw@xxxxxxxxxxxxxx>
User-agent: Mutt/1.5.20 (2009-06-14)
On Mon, May 23, 2011 at 02:39:39PM -0700, Nuno Subtil wrote:
> I have an MD RAID-1 array with two SATA drives, formatted as XFS.

Hi Nuno. It is probably best to say this at the start, too:

> This is on an ARM system running kernel 2.6.39.

So we know what platform this is occurring on.

> Occasionally, doing an umount followed by a mount causes the mount to
> fail with errors that strongly suggest some sort of filesystem
> corruption (usually 'bad clientid' with a seemingly arbitrary ID, but
> occasionally invalid log errors as well).

So reading back the journal is getting bad data?

> The one thing in common among all these failures is that they require
> xfs_repair -L to recover from. This has already caused a few
> lost+found entries (and data loss on recently written files). I
> originally noticed this bug because of mount failures at boot, but
> I've managed to repro it reliably with this script:

Yup, that's normal with recovery errors.

> while true; do
>       mount /store
>       (cd /store && tar xf test.tar)
>       umount /store
>       mount /store
>       rm -rf /store/test-data
>       umount /store
> done

Ok, so there's nothing here that actually says it's an unmount
error. More likely it is a vmap problem in log recovery resulting in
aliasing or some other stale data appearing in the buffer pages.

Can you add a 'xfs_logprint -t <device>' after the umount? You
should always see something like this telling you the log is clean:

$ xfs_logprint -t /dev/vdb
    data device: 0xfd10
    log device: 0xfd10 daddr: 11534368 length: 20480

    log tail: 51 head: 51 state: <CLEAN>

If the log is not clean on an unmount, then you may have an unmount
problem. If it is clean when the recovery error occurs, then it's
almost certainly a problem with your platform not implementing vmap
cache flushing correctly, not an XFS problem.
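For reference, a sketch of the reproducer loop with that check folded
in after each umount. This is only an illustration, not a tested
script: the device path /dev/md0, the mount point /store, and the
log_is_clean helper are all assumptions, and the grep just matches the
"state: <CLEAN>" line shown in the xfs_logprint output above.

```shell
#!/bin/sh
# Hypothetical sketch: run the reproducer and verify the log is clean
# after each umount. DEV and MNT are assumed values for this system.
DEV=/dev/md0
MNT=/store

log_is_clean() {
    # xfs_logprint -t prints a line like
    #     log tail: 51 head: 51 state: <CLEAN>
    # when the log is clean; match on the state field.
    xfs_logprint -t "$1" 2>/dev/null | grep -q 'state: <CLEAN>'
}

while true; do
    mount "$MNT"
    (cd "$MNT" && tar xf test.tar)
    umount "$MNT"
    log_is_clean "$DEV" || { echo "dirty log after umount"; break; }
    mount "$MNT"
    rm -rf "$MNT/test-data"
    umount "$MNT"
    log_is_clean "$DEV" || { echo "dirty log after umount"; break; }
done
```

If the loop ever stops with "dirty log after umount", that points at an
unmount problem rather than a recovery/vmap one.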

> I'm not entirely sure that this is XFS-specific, but the same script
> does run successfully overnight on the same MD array with ext3 on it.

ext3 doesn't use vmapped buffers at all, so it won't show such a
problem.
> Has something like this been seen before?

Every so often on ARM, MIPS, etc platforms that have virtually
indexed caches.


Dave Chinner
