
To: Eric Sandeen <sandeen@xxxxxxxxxxx>
Subject: Re: [Bisected] Corruption of root fs during git bisect of drm system hang
From: Markus Trippelsdorf <markus@xxxxxxxxxxxxxxx>
Date: Fri, 19 Jul 2013 18:32:20 +0200
Cc: Stefan Ring <stefanrin@xxxxxxxxx>, Ben Myers <bpm@xxxxxxx>, Mark Tinguely <tinguely@xxxxxxx>, Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>, Linux fs XFS <xfs@xxxxxxxxxxx>
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <51E9630A.3070201@xxxxxxxxxxx>
References: <20130713090523.GA362@x4> <20130712070721.GA359@x4> <20130715022841.GH5228@dastard> <20130715064734.GA361@x4> <20130719122235.GA360@x4> <CAAxjCExBi-4Qgf6-=MBzdkzBmMtu=GTURu46DoD2CzpnF2dinw@xxxxxxxxxxxxxx> <20130719125149.GB360@x4> <51E9630A.3070201@xxxxxxxxxxx>
On 2013.07.19 at 11:02 -0500, Eric Sandeen wrote:
> On 7/19/13 7:51 AM, Markus Trippelsdorf wrote:
> > On 2013.07.19 at 14:41 +0200, Stefan Ring wrote:
> >>> I've bisected this issue to the following commit:
> >>>
> >>>  commit cca9f93a52d2ead50b5da59ca83d5f469ee4be5f
> >>>  Author: Dave Chinner <dchinner@xxxxxxxxxx>
> >>>  Date:   Thu Jun 27 16:04:49 2013 +1000
> >>>
> >>>      xfs: don't do IO when creating an new inode
> >>>
> >>> Reverting this commit on top of the Linus tree "solves" all problems for
> >>> me. IOW I no longer lose my KDE and LibreOffice config files during a
> >>> crash. Log recovery now works fine and xfs_repair shows no issues.
> >>>
> >>> So users of 3.11.0-rc1 beware. Only run this version if you have
> >>> up-to-date backups handy.
> 
> Are you certain about that bisection point?  All that does is
> say:  When we allocate a new inode, assign it a random generation
> number, rather than reading it from disk & incrementing the
> older generation number, AFAICS.  So it simply avoids a read IO.

Yes, I'm sure.
As I wrote above, I also double-checked by reverting the commit on top of
the current Linus tree.
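For anyone who wants to verify this independently, the revert check amounts
to something like the following (the build and install steps are of course
machine-specific; adjust to your own kernel workflow):

```shell
# In a checkout of Linus' tree at the affected revision (3.11-rc1):
git revert cca9f93a52d2ead50b5da59ca83d5f469ee4be5f

# Rebuild and install the kernel as usual, e.g.:
make -j"$(nproc)"
sudo make modules_install install
```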

> I wonder if simply changing IO patterns on the SSD changes how
> it's doing caching & destaging <handwave>.

No. The corruption also happens on my conventional (spinning) drives.

> >> What I miss in this thread is a distinction between filesystem
> >> corruption on the one hand and a few zeroed files on the other. The
> >> latter may be a nuisance, but it is expected behavior, while the
> >> former should never happen, period, if I'm not mistaken.
> > 
> > Well, it is natural that fs developers at first try to blame userspace.
> 
> I disagree with that, we just need to be clear about your scenarios,
> and what integrity guarantees should apply.
> 
> > Unfortunately it turned out that in this case there is filesystem
> > corruption. (Fortunately this normally happens only very rarely on rc1
> > kernels).
> 
> Corruption is when you get back data that you did not write,
> or metadata which is inconsistent or unreadable even after a proper
> log replay.
> 
> Corruption is _not_ unsynced, buffered data that was lost on a
> crash or poweroff.
> 
> But I might not have followed the thread properly, and I might
> misunderstand your situation.
> 
> When you experience this lost file [data] scenario, was it after an
> orderly reboot, or after a crash and/or system reset?

To reproduce this issue, simply boot into your desktop, hit sysrq-c, and
reboot. After log replay completes without any error messages, the
filesystem is nevertheless in an inconsistent state: many small config
files are lost, and some files cannot be deleted. You need to run
xfs_repair manually to bring the filesystem back to normal.
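Concretely, the crash can be forced from a shell like this (the device name
below is only an example; obviously don't try this on a machine with data
you care about):

```shell
# Force an immediate kernel crash, simulating the system hang.
# Requires sysrq to be enabled (see the kernel.sysrq sysctl).
echo c | sudo tee /proc/sysrq-trigger

# After rebooting, the XFS log is replayed on mount. To inspect the
# damage, unmount the filesystem (or boot from rescue media) and run a
# read-only check before repairing:
sudo xfs_repair -n /dev/sda2   # -n: no-modify mode, only report problems
sudo xfs_repair /dev/sda2      # actually repair the filesystem
```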

When cca9f93a52d is reverted, you don't lose your config files and the
filesystem is OK after log replay. xfs_repair reports no issues at all.

-- 
Markus
