
Re: Data corruption, md5 changes on every mount

To: Dmitry Panov <dmitry.panov@xxxxxxxxxxx>
Subject: Re: Data corruption, md5 changes on every mount
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Mon, 12 Dec 2011 15:15:02 +1100
Cc: xfs@xxxxxxxxxxx
In-reply-to: <4EE54734.8050603@xxxxxxxxxxx>
References: <4EE4AE61.6000306@xxxxxxxxxxx> <20111211235334.GJ14273@dastard> <4EE54734.8050603@xxxxxxxxxxx>
User-agent: Mutt/1.5.21 (2010-09-15)
On Mon, Dec 12, 2011 at 12:13:40AM +0000, Dmitry Panov wrote:
> Hi Dave,
> 
> On 11/12/2011 23:53, Dave Chinner wrote:
> >On Sun, Dec 11, 2011 at 01:21:37PM +0000, Dmitry Panov wrote:
> >>Hi guys,
> >>
> >>I have a 2TiB XFS which is about 60% full. Recently I've noticed
> >>that the daily inc. backup reports file contents change for files
> >>that are not supposed to change.
> >What kernel/platform? What version of xfsprogs? What kind of
> >storage?
> It's linux kernel 3.0.0 at the moment, however it used to run
> different versions and I can't tell for sure when the problem
> started. xfsprogs version is 3.1.2.
> 
> The storage is a 2 node cluster with hardware RAID1+0 and drbd.

Hmmmm. HA, remote replication, network paths in the storage stack.
Not a particularly common setup, so I'd be looking at validating
your drbd setup before looking at XFS.....

> >>I've created an LVM snapshot and ran xfs_check/xfs_repair. xfs_check
> >>did report a few problems (unknown node type). After that I ran a
> >>simple test: mount, calculate md5 of the problematic files, report
> >>if it changed, umount, sleep 10 sec. That script reported that md5
> >>sum of at least one file was changing on every cycle.
> >That sounds like you've got a dodgy drive.
> 
> That would be my guess too, however the problem occurs on both
> nodes (i.e. it doesn't go away when the other node becomes active)
> and the same files are affected, which makes a hard drive, RAID
> controller or RAM failure very unlikely.

Which simply means the corruption has been replicated.

Given that drbd is in the picture and that has a history of causing
filesystem and/or data corruptions, I'd suggest you validate that
drbd is not causing problems first. If you can reproduce the data
corruption on a storage stack that doesn't have drbd in it, then
it's probably a filesystem problem.  However, you need to rule out
the lower storage layers as the cause first.  i.e. once you've
validated that your block device is good, then we can start to look
at whether the filesystem is the cause.
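A quick way to check the layers below the filesystem is to read the
same region of the raw device twice with the page cache bypassed and
compare checksums. This is only a sketch; the device path and read
size are placeholders, substitute your actual drbd/RAID device and
quiesce the filesystem first:

```shell
# Read the first 256MiB of the raw device twice with O_DIRECT so the
# page cache can't mask an unstable device, then compare checksums.
# DEV and the read size are placeholders - use your own device.
DEV=${DEV:-/dev/drbd0}
if [ -b "$DEV" ]; then
    a=$(dd if="$DEV" bs=1M count=256 iflag=direct 2>/dev/null | md5sum)
    b=$(dd if="$DEV" bs=1M count=256 iflag=direct 2>/dev/null | md5sum)
    if [ "$a" = "$b" ]; then
        echo "raw reads stable on $DEV"
    else
        echo "raw reads DIFFER on $DEV - problem is below the filesystem"
    fi
fi
```

If the two checksums differ on an idle device, the filesystem is off
the hook; repeat on the device underneath drbd to isolate which
layer is unstable.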

In general, you need a reliable reproducer to do this, so if you
can't reproduce the problem any more, there's little that can be
done about it...
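For a reproducer, the mount/md5/umount loop you described can be
scripted directly. A minimal sketch - the device, mount point and
file list are placeholders for your own:

```shell
#!/bin/sh
# Mount read-only, checksum the suspect files, unmount, repeat, and
# flag any cycle where the checksums change between fresh mounts.
# DEV, MNT and FILES are placeholders - fill in your own paths.
DEV=${DEV:-/dev/vgdata/xfs_snap}
MNT=${MNT:-/mnt/check}
FILES="path/to/suspect/file"      # relative to $MNT

if [ -b "$DEV" ]; then
    prev=""
    for i in 1 2 3 4 5; do
        mount -o ro "$DEV" "$MNT" || break
        cur=$(cd "$MNT" && md5sum $FILES)
        umount "$MNT"
        if [ -n "$prev" ] && [ "$cur" != "$prev" ]; then
            echo "cycle $i: checksums changed"
        fi
        prev=$cur
        sleep 10
    done
fi
```

Run that against a snapshot on a storage stack without drbd in it,
and you'll know which side of the block device the problem is on.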

> Is there any way to perform a more thorough check, than xfs_check does?

xfs_repair -n is more thorough than xfs_check. But remember, both
xfs_check and xfs_repair only check the filesystem structure,
not the contents of your files. The contents of your files are yours
to check....
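Concretely, a structure check plus a contents check might look like
this - the snapshot device and checksum list are placeholders, and
the checksum list has to come from a known-good source such as your
backups:

```shell
# Structural check only: -n reports problems without modifying the
# filesystem. Run it against the unmounted LVM snapshot (placeholder
# path below).
SNAP=${SNAP:-/dev/vgdata/xfs_snap}
if [ -b "$SNAP" ]; then
    xfs_repair -n "$SNAP"
fi

# File contents must be verified separately, e.g. against a checksum
# list taken from a known-good backup, with the snapshot mounted:
SUMS=${SUMS:-/root/known-good.md5}
if [ -f "$SUMS" ]; then
    md5sum -c "$SUMS"
fi
```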

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
