To: Emmanuel Florac <eflorac@xxxxxxxxxxxxxx>
Subject: Re: easily reproducible filesystem crash on rebuilding array [XFS bug in my book]
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Wed, 17 Dec 2014 07:04:10 +1100
Cc: xfs@xxxxxxxxxxx
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <20141216120821.587cf104@xxxxxxxxxxxxxxxxxxxx>
References: <20141211123936.1f3d713d@xxxxxxxxxxxxxxxxxxxx> <20141215130715.4dfaaa8e@xxxxxxxxxxxxxxxxxxxx> <20141215132500.13210fdb@xxxxxxxxxxxxxxxxxxxx> <20141216120821.587cf104@xxxxxxxxxxxxxxxxxxxx>
User-agent: Mutt/1.5.21 (2010-09-15)
On Tue, Dec 16, 2014 at 12:08:21PM +0100, Emmanuel Florac wrote:
> On Mon, 15 Dec 2014 13:25:00 +0100,
> Emmanuel Florac <eflorac@xxxxxxxxxxxxxx> wrote:
> 
> > Reading the source I see that the error occurred in xfs_buf_read_map, I
> > suppose it's when xfsbufd tries to scan dirty metadata? This is a read
> > error, so it could very well be a simple IO starvation at the
> > controller level (as the controller probably gives priority to
> > whatever writes are pending over reads).
> > 
> > Maybe setting xfsbufd_centisecs to the max could help here? Trying
> > right away... Any advice welcome.
> > 
> 
> Alas, same thing:
> 
> dmesg output:
> 
> 
> ffff8800df1f5020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> ffff8800df1f5030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> XFS (dm-0): Metadata corruption detected at xfs_inode_buf_verify+0x6c/0xb0, block 0xeffffff40
> XFS (dm-0): Unmount and run xfs_repair
> XFS (dm-0): First 64 bytes of corrupted metadata buffer:
> ffff8800df1f5000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> ffff8800df1f5010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> ffff8800df1f5020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> ffff8800df1f5030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> XFS (dm-0): Metadata corruption detected at xfs_inode_buf_verify+0x6c/0xb0, block 0xeffffff40
> XFS (dm-0): Unmount and run xfs_repair

So the underlying storage stack is returning zeros without any IO
errors here. It's probably a lookup operation, so it simply fails
and returns the error to userspace. Every one of these messages is a
separate read IO, but they are all returning zeros.
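
To make that concrete, here's a minimal sketch in plain C of the kind of
check the inode buffer read verifier applies when the buffer comes off
disk. It's illustrative only, not the kernel's xfs_inode_buf_verify(),
and the fixed 256-byte inode size is an assumption for the example:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define XFS_DINODE_MAGIC	0x494e	/* "IN" */

/*
 * Walk each on-disk inode in a just-read buffer and check its magic
 * number. The real verifier takes the inode size from the superblock
 * and checks more than just the magic.
 */
static bool inode_buf_magic_ok(const uint8_t *buf, size_t len)
{
	for (size_t off = 0; off + 256 <= len; off += 256) {
		/* di_magic is stored big-endian on disk */
		uint16_t magic = (uint16_t)(buf[off] << 8) | buf[off + 1];

		if (magic != XFS_DINODE_MAGIC)
			return false;	/* an all-zero buffer fails here */
	}
	return true;
}

A buffer of zeros fails on the very first inode, which is why the log
says "Metadata corruption detected" instead of reporting an IO error.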

....

> XFS (dm-0): First 64 bytes of corrupted metadata buffer:
> ffff8800df1f5000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> ffff8800df1f5010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> ffff8800df1f5020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> ffff8800df1f5030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
> XFS (dm-0): metadata I/O error: block 0xeffffff40 ("xfs_trans_read_buf_map") error 117 numblks 16
> XFS (dm-0): xfs_do_force_shutdown(0x1) called from line 383 of file fs/xfs/xfs_trans_buf.c.  Return address = 0xffffffff8125cc90
> XFS (dm-0): I/O Error Detected. Shutting down filesystem
> XFS (dm-0): Please umount the filesystem and rectify the problem(s)
> XFS (dm-0): xfs_imap_to_bp: xfs_trans_read_buf() returned error 117.
> XFS (dm-0): xfs_log_force: error 5 returned.
> XFS (dm-0): xfs_log_force: error 5 returned.

And here the same read error has occurred in a dirty transaction,
and so the filesystem shut down.
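
For anyone decoding the numbers: error 117 is EUCLEAN, which XFS
defines as EFSCORRUPTED, and error 5 is plain EIO once the filesystem
has shut down. A trivial userspace check, nothing XFS-specific:

#include <stdio.h>
#include <string.h>

int main(void)
{
	printf("error 117: %s\n", strerror(117));	/* "Structure needs cleaning" */
	printf("error 5:   %s\n", strerror(5));		/* "Input/output error" */
	return 0;
}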

> There is no IO error at the RAID controller level, at all. The buffer
> hasn't been overwritten with zeros, I'm pretty sure it actually timed
> out and just read nothing. This is not a case for an IO error IMO; a
> retry would almost certainly succeed. After all, the problem occurred
> after more than 8 hours of continuous heavy read/write activity.

What you see above is a persistent corruption that has been
reported several times as XFS has errored out and then re-read
the data from disk multiple times. A retry would most certainly
return zeros again.
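
If you want to see for yourself what the device hands back, read the
failing block straight off the block device and look at the bytes. A
minimal sketch, assuming /dev/dm-0 is the device from your log; these
block numbers are in 512-byte units, so numblks 16 matches the 8KiB
read below:

#define _FILE_OFFSET_BITS 64
#include <fcntl.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
	const char *dev = "/dev/dm-0";	/* assumption: device from the log */
	off_t daddr = 0xeffffff40LL;	/* block number from the error message */
	unsigned char buf[16 * 512];	/* numblks 16 => 8KiB */

	int fd = open(dev, O_RDONLY);
	if (fd < 0) { perror("open"); return 1; }

	ssize_t n = pread(fd, buf, sizeof(buf), daddr * 512);
	if (n < 0) { perror("pread"); return 1; }

	int zeros = 1;
	for (ssize_t i = 0; i < n; i++)
		if (buf[i]) { zeros = 0; break; }

	printf("read %zd bytes at daddr 0x%llx: %s\n", n,
	       (unsigned long long)daddr, zeros ? "all zeros" : "has data");
	close(fd);
	return 0;
}

If that keeps coming back as zeros, the problem is below the
filesystem.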

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx
