<html><head></head><body>Hi Dave, <br>
<br>
My apologies; I completely miscommunicated. The drive that died was unrelated; it failed two months ago. I mentioned it only as background, but I realize now that was stupid. There were no drive or RAID problems at all at the time the XFS mount died today: the drives are all fine and the RAID controller log shows nothing significant. <br>
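<br>
In case the details help, the checks amounted to roughly the following (the MegaCli commands and device names below are illustrative for our LSI 9240-4i, not an exact transcript of what I ran): <br>
<pre>
# Logical drive status on the controller (state should read "Optimal")
MegaCli64 -LDInfo -Lall -aALL

# Firmware state and error counters for every physical drive
MegaCli64 -PDList -aALL | egrep 'Slot|Firmware state|Media Error|Other Error|Predictive'

# Dump the controller event log to check around the time of the failure
MegaCli64 -AdpEventLog -GetEvents -f raid-events.log -aALL

# SMART health for an individual disk behind the controller
# (N is the Device Id reported by PDList; /dev/sda is the exported RAID volume)
smartctl -H -d megaraid,N /dev/sda
</pre>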
<br>
Thanks, <br>
<br>
Mike <br><br><div style='font-size:10.0pt;font-family:"Tahoma","sans-serif";padding:3.0pt 0in 0in 0in'>
<hr style='border:none;border-top:solid #E1E1E1 1.0pt'>
<b>From:</b> Dave Chinner &lt;david@fromorbit.com&gt;<br>
<b>Sent:</b> Wed Dec 04 19:40:34 PST 2013<br>
<b>To:</b> Mike Dacre &lt;mike.dacre@gmail.com&gt;<br>
<b>Cc:</b> xfs@oss.sgi.com<br>
<b>Subject:</b> Re: Sudden File System Corruption<br>
</div>
<br>
<pre class="k9mail">On Wed, Dec 04, 2013 at 06:55:05PM -0800, Mike Dacre wrote:<br /><blockquote class="gmail_quote" style="margin: 0pt 0pt 1ex 0.8ex; border-left: 1px solid #729fcf; padding-left: 1ex;"> Hi Folks,<br /> <br /> Apologies if this is the wrong place to post or if this has been answered<br /> already.<br /> <br /> I have a 16 2TB drive RAID6 array powered by an LSI 9240-4i. It has an XFS<br /> filesystem and has been online for over a year. It is accessed by 23<br /> different machines connected via Infiniband over NFS v3. I haven't had any<br /> major problems yet, one drive failed but it was easily replaced.<br /> <br /> However, today the drive suddenly stopped responding and started returning<br /> IO errors when any requests were made. This happened while it was being<br /> accessed by 5 different users, one was doing a very large rm operation (rm<br /> *sh on thousands on files in a directory). Also, about 30 minutes before<br /> we had connected the
globus
connect endpoint to allow easy file transfers<br /> to SDSC.<br /></blockquote><br />So, you had a drive die and at roughly the same time XFS started<br />reporting corruption problems and shut down? Chances are that the<br />drive returned garbage to XFS before died completely and that's what<br />XFS detected and shut down on. If you are unlucky in this situation,<br />the corruption can get propagated into the log by changes that are<br />adjacent to the corrupted region, and then you have problems with log<br />recovery failing because the corruption gets replayed....<br /><br /><blockquote class="gmail_quote" style="margin: 0pt 0pt 1ex 0.8ex; border-left: 1px solid #729fcf; padding-left: 1ex;"> I have attached the complete log from the time it died until now.<br /> <br /> In the end, I successfully repaired the filesystem with `xfs_repair -L<br /> /dev/sda1`. However, I am nervous that some files may have been corrupted.<br /> <br /> Do any of you have any idea what cou
ld have
caused this problem?<br /></blockquote><br />When corruption appears at roughly the same time a drive dies, it's<br />almost always caused by the drive that failed. RAID doesn't repvent<br />disks from returning crap to the OS because nobody configures the<br />arrays to do read-verify cycles that would catch such a condition.<br /><br />Cheers,<br /><br />Dave.</pre></body></html>