
Re: "This is a bug."

To: Tapani Tarvainen <tapani@xxxxxxxxxxxxxxxxxx>
Subject: Re: "This is a bug."
From: Brian Foster <bfoster@xxxxxxxxxx>
Date: Thu, 10 Sep 2015 13:55:58 -0400
Cc: xfs@xxxxxxxxxxx
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <20150910173138.GB18940@xxxxxxxxxxxxxx>
References: <20150910091834.GC24937@xxxxxxxxxxxxxxxx> <20150910134828.0bdfcc4c@xxxxxxxxxxxxxxxxxxxx> <20150910115548.GD26847@xxxxxxxxxxxxxxxx> <20150910123030.GG26847@xxxxxxxxxxxxxxxx> <20150910123603.GA27863@xxxxxxxxxxxxxxx> <20150910125441.GA28374@xxxxxxxxxxxxxxxx> <20150910130106.GB27863@xxxxxxxxxxxxxxx> <20150910130530.GB28374@xxxxxxxxxxxxxxxx> <20150910145154.GC27863@xxxxxxxxxxxxxxx> <20150910173138.GB18940@xxxxxxxxxxxxxx>
User-agent: Mutt/1.5.23 (2014-03-12)
On Thu, Sep 10, 2015 at 08:31:38PM +0300, Tapani Tarvainen wrote:
> On Thu, Sep 10, 2015 at 10:51:54AM -0400, Brian Foster (bfoster@xxxxxxxxxx) 
> wrote:
> 
> > First off, I see ~60MB of corruption output before I even get to the
> > reported repair failure, so this appears to be extremely severe
> > corruption and I wouldn't be surprised if it's ultimately beyond repair.
> 
> I assumed as much already.
> 
> > I suspect what's more interesting at this point is what happened to
> > cause this level of corruption. What kind of event led to this? Was it
> > a pure filesystem crash or some kind of hardware/raid failure?
> 
> Hardware failure. Details are still a bit unclear, but apparently the
> raid controller went haywire, offlining the array in the middle of
> heavy filesystem use.
> 
> > Also, do you happen to know the geometry (xfs_info) of the original fs?
> 
> No (and xfs_info doesn't work on the copy made after the crash, as it
> can't be mounted).
> 
> > Repair was showing agnos up in the 20k range, and now that I've
> > mounted the repaired image, xfs_info shows the following:
> [...]
> > So that's a 6TB fs with over 24000 allocation groups of size 256MB, as
> > opposed to the mkfs default of 6 allocation groups of 1TB each. Is that
> > intentional?
> 
> Not to my knowledge. Unless I'm mistaken, the filesystem was created
> while the machine was running Debian Squeeze, using whatever the
> defaults were back then.
> 

Strange... was the filesystem created small and then grown to a much
larger size via xfs_growfs? I just formatted a 1GB fs that starts with
4 allocation groups and ends up with 24576 AGs (the same count as
above) when grown to 6TB.
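
For reference, here's roughly how that can be reproduced with a sparse
loopback image (the paths, sizes and loop device handling below are
only illustrative, not the exact commands I ran):

  # rough sketch using a sparse loopback image
  truncate -s 1G /tmp/test.img        # start with a small 1GB image
  mkfs.xfs -f /tmp/test.img           # defaults here give 4 AGs of ~256MB
  mount -o loop /tmp/test.img /mnt/test
  truncate -s 6T /tmp/test.img        # extend the backing file to 6TB
  losetup -c $(losetup -j /tmp/test.img | cut -d: -f1)  # refresh loop size
  xfs_growfs /mnt/test                # grows by adding more 256MB AGs
  xfs_info /mnt/test                  # agcount should now show 24576

The key point is that xfs_growfs never changes the existing AG size; it
only adds more AGs of the same size, so a filesystem formatted small and
grown by orders of magnitude ends up with a very large agcount.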

Brian

> -- 
> Tapani Tarvainen
