
Re: xfs_repair segfault

To: Dave Chinner <david@xxxxxxxxxxxxx>
Subject: Re: xfs_repair segfault
From: Viet Nguyen <vietnguyen@xxxxxxxxx>
Date: Wed, 9 Oct 2013 11:59:19 -0700
Cc: xfs@xxxxxxxxxxx
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <20131008202342.GA4446@dastard>
References: <CAGa4098ZKd2KQfWMgNXYgLr9LJF8r-MpFgQAn3G-W+ovDGHTAw@xxxxxxxxxxxxxx> <20131001201909.GR12541@dastard> <CAGa409_tDjbsdnf+wDiM7666FeQSjmMfOVdqG-SxOD_WUZMiZQ@xxxxxxxxxxxxxx> <20131002104253.GT12541@dastard> <CAGa409_wO74zGP1d85RGZ7WbfBPr7s_tKaW3u9k8=9Ps-D5FjQ@xxxxxxxxxxxxxx> <20131004214353.GK4446@dastard> <CAGa4099NNUJV4_JbU0izLf0k3bcj3afHPTXt=HO2263_TESbNA@xxxxxxxxxxxxxx> <20131008202342.GA4446@dastard>

On Tue, Oct 8, 2013 at 1:23 PM, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
On Mon, Oct 07, 2013 at 01:09:09PM -0700, Viet Nguyen wrote:
> Thanks. That seemed to fix that bug.
> Now I'm getting a lot of this:
> xfs_da_do_buf(2): XFS_CORRUPTION_ERROR

Right, that's blocks that are being detected as corrupt when they
are read. You can ignore that for now.

> fatal error -- can't read block 8388608 for directory inode 8628218

That's a corrupted block list of some kind - it should junk the
directory and continue rather than aborting.

> Then xfs_repair exits.

I'm not sure why that happens. Is it exiting cleanly or crashing?
Can you take a metadump of the filesystem and provide it for someone
to debug the problems it causes repair?

It seems to be exiting cleanly with return code 1. I created a metadump, but it's 9.6GB. I suppose I can put it up on a secure FTP server or something like that, but it does seem a bit large to shuffle around.
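(For reference, a metadump can be streamed straight through a compressor rather than stored raw - metadata-only dumps usually compress heavily. A hedged sketch of the usual pattern; device and file names are placeholders:

# xfs_metadump -g /dev/sdX - | xz > fs.metadump.xz
# ...and to restore on the receiving end:
# xz -dc fs.metadump.xz | xfs_mdrestore - fs.img

The "-" arguments tell xfs_metadump to write to stdout and xfs_mdrestore to read from stdin.)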

> What I've been doing is what I saw in the FAQ where I would use xfs_db and
> write core.mode 0 for these inodes. But there are just so many of them. And
> is that even the right thing to do?

That marks the inode as "free" which effectively junks it, and then
xfs_repair will free all its extents next time it is run. Basically
you are removing the files from the filesystem and making them
unrecoverable.
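(The FAQ procedure being described can be scripted along these lines - a sketch only; the device name and inode numbers below are placeholders, and "write core.mode 0" in xfs_db expert mode is destructive:

# for ino in 8628218 ...; do
#     xfs_db -x -c "inode $ino" -c "write core.mode 0" /dev/sdX
# done
# xfs_repair /dev/sdX

The -x flag enables expert (write) mode, "inode N" selects the inode, and zeroing core.mode marks it free so the following xfs_repair run reclaims its extents.)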

In the case of directories, it blows away just the directory itself, but xfs_repair later scans for orphaned files, no? Or am I mistaken about how that works?