On Wed, Oct 09, 2013 at 11:59:19AM -0700, Viet Nguyen wrote:
> On Tue, Oct 8, 2013 at 1:23 PM, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
> > On Mon, Oct 07, 2013 at 01:09:09PM -0700, Viet Nguyen wrote:
> > > Thanks. That seemed to fix that bug.
> > >
> > > Now I'm getting a lot of this:
> > > xfs_da_do_buf(2): XFS_CORRUPTION_ERROR
> > Right, that's blocks that are being detected as corrupt when they
> > are read. You can ignore that for now.
> > > fatal error -- can't read block 8388608 for directory inode 8628218
> > That's a corrupted block list of some kind - it should junk the
> > inode.
> > > Then xfs_repair exits.
> > I'm not sure why that happens. Is it exiting cleanly or crashing?
> > Can you take a metadump of the filesystem and provide it for someone
> > to debug the problems it causes repair?
> It seems to be exiting cleanly with return code 1. I created a metadump,
> but it's 9.6GB. I suppose I can put it up on a secure FTP or something like
> that, but it does seem a bit large to shuffle around.
How big is it when you compress it? It should get a lot smaller...
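For reference, a metadump carries only metadata and tends to compress very well. A minimal sketch of capturing and compressing one; the device path and output filename here are placeholders, not from the thread:

```shell
# Capture filesystem metadata (run against an unmounted device).
# -g prints progress; -o would disable obfuscation of names if needed.
# /dev/sdX1 and fs.metadump are placeholder names.
xfs_metadump -g /dev/sdX1 fs.metadump

# Metadumps are highly repetitive, so compression usually shrinks
# them dramatically before transfer.
xz -9 fs.metadump
```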
> > > What I've been doing is what I saw in the FAQ where I would use xfs_db
> > > and write core.mode 0 for these inodes. But there are just so many of
> > > them. And is that even the right thing to do?
> > That marks the inode as "free" which effectively junks it and then
> > xfs_repair will free all its extents next time it is run. Basically
> > you are removing the files from the filesystem and making them
> > unrecoverable.
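The xfs_db procedure referred to above looks roughly like this; the inode number is taken from the earlier error message and the device path is a placeholder:

```shell
# Zero the mode of a damaged inode so xfs_repair treats it as junk.
# -x enables write mode; 8628218 is the inode from the reported error,
# /dev/sdX1 is a placeholder device.
xfs_db -x -c 'inode 8628218' -c 'write core.mode 0' /dev/sdX1

# On the next run, xfs_repair frees the inode's extents and sweeps any
# now-unreferenced files into lost+found.
xfs_repair /dev/sdX1
```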
> In the case of directories, it blows away just the directory, but
> xfs_repair later on scans for orphan files, no? Or am I mistaken on how
> that works.
It does do that, putting all the unreferenced files into lost+found.
But you lose all the names, and you have to manually work out what
all the files are and what they used to be named and what directory
they belonged to. So it's a mess that would be better avoided if at
all possible.