
Re: Repairing a possibly incomplete xfs_growfs command?

To: "Mark Magpayo" <mmagpayo@xxxxxxxxxxxxx>
Subject: Re: Repairing a possibly incomplete xfs_growfs command?
From: "Barry Naujok" <bnaujok@xxxxxxx>
Date: Wed, 23 Jan 2008 13:57:37 +1100
Cc: xfs@xxxxxxxxxxx
In-reply-to: <9CE70E6ED2C2F64FB5537A2973FA4F0253596D@pvn-3001.purevideo.local>
Organization: SGI
References: <9CE70E6ED2C2F64FB5537A2973FA4F0253595A@pvn-3001.purevideo.local> <20080117234604.GG155407@sgi.com> <9CE70E6ED2C2F64FB5537A2973FA4F0253595B@pvn-3001.purevideo.local> <20080119004018.GH155407@sgi.com> <9CE70E6ED2C2F64FB5537A2973FA4F0253596D@pvn-3001.purevideo.local>
Sender: xfs-bounce@xxxxxxxxxxx
User-agent: Opera Mail/9.24 (Win32)
On Wed, 23 Jan 2008 06:40:52 +1100, Mark Magpayo <mmagpayo@xxxxxxxxxxxxx> wrote:


-----Original Message-----
From: David Chinner [mailto:dgc@xxxxxxx]
Sent: Friday, January 18, 2008 4:40 PM
To: Mark Magpayo
Cc: David Chinner; xfs@xxxxxxxxxxx
Subject: Re: Repairing a possibly incomplete xfs_growfs command?

On Fri, Jan 18, 2008 at 09:50:37AM -0800, Mark Magpayo wrote:
> > > So is this all I need then prior to an xfs_repair?:
> > >
> > > > # for i in `seq 0 1 63`; do
> > > > > xfs_db -x -c "sb $i" -c 'write agcount 64' -c 'write dblock 4761733120' /dev/vg0/lv0
> >
> > Yes, I think that is all that is necessary (that+repair was what fixed
> > the problem at the customer site successfully).
> >
> >
>
> Is this supposed to be the proper output to the command above?
>
> purenas:~# for i in `seq 0 1 63`; do xfs_db -x -c "sb $i" -c 'write agcount 64' -c 'write dblock 4761733120' /dev/vg0/lv0; done
> agcount = 64
> field dblock not found
> parsing error

Ah - as Eric pointed out, that should be "dblocks".
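
For reference, the corrected loop with the "dblocks" field name would look like this (a sketch only, reusing the agcount and dblocks values quoted above; the filesystem must not be mounted while writing superblock fields with xfs_db -x):

   # for i in `seq 0 1 63`; do
   >   xfs_db -x -c "sb $i" -c 'write agcount 64' -c 'write dblocks 4761733120' /dev/vg0/lv0
   > done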

Cheers,

Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group

Any ideas on how long xfs_repair is supposed to take on 18TB? I started it Friday night, and it's now Tuesday afternoon. It's stuck here:

Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...

I figure traversing an 18TB filesystem takes a while, but does 4 days
sound right?

Was it stuck on Phase 6 all that time? With only 1GB of RAM (from your meminfo output) and an 18TB filesystem, Phases 3 and 4 will take a very long time due to swapping.
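
A quick way to check whether the repair is swapping or still progressing (just a rough check with standard Linux tools, assuming vmstat and procps are available):

   # watch swap-in/swap-out activity (the si/so columns) every 5 seconds
   vmstat 5

   # show the repair process's state and memory footprint
   ps -o pid,state,vsz,rss,comm -C xfs_repair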

Phase 6 in your scenario should be relatively quick and light on
memory usage (500MB as reported in your other email).

It is feasible that it has deadlocked by trying to access a buffer twice,
or by accessing a buffer that was never released. This is an unlikely
scenario, but it is possible.
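
To tell the difference between "slow" and "wedged", you can check whether the process is still issuing system calls (the PID below is a placeholder; substitute the real one from ps):

   # if this shows no new system calls for a long time, the process is likely stuck
   strace -p <pid_of_xfs_repair>

   # a process sitting in state D (uninterruptible sleep) with no disk activity is also suspicious
   grep State /proc/<pid_of_xfs_repair>/status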

Regards,
Barry.

