Thanks Barry. Couple of follow-up questions:
For "making an entire of the device", i presume you mean using dd, since
it's an unmounted filesystem?
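If so, I imagine something roughly along these lines, with the destination
path just a placeholder for wherever there's enough space:

    dd if=/dev/sda1 of=/some/other/volume/sda1.img bs=4M conv=noerror,sync

(bs and conv are guesses on my part, to keep dd going past any read errors;
please say if you'd do it differently.)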
Also, I noted that my system's older xfsprogs 2.6.13-1 doesn't include
xfs_metadump; is this a newer utility?
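(Once I'm on a version that has it, I assume the usage is source device
followed by an output file, i.e. something like

    xfs_metadump -o /dev/sda1 /some/other/volume/sda1.metadump

but please correct me if that's not the right invocation.)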
Rather than updating this system, I'm thinking of performing the recovery
from a Linux LiveCD-type setup. I'm considering Knoppix 5.1.1, which
includes:
    Linux 2.6.19
    xfsprogs 2.8.11-1
Any concerns with these? Or would you strongly recommend I roll my own
xfsprogs 2.9.4 and use the system itself (choice of kernels 2.6.17.11 or
2.6.23.16)?
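If I end up building 2.9.4 from source, I assume the usual tarball routine
is all that's needed, roughly (directory name guessed from the tarball name):

    tar xzf xfsprogs_2.9.4-1.tar.gz
    cd xfsprogs-2.9.4
    ./configure
    make
    make install

Let me know if the build wants anything beyond that.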
thanks
slaton
Slaton Lipscomb
Nogales Lab, Howard Hughes Medical Institute
http://cryoem.berkeley.edu
On Thu, 28 Feb 2008, Barry Naujok wrote:
> On Thu, 28 Feb 2008 09:44:04 +1100, slaton <slaton@xxxxxxxxxxxx> wrote:
>
> > Hi,
> >
> > I'm still hoping for some help with this. Is any more information
> > needed in addition to the ksymoops output previously posted?
> >
> > In particular i'd like to know if just remounting the filesystem (to
> > replay the journal), then unmounting and running xfs_repair is the
> > best course of action. In addition, i'd like to know what recommended
> > kernel/xfsprogs versions to use for best results.
>
> I would get xfsprogs 2.9.4 (2.9.6 is not a good version with your
> kernel),
> ftp://oss.sgi.com/projects/xfs/previous/cmd_tars/xfsprogs_2.9.4-1.tar.gz
>
> To be on the safe side, either make an entire copy of your drive to
> another device, or run "xfs_metadump -o /dev/sda1" to capture a metadata
> image (no file data) of your filesystem.
>
> Then run xfs_repair (a mount/unmount may be required if the log is dirty).
>
> If the filesystem is in a bad state after the repair (e.g. everything in
> lost+found), email the xfs_repair log and request further advice.
>
> Regards,
> Barry.
>
>
> > thanks
> > slaton
> >
> > Slaton Lipscomb
> > Nogales Lab, Howard Hughes Medical Institute
> > http://cryoem.berkeley.edu
> >
> > On Mon, 25 Feb 2008, slaton wrote:
> >
> > > Thanks for the reply.
> > >
> > > > Are you hitting http://oss.sgi.com/projects/xfs/faq.html#dir2 ?
> > >
> > > Presumably not - i'm using 2.6.17.11, and that information indicates the
> > > bug was fixed in 2.6.17.7.
> > >
> > > I've attached the output from running ksymoops on messages.1. First
> > > crash/trace (Feb 21 19:xx) corresponds to the original XFS event; the
> > > second (Feb 22 15:xx) is the system going down when i tried to unmount the
> > > volume.
> > >
> > > Here are the additional syslog msgs corresponding to the Feb 22 15:xx
> > > crash.
> > >
> > > Feb 22 15:47:13 qln01 kernel: grsec: From 10.0.2.93: unmount of /dev/sda1
> > > by /bin/umount[umount:18604] uid/euid:0/0 gid/egid:0/0, parent
> > > /bin/bash[bash:31972] uid/euid:0/0 gid/egid:0/0
> > > Feb 22 15:47:14 qln01 kernel: xfs_force_shutdown(sda1,0x1) called from
> > > line 338 of file fs/xfs/xfs_rw.c. Return address = 0xffffffff88173ce4
> > > Feb 22 15:47:14 qln01 kernel: xfs_force_shutdown(sda1,0x1) called from
> > > line 338 of file fs/xfs/xfs_rw.c. Return address = 0xffffffff88173ce4
> > > Feb 22 15:47:28 qln01 kernel: BUG: soft lockup detected on CPU#0!
> > >
> > > thanks
> > > slaton
> >
> >
>