At 17:59 19-2-2003 -0800, l.a walsh wrote:
> Can you tell me what kernel you were running again? Was it the 1.2-based
> release with a bit older dump and restore?
linux-2.4.20 (vanilla) with feb 04,04 xfs-all patch
Can you try a newer CVS, perhaps? There has been some CVS breakage this
month; you might be affected.
No kernel panic or oops, but it flies by awfully fast. It's just
so darn unfriendly... since I can't really do anything other than
press reset at that point... I'm in the process of creating tbz2 archives
for the other partitions right now... dang, bzip2 is slow... I only
have 6G of data, and it's still cranking (~3G backed up so far).
If /var is a separate partition, you might be able to find the filesystem
shutdown message in the /var/log/messages log.
That might prove useful.
> From: Chris Wedgwood [mailto:cw@xxxxxxxx]
> does xfsdump puke because it gets an error from the kernel, or does it
> barf internally?
Hmm... it appears that something in the kernel driver thinks
the disk is corrupt, so it unmounts the root drive. XFS just
sorta exits... it flies by on the screen so fast and wasn't being
cooperative when I wanted to scroll back.
As soon as XFS detects a corruption error it shuts down the filesystem to
prevent worse damage. There must be something wedged that only gets
triggered by xfsdump.
I wish there were a "pipe" mode where I could say: no, don't
buffer, flush after every character. Inefficient as *bleep*, but
in some situations...
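For what it's worth, GNU coreutils later grew a `stdbuf` tool that does roughly this; it post-dates this thread, and the sketch below assumes a system where coreutils' stdbuf is available:

```shell
# Hedged sketch: stdbuf (a GNU coreutils addition, not in 2003-era
# userlands) adjusts stdio buffering of the command it wraps.
# -o0 turns off stdout buffering entirely; -oL would line-buffer.
printf 'one\ntwo\n' | stdbuf -o0 cat
```
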
Actually, now that I think about it: if I pipe the output
to a file in /tmp, the pipe would already be open, and it's on a different
partition, so theoretically the output should be saved, no?
nohup xfsdump -yada yada &
This should save at least the xfsdump output. Even if the root fs shuts
down, you should still be able to read and write the other partitions.
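A minimal sketch of that idea. The `sh -c '...'` line is a stand-in so the sketch is self-contained; the commented-out line shows what the real invocation might look like (dump level and destination path are assumptions, not from this thread):

```shell
# Sketch: background the dump and send its output to a file on a
# DIFFERENT partition (/tmp here), so the log survives a root-fs shutdown.
# The real call would look something like:
#   nohup xfsdump -l 0 -f /backup/root.dump / > /tmp/xfsdump.log 2>&1 &
nohup sh -c 'echo "xfsdump: dump complete"' > /tmp/xfsdump.log 2>&1 &
wait                    # wait for the background job to finish
cat /tmp/xfsdump.log    # still readable even after / has shut down
```
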
Haven't tried strace...was hoping this bug would tickle someone's
"aha" button and it was already 'solved'....
I'll try to get more info... but first I need to rebuild the utils.
Darn Red Hat RPMs aren't compatible with SuSE. Maybe you could link them
against the downrev glibc and cover both RH and SuSE? Just a thought.
You can rebuild the RPMs safely on a SuSE machine.
You can try the following (not sure if it works): rpm --rebuild --nodeps
on the source rpm. This should write some RPMs to the RPM build directory,
wherever that is on your system.
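A hedged sketch of that rebuild step. The SRPM filename is a placeholder, and note that newer rpm releases moved `--rebuild` into the separate `rpmbuild` front-end:

```shell
# Pick whichever rebuild front-end this rpm generation provides:
# rpm 3.x/4.0 accepted --rebuild directly; later releases use rpmbuild.
if command -v rpmbuild >/dev/null 2>&1; then
    REBUILD="rpmbuild --rebuild --nodeps"
else
    REBUILD="rpm --rebuild --nodeps"
fi
# Substitute the real source rpm name for the placeholder below.
echo "$REBUILD xfsdump-x.y.z.src.rpm"
# The resulting binary rpms typically land under
#   /usr/src/packages/RPMS/<arch>/  on SuSE, or
#   /usr/src/redhat/RPMS/<arch>/    on Red Hat.
```
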
It might just be your lucky day, if you only knew.