Re: xfsdump, xfsrestore segmentation faults
Ivan,
Thank you very much for the reply, and the suggestion. I've been out of
town and couldn't reply earlier; happily, I can no longer reproduce the
behavior, which is great.
Getting the latest code from CVS worked very well. I wish I knew why
there were so many orphaned inodes, but for now, things seem stable so
I'll just go with it.
Again, thanks for the advice.
Chris
ivanr@sgi.com wrote:
> On Fri, 8 Feb 2002, Christopher Jones wrote:
>
>
>>Hi everyone,
>>
>>I'm experiencing some problems with XFS on a Redhat 7.2 installation
>>using the SGI supplied installer, and was hoping that someone might be
>>able to provide some advice.
>>
>
> Could you try reproducing the core dump with the latest xfsdump version,
> and then send me the core file and xfsdump binary?
>
> I'll take a look at them next week.
>
> Thanks,
> Ivan
>
>
>
>>I've had the system up for 2 months now, but in the last 2 weeks have
>>found that both xfsdump and xfsrestore will crash the system. I had
>>consistently been doing backups just fine until, around January 28th,
>>during a dump, the system load peaked in the 40s as reported by top
>>(although CPU and memory were fine).
>>
>>My typical dump would look like (sans the escapes):
>>
>>/sbin/xfsdump -F -o -l 9 -L session091 \
>> -M tue-2002-02-08 \
>> -f /dev/nst0 /home
>>
>>and a restore:
>>
>>xfsrestore -if /dev/nst0 /home
>>
>>I have two LVM volumes (/home and /scratch01):
>>
>>Filesystem Size Used Avail Use% Mounted on
>>/dev/sda1 1.9G 368M 1.5G 19% /
>>/dev/sda6 97M 4.8M 92M 5% /boot
>>/dev/sda7 9.8G 4.9G 4.9G 50% /usr
>>/dev/sda8 3.9G 243M 3.6G 7% /var
>>/dev/sda10 55G 18G 36G 33% /d01
>>none 753M 0 753M 0% /dev/shm
>>/dev/vg03/lv03 98G 64G 33G 66% /home
>>/dev/vg00/lv00 78G 70G 8.7G 89% /scratch01
>>
>>System software:
>>
>>Linux 2.4.9-13SGI_XFS_1.0.2smp i686
>>
>>lvm_1.0.1-rc4
>>xfsdump-1.1.12-0
>>xfsprogs-1.3.16-0
>>xfsprogs-devel-1.3.16-0
>>
>>After the first time the dump seg faulted, I did an xfs_repair, which
>>found many orphaned inodes, and placed them in lost+found (on the order
>>of 200).
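>>
>>For what it's worth, a quick way to survey what xfs_repair left behind
>>(entries in lost+found are named by inode number). The paths below are
>>stand-ins for illustration, not the real /home filesystem:

```shell
# Illustrative stand-in only: simulate a lost+found directory as
# xfs_repair would leave it (entries named by inode number), then
# count the recovered entries.
demo="$(mktemp -d)/lost+found"        # stand-in for /home/lost+found
mkdir -p "$demo"
touch "$demo/131072" "$demo/262144"   # fake orphaned-inode entries
ls "$demo" | wc -l                    # number of recovered entries
```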
>>
>>Here's the output from one of the failed dumps:
>>
>>[root@mistral log]# /sbin/xfsdump -F -o -l 0 -L session087 \
>>    -M tue-2002-02-08 -f /dev/nst0 /
>>/sbin/xfsdump: using scsi tape (drive_scsitape) strategy
>>/sbin/xfsdump: version 3.0 - Running single-threaded
>>/sbin/xfsdump: level 0 dump of mistral:/
>>/sbin/xfsdump: dump date: Fri Feb 8 03:35:18 2002
>>/sbin/xfsdump: session id: e04d9485-3601-4d5b-86f0-1a946c5f117f
>>/sbin/xfsdump: session label: "session087"
>>/sbin/xfsdump: ino map phase 1: skipping (no subtrees specified)
>>/sbin/xfsdump: ino map phase 2: constructing initial dump list
>>/sbin/xfsdump: ino map phase 3: skipping (no pruning necessary)
>>/sbin/xfsdump: ino map phase 4: skipping (size estimated in phase 2)
>>/sbin/xfsdump: ino map phase 5: skipping (only one dump stream)
>>/sbin/xfsdump: ino map construction complete
>>/sbin/xfsdump: estimated dump size: 382444928 bytes
>>/sbin/xfsdump: preparing drive
>>/sbin/xfsdump: WARNING: media may contain data. Overwrite option specified
>>Segmentation fault
>>
>>I've tried the -v trace, but don't get any different output.
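>>
>>Since the crash is a plain segfault, one step that may help is making
>>sure the shell will actually write a core file before reproducing it.
>>A minimal sketch, assuming bash and a writable working directory (the
>>follow-up commands in comments are illustrative):

```shell
# Lift the core-file size limit for this shell so the next xfsdump
# segfault leaves a core file behind.
ulimit -c unlimited
ulimit -c    # verify; should print "unlimited"

# Then reproduce the crash and bundle the core with the binary, e.g.:
#   /sbin/xfsdump -F -o -l 0 -L session087 -M tue-2002-02-08 -f /dev/nst0 /
#   tar czf xfsdump-crash.tar.gz core /sbin/xfsdump
```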
>>
>>Is anyone else having problems with these combinations?
>>
>>Can anyone suggest a next step in troubleshooting?
>>
>>Thanks very much in advance,
>>
>>Chris
>>
>>
>>
>
--
_________________________________________________________________
christopher jones cjones@lifesci.ucsb.edu (805) 893-5144
marine science institute university of california, santa barbara
_________________________________________________________________