
Re: xfsdump problems

To: Jeremy Jackson <jerj@xxxxxxxxxxxx>
Subject: Re: xfsdump problems
From: jansen <jansen@xxxxxxxxxxxxxx>
Date: Mon, 15 Dec 2003 11:03:20 -0600
Cc: linux-xfs@xxxxxxxxxxx
In-reply-to: <3FDDE2EF.2000006@xxxxxxxxxxxx>
References: <3FDDCDAC.8030807@xxxxxxxxxxxxxx> <3FDDE2EF.2000006@xxxxxxxxxxxx>
Sender: linux-xfs-bounce@xxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.5) Gecko/20031007

Hi,

Jeremy Jackson wrote:
> jansen wrote:


>> Hi,
>>
>> I am having problems with xfsdump on a 1.3 TB hardware RAID array.  First
>> the dumps started running much slower than they used to, and now xfsdump is
>> frequently hanging in the "D" state.  At the moment it seems to get only
>> as far as "constructing initial dump list", where it hangs.  The machine
>> also shows a high load average at this point and becomes sluggish.
>> I ran xfs_check on the partition, but it didn't find any problems.  The
>> RAID array is a Promise RM8000 with 8 200 GB disks in RAID5, connected
>> to a 2x3.02 GHz Dell Precision 450 with an Adaptec 39160 SCSI controller.
>> I'm currently using XFS 1.3.1 with this kernel: 2.4.22-1.2129.nptl.xfssmp,
>> but I had the same problem with 2.4.20-20.8.XFS1.3.0smp.  There are
>> no error messages in syslog or dmesg associated with the hang of xfsdump,
>> and there are no unusual messages when the disk is mounted.
>>
>> I'd appreciate any help debugging this problem.
>>
>> Here's the xfsdump command I'm using:
>>
>> /usr/sbin/xfsdump -l 0 -e -d 2048 -o -T -L premo -f /dev/nst0 /dev/sdb1


> Can you say what tape drive you are using?



I'm dumping to an IBM 3581 seven-tape LTO SCSI autoloader.


>> Here's the output from the above xfsdump command:
>>
>> /usr/sbin/xfsdump: using scsi tape (drive_scsitape) strategy
>> /usr/sbin/xfsdump: version 2.2.14 (dump format 3.0) - Running single-threaded


> You should be aware that these versions of xfsdump (2.2.13 and 2.2.14) have
> problems with multiple sessions on SCSI tapes.  As a workaround, use 2.2.12
> until 2.2.15 (which includes the fix) comes out.  I don't think that's the
> issue here, though, since you are using -o.



>> I did an "strace" on xfsdump, and these ioctl calls are the ones that
>> appear to cause the high load average:
>>
>> 14054 ioctl(4, 0xc0105865, 0xbffff530)  = 0
>> 14054 ioctl(4, 0xc0105865, 0xbffff530)  = 0
>> 14054 ioctl(4, 0xc0105865, 0xbffff530)  = 0
>> 14054 ioctl(4, 0xc0105865, 0xbffff530)  = 0
>> 14054 ioctl(4, 0xc0105865, 0xbffff530)  = 0
>> 14054 ioctl(4, 0xc0105865, 0xbffff530)  = 0



> It might help to look earlier in the output to see what file descriptor 4
> points to.  Look for open( ... ) = 4 lines.


It appears to be opening the mount point, here's the line:

14054 open("/usr/data/premo3", O_RDONLY|O_LARGEFILE) = 4

I've attached the complete output from the strace for those interested.
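For reference, that repeated ioctl request number can be decoded from its bit
fields.  Here's a quick sketch in Python; the field layout follows the Linux
_IOWR() encoding from asm/ioctl.h, and the XFS interpretation at the end is my
reading of the 2.4-era xfs_fs.h, so treat it as an assumption:

```python
# Decode a Linux ioctl request number into its _IOC() fields.
# Encoding (asm/ioctl.h): dir:2 | size:14 | type:8 | nr:8
REQ = 0xC0105865  # the request seen repeatedly in the strace above

direction = (REQ >> 30) & 0x3        # 3 == _IOC_READ | _IOC_WRITE
size      = (REQ >> 16) & 0x3FFF     # size of the argument struct in bytes
ioc_type  = chr((REQ >> 8) & 0xFF)   # ioctl "magic" namespace character
nr        = REQ & 0xFF               # command number within that namespace

print(direction, size, ioc_type, nr)  # -> 3 16 X 101
```

'X' is the XFS ioctl magic, and a read/write command 101 with a 16-byte
argument matches XFS_IOC_FSBULKSTAT (_IOWR('X', 101, struct xfs_fsop_bulkreq)
on a 32-bit kernel), the bulkstat call xfsdump uses to enumerate inodes.
That would be consistent with the hang at "constructing initial dump list".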

> Cheers,
>
> Jeremy Jackson


--


------- Stephan

Attachment: xfsdump
Description: Text document
