
xfsdump problems

To: linux-xfs@xxxxxxxxxxx
Subject: xfsdump problems
From: jansen <jansen@xxxxxxxxxxxxxx>
Date: Mon, 15 Dec 2003 09:05:16 -0600
Sender: linux-xfs-bounce@xxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.5) Gecko/20031007

Hi,

I am having problems with xfsdump on a 1.3 TB hardware RAID array.  First
the dumps started running much slower than they used to, and now xfsdump
frequently hangs in the "D" state.  At the moment it only seems to get as
far as "constructing initial dump list" before it hangs.  The machine's load
average also climbs at this point and the system becomes sluggish.
I ran xfs_check on the partition but it didn't find any problems.  The
RAID array is a Promise RM8000 with eight 200 GB disks in RAID5, connected
to a 2x3.02 GHz Dell Precision 450 with an Adaptec 39160 SCSI controller.
I'm currently using XFS 1.3.1 with kernel 2.4.22-1.2129.nptl.xfssmp, but I
had the same problem with 2.4.20-20.8.XFS1.3.0smp.  There are no error
messages in syslog or dmesg associated with the xfsdump hang, and there are
no unusual messages when the filesystem is mounted.

I'd appreciate any help debugging this problem.


Here's the xfsdump command I'm using:

/usr/sbin/xfsdump -l 0 -e -d 2048 -o -T -L premo -f /dev/nst0 /dev/sdb1

Here's the output from the above xfsdump command:

/usr/sbin/xfsdump: using scsi tape (drive_scsitape) strategy
/usr/sbin/xfsdump: version 2.2.14 (dump format 3.0) - Running single-threaded
/usr/sbin/xfsdump: level 0 dump of premo:/usr/data/premo3
/usr/sbin/xfsdump: dump date: Mon Dec 15 07:28:09 2003
/usr/sbin/xfsdump: session id: ebeecaba-8d54-44ed-a645-1b7171b9c00d
/usr/sbin/xfsdump: session label: "premo"
/usr/sbin/xfsdump: ino map phase 1: skipping (no subtrees specified)
/usr/sbin/xfsdump: ino map phase 2: constructing initial dump list

Here's the output of xfs_info (yes, I know the sunit and swidth are wrong,
and I now know that "isize" should be 512):

# xfs_info /dev/sdb1

meta-data=/usr/data/premo3       isize=256    agcount=326, agsize=1048575 blks
         =                       sectsz=512
data     =                       bsize=4096   blocks=341794915, imaxpct=25
         =                       sunit=1      swidth=4 blks, unwritten=0
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=32768, version=1
         =                       sectsz=512   sunit=0 blks
realtime =none                   extsz=65536  blocks=0, rtextents=0
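
(For reference, here's my own back-of-the-envelope guess at what the stripe
values should have been; this is just arithmetic on my part, not something
I've verified.  The array is an 8-disk RAID5, so 7 disks hold data, which
means the stripe width should be 7 stripe units, with the stripe unit equal
to the array's chunk size.  Assuming, purely for illustration, a 64 KiB
chunk, the filesystem would have been made with something like:

  mkfs.xfs -d sunit=128,swidth=896 /dev/sdb1

where sunit and swidth are given in 512-byte units, i.e. 64 KiB and 448 KiB,
or 16 and 112 of the 4 KiB blocks that xfs_info reports.)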


Here's the output from xfs_db -c frag /dev/sdb1:

actual 4222235, ideal 3973436, fragmentation factor 5.89%



I ran strace on xfsdump, and these are the ioctl calls that appear to cause
the high load average:

14054 ioctl(4, 0xc0105865, 0xbffff530)  = 0
14054 ioctl(4, 0xc0105865, 0xbffff530)  = 0
14054 ioctl(4, 0xc0105865, 0xbffff530)  = 0
14054 ioctl(4, 0xc0105865, 0xbffff530)  = 0
14054 ioctl(4, 0xc0105865, 0xbffff530)  = 0
14054 ioctl(4, 0xc0105865, 0xbffff530)  = 0
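
If I'm decoding that ioctl number correctly (my own reading of it, so take it
with a grain of salt), 0xc0105865 is a read/write ioctl with type 'X' (0x58),
number 101 (0x65) and a 16-byte argument, which on a 32-bit build looks like
XFS_IOC_FSBULKSTAT from xfs_fs.h -- the bulkstat call xfsdump uses to walk
every inode while constructing the dump list.  A throwaway C snippet to check
the decoding:

#include <stdio.h>
#include <linux/ioctl.h>

int main(void)
{
        unsigned int cmd = 0xc0105865;   /* ioctl number seen in the strace */

        /* i386 _IOC layout: nr (8 bits) | type (8) | size (14) | dir (2) */
        printf("dir=%u size=%u type='%c' nr=%u\n",
               _IOC_DIR(cmd), _IOC_SIZE(cmd),
               (char)_IOC_TYPE(cmd), _IOC_NR(cmd));

        /* Should print: dir=3 size=16 type='X' nr=101.  'X'/101 matches
         * XFS_IOC_FSBULKSTAT, and struct xfs_fsop_bulkreq is 16 bytes on a
         * 32-bit build. */
        return 0;
}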


--


------- Stephan

