Hello,
I am testing the backup of a new SCSI-attached RAID5 SATA array to an
SDLT320 tape drive, using xfsdump 2.2.27.
I noticed that xfsdump is much slower than tar or cpio, and I would
like to know if I can do anything to speed things up.
NB: I used kernels 2.4.31 and 2.6.12.3, and did not see a noticeable
difference between the two. I tried the ihashsize mount option (when
using the 2.6.x kernel), and got approximately the same results.
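For reference, the ihashsize runs used a mount of this form (the value
shown is just an example, not a recommendation):
# mount -t xfs -o ihashsize=16384 /dev/sda1 /raid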
1) First, the testbed
The storage is a ~1.5TB array, which I partially filled with some of
the contents of our user homes. The 'df' and 'df -i' commands show the
following:
# df /dev/sda1
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda1 1463454592 29337040 1434117552 3% /raid
# df -i /dev/sda1
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sda1 1444198400 526803 1443671597 1% /raid
The filesystem is completely inactive during the tests.
The SDLT320 drive can sustain about 25MB/s (tested by streaming a big
500MB tar file to the tape with dd).
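That raw tape-speed test was of this form (the tar file path is just
an example):
# dd if=/tmp/test.tar of=/dev/nst0 bs=64k
dd reports the achieved bytes/sec when it completes.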
2) The tests
2.a) xfsdump
command used: xfsdump -J - /raid | dd obs=64k of=/dev/nst0
dd shows: 5985202 bytes/sec
I also tested with a 1M output blocksize (obs=1M), and did not get a
much better number (6052963 bytes/sec).
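One variant I have not benchmarked yet is letting xfsdump write to the
tape device directly instead of piping through dd (xfsdump treats a -f
destination like /dev/nst0 as tape media):
command: xfsdump -J -f /dev/nst0 /raid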
2.b) tar
command used: tar cf - /raid | dd obs=64k of=/dev/nst0
dd shows: 16518724 bytes/sec
2.c) cpio
command used: find /raid -xdev -print0 | cpio -0 -H newc -o \
| dd obs=64k of=/dev/nst0
dd shows: 15911069 bytes/sec
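As a further check (not run yet), replacing the tape with /dev/null
should show whether the bottleneck is xfsdump's own per-inode overhead
or the interaction with the tape:
command: xfsdump -J - /raid | dd obs=64k of=/dev/null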
3) Any tips to speed up xfsdump?
Thanks,
--
Nicolas