xfsrestore performance
xfs.pkoch at dfgh.net
Fri May 27 17:39:45 CDT 2016
Dear XFS experts,
I was using a 16TB Linux mdraid RAID10 volume built from 16 Seagate
2TB disks, formatted with an ext3 filesystem. It contained a couple
of hundred very large files (ZFS full and incremental dumps with
sizes between 10GB and 400GB). It also contained 7 million files
from our users' home directories, which were backed up with
rsync --link-dest=<last backup dir>, so most of these files are
just hard links to previous versions.
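For reference, one backup run looks roughly like this (the dates
and paths here are made up, not our real ones):

  # each run creates a new dated snapshot directory; files unchanged
  # since the previous snapshot become hard links, not new copies
  rsync -a --delete \
        --link-dest=/backup/2016-05-26 \
        /home/ /backup/2016-05-27/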
Two weeks ago I increased the volume from 16TB to 20TB, which
mdraid can do while the filesystem stays in use. The reshape took
two days. Then I unmounted the ext3 filesystem to grow it from
16TB to 20TB. And guess what: ext3 does not support filesystems
larger than 16TB.
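In commands, the failing part was roughly this (the device name and
mount point are placeholders, not the real ones):

  mdadm --grow /dev/md5 --raid-devices=20   # reshape ran for two days
  umount /oldfs
  e2fsck -f /dev/md5
  resize2fs /dev/md5    # fails: ext3 addresses blocks with 32 bits,
                        # so 16TiB is the ceiling with 4KiB blocks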
I decided to change the filesystem to XFS, but the system must be
online during weekdays, so I only have a 48-hour timeframe for
lengthy copies.
I added a 16TB temporary RAID5 volume to the machine, and here's
what I did so far (the corresponding commands are sketched after
the list):
1: create a 14TB XFS filesystem on the temporary RAID5 volume
2: first rsync run to copy the ext3 fs to the temporary XFS fs,
this took 6 days
3: another rsync run to copy what changed during the first run,
this took another 2 days
4: another rsync run to copy what changed during the second run,
this took another day
5: xfsdump the temporary xfs fs to /dev/null, took 20 hours
6: remount the ext3 fs read-only and do a final rsync run to
copy what changed during the third run; this took 10 hours
7: delete the ext3 fs and create a 20TB xfs fs
8: copy back the temporary xfs fs to the new xfs fs using
xfsdump | xfsrestore
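Roughly, in commands (/xtmp on /dev/md6 and /xtmp2 are real; the old
ext3 mount point and the new device name are placeholders):

  mkfs.xfs /dev/md6                        # step 1: XFS on the RAID5
  mount /dev/md6 /xtmp
  rsync -aH /oldfs/ /xtmp/                 # steps 2-4, repeated; -H
                                           #   keeps the hard links intact
  xfsdump -J - /xtmp >/dev/null            # step 5: 20 hours
  mount -o remount,ro /oldfs
  rsync -aH /oldfs/ /xtmp/                 # step 6: final delta, 10 hours
  mkfs.xfs /dev/md5                        # step 7: new 20TB XFS
  mount /dev/md5 /xtmp2
  xfsdump -J - /xtmp | xfsrestore -J - /xtmp2   # step 8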
Here's my problem: since dumping the temporary xfs fs to /dev/null
needed less than a day, I expected the xfsdump | xfsrestore
combination to finish in less than 2 days. xfsdump | xfsrestore
should be a lot faster than rsync, since it just pumps blocks from
one xfs fs into another one.
But either xfsrestore is painfully slow or I did something wrong.
Please have a look:
root at backup:/var/tmp# xfsdump -J -p600 - /xtmp | xfsrestore -J -a /var/tmp - /xtmp2
xfsrestore: using file dump (drive_simple) strategy
xfsdump: using file dump (drive_simple) strategy
xfsrestore: version 3.1.3 (dump format 3.0)
xfsdump: version 3.1.3 (dump format 3.0)
xfsrestore: searching media for dump
xfsdump: level 0 dump of backup:/xtmp
xfsdump: dump date: Fri May 27 13:15:42 2016
xfsdump: session id: adb95c2e-332b-4dde-9c8b-e03760d5a83b
xfsdump: session label: ""
xfsdump: ino map phase 1: constructing initial dump list
xfsdump: status at 13:25:42: inomap phase 1 14008321/28643415 inos scanned,
600 seconds elapsed
xfsdump: ino map phase 2: skipping (no pruning necessary)
xfsdump: ino map phase 3: skipping (only one dump stream)
xfsdump: ino map construction complete
xfsdump: estimated dump size: 12831156312640 bytes
xfsdump: creating dump session media file 0 (media 0, file 0)
xfsdump: dumping ino map
xfsrestore: examining media file 0
xfsrestore: dump description:
xfsrestore: hostname: backup
xfsrestore: mount point: /xtmp
xfsrestore: volume: /dev/md6
xfsrestore: session time: Fri May 27 13:15:42 2016
xfsrestore: level: 0
xfsrestore: session label: ""
xfsrestore: media label: ""
xfsrestore: file system id: 29825afd-5d7e-485f-9eb1-8871a21ce71d
xfsrestore: session id: adb95c2e-332b-4dde-9c8b-e03760d5a83b
xfsrestore: media id: 5ef22542-774a-4504-a823-d007d2ce4720
xfsrestore: searching media for directory dump
xfsrestore: NOTE: attempt to reserve 1162387864 bytes for
/var/tmp/xfsrestorehousekeepingdir/dirattr using XFS_IOC_RESVSP64 failed:
Operation not supported (95)
xfsrestore: NOTE: attempt to reserve 286438226 bytes for
/var/tmp/xfsrestorehousekeepingdir/namreg using XFS_IOC_RESVSP64 failed:
Operation not supported (95)
xfsrestore: reading directories
xfsdump: dumping directories
xfsdump: dumping non-directory files
xfsdump: status at 20:04:52: 1/7886560 files dumped, 0.0% data dumped,
24550 seconds elapsed
xfsrestore: 20756853 directories and 274128228 entries processed
xfsrestore: directory post-processing
xfsrestore: restoring non-directory files
xfsdump: status at 21:27:27: 26/7886560 files dumped, 0.0% data dumped,
29505 seconds elapsed
xfsdump: status at 21:35:46: 20930/7886560 files dumped, 0.0% data dumped,
30004 seconds elapsed
xfsdump: status at 21:46:26: 46979/7886560 files dumped, 0.1% data dumped,
30644 seconds elapsed
xfsdump: status at 21:55:52: 51521/7886560 files dumped, 0.1% data dumped,
31210 seconds elapsed
xfsdump: status at 22:05:45: 57770/7886560 files dumped, 0.1% data dumped,
31803 seconds elapsed
xfsdump: status at 22:15:43: 63142/7886560 files dumped, 0.1% data dumped,
32401 seconds elapsed
xfsdump: status at 22:25:42: 73621/7886560 files dumped, 0.1% data dumped,
33000 seconds elapsed
xfsdump: status at 22:35:51: 91223/7886560 files dumped, 0.1% data dumped,
33609 seconds elapsed
xfsdump: status at 22:45:42: 94096/7886560 files dumped, 0.2% data dumped,
34200 seconds elapsed
xfsdump: status at 22:55:42: 96702/7886560 files dumped, 0.2% data dumped,
34800 seconds elapsed
xfsdump: status at 23:05:42: 102808/7886560 files dumped, 0.2% data dumped,
35400 seconds elapsed
xfsdump: status at 23:16:15: 107096/7886560 files dumped, 0.2% data dumped,
36033 seconds elapsed
xfsdump: status at 23:25:47: 109079/7886560 files dumped, 0.2% data dumped,
36605 seconds elapsed
xfsdump: status at 23:35:52: 112318/7886560 files dumped, 0.2% data dumped,
37210 seconds elapsed
xfsdump: status at 23:45:46: 114975/7886560 files dumped, 0.2% data dumped,
37804 seconds elapsed
xfsdump: status at 23:55:55: 117260/7886560 files dumped, 0.2% data dumped,
38413 seconds elapsed
xfsdump: status at 00:05:44: 118722/7886560 files dumped, 0.2% data dumped,
39002 seconds elapsed
Seems like 2 days was a little optimistic.
Any ideas what's going wrong here?
Peter Koch