Hello,
I needed to migrate a filesystem to a bigger RAID array (~60GB on old,
slow disks to ~90GB on a faster one) with a minimum of downtime. The old
filesystem was created with an old xfsprogs (2.5.3) and the new one with
2.6.0. My plan was:
- first night:
    xfsdump -l0 - /old | xfsrestore -r - /new
  which went OK
- second night:
    remount /old read-only
    xfsdump -l1 - /old | xfsrestore -r - /new
  which should have brought over the differences (~1GB), but xfsrestore
  died:
xfsrestore: using file dump (drive_simple) strategy
xfsrestore: version 2.2.14 (dump format 3.0) - Running single-threaded
xfsrestore: searching media for dump
xfsrestore: examining media file 0
xfsrestore: dump description:
xfsrestore: hostname: xxxxxxxxxxxx
xfsrestore: mount point: /old
xfsrestore: volume: /dev/vg01/store
xfsrestore: session time: Thu Nov 27 20:25:36 2003
xfsrestore: level: 1
xfsrestore: session label: ""
xfsrestore: media label: ""
xfsrestore: file system id: e4b3cb15-7927-4bb9-8445-5a0ca8d8798a
xfsrestore: session id: f689b1f9-30f7-46e3-a1af-1cb124916883
xfsrestore: media id: 0f55ae04-8c33-472b-b3b4-7b5907fd2ed9
xfsrestore: searching media for directory dump
xfsrestore: reading directories
xfsrestore: node.c:669: node_map: Assertion `nodegen == hdlgen' failed.
In the end, I skipped the -l1 dump/restore and went with
"rsync -v -aH /old/ /new/" instead.
My questions are, if anybody can help me:
- was that migration plan a valid one? That is, does a -l0 dump taken on
  a read/write filesystem plus a -l1 dump taken after remounting it
  read-only add up to a complete copy?
- are incremental restores allowed at all when reading from a pipe (i.e.
  not a real file/tape)? If not, I assume the file-based equivalent
  sketched after this list is the supported way.
- what really happened with xfsrestore? Was my old filesystem corrupted?
  I have the stderr from xfsdump and the stdout from all the xfsrestore
  sessions available, if anybody is interested. I also tried a few more
  runs of "xfsdump -R -l1 - /old | xfsrestore -r - /new" (which
  complained about -R on the xfsrestore side, and afterwards about
  mismatched ids between the inventory and the file; see the inventory
  check sketched below).
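In case pipes are indeed the problem, the file-based equivalent of the
plan would, as far as I understand the xfsdump(8)/xfsrestore(8)
manpages, look something like this (a sketch only; /scratch is a
hypothetical staging area with enough free space for the dump files):

    # first night: level 0 dump to a file, cumulative restore into /new
    xfsdump -l0 -f /scratch/store.l0 /old
    xfsrestore -r -f /scratch/store.l0 /new

    # second night: remount /old read-only, then level 1 on top
    xfsdump -l1 -f /scratch/store.l1 /old
    xfsrestore -r -f /scratch/store.l1 /new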
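And for the id mismatch: if I read the manpage right, the on-disk dump
inventory can be listed with

    # print the dump inventory (session ids, levels, media ids)
    # to compare against the ids xfsrestore printed above
    xfsdump -I

I can post that output too if it helps.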
Thanks,
Iustin Pop