| To: | xfs@xxxxxxxxxxx |
|---|---|
| Subject: | Re: xfsrestore performance |
| From: | xfs.pkoch@xxxxxxxx |
| Date: | Tue, 31 May 2016 11:39:39 +0200 |
| Delivered-to: | xfs@xxxxxxxxxxx |
| Sender: | xfs.pkoch@xxxxxxxx |
Dear Dave:

Thanks very much for your explanations.

2016-05-30 1:20 GMT+02:00 Dave Chinner <david@xxxxxxxxxxxxx>:
....
> > 5: xfsdump the temporary xfs fs to /dev/null. took 20 hours
>
> dump is fast - restore is the slow point because it has to recreate

The filesystem is not exactly as you described. Did you notice that our backup-server has 46 versions of our home-directories and 158 versions of our mailserver? So if a file has not been changed for more than a year, it will exist once on the backup server together with 45 / 157 hard links.

I'm astonished myself, firstly about the numbers and also about the fact that our backup-strategy works quite well. Also rsync does a very good job. It was able to copy all these hard links in 6 days from a 16TB ext3 filesystem on a RAID10 volume to a 15TB xfs filesystem on a RAID5 volume. And right now 4 rsync processes are copying the 15TB xfs filesystem back to a 20TB xfs filesystem, and it seems as if this will finish today (after only 3 days). Very nice.

> Keep in mind that it took dump the best part of 7 hours just to read

That was my misunderstanding. I believed/hoped that a tool built for a specific filesystem would outperform a generic tool like rsync. I thought xfsdump would write all used filesystem blocks into a data stream and xfsrestore would just read the blocks from stdin and write them back to the destination filesystem - much like a dd process that knows about the device content and can skip unused blocks.

> Seems like 2 days was a little optimistic

It would have taken approx. 1000 hours.

> Personally, I would have copied the data using rsync to the

Next time I will do it like you suggest, with one minor change: instead of xfs_copy I would use dd, which makes sense if the filesystem is almost full. Or do you believe that xfs_copy is faster than dd? Or will the xfs_growfs create any problems?

I used dd on Saturday to copy the 15TB xfs filesystem back onto the 20TB RAID10 volume and enlarged the filesystem with xfs_growfs. The result was an xfs filesystem with layout parameters matching the temporary RAID5 volume built from 16 1TB disks with a 256K chunksize. But the new RAID10 volume consists of 20 2TB disks using a chunksize of 512K. And growing the filesystem raised the allocation group count from 32 to 45.

I reformatted the 20TB volume with a fresh xfs filesystem and let mkfs.xfs decide about the layout. Does that give me an optimal layout?

I will enlarge the filesystem in the future. This will increase my allocation group count. Is that a problem that I should better have avoided in advance by reducing the agcount?

Kind regards and thanks very much for the useful infos

Peter Koch

--
Peter Koch
Passauer Strasse 32, 47249 Duisburg
Tel.: 0172 2470263
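
[For reference, a minimal sketch of the two approaches discussed above: the block-level clone followed by xfs_growfs (which keeps the old RAID5-era layout) versus a fresh mkfs.xfs with explicit geometry for the new volume. The device names and the RAID10 geometry values (su=512k for the chunk size, sw=10 for 20 disks arranged as mirrored pairs) are assumptions for illustration only, not taken from the original mails.]

```sh
# Assumed device/mount names - adjust to the real volumes.
OLD=/dev/mapper/old_15tb      # source: 15TB xfs on the temporary RAID5
NEW=/dev/mapper/new_20tb      # target: 20TB RAID10 volume
MNT=/mnt/new

# Approach 1: block-level clone, then grow.
# The cloned filesystem keeps the RAID5-era sunit/swidth and agcount,
# which is why the geometry no longer matched the RAID10 volume.
dd if=$OLD of=$NEW bs=64M status=progress
mount $NEW $MNT
xfs_growfs $MNT               # grow to fill the 20TB volume (raises agcount)

# Approach 2: fresh filesystem with geometry chosen for the new volume,
# then copy the data back in (e.g. the parallel rsyncs mentioned above).
# su = RAID chunk size, sw = number of data-bearing disks
# (assumed: 20-disk RAID10 -> 10 data disks, 512K chunk).
# agcount can also be fixed at mkfs time if later growth is planned.
mkfs.xfs -d su=512k,sw=10 $NEW
mount $NEW $MNT
xfs_info $MNT                 # verify sunit/swidth and agcount
```

[Whether the layout mkfs.xfs auto-detects is already optimal on this hardware is exactly the question left open above; the values in the sketch are only meant to make the parameters being discussed concrete.]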