
RE: using xfsdump to synchronise filesystems?

To: "'ivanr@xxxxxxx'" <ivanr@xxxxxxx>, Matthijs van der Klip <matthijs.van.der.klip@xxxxxx>
Subject: RE: using xfsdump to synchronise filesystems?
From: Matthijs van der Klip <matthijs.van.der.klip@xxxxxx>
Date: Thu, 6 Dec 2001 09:31:41 +0100
Cc: Linux XFS Mailing List <linux-xfs@xxxxxxxxxxx>
Sender: owner-linux-xfs@xxxxxxxxxxx
On Thu, 6 Dec 2001, Ivan Rayner wrote:
> I don't think xfsdump is a very good choice here.  xfsdump will scan the
> entire filesystem every time you run it, even if you select a higher level
> dump.  I doubt that for a 10GB filesystem it will be able to run to
> completion within 5 minutes ('course this depends on your circumstances).

Ah, this is the information I needed. I was already wondering whether xfsdump
did a full scan or something more intelligent.

BTW, I can do a level 2 xfsdump of a 20 GB filesystem in around 80 seconds.
This was timed on a filesystem with around 300,000 files.

> I think you'd be better off monitoring the ftp log to see what files have
> changed and then use rsync on those.  I'm sure it'd be fairly easy to do
> this in something like perl.

<slightly off-topic>

The problem with this is that the information contained in the ftp logs is,
by default, not sufficient:

1) I can use the xferlog, but this contains only transfers, no mkdirs, chmods,
   etc. Moreover, spaces in filenames are replaced by underscores, so the log
   essentially contains invalid filenames.

2) I can use the ftp command log, but this contains literal ftp commands,
   which I would have to parse in order to filter out the information I need.
   It doesn't even contain full pathnames to files...

Of course this could be solved by hacking the ftp daemon so that it writes the
log I want, but I'm not convinced this is the way to go.
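For what it's worth, here is a minimal sketch of the log-watching approach
Ivan suggested, in Python rather than perl. It assumes a wu-ftpd-style
xferlog (whitespace-separated fields, filename in the ninth field) and a
hypothetical rsync destination; the underscore problem described above still
applies, so the extracted name may not match the file on disk.

```python
# Sketch only: parse wu-ftpd style xferlog entries and rsync the files
# that were transferred. XFERLOG and DEST are assumptions, not real config.
import subprocess

XFERLOG = "/var/log/xferlog"   # assumed wu-ftpd style transfer log
DEST = "mirror:/data/"         # hypothetical rsync destination


def parse_xferlog_line(line):
    """Return the filename field of an xferlog entry, or None.

    Fields 0-4 are the timestamp, 5 the transfer time, 6 the remote
    host, 7 the byte count, 8 the filename. Caveat from this thread:
    spaces in filenames are logged as underscores, so the name here
    may not match the actual file on disk.
    """
    fields = line.split()
    if len(fields) < 9:
        return None
    return fields[8]


def sync_file(path):
    # -a preserves permissions/times, -R keeps the relative path
    # on the destination side.
    subprocess.run(["rsync", "-aR", path, DEST], check=True)


if __name__ == "__main__":
    sample = ("Thu Dec 6 09:31:41 2001 1 client.example.com 1024 "
              "/pub/report_2001.txt b _ i r matthijs ftp 0 * c")
    print(parse_xferlog_line(sample))  # /pub/report_2001.txt
```

Note this still only covers transfers; mkdirs and chmods would need the
command log (or a patched daemon) as discussed above.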

</slightly off-topic>


Best regards,

Matthijs van der Klip
NOS Dutch Public Broadcasting Organisation



