
Re: xfsrestore (Linux) generating Amanda's index

To: "Bernhard R. Erdmann" <be@xxxxxxxxxxx>, Steve Lord <lord@xxxxxxx>
Subject: Re: xfsrestore (Linux) generating Amanda's index
From: Timothy Shimmin <tes@xxxxxxxxxxxxxxxxxxxxxxx>
Date: Wed, 18 Jul 2001 09:37:55 +1000
Cc: "amanda-users@xxxxxxxxxx" <amanda-users@xxxxxxxxxx>, Linux XFS Mailing List <linux-xfs@xxxxxxxxxxx>
In-reply-to: <3B54C46A.D667F144@xxxxxxxxxxx>; from be@xxxxxxxxxxx on Wed, Jul 18, 2001 at 01:04:10AM +0200
References: <200107172243.f6HMh3K02985@xxxxxxxxxxxxxxxxxxxx> <3B54C1E4.ECD9EC9F@xxxxxxxxxxx> <be@xxxxxxxxxxx> <200107172243.f6HMh3K02985@xxxxxxxxxxxxxxxxxxxx> <200107172257.f6HMvZu03041@xxxxxxxxxxxxxxxxxxxx> <3B54C46A.D667F144@xxxxxxxxxxx>
Sender: owner-linux-xfs@xxxxxxxxxxx
Hi Bernhard,

On Wed, Jul 18, 2001 at 01:04:10AM +0200, Bernhard R. Erdmann wrote:
> > Are you using real tape media, or a file, I suspect the file case could
> > be smart enough not to do the split as it does not make a whole lot
> > of sense there.
> 
> Amanda triggers xfsdump to write to stdout while splitting it to go to
> tape or a file on the holding disk (via network) and to xfsrestore
> reading from stdin to generate an index. So xfsrestore shouldn't know
> anything of an actual tape media.
> 
BTW, the dump formats for file and tape are different.
If you are dumping to a file or to stdout, the dump is in file format.
If that file-format dump is then written to tape and you try to restore
it with xfsrestore directly from the tape (-f /dev/st0),
it won't work. If you convert it back to a file, or feed it to
xfsrestore on stdin, it will work fine.
Just something to keep in mind.
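The stream fork Amanda does is easy to picture as a tee: one copy of the
file-format stream goes to the holding disk, the other straight to
xfsrestore's stdin for the index. A minimal sketch, with the caveat that
the xfsdump/xfsrestore invocation in the comment is an assumption (check
your man pages) and the paths are made up:

```shell
# Roughly the pipeline Amanda sets up (flags and paths are assumptions):
#
#   xfsdump -l 0 - /home | tee /holding/home.0 | xfsrestore -t - > home.index
#
# The same tee fork, demonstrated with ordinary data instead of a dump:
printf 'file-format stream\n' | tee /tmp/holding.copy
```

Since tee just duplicates its stdin, xfsrestore only ever sees a pipe,
never the tape device, which is why it can't know about the actual media.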

On Tue, Jul 17, 2001 at 05:43:03PM -0500, Steve Lord wrote:
> 
> It looks like xfsdump will break large files into chunks in the
> dump archive. This has more to do with not splitting a record
> in the dump archive between tape media than anything else I think.
> It is also possible for an interrupted dump to be restored, and
> really big files would tend to be a hindrance to this.
> 
> I found this comment in the code:
> 
> /* a regular file may be broken into several portions if its size
>  * is large. Each portion begins with a filehdr_t and is followed by
>  * several extents.
>  */
> 
> It looks like the inventory list code is reporting each individual
> component of the file which it finds in the archive.
Yep.
It's reporting each file record it comes across in the dump.
As far as I know, a regular file can be split into multiple records if:
(1) we have multiple streams, and when the data size is divided
    among the streams, the file in question falls over the
    boundary of the streams (the algorithm for deciding to split is
    a bit more complicated than this);
(2) the file is larger than 16MB, in which case it is split into
    16MB chunks (or extent groups, as the code calls them).
For Linux, we don't do any multi-threading, which means we
don't do multiple streams, which means case (1) won't happen.
Case (2) is what you are seeing.
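Given the 16MB extent groups in case (2), you can predict how many times a
big file will show up in the index listing. A quick sketch (my own
illustration of the arithmetic, not code from xfsdump):

```python
# Estimate how many file records xfsrestore's index lists for a file in a
# single-stream dump, given that large files are split into 16MB extent
# groups, each beginning with its own filehdr_t.

EXTENT_GROUP_SIZE = 16 * 1024 * 1024  # 16MB, per case (2) above

def index_records_for_file(size_bytes: int) -> int:
    """A file appears once per extent group in the inventory listing."""
    if size_bytes <= EXTENT_GROUP_SIZE:
        return 1
    # ceiling division: a partial last chunk still gets its own record
    return -(-size_bytes // EXTENT_GROUP_SIZE)

# e.g. a 40MB file shows up three times in the index
print(index_records_for_file(40 * 1024 * 1024))  # -> 3
```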
I actually have web page notes on xfsdump mentioning this
among other things; perhaps I will look into putting
them on oss if people are interested.

On Wed, Jul 18, 2001 at 12:53:24AM +0200, Bernhard R. Erdmann wrote:
> > Is this actually causing problems, or is it just a query as to why
> > you see the odd names?
> 
> Just being curious... I haven't recognized any problems yet. It just
> causes an annoying listing of mangled filenames in addition to the
> original filename in the index, my SysOps will ask me "How can we rely
> on it if it's messing up the index? 
> Does it mess up the backup, too?"
I think this is a little unfair.
It _is_ displaying the contents of the dump - it is showing where
the file records are split.

> Does it mess up the backup, too?
> and I have to write some more lines in the documentation just to ignore
> these offset names...
> 
I guess one could change the code and find another option letter 
to stop this from being reported - I don't know if it's worth it.

--Tim 

