It looks like xfsdump will break large files into chunks in the
dump archive. I think this has more to do with not splitting a
record in the dump archive across tape media than anything else.
It is also possible to restore an interrupted dump, and really
big files stored as a single piece would tend to be a hindrance
to that.
I found this comment in the code:
/* a regular file may be broken into several portions if its size
* is large. Each portion begins with a filehdr_t and is followed by
* several extents.
*/
It looks like the inventory list code is reporting each individual
portion of the file that it finds in the archive.
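Just to illustrate (this is a made-up sketch, not the actual
xfsrestore listing code): a table-of-contents routine that prints
one line per portion, tacking the byte offset onto every portion
that doesn't start at zero, would produce exactly the "<filename>"
and "<filename> (offset N)" entries you are seeing:

    /* A made-up illustration, not the real xfsrestore source: a
     * table-of-contents pass that prints one line per file portion,
     * appending the byte offset for any portion that does not start
     * at offset zero.
     */
    #include <stdio.h>

    struct portion {                /* hypothetical per-portion record */
        const char *path;           /* pathname from the filehdr_t     */
        long long   offset;         /* byte offset of this portion     */
    };

    static void list_portion(const struct portion *p)
    {
        if (p->offset == 0)
            printf("%s\n", p->path);
        else
            printf("%s (offset %lld)\n", p->path, p->offset);
    }

    int main(void)
    {
        /* one large file dumped as two portions */
        struct portion parts[] = {
            { "home/user/bigfile", 0 },
            { "home/user/bigfile", 16769536LL },
        };
        for (int i = 0; i < 2; i++)
            list_portion(&parts[i]);
        return 0;
    }

With the sed in sendbackup-dump.c prepending a "/", those lines
end up in the index as "/home/user/bigfile" and
"/home/user/bigfile (offset 16769536)".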
Is this actually causing problems, or is it just a query as to why
you see the odd names?
Steve
> Hi,
>
> I've experienced strange filenames in Amanda's index like "<filename>
> (offset 16769536)" in addition to "<filename>" for files on Linux XFS
> using xfsdump-1.0.9.
>
> The index is generated by client-src/sendbackup-dump.c:
>
> program->backup_name = XFSDUMP;
> program->restore_name = XFSRESTORE;
>
> indexcmd = vstralloc(XFSRESTORE,
>                      " -t",
>                      " -v", " silent",
>                      " -",
>                      " 2>/dev/null",
>                      " | sed",
>                      " -e", " \'s/^/\\//\'",
>                      NULL);
> write_tapeheader();
>
> start_index(createindex, dumpout, mesgf, indexf, indexcmd);
>
> What's that offset about?
>