
Re: Backing up a "live" file system.

To: <justin@xxxxxxxxxx>
Subject: Re: Backing up a "live" file system.
From: ivanr@xxxxxxxxxxxxxxxxx (Ivan Rayner)
Date: Tue, 5 Jun 2001 14:59:38 +1000
Cc: <linux-xfs@xxxxxxxxxxx>
In-reply-to: <200106042046.f54Kk4k01040@xxxxxxxxxxxxxxxxxxxx>
Sender: owner-linux-xfs@xxxxxxxxxxx
On Mon, 4 Jun 2001, Steve Lord wrote:

> >
> > Thanks for fixing my problem with NF and backups, Steve.  After I got
> > caught up with all of the email, I updated to the 2.4.5 kernel and have
> > returned to testing to see if I can get it to break.  So far, it seems
> > like everything is working great.
> >
> > I have a few questions and concerns, though.
> >
> > When I do a backup now, I get lots of warnings from xfsdump that look
> > like:
>
> Someone who understands xfsdump better than I do can probably give you a
> more reasonable explanation, but basically, the first pass of the dump is
> an inode scan of the whole filesystem.  This is used to decide what to
> dump.  We then take the list of inodes and open them in turn; if any have
> gone missing in the meantime, you get warnings about not being able to
> open them.  Note that pathnames are not used in the dump process to look
> up the inodes, or to open them.

Unfortunately, although xfsdump does store the directory structure in the
dump, by the time it gets around to actually dumping the file data that
information is long gone, so there is no way to print the names of those
files which didn't get dumped.
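
To make the "no pathname" part concrete, here is a rough sketch of the
idea.  It uses the generic Linux file-handle syscalls (name_to_handle_at /
open_by_handle_at), which are not necessarily what xfsdump itself calls
internally, and it cheats by turning a pathname into a handle in the first
place (the real dump gets its handles from the inode scan), so treat it
purely as an illustration of the failure mode:

#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <mountpoint> <file>\n", argv[0]);
        return 1;
    }

    /* "Pass 1": remember the file as a handle (an inode-like identity),
     * not as a pathname.  The real dump builds this list from its inode
     * scan; using a path here is just a shortcut for the sketch. */
    struct file_handle *fh = malloc(sizeof(*fh) + MAX_HANDLE_SZ);
    int mount_id;
    fh->handle_bytes = MAX_HANDLE_SZ;
    if (name_to_handle_at(AT_FDCWD, argv[2], fh, &mount_id, 0) < 0) {
        perror("name_to_handle_at");
        return 1;
    }

    /* ... imagine the news spool churning away in the meantime ... */
    sleep(10);

    /* "Pass 2": reopen by handle (needs root / CAP_DAC_READ_SEARCH).
     * If the file was unlinked while we slept, this fails with ESTALE
     * and all we can report is that some inode went away; there is no
     * pathname left to print, which is exactly the xfsdump warning. */
    int mount_fd = open(argv[1], O_RDONLY | O_DIRECTORY);
    if (mount_fd < 0) {
        perror("open mountpoint");
        return 1;
    }
    int fd = open_by_handle_at(mount_fd, fh, O_RDONLY);
    if (fd < 0)
        fprintf(stderr, "could not reopen file: %s\n", strerror(errno));
    else
        printf("reopened the file without using its name (fd %d)\n", fd);
    return 0;
}

Run it as root against a file on the spool, delete the file during the
sleep, and you get the same sort of "cannot open" complaint, with no name
attached.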

You could use 'find / -inum xxx' to find the file after the dump, but of
course by then it would be too late ...  :)   Of course, if you did find the
file afterwards, we'd have a more serious problem!

> On the restore end I suspect you are seeing more fallout from files changing
> underneath you - these were inodes which could be opened by dump, but which
> were in an unlinked state at dump time.
>
> Finally, the amount of space to be used is only an estimate; I do not know
> how accurate it normally is on IRIX, but a factor of 2 looks a bit large.

The size estimate is based on the blocksize multiplied by the number of
blocks used for each file.  The problem here is that there is a huge
number (500,000) of small files, and given that the estimate is off by
about 1k per file, I'd say the difference is just blocksize vs. filesize.
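
Back-of-the-envelope, using Justin's own numbers (and assuming the slack
really is about 1k per file):

    500,000 files x ~1k of blocksize rounding per file   ~=  500M
    1.4G (estimated from blocks) - 860M (data actually dumped)  ~=  540M

So I'd guess the "compression" asked about below isn't compression at all:
the dump holds roughly the real file data, while the estimate (and the
restored filesystem) charges each small file a whole block, which is also
why the space usage goes back up to about 1.4G on restore.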


Ivan


> > My backup consists of two actively updated news spools of the comp.*
> > hierarchy.  They are on the order of 500,000 files.  The backup happens
> > as the spools are being updated, so files can change during the course
> > of the backup.  It seems odd that although only 8-10 inodes could not be
> > backed up, xfsrestore could not restore 305 inodes that ?probably? were
> > okay at xfsdump time.  305 files out of 500,000 is not that much, but it
> > does not seem very tolerable.  If these files are files that disappeared
> > during the backup process, it might be okay.  Can anyone comment on this?
> >
> > Also, if you look at the above xfsdump report, it says that the filesystem
> > was about 1.4G and the resultant backup was 860M.  When I did the restore,
> > it was back to about the correct original 1.4G.  Can anyone comment on why
> > xfsdump is able to get such good compression?
> >
> > Thanks for your help, and thanks for the good filesystem.
> >
> >                             .justin.
> >
> >
> > ------------------------------------------------------------------------
> > Justin Leonard Tripp                                   justin@xxxxxxxxxx
> > Configurable Computing Laboratory Research Assistant      CB 461 x8-7206
> > Electrical and Computer Engineering Department  Brigham Young University
>
>

-- 
Ivan Rayner
ivanr@xxxxxxxxxxxxxxxxx

