Hi Jeremy,
I had a chance to look at this over the weekend and was able to
reproduce the problem immediately. A mod went into 2.2.13 to fix
multiple backups using the TS (SGI) tape driver: a call to the MTBSF
tape ioctl was changed to MTBSFM. I was under the impression that all
tape ioctls followed a standard set of rules and produced the same
behaviour regardless of the OS or tape driver (i.e., MTBSFM would
always position the tape to the left side of the filemark). This is
incorrect. I just found out that TS and ST differ in the following
tape ioctls:
TS: MTBSF  - back space file, position after FM
    MTBSFM - back space file, position before FM
ST: MTBSF  - back space file, position before FM
    MTBSFM - back space file, position after FM
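In code terms, the change boils down to which mt_op value gets handed
to the MTIOCTOP ioctl. Here is a minimal sketch (not the actual
xfsdump source; the helper name is mine):

    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/mtio.h>

    /* Back space one filemark on an already-open tape fd.
     * Before 2.2.13 xfsdump issued MTBSF here; the fix switched
     * it to MTBSFM -- which, per the table above, lands on
     * opposite sides of the FM depending on whether the TS or
     * ST driver is underneath. */
    static int backspace_file(int fd)
    {
        struct mtop op = { .mt_op = MTBSFM, .mt_count = 1 };

        if (ioctl(fd, MTIOCTOP, &op) < 0) {
            perror("MTIOCTOP");
            return -1;
        }
        return 0;
    }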
These positioning differences are very significant to xfsdump. When the
call to MTBSF got changed to MTBSFM in 2.2.13, it fixed multiple backups
for TS devices but actually broke them for ST tape devices!! This will
be fixed in 2.2.15, available in the next few days.
Mandy
--
Mandy Kirkconnell
SGI, Storage Software Engineer
alkirkco@xxxxxxx
Jeremy Jackson wrote:
Hi Mandy,
I'm a bit confused. The problem occurs on multiple systems when using
xfsdump 2.2.14 on a blank tape. The first fs dumps ok, but the rest
just keep overwriting the second session.
The *older* versions seem to work; in fact I have switched 7 or so
systems back to 2.2.11, since it's the only way I can do backups.
Sorry if my post was a bit confusing... I made a second post with some
newer information. It looks to me like 2.2.12 was a regression, not a fix.
Regards,
Jeremy Jackson
Mandy Kirkconnell wrote:
Jeremy Jackson wrote:
I have done dumps of the filesystems of 3 machines to a common tape
drive, an Exabyte EXB-8500. One machine contains the tape drive; the
other two use rmt. Five tapes were filled (5GB each), with a sixth
only partly used.

Next I wanted to test overwrite handling, as I was having problems
with an earlier version of xfsdump. I started a dump of a 75GB
filesystem without using the -o flag, beginning with the partly used
sixth tape. The previous dumps had apparently left a stream terminator
at media file 12, and this is where the dump began.
After the tape became full, a tape change was requested. I either felt
like testing xfsdump, or I wasn't thinking clearly, so I accepted the
tape change request without changing the tape. As I watched "xfsdump:
preparing drive", the tape was rewound and media files were examined.
I was shocked to see it continue dumping at media file 12 *again*. It
found a stream terminator, and that was all it was waiting for!
I noticed a bug regarding handling of multiple backups to a single
tape was fixed in 2.2.13, but I'm not sure what the exact problem was.
The earlier dumps on the tapes were made with the older version,
2.2.11-1, and the *dump in question* was made with 2.2.14-1.
As you suspected, this is caused by the problem that got fixed in
2.2.13. It's a nasty one! Any tape backups written with 2.2.12 or
earlier are not reliable when multiple dump sessions are involved.
The problem was introduced when xfsdump was ported from IRIX to Linux.
On IRIX, the TS tape driver sets EOF whether it is to the left OR
right of a tape mark. On Linux, the ST tape driver ONLY sets EOF when
positioned to the right of a tape mark. On IRIX, the MTBSF,1
(backward space 1 filemark) ioctl moves backward along the tape until
it finds a filemark, then positions the tape to the RIGHT of (after)
the filemark. On Linux, the MTBSF,1 ioctl moves back one filemark and
positions the tape to the LEFT of (before) the filemark. These
differences are very important!!
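A quick way to see which semantics your driver implements is to issue
the backspace and then read the position back with MTIOCGET. This is
just a diagnostic sketch for a Linux ST device (the device path is an
example; adjust for your setup):

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/mtio.h>

    int main(void)
    {
        int fd = open("/dev/nst0", O_RDONLY); /* example non-rewinding device */
        struct mtop op = { .mt_op = MTBSF, .mt_count = 1 };
        struct mtget st;

        if (fd < 0 || ioctl(fd, MTIOCTOP, &op) < 0) {
            perror("MTBSF");
            return 1;
        }
        if (ioctl(fd, MTIOCGET, &st) == 0) {
            /* On ST, MTBSF leaves the tape to the LEFT of the
             * filemark, so the EOF bit is clear; the IRIX TS
             * driver would report EOF on either side of the mark. */
            printf("fileno=%ld blkno=%ld eof=%d\n",
                   (long)st.mt_fileno, (long)st.mt_blkno,
                   GMT_EOF(st.mt_gstat) ? 1 : 0);
        }
        close(fd);
        return 0;
    }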
These filemark positioning differences were causing the tape to be
rewound before each dump, so the next dump would begin at exactly the
same start position as the last one. As a result, each new dump would
always overwrite the previous dump, so a tape would only ever contain
one full (most recent) dump session at a time!!
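If you want to check whether an existing tape actually holds more than
the most recent session, a table-of-contents pass with xfsrestore
(something like xfsrestore -t -f /dev/nst0, adjusting the device path
for your setup) will list what survived without restoring anything.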
Please use 2.2.13 or later for any new backups.
Thanks,
Mandy