[PATCH 19/20] xfs: run xfs_repair at the end of each test
Dave Chinner
david at fromorbit.com
Wed Jul 6 18:13:40 CDT 2016
On Mon, Jul 04, 2016 at 09:11:34PM -0700, Darrick J. Wong wrote:
> On Tue, Jul 05, 2016 at 11:56:17AM +0800, Eryu Guan wrote:
> > On Thu, Jun 16, 2016 at 06:48:01PM -0700, Darrick J. Wong wrote:
> > > Run xfs_repair twice at the end of each test -- once to rebuild
> > > the btree indices, and again with -n to check the rebuild work.
> > >
> > > Signed-off-by: Darrick J. Wong <darrick.wong at oracle.com>
> > > ---
> > > common/rc | 3 +++
> > > 1 file changed, 3 insertions(+)
> > >
> > >
> > > diff --git a/common/rc b/common/rc
> > > index 1225047..847191e 100644
> > > --- a/common/rc
> > > +++ b/common/rc
> > > @@ -2225,6 +2225,9 @@ _check_xfs_filesystem()
> > > ok=0
> > > fi
> > >
> > > + $XFS_REPAIR_PROG $extra_options $extra_log_options $extra_rt_options $device >$tmp.repair 2>&1
> > > + cat $tmp.repair | _fix_malloc >>$seqres.full
> > > +
> >
> > Won't this hide fs corruptions? Did I miss anything?
>
> I could've sworn it did:
>
> xfs_repair -n
> (complain if corrupt)
>
> xfs_repair
>
> xfs_repair -n
> (complain if still corrupt)
>
> But that first xfs_repair -n hunk disappeared. :(
>
> Ok, will fix and resend.
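[Editor's sketch] For reference, the corrected sequence Darrick describes might look roughly like this inside _check_xfs_filesystem(), reusing the variables from the hunk above; the error messages and failure handling here are assumptions for illustration, not part of the posted patch:

	# (sketch, not the posted patch) first pass: complain if already corrupt
	$XFS_REPAIR_PROG -n $extra_options $extra_log_options $extra_rt_options $device >$tmp.repair 2>&1
	if [ $? -ne 0 ]; then
		echo "_check_xfs_filesystem: filesystem on $device is inconsistent (see $seqres.full)"
		cat $tmp.repair | _fix_malloc >>$seqres.full
		ok=0
	fi

	# second pass: rebuild the btree indices
	$XFS_REPAIR_PROG $extra_options $extra_log_options $extra_rt_options $device >$tmp.repair 2>&1
	cat $tmp.repair | _fix_malloc >>$seqres.full

	# third pass: complain if the rebuild left anything corrupt
	$XFS_REPAIR_PROG -n $extra_options $extra_log_options $extra_rt_options $device >$tmp.repair 2>&1
	if [ $? -ne 0 ]; then
		echo "_check_xfs_filesystem: xfs_repair did not fix $device (see $seqres.full)"
		cat $tmp.repair | _fix_malloc >>$seqres.full
		ok=0
	fi
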
Not sure this is the best idea - when a single repair pass on an aged
test device takes 10s, three passes mean the test harness overhead
increases by a factor of 3. i.e. a test that takes 1s to run now
spends 30s checking the filesystem afterwards. This will badly blow
out the run time of the test suite on aged test devices....
What does this overhead actually gain us that we couldn't encode
explicitly into a single test or two? e.g. the test itself runs
repair on the aged test device....
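[Editor's sketch] A dedicated test along those lines might look roughly like this; it is only a sketch, the fsstress run stands in for whatever aging workload the test actually wants, and the usual test boilerplate is omitted:

	# (sketch only) age the scratch fs, then exercise the repair rebuild path once
	_scratch_mkfs_xfs >> $seqres.full 2>&1
	_scratch_mount

	# placeholder aging workload
	$FSSTRESS_PROG -d $SCRATCH_MNT -n 10000 -p 4 >> $seqres.full 2>&1
	_scratch_unmount

	# rebuild the btree indices, then verify the result with a read-only pass
	$XFS_REPAIR_PROG $SCRATCH_DEV >> $seqres.full 2>&1 || \
		_fail "xfs_repair failed"
	$XFS_REPAIR_PROG -n $SCRATCH_DEV >> $seqres.full 2>&1 || \
		_fail "xfs_repair left the filesystem inconsistent"

	echo "Silence is golden"
	status=0
	exit
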
Cheers,
Dave.
--
Dave Chinner
david at fromorbit.com