Re: [PATCH] 260: Add another corner case where length is zero

To: Lukáš Czerner <lczerner@xxxxxxxxxx>
Subject: Re: [PATCH] 260: Add another corner case where length is zero
From: Dave Chinner <david@xxxxxxxxxxxxx>
Date: Wed, 10 Oct 2012 08:39:13 +1100
Cc: xfs@xxxxxxxxxxx
In-reply-to: <alpine.LFD.2.00.1210092320480.2326@(none)>
References: <1349785012-28588-1-git-send-email-lczerner@xxxxxxxxxx> <20121009194019.GJ23644@dastard> <alpine.LFD.2.00.1210092320480.2326@(none)>
User-agent: Mutt/1.5.21 (2010-09-15)
On Tue, Oct 09, 2012 at 11:23:13PM +0200, Lukáš Czerner wrote:
> On Wed, 10 Oct 2012, Dave Chinner wrote:
> > Date: Wed, 10 Oct 2012 06:40:19 +1100
> > From: Dave Chinner <david@xxxxxxxxxxxxx>
> > To: Lukas Czerner <lczerner@xxxxxxxxxx>
> > Cc: xfs@xxxxxxxxxxx
> > Subject: Re: [PATCH] 260: Add another corner case where length is zero
> > 
> > On Tue, Oct 09, 2012 at 02:16:52PM +0200, Lukas Czerner wrote:
> > > This commit adds another corner case to test FITRIM argument handling.
> > > In this case we set the length to zero and expect the number of
> > > discarded bytes to obviously be zero; however, we've had a bug in both
> > > ext4 and xfs where an internal variable would underflow. This test case
> > > will be able to catch that in the future.
> > > 
> > > Signed-off-by: Lukas Czerner <lczerner@xxxxxxxxxx>
> > 
> > I'd create another test for this, rather than making 260 suddenly
> > fail for everyone....
> Hmm, I am not sure what the point is. I created 260 exactly for
> this reason, to test FITRIM argument handling, and it already
> contains a number of tests like this one. I am not strongly against
> having this in a separate test, however it seems rather unnecessary
> to me.

It's a regression test - it's only supposed to start failing when
the kernel functionality is broken.  That is, someone who is
tracking failures over time will suddenly see a new failure in 260
and wonder what kernel code broke, when in fact nothing was changed
in the kernel code. IOWs, changing the test invalidates all past
history of running the test, and that in turn breaks historic
regression tracking metrics...

This is why we have historically avoided changing existing tests and
instead wrote new tests, no matter how similar the functionality
between the old and new tests is.


Dave Chinner
