
Snapshot regression test [WAS: re[6]: Summary - Snapshot Effort]

To: Nathan Scott <nathans@xxxxxxx>
Subject: Snapshot regression test [WAS: re[6]: Summary - Snapshot Effort]
From: Greg Freemyer <freemyer@xxxxxxxxxxxxxxxxx>
Date: Tue, 27 Aug 2002 15:04:35 -0400
Cc: <linux-xfs@xxxxxxxxxxx>
Organization: The NorcrossGroup
Sender: linux-xfs-bounce@xxxxxxxxxxx

I have the very basics of a new test almost working.  I'm assuming it will be test 068.

One big problem: I start a background infinite loop of dd's to generate some 
I/O load:

while true; do
    dd if=/scratch/dummy of=/scratch/junk bs=64k >/dev/null 2>&1
    rm /scratch/junk
    sync
done >/dev/null 2>&1 &

At the end of the script I kill this off.  Unfortunately, when it dies, the 
parent shell notifies the user with output like:

> 068: line 127:  3956 Killed                  while true; do
>     dd if=/scratch/dummy of=/scratch/junk bs=64k >/dev/null 2>&1; rm 
> /scratch/junk; sync;
> done >/dev/null 2>&1

in my 068.out file.  Since this output contains PIDs, the comparison always 
fails.

Is there a way to mark that output as not to be compared, or do you know a 
shell trick to avoid generating it in the first place?

As you can see above, I tried redirecting the subshell's stdout and stderr to 
/dev/null, but that did not help.
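For what it's worth, one trick I've seen (an assumption on my part, not something from the test harness) is to reap the killed loop with `wait` and redirect the stderr of the `wait` itself; bash prints the "Killed"/"Terminated" job notice at the point where the job is reaped, so silencing the loop's own stderr is not enough:

```shell
#!/bin/bash
# Sketch: kill a background loop without the shell's job-status notice.
# The sleep loop here is a stand-in for the dd load loop in the test.
while true; do
    sleep 1
done >/dev/null 2>&1 &
loop_pid=$!

kill "$loop_pid" 2>/dev/null
wait "$loop_pid" 2>/dev/null   # the shell reports the job's demise here; silence it

echo "loop stopped"
```

Another option is to `disown` the job before killing it, which removes it from the shell's job table so there is nothing to report on.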

Greg Freemyer
Internet Engineer
Deployment and Integration Specialist
Compaq ASE - Tru64 v4, v5
Compaq Master ASE - SAN Architect
The Norcross Group
 >>  hi,

 >>  On Fri, Aug 23, 2002 at 11:56:59AM -0500, Steve Lord wrote:
 >>  > On Fri, 2002-08-23 at 11:28, Greg Freemyer wrote:
 >>  > > I will download the whole set and first verify the test system runs on
 >>  my system, then try to put together a new test for snapshots.
 >>  > > 
 >>  > > The readme talks about the user providing 2 partitions, one with xfs
 >>  on it, and one scratch.
 >>  > > 
 >>  > > I assume I should use the scratch partition to build a lvm structure
 >>  on, then format it with xfs.

 >>  Yes, sounds like the right approach.

 >>  > > One high level question, at the start of the test I assume I should
 >>  check for the correct installation of LVM and error out if it is not
 >>  available.

 >>  See the _notrun shell function and how some other tests use it.

 >>  > > Is that a reasonable behavior?
 >>  > 
 >>  > Yep, as you can see there are several tests which will skip execution if
 >>  > certain features are not available. Probably testing the kernel for lvm
 >>  > support (after attempting to load the module) would be a good thing.
 >>  > 
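The availability check Steve describes might look something like the sketch below; `_notrun` is the harness helper mentioned above, stubbed here so the snippet runs standalone, and the `modprobe`/`vgscan` probing is my assumption about how to detect LVM, not taken from the harness:

```shell
#!/bin/bash
# Hypothetical sketch of an LVM availability check for the start of a test.
# _notrun is stubbed so this is self-contained; the real harness provides it.
_notrun() { echo "$* [not run]"; exit 0; }

# Attempt to load the LVM module first, then probe for working LVM tools.
modprobe lvm-mod >/dev/null 2>&1
if ! vgscan >/dev/null 2>&1; then
    _notrun "LVM support not available"
fi
```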
 >>  > > 
 >>  > > Even higher level, as I have questions, should I ask them on the list,
 >>  or is there a QA person there I should e-mail directly.
 >>  > > 
 >>  > 
 >>  > Well, you can ask on the list, or ask Nathan Scott (nathans@xxxxxxx), he

 >>  Fire away if need be Greg & I'll try to help.

 >>  Of the current tests (which can be set up to run every night using
 >>  the top of tree code with the "auto-qa" script), test 064 seems to
 >>  have a timing problem which no one has had a chance to investigate
 >>  yet (looks like an issue with the test), so you can expect that one
 >>  to fail.  And I also think Steve sees 021 fail on his box - I don't
 >>  have that failure though, and I suspect a problem in the test or
 >>  the sed/awk/... shell tools from that particular distribution.

 >>  Have fun.

 >>  cheers.

 >>  -- 
 >>  Nathan
