>> hi Greg,
>> On Wed, Aug 28, 2002 at 07:20:49PM -0400, Greg Freemyer wrote:
>> >
>> > Nathan,
>> >
>> > I think I have a working snapshot regression test.
>> Cool. Should have mentioned this before - have you tried
>> running this via the "check" script?, ie.
>> # cd cmd/xfstests
>> # ./check 068
Yes, I have been testing it exclusively via check. I did have to add 068 to
the "group" file.
>> this is how it would run as part of auto-qa - will need an
>> output file which has the expected output from a "passing"
>> run (ie. 068.out) like the other tests.
My current 068.out file is:
>>>>
QA output created by 068
lvm-mod 58112 9 (autoclean)
mkdir: cannot create directory `//scratch': File exists
1000+0 records in
1000+0 records out
SUCCESS, COMPLETED ALL ITERATIONS WITH NO TIME OUTS!!!!!!!!!!!!
Cleanup beginning
umount: /scratch_snap: not mounted
lvremove -- logical volume "/dev/TruStore-Data/scratch_snap" doesn't exist
lvremove -- doing automatic backup of volume group "TruStore-Data"
lvremove -- logical volume "/dev/TruStore-Data/scratch" successfully removed
<<<<
I guess I should be sending even more of the above to /dev/null, to eliminate
the chance of an LVM wording change causing a failure?
I will send a follow-up e-mail with a draft script that gets the output down to
just a couple of messages on a successful run.
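Something like this is what I have in mind (a sketch only; the $seq.full log
name is just a suggestion):
====
# send the wordy dd/LVM output to a log instead of stdout, so a change in
# LVM's message wording cannot break the diff against 068.out
dd if=/dev/zero of=$SCRATCH_MNT/dummy bs=64k count=1000 >>$seq.full 2>&1
lvremove -f $VG/scratch_snap >>$seq.full 2>&1
lvremove -f $VG/scratch >>$seq.full 2>&1
====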
>> Some other suggestions for your script follow...
>> > owner=freemyer@xxxxxxxxxxxxxxxxx
>> >
>> > seq=`basename $0`
>> > echo "QA output created by $seq"
>> >
>> > DELAY_BETWEEN_ITERATIONS=20
>> > ITERATIONS=30
>> > VG=/dev/VGscratch
>> > #SCRATCH_DEV=/dev/xxxx # Only needed if running by hand
>> > #SCRATCH_MNT=/scratch # Only needed if running by hand
>> These would be setup by "check" if running by hand, which
>> uses the common.config (I think) file.
I have them commented out because check sets them, but I left them in as a
reminder: I also cross-posted this to the LVM list, and I thought those readers
could just uncomment them and run the test without having check available.
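For anyone running it by hand, something like the following would make the
hand-set values act only as fallbacks when check has not already exported them
(the /dev/xxxx placeholder still has to be filled in for the local machine):
====
# only takes effect when running by hand, outside of check
SCRATCH_DEV=${SCRATCH_DEV:-/dev/xxxx}   # fill in a real device here
SCRATCH_MNT=${SCRATCH_MNT:-/scratch}
====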
>> > umount /scratch_snap
>> > rmdir /scratch_snap
>> Might be better to do this in /tmp/scratch_snap?
Will do, but I used /tmp/$$.scratch_snap in line with the existing $tmp
variable.
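Roughly (SNAP_MNT is just a name I picked for the sketch):
====
# snapshot mount point under the per-run temporary prefix
SNAP_MNT=$tmp.scratch_snap      # e.g. /tmp/1234.scratch_snap
mkdir -p $SNAP_MNT
# ...and at cleanup time:
umount $SNAP_MNT > /dev/null 2>&1
rm -rf $SNAP_MNT
====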
>> > umount $SCRATCH_MNT
>> >
>> > lvremove -f $VG/scratch_snap
>> > lvremove -f $VG/scratch
>> >
>> > if [ -e /scratch_snap ]; then _notrun "This test requires that /scratch_snap not exist."; fi
>> Could then rmdir it here instead of this step.
Now that it is in /tmp, I don't have a problem with rm -rf on it. I did not
want to do that in the original location.
>> > mkdir /scratch_snap
>> >
>> > #Verify we have the lvm user tools
>> > LVM=`rpm -qa | grep 'lvm-'`
>> Thats not so good - makes it dependent on rpm - better to do
>> something like:
>> [ -x /sbin/lvcreate ] || _notrun "LVM lvcreate utility is not installed"
>> [ -x /sbin/lvremove ] || _notrun "LVM lvremove utility is not installed"
That assumes a particular install location, but if it works for you, it's fine with me.
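One alternative that avoids assuming the install location would be to let the
shell search $PATH for the tools, roughly:
====
# no hard-coded /sbin path; just check that the tools can be found at all
for cmd in lvcreate lvremove
do
    type $cmd > /dev/null 2>&1 || _notrun "LVM $cmd utility is not installed"
done
====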
>> > #Verify we have an lvm enabled kernel
>> > # TODO (This assumes lvm is a module. What if it is linked? I don't know how to check that.)
>> > lsmod | grep lvm-mod;
>> > if [ $? != 0 ]; then _notrun "This test requires the LVM kernel module be present"; fi
>> A better approach here would be to grep for lvm in /proc/devices,
>> this would work for module/non-module builds.
I will ask on the LVM list for a mechanism that works with both LVM 1 and LVM 2.
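In the meantime, the /proc/devices check would look roughly like this; LVM 1
registers there as "lvm", and I believe LVM 2 shows up as "device-mapper",
which is part of what I want to confirm:
====
# works whether LVM is a module or built in; the device-mapper match for
# LVM 2 is my assumption until the LVM list confirms it
grep -iq 'lvm\|device-mapper' /proc/devices \
    || _notrun "This test requires LVM support in the kernel"
====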
>> > # Mount the LV
>> > mkdir /$SCRATCH_MNT > /dev/null 2>&1
>> > mount $VG/scratch /$SCRATCH_MNT
>> need the leading '/' here?
Gone
>> > while [ -f $tmp.running ]
>> > do
>> > dd if=$SCRATCH_MNT/dummy of=$SCRATCH_MNT/junk bs=64k > /dev/null 2>&1
>> > rm $SCRATCH_MNT/junk # This forces metadata updates the next time around
>> > sync
>> > done &
>> This loop still looks strange to me... (the dd is forever, so why rm
>> and sync?)... but possibly its meant to be so.
I'm afraid I don't understand your confusion. The full code snippet is:
====
# Create a large 64 Meg zero filled file on the LV
dd if=/dev/zero of=$SCRATCH_MNT/dummy bs=64k count=1000
#setup an infinite loop to copy the large file, thus generating heavy i/o
touch $tmp.running
while [ -f $tmp.running ]
do
dd if=$SCRATCH_MNT/dummy of=$SCRATCH_MNT/junk bs=64k > /dev/null 2>&1
rm $SCRATCH_MNT/junk # This forces metadata updates the next time around
sync
done &
===
The first dd uses a count argument to create a 64 Meg file called dummy.
Inside the while loop, I use dd to copy this file to junk. After 64 Megs it
reaches EOF and terminates. Then I rm junk and do it again.
I have used ps to verify that the pid of the dd command changes every few
seconds, so I'm confident that it is working as designed.
FYI: Apparently the read cache is 64 Megs or bigger on my test machine, because
with "iostat -x -d 10" I don't see any read activity during the test. Since
this means the write load is as large as I can make it, I consider that a good
thing.
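For completeness, here is roughly how the loop gets stopped once all iterations
finish. The background job simply falls out of its while test when the flag
file goes away:
====
# teardown sketch: remove the flag file so the copy loop exits, then reap it
rm -f $tmp.running
wait
====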
>> cheers.
>> --
>> Nathan
Greg Freemyer
Internet Engineer
Deployment and Integration Specialist
Compaq ASE - Tru64 v4, v5
Compaq Master ASE - SAN Architect
The Norcross Group
www.NorcrossGroup.com