
Re: [PATCH v2 2/2] xfstests: introduce 280 for SEEK_DATA/SEEK_HOLE copy check

To: Dave Chinner <david@xxxxxxxxxxxxx>
Subject: Re: [PATCH v2 2/2] xfstests: introduce 280 for SEEK_DATA/SEEK_HOLE copy check
From: Jeff Liu <jeff.liu@xxxxxxxxxx>
Date: Wed, 08 Feb 2012 22:06:27 +0800
Cc: Christoph Hellwig <hch@xxxxxxxxxxxxx>, Mark Tinguely <tinguely@xxxxxxx>, xfs@xxxxxxxxxxx
In-reply-to: <20120208085557.GI20305@dastard>
Organization: Oracle
References: <4F2FE410.2040508@xxxxxxxxxx> <20120208085557.GI20305@dastard>
Reply-to: jeff.liu@xxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.18) Gecko/20110617 Thunderbird/3.1.11
On 02/08/2012 04:55 PM, Dave Chinner wrote:

> On Mon, Feb 06, 2012 at 10:30:40PM +0800, Jeff Liu wrote:
>> Introduce 280 for SEEK_DATA/SEEK_HOLE copy check.
>>
>> Signed-off-by: Jie Liu <jeff.liu@xxxxxxxxxx>
> 
> This has the same problems with $seq.out as 279, so I won't repeat
> them here.
> 
> .....
>> +_cleanup()
>> +{
>> +    rm -f $src $dest
>> +}
>> +
>> +# seek_copy_test_01()
>> +# create a 100Mbytes file in preallocation mode.
>> +# fallocate offset starts from 0.
>> +# the first data extent offset starts from 80991, write 4Kbytes,
>> +# and then skip 195001 bytes for the next write.
> 
> Oh, man, you didn't write a program to do this, do you?

Unfortunately, I have already included file creation in seek_copy_tester :(

> This is what
> xfs_io is for - to create arbitrary file configurations as quickly as
> you can type them.  Then all you need is a simple program that
> copies the extents, and the test can check everything else.

Yes, xfs_io is pretty cool, and it is really convenient for creating files
on XFS.
I wrote the file creation (create_data_and_holes()) into seek_copy_tester
since I'd like it to be a general SEEK_DATA/SEEK_HOLE tester that also works
on other file systems without this utility.

> 
>> +# this is intended to test data buffer lookup for DIRTY pages.
>> +# verify results:
>> +# 1. file size is identical.
>> +# 2. perform cmp(1) to compare SRC and DEST file byte by byte.
>> +test01()
>> +{
>> +    rm -f $src $dest
>> +
>> +    $here/src/seek_copy_tester -P -O 0 -L 100m -s 80991 -k 195001 -l 4k $src $dest
>> +
>> +    test $(stat --printf "%s" $src) = $(stat --printf "%s" $dest) ||
>> +            echo "TEST01: file size check failed" >> $seq.out
>> +
>> +    cmp $src $dest                                                ||
>> +            echo "TEST01: file bytes check failed" >> $seq.out
> 
> A quick hack (untested) to replace this file creation with xfs_io
> would be:
> 
> test01()
> {
>       write_cmd="-c \"truncate 0\" -c \"falloc 0 100m\""
>       for i in `seq 0 1 100`; do
>               offset=$((80991 + $i * 195001))
>               write_cmd="$write_cmd -c \"pwrite $offset 4k\""
>       done
>       xfs_io -F -f $write_cmd $src
> 
>       $here/src/sparse_cp $src $dst
>       stat --printf "%s\n" $src $dst
>       cmp $src $dst >> $seq.out || _fail "file bytes check failed"
> }

Thanks for this detailed info :).
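
Just to sketch what I have in mind for the copy side (this is only an
illustration of the SEEK_DATA/SEEK_HOLE loop, not the actual sparse_cp or
seek_copy_tester code, and error handling is trimmed):

#define _GNU_SOURCE             /* for SEEK_DATA/SEEK_HOLE */
#include <errno.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>

static int sparse_copy(int src_fd, int dest_fd)
{
        struct stat st;
        off_t data, hole, off = 0;
        char buf[65536];

        if (fstat(src_fd, &st) < 0)
                return -1;

        for (;;) {
                data = lseek(src_fd, off, SEEK_DATA);
                if (data < 0)
                        break;          /* ENXIO: no more data extents */
                hole = lseek(src_fd, data, SEEK_HOLE);
                if (hole < 0)
                        return -1;

                /* copy [data, hole) to the same offset in the destination */
                while (data < hole) {
                        size_t len = (hole - data) < (off_t)sizeof(buf) ?
                                     (size_t)(hole - data) : sizeof(buf);
                        ssize_t n = pread(src_fd, buf, len, data);

                        if (n <= 0)
                                return -1;
                        if (pwrite(dest_fd, buf, n, data) != n)
                                return -1;
                        data += n;
                }
                off = hole;
        }
        if (errno != ENXIO)
                return -1;

        /* keep the destination size identical, so a hole at EOF stays a hole */
        return ftruncate(dest_fd, st.st_size);
}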

> 
> 
>> +}
>> +
>> +# seek_copy_test_02()
>> +# create a 100Mbytes file in preallocation mode.
>> +# fallocate offset starts from 0.
>> +# the first data extent offset starts from 0, write 16Kbytes,
>> +# and then skip 8Mbytes for the next write.
>> +# Try flushing DIRTY pages to WRITEBACK mode; this is intended to
>> +# test data buffer lookup in WRITEBACK pages.
> 
> There's no guarantee that the seeks will occur while the pages
> are in writeback. It's entirely dependent on IO latency -
> writing 16k of data to a disk cache will take less time than it
> takes to go back up into userspace and start the sparse copy.
> Indeed, I suspect that the 16x16k IOs that this test does will fit
> all into that category even on basic SATA configs....
> 
> Also, you could use the fadvise command in xfs_io to do this, as
> POSIX_FADV_DONTNEED will trigger async writeback - it will then skip
> invalidation of pages under writeback so they will remain in the
> cache. i.e. '-c "fadvise -d 0 100m"'
> 
> Ideally, we should add all the different sync methods to an xfs_io
> command...

Thanks again for the detailed info.
Whether the test covers those page state transitions definitely depends on
the IO latency.
I have verified the old patch with a page probe routine on my laptop's SATA
disk controller, but not against other, faster controllers.  If we agree to
make it a general tester, maybe I can implement it by referring to xfs_io's
fadvise; I guess it uses posix_fadvise(2), I will check that later.
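
Something along these lines is what I have in mind for the tester (just a
sketch; the helper name is made up, and it would be called on the source
file right after the writes):

/*
 * Sketch only: start async writeback on the just-written range so the
 * following SEEK_DATA/SEEK_HOLE pass is more likely to see pages under
 * WRITEBACK.  Mirrors xfs_io's "fadvise -d"; as you noted, there is no
 * guarantee the pages are still under writeback by the time the copy runs.
 */
#include <sys/types.h>
#include <fcntl.h>

static int start_async_writeback(int fd, off_t offset, off_t len)
{
        /* posix_fadvise(2) returns 0 on success or a positive errno value */
        return posix_fadvise(fd, offset, len, POSIX_FADV_DONTNEED);
}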

> 
>> +# the first data extent offset starts from 512, write 4Kbytes,
>> +# and then skip 1Mbytes for the next write.
>> +# don't make holes at the end of the file.
> 
> I'm not sure what this means - you always write zeros at the end of
> file, and the only difference is that "make holes at EOF" does an
> ftruncate to the total size before writing zeros up to it. It
> appears to me like you end up with the same file size and shape
> either way....

Oops! This is a code bug.  I want to create a hole at EOF when the
"-E" (wrote_hole_at_eof) option is specified.
It can be fixed as in the snippet below (see the FIXME):

if (off < nr_total_bytes) {
        if (wrote_hole_at_eof) {
                ret = ftruncate(fd, nr_total_bytes);
                if (ret < 0) {
                        error("truncate source file to %zu bytes failed as %s",
                              nr_total_bytes, strerror(errno));
                }
                goto out;       /* FIXME: break here instead */
        }

        ret = write_zeros(fd, nr_total_bytes - off);
        if (ret < 0) {
                error("write_zeros to end of file failed as %s",
                      strerror(errno));
        }
}
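
And just to illustrate (not part of the patch), the test side could then
confirm that -E really left a hole at EOF with a hypothetical check like
this, where last_data_end is the end offset of the last data write:

#define _GNU_SOURCE             /* for SEEK_HOLE */
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>

/*
 * Sketch: with -E the file is extended by ftruncate() and no trailing
 * zeros are written, so seeking for a hole from the end of the last data
 * write should land before st_size rather than at it.
 */
static int has_hole_at_eof(int fd, off_t last_data_end)
{
        struct stat st;
        off_t hole;

        if (fstat(fd, &st) < 0)
                return -1;
        hole = lseek(fd, last_data_end, SEEK_HOLE);
        if (hole < 0)
                return -1;
        return hole < st.st_size;       /* 1 if a real hole precedes EOF */
}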

> 
>> --- /dev/null
>> +++ b/280.out
>> @@ -0,0 +1 @@
>> +QA output created by 280
> 
> Normally we echo "silence is golden" to the output file in cases
> like this where there is no real output, to indicate that the empty
> output file is intentional.

Ok.

Thanks,
-Jeff

> 
> Cheers,
> 
> Dave.

