
Re: xfstests: 226: have xfs_io use bigger buffers

To: Alex Elder <aelder@xxxxxxx>
Subject: Re: xfstests: 226: have xfs_io use bigger buffers
From: Eric Sandeen <sandeen@xxxxxxxxxxx>
Date: Wed, 19 May 2010 23:39:52 -0500
Cc: xfs@xxxxxxxxxxx
In-reply-to: <201005192244.o4JMiEPY014864@xxxxxxxxxxxxxxxxxxxxxx>
References: <201005192244.o4JMiEPY014864@xxxxxxxxxxxxxxxxxxxxxx>
User-agent: Thunderbird 2.0.0.24 (Macintosh/20100228)
Alex Elder wrote:
> By default xfs_io uses a buffer size of 4096 bytes.  On test 226,
> the result is that the test runs much slower (at least an order
> of magnitude) than it needs to.
> 
> Add a flag to the "pwrite" command sent to xfs_io so it uses
> larger buffers, thereby speeding things up considerably.
> 
> Signed-off-by: Alex Elder <aelder@xxxxxxx>

Reviewed-by: Eric Sandeen <sandeen@xxxxxxxxxxx>

> 
> ---
>  226 |    9 +++++++--
>  1 file changed, 7 insertions(+), 2 deletions(-)
> 
> Index: b/226
> ===================================================================
> --- a/226
> +++ b/226
> @@ -49,10 +49,14 @@ _scratch_mount
>  
>  loops=16
>  
> +# Buffer size argument supplied to xfs_io "pwrite" command
> +buffer="-b $(expr 512 \* 1024)"
> +
>  echo "--> $loops buffered 64m writes in a loop"
>  for I in `seq 1 $loops`; do
>       echo -n "$I "
> -     xfs_io -F -f -c 'pwrite 0 64m' $SCRATCH_MNT/test >> $seq.full
> +     xfs_io -F -f \
> +             -c "pwrite ${buffer} 0 64m" $SCRATCH_MNT/test >> $seq.full
>       rm -f $SCRATCH_MNT/test
>  done
>  
> @@ -63,7 +67,8 @@ _scratch_mount
>  echo "--> $loops direct 64m writes in a loop"
>  for I in `seq 1 $loops`; do
>       echo -n "$I "
> -     xfs_io -F -f -d -c 'pwrite 0 64m' $SCRATCH_MNT/test >> $seq.full
> +     xfs_io -F -f -d \
> +             -c "pwrite ${buffer} 0 64m" $SCRATCH_MNT/test >> $seq.full
>       rm -f $SCRATCH_MNT/test 
>  done
>  
> 
> _______________________________________________
> xfs mailing list
> xfs@xxxxxxxxxxx
> http://oss.sgi.com/mailman/listinfo/xfs
> 
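The speedup in the patch comes from cutting the number of pwrite calls xfs_io issues per 64m write. A rough back-of-the-envelope sketch (not part of the patch, just illustrating the arithmetic behind the default 4096-byte buffer versus the 512 KiB one the patch passes via `-b`):

```shell
#!/bin/sh
# Compare how many pwrite calls xfs_io needs to cover a 64m file
# with its default 4096-byte buffer vs. the patch's 512 KiB buffer.
total=$(expr 64 \* 1024 \* 1024)   # 64m, as used by the test
small=4096                         # xfs_io's default buffer size
big=$(expr 512 \* 1024)            # buffer size the patch supplies via -b

echo "default: $(expr $total / $small) calls"   # 16384 calls
echo "patched: $(expr $total / $big) calls"     # 128 calls
```

With the test writing 64m sixteen times (buffered and again direct), trimming each pass from 16384 syscalls to 128 plausibly accounts for the order-of-magnitude runtime difference mentioned above.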
