On Tue, Feb 03, 2015 at 07:35:33PM +0100, Christoph Hellwig wrote:
> On Mon, Feb 02, 2015 at 02:48:26PM -0500, J. Bruce Fields wrote:
> > Previously: http://lkml.kernel.org/r/20150106175611.GA16413@xxxxxx
> >
> > > - any advice on testing? Is there some simple
> > > virtual setup that would allow any loser with no special
> > > hardware (e.g., me) to check whether they've broken the
> > > block server?
> >
> > Run two KVM VMs that share the same disk. Create an XFS
> > filesystem on the MDS, and export it. If the client has blkmapd
> > running (on Debian it needs to be started manually) it will use
> > pNFS for accessing the filesystem. Verify that using the
> > per-operation counters in /proc/self/mountstats. Repeat with
> > additional clients as necessary.
> >
> > Alternatively set up a simple iSCSI target using tgt or lio and
> > connect to it from multiple clients.
> >
> > Which sounds reasonable to me, but I haven't tried to incorporate this
> > into my regression testing yet.
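
For anyone else trying this, spelled out it's roughly the following; the
device name, paths, and the pnfs export option here are my guesses
rather than a tested recipe:

----- snip -----
# on the MDS, which sees the shared disk as (say) /dev/vdb:
mkfs.xfs /dev/vdb
mkdir -p /exports/xfs-pnfs
mount /dev/vdb /exports/xfs-pnfs
echo "/exports/xfs-pnfs *(rw,no_root_squash,pnfs)" >>/etc/exports
exportfs -ra

# on a client that sees the same shared disk:
blkmapd			# needs to be started by hand on Debian
mount -o vers=4.1 f21-1:/exports/xfs-pnfs /mnt
grep LAYOUTGET /proc/self/mountstats
----- snip -----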
>
> Additionally, I can offer the following script to generate recalls,
> which don't really happen during normal operation. I don't
> really know how to write a proper test case that coordinates access
> to the exported filesystem and NFS unless it runs locally on the same
> node, though; it would need some higher-level, network-aware test harness:
Thanks. I got as far as doing a quick manual test with VMs sharing a
"disk":
[root@f21-2]# mount -overs=4.1 f21-1:/exports/xfs-pnfs /mnt/
[root@f21-2]# echo "hello world" >/mnt/testfile
[root@f21-2]# grep LAYOUTGET /proc/self/mountstats
LAYOUTGET: 1 1 0 236 196 0 4 4
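
The shared "disk" here is just a single raw image attached to both
guests, along these lines (paths and options from memory, so treat this
as a sketch rather than a recipe):

----- snip -----
qemu-img create -f raw /var/lib/libvirt/images/shared.img 8G
# attach the same image to both guests; cache=none keeps a guest from
# serving stale cached data for blocks the other guest has written
qemu-kvm ... -drive file=/var/lib/libvirt/images/shared.img,format=raw,if=virtio,cache=none
----- snip -----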
I haven't tried to set up automated testing with recalls, but that
shouldn't be hard.
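Something along these lines, run on the server, might be a first pass;
the hostnames and paths are from my setup above, and I haven't actually
tested it:

----- snip -----
#!/bin/sh
# long direct-I/O write on the client so it acquires and holds a layout
ssh root@f21-2 "dd if=/dev/zero of=/mnt/testfile bs=128M count=32 \
	conv=fdatasync oflag=direct" &

sleep 2

# conflicting local write on the server should force a layout recall
echo "" > /exports/xfs-pnfs/testfile

wait
# the client's per-op counters should now show layout traffic
ssh root@f21-2 "grep -E 'LAYOUTGET|LAYOUTRETURN' /proc/self/mountstats"
----- snip -----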
--b.
>
> ----- snip -----
> #!/bin/sh
>
> # trace commands as they run
> set -x
>
> # wait out the server's grace period
> touch /mnt/nfs1/foo
>
> # write through the pNFS mount in the background so the client
> # acquires and holds a layout on the file
> dd if=/dev/zero of=/mnt/nfs1/foo bs=128M count=32 conv=fdatasync oflag=direct &
>
> sleep 2
>
> echo "" > /mnt/test/foo && echo "recall done"