
Re: a simple and scalable pNFS block layout server

To: "J. Bruce Fields" <bfields@xxxxxxxxxxxx>
Subject: Re: a simple and scalable pNFS block layout server
From: Christoph Hellwig <hch@xxxxxx>
Date: Tue, 6 Jan 2015 18:56:11 +0100
Cc: Jeff Layton <jlayton@xxxxxxxxxxxxxxx>, linux-nfs@xxxxxxxxxxxxxxx, linux-fsdevel@xxxxxxxxxxxxxxx, xfs@xxxxxxxxxxx
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <20150106173222.GF12067@xxxxxxxxxxxx>
References: <1420561721-9150-1-git-send-email-hch@xxxxxx> <20150106173222.GF12067@xxxxxxxxxxxx>
User-agent: Mutt/1.5.17 (2007-11-01)
On Tue, Jan 06, 2015 at 12:32:22PM -0500, J. Bruce Fields wrote:
>       - do we have evidence that this is useful in its current form?

What is your threshold for usefulness?  It passes xfstests fine, and
shows linear scalability with multiple clients that each have 10Gbit
links.

>       - any advice on testing?  Is there was some simple virtual setup
>         that would allow any loser with no special hardware (e.g., me)
>         to check whether they've broken the block server?

Run two kvm VMs that share the same disk.  Create an XFS filesystem
on the MDS and export it.  If the client has blkmapd running (on Debian
it needs to be started manually) it will use pNFS for accessing the
filesystem.  Verify that using the per-operation counters in
/proc/self/mountstats.  Repeat with additional clients as necessary.
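A quick way to check those counters from the client, assuming the export
is already mounted with vers=4.1 (the mount point and file name below are
made-up examples):

```shell
# Drive some I/O through the mount first (e.g. dd to a file on it),
# then look at the per-operation counters.  Non-zero counts for the
# layout operations mean the client is really going through pNFS:
grep -E 'LAYOUTGET|GETDEVICEINFO|LAYOUTCOMMIT|LAYOUTRETURN' \
    /proc/self/mountstats || true
```

If all those counters stay at zero the client fell back to plain NFS
I/O through the MDS, which usually means blkmapd isn't running or the
client can't see the shared disk.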

Alternatively set up a simple iSCSI target using tgt or lio and
connect to it from multiple clients.
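For the LIO case, a rough sketch with targetcli (all names, IQNs, paths
and the target IP below are made up; tgt works similarly with tgtadm):

```shell
# On the target host: create a file-backed LUN and export it over iSCSI.
targetcli /backstores/fileio create shared0 /var/lib/iscsi/shared0.img 10G
targetcli /iscsi create iqn.2015-01.com.example:shared0
targetcli /iscsi/iqn.2015-01.com.example:shared0/tpg1/luns \
    create /backstores/fileio/shared0
# One ACL per initiator -- the MDS and every client need to log in,
# since the block layout requires them all to see the same LUN.
targetcli /iscsi/iqn.2015-01.com.example:shared0/tpg1/acls \
    create iqn.2015-01.com.example:client1

# On the MDS and each client: discover and log in to the target.
iscsiadm -m discovery -t st -p 192.168.1.10
iscsiadm -m node -l
```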

>       - any debugging advice?  E.g., have you checked if current
>         wireshark can handle the MDS traffic?

The wireshark version I've used decoded the generic pNFS operations
fine, but just dumped the layout specifics as hex data.

Enable the trace points added in this series; they track all stateid
interactions in the server.  Additionally, the pnfs debug printks on
client and server dump a lot of information.
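Roughly, turning both on looks like this (needs root and a mounted
debugfs; whether rpcdebug accepts a pnfs flag for the nfsd module
depends on your nfs-utils version):

```shell
# Enable all nfsd trace events, including the stateid tracking ones:
echo 1 > /sys/kernel/debug/tracing/events/nfsd/enable
cat /sys/kernel/debug/tracing/trace_pipe &

# pnfs debug printks on the client side:
rpcdebug -m nfs -s pnfs
# ... and, if your rpcdebug knows the flag, on the server side:
rpcdebug -m nfsd -s pnfs
```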
