
Re: [PATCH v3 3/3] NFSD: Add support for encoding multiple segments

To: "J. Bruce Fields" <bfields@xxxxxxxxxxxx>
Subject: Re: [PATCH v3 3/3] NFSD: Add support for encoding multiple segments
From: Anna Schumaker <anna.schumaker@xxxxxxxxxx>
Date: Tue, 24 Mar 2015 08:43:31 -0400
Cc: Christoph Hellwig <hch@xxxxxxxxxxxxx>, Marc Eshel <eshel@xxxxxxxxxx>, "linux-nfs@xxxxxxxxxxxxxxx" <linux-nfs@xxxxxxxxxxxxxxx>, linux-nfs-owner@xxxxxxxxxxxxxxx, xfs@xxxxxxxxxxx
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <20150320182621.GH2036@xxxxxxxxxxxx>
References: <20150318185545.GF8818@xxxxxxxxxxxx> <5509E27C.3080004@xxxxxxxxxx> <20150318205554.GA10716@xxxxxxxxxxxx> <5509E824.6070006@xxxxxxxxxx> <20150318211144.GB10716@xxxxxxxxxxxx> <OFB111A6D8.016B8BD5-ON88257E0D.001D174D-88257E0D.005268D6@xxxxxxxxxx> <20150319153627.GA20852@xxxxxxxxxxxx> <OF38D4D18B.19055EC2-ON88257E0D.0059BA03-88257E0D.005A781F@xxxxxxxxxx> <20150320151718.GD2036@xxxxxxxxxxxx> <20150320162303.GA18786@xxxxxxxxxxxxx> <20150320182621.GH2036@xxxxxxxxxxxx>
Sender: schumakeranna@xxxxxxxxx
On Fri, Mar 20, 2015 at 2:26 PM, J. Bruce Fields <bfields@xxxxxxxxxxxx> wrote:
> On Fri, Mar 20, 2015 at 09:23:03AM -0700, Christoph Hellwig wrote:
>> On Fri, Mar 20, 2015 at 11:17:18AM -0400, J. Bruce Fields wrote:
>> > Maybe this is a question for xfs developers.
>> >
>> > So, we have a new READ_PLUS call that's basically just a version of READ
>> > optimized for sparse files:
>> >
>> >     
>> > http://tools.ietf.org/html/draft-ietf-nfsv4-minorversion2-33#section-15.10
>> >
>> > It allows an NFS server to return either file data (like a normal READ
>> > call) or, at the server's discretion, records saying "this range of the
>> > data is all zeroes".
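(Concretely, a READ_PLUS reply is a sequence of segments, each one either
carrying literal data for a byte range or just describing a range as a
hole.  Very loosely, and with illustrative names rather than the draft's
actual XDR types, a segment looks like:

#include <stdint.h>

enum segment_type { SEGMENT_DATA, SEGMENT_HOLE };

struct read_plus_segment {
	enum segment_type type;
	uint64_t offset;	/* where in the file this segment starts */
	uint64_t length;	/* bytes covered by this segment */
	unsigned char *data;	/* SEGMENT_DATA only; NULL for holes */
};
)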
>> >
>> > Anna tried implementing READ_PLUS for knfsd using
>> > vfs_llseek(.,.,SEEK_HOLE) followed by an ordinary read if that
>> > determines we're not at a hole.
>> >
>> > (Very) preliminary results suggest that's slower than a plain READ for
>> > an xfs file with no holes.  (And *much* slower in the ext4 case for some
>> > reason.)
>>
>> It should be a fairly cheap operation, and it does extent tree lookups
>> that are pretty similar to an (uncached) read.  Do you have profiles?
>>
>> > Is that expected, and should we be doing this some other way instead?
>>
>> Are the reads cached or uncached?
>
> I don't know, and don't have profiles.  I'll either try to reproduce or
> wait till Anna's back from vacation.

I'm using whatever functions NFSD already uses for reading files,
which I expect go through the VFS.  Is there a flag that controls
cache behavior?
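
To make the pattern concrete, this is essentially what the encoder does
for each segment, sketched here in userspace with raw syscalls (the
server code goes through vfs_llseek() and nfsd's normal read helpers
rather than lseek()/pread()):

#define _GNU_SOURCE		/* for SEEK_HOLE / SEEK_DATA */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Probe-then-read: report a hole segment if @offset sits in a hole,
 * otherwise do an ordinary read that stops at the next hole. */
static ssize_t read_plus_segment(int fd, off_t offset, void *buf, size_t count)
{
	off_t hole = lseek(fd, offset, SEEK_HOLE);

	if (hole == (off_t)-1)
		return -1;

	if (hole == offset) {
		/* In a hole: find where data resumes (or EOF if the hole
		 * runs to the end) and encode the range as zeroes. */
		off_t data = lseek(fd, offset, SEEK_DATA);

		if (data == (off_t)-1 && errno == ENXIO)
			data = lseek(fd, 0, SEEK_END);
		printf("hole segment: offset %lld, length %lld\n",
		       (long long)offset, (long long)(data - offset));
		return 0;
	}

	/* In data: trim the read so it doesn't cross into the hole,
	 * then read as usual and encode a data segment. */
	if ((off_t)count > hole - offset)
		count = hole - offset;
	return pread(fd, buf, count, offset);
}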

>
>> If they are from the pagecache, just copying the zeroes is pretty much
>> unbeatable compared to extent tree lookups, so we'd need a new page
>> flag (difficult..) to see that a page is a hole (and then it would
>> only work for the whole page).  But for uncached reads an optimization
>> would be to tell a read that it's an NFS READ_PLUS so that it could
>> just read until it reaches a hole, and then we'd need some way to
>> communicate the hole size (or just fall back to SEEK_HOLE for that
>> case).
>
> Ugh, OK.  We'll do some more tests before coming back to ask about
> that....

I only had time for the one run, so I'll do more trials and see whether
that one read always takes so long.  I'm still hoping it was something
in the way my VM was scheduling its tasks!
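
(If it turns out to matter whether the reads are cached, one easy way to
cover the uncached case in my local runs is to evict the file from the
page cache between reads -- just a test-harness trick, not anything nfsd
itself would do:

#include <fcntl.h>
#include <unistd.h>

/* Benchmarking helper: push the file's pages out of the page cache so
 * the next read has to go back to the filesystem. */
static void drop_file_cache(int fd)
{
	fsync(fd);		/* write back anything dirty first */
	posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);	/* then drop the clean pages */
}
)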

Anna

>
> --b.
