On 03/26/2015 12:13 PM, Trond Myklebust wrote:
> On Thu, Mar 26, 2015 at 12:11 PM, Anna Schumaker
> <Anna.Schumaker@xxxxxxxxxx> wrote:
>> On 03/26/2015 12:06 PM, Trond Myklebust wrote:
>>> On Thu, Mar 26, 2015 at 11:47 AM, Anna Schumaker
>>> <Anna.Schumaker@xxxxxxxxxx> wrote:
>>>> On 03/26/2015 11:38 AM, J. Bruce Fields wrote:
>>>>> On Thu, Mar 26, 2015 at 11:32:25AM -0400, Trond Myklebust wrote:
>>>>>> On Thu, Mar 26, 2015 at 11:21 AM, Anna Schumaker
>>>>>> <Anna.Schumaker@xxxxxxxxxx> wrote:
>>>>>>> Here are my updated numbers! I tested with three 5GB files: one 100%
>>>>>>> data, one 100% hole, and one alternating between hole and data every
>>>>>>> 4KB. I collected data for both v4.1 and v4.2, with and without the
>>>>>>> READ_PLUS patches:
>>>>>>>
>>>>>>> ##########################
>>>>>>> # #
>>>>>>> # Without READ_PLUS #
>>>>>>> # #
>>>>>>> ##########################
>>>>>>>
>>>>>>>
>>>>>>> NFS v4.1:
>>>>>>> Trial
>>>>>>> |---------|---------|---------|---------|---------|---------|---------|
>>>>>>> | | 1 | 2 | 3 | 4 | 5 | Average |
>>>>>>> |---------|---------|---------|---------|---------|---------|---------|
>>>>>>> | Data | 8.723s | 7.243s | 8.252s | 6.997s | 6.980s | 7.639s |
>>>>>>> | Hole | 5.271s | 5.224s | 5.060s | 4.897s | 5.321s | 5.155s |
>>>>>>> | Mixed | 8.050s | 10.057s | 7.919s | 8.060s | 9.557s | 8.729s |
>>>>>>> |---------|---------|---------|---------|---------|---------|---------|
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> NFS v4.2:
>>>>>>> Trial
>>>>>>> |---------|---------|---------|---------|---------|---------|---------|
>>>>>>> | | 1 | 2 | 3 | 4 | 5 | Average |
>>>>>>> |---------|---------|---------|---------|---------|---------|---------|
>>>>>>> | Data | 6.707s | 7.070s | 6.722s | 6.761s | 6.810s | 6.814s |
>>>>>>> | Hole | 5.152s | 5.149s | 5.213s | 5.206s | 5.312s | 5.206s |
>>>>>>> | Mixed | 7.979s | 7.985s | 8.177s | 7.772s | 8.280s | 8.039s |
>>>>>>> |---------|---------|---------|---------|---------|---------|---------|
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> #######################
>>>>>>> # #
>>>>>>> # With READ_PLUS #
>>>>>>> # #
>>>>>>> #######################
>>>>>>>
>>>>>>>
>>>>>>> NFS v4.1:
>>>>>>> Trial
>>>>>>> |---------|---------|---------|---------|---------|---------|---------|
>>>>>>> | | 1 | 2 | 3 | 4 | 5 | Average |
>>>>>>> |---------|---------|---------|---------|---------|---------|---------|
>>>>>>> | Data | 9.082s | 7.008s | 7.116s | 6.771s | 7.902s | 7.576s |
>>>>>>> | Hole | 5.333s | 5.358s | 5.380s | 5.161s | 5.282s | 5.303s |
>>>>>>> | Mixed | 8.189s | 8.308s | 9.540s | 7.937s | 8.420s | 8.479s |
>>>>>>> |---------|---------|---------|---------|---------|---------|---------|
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> NFS v4.2:
>>>>>>> Trial
>>>>>>> |---------|---------|---------|---------|---------|---------|---------|
>>>>>>> | | 1 | 2 | 3 | 4 | 5 | Average |
>>>>>>> |---------|---------|---------|---------|---------|---------|---------|
>>>>>>> | Data | 7.033s | 6.829s | 7.025s | 6.873s | 7.134s | 6.979s |
>>>>>>> | Hole | 1.794s | 1.800s | 1.905s | 1.811s | 1.725s | 1.807s |
>>>>>>> | Mixed | 7.590s | 8.777s | 9.423s | 10.366s | 8.024s | 8.836s |
>>>>>>> |---------|---------|---------|---------|---------|---------|---------|
>>>>>>>
>>>>>>
>>>>>> So there is a clear win in the 100% hole case here, but otherwise the
>>>>>> statistical fluctuations are dominating the numbers. Can you gather a
>>>>>> few more statistics and then perhaps run the results through nfsometer?
>>>>>
>>>>> Also, could you describe the setup (are these still KVMs), and how
>>>>> you're clearing the cache between runs?
>>>>
>>>> These are still KVMs and my server is exporting an xfs filesystem. I
>>>> clear caches by running "echo 3 > /proc/sys/vm/drop_caches" on the server
>>>> before every read, and I remount my client after reading each set of three
>>>> files once.
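
(For clarity, the per-pass loop I'm describing looks roughly like the sketch
below; the hostname, export, and file paths are placeholders rather than my
actual setup.)

    #!/bin/sh
    # Rough sketch of one measurement pass: drop the server's caches before
    # each read, time the read, then remount the client between passes.
    for f in /nfs/data-5g /nfs/hole-5g /nfs/mixed-5g; do
        ssh root@server 'echo 3 > /proc/sys/vm/drop_caches'
        time cat "$f" > /dev/null
    done
    umount /nfs
    mount -t nfs -o vers=4.2 server:/export /nfs
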
>>>
>>> I agree that you have to use the 'drop_caches' interface on the
>>> server, but why not just use O_DIRECT on the clients?
>>
>> I've been reading the files with cat from my test shell script: `time cat
>> /nfs/file > /dev/null`. I can write something that reads the files with
>> O_DIRECT if that would be more useful!
>>
>
> 'dd' can do that for you if the appropriate incantations are performed.
Got it. I'll sacrifice a goat to 'dd' and rerun the tests with O_DIRECT!
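Something along these lines, I assume (bs=1M and the path are placeholders,
and O_DIRECT needs the block size aligned to the filesystem's requirements):

    # Read on the client with O_DIRECT so the client page cache is bypassed.
    time dd if=/nfs/file of=/dev/null bs=1M iflag=direct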
>