
Re: XFS with nfs over rdma performance

To: Emmanuel Florac <eflorac@xxxxxxxxxxxxxx>
Subject: Re: XFS with nfs over rdma performance
From: Samuel Kvasnica <samuel.kvasnica@xxxxxxxxx>
Date: Tue, 22 Jan 2013 14:47:54 +0100
Cc: xfs@xxxxxxxxxxx
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <20130122144255.1ce6b329@xxxxxxxxxxxxxxxxxxxx>
Organization: IMS Nanofabrication AG
References: <50FE73A4.7020308@xxxxxxxxx> <20130122123926.3e618de2@xxxxxxxxxxxxxxxxxxxx> <50FE8B33.3060208@xxxxxxxxx> <20130122144255.1ce6b329@xxxxxxxxxxxxxxxxxxxx>
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130105 Thunderbird/17.0.2
On 01/22/2013 02:42 PM, Emmanuel Florac wrote:
> On Tue, 22 Jan 2013 13:50:59 +0100,
> Samuel Kvasnica <samuel.kvasnica@xxxxxxxxx> wrote:
>> Actually, funnily enough, noalign is always better than sw/su alignment
>> for any hardware RAID I have tried so far.
> It may be because of LVM, if you're using it.
No, no LVM. And no partitions. The filesystem lives directly on the raw
device; there is not even a partition table there.
I have actually never seen automatic alignment work (except on mdraid),
but noalign gives perfect performance, so I do not worry about it much.
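For context, the two configurations being compared would look roughly like this (the device name and stripe geometry below are placeholders, not the actual array):

```shell
# Hypothetical geometry: 64k stripe unit across 10 data disks -- adjust
# to match the real RAID layout. Aligned variant, set at mkfs time:
mkfs.xfs -d su=64k,sw=10 /dev/sdX

# Unaligned variant: plain mkfs, then disable stripe alignment at mount:
mkfs.xfs /dev/sdX
mount -o noalign /dev/sdX /mnt/data
```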

>> But I do not think the mkfs or mount options are that relevant here, as
>> the local filesystem performance is fine (actually the same as raw dd).
>> The bottleneck appears only when the filesystem is exported via NFS over
>> RDMA, and it looks more like a "pumping effect" than a plain bottleneck.
> NFS tends to make small IOs and can be tricky, in your case it reminds
> me of a random access bottleneck, for instance log access. That's why
> I'm wondering what iostat output looks like... Periodical log flushing
> could be the culprit.
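A quick way to check that theory would be something along these lines (server name and export path are placeholders; 20049 is the conventional NFS/RDMA port):

```shell
# Client side: mount the export over RDMA
mount -t nfs -o rdma,port=20049 server:/export /mnt/nfs

# Server side: watch extended per-device stats while the client streams;
# a small average request size combined with high utilization would point
# at the random-I/O / periodic log-flush pattern suspected above
iostat -x 2 /dev/sdX
```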
But I read/write only large files in this test (100GB). The point is
that btrfs does not have this issue, and I do not remember seeing it
earlier around the 2.6.x kernels either. There must be some I/O buffering
issue. As I remember, there used to be quite a bit of NFS-specific
code within the XFS tree.
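For reference, the kind of large-file streaming test described above can be reproduced with dd (paths and sizes are illustrative):

```shell
# Write a 100GB file and force it to stable storage before dd reports
# throughput, so the page cache does not inflate the number
dd if=/dev/zero of=/mnt/nfs/testfile bs=1M count=102400 conv=fdatasync

# Read it back with O_DIRECT to bypass the client-side page cache
dd if=/mnt/nfs/testfile of=/dev/null bs=1M iflag=direct
```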


