On 01/22/2013 02:42 PM, Emmanuel Florac wrote:
> On Tue, 22 Jan 2013 13:50:59 +0100
> Samuel Kvasnica <samuel.kvasnica@xxxxxxxxx> wrote:
>
>> Actually, funnily enough, noalign is always better than sw/su alignment for
>> any hardware RAID I have tried so far.
> It may be because of LVM, in case you're using it.
No, no LVM, and no partitions. The filesystem lives directly on the raw
device; there is not even a partition table on it.
I have actually never seen su/sw alignment help (except on mdraid), but noalign
gives perfect performance, so I do not bother much about it.
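For completeness, roughly the two variants I am comparing (device path and
stripe geometry below are just placeholders, not the actual values of my array):

  # stripe-aligned variant
  mkfs.xfs -f -d su=64k,sw=8 /dev/sdX
  mount /dev/sdX /mnt/test

  # noalign variant, which ends up faster here
  mkfs.xfs -f /dev/sdX
  mount -o noalign /dev/sdX /mnt/test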
>
>> But I do not think that the mkfs or mount options are that relevant here, as
>> the local filesystem performance is pretty OK (actually the same as with
>> raw dd).
>>
>> The bottleneck issue comes up only when the filesystem is exported by NFS
>> over RDMA, and that bottleneck seems to be more of a "pumping effect".
>>
> NFS tends to make small IOs and can be tricky; in your case it reminds
> me of a random-access bottleneck, for instance log access. That's why
> I'm wondering what the iostat output looks like... Periodic log flushing
> could be the culprit.
But I read/write only large files (100GB) in this test. The point is that
btrfs does not have this issue, and I do not remember seeing
it earlier, around the 2.6.x kernels. There must be some IO-buffering issue.
As I recall, there used to be quite a bit of NFS-specific
code within the XFS tree.
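For reference, the kind of test and monitoring in question here (device name
and mount points are just placeholders for my setup):

  # 100GB sequential write over the NFS-over-RDMA mount, on the client
  dd if=/dev/zero of=/mnt/nfs/bigfile bs=1M count=102400

  # extended per-device statistics on the server during the transfer,
  # to see whether periodic log flushing shows up as bursts of small IOs
  iostat -xm 1 /dev/sdX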
regards,
Samuel