On 01/22/2013 12:39 PM, Emmanuel Florac wrote:
> On Tue, 22 Jan 2013 at 12:10:28 +0100,
> Samuel Kvasnica <samuel.kvasnica@xxxxxxxxx> wrote:
>> We do not see any remarkable CPU load.
> What does the output from iostat -mx 3 look like while you're writing
> or reading?
Well, OK, this will take some time, as I need to switch back from btrfs to xfs first.
>> The interesting point is, we use btrfs filesystem on server instead of
>> xfs now (with otherwise same config) and we are getting consistent,
>> steady throughput
>> around 1.2-1.3GB/s.
>> What is wrong with XFS on 3.x kernel ? Any hints what parameters to
>> look at ?
> What mkfs and mount options did you use? With a large array nobarrier
> and inode64 may make a big difference.
-f -L data -i attr=2 -d agcount=12 -l lazy-count=1,version=2,size=128m
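Spelled out in full, that would be roughly the following (the device path and
mount point are hypothetical, printed here rather than executed):

```shell
# Hypothetical device path -- substitute the actual RAID LUN
DEV=/dev/sdX

# The mkfs options quoted above, as a complete command line
echo "mkfs.xfs -f -L data -i attr=2 -d agcount=12 -l lazy-count=1,version=2,size=128m $DEV"

# A typical large-array mount with the suggested nobarrier and inode64
# options (also just for illustration)
echo "mount -t xfs -o noatime,nobarrier,inode64 $DEV /data"
```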
inode64 can't be relevant here at all, as the filesystem is empty.
Funnily enough, noalign has always been better than su/sw alignment on
every hardware RAID I have tried so far.
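For comparison, the two variants would look something like this (the su/sw
numbers are invented purely as an example; in practice they come from the
actual RAID geometry):

```shell
# Variant 1: explicit stripe alignment at mkfs time
# (e.g. 64k stripe unit, 10 data disks -- illustrative values only)
MKFS_ALIGNED="mkfs.xfs -d su=64k,sw=10 /dev/sdX"
echo "$MKFS_ALIGNED"

# Variant 2: ignore stripe geometry entirely via the noalign mount option
MOUNT_NOALIGN="mount -t xfs -o noalign /dev/sdX /data"
echo "$MOUNT_NOALIGN"
```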
But I don't think the mkfs or mount options matter much here, as the
local filesystem performance is fine (actually the same as raw dd).
The bottleneck only appears when the filesystem is exported over NFS
with RDMA, and it looks more like a "pumping effect".
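One way to make that oscillation visible would be to run a streaming write
from an NFS client while watching iostat on the server, along these lines
(mount point and sizes are hypothetical; commands printed, not executed):

```shell
# On the server: extended per-device stats in MB/s, every 3 seconds
SERVER_CMD="iostat -mx 3"
echo "$SERVER_CMD"

# On the client: stream a large file to the NFS-over-RDMA mount,
# forcing data to stable storage at the end (path/size illustrative)
CLIENT_CMD="dd if=/dev/zero of=/mnt/nfs/testfile bs=1M count=16384 conv=fdatasync"
echo "$CLIENT_CMD"
```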