
To: Peter Grandi <pg_xf2@xxxxxxxxxxxxxxxxxxxxx>
Subject: Re: XFS: Abysmal write performance because of excessive seeking (allocation groups to blame?)
From: Stefan Ring <stefanrin@xxxxxxxxx>
Date: Fri, 6 Apr 2012 07:53:18 +0200
Cc: Linux fs XFS <xfs@xxxxxxxxxxx>
In-reply-to: <20350.9643.379841.771496@xxxxxxxxxxxxxxxxxx>
References: <CAAxjCEwBMbd0x7WQmFELM8JyFu6Kv_b+KDe3XFqJE6shfSAfyQ@xxxxxxxxxxxxxx> <20350.9643.379841.771496@xxxxxxxxxxxxxxxxxx>
> Which brings up another subject: usually hw RAID host adapters
> have cache, and firmware that cleverly rearranges writes.
>
> Looking at the specs of the P400:
>
>  http://h18004.www1.hp.com/products/servers/proliantstorage/arraycontrollers/smartarrayp400/
>
> it seems to me that it comes with 256MB of cache as standard, and
> only supports RAID6 with a battery-backed write cache (wise!).
>
> Which means that your Linux-level seek graphs may be not so
> useful, because the host adapter may be drastically rearranging
> the seek patterns, and you may need to tweak the P400 elevator,
> rather than or in addition to the Linux elevator.
>
> Unless barriers are enabled and, even with a BBWC, the P400
> writes through on receiving a barrier request. IIRC XFS is
> rather stricter than 'ext4' in issuing barrier requests, and
> you may be seeing the effect of that more than the effect of
> splitting the access patterns among 4 AGs to improve the
> potential for multithreading (which you defeat anyway, because
> you are using what is most likely a large RAID6 stripe size
> with a small, IO-intensive write workload, as previously noted).

Yes, it does have 256 MB of BBWC, and it is enabled. When I disabled
it, the time needed rose from 120 sec with the BBWC to a whopping
330 sec.

IIRC, I did the benchmark with barrier=0, but changing this did not
make a big difference. Nothing did; that’s what frustrated me a bit
;). I also tried different Linux IO elevators, as you suggested in
your other response, without any measurable effect.
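
For the record, the switches themselves are just the usual sysfs/mount
dance; the device names below are only placeholders for a cciss volume
behind the P400:

  # show the available elevators and pick one for the volume
  cat /sys/block/cciss!c0d0/queue/scheduler
  echo deadline > /sys/block/cciss!c0d0/queue/scheduler

  # mount without write barriers; IIRC the XFS spelling is
  # nobarrier (barrier=0 is the ext3/ext4 spelling)
  mount -o nobarrier /dev/cciss/c0d0p1 /mnt/test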

Btw., the stripe geometry is: su=16k,sw=4
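
That works out to a 64 KiB full data stripe (su × sw = 16 KiB × 4).
For completeness, that is the sort of thing one would pass at mkfs
time (the device name is again just a placeholder):

  # 16 KiB stripe unit, 4 data disks -> 64 KiB full stripe width
  mkfs.xfs -d su=16k,sw=4 /dev/cciss/c0d0p1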
