
Re: How to format RAID1 correctly

To: Helmut Tessarek <tessarek@xxxxxxxxxxx>, Stan Hoeppner <stan@xxxxxxxxxxxxxxxxx>, xfs@xxxxxxxxxxx
Subject: Re: How to format RAID1 correctly
From: Eric Sandeen <sandeen@xxxxxxxxxxx>
Date: Wed, 24 Sep 2014 11:18:18 -0500
Delivered-to: xfs@xxxxxxxxxxx
In-reply-to: <5422E912.1000708@xxxxxxxxxxx>
References: <5422146A.90206@xxxxxxxxxxx> <54222763.40107@xxxxxxxxxxx> <5422285B.6010306@xxxxxxxxxxx> <542234F6.4080000@xxxxxxxxxxxxxxxxx> <5422376D.3000204@xxxxxxxxxxx> <542243E6.1040302@xxxxxxxxxxxxxxxxx> <5422E912.1000708@xxxxxxxxxxx>
On 9/24/14 10:53 AM, Helmut Tessarek wrote:
> On 2014-09-24 0:09, Stan Hoeppner wrote:
>> If you create any striped arrays, especially parity arrays, with md make
>> sure to manually specify chunk size and match it to your workload.  The
>> current default is 512KB.  This is too large for a great many workloads,
>> specifically those that are metadata heavy or manipulate many small
>> files.  512KB wastes space and with parity arrays causes RMW, hammering
>> throughput and increasing latency.
> Thanks again for the valuable information.
> I used to work with databases on storage subsystems, so placing GBs of
> database containers for tablespaces on arrays with a larger stripe size
> was actually beneficial.
> For log files and other data I usually used different cache settings and
> stripe sizes.
> So how does this work with SW RAID?
> Does the XFS chunk size equal the amount of data touched by a single r/w
> operation?

It has more to do with where allocations start, so that allocations
don't cross stripe boundaries if possible.
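To make the chunk/stripe relationship concrete, here is a hedged sketch of the geometry arithmetic for a hypothetical 4-disk md RAID5 (device names and the 64 KiB chunk are illustrative assumptions, not from the thread). XFS's su/sw values describe the md layout: su is the per-disk chunk, sw is the number of data-bearing disks.

```shell
# Illustrative geometry for a 4-disk RAID5; device names /dev/sd[b-e]
# below are placeholders. A smaller chunk (e.g. 64 KiB) may suit
# metadata-heavy / small-file workloads better than md's 512 KiB default.
CHUNK_KB=64                           # md chunk (stripe unit) per disk
DISKS=4                               # total members in the RAID5
DATA_DISKS=$((DISKS - 1))             # RAID5: one disk's worth of parity
STRIPE_KB=$((CHUNK_KB * DATA_DISKS))  # full stripe width seen by the fs
echo "su=${CHUNK_KB}k sw=${DATA_DISKS} (full stripe ${STRIPE_KB} KiB)"

# The corresponding commands would look roughly like (not run here):
#   mdadm --create /dev/md0 --level=5 --raid-devices=4 --chunk=64 /dev/sd[b-e]
#   mkfs.xfs -d su=64k,sw=3 /dev/md0
```

Note that mkfs.xfs usually detects md geometry automatically; specifying su/sw by hand matters mainly for layered setups (LVM, hardware RAID) where the geometry isn't visible.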

