On 2014-09-24 0:09, Stan Hoeppner wrote:
> If you create any striped arrays, especially parity arrays, with md make
> sure to manually specify chunk size and match it to your workload. The
> current default is 512KB. This is too large for a great many workloads,
> specifically those that are metadata heavy or manipulate many small
> files. 512KB wastes space and with parity arrays causes RMW, hammering
> throughput and increasing latency.
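If I read the mdadm man page right, the chunk size is given at array creation time. Something like this, I assume (device names are just placeholders, and 64KB is only an example value, not a recommendation):

```shell
# Create a 4-drive RAID5 with an explicit 64KB chunk
# instead of the current 512KB default.
# /dev/sd[b-e]1 are placeholder device names.
mdadm --create /dev/md0 --level=5 --raid-devices=4 --chunk=64 \
    /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
```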
Thanks again for the valuable information.
I used to work with databases on storage subsystems, and placing multi-GB
database containers for tablespaces on arrays with a larger stripe size
was actually beneficial.
For log files and other data I usually used different cache settings and
stripe sizes.
So how does this work with SW RAID?
Does the chunk size equal the amount of data touched by a single
read/write operation?
I'm asking because databases usually write data in page/extent sizes.
Even with a 50GB container, a single operation might only have to
read/write a 4k page.
Cheers,
K. C.
--
regards Helmut K. C. Tessarek
lookup http://sks.pkqs.net for KeyID 0xC11F128D
/*
Thou shalt not follow the NULL pointer for chaos and madness
await thee at its end.
*/