Thanks for the short and clear answer, Stan.

Marko

On 23 February 2014 23:10:43 CET, Stan Hoeppner <stan@hardwarefreak.com> wrote:
>On 2/23/2014 3:37 AM, Marko Weber|8000 wrote:
>...
>> linux /bzImage-3.10.31 root=/dev/md2 elevator=cfq clocksource=hpet
>                                       ^^^^^^^^^^^^
>
>cfq tends to defeat much of the parallelism in XFS, decreasing
>throughput substantially. This is documented in the XFS FAQ and has
>been discussed here many times in the past. It has been recommended for
>a few years now that XFS not be used with the cfq elevator. Use
>deadline with md arrays on plain HBAs, and noop on SSDs or any device
>with [F|B]BWC, i.e. a RAID HBA or SAN controller.
>
>If you're using cfq to allow per-process IO shaping with control
>groups, the throughput lost to cfq alone may be large enough to negate
>whatever you gain from the control group tuning.
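
For the archives, a minimal sketch of how to check and switch the
elevator without a reboot, assuming a pre-blk-mq kernel like the 3.10
one quoted above; "sda" here just stands in for one of the md member
disks:

  # show the scheduler currently active on the member disk
  cat /sys/block/sda/queue/scheduler
  noop deadline [cfq]

  # switch to deadline on the fly
  echo deadline > /sys/block/sda/queue/scheduler

  # or make it the boot-time default for all disks by replacing
  # elevator=cfq on the kernel command line with:
  elevator=deadline

The elevator takes effect on the underlying disks (sda, sdb, ...), not
on the md2 device itself, so set it on each member of the array.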
-- 
This message was sent from my Android mobile phone with K-9 Mail.