On Mon, Dec 12, 2011 at 07:39, Dave Chinner <david@fromorbit.com> wrote:
>
> > ====== XFS + 2.6.29 ======
>
> Read 21GB @ 11k iops, 210MB/s, av latency of 1.3ms/IO
> Wrote 2.3GB @ 1250 iops, 20MB/s, av latency of 0.27ms/IO
> Total 1.5m IOs, 95% @ <= 2ms
>
> > ====== XFS + 2.6.39 ======
>
> Read 6.5GB @ 3.5k iops, 55MB/s, av latency of 4.5ms/IO
> Wrote 700MB @ 386 iops, 6MB/s, av latency of 0.39ms/IO
> Total 460k IOs, 95% @ <= 10ms, 4ms > 50% < 10ms
>
> Looking at the IO stats there, this doesn't look to me like an XFS
> problem. The IO times are much, much longer on 2.6.39, so that's the
> first thing to understand. If the two tests are doing identical IO
> patterns, then I'd be looking at validating raw device performance
> first.
>

Thank you, Dave.

I also ran raw device and ext4 performance tests with 2.6.39. All of these
tests use the same IO pattern (non-buffered IO, 16 IO threads, 16KB block
size, mixed random read and write, r:w=9:1); a sketch of the workload
definition follows the results below.

====== raw device + 2.6.39 ======
Read 21.7GB @ 11.6k IOPS, 185MB/s, av latency of 1.37ms/IO
Wrote 2.4GB @ 1.3k IOPS, 20MB/s, av latency of 0.095ms/IO
Total 1.5M IOs, 96% @ <= 2ms

====== ext4 + 2.6.39 ======
Read 21.7GB @ 11.6k IOPS, 185MB/s, av latency of 1.37ms/IO
Wrote 2.4GB @ 1.3k IOPS, 20MB/s, av latency of 0.1ms/IO
Total 1.5M IOs, 96% @ <= 2ms

====== XFS + 2.6.39 ======
Read 6.5GB @ 3.5k IOPS, 55MB/s, av latency of 4.5ms/IO
Wrote 700MB @ 386 IOPS, 6MB/s, av latency of 0.39ms/IO
Total 460k IOs, 95% @ <= 10ms, 4ms > 50% < 10ms
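
For anyone who wants to reproduce the workload, a fio job along these lines
approximates the IO pattern described above (this is only a sketch: the
device path and runtime are placeholders, not the exact values or tool
invocation from my runs):

  [global]
  direct=1          ; non-buffered (O_DIRECT) IO
  rw=randrw         ; mixed random read and write
  rwmixread=90      ; r:w=9:1
  bs=16k            ; 16KB block size
  numjobs=16        ; 16 IO threads
  thread
  ioengine=psync
  time_based
  runtime=120       ; placeholder run length
  group_reporting

  [job]
  filename=/dev/sdX ; placeholder: raw device here, a test file for the fs runs

For the filesystem runs, filename would point at a test file on the freshly
created filesystem instead of the raw device.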

Here are the detailed test results:

== 2.6.39 ==
http://blog.xupeng.me/wp-content/uploads/ext4-xfs-perf/2.6.39-xfs.txt
http://blog.xupeng.me/wp-content/uploads/ext4-xfs-perf/2.6.39-ext4.txt
http://blog.xupeng.me/wp-content/uploads/ext4-xfs-perf/2.6.39-raw.txt

== 2.6.29 ==
http://blog.xupeng.me/wp-content/uploads/ext4-xfs-perf/2.6.29-xfs.txt
http://blog.xupeng.me/wp-content/uploads/ext4-xfs-perf/2.6.29-ext4.txt
http://blog.xupeng.me/wp-content/uploads/ext4-xfs-perf/2.6.29-raw.txt

>
> > I tried different XFS format options and different mount options, but
> > it did not help.
>
> It won't if the problem is in the layers below XFS.
>
> e.g. IO scheduler behavioural changes could be the cause (esp. if
> you are using CFQ), the SSD could be in different states or running
> garbage collection intermittently and slowing things down, the
> filesystem could be in different states (did you use a fresh
> filesystem for each of these tests?), etc, recent mkfs.xfs will trim
> the entire device if the kernel supports it, etc.

I ran all the tests on the same server with the deadline IO scheduler, and
the xfsprogs version is 3.1.4. I also ran the tests with the noop scheduler,
but there was no big difference.
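
For completeness, this is roughly how the scheduler can be switched and the
filesystem setup checked between runs (sdX and /mnt/test are placeholders
for the actual device and mount point):

  # show and switch the IO scheduler for the test device
  cat /sys/block/sdX/queue/scheduler
  echo deadline > /sys/block/sdX/queue/scheduler   # or: echo noop

  # confirm the XFS geometry and the mount options actually in effect
  xfs_info /mnt/test
  grep /mnt/test /proc/mounts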

--
Xupeng Yun
http://about.me/xupeng