On Sat, Jun 15, 2013 at 06:35:02AM -0400, Mark Seger wrote:
> Basically everything I do is with collectl, a tool I wrote and
> open-sourced almost 10 years ago. Its numbers are very accurate - I've
> compared them with iostat on numerous occasions whenever I had doubts
> and they always agree. Since both tools get their data from the same
> place, /proc/diskstats, it's hard for them not to agree, AND its
> numbers also agree with /proc/fs/xfs.
Ok, that's all I wanted to know.
> To restate what's going on, I have a very simple script that
> duplicates what OpenStack Swift is doing, namely creating a file with
> mkstemp and then running falloc against it. The files are being
> created with a size of zero, but it seems that xfs is generating a ton
> of logging activity. I had read your post back in 2011 about
> speculative preallocation and can't help but wonder if that's what's
> hitting me here. I also saw where system memory can come into play,
> and this box has 192GB and 12 hyperthreaded cores.
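
If I'm reading that right, the core of the loop is roughly the
following (a sketch only - the directory, file count, sizes and the
KEEP_SIZE flag are my assumptions, not taken from your script):

import ctypes, ctypes.util, os, tempfile

FALLOC_FL_KEEP_SIZE = 0x01      # reserve space without changing i_size
libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
libc.fallocate.argtypes = [ctypes.c_int, ctypes.c_int,
                           ctypes.c_longlong, ctypes.c_longlong]

def create_one(directory, prealloc_bytes):
    # mkstemp creates the zero-length file...
    fd, path = tempfile.mkstemp(dir=directory)
    try:
        # ...and fallocate(KEEP_SIZE) reserves blocks while leaving
        # st_size at 0, which would match the zero-size files described.
        if libc.fallocate(fd, FALLOC_FL_KEEP_SIZE, 0, prealloc_bytes) != 0:
            raise OSError(ctypes.get_errno(), "fallocate failed")
    finally:
        os.close(fd)
    return path

for _ in range(10000):                    # assumed file count
    create_one("/srv/node/test", 1024)    # assumed directory and size
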
> I also tried one more run without falloc, this time creating 10000 1K
> files, which should be about 10MB, and it looks like it's still doing
> 140MB of I/O, which still feels like a lot, but at least it's less
> than the falloc run.
1k files will still write 4k filesystem blocks, so there's going to
be 40MB of data writes there at least.
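
Back of the envelope (my numbers, just to show where that figure comes
from - 4KiB is the assumed data block size):

n_files = 10_000
file_size = 1024          # 1K per file, as in the run above
block_size = 4096         # assumed XFS data block size

logical  = n_files * file_size    # ~10MB of actual file data
physical = n_files * block_size   # ~40MB of block writes, minimum
print(logical / 1e6, physical / 1e6)   # -> 10.24 40.96 (MB)

Log and metadata traffic comes on top of that minimum.
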
As it is, I ran a bunch of tests yesterday writing 4k files, and I
got 180MB/s @ 32,000 files/s. That's roughly 130MB/s for data, and
another 50MB/s for log and metadata traffic. But without knowing
your test configuration and using your test script, I can't compare
those results to yours. Can you provide the information in:
> If there is anything more I can provide I'll be happy to do so. Actually I
> should point out I can easily generate graphs and if you'd like to see some
> examples I can provide those too.
PCP generates realtime graphs, which is what I use ;)
> Also, if there is anything I can report
> from /proc/fs/xfs I can relatively easily do that as well and display it
> side by side with the disk I/O.
Let's see if there is something unusual in your setup that might
explain it first...
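
That said, if you do want to put the XFS log traffic next to the disk
numbers, a rough sampler like the one below will do it. The field
positions are what I expect from the standard /proc/fs/xfs/stat and
/proc/diskstats layouts, but check them against your kernel, and "sdb"
is just a placeholder device name:

import time

def xfs_log_blocks():
    with open("/proc/fs/xfs/stat") as f:
        for line in f:
            if line.startswith("log "):
                # log line: writes, blocks, noiclogs, force, force_sleep
                return int(line.split()[2])
    return 0

def sectors_written(dev="sdb"):
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if len(fields) > 9 and fields[2] == dev:
                return int(fields[9])   # sectors written, 512 bytes each
    return 0

prev = (xfs_log_blocks(), sectors_written())
while True:
    time.sleep(1)
    cur = (xfs_log_blocks(), sectors_written())
    log_mb  = (cur[0] - prev[0]) * 512 / 1e6  # assuming 512-byte log blocks
    disk_mb = (cur[1] - prev[1]) * 512 / 1e6
    print("log write: %6.1f MB/s   disk write: %6.1f MB/s"
          % (log_mb, disk_mb))
    prev = cur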