[pcp] Out-of-tree DSO PMDA

Josef 'Jeff' Sipek jeffpc at josefsipek.net
Sun Feb 19 16:39:57 CST 2012


On Mon, Feb 20, 2012 at 08:11:02AM +1100, Nathan Scott wrote:
> ----- Original Message -----
> > What's the best way to develop an out-of-tree DSO PMDA? Can I avoid
> > having
> > to patch the PCP source to build just the PMDA? (I'd like to keep the
> > PMDA
> > code in the application source tree instead of keeping a patched PCP
> > tree around.)
> 
> This is readily doable ... we do out-of-tree builds for several PMDAs
> we have here that are of no relevance to anyone else.  There's not
> really anything special to it - just include the pcp headers and link
> with libpcp & libpcp_pmda as normal.  You need to ensure you don't
> pick a domain# that conflicts with existing PMDAs, but there is no
> requirement to have an entry in pcp.stdpmid.  These days (last 6-12
> months IIRC) we also ship the builddefs and buildrules files from the
> PCP build in /usr/include/pcp, as a convenience for out-of-tree build
> systems, so you can make use of makefile macros from those too if it
> helps.

Cool.
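
So, IIUC, something like the following minimal skeleton, plus a matching
"dso" line in pmcd.conf, should be all that's needed?  (Everything below,
i.e. the "example" name, the single metric and the domain number, is made
up, and the domain would have to be one that doesn't clash with anything
already installed.)

/*
 * Hypothetical out-of-tree DSO PMDA skeleton.  Build as a shared object
 * and link against libpcp_pmda and libpcp, e.g.:
 *
 *   cc -fPIC -shared -o pmda_example.so pmda_example.c -lpcp_pmda -lpcp
 *
 * then register it in pmcd.conf with something like (domain number invented):
 *
 *   example  129  dso  example_init  /var/lib/pcp/pmdas/example/pmda_example.so
 */
#include <pcp/pmapi.h>
#include <pcp/impl.h>
#include <pcp/pmda.h>

static pmdaMetric metrictab[] = {
    /* example.counter: a single made-up 64-bit counter */
    { NULL,
      { PMDA_PMID(0, 0), PM_TYPE_U64, PM_INDOM_NULL, PM_SEM_COUNTER,
	PMDA_PMUNITS(0, 0, 1, 0, 0, PM_COUNT_ONE) } },
};

static int
example_fetchCallBack(pmdaMetric *mdesc, unsigned int inst, pmAtomValue *atom)
{
    (void)mdesc;
    (void)inst;
    /* hand back whatever the application keeps for this metric */
    atom->ull = 42;
    return 1;
}

/* the init routine named on the pmcd.conf dso line */
void
example_init(pmdaInterface *dp)
{
    pmdaDSO(dp, PMDA_INTERFACE_3, "example DSO", NULL);
    pmdaSetFetchCallBack(dp, example_fetchCallBack);
    pmdaInit(dp, NULL, 0, metrictab, sizeof(metrictab) / sizeof(metrictab[0]));
}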

> > I am considering replacing some code I have which uses the MMV PMDA
> > with a custom PMDA that exports the metrics with less overhead. (The
> > application already has tons of internal metrics and keeping the MMV
> > file updated is significant enough overhead that I'd like to have a
> > faster way to feed the stats right into the PMCD.
> > Currently, I'm thinking of having a DSO PMDA which uses shared memory
> > or an mmap'ed file to access the raw stats structures in the
> > application.)
> 
> That last sentence describes MMV PMDA (dso, mmap), except for the "raw
> stats" bit ... can you detail the overhead issue?  (is this something
> that could/should be improved in MMV?  perhaps via some kind of layout
> hint mechanism in MMV to ensure hot metrics don't share cachelines or
> > something like that ... if that's the issue?)

Yeah, I know that the MMV DSO is pretty simple.  My problem is that I have
over 1200 values in this stats structure.  There's one global copy of the
struct, and another in TLS.  Each thread updates the TLS struct, and every
so often
(e.g., when an RPC completes) it grabs a lock and adds in all the TLS
values.  Then the lock gets dropped, and the TLS counters get zeroed.
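
To make that concrete, the scheme is roughly this (a sketch with invented
names, and 1200 just standing in for the real layout):

#include <pthread.h>
#include <stdint.h>
#include <string.h>

#define NVALUES 1200

struct stats { uint64_t val[NVALUES]; };

static struct stats global_stats;               /* protected by stats_lock */
static pthread_mutex_t stats_lock = PTHREAD_MUTEX_INITIALIZER;
static __thread struct stats tls_stats;         /* per-thread counters */

/* called every so often, e.g. when an RPC completes */
static void
flush_tls_stats(void)
{
	int i;

	pthread_mutex_lock(&stats_lock);
	for (i = 0; i < NVALUES; i++)
		global_stats.val[i] += tls_stats.val[i];
	pthread_mutex_unlock(&stats_lock);

	memset(&tls_stats, 0, sizeof(tls_stats));
}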

Now, if I enable PCP support in the app, after adding in the stats, it
updates the MMV values as well.  At first, I just did a lookup for every
value update, but that had a *significant* performance impact.  So, as a quick
work-around, I made an array of <char*,char*,mmv_disk_value_t*>.  At
startup, I fill in the metric name and instance names (the char pointers)
and the pointer to the MMV value.  Then I sort them.  Then, at update, I
just bsearch to find the right one.  This helped a *lot*, but it still sucks
big time.  (Not to mention that the code is kinda weird looking.)  So, I
have ~1200 values, and they get updated anywhere from not at all to >1000
times a second.  I do realize that having a global struct for the stats is
not a very good idea, but it works pretty well if it's just 1200 adds.  Once
you
throw in a bunch of strcmps, things slow down.  It really makes sense that
there'd be a slowdown... 1000 ops/sec, 1200 values/op, ~11 cmps/value == ~13
million cmps/sec.  (Yes, I could skip over some of the value updates since
adding zero is kinda useless.)
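
In code, the workaround looks roughly like this (a sketch; the struct and
function names are invented, and I'm assuming mmv_disk_value_t comes from
pcp's mmv_dev.h header):

#include <stdlib.h>
#include <string.h>
#include <pcp/mmv_dev.h>	/* assumed home of mmv_disk_value_t */

struct value_ref {
	const char *metric;		/* metric name */
	const char *instance;		/* instance name */
	mmv_disk_value_t *value;	/* points into the mmap'ed MMV region */
};

static struct value_ref *refs;		/* filled in and sorted at startup */
static size_t nrefs;

static int
ref_cmp(const void *a, const void *b)
{
	const struct value_ref *x = a, *y = b;
	int c = strcmp(x->metric, y->metric);

	return c ? c : strcmp(x->instance, y->instance);
}

/* per update: ~log2(1200) string compares instead of a linear scan */
static mmv_disk_value_t *
find_value(const char *metric, const char *instance)
{
	struct value_ref key = { metric, instance, NULL };
	struct value_ref *hit;

	hit = bsearch(&key, refs, nrefs, sizeof(*refs), ref_cmp);
	return hit ? hit->value : NULL;
}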

At some point soon, I'd like to also expose some event-type metrics.  MMV,
as far as I can tell, can't handle those at all.

There's another project I'm helping with.  Its architecture lends itself
pretty well to just exporting per-thread stats.  Using MMV would make that
super-simple, but it would be nice to have some way of making sure that
separate threads don't end up fighting over cache lines.
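
What I mean is something along the lines of just aligning each thread's
slot to a cache line (sketch, assuming 64-byte lines and gcc; the names and
sizes are invented):

#include <stdint.h>

#define CACHELINE 64

/* each thread owns one slot; alignment keeps slots on separate cache lines */
struct per_thread_stats {
	uint64_t val[8];			/* whatever the real stats are */
} __attribute__((aligned(CACHELINE)));

static struct per_thread_stats slots[64];	/* e.g. one slot per thread */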

Jeff.

P.S. In case you are wondering how I managed to get so many values, it's
simple.  I have about a dozen different RPCs, and for each I keep track of
the latencies.  Not just the average or something simple like that, but I
have buckets.  There are about 70 buckets per RPC.  I make each RPC a metric
with ~70 instances.
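
Roughly like this (made-up bucket scheme, just to show the shape of it):

#include <stdint.h>

#define NRPC		12
#define NBUCKETS	70

/* one histogram per RPC type; each bucket maps to one instance of the metric */
static uint64_t latency_buckets[NRPC][NBUCKETS];

static void
record_latency(int rpc, uint64_t usec)
{
	int b = 0;

	/* invented bucket boundaries; roughly logarithmic in this sketch */
	while (usec > 1 && b < NBUCKETS - 1) {
		usec >>= 1;
		b++;
	}
	latency_buckets[rpc][b]++;
}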

-- 
Ready; T=0.01/0.01 17:09:44


