
Re: [pcp] My first PMDA, some questions..

To: Nathan Scott <nathans@xxxxxxxxxx>
Subject: Re: [pcp] My first PMDA, some questions..
From: Jan-Frode Myklebust <janfrode@xxxxxxxxx>
Date: Wed, 19 Nov 2014 00:59:54 +0100
Cc: pcp@xxxxxxxxxxx
Delivered-to: pcp@xxxxxxxxxxx
In-reply-to: <69304097.15487953.1416205767949.JavaMail.zimbra@xxxxxxxxxx>
References: <20141116200958.GA8464@xxxxxxxxxxxxxxxxx> <69304097.15487953.1416205767949.JavaMail.zimbra@xxxxxxxxxx>
User-agent: Mutt/1.5.21 (2010-09-15)
On Mon, Nov 17, 2014 at 01:29:27AM -0500, Nathan Scott wrote:
> 
> At what point do you know how many threads you have?  

After "unbound" is started, though I guess it might change if someone
changes the unbound configuration. In general it should be pretty
static..
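
For what it's worth, one way to discover the thread count at PMDA
startup might be to parse the `threads:` line from `unbound-control
status` output. The field name and output layout here are assumptions;
check them against your unbound version:

```python
import re
import subprocess

def unbound_thread_count(status_text):
    """Parse the thread count from `unbound-control status` output.

    Assumes a line like 'threads: 4' appears in the output; the
    exact field name may differ between unbound versions.
    """
    match = re.search(r'^threads:\s*(\d+)', status_text, re.MULTILINE)
    return int(match.group(1)) if match else None

def current_thread_count():
    # Hypothetical invocation: unbound-control must be on PATH and the
    # PMDA must be allowed to talk to the running unbound daemon.
    out = subprocess.run(['unbound-control', 'status'],
                         capture_output=True, text=True,
                         check=True).stdout
    return unbound_thread_count(out)

# Parsing a sample of what the status output is assumed to look like:
sample = "version: 1.4.22\nverbosity: 1\nthreads: 4\nmodules: 2\n"
print(unbound_thread_count(sample))  # -> 4
```

Running this once at startup (before PMDA.run()) would fit the first
option Nathan describes.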


> If you can find out
> before the call to PMDA.run(), then you can just add them like any other
> metric (with the first parameter to add_metric() being "thread%d.xxx".  If
> you can only find out after the call to run(), then the current python API
> will not work for you.
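
If the count is known before run(), the per-thread registration loop
might look like the sketch below. A tiny stub stands in for
pcp.pmda.PMDA here so the "thread%d.xxx" naming pattern is visible
without the real pcp Python bindings; in an actual PMDA each name
would be paired with a pmdaMetric descriptor (type, indom, semantics,
units), as in the gluster pmda. The stat names are made up for
illustration:

```python
class StubPMDA:
    """Minimal stand-in for pcp.pmda.PMDA, recording metric names only."""
    def __init__(self):
        self.metrics = []

    def add_metric(self, name):
        # The real add_metric() also takes a pmdaMetric descriptor;
        # omitted in this stub.
        self.metrics.append(name)

def register_thread_metrics(pmda, nthreads):
    # Register one set of per-thread metrics before run() is called,
    # using the "thread%d.xxx" naming scheme from the mail above.
    for n in range(nthreads):
        for stat in ('num.queries', 'num.cachehits'):  # example names
            pmda.add_metric('unbound.thread%d.%s' % (n, stat))

pmda = StubPMDA()
register_thread_metrics(pmda, 2)
print(pmda.metrics)
```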

Ok, hmm... not sure which option I'll go for.


> 
> 
> The client (monitor) tools drive the fetch interval, PMDAs just respond with
> the latest values - they typically have no knowledge of the sampling rate.
> There can also be multiple clients, sampling at different (unrelated) times.

So two people running "pmval metric" will by default trigger two
fetches per second? Or are they smart enough to share one? One fetch
per second should be OK, but much more than that gets scary..

> 
> If you have concerns about running the command frequently, one option is to
> cache the results for a short time, and respond with the cached results if
> fetches come in too quickly for the PMDA to comfortably handle.  Chapter 2
> of the Programmers Guide (http://www.pcp.io/doc/pcp-programmers-guide.pdf)
> has a section "2.2.4 - Caching PMDA" with more details.
> 

Ok.
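
The caching approach from section 2.2.4 can be sketched in Python:
hold the parsed stats for a short interval and only re-run the
external command when the cache has expired. The one-second timeout
and the fetch callback are illustrative assumptions, not part of the
pcp API:

```python
import time

class CachedStats:
    """Serve cached results when fetches arrive faster than we want
    to run the (expensive) external command."""
    def __init__(self, fetch, max_age=1.0):
        self.fetch = fetch        # callable that runs the command
        self.max_age = max_age    # seconds to keep a result
        self.cached = None
        self.timestamp = 0.0

    def get(self):
        now = time.time()
        if self.cached is None or now - self.timestamp > self.max_age:
            self.cached = self.fetch()
            self.timestamp = now
        return self.cached

# Two clients fetching back-to-back only cost one command run:
calls = []
stats = CachedStats(lambda: calls.append(1) or {'queries': 42})
stats.get()
stats.get()
print(len(calls))  # -> 1
```

With this, multiple pmval clients sampling at unrelated times cost at
most one command run per max_age interval.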


BTW: Here's my first shot -> https://github.com/janfrode/unbound-pmda
based on the gluster pmda.


  -jf
