> -----Original Message-----
> From: pcp-bounces@xxxxxxxxxxx [mailto:pcp-bounces@xxxxxxxxxxx] On
> Behalf Of Nathan Scott
> Sent: Wednesday, 2 July 2014 11:47 AM
> To: Amer Ather
> ...
> So, these are counter metrics, and they are exported in milliseconds. In
> order to achieve the utilization metric you're after, the counter needs
> to be converted to a rate (change-in-value over change-in-time) and the
> units converted to a utilization (initially normalized, then multiplied
> by 100 to produce a percent).
>
> It's not clear exactly what the web client is doing here, but these
> derived metrics should not need the final "... * 1000)" bit - I think
> that's making some incorrect assumptions; it should just be using
> "hinv.ncpu". So using pmval instead, with this config...
Indeed, clients that do this correctly have to understand that arithmetic
of any sort on metric values may require scaling the operands first. The
metadata exported by the PCP agents allows this to be done correctly every
time.
In this case the client should discover that the metric has the "dimension"
of time (to the power 1) and the units of msec. When it rate-converts by
dividing by the time between consecutive samples, the two operands of the
division need to be converted to the _same_ unit - it does not matter which
unit, as long as they match, so it could be msec, sec, or usec. A client
that does it this way works correctly whether you connect to an archive
from the first computer I used, where time was measured in seconds, or to
next year's phone, where time may be measured in nsec.
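A minimal sketch of that rate conversion, in Python with hypothetical sample
values (a real client would fetch the counter, timestamps, and unit metadata
via the PCP API; the function name and numbers here are illustrative only):

```python
# Sketch: rate-converting a CPU counter exported in msec to % utilization.
# The key step is scaling both division operands to the SAME unit (msec here)
# before doing the arithmetic, as the metric metadata dictates.

def utilization_percent(prev_ms, curr_ms, prev_t_sec, curr_t_sec, ncpu):
    """Turn two samples of a msec counter into a utilization percentage."""
    delta_busy_ms = curr_ms - prev_ms                 # counter delta, msec
    delta_time_ms = (curr_t_sec - prev_t_sec) * 1000.0  # interval: sec -> msec
    # Both operands now share the unit msec, so the ratio is dimensionless;
    # normalize by CPU count, then scale to a percent.
    return 100.0 * delta_busy_ms / (delta_time_ms * ncpu)

# e.g. 1500 ms of CPU time consumed over a 1 s interval on 2 CPUs -> 75.0
print(utilization_percent(10_000, 11_500, 0.0, 1.0, 2))
```

If the counter were instead exported in usec or sec, only the scaling of the
operands changes; the arithmetic itself stays identical.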
Similar considerations apply to arithmetic for metrics over the dimension of
"space" where the units could be byte, Kbyte, Mbyte, Gbyte, ...
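The space-dimension case looks the same in miniature - normalize both
operands to a common unit before the arithmetic. A small sketch (the scale
table and helper name are mine, not from PCP; binary multiples assumed):

```python
# Sketch: normalizing space-dimension operands before arithmetic,
# analogous to the time case above. Binary (1024-based) multiples assumed.
SCALE = {"byte": 1, "Kbyte": 1024, "Mbyte": 1024**2, "Gbyte": 1024**3}

def to_bytes(value, unit):
    """Convert a space-dimension value to bytes using its metadata unit."""
    return value * SCALE[unit]

# A ratio of two space metrics only makes sense once the units agree:
used = to_bytes(512, "Mbyte")
total = to_bytes(1, "Gbyte")
print(100.0 * used / total)   # -> 50.0
```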
This is the major reason why the metadata for PCP metrics is so detailed
compared to some other export mechanisms for performance data, e.g. SNMP,
rstatd or the sar binary data format. This was NOT an accident ... 8^)>