
Re: PCP Grafana questions

To: Martins Innus <minnus@xxxxxxxxxxx>
Subject: Re: PCP Grafana questions
From: fche@xxxxxxxxxx (Frank Ch. Eigler)
Date: Thu, 07 May 2015 16:27:39 -0400
Cc: pcp <pcp@xxxxxxxxxxx>
Delivered-to: pcp@xxxxxxxxxxx
In-reply-to: <554A7198.10509@xxxxxxxxxxx> (Martins Innus's message of "Wed, 06 May 2015 15:55:04 -0400")
References: <554A7198.10509@xxxxxxxxxxx>
User-agent: Gnus/5.1008 (Gnus v5.10.8) Emacs/21.4 (gnu/linux)

minnus wrote:

>     I'm starting to mess with the various pcp-webjs options and have
> a few questions on using the grafana component.  [...]

Very good, thanks for your interest.


> [...]
>     Not sure what they are called, but for lack of a better name I
> can't get the "metric operations" to work [...]

Yes, this is documented in pmwebapi(3) and
http://oss.sgi.com/bugzilla/show_bug.cgi?id=1094 .
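
For anyone unsure what "metric operations" refers to here: in
graphite/grafana terms these are the function wrappers around a target
expression, e.g. scale(...) or derivative(...).  One way to check
whether a particular function survives pmwebd's graphite emulation is
to request it directly.  The sketch below is illustrative only; the
/graphite/render path and port 44323 are the usual pmwebd defaults,
but the metric path and the scale() wrapper are assumptions you should
replace with names from your own archives:

    # hedged sketch: ask pmwebd's graphite API for a function-wrapped target
    import json, urllib.request

    url = ("http://localhost:44323/graphite/render"
           "?target=scale(some.archive.kernel.all.load,2)"
           "&from=-1h&format=json")
    with urllib.request.urlopen(url) as resp:
        # graphite-style reply: [{"target": ..., "datapoints": [[value, ts], ...]}]
        series = json.loads(resp.read().decode("utf-8"))
    for s in series:
        print(s["target"], len(s["datapoints"]), "points")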


> 2.

> My test case is about 1 month worth of a single node's archives with
> ~200 metrics collected at 30 sec intervals (3 GB), roughly 1 file
> per day, ~30 files.  [...] When editing the metrics it takes a while
> for the field to populate. [...] but it would be great to get some
> feedback that something is happening.  [...]

We generally ship the webapps -unmodified-, so if the stock
graphite/grafana webapp doesn't have a 'please wait ...' kind of
blinkenlight, it's not there in the pcp-webjs copy either.


> 3.
>
> I am starting to do my own testing, but has anybody done scalability
> studies? My largest dimension is going to be number of hosts.  So is
> it reasonable that I could plot ~5 metrics over a day but across 100
> host archives? 1000 host archives? [...]

The largest views I've handled involved some dozens of hosts (split
over some hundreds of time-sliced archives).  With pmwebd's -M
(multithreaded) mode, it has behaved reasonably quickly; make sure
you're on pcp 3.10.4.  How well it scales is also a function of the
web browser, so try the "png" (server-side) as well as the "flot"
(browser-side) rendering options.
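
If you want to quantify that difference for your own data, one rough
(and purely illustrative) check is to time the two render paths
directly against pmwebd started with -M.  The host, port, and target
pattern below are assumptions, so substitute whatever matches your
archive naming:

    # hedged sketch: compare server-side PNG rendering with the JSON
    # payload that flot would render in the browser
    import time, urllib.request

    base = ("http://localhost:44323/graphite/render"
            "?target=*.kernel.all.load&from=-24h")
    for fmt in ("json", "png"):
        t0 = time.time()
        with urllib.request.urlopen(base + "&format=" + fmt) as resp:
            body = resp.read()
        print(fmt, len(body), "bytes in", round(time.time() - t0, 2), "s")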


- FChE
