
Re: [pcp] [issue] pmwebd graphite api performance issue

To: pcp@xxxxxxxxxxx
Subject: Re: [pcp] [issue] pmwebd graphite api performance issue
From: Ken McDonell <kenj@xxxxxxxxxxxxxxxx>
Date: Tue, 28 Jul 2015 06:44:22 +1000
Delivered-to: pcp@xxxxxxxxxxx
In-reply-to: <a00f452763fd43f98eb5123d3ec0c3cf@xxxxxxxxxxxxxxxxxxxxxxxxx>
References: <a00f452763fd43f98eb5123d3ec0c3cf@xxxxxxxxxxxxxxxxxxxxxxxxx>
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.7.0
On 27/07/15 23:30, Aurelien Gonnay wrote:
...
The part about *pmGetInDomArchive* is kind of bothering me, since it
looks like it’s spending most of its time in that method.

Any thoughts on how to improve my experience?

Can you send me one of the archives?

A quick look at the pmwebd source suggests that it is processing the metadata in the pmns traversal callback ... if there are lots of metrics over the _same_ instance domain, and that instance domain is large, then pmGetInDomArchive will be called O(# metrics with the same indom) times instead of O(1) times.

If my crude analysis is correct (this is not my code), then this looks to be a candidate for some serious code optimization.
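
For illustration only, here is a minimal sketch (hypothetical, not the actual pmwebd code) of the sort of optimization I mean: memoize the pmGetInDomArchive() result per indom so that metrics sharing an instance domain only pay for the archive metadata scan once. A real fix would need to key the cache per archive context and free the instance lists when the context is discarded.

    // Hypothetical sketch, not the real pmwebd code: cache the
    // pmGetInDomArchive() results keyed by indom so the pmns traversal
    // callback only scans the archive metadata once per instance domain.
    #include <pcp/pmapi.h>
    #include <cstddef>
    #include <map>

    struct indom_instances {
        int numinst;        // return value of pmGetInDomArchive(), < 0 on error
        int *instlist;      // instance identifiers (allocated by libpcp)
        char **namelist;    // instance names (allocated by libpcp)
    };

    // NB: pmGetInDomArchive() operates on the current context, so a real
    // implementation would key this per archive context and free the
    // lists (with free(3)) when the context goes away.
    static std::map<pmInDom, indom_instances> indom_cache;

    static const indom_instances &
    lookup_indom(pmInDom indom)
    {
        std::map<pmInDom, indom_instances>::iterator it = indom_cache.find(indom);
        if (it != indom_cache.end())
            return it->second;          // cache hit: no metadata rescan

        indom_instances ii;
        ii.instlist = NULL;
        ii.namelist = NULL;
        ii.numinst = pmGetInDomArchive(indom, &ii.instlist, &ii.namelist);
        return indom_cache.insert(std::make_pair(indom, ii)).first->second;
    }

With something like that in place, the per-callback cost becomes a map lookup rather than another walk over the archive metadata.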

At your end, I'd be looking at the configuration of your pmloggers and trying to reduce the number of metrics being fetched and logged that are unlikely to be insightful, especially the ones across a large indom (proc metrics would be the obvious initial candidates). Your archives appear to be quite large.

Remember that the "best" pmlogger configurations are ones that provide a broad, shallow slice across all of the metrics (to prove your assertions about where the performance problems are NOT) and deep, narrow slices in the places where you believe, or history has shown, the performance issues of interest are likely to be found. This inevitably requires some customization of the pmlogger configurations to match the local circumstances.
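
By way of example (hypothetical metric choices and intervals, so adjust them to what matters on your hosts), a pmlogger configuration along these lines gives the broad shallow coverage plus a couple of deep slices, while keeping the very large proc indom out of the archive:

    # broad, shallow slice: cheap system-wide metrics at a modest rate
    log mandatory on every 1 minute {
        kernel.all.load
        kernel.all.cpu
        mem.util
        disk.all
        network.interface.total.bytes
    }

    # deep, narrow slices: only where you believe (or history shows)
    # the interesting problems live
    log mandatory on every 10 seconds {
        disk.dev
        filesys.full
    }

    # keep the very large proc indom out of the archives
    log mandatory off {
        proc
    }

Something along those lines should keep the archives, and hence the work pmGetInDomArchive has to do for pmwebd, a lot smaller.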
