
Re: [pcp] Process analysis

To: "Frank Ch. Eigler" <fche@xxxxxxxxxx>
Subject: Re: [pcp] Process analysis
From: Shirshendu Chakrabarti <shirshendu@xxxxxxx>
Date: Fri, 14 Nov 2014 12:00:24 +0530
Cc: Nicolas Michel <be.nicolas.michel@xxxxxxxxx>, yves.weber@xxxxxx, pcp@xxxxxxxxxxx
Delivered-to: pcp@xxxxxxxxxxx
In-reply-to: <y0mvbmjdoep.fsf@xxxxxxxx>
References: <CAO5znat7ceWKn8wf0RKC=GNzagqv3dsa=r-zfFTL4MxgLeue9w@xxxxxxxxxxxxxx> <y0mvbmjdoep.fsf@xxxxxxxx>
Systemtap could be one solution here: you can use it to trace the desired process.
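For example, a minimal sketch of tracing one process's read() syscalls (the PID 1234 is a placeholder; this assumes systemtap is installed with matching kernel debuginfo, and must run as root):

```
# Sketch: trace read() syscalls made by one process.
# PID 1234 is a placeholder; requires systemtap, kernel debuginfo, root.
stap -x 1234 -e 'probe syscall.read {
    if (pid() == target())
        printf("%s(%d) read fd=%d count=%d\n", execname(), pid(), fd, count)
}'
```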

Also, there is a systemtap PMDA which can help with this exercise. Other options for userspace- and kernel-level tracing on Linux:

2. ktap, http://www.ktap.org/. ktap is more like dtrace in design.

Please note that, AFAIK, there is no known integration with PCP for the above solutions.

Also, as of systemtap 1.2 there are probes in CPython and the JVM. http://fedoraproject.org/wiki/Features/SystemtapStaticProbes

Please do correct me, if I have committed any errors in this reply.

Thanks,

Shirshendu

On Thu, Nov 13, 2014 at 10:32 PM, Frank Ch. Eigler <fche@xxxxxxxxxx> wrote:

Hi -


be.nicolas.michel wrote:

> We want to give a try to PCP with a centralized pmlogger, and web
> plotter like grafana or graphite. It should be great for visualizing
> the global performance metrics on our servers.

OK, you might want to try pcp 3.10.0 + the webapps code, which should
handle approximately all that out-of-the-box.


> However, when analyzing performance issues, one also often needs to
> go deeper and grab performance metrics at the process level (which
> process consumes I/O or memory or CPU at a given time). [...]

That's a little trickier, for a few reasons.

First, pcp usually relays data in a relatively raw form from the
kernel. For the "proc.*" metrics, you can get some per-process status
and statistics, but finding the "top" process (by whatever metric) is
left to your application, and actual I/O trace-type traffic is not
routinely exposed by the kernel.

Second, the proc.* data is relatively large, so it is not normally
turned on for routine background logging. If you can narrow your
interest to a few metrics of the processes, or perhaps even to a few
particular process instances, then routinely logging them is probably
fine.
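To illustrate that narrowing, here is a hedged pmlogger configuration fragment; the metric choices and the quoted instance name ("PID command" form) are examples, not a recommendation:

```
# Sketch of a pmlogger config: log only a few proc.* metrics, and
# only for a chosen process instance ("1234 java" is a placeholder).
log mandatory on every 60 seconds {
    proc.psinfo.utime [ "1234 java" ]
    proc.memory.rss   [ "1234 java" ]
}
```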

Third, if you need to change the logging configuration on the fly (to
add more proc.* stuff temporarily, or to change sampling rates, for
example), there is currently simply no web frontend nor a
multiple-pmlogger-affecting mechanism for this. The pmlc program can
adjust a single running pmlogger to some extent. Another way could be
to use a pmmgr instance to supervise a stable of pmloggers for the
remote boxes. pmmgr can recompute pmlogger configuration files from
fragments you modify (to add/subtract proc.* etc.) & restart all the
pmloggers on demand. pmwebd can present a glued-together view of the
various archive files to the graphical webapps across restarts.
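For instance, a sketch of a pmlc session against a single running pmlogger (the pmlogger pid 12345, the metric, and the interval are all placeholders):

```
$ pmlc 12345        # pid of the running pmlogger (placeholder)
pmlc> query proc.psinfo.utime
pmlc> log mandatory on every 10 seconds { proc.psinfo.utime }
pmlc> quit
```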

- FChE

_______________________________________________
pcp mailing list
pcp@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/pcp
