
Re: [pcp] pmlogger performance

To: pcp@xxxxxxxxxxx
Subject: Re: [pcp] pmlogger performance
From: Ken McDonell <kenj@xxxxxxxxxxxxxxxx>
Date: Tue, 16 Jul 2013 11:26:04 +1000
In-reply-to: <51E374C4.5@xxxxxxxxxx>
References: <51E374C4.5@xxxxxxxxxx>
On 15/07/13 14:04, Stan Cox wrote:
> This is a bit long, but it shows the results of some performance
> measurement of pmlogger.
>
> First, a configuration is created covering only the target metrics,
> e.g. those reported by sar.  Second, the benchmark is run for the
> specified interval, 1800 seconds in this case, and it counts how many
> iterations it completes in that time.  Then the benchmark is run for
> that many iterations simultaneously with pmlogger, which logs at a
> 1-second interval for 1800 seconds, and we wait for both to complete.
> Next, similarly, the benchmark is run simultaneously with the tool
> being measured, e.g. sar, again at a 1-second interval for 1800
> seconds.  Results are shown for sar, vmstat, mpstat, and atop.
> ...
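
For concreteness, the procedure described above might look roughly like
the script below.  This is only a sketch: the ./benchmark program and its
options are placeholders for whatever the real benchmark is, and the
metric list is just a guess at a sar-like set.

#!/bin/sh
DUR=1800

# pmlogger config covering only the target metrics, sampled every second
cat > sar.config <<'EOF'
log mandatory on every 1 second {
    kernel.all.cpu
    kernel.all.intr
    kernel.all.pswitch
}
EOF

# 1. calibrate: count iterations the benchmark completes in DUR seconds
ITERS=$(./benchmark --seconds $DUR)

# 2. rerun that many iterations while pmlogger samples once per second
pmlogger -T ${DUR}sec -c sar.config sar.archive &
./benchmark --iterations $ITERS
wait

# 3. same again with the tool being compared, e.g. sar
sar -o sar.data 1 $DUR >/dev/null &
./benchmark --iterations $ITERS
wait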

I am sorry, Stan, but I do not understand the methodology of the experiment, much less how to interpret the results.

But is a sample interval of 1 second realistic in the expected production environments?

I'd expect sar, vmstat, mpstat, et al. to use less CPU than pmlogger+pmcd+pmda (the linux PMDA in this case), but not by a very big margin.
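
A quick way to sanity-check that expectation would be to compare the
accumulated CPU time of each collection stack after identical runs, e.g.
with ps (the process names here assume a default PCP and sysstat install;
sadc is the collector that sar spawns):

ps -o comm,time -C pmlogger,pmcd,pmdalinux
ps -o comm,time -C sar,sadc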

Then if you consider what fraction of a total system's resources is committed to performance monitoring (a small fraction, I'd assert), a not-very-big margin on a small fraction should not be significant: if monitoring consumes, say, 1% of the CPU, even doubling that cost adds only another 1%.

Are your results showing something different?
