On 15/07/13 14:04, Stan Cox wrote:
This is a bit long, but it shows the results of some performance
measurement of pmlogger.
First, a pmlogger configuration is created covering only the target
tool's metrics, e.g. sar's. Second, the benchmark is run on its own for
the specified interval, 1800 seconds in this case, and counts how many
iterations it completes in that time. Then the benchmark is run for that
many iterations simultaneously with pmlogger, which logs at an interval
of 1 second for 1800 seconds, and we wait for both to complete. Next,
similarly, the benchmark is run simultaneously with the tool being
measured, e.g. sar, at an interval of 1 second for 1800 seconds. The
results are shown for sar, vmstat, mpstat, and atop.
...
I am sorry Stan, but I do not understand the methodology of the
experiment, much less how to interpret the results.
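As far as I can follow, the harness would be doing something like the rough
sketch below. To be clear, ./benchmark and its --seconds/--iterations options,
the config file and the output names are placeholders of mine, not the actual
scripts used in the experiment:

#!/usr/bin/env python3
# Rough sketch of the procedure as I read it; all command names, options
# and file names below are placeholders, not the scripts actually used.
import subprocess
import time

DURATION = 1800      # calibration wall-clock interval, seconds
SAMPLE_INTERVAL = 1  # monitoring sample interval, seconds

def calibrate(duration):
    """Run the benchmark alone for `duration` seconds and return the number
    of iterations it completed (assumes the benchmark prints that count)."""
    out = subprocess.run(["./benchmark", "--seconds", str(duration)],
                         capture_output=True, text=True, check=True)
    return int(out.stdout.strip())

def run_with_monitor(iterations, monitor_cmd):
    """Run the benchmark for a fixed iteration count while the monitor
    samples alongside it; wait for both and return the benchmark's time."""
    monitor = subprocess.Popen(monitor_cmd)
    start = time.time()
    subprocess.run(["./benchmark", "--iterations", str(iterations)], check=True)
    elapsed = time.time() - start
    monitor.wait()
    return elapsed

iterations = calibrate(DURATION)

# pmlogger sampling every second for 1800s, config limited to the sar metrics
t_pcp = run_with_monitor(iterations,
    ["pmlogger", "-t", f"{SAMPLE_INTERVAL}sec", "-T", f"{DURATION}sec",
     "-c", "sar-metrics.config", "bench.archive"])

# sar sampling at the same interval for the same duration
t_sar = run_with_monitor(iterations,
    ["sar", "-o", "bench.sa", str(SAMPLE_INTERVAL), str(DURATION)])

print(f"pmlogger: {t_pcp:.1f}s  sar: {t_sar:.1f}s")

Please correct me where that reading is wrong.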
But is a sample interval of 1 second realistic in the expected
production environments?
I'd expect sar, vmstat, mpstat et al. to use less CPU than
pmlogger+pmcd+pmda (the Linux PMDA in this case), but not by a very big margin.
And if you consider what fraction of a total system's resources is
committed to performance monitoring (small, I'd assert), a not-very-big
margin on that small fraction should not be significant.
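To put purely hypothetical numbers on that (illustrative only, nothing here
is measured):

# Illustrative only: suppose the sar-style tools cost 0.5% of one CPU and
# pmlogger+pmcd+pmda cost twice that, i.e. a "big" relative margin.
tool_cpu = 0.5              # percent of one CPU, hypothetical
pcp_cpu = 2.0 * tool_cpu    # percent of one CPU, hypothetical
print(f"extra cost to the whole system: {pcp_cpu - tool_cpu:.1f}%")  # 0.5%

Even a 2x relative margin would only add half a percent of one CPU to the
system's load in that hypothetical case.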
Are your results showing something different?