
To: pcp@xxxxxxxxxxx
Subject: Re: [pcp] pmlogger performance
From: Ken McDonell <kenj@xxxxxxxxxxxxxxxx>
Date: Tue, 16 Jul 2013 12:00:31 +1000
On 16/07/13 11:54, Stan Cox wrote:

> I am retooling the performance checking to:
>
> 1. Run the benchmark on all CPUs for a given length of time,
> 2. Then run 1. + pmlogger,
> 3. Then run 1. + a tool, e.g. sar,
> 4. Run steps 2 and 3 for multiple runs, varying the interval
>    (10s, 1m, 10m?) and the length of the run,
> 5. Graph and/or simplify the results.

Thanks for the clarification, but what performance hypothesis is this aiming to prove?
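
As I read it, steps 1-4 reduce to a driver loop roughly like this (an
untested sketch; ./benchmark, the archive paths and the sar sample count
are all placeholders):

    for interval in 10 60 600                               # 10s, 1m, 10m
    do
        /usr/bin/time -o base.$interval ./benchmark         # step 1
        pmlogger -t ${interval}sec /tmp/pcplog.$interval &  # step 2
        /usr/bin/time -o pmlogger.$interval ./benchmark
        kill $!
        sar -o /tmp/sarlog.$interval $interval 100000 &     # step 3
        /usr/bin/time -o sar.$interval ./benchmark
        kill $!
    done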

Is the measure of "goodness" the extent to which the elapsed time of the "benchmark" in 1. is stretched by the concurrent monitoring load in 2., compared with 3.?
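
In symbols, with T1, T2 and T3 the elapsed times of the benchmark in
steps 1-3 (my labels, not Stan's), that comparison would be:

    dilation(pmlogger) = (T2 - T1) / T1
    dilation(sar)      = (T3 - T1) / T1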

If the benchmark is CPU bound and drives all CPUs to 100%, then this is an OK experiment for some (but not all) of the HPC space, but probably not that relevant for other environments, because the extreme CPU saturation will make the test sensitive to any other load perturbation (e.g. additional context switching).

If the benchmark were using, say, 80% of the available CPU cycles, and you measured and compared the CPU time (user+sys) consumed by sar/vmstat/... in 3. against that of pmlogger+pmcd (assuming the Linux PMDA is installed as a DSO PMDA) in 2., then I think that would be a more realistic measure ... but maybe I'm guessing wrong as to the hypothesis this is exploring.
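
If that's the plan, snapshotting the accumulated CPU time of the
monitoring processes at the end of each run would be enough ... a rough
sketch (assuming a single pmcd, pmlogger and sar of interest):

    # ps reports "time" as accumulated CPU (user+sys), not elapsed time
    ps -o comm,time -p $(pgrep -d, 'pmcd|pmlogger')        # after step 2
    ps -o comm,time -p $(pgrep -d, sar)                    # after step 3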

Cheers, Ken.
