Hi -
nathans wrote:
> [...]
> 1. privileged user requirement
> The code that's there now, I'm not really following; which is always a
> concern for security-sensitive stuff. :)
> Are the wheel and adm groups able to access the performance counters
> normally? [...]
I believe this was just cargo-culted from the systemd pmda, where the
wheel and adm groups do have a specific & relevant meaning. For the
papi pmda's purposes, uid=gid=0 seems like a fine default.
> 2. enabling counters
> [...]
> Initially, let's keep it simple. There has been some doubt expressed
> whether the auto-enabling will actually work with good error-handling
> semantics - no one's ever done this before - and it doesn't function at
> all for short-lived PMAPI clients (*) - see example below.
That's not entirely correct.
> The explicit-enabling method (pmstore) can map directly onto the
> PAPI interfaces, so it comes with a 100% success guarantee - there
> is no controversy over whether this approach will work, we all know
> that it does and it has before. [...]
It has tradition and a charming simplicity going for it, but it lacks:
- tooling support, so e.g. it can't be used for reliable logging,
as pmlogger is not in the business of pmstore'ing constantly
- isolation of users/apps from each other, and from state-losing
upsets like pmcd restarts
> [...]
> (*) e.g., this script should work, but cannot with auto-enabling:
>
> #!/bin/sh
> pmstore papi.control.enable "TOT_INS,L1_DCM"
> pmstore papi.control.reset "TOT_INS,L1_DCM"
>
> /opt/research/bin/gather_experimental_data > phase1.log 2>&1
> echo "--- Phase 1 (gather) results ---"
> pmprobe -v papi.preset.total_inst papi.preset.L1_DCM mem.util.shmem
> cat phase1.log
>
> threads=20
> pmstore papi.control.reset "L1_DCM"
> /opt/research/bin/search_experimental_data -p $threads > phase2.log 2>&1
> echo "--- Phase 2 (search x $threads) results ---"
> pmprobe -v papi.preset.total_inst papi.preset.L1_DCM mem.util.shmem
> cat phase2.log
Sure it can work with auto-enabling. You'd just replace the pmstore
bits with "pmprobe papi.preset.total_inst". You must have overlooked
[1], which outlines one automatic solution for even this scenario.
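As a sketch (assuming the auto-enabling behaviour described in [1],
where a plain fetch of a papi metric is enough to switch the
underlying counter on), the phase-1 part of the script above would
simply drop the pmstore calls:

```shell
#!/bin/sh
# Auto-enabling variant of the phase-1 script: fetching the papi
# metrics once is assumed sufficient to enable the counters; there is
# no papi.control.enable / papi.control.reset pmstore step.
pmprobe -v papi.preset.total_inst papi.preset.L1_DCM

/opt/research/bin/gather_experimental_data > phase1.log 2>&1
echo "--- Phase 1 (gather) results ---"
pmprobe -v papi.preset.total_inst papi.preset.L1_DCM mem.util.shmem
cat phase1.log
```

The trade-off is that the counters measure from the moment of that
first fetch rather than from an explicit reset point.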
Another way is to run a concurrent PCP client to wrap the experiment,
quite possibly pmlogger itself. Such a measure would be wise anyway,
considering large counter values / wraparounds and restarts, all of
which would go unnoticed if only a single PCP metric fetch were done
at the end of an indefinitely long experiment. (That consideration
seems to apply to non-perfctr PCP metrics too.)
[1] http://oss.sgi.com/pipermail/pcp/2014-June/005033.html
- FChE