Ken McDonell wrote:
> I'm suggesting that we remove all the asynchronous PMAPI extensions from
> libpcp as part of a PCP 4.0 goal.
> This mail is intended to trigger discussion on that suggestion.
> The motivation is ...
> 1. the routines are not used by any PMAPI client in the open source
> version of PCP
> 2. the routines are not used by pmchart
> 3. the implementation is incomplete ... man pages exist for at least the
> following but there is no code in libpcp to implement these functions:
> pmRequestStore, pmRequestNamesOfChildren (oddly,
> pmReceiveNamesOfChildren is in the library!), pmRequestNameAll,
> pmReceiveStore and pmReceiveNameAll.
> 4. the last time I raised this question, the only justification offered
> for these routines was suggested by Max ... "when collecting metrics
> from multiple hosts I cannot afford for TCP to pick its nose when one of
> the hosts go down". But I can find no evidence of any application that
> actually uses these routines to deal with the "one of many hosts may be
> down" scenario, and indeed the cluster PMDA, which might have been a
> candidate, does not use the pmRequest*/pmReceive* family of routines.
> 5. there is zero QA coverage for the asynchronous routines (I was
> recently doing some gcov analysis in pmns.c, which brought this issue
> into stark relief and triggered this mail).
> So unless someone can come up with a real use case ...
As of Mon 15 March there is a process in our product that uses the async
API to do precisely what it was designed for, namely hiding the fetch
latency for slow or broken hosts in a cluster. Please don't remove it
just yet.
But if you do, please make libpcp sensibly thread-safe instead. I don't
know the current status, but the last time I looked, libpcp was not
thread-safe.
> ... and someone steps up to help with the QA coverage of these routines,
> I am strongly suggesting that they be expunged at the next really major
> release, i.e. 4.0.
I think extending the coverage of pcpqa is a good and worthy goal.
Aside: I'd be curious to see the results of your coverage study and to
know how you went about doing it.
--
Greg.