
Re: [pcp] JSON PMDA

To: David Smith <dsmith@xxxxxxxxxx>
Subject: Re: [pcp] JSON PMDA
From: Nathan Scott <nathans@xxxxxxxxxx>
Date: Tue, 21 Apr 2015 19:27:43 -0400 (EDT)
Cc: pcp <pcp@xxxxxxxxxxx>
Delivered-to: pcp@xxxxxxxxxxx
In-reply-to: <5536C228.8010001@xxxxxxxxxx>
References: <54F9F92D.4010202@xxxxxxxxxx> <448002717.7934024.1427683964254.JavaMail.zimbra@xxxxxxxxxx> <552699FE.7040801@xxxxxxxxxx> <2139482617.15593599.1428634701360.JavaMail.zimbra@xxxxxxxxxx> <552D6524.1030803@xxxxxxxxxx> <1237712965.18667183.1429054767135.JavaMail.zimbra@xxxxxxxxxx> <5536C228.8010001@xxxxxxxxxx>
Reply-to: Nathan Scott <nathans@xxxxxxxxxx>
Thread-index: FlAA9MmycRh4a/6rJpRp3j/XYswapg==
Thread-topic: JSON PMDA
Hi David,

----- Original Message -----
> > ----- Original Message -----
> >>> [...]
> >>> Yeah - something like that - have a look at src/libpcp_pmda/src/cache.c
> >>> as that's how the instance cache number stability is achieved.  Perhaps
> >>> we can extend that with additional APIs to help us out here.
> >>>
> >>
> > It's OK to extend the API/ABI, but not to break it.  Which should be all we
> > need to do here, I think.  Maybe see if we can reduce the range that those
> > cache.c interfaces accept - the two lines with "if (inst == 0x7fffffff) {"
> > there look promising.  If we had a h->maxinst there perhaps, instead of the
> > hard-coded 2^31-1 limit (may need to change the test to '>=' too) we might
> > be done and dusted here.  That'll turn out to be a gross oversimplification
> > I'm sure ... but maybe, just maybe it will work.
> 
> OK, I've been staring at cache.c today, and I've figured out a couple of
> things.
> 
> - Most of the existing code is for an instance cache; there doesn't
> appear to be any existing code for a cluster/metric cache.

A more abstract way to think about it would be 'it's a cache for signed
32-bit identifiers, allocated in a monotonically increasing way, allowing
for holes, and with support for optional persistence' ... which we use
only for instances today.
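
For reference, here's a minimal sketch of how that instance cache is
driven today (refresh_instance is a made-up name, just for illustration):

#include <pcp/pmapi.h>
#include <pcp/pmda.h>

int
refresh_instance(pmInDom indom, const char *name)
{
    /*
     * pmdaCacheStore() hands back the same non-negative identifier
     * for this name every time (or a negative libpcp error code);
     * the name<->id mapping can optionally be persisted across PMDA
     * restarts via pmdaCacheOp(3).
     */
    return pmdaCacheStore(indom, PMDA_CACHE_ADD, name, NULL);
}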

> - I'm failing to see how changing that 0x7fffffff as you outlined above
> helps.  Can you explain that a bit more?

So, if we can generalise the above a little, we may be able to make it a
'cache for identifiers in a range from 0 to some specified maximum, with
identifiers allocated in a monotonically increasing way, allowing holes
and optional persistence'.

A pmInDom combines a domain number (137 for the JSON PMDA) and a "serial"
number.  So in our situation here, we could reserve serial #0 for a
metric-identifier cache, #1 for an indom-identifier cache, and then use
the rest of the serial space for indom-instance caches.
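
In code, that reservation could look something like this - just the
suggestion above expressed as macros, using the pmInDom_build() macro
that composes a domain and serial; nothing here is settled:

#include <pcp/pmapi.h>

#define JSON_DOMAIN     137     /* JSON PMDA domain number */

/* reserved serials - #2 and up remain free for indom-instance caches */
#define METRIC_ID_CACHE pmInDom_build(JSON_DOMAIN, 0)
#define INDOM_ID_CACHE  pmInDom_build(JSON_DOMAIN, 1)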

> If you'd like me to add a cluster/metric cache, I'm going to need a bit
> more explanation about what that will entail.

I don't think that is necessary.  I think we may even get away with just
the one metric-identifier cache?  (combining cluster and item, using the
full metric name as the cache key - maybe?  Not sure, but that would help
with the 1024-metrics-per-source-only problem).
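
To make that concrete, here's a hypothetical sketch (json_metric_pmid is
not a real interface) of a single name-keyed cache producing pmIDs by
spreading the identifier across the cluster and item fields:

#include <pcp/pmapi.h>
#include <pcp/pmda.h>

#define JSON_DOMAIN     137

static pmID
json_metric_pmid(pmInDom metric_cache, const char *fullname)
{
    int id = pmdaCacheStore(metric_cache, PMDA_CACHE_ADD, fullname, NULL);

    if (id < 0)
        return PM_ID_NULL;
    /* split id across the 12-bit cluster and 10-bit item fields, so
     * no single source is capped at 1024 metrics */
    return pmID_build(JSON_DOMAIN, (id >> 10) & 0xfff, id & 0x3ff);
}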

> From a PMDA writer's point of view, I'd think the new APIs would look
> something like (in pseudo code):
> 
> - lookup_cluster(domain_id, name)
> - find_next_available_cluster(domain_id)
> - lookup_metric(domain_id, cluster_id, name)
> - find_next_available_metric(domain_id, cluster_id)

As per the earlier mail with kenj (re ioctl), I think the only new API we
will need for this aspect would be something like:

int pmdaCacheResize(pmInDom indom, int maximum);

the existing pmdaCacheOp(3) interfaces should provide the rest of the
cache manipulation functionality you need (persisting, restoring, and
so on).
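
Roughly how a PMDA might put that together - pmdaCacheResize() being the
proposed call, the pmdaCacheOp(3) operations already existing, and
setup_metric_cache() a made-up name:

#include <pcp/pmapi.h>
#include <pcp/pmda.h>

static int
setup_metric_cache(pmInDom metric_cache)
{
    int sts;

    /* cap ids so they fit the 22 bits of pmID cluster+item space */
    if ((sts = pmdaCacheResize(metric_cache, (1 << 22) - 1)) < 0)
        return sts;
    /* restore any previously persisted name<->id mappings */
    return pmdaCacheOp(metric_cache, PMDA_CACHE_LOAD);
}

A matching pmdaCacheOp(metric_cache, PMDA_CACHE_SAVE) after new names
are added would persist the mapping.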

cheers.

--
Nathan
