
Re: pmParseMetricSpec(3) problems

To: "Ken McDonell" <kenj@xxxxxxxxxxxxxxxx>
Subject: Re: pmParseMetricSpec(3) problems
From: nscott@xxxxxxxxxx
Date: Thu, 1 May 2008 07:39:25 +1000 (EST)
Cc: pcp@xxxxxxxxxxx
Importance: Normal
In-reply-to: <1209590407.2870.19.camel@xxxxxxxxxxxxxxxxxxxxx>
References: <40997.192.168.3.1.1209514118.squirrel@xxxxxxxxxxxxxxx> <1209590407.2870.19.camel@xxxxxxxxxxxxxxxxxxxxx>
Sender: pcp-bounce@xxxxxxxxxxx
User-agent: SquirrelMail/1.4.8-4.el4.centos
> I think these are both solvable.
>
> From a spec point of view (see PCPIntro(1))
> a) eating multiple colons in the archive name is a no brainer

*nod*.  Simple Matter Of Programming there.

> b) if neither host: nor archive/ is present the metric spec is still
> valid, so disk.all.total and disk.dev.total[sda1] and
> disk.dev.total[mydisk,yourdisk theirdisk] are all valid and refer to the
> local pmcd's view of the metrics world.

That's pretty much how it is today - except I think that if there is no
archive/host specified at all, then the passed-in value "isarch" is used
in the result structure passed back out of pmParseMetricSpec.
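Roughly, that plays out like this (a quick sketch, not compiled or
tested - field names from the pmMetricSpec struct in pmapi.h):

    #include <stdio.h>
    #include <stdlib.h>
    #include <pcp/pmapi.h>

    /*
     * Sketch: with no "host:" or "archive/" prefix in the spec, the
     * isarch value we pass in comes back in rslt->isarch, and the
     * "source" argument becomes the default source name.
     */
    int
    main(void)
    {
        pmMetricSpec    *rslt;
        char            *errmsg;
        int             sts;

        sts = pmParseMetricSpec("disk.dev.total[sda1]",
                                0,              /* isarch: 0 => host context */
                                "localhost",    /* default source */
                                &rslt, &errmsg);
        if (sts < 0) {
            fprintf(stderr, "pmParseMetricSpec: %s\n", errmsg);
            free(errmsg);
            return 1;
        }
        /* expect: isarch=0 source=localhost metric=disk.dev.total ninst=1 */
        printf("isarch=%d source=%s metric=%s ninst=%d\n",
               rslt->isarch, rslt->source, rslt->metric, rslt->ninst);
        pmFreeMetricSpec(rslt);
        return 0;
    }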

But my issue was local contexts (not pmcd) - I don't see any way for a
pmMetricSpec to specify this third kind of context today, and I'm not
sure of the best approach to implementing that.

> On Wed, 2008-04-30 at 10:08 +1000, nscott@xxxxxxxxxx wrote:
>> ...
>> Second, I've just recently come across the fact that there's no way to
>> specify use of the local context through this interface - it handles
>> only hostnames and archive filenames in its parsing, and the API
>> "int isarch" parameter makes resolving this quite tricky.
>>
>> I'm not sure how best to fix this one.  It may be that we need a new
>> interface here, which allows all context types to be passed in?  Or
>> could we change the "int isarch" field to be the context "int type"
>> and also make localhost:/metric[] mean local context?  (the latter bit
>> seems quite dodgy to me - but I don't see a better way).
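
Just to make that "int type" idea concrete - this is purely hypothetical,
no such routine exists in libpcp today - something along these lines,
reusing the existing pmNewContext() type constants:

    #include <pcp/pmapi.h>

    /*
     * HYPOTHETICAL sketch only - pmParseMetricSpecType() does not exist;
     * this is just what swapping "int isarch" for a context "int type"
     * might look like.
     */
    extern int pmParseMetricSpecType(const char *string,
                    int type,       /* PM_CONTEXT_HOST, PM_CONTEXT_ARCHIVE
                                       or PM_CONTEXT_LOCAL */
                    char *source,   /* default source for that context type */
                    pmMetricSpec **rsltp,
                    char **errmsg);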

cheers.

--
Nathan

