(Heh, we've wandered a fair way from multi-archive support now,
and we're not helping Dave here anymore, I suspect - this will
be my last mail on this topic for now)
----- Original Message -----
>
> 1- This requires the client to divine pmcd server connection data for
> the real-time component from the archives (or vice versa). Or else
> force the user to specify both?
It starts from a host spec (just like pmlogger does), as described
in the earlier unified context discussions.
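
(For concreteness, this is roughly what "starts from a host spec"
means at the PMAPI level - a minimal sketch only; pmNewContext()
and the host spec syntax are the existing pieces, everything else
here is illustrative:

    #include <stdio.h>
    #include <pcp/pmapi.h>

    int
    main(int argc, char **argv)
    {
        /* same style of host spec pmlogger accepts, e.g.
         * "somehost", "pcp://somehost:44321", or "local:" */
        const char *spec = (argc > 1) ? argv[1] : "local:";
        int ctx = pmNewContext(PM_CONTEXT_HOST, spec);

        if (ctx < 0) {
            fprintf(stderr, "pmNewContext(%s): %s\n",
                    spec, pmErrStr(ctx));
            return 1;
        }
        /* ... pmLookupName()/pmFetch() against the live context ... */
        pmDestroyContext(ctx);
        return 0;
    }

Whether a unified context then layers archives behind that same
starting spec is exactly what's being discussed; the sketch above
is only the existing live-context piece.)
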
> requests from live pmcd would be available; but data that the PMAPI
> client *recently requested* wouldn't be.
If it needs to persist, pmlogger needs to write it. End of story.
Keep it simple. As described in the earlier discussions.
This scheme where clients have to force all data to be written to
disk, via a system daemon no less, before they can see it - that's
often not going to be appropriate, sorry. Many tools definitely do
not want that - we simply cannot mandate that behaviour.
> > And another "hop" (synchronous round trip for every PDU)
>
> True (unless perhaps the new pmlogger can anticipate).
I don't see how, and certainly not using a 1-to-1 "librarified" pmcd
client protocol. So we'd invariably be increasing latency across all
PCP clients in one fell swoop. I just cannot see this approach being
feasible, sorry - beyond just the poor performance, there are protocol
deadlock issues, reconnect becomes complex - all error handling, in
fact, is complicated by the multiple hops - there's a raft of issues.
> [...]
> as we still suffer here and there from the contrary assumption,
> archive-label.hostname != usable PM_CONTEXT_HOST parameter.)
That's not being assumed here.
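
(To be explicit about what isn't being assumed - a rough sketch,
where pmGetArchiveLabel() and ll_hostname are the real PMAPI bits
and the helper around them is just illustration:

    #include <stdio.h>
    #include <pcp/pmapi.h>

    static void
    show_archive_host(const char *archive)
    {
        pmLogLabel label;
        int ctx = pmNewContext(PM_CONTEXT_ARCHIVE, archive);

        if (ctx < 0) {
            fprintf(stderr, "pmNewContext(%s): %s\n",
                    archive, pmErrStr(ctx));
            return;
        }
        if (pmGetArchiveLabel(&label) >= 0)
            printf("archive %s was logged on host \"%s\"\n",
                   archive, label.ll_hostname);
        /* the label hostname is whatever the logging host called
         * itself; we do NOT feed it back into
         * pmNewContext(PM_CONTEXT_HOST, ...) and expect it to work */
        pmDestroyContext(ctx);
    }

i.e. the starting host spec comes from the user, as with pmlogger,
not from the archive label.)
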
cheers.
--
Nathan