Comment #3 on bug 1091 from Frank Ch. Eigler
(In reply to comment #2)
> This introduces server side memory load and complexity (significant per-client
> server state would be needed),
The amount of per-client state would be approximately equal to one fetch
packet, namely one kilobyte in this case. The complexity may be more
accurately judged via a straw design.
> at the cost of tiny amounts of network traffic & small latency.
As for network traffic, in this test it was 10% of the total, and
approx. 100% duplicative of previous traffic. As for latency, that's
not easily answered, since it is a composite of multiple scheduling
delays, context switches, and hops across the network.
> It goes against the design of pmcd to be small, lightweight, and stateless.
Only "stateless" is impacted here, and even that not severely. In the
model roughly proposed, pmcd would remain stateless across individual PMAPI
calls (except, well, for state already stored, like indom profiles). It
would just mean that instead of one response to the fetch pdu, pmcd would
issue multiple responses over time, until the operation is complete. For
example, no nesting of operations is necessary.
If you mean "stateless" to mean the structure of the relatively simple main
message handling loop, then yes, that part would be somewhat complicated.
> It'd also removes the implicit feedback loop that is designed into the
> protocol.
Please elaborate on what you mean.
> If you're concerned about it, it would be better to understand where that
> existing latency is coming from and optimise that - such optimisations will
> help all client tools and allows pmcd to stay lean.
Sure, but that is orthogonal to reducing effort and traffic that is, from
the application's point of view, pure overhead. We should reduce both.