Hi -
> > > Yes, that is an interesting point. Does this "context sharing" actually
> > > happen currently? [...]
> >
> > I don't know; the code might happen to lump fetches together.
>
> Hmmmm. This sounded interesting at first, but now it sounds more
> and more like it's not happening in practice & a bit of a
> distraction. [...]
If your scheme promotes reuse of a single context across data-views
directed at multiple containers, it will create just such a
context-sharing situation. (Maybe vector's optimizations allow
mass-fetches of all host metrics in one operation, resulting in no
harmful sharing - for now. But if vector reuses & reconfigures the same
context between mass-fetches for different containers, you get exactly
the new critical section I was talking about.)
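To make the hazard concrete, here is a toy model of that critical
section - not real libpcp/PMAPI code, just illustrative names - where
two request handlers share one context and each does a
reconfigure-then-fetch pair without any serialization:

```python
# Hypothetical model of a shared context reused across per-container
# fetches. SharedContext, reconfigure() and fetch() are illustrative
# stand-ins, not the real libpcp API.

class SharedContext:
    def __init__(self):
        self.container = None  # attribute reconfigured before each fetch

    def reconfigure(self, container):
        self.container = container

    def fetch(self):
        # returns metrics labelled with whatever container is configured
        return ("metrics-for", self.container)

ctx = SharedContext()

# Correct, serialized use: reconfigure+fetch held as one critical section.
ctx.reconfigure("web-1")
assert ctx.fetch() == ("metrics-for", "web-1")

# Interleaved use by two handlers with no lock: handler B's reconfigure
# clobbers handler A's before A gets to fetch.
ctx.reconfigure("web-1")   # handler A
ctx.reconfigure("db-2")    # handler B preempts
print(ctx.fetch())         # handler A receives db-2's data by mistake
```

The fix is exactly what "critical section" implies: the
reconfigure+fetch pair must be atomic with respect to other users of
the context.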
> > security concerns dictate that every pmda that handles stores needs to
> > be audited & ACL machinery applied.
>
> If you like, sure, but I know most/all of the store metrics and none are
> of particular concern to me off the top of my head. So, let us know how
> your audit goes - any issues there are independent of this effort though
> (iow, via pmcd & existing clients, not new to pmwebd).
Placing the burden of proof of security on someone other than the
person loosening security is not appropriate. If your "off the top of
your head" constitutes an acceptable audit, so be it. Change the
default pmcd.conf [access] ACL today.
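(For the record, a tightened default might look something like the
following [access] fragment - syntax per pmcd(1), and treat the exact
rules as illustrative rather than a drop-in default:

```
[access]
allow localhost : all;
disallow * : store;
```

i.e. permit fetches broadly but restrict store operations to the local
host until the audit is actually done.)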
> The SECURITY section in pmwebd(1) reads like a horror story, BTW, so it
> is "kettle, pot" territory saying store *might* introduce issues when we
> are already significantly exposed there.
If you examine the SECURITY section of pmproxy(1), or pmcd(1) for that
matter, you might notice analogous "horror story" cautions about
encryption, admission control, etc. Or you would, if those tools
even had a SECURITY man page section. They do not. Then again,
"tu quoque" is a loser argument.
As for substance, the listed pmwebd issues do not have any bearing on
the current issue: even if the list of 5 items there were corrected
tomorrow, unprivileged pmstore would be exactly as risky as it is today.
As for philosophy, it is better to list known problems systematically
than not to list them. Even better would be to fix related core bugs
reported years ago, such as <http://oss.sgi.com/bugzilla/show_bug.cgi?id=941>
and friends.
> [...]
> > By the way, how much processing time or space does this approach
> > (pmStore then pmFetch) save,
>
> Think more about how many sockets and open file descriptors are needed as
> the container count increases - that makes the current model impractical
> for certain situations.
Really? libpcp often reuses the same socket/fd for multiple contexts
targeting the same host (see libpcp/src/context.c:508). This
multiplexing is why there is a ctxnum field in several of the pmapi
wire protocol PDUs.
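A toy model of that multiplexing - again illustrative names only, not
the real libpcp code or API - shows why fd count scales with the number
of hosts rather than the number of contexts:

```python
# Hypothetical model of libpcp-style fd multiplexing: all logical
# contexts to the same host share one transport (socket/fd), and each
# PDU carries a ctxnum so pmcd can tell the contexts apart.

class Transport:
    """One socket per host, shared by every context to that host."""
    _by_host = {}

    @classmethod
    def get(cls, host):
        if host not in cls._by_host:
            cls._by_host[host] = cls()
        return cls._by_host[host]

contexts = []

def new_context(host):
    """Create a logical context, reusing the host's transport if open."""
    ctxnum = len(contexts)
    contexts.append({"ctxnum": ctxnum, "transport": Transport.get(host)})
    return ctxnum

a = new_context("acme.example.com")
b = new_context("acme.example.com")
c = new_context("other.example.com")

# Two contexts to the same host share one transport (socket/fd) ...
print(contexts[a]["transport"] is contexts[b]["transport"])  # True
# ... while the per-PDU ctxnum keeps their traffic distinguishable.
print(contexts[a]["ctxnum"], contexts[b]["ctxnum"])          # 0 1
# A different host gets its own transport.
print(contexts[a]["transport"] is contexts[c]["transport"])  # False
```

So "one socket per container context" is not what the library actually
costs you; the per-host transport is the scarce resource.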
> e.g. for browsers that cannot open more than a handful of
> connections to one source (the Vector guys who are doing this
> containers work are already observing this). [...]
Really? The pmapi portion of pmwebd is single-threaded (as pointed
out in http://oss.sgi.com/pipermail/pcp/2015-September/008198.html),
so only one pmwebapi request at a time can be serviced by pmwebd.
That means there's only one active browser-to-pmwebd socket at a time
(plus some small number of backlogged ones).
- FChE