mgoodwin wrote:
> [...]
> Also, whilst we're discussing it - I've been thinking of changing the
> naming a bit, to make it super obvious what each container does, e.g.
>
> pcp-base : base container for layering all other pcp containers
AIUI, that'd be an *image* only.
> pcp-live-collector - live host pmcd, layered over pcp-base
> pcp-archive-collector - archive collection, using pmlogger [...]
> pcp-monitor - monitoring tools, including gui and py deps. [...]
> pcp-pmie or some such name ... for inference and alerting tasks [...]
Perhaps unmixing the container & image terminology can simplify
matters here. The image just needs to contain the software; it's not
a container until some part of it is actually running. And a
container does not have to run all the software in the image.
It seems we only need a couple of base images: one for the collection
side (smaller), and one for all the monitoring tools (larger). Then,
depending on how an image is run (i.e. turned into a container), one
could get a pmlogger or pmie or whatever running inside it, e.g.
something like:
docker run pcp-collector /etc/rc.d/rc_pmie
docker run pcp-collector /etc/rc.d/rc_pmlogger
One image, two containers running different software.
> We could also consider a pcp-data container or something, where PCP
> archives and var/lib data such as pmdaCache and so forth would
> reside and be commonly shared [...]
Perhaps that could be a container created from the pcp-base image,
analogously to the "training/postgres" example at [1].
[1]
https://docs.docker.com/userguide/dockervolumes/#creating-and-mounting-a-data-volume-container
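
A rough sketch following that pattern (image names and the archive
path are just illustrative here):

# data-only container holding the shared PCP archives, created once
docker create -v /var/log/pcp --name pcp-data pcp-base /bin/true
# collector container mounts the shared archive volume
docker run --volumes-from pcp-data pcp-collector /etc/rc.d/rc_pmlogger

A monitoring container could mount the same volume with
--volumes-from to read the archives.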
Bottom line, instead of separate Dockerfiles for these different use
cases, we could ship -shell scripts- that invoke the basic docker
images differently.
- FChE