Container environment unification
Henrik Holst
Ethology researcher spending most of his time deep down in the comments section searching for intelligent life
As I posted a few days ago, it makes sense in practice to have a separate container image for dev. In theory, however, it does not make sense, because dev would be the only environment with its own separate container.
I can think of two solutions to this inconsistency: 1) accept the separate development image, or 2) get rid of the local development container SKU entirely.
I am completely open to 1, but I first want to fully explore the solution space and see whether there is any possibility of achieving 2.
One hugely important thing is that all local containers should mount the host file system and run as the host user (disguised as root, running rootless Docker). That gives us access to all the tools on the host. It also makes the container environment less useful in some respects. For instance, we cannot use Python virtual environments that symlink the Python interpreter on the host if we install packages inside the container.
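To make the virtual-environment problem concrete, here is a minimal sketch (the path is illustrative): a venv records an absolute symlink to its base interpreter, so a venv created on one side of the container boundary breaks on the other side if the interpreters do not match.

```shell
# Create a venv; its interpreter is a symlink chain that ultimately
# resolves to an absolute path such as /usr/bin/python3.
python3 -m venv /tmp/demo-venv
ls -l /tmp/demo-venv/bin/python
# Inside a container whose /usr differs from the host's, that target is
# missing or a different build, so the venv cannot be shared across the boundary.
```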
I think I need an example to ground the ideas on, so let's dig into a simple one.
Case study
As a case study, I consider a simple Node.js microservice that consumes a domain data object (an invoice object) and stores it for downstream processing.
Does it need separate containers?
Let's consider the Dockerfile:
FROM node:16-alpine as base
ENV LC_ALL=C.UTF-8
ENV NODE_OPTIONS="--experimental-modules --unhandled-rejections=strict"
EXPOSE 5001
WORKDIR /app
FROM base as test
ENV NODE_ENV=development
ENV DEBUG=invoice-ingest:*,-invoice-ingest:debug
COPY config /app/config
COPY src /app/src
COPY test /app/test
COPY package.json package-lock.json /app/
RUN npm ci
CMD ["npm","test"]
FROM base as development
ENV NODE_ENV=development
ENV DEBUG=invoice-ingest:*
RUN apk add --no-cache git bash zsh zsh-vcs fish
CMD ["npx","nodemon","--ignore","data"]
FROM base as production
ENV NODE_ENV=production
ENV DEBUG=invoice-ingest:*,-invoice-ingest:debug
ENV DEBUG_HIDE_DATE=yes
COPY config /app/config
COPY src /app/src
COPY package.json package-lock.json /app/
RUN npm ci
CMD ["npm","start"]
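For reference, each stage of a multi-stage Dockerfile like this is selected with a build target. A hedged sketch of how Compose could pick the stages (service names and paths are assumptions, not from the original setup):

```yaml
services:
  invoice-ingest-dev:
    build:
      context: ./invoice-ingest
      target: development   # builds only up to the development stage
  invoice-ingest:
    build:
      context: ./invoice-ingest
      target: production    # the image that ships
```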
The Dockerfile defines not only a development and a production environment container but also an (unused) test target. The test target can be removed, since nothing uses it.
There is a common base image that sets a few environment variables. That can be kept in the image for sure. The rest has to be vetted.
First off, why are git and zsh installed in the development container?
Presumably because of VS Code's default behavior: when you open a terminal in VS Code, it opens inside the attached container. That is something I can actually fix, by making sure to open the terminal on the local machine instead of in a remote or container environment.
The resulting Dockerfile:
FROM node:16-alpine
ENV LC_ALL=C.UTF-8
ENV NODE_OPTIONS="--experimental-modules --unhandled-rejections=strict"
EXPOSE 5001
COPY config /app/config
COPY src /app/src
COPY package.json package-lock.json /app/
WORKDIR /app
RUN npm ci
CMD ["npm","start"]
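With a single image, the development-specific behavior can come from Compose overrides instead of a separate dev stage. A sketch, under the assumption that live source is bind-mounted over /app:

```yaml
services:
  invoice-ingest:
    build: invoice-ingest
    environment:
      NODE_ENV: development
      DEBUG: invoice-ingest:*
    volumes:
      - ./invoice-ingest:/app   # live host source replaces the baked-in copy
      - /app/node_modules       # keep the node_modules installed in the image
    command: ["npx", "nodemon", "--ignore", "data"]
```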
The compose file for starting the development environment is a bit ugly. But it works.
x-common: &common
  user: "0"
  restart: unless-stopped
  network_mode: service:pod
  pid: service:pod
  environment: &common-environment
    HOME: null
    NODE_ENV: development
    MONGODB_URI: "mongodb://localhost:27017/"
    AZURE_STORAGE_CONNECTION_STRING: "DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=https://localhost:10000/devstoreaccount1;QueueEndpoint=https://localhost:10001/devstoreaccount1;TableEndpoint=https://localhost:10002/devstoreaccount1;"
  volumes: &common-volumes
    - ${HOME}:${HOME}:rw
    - ${HOME}/.vscode-server

services:
  [...]
  invoice-ingest:
    <<: *common
    build: invoice-ingest
    container_name: invoice-ingest # not needed but convenient
    working_dir: $PWD/invoice-ingest
This raises the question again: why do we even need containers for local development if we are going to use the host for most of the dev tooling anyway?
Well, honestly, we don't. We do, however, need some kind of process management tool to start the stack components; this is microservices after all. Docker Compose is perfect for that, but it is not the only solution: nomad, supervisord, or even systemd --user could be used as well.
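As a sketch of the systemd --user alternative (the unit name and paths are assumptions, not from the original setup):

```ini
# ~/.config/systemd/user/invoice-ingest.service
[Unit]
Description=invoice-ingest (local dev)

[Service]
WorkingDirectory=%h/src/invoice-ingest
ExecStart=/usr/bin/env npx nodemon --ignore data
Restart=on-failure

[Install]
WantedBy=default.target
```

It would then be started with systemctl --user start invoice-ingest, with logs available via journalctl --user.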
But what about testing: should the testing framework really be installed in the production image? Well, no, it should not. That is why I dropped it. Service testing should be done from the outside: check the API contracts. Having built-in testing inside the container kind of makes sense, but mostly it does not. Tests should be an ephemeral job that is executed, consumes the microservice's API, and monitors the output. Integrated unit tests are a waste of time because they cannot be reused in the CI/CD pipeline. We should not bake dev or test tooling into the application image; these tools are not required to run the app, so they do not belong there.
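An ephemeral test job along those lines could be sketched in Compose (service and file names are assumptions): a one-shot container that exercises the service's API from the outside and exits.

```yaml
services:
  contract-test:
    image: node:16-alpine
    depends_on:
      - invoice-ingest
    volumes:
      - ./test:/test:ro        # tests live outside the application image
    command: ["node", "/test/contract.test.js"]
    restart: "no"              # run once, report, and exit
```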
Code lint checks and such could still run in the build, though, since they validate the code rather than the API. So let's add that back. ;-)
Here is an outline of how it could be done without adding overhead to the production image:
FROM node:16-alpine AS base
ENV LC_ALL=C.UTF-8
ENV NODE_OPTIONS="--experimental-modules --unhandled-rejections=strict"
EXPOSE 5001
COPY config /app/config
COPY src /app/src
COPY package.json package-lock.json /app/
WORKDIR /app
RUN npm ci
FROM base AS test
COPY test ./test
RUN npm test && mkdir /emptydir
FROM base
COPY --from=test /emptydir /
CMD ["npm","start"]
Pedantic as I am, this seems like a poor tradeoff of ugliness versus utility, and I would rather just run npm test as a step in the image build. If the "unit tests" (linting etc.) fail, then the image build fails, just as a compilation error would have failed it.
FROM node:16-alpine
ENV LC_ALL=C.UTF-8
ENV NODE_OPTIONS="--experimental-modules --unhandled-rejections=strict"
EXPOSE 5001
COPY config /app/config
COPY src /app/src
COPY test /app/test
COPY package.json package-lock.json /app/
WORKDIR /app
RUN npm ci
RUN npm test
CMD ["npm","start"]
What do you think? Is this better?
Choose your own adventure ending:
Blue pill - what are they doing in the Valley?
Follow up by reading this excellent post from Lyft:
Red pill - Docker deep dive
If we are not going to use the real benefits of containers, and instead leverage only process orchestration in Compose together with host tooling, then let's do this properly.
We use no service-specific images for local development: no Dockerfiles with build specs for a local dev container image. Instead, we use only a compose.yaml and a generic base container that resembles the host OS.
services:
  invoice-ingest:
    image: ubuntu:20.04 # use the same image as your OS
    volumes:
      - /home:/home
      - /usr:/usr # lib and various other stuff will link to /usr
      - /var:/var # you could mount this R/O if you're paranoid
      - /etc:/etc
      # /tmp is tmp, no need to mount that
      - $HOME/.vscode-server # workaround for vs-code bug
    environment:
      - HOME
      - NODE_ENV=development
      - LC_ALL=C.UTF-8
      - NODE_OPTIONS=--experimental-modules --unhandled-rejections=strict
    working_dir: $PWD
    command: [npx, nodemon]
A cleaner solution would of course be to bind-mount the host root / to the container root /, but Docker will not allow that. You can instead try to bind-mount / to /rootfs and chroot inside the container. That works, but the chroot UX is clunky, and you would probably want to replace it with something that makes the switch more seamless.
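A sketch of that workaround, assuming a generic image matching the host OS (the service name is an assumption; Docker's default capability set includes SYS_CHROOT, so chroot works without extra privileges):

```yaml
services:
  hostshell:
    image: ubuntu:20.04
    volumes:
      - /:/rootfs              # the whole host tree, one level down
    command: ["chroot", "/rootfs", "bash"]
    stdin_open: true
    tty: true
```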