Container environment unification

As I posted a few days ago, in practice it makes sense to have a separate container image for dev. In theory, however, it does not, because dev would then be the only environment with a container of its own.

I can think of two solutions to this inconsistency:

  1. do not use containers for local development. If there is no dev container, all containers are the same. Use venv, npm, and other isolation mechanisms instead.
  2. look at the container specifics: what actually differs between your dev containers and the non-dev container? Is it functionally possible to merge them into one and still adhere to "best practices"?

I am completely open to option 1, but I first want to fully explore the solution space and see whether there is any possibility to get rid of the local development container SKU.

One hugely important constraint is that all local containers should mount the host file system and run as the host user (disguised as root, since we run rootless Docker). That gives us access to all the tools on the host. It also makes the container environment less useful: for instance, we cannot use Python environments that symlink the host's python interpreter if we install packages into a virtual environment inside the container.
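As a sketch of what this constraint looks like in practice (service name and paths are illustrative, not from the project), a host-mounting service in a compose file could be:

```yaml
services:
  my-service:            # illustrative name
    image: node:16-alpine
    user: "0"            # root inside the container; under rootless Docker
                         # UID 0 maps back to the host user, so files created
                         # in the mounted volume are owned by you on the host
    volumes:
      - ${HOME}:${HOME}:rw   # mount the host home directory at the same path
    working_dir: $PWD        # start in the project directory on the host
```

Because the paths match on both sides, tools inside and outside the container see identical file locations.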

I think I need a concrete example to hang these ideas on, so let's dig into a simple one.

Case study

As a case study, I consider a simple Node.js microservice that consumes a domain data object (an invoice) and stores it for downstream processing.

Does it need separate containers?

Let's consider the Dockerfile:

FROM    node:16-alpine as base
ENV     LC_ALL=C.UTF-8
ENV     NODE_OPTIONS="--experimental-modules --unhandled-rejections=strict"
EXPOSE  5001
WORKDIR /app


FROM    base as test
ENV     NODE_ENV=development
ENV     DEBUG=invoice-ingest:*,-invoice-ingest:debug
COPY    config /app/config
COPY    src /app/src
COPY    test /app/test
COPY    package.json package-lock.json /app/
RUN     npm ci
CMD     ["npm","test"]


FROM    base as development
ENV     NODE_ENV=development
ENV     DEBUG=invoice-ingest:*
RUN     apk add --no-cache git bash zsh zsh-vcs fish
CMD     ["npx","nodemon","--ignore","data"]


FROM    base as production
ENV     NODE_ENV=production
ENV     DEBUG=invoice-ingest:*,-invoice-ingest:debug
ENV     DEBUG_HIDE_DATE=yes
COPY    config /app/config
COPY    src /app/src
COPY    package.json package-lock.json /app/
RUN     npm ci
CMD     ["npm","start"]        

The Dockerfile does not only define both a development and a production container. It also defines an (unused) test target. That can be removed since it is not used.

There is a common base image that sets a few environment variables. That can be kept in the image for sure. The rest has to be vetted.

First off, why are git, zsh, and fish installed in the development container?

Presumably because of VS Code's default behavior: if you open a terminal in VS Code while attached to a container, the terminal opens inside that container. That is, however, something I can actually fix. Just make sure to open terminals on the local machine instead of in a remote or container environment.


The resulting Dockerfile:

FROM    node:16-alpine as base
ENV     LC_ALL=C.UTF-8
ENV     NODE_OPTIONS="--experimental-modules --unhandled-rejections=strict"
EXPOSE  5001
COPY    config /app/config
COPY    src /app/src
COPY    package.json package-lock.json /app/
WORKDIR /app
RUN     npm ci
CMD     ["npm","start"]
        

The compose file for starting the development environment is a bit ugly, but it works.

x-common: &common
  user: "0"
  restart: unless-stopped
  network_mode: service:pod
  pid: service:pod
  environment: &common-environment
    HOME: null
    NODE_ENV: development
    MONGODB_URI: "mongodb://localhost:27017/"
    AZURE_STORAGE_CONNECTION_STRING: DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=https://localhost:10000/devstoreaccount1;QueueEndpoint=https://localhost:10001/devstoreaccount1;TableEndpoint=https://localhost:10002/devstoreaccount1;
  volumes: &common-volumes
    - ${HOME}:${HOME}:rw
    - ${HOME}/.vscode-server

services:
  [...]  
  invoice-ingest:
    <<: *common
    build: invoice-ingest
    container_name: invoice-ingest    # NOT needed but convenient
    working_dir: $PWD/invoice-ingest        

This raises the question again - why do we even need containers for local development if we are going to use the host for most of the dev tooling anyway?

Well, honestly, we don't. We do, however, need some kind of process management tool to start the stack components; this is microservices, after all. Docker Compose is perfect for that, but it is not the only solution. nomad, supervisord, or even systemd --user could be used as well.
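As a sketch of the container-free route (unit name, project path, and npx location are assumptions, not from the project), a systemd --user unit for the invoice-ingest component could look like this:

```ini
# ~/.config/systemd/user/invoice-ingest.service (illustrative path)
[Unit]
Description=invoice-ingest microservice (local dev)

[Service]
# %h expands to the user's home directory; project location is assumed
WorkingDirectory=%h/projects/invoice-ingest
Environment=NODE_ENV=development
ExecStart=/usr/bin/npx nodemon --ignore data
Restart=on-failure

[Install]
WantedBy=default.target
```

It would then be started with something like systemctl --user enable --now invoice-ingest, giving you restart-on-failure semantics similar to Compose's unless-stopped policy.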

But what about testing? Should the testing framework really be installed in the production image? Well, no, it should not. That is why I dropped it. Service testing should be done from the outside: check the API contracts. Having built-in testing inside the container sounds appealing, but it does not hold up. Tests should be an ephemeral job that exercises the microservice's API and monitors the output. Baked-in unit tests are a waste of time because they cannot be reused in the CI/CD pipeline. We should not bake dev or test tooling into the application image; these tools are not required to run the app, so they do not belong there.
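Such an ephemeral, outside-in test job could be sketched as a short-lived Compose service. Everything here is an assumption for illustration: the service names, the test directory, and the choice of newman (the Postman CLI runner) as the contract-test tool.

```yaml
services:
  invoice-ingest-contract-test:
    image: node:16-alpine
    profiles: ["test"]              # only started when explicitly requested
    depends_on:
      - invoice-ingest              # the service under test (assumed name)
    volumes:
      - ./contract-tests:/tests:ro  # assumed test location on the host
    working_dir: /tests
    command: ["npx", "newman", "run", "invoice-ingest.postman_collection.json"]
```

Running docker compose --profile test run --rm invoice-ingest-contract-test executes the job against the live API and exits, leaving nothing behind in the application image; the same job can be reused verbatim in CI/CD.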

Code lint checks and such could still be run, though, since they validate the code rather than the API. So let's add that back. ;-)

Here is an outline of how it could be done without adding overhead to the production image:

FROM    node:16-alpine AS base
ENV     LC_ALL=C.UTF-8
ENV     NODE_OPTIONS="--experimental-modules --unhandled-rejections=strict"
EXPOSE  5001
COPY    config /app/config
COPY    src /app/src
COPY    package.json package-lock.json /app/
WORKDIR /app
RUN     npm ci

FROM    base AS test
COPY    test ./test
RUN     npm test && mkdir /emptydir

FROM    base
# Copying the (empty) directory forces the test stage to build, so a
# failing test aborts the whole build without adding anything to the
# final image.
COPY    --from=test /emptydir /
CMD     ["npm","start"]        

Pedantic as I am, this seems like a poor trade of ugliness for utility, and I would simply run npm test as a step in the image build. If the "unit tests" (linting, etc.) fail, the image build fails, just as a compilation error would have failed it.

FROM    node:16-alpine
ENV     LC_ALL=C.UTF-8
ENV     NODE_OPTIONS="--experimental-modules --unhandled-rejections=strict"
EXPOSE  5001
COPY    config /app/config
COPY    src /app/src
COPY    test /app/test
COPY    package.json package-lock.json /app/
WORKDIR /app
RUN     npm ci
RUN     npm test
CMD     ["npm","start"]        

What do you think? Is this better?

Choose your own adventure ending:

Blue pill - what are they doing in the Valley?

Follow up by reading this excellent post from Lyft:

https://eng.lyft.com/scaling-productivity-on-microservices-at-lyft-part-2-optimizing-for-fast-local-development-9f27a98b47ee

Red pill - Docker deep dive

If we are not going to use the real benefit of containers anyway, leveraging Compose only for process orchestration while relying on host tooling, then let's do this properly.

We use NO service-specific images for local development and no Dockerfiles with build specs for a local dev container image. Instead, we use only a compose.yaml and a generic base container that resembles the host OS:

services:
  invoice-ingest:
    image: ubuntu:20.04    # use the same image as your OS
    volumes:
    - /home:/home
    - /usr:/usr            # lib and various other stuff will link to /usr
    - /var:/var            # you could mount this R/O if you're paranoid
    - /etc:/etc
    # /tmp is tmp, no need to mount that
    - $HOME/.vscode-server # workaround for vs-code bug.
    environment:
    - HOME
    - NODE_ENV=development
    - LC_ALL=C.UTF-8
    - "NODE_OPTIONS=--experimental-modules --unhandled-rejections=strict"
    working_dir: $PWD
    command: [npx,nodemon]        

A cleaner solution would of course bind-mount the host root / to the container root /, but Docker will not allow that. You can instead bind-mount / to /rootfs and chroot inside the container. That works, but the chroot UX is clunky, and you probably want to wrap it in something that makes the switch more seamless.
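A minimal sketch of the /rootfs variant, assuming Compose interpolates $PWD from the host environment at parse time (service name and dev command are illustrative):

```yaml
services:
  invoice-ingest:
    image: ubuntu:20.04
    volumes:
      - /:/rootfs          # the entire host filesystem, one level down
    environment:
      - HOME
      - NODE_ENV=development
    # chroot into the mounted host root, then run the dev command from
    # the project directory; $PWD is the host-side working directory.
    command: ["chroot", "/rootfs", "sh", "-c", "cd $PWD && npx nodemon"]
```

The container then sees the host filesystem at its real paths once inside the chroot, at the cost of every command needing the chroot prefix, which is exactly the clunkiness mentioned above.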
