The future is not Docker

Sounds strange to hear out loud, right? But after spending the past 18 months building Depot, I'm fully convinced that it's true.

The future is not Docker, but containers are.

Flash back for a moment to the dotCloud days and remember that Docker, the technology, was spun out of a pivot focused on the potential of container technology. It was built internally and went through many iterations before becoming Docker, the company we know today.

It went through many highs and lows that are well documented across the web. Docker struggled to commercialize via the classic top-down approach. It battled Kubernetes in a fight it was increasingly losing. That fight ended with Docker selling off Swarm to Mirantis and pivoting into what it now calls product-led growth (PLG).

It's rumored that Docker is now turning north of $100M in ARR, to the amazement of the HN community.

So we must be out of our minds to think they're vulnerable.

But if you look at how Docker is making money today, it becomes clear they have all their chips on one spot at the moment. That spot? Docker Desktop.

It wasn't any new product, service, or feature that Docker built that made them the money. It was their lawyers, and thus licensing, that brought it in. Not PLG.

Still with me? Good.

Docker Desktop isn't going to just magically disappear. But if you look closely, you can see it's not terribly complicated to replicate or replace. OrbStack, built by a single developer attending Stanford, is giving them a pretty big run for their money.

Desktop, at its core, is really an installer that handles the plumbing between your OS and a Linux VM where the engine is running.

It has other things they like to throw around, like bundled tools for Kubernetes, Compose, BuildKit, scanning, and so on. Most of those tools can be used without needing to "live" in Desktop, but they'd like you to believe otherwise.
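As a concrete sketch of that point, the docker CLI itself will happily talk to an engine running anywhere, with Desktop nowhere in the picture. In this hypothetical example, `dev-vm` is an assumed SSH host (any Linux VM with the engine installed):

```shell
# Point the docker CLI at an engine in an arbitrary Linux VM over SSH.
# "dev-vm" is a hypothetical host; Docker Desktop is not involved at all.
docker context create my-vm --docker "host=ssh://dev-vm"
docker context use my-vm
docker ps   # runs against the VM's engine, tunneled over SSH
```

This is essentially the plumbing Desktop wraps in an installer: a local CLI, a Linux VM, and a socket between them.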

Docker wants to centralize everything into their Desktop product because that's where they make money.

Docker is too complicated and riddled with inefficiencies

The initial decision to build Depot wasn't actually to critique Docker. We started building Depot because we were frustrated with generic CI providers (e.g., GitHub Actions, CircleCI) and how poorly their resources supported build tools like Docker.

At the time we faced these problems almost daily:

  • CI providers not offering larger runners. Most support them today, but at the time we were stuck running our own runners to get more CPUs or memory.
  • No disks. Want to know what makes a Docker build really fast? A persistent disk. Instead of giving you real disks, CI providers ask you to save and load caches over the network. Networks are slow, unreliable, and often negate the performance benefits of caching.
  • Emulation is the modern-day equivalent of watching paint dry. Ever needed to build a container for both Intel and Arm in GitHub Actions? Then you've undoubtedly met QEMU emulation. It's slow, mind-bogglingly slow; it can push even the most basic build past an hour.
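To make the emulation point concrete, here's a sketch of the usual multi-arch setup on an Intel runner (real images and flags; the `app:latest` tag is illustrative):

```shell
# Register QEMU binfmt handlers so the x86 host can execute Arm binaries,
# then build for both architectures in a single invocation.
docker run --privileged --rm tonistiigi/binfmt --install arm64
docker buildx create --use
docker buildx build --platform linux/amd64,linux/arm64 -t app:latest .
# Every RUN instruction in the arm64 half executes under QEMU emulation --
# that's where a minutes-long build balloons into an hour.
```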

But at that time we hadn't really thought much about Docker the company or other problems in the toolchain.

It wasn't until we solved the problem for ourselves that we unearthed mountains of complexity and inefficiency. But didn't the first version of Depot just put BuildKit on cloud VMs with EBS volumes? How hard could that be?

It turns out, freakishly complex.

Why? I feel pretty confident saying that nobody ever really thought through what it would mean to run BuildKit in a cloud environment as part of a PaaS. It's a massive monolith with code paths for all kinds of logic, some of it never exercised, and it just keeps growing.

BuildKit is doing far too much, and we believe there are far better ways to assemble containers today that don't rely on its complexity.

But what about Docker?

After working with Docker, BuildKit, and containers for several years, I can't help but feel it's all far too complicated and inefficient. Fundamentally, we've gone from representing our source code and its OS dependencies in an AMI to representing them in a Dockerfile. If you add a dependency to your package.json that needs an OS package, you have to update your Dockerfile too.

It's simple to create a Dockerfile that containerizes your app. It's even easier to create one with terrible build performance, chock-full of CVEs, north of 20GB in size, or tripping whatever other footgun Docker leaves lying around.
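Those failure modes usually come from the same few lines. A hypothetical before-and-after sketch (Node is just an example stack; only the last stage below produces the final image):

```dockerfile
# Before: the easy-to-write version, with every footgun at once.
FROM node:latest AS naive       # unpinned tag, large OS surface full of CVEs
COPY . .                        # any source change invalidates the install below
RUN npm install                 # dev dependencies end up in the image
CMD ["node", "server.js"]

# After: pinned slim base and cache-friendly layer ordering.
FROM node:20-slim
WORKDIR /app
COPY package*.json ./           # only lockfile changes bust this cache layer
RUN npm ci --omit=dev
COPY . .
CMD ["node", "server.js"]
```

Both versions containerize the same app; the difference is entirely in knowledge the tool could, in principle, apply for you.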

It's all just too damn complex and inefficient.

We believe there must be a better way.






Paul Butler

Building Lambda for WebSockets @ jamsocket.com

10 months

Thoughts on what should replace it? I think statically-linked binaries combined with linux namespaces/cgroups is an interesting direction. I really just want to be able to build a portable artifact from my compiler instead of having a separate toolchain.
