A first take on our Docker on ECS Migration
The answer, my friends, is Docker. But does anyone remember what the question was?
Catalyst has a number of enterprise managed service engagements for our clients, many of them hosted on Amazon Web Services (AWS). We take on full hosting and support responsibility for mission-critical application platforms, an obligation we take very seriously. We’ve been in this game since the days of bare metal, and have gone through countless iterations since then to improve the way we do things. That’s what open source geeks love doing!
Docker and the associated container hype have been a smorgasbord of acronyms, buzzwords and excitement. At the 2015 Linux conference in Auckland, it seemed like every second talk was in some way related to Linux containers.
Catalyst has been using the open source Docker container tool for some years now. At first, uptake was by developers as part of their local environments; then we started experimenting with container stacks in staging. In late 2016, we began migrating many production workloads into AWS’s Elastic Container Service (ECS). This post is not an explanation of how ECS works, merely a sharing of our thoughts, experiences and learnings to this point.
What we have gained from our move to containers:
- Better application deployment capabilities. In our case this means faster deployments via our GoCD Continuous Delivery server, with fewer constraints around production activity and the ability to deploy applications in parallel.
- More responsive (i.e. faster) auto scaling on load triggers. New containers launch faster than new EC2 instances, so we make better use of our running compute resources and don’t have to over-provision as much as we did in the past (a scaling-policy sketch follows this list).
- Fewer environment constraints for different applications on the same compute. We can run PHP5 and PHP7 applications side by side, with or without a Varnish reverse proxy, and with widely varied configuration settings (see the task definition sketch below).
- Better visibility of resource usage, and the ability to tune and configure our applications to use those resources more efficiently. This means more efficient cloud spend.
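
To make the resource limits and side-by-side flexibility above a little more concrete, here is a minimal sketch of registering an ECS task definition with boto3. Every name, image and number in it is a hypothetical placeholder rather than our actual configuration.

```python
import boto3

# Minimal sketch only: every name, image and number below is a
# hypothetical placeholder, not our real configuration.
ecs = boto3.client("ecs", region_name="ap-southeast-2")

response = ecs.register_task_definition(
    family="example-php7-app",  # hypothetical task family
    containerDefinitions=[
        {
            # PHP 7 application container with explicit CPU/memory limits,
            # which is what gives us per-application resource tuning.
            "name": "app",
            "image": "example/php7-fpm-app:latest",  # hypothetical image
            "cpu": 256,        # CPU units (1024 = one vCPU)
            "memory": 512,     # hard memory limit in MiB
            "essential": True,
            "portMappings": [{"containerPort": 9000}],
            "environment": [{"name": "APP_ENV", "value": "production"}],
        },
        {
            # Optional Varnish reverse proxy in front of the app; a PHP 5
            # task definition can live alongside this one on the same hosts.
            "name": "varnish",
            "image": "example/varnish:latest",  # hypothetical image
            "cpu": 128,
            "memory": 256,
            "essential": True,
            "portMappings": [{"containerPort": 80, "hostPort": 0}],  # dynamic host port
            "links": ["app"],  # bridge-mode link to the app container
        },
    ],
)
print(response["taskDefinition"]["taskDefinitionArn"])
```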
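And a sketch of the scaling point: one way to attach a target-tracking scaling policy to an ECS service through the Application Auto Scaling API. Again, the resource identifiers and thresholds are invented for illustration; our real policies and triggers differ.

```python
import boto3

# Sketch of a target-tracking scaling policy on an ECS service via the
# Application Auto Scaling API. Names and thresholds are hypothetical.
autoscaling = boto3.client("application-autoscaling", region_name="ap-southeast-2")

resource_id = "service/example-cluster/example-php7-service"  # hypothetical

# Allow the service's desired task count to move between 2 and 10.
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=10,
)

autoscaling.put_scaling_policy(
    PolicyName="cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        # Keep average service CPU around 60%; scale out quickly, in slowly.
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
        "ScaleOutCooldown": 60,
        "ScaleInCooldown": 300,
    },
)
```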
And in terms of what we have learnt so far on our ECS container journey:
- Moving to containers can be tricky. You don’t get something for nothing when it comes to technical innovation. There is new complexity.
- This took longer, and was more effort, than we initially thought. And there have been bumps along the road.
- The engineers working on these projects need a mixture of application and infrastructure domain knowledge, with the ability to dive deep into complicated problems that span the application and the underlying operating system.
- There are definitely lots of ways to do containers in production - ECS vs Kubernetes vs Marathon vs Fleet vs Swarm etc. These discussions can be endless. And a lot of very smart, respected people still don’t agree.
- The AWS ECS service has given us the flexibility and toolset we needed for our Docker initiative, and saves us the pain of some of the underlying platform management overhead.
- Running containers in production removes some abilities we used to have, e.g. we can’t just SSH into the running production environment in the same way we used to; reaching a running container now means first working out which container instance it is scheduled on (a rough sketch of that lookup follows this list). This has taken some getting used to.
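
As an example of what replaces the old habit, here is a rough sketch of looking up which EC2 container instances a service’s tasks are scheduled on, so that an engineer can reach the right host and use docker exec from there. The cluster and service names are hypothetical.

```python
import boto3

# Sketch: find which EC2 container instances a service's tasks are running
# on, so an engineer can reach the host and `docker exec` into a container.
# Cluster and service names are hypothetical.
ecs = boto3.client("ecs", region_name="ap-southeast-2")
ec2 = boto3.client("ec2", region_name="ap-southeast-2")

cluster = "example-cluster"
service = "example-php7-service"

task_arns = ecs.list_tasks(cluster=cluster, serviceName=service)["taskArns"]
tasks = ecs.describe_tasks(cluster=cluster, tasks=task_arns)["tasks"]

instance_arns = [t["containerInstanceArn"] for t in tasks]
container_instances = ecs.describe_container_instances(
    cluster=cluster, containerInstances=instance_arns
)["containerInstances"]

ec2_ids = [ci["ec2InstanceId"] for ci in container_instances]
reservations = ec2.describe_instances(InstanceIds=ec2_ids)["Reservations"]
for reservation in reservations:
    for instance in reservation["Instances"]:
        # SSH to this host, then `docker ps` / `docker exec` as needed.
        print(instance["InstanceId"], instance.get("PrivateIpAddress"))
```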
We now have a production environment that is stable, performant and well understood. And we see even better opportunity in the future to reap the benefits of our migration to container-based architecture.