Docker: Redefining the cloud computing landscape
There is a lot of seismic activity in the cloud computing landscape these days. Big operators are preparing for a tactical shift in the way applications run in the cloud. The new game changer is called Docker. Google announced its own Docker repository for users of Google Cloud, alongside its already running Google Container Engine for Docker. Amazon previewed its EC2 Container Service at re:Invent last year and has launched the first developer preview of its Docker orchestration service. Microsoft has already announced future Docker integration with Windows Server and Azure. If you haven't yet heard about what Docker is, read on.
What is Docker?
Docker is a platform for application containerization. One of the key problems facing the multitude of cloud applications is deployment. Ever heard that things run fine on the developer's laptop but don't work in QA? As an application grows, a dependency hell builds up around the OS and other infrastructure services. Once the application is deployed, touching or upgrading any component can have disastrous consequences in production. Virtual machines solve this problem to a great extent, but they are suboptimal for replication: to run 1000 instances of your application, you would also need to run 1000 instances of a Linux OS and its underlying system services, which is costly in memory and CPU. You would need to apply timely updates and patches to all OS components, since you cannot cleanly abstract the OS dependency away from your application. An alternative approach is to use a PaaS, but that loses its benefits when you want tighter control of your runtime environment.
Docker solves this problem by packaging everything required to run an application into a single tight bundle, called a Docker image, from which containers are started. Unlike VMs, Docker containers run as special processes in the host OS. The Docker engine starts and stops these containers from Docker images on demand. Containers can run on a variety of Linux platforms, and possibly on Windows too in the future. The key difference from traditional virtualization is that a dockerized application shares the host kernel, instead of bundling a full guest OS per instance.
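As a minimal sketch of what an image is, here is a three-line Dockerfile written to disk (the image name `myapp` and the script `app.sh` are placeholder names for illustration, not from this article):

```shell
# A Dockerfile declares everything the application needs: a base image,
# the application files, and the command to run. "myapp"/"app.sh" are
# hypothetical names.
cat > Dockerfile <<'EOF'
FROM ubuntu:14.04
COPY app.sh /app.sh
CMD ["/bin/bash", "/app.sh"]
EOF

# With a Docker daemon available, you would then build the image once
# and start any number of identical containers from it:
#   docker build -t myapp .
#   docker run -d --name myapp-1 myapp
```

Because the image bundles the application with its dependencies, the container that QA runs is byte-for-byte what the developer built.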
How does it work?
Docker isolates applications using Linux cgroups and namespaces. cgroups provide resource isolation (CPU, memory, block I/O, network, etc.), while separate namespaces give each application a private view of the operating system: its own process ID space, filesystem structure, and network interfaces. Docker also provides mechanisms for containers to access the host's filesystem and network resources. Multiple containers share the same kernel, but each container can be constrained to use only a defined amount of resources such as CPU, memory, and I/O.
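Namespaces are a stock kernel feature rather than something Docker invents, so you can inspect them on any Linux host without Docker installed; a quick illustration:

```shell
# Every Linux process belongs to a set of namespaces, exposed under /proc.
# Two processes see the same inode id here only if they share that
# namespace; a containerized process gets fresh ids, and with them a
# private view of process ids, mount points, and network interfaces.
readlink /proc/self/ns/pid
readlink /proc/self/ns/net
readlink /proc/self/ns/mnt
```

Running the same commands inside a container would print different inode ids than on the host, which is exactly the isolation described above.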
The benefits
Docker containers powered by a minimalist host OS such as CoreOS provide massive scalability and flexibility of deployment. Applications can benefit from massively distributed deployment without paying too much for resources. Try it yourself: run 50 instances of an Ubuntu virtual machine on your laptop versus 50 instances of a Docker Ubuntu container inside a single virtual machine.
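The container half of that experiment is a one-liner (a sketch assuming a running Docker daemon; the container names are illustrative):

```shell
# With a Docker daemon available, 50 lightweight Ubuntu containers start
# in seconds. All of them share the host kernel, so the memory footprint
# is a small fraction of 50 full VMs.
for i in $(seq 1 50); do
  docker run -d --name "ubuntu-$i" ubuntu:14.04 sleep infinity
done

# Count the running containers.
docker ps | grep -c 'ubuntu-'
```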
Developers can build and ship apps much faster, and QA can test them more easily because the image is exactly what the developer shipped. Apps can be deployed at scale across different cloud providers and can be easily started and replaced. DevOps can apply patches and upgrade the OS without worrying much about side effects and downtime. Central repositories provided by Docker Hub, Google Cloud, and many others allow easy reuse of ready-to-run containers, so developers spend less time configuring things and more time building apps.
By isolating applications into containers, a new paradigm of computing is unlocked. An example of this massive parallelism is cloud9.io, which provides each developer a full web-based IDE and webserver with complete shell access inside a Docker container, for free. Everything provisioned to a user is packaged in a single container. A million users would simply mean assembling a million Docker containers, and those containers could run anywhere and still provide exactly the same feature set.
The Docker Ecosystem
Docker registries provide reusable, ready-to-run stacks. Almost every cloud provider supports Docker and is building a Docker orchestration service as well as a Docker repository. A new breed of startups is jumping on the bandwagon of building cloud-independent Docker orchestration. Docker itself is launching its own clustering service, Docker Swarm.
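Pulling a ready-made stack from a registry, or publishing your own image for others to reuse, are both one-liners (a sketch assuming a running Docker daemon; `myorg/myapp` is a placeholder repository name):

```shell
# Fetch an official image from Docker Hub and start it.
docker pull redis
docker run -d --name cache redis

# Publish your own image under a repository you control so that
# teammates and other hosts can pull the identical stack.
docker tag myapp myorg/myapp:1.0
docker push myorg/myapp:1.0
```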
The CoreOS team, which has been a major part of Docker's success story, has parted ways and started building its own containerization platform, called Rocket. It remains to be seen whether Rocket can challenge the massive success Docker enjoys right now.
Overall, Docker looks amazing and promising, and it rightly fills the gap between IaaS and PaaS. Everyone seems excited about it. If you want to know more, check it out here.
Update: Wondering who runs the biggest container service in the cloud? No surprise: everything in Google runs inside a container, and Google launches 2 billion containers per week. Details here