The next stage of virtualisation - Containers

Up until a few years ago, the most common way for enterprise software to be developed, sold and deployed was as monolithic packaged applications requiring large computing platforms. More recently, due to the growth of cloud computing, agile development processes, and the need for lightweight, virtualised applications and enterprise architecture flexibility, the market trend has been toward microservices and containerisation.

The reason microservices and containerisation have become so popular is that they allow organisations to decouple software into smaller functional pieces and separate the software from the underlying hardware. Doing both of these things speeds up development, allows for faster and lower-cost updates, and increases resiliency and scalability.

Containerisation is, in effect, OS-level virtualisation (as opposed to VMs, which run on hypervisors, each with a fully embedded OS). Containers are easily packaged, lightweight and designed to run anywhere. Multiple containers can be deployed in a single VM. A microservice is an application with a single function, such as routing network traffic.

The microservices architectural approach involves developing a single application as a suite of small services, each running in its own process and communicating through lightweight mechanisms. These services are built around business capabilities and are independently deployable by fully automated deployment tooling.
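
To make that concrete, here is a minimal sketch of a single-function service, written with nothing but the Python standard library. The service name, route and port are illustrative assumptions, not anything taken from this article.

```python
# A minimal single-function microservice using only the Python standard library.
# The /greet route, JSON response format and port 8080 are illustrative choices.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class GreetingService(BaseHTTPRequestHandler):
    """One business capability: return a greeting for a given name."""

    def do_GET(self):
        # Lightweight communication mechanism: plain HTTP carrying JSON.
        name = self.path.rstrip("/").rsplit("/", 1)[-1] or "world"
        body = json.dumps({"greeting": f"Hello, {name}!"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # The service runs in its own process and can be deployed (and later
    # containerised) independently of any other service.
    HTTPServer(("0.0.0.0", 8080), GreetingService).serve_forever()
```

A full application is then composed of many such small services, each owned, versioned and deployed independently, talking to each other over the network.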

The concept of containers is not new. Stringing microservices together into functional applications is an evolution of service-oriented architecture (SOA), which was very popular a few years ago, and containers themselves have been available in Linux for a long time. However, it was the Docker open-source project that really accelerated the current uptake of containers.

Docker provides an additional layer of abstraction and automation on top of operating-system-level virtualisation on Linux and Windows. On Linux, Docker uses the resource-isolation features of the kernel, such as cgroups and namespaces, to keep containers separated from one another while they share the host's kernel.
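
As a rough, Linux-only illustration of those primitives (this is a peek at what the kernel exposes, not part of Docker itself), the sketch below lists the namespaces and cgroups a process belongs to via /proc:

```python
# A Linux-only illustration (not Docker code) of the kernel primitives Docker
# builds on: every process belongs to a set of namespaces and to cgroups,
# both of which are visible under /proc.
import os

def show_isolation(pid: str = "self") -> None:
    # Namespaces: one symlink per namespace type (pid, net, mnt, uts, ...).
    ns_dir = f"/proc/{pid}/ns"
    for entry in sorted(os.listdir(ns_dir)):
        target = os.readlink(os.path.join(ns_dir, entry))
        print(f"namespace {entry:>6s} -> {target}")

    # Cgroups: the resource-control groups this process has been placed in.
    with open(f"/proc/{pid}/cgroup") as f:
        print(f.read().rstrip())

if __name__ == "__main__":
    # Run this on the host and again inside a container: the namespace
    # identifiers and cgroup paths differ, which is the isolation Docker uses.
    show_isolation()
```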

So now you may be wondering what the difference between containers and VMs is. A simple analogy is that a VM is a house: it has its own foundation, plumbing, electrical wiring and so on. A container, on the other hand, is like an apartment in a block of flats. It is still secure and has its own front door, but many parts (electrical wiring, plumbing and so on) are shared with the other apartments.

The advantage is fairly obvious: to provide secure access to individual applications (Apache, FTPS, even a database), there is no longer any need to provision a hefty VM for each one, with all the RAM, CPU, disk and management overhead that entails.

Instead, it is possible to run many independent applications side by side, completely isolated from each other.

Containers are very portable. A container application can be developed on a laptop and, once completed, deployed on any Docker host as long as the CPU architecture is the same. A Docker host can run on a VMware farm or with any public cloud provider.
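
As a sketch of that develop-then-deploy workflow, the snippet below drives a Docker host through the Docker SDK for Python. It assumes a reachable Docker daemon and a Dockerfile in the current directory; the image name and port mapping are made up for the example.

```python
# A sketch using the Docker SDK for Python (the `docker` package). It assumes
# a Docker daemon is reachable and that a Dockerfile for the service exists in
# the current directory; image name and port mapping are illustrative choices.
import docker

client = docker.from_env()  # connects to whatever Docker host the environment points at

# Build the image locally, e.g. on a developer's laptop...
image, _build_logs = client.images.build(path=".", tag="greeting-service:latest")

# ...then run it. The same image, pushed to a registry, runs unchanged on any
# Docker host with a matching CPU architecture.
container = client.containers.run(
    "greeting-service:latest",
    detach=True,
    ports={"8080/tcp": 8080},  # publish container port 8080 on host port 8080
)
print(f"Started container {container.short_id}")
```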

At SASIT we have migrated, or plan to migrate, many software functions to containers: the Apache web server, reverse proxies, load balancers, the mail relay and more. The general rule for us is: if we can containerise an application, we will.

Running a container on a single Docker host comes with some challenges: when maintenance is needed, or if the Docker host goes down, the container obviously goes down with it. This is where a container orchestration application comes in. You simply define what the application should look like (RAM and CPU allocation, which port it should listen on) and pass that description to the orchestration layer.

An orchestration application such as Kubernetes takes care of scheduling the application across the cluster's nodes. From that point on there is no need to worry about where the container (a pod, in Kubernetes terms) runs. If any host in the Kubernetes cluster goes down, Kubernetes itself reschedules the pod elsewhere. Bear in mind that most containers are designed to be destroyed at will and replaced with a fresh one, and the whole process should take no more than a few seconds. Yes, that's right: seconds.
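
As a minimal sketch of such a declarative description, the following uses the official Kubernetes Python client to hand a Deployment to a cluster. The names, replica count and resource figures are illustrative assumptions, and it presumes cluster access via a local kubeconfig.

```python
# A sketch using the official Kubernetes Python client. Names, replica count
# and resource figures are illustrative, not values taken from the article.
from kubernetes import client, config

config.load_kube_config()  # use the same credentials kubectl would use

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="greeting-service"),
    spec=client.V1DeploymentSpec(
        replicas=2,  # desired state: keep two copies running at all times
        selector=client.V1LabelSelector(match_labels={"app": "greeting-service"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "greeting-service"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="greeting-service",
                        image="greeting-service:latest",
                        ports=[client.V1ContainerPort(container_port=8080)],
                        resources=client.V1ResourceRequirements(
                            requests={"cpu": "100m", "memory": "128Mi"},
                            limits={"cpu": "250m", "memory": "256Mi"},
                        ),
                    )
                ]
            ),
        ),
    ),
)

# Hand the desired state to the orchestration layer; Kubernetes decides
# which nodes the pods actually run on.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

If a node in the cluster fails, the Deployment controller notices that fewer than two replicas are running and starts replacements on the remaining nodes, which is exactly the rescheduling behaviour described above.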

Where traditionally a business would run a single application per VM, with containers it’s easy to run many applications (depending on the specification of the VM itself) on a single host.

In summary, microservices and containers are relatively easy to deploy and can bring major benefits to companies of any size.

Paul Stanton

Modern, comprehensive data consulting for enterprise DevOps, DataOps, ML, AI, and testing, with database subsetting and virtualization, synthetic data, and cross platform data migration.

7y

Yes, they offer similar benefits, but respectfully disagree that containers virtualize the OS. This is a widespread source of confusion.

Paul Stanton

7y

Describing Docker containers as OS virtualization is inaccurate and leads to confusion. Abstraction yes, but not virtualization. Docker is simply application multi-tenancy on the host.


Have you taken a look at Portainer.io?
