Container Security
Lakshminarayanan Kaliyaperumal
Vice President & Head - Cyber Security Technology & Operations at Infosys Ltd
Virtual Machines
A Virtual Machine (VM) is essentially an emulation of a real computer that executes programs like a real computer. VMs run on top of a physical machine using a "hypervisor": a piece of software, firmware, or hardware that runs either on a host operating system or directly on "bare metal". The physical computer a hypervisor runs on is referred to as the "host machine". The host machine provides the VMs with resources, including RAM and CPU; these resources are divided between the VMs and can be distributed as needed.
The VM running on the host machine is also often called a "guest machine". This guest machine contains both the application and whatever it needs to run that application (e.g. system binaries and libraries). From the inside, the guest machine behaves as its own unit with its own dedicated resources; from the outside, we know that it's a VM sharing resources provided by the host machine.
Why is this hypervisor layer required? The hypervisor plays an essential role in providing the VMs with a platform to manage and execute the guest operating system, and it allows host computers to share their resources amongst the virtual machines running as guests on top of them.
Virtualization is a method of sharing physical hardware resources, such as CPU, memory, and disk space, across multiple virtualised devices or services. Virtualization takes advantage of underutilised hardware by sharing unused computing resources with virtual devices, thus reducing the unnecessary cost of idle hardware to the business.
Container
Unlike a VM, which provides hardware virtualization, a Container provides operating-system-level virtualization by abstracting the "user space". A Container is an isolated and predictable software environment containing the code for an application and all the required libraries, binaries, and dependencies. Containerisation allows applications to run in lightweight, isolated environments, usually with just one application per Container.
A Container in its unexecuted state resides in storage as a Container image (an executable package containing everything required to create and populate a Container with the required application). Once the image is executed, it builds a Container and runs the required application inside it using the libraries, runtime environment, tools and settings provided. The one big difference between containers and VMs is that containers *share* the host system’s kernel with other containers.
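The image-to-Container relationship above can be sketched with a minimal Dockerfile; the base image, file names and commands below are illustrative assumptions, not from any specific project:

```dockerfile
# Everything below is baked into the Container image at build time
FROM python:3.11-slim        # base image supplying OS libraries and runtime
WORKDIR /app
COPY app.py .                # the application code
CMD ["python", "app.py"]     # the process started when the Container runs
```

Building this file (for example with `docker build`) produces the stored image; running the image creates a live Container that shares the host system's kernel rather than booting its own OS.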
Container Engine
The Container engine acts like a hypervisor, capable of running multiple Containers, whilst also controlling all the relevant components for creating each individual Container. A container engine is a piece of software that accepts user requests, including command line options, pulls images, and from the end user's perspective runs the container.
Containers can be used as part of a microservices architecture – an environment in which a collection of small services or applications are linked together to create a larger platform.
Microservices allow each service or application to operate in isolation, making Containers an ideal technology for building resilience. If one microservice fails, the rest of the platform can continue to operate, whereas in a single large, monolithic application the failure of one component can create a waterfall effect, taking the whole service offline.
Many large tech firms use microservice environments as part of their business model. DevOps teams aim to enhance the speed at which they can successfully create and improve an application, supported by the use of pre-built Containers and code (either internally developed, or external code downloaded from sources such as GitHub or Docker Hub).
Benefits
1. Size and Speed - Containers are sized in megabytes or less, so one can spin up thousands of containers on a server without incurring additional overhead for each instance. It takes seconds to add or remove a Container, in contrast to minutes to spin up a VM.
2. Portability - A Container carries all its dependencies with it wherever it goes. A containerised microservice can be moved from a developer's laptop running Ubuntu, to an on-premises server on SUSE Linux, to a public cloud with little or no friction. Moving a Container between cloud service providers and hosting environments is a very simple process.
3. Consistency - DevOps teams typically use a particular programming language (or a small set) with its associated tools and frameworks. As a Container is a self-contained piece of code, so long as it can run on the chosen OS, the team does not have to worry about different deployment environments and can concentrate on building their specific microservice using their preferred language and tools.
4. Scalability - If a containerised application is overloaded by more requests than it can handle, another instance can run the same application to share the load. The infrastructure is then better placed to manage the impact of excessive load.
5. Cost Savings - Containers can run on a reduced footprint (e.g. using less memory and processing power). When leveraging cloud services, organisations can achieve a lower running cost, resulting in significant savings from this reduced resource usage.
6. Agility - Current Agile and DevOps-based software development has greatly reduced the time between coding, testing and deployment, often called "continuous deployment". Starting with containers as the unit of deployment right from the start makes these workflows uniform and frictionless, and many steps can be automated using a variety of tools.
Challenges
1. Shorter Container Lifespans vs. Traditional Security Controls - Containers are often used for a short period of time and then turned off until the application is required again. This is a good feature from a cost perspective, but traditional security controls such as firewalls, antivirus and vulnerability management tools are not designed to scale with Containers. Monitoring running container processes in an extensive environment, where a container's average lifespan is hours or even minutes, can be particularly challenging.
2. Using Unsecure Images - Containers are built using either a parent image or a base image. These images are quite useful for building Containers because you can reuse the different components of an image rather than building a Container from scratch. However, like any other piece of code, images or their dependencies might contain vulnerabilities. Downloads from untrusted sources are of particular concern, as this exposes an organisation to unseen vulnerabilities.
3. Limited Logging - Containers are isolated units and normally have limited logging; those logs are no longer available once the Container is switched off.
4. Privileged Access - Containers running with the privileged flag can do almost anything the host can do: run with all capabilities and gain access to the host's devices. Usernames and credentials need to be accessible to the application and are sometimes incorrectly included in the application's code. Using these passwords for anything other than the intended purpose could lead to a data or systems breach by a malicious insider. Additionally, attackers who target code repositories, such as GitHub, could discover credentials, giving them high-level access to an organisation's network.
5. Exposure from Misconfigurations - This is one of the worst and most common security challenges for Container environments. While a host of tools are available for vulnerability scanning of container images, configuration management requires more consideration.
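The privileged-access challenge can be seen from the defensive side in a Kubernetes pod spec, where privileges are declined explicitly. The snippet below is a sketch with illustrative names and image references, not a complete manifest:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app                                # illustrative name
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0    # illustrative image reference
    securityContext:
      privileged: false                    # never grant full host access
      allowPrivilegeEscalation: false
      runAsNonRoot: true
      capabilities:
        drop: ["ALL"]                      # add back only what the app needs
```

Without such a context, a container started with the privileged flag gains the host-level reach described above.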
Recommendations
1. Hardening - Containers should be hardened as part of securing the environment. Hardening helps to reduce the number of vulnerabilities that may not have been resolved during the creation of a Container; it also shrinks the attack surface and limits the impact of an attack. Images are the foundation for creating Containers, so it is important that each image adheres to both the standard Container build policy and the Container hardening process. The security of the image begins with strict vulnerability-scanning practices and a policy on image sources. Consider a policy that refuses to use an image if it has not been scanned in the last 60 days or if it comes from a non-whitelisted image registry.
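Such an admission policy can be sketched in a few lines. The registry name and the 60-day limit mirror the example above, but the function and data shapes are hypothetical, not taken from any real admission controller:

```python
from datetime import datetime, timedelta

# Illustrative policy values, not from a specific tool
ALLOWED_REGISTRIES = {"registry.internal.example.com"}
MAX_SCAN_AGE = timedelta(days=60)

def admit_image(registry: str, last_scanned: datetime, now: datetime) -> bool:
    """Admit an image only if it comes from a whitelisted registry
    and was vulnerability-scanned within the last 60 days."""
    if registry not in ALLOWED_REGISTRIES:
        return False
    return (now - last_scanned) <= MAX_SCAN_AGE

now = datetime(2023, 1, 1)
print(admit_image("registry.internal.example.com", datetime(2022, 12, 1), now))  # True
print(admit_image("docker.io", datetime(2022, 12, 1), now))                      # False: untrusted registry
print(admit_image("registry.internal.example.com", datetime(2022, 10, 1), now))  # False: scan too old
```

In practice this kind of check would run in the CI pipeline or in a registry/orchestrator admission hook, before the image is ever pulled into production.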
2. Security Logging and Monitoring - An orchestration tool can support log monitoring capabilities for Containers. These logs can help an organisation detect anomalies that occur within the application or the Container. Orchestration tools can also monitor the status of a Container, its inner layers and linked infrastructure. This information can be passed directly on to a SIEM tool to better inform data analysis.
3. Secure Communications - Containers typically communicate with other devices within the environment using API calls. Orchestration tools can be used to encrypt interactions between Containers to secure the data being transmitted.
4. Secrets Management - Don't bake secrets into images or expose them unnecessarily; use a secrets management tool such as Kubernetes Secrets, and make sure deployments mount only the secrets they actually need. It is also best to follow the principle of least privilege, especially for runtime privileges: any module in a computing environment should only access the information, resources or functions necessary to complete its tasks. Directly inserting passphrases and passwords into code should be prohibited by policy, and developers should use the orchestration tool to manage all passphrases and passwords. If the organisation's orchestration tool does not support passphrase or password management, a separate tool such as a Privileged Access Management (PAM) solution should be deployed to perform this function.
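As a sketch of the Kubernetes Secrets approach described above (the names and the value below are illustrative; in practice the secret material would be injected from a vault rather than written into a manifest):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials            # illustrative name
type: Opaque
stringData:
  password: "change-me"           # placeholder; never commit real secrets
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0   # illustrative image reference
    env:
    - name: DB_PASSWORD           # only the one secret this pod needs
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: password
```

The pod references only the single key it requires, keeping the credential out of the image and out of the application's source code.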
5. Code Signing - Code signing can be used in combination with techniques such as certificates or a cryptographic hash to ensure the Container's code has not been altered or tampered with after the developer has finished creating and testing it. The developer adds a signature to the code that can then be checked before execution, confirming that no alterations or extraneous or malicious code have been injected into the original code. This mitigates the potential for the code to perform unintended tasks.
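The integrity-check half of code signing can be sketched with a cryptographic hash; real signing additionally applies an asymmetric signature over this digest so the publisher can be authenticated. The artifact bytes below are illustrative:

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Compute the SHA-256 digest of an artifact (e.g. an image layer)."""
    return hashlib.sha256(data).hexdigest()

def verify_artifact(data: bytes, expected_digest: str) -> bool:
    """Refuse the artifact if its digest does not match the one
    recorded by the developer at build time."""
    return sha256_digest(data) == expected_digest

artifact = b'FROM alpine:3.18\nCMD ["./app"]\n'
published = sha256_digest(artifact)               # recorded at build/sign time
print(verify_artifact(artifact, published))                  # True
print(verify_artifact(artifact + b"malicious", published))   # False: tampered
```

Any byte changed after signing produces a different digest, so the tampered artifact is rejected before execution.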
6. Vulnerability Management - Managing vulnerabilities should span the entire Container lifespan. Application developers must identify potential vulnerabilities in images and avoid using vulnerable images in production. Builds that contain fixable vulnerabilities should be rejected.
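A build gate on fixable vulnerabilities can be sketched as a simple filter over scanner findings; the findings structure and severity names below are assumptions for illustration, not a real scanner's schema:

```python
def should_reject_build(findings, blocked_severities=("CRITICAL", "HIGH")):
    """Reject the build if any finding is both fixable (a patched
    version exists) and of a blocked severity."""
    return any(
        f["fixable"] and f["severity"] in blocked_severities
        for f in findings
    )

findings = [
    {"id": "CVE-2023-0001", "severity": "HIGH", "fixable": True},
    {"id": "CVE-2023-0002", "severity": "LOW", "fixable": True},
]
print(should_reject_build(findings))  # True: a fixable HIGH finding is present
```

Gating only on *fixable* findings keeps the pipeline actionable: the team can always unblock the build by upgrading the affected package, while unfixable findings are tracked rather than blocking every release.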