Container Security Best Practices

Containers provide an essential capability: they package software applications together with everything they depend on, so that those applications run correctly when moved from one computing environment to another.

In the container model, an application is packaged with all of its dependencies: a container holds not only the application itself but every supporting package needed to run it effectively. Containers therefore offer flexibility, ease of use and efficient resource sharing. However, security is a primary concern whenever a new technology is pushed into production, so it is vital to focus on container security; poor security can put applications and processes across the entire enterprise at risk.

The following are some best practices you can adopt to secure your containerized environment:

1. Secure the Containers That Support Your Microservices-Based Architecture

Containers support microservices-based architectures, in which a large number of small services run independently. Services may be added or decommissioned whenever user requirements change or during scaling. In the microservices model, developers often have a large number of services exposed to the network, which can translate into more network interfaces and a larger attack surface.

From a security standpoint, it is challenging to gain complete visibility into which containers are running which services at a given time and to control all of their interactions in order to track potentially malicious activity. But because containers have shorter life spans than applications built with monolithic methodologies, an attacker who manages to get inside the system, an application or a container cannot do as much damage as the window of opportunity presented by long-running services on virtual machines (VMs) would allow.

In a VM, an attacker could install a rootkit that would reload upon subsequent system boots. By contrast, installing a rootkit in a container is often less worthwhile for an attacker because it is unlikely to persist once the container is restarted. However, researchers have discovered a technique called a “shadow container” that allows malicious instructions to remain persistent when a container is rebooted. But what if the victim restarts the host, or simply restarts, for example, Docker for Windows (a popular development platform for creating containerized apps)? The attacker would lose control.

To overcome this, an attacker could register a container shutdown script that stores the malicious script and attack state in a shadow container and writes it back to the VM for concealment. When the container restarts, or after the host reboots, the saved attack script would run again. This would allow the attacker to go undetected while perpetrating network reconnaissance, planting malware or moving laterally within the internal network.

In a virtualized environment, an attacker breaking into a public-facing web server by exploiting a vulnerability can be devastating because each VM may be running multiple services. Containerizing the web server, or isolating it in its own VM, helps teams limit the potential impact of an attack by separating the web server process from other processes. Namespaces provide horizontal isolation between containers but do not provide vertical isolation between the host operating system (OS) and the container.

The best way to prevent privilege escalation attacks is to configure your containers’ applications to run as unprivileged users. For containers with processes that must run as the root user, you can remap this user to a less privileged user on the host. The mapped user is assigned a range of user identifiers (UIDs) that function within the namespace as normal UIDs from 0 to 65535 but have no privileges on the host machine itself.
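As a minimal sketch of this practice, assuming the Docker SDK for Python (the docker package) and a running local Docker daemon, the following starts a container whose main process runs as an unprivileged user with every Linux capability dropped; the image and UID:GID values are illustrative:

```python
import docker

client = docker.from_env()  # connects to the local Docker daemon

# Run a throwaway container as an unprivileged user with every Linux
# capability dropped, so a compromised process cannot escalate privileges.
output = client.containers.run(
    "alpine:3.19",                       # illustrative base image
    "id",                                # print the effective UID/GID
    user="1000:1000",                    # main process does not run as root
    cap_drop=["ALL"],                    # remove all default capabilities
    security_opt=["no-new-privileges"],  # block setuid-based escalation
    remove=True,                         # delete the container on exit
)
print(output.decode())                   # e.g. "uid=1000 gid=1000"
```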

If an attacker alters data, deletes data or inserts invalid data into containerized applications or processes, the container’s isolation mechanism can provide better protection than the isolation capabilities of VMs. This is partly because VMs often have virtual disks that are used by many processes; containerization, on the other hand, allows teams to enforce the practice of “no data inside containers,” which helps isolate and protect sensitive data.

In a microservices-based architecture, data can be accessed through a RESTful application programming interface (API). REST (representational state transfer) is an architectural style for web APIs in networked applications that communicate via HTTP requests. The downside of containerization is that, in the event of an attack, forensic analysis is more difficult because containers are often used for a short time and then deleted. Once that happens, it is more challenging to investigate the entire life cycle of an incident by the time it is discovered. Therefore, it is important to have a real-time monitoring and log analytics solution that can be integrated with security information and event management (SIEM) to help with incident investigation.
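One way to retain evidence from short-lived containers is to ship runtime events off the host as they happen. Below is a minimal sketch, assuming the Docker SDK for Python and the requests library; the SIEM ingest URL is hypothetical:

```python
import docker
import requests

SIEM_ENDPOINT = "https://siem.example.com/ingest"  # hypothetical collector URL

client = docker.from_env()

# Stream container lifecycle events (create, start, die, destroy, ...) in
# real time and forward each one to the SIEM for correlation, so evidence
# survives even after short-lived containers are deleted.
for event in client.events(decode=True, filters={"type": "container"}):
    attributes = event.get("Actor", {}).get("Attributes", {})
    requests.post(
        SIEM_ENDPOINT,
        json={
            "action": event.get("Action"),
            "container": attributes.get("name"),
            "image": attributes.get("image"),
            "time": event.get("time"),
        },
        timeout=5,
    )
```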

2. Validate That Your Images Originate From a Trusted Registry

Unlike virtual machines, where each VM can run a different OS, containers rely on a single shared kernel, and that kernel represents one type of OS. The images that provide the processes for containerized applications depend on that kernel’s OS type. From a security perspective, trusting the image is a critical concern throughout the software development life cycle.

For example, it’s critical to ensure that images are signed by authorized users and originate from a trusted registry because, in containerized environments, images are constantly added to the organization’s private registry or hub, and containers running those images are frequently spun up and taken down. Even if an image has vulnerability information listed, it may not be presented in a way that lets development teams understand the underlying issues, exposing them to potential risk down the line.
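As a minimal sketch of enforcing signature checks, Docker Content Trust can be enabled so that the Docker CLI refuses unsigned images; the registry and image tag below are hypothetical:

```python
import os
import subprocess

# With Docker Content Trust enabled, the Docker CLI refuses to pull images
# that are not signed by a trusted publisher. The image name is illustrative.
env = dict(os.environ, DOCKER_CONTENT_TRUST="1")
result = subprocess.run(
    ["docker", "pull", "registry.example.com/team/app:1.4.2"],
    env=env,
    capture_output=True,
    text=True,
)
if result.returncode != 0:
    # Unsigned or tampered images fail the signature check and land here.
    raise SystemExit(f"Image rejected: {result.stderr.strip()}")
print("Signed image pulled successfully.")
```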

Let’s say a developer pulls images from a registry, and those images contain vulnerabilities; there is no way to know how many of them are exploitable. Images are often based on open-source components, and as more layers and integration tools are incorporated into images to optimize deployment, the attack surface can inadvertently and dramatically increase. Open-source components that enter production without being scanned, validated and patched become risk blind spots, elevating vulnerability and potential impact without the risk ever being acknowledged, managed or controlled properly.

Organizations should also be careful about which versions of software they run in containers, especially in production environments. In an era when open-source software components are widely used by developers and the applications they create, it is important to avoid running software in production that is out of date, has known vulnerabilities or may have been tampered with. It is therefore important to scan open-source components for known vulnerabilities and to keep those components up to date.

To help streamline ongoing updates, continuous vulnerability assessment and remediation programs need to be an integral part of the organization’s IT risk and governance program. Vulnerability assessment must be performed before images are stored in the container registry.
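As one illustration of that gate, assuming the open-source Trivy scanner and the Docker CLI are installed (any comparable scanner works the same way), a short Python wrapper can refuse to push an image that fails the scan; the image tag is hypothetical:

```python
import subprocess
import sys

IMAGE = "registry.example.com/team/app:1.4.2"  # hypothetical image tag

# Gate the registry push on a clean scan. With --exit-code 1, Trivy returns
# a non-zero status when it finds issues at the requested severities.
scan = subprocess.run(
    ["trivy", "image", "--severity", "HIGH,CRITICAL", "--exit-code", "1", IMAGE]
)
if scan.returncode != 0:
    sys.exit("Scan found HIGH/CRITICAL vulnerabilities; image not pushed.")

# Only a clean image reaches the registry.
subprocess.run(["docker", "push", IMAGE], check=True)
```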

3. Reduce Your Containers’ Potential Attack Surface

When it comes to managing risk, reducing the attack surface is a best practice. Preventing code with vulnerabilities from entering production environments is a perfect example of reducing a key attack surface.

The shared kernel architecture underlying containers requires attention beyond securing the host, namely maintaining standard configurations and profiles. Unlike virtualized environments, where the hypervisor serves as a point of control, in a containerized environment any user or service with access to the kernel root account can see and access all containers that share that Linux kernel. It is therefore important to harden kernel and host configurations and to ensure optimal isolation in order to manage access control and resource allocation. The container itself relies on the kernel, as well as on the container engine, for a range of services that are accessed via system calls.
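One practical way to shrink the syscall attack surface is to combine a tight seccomp profile with a minimal capability set. Below is a sketch using the Docker SDK for Python; the profile file and image are illustrative, and the Docker API expects the profile’s JSON content rather than a file path:

```python
import docker

client = docker.from_env()

# Load a custom seccomp profile that allows only the system calls the
# application actually needs. The profile path is illustrative.
with open("seccomp-minimal.json") as f:
    seccomp_profile = f.read()

container = client.containers.run(
    "alpine:3.19",
    "sleep 300",
    cap_drop=["ALL"],                             # minimal capability set
    security_opt=[f"seccomp={seccomp_profile}"],  # restrict system calls
    read_only=True,                               # immutable root filesystem
    detach=True,
)
print(container.short_id)
```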

4. Define an Effective Vulnerability Assessment Process

When it comes to managing vulnerability remediation, patching works differently in a containerized environment. In the past, the admin team would simply update its instances through a manifest, recipe or other patch management tool or process. With containers, there are two components: the base image and the application image. You must first update the base image and then rebuild the application image. Defining a proper vulnerability assessment process is key to identifying vulnerabilities.

Most commercial vulnerability scanners offer container scanning to help identify known vulnerabilities and misconfiguration issues. Vulnerability scanners are typically designed to review the software packages included in a container’s image. They can be configured to cross-check those packages against vulnerability databases, such as the National Vulnerability Database (NVD), to look up which Common Vulnerabilities and Exposures (CVEs), if any, apply to that precise set of packages. This, in turn, helps the team assess the need for remediation.
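For a feel of what such a lookup involves, here is a minimal sketch against the public NVD CVE API (v2.0) using the requests library; the keyword search on “openssl” is a coarse illustration, whereas real scanners match CPE identifiers against the exact package name and version:

```python
import requests

# Query the public NVD CVE API (v2.0). A keyword search is a coarse
# illustration; real scanners match CPE identifiers against the exact
# package name and version found in the image.
NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

resp = requests.get(
    NVD_API,
    params={"keywordSearch": "openssl", "resultsPerPage": 5},
    timeout=30,
)
resp.raise_for_status()

for item in resp.json().get("vulnerabilities", []):
    cve = item["cve"]
    print(cve["id"], "-", cve["descriptions"][0]["value"][:80])
```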

Vulnerability scanners are automated tools, and several are currently available on the market, including enterprise-grade products and some free solutions. Detecting vulnerabilities may sound straightforward, but the process has its challenges. For example, on any OS, package A might include a vulnerability that can only be exploited if package B is present or if a particular network protocol is being used. If package B is not present or that protocol is not in use, we would likely not be asked to apply the patch for package A. Likewise, if a patch is not compatible with the application code, we would not be prompted to apply it either.

Many Linux distributions backport fixes, updating older versions of packages rather than replacing them. Typical vulnerability scanners work on a version-dependent basis, reporting vulnerabilities based on the versions of installed packages; when a scan finds a package whose version appears out of date, it flags a vulnerability even if the fix has been backported. Relying solely on scanner results can therefore produce varying numbers of false positives, which is why it is important to verify vulnerabilities manually to reduce the false positive rate.

The ability to effectively handle false positives is a key differentiator between vulnerability scanners. Using container images and rebuilding new images whenever a code change is needed can effectively improve the patching process. At the same time, it helps reduce the number of vulnerabilities to improve overall resiliency and posture.

5. Shift Left and Make Security a Shared Responsibility

How do you rebuild images to apply patches? You could do it manually, but most cloud-native code is built using continuous integration (CI) tools such as Jenkins, Travis, CodeShip and GitLab. The cloud-native approach to scanning is to include a scan as a step in this CI pipeline. With image scanning in your pipeline, you can automatically check for the introduction of new vulnerabilities with every code change and, for example, automatically fail the build if a high-severity issue is detected.
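A minimal sketch of such a gate, again assuming the Trivy scanner as a stand-in for whatever scanner your pipeline uses: it parses the JSON report and exits non-zero, which any CI tool treats as a failed build step.

```python
import json
import subprocess
import sys

IMAGE = sys.argv[1] if len(sys.argv) > 1 else "app:latest"  # image under test

# Run the scanner in JSON mode so the pipeline can apply its own policy.
scan = subprocess.run(
    ["trivy", "image", "--format", "json", IMAGE],
    capture_output=True, text=True, check=True,
)
report = json.loads(scan.stdout)

# Fail the build if anything at or above the severity gate is present.
blocked = {"HIGH", "CRITICAL"}
findings = [
    vuln
    for result in report.get("Results", [])
    for vuln in (result.get("Vulnerabilities") or [])
    if vuln.get("Severity") in blocked
]
if findings:
    sys.exit(f"Build failed: {len(findings)} HIGH/CRITICAL vulnerabilities.")
print("Image passed the severity gate.")
```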

Running regular image scans on deployed images allows security professionals to be alerted when a newly discovered vulnerability may affect their code. Since images are the immutable basis of containers, there is no need to scan the containers themselves. This can save a huge amount of time and resources because we only have to rescan one image rather than the thousands of running containers spawned from it. If we find an affected image, we can rebuild it with the update and redeploy all affected running containers from that new and improved build.

In traditional deployments, patching is typically performed by an admin or operations team at a stage well past where the development team is involved. With image scanning included in the pipeline, security can be integrated where it’s needed most, and developers become invested in using the appropriate versions of base images, packages and libraries. This is known as shifting left, and it turns security into a shared responsibility rather than a siloed activity.

6. Embrace Isolation and Least Privilege Best Practices

The superpower of containers is their inherent capacity for isolating applications, processes, users and data. To make the most of that attribute, each container should run with the minimum set of privileges required for its effective operation. Containers running on a host share the host’s kernel, so an exploitable vulnerability in the kernel may be used to break out of a container onto the host. If a container you have access to is running privileged, you can likely get access to the underlying host; if a container has a host file system mounted, you can likely change items in that file system in ways that allow you to escalate privileges on the host. From a security perspective, it is therefore also important to run containers as a less privileged user.

Perhaps the most common mistake when running containers in production is letting them run as the root user. While building an image, root privileges are typically required to install software and configure the image. However, the main process executed when the container starts should not run as root: doing so is very risky because it allows an attacker who compromises the process to carry out malicious activity with elevated privileges.
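A quick way to catch this mistake is to inspect the image configuration before deployment. Here is a sketch using the Docker SDK for Python; the image name is hypothetical:

```python
import docker

client = docker.from_env()

# Inspect an image's config and flag it if the main process would run as
# root. An empty User field means the container defaults to root.
image = client.images.get("registry.example.com/team/app:1.4.2")  # illustrative
user = image.attrs.get("Config", {}).get("User") or ""
if user in ("", "0", "root", "0:0"):
    print("WARNING: image runs as root; set a non-root USER in the Dockerfile.")
else:
    print(f"OK: image runs as user {user!r}.")
```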

Another risky scenario touches on access management. As users are not namespaced by default, should an attacker manage to break out of a container and reach the host, they might be able to gain full root-level privileges on the host.

Constraining access to other resources by using control groups (cgroups) can also be effective. Limiting the amount of memory available to containers can help prevent a scenario in which attackers’ malicious activity consumes all the memory on the host and potentially starves other running services. Limiting central processing unit (CPU) and network bandwidth can further prevent attackers from running resource-heavy processes, such as bitcoin mining, mass data exfiltration and torrent peers.
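As a sketch of such limits with the Docker SDK for Python (the image, command and thresholds are illustrative):

```python
import docker

client = docker.from_env()

# cgroup-backed limits: cap memory, CPU share and process count so that a
# compromised container cannot starve the host or run heavy payloads.
container = client.containers.run(
    "alpine:3.19",
    "sleep 300",
    mem_limit="256m",       # hard memory ceiling
    nano_cpus=500_000_000,  # 0.5 CPU, expressed in units of 1e-9 CPUs
    pids_limit=100,         # cap the number of processes/threads
    detach=True,
)
print(container.short_id)
```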

7. Apply Centrally Managed Access Controls

Apply centrally managed access controls that govern the changes or commands a user can execute based on their role rather than their ability to access the root account. This helps define and enforce proper access control over active containers.

Because containers support a dynamic and agile development cycle, things move quickly, making it important to keep track of who changed configuration settings and who started or shut down containers. When users access containers as root, identifying who made changes to a container’s configuration is nearly impossible. Giving developers root access may sometimes be the easiest way for them to get their jobs done, but they end up with far more access than they need.

Excessive access levels can create trouble down the line when teams want to audit an issue by tracing what took place and when. Moreover, if an attacker gains root access, they can get access to all containers and increase the negative impact of the attack.
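An audit trail of lifecycle actions is a useful complement to role-based controls. The sketch below, using the Docker SDK for Python, appends start/stop/destroy events to a local log file (the file name is illustrative); note that plain Docker events do not record which user acted, which is exactly the gap centrally managed access controls close:

```python
import datetime
import json
import docker

client = docker.from_env()
AUDIT_LOG = "container-audit.jsonl"  # illustrative local audit trail

# Record every container start/stop/kill/destroy event with a timestamp.
watched = {"start", "stop", "kill", "destroy"}
with open(AUDIT_LOG, "a") as log:
    for event in client.events(decode=True, filters={"type": "container"}):
        if event.get("Action") in watched:
            stamp = datetime.datetime.fromtimestamp(
                event["time"], tz=datetime.timezone.utc
            )
            log.write(json.dumps({
                "time": stamp.isoformat(),
                "action": event["Action"],
                "container": event.get("Actor", {})
                                  .get("Attributes", {})
                                  .get("name"),
            }) + "\n")
```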

8. Implement Real-Time Threat Detection and Incident Response

No matter how effective you are with vulnerability scanning and container hardening, there are always unknown bugs and vulnerabilities at runtime that could eventually enable intrusions or system compromises. That’s why it’s important to protect your system with real-time threat detection and develop well-drilled incident response capabilities.

There are many threat detection paradigms nowadays, including behavioral baselining, a mechanism that focuses on understanding an application’s or system’s typical behavior in order to identify anomalies. Behavioral baselining involves creating a system or application baseline, continuously monitoring against baseline configurations, and detecting and responding to any changes to those configurations. Active response is a good way to react to an attack, compromise or anomaly as soon as it is detected.

Responses can also come in many different forms: alerting responsible personnel, communicating with enterprise ticketing systems, applying predetermined corrective actions to systems and applications, and more. In a containerized environment, an active response could mean performing additional logging, applying additional isolation rules, dynamically disabling a user or even actively deleting the container.
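To make this concrete, here is a toy sketch of baseline-plus-active-response using the Docker SDK for Python: it compares the processes running in a container against an expected allowlist and kills the container on a mismatch. The container name and baseline entries are hypothetical, and a real system would capture logs and filesystem state for forensics before removing anything:

```python
import docker

client = docker.from_env()

# Toy behavioral baseline: the process command lines this container is
# expected to run. The container name and baseline are illustrative.
BASELINE = {
    "nginx: master process nginx -g daemon off;",
    "nginx: worker process",
}

container = client.containers.get("web-frontend")  # hypothetical container

top = container.top()  # point-in-time process snapshot from the daemon
cmd_index = top["Titles"].index("CMD")
running = {proc[cmd_index] for proc in top["Processes"]}

unexpected = running - BASELINE
if unexpected:
    # Active response: alert and stop the offending container. A real
    # system would first capture logs and filesystem state for forensics.
    print(f"ALERT: unexpected processes detected: {unexpected}")
    container.kill()
```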

When your organization moves to develop applications with containers, the infrastructural differences can limit your ability to perform forensics since the instance or host could have been replaced. It’s critical to implement specific processes for reviewing all responses to incidents that occur in containerized environments.
