Docker Security Implementation

Docker and Host Configuration

1. Keep Host and Docker Up to Date

It is essential to patch both Docker Engine and the underlying host operating system running Docker, to prevent a range of known vulnerabilities, many of which can result in container escapes.

Since the kernel is shared by the container and the host, a kernel exploit executed from within a container can directly affect the host. For example, a successful kernel exploit can enable attackers to break out of a non-privileged container and gain root access to the host.

2. Do Not Expose the Docker Daemon Socket

The Docker daemon socket is a Unix socket that facilitates communication with the Docker API. By default, this socket is owned by the root user. If anyone else obtains access to the socket, they will have permissions equivalent to root access to the host.

Take note that it is possible to bind the daemon socket to a network interface, making the Docker daemon reachable remotely. This option should be enabled with great care, especially in production environments.

To avoid this issue, follow these best practices:

  • Never make the daemon socket available for remote connections, unless you are using Docker's encrypted HTTPS socket, which supports authentication (see the sketch after this list).
  • Do not run Docker images with an option like -v /var/run/docker.sock:/var/run/docker.sock, which exposes the socket in the resulting container.
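
If remote access is genuinely required, a minimal sketch of exposing the daemon only over TLS with client-certificate authentication (the certificate file names are illustrative; generate them with your own CA):

  # Serve the Docker API over TLS and require a client certificate signed by your CA
  dockerd --tlsverify --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem -H tcp://0.0.0.0:2376

Clients then connect with their own certificate, for example: docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H tcp://docker-host:2376 version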

3. Run Docker in Rootless Mode

Docker provides “rootless mode”, which lets you run Docker daemons and containers as non-root users. This is extremely important for mitigating vulnerabilities in daemons and container runtimes, which could otherwise grant an attacker root access to entire nodes and clusters.

Rootless mode runs Docker daemons and containers within a user namespace. This is similar to the userns-remap mode, but unlike it, rootless mode runs daemons and containers without root privileges by default.

To run Docker in rootless mode:

  1. Install Docker in rootless mode (see the official installation instructions).
  2. Use the following commands to launch the daemon when the host starts:

     systemctl --user enable docker
     sudo loginctl enable-linger $(whoami)

  3. Run a container as rootless using a Docker context:

     docker context use rootless
     docker run -d -p 8080:80 nginx

4. Avoid Privileged Containers

Docker provides a privileged mode, which lets a container run as root on the local machine. Running a container in privileged mode provides it with the capabilities of the host, including:

  • Root access to all devices
  • Ability to tamper with Linux security modules like AppArmor and SELinux
  • Ability to install a new instance of the Docker platform, using the host's kernel capabilities, and run Docker within Docker.

Privileged containers create a major security risk, enabling attackers to easily escalate privileges if the container is compromised. Therefore, it is not recommended to use privileged containers in a production environment; better still, never use them in any environment.

To check if a container is running in privileged mode, use the following command (it returns true if the container is privileged, or false if it is not):

docker inspect --format='{{.HostConfig.Privileged}}' [container_id]

5. Limit Container Resources

When a container is compromised, attackers may try to make use of the underlying host resources to perform malicious activity. Set Docker memory and CPU usage limits to minimize the impact of breaches for resource-intensive containers.

In Docker, the default setting is to allow the container to access all RAM and CPU resources on the host. It is important to set resource quotas to limit the resources your container can use, both for security reasons and to ensure each container has the appropriate resources and does not disrupt other services running on the host.
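
For example, a minimal sketch of setting limits at run time (the limit values and image name are illustrative; tune them to your workload):

  # Cap the container at 512 MB of RAM (no extra swap) and 1.5 CPUs
  docker run -d --memory=512m --memory-swap=512m --cpus=1.5 nginx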

6. Segregate Container Networks

Docker containers require a network layer to communicate with the outside world through the network interfaces on the host. The default bridge network exists on all Docker hosts—if you do not specify a different network, new containers automatically connect to it.

It is strongly recommended not to rely on the default bridge network. Use custom bridge networks to control which containers can communicate with each other, and to enable automatic DNS resolution from container name to IP address. You can create as many networks as you need and decide which networks each container should connect to (if at all).

Ensure that containers can connect to each other only if absolutely necessary, and avoid connecting sensitive containers to public-facing networks.

Docker provides network drivers that let you create your own bridge network, overlay network, or macvlan network. If you need more control, you can create a Docker network plugin.
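
For example, a minimal sketch of segregating two related containers onto their own network (the network, container, and image names are illustrative):

  # Create a user-defined bridge network and attach only the containers that must talk to each other
  docker network create --driver bridge app-net
  docker run -d --name backend --network app-net my-backend-image
  docker run -d --name frontend --network app-net -p 443:8443 my-frontend-image

Containers on other networks cannot reach these services, and within app-net the containers can resolve each other by name.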

7. Improve Container Isolation

Operations teams should create an optimized environment to run containers. Ideally, the operating system on a container host should protect the host kernel from container escapes, and prevent mutual influence between containers.

Containers are Linux processes with isolation and resource limitations, running on a shared operating system kernel. Protecting a container is exactly the same as protecting any process running on Linux. You can use one or more of the following Linux security capabilities:

  • Linux namespaces. Namespaces make Linux processes appear to have access to their own, separate global resources. They provide an abstraction that gives each container the impression of running on its own operating system, and are the basis of container isolation.
  • SELinux. For Red Hat Linux distributions, SELinux provides an additional layer of security to isolate containers from each other and from the host. It allows administrators to apply mandatory access controls for users, applications, processes and files. It is a second line of defense that can stop attackers who manage to breach the namespace abstraction.
  • AppArmor. For Debian-based Linux distributions, AppArmor is a Linux kernel enhancement that can limit the system resources a program can access. It binds access control attributes to specific programs, and is controlled by security profiles loaded into the kernel at boot time.
  • Cgroups. Control groups limit, account for, and isolate the resource usage of a group of processes, including CPU, memory, disk I/O, and networking. You can use cgroups to prevent container resources from being consumed by other containers on the same host, and to stop attackers from creating pseudo devices.
  • Capabilities. Linux allows you to limit the privileges of any process, containers included, through “capabilities”: specific privileges that can be enabled or dropped for each process. When running a container, you can usually drop numerous capabilities without affecting the containerized application (see the example after this list).
  • Seccomp. The secure computing mode (seccomp) in the Linux kernel lets you transition a process to a secure mode, in which it is only allowed to perform a small set of safe system calls. Setting a seccomp profile for a container provides one more level of defense against compromise.
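
For instance, a minimal sketch of dropping all Linux capabilities and adding back only what a web server needs (the image and the retained capability are illustrative):

  # Drop every capability, then re-grant only the ability to bind to privileged ports
  docker run -d --cap-drop ALL --cap-add NET_BIND_SERVICE -p 80:80 nginx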

8. Set Filesystem and Volumes to Read-only

A simple and effective security trick is to run containers with a read-only filesystem. This can prevent malicious activity such as deploying malware on the container or modifying configuration.

The following code sets a Docker container to read only:
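
A minimal sketch (the image name and tmpfs mount are illustrative; many applications still need a small writable scratch area):

  # Mount the container's root filesystem as read-only, with a writable tmpfs for temporary files
  docker run -d --read-only --tmpfs /tmp nginx

Individual volumes can also be mounted read-only by appending :ro, for example -v /data:/data:ro.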


9. Complete Lifecycle Management

Cloud native security requires security controls and mitigation techniques at every stage of the application lifecycle, from build to workload and infrastructure. Follow these best practices:

  • Implement vulnerability scanning to ensure clean code at all stages of the development lifecycle.
  • Use a sandbox environment where you can QA your code before it goes into production, to ensure there is nothing malicious that will deploy at runtime.
  • Implement drift prevention to ensure container immutability.
  • Create an incident response process to ensure rapid response in the case of an attack.
  • Apply automated patching.
  • Ensure you have robust auditing and forensics for quick troubleshooting and compliance reporting.

10. Restrict System Calls from Within Containers

In a container, you can choose to allow or deny any system calls. Not all system calls are required to run a container.

With this in mind, you can monitor the container, obtain a list of all system calls made, and explicitly allow those calls and no others. It is important to base your configuration on observation of the container at runtime, because you may not be aware of the specific system calls used by your container’s components, or how those calls are named in the underlying operating system.
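
A minimal sketch of applying such an allow-list as a custom seccomp profile (the profile path and image name are illustrative; the profile itself would list only the system calls observed at runtime):

  # Apply a custom seccomp profile that blocks every system call not explicitly allow-listed
  docker run -d --security-opt seccomp=/path/to/allowlist-profile.json my-app-image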

Securing Images

11. Scan and Verify Container Images

Docker container images must be tested for vulnerabilities before use, especially if they were pulled from public repositories. Remember that a vulnerability in any component of your image will exist in all containers you create from it. If you use a base image to create new images, any vulnerability in the base image will extend to your new images.

Container image scanning is the process of analyzing the content and composition of images to detect security issues, misconfigurations or vulnerabilities.

Images containing software with security vulnerabilities are susceptible to attack during container runtime. If you build images in a CI pipeline, scan them as part of the build, before they are deployed. Images with vulnerabilities that exceed a severity threshold should fail the build, and unsafe images should never be pushed to a container registry accessible by production systems.

There are many open source and proprietary image scanners available. A comprehensive solution can scan the operating system packages (even if the container runs a stripped-down Linux distribution), the specific libraries running within the container, and their dependencies. Ensure the scanner supports the languages used by the components in your image.

Most container scanning tools use multiple Common Vulnerability and Exposure (CVE) databases, and test if those CVEs are present in a container image. Some tools can also test a container image for security best practices and misconfigurations.
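
For example, a minimal sketch using Trivy, the open source scanner discussed later in this article (the image name and severity threshold are illustrative):

  # Scan an image and fail the CI job if HIGH or CRITICAL vulnerabilities are found
  trivy image --severity HIGH,CRITICAL --exit-code 1 my-registry/my-app:1.2.3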

12. Use Minimal Base Images

Docker images are commonly built on top of “base images”. While this is convenient, because it avoids having to configure an image from scratch, it raises security concerns. You may use a base image with components that are not really required for your purposes. A common example is using a base image with a full Debian Stretch distribution, whereas your specific project does not really require operating system libraries or utilities.

Remember that any additional component added to your images expands the attack surface. Carefully select base images to ensure they suit your purposes, and if necessary, build your own minimal base image.
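
As an illustration, a hedged Dockerfile sketch using a minimal base image (the image tag and application binary are illustrative):

  # A small Alpine base instead of a full distribution; add only the packages the app needs
  FROM alpine:3.19
  RUN apk add --no-cache ca-certificates
  COPY my-app /usr/local/bin/my-app
  ENTRYPOINT ["/usr/local/bin/my-app"]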

13. Don’t Leak Sensitive Info to Docker Images

Docker images often require sensitive data for their normal operations, such as credentials, tokens, SSH keys, TLS certificates, database names or connection strings. In other cases, applications running in a container may generate or store sensitive data. Sensitive information should never be hardcoded into the Dockerfile—it will be copied into Docker containers, and may be cached in intermediate image layers, even if you attempt to delete it.

Container orchestrators like Kubernetes and Docker Swarm provide a secrets management capability which can solve this problem. You can use secrets to manage sensitive data a container needs at runtime, without storing it in the image or in source code.
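
For example, a minimal sketch using Docker Swarm secrets (the secret, service, and image names are illustrative):

  # Create a secret from standard input and attach it to a service;
  # the value is mounted at /run/secrets/db_password inside the container, never baked into the image
  printf 'S3cr3tValue' | docker secret create db_password -
  docker service create --name api --secret db_password my-api-image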

14. Use Multi Stage Builds

To build containerized applications in a consistent manner, it is common to use multi-stage builds. This has both operational and security advantages.

In a multi-stage build, you create an intermediate container that contains all the tools you need to compile or generate the final artifact. At the last stage, only the generated artifacts are copied to the final image, without any development dependencies or temporary build files.

A well-designed multi-stage build contains only the minimal binary files and dependencies required for the final image, with no build tools or intermediate files. This significantly reduces the attack surface.

In addition, a multi-stage build gives you more control over the files and artifacts that go into a container image, making it more difficult for attackers or insiders to add malicious or untested artifacts without permission.
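
A minimal sketch of a multi-stage Dockerfile (the Go application, paths, and image tags are illustrative):

  # Build stage: contains the compiler and build dependencies
  FROM golang:1.22 AS build
  WORKDIR /src
  COPY . .
  RUN CGO_ENABLED=0 go build -o /out/app .

  # Final stage: only the compiled binary is copied in; no build tools remain in the image
  FROM alpine:3.19
  COPY --from=build /out/app /usr/local/bin/app
  USER nobody
  ENTRYPOINT ["/usr/local/bin/app"]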

15. Secure Container Registries

Container registries are highly convenient, letting you download container images at the click of a button, or automatically as part of development and testing workflows.

However, together with this convenience comes a security risk. There is no guarantee that the image you are pulling from the registry is trusted. It may unintentionally contain security vulnerabilities, or may have intentionally been replaced with an image compromised by attackers.

The solution is to use a private registry deployed behind your own firewall, to reduce the risk of tampering. To add another layer of protection, ensure that your registry uses Role Based Access Control (RBAC) to restrict which users can upload and download images from it.

Avoid giving open access to your entire team. Open access simplifies operations, but increases the risk that a team member, or an attacker who compromises their account, can introduce unwanted artifacts into an image.

16. Use Fixed Tags for Immutability

Tags are commonly used to manage versions of Docker images. For example, a latest tag is used to indicate the most recent version of an image. However, because tags are mutable, the same tag can point to different images over time, causing confusion and inconsistent behavior in automated builds.

There are three main strategies for ensuring tags are immutable and are not affected by subsequent changes to the image:

  • Preferring a more specific tag—if an image has several tags, a build process should select the tag containing the most information (e.g. both version and operating system).
  • Keeping a local copy of images—for example, in a private repository, and confirming that tags are the same as those in the local copy.
  • Signing images—Docker offers a Content Trust mechanism that allows you to cryptographically sign images using a private key. This guarantees the image, and its tags, have not been modified (see the example after this list).
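
For example, image signing can be enforced by enabling Docker Content Trust; pulling by digest is a related way to pin an exact, immutable image (the image name and digest placeholder are illustrative):

  # Require signed images for pull, push, and run operations
  export DOCKER_CONTENT_TRUST=1
  docker pull my-registry/my-app:1.2.3

  # Alternatively, reference an image by its immutable content digest rather than a mutable tag
  docker pull my-registry/my-app@sha256:<digest>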

Monitoring Containers

17. Monitor Container Activity

Visibility and monitoring are critical to smooth operation and security of Docker containers. Containerized environments are dynamic, and close monitoring is required to understand what is running in your environment, identify anomalies and respond to them.

Each container image can have multiple running instances. Due to the speed at which new images and versions are deployed, issues can quickly propagate across containers and applications. Therefore, it is critical to identify problems early and remediate them at the source—for example, by identifying a faulty image, fixing it, and rebuilding all containers using that image.

Put tools and practices in place that can help you achieve observability of the following components:

  • Docker hosts
  • Container engines
  • Master nodes (if running an orchestrator like Kubernetes)
  • Containerized middleware and networking
  • Workloads running in containers

In large-scale environments, this can only be achieved with dedicated cloud-native monitoring tools.

18. Secure Containers at Runtime

At the center of the cloud native stack are workloads, always a prized asset for hackers. The ability to stop an attack in progress is of utmost importance, but few organizations are effectively able to stop an attack or zero-day exploit as it happens, or before it happens.

Runtime security for Docker containers involves securing your workload, so that once a container is running, drift is not possible, and any malicious action is blocked immediately. Ideally, this should be done with minimal overhead and rapid response time.

Implement drift prevention measures to stop attacks in progress and prevent zero-day exploits. In addition, use automated vulnerability patching and management to provide another layer of runtime security.

19. Save Troubleshooting Data Separately from Containers

If your team needs to log into containers over SSH for every maintenance operation, this creates a security risk. You should design a way to maintain containers without needing to access them directly.

A good way to do this and limit SSH access is to make the logs available outside the container. In this way, administrators can troubleshoot containers without logging in. They can then tear down existing containers and deploy new ones, without ever establishing a connection.
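
One way to do this, as a minimal sketch (the logging driver and collector address are illustrative; Docker supports several logging drivers):

  # Ship container logs to an external collector instead of keeping them inside the container
  docker run -d --log-driver=syslog --log-opt syslog-address=udp://logs.example.com:514 nginx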

20. Use Metadata Labels for Images

Container labeling is a common practice, applied to objects like images, deployments, Docker containers, volumes, and networks.

Use labels to add information to containers, such as licensing information, sources, names of authors, and relation of containers to projects or components. They can also be used to categorize containers and their contents for compliance purposes, for example labeling a container as containing protected data.

Labels are commonly used to organize containerized environments and automate workflows. However, when workflows rely on labels, errors in applying a label can have severe consequences. To address this concern, automate labeling processes as much as possible, and carefully control which users and roles are allowed to assign or modify labels.
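
For example, a minimal sketch of adding metadata labels in a Dockerfile (the label keys and values are illustrative; standard OCI keys such as org.opencontainers.image.authors are a common choice):

  LABEL org.opencontainers.image.authors="team@example.com"
  LABEL org.opencontainers.image.source="https://example.com/my-app"
  LABEL com.example.project="billing" com.example.data-classification="confidential"

Labels can then drive automation, for example: docker ps --filter "label=com.example.project=billing"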

Bonus Section: Docker CIS Security Benchmark: Safe Docker Configuration

CIS Benchmarks are universal security best practices developed by cybersecurity professionals and experts. Each CIS Benchmark provides guidelines for creating a secure system configuration. The following sections summarize recommendations from the CIS Docker Community Edition Benchmark, specifying how to set up a safe Docker configuration.

Download the full CIS Docker Benchmark.

Host Configuration

  • Create a separate partition for containers
  • Harden the container host
  • Update your Docker software on a regular basis
  • Manage Docker daemon access authorization wisely
  • Audit Docker files and directories
  • Audit all Docker daemon activity

Docker Daemon Configuration

  • Restrict network traffic between containers on the default bridge, and prevent containers from acquiring new privileges
  • Enable user namespace support, authorization for Docker client commands, live restore, and confirm default cgroup usage
  • Disable legacy registry operations and the userland proxy
  • Avoid networking misconfiguration by allowing Docker to make changes to iptables, and avoid experimental features in production
  • Configure TLS authentication for the Docker daemon, and centralized and remote logging
  • Set the logging level to 'info', and set an appropriate default ulimit
  • Don’t use insecure registries or the aufs storage driver
  • Apply a base device size for containers and a daemon-wide custom seccomp profile to limit system calls

Container Images and Build File

  • Create a user for the container
  • Ensure containers use only trusted images
  • Ensure unnecessary packages are not installed in the container
  • Include security patches during scans and rebuilding processes
  • Enable content trust for Docker
  • Add HEALTHCHECK instructions to the container image
  • Remove setuid and setgid permissions from the images
  • Use COPY instead of ADD in the Dockerfile (several of these recommendations are illustrated in the sketch after this list)
  • Install only verified packages
  • Don’t use update instructions alone in the Dockerfile (combine update and install in a single RUN instruction)
  • Don’t store secrets in Dockerfiles
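
A hedged Dockerfile sketch illustrating several of the items above (the base image, user, port, and health endpoint are illustrative):

  FROM alpine:3.19
  # Create and switch to a non-root user instead of running as root
  RUN adduser -D -u 10001 appuser
  # Prefer COPY over ADD; COPY has no implicit archive unpacking or remote-fetch behavior
  COPY app /usr/local/bin/app
  USER appuser
  # Let the engine or orchestrator detect an unhealthy container
  HEALTHCHECK --interval=30s --timeout=3s CMD wget -qO- http://localhost:8080/health || exit 1
  ENTRYPOINT ["/usr/local/bin/app"]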

Container Runtime

  • Restrict containers from acquiring additional privileges and restrict Linux Kernel Capabilities.
  • Enable AppArmor Profile.
  • Avoid use of privileged containers during runtime, running ssh within containers, mapping privileged ports within containers.
  • Ensure sensitive host system directories aren’t mounted on containers, the container's root filesystem is mounted as read-only, the Docker socket is not mounted inside any containers.
  • Set appropriate CPU priority for the container, set 'on-failure' container restart policy to '5', and open only necessary ports on the container.
  • Apply per need SELinux security options, and overwrite the default ulimit at runtime.
  • Don’t share the host's network, process, IPC, UTS, or user namespaces with containers, and don’t use the host's mount propagation mode.
  • Limit memory usage for container and bind incoming container traffic to a specific host interface.
  • Don’t expose host devices directly to containers, don’t disable the default seccomp profile, don’t use docker exec commands with the privileged or user=root options, and don’t use Docker's default bridge docker0.
  • Confirm cgroup usage and use PIDs cgroup limit, check container health at runtime, and always update docker commands with the latest version of the image.

Docker Security Operations

Avoid image sprawl and container sprawl.

Docker Swarm Configuration

  • Enable swarm mode only if needed
  • Create a minimum number of manager nodes in a swarm
  • Bind swarm services to a specific host interface
  • Encrypt container data exchanged between nodes on overlay networks
  • Manage secrets in a Swarm cluster with Docker's secret management commands
  • Run swarm manager in auto-lock mode
  • Rotate swarm manager auto-lock key periodically
  • Rotate node and CA certificates as needed
  • Separate management plane traffic from data plane traffic

Holistic Docker Security with Aqua

Aqua provides a platform that secures Cloud Native, serverless and container technologies like Docker. Aqua offers end-to-end security for applications running Docker Enterprise Edition or Community Edition, and protects you throughout the full lifecycle of your continuous delivery and DevOps pipeline: from the point where you shift left, through to runtime controls, firewall, audit, and compliance.

Continuous Image Assurance

Aqua scans images for malware, vulnerabilities, embedded secrets, configuration issues and OSS licensing. You can develop policies that outline, for example, which images can run on your Docker hosts. Aqua’s vulnerabilities database, founded on a continuously updated data stream, is aggregated from several sources and consolidated to make sure only the latest data is used, promoting accuracy and limiting false positives and negligible CVEs.

Aqua offers an open source tool, called Trivy, which lets you scan your container images for package vulnerabilities. Trivy uses the same vulnerability database as Aqua’s commercial scanner. The key difference is that Trivy runs according to the build steps created within your Dockerfile.

Runtime Security for Docker

Aqua protects Docker applications at runtime, ensuring container immutability and prohibiting changes to running containers, and isolating the container from the host via custom machine-learned seccomp profiles. It also ensures least privilege for files, executables and OS resources using a machine-learned behavioral profile, and manages network connections with a container firewall.

Aqua further enhances securing Docker as follows:

  • Event logging and reporting—granular audit trails of access activity, image scans, Docker commands, events, container activity, system events, and secrets activity.
  • CIS certified benchmark checks—assess node configuration against Docker and K8s CIS benchmarks with scheduled reporting and testing, or with Aqua OSS tools.
  • Global compliance templates—pre-defined compliance policies meet security standards such as HIPAA, CIS, PCI, and NIST.
  • Full user accountability—uses granular user accountability and monitored super-user permissions.
  • “Thin OS” host compliance—monitor and scan the host for malware, vulnerabilities, and login activity, and scan images kept on the host.
  • Compliance enforcement controls—only images and workloads that pass compliance checks can run in your environment.

Container Firewall for Docker

Aqua’s container firewall lets you visualize network connections, develop rules based on application services, and map legitimate connections automatically. Only whitelisted connections will be allowed, both within a Swarm or Kubernetes cluster, and also between clusters.

Secrets Management

Store your credentials as secrets, don't leave them in your source code. Aqua securely transfers secrets to containers at runtime, encrypted at rest and in transit, and places them in memory with no persistence on disk, so they are only visible to the relevant container. Integrate Aqua’s solution with your current enterprise vault, including CyberArk, Hashicorp, AWS KMS or Azure Vault. You can revoke, update, and rotate secrets without restarting containers.

Source: https://blog.aquasec.com/docker-security-best-practices
