Security guidelines applied to microservices cloud architectures
Altice Labs
We develop innovative products and services for the telecommunications and information technology market.
This paper aims to give a comprehensive set of guidelines and recommendations for securing microservices applications deployed in the cloud. In the following sections, we will explain basic concepts related to microservices and cloud computing and highlight their key features and benefits. Furthermore, we will provide practical recommendations for dealing with common security problems in microservices architectures, based on credible literature and our own experience.
Microservices architecture has been around longer than cloud-native computing. It started to become popular about a decade ago, whereas the term cloud-native emerged around 2015. However, cloud-native applications are based on cloud computing principles that extend back to the 1960s. So, let’s start from the beginning and go a bit deeper into these concepts to understand the advantages and possible disadvantages of each of these technologies and how they can benefit from each other.
Keep reading to explore the best practices for securing microservices applications deployed in a cloud environment, or download the full white paper.
Contextualizing
Microservices and cloud-native applications are distinct but often linked concepts in software architecture. Microservices are small, independent services that make up a larger application, each with a specific function and the ability to be developed, deployed, and scaled independently. This architecture aims to simplify the development and deployment of complex applications.
Cloud-native applications, designed for cloud computing environments, leverage cloud features like scalability and elasticity. They can be built using various architectures, including microservices, monolithic, or serverless.
Microservices don’t necessarily have to run in the cloud and can be deployed on-premises using platforms like Kubernetes. However, they are integral to cloud-native apps due to their benefits.
To build cloud-native applications effectively using a microservices architecture, developers need proficiency in certain tools, programming languages, development techniques, and security aspects. This paper provides an overview of these concepts, with a focus on the security aspects of microservices architectures.
Cloud computing is a model that allows easy, on-demand access to a shared pool of configurable computing resources. It has five key characteristics: on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service.
Previously, organizations relied on local data centers for their applications, which often led to underutilized hardware and lack of elasticity. Many have now moved to the cloud, which addresses these issues.
The cloud offers users more flexibility in resource selection and easy access, among other benefits over traditional IT infrastructures. However, these benefits can turn into drawbacks if the necessary precautions aren’t taken. Thus, it’s important to understand both the positive and negative aspects when considering cloud features. The cloud offers several advantages, including faster time to market due to the ability to quickly create or remove instances, and an on-demand self-service approach that allows customers to request services as needed. It also provides rapid elasticity, enabling companies to scale resources up or down based on demand, without investing in physical infrastructure.
Clouds can lead to significant cost savings as companies don’t need to invest in their own infrastructure, and clients pay only for the resources and services they use. However, costs can escalate if resources aren’t managed carefully.
Resiliency is another benefit of cloud environments, which offer backup and disaster recovery features. Cloud backup ensures data safety and prevents data loss, while cloud disaster recovery stores backup data, apps, and other resources in cloud storage. Deploying across multiple data center locations enhances resilience. Cloud providers can also offer high availability and reliability, with SLAs of 99.9% uptime.
Cloud environments, while beneficial, pose complexities in terms of security and privacy. Concerns include third-party ownership, loss of physical data control, resource sharing, and increased data exposure. However, public cloud providers offer security features like encryption, access control, and monitoring. Yet, vendor lock-in risk remains due to service dependencies.
To mitigate risks, various Cloud Deployment Models are available. Clients can choose private, public, community, or hybrid clouds based on their needs, each with its own characteristics and advantages.
Cloud Service Models like Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) offer varying levels of control and flexibility. IaaS provides fundamental computing resources, giving consumers control over operating systems, storage, and applications. PaaS provides a platform for application execution, with consumers controlling only the deployed applications. SaaS offers cloud-deployed applications accessible via a web browser, with consumers not controlling the infrastructure or application.
While cloud technology is popular, it’s vulnerable to attacks, making security crucial. Security and data protection are shared responsibilities between the customer and provider, with the level of responsibility varying based on the service model used. Understanding these responsibilities is essential for system security.
Principles
Although there is no precise definition of this type of architecture, in 2014 Martin Fowler and James Lewis published their “Microservices” article, which became a de facto standard for defining microservices. In this article, they describe Microservices Architecture as "an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and are independently deployable by fully automated deployment machinery."
Microservice architecture allows services to be upgraded or replaced independently, avoiding the need to redeploy the entire application. It’s organized around business capabilities, with each service managed by a single team, promoting independent work and creating cohesive services.
Unlike traditional projects, in microservices, the development team owns a product throughout its lifetime, enhancing customer satisfaction. Communication between components favors “dumb pipes,” simple protocols like REST over HTTP, making the system more manageable.
In microservices, each team is responsible for their service’s lifecycle, including technology choices. Each service has its own database, allowing for technology diversity. Automation techniques simplify the deployment of microservices.
Given the high probability of failure in a microservice architecture, it’s crucial to detect and handle failures quickly, necessitating individual service monitoring and logging setups. Microservices support evolutionary design, allowing applications to change over time, making the system more scalable and resilient.
The key virtues of microservices architectures include organizational alignment, allowing services to adapt to organizational changes; independent deployment, enabling frequent and rapid code deployment; independent scalability, allowing individual services to scale without affecting the entire system; robustness, with isolated services enhancing system resilience and reducing failure impact; and composability, with independent, well-defined services that can be reused, speeding up new application development.
Challenges?
Microservices architectures, while advantageous, also present concerns. These include technology heterogeneity, which allows for diverse technologies but can lead to overload if too many are adopted simultaneously. Costs can increase rapidly due to the need to maintain multiple services and resources. Monitoring and troubleshooting are essential but become more challenging with microservices due to their distributed nature. As system complexity increases, so does the need for monitoring multiple services and supporting various logging solutions.
Testing becomes more complex in a microservice architecture, requiring efficient strategies as the number of unit tests and the scope of end-to-end tests increase. Latency can be an issue due to the distributed processing across multiple independent services.
Data consistency can also be a concern as microservices have different databases managed by different processes, leading to potential consistency issues and increased complexity. Implementing and coordinating distributed transactions and avoiding data duplication are not straightforward tasks.
The elephant in the room
Security is often the most significant concern in microservices architectures due to their heterogeneous and distributed nature. As the number of microservices grows, so do the interactions between components, communication links to protect, and potential attack vectors. The architecture’s complexity increases with the data flow over networks and the number of entry points for each microservice, expanding the system’s attack surface.
It’s crucial to ensure that each microservice entry point is equally protected, as the system’s security is only as strong as its weakest link. However, microservices architectures offer more opportunities for a defense-in-depth strategy than conventional architectures. The system’s functionality is divided into distinct components, allowing for action limitation on each component and varying security levels based on the microservices’ sensitivity and importance.
Securing a microservices system is challenging, but certain principles can guide the design process. These include:
Least privilege: Restrict permissions to only those necessary for users or services to complete their tasks. Start with a deny-by-default policy and grant permissions as needed.
Defense-in-depth: Implement multiple layers of security to mitigate attacks if one layer fails. In a microservice architecture, it’s crucial to implement defense-in-depth strategies for each microservice.
Zero Trust: This security model operates on “Never trust, always verify.” Everything must be verified, even for internal users, and trust should be established only with sufficient evidence of legitimacy.
Security depends on various factors, including system requirements, data sensitivity, organizational policies, and budget. Common problems in this architecture are described below, along with possible solutions/recommendations:
Technology heterogeneity: Balance the advantages of technology diversity with the disadvantages of complexity.
Cost: Microservices might not be the best option for cost reduction as maintaining multiple services and resources can increase costs.
Monitoring and troubleshooting: Implement efficient strategies for monitoring multiple services and supporting various logging solutions.
Testing: Develop efficient testing strategies as the number of unit tests and the scope of end-to-end tests increase with system growth.
Latency: Handle latency issues differently from traditional architectures due to the distributed processing across multiple independent services.
Data consistency: Implement and coordinate distributed transactions and avoid data duplication to manage data consistency issues.
These recommendations are generic and should be used as a starting point. Establish a threat model at the beginning of the architecture design process, conduct regular risk assessments, and validate security policies to ensure overall system security.
To note when designing for security
Edge security
Edge security focuses on user authentication and access control. Multifactor authentication is recommended for verifying user identity. Authorization, which can be implemented at the edge or at the service level, checks whether a user has the necessary permissions. For maximum security, it’s advised to implement authorization at both levels using protocols like OAuth 2.0.
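As an illustration of edge-level authorization, the sketch below validates an OAuth 2.0 bearer token at a hypothetical API gateway and applies a deny-by-default scope check. It assumes the PyJWT library and an identity provider that publishes its signing keys at a JWKS endpoint; the issuer, audience, and scope names are placeholders, not part of the original paper.

```python
# Minimal sketch: validating an OAuth 2.0 bearer token at an API gateway / edge service.
# Assumes the PyJWT library (pip install "pyjwt[crypto]") and an identity provider that
# publishes its signing keys at a JWKS endpoint. URLs and claim values are placeholders.
import jwt
from jwt import PyJWKClient

ISSUER = "https://idp.example.com/"      # hypothetical identity provider
AUDIENCE = "orders-api"                  # hypothetical API identifier
jwks_client = PyJWKClient(f"{ISSUER}.well-known/jwks.json")

def authorize_request(auth_header: str, required_scope: str) -> dict:
    """Verify the bearer token and check that it carries the required scope."""
    if not auth_header.startswith("Bearer "):
        raise PermissionError("missing bearer token")
    token = auth_header.removeprefix("Bearer ")

    signing_key = jwks_client.get_signing_key_from_jwt(token).key
    claims = jwt.decode(
        token,
        signing_key,
        algorithms=["RS256"],    # accept only the expected signing algorithm
        audience=AUDIENCE,
        issuer=ISSUER,
    )
    # Edge-level authorization: deny by default, allow only if the scope is present.
    if required_scope not in claims.get("scope", "").split():
        raise PermissionError("insufficient scope")
    return claims
```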
Service-to-service communication
Securing service-to-service communications is vital. This involves authenticating and authorizing all requests, following principles like zero trust. JSON Web Tokens (JWT) and mutual Transport Layer Security (mTLS) are two common methods. A JWT carries signed claims about the caller between services, while mTLS authenticates both ends of a connection and encrypts the traffic. Both methods should be used together for enhanced security.
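A minimal sketch of such a call, assuming the Python requests library, an internal certificate authority, and placeholder service names: the client presents its own certificate (mTLS) and a bearer JWT on the same request.

```python
# Minimal sketch: one microservice calling another over mTLS while presenting a JWT.
# Assumes the requests library and that each service has a certificate/key pair issued
# by an internal CA. Paths, URLs, and token handling are illustrative.
import requests

INTERNAL_CA = "/etc/certs/internal-ca.pem"                         # CA that signed the server cert
CLIENT_CERT = ("/etc/certs/orders.crt", "/etc/certs/orders.key")   # this service's own identity

def call_inventory(service_token: str) -> dict:
    response = requests.get(
        "https://inventory.internal:8443/v1/stock",   # hypothetical internal endpoint
        cert=CLIENT_CERT,                             # client certificate -> mutual TLS
        verify=INTERNAL_CA,                           # verify the server's certificate
        headers={"Authorization": f"Bearer {service_token}"},  # JWT with caller claims
        timeout=5,
    )
    response.raise_for_status()
    return response.json()
```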
Logging
Detailed logging is crucial for identifying and troubleshooting issues. Logs should be written to a local file, not sent directly to the central system, to prevent data loss. A dedicated logging agent should collect log data and send it to the central system. Logs containing sensitive data should be sanitized. The logging agent should publish logs in a structured format like JSON, CSV, or Syslog for easy parsing and analysis. Logs should be secured with access controls and encryption to prevent unauthorized access and data exposure.
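The sketch below illustrates this pattern with Python’s standard logging module: structured JSON records are written to a local file for a separate logging agent to ship, and a simple, illustrative sanitization rule masks sensitive fields. The service name, file path, and field names are assumptions.

```python
# Minimal sketch: writing structured (JSON) logs to a local file so a separate logging
# agent can collect them and ship them to the central system. Field names and the
# sanitization rule are illustrative.
import json
import logging

class JsonFormatter(logging.Formatter):
    SENSITIVE_KEYS = {"password", "token", "authorization"}

    def format(self, record: logging.LogRecord) -> str:
        payload = {
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "service": "orders",            # hypothetical service name
            "message": record.getMessage(),
        }
        extra = getattr(record, "context", {})
        # Sanitize sensitive fields before they reach the log file.
        payload.update({k: ("***" if k in self.SENSITIVE_KEYS else v) for k, v in extra.items()})
        return json.dumps(payload)

handler = logging.FileHandler("/var/log/orders/app.log")   # local file, not the network
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("orders")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("payment accepted", extra={"context": {"order_id": "42", "token": "abc"}})
```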
Container Security
Containers are used for service isolation. They’re lightweight, efficient, and quick to spin up, making them popular for packaging and deploying microservices. However, their security is crucial. Containers should be immutable, with a read-only filesystem and disabled shell access. If updates are needed, a new container image tag should be created. The base image for a container should be minimal to reduce the attack surface.
Container privileges should be limited as well, with containers running as non-root users where possible. If privileged or root access is needed, capabilities should be used following the principle of least privilege. In environments with many services, containers are often used with orchestrators for automated scaling, deployment, and security processes. The orchestration platform should also be considered when securing microservices deployment, with Kubernetes being the most common.
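As a sketch of these recommendations, the following uses the official Kubernetes Python client to declare a container that runs as a non-root user, with a read-only filesystem, no privilege escalation, and all Linux capabilities dropped. The image name and user ID are placeholders.

```python
# Minimal sketch: a hardened container spec built with the official Kubernetes Python
# client (pip install kubernetes). Image name, user ID, and resource names are placeholders.
from kubernetes import client

hardened_container = client.V1Container(
    name="orders",
    image="registry.example.com/orders:1.4.2",    # immutable, versioned tag (never "latest")
    security_context=client.V1SecurityContext(
        run_as_non_root=True,                      # refuse to start as root
        run_as_user=10001,                         # arbitrary non-root UID
        read_only_root_filesystem=True,            # immutable container filesystem
        allow_privilege_escalation=False,
        capabilities=client.V1Capabilities(drop=["ALL"]),  # least privilege: drop everything
    ),
)
```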
Container Orchestrator Security
The security of a container orchestrator, such as Kubernetes, depends on its deployment. If a managed orchestrator like Google Kubernetes Engine is used, the cloud provider secures the control plane. However, clients may opt to secure the orchestrator themselves to meet specific needs.
To enhance Kubernetes security, harden pods, the smallest deployable units, using security contexts that apply container-level security recommendations. Use network policies to restrict pod-to-pod communication, applying the principle of least privilege by default and creating specific policies for necessary traffic.
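A minimal sketch of this deny-by-default approach with the Kubernetes Python client: one policy blocks all ingress traffic in a namespace, and a second explicitly allows only the gateway pods to reach the orders pods on a single port. The namespace, labels, and port are illustrative.

```python
# Minimal sketch: a default-deny ingress policy for a namespace plus a specific policy
# allowing only the gateway to reach the orders pods. Names and labels are placeholders.
from kubernetes import client

default_deny = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="default-deny-ingress", namespace="shop"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(),   # empty selector = every pod in the namespace
        policy_types=["Ingress"],                # no ingress rules -> all inbound traffic denied
    ),
)

allow_gateway = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="allow-gateway-to-orders", namespace="shop"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"app": "orders"}),
        policy_types=["Ingress"],
        ingress=[client.V1NetworkPolicyIngressRule(
            _from=[client.V1NetworkPolicyPeer(
                pod_selector=client.V1LabelSelector(match_labels={"app": "gateway"}),
            )],
            ports=[client.V1NetworkPolicyPort(port=8443, protocol="TCP")],
        )],
    ),
)
```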
The API Server, the bridge between the cluster and its users, should never be exposed on the internet. Restrict access to it using firewall rules or features offered by cloud providers. Avoid using or sharing administrator accounts within the cluster; instead, create accounts with limited permissions.
Disable anonymous requests to prevent unauthenticated access to the server. Use strict Role-Based Access Control rules to limit access to the API server. Restrict permissions for service accounts, which pods use to authenticate with the API server.
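The sketch below, again using the Kubernetes Python client, shows the kind of narrowly scoped RBAC Role and restricted service account this implies: the Role grants read-only access to pods, and the service account does not auto-mount an API token. Names and namespace are placeholders.

```python
# Minimal sketch: a narrowly scoped RBAC Role and a service account that does not
# auto-mount an API token. Resource names and namespace are placeholders.
from kubernetes import client

read_pods_role = client.V1Role(
    metadata=client.V1ObjectMeta(name="read-pods", namespace="shop"),
    rules=[client.V1PolicyRule(
        api_groups=[""],                  # core API group
        resources=["pods"],
        verbs=["get", "list", "watch"],   # read-only; no create/update/delete
    )],
)

orders_service_account = client.V1ServiceAccount(
    metadata=client.V1ObjectMeta(name="orders", namespace="shop"),
    automount_service_account_token=False,   # pods only get a token if they really need one
)
```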
Encrypt etcd, the database storing all cluster information, using a Key Management Service provider. Kubernetes Secrets store sensitive data such as passwords and tokens; by default they are not encrypted but only base64 encoded, so anyone with access to the API server or etcd can read and modify them, and a compromised Secret can be used to attack other parts of the system. To secure Secrets, configure their encryption at rest, restrict access to them with Role-Based Access Control rules, or use an external secret provider to keep confidential data out of the cluster. Secrets should ideally be mounted as volumes rather than exposed as environment variables, which prevents exposure in case of a crash and allows them to be updated without restarting the pod. It’s best to back the volume with a temporary filesystem so secrets are never stored in plain text at rest.
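As an illustration, the following sketch (Kubernetes Python client, placeholder names) exposes a Secret to a container as a read-only volume instead of environment variables.

```python
# Minimal sketch: exposing a Secret to a pod as a read-only volume instead of
# environment variables. Secret, image, and path names are placeholders.
from kubernetes import client

secret_volume = client.V1Volume(
    name="db-credentials",
    secret=client.V1SecretVolumeSource(secret_name="db-credentials"),
)

container = client.V1Container(
    name="orders",
    image="registry.example.com/orders:1.4.2",
    volume_mounts=[client.V1VolumeMount(
        name="db-credentials",
        mount_path="/etc/secrets",   # files appear here, one per key in the Secret
        read_only=True,
    )],
)

pod_spec = client.V1PodSpec(containers=[container], volumes=[secret_volume])
```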
Service Mesh
A service mesh, used with an orchestrator like Kubernetes, is an infrastructure layer that manages microservice communication and coordination. It adds capabilities like observability, traffic management, and security to services without changing their code. The architecture consists of a control plane for tasks like instance creation, monitoring, and policy implementation, and a data plane for service instances, sidecars, and their interactions.
Proxies can manage traffic entering and leaving the system, with an ingress controller for incoming traffic and an egress controller for outgoing traffic. Istio, a popular service mesh solution, provides authorization policies for system communication management and supports mutual TLS and JWT for service-to-service communication.
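To make this concrete, the sketch below applies a hypothetical Istio AuthorizationPolicy through the Kubernetes Python client: only requests arriving from the gateway’s service account identity (established via mTLS) may call GET endpoints on the orders service. It assumes Istio is installed in the cluster; the namespace, labels, and principal are placeholders.

```python
# Minimal sketch: an Istio AuthorizationPolicy that only lets the gateway's service
# account call GET endpoints on the orders service, applied via the Kubernetes Python
# client. Assumes Istio is installed; names and principals are placeholders.
from kubernetes import client, config

authz_policy = {
    "apiVersion": "security.istio.io/v1beta1",
    "kind": "AuthorizationPolicy",
    "metadata": {"name": "orders-allow-gateway", "namespace": "shop"},
    "spec": {
        "selector": {"matchLabels": {"app": "orders"}},
        "action": "ALLOW",
        "rules": [{
            "from": [{"source": {"principals": ["cluster.local/ns/shop/sa/gateway"]}}],
            "to": [{"operation": {"methods": ["GET"], "paths": ["/v1/*"]}}],
        }],
    },
}

config.load_kube_config()
client.CustomObjectsApi().create_namespaced_custom_object(
    group="security.istio.io",
    version="v1beta1",
    namespace="shop",
    plural="authorizationpolicies",
    body=authz_policy,
)
```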
Distributed tracing, another key feature, tracks how different requests move through the system, helping detect possible attacks and availability issues. Despite its advantages, a service mesh can increase the number of runtime instances and the application’s attack surface as each microservice requires its proxy.
To finalize
This document provides a comprehensive overview of cloud computing and microservices architecture, including their main features, and outlines a variety of security measures that can be implemented for microservices architectures in the cloud. However, it’s important to note that there isn’t a universal security solution. The suggestions given in this document are broad and should serve as a starting point. Each specific scenario should be thoroughly examined to identify all necessary security and privacy needs; only after this analysis should the most suitable solution be chosen.
Authors
Keywords: Security, Cloud Computing, Microservices
Contact us if you want to engage in a deeper discussion on this topic!