Book Extracts: .NET Microservices: Architecture for Containerized .NET Applications
Mohammad Abusaa
Engineering Manager | Cloud Architect & SRE | Data Platforms @ Lucid Motors
This article is a set of extracts from the book ".NET Microservices: Architecture for Containerized .NET Applications" that will help you understand how to architect microservices-based applications.
Official Book Page
Introduction
The book explains how to architect and develop microservices-based applications and manage them using containers. It discusses architectural design and implementation approaches using .NET Core and Docker containers.
Related microservice and container-based reference application: eShopOnContainers
The application is an open-source reference app for .NET Core and microservices that is designed to be deployed using Docker containers. The application consists of multiple subsystems, including several e-store UI front ends (a Web MVC app, a Web SPA, and a native mobile app). It also includes the back-end microservices and containers for all required server-side operations.
Why Microservices
Microservices architecture is an approach to building a server application as a set of small services. That means a microservices architecture is mainly oriented to the back-end, although the approach is also being used for the front end.
Why a microservices architecture? In short, it provides long-term agility. Microservices enable better maintainability in complex, large, and highly scalable systems by letting you create applications based on many independently deployable services that each have granular and autonomous lifecycles.
As an additional benefit, microservices can scale out independently. Instead of having a single monolithic application that you must scale out as a unit, you can instead scale out specific microservices. That way, you can scale just the functional area that needs more processing power or network bandwidth to support demand, rather than scaling out other areas of the application that don’t need to be scaled.
Why Containers and Docker
Containerization is an approach to software development in which an application or service, its dependencies, and its configuration (abstracted as deployment manifest files) are packaged together as a container image.
Each container can run a whole web application or a service. For example, a Docker host acts as the container host, and App1, App2, Svc 1, and Svc 2 are containerized applications or services running on it.
Because containers require far fewer resources (for example, they don’t need a full OS), they’re easy to deploy and they start fast. This allows you to have higher density, meaning that it allows you to run more services on the same hardware unit, thereby reducing costs.
Is it mandatory to use containers in a microservices architecture?
Even though containers are enablers of, and a great fit for, microservices, they aren't mandatory for a microservices architecture, and many architectural concepts can be applied without containers, too.
.NET Core vs .NET Framework for Docker containers
You should use .NET Core, with Linux or Windows Containers, for your containerized Docker server application when:
- You have cross-platform needs. For example, you want to use both Linux and Windows Containers.
- Your application architecture is based on microservices.
- You need to start containers fast and want a small footprint per container to achieve better density or more containers per hardware unit in order to lower your costs.
You should use .NET Framework for your containerized Docker server application when:
- Your application currently uses .NET Framework and has strong dependencies on Windows.
- You need to use Windows APIs that are not supported by .NET Core.
- You need to use third-party .NET libraries or NuGet packages that are not available for .NET Core.
What OS to target with .NET containers
Given the diversity of operating systems supported by Docker and the differences between .NET Framework and .NET Core, you should target a specific OS and specific versions depending on the framework you are using.
For Windows, you can use Windows Server Core or Windows Nano Server. These Windows versions provide different characteristics (IIS in Windows Server Core versus a self-hosted web server like Kestrel in Nano Server) that might be needed by .NET Framework or .NET Core, respectively.
For Linux, multiple distros are available and supported in official .NET Docker images (like Debian).
Data sovereignty per microservice
An important rule for microservices architecture is that each microservice must own its domain data and logic. Just as a full application owns its logic and data, so must each microservice own its logic and data under an autonomous lifecycle, with independent deployment per microservice.
The concept of microservice derives from the Bounded Context (BC) pattern in domain-driven design (DDD). DDD deals with large models by dividing them into multiple BCs and being explicit about their boundaries. Each BC must have its own model and database; likewise, each microservice owns its related data. In addition, each BC usually has its own ubiquitous language to help communication between software developers and domain experts.
Logical architecture versus physical architecture
Microservices is a logical architecture.
It’s useful at this point to stop and discuss the distinction between logical architecture and physical architecture, and how this applies to the design of microservice-based applications.
Building microservices doesn’t require the use of any specific technology. For instance, Docker containers aren’t mandatory to create a microservice-based architecture. Those microservices could also be run as plain processes.
Challenges and solutions for distributed data management
Challenge #1: How to define the boundaries of each microservice
You need to focus on the application’s logical domain models and related data. Try to identify decoupled islands of data and different contexts within the same application. Each context could have a different business language (different business terms). The contexts should be defined and managed independently. The terms and entities used in those different contexts might sound similar, but you might discover that, in a particular context, a business concept with a given name is used for a different purpose in another context, and might even have a different name. For instance, a user can be referred to as a user in the identity or membership context, as a customer in a CRM context, as a buyer in an ordering context, and so forth.
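The same real-world person can be modeled differently in each bounded context. As a language-agnostic sketch (in Python, with hypothetical type and field names, not the book's actual code), each context keeps only the attributes it cares about and translates at the boundary:

```python
from dataclasses import dataclass

# Identity context: cares about credentials and sign-in.
@dataclass
class User:
    user_id: str
    email: str

# Ordering context: cares about purchasing details only.
@dataclass
class Buyer:
    buyer_id: str
    full_name: str

# Translation at the context boundary: the Ordering context creates
# its own Buyer from the identity it receives; it never shares the
# Identity context's model or database.
def register_buyer(user: User, full_name: str) -> Buyer:
    return Buyer(buyer_id=user.user_id, full_name=full_name)

buyer = register_buyer(User("u-42", "ana@example.com"), "Ana Smith")
```

The key point is that `User` and `Buyer` are two independent models owned by two different microservices, even though they describe the same person.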
Challenge #2: How to create queries that retrieve data from several microservices
A second challenge is how to implement queries that retrieve data from several microservices, while avoiding chatty communication to the microservices from remote client apps.
API Gateway. For simple data aggregation from multiple microservices that own different databases, the recommended approach is an aggregation microservice referred to as an API Gateway. However, you need to be careful about implementing this pattern, because it can be a choke point in your system.
CQRS with query/reads tables. Another solution for aggregating data from multiple microservices is the Materialized View pattern. In this approach, you generate, in advance (prepare denormalized data before the actual queries happen), a read-only table with the data that’s owned by multiple microservices. The table has a format suited to the client app’s needs.
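As an illustration of the Materialized View pattern (a minimal Python sketch with made-up event and table names, not the book's implementation), a read model keeps a denormalized table updated from events published by the owning microservices, so client queries never fan out across services:

```python
# Denormalized read-only table, keyed by order id. In production this
# would live in its own store (for example, a SQL table or document DB).
order_summaries: dict[str, dict] = {}

# Handlers update the view as events arrive from the owning services.
def on_order_created(event: dict) -> None:
    order_summaries[event["order_id"]] = {
        "order_id": event["order_id"],
        "buyer_name": "(unknown)",  # filled in by the buyer service's event
        "total": event["total"],
    }

def on_buyer_updated(event: dict) -> None:
    for summary in order_summaries.values():
        if summary["order_id"] in event["order_ids"]:
            summary["buyer_name"] = event["buyer_name"]

# Data is prepared ahead of time; the client reads it with a single query.
on_order_created({"order_id": "o-1", "total": 99.0})
on_buyer_updated({"order_ids": ["o-1"], "buyer_name": "Ana"})
summary = order_summaries["o-1"]
```

Note the trade-off: the view is eventually consistent with the owning services, in exchange for fast, join-free reads.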
“Cold data” in central databases. For complex reports and queries that might not require real-time data, a common approach is to export your “hot data” (transactional data from the microservices) as “cold data” into large databases that are used only for reporting.
Challenge #3: How to achieve consistency across multiple microservices
The data owned by each microservice is private to that microservice and can only be accessed through its API. Therefore, the challenge is how to implement end-to-end business processes while keeping consistency across multiple microservices.
No microservice should ever include tables/storage owned by another microservice in its own transactions, not even in direct queries.
A good solution for this problem is to use eventual consistency between microservices articulated through event-driven communication and a publish-and-subscribe system.
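The publish/subscribe mechanism behind eventual consistency can be sketched in a few lines of Python (an in-memory stand-in for a real broker such as RabbitMQ; the event and service names here are hypothetical):

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """In-memory publish/subscribe bus; a real system would use a broker."""
    def __init__(self) -> None:
        self._handlers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, event_name: str, handler: Callable[[dict], None]) -> None:
        self._handlers[event_name].append(handler)

    def publish(self, event_name: str, payload: dict) -> None:
        # Zero-to-many subscribers may react; the publisher doesn't wait
        # for them, which is what makes the consistency eventual.
        for handler in self._handlers[event_name]:
            handler(payload)

# A catalog service publishes a price change; a basket service keeps its
# own private copy of prices up to date by subscribing to that event.
bus = EventBus()
basket_prices: dict[str, float] = {"sku-1": 10.0}
bus.subscribe("ProductPriceChanged",
              lambda e: basket_prices.update({e["sku"]: e["new_price"]}))
bus.publish("ProductPriceChanged", {"sku": "sku-1", "new_price": 8.5})
```

Each service still owns its own data; the event is the only thing that crosses the boundary.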
Challenge #4: How to design communication across microservice boundaries
In a distributed system like a microservices-based application, with so many artifacts moving around and with distributed services across many servers or hosts, components will eventually fail. Partial failure and even larger outages will occur, so you need to design your microservices and the communication across them considering the common risks in this type of distributed system.
A popular approach is to implement HTTP (REST)-based microservices, due to their simplicity. An HTTP-based approach is perfectly acceptable; the issue here is related to how you use it. If you use HTTP requests and responses just to interact with your microservices from client applications or from API Gateways, that’s fine. But if you create long chains of synchronous HTTP calls across microservices, communicating across their boundaries as if the microservices were objects in a monolithic application, your application will eventually run into problems.
For instance, imagine that your client application makes an HTTP API call to an individual microservice like the Ordering microservice. If the Ordering microservice in turn calls additional microservices using HTTP within the same request/response cycle, you’re creating a chain of HTTP calls. It might sound reasonable initially. However, there are important points to consider when going down this path:
- Blocking and low performance. Due to the synchronous nature of HTTP, the original request doesn’t get a response until all the internal HTTP calls are finished.
- Coupling microservices with HTTP. Business microservices shouldn’t be coupled with other business microservices. Ideally, they shouldn’t “know” about the existence of other microservices. If your application relies on coupling microservices as in the example, achieving autonomy per microservice will be almost impossible.
- Failure in any one microservice. If you implemented a chain of microservices linked by HTTP calls, when any of the microservices fails (and eventually they will fail) the whole chain of microservices will fail.
In fact, if your internal microservices are communicating by creating chains of HTTP requests as described, it could be argued that you have a monolithic application.
Therefore, in order to enforce microservice autonomy and have better resiliency, you should minimize the use of chains of request/response communication across microservices. It’s recommended that you use only asynchronous interaction for inter-microservice communication, either by using asynchronous message- and event-based communication, or by using (asynchronous) HTTP polling independently of the original HTTP request/response cycle.
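The asynchronous HTTP polling alternative mentioned above can be sketched as follows (plain Python functions standing in for hypothetical endpoints; the names and payloads are illustrative only):

```python
import uuid

# In-memory store of long-running operations, keyed by operation id.
operations: dict[str, dict] = {}

def post_order(payload: dict) -> tuple[int, dict]:
    """Accepts the request immediately (HTTP 202) instead of blocking
    while downstream services finish their work."""
    op_id = str(uuid.uuid4())
    operations[op_id] = {"status": "Pending", "payload": payload}
    return 202, {"operation_id": op_id, "status_url": f"/operations/{op_id}"}

def get_operation(op_id: str) -> tuple[int, dict]:
    """Clients poll this endpoint until the operation completes."""
    return 200, {"status": operations[op_id]["status"]}

def complete(op_id: str) -> None:
    """Invoked later, for example by a message handler, when work finishes."""
    operations[op_id]["status"] = "Completed"

status_code, body = post_order({"sku": "sku-1", "qty": 2})
complete(body["operation_id"])
_, result = get_operation(body["operation_id"])
```

The original request/response cycle ends at the 202; the actual work proceeds independently, so no chain of synchronous calls is held open.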
The API gateway pattern versus the Direct client-to-microservice communication
In a microservices architecture, each microservice exposes a set of (typically) fine-grained endpoints. But how can clients communicate with those microservice endpoints?
Direct client-to-microservice communication
A possible approach is to use a direct client-to-microservice communication architecture. In this approach, a client app can make requests directly to some of the microservices.
A direct client-to-microservice communication architecture could be good enough for a small microservice-based application, especially if the client app is a server-side web application like an ASP.NET MVC app. However, when you build large and complex microservice-based applications (for example, when handling dozens of microservice types), and especially when the client apps are remote mobile apps or SPA web applications, that approach faces a few issues.
Consider the following questions when developing a large application based on microservices:
- How can client apps minimize the number of requests to the back end and reduce chatty communication to multiple microservices?
- How can you handle cross-cutting concerns such as authorization, data transformations, and dynamic request dispatching?
- How can you shape a facade especially made for mobile apps?
The API of multiple microservices might not be well designed for the needs of different client applications. For instance, the needs of a mobile app might be different than the needs of a web app. For mobile apps, you might need to optimize even further so that data responses can be more efficient. You might do this by aggregating data from multiple microservices and returning a single set of data, and sometimes eliminating any data in the response that isn’t needed by the mobile app.
The API Gateway pattern
When you design and build large or complex microservice-based applications with multiple client apps, a good approach to consider can be an API Gateway. This is a service that provides a single entry point for certain groups of microservices. It’s similar to the Facade pattern from object-oriented design, but in this case, it’s part of a distributed system. The API Gateway pattern is also sometimes known as the “backend for frontend” (BFF) because you build it while thinking about the needs of the client app.
It’s important to highlight the risk of using a single custom API Gateway service facing multiple and different client apps. Such a gateway will keep growing and evolving based on many different requirements from the client apps. Eventually, it will become bloated because of those different needs and could end up pretty similar to a monolithic application or monolithic service. That’s why it’s strongly recommended to split the API Gateway into multiple services or multiple smaller API Gateways, for instance one per client app form-factor type.
Therefore, the API Gateways should be segregated based on business boundaries and the client apps and not act as a single aggregator for all the internal microservices.
Main features in the API Gateway pattern
- Reverse proxy or gateway routing. The API Gateway offers a reverse proxy to redirect or route requests (layer 7 routing, usually HTTP requests) to the endpoints of the internal microservices.
- Requests aggregation. As part of the gateway pattern you can aggregate multiple client requests (usually HTTP requests) targeting multiple internal microservices into a single client request.
- Cross-cutting concerns or gateway offloading. Depending on the features offered by each API Gateway product, you can offload functionality from individual microservices to the gateway, which simplifies the implementation of each microservice by consolidating cross-cutting concerns into one tier, such as:
  - Authentication and authorization
  - Service discovery integration
  - Response caching
  - Retry policies, circuit breaker, and QoS
  - Rate limiting and throttling
  - Load balancing
  - Logging, tracing, and correlation
  - Headers, query strings, and claims transformation
  - IP whitelisting
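Request aggregation, for instance, reduces to a sketch like the following (Python, with stub functions standing in for real HTTP calls to internal microservices; all names here are hypothetical):

```python
# Stand-ins for outbound HTTP calls to internal microservices; a real
# gateway would make these requests with timeouts and retry policies.
def fetch_order(order_id: str) -> dict:
    return {"order_id": order_id, "total": 120.0}

def fetch_buyer(order_id: str) -> dict:
    return {"order_id": order_id, "buyer_name": "Ana"}

def gateway_get_order_details(order_id: str) -> dict:
    """One client request fans out to several services server-side,
    then returns a single combined payload to the client."""
    order = fetch_order(order_id)
    buyer = fetch_buyer(order_id)
    return {**order, "buyer_name": buyer["buyer_name"]}

details = gateway_get_order_details("o-7")
```

The client makes one round trip instead of two, which matters most for remote mobile apps and SPAs on high-latency networks.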
Drawbacks of the API Gateway pattern
- The most important drawback is that when you implement an API Gateway, you’re coupling that tier with the internal microservices.
- Using a microservices API Gateway creates an additional possible single point of failure.
- An API Gateway can introduce increased response time due to the additional network call. However, this extra call usually has less impact than having a client interface that’s too chatty directly calling the internal microservices.
- If not scaled out properly, the API Gateway can become a bottleneck.
- An API Gateway requires additional development cost and future maintenance if it includes custom logic and data aggregation. Developers must update the API Gateway in order to expose each microservice’s endpoints. Moreover, implementation changes in the internal microservices might cause code changes at the API Gateway level.
Communication in a microservice architecture
Clients and services can communicate through many different types of communication, each one targeting a different scenario and goals. Initially, those types of communication can be classified along two axes.
The first axis defines if the protocol is synchronous or asynchronous:
- Synchronous protocol. HTTP is a synchronous protocol. The client sends a request and waits for a response from the service. That’s independent of the client code execution, which could be synchronous (the thread is blocked) or asynchronous (the thread isn’t blocked, and the response will eventually reach a callback).
- Asynchronous protocol. Other protocols, like AMQP (a protocol supported by many operating systems and cloud environments), use asynchronous messages. The client code or message sender usually doesn’t wait for a response. It just sends the message, as when sending a message to a RabbitMQ queue or any other message broker.
The second axis defines if the communication has a single receiver or multiple receivers:
- Single receiver. Each request must be processed by exactly one receiver or service. An example of this communication is the Command pattern.
- Multiple receivers. Each request can be processed by zero to multiple receivers. This type of communication must be asynchronous. An example is the publish/subscribe mechanism used in patterns like event-driven architecture. It’s based on an event-bus interface or message broker for propagating data updates between multiple microservices through events.
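The two receiver styles can be contrasted in a short Python sketch (illustrative only): a command dispatcher enforces exactly one handler per command, while an event can fan out to zero or many subscribers:

```python
from collections import defaultdict

# Command pattern: exactly one handler per command type.
command_handlers: dict[str, object] = {}

def register_command(name: str, handler) -> None:
    if name in command_handlers:
        raise ValueError(f"command {name!r} already has a handler")
    command_handlers[name] = handler

def send_command(name: str, payload: dict):
    return command_handlers[name](payload)  # single receiver

# Publish/subscribe: zero-to-many subscribers per event type.
subscribers: dict[str, list] = defaultdict(list)

def publish_event(name: str, payload: dict) -> int:
    for handler in subscribers[name]:
        handler(payload)
    return len(subscribers[name])  # how many receivers reacted

register_command("CreateOrder", lambda p: f"order {p['id']} created")
subscribers["OrderCreated"].append(lambda p: None)
subscribers["OrderCreated"].append(lambda p: None)

result = send_command("CreateOrder", {"id": "o-9"})
receiver_count = publish_event("OrderCreated", {"id": "o-9"})
```

Commands change state and need a single responsible owner; events announce that something happened and impose no such constraint.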
Orchestrators
When you create a microservice-based application, you need to deal with complexity. Of course, a single microservice is simple to deal with, but dozens or hundreds of types and thousands of instances of microservices is a complex problem. It isn’t just about building your microservice architecture—you also need high availability, addressability, resiliency, health, and diagnostics if you intend to have a stable and cohesive system.
Composing an application from many microservices looks like a logical approach. But how are you handling load balancing, routing, and the orchestration of these composed applications?
Orchestrators try to solve the hard problems of building and running a service and using infrastructure resources efficiently. This reduces the complexities of building applications that use a microservices approach.
From an architecture and development point of view, if you’re building large enterprise applications composed of microservices, it’s important to understand the following platforms and products that support advanced scenarios:
Clusters and orchestrators. When you need to scale out applications across many Docker hosts, as when running a large microservice-based application, it’s critical to be able to manage all those hosts as a single cluster by abstracting the complexity of the underlying platform. That’s what container clusters and orchestrators provide. Kubernetes is an example of an orchestrator, and it’s available in Azure through Azure Kubernetes Service.
Schedulers. Scheduling means having the capability for an administrator to launch containers in a cluster, so schedulers typically also provide a UI. A cluster scheduler has several responsibilities: to use the cluster’s resources efficiently, to honor the constraints provided by the user, to efficiently load-balance containers across nodes or hosts, and to be robust against errors while providing high availability.