Migrating a Monolithic Application to Microservices (part 1)
Introduction
Many monolithic applications are ageing now, often built on a single VM or on instance groups. Essentially they perform a variety of tasks within a single entity. That entity could be a single VM with 16GB of RAM and 4 vCPUs, the so-called e2-standard-4 instance. If you need something more powerful, you can go for e2-standard-16 with 16 vCPUs and 64GB of RAM. These machine types are described in the Google Cloud documentation.
Often these hosts do a good job in terms of being vertically integrated. Typically, they process upstream file loads in CSV, JSON, Avro and other formats that land on GCS, pass these files through Pub/Sub messaging and ingest the data into Google BigQuery data warehouse datasets and tables, the so-called native tables. Additionally, one can run a variety of batch commands in the form of SQL on GBQ through this monolithic application, re-export data from GBQ into Cloud Storage in a different format, possibly compressed to reduce the footprint, and create external or so-called BigLake tables on top of the exported data in GCS. Other auxiliary work, such as housekeeping, can also be performed through this monolithic application.
So in a way these monolithic applications (MA) provide multiple utilities all integrated within the same hardware.
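To make this concrete, here is a minimal sketch of such a workflow using the bq command-line tool. The bucket, dataset, table and column names are hypothetical, chosen purely for illustration:

# Load a CSV file that has landed on GCS into a native BigQuery table
bq load --source_format=CSV --skip_leading_rows=1 \
  sales_ds.daily_sales gs://landing-bucket/incoming/daily_sales.csv \
  region:STRING,amount:NUMERIC

# Run a batch SQL command against the loaded data
bq query --use_legacy_sql=false \
  'SELECT region, SUM(amount) AS total FROM sales_ds.daily_sales GROUP BY region'

# Re-export the table to Cloud Storage, compressed to reduce the footprint
bq extract --destination_format=CSV --compression=GZIP \
  sales_ds.daily_sales gs://export-bucket/daily_sales/part-*.csv.gz

# Create an external table on top of the exported files
# (a BigLake table would additionally reference a Cloud resource connection)
bq mk --external_table_definition=region:STRING,amount:NUMERIC@CSV=gs://export-bucket/daily_sales/part-*.csv.gz \
  sales_ds.daily_sales_ext

In a monolithic set-up, all of these steps typically run on the same VM, orchestrated by one scheduler or script.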
Pros of Monolithic applications
- Easy and simple to develop.
- Easy upgradability
- Testing process is easy and well established
- Can be relatively easy to deploy. Just copy the packaged application to the server.
- Can be scaled by upgrading the hardware (vertical scaling)
- Requires less expertise and can be operated outside of a CI/CD pipeline.
- Many of today's successful applications were initially developed as monolithic applications on-premises.
- Usually provide good performance, as the application is tightly integrated and has fewer moving parts.
- Can be developed using one or two programming languages.
Cons of Monolithic applications
- Relying on older technology.
- Limited scope for scaling.
- Difficult to change, incorporate new features or localise upgrades.
- Limited ability to be integrated into continuous deployment.
- Difficult to scale when modules having different functionalities have conflicting resource requirements.
- Highly coupled.
The age of microservices
Let us try to understand what the terms microservice and container mean.
What is a microservice
Microservices - also known as the microservice architecture - is an architectural style that structures an application as a collection of services that are:
- Highly maintainable and testable
- Loosely coupled
- Independently deployable
- Organized around business capabilities
- Owned by a small team
The microservice architecture enables the rapid, frequent and reliable delivery of large, complex applications. It also enables an organization to evolve its technology stack. Sometimes the terms microservice and container are used interchangeably.
What is a container
A container (https://www.docker.com/resources/what-container) is a standard unit of software that packages up code and all its dependencies, so the application runs quickly and reliably from one computing environment to another. A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings. Container images become containers at runtime; in the case of Docker, images become containers when they run on the Docker Engine. Containerized software will always run the same, regardless of the infrastructure. Containers isolate software from its environment and ensure that it works uniformly despite differences, for instance between development and staging.
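As a minimal sketch of this idea, the commands below build a Docker image and run it as a container; the image name, tag and port are hypothetical:

# Build an image from the Dockerfile in the current directory
docker build -t file-loader:1.0 .

# Run the image as a container; the same image runs unchanged on a laptop,
# a VM or a managed container platform
docker run --rm -p 8080:8080 file-loader:1.0

# List the running containers
docker ps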
The difference between Container and microservice
A container is a useful resource allocation and sharing technology. In contrast, a microservice is a software design pattern. So in short:
- Microservices are about the design of software. --> developers
- Containers are about packaging software for deployment. --> DevOps
Note that microservices typically run within containers.
Pipelines, what they mean
If, like me, you are old enough to have worked your way up from Unix/Linux, you will be familiar with the concept of Unix pipelines. So let us take a look at an example, a simple bash shell sequence that pipes commands together like below:
/home/hduser> du | sort -nr | head -10
- du is a generator service, used to estimate file space usage—space used under a particular directory or files on a file system.
- sort is a post-processor service, taking a list of things and producing another, ordered version of this list.
- head is a filter service, taking a list of things and producing a truncated list containing only the first entries
- -10 is the argument to head, limiting the output to the top 10 entries
- The symbol | is the Unix pipe symbol that is used on the command line. What it means is that the standard output of the command to the left of the pipe gets sent as standard input of the command to the right of the pipe.
So we deploy three services here: the generator service, the filter service, and the post-processor service; each can evolve independently, can be replaced by a superior implementation, and most importantly: it can be treated as a black box (we don’t care about how sort works). As long as the interface, that is, how the data flows out of one and into the other, stays the same, the overall service composition will work and the result will be the same.
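To illustrate the black-box point with a small example: sed 10q, like head -10, simply emits the first ten lines it receives, so the filter stage can be swapped out without the rest of the pipeline noticing:

# Original composition
du | sort -nr | head -10

# Filter stage replaced by a different implementation; the interface
# (lines in, first ten lines out) is unchanged, so the result is the same
du | sort -nr | sed 10q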
If we sum this up, we can state:
Microservices = small service 1 + small service 2 + small service 3 + …
So in short, a microservices application involves breaking a monolithic application into its component functions or services. After identifying the individual services, the designers refactor the monolithic application so that each service or functionality runs autonomously as a separate "microservice". Then these services are loosely connected via APIs to form the larger microservices-based application.
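As a hedged sketch of what "loosely connected via APIs" can look like, the call below asks a hypothetical file-ingestion microservice to load a newly arrived file; the host name, endpoint and payload are purely illustrative:

# An orchestration component calling a file-ingestion microservice over HTTP;
# only the HTTP interface is shared between the two services
curl -s -X POST http://file-ingest-svc:8080/api/v1/loads \
  -H 'Content-Type: application/json' \
  -d '{"source_uri": "gs://landing-bucket/incoming/daily_sales.csv", "target_table": "sales_ds.daily_sales"}'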
Pros of microservices
- The resulting microservices-based application offers a pluggable architectural style.
- Fast and cost-efficient upgrades
- Allows scaling one part of the application
- Better fault tolerance
- More resilient application
- Taking full advantage of continuous deployment, support for DevOps
- Faster time to market
Cons of microservices
- Microservice architectures can be complex, often involving heterogeneous technologies and more moving parts
- Microservices bring in more interconnections and interdependencies that add to the complexity of design
- Added latency. Since microservices interact via APIs or webhooks, they tend to be slower compared to a monolithic application
- Microservices technology is evolving, so there is a shortage of skillset and capable architects and designers
- Microservice architectures come at a premium, as they impose much higher operational complexity than traditional monolithic applications
Breaking down a monolithic application into microservices
We discussed the reasons for transitioning monolithic applications into microservices. It often seems like architects will have to decide whether they want to sacrifice simplicity of management to enable streamlined, modular development. However, a concept known as the modular monolith may provide development teams the perfect balance between these two extremes.
Modular monolith
Basically, it is a system designed in a modular way. Many designers embark on microservices design simply because it happens to be the trend. We are all familiar with modular programming; it has been around for years. Modular basically means employing or involving a module or modules as the basis of design and architecture. Modules have the following characteristics:
- Must be independent and interchangeable
- Must have everything necessary to provide the desired functionality
- Must have defined interface
- Loose coupling
Consider a module that has a lot of dependencies; you can definitely not say that it is independent. In the opposite situation, where the module contains a minimum of dependencies and those dependencies are loose, it is far more independent.
However, the number of dependencies is just one measure of how independent our module is. The second measure is how strong each dependency is. In other words, do we call it very often using multiple methods, or occasionally using one or a few methods?
In the first case, it is possible that we have defined the boundaries of our modules incorrectly, and we should merge both modules if they are closely related.
The last attribute affecting the independence of a module is the frequency of changes of the components on which it depends. As you can guess, the less often they change, the more independent the module is. On the other hand, if changes are frequent, we must change our module often and it loses its independence.
Microservice Encapsulation
Often modules are grouped together as components of a microservice. In general, a microservice will encapsulate the following components:
- GUI module providing the presentation layer
- Application logic module providing the business/application layer
- Data access module providing the data layer
They may have other components as well.
In summary:
- Identify simple decoupled functionality
- Cut the dependency on monolithic
- Identify and split frequently called functionalities at early stages
- Decouple vertically
- Decouple the most used, most changed functionalities
- Start migrating and adding services using a classic phased approach (see the sketch after this list)
- Define and validate the interfaces among microservices
- As ever, simple is beautiful
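As a sketch of the phased approach mentioned above, the commands below package one decoupled function (say, the export-to-GCS step) as its own container image and deploy it independently of the monolith on Cloud Run; the project, image and service names are hypothetical:

# Build and push a container image for the extracted export function
gcloud builds submit --tag gcr.io/my-project/export-service:1.0

# Deploy it as an independently scalable service, separate from the monolith
gcloud run deploy export-service \
  --image gcr.io/my-project/export-service:1.0 \
  --region europe-west2

# The monolith (or a scheduler) now calls the new service through its HTTP interface
SERVICE_URL=$(gcloud run services describe export-service \
  --region europe-west2 --format='value(status.url)')
curl -s "$SERVICE_URL/api/v1/export"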
In part 2 of this series, I will give an example of a real-world application and, hopefully, more.
Disclaimer: Great care has been taken to make sure that the technical information presented in this article is accurate, but any and all responsibility for any loss, damage or destruction of data or any other property which may arise from relying on its content is explicitly disclaimed. The author will in no case be liable for any monetary damages arising from such loss, damage or destruction.