Microservices: Software engineering to the rescue

(Spanish version here)

“We cannot solve our problems with the same thinking we used when we created them.” Albert Einstein.

The emergence of distributed systems, in turn, posed a challenge for software engineering. The analysis and design techniques used back then were created primarily to model applications that interacted with relational databases and flat files. Those of us who took courses on these topics at university some years ago surely remember the book "Structured Design" by Ed Yourdon and Larry L. Constantine, then a "bible" for those initiated in software engineering.

Structured design -and structured analysis- reigned in those years, and tools such as data flow diagrams, with their levels running from the general (the context diagram) down to detailed decompositions representing the main and primitive functions, data stores, external entities and the flows between them, served us to design any system we were in charge of. It is true that most systems were of the basic CRUD type (create, read, update and delete). When implementing these functions in code, developers had to ensure that the cohesion of the coded modules was maintained and that the degree of coupling was kept low enough to avoid complexity at integration time.

With distributed computing first and the internet later, new tools and techniques became necessary to deal with the challenges posed by such technologies. Some appeared, others consolidated: the Unified Modeling Language (UML), new application architecture models such as component-based software engineering (CBSE), the well-known 3-tier architecture (presentation, logic and data layers), service-oriented architecture (SOA), message-based and loosely coupled models (like the ESB), as well as design patterns such as MVC (model-view-controller), among others.

With all these methodologies and technologies, we in IT created a myriad of applications that have accompanied the increasing digitalization of business processes, regardless of industry and at global scale. For many organizations it is impossible to draw a line between the software and the business processes, since the delivery of services depends entirely on their applications. At this point the ability to produce good software becomes a distinctive competence rather than a complementary skill. This new reality demands from technology areas the continuous, high-speed delivery of new functionality, but also the uninterrupted operation of their applications and of the infrastructure that supports them.

Rebirth of software engineering.

As organizations adopted information systems for most of their business and operational processes, the implementation projects for those solutions also became complex: their costs rose from hundreds to thousands (or even millions) of dollars, and their durations stretched, in many cases, from months to years.

The emergence of large-scale development and implementation projects required the consolidation of a brand-new role, the project manager (PM). From the second half of the 1990s, standards such as the PMBOK and methodologies like PRINCE2 were consolidated. Even a software-specific reference model such as CMMI for Development put more emphasis on organizational, management, planning, risk and requirements elements than on the "hard" elements of software engineering, such as design, construction, integration and implementation.

Given the high rates of delivery failure in projects, one of the strategies of these methodologies -as a principle of action and reaction- was to strengthen the formal stage of software requirements specification (SRS), with the expectation that an extensive and complete specification would reduce the impact of changes during execution. Large documents were then demanded, as a contract between business units and IT, adding more time and effort to delivery.

The new PM role somehow created a hierarchy in IT teams; many of our professionals aspired to it and flocked to training for project management certifications and associated disciplines. In my opinion, in the process many organizations lost their good technical professionals -transferring them to management- and neglected architecture and design, the core work of software engineering, in favor of aspects like "on time", "on budget" and "on scope", the latter mostly with emphasis on functional or business needs.

Applications have also grown in complexity to cover new business functions, becoming large, monolithic systems whose cascading failures, due to the high interdependence of their components, are frequent, and whose maintenance is costly because of that complexity. The practice of cloning or reusing code (under time pressure and project budget limitations), and of not using good design patterns, has contributed to the "spaghetti-style" architecture and code that many organizations have today.

Development of new functions is often time-consuming due to the size of the applications (in lines of code), and additional regression testing is required to verify that existing functions were not affected by the change. From an operations point of view, performance problems are usually solved by allocating more computational resources -memory, storage or processors- to the servers where these complex applications reside (scaling up), and not necessarily by improving the code.

It can be said that in recent years we have been facing a new software crisis, like the one the books describe from the early years of computer science, which led scholars to propose the adoption of methodologies such as structured analysis and design. The big difference is that the omnipresence of software today makes the impact of this crisis far greater.

Some proposals to manage this crisis favor the adoption of agile methodologies or of disciplines such as DevOps. Both have in common the empowerment of teams, self-management, horizontality, the importance of technical roles, the stability of teams over time and, in some organizations, even the extension of their responsibilities to supporting applications and products in production, functions traditionally under the scope of the operations team. Such an approach demands that organizations strengthen their technical roles and, in turn, that their professionals leave their current comfort zone and update their knowledge at an accelerated pace.

It is important to understand that agile methodologies are not the same thing as DevOps, and that DevOps is not the same as microservice-based architectures either. However, it is true that they complement each other very well, and some organizations, such as Netflix, Amazon, Spotify or PayPal, have followed the whole route, obtaining the benefits of the full package.

Microservices.

According to Sam Newman, author of the book "Building Microservices: Designing Fine-Grained Systems", microservices are granular, autonomous and decoupled services that work together to meet a need in a given domain.

On the other hand, according to the book "Microservice Architecture: Aligning Principles, Practices, and Culture", although there are slightly different opinions regarding all of their characteristics, there is consensus on the main ones:

  • Small in size.
  • Messaging enabled.
  • Bounded by contexts.
  • Autonomously developed.
  • Independently deployable.
  • Decentralized.
  • Built and released with automated processes.

For example, when buying a book at Amazon.com, the purchase-order service invokes different specific microservices that autonomously resolve their own field of competence, such as inventory validation, wish lists, credit card processing, suggestions and product reviews, among other functions complementary and concurrent to the main service.
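As a rough illustration of that fan-out, the sketch below shows a composite purchase flow calling several microservices concurrently. The service names, URLs and payloads are hypothetical, not Amazon's actual APIs:

```python
# A minimal sketch of a composite purchase flow calling several
# (hypothetical) microservices concurrently over HTTP.
from concurrent.futures import ThreadPoolExecutor

import requests

SERVICES = {
    "inventory": "http://inventory-svc/api/check",
    "payments": "http://payments-svc/api/authorize",
    "reviews": "http://reviews-svc/api/summary",
}

def call_service(name: str, url: str, order: dict) -> tuple:
    """Invoke one microservice and return its JSON result."""
    response = requests.post(url, json=order, timeout=2)
    response.raise_for_status()
    return name, response.json()

def place_order(order: dict) -> dict:
    # Each service resolves its own field of competence in parallel.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(call_service, n, u, order)
                   for n, u in SERVICES.items()]
        return dict(f.result() for f in futures)
```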

Each microservice focuses on implementing a specific business function. The cohesion principle is important here: it prevents the code from growing out of control in the future through the addition of new logic misaligned with the main function.

By autonomy, it must be understood that microservices can be deployed independently of other services and accessed through network calls by invoking their APIs (usually REST-JSON), so they can also be deployed on different servers. Another important aspect of autonomy is that microservices contain internally all the layers necessary to operate (presentation, logic and, in some cases, even their own database). This enables a high level of decoupling and independence with respect to the technology that supports them, and allows them to be changed without affecting their consumers.
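As a minimal sketch of such autonomy, the following hypothetical inventory service keeps its own (here, in-memory) data store and exposes its function only through a REST-JSON API, using Flask as an assumed web framework:

```python
# A minimal self-contained microservice: its own logic and its own
# (here, in-memory) data store, exposed only through a REST-JSON API.
from flask import Flask, jsonify

app = Flask(__name__)

# Stand-in for the service's private database; no other service
# touches this data directly, only through the API below.
INVENTORY = {"book-123": 42, "book-456": 0}

@app.route("/inventory/<item_id>", methods=["GET"])
def get_stock(item_id):
    """Return the available stock for one item as JSON."""
    count = INVENTORY.get(item_id, 0)
    return jsonify({"item": item_id, "available": count})

if __name__ == "__main__":
    app.run(port=5000)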

Another characteristic is that microservices should be designed not to maintain any state (stateless), so they can easily recover from their own failures, from failures in their consumers, and from failures in other services they invoke.

In addition, services should be designed with circuit-breaker logic so that, if another consumed service or a legacy application fails, the event can be handled without generating time-outs or chain failures, and the service can recover automatically once the source failure is resolved.
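A toy version of that circuit-breaker logic might look like the sketch below; the failure threshold and cool-down period are illustrative, and production systems would typically rely on a proven library rather than hand-rolled code:

```python
# A toy circuit breaker: after too many consecutive failures the
# circuit "opens" and calls fail fast; after a cool-down period it
# lets one call through to test whether the dependency recovered.
import time

class CircuitBreaker:
    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # open the circuit
            raise
        self.failures = 0  # success closes the circuit again
        return result
```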

One of the benefits of decoupling and of not keeping permanent state is that we can multiply the number of servers that support the microservices (scaling out) and thus respond with elasticity to increases in demand, whether seasonal or permanent. It also allows the application to be platform-agnostic, able to coexist in on-premises, public cloud or hybrid cloud environments.

To control the workflows of microservices, Sam Newman proposes in his book that, instead of the usual orchestration model, in which one service assumes the role of dispatcher of all the other services that must be activated to complete a business process (for example, creating a new customer in the company with everything that involves), a choreography model can be used, where the main process publishes a business event that is consumed by the different microservices subscribed to it. This allows greater decoupling and fewer single points of failure. In this architecture it is valid to use platforms like API gateways to create facades for services (either for access channels or for other microservices), or a message broker to guarantee the delivery of trigger events, but no business logic is expected in this intermediate layer.
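A minimal in-process sketch of choreography follows; in a real deployment a message broker would deliver the events, but the publisher still knows nothing about its subscribers:

```python
# A sketch of choreography: the main process only publishes a business
# event; independent microservices subscribe and react on their own.
from collections import defaultdict

subscribers = defaultdict(list)

def subscribe(event_type, handler):
    subscribers[event_type].append(handler)

def publish(event_type, payload):
    for handler in subscribers[event_type]:
        handler(payload)  # in production, a message broker delivers these

# Each microservice registers its own reaction to the event.
subscribe("customer.created", lambda c: print("billing: open account for", c["name"]))
subscribe("customer.created", lambda c: print("crm: send welcome mail to", c["name"]))

publish("customer.created", {"name": "Ana"})
```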

Martin Fowler, a renowned consultant on development methodologies and one of the authors of the famous "Manifesto for Agile Software Development", uses the phrase "smart endpoints and dumb pipes" to illustrate the relationship expected between microservices and integration elements.

Regarding stored data, and bearing in mind that scalability, operational continuity and performance are the main objectives of the microservices architecture, some organizations -such as Netflix- use patterns that access data through in-memory caches for queries and through microservice proxies for writes. Other organizations even subdivide their databases so that each is consumed exclusively by a single associated microservice, avoiding shared use and reducing integrity problems, locks and other events associated with concurrency; when another service requires access to the data, it does so by calling a business function of the owning service.
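The read side of that approach can be sketched as a simple read-through cache; the dictionary standing in for the service's private database is, of course, an assumption for illustration:

```python
# A sketch of the read-through cache pattern: queries hit an in-memory
# cache first; only the owning microservice writes to its database.
CACHE = {}

def read_product(product_id, db):
    """Serve reads from cache, falling back to the service's database."""
    if product_id in CACHE:
        return CACHE[product_id]
    record = db.get(product_id)   # only this service touches its DB
    CACHE[product_id] = record
    return record

def write_product(product_id, data, db):
    """Writes go through the owning service, which invalidates the cache."""
    db[product_id] = data
    CACHE.pop(product_id, None)   # keep cache consistent after writes
```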

These, of course, are not the only techniques for creating a successful microservice architecture; Newman's book includes various design options to achieve quality software and meet the goals of elasticity, scalability and ease of deployment.

Considering that the division of a monolithic application into microservices will certainly generate several dozen of them -or even more-, it is necessary to provide a way for one service to locate another without predefining the IP addresses or the names of the servers where those services reside. This is achieved through "service discovery" functions, so that consumer services can locate their providers dynamically and in real time, allowing the infrastructure to change without having to recompile and redeploy the applications. Organizations that have reached maturity in microservices also use tools for release management, as well as containerization (such as Docker) to reduce the impact of the inevitable fragmentation of logical components.
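Client-side service discovery can be sketched as a lookup against a registry; the registry contents here are hypothetical, and real deployments would use a dedicated tool (such as Consul, Eureka, or the discovery built into Kubernetes) with health checks and dynamic registration:

```python
# A sketch of client-side service discovery: consumers ask a registry
# for a live address instead of hard-coding hosts or IPs.
import random

REGISTRY = {
    "inventory": ["10.0.0.5:8080", "10.0.0.6:8080"],  # instances register here
}

def discover(service_name: str) -> str:
    """Return one registered instance of a service, chosen at random."""
    instances = REGISTRY.get(service_name)
    if not instances:
        raise LookupError(f"no instances registered for {service_name}")
    return random.choice(instances)

# The consumer builds its call from the discovered address.
base_url = "http://" + discover("inventory")
```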

The organization and culture: work teams the size of two pizzas.

For an organization, adopting microservices must be more than just starting to use new technical design strategies or deploying new tools. At bottom, there must be a significant cultural change, and one of the attributes required of teams that support a microservice architecture is integration and effective communication among their members. It was Jeff Bezos, Amazon's CEO, who instituted the "two-pizza rule", whereby his work teams should be of a size that two pizzas could feed. While the rule may seem simplistic and does not propose an exact number of people, it sends a clear message about the importance of team size as a factor affecting focus, effectiveness and communication. In many traditional projects, delays are addressed by adding more people to the team, but practical experience shows that there is a limit beyond which that measure loses effectiveness.

The microservice architecture also raises the challenge of a decentralized IT governance mechanism that can effectively articulate the work of all these independent teams. In the GOTO 2015 talk "Microservices @ Spotify", Kevin Goldsmith, VP of Engineering, explained the roles within each of the roughly 90 squads in charge of the nearly 800 microservices available. The teams incorporate roles such as backend developers, frontend developers, testers, UI designers and product owners. Each team is responsible for approximately 10 microservices.

But size is not the only attribute of these teams; there is also openness to innovation, autonomy and the ability to react quickly, as well as their permanence over time, as opposed to traditional project teams, which tend to remain together only as long as the project lasts.

According to Martin Fowler, the microservices model is a break from project-oriented organizations. His phrase "Products, not projects" emphasizes the iterative and incremental character of software evolution, as opposed to the previous practice of accumulating large batches of functionality, packaged into projects delivered after many months -or even years- in large releases that sometimes lost their validity during such extended construction.

Are microservices applicable to my organization?

Before we begin a massive redesign of all our applications, it is worth asking ourselves this question first. Distributed processing without good governance leads to anarchy, and while it may seem contradictory that many digital companies are adopting this model -given the potential risks- they have understood the contexts in which this architecture applies, and the readiness it requires, not only from the platforms but, above all, from the IT organization.

One of the tips in the NGINX blog article "Refactoring a Monolith into Microservices" is not to convert a monolithic application to microservices as a big bang; on the contrary, use an incremental approach, building a new application in parallel to the previous one that allows the old functionality to be "disassembled" progressively.

It is important first to identify an application suitable for being refactored -or built- with a microservices approach. Some characteristics make this advisable. An application that addresses a complex or fuzzy business problem is a good candidate, since needs with high functional complexity usually fail under traditional approaches because they are not simple to model in functional specifications. Frequently changing applications are also appropriate, because their development and testing times are lengthened by the high churn of changes and by the chained work those changes generate in a monolith. Finally, applications whose scalability is imperative, and for which we do not want dependencies on any specific infrastructure, are also adequate.

Regardless of whether we adopt a microservice model immediately or defer that decision, it is clear that in the near future the architecture of applications will grow in complexity: through the deep digitization of organizations; through the adoption of the Internet of Things (IoT), when our customers or their devices send us thousands -or millions- of requests; and also through the attractiveness of cloud computing, whether in its Infrastructure as a Service (IaaS) form, with its cost reduction, stability and scalability, or in its more powerful and disruptive form, Platform as a Service (PaaS), which allows us to reuse existing commercial components and build applications like Lego pieces.

In my opinion, among the reasonable early steps are exploring these technologies in detail, contrasting them with our current reality, incorporating updated software engineering techniques into our methodologies, fine-tuning our IT processes -for instance with DevOps practices- and updating our technology platforms for the moment we are ready to take the big leap.

Xavier Gutierrez

Did you like the article? Press Like to let me know, Comment to give me your opinion, or Share if you want someone else to read it.

Previously published posts:

DevOps: The “New Deal” in IT
