Implementing Microservice Communication (Part 2)

In the first part, we got an understanding of the communication styles we can implement between services. But style alone does not cover everything there is to communication; we also need shared standards and a way to keep that communication working reliably over the long term.

In this article we will discuss those topics so we can maintain healthy communication between services, starting with the serialization formats we want to use.


Serialization formats

Serialization formats fall into two broad categories: textual and binary.

Textual formats

Textual formats are serialization methods where data is represented as human-readable text. The two most common textual serialization formats are JSON (JavaScript Object Notation) and XML (eXtensible Markup Language). Here's a breakdown of textual formats with their advantages and disadvantages:

Advantages:

  1. Human-Readable: Textual formats like JSON and XML are human-readable, making them easy to understand and debug. This readability simplifies development, troubleshooting, and collaboration among developers.
  2. Interoperability: Textual formats are widely supported across different programming languages and platforms. This interoperability allows systems written in different languages to communicate with each other seamlessly, facilitating integration in heterogeneous environments.
  3. Simplicity: Textual formats have simple syntax and are easy to parse and generate. This simplicity reduces the complexity of serialization and deserialization processes, resulting in faster development cycles and improved productivity.
  4. Lightweight: While textual formats may not be as compact as binary formats, they are generally lightweight and have minimal overhead. This makes them suitable for scenarios where network bandwidth or storage space is not a significant concern.
  5. Schema Flexibility: Textual formats offer flexibility in data representation and schema evolution. Unlike binary formats that often require strict schema definitions, textual formats like JSON and XML allow for dynamic data structures and schemaless designs, accommodating evolving data requirements.

Disadvantages:

  1. Verbose: Textual formats can be verbose, especially when dealing with complex data structures or large datasets. This verbosity increases the size of serialized data, leading to higher network bandwidth usage and slower transmission speeds.
  2. Parsing Overhead: Parsing textual data incurs computational overhead compared to binary formats. Textual formats require additional parsing and string manipulation operations, which can impact performance, especially in high-throughput systems.
  3. Limited Efficiency: Textual formats are less efficient in terms of storage and processing compared to binary formats. They typically result in larger payload sizes and slower serialization and deserialization times, particularly in resource-constrained environments.
  4. Schema Redundancy: Textual formats may include redundant schema information within each serialized message. This redundancy can increase message size and processing overhead, especially when transmitting multiple messages with similar schema definitions.
  5. Type Ambiguity: Textual formats like JSON lack built-in support for specifying data types explicitly. While this flexibility allows for dynamic typing, it can lead to ambiguity and inconsistencies in data interpretation, requiring additional validation and error handling mechanisms.
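
To make this concrete, here is a minimal sketch of textual (JSON) serialization in TypeScript; the `Order` type and its fields are hypothetical, chosen only to show the round trip:

```typescript
// A hypothetical domain object used only for illustration.
interface Order {
  id: string;
  amount: number;
  currency: string;
}

const order: Order = { id: "o-42", amount: 19.99, currency: "EUR" };

// Serialize: the result is human-readable text, easy to inspect and debug.
const payload: string = JSON.stringify(order);
console.log(payload); // {"id":"o-42","amount":19.99,"currency":"EUR"}

// Deserialize: note that JSON carries no type information, so the
// receiver must validate the shape itself (the "type ambiguity" point above).
const parsed = JSON.parse(payload) as Order;
console.log(parsed.id); // "o-42"
```

Note how the payload can be read and even edited by hand, which is exactly the debugging advantage described above.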


Binary formats

Binary formats are serialization methods where data is represented in binary form, which is not human-readable. Common binary serialization formats include Protocol Buffers, Apache Avro, MessagePack, and Thrift. Here's an overview of binary formats with their advantages and disadvantages:

Advantages:

  1. Efficiency: Binary formats are more efficient in terms of both storage and processing compared to textual formats. They result in smaller payload sizes, reducing network bandwidth usage and storage requirements. Additionally, binary serialization and deserialization processes are faster and consume fewer computational resources.
  2. Compactness: Binary formats encode data in a more compact form compared to textual formats. This compactness is particularly beneficial for transmitting large volumes of data over the network, especially in resource-constrained environments or scenarios where bandwidth is limited.
  3. Type Safety: Binary formats often enforce strong typing and schema validation, ensuring that data is serialized and deserialized according to a predefined schema. This type safety reduces the risk of data corruption or misinterpretation, improving data integrity and reliability.
  4. Schema Evolution: Binary formats support schema evolution, allowing for backward and forward compatibility of data schemas. This flexibility enables systems to evolve over time without breaking existing data contracts, facilitating seamless updates and migrations.
  5. Performance: Binary serialization and deserialization processes are inherently faster and more efficient than their textual counterparts. This performance advantage is particularly significant in high-throughput systems or latency-sensitive applications, where milliseconds matter.

Disadvantages:

  1. Lack of Human Readability: Binary formats are not human-readable, making debugging and troubleshooting more challenging compared to textual formats. Developers may require specialized tools or utilities to inspect and interpret binary data, increasing complexity and development overhead.
  2. Interoperability: While binary formats are efficient for communication between services using the same serialization framework, they may face interoperability challenges when integrating with systems using different serialization formats. Interoperability between heterogeneous systems may require additional conversion or translation layers.
  3. Schema Rigidity: Binary formats often require strict schema definitions, which can be less flexible compared to textual formats. Changes to schemas may require careful management and coordination to ensure backward compatibility, potentially introducing complexities in schema evolution.
  4. Vendor Lock-in: Some binary serialization frameworks may be proprietary or tightly coupled with specific technologies or platforms, leading to vendor lock-in. This can limit flexibility and portability, especially in environments where interoperability and platform independence are essential.
  5. Learning Curve: Adopting binary serialization formats may require additional training and familiarity with the associated frameworks and tools. Developers may need to invest time and resources in learning new concepts and best practices for working with binary data formats.
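
For comparison with the JSON sketch above, here is a minimal sketch of binary serialization in TypeScript, assuming the `@msgpack/msgpack` npm package is installed; the `Order` shape is the same hypothetical one used earlier:

```typescript
import { encode, decode } from "@msgpack/msgpack";

interface Order {
  id: string;
  amount: number;
  currency: string;
}

const order: Order = { id: "o-42", amount: 19.99, currency: "EUR" };

// Encode to a compact binary buffer instead of a text string.
const bytes: Uint8Array = encode(order);
console.log(bytes.byteLength); // typically smaller than the JSON equivalent

// The buffer is not human-readable, so debugging needs tooling,
// but decoding is fast and the payload is compact on the wire.
const roundTripped = decode(bytes) as Order;
console.log(roundTripped.id); // "o-42"
```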


Schemas

Another very important part of communication between services is defining schemas.

Schemas are structural definitions that specify the organization, constraints, and relationships of data elements within a data format or data model. They serve as blueprints for data structures, providing a formal description of how data should be structured, represented, and validated. Here's an overview of schemas with their advantages and disadvantages:

Advantages:

  1. Data Consistency: Schemas ensure data consistency by defining the structure, data types, and constraints of data elements. This consistency helps maintain data integrity and prevents errors and inconsistencies in data representation and interpretation.
  2. Interoperability: Schemas facilitate interoperability by providing a common language for describing data structures and formats. Systems and applications that adhere to the same schema can exchange data seamlessly, even if they are developed using different technologies or platforms.
  3. Schema Evolution: Schemas support schema evolution, allowing data structures to evolve over time without breaking compatibility with existing data. Changes to schemas, such as adding new fields or modifying data types, can be managed in a controlled manner to ensure backward and forward compatibility.
  4. Data Validation: Schemas enable data validation by defining the rules and constraints that data must adhere to. Data can be validated against the schema to ensure its correctness and integrity, helping to prevent invalid or malformed data from entering the system.
  5. Serialization and Deserialization: Schemas guide the serialization and deserialization processes, ensuring that data is encoded and decoded correctly when transmitted between systems or stored in persistent storage. Schemas provide a standardized format for data representation, facilitating data interchange and integration.
  6. Documentation: Schemas serve as documentation for data structures, providing a formal description of the data elements, their relationships, and their constraints. Developers can refer to schemas to understand how data is structured and how it should be used, improving communication and collaboration.

Disadvantages:

  1. Complexity: Managing schemas can introduce complexity, especially in large-scale systems with complex data structures and evolving data requirements. Schema design, versioning, and governance require careful planning and management to avoid conflicts and inconsistencies.
  2. Overhead: Schemas add overhead to the development process, as developers need to define, maintain, and manage schemas for each data structure or data model used in the system. This overhead can increase development time and effort, especially in systems with numerous schemas and frequent schema changes.
  3. Versioning Challenges: Schema evolution and versioning can be challenging, particularly when making backward-incompatible changes. Updating schemas across all affected systems and ensuring compatibility with existing data can be time-consuming and error-prone.
  4. Dependency Management: Changes to schemas can impact multiple systems and applications that rely on those schemas for data interchange or integration. This introduces dependencies between systems, making it challenging to update or replace schemas without affecting other components of the system.
  5. Performance Overhead: Schemas may introduce performance overhead, especially in systems with complex data structures or frequent data validation requirements. Validating data against schemas and enforcing schema constraints can consume computational resources and impact system performance, particularly in high-throughput environments.


Despite these disadvantages, careful planning, version control, and governance strategies can help mitigate the challenges associated with schemas in microservices architectures.
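
As one concrete way to apply schemas in practice, here is a minimal sketch of validating an incoming message against a JSON Schema in TypeScript, assuming the `ajv` npm package; the schema and its fields are hypothetical:

```typescript
import Ajv from "ajv";

// A hypothetical schema describing the Order messages exchanged between services.
const orderSchema = {
  type: "object",
  properties: {
    id: { type: "string" },
    amount: { type: "number", minimum: 0 },
    currency: { type: "string", minLength: 3, maxLength: 3 },
  },
  required: ["id", "amount", "currency"],
  additionalProperties: false,
};

const ajv = new Ajv();
const validate = ajv.compile(orderSchema);

const incoming: unknown = JSON.parse('{"id":"o-42","amount":19.99,"currency":"EUR"}');

if (validate(incoming)) {
  // The message conforms to the contract and is safe to process.
  console.log("valid order");
} else {
  // Reject malformed data at the boundary instead of letting it spread.
  console.error(validate.errors);
}
```

Validating at the service boundary like this keeps invalid or malformed data from propagating deeper into the system.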


Handling Change Between Microservices

Avoiding breaking changes

Avoiding breaking changes in microservices is crucial for maintaining system stability and ensuring seamless operation. Here are five key strategies to achieve this:

  1. Safe Expansion Changes: When expanding functionalities or making modifications to existing services, ensure backward compatibility whenever possible. New features should be additive and should not alter the existing behavior of the service. By following the principle of "safe expansion changes," you can prevent disruptions to dependent services or clients.
  2. Tolerant Reader: Implement the tolerant reader pattern to handle variations in the data schema between different versions of microservices. Instead of expecting strict adherence to a specific schema, design services to gracefully handle both expected and unexpected data formats. This flexibility allows services to consume data without breaking, even when encountering minor schema changes (a minimal sketch follows this list).
  3. Choose the Right Technology: Selecting the appropriate technology stack plays a crucial role in minimizing breaking changes. Opt for technologies and frameworks that support versioning, backward compatibility, and robust error handling. For instance, using RESTful APIs with clear versioning mechanisms or leveraging message queues for asynchronous communication can facilitate smoother evolution of microservices.
  4. Explicit Interface: Define clear and explicit interfaces for microservices to communicate with each other. Clearly document input parameters, expected outputs, and error handling mechanisms. By establishing well-defined contracts between services, you reduce the likelihood of unintended changes and misunderstandings that could lead to breaking changes. Additionally, consider using contract-first approaches such as OpenAPI or gRPC to formalize service interfaces.
  5. Catch Accidental Breaking Changes Early: Implement automated testing, continuous integration, and deployment pipelines to detect breaking changes as early as possible in the development lifecycle. Include comprehensive unit tests, integration tests, and contract tests to validate the behavior of microservices across different scenarios. Additionally, leverage tools for static code analysis and dependency management to identify potential compatibility issues before they propagate to production environments.
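
To illustrate the tolerant reader pattern from item 2, here is a minimal TypeScript sketch; the payload shape and field names are hypothetical:

```typescript
// A hypothetical upstream payload: the producer may add, rename, or omit fields
// over time. A tolerant reader extracts only what it needs and supplies fallbacks.
interface CustomerView {
  id: string;
  displayName: string;
}

function readCustomer(raw: unknown): CustomerView | null {
  if (typeof raw !== "object" || raw === null) return null;
  const data = raw as Record<string, unknown>;

  // Accept either the older `name` field or a newer `displayName` field,
  // and silently ignore any extra fields the producer may have added.
  const name = data["displayName"] ?? data["name"];
  if (typeof data["id"] !== "string" || typeof name !== "string") return null;

  return { id: data["id"], displayName: name };
}

// Works with both old and new message shapes:
console.log(readCustomer({ id: "c-1", name: "Ada" }));
console.log(readCustomer({ id: "c-1", displayName: "Ada", tier: "gold" }));
```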

By adhering to these principles and strategies, you can mitigate the risk of breaking changes in microservices, ensuring smooth evolution and seamless operation of your distributed system.


Managing breaking changes

Managing breaking changes in microservices requires careful planning and execution to minimize disruptions to the overall system. Here are three strategies for managing breaking changes effectively:

  1. Lockstep Deployment: Lockstep deployment involves updating all dependent services simultaneously to ensure compatibility between microservices versions. This approach requires coordination and synchronization across teams to deploy changes in a coordinated manner. By maintaining consistency among all services, lockstep deployment reduces the risk of compatibility issues and ensures that the system remains functional during the transition period.
  2. Coexist Incompatible Microservices Versions: In situations where immediate synchronization of all services is not feasible, coexisting incompatible microservices versions can be employed. This approach allows both old and new versions of a microservice to run concurrently within the system. By routing requests to the appropriate version based on configurable rules or traffic splitting techniques, coexistence enables a gradual transition without disrupting service availability. However, it adds complexity to the system and requires careful management of routing and versioning mechanisms.
  3. Emulate the Old Interface: When introducing breaking changes to a microservice, another approach is to emulate the old interface temporarily. This involves maintaining backward compatibility by preserving the existing interface for a transitional period while internally implementing the new functionality. By intercepting requests and mapping them to the new behavior behind the scenes, the service can continue to serve clients without requiring immediate updates. This approach provides a seamless migration path for consumers while allowing for phased adoption of the new features (a sketch of this technique follows this list).
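
As a minimal sketch of the third strategy, here is a hypothetical TypeScript adapter that keeps an old function signature alive while delegating to a new implementation; all names and values are illustrative:

```typescript
// New implementation: the price is now returned in minor units (cents)
// together with an explicit currency.
interface PriceV2 {
  amountCents: number;
  currency: string;
}

function getPriceV2(productId: string): PriceV2 {
  // Real lookup elided; a fixed value stands in for illustration.
  return { amountCents: 1999, currency: "EUR" };
}

// Old interface, kept alive for existing consumers during the transition:
// it used to return a plain number of euros.
function getPrice(productId: string): number {
  // Emulate the old contract by mapping the new result back to the old shape.
  const v2 = getPriceV2(productId);
  return v2.amountCents / 100;
}

// Old callers keep working unchanged while new callers migrate to getPriceV2.
console.log(getPrice("p-7")); // 19.99
```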

Choosing the preferred strategy depends on various factors, including the impact of the breaking changes, the urgency of deployment, and the complexity of the system. Here's a guide on when to use each approach:

Lockstep Deployment:

  • Preferable when there are critical breaking changes that affect the functionality of dependent services.
  • Suitable for scenarios where teams can coordinate deployments effectively and ensure synchronized updates across the system.
  • Recommended for environments where maintaining consistency and avoiding compatibility issues are top priorities.


Coexist Incompatible Microservices Versions:

  • Ideal for scenarios where immediate synchronization of all services is impractical or risky.
  • Useful when transitioning to a new version gradually while maintaining service availability.
  • Suitable for large-scale deployments or systems with complex dependencies where lockstep deployment is challenging.
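
One way to realize this coexistence is to route each request by an explicit version indicator. Here is a minimal TypeScript sketch, assuming a hypothetical `x-api-version` header; in practice this logic usually lives in an API gateway or service mesh rather than in application code:

```typescript
// Hypothetical request shape carrying an explicit version header.
interface Request {
  headers: Record<string, string>;
  path: string;
}

type Handler = (req: Request) => string;

// Two coexisting implementations of the same endpoint.
const handlers: Record<string, Handler> = {
  v1: (req) => `v1 response for ${req.path}`,
  v2: (req) => `v2 response for ${req.path}`,
};

// Route each request to the version the caller asked for,
// defaulting to the old version so existing clients keep working.
function route(req: Request): string {
  const version = req.headers["x-api-version"] ?? "v1";
  const handler = handlers[version] ?? handlers["v1"];
  return handler(req);
}

console.log(route({ headers: {}, path: "/orders" }));                        // v1
console.log(route({ headers: { "x-api-version": "v2" }, path: "/orders" })); // v2
```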


Emulate the Old Interface:

  • Recommended when introducing breaking changes incrementally or when clients cannot be updated immediately.
  • Useful for minimizing disruption to existing clients while gradually migrating to the new functionality.
  • Suitable for scenarios where maintaining backward compatibility is essential for a smooth transition.


Ultimately, the choice of strategy depends on the specific requirements, constraints, and goals of the microservices architecture and deployment process.


Sharing code

Sharing code via libraries can be a powerful tool for code reuse across microservice boundaries, but it comes with potential pitfalls that can lead to coupling and deployment headaches. One crucial aspect to avoid is overly coupling microservices and consumers, where even small changes to the microservice can necessitate updates to all consumers. This scenario often arises when shared code, such as a library of common domain objects, is used across multiple services. Any modification to these shared entities requires updates to all services, leading to synchronization challenges and the need to drain message queues of invalid content.

To mitigate this risk, it's essential to ensure that shared code remains encapsulated within service boundaries. While using common libraries for internal concepts like logging is acceptable, sharing domain-specific code externally introduces coupling. A practical approach employed by companies like real-estate.com.au involves using tailored service templates for new service creation instead of sharing code directly. This approach prevents coupling from leaking into the system by copying the necessary code for each new service.

When sharing code via libraries, it's crucial to recognize that updating all instances simultaneously is impractical. Since each microservice packages its dependencies, upgrading a shared library requires redeploying each affected service. Attempting to update all instances simultaneously can lead to widespread deployments and associated complications.

Accepting the presence of multiple versions of the same library concurrently is essential when leveraging libraries for code reuse across microservices. While gradual updates to align all instances with the latest version are possible, it's essential to acknowledge and accommodate the coexistence of multiple versions. However, if simultaneous updates across all users are necessary, implementing code reuse via a dedicated microservice might be a more suitable approach.

Despite these challenges, there's a specific use case associated with reuse through libraries that warrants further exploration.


We have covered most of what we need, but some important topics remain: arranging services and service discovery, API gateways, and documenting services. We will cover them in the next part of this article; until then, enjoy the other articles in this newsletter. See you in the next one.
