Reusable Code, Managed Coupling

Introduction

In software development, adhering to established principles and guidelines is crucial for maintaining our software and accelerating the development process. One such principle is “Don’t Repeat Yourself” (DRY). This principle promotes reusability and maintainability, simplifying our work as programmers by sharing code and functionality across different components and teams. Sharing functionality requires careful consideration and trade-off analysis, particularly in distributed architectures, where the impact is far greater than in a monolithic architecture, in which the entire solution is a single deployment unit.

“Code reuse is a normal part of software development… In most monolithic architectures, code reuse is rarely given a second thought; it’s a matter of simply importing or auto-injecting shared class files…” (Software Architecture: The Hard Parts)

After reading this article, you will understand various methods for sharing code and functionality, along with their benefits and drawbacks, and know when to use each method. Let’s explore these methods, starting with the most common one: Shared Library.

Shared Library

A shared library is a common method for sharing code across teams and components. We define a scope of functionality, package it under the source code folder, and link other projects to it, or we can pack it into a library artifact and consume it in our other projects.

Pros:

  • Developer-Friendly: Shared libraries are easy to create and use, with a minimal learning curve. Numerous tools and artifact repositories exist for creating shared packages, and almost every programming language supports this method.
  • Performance: Using a shared library is fast because the code runs within the service itself, making it an integral part of the service.
  • Compile-Time Safety: Being compile-based reduces the likelihood of runtime errors.

Cons:

  • Static Coupling: A shared library binds its consumers at build time; the shared functionality becomes part of each service’s compiled artifact, so every change to it propagates through a rebuild.
  • Deployability: In a distributed architecture, shared libraries can lead to undesirable coupling. Services that use the same library become interconnected and dependent on it. An update to the shared library might necessitate rebuilding and redeploying all dependent services, which goes against the principles of independence and decoupling. This means a service might need to be redeployed due to a change in shared functionality rather than its own business logic.
  • Limited Sharing: Shared libraries in a specific programming language restrict sharing common functionality with teams using different languages. This limitation is significant because containerization allows each container to be isolated and run any language needed for a specific task. Losing this flexibility is not desirable.
  • Versioning Complexity: Managing versions across multiple projects can become challenging.
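To make the versioning concern concrete, here is a hypothetical sketch (the package name and version numbers are illustrative, not from the article): two services pin different versions of the same shared library, so every upgrade must be coordinated and re-tested per consumer.

```text
# service-a/requirements.txt
shared-formatters==1.4.2   # still on the 1.x line; upgrading means re-testing service-a

# service-b/requirements.txt
shared-formatters==2.0.0   # already adopted a breaking 2.x change
```

The more consumers a library has, the more of these pins drift apart, and the harder a coordinated upgrade becomes.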

Use case

Ideal for small, stable utilities such as date, currency, or text formatters. For instance, a shared date-formatter library can be used across multiple services to ensure consistent date formatting throughout the system.
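A minimal sketch of such a library (the module and function names here are illustrative, not a real package):

```python
# sharedfmt.py - a tiny shared formatting library (hypothetical name)
from datetime import date, datetime

ISO_DATE = "%Y-%m-%d"

def format_date(value, fmt=ISO_DATE):
    """Render a date or datetime consistently across all consuming services."""
    if isinstance(value, datetime):
        value = value.date()
    return value.strftime(fmt)

def format_currency(amount, symbol="$"):
    """Format an amount with a thousands separator, e.g. $1,234.50."""
    return f"{symbol}{amount:,.2f}"
```

Every service that imports this package renders dates and amounts identically, which is exactly the kind of small, well-tested, rarely changing functionality a shared library handles well.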

Shared Service

Another method for sharing code that addresses the disadvantages of a shared library is to encapsulate the shared functionality within a service and expose it via an API. This approach reduces dependency on a specific programming language and shifts the coupling from static to dynamic, mitigating the issues associated with shared libraries.

Pros:

  • Programming Language Flexibility: We can choose the best programming language for the scoped functionality and expose it to other services via an API.
  • Independent Deployment: Consumers depend on the shared service’s API rather than its source code. This allows us to update and deploy the common functionality service without redeploying the services that use it.

Cons:

  • Performance: Communication between services now occurs over the network, which is slower than using a shared library. Performance can degrade further in a distributed architecture, especially if the orchestrator schedules the shared service on different nodes than the dependent services. gRPC can reduce the network overhead, but it cannot close the gap with in-process calls.
  • Maintenance: Relying exclusively on this method can significantly increase the number of services. However, the impact can be minimal if infrastructure as code (IaC) is implemented effectively.
  • Scalability: Since services depend on the shared service, handling throughput at scale becomes essential.
  • Availability: The shared service must be highly available to ensure that dependent services function properly.

Use case

A shared service is perfect for centralized functions like authentication and authorization. Instead of embedding authentication logic in each service, you can create a shared authentication service that different services access via an API, ensuring consistent and secure user authentication across the organization.
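As a rough sketch of the consumer side, the client below calls a shared authentication service over HTTP. The service URL, the `/validate` endpoint, and the JSON payload shape are all assumptions for illustration, not a real API.

```python
# Hypothetical client for a shared authentication service.
import json
import urllib.request

AUTH_SERVICE_URL = "http://auth-service.internal:8080"  # assumed internal DNS name

def build_validation_request(token):
    """Build the HTTP request that asks the shared service to validate a token."""
    body = json.dumps({"token": token}).encode("utf-8")
    return urllib.request.Request(
        f"{AUTH_SERVICE_URL}/validate",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def parse_validation_response(raw):
    """Interpret the service's JSON reply; a valid response carries a user id."""
    payload = json.loads(raw)
    return payload.get("valid", False), payload.get("user_id")

# In a real consumer you would actually send the request:
# with urllib.request.urlopen(build_validation_request(token)) as resp:
#     valid, user_id = parse_validation_response(resp.read())
```

Note that every consumer now pays a network round trip per call, which is the performance trade-off discussed above.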

Sidecar

In the previous section, we discussed the performance disadvantage of using microservices for sharing functionality. A sidecar can be a great option to address this issue. While it may not be as fast as a shared library, it offers significantly better performance in distributed architectures.

So, what is a sidecar? According to the Kubernetes documentation:

“Sidecar containers are the secondary containers that run along with the main application container within the same Pod. These containers are used to enhance or to extend the functionality of the primary app container…” You can read more about it here: https://kubernetes.io/docs/concepts/workloads/pods/sidecar-containers/

This method involves extending a container’s functionality by adding a secondary container next to it. This ensures that the orchestrator creates both containers within the same node, as they run in the same pod.

Pros:

  • Performance: Because the shared-functionality container runs on the same node as the main container, calls stay local, which is crucial for performance in a distributed architecture. Using gRPC can enhance performance further.
  • Programming Language Flexibility: Similar to a shared service, a sidecar allows the use of different programming languages.
  • Independent Deployment: As with a shared service, the shared functionality container can be updated and deployed independently.

Cons:

  • Service Mesh Learning Curve: Sidecars are commonly deployed and managed through service mesh solutions, and learning to use them properly takes time.
  • Increased Container Management: Sidecars increase the number of containers to manage, but effective IaC can minimize this disadvantage.

Use case

A good example of using a sidecar is logging. In this scenario, a sidecar container runs alongside the main application container within the same Pod. The application writes logs to a shared volume, and the sidecar processes and forwards these logs to a centralized logging service, allowing the main application to focus on its core tasks.
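The logging scenario above might look like the following Pod spec. This is a minimal sketch: the image names, volume paths, and Pod name are illustrative, and a production setup would add probes, resources, and forwarder configuration.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-logging-sidecar
spec:
  containers:
    - name: app                     # main application container
      image: my-registry/app:1.0    # hypothetical application image
      volumeMounts:
        - name: logs
          mountPath: /var/log/app   # the app writes its logs here
    - name: log-forwarder           # sidecar reads the same volume
      image: fluent/fluent-bit:2.2  # ships logs to a centralized backend
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
          readOnly: true
  volumes:
    - name: logs
      emptyDir: {}                  # volume shared by both containers in the Pod
```

Because both containers are in one Pod, the scheduler places them on the same node, which is what gives the sidecar its performance advantage over a separate shared service.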

Wrap-Up

Each method we discussed has its own strengths, and by combining them, we can take full advantage of what they offer. Choosing the right method depends on the specific challenge you’re facing, so it’s important to weigh the pros and cons carefully.

The Shared Library method is great for things like formatters and calculators that don’t change often and can be well tested.

The Shared Service method is useful for sharing functionality across teams without worrying about programming language limitations. It works well for things like authentication and authorization, but it’s not ideal for distributed architectures due to possible network performance issues.

The Sidecar method is perfect for distributed architectures. It’s similar to shared services but allows the shared functionality, like logging, to run alongside the main application container, improving performance and allowing independent updates.

By combining these methods and using them in the right situations, we can build a system that’s flexible, easy to maintain, and performs well.


Thank you for reading. I really enjoyed writing this article and sharing it with all of you. I hope you found it insightful. I’d be happy to hear your thoughts and comments, and which methods you prefer to use and when.


