Introduction to Service Discovery

What is Service Discovery?

Service Discovery, as the name suggests, lets us discover where each instance of a service is located. It relies on a component called the service registry, which acts as an address book holding the network addresses of all registered service instances. Consequently, when a client wants to make a request to a service, it uses the service discovery mechanism to resolve that service's current address.

Need for Service Discovery

In the world of distributed computing, with SOA (service-oriented architecture) and microservices taking the lead, all these distributed services need to communicate with each other in order to fulfil a business need. In the absence of a service discovery mechanism, each (micro)service has to know and store the exact network address (host and port, or IP address) of every downstream service it communicates with. That is a problem in itself, and it becomes worse in cloud-based deployments, where addresses are not stable because container orchestration systems such as Kubernetes dynamically spin up new instances and destroy older ones.

Thus there is a need for service discovery, which comprises two parts:

  1. Service registry - where all live service instances register themselves and send periodic heartbeats (as proof of being alive). It can be thought of as a central database holding information about registered service instances (i.e. their network addresses).
  2. Service discovery - the actual mechanism for finding and sharing the address(es) of provider instances when a client wants to connect to a provider.

Consider a Service Consumer and a Service Provider (a service exposing a REST API). The Service Consumer needs the Service Provider to read and write data.

The communication flow with a service discovery mechanism in place looks like this:

  1. The location of the Service Provider is sent to the Service Registry (a database containing the locations of all available service instances).
  2. The Service Consumer asks the Service Registry for the location of the Service Provider.
  3. The Service Registry looks up the Service Provider's location in its internal database and returns it to the Service Consumer.
  4. The Service Consumer can now make direct requests to the Service Provider.

Types of Service Registry

Step 1, i.e. registration of the service provider instances, can be done in two ways:

  1. Self-registration - the service instances register and de-register themselves, and may also send periodic heartbeats as a sign of being alive and operational (a small hypothetical sketch follows this list).
  2. Third-party registration - here, to decouple the service instances from the service registry (so that instances no longer need to know where the registry is), another system component, often called the service registrar, takes care of registration. The registrar keeps track of changes to running instances by polling the deployment environment or subscribing to its events. When it detects a newly available service instance, it records it in the registry; it also de-registers terminated service instances. Although this decouples the service registry from the instances being registered, it introduces a third-party component that depends on the deployment environment, and just like the registry itself, this component must be made highly available and resilient to failures and changes.
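To make the self-registration idea concrete, below is a small, hypothetical sketch in Java: the registry endpoint, the JSON payload and the 30-second heartbeat interval are assumptions for illustration, not a real registry API.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class SelfRegisteringService {

    private static final String REGISTRY = "http://registry.local/instances"; // assumed endpoint
    private static final HttpClient HTTP = HttpClient.newHttpClient();

    public static void main(String[] args) {
        String body = "{\"service\":\"employee-service\",\"address\":\"10.0.0.5:8081\"}";

        // 1. Register this instance on startup
        send(HttpRequest.newBuilder(URI.create(REGISTRY))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build());

        // 2. Send periodic heartbeats so the registry keeps treating this instance as alive
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(
                () -> send(HttpRequest.newBuilder(URI.create(REGISTRY + "/employee-service/heartbeat"))
                        .PUT(HttpRequest.BodyPublishers.noBody())
                        .build()),
                30, 30, TimeUnit.SECONDS);

        // 3. De-register on graceful shutdown
        Runtime.getRuntime().addShutdownHook(new Thread(
                () -> send(HttpRequest.newBuilder(URI.create(REGISTRY + "/employee-service"))
                        .DELETE()
                        .build())));
    }

    private static void send(HttpRequest request) {
        try {
            HTTP.send(request, HttpResponse.BodyHandlers.discarding());
        } catch (Exception e) {
            System.err.println("Registry call failed: " + e.getMessage());
        }
    }
}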



Types of Service Discovery

We have discussed the first part of the story, which is also a prerequisite for discovery, i.e. the service registry. Next, let's discuss the second and most important part: the actual service discovery. Similar to registration, service discovery can be done in two ways:

  1. Client-side service discovery - the service client (consumer) is responsible for determining the network addresses of server instances and for load balancing requests across them (client-side load balancing). The client queries the service registry, gets the available server locations and then applies a load-balancing algorithm to select the one to which the request is actually routed (a minimal sketch follows this list). The main concern here is that client-side load balancing adds an extra responsibility to the client application, on top of the registry lookup. An example of this is service discovery via Eureka.
  2. Server-side discovery - here we introduce a dedicated load balancer service that fetches server instance information from the service registry and also selects the server instance to route each request to. In this case the client is agnostic of the discovery process and load balancing; all it needs to do is connect to the provided load balancer. The trade-off is the overhead of maintaining an always-available load balancer service. In some cases the deployment environment provides load balancer support along with service discovery, Kubernetes being an example.
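The following minimal Java sketch shows the client-side flow described in point 1: query a registry, then apply a round-robin choice. The RegistryClient interface and its lookup method are illustrative assumptions, not a real library API.

import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

interface RegistryClient {
    // returns "host:port" entries for every live instance of the given service
    List<String> lookup(String serviceName);
}

class ClientSideDiscovery {

    private final RegistryClient registry;
    private final AtomicInteger counter = new AtomicInteger();

    ClientSideDiscovery(RegistryClient registry) {
        this.registry = registry;
    }

    // Query the registry, then pick one instance using simple round-robin load balancing
    String chooseInstance(String serviceName) {
        List<String> instances = registry.lookup(serviceName);
        if (instances.isEmpty()) {
            throw new IllegalStateException("No live instances for " + serviceName);
        }
        int index = Math.floorMod(counter.getAndIncrement(), instances.size());
        return instances.get(index);
    }
}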



Let's now take a look at two prominent service discovery architectures, starting with Apache ZooKeeper and followed by the widely adopted Netflix Eureka.

Zookeeper for Service Discovery

ZooKeeper is a distributed, open-source coordination service that allows distributed services to coordinate with each other. Coordination services are notoriously hard to get right; they are especially prone to errors such as race conditions and deadlock. The motivation behind ZooKeeper is to relieve distributed applications of the responsibility of implementing coordination services from scratch.

Zookeeper Key Terms

ZooKeeper is based on the leader/follower pattern. The following key terms are good to know in order to understand the ZooKeeper architecture better.

Client

A client is one of the nodes in our distributed application cluster and accesses information from a server. At regular intervals, every client sends a message to the server to let the server know that it is alive.

Similarly, the server sends an acknowledgement when a client connects. If there is no response from the connected server, the client automatically redirects its requests to another server.

Server

A server is one of the nodes in our ZooKeeper ensemble and provides all the services to clients. It sends acknowledgements to clients to inform them that it is alive.

Ensemble

A group of ZooKeeper servers. The minimum number of nodes required to form an ensemble is 3.

Leader

The leader is the server node that performs automatic recovery if any of the connected nodes fail. Leaders are elected on service startup. Only the leader can write new information and broadcast it to the followers; thus it is the only server that has a request processor. A client request can land on any follower, but write requests are forwarded to the leader node.

Follower

A server node that follows the leader's instructions.

Znode

A znode is the data structure ZooKeeper uses to store information. It is similar to a file or a directory in a filesystem, and ZooKeeper uses znodes to represent and manage configuration data, synchronization primitives (e.g. locks, counters), and other state information for distributed applications. A znode can contain data and has metadata such as version numbers, timestamps, and ACLs (Access Control Lists). It can be ephemeral (it exists only while the creating session is active and cannot have child znodes) or persistent (it remains in the system until explicitly deleted). There are also sequential znodes, which can be either persistent or ephemeral; when a znode is created as sequential, ZooKeeper appends a 10-digit sequence number to its original name. Sequential znodes are used mostly for locking and synchronization.

Znodes are organized hierarchically in a tree structure, and each znode is identified by a unique path (e.g., /app/config). Each znode stores data and can have child znodes as well.
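As a quick illustration of these znode types, here is a minimal sketch using Apache Curator (assuming a local ZooKeeper at localhost:2181 and illustrative paths and data):

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;
import org.apache.zookeeper.CreateMode;

public class ZnodeDemo {

    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "localhost:2181", new ExponentialBackoffRetry(1000, 3));
        client.start();

        // Persistent znode: stays until explicitly deleted
        client.create().creatingParentsIfNeeded()
                .forPath("/app/config", "some-config".getBytes());

        // Ephemeral znode: removed automatically when this client session ends
        client.create().creatingParentsIfNeeded().withMode(CreateMode.EPHEMERAL)
                .forPath("/app/instances/instance-1", "host:port".getBytes());

        // Sequential znode: ZooKeeper appends a 10-digit sequence number to the path
        String lockPath = client.create().creatingParentsIfNeeded()
                .withMode(CreateMode.PERSISTENT_SEQUENTIAL)
                .forPath("/app/locks/lock-", new byte[0]);
        System.out.println("Created sequential znode: " + lockPath);

        client.close();
    }
}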

How Service Discovery Works

The service discovery mechanism with load balancing described here uses the curator-x-discovery module from Apache Curator, the Java/JVM client library for Apache ZooKeeper.

As we discussed earlier, the service discovery procedure consists of:

  1. Service Registry
  2. Service Discovery

Service Registry Step

When a service instance comes up, it registers with the service registry (ZooKeeper) under the namespace of its service by adding an ephemeral znode. The znode stores the host:port of the instance as its data. The service instance keeps sending heartbeats (keeping its session alive) to preserve its registration.

Service Registry is a component that contains a database of all available service instances. It stores information about the currently available instances of each service and their network data for establishing a connection.

The Service Registry monitors running instances for changes by polling the deployment environment or by subscribing to events. When the Service Registry detects a new service instance available, it writes it to its database. The Service Registry also unregisters failed (disabled) service instances.
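A minimal sketch of this registration step with curator-x-discovery looks roughly as follows; the service name, address, port and base path are illustrative assumptions.

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;
import org.apache.curator.x.discovery.ServiceDiscovery;
import org.apache.curator.x.discovery.ServiceDiscoveryBuilder;
import org.apache.curator.x.discovery.ServiceInstance;

public class RegistrationDemo {

    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "localhost:2181", new ExponentialBackoffRetry(1000, 3));
        client.start();

        // Describe this instance: name, host and port end up as the znode's data
        ServiceInstance<Void> thisInstance = ServiceInstance.<Void>builder()
                .name("employee-service")
                .address("10.0.0.5")
                .port(8081)
                .build();

        // Registers an ephemeral znode under /services/employee-service;
        // it disappears automatically if the client session (heartbeats) stops
        ServiceDiscovery<Void> serviceDiscovery = ServiceDiscoveryBuilder.builder(Void.class)
                .client(client)
                .basePath("/services")
                .thisInstance(thisInstance)
                .build();
        serviceDiscovery.start();
    }
}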

Service Discovery Step

  1. The client connects to the ZooKeeper ensemble and requests the desired service.
  2. Using the ServiceProvider class from Apache Curator, the client applies a load-balancing strategy to select one of the available service instances and obtains the host:port of the chosen registered instance (see the sketch after this list).
  3. The client connects to the service instance.
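Continuing the registration sketch above, the lookup step with a ServiceProvider could look like this; the round-robin strategy shown is one of the provider strategies Curator ships with, and the service name is again an assumption.

import org.apache.curator.x.discovery.ServiceDiscovery;
import org.apache.curator.x.discovery.ServiceInstance;
import org.apache.curator.x.discovery.ServiceProvider;
import org.apache.curator.x.discovery.strategies.RoundRobinStrategy;

public class DiscoveryDemo {

    // Reuses the ServiceDiscovery instance built in the registration sketch
    static String lookup(ServiceDiscovery<Void> serviceDiscovery) throws Exception {
        ServiceProvider<Void> provider = serviceDiscovery.serviceProviderBuilder()
                .serviceName("employee-service")
                .providerStrategy(new RoundRobinStrategy<>())   // client-side load balancing
                .build();
        provider.start();

        ServiceInstance<Void> instance = provider.getInstance(); // one registered instance
        return String.format("http://%s:%d", instance.getAddress(), instance.getPort());
    }
}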

Eureka for Service Discovery

Eureka is a REST (Representational State Transfer) based service that is primarily used in the AWS cloud for locating services for the purpose of load balancing and failover of middle-tier servers. We call this service the Eureka Server. Eureka also comes with a Java-based client component, the Eureka Client, which makes interactions with the service much easier. The client also has a built-in load balancer that does basic round-robin load balancing.

Netflix's Eureka Server, integrated with Spring Boot, provides an elegant solution for managing service discovery. Eureka implements client-side service discovery, which allows services to find and communicate with each other without hard-coding hostnames and ports. The only 'fixed point' in such an architecture is the service registry, with which each service has to register. The registry provides clients with all the instance addresses for a particular service provider, so the clients themselves load balance and select the particular instance to connect to.

Client-side Load Balancing

Client-side load balancing was introduced with the main goal of looking up and caching the registry in each service instance to improve performance.

How it works:

  1. After the services register themselves with Eureka, the SD (service discovery) server knows the physical location and port number of each service instance, along with a service ID (service name), as instances start up.
  2. When service A calls service B, it uses the Netflix Ribbon (client-side load balancing) library. Ribbon contacts Netflix Eureka (SD) to retrieve the corresponding service information and then caches it locally.
  3. Ribbon regularly contacts the SD server and refreshes its local cache (a short Spring Cloud sketch of such a lookup follows this list).
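In a Spring Cloud application, one way to peek at what the client has fetched from the registry is the DiscoveryClient abstraction; the service id used here is just an example.

import java.util.List;
import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.stereotype.Component;

@Component
public class RegistryInspector {

    private final DiscoveryClient discoveryClient;

    public RegistryInspector(DiscoveryClient discoveryClient) {
        this.discoveryClient = discoveryClient;
    }

    public void printInstances() {
        // Every instance of SERVICE-B currently known to this client's view of the registry
        List<ServiceInstance> instances = discoveryClient.getInstances("SERVICE-B");
        instances.forEach(i -> System.out.println(i.getHost() + ":" + i.getPort()));
    }
}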

High Level Architecture

This is how Eureka is deployed at Netflix, and how we would typically run it: there is one Eureka cluster per region, which knows only about instances in its own region, and at least one Eureka server per zone to handle zone failures.

Services register with Eureka and then send heartbeats to renew their leases every 30 seconds. If a client fails to renew its lease a few times, it is removed from the server registry in about 90 seconds. The registration information and the renewals are replicated to all Eureka nodes in the cluster. Clients from any zone can look up the registry information (this happens every 30 seconds) to locate their services (which could be in any zone) and make remote calls.

Eureka clients try to talk to a Eureka server in the same zone. If there are problems talking to that server, or if no server exists in the same zone, the clients fail over to servers in other zones.

Once a server starts receiving traffic, all operations performed on it are replicated to all of the peer nodes that the server knows about. If an operation fails for some reason, the information is reconciled on the next heartbeat, which also gets replicated between servers.

Hence Eureka uses peer-to-peer communication instead of the leader/follower pattern.

Resiliency in Eureka

Eureka clients are built to handle the failure of one or more Eureka servers. Since Eureka clients hold the registry cache information locally, they can operate reasonably well even when all of the Eureka servers go down. Another important aspect that differentiates proxy-based load balancing from load balancing with Eureka is that the application can be resilient to outages of the load balancers, since the information about available servers is cached on the client. This requires a small amount of memory, but buys better resiliency.

Eureka Servers are resilient to other eureka peers going down. Even during a network partition between the clients and servers, the servers have built-in resiliency to prevent a large scale outage.

Self Preservation Mode

Self-preservation mode is primarily used as a protection in scenarios where there is a network partition between a group of clients and the Eureka Server. In these scenarios, the server tries to protect the information it already has.

Consider a network partition: because of the partition, client heartbeats may be unable to reach a partitioned server. Eureka servers enter self-preservation mode when they detect that a larger than expected number of registered clients have terminated their connections ungracefully and are pending eviction at the same time. This is done to ensure that catastrophic network events do not wipe out the Eureka registry data and propagate that loss downstream to all clients.

To better understand self-preservation, it helps to first understand how Eureka clients end their registration lifecycle. The Eureka protocol requires clients to execute an explicit unregister action when they are permanently going away; in the provided Java client, for example, this is done in the shutdown() method. Any client that fails 3 consecutive heartbeat renewals is considered to have had an unclean termination and will be evicted by the background eviction process. Self-preservation is enabled when more than 15% of the current registry is in this latter state.

When in self-preservation mode, Eureka servers stop evicting instances until either:

  1. the number of heartbeat renewals it sees is back above the expected threshold, or
  2. self preservation is disabled (see below)

Self preservation is enabled by default, and the default threshold for enabling self preservation is > 15% of the current registry size.
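On a Spring Cloud Eureka server, the related configuration knobs look roughly like this (property names as exposed by Spring Cloud Netflix; a renewal-percent-threshold of 0.85 corresponds to the "more than 15% missing renewals" trigger described above):

eureka:
  server:
    enable-self-preservation: true        # default: true
    renewal-percent-threshold: 0.85       # default: 0.85; below this share of expected renewals, evictions stop
    eviction-interval-timer-in-ms: 60000  # how often the background eviction task runs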

Zookeeper vs Eureka

  1. Purpose: Apache ZooKeeper is a distributed coordination service that can be used for a variety of purposes, including service discovery and configuration management. Netflix Eureka, on the other hand, is specifically designed for service discovery and registration.
  2. Architecture: In ZooKeeper, a set of servers coordinate with each other to provide a single view of the system state (leader/follower architecture). Eureka, on the other hand, uses a peer-to-peer architecture where each service instance registers itself with one of the Eureka server instances, and this registration is then eventually propagated to all peer instances in the service registry.
  3. Data Consistency: Eureka prioritizes availability over consistency. It uses an eventual consistency model, where there might be some delay in propagating updates across all instances. In contrast, Zookeeper emphasizes consistency and uses a strong consistency model, ensuring that all clients see the same view of the data at any given time.
  4. Service Monitoring: Eureka provides built-in health monitoring of services, regularly checking if they are up or down. It automatically handles instances that are not responsive or healthy. In Zookeeper, monitoring of services needs to be implemented separately using custom solutions.
  5. Service Discovery Mechanism: Eureka uses a peer-to-peer replication mechanism for service discovery. Clients periodically fetch the registry information from the Eureka server and cache it locally. Zookeeper, on the other hand, maintains a hierarchical namespace of znodes, which clients can access to discover and coordinate services. In ZooKeeper, writes are handled only by the leader node in the ensemble, while reads can be served by any node.
  6. Availability and Scalability: Eureka is designed to be highly available and scales horizontally. It provides self-preservation mechanisms that prevent cascading failures and allow the system to continue working even if some instances fail. Zookeeper ensures high availability by replicating data across the ensemble, and it can handle a large number of concurrent clients.
  7. Integration with Other Technologies: Eureka is particularly well-suited for integration with Spring Cloud ecosystem, providing seamless integration with Spring Boot applications. Zookeeper, being a general-purpose coordination service, can be integrated with various platforms and frameworks, making it more versatile for different use cases.

Hands-on with Eureka for service discovery

Now that we know the architecture, purpose, and high-level design of two popular service discovery providers, let's get our hands dirty with some code and see how service discovery works with Eureka and Spring Boot. We will use Maven for dependency management.

Scenario: we will develop one Eureka server and two client applications. Both client applications register themselves with the server (registry), and we then leverage a Feign client so that one application can call the other. The calling application is the Company service and the application serving data is the Employee service; both are registered with the Eureka service registry.

Eureka Server with Spring Boot

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>
	<parent>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-parent</artifactId>
		<version>3.3.5</version>
		<relativePath/> <!-- lookup parent from repository -->
	</parent>
	<groupId>com.example</groupId>
	<artifactId>eureka-demo-server</artifactId>
	<version>0.0.1-SNAPSHOT</version>
	<name>eureka-demo-server</name>
	<description>Demo project for Spring Boot</description>
	<url/>
	<licenses>
		<license/>
	</licenses>
	<developers>
		<developer/>
	</developers>
	<scm>
		<connection/>
		<developerConnection/>
		<tag/>
		<url/>
	</scm>
	<properties>
		<java.version>17</java.version>
		<spring-cloud.version>2023.0.3</spring-cloud.version>
	</properties>
	<dependencies>
		<dependency>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-starter-netflix-eureka-server</artifactId>
		</dependency>

		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-test</artifactId>
			<scope>test</scope>
		</dependency>
	</dependencies>
	<dependencyManagement>
		<dependencies>
			<dependency>
				<groupId>org.springframework.cloud</groupId>
				<artifactId>spring-cloud-dependencies</artifactId>
				<version>${spring-cloud.version}</version>
				<type>pom</type>
				<scope>import</scope>
			</dependency>
		</dependencies>
	</dependencyManagement>

	<build>
		<plugins>
			<plugin>
				<groupId>org.springframework.boot</groupId>
				<artifactId>spring-boot-maven-plugin</artifactId>
			</plugin>
		</plugins>
	</build>

</project>
        

Above is the POM file for the Eureka server; the dependency of interest is:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-eureka-server</artifactId>
</dependency>

Next we need to add properties to application.yml, specifying the port for this application among a few other settings.

spring:
  application:
    name: eureka-demo-server
server:
  port: 8761
eureka:
  client:
    register-with-eureka: false
    fetch-registry: false
  server:
    enableSelfPreservation: true
logging:
  level:
    com.netflix.eureka: OFF
    com.netflix.discovery: OFF        

We have enabled self-preservation mode. A point to note is that we disabled register-with-eureka (and fetch-registry) for this application, as we do not want the server to register itself as a client.
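The server's main application class then only needs the @EnableEurekaServer annotation; a minimal version (class name matching the artifact, otherwise arbitrary) looks like this:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

@SpringBootApplication
@EnableEurekaServer   // turns this Spring Boot application into the Eureka registry server
public class EurekaDemoServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(EurekaDemoServerApplication.class, args);
    }
}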

Let's now access localhost on the given port (http://localhost:8761); the Eureka dashboard page is shown as output.

Employee Application

The first step is getting the right set of dependencies into our Maven POM file.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
       <groupId>org.springframework.boot</groupId>
       <artifactId>spring-boot-starter-parent</artifactId>
       <version>3.3.5</version>
       <relativePath/> <!-- lookup parent from repository -->
    </parent>
    <groupId>com.example</groupId>
    <artifactId>employee-demo-service</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>employee-demo-service</name>
    <description>Demo project for Spring Boot</description>
    <properties>
       <java.version>17</java.version>
       <spring-cloud.version>2023.0.3</spring-cloud.version>
    </properties>
    <dependencies>
       <dependency>
          <groupId>org.springframework.boot</groupId>
          <artifactId>spring-boot-starter-web</artifactId>
       </dependency>
       <dependency>
          <groupId>org.springframework.cloud</groupId>
          <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
       </dependency>

       <dependency>
          <groupId>org.springframework.boot</groupId>
          <artifactId>spring-boot-starter-test</artifactId>
          <scope>test</scope>
       </dependency>
    </dependencies>
    <dependencyManagement>
       <dependencies>
          <dependency>
             <groupId>org.springframework.cloud</groupId>
             <artifactId>spring-cloud-dependencies</artifactId>
             <version>${spring-cloud.version}</version>
             <type>pom</type>
             <scope>import</scope>
          </dependency>
       </dependencies>
    </dependencyManagement>

    <build>
       <plugins>
          <plugin>
             <groupId>org.springframework.boot</groupId>
             <artifactId>spring-boot-maven-plugin</artifactId>
          </plugin>
       </plugins>
    </build>

</project>        

Our dependency of interest is:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
</dependency>

The above dependency allows this application to act as a Eureka client and register itself with the registry. To be able to register, it needs to know the location of the registry; that's where our application.yml properties file comes into play.

spring:
  application:
    name: employee-demo-service
server:
  port: 8081
eureka:
  client:
    service-url:
      defaultZone: http://localhost:8761/eureka
  instance:
    hostname: localhost        

It's important to configure the Eureka instance hostname as localhost so that this client can be discovered as being available locally on port 8081.

Next we expose a simple controller in the application with a dummy GET endpoint.

@RestController
@RequestMapping("/v1/employees")
public class EmployeeController {

    @GetMapping("/{id}")
    public String getEmployeeName(@PathVariable String id) {
        return "E1";
    }
}        

Once we start this application, we can see it registered in the Eureka registry (it shows up on the Eureka dashboard).

Company Application

Next we will develop the company application. This application will discover and call the employee application's GET endpoint in order to retrieve employee details (reference: the controller code in the employee application).

Below is a look at the POM file for the same.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
       <groupId>org.springframework.boot</groupId>
       <artifactId>spring-boot-starter-parent</artifactId>
       <version>3.3.5</version>
       <relativePath/> <!-- lookup parent from repository -->
    </parent>
    <groupId>com.example</groupId>
    <artifactId>company-demo-service</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>company-demo-service</name>
    <description>Demo project for Spring Boot</description>
    <properties>
       <java.version>17</java.version>
       <spring-cloud.version>2023.0.3</spring-cloud.version>
    </properties>
    <dependencies>
       <dependency>
          <groupId>org.springframework.boot</groupId>
          <artifactId>spring-boot-starter-web</artifactId>
       </dependency>
       <dependency>
          <groupId>org.springframework.cloud</groupId>
          <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
       </dependency>
       <dependency>
          <groupId>org.springframework.cloud</groupId>
          <artifactId>spring-cloud-starter-openfeign</artifactId>
       </dependency>
       <dependency>
          <groupId>org.springframework.boot</groupId>
          <artifactId>spring-boot-starter-test</artifactId>
          <scope>test</scope>
       </dependency>
    </dependencies>
    <dependencyManagement>
       <dependencies>
          <dependency>
             <groupId>org.springframework.cloud</groupId>
             <artifactId>spring-cloud-dependencies</artifactId>
             <version>${spring-cloud.version}</version>
             <type>pom</type>
             <scope>import</scope>
          </dependency>
       </dependencies>
    </dependencyManagement>

    <build>
       <plugins>
          <plugin>
             <groupId>org.springframework.boot</groupId>
             <artifactId>spring-boot-maven-plugin</artifactId>
          </plugin>
       </plugins>
    </build>

</project>        

The two dependencies to note are:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-eureka-client</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-openfeign</artifactId>
</dependency>

Of these, the new one is OpenFeign. Feign is a declarative web service client that makes writing web service clients easier. Spring Cloud integrates Eureka and a client-side load balancer (Spring Cloud LoadBalancer, the successor of Netflix Ribbon in current releases) to provide a load-balanced HTTP client when using Feign.
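Like the employee service, the company application needs its own application.yml pointing at the registry. A minimal version could look like the following; the port 8082 is an assumption for this demo.

spring:
  application:
    name: company-demo-service
server:
  port: 8082
eureka:
  client:
    service-url:
      defaultZone: http://localhost:8761/eureka
  instance:
    hostname: localhost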

Our Feign client connects to the employee application to get employee details.

@FeignClient("EMPLOYEE-DEMO-SERVICE")
public interface EmployeeServiceClient {

    @GetMapping("/v1/employees/{id}")
    String getEmployeeName(@PathVariable("id") String id);
}        

Since we are using Eureka for service discovery, the name given to the client matches the Eureka service name, i.e. EMPLOYEE-DEMO-SERVICE. We then specify the endpoint of the employee demo service that we want to call to get employee information.

The Feign client support needs to be enabled, hence we add the annotations depicted below.

@SpringBootApplication
@EnableDiscoveryClient
@EnableFeignClients
public class CompanyDemoServiceApplication {

    public static void main(String[] args) {
       SpringApplication.run(CompanyDemoServiceApplication.class, args);
    }

}        

Let's now look at the controller code for the company application. This is the endpoint we will call to reach the company application; the controller has the (Feign) client as a dependency to call the employee demo service.

@RestController
@RequestMapping("/v1/companies")
public class CompanyController {

    private final EmployeeServiceClient employeeServiceClient;

    public CompanyController(EmployeeServiceClient employeeServiceClient) {
        this.employeeServiceClient = employeeServiceClient;
    }

    @GetMapping("/{id}")
    public Company getCompanyDetails(@PathVariable String id){
        Company company = new Company();
        company.setCompanyId(id);
        Employee employee = new Employee();
        employee.setName(employeeServiceClient.getEmployeeName("123"));
        company.setEmployee(employee);
        return company;
    }
}        

This call leverages Eureka to discover the Employee Demo Service host and port (address); client-side load balancing is handled by Spring Cloud LoadBalancer (the successor of Ribbon). The endpoint then returns a complete response containing both company and employee details.

The company and employee models look like below.

public class Company {
    private String companyId;
    private Employee employee;

    public String getCompanyId() {
        return companyId;
    }

    public Employee getEmployee() {
        return employee;
    }

    public void setCompanyId(String companyId) {
        this.companyId = companyId;
    }

    public void setEmployee(Employee employee) {
        this.employee = employee;
    }
}        
public class Employee {
    private String name;

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
}        

Once we run the company application, it, also being a Eureka client, registers itself with the registry and shows up on the Eureka dashboard.


Let's try making a call with Postman to verify the end-to-end flow.
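Calling the company endpoint, for example GET http://localhost:8082/v1/companies/1 (assuming the company service runs on port 8082 as configured above), should return a response along these lines, with the employee name fetched from the employee service via Eureka:

{
    "companyId": "1",
    "employee": {
        "name": "E1"
    }
}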

Service Discovery with Service Mesh

Now let's see how we can have service discovery in a distributed environment where container orchestration is done by Kubernetes, and hence new instances are spun up and addresses change. We also want the inter-service communication to be secure, load balanced and resilient. This is where we can leverage the service mesh architecture.

A service mesh is a dedicated infrastructure component that handles service-to-service communication within a distributed microservices system. Service meshes can make service-to-service communication fast, reliable and secure. A mesh uses proxy-based communication, where the proxies are deployed as sidecars to the application code. These proxies act as intermediaries between the microservices and the organization's network, and all traffic to and from a service is routed via its proxy. This maximizes the efficiency of all interconnected elements.

This takes us to two important parts of a service mesh - Control Plane and Data Plane.

Data Plane

The data plane is the data handling component of a service mesh. It includes all the sidecar proxies and their functions. When a service wants to communicate with another service, the sidecar proxy takes these actions:

  1. The sidecar intercepts the request
  2. It encapsulates the request in a separate network connection
  3. It establishes a secure and encrypted channel between the source and destination proxies

The sidecar proxies handle low-level messaging between services. They also implement features like circuit breaking and request retries to enhance resiliency and prevent service degradation. Service mesh functionality, such as load balancing, service discovery, and traffic routing, is implemented in the data plane.

Control Plane

The control plane acts as the central management and configuration layer of the service mesh.

With the control plane, administrators can define and configure the services within the mesh. For example, they can specify parameters like service endpoints, routing rules, load-balancing policies, and security settings. Once the configuration is defined, the control plane distributes the necessary information to the service mesh's data plane. The control plane thus works as the brain of the service mesh.

The proxies use the configuration information to decide how to handle incoming requests. They can also receive configuration changes and adapt their behaviour dynamically. We can make real-time changes to the service mesh configuration without service restarts or disruptions.

This makes its approach very different from that of an API gateway (a separate service that takes over cross-cutting concerns like logging and authN/authZ, but requires separate deployment and handling in case of configuration updates).

Service mesh implementations typically include the following capabilities in the control plane:

  • Service registry that keeps track of all services within the mesh
  • Automatic discovery of new services and removal of inactive services
  • Collection and aggregation of telemetry data like metrics, logs, and distributed tracing information

Summary

In this blog we explored service discovery: the concept and the need for it. We discussed the service registry as the first step towards discovery and looked at the types of registration mechanisms, and then at client-side and server-side service discovery and their differences. We zoomed into ZooKeeper and Eureka as service discovery providers and their architectures, looking at how these two differ yet both achieve service discovery. We also got hands-on with Eureka service discovery, and concluded by exploring the service mesh and how it can be leveraged for service discovery in distributed microservices deployed (in the cloud) as containerised services.

Sources of Knowledge

  1. https://zookeeper.apache.org/doc/current/zookeeperOver.html
  2. https://github.com/Netflix/eureka/wiki/Eureka-at-a-glance
  3. https://docs.arenadata.io/en/ADStreaming/current/concept/architecture/zookeeper/service_discovery.html
  4. https://rafael-as-martins.medium.com/step-up-your-microservices-architecture-with-netflix-eureka-cb3b92f90a18
  5. https://aws.amazon.com/what-is/service-mesh/
