System Design: Load balancers

Thanks to the original creator: https://medium.com/geekculture/system-design-basics-load-balancer-5aa1c6b0f88d

What is Load Balancing?

Load balancing refers to efficiently managing traffic across a set of servers, also known as server farms or server pools. A load balancer sits between client devices and backend servers, receiving incoming requests and distributing them to any available server capable of fulfilling them.

What are Load Balancers?

A load balancer is a device that acts as a reverse proxy and distributes network or application traffic across several backend servers. It is used to increase the concurrent capacity (number of simultaneous users) and the availability of a distributed system. It improves the overall performance of applications by decreasing the burden on servers associated with managing and maintaining application and network sessions, as well as by performing application-specific tasks.

Load balancers are categorized into two different groups: Layer 4 and Layer 7. Layer 4 balances traffic at the network/transport level, optimizing the flow of packets using information from protocols such as IP, TCP, and UDP. Layer 7 works at the application level, optimizing HTTP requests, API calls, etc.
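A rough way to see the difference: a Layer 4 balancer only sees connection metadata (addresses, ports, protocol), while a Layer 7 balancer can inspect the request itself. The Python sketch below is purely illustrative; the server addresses and helper names are hypothetical.

    def l4_pick_backend(src_ip, src_port, dst_port, backends):
        # Layer 4: only connection metadata is visible, so the choice is
        # typically a hash of the connection tuple spread over the pool.
        return backends[hash((src_ip, src_port, dst_port)) % len(backends)]

    def l7_pick_backend(path, pools):
        # Layer 7: the HTTP request (host, path, headers) is visible, so
        # traffic can be routed by content, e.g. /api vs. everything else.
        pool = pools["api"] if path.startswith("/api") else pools["web"]
        return pool[0]  # pick any server from the matching pool

    print(l4_pick_backend("203.0.113.7", 51432, 443, ["10.0.0.1", "10.0.0.2"]))
    print(l7_pick_backend("/api/users", {"api": ["10.0.1.1"], "web": ["10.0.2.1"]}))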

A load balancer may be:

  • A physical device or a virtual instance running in a distributed system
  • Incorporated into application delivery controllers (ADCs) designed to improve the performance and security of applications more broadly.
  • A combination of several load balancers, each running a different algorithm depending on the use case in the system.


Load Balancing Algorithms

Some of the algorithms used for load balancing are listed below; a minimal sketch of each follows the list:

  • Round Robin: Requests are distributed to the servers sequentially, one after another, cycling back to the first server after the last.
  • Least connections: Requests are sent to the backend server with the fewest active connections. The relative computing capacity of each server can also be taken into account when deciding where a request should be sent.
  • IP Hashing: The client’s IP address is hashed to pick a backend server, so requests from the same client consistently reach the same server. This strategy is useful when a client should keep hitting the same server, for example to preserve session state.
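Below is a minimal Python sketch of the three strategies, assuming a fixed, hypothetical list of healthy servers and ignoring weighting and health checks:

    import hashlib
    from itertools import cycle

    servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

    # Round Robin: walk the server list in order, wrapping around at the end.
    _rr = cycle(servers)
    def pick_round_robin():
        return next(_rr)

    # Least Connections: track active connections, pick the least-loaded server.
    active_connections = {s: 0 for s in servers}
    def pick_least_connections():
        return min(active_connections, key=active_connections.get)

    # IP Hashing: hash the client IP so the same client always maps to the same server.
    def pick_ip_hash(client_ip):
        digest = hashlib.sha256(client_ip.encode()).hexdigest()
        return servers[int(digest, 16) % len(servers)]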

Sticky Session

Session stickiness, or session persistence, is a mechanism by which a load balancer couples a client’s session to a specific backend server. This ensures that all requests belonging to the same session are processed by the same server, so no session information is lost.


The advantage of sticky sessions is that the servers within the distributed system don’t need to exchange session data with each other; each server can work independently. There is also the added advantage of better RAM cache utilization, since a client’s data stays warm on one server, which results in better responsiveness.

But this is not without its cons. A server may become overloaded if too many sessions stick to it, and session data may be lost if clients are shifted to a different server mid-session. There is also extra latency introduced by routing everything through one central load balancer.
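A minimal sketch of the idea, using an in-memory stickiness table and hypothetical server names (real load balancers typically implement this with a cookie or an IP hash):

    import random

    servers = ["app-1", "app-2", "app-3"]
    session_to_server = {}  # stickiness table: session id -> server

    def route(session_id=None):
        # New sessions get a server assigned; existing sessions always
        # return to the server that already holds their state.
        if session_id is None:
            session_id = f"sess-{random.randrange(1_000_000)}"
        if session_id not in session_to_server:
            session_to_server[session_id] = random.choice(servers)
        return session_id, session_to_server[session_id]

    sid, server = route()              # first request: a server is assigned
    assert route(sid)[1] == server     # later requests: same server every time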


Elastic Load Balancers

An Elastic Load Balancer (ELB) can automatically scale load balancers and applications based on real-time traffic. ELB automatically distributes incoming application traffic across multiple targets and virtual appliances in one or more Availability Zones (AZs).

It uses health checks to learn the status of application pool members (application servers), routes traffic only to available servers, manages failover to high-availability targets, and can automatically spin up additional capacity.

ELB scales the load balancer as traffic increases. The load balancer acts as the single point of contact for all incoming requests and, while monitoring the health of the registered instances, distributes the load among them.


Elastic Load Balancing automatically distributes incoming application traffic across multiple server instances. It enables you to achieve greater levels of fault tolerance in your applications, seamlessly providing the amount of load balancing capacity needed to distribute application traffic.

Elastic Load Balancing detects unhealthy instances and automatically reroutes traffic to healthy instances until the unhealthy instances have been restored. Customers can enable Elastic Load Balancing within a single Availability Zone or across multiple Availability Zones for more consistent application performance.
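The core mechanism behind this is a periodic health check against each registered target. A rough Python sketch of that idea (the target addresses, health-check path, and timeout below are hypothetical; the managed service does all of this for you):

    import urllib.request

    targets = ["http://10.0.1.10:8080", "http://10.0.1.11:8080"]

    def is_healthy(target, path="/health", timeout=2.0):
        # A target is healthy if its health-check endpoint answers 200 in time.
        try:
            with urllib.request.urlopen(target + path, timeout=timeout) as resp:
                return resp.status == 200
        except OSError:
            return False

    def healthy_targets():
        # Traffic is only routed to targets that pass the health check;
        # an unhealthy target rejoins the pool once it recovers.
        return [t for t in targets if is_healthy(t)]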

ELBs come in three types, each operating at a different level of the stack; a minimal provisioning sketch follows the list:

  • Application Load Balancer: Application Load Balancer is best suited for load balancing at the application level (HTTP/HTTPS requests). It provides advanced request routing targeted at the delivery of modern application architectures, including microservices and containers.


  • Network Load Balancer: This operates at the network level. It is best suited for load balancing of Transmission Control Protocol (TCP), User Datagram Protocol (UDP), and Transport Layer Security (TLS) traffic where extreme performance is required.


  • Gateway Load Balancer: Gateway Load Balancer makes it easy to deploy, scale, and run third-party virtual networking appliances. Providing load balancing and auto-scaling for fleets of third-party appliances, Gateway Load Balancer is transparent to the source and destination of the traffic. This capability makes it well-suited for working with third-party appliances for security, network analytics, and other use cases.
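As an illustration only, all three types are created through the same API in the AWS SDK for Python (boto3); only the Type parameter changes. The names and subnet IDs below are placeholders, and a real setup still needs target groups and listeners before traffic can flow:

    import boto3

    elbv2 = boto3.client("elbv2", region_name="us-east-1")

    for lb_type in ("application", "network", "gateway"):
        response = elbv2.create_load_balancer(
            Name=f"demo-{lb_type}-lb",
            Subnets=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
            Type=lb_type,  # 'application' | 'network' | 'gateway'
        )
        print(response["LoadBalancers"][0]["LoadBalancerArn"])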


Happy Learning!
