Configure a Load Balancer with HAProxy using AWS EC2 instances.
Task Description

Statement: Deploy a Load Balancer and multiple Web Servers on AWS instances through ANSIBLE!

- Provision EC2 instances through Ansible.
- Retrieve the IP addresses of the instances using the dynamic inventory concept.
- Configure the web servers through an Ansible role.
- Configure the load balancer through an Ansible role.
- The target nodes of the load balancer should auto-update as per the status of the web servers.
Let us discuss some basic concepts first.
What is HAProxy?
HAProxy (High Availability Proxy) is a TCP/HTTP load balancer and proxy server that allows a webserver to spread incoming requests across multiple endpoints. HAProxy is known as “the world’s fastest and most widely used software load balancer.” This is useful in cases where too many concurrent connections over-saturate the capability of a single server. Instead of a client connecting to a single server which processes all the requests, the client will connect to an HAProxy instance, which will use a reverse proxy to forward the request to one of the available endpoints, based on a load-balancing algorithm.
HAProxy Architecture
The image above shows the architecture of a reverse proxy, i.e., HAProxy.
Note: To implement this architecture we need 4 VMs or instances. I am using the AWS cloud to launch the 4 instances: 1 instance works as the controller node, 1 instance as the load balancer, and the other 2 instances are managed nodes (web servers).
In my previous article, we launched an EC2 instance and configured a web server using a dynamic inventory. Check out that article:
There I talked about dynamic inventory, through which we can dynamically extract information about the systems to be configured using some Python scripts. Here we'll use the same concept to gather facts about multiple EC2 instances and use them to configure HAProxy on one instance and Apache httpd on the others.
Before we begin, check the prerequisites:
- Ansible controller node
- AWS account
That's all, now let's begin:
First things first, let's check the Ansible config file:
vim /etc/ansible/ansible.cfg
Notice the inventory and roles paths and set them accordingly; also enable privilege escalation, as we need root privileges to operate.
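As an illustration, a minimal ansible.cfg might look like this (the inventory path, roles path, remote user, and key file are assumptions; adjust them to your setup):

```ini
[defaults]
# Path to the dynamic inventory script(s)
inventory         = /etc/ansible/hosts
roles_path        = /etc/ansible/roles
# EC2 instances are reached over SSH with their key pair
remote_user       = ec2-user
private_key_file  = /etc/ansible/mykey.pem
host_key_checking = False

[privilege_escalation]
become          = true
become_method   = sudo
become_user     = root
become_ask_pass = false
```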
As for the inventory, we've used the same scripts as in the previous article, but keep one thing in mind: here we have two different configurations to execute, one for the load balancer and the other for the web servers, so we need two groups in the inventory to differentiate them.
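For example, the dynamic inventory script groups instances by their tags, so output along these lines is what we rely on (the IPs here are placeholders):

```json
{
  "tag_name_loadbalancer": ["13.233.x.x"],
  "tag_name_webserver": ["65.0.x.x", "3.110.x.x", "13.126.x.x"],
  "_meta": { "hostvars": {} }
}
```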
We'll start by creating an Ansible role:
ansible-galaxy init ec2Instances
Let's write the Playbook:
We're launching 1 instance with the loadBalancer tag and 3 instances with the webserver tag.
Now let's launch this role with a playbook; don't forget to put your access key in loadbal.yml as we did in the previous article.
vim launchEC2.yml
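The screenshots of the playbook are not reproduced here, so as a hedged sketch: the ec2Instances role's tasks/main.yml might use Ansible's ec2 module along these lines, with launchEC2.yml simply applying the role to localhost (the AMI ID, key name, and region are placeholders):

```yaml
# ec2Instances/tasks/main.yml (sketch)
- name: launch the load balancer instance
  ec2:
    key_name: mykey
    instance_type: t2.micro
    image: ami-0123456789abcdef0      # placeholder AMI ID
    region: ap-south-1
    count: 1
    instance_tags:
      name: loadbalancer
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_key }}"

- name: launch the web server instances
  ec2:
    key_name: mykey
    instance_type: t2.micro
    image: ami-0123456789abcdef0
    region: ap-south-1
    count: 3
    instance_tags:
      name: webserver
    aws_access_key: "{{ aws_access_key }}"
    aws_secret_key: "{{ aws_secret_key }}"
```

launchEC2.yml then only needs `hosts: localhost`, `vars_files: [loadbal.yml]` for the credentials, and `roles: [ec2Instances]`.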
Let's Execute the playbook-
ansible-playbook launchEC2.yml
Now let's try to ping our hosts:
ansible all -m ping
To check the group names, we can execute the Python inventory script directly in bash:
The highlighted portion shows the groups generated by the script.
Let's check the connectivity to these instances by pinging them:
ansible tag_name_loadbalancer -m ping
ansible tag_name_webserver -m ping
Now let's configure the load balancer; for that we'll create another role:
ansible-galaxy init loadbalancer
Let's write the PlayBook for LoadBalancer:
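The role's task list isn't reproduced here, but as a sketch it might install HAProxy, push a templated configuration, and start the service (the package and template names follow the usual RHEL layout; treat them as assumptions):

```yaml
# loadbalancer/tasks/main.yml (sketch)
- name: install haproxy
  package:
    name: haproxy
    state: present

- name: deploy haproxy configuration from template
  template:
    src: haproxy.cfg.j2
    dest: /etc/haproxy/haproxy.cfg
  notify: restart haproxy          # handler fires only when the file changes

- name: start and enable haproxy
  service:
    name: haproxy
    state: started
    enabled: yes

# loadbalancer/handlers/main.yml (sketch)
- name: restart haproxy
  service:
    name: haproxy
    state: restarted
```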
The second task calls a handler, which restarts the HAProxy service whenever a change is made to the configuration file.
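The auto-updating of target nodes comes from the template: the backend section loops over whatever hosts are currently in the webserver group, so re-running the playbook after instances come or go rewrites the list. A sketch of the relevant part of haproxy.cfg.j2 (the bind port is an assumption; the group name matches the ping commands above):

```
frontend main
    bind *:8080
    default_backend app

backend app
    balance roundrobin
{% for host in groups['tag_name_webserver'] %}
    server web{{ loop.index }} {{ host }}:80 check
{% endfor %}
```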
Let's write one more playbook for load balancer:
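taskLB.yml itself can stay tiny; a sketch, assuming the group name produced by the dynamic inventory:

```yaml
# taskLB.yml -- apply the loadbalancer role to the tagged instance
- hosts: tag_name_loadbalancer
  roles:
    - loadbalancer
```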
Now execute the above playbook:
ansible-playbook taskLB.yml
It ran successfully....
------
Let's configure the web servers:
ansible-galaxy init Webserver
vim webser.yml
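The web server tasks aren't shown here, so as a sketch: the Webserver role might install Apache httpd, drop in a page that exposes the hostname (handy for verifying load balancing later), and start the service (the page content is an assumption):

```yaml
# Webserver/tasks/main.yml (sketch)
- name: install apache httpd
  package:
    name: httpd
    state: present

- name: create a page that shows which server answered
  copy:
    content: "Served from {{ ansible_hostname }}\n"
    dest: /var/www/html/index.html

- name: start and enable httpd
  service:
    name: httpd
    state: started
    enabled: yes
```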
That's it, now execute this role also with a playbook:
vim taskWEB.yml
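taskWEB.yml mirrors the load balancer playbook, targeting the other inventory group:

```yaml
# taskWEB.yml -- apply the Webserver role to all web server instances
- hosts: tag_name_webserver
  roles:
    - Webserver
```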
Let's run the web server playbook now:
ansible-playbook taskWEB.yml
Our playbook ran just fine.
Now let's check whether our AWS EC2 setup works.
Note down the public IP of the LoadBalancer instance, and
now, to check whether load balancing is working or not, refresh the page and see what comes up.
First refresh:
Second refresh:
It's redirected to a different server each time; observe the hostname. Now let's confirm.
That's all, setup is successful...!!
THANK YOU to All...!!