HAProxy (Load Balancer) Configuration Using Ansible
The aims of this article are:
- 12.1 Use an Ansible playbook to configure a reverse proxy, i.e. HAProxy, and update its configuration file automatically each time a new managed node (configured with the Apache web server) joins the inventory.
- 12.2 Configure the same setup as 12.1 over AWS using EC2 instances.
To carry out the above tasks, I have installed three RHEL 8 virtual machines on top of Oracle VirtualBox. These three VMs will work as the backend servers for the load balancer.
I will be configuring these backend servers as web servers, and I am going to configure my localhost (the controller node of Ansible) as the HAProxy server.
So, let’s carry out this practical!
Here are Ansible's configuration file and inventory:
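Roughly, the two files look like this (the paths, IPs, and credentials below are placeholders, not the exact values from my setup):

```ini
# /etc/ansible/ansible.cfg -- a minimal sketch; the inventory path is a placeholder
[defaults]
inventory = /root/inventory.txt
host_key_checking = False
```

```ini
# /root/inventory.txt -- the IPs and the password here are placeholders
[webservers]
192.168.1.101 ansible_user=root ansible_ssh_pass=redhat
192.168.1.102 ansible_user=root ansible_ssh_pass=redhat
192.168.1.103 ansible_user=root ansible_ssh_pass=redhat
```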
Checking connectivity with all the nodes:
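A quick ad-hoc ping is enough for this check:

```
ansible all -m ping
```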
Now, let’s start with configuring the target nodes as web servers:
Here, I have created a single playbook containing two different plays: one play configures the target nodes as web servers, and the second configures the localhost as the HAProxy server.
Here is the play for configuring the target nodes as web servers:
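A sketch of that first play (the repository baseurl and the page content are illustrative placeholders; the BaseOS repo would be configured the same way):

```yaml
- hosts: webservers
  tasks:
    - name: Configure the yum repository (RHEL 8 ships AppStream and BaseOS)
      yum_repository:
        name: appstream
        description: RHEL 8 AppStream
        baseurl: file:///dvd/AppStream     # placeholder path to the mounted DVD
        gpgcheck: no

    - name: Install the Apache web server
      package:
        name: httpd
        state: present

    - name: Create a web page that shows the node's hostname
      copy:
        dest: /var/www/html/index.html
        content: "Hello from {{ ansible_hostname }}\n"

    - name: Start the httpd service
      service:
        name: httpd
        state: started
```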
At first, I have configured yum on the target nodes (as they are RHEL 8), and then I have installed the “httpd” package for the web server. In the next step, I have used the copy module to create the content for the web pages.
And here, I have used the ansible_hostname variable from Ansible facts, so that each web page shows different content with its node's hostname. For this purpose, I have manually changed the hostnames of the target nodes (we could change the hostnames using Ansible as well).
And in the next step, I have started the service.
So, this was the play with which I configured the target nodes as web servers.
In the next play, the task is to configure the localhost as the HAProxy server.
Here is the play that I have used:
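In outline, the second play looks like this (the template name haproxy.cfg.j2 is a placeholder for my actual template file):

```yaml
- hosts: localhost
  tasks:
    - name: Install HAProxy
      package:
        name: haproxy
        state: present

    - name: Render the templated configuration file
      template:
        src: haproxy.cfg.j2          # placeholder name for the pre-created template
        dest: /etc/haproxy/haproxy.cfg

    - name: Start the HAProxy service
      service:
        name: haproxy
        state: started
```

In practice, one could also notify a handler to restart HAProxy whenever the template changes, so that newly added backends take effect.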
I have used the “haproxy” software for load balancing. For the configuration file of HAProxy, I have a pre-created configuration template in which I have used some Jinja2 code to dynamically identify the IPs of the backend servers.
Here is a part of that template (you can get the entire configuration file from the GitHub link that I have mentioned in the post):
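The backend section of the template loops over the inventory group and pulls each node's IP from the facts gathered in the first play — roughly like this:

```
backend app
    balance roundrobin
{% for host in groups['webservers'] %}
    server app{{ loop.index }} {{ hostvars[host]['ansible_default_ipv4']['address'] }}:80 check
{% endfor %}
```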
And finally, I have started the HAProxy service.
Here is the output of both the plays:
Now, I have also created some firewall rules for all the nodes. Here is the playbook for the same:
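A sketch of that playbook, assuming Apache serves on port 80 and HAProxy binds port 8080 (adjust the ports to your own frontend and backend settings):

```yaml
- hosts: webservers
  tasks:
    - name: Allow HTTP traffic to Apache
      firewalld:
        port: 80/tcp
        permanent: yes
        immediate: yes
        state: enabled

- hosts: localhost
  tasks:
    - name: Allow traffic to the HAProxy frontend
      firewalld:
        port: 8080/tcp
        permanent: yes
        immediate: yes
        state: enabled
```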
The output of the playbook:
And hence, the load balancer has been configured successfully:
The entire setup has been done successfully using Ansible.
Now, if we want to do the same setup on top of the AWS cloud, we have to make some changes in the configuration file of Ansible in order to provide the sudo access and the key to log in.
Here, I am going to launch four EC2 instances, out of which one instance will work as the load balancer and the remaining three will work as backend servers.
Here is the Ansible configuration file for the EC2 instances:
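Roughly, it looks like this (the key path and inventory location are placeholders):

```ini
[defaults]
inventory = /root/aws_inventory.txt     # placeholder path
host_key_checking = False
remote_user = ec2-user
private_key_file = /root/mykey.pem      # placeholder path to the login key

[privilege_escalation]
become = true
become_method = sudo
become_user = root
become_ask_pass = false
```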
Here, I have provided some extra information, such as the location of the key to log in, the remote username, and the privilege-escalation details, i.e., when Ansible logs in using the remote user, it will require root permissions in order to perform the tasks.
Then, I have used the following playbook to provision the four instances:
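A sketch of the provisioning play using the classic ec2 module (the key name, AMI ID, region, subnet, and credential variables are all placeholders):

```yaml
- hosts: localhost
  tasks:
    - name: Provision four EC2 instances
      ec2:
        key_name: mykey                    # placeholder key pair name
        instance_type: t2.micro
        image: ami-xxxxxxxxxxxx            # placeholder RHEL 8 AMI ID
        region: ap-south-1                 # placeholder region
        vpc_subnet_id: subnet-xxxxxxxx     # placeholder subnet
        assign_public_ip: yes
        count: 4
        state: present
        wait: yes
        aws_access_key: "{{ access_key }}"   # pass these securely, e.g. via vault
        aws_secret_key: "{{ secret_key }}"
```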
Before running this playbook, we need the boto and boto3 Python libraries to connect to the AWS API:
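They can be installed with pip:

```
pip3 install boto boto3
```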
The output of the playbook:
And the instances have been provisioned successfully:
Now, in the next step, I have updated the inventory with the IP addresses of all the instances, in separate groups:
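Something like this, where the group names are my own choice and the IPs stand in for the public IPs of the provisioned instances:

```ini
[lb]
13.233.10.1

[aws_webservers]
13.233.10.2
13.233.10.3
13.233.10.4
```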
Here, we do not need to give the login credentials for the remote systems, as we have already provided the required information in the configuration file.
And now, almost everything stays the same in the playbook for the configuration of the web servers and the load balancer. We just need to change the group names in the playbook and in the ha.conf file.
Here is the playbook:
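Only the hosts lines change; the tasks in both plays stay exactly as before:

```yaml
- hosts: aws_webservers
  tasks:
    # ... same web-server tasks as in the first setup ...

- hosts: lb
  tasks:
    # ... same HAProxy tasks as in the first setup ...
```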
The updated part of the ha.conf file:
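Only the group name inside the loop changes:

```
{% for host in groups['aws_webservers'] %}
    server app{{ loop.index }} {{ hostvars[host]['ansible_default_ipv4']['address'] }}:80 check
{% endfor %}
```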
The output of the plays:
And the same setup has been configured over the AWS cloud as well:
Thank you for staying.
Have a good day!!