Deploying Load Balancer on AWS using Ansible
Ansible + AWS + HTTPD + HAProxy

This project uses Ansible to launch four instances on the AWS Cloud, then configure three of them as webservers and the remaining one as a Load Balancer. The load-balancer instance will run HAProxy, a popular open-source high-availability load balancer and proxy server for TCP- and HTTP-based applications. The webserver instances will be configured with Apache httpd.

A few basic prerequisites are given below:

  1. Ansible and Python installed on the local machine i.e., the Controller Node.
  2. An AWS account.

Before beginning with the playbooks, install the boto SDK, which Ansible requires to talk to the AWS Cloud. It is just a Python library, so it can be installed easily with pip:

pip3 install boto

Create an IAM user on the AWS account with a policy such as PowerUserAccess. Download the Access Key and Secret Key provided as a .csv file. Then, using the values for the Access Key and Secret Key from that file, run the commands given below:

export AWS_REGION='<region, e.g., ap-south-1>'
export AWS_ACCESS_KEY_ID='<ACCESS-KEY assigned>'
export AWS_SECRET_ACCESS_KEY='<SECRET-KEY assigned>'

Ansible will use AWS services as the IAM user created above. These commands set the values as environment variables, which Ansible will pick up later.

It is also a good idea to create a Dynamic Inventory for Ansible, so that as soon as the EC2 instances are launched they are added to Ansible's inventory without any human intervention. Create a directory and download two scripts into it using wget. These scripts were created by the Ansible community, are free to use, and live in the ansible/ansible GitHub repository:

wget https://raw.githubusercontent.com/ansible/ansible/stable-2.9/contrib/inventory/ec2.py
wget https://raw.githubusercontent.com/ansible/ansible/stable-2.9/contrib/inventory/ec2.ini

Edit the file 'ec2.py' and add the line '#!/usr/bin/python3' at the top. Then make the scripts executable using the chmod command:

chmod +x ec2.ini
chmod +x ec2.py

Update the Ansible configuration file to set this directory as the inventory, as shown. The default configuration file is '/etc/ansible/ansible.cfg':

[defaults]
inventory = /path_to/directory
host_key_checking = FALSE
roles_path = /etc/myroles
private_key_file = /path/to/key.pem
remote_user = ec2-user
ask_pass = FALSE
become = TRUE

[privilege_escalation]
become = TRUE
become_user = root
become_method = sudo
become_ask_pass = FALSE

Now we can proceed with a playbook to launch the instances. Before writing it, collect the data required to launch an instance: the Amazon AMI ID, the instance type, the AWS Region, and a VPC Subnet ID.

Two Security Groups and a key-pair must be created beforehand, to be attached to the instances; use the AWS Management Console. Configure one Security Group to allow traffic on port 80 (HTTP) and port 22 (SSH), and the other to allow traffic on port 8080 and port 22. Download the key-pair into some directory and save/copy the IDs of the two security groups as well. Also point 'private_key_file' in the configuration file at this key.
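Alternatively, the Security Groups can themselves be created from Ansible with the ec2_group module instead of the Management Console. Below is a minimal sketch for the webserver group; the group name 'websg' and the wide-open CIDR are illustrative, and credentials are picked up from the environment variables exported earlier:

```yaml
- name: create the webserver security group (name and CIDRs are illustrative)
  ec2_group:
          name: "websg"
          description: "allow HTTP and SSH for webservers"
          region: "ap-south-1"
          vpc_id: "<VPC ID>"
          rules:
                  - proto: tcp
                    from_port: 80
                    to_port: 80
                    cidr_ip: 0.0.0.0/0
                  - proto: tcp
                    from_port: 22
                    to_port: 22
                    cidr_ip: 0.0.0.0/0
  register: websg
```

The load-balancer group is identical except that port 80 becomes 8080; the value to pass later as 'group_id' when launching instances is then available as websg.group_id.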

The playbook will also need the AWS Access Key and Secret Key of the IAM user created earlier. For security, it is recommended to store them in an Ansible vault. Create one using the command given below; this associates the vault with a unique ID, and a password will be required to use/work with the file.

ansible-vault create --vault-id <id>@prompt <vault-name>.yml

To edit the file, use the command given below and enter the vault password when prompted.

ansible-vault edit <vault-name>.yml

Store the values of the Access key and Secret key in YAML format as shown, save and exit.

AWS_ACCESS_KEY: '<Access Key>'
AWS_SECRET_KEY: '<Secret Key>'

The playbook contains two tasks: one to launch the instances for the webservers and the other to launch the one for the load balancer. The playbook is also available on the GitHub repository linked at the bottom of this article.

- hosts: localhost
  vars_files:
          - <vault-name>.yml
  tasks:
          - name: Launch EC2 instance over AWSCloud
            ec2:
                    region: "ap-south-1"
                    image: "<AMI ID>"                 # eg., ami-0e306788ff2473ccb
                    instance_type: "t2.micro"
                    count: 3
                    vpc_subnet_id: "<Subnet ID>"
                    group_id: "<Security-Group ID 1>"
                    instance_tags:
                            Name: "webserver"
                            user: "ansible"
                    key_name: "<key-pair>"
                    assign_public_ip: yes
                    state: present
                    wait: yes
                    aws_access_key: "{{ AWS_ACCESS_KEY }}"
                    aws_secret_key: "{{ AWS_SECRET_KEY }}"
            register: instn
          - debug:
                  msg:
                          - "Webserver1: {{ instn.instances[0].public_ip }}"
                          - "Webserver2: {{ instn.instances[1].public_ip }}"
                          - "Webserver3: {{ instn.instances[2].public_ip }}"

          - name: Launch EC2 instance for LoadBalancer
            ec2:
                    region: "ap-south-1"
                    image: "<AMI ID>"                 # eg., ami-0e306788ff2473ccb
                    instance_type: "t2.micro"
                    count: 1
                    vpc_subnet_id: "<Subnet ID>"
                    group_id: "<Security-Group ID 2>"
                    instance_tags:
                            Name: "loadbalancer"
                            user: "ansible"
                    key_name: "<key-pair>"
                    assign_public_ip: yes
                    state: present
                    wait: yes
                    aws_access_key: "{{ AWS_ACCESS_KEY }}"
                    aws_secret_key: "{{ AWS_SECRET_KEY }}"
            register: instn
          - debug:
                  var: instn.instances[0].public_ip

Make sure that the 'Security-Group ID 1' is the one allowing TCP on Port 80 and 'Security-Group ID 2' is the one allowing TCP on Port 8080.

Run the playbook using the command as shown. Provide the password for vault when prompted.

ansible-playbook --vault-id <UniqueID>@prompt <playbook-name>.yml

Wait and watch as the EC2 instances are launched. The Public IPs assigned to each by AWS are also shown in the output. Confirm using the EC2 dashboard on the AWS Management Console.

[Screenshot: playbook output with the launched instances, and the AWS EC2 dashboard]

Also use the command 'ansible all --list-hosts' to confirm that the instances were added dynamically to the Ansible inventory. The next step is to configure the instances as required. Either create playbooks or, better, create and use roles.
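One caveat worth knowing: ec2.py caches the AWS API results (the cache age is configurable in ec2.ini), so freshly launched instances may take a moment to appear in the inventory. The script's --refresh-cache flag forces a fresh query:

```
./ec2.py --refresh-cache      # re-query AWS instead of using the cached result
ansible all --list-hosts      # the new Public IPs should now be listed
```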

The roles must be applied such that one of the four instances launched above is configured as a Load Balancer and the other three as webservers. To do this, create a new inventory file as shown below and update the inventory path in the Ansible configuration file. Use the Public IPs returned as output earlier, without repetition.

[webservers]
Public-IP-1 ansible_ssh_user=ec2-user ansible_ssh_private_key_file=/path/to/key.pem
Public-IP-2 ansible_ssh_user=ec2-user ansible_ssh_private_key_file=/path/to/key.pem
Public-IP-3 ansible_ssh_user=ec2-user ansible_ssh_private_key_file=/path/to/key.pem

[loadbalancer]
Public-IP-4 ansible_ssh_user=ec2-user ansible_ssh_private_key_file=/path/to/key.pem

Use the commands given below to create two roles in the directory set as 'roles_path' in the configuration file. The roles are also available on the same GitHub repository linked in this article.

ansible-galaxy init awswebserver
ansible-galaxy init awslb
[Screenshot: roles directory structure]

Role for webservers: 'awswebserver'

  • Mention the tasks by editing the file: /awswebserver/tasks/main.yml
---
# tasks file for awswebserver
- name: install httpd
  package:
          name: httpd
          state: present
  become: yes

- name: start httpd service
  service:
          name: httpd
          state: started
  become: yes

- name: copy webpages to deploy
  template:
          src: index.html
          dest: /var/www/html
  become: yes
  • Add a template to be used as the webpage on each of the backend servers. Create a file named index.html in /awswebserver/templates/
<h2>testing webpage</h2>
<HR>
hello from {{ ansible_hostname }}

Role for load balancer: awslb

  • Mention the tasks by editing the file: /awslb/tasks/main.yml
---
# tasks file for awslb
- name: install HAProxy software
  package:
          name: "haproxy"
          state: present
  become: yes

- name: configure HAProxy using local template
  template:
          src: "haproxy.cfg"
          dest: "/etc/haproxy/haproxy.cfg"
  notify: restart HAProxy service
  become: yes

- name: start HAProxy service
  service:
          name: "haproxy"
          state: started
  become: yes
  • If the configuration changes, the service must be restarted, which is easily done using handlers. Edit the file /awslb/handlers/main.yml and create a handler with the same name used in notify.
---
# handlers file for awslb
- name: restart HAProxy service
  service:
          name: "haproxy"
          state: restarted
  become: yes

To create the template file: 'haproxy.cfg':

  1. Install HAProxy on the local system using yum or apt-get and copy the default configuration file /etc/haproxy/haproxy.cfg into the templates directory of the awslb role.
  2. Then update the frontend and backend lines at the bottom of the file so that the frontend binds to port 8080 and the backend lists the three webservers on port 80.

[Screenshot: frontend & backend sections of haproxy.cfg]
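As a sketch of what those edited lines typically look like, the snippet below binds the frontend to port 8080 and uses a Jinja2 loop over the [webservers] inventory group so the backend list stays in sync with the instances (the section names 'main' and 'app' are placeholders, not from the stock file):

```jinja
frontend main
    bind *:8080
    default_backend app

backend app
    balance roundrobin
{% for host in groups['webservers'] %}
    server web{{ loop.index }} {{ host }}:80 check
{% endfor %}
```

Because the template module renders this on every run, adding another webserver to the inventory and re-running the playbook updates the backend list automatically, and the notify/handler pair restarts HAProxy to pick up the change.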

Both the roles are ready to use. Create a playbook as shown below to start configuration:

- hosts: webservers
  roles:
          - awswebserver

- hosts: loadbalancer
  roles:
          - awslb

Run the playbook using the command given below.

ansible-playbook playbook-name.yml

Ansible configures the instances under the [webservers] group in the inventory as webservers, while the one under [loadbalancer] is configured as a load balancer. The output is shown below:

[Screenshot: playbook run output]

Use the Public IP of the load-balancer instance (on port 8080) to view the webpage deployed on the webservers. Refresh a few times to see the three webpages, each with different content showing that server's 'ansible_hostname'.

[Screenshots: the webpage served from each of the three backend hostnames]

So, finally, a Load Balancer has been deployed with Ansible and configured to balance load across three webservers, all running on the AWS Cloud and all launched and configured by Ansible as well.

Visit the GitHub repository given below to download the roles or the playbooks used in this project.

Thank you!
