ARTH-Task 12.2


Task Description

12.1 Use an Ansible playbook to configure a reverse proxy (HAProxy) and update its configuration file automatically each time a new managed node (configured with the Apache web server) joins the inventory. We did this in the last task; now we have to do it over the AWS cloud.

12.2 Configure the same setup as 12.1 over AWS, using instances launched there.


In this task, we will configure HAProxy as a reverse proxy and use it as a load balancer, the same way Google, Facebook, and other companies do. This solves a real industry use case: when we launch a web server it is not practical to hand its IP address to clients, and a company has to deploy multiple web servers for the same site so that it stays reachable even if one of them fails. With many servers and many IPs, we cannot give clients a range of addresses and ask them to try each one; instead we put a load balancer, implemented with a proxy tool, in front of the servers. The extra twist in this task is that the whole setup is deployed over the AWS cloud with the help of Ansible, which requires a few additional pieces that we cover below.

What is HAProxy?

HAProxy is free, open-source software that provides a high availability load balancer and proxy server for TCP and HTTP-based applications that spread requests across multiple servers. It is written in C and has a reputation for being fast and efficient (in terms of processor and memory usage).

HAProxy is used by a number of high-profile websites including GoDaddy, GitHub, Bitbucket, Stack Overflow, Reddit, Slack, Speedtest.net, Tumblr, Twitter, and Tuenti, and it is used in the OpsWorks product from Amazon Web Services.

What is Reverse-Proxy?

In computer networks, a reverse proxy is a type of proxy server that retrieves resources on behalf of a client from one or more servers. These resources are then returned to the client, appearing as if they originated from the reverse proxy server itself. Unlike a forward proxy, which is an intermediary for its associated clients to contact any server, a reverse proxy is an intermediary for its associated servers to be contacted by any client. In other words, a proxy is associated with the client(s), while a reverse proxy is associated with the server(s); a reverse proxy is usually an internal-facing proxy used as a 'front-end' to control and protect access to a server on a private network.


Ansible

Ansible is an open-source software provisioning, configuration management, and application-deployment tool enabling infrastructure as code. It runs on many Unix-like systems and can configure both Unix-like systems as well as Microsoft Windows. It includes its own declarative language to describe system configuration. Ansible was written by Michael DeHaan and acquired by Red Hat in 2015. Ansible is agentless, temporarily connecting remotely via SSH or Windows Remote Management (allowing remote PowerShell execution) to do its tasks.

AWS -

Amazon Web Services (AWS) is a subsidiary of Amazon providing on-demand cloud computing platforms and APIs to individuals, companies, and governments, on a metered pay-as-you-go basis. These cloud computing web services provide a variety of basic abstract technical infrastructure and distributed computing building blocks and tools. One of these services is Amazon Elastic Compute Cloud (EC2), which allows users to have at their disposal a virtual cluster of computers, available all the time, through the Internet.

CLOUD-COMPUTING -

Cloud computing is the on-demand availability of computer system resources, especially data storage (cloud storage) and computing power, without direct active management by the user. The term is generally used to describe data centers available to many users over the Internet. Large clouds, predominant today, often have functions distributed over multiple locations from central servers. If the connection to the user is relatively close, it may be designated an edge server.


Pre-Requisites

To do this task we need the following setup:

  • One server running the HAProxy load balancer (this will be on the AWS cloud).

  • Two servers running a web server (these will also be on the cloud).

  • An Ansible controller node, from which the whole setup is driven; this will be on our local VM.

  • The access key and secret key for our AWS account; since we do not get an access key from the root user here, we create a new user via the IAM service and generate keys for it.

Hands-on practical for the task

In this particular task we set up a load balancer with the help of the HAProxy tool. This mirrors a real industry use case: companies like Google and Facebook expose a single simple URL to the whole world, while behind it millions of servers handle the traffic at the same time, and that single entry point is exactly what a proxy, or load balancer, provides. Here we automate the whole setup with Ansible, so first of all we prepare an Ansible controller node from which the load balancer setup is driven. Conceptually, we connect all our web servers behind a proxy server; a DNS name can then be mapped to the proxy's IP so clients reach one friendly URL, and in the backend the load balancer distributes each client request to one of the web servers so the site is always served.

In this task our Ansible controller node lives on our local VM, where we install and configure Ansible and make it ready to act as the controller. Our managed nodes, however, will be on the cloud, which creates a special challenge: we would have to look up the IP of each instance and write it into the inventory by hand, which is tedious manual work. To overcome this there is the concept of a dynamic inventory. Instead of writing the inventory ourselves, a program fetches the IPs from our AWS account and exposes them as an inventory; it is built while the playbook runs and disappears once the run is finished, so nothing has to be maintained manually.

The well-known program used for this is written in Python and uses the boto library as a backend to call the AWS API and retrieve the IPs. Everything we want it to fetch is described in an .ini source file. These two files, ec2.py and ec2.ini, differ between Ansible versions, so download the ones matching the version you have on your controller node.
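Just as a rough sketch of how that wiring would look (the path below is a placeholder, and this is not what the final playbook in this article uses, since it builds its inventory in memory with add_host instead), the downloaded script can itself be given as the inventory source in the Ansible config file, with ec2.ini kept beside it:

[defaults]
# hypothetical path; ec2.py must be executable and ec2.ini must sit in the same folder
inventory = /opt/dynamic-inventory/ec2.py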

One important point to note: since we have no managed nodes in our inventory yet, the local inventory only lists our localhost. The plays that launch the EC2 instances and do the other provisioning therefore run on localhost; only after the instances exist is the in-memory inventory built, and then the remaining plays run against it. We first launch the web server nodes and then the load balancer node.

So our local inventory file looks like this:

[localhost]

192.168.1.102 ansible_user=root ansible_ssh_pass=redhat ansible_connection=ssh

After writing the inventory file we also have to adjust the Ansible configuration so that we can reach the AWS cloud and perform our tasks there, such as creating instances and configuring them.

[defaults]
inventory = /root/ansible/vsftpd/inventory
host_key_checking=false
remote_user=ec2-user
ask_pass=false
private_key_file=/root/Downloads/arth.pem


[privilege_escalation]
become=true
become_method=sudo
become_user=root
become_ask_pass=false

Here we make a few changes: we create a separate configuration file and a separate static (not dynamic) inventory for this setup containing our localhost. The remote_user is set to ec2-user because we do not have a root password on the instances and the key we have belongs to ec2-user, so that is the user we log in as. All tasks run with sudo power, which is what the privilege_escalation section configures, and we also specify the private_key_file path so Ansible knows which key to pick up and use for the SSH login.

After all this we also write a small piece of code that tells us whether our setup is working: it simply prints the IP of the particular web server that answered. We do this only to demonstrate that the load balancing is happening; in reality every web server would serve the exact same site, identical down to the last bit.

cat index.php
<pre>
<?php
// print the network interface details so we can see which backend server answered
print `/usr/sbin/ifconfig enp0s3`;
?>
</pre>

We use PHP to produce this output, so we just need PHP installed on our web servers. (Note: the interface name inside the backticks must match the instance; on a typical EC2 AMI it is usually eth0 rather than enp0s3.)

We also have to prepare the load balancer configuration file, which is the heart of this setup; if this config file is wrong, the reverse proxy simply cannot work. To make this file dynamic we take the help of Jinja templates.

If we configured the HAProxy file without the help of Jinja, we would have to go into that file manually and add the name of every new server, which kills efficiency. We want to automate this too, so that we only have to add an entry to the inventory and Ansible does all the remaining work for us. At large scale it is simply not feasible to come back again and again to edit the file whenever a server is added or removed, so for this task we use Jinja templating.

What is jinja templating?

Jinja is a web template engine for the Python programming language. It was created by Armin Ronacher and is licensed under a BSD License. Jinja is similar to the Django template engine but provides Python-like expressions while ensuring that the templates are evaluated in a sandbox. It is a text-based template language and thus can be used to generate any markup as well as source code.

The Jinja template engine allows customization of tags, filters, tests, and globals. Also, unlike the Django template engine, Jinja allows the template designer to call functions with arguments on objects. Jinja is Flask's default template engine and it is also used by Ansible and Trac.

Here is our config file without Jinja first (only the relevant part is shown, not the whole file):

frontend main
    bind *:8080
    acl url_static       path_beg       -i /static /images /javascript /stylesheets

.
.
.
.
.
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend app
    balance     roundrobin


    server app1 192.168.43.149:80 check
    server app2 192.168.43.243:80 check

This is how the HAProxy file looks when everything is done manually: we would have to go in ourselves to add or remove server lines. To overcome this we use Jinja templating together with Ansible's template module, which has the power to interpret the Jinja keywords and render them into the final file while copying it. For that we first place our templated HAProxy file in the same folder as the playbook we run, and then edit that HAProxy config file so the IPs of the web servers are filled in dynamically at copy time.

In that Jinja file we give the name of our inventory group, so the loop retrieves every IP in that group and writes a server line for each one; loop.index is used to number the app entries automatically, however many are required. Let's see the Jinja version of the HAProxy config file.

frontend main
    bind *:8080
    acl url_static       path_beg       -i /static /images /javascript /stylesheets


.
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend app
    balance     roundrobin


{% for i in groups['webserver'] %}
    server app{{ loop.index }} {{ i }}:80 check
{% endfor %}

Here we can see some special symbols; they are Jinja symbols, each with its own purpose: {% ... %} is used for conditions and iteration, while {{ ... }} is used to print a value. The bind lines stay the same because they only need to be written once, exactly as in the manual file. Below them we use a Jinja for loop, which keeps running until the last item is reached. groups is a magic variable in Ansible that exposes the inventory groups, so the loop walks through our webserver group and pulls out every IP present in it; in that group we only put the IPs of servers that are configured as web servers and are ready to sit behind the load balancer.
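If we want to check what this loop will iterate over before rendering the template, one quick optional check (not part of the original setup) is to print the group from a play on localhost; once add_host has filled the webserver group, the output is simply the list of public IPs:

- hosts: localhost
  gather_facts: no
  tasks:
          - name: show the members of the webserver group
            debug:
                    var: groups['webserver']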

Our HAProxy uses a round-robin algorithm to distribute the load across the servers behind the LB.
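Round robin is only the algorithm chosen here; HAProxy supports several others, and changing it is a one-line edit in the backend section, for example (an optional variation, not used in this setup):

backend app
    balance     leastconn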

In the playbook we use the following Ansible modules:

Template -

Renders a Jinja2-templated file on the controller and copies the result to the managed node. This module is flagged as a stable interface, which means the maintainers guarantee that no backward-incompatible interface changes will be made.

Copy -

Used to copy a file from one place to another, typically from the controller to the managed nodes.

Package -

We can install software with this module

ec2 -

  • Creates or terminates ec2 instances.
  • state=restarted was added in 2.2

add_host -

Adds hosts (and groups) to the in-memory inventory during a playbook run. This module is part of ansible-base and included in all Ansible installations. In most cases, you can use the short module name add_host even without specifying the collections: keyword. Despite that, we recommend you use the FQCN for easy linking to the module documentation and to avoid conflicting with other collections that may have the same module name.

wait_for -

  • You can wait for a set amount of time timeout, this is the default if nothing is specified.
  • Waiting for a port to become available is useful for when services are not immediately available after their init scripts return which is true of certain Java application servers. It is also useful when starting guests with the virt module and needing to pause until they are ready.
  • This module can also be used to wait for a regex match a string to be present in a file.

Here we also use one more Ansible concept: we store our access key and secret key inside an encrypted file and then import that file into our main playbook at run time. This concept is called Ansible Vault.

Encrypting content with Ansible Vault

Ansible Vault encrypts variables and files so you can protect sensitive content such as passwords or keys rather than leaving it visible as plaintext in playbooks or roles. To use Ansible Vault you need one or more passwords to encrypt and decrypt the content. If you store your vault passwords in a third-party tool such as a secret manager, you need a script to access them. Use the passwords with the ansible-vault command-line tool to create and view encrypted variables, create encrypted files, encrypt existing files, or edit, re-key, or decrypt files. You can then place encrypted content under source control and share it more safely.
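For reference, before encryption mycred1.yml is just a tiny YAML file holding the two variables the playbook later references; the values below are placeholders, and the file is created and encrypted in one step with ansible-vault create mycred1.yml:

# placeholder values; the real keys come from the IAM user created earlier
access_key: AKIAXXXXXXXXXXXXXXXX
secret_key: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx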

[Screenshot: the vault password file and the Ansible config file, all kept in the same folder]

Now let's come to the main playbook that configures this whole setup using all the information given above.

- hosts: localhost
  vars_files:
          -     mycred1.yml

This means the play runs on localhost, because the managed nodes do not exist yet; they are created while the playbook runs.

vars_files pulls in our vault-protected file so the credentials can be used in the play that creates the instances.

[Screenshot: the task that launches the EC2 instances running on localhost without any issue]

After this we declare some variables at the beginning of the play so they can be used in the later sections of the playbook.

  vars:
          region: ap-south-1
          subnet: subnet-b6c8dfde
          sg: arth_sg
          type: t2.micro
          number: 2

Now we can launch the instances over the cloud with the help of the playbook.

  tasks:
  - name: launch ec2-instance
    ec2:
            key_name: arth
            instance_type: "{{ type }}"
            image: ami-0a9d27a9f4f5c0efc
            wait: true
            group: "{{ sg }}"
            count: "{{ number }}"
            vpc_subnet_id: "{{ subnet }}"
            assign_public_ip: yes
            region: "{{ region }}"
            state: present
            aws_access_key: "{{ access_key }}"
            aws_secret_key: "{{ secret_key }}"
            instance_tags:
                    Name: webserver
    register: ec2

This section of the play creates the instances on AWS: we give the key_name, the instance_type we want, the security group, the subnet, the AMI ID and everything else required to launch an instance. The output of this task is then stored in a register, called ec2 here.
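If we are unsure what the register actually holds, a small optional task (not in the original playbook) can print the public IP of every launched instance straight from it:

  - name: show public ips stored in the ec2 register
    debug:
            msg: "{{ item.public_ip }}"
    loop: "{{ ec2.instances }}"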

After this we have to add the IPs of the new instances to the inventory, so we use the add_host module. wait_for is used because the further configuration needs an SSH connection to the newly created instances, and SSH only becomes available after some time, so we wait for it.

  - name: add public ip of newinstances
    add_host:
            hostname: "{{ item.public_ip }}"
            groupname:  webserver
    loop: "{{ ec2.instances }}"
  - name: wait for ssh come
    wait_for:
            host: "{{ item.public_dns_name }}"
            port: 22
            state: started
    loop: "{{ ec2.instances }}"

We put both tasks in a loop so that if we have more instances, the same logic works for all of them.

Up to here the instances are created and ready for SSH. Now we have to write a different play, because the next tasks run on the new operating systems rather than on localhost. We configure the website first, so the webserver play comes next.

- hosts: webserver
  gather_facts: no

Now we list our tasks:

 tasks:
          - name: install php
            package:
                    name: "php-7.2.11"
                    state: present


          - name: install httpd
            package:
                    name: "httpd"
                    state: present


          - name: copy file
            copy:
                    src: "index.php"
                    dest: "/var/www/html/index.php"
          - name: start service
            service:
                    name: "httpd"
                    state: restarted
                    enabled: yes

In this play we install PHP and httpd: httpd provides the web server service and PHP gives us the interpreter that runs the code written in PHP, so we install both. We then copy our code to the destination and finally start the httpd service, so our web server is running over AWS, fully automated with Ansible.
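As an optional sanity check (an extra play, not part of the original setup), we can ask each new web server for the page directly from the controller node and confirm it answers on port 80:

- hosts: localhost
  gather_facts: no
  tasks:
          - name: confirm every webserver answers on port 80
            uri:
                    url: "http://{{ item }}/index.php"
                    return_content: yes
            loop: "{{ groups['webserver'] }}"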

[Screenshots: the web server service running successfully]


Now after this we have to do our load balancer setup.

- hosts: localhost
  vars_files:
          -     mycred1.yml

We again run this on localhost because we again have to provision an AWS instance, and we again import the vault file that holds the credentials.

 vars:
          region: ap-south-1
          subnet: subnet-b6c8dfde
          sg: arth_sg
          type: t2.micro
          number: 1

Declaration of variables before starting our tasks.

  tasks:
  - name: launch ec2-instance
    ec2:
            key_name: arth
            instance_type: "{{ type }}"
            image: ami-0a9d27a9f4f5c0efc
            wait: true
            group: "{{ sg }}"
            count: "{{ number }}"
            vpc_subnet_id: "{{ subnet }}"
            assign_public_ip: yes
            region: "{{ region }}"
            state: present
            aws_access_key: "{{ access_key }}"
            aws_secret_key: "{{ secret_key }}"
            instance_tags:
                    Name: mylb
    register: ec2

We launch one more instance over AWS with the name tag mylb; this one works as our load balancer. Onto it we copy the proxy config file we edited above, using the template module rather than a plain copy so the Jinja templating is rendered and the dynamic web server IPs are filled in. After add_host, this configuration is done over SSH to the instance.

  - name: add public ip of newinstances
    add_host:
            hostname: "{{ item.public_ip }}"
            groupname:  mylb
    loop: "{{ ec2.instances }}"
  - name: wait for ssh come
    wait_for:
            host: "{{ item.public_dns_name }}"
            port: 22
            state: started
    loop: "{{ ec2.instances }}"



[Screenshot: the proxy config file before being copied to the server, still in raw Jinja template form]

[Screenshot: the same file after being copied to the server via the template module, with the IPs rendered]


After all this we have to run our tasks on mylb to configure it as the load balancer.

- hosts: mylb
  gather_facts: no
  tasks:
          - name: install haproxy
            package:
                    name: "haproxy"
                    state: present
          - name: copy config file
            template:
                    src: "haproxy.cfg"
                    dest: "/etc/haproxy/haproxy.cfg"


          - name: start service
            service:
                    name: "haproxy"
                    state: restarted

Here we install HAProxy, copy its config file onto the instance, and start its service; after this our load balancer is working fine without any issue.
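One possible refinement (not what the playbook above does) is to restart HAProxy through a handler, so the service is only bounced when the rendered configuration actually changes:

- hosts: mylb
  gather_facts: no
  tasks:
          - name: install haproxy
            package:
                    name: "haproxy"
                    state: present
          - name: copy config file
            template:
                    src: "haproxy.cfg"
                    dest: "/etc/haproxy/haproxy.cfg"
            notify: restart haproxy
  handlers:
          - name: restart haproxy
            service:
                    name: "haproxy"
                    state: restarted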

[Screenshot: the playbook run completing successfully, with the whole setup ready without any issue]

This was really a good task: we created web servers and a load balancer over the cloud and started their services purely through Ansible automation. Once the playbook and setup are written, nothing has to be done manually; we just need to run the playbook.

The proof that everything is OK:

[Screenshot: AWS console showing the newly created instances]

As we can see in this picture, Ansible created 2 webserver instances and 1 mylb instance over the AWS cloud successfully, doing all the provisioning and OS configuration work automatically for both kinds of server at the same time.

[Screenshot: browsing to the load balancer and getting a response from one of the backend web servers]

As we see in the pictures above, we browse to 13.233.196.61 but the response comes back from another IP automatically; that is exactly what the task asked for.

HENCE PROVED: OUR PRACTICAL IS SUCCESSFULLY COMPLETED WITHOUT ANY ISSUE.

THANKS FOR READING .....HAVE A NICE DAY



