Auto-configuration of ASG-launched instances as web servers behind the load balancer (reverse proxy using HAProxy)

Task Description

- Launch AWS instances with the help of Ansible.

- Retrieve the public IP allocated to each launched instance.

- With the retrieved public IPs, configure the load-balancing setup using HAProxy.


Ansible is primarily a configuration-management tool rather than a provisioning tool, but to demonstrate that it can also provision infrastructure, I used Ansible here to launch the instances in AWS:

3 instances for the web servers

1 instance for the load balancer (HAProxy)

[root@master LB-demo]# cat ec2-lb-provisioner.yaml
- hosts: localhost
  tasks:
  - name: Provisioning OS from AWS (EC2)
    ec2:
      key_name: "ohio-afri"
      instance_type: "t2.micro"
      image: "ami-0a54aef4ef3b5f881"
      wait: yes
      count: 3
      vpc_subnet_id: "subnet-27f1936b"
      assign_public_ip: yes
      region: "us-east-2"
      state: present
      group_id: "sg-0e8d8587ed3987865"
      instance_tags:
        provisioner: ansible
        type: web-server
    register: ec2os




  - name: Provisioning OS from AWS (LB)
    ec2:
      key_name: "ohio-afri"
      instance_type: "t2.micro"
      image: "ami-0a54aef4ef3b5f881"
      wait: yes
      count: 1
      vpc_subnet_id: "subnet-27f1936b"
      assign_public_ip: yes
      region: "us-east-2"
      state: present
      group_id: "sg-0e8d8587ed3987865"
      instance_tags:
        provisioner: ansible
        type: load-balancer
    register: lbos




  - name: Waiting for SSH
    wait_for:
      host: "{{ item.public_ip }}"
      port: 22
      state: started
    with_items:
      - "{{ ec2os.instances }}"
      - "{{ lbos.instances }}"


After executing this playbook, four instances are launched: three tagged as web servers and one tagged as the load balancer.



I then created two separate roles, one for the load-balancer configuration and one for the web-server configuration.

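The article's screenshots show the role directory trees; as a hedged sketch of what each role's tasks/main.yml might contain (package names, file paths, and the handler name are my assumptions, not taken from the article):

```
# roles/web_lb/tasks/main.yml  -- representative sketch
- name: Install the Apache web server
  package:
    name: httpd
    state: present

- name: Deploy the web page from the template
  template:
    src: index.html
    dest: /var/www/html/index.html

- name: Start and enable the web server
  service:
    name: httpd
    state: started
    enabled: yes

# roles/loadbalancer/tasks/main.yml  -- representative sketch
- name: Install HAProxy
  package:
    name: haproxy
    state: present

- name: Render the HAProxy configuration from the template
  template:
    src: haproxy.cfg
    dest: /etc/haproxy/haproxy.cfg
  notify: restart haproxy   # assumed handler in roles/loadbalancer/handlers/main.yml

- name: Start and enable HAProxy
  service:
    name: haproxy
    state: started
    enabled: yes
```

Using the `template` module (rather than `copy`) is what lets the HAProxy config pick up the current set of web-server IPs on every run.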


Here the templates directory contains the dynamic files, whose variables are filled in with the current values each time the playbook is executed.

Both templates are plain configuration files containing Jinja2 variables that Ansible substitutes at deploy time.
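The published article shows the actual template contents as screenshots. As a hedged reconstruction, the HAProxy template most likely loops over the web-server group from the dynamic inventory (the group name and server naming below are my assumptions):

```
# templates/haproxy.cfg (Jinja2) -- representative sketch, not the article's exact file
frontend main
    bind *:8080                  # the LB listens on port 8080
    default_backend app

backend app
    balance roundrobin
{% for host in groups['tag_type_web_server'] %}
    server web{{ loop.index }} {{ host }}:80 check
{% endfor %}
```

Because the `for` loop iterates over whatever hosts carry the web-server tag at run time, re-rendering this template automatically registers newly launched instances with the load balancer.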

Then I created one playbook that applies each role to the correspondingly tagged instances, one for the web servers and one for the load balancer:

[root@master ansible]# cat role-test-webserver-loadbalancer.yaml


- hosts: tag_type_web_server
  remote_user: ec2-user
  become: yes
  roles:
    - role: web_lb


- hosts: tag_type_load_balancer
  remote_user: ec2-user
  become: yes
  roles:
    - role: loadbalancer


After executing this playbook, both configurations are deployed on their respective nodes.

Output:


When I open the load balancer's IP in the browser on port 8080, HAProxy routes the traffic to port 80 on the web-server instances, so here we achieve the reverse proxy.

Whenever load on the instances increases, it triggers the Auto Scaling group to scale out. To keep the configuration current, I use crontab to re-run this playbook at a 5-minute interval, and I use the dynamic inventory concept to fetch the instance IPs filtered by tag.
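Concretely, the periodic run can be wired up with Ansible's EC2 dynamic inventory script and a cron entry; the file paths below are assumptions for illustration, not the article's exact setup:

```
# /etc/crontab entry (paths assumed) -- re-run the playbook every 5 minutes
*/5 * * * * root ansible-playbook -i /etc/ansible/ec2.py /root/ansible/role-test-webserver-loadbalancer.yaml
```

The `ec2.py` dynamic inventory script queries AWS on each run and exposes groups such as `tag_type_web_server`, which is how the playbook's `hosts:` patterns pick up instances launched after the initial provisioning.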

So any new instance launched by the ASG gets auto-configured as a web server and is also registered with the load balancer.




Hence the Magic is done !!!! Hurray

Thanks for reading.......
