AWS DevOps Lift and Shift Project
Thanks to the mentors @Imran Teli and @TrainWithShubham


Hosting a multi-tier web application stack on the AWS cloud for production (Lift and Shift)

AWS services used:

1. EC2 instances (t2.micro)

2. ACM (AWS Certificate Manager)

3. Load Balancer and Target Groups

4. Route 53

5. Auto Scaling Group, Launch Template, AMI

6. Security Groups

7. IAM Roles and Users

8. S3 bucket

Objectives of Using AWS Cloud:

1. Pay-as-you-go model.

2. Scalability

3. Easy to manage

4. Flexible infrastructure

Infrastructure:

Steps:

1. First, created three security groups: one for the Application Load Balancer (vprofile_lb_sg), one for the app instance (vprofile_app_sg), and one for the backend instances (vprofile_backend_sg).

2. Created a key pair for the EC2 instances.

3. Created four EC2 instances: one for RabbitMQ (vprofile_rmq01), one for the database (vprofile_db01), one for Memcached (vprofile_mc01), and one for the frontend (vprofile_app01). The three backend instances use the CentOS 9 AMI with the t2.micro instance type, while the frontend instance uses the Ubuntu AMI with the same t2.micro type.

4. Purchased a domain at GoDaddy and set it up in Amazon Route 53: created a hosted zone, added records for the three backend EC2 instances using their private IPs, and updated the NS records on the GoDaddy side with the name servers from Route 53.

5. Requested a certificate in AWS Certificate Manager and validated it by creating the DNS validation record with the CNAME name and CNAME value provided by ACM.

6. Cloned the AWSlift&Shift repository from the GitHub URL provided by Imran Teli and made the required changes in the application.properties file.

7. Created an S3 bucket from the AWS console and uploaded the build artifacts to it. To upload the artifacts, created an IAM user with S3 full access and used its credentials to configure the AWS CLI. Also created an IAM role with S3 full access and attached it to the vprofile_app01 instance so the instance can read from the bucket.

8. SSHed into the frontend instance (vprofile_app01), copied the artifacts from the S3 bucket to the target folder, i.e. /tmp/, and verified the contents using the cat command.

9. Once the data was verified in the target folder, checked the site by browsing to the instance's IP on port 8080, where Tomcat listens.

Since all the functionalities (database, RabbitMQ, Memcached) were working fine when the instance was accessed directly, we moved on to the load balancer. First, we created a target group and registered the Tomcat/frontend instance (vprofile_app01) with it, and then created an Application Load Balancer using the load balancer security group created earlier.

Verified the site using the DNS name assigned to the load balancer. If required, a CNAME record pointing to it can also be created in Route 53.

10. Finally, created an image (AMI) of the vprofile_app01 instance, used that image in a launch template, and then created an Auto Scaling group from the launch template, specifying the minimum, desired, and maximum capacity for the instances (a sketch of this step follows below).
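
A minimal boto3 sketch of this last step, assuming placeholder IDs, ARNs, subnets, and capacity values rather than the project's real ones:

```python
import boto3

# All IDs, names, and ARNs below are placeholders, not values from the project.
ec2 = boto3.client("ec2", region_name="us-east-1")
autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# 1. Bake an AMI from the configured Tomcat instance (vprofile_app01).
image = ec2.create_image(InstanceId="i-0123456789abcdef0", Name="vprofile-app-ami")
ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])

# 2. Create a launch template that reuses the AMI, instance type, key pair, and app security group.
ec2.create_launch_template(
    LaunchTemplateName="vprofile-app-lt",
    LaunchTemplateData={
        "ImageId": image["ImageId"],
        "InstanceType": "t2.micro",
        "KeyName": "vprofile-prod-key",
        "SecurityGroupIds": ["sg-app-placeholder"],
    },
)

# 3. Create the Auto Scaling group from the launch template, registered with the existing target group.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="vprofile-app-asg",
    LaunchTemplate={"LaunchTemplateName": "vprofile-app-lt", "Version": "$Latest"},
    MinSize=1,
    DesiredCapacity=1,
    MaxSize=4,
    TargetGroupARNs=["arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/vprofile-tg/0123456789abcdef"],
    VPCZoneIdentifier="subnet-0aaa1111,subnet-0bbb2222",
)
```

Registering the group with the target group means any instance launched by the Auto Scaling group is placed behind the load balancer automatically.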

Process:

Users access the website through its URL, which points to the load balancer; the corresponding DNS entry is maintained at GoDaddy.

The load balancer is protected by the security group created above, which allows HTTP and HTTPS traffic. The certificate for HTTPS encryption is managed in ACM (AWS Certificate Manager).
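
A minimal sketch of that setup with boto3, assuming placeholder values for the security group ID, load balancer ARN, target group ARN, and ACM certificate ARN:

```python
import boto3

ec2 = boto3.client("ec2")
elbv2 = boto3.client("elbv2")

# Allow HTTP and HTTPS from anywhere into the load balancer security group (placeholder ID).
ec2.authorize_security_group_ingress(
    GroupId="sg-lb-placeholder",
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    ],
)

# HTTPS listener on the ALB, terminating TLS with the ACM certificate (placeholder ARNs).
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/vprofile-lb/0123456789abcdef",
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": "arn:aws:acm:us-east-1:111122223333:certificate/placeholder"}],
    DefaultActions=[{"Type": "forward",
                     "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/vprofile-tg/0123456789abcdef"}],
)
```

Terminating TLS at the listener keeps certificate management in ACM, while the instances behind the load balancer continue to serve plain HTTP on port 8080.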

The load balancer routes traffic to the Auto Scaling group. The app security group allows traffic on port 8080 from the load balancer security group, SSH on port 22, and port 8080 from the instance's own IP.
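
As a sketch, with placeholder group IDs and example addresses standing in for the real values, those app security group rules could be expressed like this:

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder security group IDs and CIDRs; substitute the ones created earlier.
ec2.authorize_security_group_ingress(
    GroupId="sg-app-placeholder",
    IpPermissions=[
        # Port 8080 only from the load balancer security group (SG-to-SG reference).
        {"IpProtocol": "tcp", "FromPort": 8080, "ToPort": 8080,
         "UserIdGroupPairs": [{"GroupId": "sg-lb-placeholder"}]},
        # Port 8080 from the instance's own IP (example address).
        {"IpProtocol": "tcp", "FromPort": 8080, "ToPort": 8080,
         "IpRanges": [{"CidrIp": "172.31.20.30/32"}]},
        # SSH from a trusted admin address (example address).
        {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
         "IpRanges": [{"CidrIp": "203.0.113.10/32"}]},
    ],
)
```

Referencing the load balancer security group instead of an IP range keeps the rule valid even if the load balancer's addresses change.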

Depending on the load, the Auto Scaling group scales the instance capacity out or in to match the incoming requests.
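
The article does not say which metric drives the scaling, so as an assumption this sketch attaches a target-tracking policy on average CPU to the Auto Scaling group (group name and target value are placeholders):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Scale out when average CPU rises above the target and back in when it falls below.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="vprofile-app-asg",   # placeholder group name
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 50.0,                   # example target: ~50% average CPU
    },
)
```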

The application depends on the backend instances vprofile_mc01, vprofile_db01, and vprofile_rmq01. Their private IPs are registered in Route 53, and the Tomcat instance reaches the backend services by the names defined in Route 53 and in the project's application.properties file.
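
A sketch of those Route 53 entries with boto3; the hosted zone ID, record names, and private IPs are placeholders standing in for the values used in the project:

```python
import boto3

route53 = boto3.client("route53")

# Placeholder names and private IPs for the three backend instances.
backends = {
    "db01.vprofile.example.com":  "172.31.10.11",   # vprofile_db01
    "mc01.vprofile.example.com":  "172.31.10.12",   # vprofile_mc01
    "rmq01.vprofile.example.com": "172.31.10.13",   # vprofile_rmq01
}

route53.change_resource_record_sets(
    HostedZoneId="Z0PLACEHOLDER",
    ChangeBatch={
        "Changes": [
            {"Action": "UPSERT",
             "ResourceRecordSet": {"Name": name, "Type": "A", "TTL": 300,
                                   "ResourceRecords": [{"Value": ip}]}}
            for name, ip in backends.items()
        ]
    },
)
```

Because application.properties refers to these names rather than raw IPs, a backend instance can be replaced without rebuilding the application; only the Route 53 record needs updating.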

The backend security group allows traffic from the app security group on the service-specific ports, for example 3306 for MySQL and 11211 for Memcached. Since the backend services also interact with each other, one more rule allows all traffic from within the backend security group itself.
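
A sketch of those backend rules with boto3, using placeholder group IDs; the RabbitMQ port (5672) is an assumption, since the article only names 3306 and 11211 explicitly:

```python
import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-backend-placeholder",
    IpPermissions=[
        # MySQL from the app security group.
        {"IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
         "UserIdGroupPairs": [{"GroupId": "sg-app-placeholder"}]},
        # Memcached from the app security group.
        {"IpProtocol": "tcp", "FromPort": 11211, "ToPort": 11211,
         "UserIdGroupPairs": [{"GroupId": "sg-app-placeholder"}]},
        # RabbitMQ from the app security group (assumed default port 5672).
        {"IpProtocol": "tcp", "FromPort": 5672, "ToPort": 5672,
         "UserIdGroupPairs": [{"GroupId": "sg-app-placeholder"}]},
        # All traffic between the backend instances themselves (self-referencing rule).
        {"IpProtocol": "-1",
         "UserIdGroupPairs": [{"GroupId": "sg-backend-placeholder"}]},
    ],
)
```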


Special thanks to the mentors Imran Teli and TrainWithShubham.

