AWS PROJECT AND TERRAFORM

Infrastructure as Code with Terraform

The goal of this guide is to provide an overview of a complete project and to serve as a step-by-step guide for DevOps professionals who are interested in starting a similar project. By following the instructions and explanations provided in this guide, you'll gain a better understanding of how Terraform can be used to manage infrastructure as code and streamline the process of deploying and maintaining complex systems.


Throughout this guide, I provide explanations of components and tools used in the project, along with step-by-step instructions for deploying and managing the infrastructure. By the end of this guide, you'll have a better understanding of how Terraform can be used to manage infrastructure for a complex system, and you'll be well-equipped to start your own projects using this powerful tool.


Purpose of this project

The purpose of this project is to provide a secure, highly available infrastructure for deploying web applications. The infrastructure is defined as code using the Terraform domain-specific language (DSL). The code is stored in version control, and the infrastructure is provisioned using the Terraform command-line interface (CLI). The infrastructure state is stored in an S3 bucket using a Terraform backend.


By defining your infrastructure as code using Terraform, you'll be able to easily version and track changes to your infrastructure over time. Additionally, storing your infrastructure state in an S3 bucket using Terraform's backend helps ensure that your infrastructure remains consistent and recoverable even if there are any issues or failures.


By using a highly available infrastructure, you'll be able to minimize downtime and provide a reliable experience for your users. And by using secure practices throughout your infrastructure, you'll be able to help protect against potential security threats.


Steps:


  • Setup Terraform with Backend
  • Setup VPC [Secure & HA]
  • Provision Beanstalk Environment
  • Provision Backend Services

RDS

ElastiCache

Amazon MQ (ActiveMQ)

  • Security group, key pairs, Bastion host



So what are you waiting for? Let's get started!

Phase 1 - Open a repository on GitHub


In order to track changes to the code, we will open a repository on GitHub.

Copy the HTTPS clone URL and, in IntelliJ, open a new project from version control.
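
If you prefer the command line over IntelliJ, you can also clone the repository directly (the URL below is a placeholder for your own repository):

git clone https://github.com/<your-user>/<your-repo>.git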


Phase 2- Install terraform

The easiest way to install Terraform on Windows is with Chocolatey.

To install Terraform using Chocolatey (choco) package manager, follow these steps:

  • Open PowerShell as an administrator.
  • Install Chocolatey by running the following command:

Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))

  • Once Chocolatey is installed, install Terraform by running the following command:

choco install terraform

  • Wait for the installation to complete. Once it is finished, Terraform is installed on your system.
  • You can verify that Terraform is installed by opening a new command prompt and running the following command:

terraform -version

This should display the installed version of Terraform.


Phase 3 - S3 bucket (to store Terraform's state file)

We will create a file that defines the S3 backend in which we will store the state of our infrastructure.

We will create an S3 bucket in the AWS console and create a folder inside it.
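
If you prefer the AWS CLI over the console, the bucket can be created (and, optionally, versioned so older state files stay recoverable) roughly like this; bucket names are globally unique, so replace the name with your own:

aws s3api create-bucket --bucket terraform-state-770 --region us-east-1

aws s3api put-bucket-versioning --bucket terraform-state-770 --versioning-configuration Status=Enabled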

We will put the following code into the file:


terraform {
  backend "s3" {
    bucket = "terraform-state-770"
    key    = "terraform/backend"
    region = "us-east-1"
  }
}


This Terraform configuration block defines a backend configuration for the "s3" backend type, which is used to store Terraform's state file that tracks the current state of your infrastructure.

The backend block specifies the name of the S3 bucket where Terraform will store its state file, which in this case is named "terraform-state-770".

The key parameter specifies the path of the state file within the S3 bucket, which in this case is "terraform/backend" (the "terraform" folder we created in the S3 bucket, with "backend" as the object name).

The region parameter specifies the AWS region where the S3 bucket is located, in this case "us-east-1". The region matters because it affects the availability, durability, and latency of S3 storage.

By storing the Terraform state file in an S3 backend, it can be shared between team members and persisted between Terraform runs, ensuring consistency and reducing the risk of human error. The entire configuration block is wrapped in the terraform block, which is used to define global settings that apply to the entire Terraform configuration.

In Git Bash, go to the folder we opened from IntelliJ and type "terraform init".

This command will configure the "s3" backend.


terraform init

terraform init is a Terraform command that initializes a new or existing Terraform working directory. It prepares a directory containing Terraform configuration files for use with Terraform.

When you run terraform init, Terraform will perform the following actions:

It will download and install the necessary plugins for the providers specified in your Terraform configuration.

It will initialize the backend, which is where Terraform stores the state of your infrastructure.

It will load any modules specified in your configuration files.

It will configure any variables or settings that are needed for the Terraform workspace.

This command must be run whenever you create a new Terraform project, or whenever you change its providers, modules, or backend configuration. It ensures that your working directory is properly configured and ready to use with Terraform.

The terraform init command can be customized using various options and flags, such as specifying a backend configuration file or configuring remote state storage. You can find more information about the available options in the Terraform documentation.
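
For example, if you left the bucket or region out of the backend block, you could supply them at init time with the -backend-config flag (a sketch of partial backend configuration):

terraform init -backend-config="bucket=terraform-state-770" -backend-config="key=terraform/backend" -backend-config="region=us-east-1"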


Phase 4 - Provider file

Open a file and name it Provider.tf (feel free to name it whatever you want).


provider "aws" {

???region = var.AWS_REGION


}


This is a Terraform configuration file for provisioning resources on AWS using the AWS provider.

This declares the provider ("aws" in our case), indicating that this configuration file will create and manage resources in AWS.

The second line sets the AWS region to the value specified in the variable "AWS_REGION". This variable is defined in a separate variables file, covered in the next phase.
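
The provider version is not pinned in this project, but for reproducible runs you can add a required_providers block alongside the backend configuration (a sketch; the version constraint is only an example):

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}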


Phase 5 - Variables

Open a file (Variable.tf) and put the following code in it:


variable "AWS_REGION" {
  default = "us-east-1"
}

variable "AMIS" {
  type = map(string)
  default = {
    "us-east-1"  = "ami-09cd747c78a9add63"
    "us-east-2"  = "ami-09cd747c78a9add63"
    "ap-south-1" = "ami-09cd747c78a9add63"
  }
}

variable "PRIV_KEY_PATH" {
  default = "terraformkey"
}

variable "PUB_KEY_PATH" {
  default = "vprofilekey.pub"
}

variable "USERNAME" {
  default = "ubuntu"
}

variable "MYIP" {
  default = "183.83.39.125/32"
}

variable "rmquser" {
  default = "rabbit"
}

variable "rmqpass" {
  default = "Gr33n@pple123456"
}

variable "dbuser" {
  default = "admin"
}

variable "dbpass" {
  default = "admin123"
}

variable "dbname" {
  default = "accounts"
}

variable "instance_count" {
  default = "1"
}

variable "VPC_NAME" {
  default = "terraform-VPC"
}

variable "Zone1" {
  default = "us-east-1a"
}

variable "Zone2" {
  default = "us-east-1b"
}

variable "Zone3" {
  default = "us-east-1c"
}

variable "VpcCIDR" {
  default = "172.21.0.0/16"
}

variable "PubSub1CIDR" {
  default = "172.21.1.0/24"
}

variable "PubSub2CIDR" {
  default = "172.21.2.0/24"
}

variable "PubSub3CIDR" {
  default = "172.21.3.0/24"
}

variable "PrivSub1CIDR" {
  default = "172.21.4.0/24"
}

variable "PrivSub2CIDR" {
  default = "172.21.5.0/24"
}

variable "PrivSub3CIDR" {
  default = "172.21.6.0/24"
}


These are variable declarations for various values that will be used in the Terraform code to provision resources on AWS.


The AWS_REGION variable specifies the default AWS region as "us-east-1".

The AMIS variable is a map of AMI IDs for different regions. Choose an AMI that suits the region you picked; in this case I chose an Ubuntu 18.04 AMI in three different regions (note that AMI IDs are region-specific, so in practice each region needs its own ID).

The PRIV_KEY_PATH variable specifies the path to the private key file to use for SSH access.

The PUB_KEY_PATH variable specifies the path to the public key file to use for SSH access.

The USERNAME variable specifies the default username to use when connecting to instances via SSH.

The MYIP variable specifies the default IP address to use for security group rules.

The rmquser and rmqpass variables specify the default RabbitMQ username and password to use when creating an instance.

The dbuser, dbpass, and dbname variables specify the default username, password, and database name to use when creating a database instance.

The instance_count variable specifies the default number of instances to create.

The VPC_NAME variable specifies the name of the VPC to create.

The Zone1, Zone2, and Zone3 variables specify the default availability zones to use when creating instances.

The VpcCIDR variable specifies the CIDR block for the VPC.

The PubSub1CIDR, PubSub2CIDR, and PubSub3CIDR variables specify the CIDR blocks for the public subnets.

The PrivSub1CIDR, PrivSub2CIDR, and PrivSub3CIDR variables specify the CIDR blocks for the private subnets.

These variables can be referenced and used throughout the Terraform code to dynamically provision resources with the specified values.
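
None of these defaults have to be edited in place; they can be overridden with a terraform.tfvars file or -var flags. A sketch (the values are examples to replace with your own):

AWS_REGION = "us-east-1"
MYIP       = "203.0.113.10/32"
dbpass     = "a-much-stronger-password"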


Phase 6 - Key pairs

We will generate an SSH key pair in Git Bash with the "ssh-keygen" command.

Save the key files under the same names you defined in the PRIV_KEY_PATH and PUB_KEY_PATH variables in the variables file.
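
For example, the following generates a pair whose public key matches the PUB_KEY_PATH default above (a sketch; if you keep the PRIV_KEY_PATH default of "terraformkey", either rename the files or adjust one of the variables so both point at the same pair):

ssh-keygen -t rsa -b 4096 -f vprofilekey

This produces vprofilekey (the private key) and vprofilekey.pub (the public key).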

Create a new file; I named it "keypairs.tf".


resource "aws_key_pair" "terraformkey" {

?key_name = "terraformkey"

?public_key = file(var.PUB_KEY_PATH)

}


This is a Terraform resource block that creates an AWS EC2 key pair named "terraformkey" using the specified public key file.


The aws_key_pair resource block is used to manage AWS EC2 key pairs. In this case, the key_name argument specifies the name of the key pair to create, and the public_key argument specifies the public key file to use.

The value of var.PUB_KEY_PATH is passed to public_key using the file function, which reads the contents of the specified file and returns it as a string.

Once this resource is created, the key pair is registered in AWS, and the private key you generated locally can be used to authenticate to instances launched with this key pair.

Now we will commit and push the changes.

And in Git Bash run:

terraform init - which we explained earlier

terraform fmt

terraform plan

terraform apply

Some information about these commands:


terraform fmt

terraform fmt is a command in Terraform that is used to format the Terraform configuration files in a consistent and standardized manner. This command can be used to automatically adjust the indentation, spacing, and syntax of your Terraform code to make it more readable and easier to maintain. The terraform fmt command is useful for ensuring that your code adheres to best practices and conventions.


terraform plan

terraform plan is a command in Terraform that is used to create an execution plan. This plan outlines the changes that Terraform will make to your infrastructure when you apply the configuration.

Running terraform plan is a safe way to preview the changes that Terraform will make to your infrastructure before actually applying those changes. The output of terraform plan provides information about the resources that will be created, modified, or deleted, as well as any errors or warnings that Terraform may encounter.

terraform apply

terraform apply is a command in Terraform that is used to apply the changes to your infrastructure as specified in your configuration files. When you run terraform apply, Terraform will create, modify, or delete resources in your infrastructure as necessary to bring it into the desired state. It is important to note that terraform apply can cause changes to your infrastructure, so it should be used with caution. Before running terraform apply, it is recommended to first run terraform plan to preview the changes that will be made.

In summary, terraform fmt is used to format your code, terraform plan is used to preview the changes that will be made to your infrastructure, and terraform apply is used to actually apply those changes.
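
A common optional pattern, not used elsewhere in this guide, is to save the plan to a file and then apply exactly that plan, so what you reviewed is what gets applied:

terraform plan -out=tfplan
terraform apply tfplan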


Phase 7 - VPC

With the basic setup applied, we can move on to the VPC.

We will look for modules in the Terraform Registry.

The Terraform Registry is a repository of modules and providers that can be used to extend the functionality of Terraform.


In addition to providers, the Terraform Registry also includes modules, which are pre-written Terraform configurations that can be used as building blocks to create more complex infrastructure. These modules can be written by anyone and can be used to help users quickly and easily create and configure resources on various cloud platforms.

In the Terraform Registry, you can find modules for various use cases, such as creating Kubernetes clusters, deploying databases, configuring networking, and more. Each module includes a README file that explains how to use the module, as well as the inputs and outputs of the module.


Create a new file- “vpc.tf”


module "vpc" {

?source?= "terraform-aws-modules/vpc/aws"

?version = "3.19.0"


?name = var.VPC_NAME

?cidr = var.VpcCIDR

?azs = [var.Zone1, var.Zone2, var.Zone3]

?private_subnets = [var.PrivSub1CIDR, var.PrivSub2CIDR, var.PrivSub3CIDR]

public_subnets =[var.PubSub1CIDR, var.PubSub2CIDR, var.PubSub3CIDR]

?enable_nat_gateway = true

?single_nat_gateway = true

?enable_dns_hostname = true

?enable_dns_support = true


?tags = {

???Terraform = "true"

???Environment = "Prod"

?}


?vpc_tags = {

???Name = var.VPC_NAME


?}

?}


This is a Terraform module block that uses the terraform-aws-modules/vpc/aws module to create an Amazon Virtual Private Cloud (VPC) on AWS.


The module block is used to call a Terraform module. In this case, the module is named "vpc" and it is sourced from the "terraform-aws-modules/vpc/aws" module.


The name argument specifies the name of the VPC, while the cidr argument specifies the CIDR block for the VPC.

The azs argument specifies the availability zones to use for the VPC, while the private_subnets and public_subnets arguments specify the CIDR blocks for the private and public subnets.


The enable_nat_gateway and single_nat_gateway arguments are used to enable a NAT gateway for the VPC, which allows instances in the private subnets to access the Internet.

The enable_dns_hostnames and enable_dns_support arguments enable DNS hostname resolution and DNS support for the VPC.


The tags argument specifies tags to apply to all resources created by this module, while the vpc_tags argument specifies tags to apply to the VPC itself.


Once this module is executed, it will create a VPC with public and private subnets in the specified availability zones, as well as other resources such as Internet Gateways and NAT Gateways.


Phase 8 - Security groups (SG)


Create a new file- sg.tf


resource "aws_security_group" "terra-bean-elb-sg" {

?name????= "terra-bean-elb-sg"

?description = "Security group for bean-elb"

?vpc_id???= module.vpc.vpc_id

?egress {

???from_port?= 0

???protocol??= "-1"

???to_port??= 0

???cidr_blocks = ["0.0.0.0/0"]

?}

?ingress {

???from_port?= 80

???protocol??= "tcp"

???to_port??= 80

???cidr_blocks = ["0.0.0.0/0"]

?}

}

resource "aws_security_group" "terra-bastion-sg" {

?name????= "terra-bastion-sg"

?description = "Security group for bastionisioner ec2 instance"

?vpc_id???= module.vpc.vpc_id

?egress {

???from_port?= 0

???protocol??= "-1"

???to_port??= 0

???cidr_blocks = ["0.0.0.0/0"]

?}

?ingress {

???from_port?= 22

???protocol??= "tcp"

???to_port??= 22

???cidr_blocks = ["0.0.0.0/0"]

?}

}

resource "aws_security_group" "terra-prod-sg" {

?name????= "terra-prod-sg"

?description = "Security group for beanstalk instances"

?vpc_id???= module.vpc.vpc_id

?egress {

???from_port?= 0

???to_port??= 0

???protocol??= "-1"

???cidr_blocks = ["0.0.0.0/0"]

?}

?ingress {

???from_port???= 22

???protocol????= "tcp"

???to_port????= 22

???security_groups = [aws_security_group.terra-bastion-sg.id]

?}

}

resource "aws_security_group" "terra-backend-sg" {

?name????= "terra-backend-sg"

?description = "Security group for RDS, active mq, elastic cache"

?vpc_id???= module.vpc.vpc_id

?egress {

???from_port?= 0

???protocol??= "-1"

???to_port??= 0

???cidr_blocks = ["0.0.0.0/0"]

?}

?ingress {

???from_port???= 0

???protocol????= "-1"

???to_port????= 0

???security_groups = [aws_security_group.terra-prod-sg.id]

?}

?ingress {

???from_port???= 3306

???protocol????= "tcp"

???to_port????= 3306

???security_groups = [aws_security_group.terra-bastion-sg.id]

?}

}


resource "aws_security_group_rule" "sec_group_allow_itself" {

?from_port????????= 0

?protocol????????= "tcp"

?to_port?????????= 65535

?type??????????= "ingress"

?security_group_id????= aws_security_group.terra-backend-sg.id

?source_security_group_id = aws_security_group.terra-backend-sg.id

}


This Terraform code defines several AWS security groups to control traffic to different resources. Let's break down each resource block:


aws_security_group.terra-bean-elb-sg: This security group is for the load balancer and allows incoming traffic on port 80 from any IP address.


aws_security_group.terra-bastion-sg: This security group is for the EC2 instances that act as bastion hosts (also known as jump boxes) and allows incoming traffic on port 22 from any IP address.


aws_security_group.terra-prod-sg: This security group is for the Elastic Beanstalk instances and allows incoming traffic on port 22 from instances that are part of the aws_security_group.terra-bastion-sg security group.

aws_security_group.terra-backend-sg: This security group is for resources such as RDS, ActiveMQ, and ElastiCache. It allows incoming traffic on port 3306 from instances that are part of the aws_security_group.terra-bastion-sg security group and allows incoming traffic on all ports from instances that are part of the aws_security_group.terra-prod-sg security group.

aws_security_group_rule.sec_group_allow_itself: This security group rule allows incoming traffic on any TCP port (0-65535) from the same security group. This is used to enable resources in the same security group to communicate with each other.


Overall, these security groups are used to control network traffic between different resources in the AWS infrastructure. They define the allowed incoming and outgoing traffic based on protocols, ports, and IP addresses, and they provide a secure way to manage the communication between different components.
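
One thing worth noting: the MYIP variable declared in Variable.tf is never referenced in sg.tf, so SSH to the bastion is open to 0.0.0.0/0. If you would rather restrict it to your own address, the ingress block of terra-bastion-sg could be written like this instead (a sketch):

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = [var.MYIP]
  }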


Phase 9 - RDS, ElastiCache & Amazon MQ

Create a new file - backend-services.tf - for RDS, ElastiCache & Amazon MQ.


resource "aws_db_subnet_group" "terra-rds-subgrp" {

?name???= "main"

?subnet_ids = [module.vpc.private_subnets[0], module.vpc.private_subnets[1], module.vpc.private_subnets[2]]

?tags = {

???Name = "Subnet group for RDS"

?}

}


resource "aws_elasticache_subnet_group" "terra-ecache-subgrp" {

?name???= "terra-ecache-subgrp"

?subnet_ids = [module.vpc.private_subnets[0], module.vpc.private_subnets[1], module.vpc.private_subnets[2]]

?tags = {

???Name = "Subnet group for ECACHE"

?}

}


resource "aws_db_instance" "terra-rds" {

?allocated_storage???= 20

?storage_type?????= "gp2"

?engine????????= "mysql"

?engine_version????= "8.0.32"

?instance_class????= "db.t2.micro"

?db_name????????= var.db_name

?username???????= var.dbuser

?password???????= var.dbpass

?parameter_group_name?= "default.mysql8.0"

?multi_az???????= "false"

?publicly_accessible??= "false"

?skip_final_snapshot??= true

?db_subnet_group_name?= aws_db_subnet_group.terra-rds-subgrp.name

?vpc_security_group_ids = [aws_security_group.terra-backend-sg.id]

}


resource "aws_elasticache_cluster" "terra-cache" {

?cluster_id?????= "terra-cache"

?engine???????= "memcached"

?node_type??????= "cache.t2.micro"

?num_cache_nodes???= 1

?parameter_group_name = "default.memcached1.6"

?port????????= 11211

?security_group_ids?= [aws_security_group.terra-backend-sg.id]

?subnet_group_name??= aws_elasticache_subnet_group.terra-ecache-subgrp.name

}


resource "aws_mq_broker" "terra-rmq" {

?broker_name????= "terra-rmq"

?engine_type????= "ActiveMQ"

?engine_version??= "5.15.0"

?host_instance_type = "mq.t2.micro"

?security_groups??= [aws_security_group.terra-backend-sg.id]

?subnet_ids????= [module.vpc.private_subnets[0]]

?user {

???password = var.rmqpass

???username = var.rmquser

?}

}




These resources define an RDS instance, an ElastiCache cluster, and an ActiveMQ message broker. The RDS instance is using the MySQL engine and is set up to use a subnet group that includes three private subnets in the VPC. It is also using a security group that allows inbound traffic from the terra-prod-sg security group and outbound traffic to any IP address.


The ElastiCache cluster is using the Memcached engine and is set up to use a subnet group that includes the same three private subnets in the VPC as the RDS instance. It is also using the terra-backend-sg security group, which allows inbound traffic from the terra-prod-sg security group and outbound traffic to any IP address.

The ActiveMQ message broker is using the ActiveMQ engine and is set up to use a single private subnet in the VPC. It is also using the terra-backend-sg security group, which allows inbound traffic from the terra-prod-sg security group and outbound traffic to any IP address.

Overall, these resources are setting up backend infrastructure components required for the application.

* skip_final_snapshot = true

Use this option only to save money in a study environment; in a production environment it is better to leave it off so a final snapshot is taken if the instance is ever deleted.
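
For a production environment, the relevant aws_db_instance arguments could look roughly like this instead (a sketch; the snapshot identifier is just an example name):

  skip_final_snapshot       = false
  final_snapshot_identifier = "terra-rds-final-snapshot"
  deletion_protection       = true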


Phase 10 - Create the Beanstalk application

Create a new file for the Beanstalk application (for example, bean-app.tf).


resource "aws_elastic_beanstalk_application" "terra-prod" {

?name = "terra-prod"

}


This is a Terraform resource block for creating an Elastic Beanstalk application on AWS. The name attribute specifies the name of the application.

Elastic Beanstalk is a platform as a service (PaaS) offering from AWS that allows developers to quickly deploy and manage web applications. With Elastic Beanstalk, you can easily deploy and scale web applications written in various programming languages such as Java, .NET, PHP, Node.js, Python, Ruby, and Go.


Phase 11 - Environment details

Create a new file for the details of the environment - bean-env.tf

resource "aws_elastic_beanstalk_environment" "terra-bean-prod" {

?application????= aws_elastic_beanstalk_application.terra-prod.name

?name????????= "terra-bean-prod"

?solution_stack_name = "64bit Amazon Linux 2 v4.3.3 running Tomcat 8.5 Corretto 11"

?cname_prefix????= "terra-bean-prod-domain"

?setting {

???name???= "VPCId"

???namespace = "aws:ec2:vpc"

???value??= module.vpc.vpc_id

?}

?setting {

???name???= "IamInstanceProfile"

???namespace = "aws:autoscaling:launchconfiguration"

???value??= "aws-elasticbeanstalk-ec2-role"

?}

?setting {

???name???= "AssociatePublicIpAddress"

???namespace = "aws:ec2:vpc"

???value??= "false"

?}


?setting {

???name???= "Subnets"

???namespace = "aws:ec2:vpc"

???value??= join(",", [module.vpc.private_subnets[0], module.vpc.private_subnets[1], module.vpc.private_subnets[2]])

?}


?setting {

???name???= "ELBSubnets"

???namespace = "aws:ec2:vpc"

???value??= join(",", [module.vpc.public_subnets[0], module.vpc.public_subnets[1], module.vpc.public_subnets[2]])

?}

?setting {

???name???= "InstanceType"

???namespace = "aws:autoscaling:launchconfiguration"

???value??= "t2.micro"

?}


?setting {

???name???= "EC2KeyName"

???namespace = "aws:autoscaling:launchconfiguration"

???value??= aws_key_pair.terraformkey.key_name

?}


?setting {

???name???= "Availability Zones"

???namespace = "aws:autoscaling:asg"

???value??= "Any 3"

?}

?setting {

???name???= "MinSize"

???namespace = "aws:autoscaling:asg"

???value??= "1"

?}

?setting {

???name???= "MaxSize"

???namespace = "aws:autoscaling:asg"

???value??= "8"

?}


?setting {

???name???= "environment"

???namespace = "aws:elasticbeanstalk:application:environment"

???value??= "prod"

?}

?setting {

???name???= "LOGGING_APPENDER"

???namespace = "aws:elasticbeanstalk:application:environment"

???value??= "GRAYLOG"

?}

?setting {

???name???= "SystemType"

???namespace = "aws:elasticbeanstalk:healthreporting:system"

???value??= "enhanced"

?}

?setting {

???name???= "RollingUpdateEnabled"

???namespace = "aws:autoscaling:updatepolicy:rollingupdate"

???value??= "true"

?}

?setting {

???name???= "RollingUpdateType"

???namespace = "aws:autoscaling:updatepolicy:rollingupdate"

???value??= "Health"

?}

?setting {

???name???= "MaxBatchSize"

???namespace = "aws:autoscaling:updatepolicy:rollingupdate"

???value??= "1"

?}


?setting {

???name???= "CrossZone"

???namespace = "aws:elb:loadbalancer"

???value??= "true"

?}

?setting {

???name???= "StickinessEnabled"

???namespace = "aws:elasticbeanstalk:environment:process:default"

???value??= "true"

?}


?setting {

???name???= "BatchSizeType"

???namespace = "aws:elasticbeanstalk:command"

???value??= "Fixed"

?}


?setting {

???name???= "BatchSize"

???namespace = "aws:elasticbeanstalk:command"

???value??= "1"

?}


?setting {

???name???= "DeploymentPolicy"

???namespace = "aws:elasticbeanstalk:command"

???value??= "Rolling"

?}


?setting {

???name???= "SecurityGroups"

???namespace = "aws:autoscaling:launchconfiguration"

???value??= aws_security_group.terra-prod-sg.id

?}


?setting {

???name???= "SecurityGroups"

???namespace = "aws:elbv2:loadbalancer"

???value??= aws_security_group.terra-bean-elb-sg.id

?}


?depends_on = [aws_security_group.terra-bean-elb-sg, aws_security_group.terra-prod-sg]

}




This is a Terraform configuration for an AWS Elastic Beanstalk environment. The resource block creates an Elastic Beanstalk environment named "terra-bean-prod" running on the "64bit Amazon Linux 2 v4.3.3 running Tomcat 8.5 Corretto 11" solution stack.


The Elastic Beanstalk environment is configured with various settings using the setting block. The VPCId setting specifies the VPC ID where the environment is launched.

The IamInstanceProfile setting specifies the IAM instance profile used by the EC2 instances.?

The AssociatePublicIpAddress setting specifies whether to associate public IP addresses with instances in the VPC.

The Subnets setting specifies the private subnets to launch the instances in.?

The ELBSubnets setting specifies the public subnets for the load balancer.


Other settings include the EC2 instance type, key pair, availability zones, minimum and maximum number of instances, and rolling update policy. The environment is also configured with logging and monitoring settings, including the Graylog appender and enhanced system health reporting.


The depends_on argument lists the dependencies for this resource, which are the security groups used by the Elastic Beanstalk environment and load balancer.
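
Optionally, you can have Terraform print the environment's URL after apply by adding an output block (a sketch, not part of the original files):

output "beanstalk_cname" {
  value = aws_elastic_beanstalk_environment.terra-bean-prod.cname
}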



Phase 12 - Bastion host

Now we need to run a SQL script against MySQL, but we can't reach the database directly because it is in a private subnet, so our only way to access it is through a bastion host.


A bastion host is a special-purpose server or instance that is designed to provide secure access to a private network or infrastructure. It acts as an intermediary between the public internet and the private network, allowing authorized users to access and manage resources on the private network.


We will create a file- bastion-host.tf


resource "aws_instance" "terra-bastion" {

?ami??????????= lookup(var.AMIS, var.AWS_REGION)

?instance_type?????= "t2.micro"

?key_name???????= aws_key_pair.terraformkey.key_name

?subnet_id???????= module.vpc.public_subnets[0]

?count?????????= var.instance_count

?vpc_security_group_ids = [aws_security_group.terra-bastion-sg.id]


?tags = {

???Name??= "vprofile-bastion"

???PROJECT = "vprofile"

?}




?provisioner "file" {

???content??= templatefile("db-deploy.tmpl", { rds-endpoint = aws_db_instance.terra-rds.address, dbuser = var.dbuser, dbpass = var.dbpass })

???destination = "/tmp/vprofile-dbdeploy.sh"

?}


?provisioner "remote-exec" {

???inline = [

?????"sudo apt-get update",

?????"sudo apt-get install -y mysql-client",

?????"chmod +x /tmp/vprofile-dbdeploy.sh",

?????"sudo /tmp/vprofile-dbdeploy.sh"


???]

?}

?connection {

???user????= var.USERNAME

???private_key = file(var.PRIV_KEY_PATH)

???host????= self.public_ip

?}


?depends_on = [aws_db_instance.terra-rds]

}



This is a Terraform resource block that creates an AWS EC2 instance named "terra-bastion". It uses the "aws_instance" resource type and defines various configuration options:


"ami": the AMI ID to use for the instance. It looks up the value in a variable named "AMIS" using the current AWS region.

"instance_type": the instance type to use, which is set to "t2.micro".

"key_name": the name of the AWS key pair to use for SSH access.

"subnet_id": the ID of the subnet in which to launch the instance. It uses the first public subnet from a VPC module that we defined earlier.

"count": the number of instances to create. It's set to a variable named "instance_count".

"vpc_security_group_ids": a list of security group IDs to apply to the instance. It includes a reference to the resource named "aws_security_group.terra-bastion-sg" Which is the SG file that we created.

"tags": a map of tags to apply to the instance.

The block also includes two "provisioner" blocks, which are used to execute scripts on the instance after it's created:


"provisioner file": This block uses the "templatefile" function to generate a script file named "vprofile-dbdeploy.sh" and copy it to the instance's "/tmp" directory. It includes variables for an RDS endpoint, a database username, and a database password. The variables are passed in through Terraform variables.

"provisioner remote-exec": This block uses SSH to execute a series of commands on the instance, including updating the package repositories, installing the MySQL client, making the script file executable, and running the script.

Finally, the block includes a "connection" block, which is used to define the SSH connection settings for executing the remote-exec provisioner. It includes a username, a private key file path, and the instance's public IP address.


This EC2 instance depends on an RDS instance defined elsewhere in the Terraform configuration, as indicated by the "depends_on" clause at the end of the block.
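
If you want the bastion's address printed after apply (handy for SSH), an output like this works; the splat is needed because the resource uses count (a sketch, not part of the original files):

output "bastion_public_ip" {
  value = aws_instance.terra-bastion[*].public_ip
}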


Phase 13 - db-deploy


We create a file- db-deploy.tmpl


sudo apt update

sudo apt install git mysql-client -y

git clone -b vp-rem https://github.com/devopshydclub/vprofile-project.git

mysql -h ${rds-endpoint} -u ${dbuser} --password=${dbpass} accounts --ssl-mode=DISABLED < /home/ubuntu/vprofile-project/src/main/resources/db_backup.sql





This is a shell script that runs on our EC2 bastion host and performs the following tasks:


Update the package repositories using "sudo apt update".

Install the Git and MySQL client packages using "sudo apt install git mysql-client -y".

Clone a Git repository using the "git clone" command. The repository URL and branch are hardcoded as "https://github.com/devopshydclub/vprofile-project.git" and "vp-rem" respectively.

Use the "mysql" command to connect to a MySQL database using the RDS endpoint, database username, and password provided as variables. It specifies the "accounts" database and the "--ssl-mode=DISABLED" option to disable SSL verification.?

It runs a SQL file named "db_backup.sql" located in "/home/ubuntu/vprofile-project/src/main/resources/".


Now we will commit and push the changes.


And in Git Bash run:

terraform init

terraform fmt

terraform plan

terraform apply


Take the endpoint of your Beanstalk environment from the AWS console, and if everything went well you will get the application's login page, backed by the MySQL database.


Username and password: admin_vp


Congratulations, you have finished the project! Now you can deploy your artifact to the environment (I will cover that in another article).

I hope you found this article helpful!
