Provision AWS EC2 Instance and RDS with Terraform, and Deploy Spring Boot App to EC2 Instance via GitHub Action Pipeline


These days, Terraform is a popular tool for managing infrastructure as code. Manually creating AWS EC2 instances and deploying Spring Boot applications to them can be challenging. In this tutorial, I will demonstrate how to use Terraform to provision an AWS EC2 instance and an AWS RDS MySQL database, and then deploy a Spring Boot project to that EC2 instance using a GitHub Actions pipeline.

Prerequisites:

To follow this tutorial, you need to have the following prerequisites:

  • An AWS account
  • Terraform installed on your local machine
  • A GitHub account
  • A Spring Boot project
  • PuTTY (and PuTTYgen)

What we will do:

  • Create AWS EC2 & RDS (MySQL) instances using Terraform
  • Deploy the Spring Boot project to that EC2 instance using GitHub Actions

Create AWS EC2 & RDS (MySQL) Instances Using Terraform:

Step 1: Create and Configure an AWS Free Tier Account

AWS offers a free tier account with a 12-month free offer that allows you to use most of its resources for educational purposes. To get started on AWS, you first need to create an IAM user with AdministratorAccess and obtain an access key and secret key.

After that, download the AWS CLI MSI installer for Windows from the AWS website and install it. Once the installation is complete, open the command prompt and set your access key ID and secret access key using the following commands:

set AWS_ACCESS_KEY_ID=your_access_key_id

set AWS_SECRET_ACCESS_KEY=your_secret_access_key
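
Alternatively, here is a sketch of the same setup using the AWS CLI's own configuration (assuming the AWS CLI v2 is installed; this stores the values under your user profile instead of the current session):

aws configure
# The command prompts for:
# AWS Access Key ID [None]: your_access_key_id
# AWS Secret Access Key [None]: your_secret_access_key
# Default region name [None]: us-east-1
# Default output format [None]: json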

Here is the list of information that you may need from your AWS account:

  • Access key and secret key for your IAM user with AdministratorAccess permissions. We have already configured the AWS CLI with these.




  • VPC ID (Virtual Private Cloud Identifier) - a logically isolated section of the AWS Cloud.
  • AMI ID (Amazon Machine Image Identifier) - a pre-configured virtual machine image used to create EC2 instances.
  • Subnet ID - a range of IP addresses in your VPC. You can pick any of them.
  • SSH key pair - a public and private key pair used for secure remote access to EC2 instances.
  • Security Group ID for the EC2 instance - we use this in the ec2_sg variable.
  • Security Group ID for MySQL - we use this in the ec2_db_vpc_security_group_id variable.

Make sure to create the security groups mentioned earlier with the appropriate inbound rules (for example, SSH on port 22 and the application port 8095 for the EC2 instance, and MySQL port 3306 for the RDS instance).
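
If you prefer to create the security groups from the command line, here is a rough sketch using the AWS CLI (the group names, VPC ID, group IDs, and the wide-open 0.0.0.0/0 CIDR are illustrative placeholders, not values from this setup; 8095 is the application port used in the workflow later, and 3306 is the default MySQL port):

# EC2 security group: allow SSH and the application port
aws ec2 create-security-group --group-name dev-ec2-sg --description "EC2 security group" --vpc-id vpc-xxxxxxxx
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 8095 --cidr 0.0.0.0/0

# RDS security group: allow MySQL
aws ec2 create-security-group --group-name dev-rds-sg --description "RDS MySQL security group" --vpc-id vpc-xxxxxxxx
aws ec2 authorize-security-group-ingress --group-id sg-yyyyyyyy --protocol tcp --port 3306 --cidr 0.0.0.0/0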

To enable SSH connections and execute scripts inside the EC2 instance created using Terraform, create a key pair named "dev-ssh.pem" and associate it with the instance. Save the file in the same Terraform directory.

To use this key pair with Putty, convert the "dev-ssh.pem" file to "dev-ssh.ppk" using Puttygen, as Putty does not support the ".pem" file format.
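
For reference, both steps can also be done from the command line (a sketch, assuming the AWS CLI is configured and the puttygen command-line tool is available; on Windows you would typically use the PuTTYgen GUI instead):

# Create the key pair in AWS and save the private key next to your Terraform files
aws ec2 create-key-pair --key-name dev-ssh --query "KeyMaterial" --output text > dev-ssh.pem

# Convert the .pem private key to .ppk for PuTTY
puttygen dev-ssh.pem -o dev-ssh.ppk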

Step 2: Download and Run Terraform

You can download Terraform from the following link: https://developer.hashicorp.com/terraform/downloads. After downloading, you'll need to set up environment variables for Terraform in your system path.
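
A quick way to confirm the Terraform binary is reachable from your PATH:

terraform -version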

Now, create a directory named "terraform" and create two files inside it: "main.tf" and "variables.tf". Copy and paste the following code into the files, making sure to replace the credentials/values with your own.

Paste the following code into variables.tf:

variable "ec2_tags" {
  default = "ec2-dev"
}

variable "ec2_region" {
  default = "us-east-1"
}


variable "ec2_ami" {
  default = "ami-0fa1de1d60de6a97e"
}


variable "ec2_instance_type" {
  default = "t2.micro"
}


variable "ec2_count" {
  type    = number
  default = 1
}

variable "ec2_sg" {
  default = ["sg-0c920c925762479d5", "sg-09dc01f85cf789d74"]
}
variable "ec2_key_pair_name" {
  default = "dev-ssh"
}


variable "ec2_subnet_id" {
  default = "subnet-06cc6d5dee89cee1d"
}
variable "ec2_vpc_id" {
  default = "vpc-0a1becb89bed4b41b"
}

variable "ec2_db_engine" {
  default = "mysql"
}
variable "ec2_db_instance_class" {
  default = "db.t2.micro"
}
variable "ec2_db_version" {
  default = "8.0.27"
}


variable "ec2_db_user" {
  default = "admin"
}
variable "ec2_db_password" {
  default = "Passw0rd!123"
}
variable "ec2_db_identifier" {
  default = "devrdsdb"
}
variable "ec2_db_storage" {
  type    = number
  default = "20"
}
variable "ec2_db_storage_type" {
  default = "gp2"
}


variable "ec2_db_tags" {
  default = "devrdsdb"
}


variable "ec2_db_vpc_security_group_id" {
  default = "sg-0e2d762a2c0ae4f32"
}

Replace the following variables' default values with the values you obtained from AWS (the lookup commands after this list are one way to find them):

  • ec2_db_vpc_security_group_id
  • ec2_vpc_id
  • ec2_subnet_id
  • ec2_sg
  • ec2_ami
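
If you prefer the command line over the console, the following AWS CLI queries are one way to look these values up (a sketch; vpc-xxxxxxxx is a placeholder for your own VPC ID, and the Amazon Linux 2 name filter is an assumption based on the yum/amazon-linux-extras commands used later):

# List VPC IDs in the configured region
aws ec2 describe-vpcs --query "Vpcs[].VpcId" --output text

# List subnets in a given VPC
aws ec2 describe-subnets --filters "Name=vpc-id,Values=vpc-xxxxxxxx" --query "Subnets[].SubnetId" --output text

# List security group IDs and names in that VPC
aws ec2 describe-security-groups --filters "Name=vpc-id,Values=vpc-xxxxxxxx" --query "SecurityGroups[].[GroupId,GroupName]" --output table

# Find a recent Amazon Linux 2 AMI ID owned by Amazon
aws ec2 describe-images --owners amazon --filters "Name=name,Values=amzn2-ami-hvm-*-x86_64-gp2" --query "sort_by(Images, &CreationDate)[-1].ImageId" --output text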

Now, paste the following code into the main.tf file.

provider "aws" {
  region  = var.ec2_region
}
resource "aws_instance" "ec2_instance" {
  ami             = var.ec2_ami
  count           = var.ec2_count
  instance_type   = var.ec2_instance_type
  key_name = var.ec2_key_pair_name
  security_groups = [element(var.ec2_sg, count.index)]
  subnet_id       = var.ec2_subnet_id
  tags = {
    Name = "${var.ec2_tags}-${count.index + 1}"
  }
  connection {
    type        = "ssh"
    host        = self.public_ip
    user        = "ec2-user"
    private_key = file("${path.module}/dev-ssh.pem")
  }
  provisioner "remote-exec" {
    inline = [
      "${file("${path.module}/user_data_script.sh")}"
    ]
  }
}

resource "aws_db_instance" "devrdsdb" {
  allocated_storage      = var.ec2_db_storage
  storage_type           = var.ec2_db_storage_type
  identifier             = var.ec2_db_identifier
  engine                 = var.ec2_db_engine
  engine_version         = var.ec2_db_version
  instance_class         = var.ec2_db_instance_class
  username               = var.ec2_db_user
  password               = var.ec2_db_password
  publicly_accessible    = true
  skip_final_snapshot    = true
  vpc_security_group_ids = ["${var.ec2_db_vpc_security_group_id}"]
  tags = {
    Name = var.ec2_db_tags
  }
}

The "main.tf" Terraform file includes a "user_data_script.sh" script file, which contains commands for installing Java and Maven, as well as all the commands required for the GitHub runners. This way, you don't have to manually execute them on the EC2 instance.

After you add a new self-hosted runner in your GitHub repository, you can find the registration token that is used in the script below.


Here's an example of what the script file might look like:

#!/bin/bash
sudo yum update -y
sudo amazon-linux-extras install -y java-openjdk11
echo "--------Apache-Maven Install--------"
sudo yum install -y maven
sudo yum install perl-Digest-SHA -y
echo "--------GITHUB ACTION SETUP--------"
mkdir actions-runner && cd actions-runner
curl -o actions-runner-linux-x64-2.303.0.tar.gz -L https://github.com/actions/runner/releases/download/v2.303.0/actions-runner-linux-x64-2.303.0.tar.gz
echo "e4a9fb7269c1a156eb5d5369232d0cd62e06bec2fd2b321600e85ac914a9cc73  actions-runner-linux-x64-2.303.0.tar.gz" | shasum -a 256 -c
tar xzf ./actions-runner-linux-x64-2.303.0.tar.gz
printf "\n\n\n" | ./config.sh --url https://github.com/Istiaq-Hossain-Shawon/shipping-rates-middleware --token ACCGYQLSA2MONRROPZH37UDEHG36U
sudo ./svc.sh install
sudo ./svc.sh start

Next, go to the terraform folder, open a command prompt, and execute the commands below:

terraform init
terraform fmt
terraform validate
terraform plan
terraform apply

That's it! Terraform will provision the EC2 instance(s) (one by default, controlled by ec2_count) and a MySQL RDS database on AWS, and install Java and Maven on each instance.

Remember: if you change the user_data_script.sh file and run terraform apply again, it will destroy and recreate the EC2 instance.

Now you can connect to the EC2 instance via SSH; we will use PuTTY for that.

To establish a connection to the EC2 instance via PuTTY, we need the instance's host name (its public DNS name). You can find it on the instance's details page in the EC2 console, or with the AWS CLI as shown below.
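
One way to look it up from the CLI is by the Name tag that Terraform applied (ec2-dev-1 by default, from the ec2_tags variable; the filter below is just a sketch):

aws ec2 describe-instances --filters "Name=tag:Name,Values=ec2-dev-*" "Name=instance-state-name,Values=running" --query "Reservations[].Instances[].PublicDnsName" --output text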


Now, download PuTTY from https://www.putty.org/.

Provide the host name of your EC2 instance.


Then provide the SSH key: browse to the dev-ssh.ppk file in PuTTY, and then click Open.

A terminal window will open, and it will connect to your AWS EC2 instance.

Deploy Spring Boot to EC2 instance using GitHub Actions:

Step 1: Set Up the Spring Boot Project

We will now automate the deployment process to our EC2 instance using GitHub Actions.

To begin, you can create a new Spring Boot project by visiting https://start.spring.io/. Alternatively, you can use an existing Spring Boot project that uses a MySQL database and is hosted on GitHub.

Once you have set up your AWS RDS MySQL database instance, you can retrieve the connection URL from the AWS RDS console. Make sure to use this URL to connect your Spring Boot application to the AWS RDS instance.



Update the database connection string in your Spring Boot project with the endpoint of your AWS RDS instance, as in the sketch below.
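
As a sketch, the connection settings could look like the following, supplied here as environment variables on the EC2 instance (Spring Boot maps them to spring.datasource.url, spring.datasource.username, and spring.datasource.password; the endpoint host and database name are placeholders, the credentials match the Terraform variables above, and putting the same values in application.properties works equally well):

export SPRING_DATASOURCE_URL="jdbc:mysql://devrdsdb.xxxxxxxxxx.us-east-1.rds.amazonaws.com:3306/your_database_name"
export SPRING_DATASOURCE_USERNAME="admin"          # ec2_db_user from variables.tf
export SPRING_DATASOURCE_PASSWORD="Passw0rd!123"   # ec2_db_password from variables.tf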


Step 2: Create Runners

To create runners, go to your GitHub repository and follow these steps:

  1. Click on the Settings tab and then go to Actions.
  2. Under the Runners section, click on the New self-hosted runner button.
  3. The registration token shown on this page is the one we already used in the Terraform user_data_script.sh file to set up the runner.



Then, select Linux as the runner image.

Now, copy and paste the commands one by one from the GitHub runner setup page into the EC2 console via PuTTY, but do not execute the "./run.sh" command. Instead, run the following commands:

sudo ./svc.sh install 
sudo ./svc.sh start

That's it! Your EC2 instance is now connected to your Spring Boot project GitHub repository.
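
If you want to double-check that the runner service is running, the same svc.sh script also supports a status action:

sudo ./svc.sh status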

Step 3: Create Workflow

Finally, we need to create a workflow.

To create a workflow in GitHub, navigate to the Actions tab and choose "Java with Maven" from the available options.


This will create a new workflow file named "maven.yml". In this file, we use "runs-on: self-hosted". To execute the jar file, copy the path of your jar file from the EC2 instance and paste it into the "run" step of the "maven.yml" file.

The jar file path should be something like this:

/home/ec2-user/actions-runner/_work/project-name/project-name/target/ 

Make sure to replace "project-name" with your actual project name.
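
If you are unsure of the exact path, one way to locate the built jar on the EC2 instance (after the first workflow run has completed a build) is:

ls /home/ec2-user/actions-runner/_work/*/*/target/*.jar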

    - name: Execute Jar file
      run: sudo kill -9 `sudo lsof -t -i:8095` & sudo java -jar {YOUR_PATH}/middleware-0.0.1-SNAPSHOT.jar &

So the overall workflow file would look like this:

# This workflow will build a Java project with Maven, and cache/restore any dependencies to improve the workflow execution time
# For more information see: https://docs.github.com/en/actions/automating-builds-and-tests/building-and-testing-java-with-maven

# This workflow uses actions that are not certified by GitHub.
# They are provided by a third-party and are governed by
# separate terms of service, privacy policy, and support documentation.

name: Java CI with Maven

on:
  push:
    branches: [ "main" ]

jobs:
  build:

    runs-on: self-hosted

    steps:
    - uses: actions/checkout@v3
    - name: Set up JDK 11
      uses: actions/setup-java@v3
      with:
        java-version: '11'
        distribution: 'temurin'
        cache: maven
    - name: Build with Maven
      run: mvn -B package --file pom.xml
    - name: Execute Jar file
      run: sudo kill -9 `sudo lsof -t -i:8095` & sudo java -jar /home/ec2-user/actions-runner/_work/shipping-rates-middleware/shipping-rates-middleware/target/middleware-0.0.1-SNAPSHOT.jar &


You're all set!

Simply commit and push your changes to the project repository, and the deployment will be automated on your EC2 instance.
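
To verify the deployment, you can hit the application from your machine once the workflow finishes (a sketch, assuming the app listens on port 8095, the port the workflow frees before starting the jar, and that the EC2 security group allows inbound traffic on it):

curl -i http://YOUR_EC2_PUBLIC_DNS:8095/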
