Secure Three-Tier Web Application with GitHub Actions CI/CD, S3 Hosting, Deployment to Dev and Prod Servers, and Comprehensive Monitoring
In this personal project, I built a secure and scalable three-tier web application architecture on AWS. The frontend was hosted on Amazon S3, while the backend was deployed to EC2 instances within a Virtual Private Cloud (VPC). I managed both environments from a single GitHub repository with separate branches (dev for development/testing and main for production), and automated deployments with GitHub Actions CI/CD pipelines. For monitoring, I used AWS CloudWatch and AWS Managed Grafana to visualize performance metrics and set up alerts.
Phase 1: Set Up GitHub Repository
1. Set Up Branches:
Commands to set up the branches:
git checkout -b main
Then, create a dev branch:
git checkout -b dev
2. Push the Initial Project Structure to GitHub:
git add .
git commit -m "Initial commit for three-tier app"
git push origin dev
This will push the project to the dev branch.
Phase 2: Set Up AWS Infrastructure Using Terraform
Step 2.1: Provision VPC and Subnets
To define the networking infrastructure, you will create a vpc.tf file that includes the VPC, subnets, and routing tables.
Terraform Configuration (vpc.tf):
Create a file named vpc.tf in your project directory and add the following configuration:
provider "aws" {
region = "us-west-2" # Ensure this is your preferred region
}
# Create VPC
resource "aws_vpc" "main_vpc" {
cidr_block = "10.0.0.0/16"
enable_dns_support = true
enable_dns_hostnames = true
tags = {
Name = "vpc-three-tier-webapp-oluwa"
}
}
# Internet Gateway for public access
resource "aws_internet_gateway" "igw" {
vpc_id = aws_vpc.main_vpc.id
tags = {
Name = "igw-three-tier-webapp-oluwa"
}
}
# Public Subnet for ALB (Frontend)
resource "aws_subnet" "public_subnet" {
vpc_id = aws_vpc.main_vpc.id
cidr_block = "10.0.1.0/24"
map_public_ip_on_launch = true
tags = {
Name = "public-subnet-oluwa"
}
}
# Private Subnet for Backend (EC2)
resource "aws_subnet" "private_backend_subnet" {
vpc_id = aws_vpc.main_vpc.id
cidr_block = "10.0.2.0/24"
tags = {
Name = "private-backend-subnet-oluwa"
}
}
# Private Subnet for Database (future use)
resource "aws_subnet" "private_database_subnet" {
vpc_id = aws_vpc.main_vpc.id
cidr_block = "10.0.3.0/24"
tags = {
Name = "private-database-subnet-oluwa"
}
}
# Route Table for Public Subnet
resource "aws_route_table" "public_route" {
vpc_id = aws_vpc.main_vpc.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.igw.id
}
tags = {
Name = "public-route-table-oluwa"
}
}
# Associate Public Subnet with the Route Table
resource "aws_route_table_association" "public_association" {
subnet_id = aws_subnet.public_subnet.id
route_table_id = aws_route_table.public_route.id
}
# NAT Gateway for Private Subnets (Backend and Database)
resource "aws_eip" "nat_eip" {
domain = "vpc"
}
resource "aws_nat_gateway" "nat_gateway" {
allocation_id = aws_eip.nat_eip.id
subnet_id = aws_subnet.public_subnet.id
tags = {
Name = "nat-gateway-oluwa"
}
}
# Private Route Table with NAT for Backend and Database Subnets
resource "aws_route_table" "private_route" {
vpc_id = aws_vpc.main_vpc.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_nat_gateway.nat_gateway.id
}
tags = {
Name = "private-route-table-oluwa"
}
}
Explanation: This configuration creates a VPC (10.0.0.0/16) with one public subnet for the load balancer and two private subnets for the backend and database tiers. The Internet Gateway and public route table give the public subnet internet access, while the NAT Gateway (placed in the public subnet with an Elastic IP) lets instances in the private subnets reach the internet for updates without being directly reachable from outside.
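One gap worth closing: as written, the private route table is never associated with the private subnets, so they would fall back to the VPC's main route table and never use the NAT Gateway. A minimal fix (the resource names here are my own choice):
resource "aws_route_table_association" "private_backend_association" {
  subnet_id      = aws_subnet.private_backend_subnet.id
  route_table_id = aws_route_table.private_route.id
}

resource "aws_route_table_association" "private_database_association" {
  subnet_id      = aws_subnet.private_database_subnet.id
  route_table_id = aws_route_table.private_route.id
}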
Step 2.2: Apply the Terraform Configuration
Once the vpc.tf file is created, initialize and apply the Terraform configuration to create the VPC and subnets.
1. Initialize and Plan: Download the provider plugins and preview the changes:
terraform init
terraform plan
2. Apply the Configuration: To create the infrastructure, run:
terraform apply
Terraform will prompt for confirmation. Type yes to proceed with creating the infrastructure.
After applying, Terraform will create your VPC, subnets, and associated resources. You will see the infrastructure set up in your AWS console under VPC and Subnets.
Phase 3: Set Up S3 Buckets for Frontend
Step 3.1: Create S3 Buckets for Dev and Prod
It's time to create two S3 buckets for hosting the frontend, one for the dev environment and one for prod.
Terraform Configuration (s3.tf)
Create a new file named s3.tf in your project's root directory to define the S3 buckets:
# Dev Frontend Bucket
resource "aws_s3_bucket" "dev_frontend_bucket" {
  bucket = "dev-frontend-bucket-oluwa"

  tags = {
    Name        = "dev-frontend-bucket-oluwa"
    Environment = "Dev"
  }
}

# Prod Frontend Bucket
resource "aws_s3_bucket" "prod_frontend_bucket" {
  bucket = "prod-frontend-bucket-oluwa"

  tags = {
    Name        = "prod-frontend-bucket-oluwa"
    Environment = "Prod"
  }
}

# Configure website hosting for Dev Frontend Bucket
resource "aws_s3_bucket_website_configuration" "dev_website" {
  bucket = aws_s3_bucket.dev_frontend_bucket.id

  index_document {
    suffix = "index.html"
  }

  error_document {
    key = "index.html"
  }
}

# Configure website hosting for Prod Frontend Bucket
resource "aws_s3_bucket_website_configuration" "prod_website" {
  bucket = aws_s3_bucket.prod_frontend_bucket.id

  index_document {
    suffix = "index.html"
  }

  error_document {
    key = "index.html"
  }
}
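Note that S3 website hosting only works if the objects are publicly readable, and new buckets block public access by default. You will likely also need a public access block override and a bucket policy; a sketch for the dev bucket (repeat the same pattern for prod; resource names are my own):
resource "aws_s3_bucket_public_access_block" "dev_public_access" {
  bucket                  = aws_s3_bucket.dev_frontend_bucket.id
  block_public_acls       = false
  block_public_policy     = false
  ignore_public_acls      = false
  restrict_public_buckets = false
}

resource "aws_s3_bucket_policy" "dev_frontend_policy" {
  bucket     = aws_s3_bucket.dev_frontend_bucket.id
  depends_on = [aws_s3_bucket_public_access_block.dev_public_access]

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid       = "PublicReadGetObject"
      Effect    = "Allow"
      Principal = "*"
      Action    = "s3:GetObject"
      Resource  = "${aws_s3_bucket.dev_frontend_bucket.arn}/*"
    }]
  })
}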
Apply the Terraform Configuration
terraform init
terraform apply
Before proceeding to deploy the frontend to S3, let's first push everything you've done so far to your GitHub repository.
Step 1: Commit and Push to GitHub
1. Stage the changes:
git add .
2. Commit the changes with a meaningful message:
git commit -m "Added Terraform configuration for VPC, Subnets, and S3 setup"
3. Push the changes to the dev branch:
git push origin dev   # Push to dev branch
We will work on dev going forward and merge to main as needed.
Step 3.2: Deploy Frontend to S3
I will build the React.js frontend and sync the build output with the corresponding S3 bucket.
First, we need to ensure that the React.js frontend application code is in place.
If you haven't initialized a React project yet, you can create one using:
npx create-react-app frontend
cd frontend
Once you're inside the frontend directory, run the following command to build the React app:
npm run build
This command will create an optimized build of your React.js application, and the output will be stored in a folder called build.
Step 3.3: Sync the Build Output to the Dev S3 Bucket
Now that you have the build ready, you can deploy it to the Dev S3 Bucket.
Use the AWS CLI to sync the contents of the build directory to your Dev S3 Bucket (dev-frontend-bucket-oluwa). Run the following command:
aws s3 sync ./build s3://dev-frontend-bucket-oluwa --region us-west-2
This command syncs the contents of the build directory to your development S3 bucket. The files are served over the web thanks to the bucket policy and website configuration set up earlier.
Step 3.4: Sync the Build Output to the Prod S3 Bucket
To deploy the production version of your React.js application, sync the same build output to the Prod S3 Bucket (prod-frontend-bucket-oluwa):
aws s3 sync ./build s3://prod-frontend-bucket-oluwa --region us-west-2
This uploads the production version of the React.js frontend to the production S3 bucket, where the bucket policy makes it publicly readable.
Verifying the Deployment
Once the files are synced, you can verify the deployment by accessing the S3 website URL for each bucket (note that S3 website endpoints are served over HTTP, not HTTPS):
1. Dev Frontend: Visit http://dev-frontend-bucket-oluwa.s3-website-<your-region>.amazonaws.com
2. Prod Frontend: Visit http://prod-frontend-bucket-oluwa.s3-website-<your-region>.amazonaws.com
Phase 4: Set Up EC2 Instances for Backend
Step 4.1: Create EC2 Instances for Dev and Prod
In this step, we will define the Terraform configuration for the EC2 instances (for example, in a file named ec2.tf):
# Dev Backend EC2 Instance
resource "aws_instance" "dev_backend_instance" {
  ami           = "ami-0c55b159cbfafe1f0" # Amazon Linux 2 (look up a current AMI ID for your region)
  instance_type = "t2.micro"
  subnet_id     = aws_subnet.private_backend_subnet.id

  tags = {
    Name = "dev-backend-instance-oluwa"
  }
}

# Prod Backend EC2 Instance
resource "aws_instance" "prod_backend_instance" {
  ami           = "ami-0c55b159cbfafe1f0" # Amazon Linux 2 (look up a current AMI ID for your region)
  instance_type = "t2.micro"
  subnet_id     = aws_subnet.private_backend_subnet.id

  tags = {
    Name = "prod-backend-instance-oluwa"
  }
}
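The CI/CD pipelines in Phase 5 deploy to these instances through SSM Run Command, which only works if the instances carry an instance profile with the AmazonSSMManagedInstanceCore policy. A minimal sketch (the role and profile names are my own):
resource "aws_iam_role" "ec2_ssm_role" {
  name = "ec2-ssm-role-oluwa"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy_attachment" "ssm_core" {
  role       = aws_iam_role.ec2_ssm_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
}

resource "aws_iam_instance_profile" "ec2_ssm_profile" {
  name = "ec2-ssm-profile-oluwa"
  role = aws_iam_role.ec2_ssm_role.name
}
Attach it by adding iam_instance_profile = aws_iam_instance_profile.ec2_ssm_profile.name to each aws_instance block above.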
To create the EC2 instances:
1. Initialize the Configuration:
terraform init
2. Apply the Configuration:
Apply the Terraform configuration to create the EC2 instances:
terraform apply
Review the proposed changes and type yes when prompted to confirm the creation of the EC2 instances.
Once the EC2 instances are created, you’ll have two backend servers running on AWS, one for the Dev environment and one for the Prod environment.
Step 4.2: Backend Code (Node.js + Express)
In this step, we will create the backend code for our application using Node.js and Express. The backend will expose an API endpoint at /api and will run on port 5000.
Step 4.2.1: Install Node.js and Express
For both dev-backend-instance-oluwa and prod-backend-instance-oluwa, connect to each instance to install Node.js and set up the backend. Since the instances sit in private subnets, connect through AWS Systems Manager Session Manager rather than direct SSH.
1. Connect to the Instance.
2. Install Node.js:
sudo yum update -y
sudo yum install -y gcc-c++ make
curl -sL https://rpm.nodesource.com/setup_18.x | sudo bash -
sudo yum install -y nodejs
Verify the installation by running:
node -v
npm -v
3. Create a Directory for the Backend:
On each instance, create a directory for your backend application:
mkdir backend
cd backend
4. Initialize a Node.js Project:
Inside the backend directory, initialize a Node.js project:
npm init -y
5. Install Express:
npm install express
Step 4.2.2: Backend Code (server.js)
Once Node.js and Express are set up, you can create the backend server code.
1. Create the Server File:
touch server.js
2. Write the Backend Code:
Open the server.js file in a text editor (like nano or vi), and paste the following code:
const express = require('express');
const app = express();

app.get('/api', (req, res) => {
  res.send('Hello from the Backend!');
});

const PORT = 5000;
app.listen(PORT, () => {
  console.log(`Backend server running on port ${PORT}`);
});
3. Run the Backend Server:
To run the backend server on the EC2 instance, use the following command:
node server.js
The server should now be running and accessible on port 5000.
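In practice, the pipelines later restart the backend with pm2 restart server, so I'd run the server under pm2 rather than a bare node process. A sketch, assuming the process name server that the pipelines reference:
sudo npm install -g pm2            # process manager that keeps the app running
pm2 start server.js --name server  # run the backend under the name the pipelines expect
pm2 save                           # persist the process list
pm2 startup                        # prints the command that enables pm2 on boot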
Step 4.2.3: Test the Backend API
To ensure the backend is running properly, you can test the /api endpoint.
If everything is working, you should see the response "Hello from the Backend!" from both the dev and prod instances.
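A minimal check from a shell on each instance (for example via a Session Manager session, since the instances sit in private subnets):
curl http://localhost:5000/api
# Expected: Hello from the Backend!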
Phase 5: Set Up CI/CD Pipelines
Step 5.1: Create CI/CD Pipelines for Dev and Prod
We are going to continue using the GitHub repository set up with two branches: dev for the development environment and main for production.
In your GitHub repository, create a directory called .github/workflows. Inside this directory, create a file named dev-pipeline.yml for the development pipeline. Define the development pipeline using the following steps:
name: Dev CI/CD Pipeline

on:
  push:
    branches:
      - dev

jobs:
  build:
    name: Build Frontend
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '18'

      - name: Install dependencies and build (Frontend)
        run: |
          cd frontend
          npm install
          npm run build

      - name: Upload build directory as artifact
        uses: actions/upload-artifact@v4
        with:
          name: frontend-build
          path: frontend/build

  deploy:
    name: Deploy to Dev Environment
    needs: build
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Download build artifact
        uses: actions/download-artifact@v4
        with:
          name: frontend-build
          path: frontend/build

      - name: Deploy Frontend to S3
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_SESSION_TOKEN: ${{ secrets.AWS_SESSION_TOKEN }}
          AWS_REGION: 'us-west-2'
        run: |
          if [ -d "frontend/build" ]; then
            aws s3 sync frontend/build s3://dev-frontend-bucket-oluwa
          else
            echo "Build directory does not exist."
            exit 1
          fi

      - name: Deploy Backend to Dev Instance
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_SESSION_TOKEN: ${{ secrets.AWS_SESSION_TOKEN }}
          AWS_REGION: 'us-west-2'
        run: |
          aws ssm send-command \
            --document-name "AWS-RunShellScript" \
            --targets "Key=InstanceIds,Values=i-0aa6f466cf9dd4345" \
            --parameters commands='["cd /home/ec2-user/backend", "git pull", "npm install", "pm2 restart server"]' \
            --comment "Deploy backend code to dev instance" \
            --timeout-seconds 600 \
            --max-concurrency "1" \
            --max-errors "0"
3. Create the Prod CI/CD Pipeline:
In the .github/workflows directory, create another file named prod-pipeline.yml for the production pipeline.
Define the production pipeline similarly but configured to deploy to the production S3 bucket:
name: Prod CI/CD Pipeline

on:
  push:
    branches:
      - main

jobs:
  build:
    name: Build Frontend
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '18'

      - name: Install dependencies and build (Frontend)
        run: |
          cd frontend
          npm install
          npm run build

      - name: Upload build directory as artifact
        uses: actions/upload-artifact@v4
        with:
          name: frontend-build
          path: frontend/build

  deploy:
    name: Deploy to Prod Environment
    needs: build
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Download build artifact
        uses: actions/download-artifact@v4
        with:
          name: frontend-build
          path: frontend/build

      - name: Deploy Frontend to S3
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_SESSION_TOKEN: ${{ secrets.AWS_SESSION_TOKEN }}
          AWS_REGION: 'us-west-2'
        run: |
          if [ -d "frontend/build" ]; then
            aws s3 sync frontend/build s3://prod-frontend-bucket-oluwa
          else
            echo "Build directory does not exist."
            exit 1
          fi

      - name: Deploy Backend to Prod Instance
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_SESSION_TOKEN: ${{ secrets.AWS_SESSION_TOKEN }}
          AWS_REGION: 'us-west-2'
        run: |
          aws ssm send-command \
            --document-name "AWS-RunShellScript" \
            --targets "Key=InstanceIds,Values=i-0a58d4dd7d1dcb72c" \
            --parameters commands='["cd /home/ec2-user/backend", "git pull", "npm install", "pm2 restart server"]' \
            --comment "Deploy backend code to prod instance" \
            --timeout-seconds 600 \
            --max-concurrency "1" \
            --max-errors "0"
4. Configure GitHub Secrets:
Add the required AWS credentials (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and, if you are using temporary credentials, AWS_SESSION_TOKEN) as GitHub secrets under your repository's settings.
Ensure these credentials have the necessary permissions to interact with the S3 buckets and to send SSM commands for both dev and prod.
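As a rough guide, the CI credentials need something like the following IAM policy; the bucket names match this project, but treat the rest as a sketch to scope down further (for example, restricting ssm:SendCommand to the two instance ARNs):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": [
        "arn:aws:s3:::dev-frontend-bucket-oluwa",
        "arn:aws:s3:::dev-frontend-bucket-oluwa/*",
        "arn:aws:s3:::prod-frontend-bucket-oluwa",
        "arn:aws:s3:::prod-frontend-bucket-oluwa/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": ["ssm:SendCommand", "ssm:GetCommandInvocation"],
      "Resource": "*"
    }
  ]
}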
5. Commit and Push Pipeline Configurations:
Make sure you’re on the correct branch (e.g., dev for development pipeline setup or main for production).
Stage the new workflow files:
git add .github/workflows/dev-pipeline.yml
git add .github/workflows/prod-pipeline.yml
Commit your changes:
git commit -m "Added CI/CD pipeline configurations for dev and prod"
Push the changes to the appropriate branch:
git push origin dev   # For development pipeline
git push origin main # For production pipeline
6. Verify CI/CD Pipeline Execution:
Navigate to the Actions tab in your GitHub repository.
Verify that the pipelines for both the dev and main branches run successfully upon push events.
Ensure that the build and deployment stages complete without errors, and confirm that the frontend is correctly deployed to the respective S3 buckets.
Phase 6: Monitoring and Security
Step 6.1: Set Up CloudWatch and CloudWatch Agent
Here, I will implement AWS CloudWatch monitoring by installing the CloudWatch Agent on both instances, using AWS Systems Manager (SSM) Run Command for installation and configuration.
Step 1: Install the CloudWatch Agent via AWS Systems Manager
1. Open Systems Manager in the AWS console.
2. Run Command:
Under Instances & Nodes, click on Run Command.
Click Run command at the top of the page.
3. Select the Command:
In the Command document section, search for and select AWS-RunShellScript.
4. Specify Command Parameters:
In the Command parameters field, paste the following shell command to install the CloudWatch agent:
sudo yum install -y amazon-cloudwatch-agent
For Targets, select the appropriate instances (dev and prod). You can select them by instance IDs or using tags.
5. Execution Role:
Ensure that the execution role has the necessary permissions (e.g., AmazonSSMManagedInstanceCore).
6. Run Command:
Click Run to execute the command.
Wait until the status shows Success to confirm the agent installation.
Step 2: Configure the CloudWatch Agent Using SSM
After installing the CloudWatch Agent, you need to configure it to collect metrics and logs.
1. Generate the Configuration File: AWS provides a wizard that creates an initial configuration file. Since the wizard is interactive, run it from a shell session on the instance (for example, via Session Manager):
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard
The wizard lets you choose which metrics (CPU, memory, disk) and logs to monitor, and saves the output as a configuration file.
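Alternatively, if you want to stay fully within Run Command (which cannot drive an interactive wizard), you can write a minimal configuration file directly. This is only a sketch, and the metric set is an example:
sudo tee /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json > /dev/null <<'EOF'
{
  "metrics": {
    "append_dimensions": { "InstanceId": "${aws:InstanceId}" },
    "metrics_collected": {
      "mem":  { "measurement": ["mem_used_percent"] },
      "disk": { "measurement": ["used_percent"], "resources": ["/"] }
    }
  }
}
EOF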
2. Apply the Configuration File:
Run another command using AWS-RunShellScript to load the configuration and start the agent (fetch-config loads the file; -s starts the agent afterwards):
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -s -c file:/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json
3. Verify Configuration:
Check if the CloudWatch agent is running properly:
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -m ec2 -a status
Ensure it shows the status as running.
Step 3: Create CloudWatch Dashboards and Alarms
Now that the CloudWatch agent is installed and configured on both instances, you can visualize the metrics and set up alarms.
Create CloudWatch Dashboards
In the CloudWatch console, choose Dashboards > Create dashboard, then add widgets (e.g., line charts) for the agent's CPU, memory, and disk metrics from both the dev and prod instances.
Set Up CloudWatch Alarms
1. Open the CloudWatch console and choose Alarms > Create alarm.
2. Select the Metric: Click Select metric and pick the metric to watch (e.g., CPUUtilization for the dev instance).
3. Configure Alarm Conditions: Set thresholds (e.g., CPU utilization > 80% for more than 5 minutes). You can add multiple metrics and configure specific conditions for each (e.g., memory usage < 70%).
4. Set Alarm Actions: Create an SNS topic if you don’t have one, or select an existing SNS topic to receive email notifications or SMS alerts. Ensure you subscribe your email to the SNS topic so you receive notifications when the alarm is triggered.
5. Name and Save the Alarm: Give the alarm a descriptive name, such as High-CPU-Dev-Instance or Low-Memory-Prod-Instance. Click Create alarm.
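The same alarm can also be created from the CLI; a sketch for the dev instance, where the SNS topic ARN is a placeholder to replace with your own:
aws cloudwatch put-metric-alarm \
  --alarm-name High-CPU-Dev-Instance \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-0aa6f466cf9dd4345 \
  --statistic Average \
  --period 300 \
  --evaluation-periods 1 \
  --threshold 80 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-west-2:<account-id>:<your-sns-topic>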
Step 4: Verify Monitoring and Alerts
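A simple way to verify the whole chain is to generate artificial load and confirm that the alarm changes state and the SNS email arrives. A sketch using the stress tool, assuming it is available via EPEL on Amazon Linux 2:
sudo amazon-linux-extras install epel -y
sudo yum install -y stress
stress --cpu 2 --timeout 300   # pin two cores for 5 minutes, then watch the alarm state in CloudWatch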
Additional Monitoring Using Grafana
Let's also extend the monitoring setup by configuring AWS Managed Grafana.
Step 1: Set Up an AWS Managed Grafana Workspace
1. Open the Amazon Managed Grafana console and click Create workspace.
2. Name the Workspace: Give it a descriptive name (e.g., three-tier-webapp-monitoring).
3. Authentication and Permissions: Choose an authentication method (e.g., AWS IAM Identity Center) and allow the workspace to read from CloudWatch.
4. Network Access Settings: Select the appropriate VPC and subnet settings if you have a specific configuration. By default, AWS Managed Grafana will be accessible over the internet unless you choose to restrict it within a VPC. Click Next.
5. Review and Create the Workspace: Review your configuration settings. Click Create workspace. Wait a few minutes while AWS provisions the workspace. You’ll receive a notification once it’s ready.
Step 2: Configure Data Sources in Grafana
1. Open the Workspace: Click the Grafana workspace URL and log in via SSO.
2. Add a Data Source: In Grafana, open the data source settings, add a new data source, and select CloudWatch.
3. Configure the CloudWatch Data Source: Enter a Name for your data source (e.g., AWS CloudWatch). Under Default region, select the region where your EC2 instances are deployed (e.g., us-west-2). Click Save & Test to verify the connection. If successful, you'll see a confirmation message.
Step 3: Create Dashboards in AWS Managed Grafana
1. Create a Dashboard: Click New dashboard and add a panel, selecting the CloudWatch data source and the metrics to display (e.g., CPU, memory, and disk for the dev and prod instances).
2. Customize the Panels: Adjust the visualization type (e.g., line graph, bar chart) based on the data you want to present. Set Time Ranges to dynamically show data over specific periods (e.g., last 1 hour, 24 hours). Configure Thresholds for critical metrics (e.g., if CPU utilization > 80%) to highlight anomalies.
3. Save the Dashboard: Click Save (disk icon) in the top-right corner. Give the dashboard a name (e.g., Dev and Prod Metrics). Optionally, organize dashboards into folders for better categorization (e.g., a folder named Three-Tier Web App Monitoring).
Conclusion
This project demonstrated the deployment and monitoring of a secure three-tier web application using AWS services, ensuring high availability, scalability, and operational efficiency.
I began by creating a GitHub repository with separate branches for development (dev) and production (main). Using Terraform, I set up the foundational AWS infrastructure, including a custom Virtual Private Cloud (VPC) with public and private subnets, NAT gateways, and associated route tables. The frontend was hosted on S3 for both environments, configured to serve as static websites.
The backend was deployed to EC2 instances, utilizing Node.js and Express for application logic. To automate the CI/CD pipeline, I used GitHub Actions, enabling seamless deployments to S3 and EC2. This approach ensured that every code update in the dev or main branch triggered the build and deployment processes automatically, maintaining consistency and minimizing manual intervention.
For monitoring, I integrated AWS CloudWatch, installing the CloudWatch Agent on both dev and prod instances through AWS Systems Manager. Metrics such as CPU, memory, and disk usage were visualized using CloudWatch dashboards, and alarms were set up to notify on key thresholds. Additionally, I configured AWS Managed Grafana, leveraging CloudWatch as a data source to provide advanced visualization capabilities, enabling detailed insight into the application's performance across environments.
This comprehensive setup highlights the capabilities of AWS in delivering secure, scalable, and automated solutions, demonstrating best practices in DevOps and cloud architecture. The use of managed services such as AWS Managed Grafana and CloudWatch allowed for centralized monitoring without incurring the cost of additional EC2 instances.