GitHub Actions Scouting Myntra App | DevSecOps


STEP 1A: Setting up AWS EC2 Instance and IAM Role

  1. Sign in to the AWS Management Console with your credentials
  2. Navigate to the EC2 Dashboard
  3. Click Launch Instance
  4. Choose an Amazon Machine Image (AMI): Ubuntu 22.04 LTS
  5. Instance type: t2.large
  6. Add storage: 30 GB
  7. Review and launch
  8. Connect to the instance

STEP 1B: IAM ROLE

Search for IAM in the search bar of AWS and click on roles.

Click on Create Role

Select entity type as AWS service

Use case as EC2 and click on Next.

For the permission policy, select AdministratorAccess (for learning purposes only), then click Next

Provide a Name for Role and click on Create role.

Role is created

Now attach this role to the EC2 instance we created earlier, so we can provision the cluster from that instance.

Go to EC2 Dashboard and select the instance.

Click on Actions –> Security –> Modify IAM role.

Select the role created earlier and click on Update IAM role.

Connect to the instance using MobaXterm or PuTTY.
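If you prefer a plain terminal over MobaXterm or PuTTY, an equivalent SSH command looks like this (the key file name is a placeholder for the key pair you chose at launch):

```shell
# Restrict the key's permissions, then connect as the default ubuntu user.
chmod 400 <your-key>.pem
ssh -i <your-key>.pem ubuntu@<ec2-public-ip>
```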

STEP 2: INSTALL REQUIRED PACKAGES ON INSTANCE

Create a Docker & Docker Scout installation script

vi docker-setup.sh        

Script

#!/bin/bash
# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
# Add the repository to Apt sources:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
# Install the Docker Scout CLI
curl -sSfL https://raw.githubusercontent.com/docker/scout-cli/main/install.sh | sh -s -- -b /usr/local/bin
# Allow the ubuntu user to run docker (group membership takes effect on next login;
# opening up the docker socket below is a lab-only shortcut, not for production)
sudo usermod -aG docker ubuntu
sudo chmod 666 /var/run/docker.sock


Give the script executable permissions and run it

sudo chmod +x docker-setup.sh
sh docker-setup.sh        

Create a script for the other packages

vi script-packages.sh        

Script

#!/bin/bash
sudo apt update -y
sudo mkdir -p /etc/apt/keyrings
sudo wget -O /etc/apt/keyrings/adoptium.asc https://packages.adoptium.net/artifactory/api/gpg/key/public
echo "deb [signed-by=/etc/apt/keyrings/adoptium.asc] https://packages.adoptium.net/artifactory/deb $(awk -F= '/^VERSION_CODENAME/{print$2}' /etc/os-release) main" | sudo tee /etc/apt/sources.list.d/adoptium.list
sudo apt update -y
sudo apt install temurin-17-jdk -y
/usr/bin/java --version
# Install Terraform
sudo apt install wget -y
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install -y terraform
# Install kubectl
sudo apt update
sudo apt install curl -y
curl -LO https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client
# Install AWS CLI
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
sudo apt-get install unzip -y
unzip awscliv2.zip
sudo ./aws/install
# Install Node.js 16 and npm
curl -fsSL https://deb.nodesource.com/gpgkey/nodesource.gpg.key | sudo gpg --dearmor -o /usr/share/keyrings/nodesource-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/nodesource-archive-keyring.gpg] https://deb.nodesource.com/node_16.x focal main" | sudo tee /etc/apt/sources.list.d/nodesource.list
sudo apt update
sudo apt install -y nodejs        

Give executable permissions for script and run it

sudo chmod +x script-packages.sh
sh script-packages.sh        

Check package versions

docker --version
terraform --version
aws --version
kubectl version --client
node -v
java --version        

Run the SonarQube container

docker run -d --name sonar -p 9000:9000 sonarqube:lts-community        

Now copy the public IP address of the EC2 instance and open it in your browser on port 9000

<ec2-public-ip:9000>        
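SonarQube can take a minute or two to start. Before opening the browser, you can check from the instance itself that the server is ready; its /api/system/status endpoint reports "status":"UP" once startup has finished (this assumes the container from the previous step is running):

```shell
# Poll SonarQube's health endpoint on the instance.
curl -s http://localhost:9000/api/system/status
```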

Log in with the default credentials

login: admin
password: admin

Update your SonarQube password. You will then land on the SonarQube dashboard.


Step 3: Integrating SonarQube with GitHub Actions

Integrating SonarQube with GitHub Actions allows you to automatically analyze your code for quality and security as part of your continuous integration pipeline.

We already have SonarQube up and running

On the SonarQube dashboard, click on Manually

Next, provide a name for your project and a branch name, then click on Set up

On the next page, click on With GitHub Actions

This generates an overview of the project and instructions for the integration

Open GitHub and select your repository. In my case it is Myntra-clone; click on Settings

On the page that opens, click on New repository secret

Now go back to Your SonarQube Dashboard

Copy SONAR_TOKEN and click on Generate Token

Click on Generate

Let’s copy the Token and add it to GitHub secrets

Now go back to GitHub and Paste the copied name for the secret and token

Name: SONAR_TOKEN

Secret: Paste Your Token and click on Add secret

Now go back to the SonarQube Dashboard

Copy the Name and Value

Go to GitHub, add it the same way, and click on Add secret

Our SonarQube secrets are now added, as you can see

Go to the SonarQube dashboard and click on Continue

Now create the workflow for your project. In my case, the Myntra clone is built with React.js, so I select Other

It now generates a workflow for the project

Go back to GitHub, click on Add file, and then Create new file

Create sonar-project.properties file in your repository and paste the following code

sonar.projectKey=myntra        
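sonar.projectKey is the only line needed here; a couple of optional, commonly used keys you could add alongside it are sketched below (the project name is illustrative):

```properties
sonar.projectKey=myntra
# Optional extras:
sonar.projectName=Myntra-clone
sonar.sources=.
```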

Here is the file name:

.github/workflows/build.yml   # you can use any name; I am using sonar.yml

Copy content and add it to the file

name: Build,Analyze,scan
on:
  push:
    branches:
      - main
jobs:
  build-analyze-scan:
    name: Build
    runs-on: [self-hosted]
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
        with:
          fetch-depth: 0  # Shallow clones should be disabled for a better relevancy of analysis
      - name: Build and analyze with SonarQube
        uses: sonarsource/sonarqube-scan-action@master
        env:
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
          SONAR_HOST_URL: ${{ secrets.SONAR_HOST_URL }}        

Commit changes

Now click on Actions

Go to the SonarQube dashboard and click on projects and you can see the analysis

If you want to see the full report, click on issues.

Step 4: Add GitHub Runner

Go to GitHub and click on Settings –> Actions –> Runners

Click on New self-hosted runner

Now select Linux and Architecture X64

Use the below commands to add a self-hosted runner

Go to PuTTY or MobaXterm and connect to your EC2 instance

And paste the commands

mkdir actions-runner && cd actions-runner        
curl -o actions-runner-linux-x64-2.310.2.tar.gz -L https://github.com/actions/runner/releases/download/v2.310.2/actions-runner-linux-x64-2.310.2.tar.gz        

This command downloads the runner archive “actions-runner-linux-x64-2.310.2.tar.gz” from GitHub’s releases and saves it in the current directory.

Let’s validate the hash of the download

echo "fb28a1c3715e0a6c5051af0e6eeff9c255009e2eec6fb08bc2708277fbb49f93  actions-runner-linux-x64-2.310.2.tar.gz" | shasum -a 256 -c        
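The pipe above prints “actions-runner-linux-x64-2.310.2.tar.gz: OK” when the checksum matches and exits non-zero otherwise. You can see the same pattern on any local file (sample.txt is a throwaway name; sha256sum is the coreutils equivalent of shasum -a 256):

```shell
# Create a throwaway file and verify its SHA-256 the same way.
printf 'hello\n' > sample.txt
sum=$(sha256sum sample.txt | awk '{print $1}')
# Prints "sample.txt: OK" on success; a mismatch makes the command fail.
echo "$sum  sample.txt" | sha256sum -c
```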

Now Extract the installer

tar xzf ./actions-runner-linux-x64-2.310.2.tar.gz        

Let’s configure the runner

If you provide multiple labels, separate them with commas
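The exact ./config.sh line comes from the GitHub runner page and includes a short-lived registration token, so it is not reproduced here; its general shape looks like this (URL and token are placeholders):

```shell
# Register the runner against your repository (placeholders only;
# copy the real command, including the token, from the GitHub UI).
./config.sh --url https://github.com/<your-user>/<your-repo> --token <REGISTRATION_TOKEN>
```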

Let’s start the runner

./run.sh        

Step 5: EKS Provision

Clone the repo onto your instance

This changes the directory to EKS terraform files
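The two steps above can be sketched as follows (the repository URL and directory name are placeholders; match your own fork's layout):

```shell
git clone https://github.com/<your-user>/Myntra-clone.git
cd Myntra-clone/<eks-terraform-directory>
```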

Change the S3 bucket name in the backend file to your own bucket
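The backend file follows Terraform's standard S3 backend shape; the bucket name and state key below are placeholders (the region matches the ap-south-1 region used later in the workflow):

```hcl
terraform {
  backend "s3" {
    bucket = "<your-s3-bucket>"        # replace with your bucket name
    key    = "eks/terraform.tfstate"   # illustrative state path
    region = "ap-south-1"
  }
}
```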

Initialize the terraform

terraform init        

Validate the configuration and syntax of files

terraform validate        

Plan and apply the configuration
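These are the standard commands for this step; --auto-approve skips the interactive confirmation, so drop it if you want to review the plan first:

```shell
terraform plan
terraform apply --auto-approve
```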

It will take about 10 minutes to create the cluster

Node group ec2 instance

Now add the remaining steps

Next, install npm dependencies

      - name: NPM Install
        run: npm install # Add your specific npm install command

Create a Personal Access token for your DockerHub account

Go to Docker Hub and click on your profile –> Account settings –> Security –> New access token

It asks for a name; provide one and click on Generate token

Copy the token, save it in a safe place, and close the dialog

Now go back to GitHub and click on Settings

Search for Secrets and variables, then click on Actions

On the page that opens, click on New repository secret

Add your DockerHub username with the secret name as

DOCKERHUB_USERNAME   #use your dockerhub username        

Add another secret named DOCKERHUB_TOKEN, paste the token that you generated, and click on Add secret.

      - name: Docker Login
        run: docker login -u ${{ secrets.DOCKERHUB_USERNAME }} -p ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Docker Scout Scan
        run: |
          docker-scout quickview fs://.
          docker-scout cves fs://.
      - name: Docker build and push
        run: |
          # Run commands to build and push Docker images
          docker build -t myntra .
          docker tag myntra mouni9948/myntra:latest
          docker login -u ${{ secrets.DOCKERHUB_USERNAME }} -p ${{ secrets.DOCKERHUB_TOKEN }}
          docker push mouni9948/myntra:latest
        env:
          DOCKER_CLI_ACI: 1
      - name: Docker Scout Image Scan
        run: |
          docker-scout quickview mouni9948/myntra:latest
          docker-scout cves mouni9948/myntra:latest

  1. Docker Login
  2. Docker Scout Scan
  3. Docker build and push
  4. Docker Scout Image Scan

Image is pushed to DockerHub

DEPLOY

  deploy:
    needs: build-analyze-scan
    runs-on: [self-hosted] # Use your self-hosted runner label here
    steps:
      - name: Run the container
        run: docker run -d --name myntra -p 3000:3000 mouni9948/myntra:latest

If you run this workflow, the app is deployed as a container.

Output

<ec2-ip:3000>        

Deploy to EKS

      - name: Update kubeconfig
        run: aws eks --region <cluster-region> update-kubeconfig --name <cluster-name>
      - name: Deploy to EKS
        run: kubectl apply -f deployment-service.yml

Complete Workflow

name: Build,Analyze,scan
on:
  push:
    branches:
      - main
jobs:
  build-analyze-scan:
    name: Build
    runs-on: [self-hosted]
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
        with:
          fetch-depth: 0  # Shallow clones should be disabled for a better relevancy of analysis
      - name: Build and analyze with SonarQube
        uses: sonarsource/sonarqube-scan-action@master
        env:
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
          SONAR_HOST_URL: ${{ secrets.SONAR_HOST_URL }}
      - name: npm install dependency
        run: npm install
      - name: Docker Login
        run: docker login -u ${{ secrets.DOCKERHUB_USERNAME }} -p ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Docker Scout Scan
        run: |
          docker-scout quickview fs://.
          docker-scout cves fs://.
      - name: Docker build and push
        run: |
          # Run commands to build and push Docker images
          docker build -t myntra .
          docker tag myntra mouni9948/myntra:latest
          docker login -u ${{ secrets.DOCKERHUB_USERNAME }} -p ${{ secrets.DOCKERHUB_TOKEN }}
          docker push mouni9948/myntra:latest
        env:
          DOCKER_CLI_ACI: 1
      - name: Docker Scout Image Scan
        run: |
          docker-scout quickview mouni9948/myntra:latest
          docker-scout cves mouni9948/myntra:latest
  deploy:
    needs: build-analyze-scan
    runs-on: [self-hosted]
    steps:
      - name: docker pull image
        run: docker pull mouni9948/myntra:latest
      - name: Deploy to container
        run: docker run -d --name myntra -p 3000:3000 mouni9948/myntra:latest
      - name: Update kubeconfig
        run: aws eks --region ap-south-1 update-kubeconfig --name EKS_CLOUD
      - name: Deploy to kubernetes
        run: kubectl apply -f deployment-service.yml        

Commit the changes

Run this workflow now

kubectl get all        

output

Destruction workflow

name: Destroy
on:
  push:
    branches:
      - main
jobs:
  destroy:
    name: Destroy
    runs-on: [self-hosted]
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
        with:
          fetch-depth: 0  # Shallow clones should be disabled for a better relevancy of analysis
      - name: Stop and remove the container
        run: |
          docker stop myntra
          docker rm myntra
      - name: Update kubeconfig
        run: aws eks --region ap-south-1 update-kubeconfig --name EKS_CLOUD
      - name: Remove from kubernetes
        run: kubectl delete -f deployment-service.yml
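This workflow removes the Docker container and the Kubernetes resources; the cluster itself is torn down with Terraform from the instance (the directory name is a placeholder for wherever your EKS terraform files live):

```shell
cd <eks-terraform-directory>
terraform destroy --auto-approve
```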

It will take about 10 minutes to destroy the EKS cluster

Meanwhile, delete the DockerHub token

Once the cluster is destroyed, delete the EC2 instance and the IAM role.

Also delete the secrets from GitHub.


Thank you for reading. I hope you found this article helpful.

Happy Learning :-)

Mounika Jilakari.
