Automated Netflix Clone Deployment on Cloud Using Jenkins and DevSecOps

In this blog, I am going to walk you through the steps involved in deploying a Netflix clone on the cloud using Jenkins while applying DevSecOps principles for security and automation at every stage. This deployment will be done with AWS EC2, Docker, and a CI/CD pipeline to make continuous integration and delivery easier. I will also show how to introduce security scanning into the workflow, which will help protect your application during this process.

Now, let's go into detail through each step, from preparation of the infrastructure to deployment of an app in a secure and automated manner.

Phase 1: Initial Setup and Deployment


Step 1: Launch EC2 (Ubuntu 22.04)

The first step is to provision an EC2 instance on AWS with Ubuntu 22.04 as the operating system. Here is how to do it:

1 - Log in to the AWS Console

Go to the AWS Management Console and sign in to your account.


AWS Console

2 - Launch EC2 Instance:

In the EC2 Dashboard, click on Launch Instance and select Ubuntu 22.04 as the Amazon Machine Image (AMI). Choose the t2.large instance type, which offers better performance but comes with additional costs. Configure the instance with necessary network settings, security groups, and download the key pair for SSH access to securely connect to the instance.


Security Group Setting

3 - Connect to EC2 Instance:

Use the Connect option to access the instance directly through the browser.


Step 2: Clone the Code

Now that we have an EC2 instance running, it’s time to clone the Netflix Clone project code. Follow these steps:

1 - Update Packages:

Before cloning the repository, it's essential to update the EC2 instance’s packages.

sudo apt-get update        

2 - Clone the Application Code:

Use the following command to clone the Netflix Clone repository from GitHub to your EC2 instance:

git clone https://github.com/kushank-patel/Netflix-Clone-DevSecOps-Project.git        


Step 3: Install Docker and Run the App Using a Container

Docker is the next crucial step in the deployment. It allows us to containerize the application and run it in an isolated environment, ensuring portability from development to production.

1 - Install Docker:

Update the package list and install Docker using the following commands:

sudo apt-get update

sudo apt-get install docker.io -y

sudo usermod -aG docker $USER  # $USER expands to the current user (e.g., 'ubuntu')

newgrp docker

sudo chmod 777 /var/run/docker.sock  # quick-start shortcut only: a world-writable socket is insecure; the docker group membership above is the proper fix        

2 - Build and Run the Application:

  • With Docker installed, navigate to the cloned project folder and build the Docker image

cd Netflix-Clone-DevSecOps-Project

docker build -t netflix .        

  • Run the application as a Docker container:

docker run -d --name netflix -p 8081:80 netflix:latest        

  • You can stop and remove the container, then delete the image, with the following commands:

docker stop netflix

docker rm netflix

docker rmi -f netflix        

3 - Error Handling:

If you run into an error, it might be due to missing the TMDB API key (which is required for the application to retrieve movie data). We’ll address this next.


Step 4: Get the TMDB API Key

To get movie data for your Netflix clone, you need to sign up for an API key from The Movie Database (TMDB).

1 - Create an Account on TMDB

Go to the TMDB website and create an account if you don’t already have one.

2 - Obtain API Key

After logging in to TMDB, navigate to Profile Settings and click on API in the left sidebar. Click Create New API Key, accept the terms, fill in the required details, and you'll receive your TMDB API key upon submission.


You will find your API key on this page of TMDB

3 - Integrate the API Key into the Application

The next step is to pass the TMDB API key to the application. You can pass it as a build argument while rebuilding the Docker image

docker build --build-arg TMDB_V3_API_KEY=<your-api-key> -t netflix .        

Replace <your-api-key> with the actual API key you obtained from TMDB.
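Passing the key inline leaves it in your shell history. As a small sketch (the filename tmdb_api_key.txt is hypothetical), you can read the key from a local file instead:

```shell
# Store the key once in a file kept out of version control (add it to .gitignore),
# then read it into a variable instead of typing it on the command line.
printf 'demo-key-123\n' > tmdb_api_key.txt   # stand-in value for illustration
TMDB_V3_API_KEY=$(cat tmdb_api_key.txt)
echo "key loaded: ${TMDB_V3_API_KEY}"
# docker build --build-arg TMDB_V3_API_KEY="$TMDB_V3_API_KEY" -t netflix .
```

Note that a build argument still ends up visible in the image metadata (docker history netflix), so for anything beyond a demo, prefer injecting the key at run time or via a secrets manager.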

Phase 2: Security

Install SonarQube and Trivy

To ensure your application is secure and follows best practices, you'll need to install SonarQube and Trivy on your EC2 instance for vulnerability scanning.

Install SonarQube:

1 - Run the following Docker command to start the SonarQube container:

docker run -d --name sonar -p 9000:9000 sonarqube:lts-community        


2 - To access SonarQube, navigate to http://<publicIP>:9000 in your browser (the default setup serves plain HTTP, not HTTPS). By default, the username and password are both admin.


Login Screen Sonarqube

Initial Login Id : admin Password: admin


Update old password page

Install Trivy:

1 - Install the necessary dependencies:

sudo apt-get install wget apt-transport-https gnupg lsb-release        


Scan files with Trivy

2 - Add the Trivy repository key and update the package list:

wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | sudo apt-key add -

echo deb https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main | sudo tee -a /etc/apt/sources.list.d/trivy.list

sudo apt-get update        

3 - Install Trivy:

sudo apt-get install trivy        

4 - To scan a Docker image with Trivy, use the following command:

trivy image <imageid>
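In a pipeline you usually want more than a plain scan. A sketch of commonly used Trivy options (the image name netflix:latest is assumed from the earlier build):

```shell
# Report only high and critical findings
trivy image --severity HIGH,CRITICAL netflix:latest

# Fail the build (non-zero exit code) if critical vulnerabilities are found,
# which lets a CI stage gate the deployment
trivy image --exit-code 1 --severity CRITICAL netflix:latest

# Scan the project filesystem (source and lock files) instead of an image
trivy fs .
```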

Integrate SonarQube and Configure        


Scan image with Trivy

1 - Integrate SonarQube with your CI/CD pipeline:

Add the SonarQube plugin to your Jenkins pipeline to automatically analyze the code quality and security issues during the build process.

Ensure the SonarQube server is running and accessible in your network.

2 - Configure SonarQube:

Once integrated, configure SonarQube to analyze your project’s codebase for potential vulnerabilities, bugs, and code quality issues. You can set up quality gates to enforce security standards and ensure the health of your project.


This phase ensures that both the code quality and security vulnerabilities are automatically scanned and managed as part of your DevSecOps pipeline.

Phase 3: CI/CD Setup

Install Jenkins for Automation

To automate the deployment process, install Jenkins on the EC2 instance. Follow these steps:

1 - Install Java (required for Jenkins):

sudo apt update

sudo apt install -y fontconfig openjdk-17-jre

java -version        

Expected Output:

openjdk version "17.0.8" 2023-07-18

OpenJDK Runtime Environment (build 17.0.8+7-Debian-1deb12u1)

OpenJDK 64-Bit Server VM (build 17.0.8+7-Debian-1deb12u1, mixed mode, sharing)        


Output from Terminal

2 - Install Jenkins:

Add the Jenkins repository key:

sudo wget -O /usr/share/keyrings/jenkins-keyring.asc https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key        

Add the Jenkins repository to the sources list:

echo deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian-stable binary/ | sudo tee /etc/apt/sources.list.d/jenkins.list > /dev/null        

Update the system and install Jenkins:

sudo apt-get update

sudo apt-get install jenkins        

Start and enable Jenkins:

sudo systemctl start jenkins

sudo systemctl enable jenkins        

3 - Access Jenkins:

Open a web browser and navigate to http://<publicIP>:8080 to access Jenkins. Retrieve the initial admin password with:

sudo cat /var/lib/jenkins/secrets/initialAdminPassword        


Initial Screen of Jenkins



Jenkins Initial Password Location


Getting Started Page


Install Necessary Plugins in Jenkins

To enhance Jenkins functionality, install the following plugins:

1 - Go to Manage Jenkins → Plugins → Available Plugins.


Jenkins Main Screen

2 - Search for and install the plugins listed below (select Install Without Restart for each)

  • Eclipse Temurin Installer
  • SonarQube Scanner
  • NodeJs Plugin
  • Email Extension Plugin


Plugins

These plugins will enable Jenkins to manage dependencies, integrate with SonarQube for quality checks, handle Node.js configurations, and send email notifications, ensuring an efficient CI/CD pipeline.


Configuring Jenkins for CI/CD Pipeline

Step 1 - Configure Java and Node.js in Global Tool Configuration

1 - Navigate to Manage Jenkins → Global Tool Configuration.

2 - Under JDK, click Add JDK:

Name: JDK 17

Install automatically.

JDK setup

3 - Under NodeJS, click Add NodeJS:

Name: NodeJS 16

Install automatically.

4 - Click Apply and Save.

Step 2: Configure SonarQube

1 - Generate a Token in SonarQube:

Log in to your SonarQube dashboard, navigate to My Account → Security, generate a new token, and copy it for Jenkins integration.



Sonarqube Token

2 - Add SonarQube Token in Jenkins:

In Jenkins, navigate to Dashboard → Manage Jenkins → Credentials → System → Global credentials, click Add Credentials, select Secret Text, paste the SonarQube token, and assign a meaningful ID like sonar-token.


3 - Configure SonarQube in Jenkins:

Navigate to Manage Jenkins → Configure System, locate the SonarQube Servers section, add a new server configuration with the name SonarQube, server URL http://<SonarQube_PublicIP>:9000, select the previously created authentication token, and click Apply and Save.


Configure SonarQube in Jenkins

Global Tool Configuration

Navigate to Manage Jenkins → Global Tool Configuration, scroll to the SonarQube Scanner section, add a new scanner named Sonar Scanner, enable the Install automatically option, and click Apply and Save.

Create a Jenkins Webhook

1 - In Your GitHub Repository:

Navigate to Settings → Webhooks and click Add Webhook.

2 - Webhook Configuration

Set the Payload URL to http://<Jenkins_PublicIP>:8080/github-webhook/, choose Content Type as application/json, select the event trigger (e.g., Just the push event), and click Add Webhook to save.

Notes:

  • Use Configure System in Jenkins for managing server configurations like SonarQube.
  • Use Global Tool Configuration for installing and setting up tools such as SonarQube Scanner or NodeJS.


Installing and Configuring Tools in Jenkins for Dependency and Docker Management

1 - Install Dependency-Check Plugin

In the Jenkins Dashboard, go to Manage Jenkins → Manage Plugins, search for OWASP Dependency-Check under the Available tab, select it, and click Install without restart.


2 - Configure Dependency-Check Tool

After installation, go to Manage Jenkins → Global Tool Configuration, find the OWASP Dependency-Check section, add a tool name (e.g., DP-Check), and save the settings.




Install Docker Tools and Plugins

1 - Open the Jenkins Dashboard and navigate to Manage Jenkins → Manage Plugins.

2 - Under the Available tab, search for Docker and install these plugins

  • Docker
  • Docker Commons
  • Docker Pipeline
  • Docker API
  • docker-build-step


Plugins

3 - Click Install without restart to install the selected plugins.


Add DockerHub Credentials

Go to Manage Jenkins → Manage Credentials, under System → Global credentials (unrestricted), click Add Credentials, select Username with password as the credential type, enter your DockerHub username and password, assign an ID like docker, and save.


Docker Credentials


Steps to Add Email Notification in Jenkins and Set Up Gmail App Password

Step 1 - Add Email Notification in Jenkins

1 - Navigate to Manage Jenkins

Go to Manage Jenkins → Configure System.

2 - Configure Email Notification

Scroll down to the E-mail Notification section, enter smtp.gmail.com for the SMTP server, set the Default user e-mail suffix to @gmail.com, select Use SMTP Authentication, and enter your Gmail email address and password (to be configured next), then check the Test configuration by sending test e-mail box and send a test email to verify the setup.


Step 2 - Create App Password on Gmail

1 - Enable 2-Step Verification

  • Go to your Google Account.
  • Navigate to Security → 2-Step Verification and enable it.

2 - Create App Password:

Once 2-Step Verification is enabled, go to the App passwords section in Security settings, select Mail as the app and Other (Custom name) as the device, enter a custom name (e.g., "Jenkins Email"), click Generate, and copy the 16-character App Password that appears.

Step 3 - Add Email and Password in Jenkins Credentials

1 - Go to Manage Jenkins → Manage Credentials

Navigate to Manage Jenkins → Manage Credentials.

2 - Add New Credentials

Under (global), click Add Credentials, set the Kind to Username with password, enter your Gmail email address as the Username and the Gmail App Password created in Step 2 above as the Password, provide a meaningful ID like gmail-credentials, and click OK to save.


Having installed the Dependency-Check plugin, configured the tool, and added Docker-related plugins along with your DockerHub credentials in Jenkins, you can now proceed with configuring your Jenkins pipeline to incorporate these tools and credentials into your CI/CD process.

You can find the pipeline Groovy code in the project's GitHub repository.
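For reference, here is a trimmed sketch of what such a pipeline can look like. The tool names (JDK 17, NodeJS 16, Sonar Scanner, DP-Check) and credential IDs (sonar-token, docker) match the configuration described above; the placeholders <your-api-key> and <dockerhub-user> are yours to fill in, and the actual Jenkinsfile in the repository may differ:

```groovy
pipeline {
    agent any
    tools {
        jdk 'JDK 17'
        nodejs 'NodeJS 16'
    }
    environment {
        SCANNER_HOME = tool 'Sonar Scanner'
    }
    stages {
        stage('Checkout') {
            steps {
                git branch: 'main', url: 'https://github.com/kushank-patel/Netflix-Clone-DevSecOps-Project.git'
            }
        }
        stage('SonarQube Analysis') {
            steps {
                withSonarQubeEnv('SonarQube') {
                    sh "$SCANNER_HOME/bin/sonar-scanner -Dsonar.projectKey=netflix -Dsonar.projectName=Netflix"
                }
            }
        }
        stage('OWASP Dependency-Check') {
            steps {
                dependencyCheck additionalArguments: '--scan ./', odcInstallation: 'DP-Check'
                dependencyCheckPublisher pattern: '**/dependency-check-report.xml'
            }
        }
        stage('Trivy FS Scan') {
            steps {
                sh 'trivy fs . > trivyfs.txt'
            }
        }
        stage('Docker Build & Push') {
            steps {
                withDockerRegistry(credentialsId: 'docker') {
                    sh 'docker build --build-arg TMDB_V3_API_KEY=<your-api-key> -t netflix .'
                    sh 'docker tag netflix <dockerhub-user>/netflix:latest'
                    sh 'docker push <dockerhub-user>/netflix:latest'
                }
            }
        }
        stage('Deploy Container') {
            steps {
                sh 'docker run -d --name netflix -p 8081:80 <dockerhub-user>/netflix:latest'
            }
        }
    }
    post {
        always {
            // Email Extension plugin: notify on success and failure alike
            emailext subject: "${currentBuild.result}: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
                     body: "Build ${env.BUILD_NUMBER} finished with status ${currentBuild.result}.",
                     to: 'you@example.com'
        }
    }
}
```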


Jenkins Pipeline



Failure message via email in post step of pipeline


Success Message from the pipeline

Now create one more EC2 instance on AWS, of size t2.medium, for monitoring.

Go to the AWS Management Console and open the EC2 Dashboard, click Launch Instance to create a new EC2 instance, search for Ubuntu in the Choose an Amazon Machine Image (AMI) step, and select your desired Ubuntu version (e.g., Ubuntu Server 22.04 LTS) from the available options, then proceed with the setup.



Security Group for Monitoring server

Phase 4: Monitoring

Install Prometheus:

1 - Create a dedicated user for Prometheus:

sudo useradd --system --no-create-home --shell /bin/false prometheus        

2 - Download Prometheus:

wget https://github.com/prometheus/prometheus/releases/download/v2.47.1/prometheus-2.47.1.linux-amd64.tar.gz        

3 - Extract files, move them, and create necessary directories:

tar -xvf prometheus-2.47.1.linux-amd64.tar.gz

cd prometheus-2.47.1.linux-amd64/

sudo mkdir -p /data /etc/prometheus

sudo mv prometheus promtool /usr/local/bin/

sudo mv consoles/ console_libraries/ /etc/prometheus/

sudo mv prometheus.yml /etc/prometheus/prometheus.yml        


4 - Set ownership for directories:

sudo chown -R prometheus:prometheus /etc/prometheus/ /data/        


5 - Create a systemd service for Prometheus:

sudo nano /etc/systemd/system/prometheus.service        

Add the following content:

[Unit]
Description=Prometheus
Wants=network-online.target
After=network-online.target
StartLimitIntervalSec=500
StartLimitBurst=5

[Service]
User=prometheus
Group=prometheus
Type=simple
Restart=on-failure
RestartSec=5s
ExecStart=/usr/local/bin/prometheus \
  --config.file=/etc/prometheus/prometheus.yml \
  --storage.tsdb.path=/data \
  --web.console.templates=/etc/prometheus/consoles \
  --web.console.libraries=/etc/prometheus/console_libraries \
  --web.listen-address=0.0.0.0:9090 \
  --web.enable-lifecycle

[Install]
WantedBy=multi-user.target


6 - Enable and start Prometheus:

sudo systemctl daemon-reload

sudo systemctl enable prometheus

sudo systemctl start prometheus        

7 - Verify the Prometheus service status:

sudo systemctl status prometheus        


You can access Prometheus via http://<your-server-ip>:9090.


Prometheus Screen

Install Node Exporter:

1 - Create a user for Node Exporter and download:

sudo useradd --system --no-create-home --shell /bin/false node_exporter

wget https://github.com/prometheus/node_exporter/releases/download/v1.6.1/node_exporter-1.6.1.linux-amd64.tar.gz        


2 - Extract and move files:

tar -xvf node_exporter-1.6.1.linux-amd64.tar.gz

sudo mv node_exporter-1.6.1.linux-amd64/node_exporter /usr/local/bin/

rm -rf node_exporter*        


3 - Create a systemd service for Node Exporter:

sudo nano /etc/systemd/system/node_exporter.service        

Add the following content:

[Unit]
Description=Node Exporter
Wants=network-online.target
After=network-online.target
StartLimitIntervalSec=500
StartLimitBurst=5

[Service]
User=node_exporter
Group=node_exporter
Type=simple
Restart=on-failure
RestartSec=5s
ExecStart=/usr/local/bin/node_exporter --collector.logind

[Install]
WantedBy=multi-user.target


4 - Enable and start Node Exporter:

sudo systemctl daemon-reload

sudo systemctl enable node_exporter

sudo systemctl start node_exporter        

5 - Verify the Node Exporter status

sudo systemctl status node_exporter        



Configure Prometheus to Scrape Metrics:

1 - Modify the prometheus.yml file:

sudo nano /etc/prometheus/prometheus.yml        

Add the following configuration:

global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'node_exporter'
    static_configs:
      - targets: ['localhost:9100']

  - job_name: 'jenkins'
    metrics_path: '/prometheus'
    static_configs:
      - targets: ['<your-jenkins-ip>:<your-jenkins-port>']


2 - Validate the configuration:

promtool check config /etc/prometheus/prometheus.yml        

3 - Reload Prometheus configuration:

curl -X POST http://localhost:9090/-/reload        


Node Exporter added to the Prometheus

You can access the Prometheus targets at http://<your-prometheus-ip>:9090/targets.


Install Grafana

1 - First, ensure that all necessary dependencies are installed:

sudo apt-get update

sudo apt-get install -y apt-transport-https software-properties-common        


2 - Add the GPG key for Grafana:

wget -q -O - https://packages.grafana.com/gpg.key | sudo apt-key add -        

3 - Add the repository for Grafana stable releases:

echo "deb https://packages.grafana.com/oss/deb stable main" | sudo tee -a /etc/apt/sources.list.d/grafana.list        

4 - Update the package list and install Grafana:

sudo apt-get update

sudo apt-get -y install grafana
        

5 - Enable Grafana to start automatically after a reboot:

sudo systemctl enable grafana-server

sudo systemctl start grafana-server        

6 - Verify that the Grafana service is running correctly:

sudo systemctl status grafana-server        


You can access Grafana at http://<your-server-ip>:3000. The default login is admin for both the username and password.


Grafana Dashboard


7 - Change the Default Password

When you log in for the first time, Grafana will prompt you to change the default password for security reasons. Set a new password following the prompts.

8 - Add Prometheus Data Source

To visualize metrics from Prometheus in Grafana, click the gear icon in the left sidebar, go to "Data Sources," and select "Add data source." Choose Prometheus as the data source type, set the URL to http://localhost:9090, and click "Save & Test" to ensure the connection is successful.
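The same data source can also be provisioned from a file rather than the UI, which is handy if the monitoring server is ever rebuilt. A sketch following Grafana's provisioning convention (the file name is an assumption; the directory is Grafana's standard provisioning path):

```yaml
# /etc/grafana/provisioning/datasources/prometheus.yaml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://localhost:9090
    isDefault: true
```

Restart Grafana (sudo systemctl restart grafana-server) to pick up the provisioned data source.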


Importing Dashboard


To monitor the main server

Similarly, if you want to monitor Jenkins, install the Prometheus metrics plugin in Jenkins; it exposes metrics on the /prometheus path used in the scrape configuration above.


Prometheus Plugin


Jenkins scrape job configured as shown above


Jenkins metrics added to Prometheus


Import Jenkins dashboard in Grafana


Jenkins Dashboard in Grafana

Phase 6: Kubernetes Setup and Monitoring

In this phase, you'll set up a Kubernetes cluster with node groups to provide a scalable environment for deploying and managing your applications. Kubernetes clusters are ideal for managing containerized applications, and with node groups, you can scale your resources based on demand.

Install Prometheus and Monitor Kubernetes

Prometheus is a powerful monitoring and alerting toolkit that will allow you to collect and store metrics from your Kubernetes cluster. To get started with monitoring your cluster, we'll install Prometheus Node Exporter using Helm.

Install Node Exporter using Helm

1 - First, you'll need to add the Prometheus Community Helm repository:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts        

2 - Create a namespace where the Node Exporter will run:

kubectl create namespace prometheus-node-exporter        

3 - Install the Node Exporter using Helm in the newly created namespace:

helm install prometheus-node-exporter prometheus-community/prometheus-node-exporter --namespace prometheus-node-exporter        

4 - Update Prometheus Configuration:

- job_name: 'node_exporter'
  metrics_path: '/metrics'
  static_configs:
    - targets: ['nodeip:9100']

This configuration will scrape the Node Exporter metrics from the nodeip at port 9100. Make sure to replace nodeip with your actual node IP addresses.

Note: Don't forget to reload or restart Prometheus to apply the new configuration.

Deploy Application with ArgoCD

ArgoCD is a powerful continuous delivery tool that helps you deploy applications on your Kubernetes clusters using GitOps principles. To begin, you'll need to install ArgoCD and configure your GitHub repository as a source for your application.

1 - Install ArgoCD:

You can install ArgoCD on your Kubernetes cluster by following the official instructions in the EKS Workshop. Once installed, ArgoCD will allow you to manage application deployments directly from your GitHub repository.


Initial login: the username is admin; the password is auto-generated and stored in the argocd-initial-admin-secret Secret
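In recent ArgoCD versions, the initial admin password is auto-generated at install time and can be read from the cluster:

```shell
# Username is admin; the initial password lives in a Kubernetes Secret
kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d; echo
```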


2 - Set GitHub Repository as Source:

After installing ArgoCD, configure your GitHub repository as the source for your application. This setup will allow ArgoCD to pull application manifests directly from the repository, ensuring that your deployments are always synchronized with your version-controlled configuration.

3 - Create an ArgoCD Application:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
spec:
  destination:
    name: ""
    namespace: default
  project: default
  source:
    repoURL: 'https://github.com/my-repo/my-app.git'
    targetRevision: HEAD
    path: k8s/manifests
  syncPolicy:
    automated:
      prune: true
      selfHeal: true


4 - Access the Application:

To access the deployed application, ensure that port 30007 is open in your security group. Once that’s done, open a new browser tab and navigate to http://<NodeIP>:30007. Your application should now be running and accessible.
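Reaching the app on port 30007 implies it is exposed through a NodePort Service. A minimal sketch of such a manifest (the name netflix-app and the label selector are assumptions; match them to the Deployment in your k8s/manifests):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: netflix-app
spec:
  type: NodePort
  selector:
    app: netflix-app   # must match the Pod labels of your Deployment
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30007  # the port opened in the security group
```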


Phase 7: Cleanup

Once your application is successfully deployed and monitoring is set up, it's time to clean up any unnecessary resources in your AWS environment.

Cleanup AWS EC2 Instances

In this phase, you'll terminate any unused AWS EC2 instances to prevent unnecessary charges and keep your infrastructure organized.

1 - Identify Unused Instances:

Go to the AWS Management Console and navigate to the EC2 Dashboard. Review your running instances and identify those that are no longer needed.

2 - Terminate Instances:

Select the instances that are not in use, and click on "Actions" → "Instance State" → "Terminate." This will stop any unnecessary charges and free up resources.

3 - Review Billing:

It's always a good practice to review your AWS billing dashboard to ensure that no unexpected charges are incurred from unused resources.


Conclusion:

In this project, I designed and implemented a DevSecOps pipeline for a Netflix clone using AWS and Kubernetes to streamline the CI/CD workflow, enhance monitoring, and improve security. By carefully planning and executing the process across seven phases (Infrastructure Setup, Jenkins Pipeline Configuration, Dependency and Docker Management, Monitoring, Kubernetes Cluster Setup, Application Deployment, and Cleanup), I built a robust, scalable, and secure system for continuous integration and deployment. Each phase was crucial in ensuring that the pipeline not only delivers applications efficiently but also maintains high security standards, scalability, and system availability.

Throughout the process, I integrated security measures and monitoring tools and optimized the pipeline so that the application remains secure, highly available, and performant. This setup provides a seamless workflow from development to production, ensuring that the service can scale rapidly while maintaining strict security protocols.

By leveraging tools like Jenkins, SonarQube, Prometheus, Grafana, and Kubernetes, this pipeline does more than just deploy applications. It continuously monitors application performance, gathers system-level metrics, and enforces security checks across the pipeline, ensuring a secure and efficient process at every stage.

Tools and Technologies Used in This Project

  1. Jenkins: For automating the CI/CD pipeline, building, testing, and deploying applications.
  2. SonarQube: For static code analysis, ensuring code quality, and detecting vulnerabilities.
  3. Docker: For containerizing applications to ensure consistent deployment environments.
  4. Prometheus: For monitoring application performance and system metrics.
  5. Grafana: For visualizing metrics collected by Prometheus and providing real-time dashboards.
  6. Kubernetes: For orchestrating containerized applications in a scalable environment.
  7. AWS EC2: For provisioning and managing the infrastructure needed to host the application.
  8. ArgoCD: For automating the deployment of applications to Kubernetes clusters.
  9. Node Exporter: For gathering system-level metrics from the Kubernetes nodes.
  10. Trivy: For vulnerability scanning of container images, ensuring secure deployment.
  11. OWASP Dependency-Check: For scanning project dependencies against known vulnerability databases as part of the pipeline.

