Step-by-Step Guide to Setting Up a CI/CD Pipeline with GitLab and AWS
Reza Chegini
In this post, I’m sharing a detailed breakdown of a CI/CD pipeline I designed and implemented using GitLab CI/CD, Docker, and AWS. This pipeline automates the build, test, packaging, and deployment processes to ensure efficient and reliable application delivery.
Here’s the full pipeline code:
stages:
  - build
  - test
  - sonarqube
  - package
  - deploy

include:
  - template: Jobs/SAST.gitlab-ci.yml   # GitLab's managed SAST jobs (include must be top-level)

build-job:
  stage: build
  tags:
    - ec2
    - shell
  script:
    - pip install -r requirements.txt

sast:
  stage: test   # run the included SAST scan in the test stage

sonarqube-check:
  stage: sonarqube
  image:
    name: sonarsource/sonar-scanner-cli:latest
    entrypoint: [""]
  variables:
    SONAR_USER_HOME: "${CI_PROJECT_DIR}/.sonar"   # Analysis task cache
    GIT_DEPTH: "0"   # Disable shallow clone so the full history is available
  cache:
    key: "${CI_JOB_NAME}"
    paths:
      - .sonar/cache
  script:
    - sonar-scanner
  allow_failure: true
  rules:
    - if: $CI_COMMIT_BRANCH == 'main'

package-job:
  stage: package
  tags:
    - ec2
    - shell
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" registry.gitlab.com
    - docker build -t registry.gitlab.com/lowyiiii/python-project .
    - docker push registry.gitlab.com/lowyiiii/python-project

deploy-job:
  stage: deploy
  tags:
    - ec2
    - shell
  script:
    - aws configure set aws_access_key_id "$AWS_ACCESS_KEY_ID"
    - aws configure set aws_secret_access_key "$AWS_SECRET_ACCESS_KEY"
    - aws configure set region "$AWS_DEFAULT_REGION"
    - kubectl apply -f Application.yaml
Setting up a CI/CD pipeline can seem complex, but breaking it down into manageable steps reveals how straightforward and efficient it can be. Below, I’ll guide you through a GitLab pipeline I created to automate the development lifecycle, from code integration to deployment in a Kubernetes environment on AWS.
The Big Picture
The pipeline is divided into five stages:
stages:
  - build
  - test
  - sonarqube
  - package
  - deploy
Each stage plays a critical role in automating the process of building, testing, and deploying an application.
Step 1: Build Stage
This stage prepares the application by installing dependencies and setting up the environment.
build-job:
  stage: build
  tags:
    - ec2
    - shell
  script:
    - pip install -r requirements.txt
What Happens Here?
- The job runs on a self-hosted runner on an EC2 instance, selected by the ec2 and shell tags.
- pip install -r requirements.txt installs the project's Python dependencies, confirming the application can be set up cleanly before the later stages run.
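As a side note, those tags assume a shell-executor runner has already been registered on the EC2 instance. A minimal registration sketch under that assumption (the URL, token, and description are placeholders; newer GitLab versions use an authentication token via --token instead of a registration token):

gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "<PROJECT_RUNNER_TOKEN>" \
  --executor "shell" \
  --description "ec2-shell-runner" \
  --tag-list "ec2,shell"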
Step 2: Static Application Security Testing (SAST)
This stage integrates GitLab’s built-in security scanning. Note that the SAST template has to be included at the top level of .gitlab-ci.yml (include is a global keyword, not a job keyword); the sast job definition then simply pins the scan to the test stage.
include:
  - template: Jobs/SAST.gitlab-ci.yml

sast:
  stage: test
What Happens Here?
- GitLab’s managed SAST jobs scan the source code for known vulnerability patterns during the test stage.
- Findings are published as job artifacts (and surfaced in GitLab’s security reports on tiers that support them), so issues are visible before merge.
Why SAST? Security vulnerabilities caught during development are cheaper and easier to fix than those caught in production.
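The template’s behaviour can also be tuned through documented CI/CD variables; for example, excluding folders from the scan might look like this (the paths are illustrative, not from the original project):

sast:
  stage: test
  variables:
    SAST_EXCLUDED_PATHS: "tests/, docs/"   # illustrative paths to skip during scanning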
Step 3: SonarQube Code Quality Check
This stage uses SonarQube to evaluate the codebase for quality and maintainability.
sonarqube-check:
  stage: sonarqube
  image:
    name: sonarsource/sonar-scanner-cli:latest
    entrypoint: [""]
  variables:
    SONAR_USER_HOME: "${CI_PROJECT_DIR}/.sonar"
    GIT_DEPTH: "0"
  cache:
    key: "${CI_JOB_NAME}"
    paths:
      - .sonar/cache
  script:
    - sonar-scanner
  allow_failure: true
  rules:
    - if: $CI_COMMIT_BRANCH == 'main'
What Happens Here?
- The job runs in the official sonar-scanner-cli image, with the entrypoint cleared so GitLab can execute the script directly.
- GIT_DEPTH: "0" disables shallow cloning so the scanner can see the full history, and the .sonar/cache directory is cached between runs (keyed by job name) to speed up repeat analyses.
- sonar-scanner uploads the analysis to the SonarQube server, typically configured through SONAR_HOST_URL and SONAR_TOKEN variables.
What’s Special?
- allow_failure: true means a failed quality check flags the pipeline with a warning instead of blocking delivery.
- The rules block restricts the analysis to the main branch, keeping feature-branch pipelines lean.
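The scanner also needs to know what to analyze, which usually comes from a sonar-project.properties file in the repository root. A minimal sketch, assuming a project key of python-project (the actual key must match the project configured in SonarQube):

# sonar-project.properties (illustrative values)
sonar.projectKey=python-project    # must match the project key in SonarQube
sonar.sources=.                    # analyze the whole repository
sonar.qualitygate.wait=true        # optionally fail the job when the quality gate fails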
Step 4: Package Stage
This is where the application is containerized using Docker.
package-job:
  stage: package
  tags:
    - ec2
    - shell
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" registry.gitlab.com
    - docker build -t registry.gitlab.com/lowyiiii/python-project .
    - docker push registry.gitlab.com/lowyiiii/python-project
What Happens Here?
- The job logs in to the GitLab Container Registry with CI/CD variables, so no credentials are hard-coded in the pipeline.
- docker build packages the application and its installed dependencies into an image.
- docker push publishes the image to registry.gitlab.com/lowyiiii/python-project, where the deploy stage can pull it.
Why Docker? Docker ensures that the application runs consistently across different environments, from development to production.
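The Dockerfile itself is not shown in the post; a minimal sketch for a small Python service might look like the following, where the base image, exposed port, and app.py entry point are assumptions rather than details from the original project:

# Illustrative Dockerfile; base image, port, and entry point are assumptions
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["python", "app.py"]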
Step 5: Deploy Stage
The final stage deploys the Dockerized application to a Kubernetes cluster using AWS.
deploy-job:
  stage: deploy
  tags:
    - ec2
    - shell
  script:
    - aws configure set aws_access_key_id "$AWS_ACCESS_KEY_ID"
    - aws configure set aws_secret_access_key "$AWS_SECRET_ACCESS_KEY"
    - aws configure set region "$AWS_DEFAULT_REGION"
    - kubectl apply -f Application.yaml
What Happens Here?
- The three aws configure set commands write the access key ID, secret access key, and region into the runner’s AWS CLI configuration.
- kubectl apply -f Application.yaml creates or updates the application’s Kubernetes resources on the cluster.
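One assumption worth making explicit: kubectl needs a kubeconfig pointing at the target cluster. If the cluster is Amazon EKS, that is typically generated on the runner with a command like this (the cluster name is a placeholder):

aws eks update-kubeconfig --region "$AWS_DEFAULT_REGION" --name <your-cluster-name>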
Scalable Deployment: This stage ensures the application can scale dynamically based on traffic by leveraging Kubernetes’ orchestration capabilities.
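The manifest referenced by kubectl apply is not included in the post; a minimal sketch of what Application.yaml might contain for this image is shown below. The replica count, labels, ports, and Service type are assumptions:

# Illustrative Application.yaml; values are assumptions, not the original manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-project
spec:
  replicas: 2
  selector:
    matchLabels:
      app: python-project
  template:
    metadata:
      labels:
        app: python-project
    spec:
      containers:
        - name: python-project
          image: registry.gitlab.com/lowyiiii/python-project:latest
          ports:
            - containerPort: 8000   # assumed application port
---
apiVersion: v1
kind: Service
metadata:
  name: python-project
spec:
  selector:
    app: python-project
  ports:
    - port: 80
      targetPort: 8000
  type: LoadBalancer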
Setting Up Environment Variables in GitLab
Environment variables are essential for securely managing secrets like AWS keys and Docker credentials in CI/CD pipelines. GitLab provides a simple way to define these variables, which are stored securely and injected into your pipeline jobs at runtime. This guide walks you through the process of setting up and using these variables effectively.
How to Set Up Environment Variables
Step 1: Navigate to Your Project
In GitLab, open the project, go to Settings > CI/CD, and expand the Variables section.
Step 2: Add Variables
Click Add variable and enter the key (for example, AWS_ACCESS_KEY_ID) and its value.
Step 3: Set Permissions
Mark sensitive values as Masked so they are hidden in job logs, and as Protected if they should only be available to pipelines on protected branches such as main.
Step 4: Save Changes
Click Save Variable to store it securely.
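If you prefer automation over the UI, the same variables can be created through GitLab’s project variables API; a hedged example using curl (the project ID, token, and value are placeholders):

curl --request POST \
  --header "PRIVATE-TOKEN: <your-access-token>" \
  "https://gitlab.com/api/v4/projects/<project-id>/variables" \
  --form "key=AWS_ACCESS_KEY_ID" \
  --form "value=<your-access-key-id>" \
  --form "masked=true"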
Variables Used in This Pipeline
Docker Registry Variables
CI_REGISTRY_USER and CI_REGISTRY_PASSWORD, used by the package job to authenticate with registry.gitlab.com. GitLab predefines these for the project’s container registry, but a deploy token or personal access token can be stored instead.
AWS Credentials
AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_DEFAULT_REGION, consumed by the deploy job when it configures the AWS CLI.
SonarQube Variables (if applicable)
SONAR_HOST_URL and SONAR_TOKEN, which sonar-scanner reads to locate and authenticate with your SonarQube server.
Other Variables
Any additional application- or environment-specific settings can be defined the same way and referenced as $VARIABLE_NAME in job scripts.
How These Variables Are Used
Docker Login
The pipeline uses CI_REGISTRY_USER and CI_REGISTRY_PASSWORD to authenticate with GitLab’s container registry, ensuring secure access to private Docker images.
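A small hardening note: passing the password with -p can end up in shell history or job logs; piping it over stdin achieves the same login more safely (same variables, slightly different form):

echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin registry.gitlab.com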
AWS Configuration
The deploy job passes AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_DEFAULT_REGION to aws configure set, giving the AWS CLI and kubectl the credentials they need to reach the cluster.
SonarQube Integration
The sonarqube-check job relies on SONAR_HOST_URL and SONAR_TOKEN (ideally stored as masked variables) so the scanner can publish analysis results to the SonarQube server.
Benefits of Using Environment Variables in GitLab
Secrets stay out of the repository, masked values are hidden in job logs, and protected variables are exposed only to pipelines on protected branches, so credentials can be rotated in one place without touching code.
Key Benefits of This Pipeline
Every push is automatically built, security-scanned, quality-checked, packaged, and deployed, which shortens feedback loops, keeps main in a releasable state, and removes manual, error-prone deployment steps.
Final Thoughts
CI/CD pipelines like this one are essential for modern DevOps practices, enabling teams to deliver high-quality software efficiently. By integrating tools like GitLab CI/CD, Docker, and AWS, you can:
- Automate builds, tests, and security and quality scans on every push
- Package the application as a Docker image that behaves the same in every environment
- Deploy to Kubernetes on AWS in a repeatable, scalable way