A Step-by-Step DevOps project
Waleed Magdy
Head of DevOps and Cloud Services @ Bexprt | Driving Innovation, Resilience | AI | DevOps Lead Advisor
Introduction:
In today's fast-paced tech landscape, mastering DevOps tools and technologies is a paramount goal for IT professionals aiming to streamline workflows, enhance collaboration, and accelerate project delivery. Countless individuals invest time and effort into completing courses on tools like Terraform, ArgoCD, Istio, Kubernetes, and AWS, equipping themselves with the theoretical knowledge to revolutionize their development processes.
Yet, what often remains unspoken is the formidable challenge that arises once the courses are completed and the real-world integration journey begins. The struggle of connecting the dots between these powerful tools and effectively implementing them into a cohesive DevOps pipeline can be both daunting and perplexing.
It's a narrative that many of us have encountered firsthand: the excitement of acquiring new skills, followed by the frustration of translating those skills into tangible results within our projects. This article aims to bridge that gap by providing a guide not just to the 'how' of using each of these tools, but to the 'how' of combining them into a holistic DevOps approach.
So, let's get started. Open your terminals, fire up your browsers, and let's embark on a journey that transforms theoretical understanding into practical mastery. It's time to connect the dots, overcome the hurdles, and embrace the full potential of DevOps integration.
Project Overview:
Our DevOps project will revolve around creating an AWS infrastructure, deploying a Kubernetes cluster using Amazon Elastic Kubernetes Service (EKS), and establishing a streamlined workflow for continuous integration and deployment.
We will be using the Bookinfo demo app as our foundation. This app, comprising several services, exemplifies the complexities of a modern microservices architecture.
Stack:
The project utilizes the following technologies and tools: Terraform (with Terraform Cloud), AWS (VPC, EC2, ECR, and EKS), Kubernetes, Istio, Argo CD, GitHub Actions, and an observability stack of Prometheus, Grafana, Loki, Jaeger, and Kiali.
We will try to cover as many technologies and concepts in this article as possible.
Getting Started:
To begin, you need an AWS account. If you don't have one, head to the AWS website and sign up for an account.
We need an IAM user Access Key and Secret Key to use with Terraform.
Never disclose your Access Keys to anyone, and always use a secrets manager. I'm currently using a sandbox AWS account on cloud.guru, so there are no concerns about sharing these credentials with you.
Clone the repository:
Clone the devops_project repository to your GitHub account or to your local machine.
I created the repository with most of the work we need to do in this article.
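If you're working locally, the clone is the usual one-liner (the account name below is a placeholder for wherever you forked it):

git clone https://github.com/<your-account>/devops_project.git
cd devops_project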
Terraform Workflow
In this article I am using Terraform Cloud so that Terraform runs in a consistent and reliable environment.
We will discuss other tools like Atlantis in future articles, so keep following for the upcoming ones.
First create account on Terraform Cloud if you don’t have one.
Terraform Cloud Sign up (it has a free tier, if you are asking).
Create your first organization and then Set up a workspace in Terraform Cloud. This will help manage your infrastructure as code and enable collaboration.
Choose the Version Control Workflow to work with your repository on GitHub, which is what we will do here.
If you prefer to work with Terraform from your terminal, you can go for the CLI-driven Workflow instead.
Version Control Workflow > Connect to GitHub > choose the repository > configure settings
In Advanced options, set the Terraform Working Directory to terraform, as our Terraform code lives inside the terraform directory.
Before talking about the Terraform files, I want to share with you two links on Terraform best practices: terraform-best-practices and Terraform — Best Practices.
Learn from them and pick the Terraform code structure that fits your needs.
Now let’s talk about the Terraform Directory before running our first plan and apply.
terraform.tf
This Terraform configuration block includes the settings for Terraform Cloud and configures the AWS provider. Let's break down the code step by step:
terraform {
  # Store state and run plans/applies in Terraform Cloud
  cloud {
    organization = "devops-project-org"
    workspaces {
      name = "devops-project-workspace"
    }
  }
}

# All AWS resources will be created in us-east-1
provider "aws" {
  region = "us-east-1"
}
vpc.tf
This Terraform code snippet is used to create a Virtual Private Cloud (VPC) in Amazon Web Services (AWS) using the terraform-aws-modules/vpc/aws module. Let's break down the code step by step:
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
}
name = "my-vpc"
cidr = "10.0.0.0/16"
azs = ["us-east-1a", "us-east-1b", "us-east-1c"]
private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
public_subnets = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]
enable_nat_gateway = true
enable_vpn_gateway = true
tags = {
Terraform = "true"
Environment = "dev"
}
ecr.tf
resource "aws_ecr_repository" "my_repo" {
name = "my-ecr-repo"
image_tag_mutability = "MUTABLE"
}
output "repository_url" {
value = aws_ecr_repository.my_repo.repository_url
}
eks.tf
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "~> 19.0"
}
cluster_name = "my-cluster"
cluster_version = "1.27"
cluster_endpoint_public_access = true
cluster_addons = {
coredns = { most_recent = true }
kube-proxy = { most_recent = true }
vpc-cni = { most_recent = true }
}
vpc_id = module.vpc.vpc_id
subnet_ids = module.vpc.private_subnets
control_plane_subnet_ids = module.vpc.public_subnets
eks_managed_node_group_defaults = {
instance_types = ["m6i.large", "m5.large", "m5n.large", "t3.large"]
}
eks_managed_node_groups = {
green = {
use_custom_launch_template = false
min_size = 1
max_size = 10
desired_size = 1
instance_types = ["t3.large"]
capacity_type = "SPOT"
}
}
fargate_profiles = {
default = {
name = "default"
selectors = [ { namespace = "default" } ]
}
}
tags = {
Environment = "dev"
Terraform = "true"
}
Terraform Cloud Env Vars
We need to configure our organization with the Access Key and Secret Key; you can do this for a specific workspace or globally for the organization.
We will do it globally for the organization by creating a Variable Set.
Under the organization settings, go to Variable sets and create a new one.
Keep in mind that the AWS variable names are case-sensitive, so you need to write them exactly like this: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY.
Plan and Apply Terraform Code
Now we are ready to start the plan.
Review the planned resources and then Confirm & Apply.
Check AWS Resources Creation
Verify in the AWS Management Console that your defined resources have been created as intended.
Deploy an EKS Management EC2 Instance
Of course, it doesn't have to be an EC2 instance; you can use your own terminal.
You can use Terraform to provision this instance too, and I think you should.
We will set up this instance to manage our EKS cluster from.
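If you do go the Terraform route, a minimal sketch could look like the following — the AMI lookup targets Ubuntu 22.04, and the key pair name is a placeholder you'd replace with your own:

# ec2.tf (hypothetical): a small management box in one of our public subnets
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
  }
}

resource "aws_instance" "eks_manage" {
  ami                         = data.aws_ami.ubuntu.id
  instance_type               = "t3.micro"
  subnet_id                   = module.vpc.public_subnets[0]
  associate_public_ip_address = true
  key_name                    = "my-key" # placeholder: an existing EC2 key pair

  tags = {
    Name = "eks-manage"
  }
}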
sudo apt install unzip
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
aws --version
Kubectl
# Download kubectl, verify its checksum, then install it
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
echo "$(cat kubectl.sha256) kubectl" | sha256sum --check
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client
aws eks update-kubeconfig --name my-cluster --region us-east-1
kubectl get nodes
connect: connection refused
"Not working?!"
You need to troubleshoot why the kubectl client can't talk to the EKS endpoint.
Hint: something is blocking the requests to the EKS endpoint.
Once you can talk to the EKS endpoint, you should add the fix to our Terraform code.
If you are facing any problems getting this to work, please don't hesitate to let me know.
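Without spoiling the exercise entirely: if your culprit turns out to be endpoint access, the EKS module has inputs for exactly this. A hypothetical fix in eks.tf (the CIDR is a placeholder for your management instance's public IP):

  # Allow only our management instance to reach the public API endpoint
  cluster_endpoint_public_access       = true
  cluster_endpoint_public_access_cidrs = ["203.0.113.10/32"] # placeholder IP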
Now you can run:
kubectl get nodes
##OUTPUT##
NAME                         STATUS   ROLES    AGE   VERSION
ip-10-0-1-149.ec2.internal   Ready    <none>   64s   v1.27.3-eks-a5565ad
Install Required CLIs
We installed the AWS CLI and kubectl. Now we need to install istioctl and the Argo CD CLI, and deploy the required Kubernetes resources.
# Download and install Istio, then deploy the demo profile to the cluster
curl -L https://istio.io/downloadIstio | sh -
cd istio-1.18.2/
export PATH=$PWD/bin:$PATH
istioctl install --set profile=demo -y

# Download and install the Argo CD CLI
curl -sSL -o argocd-linux-amd64 https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64
sudo install -m 555 argocd-linux-amd64 /usr/local/bin/argocd
rm argocd-linux-amd64

# Deploy Argo CD itself, expose its API server, and grab the initial admin password
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'
argocd admin initial-password -n argocd
It's not a best practice to do this from the terminal, but I will give you a hint: "Terraform".
Try to do it with Terraform and let me know if you need help.
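As a sketch of the Terraform route — this assumes the Helm provider and reuses the outputs of our EKS module; chart values are left at their defaults:

# argocd.tf (hypothetical): install Argo CD with the official Helm chart
provider "helm" {
  kubernetes {
    host                   = module.eks.cluster_endpoint
    cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
    exec {
      api_version = "client.authentication.k8s.io/v1beta1"
      command     = "aws"
      args        = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
    }
  }
}

resource "helm_release" "argocd" {
  name             = "argocd"
  namespace        = "argocd"
  create_namespace = true
  repository       = "https://argoproj.github.io/argo-helm"
  chart            = "argo-cd"
}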
Update Workflows with ECR URL
Modify your continuous integration workflows to include the ECR repository URL for container image storage.
Under .github/workflows/ you will find the GitHub Actions workflows we will use to build and push our container images. Let's break one workflow down:
details_workflow.yml
Here's what each step does:
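The full file lives in the repo under .github/workflows/; the hypothetical sketch below mirrors its shape, with each step annotated — the service path, repository name, and tag scheme here are placeholders rather than the repo's actual contents:

name: details-build-push
on:
  push:
    paths:
      - "src/details/**"   # hypothetical path to the details service
  workflow_dispatch:        # allows manual runs

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      # Check out the repository
      - uses: actions/checkout@v3

      # Authenticate to AWS using the repository secrets we add below
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1

      # Log in to the ECR registry Terraform created for us
      - name: Login to Amazon ECR
        id: ecr
        uses: aws-actions/amazon-ecr-login@v1

      # Build the service image and push it to ECR
      - name: Build and push
        run: |
          IMAGE="${{ steps.ecr.outputs.registry }}/my-ecr-repo:details-${{ github.sha }}"
          docker build -t "$IMAGE" src/details
          docker push "$IMAGE"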
Update GitHub Repo with AWS Secrets
Under Settings > Secrets and variables > Actions, add your AWS Access Key and Secret Key as repository secrets (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY).
Run Workflows
Let the party begin!
The workflows will run when there is a push inside the service directories, or manually; I will run them manually now.
You will find that I only added the "Update Kubernetes Deployment Image" part to details_workflow.yml.
You need to complete the other workflows yourself; the sketch below shows the piece you'll be copying.
You also need to check the image part of the manifests under manifests/kubernetes to match it with the workflows.
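Here is a hedged sketch of that final step, roughly as it would sit at the end of each workflow — the manifest path and sed pattern are illustrative, so match them against the actual files in manifests/kubernetes:

      # Hypothetical step: rewrite the image tag in the manifest and commit it
      # back to Git, so Argo CD picks up the change on its next sync
      - name: Update Kubernetes Deployment Image
        run: |
          IMAGE="${{ steps.ecr.outputs.registry }}/my-ecr-repo:details-${{ github.sha }}"
          sed -i "s|image: .*my-ecr-repo:details-.*|image: ${IMAGE}|" manifests/kubernetes/details.yaml
          git config user.name "github-actions"
          git config user.email "github-actions@users.noreply.github.com"
          git commit -am "chore: bump details image to ${{ github.sha }}"
          git push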
Check ECR Repo
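From the management instance (or anywhere with the AWS CLI configured) you can confirm that the workflow actually pushed an image:

aws ecr list-images --repository-name my-ecr-repo --region us-east-1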
Argo CD
Deploy the Microservices Manifests:
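You can create the application from the Argo CD UI, or from the CLI; here is a sketch of the CLI route (the repo URL and manifests path are placeholders for your fork's actual values):

argocd app create bookinfo \
  --repo https://github.com/<your-account>/devops_project.git \
  --path manifests/kubernetes \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace staging \
  --sync-policy automated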
You can check the Argo CD home page, and you can also check the resources in the EKS cluster on the AWS Console.
If you check any Pod in the staging namespace, you will find that each one has two containers.
Istio Proxy uses Envoy
Envoy proxies are deployed as sidecars to services, logically augmenting the services with Envoy's many built-in features, for example: dynamic service discovery, load balancing, TLS termination, HTTP/2 and gRPC proxying, circuit breakers, health checks, and rich metrics.
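That second container is the automatically injected Envoy sidecar, which relies on the namespace being labeled for injection. If your pods come up with only one container, the label is the first thing to check — a quick way to set and verify it:

kubectl label namespace staging istio-injection=enabled --overwrite
kubectl get pods -n staging    # after a restart, each pod should report READY 2/2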
Istio Gateways and VirtualServices
Deploy the Gateways and VirtualServices:
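Assuming the repository carries its own Gateway and VirtualService manifests, apply those; for reference, the upstream Bookinfo sample ships an equivalent pair, so as an illustration (using the Istio 1.18 sample as a stand-in for the repo's files):

kubectl apply -n staging -f https://raw.githubusercontent.com/istio/istio/release-1.18/samples/bookinfo/networking/bookinfo-gateway.yaml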
kubectl get services -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 172.20.27.197 ae271cd157c214ab888061809021225a-1922516608.us-east-1.elb.amazonaws.com 15021:32042/TCP,80:30092/TCP,443:31659/TCP,31400:31529/TCP,15443:32377/TCP 148m
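With the ingress gateway's external hostname in hand, you can hit the app through the mesh — /productpage is the Bookinfo entry point; substitute your own ELB hostname:

export GATEWAY_URL=ae271cd157c214ab888061809021225a-1922516608.us-east-1.elb.amazonaws.com
curl -s "http://$GATEWAY_URL/productpage" | grep -o "<title>.*</title>"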
Monitoring
Under argocd/apps/observability, create a new Argo CD app in the monitoring namespace.
In short, we have Prometheus (metrics datastore), Loki (logging), and Jaeger (tracing).
Grafana
One of the many dashboards you can import, with a lot more to explore.
Kiali
A comprehensive monitoring tool for the Istio service mesh, with a lot to explore as well.
The dashboard gives you a live, fast view of any issue that could happen to any of your microservices.
Assignment
Conclusion
The DevOps project outlined above provides an in-depth journey into building a streamlined development and deployment pipeline. By creating an AWS infrastructure, setting up an EKS cluster, integrating Istio and Argo CD, and automating workflows, you'll gain practical experience in orchestrating modern software delivery. This hands-on project not only enhances your technical skills but also equips you with the knowledge to architect efficient DevOps pipelines for your future projects. Happy DevOps building.
Excuse me, as I tried to make this as short as possible, but I couldn't make it any shorter.
Let me know what you think, and reach out if you need any help.
Regards
Waleed