End-to-end Flask application deployment using CodePipeline


Contents

  1. Project Overview
  2. Prerequisites
  3. Steps
  • 3.1. Write the Flask application code
  • 3.2. Write the Dockerfile
  • 3.3. Terraform code to deploy the infrastructure
  • 3.4. YAML file for each CodeBuild project
  • 3.5. Create a build project for each stage
  • 3.6. Create the pipeline
  4. Output
  5. Conclusion

1. Project Overview

  • Flask is a lightweight and flexible web framework for Python, designed to make it easy to build web applications and APIs. It is built on top of the Werkzeug toolkit and the Jinja2 template engine, and provides a simple and intuitive way to handle HTTP requests and responses.
  • Deploying a Flask application to ECS provides a highly scalable, secure, and cost-effective platform for running and managing the application. It simplifies the deployment and management of the Flask application, allowing developers to focus on building and improving the application itself.
  • Using a pipeline for deploying a Flask application to ECS provides a reliable, consistent, and automated deployment process that improves the reliability of the application and reduces the risk of errors. It also allows developers to work together and collaborate on the deployment process, improving communication and reducing the time to deployment.

2. Prerequisites

  • An IAM role for CodeBuild and CodePipeline with sufficient permissions should already exist
  • CodeCommit and ECR repositories should already exist, to push code and images to respectively

3. Steps

  • There are multiple steps in this end-to-end deployment. I have broken them into categories to keep things simple.

3.1. Write the Flask application code

To deploy a Flask app on ECS, we first need the application code itself. I used the sample below for this deployment.

from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello New World'

# main driver function
if __name__ == '__main__':
    app.run(host='0.0.0.0', port=80)

3.2. Write the Dockerfile

Since the app will run on ECS, we need a Flask image, and a Dockerfile lets the pipeline build that image automatically. The pipeline uses this Dockerfile to build and push the image to ECR, where our ECS service picks it up.


# slim base image keeps the final image small
FROM python:3.8-slim-buster
WORKDIR /python-docker
RUN pip3 install flask
COPY a.py .
EXPOSE 80
CMD ["python3", "a.py"]


3.3. Terraform code to deploy the infrastructure

The following files are required to start working with Terraform on AWS.

Add the files below to the CodeCommit repository.

  • Creating vpc.tf: defines a custom VPC in which to deploy the ECS cluster

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "main" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24"
  availability_zone       = "us-east-1a"
  map_public_ip_on_launch = true

  tags = {
    Name = "TF-SG"
  }
}

resource "aws_subnet" "main2" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.2.0/24"
  availability_zone       = "us-east-1b"
  map_public_ip_on_launch = true

  tags = {
    Name = "TF-SG"
  }
}

resource "aws_internet_gateway" "gw" {
  vpc_id = aws_vpc.main.id

  tags = {
    Name = "TF-GW"
  }
}

resource "aws_route_table" "rt" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.gw.id
  }

  tags = {
    Name = "TF-RT"
  }
}

resource "aws_route_table_association" "a" {
  subnet_id      = aws_subnet.main.id
  route_table_id = aws_route_table.rt.id
}

resource "aws_route_table_association" "b" {
  subnet_id      = aws_subnet.main2.id
  route_table_id = aws_route_table.rt.id
}

  • Creating alb.tf: creates an Application Load Balancer in front of the ECS service

resource "aws_lb" "test" {
  name               = "alb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.allow_tls.id]
  subnets            = [aws_subnet.main.id, aws_subnet.main2.id]
  ip_address_type    = "ipv4"
}

resource "aws_lb_target_group" "target_group" {
  name        = "tg"
  port        = 80
  protocol    = "HTTP"
  target_type = "ip"
  vpc_id      = aws_vpc.main.id

  # health_check {
  #   healthy_threshold   = "3"
  #   interval            = "300"
  #   protocol            = "HTTP"
  #   matcher             = "200"
  #   timeout             = "3"
  #   path                = "/"
  #   unhealthy_threshold = "2"
  # }
}

resource "aws_lb_listener" "listener" {
  load_balancer_arn = aws_lb.test.id
  port              = "80"
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.target_group.id
  }
}

  • Creating ecs.tf: defines the resources required to run the application on Amazon Elastic Container Service (ECS)

resource "aws_ecs_cluster" "foo" {
  name = "white-hart"

  setting {
    name  = "containerInsights"
    value = "enabled"
  }
}

resource "aws_cloudwatch_log_group" "log-group" {
  name = "flaskapp-logs"
}

resource "aws_ecs_task_definition" "aws-ecs-task" {
  family = "task"

  container_definitions = <<DEFINITION
  [
    {
      "name": "container",
      "image": "582662663083.dkr.ecr.us-east-1.amazonaws.com/faskapp-new:latest",
      "portMappings": [
        {
          "containerPort": 80,
          "hostPort": 80
        }
      ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "${aws_cloudwatch_log_group.log-group.id}",
          "awslogs-region": "${var.aws_region}",
          "awslogs-stream-prefix": "flaskapp-"
        }
      },
      "cpu": 256,
      "memory": 512,
      "networkMode": "awsvpc"
    }
  ]
  DEFINITION

  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  memory                   = "512"
  cpu                      = "256"
  execution_role_arn       = aws_iam_role.ecsTaskExecutionRole.arn
  task_role_arn            = aws_iam_role.ecsTaskExecutionRole.arn
}

resource "aws_ecs_service" "aws-ecs-service" {
  name                 = "ecs-service"
  cluster              = aws_ecs_cluster.foo.id
  task_definition      = aws_ecs_task_definition.aws-ecs-task.id
  launch_type          = "FARGATE"
  scheduling_strategy  = "REPLICA"
  desired_count        = 5
  force_new_deployment = true

  network_configuration {
    subnets          = [aws_subnet.main.id, aws_subnet.main2.id]
    assign_public_ip = true
    security_groups = [
      aws_security_group.allow_tls.id,
    ]
  }

  load_balancer {
    target_group_arn = aws_lb_target_group.target_group.arn
    container_name   = "container"
    container_port   = 80
  }
}

  • Creating provider.tf: defines the provider configuration for the Terraform project and is an essential part of creating and managing resources with Terraform.

provider "aws" {
  region = var.aws_region
}

variable "aws_region" {
  default = "us-east-1"
}

  • Creating iam.tf: manages the IAM resources in AWS. It lets you define roles, access policies, and permissions as code, ensuring consistency and reducing manual errors.

data "aws_iam_policy_document" "assume_role_policy" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["ecs-tasks.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "ecsTaskExecutionRole" {
  name               = "execution-task-role"
  assume_role_policy = data.aws_iam_policy_document.assume_role_policy.json
}

resource "aws_iam_role_policy_attachment" "ecsTaskExecutionRole_policy" {
  role       = aws_iam_role.ecsTaskExecutionRole.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role"
}

  • Creating security.tf: defines and configures the security group

resource "aws_security_group" "allow_tls" {
  name        = "allow_tls"
  description = "Allow TLS inbound traffic"
  vpc_id      = aws_vpc.main.id

  ingress {
    description = "TLS from VPC"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

  • Creating backend.tf: configures the backend that stores the Terraform state. It lets you define where the state lives, share state data, keep state consistent, and enable remote operations.

terraform {
  backend "s3" {
    region         = "us-east-1"
    profile        = "default"
    key            = "codebuild/ecsnew.tfstate"
    bucket         = "rahul030198"
    dynamodb_table = "terraform-locking-state"
  }
}


3.4. YAML file for each CodeBuild project

  • Creating image_push.yaml: this buildspec builds and pushes the image automatically in CodeBuild


version: 0.2
env:
  variables:
    ACCOUNT_ID: "582662663083"
    REPO_NAME: "faskapp-new"
    AWS_REGION: "us-east-1"
phases:
  install:
    commands:
      - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - echo $COMMIT_HASH
      - apt update -y
      - apt install docker.io -y
      - nohup /usr/local/bin/dockerd --host=unix:///var/run/docker.sock --host=tcp://127.0.0.1:2375 --storage-driver=overlay2 &
      - timeout 15 sh -c "until docker info; do echo .; sleep 1; done"
  pre_build:
    commands:
      - aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com
  build:
    commands:
      - docker build -t $ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$REPO_NAME:latest -t $ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$REPO_NAME:$COMMIT_HASH .
  post_build:
    commands:
      - docker push $ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$REPO_NAME:latest
      - docker push $ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$REPO_NAME:$COMMIT_HASH

  • Creating plan.yaml: this buildspec runs the Terraform plan


version: 0.2
phases:
  install:
    commands:
      - "apt install unzip -y"
      - "wget https://releases.hashicorp.com/terraform/1.0.7/terraform_1.0.7_linux_amd64.zip"
      - "unzip terraform_1.0.7_linux_amd64.zip"
      - "mv terraform /usr/local/bin/"
  pre_build:
    commands:
      - terraform init
  build:
    commands:
      - terraform plan


  • Creating apply.yaml: this buildspec applies the infrastructure


version: 0.2
phases:
  install:
    commands:
      - "apt install unzip -y"
      - "wget https://releases.hashicorp.com/terraform/1.0.7/terraform_1.0.7_linux_amd64.zip"
      - "unzip terraform_1.0.7_linux_amd64.zip"
      - "mv terraform /usr/local/bin/"
  pre_build:
    commands:
      - terraform init
  build:
    commands:
      - terraform apply -auto-approve

  • Creating destroy.yaml: this buildspec destroys the infrastructure


version: 0.2
phases:
  install:
    commands:
      - "apt install unzip -y"
      - "wget https://releases.hashicorp.com/terraform/1.0.7/terraform_1.0.7_linux_amd64.zip"
      - "unzip terraform_1.0.7_linux_amd64.zip"
      - "mv terraform /usr/local/bin/"
  pre_build:
    commands:
      - terraform init
  build:
    commands:
      - terraform destroy -auto-approve


3.5. Create a build project for each stage: Now we need to configure the build projects. Go to CodeBuild, click Create build project, and apply the following configuration.

1. Build project for image push:

  • Give your build project a suitable name.


  • Set the source provider to CodeCommit and select the repository you already created. Choose Branch as the reference type and specify your branch.


  • Select the environment image and operating system; I went with a managed image and Ubuntu.


  • Specify the role we created for CodeBuild and the buildspec file; mine is image_push.yaml.



  • Similarly, I created three more build projects for plan, apply, and destroy, using plan.yaml, apply.yaml, and destroy.yaml as their buildspec files.



3.6. Create the pipeline: AWS CodePipeline provides the CI/CD (continuous integration/continuous delivery). Our pipeline consists of six stages: source, image push, terraform plan, terraform apply, approval for destroy, and terraform destroy.


4. Output: The ECS application is now reachable through the load balancer's DNS name.



5. Conclusion: Deploying a Flask application to ECS provides a powerful and reliable platform for running your application in the cloud, and helps you streamline deploying and managing it at scale.


This was pretty long, but I hope the extra details help anyone wanting to do something similar: CI/CD fully integrated into AWS.

Any thoughts? Drop a comment! Like the article? Clap a few times below and let me know how much you enjoyed it.

Thank you!
