Deploying Secure Full Stack Web Apps: GitLab CI/CD, S3 Static Frontend, ECS Backend, and OIDC Security.

Author: Arsha P.S


Introduction

Deploying a full-stack web application featuring a ReactJS frontend, Node.js backend, and MySQL database offers versatility across various cloud environments. In this context, we're exploring deployment on AWS. The application code is stored in GitLab repositories, with GitLab pipelines managing the CI/CD process.


The frontend is served from an S3 bucket with static website hosting enabled, with CloudFront in front of it as the CDN. The backend is deployed on ECS Fargate. For the GitLab CI/CD pipelines, we've opted for enhanced security using OpenID Connect. This method securely generates temporary credentials for AWS authentication, eliminating the need to store long-lived credentials within GitLab.

Prerequisites

  • AWS account with a user having full admin access to the S3, CloudFront, and IAM services.
  • GitLab project repository to store the code.
  • ReactJS frontend application.
  • Node.js backend application.

Frontend

Architecture



Implementation

Create the S3 bucket

First, log in to the AWS account and go to the S3 service. Click on "Create bucket" and give the bucket a unique name; let it be "react-sample-app-".

Keep Block Public Access turned on for now, then click "Create bucket".

  1. Enabling static web hosting

Select the newly created bucket, go to the Properties tab, and scroll down to the "Static website hosting" section. Enable it, enter index.html for both the Index and Error documents, and click "Save changes".

The files inside the build folder then need to be copied to the S3 bucket. You can upload them directly from the management console using the Upload button, or use AWS CLI commands after configuring credentials, as in the sketch below.
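
For example, assuming the AWS CLI is already configured with suitable credentials and the bucket is named "react-sample-app-ch" (a placeholder; substitute your own bucket name), a minimal upload could look like this:

# Sync the local build output to the S3 bucket (bucket name is a placeholder)
aws s3 sync build/ s3://react-sample-app-ch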


Since we will configure CloudFront and allow only traffic from CloudFront to reach the S3 bucket, the bucket policy should ultimately permit CloudFront access only. For initial testing, however, you can turn "Block Public Access" off and use a bucket policy that allows public reads:



{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::clockhash-example/*"
        }
    ]
}

Copy the URL shown in the static website hosting section and try accessing it from your browser.

2. Configuring AWS CloudFront

The next step is to create a CloudFront distribution for the static website. Go to CloudFront and click "Create distribution". Choose our S3 bucket as the origin domain and set origin access to "Origin access control". To ensure S3 only accepts traffic coming from CloudFront, a bucket policy must be added; CloudFront generates this policy for you, and you can copy it from the console after the distribution is created. Keep the rest of the settings as default and create the distribution. Then copy the generated bucket policy and apply it under the S3 bucket's Permissions tab.

{
    "Version": "2008-10-17",
    "Id": "PolicyForCloudFrontPrivateContent",
    "Statement": [
        {
            "Sid": "1",
            "Effect": "Allow",
            "Principal": {
                "Service": "cloudfront.amazonaws.com"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::react-sample-app-ch/*"
        },
        {
            "Sid": "2",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity <Distribution_ID>"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::react-sample-app-ch/*"
        }
    ]
}

Now the S3 bucket will only allow traffic from CloudFront, which is more secure. Remember to turn Block Public Access back on once the CloudFront-only policy is in place.

CI/CD with GitLab

Set up OpenID Connect

OpenID Connect enables secure access from GitLab to AWS without storing credentials as variables in the GitLab CI/CD settings. OIDC generates a temporary access token each time GitLab needs to access AWS. Use the following steps:


Create an identity provider in AWS


The first step in the OpenID Connect implementation is creating a web identity provider. Use the steps below; a CLI sketch follows the list.

  1. Go to the IAM service in AWS.
  2. Choose "Identity providers" and click "Add provider". Set the provider URL to the GitLab URL "https://gitlab.com", give the audience a friendly name (here, "clockhash"), and get the thumbprint.
  3. Click the "Add provider" button.
  4. Now create a web identity role for the provider you just created. Under IAM, go to Roles and create a web identity role, then attach a policy granting access to the S3 bucket. The GitLab pipeline only needs S3 access here; add extra permissions only if the pipeline needs access to other resources.
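
If you prefer the CLI, a rough equivalent of these console steps is sketched below. The role name, audience value, thumbprint, and attached policy are illustrative placeholders, and the trust policy file corresponds to the example trust policy shown later in this section.

# Register gitlab.com as an OIDC identity provider (thumbprint is a placeholder)
aws iam create-open-id-connect-provider \
  --url https://gitlab.com \
  --client-id-list clockhash \
  --thumbprint-list <gitlab-tls-thumbprint>

# Create a role that GitLab can assume via the provider (trust policy kept in a local file)
aws iam create-role \
  --role-name gitlab-oidc-deploy \
  --assume-role-policy-document file://trust-policy.json

# Grant the role access to S3 (scope this down to your bucket in practice)
aws iam attach-role-policy \
  --role-name gitlab-oidc-deploy \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess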

The next step is adding the variables for the identity provider in GitLab. Log in to GitLab and, under your project settings, open the CI/CD variables section. Add the variables as shown in the image.



Also, to enforce more fine-grained access, such as restricting access to pipelines running on a specific branch, edit the trust policy attached to the IAM role.

For example:


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::679214156698:oidc-provider/gitlab.com"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringLike": {
                    "gitlab.com:aud": "dotmundo",
                    "gitlab.com:sub": "project_path:group_name/<projects_name>/*:ref_type:branch:ref:<branch_name>"
                }
            }
        }
    ]
}

DNS configuration in Route 53

We can either register our domain with AWS or use an external domain registrar to get a domain name for the application. Here I used GoDaddy.


First, create a hosted zone in Route 53. To use AWS nameservers, modify the DNS settings in your GoDaddy account and add AWS's nameservers there.

Now add an alias to the CloudFront distribution as an A record in the Route 53 hosted zone, as in the sketch below.
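
A hedged CLI sketch of that alias record, with a hypothetical domain and distribution domain name (replace the hosted zone ID, record name, and DNSName with your own); Z2FDTNDATAQYW2 is the fixed hosted zone ID Route 53 uses for CloudFront aliases:

aws route53 change-resource-record-sets \
  --hosted-zone-id <your-hosted-zone-id> \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "app.example.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z2FDTNDATAQYW2",
          "DNSName": "d1234abcdef.cloudfront.net",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'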


Certificate generation in ACM and enabling SSL

ACM is an AWS service that issues certificates for our domains. To create one, go to ACM and request a certificate. Enter the FQDN; you can also create a wildcard certificate if needed. Then submit the request. Keep in mind that the selected region must be N. Virginia (us-east-1), since CloudFront can only use certificates from that region.
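
As a sketch, the same request can be made from the CLI; the domain name below is a placeholder, and the region is forced to us-east-1 so CloudFront can use the certificate:

# Request a DNS-validated certificate (domain is a placeholder)
aws acm request-certificate \
  --domain-name app.example.com \
  --validation-method DNS \
  --region us-east-1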

Now open the newly requested certificate and click "Create records in Route 53"; this generates a CNAME validation record in Route 53. Then enable "Redirect HTTP to HTTPS" and add the certificate by editing the CloudFront distribution created earlier. This enables SSL for our domain.


Configure GitLab CI/CD

Now we can proceed with implementing the CI/CD part of our project. The frontend and backend are placed in separate projects. Push all your code to the repo and create the pipeline. The frontend pipeline includes two stages:

  • Stage 1: build the React code and generate the artifact for the static website.
  • Stage 2: authenticate to AWS, sync the contents of the build folder to S3, and invalidate the CloudFront cache.

See the GitLab CI pipeline script for reference:


#pipeline to test in aws
stages:
  - build_artifact
  - deploy

build_artifact:
  image: node:18.14.1
  stage: build_artifact
  script:
    - mv ./.env.staging ./.env
    - npm install && npm run build
  artifacts:
    paths:
      - build/
  only:
    - develop

deploy:
  image: registry.gitlab.com/runner:latest
  stage: deploy
  id_tokens:
    GITLAB_OIDC_TOKEN:
      aud: clockhash
  script:
    - >
      export $(printf "AWS_ACCESS_KEY_ID=%s AWS_SECRET_ACCESS_KEY=%s AWS_SESSION_TOKEN=%s"
      $(aws sts assume-role-with-web-identity
      --role-arn ${ROLE_ARN}
      --role-session-name "GitLabRunner-${CI_PROJECT_ID}-${CI_PIPELINE_ID}"
      --web-identity-token ${GITLAB_OIDC_TOKEN}
      --duration-seconds 3600
      --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]'
      --output text))
    - aws s3 sync build/ s3://$S3_BUCKET --request-payer requester
    - aws cloudfront create-invalidation --distribution-id $DISTRIBUTION_ID --paths "/*"
  dependencies:
    - build_artifact
  only:
    - develop

Also add "DISTRIBUTION_ID", the CloudFront distribution ID, as a variable in GitLab.

Backend

The backend is a Node.js API using MySQL as the database, and the app is dockerised. The backend can be deployed on AWS ECS with RDS as the database. We use GitLab as the code repository and for the CI/CD implementation.

Architecture.

Implementation

Create the RDS instance

Our application needs a MySQL database, which we can create using AWS's managed database service, RDS. To create an RDS instance, use the steps below.

  1. Go to the RDS service, click "Create database", and choose the MySQL engine version required by the application.
  2. For the deployment option, choose Multi-AZ if the database is for production; otherwise a single AZ is enough.
  3. Provide a master username and password for accessing the database.
  4. Choose the instance type.
  5. Select "Password authentication" for database authentication.

The instance can be placed inside a private subnet and configured, via its security group, to allow access only from ECS. Finally, click "Create"; a CLI sketch of the equivalent command follows.
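
A minimal CLI sketch of the same creation, with all identifiers, sizes, and network settings as placeholder assumptions to adjust for your environment:

aws rds create-db-instance \
  --db-instance-identifier clockhash-staging-db \
  --engine mysql \
  --db-instance-class db.t3.micro \
  --allocated-storage 20 \
  --master-username admin \
  --master-user-password '<strong-password>' \
  --db-subnet-group-name <private-db-subnet-group> \
  --vpc-security-group-ids <sg-allowing-ecs-only> \
  --no-publicly-accessible \
  --region eu-west-1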

After creating the database, modify your application to use it by updating the database environment variables.


Create an ECR repo and upload the image.

We can use AWS's ECR registry to store our custom Docker image; ECS pulls the image from ECR when running tasks. To create an ECR repository, use the following steps (a CLI equivalent follows the list).

  1. Go to ECR under the AWS services.
  2. Click on "Repositories" and give the repository a friendly name; here I used "clockhash-ecr".
  3. Click "Create repository".
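
The same repository can be created from the CLI, assuming the repository name used above:

# Create the ECR repository that ECS will pull images from
aws ecr create-repository --repository-name clockhash-ecr --region eu-west-1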

To use the ECR registry, we first need to authenticate with AWS. For that, we need AWS credentials for CLI access (an access key ID and secret access key), which can be created under your IAM user; then follow the steps below.

  • Install the AWS CLI on your system.
  • Open a terminal or command prompt, run "aws configure", and provide the access key ID, secret access key, and a default region (here: eu-west-1).
  • Execute the command below to authenticate with ECR (a note on AWS CLI v2 follows this list).

$(aws ecr get-login --no-include-email --region eu-west-1)

  • It will show "Login Succeeded".
  • Now you can execute docker commands to push and pull images.
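
Note that aws ecr get-login exists only in AWS CLI v1; if you are using AWS CLI v2, the equivalent login (with your own account ID substituted) is:

# AWS CLI v2 equivalent of the deprecated get-login command
aws ecr get-login-password --region eu-west-1 \
  | docker login --username AWS --password-stdin <account-id>.dkr.ecr.eu-west-1.amazonaws.com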

Here we push the images to ECR from GitLab, where authentication is handled via OIDC, so the steps above are only needed for testing from your local machine.

Create an ECS cluster

We can use ECS Fargate to deploy the backend application. To create the cluster, use the steps below:

  1. Go to ECS under the AWS services.
  2. Choose the Fargate option during creation.
  3. Click "Create cluster".

A CloudFormation stack is generated for the ECS cluster creation, and you can follow its progress under CloudFormation. A CLI equivalent of the cluster creation is sketched below.
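
A minimal sketch, assuming a hypothetical cluster name and a Fargate-only capacity provider strategy:

aws ecs create-cluster \
  --cluster-name clockhash-staging \
  --capacity-providers FARGATE FARGATE_SPOT \
  --default-capacity-provider-strategy capacityProvider=FARGATE,weight=1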


Add Secrets Manager for storing the environment variables


We can securely store the environment variables in Secrets Manager, an AWS service for keeping sensitive data safe.
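
A minimal sketch of creating such a secret from the CLI; the secret name mirrors the one referenced in the task definition below, and the key/value pairs are placeholders:

aws secretsmanager create-secret \
  --name clockhash/staging \
  --region eu-west-1 \
  --secret-string '{"DB_HOST":"<rds-endpoint>","DB_USER":"<user>","DB_PASSWORD":"<password>","DB_DATABASE":"<database>"}'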

Create the Task definition

We can create the task definition for ECS either by uploading JSON or by entering the values in the management console. The task definition used in our application is shared below.



{
    "family": "staging-clockhash",
    "containerDefinitions": [
        {
            "name": "backend-clockhash",
            "image": "123456789.dkr.ecr.eu-west-1.amazonaws.com/clockhash-staging:test",
            "cpu": 0,
            "portMappings": [
                {
                    "containerPort": 8080,
                    "hostPort": 8080,
                    "protocol": "tcp"
                }
            ],
            "essential": true,
            "environment": [],
            "mountPoints": [],
            "volumesFrom": [],
            "secrets": [
                {
                    "name": "DB_DATABASE",
                    "valueFrom": "arn:aws:secretsmanager:eu-west-1:123456789:secret:clockhash/staging-qjxyFS:DB_DATABASE::"
                },
                {
                    "name": "DB_HOST",
                    "valueFrom": "arn:aws:secretsmanager:eu-west-1:123456789:secret:clockhash/staging-qjxyFS:DB_HOST::"
                },
                {
                    "name": "DB_PASSWORD",
                    "valueFrom": "arn:aws:secretsmanager:eu-west-1:123456789:secret:clockhash/staging-qjxyFS:DB_PASSWORD::"
                },
                {
                    "name": "DB_USER",
                    "valueFrom": "arn:aws:secretsmanager:eu-west-1:123456789:secret:clockhash/staging-qjxyFS:DB_USER::"
                }
            ],
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-create-group": "true",
                    "awslogs-group": "/ecs/staging-clockhash",
                    "awslogs-region": "eu-west-1",
                    "awslogs-stream-prefix": "ecs"
                }
            }
        }
    ],
    "executionRoleArn": "arn:aws:iam::123456789:role/ecsTaskExecutionRole",
    "networkMode": "awsvpc",
    "requiresCompatibilities": [
        "FARGATE"
    ],
    "cpu": "1024",
    "memory": "3072"
}

You have to add the Secrets Manager references so that the environment variables are available to the running containers.
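
If you save the JSON above to a local file, it can also be registered from the CLI as a quick alternative to the console (the file name is a placeholder):

aws ecs register-task-definition --cli-input-json file://task-definition.json --region eu-west-1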

Create a service under the cluster.

Now we need to create a service that runs tasks based on the task definition we created.

  1. Select the cluster we created, click "Create service", and choose the Fargate option.
  2. Select the task definition we created. For better security, run the tasks inside a private subnet on a custom VPC.
  3. Create a load balancer and target group while creating the service itself, then click "Create".

Now you can see the tasks running under the Tasks section inside the service. A CLI sketch of the equivalent service creation follows.
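
A hedged sketch of the same service creation; the cluster name, service name, subnets, security group, and target group ARN are placeholders, while the container name and port match the task definition shown earlier:

aws ecs create-service \
  --cluster clockhash-staging \
  --service-name backend-clockhash-svc \
  --task-definition staging-clockhash \
  --desired-count 1 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[<private-subnet-id>],securityGroups=[<sg-id>],assignPublicIp=DISABLED}" \
  --load-balancers "targetGroupArn=<target-group-arn>,containerName=backend-clockhash,containerPort=8080" \
  --region eu-west-1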

CI/CD implementation for the backend

Here too we use GitLab for CI/CD. In the backend app, a new Docker image is built for each code change when the pipeline runs. The new image is tagged with the GitLab commit ID and pushed to ECR.

There are different options for deploying changes to ECS. To keep things simple, we use AWS CLI commands to register a new task definition revision and force a new deployment from within the pipeline itself.

Authentication to AWS uses the same OIDC setup created earlier, so the same variables are used here as well. See the pipeline script for reference:


stages:
  - deploy-api

deploy-api:
  image: registry.gitlab.com/runner:latest
  stage: deploy-api
  services:
    - docker:dind
  id_tokens:
    GITLAB_OIDC_TOKEN:
      aud: clockhash
  variables:
    AWS_ECR_REGISTRY_IMAGE: ${AWS_ECR_REGISTRY}:${CI_COMMIT_SHORT_SHA}
  script:
    - >
      export $(printf "AWS_ACCESS_KEY_ID=%s AWS_SECRET_ACCESS_KEY=%s AWS_SESSION_TOKEN=%s"
      $(aws sts assume-role-with-web-identity
      --role-arn ${ROLE_ARN}
      --role-session-name "GitLabRunner-${CI_PROJECT_ID}-${CI_PIPELINE_ID}"
      --web-identity-token ${GITLAB_OIDC_TOKEN}
      --duration-seconds 3600
      --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]'
      --output text))
    - aws sts get-caller-identity
    - cd ./api/backend
    - apk add --no-cache curl
    - docker build
        --tag ${AWS_ECR_REGISTRY_IMAGE}
        --file ./Dockerfile
        "."
    - $(aws ecr get-login --no-include-email --region eu-west-1)
    - docker push ${AWS_ECR_REGISTRY_IMAGE}
    - cd ../../workflow/dev/
    - chmod +x deploy-aws.sh
    - ./deploy-aws.sh
  only:
    - develop

The commands for updating the task definition are placed inside the deploy-aws.sh script.


#!/bin/bash

# Fetch the currently registered task definition
TASK_DEFINITION=$(aws ecs describe-task-definition --task-definition "${ECS_STAGING_TASK}" --region "eu-west-1")

# Swap in the newly pushed image and strip the read-only fields
# that register-task-definition does not accept
NEW_TASK_DEFINITION=$(echo $TASK_DEFINITION | jq --arg IMAGE "$AWS_ECR_REGISTRY_IMAGE" '.taskDefinition | .containerDefinitions[0].image = $IMAGE | del(.taskDefinitionArn) | del(.revision) | del(.status) | del(.requiresAttributes) | del(.compatibilities) | del(.registeredAt) | del(.registeredBy)')

# Register the modified definition as a new revision
NEW_TASK_INFO=$(aws ecs register-task-definition --region "eu-west-1" --cli-input-json "$NEW_TASK_DEFINITION")

NEW_REVISION=$(echo $NEW_TASK_INFO | jq '.taskDefinition.revision')

# Point the service at the new revision and force a new deployment
aws ecs update-service --cluster "${ECS_STAGING_CLUSTER}" --service "${ECS_STAGING_SERVICE}" --task-definition "${ECS_STAGING_TASK}:${NEW_REVISION}" --region "eu-west-1" --force-new-deployment

While the GitLab pipeline runs, a new task definition revision with the new image is created, after which the service is updated to use that revision.

Conclusion

This post illustrates an easy and secure way of deploying a three-tier application stack, coupled with enhanced security through OIDC authentication. By automating the deployment process, teams can achieve faster time-to-market and reduce manual errors. As organizations continue to embrace modern development practices and prioritize security, integrating GitLab CI/CD with OIDC authentication for AWS deployments emerges as a powerful solution to meet the demands of today's dynamic software landscape.
