Building a Resume Website with AWS Serverless Architecture


Introduction

In this blog, we will delve into the process of building a straightforward resume website using AWS serverless architecture. Throughout the tutorial, we will leverage a range of AWS services, including Amazon S3, Route53, CloudFront, AWS Certificate Manager, Lambda, and DynamoDB. Additionally, we will explore the use of GitHub Actions to facilitate smooth code changes and deployments.

Solution Architecture

Serverless Architecture

  • Amazon S3 is utilized for hosting the static content of the website.
  • Route53 is responsible for DNS management and domain registration.
  • CloudFront is used to improve the performance and global accessibility of the website through content delivery and caching.
  • ACM (AWS Certificate Manager) ensures secure communication by providing SSL/TLS certificates for the website.
  • AWS Lambda functions are used for implementing serverless backend processing.
  • DynamoDB is employed as the database for storing data related to the resume website.
  • GitHub Actions is leveraged for continuous integration and continuous deployment (CI/CD) of code changes.

Part 1: Hosting a Resume Website

To learn how to host a Static Resume Website with Amazon S3, CloudFront, and Route53, please refer to the article below:

Effortless Global Content Delivery: Hosting a Static Website with Amazon S3, CloudFront, and Route 53 | LinkedIn

This article provides detailed steps and instructions on setting up the hosting environment using Amazon S3, CloudFront, and Route53.

Part 2: Implementing a Visitor counter for the Resume website with AWS Lambda and DynamoDB

In our resume website implementation, we will incorporate a visitor counter feature. To achieve this, we will utilize a DynamoDB table to store and manage the visitor counter value. By leveraging the capabilities of DynamoDB, we can accurately track and display the number of visitors to our website.

Step 1: Create a DynamoDB table

To create a table in DynamoDB for storing the view counter data, follow these steps:

  1. Go to the DynamoDB console and access the "Tables" section.
  2. Click on the "Create table" button to begin the table creation process.
  3. Specify a table name that is meaningful, such as "ViewCounterTable".
  4. Set the partition key for the table as "id", with "Number" as the key type so it matches the numeric id used in the Lambda code.
  5. Leave the other settings as default.

Once the table is created, proceed to the "Explore items" section. At this point, there will be no existing items in the table. Follow these steps to create a new item:

  1. Click on the "Create item" button.
  2. Set the partition key "id" to 1.
  3. Add an attribute named "views", set its type to "Number" to ensure proper data storage and manipulation, and give it a value of 1.

DynamoDB Table

Step 2: Create a Lambda Function

To create a Lambda function that interacts with the DynamoDB table and increments the view counter, follow these steps:

  1. Go to the AWS Lambda console and click on "Create function."
  2. Provide a meaningful name for your function, such as "IncrementViewCounter."
  3. Choose the latest supported runtime for your function, for example Python, which is what the sample code in this article uses.
  4. Select the option to create a new execution role with basic Lambda permissions. This will automatically create an IAM role with the necessary permissions for the Lambda function to execute.

In the advanced settings of the Lambda function, we will configure the following:

  1. Enable Function URL: We will enable the Function URL to allow interaction with the function through HTTP requests. This will provide an endpoint that can be accessed to invoke the Lambda function.
  2. Set Authorization Type as "NONE": To allow unrestricted access to the Lambda function, we will set the Authorization type as "NONE." This means that no authentication or authorization will be required to invoke the function.
  3. Enable CORS (Cross-Origin Resource Sharing): We will enable CORS so that browsers can fetch data from the function URL, while restricting the allowed origins to our own website's domain. Configuring only our domain as an allowed origin helps prevent other sites from calling the API from their pages.

Step 3: Adding Permissions to Lambda Function

To grant the necessary permissions for the Lambda function to retrieve and update the viewer count in DynamoDB, follow these steps:

  1. Navigate to the "Configuration" tab of your Lambda function in the AWS Lambda console.
  2. Access the permission sidebar in the configuration tab.
  3. Click on the execution role associated with the Lambda function. This will redirect you to the IAM (Identity and Access Management) console.
  4. In the IAM console, locate the execution role and click on it to view its details.
  5. Add the "AmazonDynamoDBFullAccess" permission policy to the execution role. This policy will grant both Read and Write access to DynamoDB, allowing the Lambda function to retrieve and update the viewer count in the DynamoDB table.
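If you prefer tighter permissions than "AmazonDynamoDBFullAccess", a scoped inline policy granting only the two calls the function actually makes would look roughly like this (a sketch; the region, account ID, and table name in the ARN are placeholders you must replace with your own):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["dynamodb:GetItem", "dynamodb:PutItem"],
      "Resource": "arn:aws:dynamodb:REGION:ACCOUNT_ID:table/ViewCounterTable"
    }
  ]
}
```

Attaching this as an inline policy on the execution role follows the least-privilege principle: the function can read and write the counter item but nothing else.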

While in the configuration menu, it's important to set the CORS (Cross-Origin Resource Sharing) "Allow-Origin" setting to our specific domain name, so that browsers only permit pages served from that domain to read responses from the Lambda function URL. Note that CORS is enforced by the browser: it limits cross-site use from web pages but does not block direct requests made with tools such as curl.

Step 4: Adding Code to the Lambda Function

After creating the function and granting the necessary permissions, we will need to add the code that fetches the item from the DynamoDB table. This code will allow our Lambda function to retrieve the current view count.

The Sample code written in Python:

import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('ViewCounterTable')  # replace with your table name

def lambda_handler(event, context):
    # Fetch the item that holds the counter
    response = table.get_item(Key={
        'id': 1
    })
    # DynamoDB returns numbers as Decimal; convert to int so the
    # value is JSON-serializable when the function returns it
    views = int(response['Item']['views'])
    views = views + 1
    print(views)
    # Write the incremented count back to the table
    response = table.put_item(Item={
        'id': 1,
        'views': views
    })
    return views

The provided code snippet utilizes the Boto3 library to interact with DynamoDB and accomplish the following actions:

  1. Retrieves the current value of the ‘views’ attribute from the specified DynamoDB table (ensure to replace the table name with your own).
  2. Increments the retrieved value by 1.
  3. Prints the updated value of ‘views’.
  4. Updates the ‘views’ attribute in the DynamoDB table with the new value.
  5. Returns the updated ‘views’ count as the output of the Lambda function.
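Before wiring the function URL into the site, the handler logic can be sanity-checked locally with an in-memory stand-in for the DynamoDB table (a sketch; FakeTable is a hypothetical helper that only mimics the two calls the handler makes, it is not part of boto3):

```python
from decimal import Decimal

class FakeTable:
    """In-memory stand-in mimicking the two DynamoDB calls the handler uses."""
    def __init__(self):
        # DynamoDB returns numbers as Decimal, so the fake does too
        self.items = {1: {'id': 1, 'views': Decimal(1)}}

    def get_item(self, Key):
        return {'Item': dict(self.items[Key['id']])}

    def put_item(self, Item):
        self.items[Item['id']] = dict(Item)

table = FakeTable()

def lambda_handler(event, context):
    # Same logic as the deployed function, pointed at the fake table
    response = table.get_item(Key={'id': 1})
    views = int(response['Item']['views']) + 1
    table.put_item(Item={'id': 1, 'views': views})
    return views

print(lambda_handler({}, None))  # 2
print(lambda_handler({}, None))  # 3
```

Each call reads the stored count, increments it, and writes it back, which makes it easy to verify the increment logic without touching AWS.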

To test the functionality of your deployed Lambda function, you can use the curl command in your terminal. Follow these steps:

  1. Open your terminal or command prompt.
  2. Execute the following curl command, replacing function_url with the actual URL of your deployed Lambda function:

curl -X POST function_url        

  3. After executing the command, check the DynamoDB table to verify if the value has indeed increased. You can use the AWS Management Console or any other preferred method to view the table data.
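One caveat with the get-then-put pattern above: two overlapping requests can read the same value, and one increment gets lost. DynamoDB's update_item with an "ADD" update expression increments atomically in a single call. Here is a sketch of the handler rewritten that way; the StubTable class is a hypothetical local stand-in that imitates the ADD semantics for illustration only, while on AWS you would call the real boto3 table object:

```python
from decimal import Decimal

class StubTable:
    """Local stand-in imitating update_item with an ADD expression."""
    def __init__(self):
        self.items = {1: {'id': 1, 'views': Decimal(0)}}

    def update_item(self, Key, UpdateExpression, ExpressionAttributeValues, ReturnValues):
        # Only supports the one expression used below: "ADD views :inc"
        item = self.items[Key['id']]
        item['views'] += ExpressionAttributeValues[':inc']
        return {'Attributes': dict(item)}

table = StubTable()

def lambda_handler(event, context):
    # Increment and read back the new value in one atomic call
    response = table.update_item(
        Key={'id': 1},
        UpdateExpression='ADD views :inc',
        ExpressionAttributeValues={':inc': Decimal(1)},
        ReturnValues='UPDATED_NEW',
    )
    return int(response['Attributes']['views'])

print(lambda_handler({}, None))  # 1
```

Because the increment happens inside DynamoDB rather than in the function, concurrent invocations can no longer overwrite each other's counts.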

Part 3: Implementing the display of Visitor counter on the Resume Website

In this section, we will update the website code to incorporate the view count obtained from the API and display it on the website.

To display the visitor counter on your website, add the following code to your static website's JavaScript file, replacing "Your-LambdaFunction-URL" with the function URL from the previous step, and reference the JavaScript file within your index.html file.

const counter = document.querySelector(".counter-number");
async function updateCounter() {
    let response = await fetch("Your-LambdaFunction-URL");
    let data = await response.json();
    counter.innerHTML = `This page has ${data} Views!`;
}

updateCounter();        

This JavaScript code retrieves the view count data from Lambda using the fetch function and updates the content of an HTML element with the class "counter-number" to display the count. The updateCounter function is responsible for displaying the view count, and it is automatically executed when the page loads.
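The data variable ends up holding a bare number because the function URL serializes the Lambda's return value into a JSON response body, which response.json() parses back. A quick illustration of that round trip in Python (this is also why the handler converts DynamoDB's Decimal to an int before returning, since Decimal is not JSON-serializable):

```python
import json
from decimal import Decimal

# DynamoDB hands numbers back as Decimal...
views = Decimal(42)

# ...so the handler converts before returning; the function URL
# then serializes the return value into the JSON response body:
http_body = json.dumps(int(views))

# On the frontend, response.json() parses the body back to a number:
data = json.loads(http_body)
print(f"This page has {data} Views!")  # This page has 42 Views!
```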

Part 4: Implementing Source Control and CI/CD with GitHub Actions

In this section, we will set up a GitHub repository and configure GitHub Actions to automate the process of pushing changes to our S3 bucket whenever we commit changes to our website code. This Continuous Integration/Continuous Deployment (CI/CD) pipeline will streamline our development workflow by automatically updating our S3 bucket and reflecting the changes on our website. Follow these steps to achieve this:

  1. Create a new GitHub repository or navigate to an existing repository that will host your website code.
  2. Push your website code to the repository, ensuring that it includes all the necessary files and directories.
  3. Set up CI/CD with GitHub Actions by creating a workflow configuration, as described below:

  • Open your website code in a text editor or IDE.
  • In the root folder of your website, create a new folder named .github/workflows. The .github folder should be at the same level as your HTML, CSS, and JavaScript files.
  • Inside the .github/workflows folder, create a new YAML file named cicd.yml. This file will hold the GitHub Actions configuration for your CI/CD pipeline.

Add the following code snippet to the “cicd.yml” file:

name: Upload website to S3

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@master
      - uses: jakejarvis/s3-sync-action@master
        env:
          AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_REGION: 'us-east-1' # make sure the region reflects yours
          SOURCE_DIR: 'website' # make sure SOURCE_DIR matches your website folder

This GitHub Actions workflow is triggered whenever there is a push event on the “main” branch of the repository. It utilizes the “actions/checkout” action to fetch the latest code from the repository. The “jakejarvis/s3-sync-action” action is used to synchronize the files from the “website” directory to the S3 bucket.

GitHub Action to Sync S3 Bucket : https://github.com/jakejarvis/s3-sync-action

4. Add Environment Variables for GitHub Actions

To ensure successful execution of the actions, you need to define the required environment variables including your S3 bucket name, access key ID, and secret access key. Follow these steps to add the environment variables as secrets in your GitHub repository:

  1. Go to your GitHub repository.
  2. On the left sidebar, click on "Settings".
  3. In the repository settings, expand "Secrets and variables" and click on "Actions".
  4. Click on "New repository secret".
  5. Provide a name for the secret, such as "AWS_S3_BUCKET".
  6. Enter the corresponding value for the secret, which is the name of your S3 bucket.
  7. Repeat steps 4 to 6 for the other required environment variables, such as "AWS_ACCESS_KEY_ID" and "AWS_SECRET_ACCESS_KEY".
  8. Save the secrets.

To test the successful execution of GitHub Actions, you can modify the HTML or JS code, save the changes, and use the Source Control panel in VS Code to commit the modifications and synchronize them.

After synchronizing the changes, you can return to GitHub and verify the successful completion of the action, which results in the successful push of your changes to the S3 bucket. Follow these steps to verify the completion:

  1. Go to your GitHub repository.
  2. Navigate to the "Actions" tab.
  3. Find the workflow that corresponds to the CI/CD pipeline you set up.
  4. Check the status of the workflow. If it shows a green checkmark or "Success" status, it indicates that the workflow has completed successfully.
  5. Click on the workflow to view more details, such as the executed steps and any associated logs or output.
  6. Additionally, you can visit your S3 bucket and verify that the changes from your website code have been successfully pushed and reflected in the bucket's content.

Congratulations! We have successfully deployed our Serverless Resume Website.

By following the above steps and implementing the provided guidelines, you have successfully enhanced your resume website with a visitor counter, automated deployment using GitHub Actions, and secure management of environment variables. These enhancements not only improve the functionality and user experience of your website but also streamline your development workflow, allowing for seamless updates and efficient maintenance of your online resume.






