Project #1 — AWS Portfolio Website

Intro:

In this project, I will build a website that combines several services within AWS, and then migrate the website to a serverless build to reduce costs and take advantage of a microservices website structure.

Project Brief:

The customer, a start-up known as exampleCorp, is in the process of migrating their AWS Portfolio website into the cloud. They have been running the application as a monolithic app for the past ten years, with all of the code and functionality bundled onto a single server in the head office.

For many reasons, including performance issues, the cost of operating and managing the server, and technical debt, exampleCorp has decided to migrate this application to the cloud. Any time they needed to make a change to the website, they accrued many hours of downtime while they gained access to the web server and made their changes. And because of the application’s monolithic nature, every change was fraught with risk: could a change in one aspect of the application cause a problem in the rest of it? This needed to change.

Their in-house software development team has taken the application code and separated it into the main website files (HTML, CSS, and JavaScript) and split out the microservices (the AWS Latest News page, the blog post addition service, the view counter, and the contact form) into CloudFormation templates using services like Lambda, DynamoDB, and Amazon EventBridge. They have also taken the application code and put it onto an Amazon Machine Image (AMI).

Your job as the migration lead is to bring up the application on AWS and integrate the EC2-based website with the microservices and APIs. Additionally, the company wants to ultimately reduce costs and migrate all application components to serverless services.

Part 1 — Migrate to a server-based highly available website

Step 1 — Launch an EC2 instance to host the web server

  • Launch the instance from my AMI with the proper permissions needed to access it via SSH, and prepare the website files that I will be using with the web server.
  • Once the instance is created, I will upload the website files I created into the web directory, and install the software needed to run the web server. After this is complete, I will use the public IP address of my instance to test and verify that my website is up and running. I will also need to adjust my security group to allow HTTP traffic to my instance so the web server is accessible.
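
The security-group change above can be sketched with a boto3-shaped helper. This is a minimal sketch, assuming a standard HTTP-plus-SSH setup; the SSH CIDR is a placeholder, not a value from the project:

```python
def web_ingress_rules(ssh_cidr):
    """Build the IpPermissions list for authorize_security_group_ingress:
    HTTP open to the world, SSH restricted to a trusted range."""
    return [
        {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},   # HTTP from anywhere
        {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
         "IpRanges": [{"CidrIp": ssh_cidr}]},       # SSH from my IP only
    ]

# Would be applied with:
# boto3.client("ec2").authorize_security_group_ingress(
#     GroupId="sg-...", IpPermissions=web_ingress_rules("203.0.113.10/32"))
rules = web_ingress_rules("203.0.113.10/32")
```

Locking SSH to a single /32 while leaving port 80 open is the usual compromise for a public web server.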

Step 2 — Upload CloudFormation templates to create the infrastructure of our environment

  • Now that I have verified that my website is up and running, I will upload the following four CloudFormation templates, which I have written to create the surrounding infrastructure for the website:

  1. AWSLatestNews.yaml — This microservice does two things. First, on a daily Amazon EventBridge schedule, a Lambda function fetches the 10 most recent posts from the ‘AWS Latest News’ RSS feed and adds them to a DynamoDB table. Second, whenever the page is refreshed, a Lambda function fronted by a function URL pulls the 10 most recently added posts from DynamoDB and populates the web page. The template creates: a DynamoDB table for stories and links; a Lambda function that grabs stories from the RSS feed; an EventBridge rule that runs daily, triggering that function; and a Lambda function triggered via a function URL on page refresh, which grabs the latest stories and displays them on the page.
  2. Blog.yaml — This template is the ‘blog’ microservice. It works as follows: the website admin uploads a .txt file to an S3 bucket; an S3 event notification triggers a Lambda function that transforms the uploaded text file into an HTML element; and, separately, an API endpoint (triggered by refreshing the page) grabs all of the ‘posts’ from S3 and displays them on the page.
  3. Contactform.yaml — This template deploys a contact form, which lets users hit an API endpoint from my code and send an email to the site owner via SNS.
  4. Viewcounter.yaml — This is the ‘view counter’ microservice, which simply increments a value in a DynamoDB table and retrieves it every time the page is refreshed, giving a simple way of seeing how many people have visited the site.

  • I ran into an error while uploading my AWSLatestNews.yaml template: I had forgotten to grant permission to get the layer version from Lambda, so I went back and added the statement allowing that permission, along with a reference in the template to look up the layer ARN created in the earlier steps of the stack.
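
As a sketch of what the Blog.yaml transform step might look like, here is a minimal text-to-HTML conversion. The convention of treating the first line of the .txt file as the title, and the escaping choices, are my assumptions, not details from the template:

```python
import html

def post_to_html(text):
    """Turn an uploaded .txt post (first line = title) into an HTML element.
    html.escape guards against markup sneaking in through the upload."""
    lines = text.strip().splitlines()
    title = html.escape(lines[0]) if lines else ""
    body = "".join(f"<p>{html.escape(line)}</p>"
                   for line in lines[1:] if line.strip())
    return f"<article><h2>{title}</h2>{body}</article>"

print(post_to_html("Hello World\nFirst post on my new site."))
# → <article><h2>Hello World</h2><p>First post on my new site.</p></article>
```

The S3-triggered Lambda would run this over the uploaded object and write the resulting HTML fragment back to the bucket for the fetch endpoint to serve.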

Step 3 — Create Microservice Endpoints

Blog:

  • First, I will go to API Gateway and create an HTTP API with a GET method integrated with our “FetchPostFunction” Lambda function, so the website is able to retrieve our blog posts. I will also enable CORS to allow clients from different domains to access and interact with the API. After my API is created, I will take the Invoke URL and update it in the code of the blog file on the website so they are synchronized.
  • Next, I will go over to my upload bucket in S3 and create an event notification to trigger the “CreatePostFunction” Lambda function I have created; the triggering event will be that a .txt file has been uploaded, via the s3:PutObject API call.
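
A minimal sketch of the response shape the fetch handler might return to the HTTP API, assuming the posts have already been read from S3. The structure and header values are illustrative, not taken from the actual function:

```python
import json

def build_response(posts, origin="*"):
    """Shape an API Gateway HTTP API (payload v2) response with CORS headers,
    so browsers on other domains are allowed to read the body."""
    return {
        "statusCode": 200,
        "headers": {
            "Content-Type": "application/json",
            "Access-Control-Allow-Origin": origin,  # CORS: who may call us
        },
        "body": json.dumps({"posts": posts}),
    }

resp = build_response(["<article><h2>Hello World</h2></article>"])
print(resp["statusCode"])  # → 200
```

Even with CORS enabled on the API itself, returning the header from the function keeps the behavior consistent if the integration type changes later.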

ViewCounter:

  • I will set up a Lambda function URL (with CORS enabled) for my “ViewsFunction” Lambda function, making sure to enable all methods in the CORS settings as well.
  • Once I have my Lambda function URL, I will copy it and add it into my application code in my index file. Once the file was edited and saved, I confirmed that the URL was working properly by visiting it in my browser and observing that the view count increased by 1.
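
The counter itself can be a single atomic DynamoDB update. Here is a sketch of the update_item arguments such a function might use; the table name, key, and attribute names are my assumptions:

```python
def build_view_increment(table_name="ViewCounter", page_id="index"):
    """Arguments for DynamoDB update_item: ADD atomically bumps the counter,
    and ReturnValues=ALL_NEW returns the new total, so one call both
    increments and reads the count."""
    return {
        "TableName": table_name,
        "Key": {"PageId": {"S": page_id}},
        "UpdateExpression": "ADD #v :inc",
        "ExpressionAttributeNames": {"#v": "Views"},
        "ExpressionAttributeValues": {":inc": {"N": "1"}},
        "ReturnValues": "ALL_NEW",
    }

# Would be called as: boto3.client("dynamodb").update_item(**build_view_increment())
args = build_view_increment()
```

Using ADD (rather than a read-then-write) avoids lost updates when two visitors refresh at the same time.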

Contact Form:

  • I will follow the same steps as in the ViewCounter section: apply the function URL for my “ContactForm” Lambda function in the index file as well, and test the results.
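
Most of the contact-form function is formatting the submission before publishing it to SNS. A sketch of that formatting step, with the field names assumed rather than taken from the real form:

```python
def format_contact_message(form):
    """Flatten a contact-form submission into the Subject/Message
    arguments that sns.publish expects."""
    subject = f"Contact form: {form.get('name', 'anonymous')}"
    body = "\n".join(f"{k}: {v}" for k, v in form.items())
    return {"Subject": subject, "Message": body}

# Would be sent with: boto3.client("sns").publish(TopicArn="...", **msg)
msg = format_contact_message(
    {"name": "Ada", "email": "ada@example.com", "message": "Hi!"})
```

Keeping the formatting in its own function makes it easy to unit-test without touching SNS at all.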

AWS Latest News:

  • I will repeat the same steps as in the previous two sections: create a function URL for my “UpdateWebpageFunction” Lambda function, and update the function URL in my AWS file.

Testing:

Before we move on to the next section, I want to run tests on each component to make sure it is all working in unison. To test, I took the public IP of my instance and entered it into my browser to pull up the website. I ran a test submission on the contact form and, as expected, received a “thank you” message, and the view counter went up by 1.

I also went into the backend to verify that the SNS topic, the form submission, the DynamoDB tables, and the Lambda functions were all working properly and were showing up in the CloudWatch logs.

I also clicked the “AWS News” tab at the top of the website to confirm that the news articles from the RSS feed were appearing, and this was also working properly.

Step 4 — Add our custom domain name to the website via Route 53, create a CloudFront Distribution, and attach it to an ALB

  • First, I will go into Route 53 and create an A record that points my custom domain name to my website’s public IP address. I will also create another A record that points www requests to my site as well.
  • Next, I will need to request an SSL/TLS certificate in AWS Certificate Manager. Once the request is submitted, I will just need to take the CNAME records ACM provides, and add them to my hosted zone on Route 53 in order to verify my ownership of the domain.
  • After I have the certificate validated, I will go into EC2 and create a new Application Load Balancer, Target Group, and Security Group to use for the deployment.
  • Next, I will create my CloudFront distribution. I will configure it to use my SSL certificate and to redirect HTTP to HTTPS for security purposes. I will also set the ALB I created in the last step as the Origin Domain Name so I can route all of my traffic through the ALB, and I will want to ensure that my CORS settings are configured properly for both my ALB and my distribution.
  • Finally, I will need to go in and update my A records in my hosted zone to point to the newly created CloudFront distribution. I will wait a few minutes, then run a test to confirm that I can access my website.
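
The final DNS switch amounts to an UPSERT of the A records as aliases of the distribution. A sketch of the Route 53 change batch follows; the domain and distribution hostname are placeholders, while Z2FDTNDATAQYW2 is the fixed hosted zone ID AWS uses for every CloudFront alias target:

```python
CLOUDFRONT_ZONE_ID = "Z2FDTNDATAQYW2"  # fixed zone ID for all CloudFront aliases

def alias_change_batch(domain, distribution_domain):
    """Build the ChangeBatch for route53 change_resource_record_sets,
    re-pointing an A record at a CloudFront distribution."""
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": domain,
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": CLOUDFRONT_ZONE_ID,
                    "DNSName": distribution_domain,
                    "EvaluateTargetHealth": False,
                },
            },
        }],
    }

batch = alias_change_batch("example.com", "d111111abcdef8.cloudfront.net")
```

An alias A record (rather than a CNAME) is what allows the zone apex to point at CloudFront at all.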

Step 5 — Create an Auto Scaling Group and attach it to our Application Load Balancer

  • First, I will create a Launch Template from our currently running instances, and use it when creating the Auto Scaling Group.
  • I will then create my Auto Scaling Group and attach it to my ALB. I set the minimum capacity to 2 and the maximum capacity to 4, with a desired capacity of 2 instances. The scaling policy uses CloudWatch monitoring to scale out to new instances if CPU utilization goes above 80%.
  • Next, I will add in my health checks for the Target Group. I will go into the Health Checks tab, and set the protocol to HTTP, and the path for the Health Checks as /health.html.
  • I will need to create the /health.html page on my web server. I will navigate to my instance and create a file called “health.html” containing a simple page; when the load balancer can fetch it, the HTTP 200 OK response causes the health checks to mark the instance as “healthy”.
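
The health-check settings from the bullets above can be expressed as the arguments boto3's elbv2 modify_target_group takes. The target group ARN is a placeholder, and the threshold counts are my assumptions, not values from the project:

```python
def health_check_settings(target_group_arn):
    """Arguments for elbv2 modify_target_group: plain HTTP check
    against the static /health.html page on each instance."""
    return {
        "TargetGroupArn": target_group_arn,
        "HealthCheckProtocol": "HTTP",
        "HealthCheckPath": "/health.html",
        "HealthyThresholdCount": 2,    # consecutive passes before "healthy"
        "UnhealthyThresholdCount": 2,  # consecutive failures before "unhealthy"
    }

settings = health_check_settings(
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/abc123")
```

Checking a tiny static page keeps the health check cheap while still proving the web server process is up and serving files.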

Part 2 — Migrate the website to a Serverless Infrastructure

Step 1 — Create an S3 bucket to host my static website

  • First, I will need to create a bucket in S3 with a unique name. I will make the bucket publicly accessible, and enable static website hosting.
  • Next, I will log into one of my EC2 instances and sync the website files to the S3 bucket.
  • Then I will need to change my CloudFront distribution to point at the S3 static website endpoint rather than the ALB.
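
Making the bucket publicly readable for static hosting comes down to a bucket policy like the following sketch; the bucket name is a placeholder:

```python
import json

def public_read_policy(bucket):
    """Bucket policy JSON allowing anonymous s3:GetObject on all objects,
    which is what S3 static website hosting needs to serve pages."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }],
    })

# Would be applied with:
# boto3.client("s3").put_bucket_policy(Bucket="my-bucket",
#                                      Policy=public_read_policy("my-bucket"))
policy = public_read_policy("my-portfolio-site")
```

Note that the bucket's Block Public Access settings must also be relaxed before S3 will accept a policy like this.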

Step 2 — Decommission Server-based components

  • Finally, I will need to go in and decommission any remaining server-based components. This will include my EC2 instances, ALB, and Auto Scaling Group.
  • After decommissioning the server-based components, we are left with only our S3 bucket hosting the website, our Lambda functions, and our DynamoDB tables, which are all serverless services.

That’s it! Our serverless website environment has been completed! Here is a link to the website to see it live in action:

https://www.zackawslabs.com/

Also, I recorded a video breaking down an overview of the infrastructure in more detail here:

https://www.loom.com/share/a955b501e69b4451bfa039c6fb67d8e8

Next project will be uploaded soon, I will be posting these on my LinkedIn and Medium pages here:

https://medium.com/@reachecom
