Content Delivery Network.



In this article I'm going to share my work on

  1. A web server configured on an EC2 instance.
  2. The document root (/var/www/html) made persistent by mounting it on an EBS block device.
  3. Static objects used in the code (pictures) stored in S3.
  4. Setting up a Content Delivery Network using CloudFront, with the S3 bucket as the origin domain.
  5. Finally, placing the CloudFront URL in the web app code for security and low latency.

CloudFront:

Before going to the setup, we should know what CloudFront is. CloudFront is an AWS service that provides a Content Delivery Network (CDN). Suppose you run a website that you want to serve globally, and its images are stored in a data centre (DC), say in Mumbai. Customers located far away will experience higher latency (delay), and that will leave them dissatisfied.

To avoid this, AWS provides a Content Delivery Network: local caches of the data are kept in edge locations (small data centres) spread across the globe and connected through a high-speed global network. When the first customer sends a request, it goes to the nearest edge location; since that edge location does not yet have a local cache (this is known as a miss), the request travels on to the data centre where the data is located, fetches it, stores a local cache at the edge and serves it to the customer. After that, customers near that edge location no longer need to go to the DC; the data is served from the edge location itself (called a hit), so latency is reduced.



Launching an instance with the AWS CLI:

We need to launch an instance to perform these tasks.

Step 1: First we need to install the AWS CLI on our system. Link to the installer - https://awscli.amazonaws.com/AWSCLIV2.msi

Step 2: For the configuration we need an access key ID and a secret access key. For this we create an IAM user: go to the IAM service page and click Users > Add user.



Give the required user name and password and click Next. Note: don't forget to check both access types. Select the required policy (in my case I selected PowerUserAccess, which allows us to use all the services except IAM and the billing dashboard).



Click Next. Finally, your access key ID and secret access key are created.


Step 3: Now open cmd and type the command aws configure. Then give the access key ID, secret access key, default region and output format to complete the configuration.
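When you run aws configure it prompts for these four values one by one. For example, the session looks roughly like this (the key values below are placeholders, and ap-south-1 (Mumbai) is just the region I would pick; use your own):

aws configure
AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Default region name [None]: ap-south-1
Default output format [None]: json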


Step 4: Now we need to create a key pair. The command to create a key pair is as follows: aws ec2 create-key-pair --key-name <value> --query "KeyMaterial" --output text > <filename>.pem - redirecting the output like this also saves a local copy of the private key on your system.
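For example, assuming a key pair named mykey (the name is only an illustration), the full command would be:

aws ec2 create-key-pair --key-name mykey --query "KeyMaterial" --output text > mykey.pem

The redirection at the end is what saves the private key as mykey.pem on your local system.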


Step 5: Creating a security group - aws ec2 create-security-group --group-name <value> --description <value>

We are going to start the Apache web server, so we need to allow TCP port 80 (HTTP) and also TCP port 22 (SSH). The command to add the inbound rules is:

aws ec2 authorize-security-group-ingress --group-id <value> --protocol <value> --port <value> --cidr <value>
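For example, with a hypothetical group name (the group ID below is a placeholder; use the GroupId returned by the create command):

aws ec2 create-security-group --group-name webserver-sg --description "allow ssh and http"

aws ec2 authorize-security-group-ingress --group-id sg-0xxxxxxxxxxxxxxxx --protocol tcp --port 22 --cidr 0.0.0.0/0

aws ec2 authorize-security-group-ingress --group-id sg-0xxxxxxxxxxxxxxxx --protocol tcp --port 80 --cidr 0.0.0.0/0

Here the CIDR 0.0.0.0/0 opens the ports to everyone, which is fine for this demo.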


Step 6: Finally we are going to launch the instance. The command to launch an instance is as follows.

aws ec2 run-instances --image-id <value> --key-name <value> --security-group-ids <value> --instance-type <value> --count <value> (use --security-groups <value> instead if you prefer to refer to the security group by name rather than by ID).
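For example, a minimal launch using the key pair and security group created above could look like this (the AMI ID and group ID are placeholders; substitute an Amazon Linux 2 AMI ID from your region and your own GroupId):

aws ec2 run-instances --image-id ami-0xxxxxxxxxxxxxxxx --instance-type t2.micro --count 1 --key-name mykey --security-group-ids sg-0xxxxxxxxxxxxxxxx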


Attaching an EBS volume:

EBS is a block storage service. We are going to attach a volume to our instance and keep our code on it, so that if our instance gets corrupted we won't lose the files.

Step 1: Creating an EBS volume - aws ec2 create-volume --availability-zone <value> --size <value>
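Note that the availability zone must be the same one the instance is running in, otherwise the volume cannot be attached to it. For example, a small 1 GiB volume in ap-south-1a (purely illustrative values):

aws ec2 create-volume --availability-zone ap-south-1a --size 1

The output contains the VolumeId, which we need for the next step.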


Step 2: Attaching the above created volume to the instance - aws ec2 attach-volume --volume-id <value> --instance-id <value> --device <value>
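For example (the volume and instance IDs below are placeholders; use the ones from your own create-volume and run-instances output):

aws ec2 attach-volume --volume-id vol-0xxxxxxxxxxxxxxxx --instance-id i-0xxxxxxxxxxxxxxxx --device /dev/xvdf

The device name /dev/xvdf is how the volume will show up inside the instance.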


Uploading an image to an S3 bucket:

S3 is a fully managed object storage service. We are going to add an image to our webpage, so we upload that image to an S3 bucket for high durability and availability.

Step 1: Creating an S3 bucket: aws s3api create-bucket --bucket <value> --region <value> --create-bucket-configuration LocationConstraint=<region-name>

Step 2: Uploading the image to S3: aws s3 cp <image-name> s3://<bucket-name>
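For example, with a hypothetical bucket name and image file (bucket names must be globally unique, so choose your own):

aws s3api create-bucket --bucket my-webapp-images-123 --region ap-south-1 --create-bucket-configuration LocationConstraint=ap-south-1

aws s3 cp myimage.jpg s3://my-webapp-images-123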


In step 2 the object is, by default, not accessible to others. To make it publicly readable, add --acl public-read to the copy command.
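For example, re-uploading the same hypothetical image with a public-read ACL:

aws s3 cp myimage.jpg s3://my-webapp-images-123 --acl public-read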


Web server configured on the EC2 instance:

To get a terminal on the instance from cmd use: ssh -i <key-file.pem> ec2-user@ec2-<public-ip>.<region>.compute.amazonaws.com
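For example, with the hypothetical key pair from earlier and an instance in ap-south-1 (substitute your instance's actual public IP or public DNS name):

ssh -i "mykey.pem" ec2-user@ec2-<public-ip>.ap-south-1.compute.amazonaws.com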

If you face any problem connecting, go to Apps & features > Optional features and check whether OpenSSH Client is installed. If not, click Add a feature and install it.


If you still face the problem, try these commands in PowerShell to restrict the permissions on the key file. First go to the directory where your key pair is located.

$path = ".\filename.pem"

icacls.exe $path /reset

icacls.exe $path /grant:r "$($env:USERNAME):(R)"

icacls.exe $path /inheritance:r



First log in as root: sudo su - root

To use the EBS volume we need to do three steps: partition, format and mount.

Step 1: Partition: fdisk /dev/xvdf (the device path of the EBS volume we attached). Inside fdisk, m shows help, p prints the partitions created so far, n creates a new partition, w saves the partition table, and q quits without saving. The new partition will appear as /dev/xvdf1.
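A typical run looks roughly like this; the defaults are fine for a single partition covering the whole disk:

fdisk /dev/xvdf
n        (create a new partition)
p        (make it a primary partition)
<Enter>  (accept the default partition number, first sector and last sector)
w        (write the partition table and exit)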


Step 2: Format: Formatting creates the filesystem, including the inode table, which makes file lookups fast. To format the new partition use mkfs.ext4 /dev/xvdf1


To use the Apache web server we need to install it. Command to install - yum install httpd


To start the web server service use - systemctl start httpd - this only lasts until the next reboot; to make it start automatically on boot use systemctl enable httpd.


Step 3: Mount: At last we have to link the new partition to the document root: mount /dev/xvdf1 /var/www/html
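To confirm the mount worked you can check, for example, with:

df -h /var/www/html

It should show the EBS partition mounted on the document root.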


Create an HTML file that uses the image we stored in S3. To create the HTML file inside /var/www/html use gedit <filename.html> or vi <filename.html>.
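The page itself only needs an <img> tag pointing at the S3 object URL. For example, using the hypothetical bucket and image names from above, the file could contain something like:

<html>
  <body>
    <h1>My CDN demo page</h1>
    <img src="https://my-webapp-images-123.s3.ap-south-1.amazonaws.com/myimage.jpg">
  </body>
</html>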


Finally our webpage looks like this. To access it, open http://<public ip>/<filename.html> in the browser (we opened only port 80, so plain HTTP is used here).


CloudFront:

Our webpage is finally created, but there is still a problem. When customers far away try to access it, there is a delay, called latency. Since we have only one image the delay may not be very noticeable, but think of Amazon shopping or Flipkart, where far more images have to be served. To avoid this we are going to use a CDN through CloudFront. We will create a distribution, get a new URL for the image and place it in the page instead of the link we got from the S3 bucket. Requests to this new URL are automatically cached at the nearest edge location, so customers need not come to the original location (DC) and latency is greatly reduced.

Step 1: aws cloudfront create-distribution --origin-domain-name <bucket-name>.s3.amazonaws.com (the origin domain is the S3 domain name of the bucket, not just the bucket name).
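For example, with the hypothetical bucket used earlier:

aws cloudfront create-distribution --origin-domain-name my-webapp-images-123.s3.amazonaws.com

The JSON output includes the distribution's DomainName (something like dxxxxxxxxxxxx.cloudfront.net); that is the new URL we will put into the webpage.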


Step 2: Replace the link taken from the S3 service with the new CloudFront URL.
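For example, the <img> line in the HTML file changes from the S3 link to the CloudFront domain (the CloudFront domain below is a placeholder; use the DomainName from your own distribution):

<img src="https://my-webapp-images-123.s3.ap-south-1.amazonaws.com/myimage.jpg">

becomes

<img src="https://dxxxxxxxxxxxx.cloudfront.net/myimage.jpg">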


At last our webpage is ready.



Thanks for reading,

-Hema R
