Getting Started with AWS and Cloud-Native Development
Regardless of industry, many of us have heard the terms “cloud-native”, “serverless”, and “IaC” (infrastructure as code). Those new to cloud platforms can feel overwhelmed by the sheer number and complexity of services. New tech keeps getting released at breakneck speed, and before you know it, the tech stack expected on your resume seems to grow exponentially. But don’t worry: getting started with cloud-native development isn’t as difficult as it might seem. All it takes is patience, reading the docs, and putting up with some not-so-user-friendly UIs.
Everyone has their own reasons for trying serverless. Personally, I needed to run a compute-heavy video-generation model for a personal project and wanted to find a cheap way to do it. You might raise an eyebrow and ask, “AWS and cheap?” Yes: you can rent a compute-heavy deep learning instance for less than $0.50 USD per hour.
In this article, I’ll try to simplify the process of getting started with AWS and hopefully lower the barrier to entry. By the end, you’ll have your own infrastructure-as-code setup with Terraform, a ready-to-use EC2 instance of your choice with cost-saving options (covered later in the article), and a billing alarm email subscription. This setup is beginner-friendly and not intended for serious production use.
Before we begin, there are some prerequisites. If you don’t have these, don’t worry—I’ll provide links to the resources I used when I started.
Requirements:
- An AWS account with credentials (access keys) you can use from your machine
- The AWS CLI installed and configured with those credentials
- Terraform installed
- An EC2 key pair (created in the EC2 console) for SSH access
Note: this walkthrough is platform-independent. Just make sure to follow the installation instructions specific to your operating system.
Getting Started with Terraform
Terraform and other IaC tools (like OpenTofu, AWS CloudFormation, etc.) let you define your infrastructure (services, modules, tools, and scripts) up front. This eliminates the need to repeat the same setup steps every time you spin up new infrastructure. You keep a configuration file and state files that represent your infrastructure, hence the name: Infrastructure as Code.
1. Create a Project Directory:
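If you like doing this from the terminal, here is a quick sketch (the directory name is just a placeholder, use whatever you like):
mkdir aws-terraform-demo && cd aws-terraform-demo   # create and enter the project directory
touch main.tf   # this is where the configuration shown in the next step will live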
2. Configuration File Example:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.49.0"
    }
  }

  required_version = ">= 1.2.0"
}

provider "aws" {
  region = "us-west-2"
}

# Look up the default VPC so the security group can be attached to it
data "aws_vpc" "default" {
  default = true
}

resource "aws_security_group" "web_server_sg_tf" {
  name        = "web-server-sg-tf"
  description = "Security group for the web server"
  vpc_id      = data.aws_vpc.default.id
}

# Note: these rules open every port to the whole internet, which is acceptable for a
# short-lived demo but should be locked down to specific ports/CIDRs for anything real
resource "aws_security_group_rule" "allow_all" {
  type              = "ingress"
  description       = "allow all"
  from_port         = 0
  to_port           = 0
  protocol          = "-1"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = aws_security_group.web_server_sg_tf.id
}

resource "aws_security_group_rule" "allow_all_outbound" {
  type              = "egress"
  description       = "allow all"
  from_port         = 0
  to_port           = 0
  protocol          = "-1"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = aws_security_group.web_server_sg_tf.id
}

resource "aws_instance" "ec2_demo" {
  ami                         = "ami-0cf2b4e024cdb6960" # Ubuntu AMI (AMI IDs are region-specific)
  instance_type               = "t3.micro"
  vpc_security_group_ids      = [aws_security_group.web_server_sg_tf.id]
  associate_public_ip_address = true
  key_name                    = "your-keypair-name"

  tags = {
    Terraform   = "true"
    Name        = "EC2 Test"
    Environment = "prod"
  }

  root_block_device {
    volume_type           = "gp3"
    volume_size           = 30
    delete_on_termination = true
  }
}
Note: replace “your-keypair-name” inside the “aws_instance” resource with the actual name of your EC2 key pair.
3. Initialize and Apply Configuration:
terraform init
terraform validate
terraform apply
Connecting to Your Instance
1. Obtain EC2 Public DNS:
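You can copy the public DNS from the instance details page in the EC2 console, or, if you prefer the terminal, query it with the AWS CLI. A sketch, assuming the instance is tagged "EC2 Test" as in the config above:
aws ec2 describe-instances \
  --filters "Name=tag:Name,Values=EC2 Test" "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].PublicDnsName" \
  --output text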
2. Establish SSH Connection:
Unix/macOS:
ssh -i ~/path/to/your/ec2keypair.pem ubuntu@publicDNS
Windows example:
ssh -i C:\Users\YourUsername\path\to\your\ec2keypair.pem ubuntu@publicDNS
3. Verify Connection:
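If the key pair and security group are set up correctly, you should land in a shell on the instance. A couple of standard commands make for a quick sanity check:
whoami    # should print "ubuntu", the default user on Ubuntu AMIs
uname -a  # prints the instance's kernel and OS details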
Cleanup
terraform destroy
You can also store persistent data with RDS and the other data storage services AWS provides, but that is beyond the scope of this article.
Bonus: a bash script that installs Docker on your instance.
#!/bin/bash
# Remove any distro-packaged Docker variants that could conflict with the official packages
for pkg in docker.io docker-doc docker-compose docker-compose-v2 podman-docker containerd runc; do sudo apt-get remove -y $pkg; done

# Install prerequisites and add Docker's official GPG key and apt repository
sudo apt-get update
sudo apt-get -y install ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
echo \
  "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  "$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Install Docker Engine and its plugins
sudo apt-get update
sudo apt-get -y install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

# Add the default user to the docker group. Note: as user_data this script runs as root,
# so $(whoami) would add root; we want the ubuntu user we SSH in as.
sudo usermod -aG docker ubuntu
This script installs the official Docker packages and their dependencies, and adds the default ubuntu user to the docker group (otherwise the Docker daemon denies access to non-root users).
Now, save the script above as 'init.sh' in your project directory and modify the 'aws_instance' resource in 'main.tf' by adding this line below the key_name argument:
user_data = file("./init.sh")
Run ‘terraform apply’ in your terminal and proceed with the SSH connection command (you will need to copy the new public DNS from the EC2 dashboard or the AWS CLI).
Once connected, verify the Docker installation via 'which docker' (you may have to wait a bit, as the installation can still be in progress). Then close the connection and reconnect via SSH; this is needed to pick up the docker group membership we just added. Verify that you can use Docker via 'docker ps'. Feel free to close the connection and destroy the instance with 'terraform destroy'.
If you run into installation issues on your instance with normally available packages, it is sometimes a good idea to delete all the local Terraform files (except 'main.tf' and your scripts) and start over with 'terraform init'. This can happen if your state files get corrupted. Just make sure to run 'terraform destroy' first: deleting the state files means Terraform loses track of any resources that are still running.
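As a rough sketch of that reset, assuming you are inside the project directory:
terraform destroy   # tear down the resources tracked in the current state first
rm -rf .terraform .terraform.lock.hcl   # remove the provider cache and lock file
rm -f terraform.tfstate terraform.tfstate.backup   # remove the (possibly corrupted) state files
terraform init   # re-download the provider and start fresh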
Before we wrap up, I must show you a couple of important features:
Billing alarms notify you when your bill crosses a user-defined threshold. This is critical so that you don’t incur nasty charges without realizing it, which can happen if you forget to terminate an instance or other services. I will leave the demo to a video by AWS that walks you through budget setup and alarms - AWS Budgets Tutorial - Setup Alerts for AWS Billing | Amazon Web Services. Here are the docs too - Create a billing alarm to monitor your estimated AWS charges - Amazon CloudWatch. Note that you will have to create an SNS topic to receive email notifications. You can do so by following this guide - Email notifications - Amazon Simple Notification Service and/or referring to this article - https://medium.com/@ajitfawade/how-to-monitor-your-aws-billing-and-get-alerts-using-cloudwatch-and-sns-2ac531042567.
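Since we are already writing our infrastructure as code, you can also define a budget with an email alert directly in Terraform. Below is a minimal sketch using the AWS provider's aws_budgets_budget resource; the budget name, limit, and email address are placeholders, so adjust them to your needs (this is separate from the CloudWatch billing alarm described above):
resource "aws_budgets_budget" "monthly_cost" {
  name         = "monthly-cost-budget" # placeholder name
  budget_type  = "COST"
  limit_amount = "10.0"                # monthly limit in USD, adjust as needed
  limit_unit   = "USD"
  time_unit    = "MONTHLY"

  notification {
    comparison_operator        = "GREATER_THAN"
    threshold                  = 80            # percent of the budget
    threshold_type             = "PERCENTAGE"
    notification_type          = "ACTUAL"
    subscriber_email_addresses = ["you@example.com"] # placeholder address
  }
}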
EC2 Spot Instances & EC2 Fleet
EC2 Spot Instances are a cost-effective approach to using AWS EC2, offering spare compute capacity at significantly reduced prices compared to On-Demand Instances. Spot Instances can be up to 90% cheaper than On-Demand Instances. However, they come with the caveat that AWS can reclaim these instances when it needs the capacity back, with a two-minute warning.
With our current Terraform config, we create a regular On-Demand EC2 instance. Depending on the instance type, this can get expensive. However, if you don't need an always-on instance of your own, EC2 Spot Instances can be a more economical choice. Think of it as borrowing an instance until someone else needs it.
You might ask: what’s the point if you lose the instance’s state and running services? Well, many instance types, outside of deep learning AMIs (Amazon Machine Images), support hibernation, which means their state can be preserved. Furthermore, EC2 Spot Instances can be managed with EC2 Fleet to minimize downtime for your users.
EC2 Fleet allows you to manage multiple Spot Instances and even mix them with On-Demand Instances to ensure that your application always has the required capacity. You can define a fleet with a combination of instance types and pricing models, and EC2 Fleet will automatically provision the most cost-effective instances based on your specified criteria. This setup is perfect for web applications and services that can tolerate interruptions but require cost savings.
I won’t go into detail about EC2 Fleet, but the idea is that you can have multiple instances where one is a fallback instance ready to take over when another is interrupted. You can have a ‘fleet’ of such instances that are significantly cheaper to maintain than dedicated instances. This does come with some architectural complexity, as you will need to manage the fleet and handle instance interruptions gracefully. However, AWS provides numerous resources to help you manage this effectively.
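To make that idea a bit more concrete, here is a rough sketch of what an EC2 Fleet definition can look like in Terraform, mixing Spot and On-Demand capacity behind a launch template. The names, AMI, and capacity numbers are placeholders and not part of our demo config:
resource "aws_launch_template" "fleet_template" {
  name_prefix   = "fleet-demo-"           # placeholder name
  image_id      = "ami-0cf2b4e024cdb6960" # same Ubuntu AMI used above
  instance_type = "t3.micro"
}

resource "aws_ec2_fleet" "demo_fleet" {
  type = "maintain" # keep the target capacity running, replacing interrupted Spot instances

  launch_template_config {
    launch_template_specification {
      launch_template_id = aws_launch_template.fleet_template.id
      version            = aws_launch_template.fleet_template.latest_version
    }
  }

  target_capacity_specification {
    default_target_capacity_type = "spot"
    total_target_capacity        = 2
    spot_target_capacity         = 1
    on_demand_target_capacity    = 1
  }
}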
Here’s how we can modify our Terraform config file to create a spot instance with hibernation:
resource "aws_instance" "ec2_spot_test" {
ami = "ami-0cf2b4e024cdb6960"
instance_type = "t3.micro"
vpc_security_group_ids = [aws_security_group.task_manager_sg_tf.id]
associate_public_ip_address = true
key_name = "your_keypair_name"
user_data = file("./init.sh")
instance_market_options {
market_type = "spot"
spot_options {
instance_interruption_behavior = "hibernate"
max_price = 0.040
spot_instance_type = "persistent"
valid_until = "2024-06-21T23:59:00Z"
}
}
tags = {
Terraform = "true"
Name = "ec2-spot-test"
Environment = "dev"
}
root_block_device {
volume_type = "gp3"
volume_size = "30"
delete_on_termination = true
}
}
resource "aws_ebs_encryption_by_default" "task-manager-ebs" {
enabled = true
}
I encourage you to explore the code and refer to the Terraform docs to better understand the configuration. Inside spot_options, you specify the max_price you’re willing to pay for the instance (you can check instance prices here: Amazon EC2 Spot Instances - Product Details), since prices fluctuate based on availability. You also set the request type (here we set it to persistent) and the date until which you want to keep using it. The last resource enables default EBS encryption, which is required for hibernation.
This was a lengthy post, but I hope it cleared up the enigma of the cloud and IaC. Now you know how to write your own infrastructure as code, configure it, and deploy on AWS. Once you gain experience, it will become more intuitive, and you will come to appreciate the engineering behind these platforms and what they can do for you.
I hope you enjoyed this guide and wish you the best of luck on your development journey!