How to Create Amazon S3 Bucket and Amazon EC2 Instance Using Terraform
Ariful Islam Shawon
Software Engineer | DevOps & Cloud Engineer | AWS Certified DevOps Engineer | Expertise in Docker, Kubernetes, CI/CD, Terraform & Linux | Cloud-Native Enthusiast | Email: [email protected] | Website: aishawon.info
Terraform is a powerful Infrastructure as Code (IaC) tool that enables you to define cloud infrastructure resources in a simple configuration language and manage them efficiently. This guide will walk you through the process of provisioning both an Amazon EC2 instance and an S3 bucket using Terraform.
What You'll Learn
In this guide you will set up a Terraform project, configure the AWS provider, create a private S3 bucket, launch an EC2 instance with a security group, expose useful values as outputs, and walk through the init, plan, apply, and destroy workflow.
Prerequisites
Before we dive into writing the Terraform configuration, ensure you have the following:
1. Terraform Installed: Check that Terraform is available on your PATH:
Bash Script
terraform -v
2. AWS CLI Installed and Configured:
Bash Script
aws configure
You will need your AWS Access Key, Secret Access Key, and Default Region. Terraform will use this to interact with AWS resources.
3. IAM User with Sufficient Permissions: Make sure your IAM user can create and manage EC2 instances, security groups, and S3 buckets (for example, via the AmazonEC2FullAccess and AmazonS3FullAccess managed policies).
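Once the CLI is configured, a quick way to confirm that the credentials from step 2 actually work is to ask AWS who you are:
Bash Script
aws sts get-caller-identity
If this prints your account ID and IAM user ARN, Terraform will be able to authenticate with the same credentials.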
Create Amazon S3 Bucket
Step 1: Set Up the Working Directory
To begin, create a directory where you will store your Terraform configuration files:
Bash Script
mkdir terraform_project_01
cd terraform_project_01
Inside this directory, you'll create and manage the main.tf file which will contain your infrastructure configuration.
Step 2: Define the Terraform version and the AWS Provider
The provider block tells Terraform which cloud platform you are working with. In this case, it's AWS.
Create the main.tf file and add the following block:
HCL
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1" # Specify the AWS region you want your resources to be created in
}
Region: This field defines where your resources will be launched. You can modify us-east-1 to any AWS region you prefer, like us-west-2 or eu-west-1.
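If you prefer not to hard-code the region, a common pattern is to read it from a variable instead; a minimal sketch, which would replace the provider block above (the variable name aws_region is just an example):
HCL
variable "aws_region" {
  description = "AWS region to deploy resources into"
  type        = string
  default     = "us-east-1"
}

provider "aws" {
  region = var.aws_region
}
You can then override the default per run, e.g. terraform apply -var="aws_region=eu-west-1".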
Step 3: Create an S3 Bucket
Now that we have defined the AWS provider, we can configure our S3 bucket. This bucket will be used to store objects (such as files or logs) with different access control levels.
Here’s how you can define a simple private S3 bucket in main.tf:
HCL
# Create an S3 bucket
resource "aws_s3_bucket" "example" {
  bucket = "aishawon-test-bucket" # Ensure the name is globally unique

  tags = {
    Name        = "Test Bucket"
    Environment = "Development"
  }
}

# Block all public access to the S3 bucket
resource "aws_s3_bucket_public_access_block" "example" {
  bucket = aws_s3_bucket.example.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
Key Details:
bucket: S3 bucket names must be globally unique across all of AWS, so replace aishawon-test-bucket with a name of your own.
aws_s3_bucket_public_access_block: the four settings together block public ACLs and public bucket policies, keeping the bucket fully private.
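If you want to keep a history of object changes, versioning can be turned on with a separate resource; a minimal sketch, assuming AWS provider v4 or later where versioning is its own resource:
HCL
# Optional: keep previous versions of objects stored in the bucket
resource "aws_s3_bucket_versioning" "example" {
  bucket = aws_s3_bucket.example.id

  versioning_configuration {
    status = "Enabled"
  }
}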
Create an EC2 Instance
Step 4: Create an EC2 Instance
Now let’s define an EC2 instance that will run a virtual machine in AWS. We will also define a security group to control access to the instance.
Add the following block to the main.tf file:
HCL
# Create the Amazon EC2 instance
resource "aws_instance" "my_ec2" {
  ami           = "ami-0866a3c8686eaeeba" # Replace with a valid AMI ID from your chosen region
  instance_type = "t2.micro"              # Free-tier eligible instance type

  vpc_security_group_ids = [aws_security_group.my_sg.id] # Reference the security group defined below

  tags = {
    Name = "MyEC2Instance"
  }
}
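Because AMI IDs differ per region and change over time, an alternative to hard-coding one is to resolve it with a data source; a sketch that looks up the latest Amazon Linux 2023 x86_64 image (the name filter pattern is an assumption you may need to adjust):
HCL
# Look up the latest Amazon Linux 2023 AMI instead of hard-coding an ID
data "aws_ami" "al2023" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["al2023-ami-*-x86_64"] # Assumed naming pattern; adjust if needed
  }
}
You would then set ami = data.aws_ami.al2023.id in the aws_instance resource.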
Security Group
We will need to define a security group to allow SSH access to the EC2 instance. Add this block:
HCL
# Create the security group
resource "aws_security_group" "my_sg" {
  name        = "allow_ssh"
  description = "Allow SSH inbound traffic"
  vpc_id      = "vpc-06dd3e83f3122806b" # Replace with your actual VPC ID

  ingress {
    from_port   = 22 # Allow inbound traffic on port 22 (SSH)
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # Open to the world (only for testing, restrict this for production)
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"] # Allow all outbound traffic
  }
}
Key Details:
ingress: opens TCP port 22 (SSH) to the entire internet (0.0.0.0/0). This is fine for a short test, but should be narrowed for anything longer-lived; see the sketch below.
egress: protocol "-1" with ports 0 allows all outbound traffic, which is what most instances need.
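A simple hardening step is to restrict SSH to your own public IP instead of 0.0.0.0/0; a sketch using a placeholder address from the documentation range (replace 203.0.113.10 with your actual IP):
HCL
ingress {
  from_port   = 22
  to_port     = 22
  protocol    = "tcp"
  cidr_blocks = ["203.0.113.10/32"] # Placeholder: your public IP only, not the whole internet
}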
Step 5: Adding Outputs
To make it easier to access the EC2 instance and verify the S3 bucket creation, we can output some values like the instance's public IP and bucket name. Add this at the end of your main.tf file:
HCL
output "bucket_name" {
value = aws_s3_bucket.my_bucket.bucket
}
output "ec2_instance_id" {
value = aws_instance.my_ec2.id
}
output "ec2_public_ip" {
value = aws_instance.my_ec2.public_ip
}
This output block helps you get crucial information (such as the public IP of the EC2 instance) after running Terraform commands.
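After an apply, the same values can be read back at any time with the terraform output command, which is useful in scripts:
Bash Script
terraform output                     # Show all output values
terraform output -raw ec2_public_ip  # Print a single value without quotes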
Here is the complete HCL code for steps 2-5:
HCL
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

# Configure the AWS Provider
provider "aws" {
  region = "us-east-1"
}

# Create an S3 bucket
resource "aws_s3_bucket" "example" {
  bucket = "aishawon-test-bucket" # Ensure the name is globally unique

  tags = {
    Name        = "Test Bucket"
    Environment = "Development"
  }
}

# Block all public access to the S3 bucket
resource "aws_s3_bucket_public_access_block" "example" {
  bucket = aws_s3_bucket.example.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

# Create the Amazon EC2 instance
resource "aws_instance" "my_ec2" {
  ami           = "ami-0866a3c8686eaeeba" # Replace with a valid AMI ID from your chosen region
  instance_type = "t2.micro"              # Free-tier eligible instance type

  vpc_security_group_ids = [aws_security_group.my_sg.id] # Reference the security group defined below

  tags = {
    Name = "MyEC2Instance"
  }
}

# Create the security group
resource "aws_security_group" "my_sg" {
  name        = "allow_ssh"
  description = "Allow SSH inbound traffic"
  vpc_id      = "vpc-06dd3e83f3122806b" # Replace with your actual VPC ID

  ingress {
    from_port   = 22 # Allow inbound traffic on port 22 (SSH)
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # Open to the world (only for testing, restrict this for production)
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"] # Allow all outbound traffic
  }
}

output "bucket_name" {
  value = aws_s3_bucket.example.bucket
}

output "ec2_instance_id" {
  value = aws_instance.my_ec2.id
}

output "ec2_public_ip" {
  value = aws_instance.my_ec2.public_ip
}
Step 6: Initialize and Apply Terraform
Now that the configuration is complete, it’s time to initialize Terraform and apply the configuration to create the resources.
1. Initialize Terraform: This downloads the AWS provider plugin and prepares the working directory:
Bash Script
terraform init
2. Plan the Execution: Before applying the changes, you can review the execution plan to see what Terraform will do:
Bash Script
terraform plan
This command will list all the resources that will be created. (A variant that saves the plan to a file is shown after step 3.)
3. Apply the Configuration: After reviewing the plan, apply the configuration to create your EC2 instance and S3 bucket:
Bash Script
terraform apply
Type yes when prompted to confirm the action.
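As an alternative to the interactive prompt, you can save the reviewed plan to a file and apply exactly that plan; Terraform skips the confirmation because the plan was already approved at review time:
Bash Script
terraform plan -out=tfplan # Save the reviewed plan to a file
terraform apply tfplan     # Apply exactly that plan, with no re-plan or prompt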
Step 7: Verifying the Resources
After running terraform apply, Terraform will output the details of the created resources (such as the EC2 instance ID and public IP address).
You can also use SSH to connect to the EC2 instance:
Bash Script
ssh -i /path/to/your/private/key.pem ec2-user@<EC2_PUBLIC_IP>
Make sure to replace <EC2_PUBLIC_IP> with the actual public IP of your instance and provide the correct path to your private key.
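Note that the aws_instance above does not set key_name, so to SSH in you would first add key_name = "your-key-pair" (an existing EC2 key pair in your region) to the instance resource. Also, SSH refuses keys with loose file permissions, so tighten the mode first:
Bash Script
chmod 400 /path/to/your/private/key.pem # Private keys must not be world-readable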
Step 8: Final Output in AWS Console View
After the apply completes, you can also log in to the AWS Management Console to confirm that the EC2 instance is running and the S3 bucket has been created.
Step 9: Destroying the Resources
Once you’ve finished using the resources, it’s a good idea to clean up to avoid incurring any unnecessary costs. Use the following command to destroy the infrastructure:
Bash Script
terraform destroy
Type yes when prompted, and Terraform will delete the EC2 instance and S3 bucket along with any associated resources.
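If you want to preview exactly what would be removed before committing, Terraform supports a destroy-mode plan:
Bash Script
terraform plan -destroy # Show what terraform destroy would delete, without deleting anything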
Conclusion
This detailed guide demonstrated how to use Terraform to create Amazon S3 buckets and EC2 instances in AWS. By following this approach, you can easily manage your AWS infrastructure using code, ensuring a consistent and scalable workflow. Terraform provides an effective way to automate cloud infrastructure provisioning, making it easier to maintain and version your resources over time.
Feel free to extend this setup by adding more advanced configurations like auto-scaling, load balancing, or more complex bucket policies.
-Ariful Islam Shawon
B.Sc. in Software Engineering
Software Engineer, DevOps Engineer
Cloud Engineer and Solution Architect
2x AWS Certified, AWS Certified DevOps Engineer - Professional