How to Create Amazon S3 Bucket and Amazon EC2 Instance Using Terraform

Terraform is a powerful Infrastructure as Code (IaC) tool that enables you to define cloud infrastructure resources in a simple configuration language and manage them efficiently. This guide will walk you through the process of provisioning both an Amazon EC2 instance and an S3 bucket using Terraform.

What You'll Learn

  1. Configuring the AWS provider in Terraform.
  2. Creating and managing an Amazon EC2 instance using Terraform.
  3. Creating an S3 bucket and managing its access policies.
  4. Applying and destroying Terraform configurations.

Prerequisites

Before we dive into writing the Terraform configuration, ensure you have the following:

  1. Terraform Installed:

  • Install Terraform by following the installation instructions for your operating system on the official HashiCorp website.
  • Verify installation by running the command:

Bash Script
terraform -v        


2. AWS CLI Installed and Configured:

  • Install the AWS CLI by following the official AWS CLI installation guide.
  • Set up your AWS credentials using the aws configure command:

Bash Script
aws configure        

You will need your AWS Access Key, Secret Access Key, and Default Region. Terraform will use this to interact with AWS resources.
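
Once configured, you can optionally confirm that your credentials work before running any Terraform commands. The following quick sanity check uses the AWS CLI:

Bash Script

# Confirm the CLI (and therefore Terraform) can authenticate to AWS
aws sts get-caller-identity

This returns the account ID and ARN of the identity that Terraform will use. If it errors, re-run aws configure.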


3. IAM User with Sufficient Permissions: Make sure your IAM user has at least the following permissions (a minimal policy sketch follows this list):

  • EC2: ec2:RunInstances, ec2:DescribeInstances, ec2:TerminateInstances.
  • S3: s3:CreateBucket, s3:PutBucketPolicy, s3:DeleteBucket.
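
If you prefer to express those permissions as code as well, the following is a minimal, hypothetical policy sketch in HCL. The policy and resource names are placeholders, and in practice Terraform typically needs additional describe/read and tagging permissions beyond the actions listed above, so treat this as a starting point rather than a complete policy. You would still need to attach it to the user or role that Terraform authenticates as.

HCL

# Hypothetical minimal policy covering only the actions listed above
resource "aws_iam_policy" "terraform_minimal" {
  name        = "terraform-ec2-s3-minimal"
  description = "Minimal permissions for this tutorial"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid      = "EC2Access"
        Effect   = "Allow"
        Action   = ["ec2:RunInstances", "ec2:DescribeInstances", "ec2:TerminateInstances"]
        Resource = "*"
      },
      {
        Sid      = "S3Access"
        Effect   = "Allow"
        Action   = ["s3:CreateBucket", "s3:PutBucketPolicy", "s3:DeleteBucket"]
        Resource = "*"
      }
    ]
  })
}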


Create Amazon S3 Bucket

Step 1: Set Up the Working Directory

To begin, create a directory where you will store your Terraform configuration files:

Bash Script

mkdir terraform_project_01
cd terraform_project_01        

Inside this directory, you'll create and manage the main.tf file which will contain your infrastructure configuration.


Step 2: Define the Terraform version and the AWS Provider

The provider block is a necessary component to tell Terraform which cloud platform you're working with. In this case, it’s AWS.

Create the main.tf file and add the following block:

HCL

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"  # Specify the AWS region you want your resources to be created in
}        

Region: This field defines where your resources will be launched. You can modify us-east-1 to any AWS region you prefer, like us-west-2 or eu-west-1.
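
If you expect to deploy to different regions, one optional refinement (not part of the original steps) is to make the region a variable instead of hardcoding it, replacing the provider block above with something like the sketch below. The variable name aws_region is just an illustrative choice:

HCL

# Optional: make the region configurable instead of hardcoding it
variable "aws_region" {
  description = "AWS region to deploy resources into"
  type        = string
  default     = "us-east-1"
}

provider "aws" {
  region = var.aws_region
}

You could then override the default at apply time, for example with terraform apply -var="aws_region=eu-west-1".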


Step 3: Create an S3 Bucket

Now that we have defined the AWS provider, we can configure our S3 bucket. This bucket will be used to store objects (such as files or logs) with different access control levels.

Here’s how you can define a simple private S3 bucket in main.tf:

HCL

# Create an S3 bucket
resource "aws_s3_bucket" "example" { 
  bucket = "aishawon-test-bucket" # Ensure the name is globally unique

  tags = {
    Name        = "Test Bucket"
    Environment = "Development"
  }
}

# Block public access to the S3 bucket
resource "aws_s3_bucket_public_access_block" "example" {
  bucket = aws_s3_bucket.example.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}        

Key Details:

  • aws_s3_bucket: This block creates the S3 bucket.

  1. bucket: The name of the bucket, which must be globally unique across all AWS accounts.
  2. Access control: No ACL is set here; newly created buckets are private by default, and the public access block below keeps them that way.
  3. Tags: Metadata labels that help categorize your resources.


  • aws_s3_bucket_public_access_block:

  1. This resource enforces public access restrictions on the bucket so it cannot be exposed accidentally. A couple of optional hardening additions are sketched after this list.
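
Beyond blocking public access, two common optional hardening additions are versioning and default server-side encryption. Neither is required for this tutorial; the sketch below assumes the same aws_s3_bucket.example resource defined above:

HCL

# Optional: keep previous versions of objects stored in the bucket
resource "aws_s3_bucket_versioning" "example" {
  bucket = aws_s3_bucket.example.id

  versioning_configuration {
    status = "Enabled"
  }
}

# Optional: encrypt objects at rest with S3-managed keys (SSE-S3)
resource "aws_s3_bucket_server_side_encryption_configuration" "example" {
  bucket = aws_s3_bucket.example.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}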


Create an EC2 Instance


Step 4: Create an EC2 Instance

Now let’s define an EC2 instance that will run a virtual machine in AWS. We will also define a security group to control access to the instance.

Add the following block to the main.tf file:

HCL

#Create the Amazon EC2 Instance
resource "aws_instance" "my_ec2" {
  ami           = "ami-0866a3c8686eaeeba"  # Replace with a valid AMI ID from your chosen region
  instance_type = "t2.micro"  # Free-tier eligible instance type

  tags = {
    Name = "MyEC2Instance"
  }

  vpc_security_group_ids = [aws_security_group.my_sg.id]  # Reference the security group defined below
}
        

Security Group

We will need to define a security group to allow SSH access to the EC2 instance. Add this block:

HCL

#Creating security group
resource "aws_security_group" "my_sg" {
  name        = "allow_ssh"
  description = "Allow SSH inbound traffic"
  vpc_id      = "vpc-06dd3e83f3122806b"  # Replace with your actual VPC ID

  ingress {
    from_port   = 22  # Allow inbound traffic on port 22 (SSH)
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]  # Open to the world (only for testing, restrict this for production)
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]  # Allow all outbound traffic
  }
}        

Key Details:

  • VPC ID: Replace the vpc_id value with the ID of a VPC in your AWS account. The security group and the instance must live in the same VPC, so if you use a non-default VPC you will also need to set subnet_id on the instance.
  • AMI: An Amazon Machine Image (AMI) is the template used to launch an EC2 instance. You can find a suitable AMI ID for your region in the AWS Console; the ID used in this example is for an Ubuntu Server image (a data-source alternative is sketched after this list).
  • Instance Type: t2.micro is used because it is eligible for the AWS free tier and is good for testing or small-scale applications.
  • Security Group: Controls the traffic that can reach your EC2 instance. In this case, it allows SSH access (port 22) from any IP address (0.0.0.0/0), which should only be done in non-production environments.
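
Rather than hardcoding an AMI ID, which differs between regions and goes stale over time, an optional alternative is to look the AMI up with a data source. The sketch below targets Ubuntu 22.04 images published by Canonical; the filter pattern shown is one reasonable choice, not the only one:

HCL

# Optional: look up the most recent Ubuntu 22.04 AMI instead of hardcoding an ID
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical's AWS account

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }
}

# Then reference it from the instance: ami = data.aws_ami.ubuntu.id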


Step 5: Adding Outputs

To make it easier to access the EC2 instance and verify the S3 bucket creation, we can output some values like the instance's public IP and bucket name. Add this at the end of your main.tf file:

HCL

output "bucket_name" {
  value = aws_s3_bucket.example.bucket
}

output "ec2_instance_id" {
  value = aws_instance.my_ec2.id
}

output "ec2_public_ip" {
  value = aws_instance.my_ec2.public_ip
}
        

This output block helps you get crucial information (such as the public IP of the EC2 instance) after running Terraform commands.
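
Once the apply has finished, you can also read these values back at any time from the saved state without re-running an apply:

Bash Script

# Show all outputs recorded in the current state
terraform output

# Print a single output without quotes (useful in scripts)
terraform output -raw ec2_public_ip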



Here is the complete HCL code covering steps 2 through 5:

HCL

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

# Configure the AWS Provider
provider "aws" {
  region = "us-east-1"
}


# Create an S3 bucket
resource "aws_s3_bucket" "example" { 
  bucket = "aishawon-test-bucket" # Ensure the name is globally unique

  tags = {
    Name        = "Test Bucket"
    Environment = "Development"
  }
}

# Block public access to the S3 bucket
resource "aws_s3_bucket_public_access_block" "example" {
  bucket = aws_s3_bucket.example.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

#Create the Amazon EC2 Instance
resource "aws_instance" "my_ec2" {
  ami           = "ami-0866a3c8686eaeeba"  # Replace with a valid AMI ID from your chosen region
  instance_type = "t2.micro"  # Free-tier eligible instance type

  tags = {
    Name = "MyEC2Instance"
  }

  vpc_security_group_ids = [aws_security_group.my_sg.id]  # Reference the security group defined below
}

#Creating security group
resource "aws_security_group" "my_sg" {
  name        = "allow_ssh"
  description = "Allow SSH inbound traffic"
  vpc_id      = "Your VPC Name"  # Replace with your actual VPC ID

  ingress {
    from_port   = 22  # Allow inbound traffic on port 22 (SSH)
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]  # Open to the world (only for testing, restrict this for production)
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]  # Allow all outbound traffic
  }
}


output "bucket_name" {
  value = aws_s3_bucket.example.id
}

output "ec2_instance_id" {
  value = aws_instance.my_ec2.id
}

output "ec2_public_ip" {
  value = aws_instance.my_ec2.public_ip
}        

Remember to replace the vpc_id placeholder with the ID of a VPC in your own account before applying this configuration.


Step 6: Initialize and Apply Terraform

Now that the configuration is complete, it’s time to initialize Terraform and apply the configuration to create the resources.


  1. Initialize Terraform: Run the init command to download the necessary provider plugins and prepare your working directory:

Bash Script

terraform init        


2. Plan the Execution: Before applying the changes, you can review the execution plan to see what Terraform will do:

Bash Script

terraform plan

This command will list all the resources that will be created.



3. Apply the Configuration: After reviewing the plan, apply the configuration to create your EC2 instance and S3 bucket:

Bash Script

terraform apply        

Type yes when prompted to confirm the action.
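
If you later run Terraform from a script or CI pipeline and want to avoid the interactive prompt, two common options are auto-approving the apply or applying a previously saved plan file:

Bash Script

# Option 1: skip the confirmation prompt
terraform apply -auto-approve

# Option 2: save a plan, review it, then apply exactly that plan
terraform plan -out=tfplan
terraform apply tfplan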




Step 7: Verifying the Resources

After running terraform apply, Terraform will output the details of the created resources (such as the EC2 instance ID and public IP address).

  • EC2 Instance: Log in to the EC2 Dashboard to verify the instance is running.
  • S3 Bucket: Check the S3 Dashboard to ensure the bucket has been created.

You can also use SSH to connect to the EC2 instance:

Bash Script

ssh -i /path/to/your/private/key.pem ubuntu@<EC2_PUBLIC_IP>

Make sure to replace <EC2_PUBLIC_IP> with the actual public IP of your instance and provide the correct path to your private key. The default username depends on the AMI: ubuntu for Ubuntu images (as used in this example) and ec2-user for Amazon Linux.
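
Note that the instance defined in Step 4 does not attach an EC2 key pair, so key-based SSH will fail until you add one. A minimal sketch is shown below, assuming a key pair already exists in the same region; the name my-key-pair is a placeholder you would replace with your own:

HCL

resource "aws_instance" "my_ec2" {
  ami           = "ami-0866a3c8686eaeeba"  # Same AMI as in Step 4
  instance_type = "t2.micro"

  # Name of an existing EC2 key pair in this region (placeholder value)
  key_name = "my-key-pair"

  vpc_security_group_ids = [aws_security_group.my_sg.id]

  tags = {
    Name = "MyEC2Instance"
  }
}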


Step 8: Final Output in AWS Console View


Amazon S3 Bucket


EC2 Instances

Step 9: Destroying the Resources

Once you’ve finished using the resources, it’s a good idea to clean up to avoid incurring any unnecessary costs. Use the following command to destroy the infrastructure:

Bash Script

terraform destroy        

Type yes when prompted, and Terraform will delete the EC2 instance and S3 bucket along with any associated resources.



Conclusion

This detailed guide demonstrated how to use Terraform to create Amazon S3 buckets and EC2 instances in AWS. By following this approach, you can easily manage your AWS infrastructure using code, ensuring a consistent and scalable workflow. Terraform provides an effective way to automate cloud infrastructure provisioning, making it easier to maintain and version your resources over time.

Feel free to extend this setup by adding more advanced configurations like auto-scaling, load balancing, or more complex bucket policies.



-Ariful Islam Shawon

B.Sc. in Software Engineering

Software Engineer, DevOps Engineer

Cloud Engineer and Solution Architect

2x AWS Certified, AWS Certified DevOps Engineer - Professional

Amazon Web Services (AWS)
