A Practical and Gentle Introduction to Terraform and AWS Cloud

Terraform is an infrastructure-as-code tool that lets us define an architecture in human-readable code that is extensible and easy to automate. Terraform was born from the need to deploy services across clouds under a write-once, deploy-to-multiple-providers philosophy. One of its features is interoperability with other tools: Terraform can be combined with provisioning tools like Puppet, Chef, and Ansible.

Terraform uses a state to compare the resources you want to create against the existing ones, which allows it to store a picture of the current inventory of created resources along with any pending updates and replacements. This process is driven by three fundamental stages:

  • Init: Creates the state backend and downloads the provider plugins. Behind the scenes, Terraform builds a graph of all resources and their dependencies; resources with no dependencies between them can be created in parallel.
  • Plan: With the state file in place, compares it against the resources that do or do not yet exist in the cloud, and identifies which resources will be created, modified, destroyed, or ignored.
  • Apply: The step that publishes all resources.
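
In practice, each stage maps directly to a CLI command; we will run each of them later in this post:

terraform init     # set up the state backend and download providers
terraform plan     # compute the execution plan against the state
terraform apply    # create, update, or destroy the planned resources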

Create the Terraform state storage

There are multiple ways to create the state storage. My preference here, for simplicity, is to use an AWS S3 bucket to store the state file, so let's create one.

First, go to https://s3.console.aws.amazon.com/s3/buckets and create a bucket. My recommendation is to use a prefix followed by the account id, in order to keep the bucket name unique.

  • Choose ACLs enabled.
  • Check the Block all public access checkbox.
  • Check the Enable versioning checkbox.
  • Click Create bucket.
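
If you prefer the terminal, the same setup can be sketched with the AWS CLI; the bucket name below is a placeholder you should replace with your own:

aws s3api create-bucket --bucket <ACCOUNT-ID>-terraform-state --region us-east-1

aws s3api put-bucket-versioning --bucket <ACCOUNT-ID>-terraform-state \
    --versioning-configuration Status=Enabled

aws s3api put-public-access-block --bucket <ACCOUNT-ID>-terraform-state \
    --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true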

For didactic reasons, the scope of this post is to cover how a state is created by hand; there are multiple Terraform integrations, such as Terragrunt, that automate this quickly. All code steps are hosted in this repo.

Additionally, we need to set up the awscli and credentials. Please follow these steps to get the AWS IAM user credentials:

Go to the IAM console to create a user.

Write the name of your user and click Next.

Click on Attach policies directly and check AdministratorAccess.

Click Next and create the user.

To set up the credentials (access key id and secret access key), click on the user you just created and follow the next steps.

Click on the Security credentials tab, then click Create access key.

Choose Command Line Interface (CLI) and click Next.

Once you create the access keys, you will be able to copy them.

After getting the credentials from the console, download the .csv data file; we will use it later.

Install and configure the AWS CLI

Install the awscli for your operating system.

Once it is installed, configure the CLI tool with this command:

aws configure --profile <YOUR-PROFILE>
        

The --profile param works as an alias for the user account you are using. When prompted, copy the values from the .csv file downloaded in the previous steps.
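
After answering the prompts, the AWS CLI persists those values on disk. A minimal sketch of what the resulting files contain, with placeholder key values:

# ~/.aws/credentials
[<YOUR-PROFILE>]
aws_access_key_id     = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# ~/.aws/config
[profile <YOUR-PROFILE>]
region = us-east-1
output = json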

You can find more information about how to enforce security on awscli profiles in the AWS CLI documentation.

Important: after configuring the awscli, always activate your profile using this command:

export AWS_PROFILE="<YOUR-PROFILE>"        
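
A quick way to verify that the profile is active is to ask STS who you are:

aws sts get-caller-identity

This should print the account id and the arn of the IAM user we just created.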

If the profile is not active, we will get a credentials error when running the provisioning commands.

Terraform installation

To install Terraform, I recommend using a version manager like tfenv to install different versions of Terraform and switch between them with ease. After installing tfenv, we just need to run these commands to get Terraform installed:

tfenv install 1.4.0
tfenv use 1.4.0        
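
You can confirm which version is active with:

terraform version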

Finally, we have completed the setup and can start deploying resources in the cloud. Let's add the AWS provider snippet to start provisioning.

terraform {
  backend "s3" {
    bucket = "your bucket to store state"
    key    = "terraform.state"
    region = "us-east-1"
  }

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

// All infra is on AWS, and this is the provider used by default.
provider "aws" {
  region = "us-east-1"
}

Once we have the AWS provider set up, we can start declaring our infra. Here is an example of how to create an S3 bucket using the HCL configuration language, which saves us a couple of clicks:


resource "aws_s3_bucket" "storage_bucket" {
	  bucket = var.storage_name
	

	  tags = {
	    Name        = "storage"
	    Environment = "Production"
	  }
	

	  lifecycle {
	    ignore_changes = [tags]
	  }
}        
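
The snippet references var.storage_name, so the variable must be declared somewhere in the project. A minimal variables.tf sketch, where the default value is a placeholder to replace with a globally unique name:

variable "storage_name" {
  description = "Name of the S3 bucket to create"
  type        = string
  default     = "<ACCOUNT-ID>-storage-bucket"
}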

In our project, let's run a couple of commands and analyze the output:

terraform init -reconfigure -lock=false        

Terraform tells us that the S3 backend has been configured correctly, that the plugins were downloaded, and a couple more pieces of information.

Running plan displays the inventory of all resources previously declared in the Terraform files (.tf extension). We just need to run this in the terminal:

terraform plan        

This is the format Terraform uses to display the inventory and change log. Because we configured the state earlier, Terraform remembers which things are already created and which need to be updated or replaced. The keyword (known after apply) will appear a bunch of times in the planning stage; those values are outputs generated by the cloud provider, like the server id, the arn, and so on.

Running apply (or deploy):

terraform apply -auto-approve        

This is the expected output when the apply has been made successfully. The output values are declared in the outputs.tf file, which allows us to print the outputs generated by the provider.
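
As a sketch, an outputs.tf for this project could expose the attributes the provider generates for the bucket (the output names here are illustrative):

output "bucket_arn" {
  value = aws_s3_bucket.storage_bucket.arn
}

output "bucket_domain_name" {
  value = aws_s3_bucket.storage_bucket.bucket_domain_name
}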

Now go to the AWS Console bucket list at https://s3.console.aws.amazon.com/s3/buckets and you can see our bucket created. To upload files to the S3 bucket, I will use an s3 object resource to upload all the files contained in a folder; the project repo includes a folder with those files.

resource "aws_s3_object" "storage" {
	  for_each = fileset("${var.path_files}/", "*")
	  bucket   = aws_s3_bucket.storage_bucket.id
	  key      = each.value
	  source   = "${var.path_files}/${each.value}"
	  etag     = filemd5("${var.path_files}/${each.value}")
	  acl      = "public-read"
}        

We are using the loop expressions (https://developer.hashicorp.com/terraform/language/expressions/for) that Terraform provides to iterate over that folder and create a separate object for each file eligible for upload.

The functions and parameters:

  • bucket: the reference to the bucket where we upload the files.
  • key: the name of the file (the object key).
  • source: the complete directory path of the file.
  • etag: a checksum that determines whether the content of the file should be replaced.
  • acl: the access configuration for the files.
  • fileset: returns the set of files under a directory path that match a pattern.
  • filemd5: returns the md5 hash of the file at a given path.
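
To double-check the iteration, a hypothetical output block (the name uploaded_files is illustrative) can list every object key created by the for_each:

output "uploaded_files" {
  value = [for obj in aws_s3_object.storage : obj.key]
}

After another terraform apply, this prints the names of all files uploaded from the folder.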

Conclusion

Terraform is an excellent tool to automate, configure, and define large software architectures. From its earliest versions to the newest one, both the evolution of the tool and the participation of its community have been remarkable.


Thanks for reading!


