Terraform Remote Backend Configuration with S3 (Without DynamoDB)

Before diving into the main topic, let's cover the fundamentals to build a solid foundation.

What is Terraform?

  • Terraform is an open-source Infrastructure as Code (IaC) tool that allows you to define, provision, and manage infrastructure across multiple cloud providers and on-premises environments using a simple declarative language. It's a go-to tool for automating and scaling infrastructure efficiently (https://www.terraform.io/).

What is a Terraform Backend?

  • Terraform uses "state data" to keep track of everything it creates and manages, like a to-do list of its work. Configuring a remote backend tells Terraform to save this state in a shared location you can access from anywhere, such as AWS S3 or Google Cloud Storage (GCS). You can find the full list of backend options and more details in the Terraform documentation.


The Usual Practice:

  • The usual practice, as described in the documentation, is to use an AWS S3 bucket to store the state file and a DynamoDB table for state locking and consistency. DynamoDB ensures that multiple users or processes do not overwrite the state file simultaneously.

Example of a remote backend configuration block:

terraform {
  backend "s3" {
    bucket         = "example-bucket"
    key            = "path/to/state"
    region         = "us-east-1"
    dynamodb_table = "example-table"
  }
}

Here, the backend is configured to use S3 for storing the Terraform state file.

  • Bucket: Specifies the name of the S3 bucket where the state file will be stored.
  • Key: Indicates the name (or path) of the state file within the bucket.
  • Region: Refers to the AWS region where the S3 bucket is located.
  • DynamoDB Table: Specifies the DynamoDB table name used for state locking and consistency.

While this approach reliably prevents concurrent state modifications, it also introduces additional cost and a dependency on DynamoDB. For teams or projects aiming to minimize costs or simplify infrastructure, avoiding DynamoDB can be a more practical choice, though it requires a safe alternative to prevent state conflicts.

The New Approach:

In a recent update, AWS announced that Amazon S3 now supports conditional writes (Link). This allows users to specify conditions under which a write operation (such as a PUT) is performed: the operation succeeds only if the specified condition is met. This feature provides an extra layer of control and helps prevent accidental overwrites.
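To make the semantics concrete, here is a minimal in-memory sketch (plain Python, not the real S3 API) of what an If-None-Match conditional PUT does: the write succeeds only if the key does not already exist, otherwise S3 answers with a 412 Precondition Failed.

```python
# Minimal in-memory sketch of S3's If-None-Match conditional-write
# semantics. FakeBucket and put_if_absent are illustrative names,
# not real boto3/S3 identifiers.

class PreconditionFailed(Exception):
    """Mimics S3's '412 Precondition Failed' response."""

class FakeBucket:
    def __init__(self):
        self.objects = {}

    def put_if_absent(self, key, body):
        # Equivalent in spirit to: PutObject with the If-None-Match: * header.
        if key in self.objects:
            raise PreconditionFailed(f"{key} already exists")
        self.objects[key] = body

bucket = FakeBucket()
bucket.put_if_absent("path/to/state.tflock", b"lock-info")   # first write succeeds
try:
    bucket.put_if_absent("path/to/state.tflock", b"lock-info")
except PreconditionFailed:
    print("second writer rejected")                          # conditional write fails
```

The key property is that "check if the object exists" and "write the object" happen as one atomic operation on the server side, so two clients cannot both win the race.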


Terraform now supports S3 native state locking

The latest release, Terraform v1.10.0, supports S3 native state locking. From the release notes:

“backend/s3: The s3 backend now supports S3 native state locking. When used with DynamoDB-based locking, locks will be acquired from both sources. In a future minor release of Terraform the DynamoDB locking mechanism and associated arguments will be deprecated.”

How does S3 native state locking work? Refer to this PR for more information (link).

Acquiring a Lock

To acquire a lock, a .tflock file is uploaded to the S3 bucket using a conditional write. If the lock file does not already exist, the upload succeeds, thereby acquiring the lock. If the file already exists, the upload fails, indicating that the lock is already held by another Terraform client.

Releasing a Lock

To release a lock, the corresponding .tflock file is deleted from the S3 bucket, thereby releasing the lock and making it available to other Terraform clients.
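The acquire/release flow above can be sketched as a tiny lock manager. This is an illustrative simulation only: the dict stands in for the S3 bucket, and StateLock, acquire, and release are hypothetical names, while the real mechanism is a conditional PutObject of the .tflock file and a DeleteObject to release it.

```python
# Illustrative simulation of S3 native state locking (not real Terraform
# or S3 code): a dict plays the role of the bucket, and the existence of
# "<state key>.tflock" means the lock is held.

class LockHeld(Exception):
    pass

class StateLock:
    def __init__(self):
        self._bucket = {}  # stand-in for the S3 bucket

    def acquire(self, state_key, holder):
        lock_key = state_key + ".tflock"
        if lock_key in self._bucket:           # the conditional write would fail
            raise LockHeld(f"lock held by {self._bucket[lock_key]}")
        self._bucket[lock_key] = holder        # .tflock upload succeeded

    def release(self, state_key):
        # Deleting the lock file releases the lock.
        self._bucket.pop(state_key + ".tflock", None)

lock = StateLock()
lock.acquire("backend/terraform.tfstate", "client-a")    # client A wins the lock
try:
    lock.acquire("backend/terraform.tfstate", "client-b")
except LockHeld:
    pass                                                 # client B must wait
lock.release("backend/terraform.tfstate")                # client A finishes
lock.acquire("backend/terraform.tfstate", "client-b")    # now client B succeeds
```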


Now that we know it's possible to eliminate the use of DynamoDB by utilizing S3's native state locking functionality, let's proceed to create an S3 bucket to store the state file.

resource "aws_s3_bucket" "backend-24-test" {
  bucket              = "backend-24-test"
  object_lock_enabled = true

  tags = {
    Name = "backend-24-test"
  }
}

resource "aws_s3_bucket_versioning" "backend-24-test" {
  bucket = aws_s3_bucket.backend-24-test.id

  versioning_configuration {
    status = "Enabled"
  }
}

Follow the standard process: add the provider information, run terraform init, then terraform plan, and finally execute terraform apply.
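Assuming the resources above are saved in a .tf file alongside your provider configuration, the command sequence is:

```shell
terraform init      # download the AWS provider and initialize the working directory
terraform plan      # preview the bucket and versioning resources to be created
terraform apply     # create the resources
```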

After the bucket is created with object_lock_enabled, it can be used to store the state file.


Below is a sample snippet for configuring the remote backend to use S3:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "5.80.0"
    }
  }

  backend "s3" {
    bucket       = "backend-24-test"
    key          = "backend/terraform.tfstate"
    region       = "eu-west-1"
    use_lockfile = true
    encrypt      = true
  }
}

provider "aws" {
  region = "eu-west-1"
}
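If your state currently lives locally (or in another backend), Terraform must be re-initialized so it can move the existing state into the new S3 backend:

```shell
terraform init -migrate-state
```

Terraform will prompt for confirmation before copying the existing state into the bucket.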

With this approach, we have eliminated the additional cost of, and the dependency on, DynamoDB in our Infrastructure as Code (IaC) setup.
