Codify Infra on Azure - terraform - Part1

Infrastructure as Code has become a buzzword in this era of cloud. With cloud providers releasing numerous services, resources and tools regularly, it is now of paramount importance to have a structured way of handling all these resources across departments within an Organisation - creating, updating, deleting and overall management.

The challenges are manifold -

  • Different cloud providers have their own ways (read APIs) of letting users manage their resources
  • Releases/updates/patches of resources arrive at different times
  • Various levels of complexity
  • For big enterprises like ITES companies, an additional challenge is to implement this in a uniform yet readable, manageable way; the Infra team is separate and there are NO programmers!

Let us talk about the two most widely used and popular such tools/frameworks that exist on Azure today -

  1. Azure Resource Manager template, popularly known as the ARM template
  2. Terraform templates from HashiCorp

We will divide this into two separate segments - an ARM-based approach and a Terraform-based approach - and run them as two separate series in parallel.

The purpose of this series is to focus solely on Terraform, as the heading suggests.

We will start off with some basic resource deployments - these resources are the building blocks for creating and managing more advanced resources. e.g. an AKS cluster needs around 10 smaller resources to function - each of them very important and indispensable. So, we will focus on each such resource before getting into actual AKS cluster creation!

Let us start with Storage - one of the most used and critical, yet simple to deploy, resources on Azure.

But before we get into that, let us start with Terraform Basics, Installations and Setup:

Terraform uses HCL - the HashiCorp Configuration Language - a human-readable language for automated deployments.

Installations - https://learn.hashicorp.com/terraform/getting-started/install

Important terms in the Terraform world -

Provider - which resource provider Terraform should use to deploy resources - in our case it would be Azure for now. Azure has two more related providers - one for Azure Stack and one for Azure Active Directory.

State - the whole of Terraform works with state - a mapping between the actual cloud resources and the corresponding config file parameters. Terraform refreshes this state whenever a new deployment is requested and takes action accordingly. e.g. a delete request for a resource first checks the current state of the resource in the cloud - does it exist, in what form, etc. Based on the result of this state matching it decides on the appropriate action - in this case, deleting the resource!
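By default this state lives in a local terraform.tfstate file inside the Working Directory; for teams it is common to keep it in a remote backend instead, e.g. an Azure Storage container. A minimal sketch of such a backend block (the resource group, account, container and key names below are purely illustrative):

terraform {
    backend "azurerm" {
        resource_group_name  = "tfstate-rg"        // illustrative names only
        storage_account_name = "tfstatestorage"
        container_name       = "tfstate"
        key                  = "workshop.terraform.tfstate"
    }
}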

Working Directory - the place on the local drive where the Terraform config file(s) are kept and from where all Terraform commands are run (or pointed to). Ideally every resource deployment should have a separate folder structure for simplicity and flexibility, as sketched below.
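For instance, the examples later in this series assume a layout along these lines, with one folder per resource type (folder and file names are illustrative):

terraform-workshop/
    Network/
        network.tf
    Storage/
        storage.tf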

Init - the first operation to be performed for every resource deployment whenever there is a change in the Working Directory, e.g. adding new files or changing the version of the Provider. It prepares the Working Directory for use by the subsequent commands - performing the various initialisation steps and downloading the required provider plugin versions and necessary dependencies.

Plan - creates an execution plan for the intended state/operation without making any physical change. Basically this is to check whether the desired changes match the expected set of changes or states. An optional -out parameter ensures that the plan is saved locally, to be used by a subsequent terraform apply command! Most of the error handling is done at this level.

Apply - the actual commit to Azure, confirming the desired state to take effect; the changes happen on Azure, and only errors related to execution are handled here, with the majority of errors already caught at the Plan level.

Note: this is the bare minimum list to start with in Terraform; many more terms will be discussed in the course of this series.

Storage

Providers

provider "azurerm" {

    version = "=2.11.0"
    features {}
    subscription_id = "<subscription_id>"
    tenant_id = "<tenant_id>"

}

This sets the Azure Provider version and other details like subscription_id and tenant_id.

The terraform init command acts upon this first and downloads the necessary plugin of the desired version along with its dependencies.

Resource Group

resource "azurerm_resource_group" "rg" {

    name = "terraform-workshop-rg"
    location = "eastus"
    tags = { // (Optional)

        Primary_Owner = "<email_id>"
        Purpose = "workshop"

    }
}

azurerm_resource_group is the resource type and rg is the name of the resource in the Terraform context, i.e. the name by which Terraform internally refers to it. This also brings a degree of object orientation or a structured approach when accessing the resource and its properties, e.g. azurerm_resource_group.rg.location
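For example, the same dotted pattern can be used anywhere in the config, say to expose the resource group's location as an output (purely illustrative):

output "rg-location" {
    // <resource type>.<terraform name>.<attribute>
    value = azurerm_resource_group.rg.location
}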

Storage Account

resource "azurerm_storage_account" "storage" {

    name = "terrwkshpstg"    
    resource_group_name = azurerm_resource_group.rg.name // Ref1
    location = azurerm_resource_group.rg.location // Ref2
    account_kind = "StorageV2"
    account_tier = "Premium"
    access_tier = "Hot"
    account_replication_type = "LRS"
}

This is quite self-explanatory, so we won't spend much time on it! Note the Ref1 and Ref2 comments - these values come from the previous azurerm_resource_group block.

Steps

  • terraform init
  • terraform plan -out="storage-plan"
  • terraform apply "storage-plan"

So, that is it…with these simple steps your first Storage account is deployed using Terraform and ready to be used!

Now obviously there are plenty more parameters to configure for a Storage account; please refer here: https://www.terraform.io/docs/providers/azurerm/r/storage_account.html
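As an illustration, here is a sketch of the same block extended with two commonly used optional arguments - enforcing HTTPS-only traffic and tagging (check the linked provider docs for the full list and exact behaviour):

resource "azurerm_storage_account" "storage" {
    name                      = "terrwkshpstg"
    resource_group_name       = azurerm_resource_group.rg.name
    location                  = azurerm_resource_group.rg.location
    account_kind              = "StorageV2"
    account_tier              = "Premium"
    access_tier               = "Hot"
    account_replication_type  = "LRS"
    enable_https_traffic_only = true // reject plain HTTP requests
    tags = {
        Purpose = "workshop"
    }
}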

Modules and Dependencies

Having completed the first step, the next endeavour is to make the Storage resource more secure - either using IP restrictions or integrating it with a Virtual Network (i.e. a Subnet of a VNET) through a Service Endpoint.

So let us see how to achieve this in Terraform…the trick is to use the concept of Terraform Modules.

Let us create an Azure VNET first, with a corresponding Subnet, to be integrated with the above Storage resource.

This ensures that the Storage resource can only be accessed from within the mapped Subnet, i.e. by resources which sit within that Subnet or are themselves integrated with it. On Azure this is achieved using a Service Endpoint - a secured endpoint created for a particular type of resource - Microsoft.Storage in this case.

Terraform automates this entire mapping process in a very simple way; let us see it in action:

resource "azurerm_virtual_network" "vnet" {

    name = "terraform-workshop-vnet"
    location = azurerm_resource_group.rg.location
    resource_group_name = azurerm_resource_group.rg.name
    address_space = ["173.0.0.0/16"]    

}

This snippet ensures the creation of the virtual network on Azure with the address space 173.0.0.0/16.

resource "azurerm_subnet" "storage-subnet" {

    name = "terraform-workshop-storage-subnet"    
    resource_group_name = azurerm_resource_group.rg.name
    virtual_network_name = azurerm_virtual_network.vnet.name
    address_prefixes = ["173.0.0.0/24"]
    service_endpoints = ["Microsoft.Storage"] // extremely important

}
output "storage-subnet-id" {    // extremely important
  value = azurerm_subnet.storage-subnet.id
}

The line service_endpoints = ["Microsoft.Storage"] is responsible for creating the Service Endpoint for any Storage resource. Similarly, the output block ensures the subnet id is exposed for use by other config modules, i.e. the Storage module in this case!

Then in the Storage config file -

resource "azurerm_storage_account" "storage" {

    name = "terrwkshpstg"    
    resource_group_name = azurerm_resource_group.rg.name
    location = azurerm_resource_group.rg.location
    account_kind = "StorageV2"
    account_tier = "Premium"
    access_tier = "Hot"
    account_replication_type = "LRS"

    network_rules {

        default_action = "Deny"
        virtual_network_subnet_ids = ["module.network.storage-subnet-id"]
        bypass = ["Metrics"]

    }

}

network_rules is the key block where the Storage is mapped to the above Subnet, which has exposed a Service Endpoint. This ensures that all communication between the resources on the above Subnet and the Storage travels over the Microsoft backbone network.

The only thing that still remains a mystery is the virtual_network_subnet_ids = [module.network.storage-subnet-id]

How is this thing being mapped? Very simple -

module "network" {
  source = "../Network/"

}

Placing this at the top of the Storage config file ensures that the module is referenced, and the objects output from that module can then be accessed (remember to run terraform init again so that the module gets installed)!
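As a quick check that the wiring works, the same module.network.storage-subnet-id expression can be referenced anywhere else in the Storage config, for instance re-exported as an output of its own (purely illustrative):

output "referenced-subnet-id" {
    // module.<module name>.<output name> from the Network module
    value = module.network.storage-subnet-id
}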

This is how, with the help of Terraform config, an automated deployment of Storage onto Azure can happen, integrated with network rules.
