Terraform: when only API calls can help
Terraform is a popular Infrastructure-as-Code tool that many organizations use to provision resources across various cloud providers. Using its own language, you can define the necessary cloud resources and specify their expected configuration. File templates created with this language document the "desired cloud infrastructure state." Terraform then creates an execution plan (answering "how do we reach the state the cloud engineer defined?") and executes it to turn that desired state into reality.
Terraform's major selling point is its multi-cloud capability: for most major cloud and SaaS providers, there is a plugin (a so-called provider) that supports managing that particular service's resources.
This means that (instead of writing their own code to send API requests to the SaaS providers) the cloud engineer can use a single convenient purpose-built language (HashiCorp Configuration Language) to orchestrate resources across multiple cloud realms. For example, if your deployable application requires resources from multiple providers, such as AWS (API Gateway, Lambda, IAM Roles, etc.), a database in MongoDB Atlas, an OAuth provider in Okta, and a Kafka cluster in Confluent Cloud, these definitions can reside in the same Terraform template (following the appropriate Terraform provider resource interfaces) and be executed (provisioned/updated/deleted) in one go.
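As an illustration, here is a minimal sketch of such a mixed template. The provider and resource type names are real, but the arguments are illustrative and reduced to the bare minimum; real configurations need provider credentials and version pins.

```hcl
# Sketch: resources from two unrelated providers managed in one template.
terraform {
  required_providers {
    aws  = { source = "hashicorp/aws" }
    okta = { source = "okta/okta" }
  }
}

# An AWS resource...
resource "aws_s3_bucket" "artifacts" {
  bucket = "my-app-artifacts" # illustrative bucket name
}

# ...and an Okta OAuth application, provisioned by the same apply run.
resource "okta_app_oauth" "my_app" {
  label          = "my-app"
  type           = "web"
  grant_types    = ["authorization_code"]
  redirect_uris  = ["https://my-app.example.com/callback"] # hypothetical URL
  response_types = ["code"]
}
```

A single terraform apply then plans and provisions both resources, each through its own provider plugin.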
The abundance of Terraform providers enables efficient resource management most of the time. So why would we want to use "raw" REST (or similar) interfaces to execute changes in cloud resources?
Here are a few cases when going low-level could benefit cloud engineers:
- the Terraform provider lacks support for a specific feature or resource type,
- the target is an on-premises or in-house service that has no provider at all,
- network connectivity and API credentials need to be validated or debugged.
Let's break down a few use cases and see how to utilize custom API calls in Terraform templates - only when nothing else seems to help.
Use Case: Debugging Network Connectivity and Validating API Credentials
Let's assume we are trying to manage resources of a cloud provider in Terraform, but either the network connectivity is not working (for example, a firewall blocks the traffic), or we need to ensure that our API credentials are set correctly for the use case.
We already have our Terraform template, and we would like to add a capability to debug these cases.
Testing can be tricky, particularly with secret credentials: once they are set in Terraform Enterprise's UI, they are no longer retrievable for a visual check.
Therefore, we write code:
variable "API_URL" {
  type        = string
  description = "The URL of the remote service"
}

variable "API_TOKEN" {
  type        = string
  sensitive   = true
  description = "The authentication token for the remote service"
}
resource "null_resource" "connectivity_check" {
  provisioner "local-exec" {
    command = <<EOT
echo "Checking network connectivity and API-Token..."
curl -s -S -i --fail ${var.API_URL} \
  -H "Authorization: Api-Token $API_TOKEN"
EOT
    environment = {
      # Sensitive data is passed as an environment variable.
      # The "nonsensitive" modifier lets the output of the script be printed
      # into the logs (for debugging purposes) while the variable itself
      # remains a secret elsewhere.
      API_TOKEN = nonsensitive(var.API_TOKEN)
    }
  }
  triggers = {
    # A map of arbitrary strings that, when changed, forces the null
    # resource to be replaced, re-running any associated provisioners.
    always_run = timestamp() # re-run the provisioner on every apply
  }
}
The null_resource in the template is combined with a local-exec provisioner, which runs a curl command. This command causes the Terraform resource to fail if either the network connectivity or the API credentials are broken.
Note: If you have a recent version of curl available with your Terraform deployment, you may want to use --fail-with-body instead of --fail. This flag makes curl fail on server errors while still outputting the response body, which makes debugging easier.
The local-exec provisioner's command in this setup is triggered during every Terraform apply. Note that in the command, Terraform variables/expressions can be referenced with the ${var....} notation. However, environment variables must not be referred to with the curly ${} notation, because Terraform would try to resolve them as its own expressions; instead, use the $MYENVVAR syntax and let the shell resolve them.
Speaking of environment variables, we pass any sensitive values as environment variables to the script. If the sensitive value were interpolated directly into the command (such as with ${var.API_TOKEN}), Terraform would refuse to log the output, as the log is assumed to contain secrets that need to be hidden.
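For contrast, a minimal sketch of the anti-pattern (not recommended, shown only to illustrate the behavior described above): here the token is interpolated straight into the command, so Terraform treats the provisioner's output as sensitive and suppresses it in the logs.

```hcl
# Anti-pattern sketch: the sensitive variable is interpolated directly
# into the command string, so Terraform hides the provisioner's output -
# exactly what we do NOT want when debugging connectivity.
resource "null_resource" "inline_token_example" {
  provisioner "local-exec" {
    command = "curl -s -S -i --fail ${var.API_URL} -H \"Authorization: Api-Token ${var.API_TOKEN}\""
  }
}
```

Passing the token via the environment block with nonsensitive(), as in the example above, avoids this while keeping the variable itself marked sensitive.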
Use Case: Provisioning a Resource with a REST Call Without the Need for Destroying
In this example, we POST the content of a JSON file (my-config.json) to a remote endpoint to create a resource. If the JSON file is updated, the configuration will be resubmitted during the next Terraform apply. If needed, the command can use the Terraform resource's ID from ${self.id}, as shown in the code example below:
resource "null_resource" "service_metadata" {
  provisioner "local-exec" {
    command = <<EOT
echo "Deploying configuration using a REST call"
# self.id is the resource's random id set during creation, such as 3901625180809218500
curl -s -S -i --fail -X POST ${var.API_URL}/${self.id} \
  -H 'Content-Type: application/json' \
  -H "Authorization: Api-Token $API_TOKEN" \
  -d @${path.module}/my-config.json
EOT
    environment = {
      API_TOKEN = nonsensitive(var.API_TOKEN)
    }
  }
  triggers = {
    # resubmit the configuration whenever the JSON file's content changes
    metadata_md5_hash = md5(file("${path.module}/my-config.json"))
  }
}
The simplicity of this code comes at a price: the resource can only be updated in-place (which is usually favorable), but the external resource won't be destroyed with an API call when the Terraform resource is removed.
The next use case addresses this.
Use Case: Provisioning a Resource with a REST Call with Replacement Support
Using a local-exec provisioner is really a last resort, and when the replacement and destruction of the resource are also desired in the lifecycle, things get more complicated.
The example below uses two local-exec provisioners: the first for creating and the second for destroying the resource.
When a configuration attribute of the resource changes, the resource is replaced (deleted and then recreated with a new ID). The complete recreation of the resource may not be appropriate for every use case, so this code needs to be used carefully, and the resulting functionality needs to be thoroughly tested.
In this example, we have two extra variables: deploy_custom_resource and custom_resource_attr.
variable "API_URL" {
  type        = string
  description = "The URL of the remote service"
}

variable "API_TOKEN" {
  type        = string
  sensitive   = true
  description = "The authentication token for the remote service"
}

variable "deploy_custom_resource" {
  type        = bool
  description = "Specifies if the custom resource should be provisioned or torn down"
}

variable "custom_resource_attr" {
  type        = number
  description = "An attribute of the custom resource (changing this destroys+recreates the custom resource)"
}
resource "terraform_data" "custom_resource_config_to_trigger_replacement" {
  input = {
    # a change of this attribute will trigger the replacement of the custom_resource
    # (see the lifecycle/replace_triggered_by attribute below)
    ATTR = var.custom_resource_attr
  }
}

resource "terraform_data" "custom_resource" {
  # the resource must not be removed from the template before it is properly destroyed:
  # count helps in provisioning and destroying the resource
  count = var.deploy_custom_resource ? 1 : 0

  lifecycle {
    replace_triggered_by = [
      terraform_data.custom_resource_config_to_trigger_replacement
    ]
  }

  # the configuration of the resource is stored in the "input" map:
  # we need this because during destroy only these retained input values
  # can be accessed; the "var"s cannot
  input = {
    API_URL   = var.API_URL
    API_TOKEN = var.API_TOKEN
    ATTR      = var.custom_resource_attr
  }

  provisioner "local-exec" {
    when = create # creation-time provisioners run only during creation, not during updates or any other lifecycle event
    # self.id is the resource's random id set during creation, such as 6605275d-73ca-a206-ae39-8e976e415f7c
    command = <<EOT
curl -s -S -i --fail -X POST ${self.input.API_URL}/${self.id} \
  -H 'Content-Type: application/json' \
  -H "Authorization: Api-Token $API_TOKEN" \
  -d '{
    "id": "${self.id}",
    "custom-attr": "${self.input.ATTR}"
  }'
EOT
    environment = {
      API_TOKEN = nonsensitive(self.input.API_TOKEN)
    }
  }

  provisioner "local-exec" {
    when = destroy
    command = <<EOT
curl -s -S -i --fail -X DELETE ${self.input.API_URL}/${self.id} \
  -H "Authorization: Api-Token $API_TOKEN"
EOT
    environment = {
      API_TOKEN = nonsensitive(self.input.API_TOKEN)
    }
  }
}
In the implementation, two terraform_data resources are used: the first (custom_resource_config_to_trigger_replacement) only holds the attribute whose change should force a replacement, while the second (custom_resource) manages the external resource itself with creation- and destroy-time provisioners, retaining its configuration in the input map so that the destroy-time curl call can still access it.
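To round this out, a usage sketch (the variable values below are illustrative, and the sensitive API_TOKEN would be supplied separately, e.g. via the TF_VAR_API_TOKEN environment variable): the resource can be torn down without deleting its block from the template by flipping the deploy flag and running terraform apply again.

```hcl
# terraform.tfvars (illustrative values)
# With deploy_custom_resource = true, the resource is provisioned;
# switching it to false sets count to 0, so Terraform destroys the
# terraform_data.custom_resource instance and runs its destroy-time
# provisioner (the DELETE call) before dropping it from the state.
API_URL                = "https://api.example.com/custom-resources" # hypothetical endpoint
deploy_custom_resource = false
custom_resource_attr   = 42
```

Changing only custom_resource_attr while the flag stays true instead triggers the replacement path: the destroy-time DELETE runs first, then the creation-time POST with a fresh id.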
Conclusion
In this article, we explored various use cases for REST calls in Terraform.
We discussed scenarios where using low-level REST interfaces can be beneficial, such as when the Terraform provider library lacks support for specific features or when dealing with on-premises resources.
We also provided practical examples of using local-exec provisioners to validate API credentials, debug network connectivity, and manage resource lifecycles, including creation and destruction.
By understanding these advanced techniques, cloud engineers can enhance their Terraform workflows, ensuring more robust and flexible infrastructure management.