Secrets Management Processes: An In-Depth Guide

Introduction

HashiCorp Vault is a powerful tool for managing secrets and protecting sensitive data in modern cloud environments. Whether you're a developer or an IT professional, understanding how to install and use Vault can significantly enhance your security practices. This guide will walk you through the installation process, configuration steps, and practical use cases for HashiCorp Vault, making it accessible even if you're new to the concept.

What is HashiCorp Vault?

HashiCorp Vault is a tool designed to securely store and manage sensitive information such as API keys, passwords, and certificates. It provides a unified interface to various secret engines and ensures that access to secrets is tightly controlled and auditable.

Installing HashiCorp Vault

Prerequisites

Before installing Vault, ensure your system meets the following prerequisites:

  • An Ubuntu operating system
  • Administrative access (sudo privileges)
  • Internet connectivity

Step 1: Update and Install GPG

First, update your package lists and install GPG:

sudo apt update && sudo apt install gpg        

Step 2: Download the Signing Key

Next, download the HashiCorp GPG key to a new keyring:

wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg        

Step 3: Verify the Key's Fingerprint

Verify the fingerprint to ensure the key's integrity:

gpg --no-default-keyring --keyring /usr/share/keyrings/hashicorp-archive-keyring.gpg --fingerprint        

Step 4: Add the HashiCorp Repository

Add the HashiCorp repository to your system:

echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list        

Step 5: Install Vault

Finally, install Vault using the following command:

sudo apt install vault        

Starting HashiCorp Vault

Development Mode

To start Vault in development mode, use the following command:

vault server -dev -dev-listen-address="0.0.0.0:8200"        

This command starts Vault in development mode, allowing you to test its functionality without a complex setup. Note that dev mode runs in-memory, starts unsealed, and serves plain HTTP; it is not suitable for production.

Configuring Vault Environment

Open a new terminal session and set the Vault address. The dev server listens on plain HTTP, not HTTPS:

export VAULT_ADDR='http://0.0.0.0:8200'        

Enabling Inbound Rules

Ensure your security group in AWS allows inbound traffic to port 8200.

Accessing Vault UI

Open your web browser and navigate to:

http://<your-public-ip>:8200        

Use the provided root token to log in.

Managing Secrets in Vault

Introduction to Secret Engines

Secret engines in Vault are plugins that manage different types of secrets. By default, Vault does not enable any secret engines. You need to configure and enable them manually.

Creating a Key-Value Secret Engine


When using Vault's Key-Value (KV) secrets engine, it's essential to understand that nothing is enabled by default. There are no pre-configured secrets engines, users, or policies; you'll need to create these components from scratch.

To get started, you'll need to provide a mount path for your KV engine. This path is where the engine will store sensitive data, such as usernames and passwords. Anything written to the engine is encrypted automatically by Vault before it reaches the storage backend.

Now that the KV engine is set up, it's time to create some secret data. This can include sensitive information such as API keys, passwords, or encryption keys. By storing this data in the KV engine, you can ensure it's securely encrypted and access-controlled.
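As an illustration of what the KV engine actually receives, the KV version 2 HTTP API nests the secret's key/value pairs under a top-level "data" field in the write request body. The keys and values below are placeholders, not real credentials:

```python
import json

# Illustrative KV v2 write payload: the secret's key/value pairs sit
# under a top-level "data" field. Placeholder values only.
payload = {"data": {"username": "app-user", "password": "example-password"}}
body = json.dumps(payload)
```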

At this point, the secrets stored in the KV engine are not accessible to any outside application. To grant access, we need to create a role within HashiCorp Vault. A role is similar to an IAM role in AWS: it defines the permissions and access controls for a specific entity. In this case, we'll create a role for Terraform or Ansible that allows them to read the secrets stored in the KV engine.

AppRole-based authentication in HashiCorp Vault is analogous to AWS IAM roles. Just as IAM roles provide temporary security credentials to AWS services, AppRoles in Vault grant temporary access to secrets and resources to applications, services, or users, without sharing the underlying credentials.

With AppRole-based authentication in HashiCorp Vault, you can authenticate other applications and services. One limitation is that AppRoles cannot be created through the UI; you must use the CLI (or the HTTP API). In a new terminal session, first enable the AppRole auth method:

vault auth enable approle        

Then write the policy that the role will use:

vault policy write terraform - <<EOF
path "*" {
  capabilities = ["list", "read"]
}

path "secrets/data/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}

path "kv/data/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}

path "secret/data/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}

path "auth/token/create" {
  capabilities = ["create", "read", "update", "list"]
}
EOF        
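To make the intent of the policy rules concrete, here is a simplified sketch of how request paths map to capabilities. Real Vault policy evaluation is richer (globs are only allowed at the end of a path, "+" matches single segments, and deny rules take precedence), so treat this purely as an illustration:

```python
from fnmatch import fnmatchcase

# Simplified model of the "terraform" policy above: each pattern grants
# a list of capabilities on matching request paths. Not Vault's actual
# evaluation logic.
POLICY = {
    "*": ["list", "read"],
    "secrets/data/*": ["create", "read", "update", "delete", "list"],
    "kv/data/*": ["create", "read", "update", "delete", "list"],
    "secret/data/*": ["create", "read", "update", "delete", "list"],
    "auth/token/create": ["create", "read", "update", "list"],
}

def allowed(path: str, capability: str) -> bool:
    """Return True if any rule grants `capability` on `path`."""
    return any(
        fnmatchcase(path, pattern) and capability in caps
        for pattern, caps in POLICY.items()
    )
```

For example, `allowed("kv/data/secret-test", "delete")` is granted by the `kv/data/*` rule, while `delete` on `auth/token/create` is not granted by any rule.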

If you encounter an error like 'Error uploading policy: Put "https://127.0.0.1:8200/v1/sys/policies/acl/terraform": http: server gave HTTP response to HTTPS client' when uploading a policy, your client is speaking HTTPS to a server that only speaks plain HTTP (the dev server). Point VAULT_ADDR at the HTTP endpoint to resolve it:

$ export VAULT_ADDR='http://127.0.0.1:8200'        

When the dev server starts, it prints an unseal key and a root token; note them down in case you need to seal, unseal, or re-authenticate Vault.

Moving forward, we'll create the AppRole itself. In a production environment, carefully review and tune these parameters to ensure secure token management.

vault write auth/approle/role/terraform \
    secret_id_ttl=10m \
    token_num_uses=10 \
    token_ttl=20m \
    token_max_ttl=30m \
    secret_id_num_uses=40 \
    token_policies=terraform        

Success! Data written to: auth/approle/role/terraform

Now the role and policy are attached. Fetch the role ID:

vault read auth/approle/role/terraform/role-id        

Output:

Key        Value
---        -----
role_id    9aa43a86-a238-9a58-94a1-d3cce0e3bb38        

Then generate a secret ID for the role:

vault write -f auth/approle/role/terraform/secret-id        

Output:

Key                   Value
---                   -----
secret_id             ac9386d6-2ada-6b15-e4d4-14551b09998b
secret_id_accessor    133fffef-3004-732e-f204-52d3c3ee1522
secret_id_num_uses    40
secret_id_ttl         10m        
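Behind the scenes, these two values are what a client such as Terraform exchanges for a token: it POSTs the role_id and secret_id to Vault's /v1/auth/approle/login endpoint and receives a client token scoped to the role's policies. The sketch below only assembles that request (nothing is sent), using the example IDs from the output above:

```python
import json

def approle_login_request(vault_addr: str, role_id: str, secret_id: str):
    """Build the URL and JSON body for an AppRole login request."""
    url = f"{vault_addr.rstrip('/')}/v1/auth/approle/login"
    body = json.dumps({"role_id": role_id, "secret_id": secret_id})
    return url, body

# Example IDs from the role-id and secret-id output above
url, body = approle_login_request(
    "http://127.0.0.1:8200",
    "9aa43a86-a238-9a58-94a1-d3cce0e3bb38",
    "ac9386d6-2ada-6b15-e4d4-14551b09998b",
)
```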

Attaching a Vault Secret as a Tag Value on an AWS EC2 Instance

Often a project team wants to treat a value, such as an S3 bucket name, as a secret: you store the value in HashiCorp Vault, then read it back and use it as the bucket's name. Here we'll do the same thing with an EC2 tag.

Create the following file in your code editor:

main.tf

provider "aws" {
  region = "us-east-1"
}

provider "vault" {
  address          = "http://18.232.170.248:8200"
  skip_child_token = true # authentication fails if this is not skipped

  auth_login {
    path = "auth/approle/login"

    parameters = {
      role_id   = "bb4b24c6-ae35-90d1-ae8d-ba95b65d7b4a"
      secret_id = "da3ee7e4-74e4-8813-61b8-7dc0dc370344"
    }
  }
}        

terraform init

In the Terraform Registry documentation for the Vault provider, look under Data Sources to read existing values and under Resources to create new ones.

provider "aws" {
  region = "us-east-1"
}

provider "vault" {
  address          = "http://18.232.170.248:8200"
  skip_child_token = true # authentication fails if this is not skipped

  auth_login {
    path = "auth/approle/login"

    parameters = {
      role_id   = "bb4b24c6-ae35-90d1-ae8d-ba95b65d7b4a"
      secret_id = "da3ee7e4-74e4-8813-61b8-7dc0dc370344"
    }
  }
}

data "vault_kv_secret_v2" "example" {
  mount = "kv"
  name  = "secret-test"
}        

terraform apply        

Output:

vault_kv_secret_v2.example: Read complete after 1s [id=kv/data/secret-test]        

No changes are required: Terraform compared your configuration with the actual infrastructure and found no discrepancies.

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

In this scenario, we're using Terraform solely to read the secret, rather than creating or changing any resources.
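The id in the output, kv/data/secret-test, comes from how KV version 2 addresses secrets: the secret name is prefixed with "data/" under the mount. A one-line sketch (not Vault code) of that path construction:

```python
# KV v2 stores secrets under <mount>/data/<name>, which is why a mount
# of "kv" and a name of "secret-test" show up as id=kv/data/secret-test.
def kv_v2_read_path(mount: str, name: str) -> str:
    return f"{mount}/data/{name}"
```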

provider "aws" {
  region = "us-east-1"
}

provider "vault" {
  address          = "http://54.82.66.247:8200"
  skip_child_token = true # authentication fails if this is not skipped

  auth_login {
    path = "auth/approle/login"

    parameters = {
      role_id   = "bb4b24c6-ae35-90d1-ae8d-ba95b65d7b4a"
      secret_id = "da3ee7e4-74e4-8813-61b8-7dc0dc370344"
    }
  }
}

data "vault_kv_secret_v2" "example" {
  mount = "kv"
  name  = "secret-name"
}

resource "aws_instance" "name" {
  ami           = "ami-053b0d53c279acc90"
  instance_type = "t2.micro"
  tags = {
    # attach the value stored under the "kundan" key of the secret as a tag
    secret = data.vault_kv_secret_v2.example.data["kundan"]
  }
}        

terraform apply        

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Along with the EC2 instance creation, the Vault secret's value is attached as a tag.

Ansible-Vault

Encrypting Files with Ansible Vault

Create a new encrypted file with Ansible Vault (to encrypt an existing file, use ansible-vault encrypt instead):

ansible-vault create aws_credentials.yaml --vault-password-file vault.pass        

Decrypting Files

Decrypt the file when needed:

ansible-vault decrypt aws_credentials.yaml --vault-password-file vault.pass        

Editing Encrypted Files

Edit the encrypted file securely:

ansible-vault edit aws_credentials.yaml --vault-password-file vault.pass        
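If you want to drive these same commands from a script, a small wrapper can assemble the command lines shown above. This is a convenience sketch only; nothing is executed here, but the result can be passed to subprocess.run():

```python
# Assemble an ansible-vault command line matching the examples above.
# Nothing is executed; pass the returned list to subprocess.run().
def ansible_vault_cmd(action: str, target: str, password_file: str) -> list:
    if action not in {"create", "encrypt", "decrypt", "edit"}:
        raise ValueError(f"unsupported action: {action}")
    return ["ansible-vault", action, target,
            "--vault-password-file", password_file]

cmd = ansible_vault_cmd("edit", "aws_credentials.yaml", "vault.pass")
```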

Mini Project: Using AWS Secrets Manager with a Lambda Function

This mini-project demonstrates how to create a Lambda function that retrieves a secret from AWS Secrets Manager and uses it to connect to an RDS database.

Steps

Step 1: Create a Secret in AWS Secrets Manager

  1. Navigate to AWS Secrets Manager: Open the AWS Management Console and go to Secrets Manager.
  2. Store a New Secret: Click "Store a new secret", choose "Credentials for RDS database", and enter the database credentials. Choose an encryption key (the default AWS managed key or a custom key).
  3. Configure the Secret: Name the secret (e.g., myDatabaseSecret) and add any necessary tags.
  4. Review and Store: Review the settings and click "Store".

Step 2: Create an IAM Role for Lambda

  1. Navigate to IAM: Open the AWS Management Console and go to IAM.
  2. Create a New Role: Click "Roles" and then "Create role". Choose "AWS service" and select "Lambda".
  3. Attach Policies: Attach the AWSLambdaBasicExecutionRole and SecretsManagerReadWrite policies.
  4. Configure and Create Role: Name the role (e.g., lambda-secrets-manager-role) and create it.

Step 3: Create a Lambda Function

  1. Navigate to AWS Lambda: Open the AWS Management Console and go to Lambda.
  2. Create Function: Click "Create function", choose "Author from scratch", name the function (e.g., RetrieveSecretFunction), and select the runtime (e.g., Python 3.8).
  3. Set Execution Role: Under "Permissions", choose "Use an existing role" and select the role created in Step 2.
  4. Create Function: Click "Create function".

Step 4: Add Code to Lambda Function

  • Edit the Lambda Function: Open the function in the console's code editor.
  • Add the Following Code:

import boto3
import json

def lambda_handler(event, context):
    secret_name = "myDatabaseSecret"
    region_name = "us-west-2"

    # The execution role from Step 2 supplies the credentials
    client = boto3.client('secretsmanager', region_name=region_name)

    try:
        response = client.get_secret_value(SecretId=secret_name)
    except Exception as e:
        # Surface failures (e.g., AccessDenied, ResourceNotFound) to Lambda
        raise e

    # SecretString holds the JSON document stored in Step 1
    secret = response['SecretString']
    secret_dict = json.loads(secret)

    username = secret_dict['username']
    password = secret_dict['password']

    # Returned here only for demonstration; avoid returning credentials
    # from real functions
    return {
        'statusCode': 200,
        'body': json.dumps({
            'username': username,
            'password': password
        })
    }        

  • Deploy the Function: Click "Deploy" to save your changes.

Step 5: Test the Lambda Function

  • Create a Test Event: In the Lambda console, create a new test event with a dummy payload (e.g., {}).
  • Run the Test: Execute the test event and check the output. You should see the username and password retrieved from AWS Secrets Manager.
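Before deploying, the JSON handling in the handler above can also be exercised locally. The snippet below stands in for the GetSecretValue response shape, with fake placeholder credentials, so no AWS access is needed:

```python
import json

# Stand-in for a GetSecretValue response; the credentials are fake
# placeholders used only to exercise the parsing logic locally.
sample_response = {
    "SecretString": json.dumps({"username": "dbadmin", "password": "s3cr3t"})
}

secret_dict = json.loads(sample_response["SecretString"])
result = {
    "statusCode": 200,
    "body": json.dumps({
        "username": secret_dict["username"],
        "password": secret_dict["password"],
    }),
}
```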


Conclusion

HashiCorp Vault and Ansible Vault are crucial tools for managing and securing sensitive data. HashiCorp Vault offers robust solutions for storing and controlling access to secrets, while Ansible Vault ensures that sensitive files are encrypted and easily managed.

Integrating AWS Secrets Manager with a Lambda function, as demonstrated, provides a secure and efficient way to handle secrets, avoiding the pitfalls of hardcoding sensitive information. By following the steps outlined, you can confidently manage secrets across various platforms and applications, ensuring your data remains protected.

Implement these tools today to strengthen your security posture and simplify your secret management processes.
