Hangman Part III: Automation (Terraform and GitHub Actions)
Brett Howell
DevOps Engineer @ Menrva Technologies | AWS Certified Solutions Architect
Play Hangman:
Introduction:
Welcome to the final installment of our series where we've journeyed through the deployment of a Python Flask application, aptly named Hangman, using an array of AWS services. In previous posts, we've built a Dockerized Flask application, established a secure and robust VPC, configured an RDS instance, and deployed the app using ECS. Now, we're drawing back the curtain on the silent hero that has been orchestrating this intricate dance behind the scenes - Infrastructure as Code (IaC) using Terraform and Continuous Integration / Continuous Deployment (CI/CD) using GitHub Actions.
Part I:
Part II:
Terraform
Why Terraform?
Managing the infrastructure for a growing application can quickly become a complex task, especially when it involves numerous services and technologies. This complexity not only increases the chances of human error but also makes the process of replicating or updating the infrastructure time-consuming.
This is where Terraform, an open-source Infrastructure as Code tool created by HashiCorp, steps in. Terraform allows us to define and manage our infrastructure using configuration files. By treating our infrastructure as code, we can version control it, just like we do with our application code. This ensures that our infrastructure is consistent and repeatable, which is crucial for maintaining the stability of our application as it grows.
Setting Up
To set up Terraform for this configuration, you will start with your 'main.tf' file. This is where all of the data for your configuration goes, and it is technically all you need to get running. Optional but highly recommended is a 'variables.tf' file. This is where you can store information, or 'variables', that gets used repeatedly throughout your configuration, saving you time. You can then simply reference those variables in your 'main.tf' file.
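As a tiny sketch of the pattern (the variable name and default below are placeholders, not values lifted from my configuration):

```hcl
# variables.tf -- declare a reusable value once
variable "aws_region" {
  description = "AWS region to deploy into"
  type        = string
  default     = "us-east-1"   # placeholder default
}
```

Anywhere in 'main.tf' you can then refer to it as var.aws_region instead of hard-coding the value.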
At the start of your 'main.tf' file, you will have to specify a few things before you can get started writing the code for your infrastructure. You start with the 'terraform' block and specify which provider you will be using. If you've been following along in this series, you know we will be choosing AWS.
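A minimal sketch of that opening block (the version constraints here are illustrative, not the exact ones I pinned):

```hcl
terraform {
  required_version = ">= 1.5.0"   # illustrative constraint

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"          # illustrative constraint
    }
  }
}
```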
Next, we will deal with the 'state' file. The state file stores the status of the resources you are managing with a provider. This makes it easy for Terraform to see which resources in your configuration are already deployed whenever you make changes, so it doesn't redeploy resources that are already there. If you are working locally, it is fine to let this file be stored locally, but since this repository is hosted on GitHub, we don't want the sensitive information in the state file to be viewable by the public. This is where the 'S3 backend' comes into play: Terraform can store your state file in an 'S3 bucket' (AWS's object storage service). This keeps most of this project's sensitive data secure and also allows GitHub Actions to deploy this configuration to AWS from my repository. Yay automation!!!
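The backend sits inside that same 'terraform' block. A sketch, with the bucket name, key, and region as placeholders (backend blocks can't use variables, so these stay hard-coded):

```hcl
terraform {
  backend "s3" {
    bucket = "hangman-terraform-state"    # placeholder bucket name
    key    = "hangman/terraform.tfstate"  # path to the state file inside the bucket
    region = "us-east-1"                  # placeholder region
    # dynamodb_table = "terraform-locks"  # optional: adds state locking
  }
}
```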
The last step to setting up your Terraform configuration is to specify the AWS region you want the resources to deploy into. That one is pretty straightforward if you're familiar with AWS.
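In code that is just the provider block, here picking the region up from the variable sketched earlier:

```hcl
provider "aws" {
  region = var.aws_region   # e.g. "us-east-1"
}
```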
Resources
In the 'resource' blocks, we define all the AWS resources our application needs. For our Hangman game, these include a VPC, ECS cluster, an RDS instance, an Application Load Balancer, and many more.
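As an illustration of the shape (not a copy of my configuration), the VPC resource looks roughly like this:

```hcl
resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"   # placeholder CIDR range
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = {
    Name = "hangman-vpc"                  # placeholder name tag
  }
}
```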
There is almost no way one could memorize all of the resource blocks that Terraform uses for AWS, which is why it is absolutely essential to go to HashiCorp's registry and view the AWS provider documentation as you're creating your configuration.
I won't get into the specifics of all of the resources I used because I would mostly be repeating Part II of this series. Though seeing as this is the bread and butter of what Terraform can do, I highly recommend checking out the 'main.tf' file in my repository, which is linked at the bottom of this article.
I also want to note the 'data' blocks in this configuration. A data block references a resource that already exists in your AWS account that you want to use in your configuration but don't necessarily want Terraform to manage. In my case this was an ACM certificate for my Application Load Balancer that I had created prior to this configuration. Also take note that Terraform has an 'import' feature that allows you to import resources that have already been created in AWS via the console or CLI and have them managed by your state file.
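A sketch of what that data lookup can look like (the domain is a placeholder):

```hcl
# Look up an existing ACM certificate without letting Terraform manage it
data "aws_acm_certificate" "alb_cert" {
  domain   = "example.com"   # placeholder domain
  statuses = ["ISSUED"]
}

# The ARN can then be referenced wherever it's needed, e.g. on an ALB listener:
#   certificate_arn = data.aws_acm_certificate.alb_cert.arn
```

The import feature, by contrast, is a one-off CLI command of the form terraform import <resource address> <resource ID>, after which the resource lives in your state file like any other.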
Terraform Conclusion
With the Terraform configuration complete and the state file stored in S3, we are ready to push these files to GitHub and get started with Actions!
For those in the Cloud/DevOps space that haven't taken the time to learn Terraform, I would highly recommend it! CloudFormation is a great tool, but it is proprietary to AWS. Terraform allows you to work with multiple clouds (AWS, Azure, and GCP) as well as Kubernetes. Mumshad Mannambeth has a great intro course over at Udemy.
GitHub Actions
What are GitHub Actions?
GitHub Actions is a CI/CD (Continuous Integration/Continuous Deployment) tool that allows you to automate workflows directly from your GitHub repository. With GitHub Actions, you can build, test, and deploy your code right from GitHub. You can also automate tasks like responding to issues, managing pull requests, or publishing packages, making it a versatile tool for a variety of software development practices. In our case, we are going to automate containerizing our application whenever there is an update and deploying it to AWS with our Terraform configuration.
Connecting GitHub Actions with AWS using OIDC
For GitHub Actions to interact with AWS - and thereby deploy our application - it needs appropriate permissions. We achieve this by connecting GitHub Actions with AWS using OpenID Connect (OIDC). OIDC is an identity layer on top of the OAuth 2.0 protocol, allowing clients to verify the identity of the user based on the authentication performed by an authorization server.
By setting up an OIDC provider with AWS and creating an IAM role that trusts this provider, we can let GitHub Actions assume this role and gain the necessary permissions to deploy our application. This setup not only simplifies permissions management but also enhances the security of our CI/CD pipeline. You can read more about the GitHub Actions OIDC provider and AWS here.
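As a sketch of how the workflow side of that looks (the secret name and region are placeholders; the official aws-actions/configure-aws-credentials action handles the token exchange):

```yaml
name: aws-oidc-example   # illustrative only, not one of my actual workflows
on: workflow_dispatch

permissions:
  id-token: write   # lets the job request an OIDC token from GitHub
  contents: read

jobs:
  whoami:
    runs-on: ubuntu-latest
    steps:
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.AWS_ROLE_ARN }}  # placeholder secret holding the IAM role ARN
          aws-region: us-east-1                        # placeholder region
      - run: aws sts get-caller-identity               # confirms which role the job assumed
```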
Leveraging Secrets in GitHub Actions
Storing sensitive information like AWS credentials or database passwords in plaintext in our repository is a security risk. GitHub Actions addresses this problem with 'secrets'. Secrets are encrypted environment variables that are created in GitHub repositories to store sensitive information.
In our workflow, we use secrets to store the ARN of the IAM role that GitHub Actions assumes in AWS, database connection strings, and more. These secrets are then referenced in our GitHub Actions workflow file, ensuring that our sensitive information is securely stored and only accessible to the GitHub Actions environment during runtime.
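Referencing a secret inside a step looks like the fragment below (the secret name and command are placeholders you would define under the repository's Settings > Secrets and variables > Actions):

```yaml
# Fragment of a job's steps, not a complete workflow
- name: Run a database task
  env:
    DATABASE_URL: ${{ secrets.DATABASE_URL }}   # decrypted only at runtime and masked in logs
  run: python init_db.py                        # placeholder command
```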
Understanding the Workflow
Now, let's understand the workflow defined in our 'app-update.yml' file. This workflow is triggered whenever we push changes to the main branch of our repository. Once triggered, the workflow executes the following steps:
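In broad strokes, a workflow like this checks out the code, authenticates to AWS over OIDC, builds and pushes the Docker image, and then lets Terraform roll out the change; the full step list is in 'app-update.yml' in the repository. A sketch of that shape, where every name and secret below is a placeholder rather than my real value:

```yaml
name: app-update              # sketch of the shape only; see app-update.yml in the repo
on:
  push:
    branches: [main]

permissions:
  id-token: write
  contents: read

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Authenticate to AWS via OIDC
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.AWS_ROLE_ARN }}   # placeholder secret
          aws-region: us-east-1                         # placeholder region

      - name: Log in to Amazon ECR
        uses: aws-actions/amazon-ecr-login@v2

      - name: Build and push the container image
        run: |
          docker build -t ${{ secrets.ECR_REPOSITORY }}:latest .
          docker push ${{ secrets.ECR_REPOSITORY }}:latest

      - uses: hashicorp/setup-terraform@v3

      - name: Deploy with Terraform
        run: |
          terraform init                  # reads state from the S3 backend
          terraform apply -auto-approve
```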
It's important to note that there is a second workflow I created called 'tf-update.yml' that simply runs a Terraform deployment when there are changes to be made to the infrastructure but not the Hangman application itself.
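Again as a sketch rather than the exact file, that second workflow boils down to a plan and apply, triggered (in this sketch, at least) only when Terraform files change:

```yaml
name: tf-update               # sketch only; see tf-update.yml in the repo
on:
  push:
    branches: [main]
    paths: ['**.tf']          # assumption: only fire when Terraform files change

permissions:
  id-token: write
  contents: read

jobs:
  terraform:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.AWS_ROLE_ARN }}  # placeholder secret
          aws-region: us-east-1                        # placeholder region
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init
      - run: terraform plan
      - run: terraform apply -auto-approve
```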
GitHub Actions Conclusion
By leveraging GitHub Actions, we've created a robust and secure CI/CD pipeline for our Hangman game. Now, whenever we push changes to our repository, our code is automatically built, tested, and deployed to AWS. This automation ensures our application is always running the latest version of the code and removes the need for manual deployment processes.
Project Complete!!!
As we wrap up this series, it's crucial to take a step back and appreciate the breadth of our journey. From crafting a Python Flask application to mastering the art of containerization with Docker, from understanding the nuances of database management with MySQL to navigating the intricacies of cloud networking with VPC, each step of this journey presented its unique challenges and learning opportunities.
We didn't merely deploy an application; we dove headfirst into a wide array of technologies and practices emblematic of modern application development and deployment. Each component of this project required us to grasp new concepts, explore unfamiliar territories, and piece together a diverse technological puzzle.
Docker introduced us to the world of containers, giving us the tools to ensure consistent and reproducible application environments. MySQL honed our skills in database management, teaching us to construct, manipulate, and maintain a relational database. The Amazon VPC (Virtual Private Cloud) invited us to wrestle with cloud networking, enabling us to build a secure and isolated section of the AWS Cloud where we could launch our resources.
Then came the robust suite of AWS services, each serving a distinct purpose in our infrastructure. ECS and Fargate powered our application deployment, RDS took charge of our database needs, ALB balanced our application traffic, and the list goes on. With each service, we learned to appreciate the scalability and reliability of cloud infrastructure.
However, the learning didn't stop at deployment. We ventured into Infrastructure as Code with Terraform, transforming the way we managed and provisioned our infrastructure. We then delved into the realm of CI/CD with GitHub Actions, automating our application's building and deployment processes.
In retrospect, this journey has been as much about learning and growth as it has been about building and deploying an application. Each technology I encountered, each problem I solved, and each solution I implemented added to my arsenal of skills and knowledge, reinforcing my readiness to tackle even more complex and challenging projects in the future.
Going Forward
As we conclude this series, I am already looking ahead to the next phase of our project. There are a few key improvements and additions I plan to implement to continue enhancing the application and my skills. Here's a glimpse of what's to come:
These enhancements will not only make Hangman more robust, engaging, and secure but will also provide us with opportunities to learn and apply new technologies. I am excited to continue this journey of learning and growth, and I can't wait to share our progress with you.
Stay tuned for updates on these enhancements and more as we continue to build and learn!