Securing the Cloud #19
Brandon Carroll
Explaining Cloud Infrastructure Security in simple and entertaining ways.
Welcome to the 19th edition of the Securing the Cloud Newsletter! In this edition, we look at AWS CodePipeline and discuss a bit about the mindset shift required for cloud networking and security professionals. We'll explore how thinking like a programmer, regardless of your background, can benefit you in managing cloud infrastructure and security.
Cloud Security Best Practices: Embracing a Programmer's Mindset & Digging into CodePipeline
In cloud security, the evolution from traditional methods to innovative practices is something you'll likely experience as your career develops. The adoption of Infrastructure as Code (IaC) with tools like Terraform in AWS CodePipeline is a prime example. This approach not only streamlines operations but also enhances security and compliance.
As you know, I have been sharing the various elements of a GitOps workflow on AWS, with CodeCommit, CodePipeline, and CodeBuild. I think that learning these skills is essential if you are going to work in any larger organization that has a more formal process for deploying infrastructure. Truth be told, you'd benefit from using these tools in smaller organizations as well, even though it's tempting to just SSH into your gear and make changes whenever there's a need. Still, that's probably not the best way to do things. That's why I think, as a networking professional and a security professional, if you're going to work in cloud, you have to embrace a developer's mindset. Coding concepts are becoming increasingly relevant, even in cloud networking and security. To start this journey, I highly recommend exploring this article that introduces you to key DevOps principles.
But let's get into the share for this week, and I want to talk about CodePipeline. Back in the 17th edition of this newsletter, I shared some basics on how to get started with CodePipeline. But there's something interesting that happens as the pipeline progresses that I want to dig into today. Let me explain.
This is what my pipeline looks like:
And this is what my pipeline.json file looks like (the file that defines the steps the pipeline goes through):
{
  "pipeline": {
    "name": "MyGitOpsPipeline",
    "roleArn": "arn:aws:iam::670977908213:role/CodePipelineServiceRole",
    "artifactStore": {
      "type": "S3",
      "location": "my-gitops-pipeline-terraform-artifacts"
    },
    "stages": [
      {
        "name": "Source",
        "actions": [
          {
            "name": "Source",
            "actionTypeId": {
              "category": "Source",
              "owner": "AWS",
              "provider": "CodeCommit",
              "version": "1"
            },
            "outputArtifacts": [
              {
                "name": "SourceArtifact"
              }
            ],
            "configuration": {
              "RepositoryName": "IACRepo",
              "BranchName": "main"
            }
          }
        ]
      },
      {
        "name": "DeployOnDev",
        "actions": [
          {
            "name": "Build",
            "actionTypeId": {
              "category": "Build",
              "owner": "AWS",
              "provider": "CodeBuild",
              "version": "1"
            },
            "inputArtifacts": [
              {
                "name": "SourceArtifact"
              }
            ],
            "outputArtifacts": [
              {
                "name": "TerraformPlanArtifact"
              }
            ],
            "configuration": {
              "ProjectName": "TerraformBuildProject"
            }
          },
          {
            "name": "ApproveChanges",
            "actionTypeId": {
              "category": "Approval",
              "owner": "AWS",
              "provider": "Manual",
              "version": "1"
            },
            "configuration": {
              "CustomData": "Please review the applied changes in the Dev environment before proceeding."
            }
          }
        ]
      },
      {
        "name": "PostApprovalDestroy",
        "actions": [
          {
            "name": "Destroy",
            "actionTypeId": {
              "category": "Build",
              "owner": "AWS",
              "provider": "CodeBuild",
              "version": "1"
            },
            "inputArtifacts": [
              {
                "name": "SourceArtifact"
              }
            ],
            "configuration": {
              "ProjectName": "TerraformDestroyProject"
            }
          }
        ]
      }
    ]
  }
}
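By the way, if you're building along with me, a pipeline defined this way can be created or updated straight from that JSON file with the AWS CLI. Here's a minimal sketch, assuming you've saved the file locally as pipeline.json and your credentials and region are already configured:

# Create the pipeline the first time
aws codepipeline create-pipeline --cli-input-json file://pipeline.json

# Push changes to an existing pipeline after editing the file
aws codepipeline update-pipeline --cli-input-json file://pipeline.json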
The first stage is called Source in the JSON file, and that is also reflected in the screenshot of the AWS Console. Notice that the source files come from my CodeCommit repo. These files are pulled from the repo and zipped up into an archive on S3 as an "artifact." You can see this here in the code:
"outputArtifacts": [
{
"name": "SourceArtifact"
}
If you go look at that in S3, here is what you see:
Each time you run the pipeline it creates an artifact with the code. You can download that file, unzip it, and see what's inside.
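If you'd rather stay in the terminal, you can pull the artifact down with the AWS CLI. A minimal sketch, using the bucket name from the pipeline definition above (the exact object key is generated by CodePipeline, so grab it from the listing):

# List the artifacts the pipeline has written to the artifact bucket
aws s3 ls s3://my-gitops-pipeline-terraform-artifacts/ --recursive

# Download one of the source artifacts and look inside
aws s3 cp s3://my-gitops-pipeline-terraform-artifacts/<object-key> SourceArtifact.zip
unzip SourceArtifact.zip -d source/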
Notice that the next step in the pipeline is the DeployOnDev stage. In the JSON file, its input artifact is the SourceArtifact produced by the Source stage:
"inputArtifacts": [
{
"name": "SourceArtifact"
}
],
CodePipeline passes the output artifact from one stage on to the next. So CodeBuild unzips the source artifact, which contains all of the Terraform code, and deploys it in my dev account. In this case I deployed a VPC and two EC2 instances. After I manually approve the changes, the pipeline moves to the next stage, which destroys the resources created in the dev account.
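For context, the TerraformBuildProject that runs in the DeployOnDev stage is driven by a buildspec.yml in the repo. I'm not showing my exact file here, but a minimal sketch of what such a buildspec might look like, assuming Terraform is already available in the build image and state lives in a remote backend, is:

version: 0.2

phases:
  build:
    commands:
      # Initialize the working directory and download providers
      - terraform init -input=false
      # Plan and apply without interactive approval
      - terraform plan -out=tfplan -input=false
      - terraform apply -input=false tfplan

artifacts:
  files:
    # Pass everything forward, including any extra buildspec files,
    # so a later stage can find the files it needs
    - '**/*'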
Now here is the tip! When the pipeline moves to the next stage, PostApprovalDestroy, I have to use a second CodeBuild project to spin up a new container that can issue the terraform destroy command. That means I need a second buildspec file, with a different name than the first one, containing all the instructions that CodeBuild project needs. I called mine buildspec_destroy.yml, and it's sitting in my CodeCommit repo. The catch is that the CodeBuild project looks for that file in the artifact output by the previous stage, not in the repo itself. So make sure your buildspec file is included in the artifacts you output from the previous stage, or you might end up like me, banging your head on a desk trying to figure out why it can't see the file that's clearly sitting in your repo.
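To give you an idea, here's a minimal sketch of what a buildspec_destroy.yml might look like. This assumes Terraform is available in the build image and that state is stored in a remote backend (for example S3), so the destroy run can see what the apply run created:

version: 0.2

phases:
  build:
    commands:
      # Re-initialize so Terraform can read the shared state
      - terraform init -input=false
      # Tear down everything the dev deployment created
      - terraform destroy -auto-approve -input=false

The CodeBuild project for the destroy stage just needs to point at this file name instead of the default buildspec.yml.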
My workaround at this step was to just tell the pipeline to go look at the SourceArtifact again, and of course it found the buildspec_destroy.yml file there, which is what I needed. You can see what I did by looking at the output artifact of the DeployOnDev stage:
"outputArtifacts": [
{
"name": "TerraformPlanArtifact"
}
And then note that I did not use that as my input artifact for the PostApprovalDestroy stage:
"inputArtifacts": [
{
"name": "SourceArtifact"
}
Prior to changing the inputArtifacts to point back at the SourceArtifact, I had failure after failure. Good news: it all works now!
Career Advice: Learning from Failure
The path to success is often paved with failures, and my recent experiences with AWS CodePipeline are a testament to that. Up above you saw my pipeline after it was successful. This is what it looked like before:
This made me think: each failed attempt is a stepping stone. In our careers, it's crucial to view failures not as setbacks but as opportunities to learn and improve. Remember, persistence is key, and every adjustment brings you closer to your goal! So don't be afraid to fail at times. It will only make you better!
Learning and Certification Tips: Free Resources and Inspirational Guides
My share today is specifically for those keen on expanding their AWS knowledge. If you haven't already, check out the AWS Skills Centers. These physical locations offer free training resources, and you can check the schedule and sign up online. If you are near one, you gotta check it out.
Additionally, I'd like to highlight my friend and colleague Aaron Hunter. Aaron is a Developer Advocate as well. If you're going for the AWS Cloud Practitioner Certification, check out this Power Hour on Twitch! It's an excellent resource for anyone preparing for this certification.
Conclusion
As we close the 19th edition of our newsletter, try to remember the importance of a growth mindset in cloud security. Embracing failures as learning opportunities and continually seeking knowledge can lead us to remarkable achievements. Keep evolving, keep learning, and as always, "Happy Labbing!"