Five simple tricks all DevOps Engineers have up their sleeves.
Javier Colladon
Cloud & AI integration Expert | Inspiring & Innovative Leader | Author - Content Creator - History Buff |
It's technical Wednesday again, and today we will discuss continuous integration/continuous deployment, those fancy words you usually hear compressed into CI/CD, the backbone of cloud DevOps.
Let's start from the beginning and define CI/CD: From a DevOps perspective, it's a collection of best practices and techniques for automating building, testing, and releasing infrastructure or software changes, reducing manual errors, speeding up delivery, and ensuring reliable, secure, and user-friendly solutions.
Okay, now that we all understand DevOps, we will bring in our special collaborator and guru of this space, Mr. Captain Obvious, who will provide us with five valuable tricks to optimise those pesky pipelines. Don't get me wrong; I know these tricks are trivial, and every engineer will say this is no breakthrough. However, since non-engineers read this space too, I am trying to keep our Wednesday sessions open to all audiences.
If you want to see more in-depth technical content, advanced tricks, or solution- or architecture-oriented content, let me know in the comments, and I will start adding those topics to our weekly sections.
Now, and without further ado, we can go to today's tricks.
Security Integration.
It's important to detect any security issue before our deployment hits the big P (the Production environment). This may sound difficult, but there is an easy way to deal with it, and the best part is that it can be free. What to do? Incorporate a vulnerability scanner into your pipeline, which can be as free as you want. For example, Trivy, an open-source tool developed by Aqua Security, can be integrated to check your code, dependencies, and container images for known issues. How? Something like this:
name: Security Scan
on: [push]
jobs:
  security-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Run Trivy Vulnerability Scan
        # Assumes the Trivy CLI is already installed on the runner
        run: trivy fs --exit-code 1 --severity HIGH,CRITICAL .
In the example above, the pipeline will fail whenever a high or critical vulnerability is found.
Please don't share your secrets.
In coding, as in life, this advice is always helpful. Your secrets must stay secret. The good news is that most online repositories, like GitHub, will warn you if you do something as careless as hardcoding your passwords in a script. Nevertheless, you would be surprised how many times I have had to have "the talk" with an engineer (sometimes a senior one) who did it anyway, just because it was a test bed or because it was internal, then committed the change and, boom, for some reason it ended up in a public project. Always remember: it only takes five seconds of carelessness to screw it up, and nobody is immune. Keep this in mind, and always use environment variables to keep your secrets out of your code, no matter how safe it seems.
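As a minimal sketch of what this looks like in a GitHub Actions workflow (the secret name API_TOKEN and the deploy script are placeholders, not from any real project), the value is stored under the repository's encrypted Secrets and reaches the step only through an environment variable:

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Deploy
        env:
          # Defined under Settings > Secrets in the repository UI,
          # never written into the source code itself
          API_TOKEN: ${{ secrets.API_TOKEN }}
        run: ./deploy.sh
```

GitHub also masks the secret's value in the job logs, so even a careless echo won't print it in plain text.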
Always Enforce Code Quality Checks
Here is another one that will help maintain consistency and catch issues before release, "Integrate linting and testing early."
In the following Jenkinsfile example, by adding these few lines for linting (a process that checks the code for programmatic and style errors), the pipeline will stop if any stage fails, preventing low-quality code from moving on.
stage('Lint') {
    steps {
        sh 'npm install && npm run lint'
    }
}
stage('Test') {
    steps {
        sh 'npm test'
    }
}
Role Based Access Control (RBAC) is your best friend.
Permissions should (or shall I say "MUST") be assigned on a least-privilege basis and, where possible, kept away from individual people. By using role-based permissions, you simplify the task of granting access ONLY to those who need it. In the cloud, you can go further and grant permissions directly to components, such as service accounts or pipelines, so people don't have to log in to the systems and perform tasks manually.
In this example, we use GitHub protected branches and required reviewers to ensure that only authorised individuals can finalise a change.
name: Build
on:
  pull_request:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: npm ci && npm run build
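The workflow above only runs the build on pull requests; the "required reviewers" part lives in the repository's branch protection settings (Settings > Branches). A CODEOWNERS file pairs well with the "Require review from Code Owners" rule there. A minimal sketch, with hypothetical paths and team names:

```
# .github/CODEOWNERS
# With branch protection's "Require review from Code Owners" enabled,
# changes to these paths need an approval from the listed owners
# before the pull request can be merged into main.
/infrastructure/   @example-org/platform-team
*.yml              @example-org/devops-leads
```

This way, the review requirement is versioned alongside the code instead of living only in people's heads.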
Cache me if you can
Why rebuild everything every time if we can get away with not doing so? That's a good question, and the answer is caching! By caching dependencies, you can cut the duration of a pipeline, for example, a GitHub Actions job, skipping unnecessary work and speeding up the process.
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/cache@v3
        with:
          # npm ci deletes node_modules before installing, so we cache
          # npm's download cache (~/.npm) instead of node_modules itself
          path: ~/.npm
          key: ${{ runner.os }}-node-${{ hashFiles('package-lock.json') }}
      - run: npm ci
      - run: npm run build
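One caveat worth knowing: with an exact key like the one above, any change to package-lock.json means a full cache miss. The actions/cache step supports a restore-keys fallback, sketched here, that lets the runner start from the newest partial match instead of from nothing:

```yaml
      - uses: actions/cache@v3
        with:
          # npm's download cache survives npm ci, unlike node_modules
          path: ~/.npm
          key: ${{ runner.os }}-node-${{ hashFiles('package-lock.json') }}
          # Falls back to the newest cache whose key starts with this
          # prefix when no exact match for the key above exists
          restore-keys: |
            ${{ runner.os }}-node-
```

With the fallback in place, npm only has to download the packages that actually changed, which is usually a small fraction of the total.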
And with this, I will call it a day. I hope you find this content useful and that it kickstarts some ideas for your own deployments and DevOps scripts.
Let me know what you think in the comments below.
Have a wonderful Wednesday, and as usual, be curious and always learn something new!