Let's add another CI to the existing DevOps Framework….
Murali Mohan Josyula (JMM)
AVP - Research Technology Management - AT&T Communication Services India Pvt. Ltd.
Hello Friends, hope all is well ..
In continuation of my previous DevOps blog, I would like to add another workflow step to DevOps. Here is my thought process..
We have moved from the age-old sequential/waterfall development models to Agile/fail-fast models, striving for faster delivery and customer satisfaction. Many of us use fail-fast models most of the time and keep moving on despite frequent failures; we correct them then and there and deliver as per the need, sometimes meeting the timelines and sometimes not.
Let's start thinking differently and ask ourselves: "Why should I fail? Why can't I prevent the failures upfront and deliver fast?"
Rather than failing fast all the time, let us prevent failures by learning from our past failures so that we fail less frequently. The best way to avoid failures is to use AI & ML. This can be introduced at any stage of Agile delivery; however, I would like to confine the context to DevOps in this write-up.
One of the ways I want to avoid failures is to add another CI to the DevOps workflow. You heard it right, it is another CI… Continuous Improvement.
Adding Continuous Improvement to the DevOps workflow
Whenever we talk about DevOps, we always stress CI (Continuous Integration), CD (Continuous Delivery), CT (Continuous Testing) & CM (Continuous Monitoring). I would like to add one more CI (Continuous Improvement) to the DevOps workflow as a mandatory step.
We have seen CI (Continuous Improvement) as part of CMMI processes, Six Sigma, ISO, etc. My point is to bring this into the DevOps process, make it part of the pipeline, and automate it. This is possible with AI & ML.
As it requires a lot of data, we have to start logging the data related to integrations, deployments, testing & monitoring as part of DevOps automation (a minimal logging sketch follows the list below).
Integration Data: This should provide information about the latest integrated code, branch details (Git), release info, build errors, critical findings from static code analysis, etc.
Deployment Data: Deployed versions, information about container images, deployment errors, details of dependent infrastructure/libraries, the checklist to be followed for any deployment, etc.
Testing Data: Test results/defects from functional testing, integration testing, regression testing, and Business Validation Testing (BVT), etc. (test automation performed as part of DevOps).
Monitoring Data: This is mostly operations management data drawn from application logs, server logs, OS logs, ITSM data (incidents/errors raised by end users, events/alerts based on the set thresholds, etc.), infrastructure-related logs, etc.
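To make this concrete, here is a minimal sketch (in Python) of what such structured pipeline logging could look like. The `PipelineEvent` fields, the `log_pipeline_event` helper, and the JSON-lines storage are my own illustrative assumptions, not part of any specific DevOps tool.

```python
# Minimal sketch of structured DevOps pipeline event logging.
# All field names and the storage format here are hypothetical examples.
import json
import time
from dataclasses import dataclass, field, asdict

@dataclass
class PipelineEvent:
    stage: str       # "integration", "deployment", "testing", or "monitoring"
    release: str     # release / build identifier
    git_branch: str  # branch the build came from
    status: str      # "success" or "failure"
    details: dict = field(default_factory=dict)   # stage-specific data (errors, defects, alerts, ...)
    timestamp: float = field(default_factory=time.time)

def log_pipeline_event(event: PipelineEvent, path: str = "devops_events.jsonl") -> None:
    """Append the event as one JSON line; a real setup might push to a data lake instead."""
    with open(path, "a") as fh:
        fh.write(json.dumps(asdict(event)) + "\n")

# Example: record a failed integration build along with its static-analysis findings.
log_pipeline_event(PipelineEvent(
    stage="integration",
    release="2024.06.1",
    git_branch="feature/payment-refactor",
    status="failure",
    details={"build_errors": 3, "critical_static_findings": ["possible SQL injection in OrderDao"]},
))
```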
I would suggest adding dynamic code analysis to the DevOps process as well (as part of CM, Continuous Monitoring), which starts working post deployment. Log all the issues, incidents, etc., and map them to the deployed version of the code, with fine-grained details such as methods that take too long, slow queries, thread contentions, memory leaks, etc.
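As an illustration of mapping runtime findings back to the deployed version, here is a small sketch; the findings data, field names, and grouping helper are hypothetical, standing in for whatever an APM or dynamic-analysis tool would actually export.

```python
# Sketch: correlate runtime findings (from APM / dynamic code analysis) with the
# deployed version that produced them. Finding types and values are illustrative only.
from collections import defaultdict

# Hypothetical findings exported after deployment.
findings = [
    {"version": "2024.06.1", "type": "slow_query", "detail": "SELECT * FROM orders ...", "ms": 2300},
    {"version": "2024.06.1", "type": "thread_contention", "detail": "OrderService.lock", "ms": 480},
    {"version": "2024.05.9", "type": "memory_leak", "detail": "SessionCache", "ms": None},
]

def findings_by_version(findings):
    """Group runtime issues by the code version they were observed against."""
    grouped = defaultdict(list)
    for f in findings:
        grouped[f["version"]].append(f)
    return grouped

for version, issues in findings_by_version(findings).items():
    print(f"{version}: {len(issues)} runtime issue(s) -> {[i['type'] for i in issues]}")
```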
Once we have the data, we can run analytics across the DevOps pipelines, gain a lot of insights, and control things in a much better way and with agility. We can create ML models to predict/anticipate risks such as bandwidth usage, performance glitches, issues with the integrated code, thread contentions, etc. in the lower environments and mitigate/avoid them before production deployments. We need to make the ML models cognitive (self-learning), so that they learn not only from the existing data patterns but also from the new data patterns being added to the system. We can also add AI/ML models that analyze hundreds or thousands of test results, determine that a new change to a particular piece of the code base brought into the deployment is erroneous, and take action as required.
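As a rough sketch of the kind of predictive model meant here, the following trains a simple classifier on hypothetical historical pipeline records and scores the risk of a candidate build before promotion. The features, the sample data, and the 0.5 threshold are illustrative assumptions, not a prescribed design.

```python
# Sketch: train a simple model on historical pipeline data to flag risky deployments
# before they reach production. In practice the model would be retrained continuously
# as new pipeline data arrives (the "cognitive" / self-learning aspect).
from sklearn.ensemble import RandomForestClassifier

# Hypothetical historical records: [build_errors, critical_static_findings,
# failed_tests, files_changed]; label 1 = the release caused a production incident.
X = [
    [0, 0,  1,  4],
    [3, 2, 12, 40],
    [1, 0,  0,  8],
    [5, 4, 20, 65],
    [0, 1,  2, 10],
    [2, 3,  9, 30],
]
y = [0, 1, 0, 1, 0, 1]

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Score a new candidate build from a lower environment before promoting it.
candidate = [[2, 1, 7, 25]]
risk = model.predict_proba(candidate)[0][1]
print(f"Predicted incident risk: {risk:.0%}")
if risk > 0.5:
    print("Block promotion and investigate before the production deployment.")
```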
This is nothing but Continuous Improvement (CI), and I am sure that adding this CI to the DevOps workflow as well would certainly shorten the life cycle while raising quality.
Happy Reading!! Be at Home & Be Safe.
Regards
Murali Mohan Josyula (JMM)