Empowering DevOps with Generative AI!
Pavan Belagatti
GenAI Evangelist (67k+)| Developer Advocate | Tech Content Creator | 30k Newsletter Subscribers | Empowering AI/ML/Data Startups
The intersection of generative AI and DevOps automation is intriguing.
The global Generative AI in DevOps market is expected to be worth around USD 22,100 Mn by 2032, up from USD 942.5 Mn in 2022, growing at a CAGR of 38.20% during the forecast period from 2023 to 2032.
Source credits: marketresearch.biz
How Is AI Used at Harness?
Richard MacManus, a Senior Editor at The New Stack, asked Jyoti Bansal, the CEO of Harness, how AI is used at Harness, and his answer is below.
One key area where AI is utilized at Harness is in ensuring that code changes do not negatively impact performance, quality, or security. Harness leverages AI models within its continuous delivery pipelines, which compare code changes against data from monitoring and logging systems such as Datadog, Azure Monitor, and Splunk. These models can flag potential issues before code changes reach production, enabling rapid and reliable deployment pipelines.
Another AI technique employed by Harness is “test intelligence.” This addresses the common challenge of lengthy test execution times. By using AI models, Harness identifies the parts of the code that correlate with specific tests, allowing its developers to optimize which tests need to be run. Instead of running a large suite of tests for every code change, Harness can determine the specific tests required for a given change. This significantly reduces test execution time and increases developer productivity.
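The core idea behind test selection can be sketched in a few lines of Python. This is an illustrative toy, not Harness's actual implementation — the coverage map, file names, and test names below are all hypothetical assumptions, shown only to convey how mapping code to tests lets you run a subset per change.

```python
# Toy test-selection sketch (hypothetical data, not Harness's real model):
# given a map of which source files each test exercises, run only the
# tests whose covered files intersect the changed files.
COVERAGE_MAP = {
    "test_checkout": {"cart.py", "payments.py"},
    "test_login": {"auth.py"},
    "test_search": {"search.py", "index.py"},
}

def select_tests(changed_files, coverage_map=COVERAGE_MAP):
    """Return the tests affected by a change, in a stable order."""
    changed = set(changed_files)
    return sorted(
        test for test, files in coverage_map.items() if files & changed
    )

# A change touching only payments.py triggers just the checkout tests.
print(select_tests(["payments.py"]))  # ['test_checkout']
```

In a real system the coverage map would itself be learned or measured (e.g. from per-test coverage traces), which is where the "intelligence" comes in; the selection step stays this simple.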
Read more in The New Stack report: AI Has Become Integral to the Software Delivery Lifecycle
The future: DevOps AI Assistant and AI-based Platform Engineering
Image source credits: Gartner's Hype Cycle 2022
The complexity that Kubernetes, Terraform, Helm charts, and other tools introduced to DevOps increased the overhead on DevOps teams on one hand, while increasing developers' dependency on DevOps on the other. Organizations couldn't keep up with the talent debt while a major portion of existing DevOps resources' time went to addressing developer requests: enter the world of Developer Experience in Platform Engineering.
Also, we can’t ignore internal developer portals here as there is already a big buzz around this. Platform engineering and internal developer platforms are indeed emerging as significant trends in the future of DevOps. These approaches aim to streamline and enhance the software development process by providing developers with robust and scalable platforms that facilitate efficient and collaborative work.
Read more in the original article: The Role of AI in DevOps
I also did the CodeRush show with Amit, the CEO of Kubiya, where he explained how Generative AI will become an integral part of DevOps and how to create automated workflows.
Check out the video.
I also asked developers where they believe Generative AI will genuinely shine in DevOps. You can see what they think in my poll.
Now tell me what you think in the comments.
Also, one interesting aspect I shared yesterday was how ChatGPT is powered by Kubernetes. Read on:
OpenAI began running Kubernetes on top of AWS in 2016 and then migrated to Azure. OpenAI runs key experiments in fields including robotics and gaming both in Azure and in its own data centers, depending on which cluster has free capacity. The company uses Kubernetes mainly as a batch scheduling system and relies on an autoscaler to dynamically scale its cluster up and down. This lets it significantly reduce costs for idle nodes while still providing low latency and rapid iteration.
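The batch-scheduling-plus-autoscaler pattern described above can be illustrated with a toy sketch. This is not OpenAI's code; the numbers and the scaling rule are made-up assumptions, shown only to convey the idea of sizing the cluster to the pending batch queue so that idle nodes cost nothing.

```python
import math

def nodes_needed(pending_jobs, jobs_per_node=4, min_nodes=0, max_nodes=100):
    """Toy autoscaler rule (hypothetical parameters): size the cluster to
    the pending batch queue, scaling to zero when the queue is empty so
    no money is spent on idle nodes."""
    target = math.ceil(pending_jobs / jobs_per_node)
    return max(min_nodes, min(max_nodes, target))

print(nodes_needed(0))     # 0 -> empty queue, no idle nodes
print(nodes_needed(10))    # 3 -> 10 jobs at 4 jobs per node
print(nodes_needed(1000))  # 100 -> capped at max_nodes
```

A real cluster autoscaler reacts to unschedulable pods rather than a job count, but the cost argument is the same: capacity follows demand in both directions.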
As of today,
- Azure is still the exclusive cloud provider for all OpenAI workloads.
- Tens of thousands of Nvidia Corp.'s A100 graphics chips power OpenAI's supercomputer.
- Terraform, Python, Kafka, PostgreSQL, and Cosmos DB are actively used.
- Matt Rickard, an ex-Googler who worked on Kubernetes and was one of the Kubeflow maintainers, posted a few observations about why he thinks OpenAI prefers K8s over HPC frameworks and what makes this K8s use case special.
All credits to Palark GmbH & Devansh Sharma.
Below are links to learn more:
Harness Continuous Verification Feature uses AI and ML
Harness Continuous Verification is a powerful tool that can help you ensure the quality and performance of your deployments. With Harness, you can easily set up a pipeline to verify your deployments, connecting the monitoring tools of your choice. Once you've set up the verification step in the pipeline, Harness uses unsupervised machine learning to detect anomalies in the deployed applications or services. You can set a threshold for these anomalies, and when they cross it, Harness can automatically roll back the deployment, de-risking releases.
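A minimal sketch of the threshold idea, assuming a simple z-score anomaly measure. Harness's actual models are unsupervised and proprietary, so the metric, baseline values, and threshold here are all hypothetical illustrations of "deviate too far from the baseline, roll back."

```python
import statistics

def anomaly_score(baseline, current):
    """Z-score of the current metric value against the baseline window."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline) or 1.0  # avoid division by zero
    return abs(current - mean) / stdev

def should_roll_back(baseline, current, threshold=3.0):
    """Trigger a rollback when the new version's metric is anomalous."""
    return anomaly_score(baseline, current) > threshold

# Hypothetical request latencies (ms): previous version vs. new deployment.
baseline = [102, 98, 101, 99, 100, 103]
print(should_roll_back(baseline, 101))  # False: within normal variation
print(should_roll_back(baseline, 180))  # True: anomalous, roll back
```

The threshold plays the same role as the sensitivity you configure in the verification step: lower it and more deviations trigger a rollback, raise it and only gross regressions do.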
I recently wrote an article showing how to verify your Kubernetes deployments using the Harness continuous verification feature.
Try the Continuous Verification tutorial.
Thanks:)