Are You Ready For These DevSecOps Trends?

As we enter 2024, AI will not only make a big splash but also dive deep into nearly every fabric of IT development and user scenarios.

This TechCrunch article caught my eye, as it reflects what we at AiThoughts and LastMile have been priming ourselves for.

Some key takeaways from the article:

  • As per GitLab’s global DevSecOps Survey, over 81% of respondents said they would like more training on how to use AI effectively
  • As per GitLab’s research, currently 41% of DevSecOps teams use AI for automated test generation as part of software development. That number is expected to double this year and be an embedded part of the development process within two years
  • AI-powered code increases the risk of vulnerabilities being introduced and higher chances of data privacy breaches
  • AI tools that rely on internet data for training (like LLMs) will inherit the biases expressed across online content. This will amplify existing biases and create new ones, striking at the heart of fair, safe and just AI


This leads me to assume that:

  • Whilst awareness of what constitutes responsible AI is increasing, knowledge of the practices required to ensure that AI systems comply with regulatory and privacy norms, and are safe, trustworthy and fair, is still nascent
  • As AI gets embedded into code development (via Copilot and other equivalent tools), it is quite likely that security breaches will increase. AI has learnt some good coding practices, and some bad ones that have already created vulnerabilities (see the sketch after this list)!
  • “Testers will be overrun by AI” was a common refrain I heard, and there are nay-sayers and ayes in the crowd. What is true is that if AI-generated tests are going to be the trend, then there needs to be a better set of eyes watching what AI tests! AI-generated code plus AI-generated tests may be fine for my friendly neighbourhood Spiderman, but sure as hell not for business-critical applications. So the question will be: who will watch the watchdog?
  • With nearly every regulatory compliance regime across the world focused on fairness, privacy, safety and security, standards and processes tailored to each enterprise’s context will need to be defined, with adequate governance mechanisms to back them. That AI oversight (like testing oversight or CISO oversight) is still a huge gap that needs to be plugged.
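
To make the vulnerability point concrete, here is a minimal, hypothetical Python sketch (my own illustration, not code from the article or any particular assistant): the kind of string-built SQL an AI assistant can suggest after learning bad practices, next to the parameterised query a human reviewer or a pipeline gate should insist on.

# Hypothetical, minimal sketch (not from the article): an AI-suggested
# "bad practice" versus the safer pattern a reviewer should insist on.

import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # String-built SQL, a pattern assistants have picked up from the wild.
    # Input such as  "x' OR '1'='1"  turns this into an injection.
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterised query: the driver handles escaping, closing the hole.
    return conn.execute(
        "SELECT id, username FROM users WHERE username = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
    conn.executemany("INSERT INTO users (username) VALUES (?)", [("alice",), ("bob",)])

    probe = "nobody' OR '1'='1"
    print("unsafe:", find_user_unsafe(conn, probe))  # leaks every row
    print("safe:  ", find_user_safe(conn, probe))    # returns nothing

Note that a naive auto-generated test that only checks the happy path (looking up “alice”) would pass against both versions, which is exactly why AI-generated tests need that second set of eyes.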

With L Ravichandran, Yeshwant Satam and Anil Sane, we have created offerings that help organizations come up to speed and arm them with the processes, standards and tools to deliver responsible AI.

Feel free to reach out!


Abishek Bhat

Vice President, Business Development at Trigent Software | Sales and Management Leader | Digital Transformation Catalyst

8 months

Nice.

Shrini Kulkarni

Independent Consultant | Ex QA Director, OLA | Ex VP JP Morgan | Ex VP Barclays | IIT Madras Alumnus

8 months

Awesome article, Diwakar Menon. Probably the best I have seen. Thanks for drawing attention to key aspects of caution amidst the euphoria of AI.

Madhav K

Content Marketing Manager | Marketing Communications | Copywriting | Crafting Stories that Sell

8 months

Wishing you the very best on your new adventure, Diwakar Menon.
