Balancing Act: The Promise and Perils of AI in Open Source Coding
AI and open source code are revolutionizing the development landscape, empowering teams to accelerate their workflows and stay competitive in today’s dynamic digital economy. However, this rapid innovation raises a critical question: Are organizations compromising security in their race for speed? If development outpaces security measures, the risks could outweigh the rewards, leaving systems vulnerable to exploitation.

With AI and open source code being used so extensively, it is important to understand modern development and security practices and how to mitigate the associated risks.

Before we delve into the pros and cons, one point is worth stating up front: companies cannot blindly trust open source solutions, because they often have very little visibility into who created or contributed to them. The same security concerns apply when we leverage AI-generated code built on open source.

Value of utilizing AI and Open Source Code:

There is real value in utilizing open source code. A recent survey report states that 83% of security leaders acknowledge that their developers use AI to generate code.

Accelerated Delivery:

Benefit: Faster release cycles for new products and updates enable businesses to respond quickly to market demands. This speed enhances competitiveness and customer satisfaction by delivering timely solutions.

Collaborative Strength:

Benefit: Leveraging a community of developers fosters seamless collaboration and effective change management. This open network not only boosts innovation but also ensures transparency and accountability throughout the development process.

Advancing Technology:

Benefit: Combining speed and teamwork fuels technological breakthroughs and drives innovation in research. Collaborative efforts streamline the integration of cutting-edge technologies into practical applications.


A recent survey revealed that 75% of developers reported either having no trust or only partial trust in open-source libraries.

Understanding the Security Risks Involved

When considering the challenges surrounding AI, open source software, and software development, there are several risks that must be carefully evaluated. For example, AI assistants like ChatGPT and Copilot can help generate code, but the real concern lies in how to verify the security of each line of open-source code. In fact, the latest Venafi report states that:

• 75% of developers believe it is impossible to fully verify the security of such code.

• 78% think AI-developed code will lead to a security reckoning.

• 91% are concerned about developers using AI to generate code.

In a nutshell, we have to verify the open source libraries we consume. A software supply chain security framework helps address the risks associated with sourcing, using, and integrating third-party code (such as open-source components). The main benefits of such a framework are risk mitigation, security verification, compliance and trust, and transparency and traceability.
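As a concrete illustration of that verification step, here is a minimal sketch, under my own assumptions, of checking a downloaded open-source artifact against a SHA-256 hash pinned by the team. The file names and digests are placeholders, and in practice pip's built-in `--require-hashes` mode in a requirements file achieves the same goal with far less custom code.

```python
# Minimal sketch: verify a downloaded third-party artifact against a pinned
# SHA-256 digest before it is allowed into the build. File names and digests
# below are placeholders, not real values.
import hashlib
import sys

# Hypothetical manifest maintained by the team: file name -> expected digest.
PINNED_HASHES = {
    "requests-2.32.3-py3-none-any.whl": "<expected sha256 digest goes here>",
}

def sha256_of(path: str) -> str:
    """Stream the file so large artifacts don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(path: str, name: str) -> bool:
    expected = PINNED_HASHES.get(name)
    if expected is None:
        print(f"{name}: not in the pinned manifest, refusing to trust it")
        return False
    actual = sha256_of(path)
    if actual != expected:
        print(f"{name}: hash mismatch (expected {expected}, got {actual})")
        return False
    print(f"{name}: hash verified")
    return True

if __name__ == "__main__":
    # Usage: python verify_artifact.py <downloaded-file> <manifest-name>
    sys.exit(0 if verify(sys.argv[1], sys.argv[2]) else 1)
```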

Common attacks and examples:

• After the CrowdStrike outage, attackers inserted malware into open-source code and presented it as a "repair". There is also the possibility of spoofing internal package names and publishing them in a public registry.

• Typos in a GitHub Action that match a typosquatter's action can run malicious code. In the same vein, attackers publish misspelled versions of open-source libraries and wait for developers to download the malicious package (a simple typosquat check is sketched after this list).

• Attempts to insert a backdoor into XZ Utils, a compression and decompression tool that ships with most Linux distributions.

• Flood attacks, where attackers send huge amounts of non-malicious information through an AI system to cover up something else, such as malicious code.
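To make the typosquatting point above more concrete, here is a rough sketch that compares declared dependency names against a team allowlist and flags names that look suspiciously similar to an allowed package. The allowlist contents and similarity threshold are illustrative assumptions, not recommendations; curated internal mirrors and registry-level controls remain the stronger defense.

```python
# Rough sketch: flag dependency names that are not on the allowlist but are
# one small edit away from an allowed name (a possible typosquat).
from difflib import SequenceMatcher

# Placeholder allowlist of packages the team actually uses.
ALLOWED = {"requests", "numpy", "pandas", "cryptography"}

def close_match(name: str, allowed: set[str], threshold: float = 0.85) -> str | None:
    """Return an allowed name that is suspiciously similar to `name`, if any."""
    for candidate in allowed:
        if SequenceMatcher(None, name.lower(), candidate).ratio() >= threshold:
            return candidate
    return None

def audit(dependencies: list[str]) -> None:
    for dep in dependencies:
        if dep.lower() in ALLOWED:
            continue
        near = close_match(dep.lower(), ALLOWED)
        if near:
            print(f"WARNING: '{dep}' is not allowed but resembles '{near}' (possible typosquat)")
        else:
            print(f"NOTE: '{dep}' is not on the allowlist; review before use")

if __name__ == "__main__":
    # 'reqeusts' is the classic transposition typo of 'requests'.
    audit(["requests", "reqeusts", "numpyy", "flask"])
```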

Risks of using AI and Open-Source Code

  • The pace of development is outpacing the speed of security.
  • Only 47% of companies have policies to ensure the safe use of AI in development environments.
  • Despite the risks, AI and open source code are here to stay.

Let’s look at some of the concerns that arise when developers use AI to generate or write code.

1. Threat actors poisoning AI models to insert malicious code (a lightweight review sketch follows this list)

2. Lack of quality control or maintenance of open source libraries

3. Establishing and verifying the provenance of code

4. Keeping security on pace with development

5. It will be harder to establish accountability for errors

6. Adding more complexity to securing CI/CD pipelines
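For the first concern in the list above, one inexpensive triage step is to statically inspect AI-generated Python before anyone runs it. The sketch below is an assumption about what such a check could look like rather than a prescribed tool: it uses the standard library's ast module to flag dynamic evaluation, process execution, and low-level network imports for human review.

```python
# Lightweight sketch: parse AI-generated Python and flag constructs that
# warrant a human look before the code is executed. This is a triage aid,
# not a substitute for review or proper static analysis tooling.
import ast

# Names treated as "needs review" in this sketch (an assumption, not a standard).
SUSPICIOUS_CALLS = {"eval", "exec", "compile", "__import__"}
SUSPICIOUS_MODULES = {"subprocess", "socket", "ctypes"}

def flag_risky_constructs(source: str) -> list[str]:
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        # Direct calls such as eval(...), exec(...).
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in SUSPICIOUS_CALLS:
                findings.append(f"line {node.lineno}: call to {node.func.id}()")
        # Imports of modules that reach the OS or the network.
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom):
            names = [node.module or ""]
        else:
            continue
        for name in names:
            if name.split(".")[0] in SUSPICIOUS_MODULES:
                findings.append(f"line {node.lineno}: imports {name}")
    return findings

if __name__ == "__main__":
    generated = "import subprocess\nsubprocess.run(['curl', 'http://example.com'])\n"
    for finding in flag_risky_constructs(generated):
        print(finding)
```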

Strategies to Enhance Security:

So what can be done to enhance security?

• 92% believe code signing should be used to ensure open source code can be trusted, which calls for secure code signing methodologies.

We want to ensure that we are using trusted software and that all the underlying dependencies are authorized to run in the enterprise environment. For example, NIST SP 800-204D (National Institute of Standards and Technology Special Publication 800-204D) provides strategies for integrating software supply chain security into DevSecOps CI/CD pipelines. It offers guidance on secure development and integration practices that help us build trustworthy systems.

Reference: https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-204D.pdf

While we focus on the secure code signing process, we also want to stop unauthorized code from running inside the environment. For AI and ML, that means understanding how the AI code gets generated, knowing where the LLM models came from, and verifying the Python code those models produce before it runs. Pre-trained models and plugins can also introduce vulnerabilities. We should have all the foundational elements in place to stop unauthorized code.


Stop Unauthorized Code with Code Signing Trust Chain
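To illustrate what a code signing trust chain can look like at the point of execution, here is a minimal sketch of verifying a detached RSA signature over a script with the widely used cryptography package before allowing it to run. Key distribution, certificate chains, and revocation are deliberately out of scope, and the file names are placeholders; wiring a check like this into CI is one way to keep unsigned or tampered code out of the runtime environment.

```python
# Minimal sketch: verify a detached RSA signature over a script before it runs.
# Requires the 'cryptography' package (pip install cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def verify_script(script_path: str, signature_path: str, public_key_path: str) -> bool:
    """Return True only if the signature over the script bytes checks out."""
    with open(public_key_path, "rb") as fh:
        public_key = serialization.load_pem_public_key(fh.read())
    with open(script_path, "rb") as fh:
        script_bytes = fh.read()
    with open(signature_path, "rb") as fh:
        signature = fh.read()
    try:
        public_key.verify(signature, script_bytes, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

if __name__ == "__main__":
    # File names below are placeholders for a signed deployment script,
    # its detached signature, and the trusted signer's public key.
    if verify_script("deploy.py", "deploy.py.sig", "trusted_signer.pem"):
        print("signature OK, code may run")
    else:
        print("signature invalid, block execution")
```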

What are your thoughts on this? Do you have any insights or experiences to share regarding this topic? Feel free to join the conversation and contribute your perspective in the comments below!

The views expressed in this article are personal and do not represent the opinions of my employer or any associated organizations. This content is intended for informational purposes only.
