Developers take note: AI package hallucination is a thing.

Welcome to the latest edition of Chainmail: Software Supply Chain Security News, which brings you the latest software supply chain security headlines from around the world, curated by the team at ReversingLabs.

This week: AI’s threat to supply chain security added a new wrinkle: AI package hallucination. Also: a fake security company is posting malicious repositories to GitHub that claim to offer exploits for popular platforms.

This Week’s Top Story

AI Threat To Supply Chain Security Grows with ChatGPT “AI Package Hallucination” Attack

We have already seen the writing on the wall about the myriad ways that generative AI will remake the software development industry. Microsoft GitHub’s CoPilot, Amazon’s CodeWhisperer, CodeComplete and other AI-powered applications are churning out code by the truckload to speed the work of developers and to make obsolete sites like StackOverflow, which have long been “go-tos” for developers seeking solutions to coding problems. How warm has been the embrace by developers? A survey of 500 developers by GitHub found that more than 90% said they were now using AI to assist them in their work.

But AI isn’t without its problems. Research presented at last year’s Black Hat Briefings revealed that CoPilot-generated code, which is based on an AI “study” of existing, human-generated code, was rife with common security flaws like SQL injection, out-of-bounds errors and recommendations to use insecure encryption algorithms.
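
To make that last point concrete, here is a minimal sketch in Python (the table and column names are invented for illustration) contrasting the kind of injection-prone query construction that research flagged with the parameterized alternative:

    import sqlite3

    # Illustrative only: the "users" table and its columns are hypothetical.

    def find_user_unsafe(conn: sqlite3.Connection, username: str):
        # Injection-prone: user input is pasted straight into the SQL text,
        # so input like "x' OR '1'='1" rewrites the meaning of the query.
        query = f"SELECT id, email FROM users WHERE username = '{username}'"
        return conn.execute(query).fetchall()

    def find_user_safe(conn: sqlite3.Connection, username: str):
        # Parameterized: the driver binds the value; it is never treated as SQL.
        query = "SELECT id, email FROM users WHERE username = ?"
        return conn.execute(query, (username,)).fetchall()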

But balky code isn’t the only risk posed by AI-assisted development. Researchers at the firm Vulcan.io this week warned that attackers can exploit ChatGPT’s tendency toward hallucination to push malicious packages into developers’ environments. They found that ChatGPT sometimes recommends non-existent packages as solutions to coding problems, citing URLs, references, and code libraries that simply don’t exist. Because those recommended packages have never been published to legitimate package repositories, attackers are free to register the names themselves, and the deception is difficult to detect. According to Vulcan, attackers who drafted off those hallucinated recommendations and created corresponding malicious packages and infrastructure could easily deceive developers relying on the AI into downloading and using them.
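
A basic defensive habit, and just a sketch rather than anything prescribed in Vulcan’s report, is to confirm that an AI-recommended package actually exists in the registry and to check how long it has been published before installing it. Against PyPI’s public JSON API, that might look something like this:

    import json
    import sys
    import urllib.request
    from urllib.error import HTTPError

    def check_pypi_package(name: str) -> None:
        # PyPI answers 404 for names that have never been published,
        # a strong hint that a recommendation was hallucinated.
        url = f"https://pypi.org/pypi/{name}/json"
        try:
            with urllib.request.urlopen(url) as resp:
                data = json.load(resp)
        except HTTPError as err:
            if err.code == 404:
                print(f"'{name}' is not on PyPI; treat the recommendation as suspect.")
                return
            raise
        # A package that first appeared only days ago deserves extra scrutiny too.
        uploads = [f["upload_time"] for files in data["releases"].values() for f in files]
        first_seen = min(uploads) if uploads else "unknown"
        print(f"'{name}' exists (latest {data['info']['version']}, first upload {first_seen})")

    if __name__ == "__main__":
        check_pypi_package(sys.argv[1] if len(sys.argv) > 1 else "requests")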

The tendency of ChatGPT and other large language model AI systems to “hallucinate” is well documented. The technology news site CNET experimented with using artificial intelligence to write articles, but was called out after readers found the results riddled with errors. A lawyer who relied on ChatGPT for a legal brief was rebuked by the judge because the brief cited numerous fictitious cases invented by the AI.

A fake security company is posting malicious GitHub repositories

An unidentified party has been creating malicious GitHub repositories under the guise of a fake security company called High Sierra Cyber Security, new research by VulnCheck reveals. The malicious actors pose as security researchers and use the identities of real security professionals to establish credibility. The repositories claim to offer exploits for popular products including Chrome, Exchange, and Discord but actually deliver malware. (SC Magazine)

New Supply Chain Attack Exploits Abandoned S3 Buckets to Distribute Malicious Binaries

A new supply chain attack has been discovered in which threat actors exploit expired Amazon S3 buckets to serve rogue binaries without modifying the modules themselves, The Hacker News reports. The attack was first observed by researchers at the firm Checkmarx. It involved the npm package bignum, whose install process fetched a prebuilt binary from an S3 bucket that had since been abandoned; attackers took over the bucket and served malware in its place. Researchers found that the malware within the binaries can steal user credentials and transmit them to the malicious S3 bucket. The attack highlights the importance of developers and organizations being aware of the potential risks posed by abandoned hosting buckets or obsolete subdomains, Checkmarx said. (The Hacker News)
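
One rough way to spot this class of exposure, a sketch rather than part of the Checkmarx research, is to check where a package’s prebuilt binaries are fetched from at install time. Packages that use node-pre-gyp, as bignum did, declare that host in a “binary” block that the public npm registry exposes:

    import json
    import urllib.request

    def binary_host(package: str) -> None:
        # Fetch the latest version's metadata from the public npm registry.
        url = f"https://registry.npmjs.org/{package}/latest"
        with urllib.request.urlopen(url) as resp:
            manifest = json.load(resp)
        # node-pre-gyp-style packages declare their download host in a "binary"
        # block; a lapsed or abandoned bucket here is the takeover risk described.
        binary = manifest.get("binary")
        if not binary:
            print(f"{package}: no prebuilt-binary block declared")
            return
        print(f"{package}: binaries fetched from "
              f"{binary.get('host', '')}{binary.get('remote_path', '')}")

    if __name__ == "__main__":
        binary_host("bignum")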

OWASP Releases API Security Top 10 for 2023

The OWASP API Security Top 10 list for 2023 has been released, highlighting the most critical security weaknesses and risks associated with APIs. The updated list includes familiar categories from the previous edition, along with some changes and additions. Broken Object Level Authorization, Broken Authentication and Broken Object Property Level Authorization topped the list of API security concerns. The goal of the Top 10 list is to educate developers, architects, and organizations about API security risks and guide them in developing and maintaining secure APIs. (SC Magazine)
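
For readers new to the top entry, Broken Object Level Authorization boils down to an API handing back any object whose ID a caller supplies, without checking that the caller is entitled to it. A minimal sketch, with an invented data model:

    # The data model and names below are invented purely for illustration.
    INVOICES = {
        101: {"owner": "alice", "amount": 250},
        102: {"owner": "bob", "amount": 990},
    }

    def get_invoice_broken(invoice_id: int, requesting_user: str) -> dict:
        # BOLA: any authenticated caller can read any invoice just by
        # iterating IDs, because ownership is never checked.
        return INVOICES[invoice_id]

    def get_invoice_fixed(invoice_id: int, requesting_user: str) -> dict:
        invoice = INVOICES.get(invoice_id)
        # Object-level check: the record must belong to the requester.
        if invoice is None or invoice["owner"] != requesting_user:
            raise PermissionError("not found or not yours")
        return invoice

    if __name__ == "__main__":
        print(get_invoice_broken(102, "alice"))   # leaks bob's record
        try:
            get_invoice_fixed(102, "alice")
        except PermissionError as err:
            print("fixed version:", err)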

RomCom Threat Actor Targets Ukrainian Politicians, US Healthcare

The threat actor known as RomCom launched a cyberattack targeting Ukrainian politicians and a U.S. healthcare organization aiding refugees, says Dark Reading. The attackers used a trojanized version of Devolutions Remote Desktop Manager, distributed through phishing tactics and fake websites. The attack is motivated by geopolitics, aiming to exfiltrate sensitive information. (Dark Reading)

Can Large Language Models and Generative AI Solve the Application Security Problem?

Generative AI (GenAI) is reshaping work across industries… and that may well extend to application security, according to an article over at unite.ai. Generative AI based on large language models (LLMs) could tackle long-standing challenges in the app sec space. Trained on vast amounts of code, modern LLMs can automate vulnerability detection, simulate realistic attack scenarios (including sophisticated multi-step attacks), generate intelligent patches, and enhance threat intelligence, saving time and minimizing human error in the patch development process. While LLMs still have gaps in achieving perfect application security, ongoing advancements in AI and security are expected to bridge these gaps in the future. (Unite.ai)

Resource Round Up

Podcast: How Do You Trust Open Source Software?

ConversingLabs’ host Paul Roberts chats with Naveen Srinivasan, an OpenSSF Scorecard maintainer, about his talk at this year’s RSA Conference on how to better trust open source software. In their conversation, Naveen explains how the OpenSSF Scorecard tool can help developers understand the security posture of open source dependencies.

[Listen Now]

Upcoming Webinar: What’s in your [Crypto] Wallet?

On June 22, ReversingLabs’ Tim Stahl will apply several software supply chain analysis concepts to compare similar crypto-wallet software packages, highlighting the risks and threats they pose to everyday users. The same signals can be used to assess a vendor’s overall “build quality” and the level of risk inherent in their software pipeline across products.

[Register Now]

Upcoming Webinar: Eliminating Threats Lurking in Open Source Software

On June 28, ReversingLabs’ experts will take a deep dive into how open source components are constructed, the risks associated with their usage, questions teams should ask themselves as they assess issues, and how to safely use foreign code with the ReversingLabs Software Supply Chain Security platform.

[Register Now]

