When building cutting-edge systems, there is a high probability that threat actors will follow the trend and try to get into the supply chain. "PyPI Attack: ChatGPT, Claude Impersonators Deliver JarkaStealer via Python Libraries" Due diligence and due care are critical to ensure that the development process is not compromised by such malware. This is especially true in the field of AI, where the rapid pace and the rush into the field can lead teams to ignore basic security principles. #cybersecurity #AI #softwaresupplychain https://lnkd.in/gErcF-it
Posts from Upperity: Identity, Signature, dedicated AI and governance, a cloud alternative
Most relevant posts
-
Cybersecurity researchers have discovered two malicious packages uploaded to the Python Package Index (PyPI) repository that impersonated popular artificial intelligence (AI) models like OpenAI ChatGPT and Anthropic Claude to deliver an information stealer called JarkaStealer. The packages, named gptplus and claudeai-eng, were uploaded by a user named "Xeroline" in November 2023, attracting 1,748 and 1,826 downloads, respectively. Both libraries are no longer available for download from PyPI. "The malicious packages were uploaded to the repository by one author and, in fact, differed from each other only in name and description," Kaspersky said in a post. Stay connected for the industry's latest content – Follow Dr. Anil Lamba, CISSP. #cybersecurity #infosec #informationsecurity #GenAI #softwaresupplychain
PyPI Attack: ChatGPT, Claude Impersonators Deliver JarkaStealer via Python Libraries
thehackernews.com
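The report above notes the packages hid their downloader behind Base64-encoded strings. A minimal defensive sketch of what a dependency reviewer can automate: flag long, decodable Base64 literals in source files, a common obfuscation tell. The regex length cutoff and the helper name are illustrative choices, not taken from the Kaspersky report.

```python
import base64
import re

# Long runs of Base64 alphabet characters are suspicious in source code;
# 60+ characters is an arbitrary illustrative threshold.
B64_RE = re.compile(r"[A-Za-z0-9+/]{60,}={0,2}")

def flag_base64_blobs(source: str):
    """Return truncated previews of decodable Base64 blobs found in source."""
    hits = []
    for match in B64_RE.finditer(source):
        blob = match.group()
        try:
            base64.b64decode(blob, validate=True)  # only flag real Base64
        except Exception:
            continue
        hits.append(blob[:20] + "...")
    return hits

# A synthetic "malicious" module: a long encoded payload assigned to a variable.
payload = base64.b64encode(b"import subprocess;" * 8).decode()
sample = f'data = "{payload}"\n'
suspicious = flag_base64_blobs(sample)
```

A heuristic like this produces false positives (embedded images, keys), so it belongs in code review tooling as a prompt for inspection, not an automatic block.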
-
Exciting or ominous? Cybersecurity researchers uncovered a sneaky plot on PyPI where malicious packages disguised as OpenAI models were spreading an info-stealing trojan! Stay sharp, techies: Xeroline's gptplus & claudeai-eng might not be your AI buddies after all. #ainews #automatorsolutions Python Package Index under siege! Two impostor packages slipped past the gates, pretending to be our trusty AI companions. Don't let JarkaStealer swipe your data under the radar! What's next for our digital fortresses? As the battle of wits unfolds between cyber-criminals and cyber-savvy defenders, will we see more devious tactics emerge? Stay informed, stay vigilant! Arm yourself with knowledge and cutting-edge defense mechanisms to outsmart the online prowlers. Could this signal a new era of AI-targeted cyber attacks? Let's learn from history and gear up for the future! Share your thoughts: how can the tech community fortify our digital defenses against such stealthy threats? #cybersecurity #ITsecurity Imagine the possibilities... What if cyber villains could weaponize our favorite AI tools against us? Innovate, anticipate, and adapt: that's the name of the game in the ever-evolving landscape of cybersecurity. Remember, knowledge is power in the realm of cybersecurity! Let's unite to harness our collective expertise and protect our digital realms from malicious intruders. Stay sharp, stay safe! #techcommunity #cyberdefense #CyberSecurityAINews ----- Original Publish Date: 2024-11-21 23:58
PyPI Attack: ChatGPT, Claude Impersonators Deliver JarkaStealer via Python Libraries
thehackernews.com
-
The Llama Drama vulnerability in the llama-cpp-python package exposes AI models to remote code execution (RCE) attacks, enabling attackers to steal data. Over 6,000 models are currently affected by this vulnerability. #cybersecurity https://lnkd.in/eEfnkR6q
AI Python Package Flaw ‘Llama Drama’ Threatens Software Supply Chain
https://hackread.com
-
AI agents, which combine large language models with automation software, can successfully exploit real-world security vulnerabilities by reading security advisories. "Our vulnerabilities span website vulnerabilities, container vulnerabilities, and vulnerable Python packages. Over half are categorized as 'high' or 'critical' severity by the CVE description." https://lnkd.in/gpY_Dxck #LLM #AIEthics #MachineLearning #Cybersecurity
GPT-4 can exploit real vulnerabilities by reading advisories
theregister.com
-
The "Llama Drama" vulnerability in the Python package "llama_cpp_python" poses a significant threat to the software supply chain by allowing remote code execution (RCE). Identified as CVE-2024-34359, this flaw enables attackers to execute arbitrary code due to the misuse of the Jinja2 template engine. This vulnerability was disclosed by the security researcher retr0reg and is indicative of the broader challenges in securing open-source ecosystems against sophisticated cyber threats. The exploitation of such vulnerabilities can lead to severe data breaches and compromise of system integrity. For further details on how "Llama Drama" impacts AI models and software supply chains, visit Hackread's coverage at https://lnkd.in/excwzqQk. #Cybersecurity #Python #Vulnerability #SoftwareSupplyChain #AI
AI Python Package Flaw ‘Llama Drama’ Threatens Software Supply Chain
https://hackread.com
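The Jinja2 misuse behind CVE-2024-34359 can be shown in a few lines: rendering attacker-controlled templates in a plain `Environment` lets the template walk from built-in helpers up to the interpreter's `os` module, while Jinja2's `SandboxedEnvironment` blocks the dunder attribute access such payloads rely on. This sketch uses a deliberately harmless payload (it only reads `os.name`) and is an illustration of the vulnerability class, not the exact exploit from the advisory.

```python
import os
from jinja2 import Environment
from jinja2.sandbox import SandboxedEnvironment
from jinja2.exceptions import SecurityError

# A server-side template injection payload of the class reported for
# CVE-2024-34359: escape via the built-in `cycler` helper to reach `os`.
payload = "{{ cycler.__init__.__globals__.os.name }}"

# A plain Environment follows the attribute chain and exposes `os`.
leaked = Environment().from_string(payload).render()

# A SandboxedEnvironment refuses dunder attribute access instead.
try:
    SandboxedEnvironment().from_string(payload).render()
    sandbox_blocked = False
except SecurityError:
    sandbox_blocked = True
```

The general lesson for library authors: any template text that can originate from user or model input must be rendered in a sandboxed environment, never a default one.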
-
New AI Tool To Discover 0-Days At Large Scale With A Click Of A Button: Vulnhuntr, a static code analyzer using large language models (LLMs), discovered over a dozen zero-day vulnerabilities in popular open-source AI projects on GitHub (each with over 10,000 stars) within hours. These vulnerabilities include Local File Inclusion (LFI), Cross-Site Scripting (XSS), Server-Side Request Forgery (SSRF), Remote Code Execution (RCE), Insecure Direct Object Reference (IDOR), and Arbitrary File Overwrite […]
New AI Tool To Discover 0-Days At Large Scale With A Click Of A Button
https://gbhackers.com
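A toy version of the pattern tools like Vulnhuntr automate at scale: walk a codebase's AST and flag calls to classic RCE sinks, leaving the harder "does user input actually reach this sink?" reasoning to an LLM or a human reviewer. The sink list and function names here are illustrative, not Vulnhuntr's actual implementation.

```python
import ast

# Names whose calls commonly enable command or code injection.
DANGEROUS_SINKS = {"eval", "exec", "system", "popen"}

def find_sinks(source: str):
    """Return (sink_name, line_number) pairs for risky-looking calls."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            # Handle both bare calls (eval(x)) and attribute calls (os.system(x)).
            name = getattr(node.func, "id", None) or getattr(node.func, "attr", None)
            if name in DANGEROUS_SINKS:
                findings.append((name, node.lineno))
    return findings

sample = "import os\ncmd = input()\nos.system(cmd)\n"
hits = find_sinks(sample)
```

The interesting step an LLM adds on top of this traversal is deciding whether a flagged call is reachable from untrusted input, which is exactly what keyword-based scanners miss.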
-
By Brent Dirks Apr 23, 2024 According to a new study from four computer scientists at the University of Illinois Urbana-Champaign, OpenAI's paid chatbot, GPT-4, is capable of autonomously exploiting zero-day vulnerabilities without any human assistance. Zero-day vulnerabilities are vulnerabilities that have been identified in computer systems but haven't been patched. They are a well-known way for cybercriminals to exploit systems. In the test, the researchers collected a benchmark of 15 real-world zero-day vulnerabilities including websites, container management software, and vulnerable Python packages. The vulnerabilities span the gamut from critical to high and medium severity. The computer scientists built a GPT-4-based LLM agent that exploited 87 percent of the vulnerabilities collected. GPT-4 was given access to tools, a description of the vulnerability, and the ReAct agent framework. Interestingly, the scientists also attempted to provide a wide range of other chatbots with the same information, including OpenAI's free GPT-3.5 and Meta's Llama. But every other chatbot had a 0 percent success rate. In the paper's conclusion, the computer scientists said the findings show that cybersecurity and LLM providers need to integrate defensive measures for better protection.
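The ReAct framework mentioned above is, at its core, a loop in which the model alternates reasoning ("Thought") and tool use ("Action") and feeds each tool result ("Observation") back into its context. A minimal sketch of that loop, with a hard-coded stub standing in for the LLM; the stub model, tool names, and CVE identifier are illustrative, not from the study.

```python
def stub_model(history):
    """Stand-in for an LLM call; scripts a two-step episode."""
    if not history:
        return ("look up the advisory", "read_advisory", "CVE-2024-00000")
    return ("advisory read, wrap up", "finish", "report written")

# Tools the agent may invoke; a real agent would have scanners, a shell, etc.
TOOLS = {
    "read_advisory": lambda arg: f"advisory text for {arg}",
    "finish": lambda arg: arg,
}

def react_loop(model, max_steps=5):
    """Alternate Thought/Action/Observation until the model finishes."""
    history = []
    for _ in range(max_steps):
        thought, action, argument = model(history)
        observation = TOOLS[action](argument)  # execute the chosen tool
        history.append((thought, action, observation))
        if action == "finish":
            return observation, history
    return None, history

result, trace = react_loop(stub_model)
```

The study's finding is essentially that GPT-4, dropped into a loop of this shape with real tools and the advisory text, could carry an exploit through to completion where weaker models could not.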
-
#Malicious Python Libraries Impersonate ChatGPT and Claude. Recent reports have unveiled a concerning supply chain attack involving malicious Python packages uploaded to PyPI. These packages, named gptplus and claudeai-eng, impersonated OpenAI's ChatGPT and Anthropic's Claude, delivering a sophisticated information-stealing malware called JarkaStealer. Here's what happened: The libraries claimed to provide access to the GPT-4 Turbo and Claude APIs but contained Base64-encoded scripts to deploy malware. Once installed, the packages downloaded a JAR file capable of stealing sensitive data (browser information, screenshots, and session tokens) and transmitting it to the attacker's server. The malware, sold as a service on Telegram for $20-$50, has also been leaked on GitHub, exacerbating its reach. Key Insights: Target geography: downloads predominantly occurred in the U.S., China, India, France, Germany, and Russia. This highlights the ongoing risks of integrating open-source components without thorough vetting. Actionable Takeaways for Developers and Organisations: 1. Vet open-source libraries rigorously before integration. 2. Use tools like dependency scanners to identify and mitigate potential threats. 3. Regularly update security protocols for your software development lifecycle. 4. Monitor advisories from trusted cybersecurity sources for real-time updates. #Cybersecurity #SupplyChainAttacks #Python #OpenSource #InfoSec
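The vetting and scanning takeaways above can be sketched as a small requirements audit: compare each requested package name against a curated list of popular packages to catch impersonators. The popular-package list is a stand-in for a real curated feed; the two known-malicious names come from the Kaspersky report quoted earlier.

```python
import difflib

# Stand-in for a curated list of legitimate, widely used packages.
POPULAR_PACKAGES = ["openai", "anthropic", "requests", "numpy", "pandas"]
# The two impersonator packages named in the report.
KNOWN_MALICIOUS = {"gptplus", "claudeai-eng"}

def audit_requirements(names):
    """Return (package, reason) findings for suspicious dependencies."""
    findings = []
    for name in names:
        if name in KNOWN_MALICIOUS:
            findings.append((name, "known malicious"))
            continue
        # Near-matches to popular names are classic typosquats.
        close = difflib.get_close_matches(name, POPULAR_PACKAGES, n=1, cutoff=0.8)
        if close and close[0] != name:
            findings.append((name, f"possible typosquat of {close[0]}"))
    return findings

report = audit_requirements(["requests", "gptplus", "anthropc"])
```

In practice this belongs in CI alongside a real advisory feed (for example OSV data) rather than a hand-maintained set, but the shape of the check is the same.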
-
GPT-4, OpenAI’s latest multimodal large language model (LLM), can exploit zero-day vulnerabilities independently, according to a study reported by TechSpot. The study by University of Illinois Urbana-Champaign researchers has shown that LLMs, including GPT-4, can execute attacks on systems by utilizing undisclosed vulnerabilities, known as zero-day flaws. As part of the ChatGPT Plus service, GPT-4 has demonstrated significant advancements over its predecessors in terms of security penetration without human intervention. The study involved testing LLMs against a set of 15 “high to critically severe” vulnerabilities from various domains, such as web services and Python packages, which had no existing patches at the time. GPT-4 displayed startling effectiveness by successfully exploiting 87 percent of these vulnerabilities, compared to a zero percent success rate by earlier models like GPT-3.5. The findings suggest that GPT-4 can autonomously identify and exploit vulnerabilities that traditional open-source vulnerability scanners often miss. Why this is concerning: The implications of such capabilities are significant, with the potential to democratize the tools of cybercrime, making them accessible to less skilled individuals known as “script kiddies.” UIUC Assistant Professor Daniel Kang emphasized the risks posed by such powerful LLMs, which could lead to increased cyber attacks if detailed vulnerability reports remain accessible. Kang advocates for limiting detailed disclosures of vulnerabilities and suggests more proactive security measures such as regular updates. However, his study also noted the limited effectiveness of withholding information as a defense strategy. Kang emphasized that robust security approaches are needed to address the challenges introduced by advanced AI technologies like GPT-4.
GPT-4 can exploit zero-day security vulnerabilities
https://news.gictafrica.com