March Cybersecurity Roundup: Addressing Emerging Threats and Solutions

March Cybersecurity Roundup: Addressing Emerging Threats and Solutions

GitHub has malicious repositories problem

Recently there has been an infestation of malicious git repositories that are forked from legitimate ones but are altered to deliver also a heavily obfuscated malicious code. Github has automation that can prevent most of that activity, but given the scale of the attack, even a small percentage missed would result in a great number of forked malicious repos, that bear the name of the legitimate one. This is a form of supply chain attack, bordering with social engineering, presenting the malicious code as more trusted than the legitimate one. Probable solution is to pay attention to the date the repository was created, the organization/maintainer of it, and how many stars it has on GitHub.


There is a remedy for cloud security fatigue - runtime insights

The problem of cloud security is tackled from two different perspectives: shift-left and shift-right. The shift-left approach targets the SDLC and increases the security of the software product while in development and testing. The shift-right approach consists of security monitoring and operations and also detection and response.

SysDig has written a whitepaper showing that runtime insights serve as the connective tissue the shift-left and shift-right tactics and have also shown some key notes to deal with the vast number of vulnerabilities that are normally on the desk of the cloud security engineer.


Firewall for AI applications developed by Cloudflare

Enterprise customers of Cloudflare can now benefit from AI Firewall. It seems to be a product, loosely based on a Web Application Firewall (WAF) but specifically altered to fit the use case of the Large Language Models (LLMs). It includes some standard properties like DoS protection identification of leaking sensitive data (PII, SSNs, proprietary code, etc) or some specific AI like preventing model abuse via prompt injections and also applying prompt and response validation using the boundaries defined by the model creator and inspecting every API call.


Tesla vehicles vulnerable to easy MiTM attack

Using cheap hardware and an Android phone a malicious actor can take over a Tesla vehicle using a simple Evil Twin wi-fi approach. Creating a WiFi hotspot with name, common in the Telsa charging station he can trick the Tesla owner to connect to it using their credentials and to steal them and the OTP code, login into their account, and add a new phone to the owner's account. There are a couple of significant security threats on Tesla's side here - the owner does not receive notification for the new phone added and the phone does not need to be in the car or the car does not need to be unlocked. The recommended solution remains the same: do not trust random WiFi hotspots, even if they appear to be from trusted brands. Instead, opt for using RFID keys to unlock the vehicle


Three authentication security bugs found in ChatGPT plugins

Misconfiguration in the implementation of the OAuth standard, used by OpenAI, opens the way for attackers being able to gain unauthorized access to victim’s accounts or to connect them to malicious plugins that can be exploited. With the common trend of corporate users uploading proprietary data and intellectual property into the chatbot this is getting much and much bigger issue. Cybersecurity company Salt Labs demonstrated that using these techniques they can gain access to user’s Github account and the private repositories they have access to.

Even though these vulnerabilities were very quickly patched the evolving landscape of AI most likely would open many more.


Encryption does not prevent spying on conversations with AI chatbots

Offensive AI Research Lab in Israel has discovered that even though communication between most chatbots (ChatGPT and Bard) is TLS encrypted the chatbot receives the tokens one by one on the fly and by their size they can be inferred by the so-called “token-length sequence” side channel attack. The usage of specially trained LLMs can speed up the process. This attack has around 30% perfect accuracy and around 55% high accuracy.


Apple silicon have built in hardware flaw, that fetches encryption keys

Another side-channel attack enables the theft of secrets originating from the architecture of the M1 and M2 silicon. The processor's data memory-dependent prefetcher (DMP) is trying to predict what will be needed in the near future and fetches it from memory and loads it into the CPU cache.

A group of academic researchers from the US created a proof of concept app, named GoFetch, that is able to retrieve a 2048-bit RSA key in less than an hour and a little over two hours to extract a 2048-bit Diffie-Hellman key.

Unfortunately, the mitigation of this attack will lead to a big hit in the performance of M1 and M2 Apple chips.


要查看或添加评论,请登录

社区洞察