Microsoft warns of Azure shared key abuse, Attackers hide stealer behind AI Facebook ads, OpenAI to launch bug bounty program
Microsoft warns of Azure shared key authorization abuse
Researchers are warning that abuse of Azure shared key authorization could allow full access to accounts and data, privilege escalation, lateral movement across the network, and remote code execution (RCE). Shared keys are enabled in Azure infrastructure by default and, compared to Azure Active Directory (Azure AD) authentication, they provide inferior security: whoever possesses a key can act with its full authorization. Microsoft recommends disabling shared key authorization in Azure, or at minimum applying least privilege and monitoring key access, to mitigate the risk.
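To illustrate why mere possession of the key grants full access, here is a simplified Python sketch of the signing step behind a SharedKey Authorization header. The real canonicalized string Azure constructs covers the HTTP verb, many standard headers, all x-ms-* headers, and the resource path; the account name, key, and string-to-sign below are illustrative placeholders.

```python
import base64
import hashlib
import hmac

def sign_request(account: str, key_b64: str, string_to_sign: str) -> str:
    """Compute a SharedKey-style Authorization header value.

    Simplified illustration: the real Azure canonicalized string
    includes the verb, standard headers, x-ms-* headers, and the
    canonicalized resource path.
    """
    key = base64.b64decode(key_b64)  # storage account keys are base64-encoded
    digest = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    signature = base64.b64encode(digest).decode("utf-8")
    return f"SharedKey {account}:{signature}"

# Anyone holding the key can sign *any* request for the whole account --
# there is no per-user identity or scoping, unlike an Azure AD token.
demo_key = base64.b64encode(b"leaked-account-key").decode("utf-8")
header = sign_request("examplestorage", demo_key, "GET\n...\n/examplestorage/secrets")
print(header)
```

Because the signature binds only the key and the request, revoking access means rotating or disabling the key itself; this is why the recommended mitigation is turning shared key authorization off entirely rather than trying to scope it.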
(SecurityWeek and Dark Reading)
Attackers hide stealer behind AI chatbot Facebook ads
Researchers have spotted cybercriminals posting fake ads on hijacked Facebook business and community pages, promising free downloads of AI chatbots such as ChatGPT and Google Bard. Instead, users download the well-known RedLine info-stealer. RedLine Stealer is malware-as-a-service (MaaS) that targets browsers to collect user data including credentials, payment-card details, and system information. RedLine can also upload and download files and execute commands. The malware is a popular choice for hackers due to its versatility and the fact that it costs only around $100 to $150 on the dark web.
(Dark Reading)
OpenAI to launch bug bounty program
On Tuesday, ChatGPT maker OpenAI announced the launch of a new bug bounty program. The program will pay registered security researchers for uncovering vulnerabilities in OpenAI's Application Programming Interface (API) and ChatGPT. Bug bounty payouts will range from $200 for low-severity security flaws up to $20,000 for exceptional discoveries. OpenAI clarified that model issues, jailbreaks, and bypasses are out of scope unless there is an associated security issue.
FBI warns consumers of phone “juice jacking” from public charging stations
The FBI is alerting consumers not to use public charging stations because fraudsters could infect such machines with malware and steal their data. The term “juice jacking” was coined in 2011 after researchers built a charging station to demonstrate the potential for hacking such kiosks. Officials said the alert is a refresher of a similar warning released by the FBI and Federal Communications Commission (FCC) back in 2021. It’s not clear how common “juice jacking” is, but experts warn that the attack could allow hackers to take full control of a victim’s device. The safer alternative is using one’s own USB cord and plugging into an electrical outlet or a portable charger.
(The Guardian)
And now a word from our sponsor, AppOmni
Spyware offered to cyberattackers via Python repository
Researchers have discovered threat actors advertising an info-stealer on the Python Package Index (PyPI), the official public Python repository. Researchers say the perpetrators are a Spanish malware-as-a-service (MaaS) gang called SylexSquad, who conspicuously named their program “reverse-shell.” Reverse shells are commonly used by hackers to remotely harvest data from targeted computers. Researchers speculate that the hackers’ motives for hosting their malware in a public code repository could range from gaining notoriety to having greater control over distributing their malware. The discovery also serves as a reminder to organizations to use caution when pulling code from public repos like PyPI.
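One concrete precaution against tampered or look-alike packages is pinning dependencies by cryptographic hash, so an artifact that differs from the one you vetted fails to install. The snippet below is a minimal Python sketch of that check (the package bytes and pinned hash are illustrative); in practice, pip performs this verification for you when requirements carry `--hash` entries and you install with `pip install --require-hashes`.

```python
import hashlib

def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    """Return True only if the artifact's SHA-256 matches the pinned value."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

# Illustrative stand-in for a package downloaded from a public repo.
pkg = b"fake package bytes downloaded from a public repo"
pinned = hashlib.sha256(pkg).hexdigest()  # the hash recorded when the package was vetted

assert verify_artifact(pkg, pinned)                 # unmodified artifact passes
assert not verify_artifact(b"tampered bytes", pinned)  # any change is rejected
```

Hash pinning does not vet the package's behavior, but it does guarantee you install exactly the bytes you reviewed, which blunts both repository tampering and typosquatted substitutes.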
(Dark Reading)
You should probably patch that (Patch Tuesday edition)
Yesterday, Microsoft issued a wagon-load of 97 security fixes as part of April 2023 Patch Tuesday. The fixes included updates for seven Critical bugs and one actively exploited zero-day vulnerability. The zero-day bug (CVE-2023-28252) affects the Windows Common Log File System (CLFS) driver and allows for privilege elevation. Notably, Microsoft also fixed a Windows Defender bug that caused high CPU usage for Firefox users.
Meanwhile, Apple also plugged two actively exploited zero-day bugs in iOS and macOS. The first (CVE-2023-28206) is an IOSurfaceAccelerator out-of-bounds write issue, potentially enabling an app to execute arbitrary code with kernel privileges. The second (CVE-2023-28205) is a WebKit use-after-free flaw that allows data corruption or arbitrary code execution when reusing freed memory.
Additionally, Adobe rolled out security fixes for at least 56 vulnerabilities in a wide range of products, some serious enough to expose Windows and macOS users to code execution attacks.
Finally, Google, Cisco, Fortinet, and SAP joined in on the patching fun, each releasing security updates for multiple products.
(Bleeping Computer, Infosecurity Magazine, SecurityWeek, and PCWorld)
iPhone victims hacked with rogue calendar invites
On Tuesday, researchers at Microsoft and Citizen Lab issued reports revealing that, in early 2021, hackers embedded spyware from a little-known company called QuaDream into malicious calendar invites to hack the iPhones of journalists, political figures, and NGO workers. QuaDream’s spyware can record phone calls and audio, take pictures, steal files, track user location, and delete forensic traces of its own existence. The exploit used a then-unpatched zero-day vulnerability in Apple iOS 14. A spokesperson from Apple said there’s no evidence the exploit was used after March 2021, when the company fixed the bug.
(TechCrunch)
Reddit moderators brace for ChatGPT spam apocalypse
Reddit’s ChatGPT-powered bot problem is “pretty bad” right now, according to a Reddit moderator with knowledge of the platform’s moderation systems. Hundreds of accounts have already been removed from the forum but more are being discovered daily. Most removals have to be done manually because Reddit’s automated systems struggle with AI-created content. Botnets have been spotted posting prolifically to the “ask” subs. However, the AskPhilosophy moderator said, “ChatGPT has a style that’s fairly easy to identify, but the real test is the quality, and it appears that ChatGPT is very bad at philosophy.” Despite current AI quality issues, most subs are bracing themselves for large language models like GPT-4 getting better at crafting human-sounding content.
(VICE)