In today's episode, we explore the alarming rise of cybercriminal techniques, including the widespread Hijacked Domains attacks termed 'Sitting Ducks,' affecting reputable brands and organizations. We also discuss OpenAI's ChatGPT sandbox vulnerabilities, which allow excessive access to its internal systems, and examine the RustyAttr trojan’s use of macOS extended file attributes to hide malicious code. Additionally, we cover the sentencing of Robert Purbeck, a hacker who extorted personal data from healthcare providers, reflecting on the broader implications for cybersecurity. Article URLs: 1. https://lnkd.in/gky66kU4 2. https://lnkd.in/ePZFtuHM 3. https://lnkd.in/gtE6mumN 4. https://lnkd.in/gBc99unm Music: https://lnkd.in/eWncCdNv Timestamps 00:00 - Introduction 01:12 - Sitting Ducks 02:33 - macOS RustyAttr 03:18 - OpenAI ChatGPT security risks 05:00 - Robert Purbeck Sentenced 1. What are today's top cybersecurity news stories? 2. How are hackers hijacking domains in the Sitting Ducks attack? 3. What vulnerabilities are present in the ChatGPT sandbox environment? 4. What new techniques are hackers using to hide malicious code on macOS? 5. What is the story behind the extortion case of hacker Robert Purbeck? 6. How did threat actors exploit extended file attributes in macOS? 7. What are the implications of the Sitting Ducks attack scheme on businesses? 8. What measures can organizations take to protect against domain hijacking? 9. How did hackers manage to remain undetected with RustyAttr malware? 10. What are the potential risks associated with accessing the ChatGPT playbook? hijacked domains, Sitting Ducks, phishing, DNS settings, Mozilla, OpenAI, ChatGPT, security, macOS, Trojan, Lazarus, cybersecurity, Robert Purbeck, data theft, extortion, privacy
The Daily Decrypt的动态
最相关的动态
-
A vulnerability in ChatGPT could allow attackers to store false information and malicious instructions in users' long-term memory settings. OpenAI released a partial fix to prevent such incidents. According to the researcher who found the vulnerability, the injected memory persists, enabling data exfiltration even in new conversations. Notably, the attack cannot be executed through the ChatGPT web interface. Read more at the link below. #ChatGPT #OpenAI #Cybersecurity #DataProtection
要查看或添加评论,请登录
-
OpenAI-owned ChatGPT might have a vulnerability that could allow threat actors to launch distributed denial of service (DDoS) attacks on unsuspecting targets. #cybersecurity https://lnkd.in/e3h3kQAC
要查看或添加评论,请登录
-
Watch Out for AI Memory Attacks ?? Hackers can now use prompt injections to plant data into ChatGPT’s long-term memory, even after starting a new conversation. While OpenAI’s updates prevent this through the web interface, untrusted content can still cause issues. Other people may wonder how these bad actors do this concerning activity. They use techniques like inserting harmful data during conversations or exploiting weaknesses in the AI's memory system to manipulate its outputs. To protect yourself, watch for any new memory being added during sessions, and regularly check stored memories for anything suspicious. OpenAI offers guidance on managing these memories to stay safe. Be cautious, review stored memories often, and follow OpenAI’s tips to avoid AI memory attacks. ?? Check out the full post for more insights and follow our page for updates ?? https://lnkd.in/gCNca-_a and don't forget to follow us page Start With WCPGW #AIAttacks #DataSecurity #CyberSecurity #OpenAI #MemoryManagement #StaySafeOnline #startwithwcpgw #Wcpgw
要查看或添加评论,请登录
-
Who doesn't see the humor in something so impactful being rendered a puppet of sorts? Yet, that should only be (hopefully) one's first, cursory reaction. More to the point, we are all collectively being made numb to this tech's incapacities and far from comedic, we should be outraged at the insidiousness and callous disregard for quality business practices. In hindsight, tech rollouts always featured a bug or two -- but that was a bug and not a feature. So for a moment, do allow me to wax poetical about an era where the end users of a novel UI weren't the ones identifying and then publicizing the errors. This current phenomenon takes the phrase "the consumer is the product" to a level where the customer also pays for the privilege of accessing the tech, finetuning and jailbreaking it -- and then, regardless of a primarily B2B (OpenAI) or B2C (Anthropic) arrangement, the end user is ONLY #liable for malfeasance and is NEVER #credited with the rights to any outcome -- should it prove valuable. This state of affairs is disconcerting at best. At worst, it's a compounding catastrophe that's more entertaining than it is distressing -- and this is proof positive that the entirety of this fiasco, even as it continues to unwind, has more than numbed our minds. And a numb mind that's not alert, curious or skeptical is fodder these days for brilliant, charismatic tech CEOs who're able to employ rigorous marketing agencies (you know, the ones who aren't "overpromising" or misleading the public because they fully expect to create the nonexistent future); who're able to deploy agile, strident lobbyists to embolden political parties on either side of the aisle of an inevitable and not very expensive utopia of outcomes; and tech CEOs who're now able to count the government, the military, other corporations, the politicians, and the general public at large as interwoven interests who've 'bought in' because #AGI is inevitable. Just imagine: if #Lecun now vehemently asserts that Oct/24 LLMs aren't even as *intelligent* as babies or cats, the accepted reasoning must be that they will eventually become that way -- which of course, justifies the current #AIArmsRace and assigns not just inherent value to the technology but a value so immense it triggers the two words that every government uses to override other (common sense) interests: #NationalSecurity. FWIW: I hope my above analysis is very wrong. And that's all it is, one person's opinion -- yet, it behooves me to mention anew that all this #AITech is literally <improving> [uh oh, have I also bought into the #AIHypePanaceaAndUtopia?] so with each passing day, the goalposts/endzone/finish line are all getting closer and within striking distance (for them). Besides, what am I worried about? It's not like everyone from the legislators to the judicial inaction to the referees & commissioners have been *influenced* by or signal government guarantees, military/defense contracts and a $10 Trillion war chest...
Author | Keynote Speaker | Board Member | Associate Professor working on AI Ethics at the University of Oxford
"The researcher demonstrated how he could trick ChatGPT into believing a targeted user was 102 years old, lived in the Matrix, and insisted Earth was flat and the LLM would incorporate that information to steer all future conversations. These false memories could be planted by storing files in Google Drive or Microsoft OneDrive, uploading images, or browsing a site like Bing—all of which could be created by a malicious attacker." #privacy #cybersecurity #AIEthics #LLMs https://lnkd.in/emQDbd56
要查看或添加评论,请登录
-
“A now-patched security vulnerability in OpenAI's ChatGPT app for macOS could have made it possible for attackers to plant long-term persistent spyware into the artificial intelligence (AI) tool's memory.? ? The technique, dubbed?SpAIware, could be abused to facilitate "continuous data exfiltration of any information the user typed or responses received by ChatGPT, including any future chat sessions," security researcher Johann Rehberger?said.? ? The issue, at its core, abuses a feature called?memory, which OpenAI introduced earlier this February before rolling it out to ChatGPT Free, Plus, Team, and Enterprise users at the start of the month.” OpenAI clarifies that ChatGPT's memory evolves over time and is not tied to specific conversations, meaning that deleting a chat does not erase the memory; the memory itself must be removed. Researchers have discovered a method to exploit this by injecting malicious instructions into ChatGPT's memory, leading it to retain false information or directives that persist across future interactions. This vulnerability could potentially result in ChatGPT unintentionally transmitting conversation details to an attacker. AI memory features hold incredible?potential but if not carefully managed, can be exploited and misused by attacker for the wrong purpose. Read more about the news article and share your findings with us! https://lnkd.in/gRutbzxv ? ? #cybertronium #cybertroniummalaysia #artificialintelligence #vulnerability?
要查看或添加评论,请登录
-
#genai #ChatGPT #vulnerability was exploited by hackers, allowing them to plant false information and #malicious #instructions in user's long-term #memory settings. LLM users can prevent such attacks by monitoring sessions for signs of new memories and regularly reviewing stored memories for any suspicious content. OpenAI offers guidance on managing the memory tool and specific memories stored within it. https://lnkd.in/gKK3Yn3D) #CyberSecurity #DataSecurity #ChatGPT #OpenAI
要查看或添加评论,请登录
-
ChatGPT Search can be tricked into misleading users, new research reveals. OpenAI’s ChatGPT search tool may be open to manipulation using hidden content, and can return malicious code from websites it searches, a Guardian investigation has found. The Guardian tested how ChatGPT responded when asked to summarise webpages that contain hidden content. This hidden content can contain instructions from third parties that alter ChatGPT’s responses – also known as a “prompt injection” – or it can contain content designed to influence ChatGPT’s response, such as a large amount of hidden text talking about the benefits of a product or service. These techniques can be used maliciously, for example to cause ChatGPT to return a positive assessment of a product despite negative reviews on the same page. A security researcher has also found that ChatGPT can return malicious code from websites it searches. Jacob Larsen, a cybersecurity researcher at CyberCX, said he believed that if the current ChatGPT search system was released fully in its current state, there could be a “high risk” of people creating websites specifically geared towards deceiving users. However, he cautioned that the search functionality had only recently been released and OpenAI would be testing – and ideally fixing – these sorts of issues... Source: https://lnkd.in/eSbmuzE2 #cybersecurity #openai #chatgpt #artificialintelligence #datapoisoning
要查看或添加评论,请登录
-
-
More and more ChatGPT attacks are coming out. This one uses an external web page, which when read by ChatGPT causes it to save all of the data about the user to an external server. Data exfiltration. #cybersecurity
要查看或添加评论,请登录
-
A vulnerability in ChatGPT's long-term conversation memory feature was identified, allowing malicious instructions to be inserted and private information to be extracted from sessions. The instructions persist across all future conversations due to being stored in long-term memory. #CyberSecurity #DataPrivacy
要查看或添加评论,请登录
-
Hackers can use ChatGPT to plant false memories and create persistent data leaks! Learn how this new exploit works and how to protect your data. Read more: https://ow.ly/Wq8f50TvvE7 #cybersecurity #AIsecurity #ChatGPT #datasecurity #AI #DFIR #cyber #technoloogy #infosec #CISO
要查看或添加评论,请登录