Merlin Labs Memo -- Week of April 10-14
Merlin Cyber
Merlin is your trusted source for best-in-class, innovative, and emerging cyber solutions for the U.S. public sector.
Fake Evidence of Ransomware Attack Leads to Real Royal Ransomware Attack
The Royal ransomware group, an offshoot of the former Conti group that emerged from Conti’s disbandment in 2022 and offers Ransomware-as-a-Service (RaaS), “appears to have targeted more than 1,000 organizations with a social engineering attack designed to trick victims” into installing the ransomware. The scheme works by sending victims false notifications that they’ve been breached by a well-known ransomware group (the Midnight Group) and tricking them into opening a file that supposedly contains the details of the breach. In fact, the file is a malware loader that launches the Royal ransomware. Victims are further pressured to open the malware-laced file by threats that the “stolen” data will be posted to the dark web if payment is not received, creating the sense of urgency that is common in phishing attacks. According to Data Breach Today, “The ploy – scaring victims into thinking their systems have been locked by ransomware and then manipulating them into installing the actual ransomware – is a variation of a gambit known as BazarCall, a ‘callback phishing’ tactic pioneered by Conti.” -- Via Data Breach Today
Our Take: In an “adding insult to injury” sort of way, this ransomware attack turns our heightened fear of, and rapid response to, ransomware attacks against us. And what better way to install malware than having us do it for them? Multiple sources have reported that as many as 90% of data breaches begin with phishing. The Royal ransomware group in particular has been known to target critical infrastructure sectors including manufacturing, communications, healthcare, and education, making the stakes even higher. What’s the answer? In short, the best defense is a good offense. First, I can’t stress enough the importance of frequent, ongoing security awareness training so that users are not clicking on links and files they shouldn’t be, whether in an email or in a web browser. This won’t stop all phishing breaches, but it will make a big dent. Next, practice good cybersecurity hygiene. By equipping a network with the right cybersecurity tools deployed in a zero-trust manner and staying up to date on patches and upgrades, another chunk is taken out of the attackers’ opportunity landscape. Phishing-resistant multi-factor authentication (MFA) and anti-phishing tools for browsers, email, and APIs are available and should be considered core elements of a cybersecurity toolbox. Finally, the most important protection against ransomware (whether delivered through phishing or another attack vector) is a robust backup and recovery capability combined with end-to-end data encryption. This kind of secure, high-availability business continuity approach to system data is the only way to fully mitigate the risk of data loss. There is no assurance that a paid ransom will result in the full or usable recovery of stolen data – so protect your business and data by assuming you will need a Plan B; a sketch of one small piece of that Plan B follows below. -- Sarah Hensley
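For teams that want to make that Plan B testable, here is a minimal Python sketch of one piece of it: verifying backup integrity against a stored hash manifest. The directory layout, file names, and manifest format are all assumptions for illustration, not a recommendation of any particular backup product.

```python
import hashlib
import json
from pathlib import Path

# Hypothetical locations -- adjust to your own backup layout.
BACKUP_DIR = Path("/backups/nightly")
# Assumed manifest format: {"relative/path/to/file": "sha256-hex-digest", ...}
MANIFEST = BACKUP_DIR / "manifest.json"

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large backups don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_backups() -> bool:
    """Compare every file listed in the manifest against its recorded hash."""
    expected = json.loads(MANIFEST.read_text())
    ok = True
    for rel_path, recorded_hash in expected.items():
        file_path = BACKUP_DIR / rel_path
        if not file_path.exists():
            print(f"MISSING: {rel_path}")
            ok = False
        elif sha256_of(file_path) != recorded_hash:
            print(f"CORRUPT: {rel_path}")
            ok = False
    return ok

if __name__ == "__main__":
    print("Backups verified." if verify_backups() else "Backup verification FAILED.")
```

A check like this only confirms that backup files are intact; a full business continuity plan also needs regular restore drills to prove the data is actually usable.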
Additional Reading:
- Warning issued over Royal ransomware (Malwarebytes)
- #StopRansomware: Royal Ransomware (CISA)
- BazarCall attack increasingly used by ransomware threat actors (TechRepublic)
- Callback phishing attacks evolve their social engineering tactics (BleepingComputer)
Zombies and Shadows: API Security Issues
Some definitions: A zombie API is one whose original purpose has been abandoned or forgotten. It’s still there, waiting for input, but the only callers likely to send it a message now are malicious ones. A zombie may have been considered secure when it was deployed, but it can be subject to vulnerabilities discovered afterward – vulnerabilities that attackers will certainly leverage when they find the zombie API still ready to respond. A shadow API, by contrast, is still in use, but it’s undocumented and likely evading controls so that someone can do a job easily, without security constraints. That means data may be exposed, authentication may be granted, and OWASP-catalogued attacks may succeed – all outside the organization’s security controls.
So how do we fight the zombies in the shadows? And what can we do when our own organization practices good software hygiene but relies on third-party applications we’re less certain about?
Our Take: Internally, the controls are logging and code scanning, followed by flagging APIs that don’t belong in the application. For both in-house and third-party applications, tracking network traffic is key to picking up on APIs that shouldn’t be exposed. Port scans will find many listeners that shouldn’t be there. More telling, however, is the NetFlow or packet capture that shows traffic that shouldn’t be there, particularly traffic that traverses the firewall. An application-aware firewall coupled with NetFlow monitoring can go a long way toward finding problems and where they live in the network.
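As a toy illustration of that kind of review (a complement to, not a substitute for, an application-aware firewall), the sketch below compares request paths seen in a web server access log against the endpoints documented in an OpenAPI spec and flags anything undocumented. The file names and the combined-log format are assumptions for the example.

```python
import json
import re
from pathlib import Path

# Assumed inputs: an OpenAPI 3.x spec in JSON and a combined-format access log.
SPEC_FILE = Path("openapi.json")
ACCESS_LOG = Path("access.log")

# Pull the request path out of a combined-log-format line, e.g.
# 10.0.0.5 - - [10/Apr/2023:12:00:00 +0000] "GET /api/v1/users HTTP/1.1" 200 512
REQUEST_RE = re.compile(r'"(?:GET|POST|PUT|PATCH|DELETE)\s+(\S+)')

def documented_paths(spec_file: Path) -> set[str]:
    """Endpoints the organization knows about, per the spec's 'paths' object."""
    spec = json.loads(spec_file.read_text())
    return set(spec.get("paths", {}))

def observed_paths(log_file: Path) -> set[str]:
    """Endpoints actually being called, per the access log."""
    paths = set()
    for line in log_file.read_text().splitlines():
        match = REQUEST_RE.search(line)
        if match:
            paths.add(match.group(1).split("?")[0])  # drop query strings
    return paths

if __name__ == "__main__":
    for path in sorted(observed_paths(ACCESS_LOG) - documented_paths(SPEC_FILE)):
        print(f"Possible shadow API endpoint: {path}")

# Caveat: real specs use templated parameters such as /users/{id}, which won't
# match literal logged paths; production tools normalize both sides first.
```

Anything the script prints is a conversation starter rather than proof of a shadow API, but it shows how little tooling is needed to start shrinking the unknowns.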
Beyond detection, there has to be a culture that puts security before performance. If there isn’t a culture that places security at the top, these shadows and zombies remain in place: shadows because developers know they get the job done, and zombies out of fear that turning them off will break something, like a special year-end process or something similarly infrequent. No matter! These things need to be tracked and either phased out or have security ringfences placed around them. To be sure, many organizations have had employees with significant institutional knowledge leave for other opportunities, and an organization is most at risk from zombies and shadows when the person who knows about them walks out the door. To combat that, documentation must also be a security priority, so that someone can look up exactly why an API was put into place and then determine whether or not it is still needed; a sketch of such a record follows below. -- Dean Webb
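As one lightweight way to make that documentation habit concrete, each API could carry an inventory record along the lines of the hypothetical sketch below, so a reviewer can see at a glance why an endpoint exists and whether it is still needed. The fields and the example endpoint are invented for illustration.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ApiInventoryRecord:
    """Minimal metadata so nobody has to guess why an endpoint exists."""
    endpoint: str
    owner: str                           # team or person accountable for it
    purpose: str                         # why it was put into place
    last_reviewed: date                  # when someone last confirmed it's needed
    retire_after: Optional[date] = None  # planned sunset date, if any

# Example entry for an infrequent year-end process like the one described above.
year_end_export = ApiInventoryRecord(
    endpoint="/internal/year-end-export",  # hypothetical endpoint
    owner="finance-platform-team",
    purpose="Generates the year-end ledger extract; runs once per fiscal year.",
    last_reviewed=date(2023, 4, 10),
)
```

Whether the records live in code, a CMDB, or a spreadsheet matters less than the rule that no endpoint ships, or survives an audit, without one.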
Additional Reading:
- Why Are Zombie APIs and Shadow APIs So Scary? (DarkReading)
- How to Detect and Control the Spread of Shadow APIs (TechTarget)
Security Dies in Silos
We all know that organizations with an ad-hoc approach to security are the most likely to be breached. Having little or no plan for security is a terrible way to run an operation. What we don’t accept as conventional wisdom, however, is that having the groups involved in security operate in silos is the second-worst way to run an operation. Hyperproof.io surveyed more than 1,000 firms about their security structures and breaches in 2022. The findings:
- 61% suffered a breach
- 46% of firms with multiple security silos had a breach
- 36% of firms with integrated security teams and manual tools had a breach
- 30% of companies with integrated and automated security had a breach
Siloed groups mean organizational boundaries around tool ownership and usage, with each tool likely coordinated only with other tools owned by the same group. Siloed groups also mean duplicated effort in both risk management activities and compliance work.
Our Take: While it’s not likely that the org chart gets completely redrawn after reading this article, some preliminary sketches are in order. Finding ways to bring security groups together, even if they report up different management lines, will reduce redundant effort and improve tool integration and automation. This also means bringing compliance into the security huddle, as integration means integrating all the teams. -- Dean Webb
Additional Reading:
- Survey Findings Show Link Between Data Silos and Security Vulnerabilities (DarkReading)
- 2023 IT Compliance Benchmark Report (Hyperproof)
Regulating AI
Cybersecurity and Infrastructure Security Agency (CISA) Director Jen Easterly warned on Thursday that the United States needs to quickly determine the regulatory landscape for development of AI technologies, which she said have the potential to become the most consequential – and perhaps dangerous – technologies of the 21st century.
During a panel discussion hosted by the Atlantic Council, Easterly laid out in stark terms what’s at stake for the U.S. and the world if AI and other technologies are allowed to proliferate without the benefit of government guardrails.
“We are hurtling forward in a way that I think is not the right level of responsibility, implementing AI capabilities in production, without any legal barriers, without any regulation,” she said.
“Frankly, I’m not sure that we are thinking about the downstream safety consequences of how fast this is moving and how bad people like terrorists – I used to be the head of counterterrorism at the White House – or cyber criminals or adversary nation-states can use some of these capabilities not for the amazing things that they can do but for really bad things that can happen – weaponization of cyber, a weaponization of genetic engineering, weaponization of biotech,” she said.
“I have been trying hard to think about how we can implement certain controls around how this technology starts to proliferate in a very accelerated way,” said Easterly. -- Via MeriTalk
Our Take: The calls for regulating AI are growing louder, as they should. Examples abound of the amazing things GPT-4 has done, such as passing the Uniform Bar Exam, the SAT, the GRE, and the US Medical Licensing Exam. Meanwhile, a cybersecurity researcher claims to have used ChatGPT to develop a zero-day exploit that can steal data from a compromised device. Alarmingly, the malware evaded detection by every vendor on VirusTotal.
The term "artificial intelligence" covers a wide range of technologies. There are commercially available AI implementations to help doctors rapidly review clinical data to assist in making a thorough diagnosis in complex cases. The related technology, machine learning, is used to help cybersecurity professionals sift through enormous amounts of data such as network traffic and logs to uncover threats. The technology has been extremely effective in these cases.
However, it is easy to see how it can be exploited for harm. In an interview with ABC News last month, Sam Altman, CEO of OpenAI (the creator of ChatGPT), was asked, “What is the worst possible outcome?” He replied, “There’s a set of very bad outcomes. One thing I’m particularly worried about is that these models could be used for large scale disinformation. I am worried that these systems, now that they’re getting better at writing computer code, could be used for offensive cyberattacks.”
While developers of these technologies may build in guardrails to prevent such exploitation, there is also the possibility that these guardrails are not sufficient, or could be maliciously bypassed. The U.S. Commerce Department on Tuesday said it will spend the next 60 days fielding opinions on the possibility of AI audits, risk assessments and other measures that could ease consumer concerns about these new systems. “There is a heightened level of concern now, given the pace of innovation, that it needs to happen responsibly,” said Assistant Commerce Secretary Alan Davidson, administrator of the National Telecommunications and Information Administration.
The European Union has proposed an Artificial Intelligence Act that will assign three risk categories relating to AI:
- applications and systems that create “unacceptable risk”, such as the government-run social scoring used in China, will be banned,
- applications considered “high-risk”, such as CV-scanning tools that rank job applicants, will be subject to specific legal requirements, and
- all other applications will be largely unregulated.
The legal, ethical, privacy, and moral implications abound. Effective protections are required, but they will be complex. We will continue to monitor this story for future developments. -- Joe DiMarcantonio, PMP
Additional Reading:
- Biden Administration Seeks Input on AI Safety Measures (SecurityWeek)
- The AI Singularity Is Here (InfoWorld)
- Calls to Regulate AI Are Growing Louder. But How Exactly Do You Regulate a Technology Like This? (GCN)
- ChatGPT, the AI Revolution, and the Security, Privacy and Ethical Implications (SecurityWeek)
- A Researcher Used ChatGPT to Create Dangerous Data-Stealing Malware (TechSpot)
Readers of our Newsletter: What’s working, what’s not, and what’s on your mind? Leave a comment below or email [email protected]. Thank you!