Merlin Labs Memo -- Week of May 1-5

[Image: a digital background of 1s and 0s, a judge's gavel, and a robotic figure]

FTC Eyeing Regulations for Generative AI

The Federal Trade Commission (FTC) is focused on tracking the use of AI tools for possible rule violations that may involve “deception, discrimination, excessive manipulation, or unfairness,” according to a May 1 blog post. Generative AI can create novel content in the form of videos, images, text, and music, to name a few common examples. As these AI technologies rapidly evolve, AI-generated marketing and advertising materials are exploding, leaving regulatory agencies such as the FTC working in overdrive to keep up. The FTC takes a cautionary tone, and rightfully so, suggesting that organizations “beef up training of staff and contractors around risks concerning foreseeable downstream uses of AI tools and to monitor and address impacts of AI tools that are launched.” -- Via Compliance Week

Our Take: If you're wondering why this topic is being covered in a cybersecurity newsletter, I’ll jump right into that. AI is here to stay, it impacts everything IT-related, and regulations aren’t keeping pace with its capabilities. The same AI behaviors the FTC is monitoring for deceptive advertising are also being used for adversarial cybersecurity purposes, including the deceptions and manipulations behind successful phishing and social engineering attacks. Spoofing an end user into revealing sensitive information, or into taking an otherwise ill-advised action, becomes far more prevalent and effective when that spoofing is orchestrated by AI capable of generating content that appears perfectly “human,” legitimate, and trustworthy.

As the FTC and other regulatory bodies ramp up their oversight and governance of AI-generated content, the cybersecurity community can expect those regulations to encompass the use of AI in cybersecurity as well. Future regulations will likely govern the use of AI by the “good guys” as well as the bad. To date, the use of AI, whether by cybersecurity professionals delivering proactive protections or by adversaries pursuing malicious ends, is largely unregulated and unmonitored, a sort of “new frontier.” Expect that to change. Regulations will evolve, and so will the technologies that can identify and monitor AI-generated content.

Between now and then, the lack of regulations and AI-savvy monitoring instrumentation isn’t an excuse to ignore the ethical implications of using AI. Everyone building AI into their software applications and network operations should incorporate their own risk management efforts specific to the “appropriate” use of AI. And to anyone building cybersecurity solutions to identify and mitigate AI-generated adversarial tactics, solutions that might find a dual use with organizations such as the FTC: keep up the good fight! -- Sarah Hensley, MS-SLP

[Image: the words 'data breach' surrounded by multiple locks, one red to show it has been compromised]

After the Breach: What Data Was Impacted?

Attackers frequently want data. Ransomware attackers will encrypt it, yes, but they also like to steal it first. There is monetary value in data, and that’s why it’s the big prize in cyberattacks. But even when a breach isn’t followed by a ransomware detonation, how do we determine what data was compromised, short of waiting for it to be posted to the Dark Web with a price tag on it?

The forensics after a breach differ from threat hunting in that we’re not looking for indicators of compromise. It’s a matter of reading logs to see how data was accessed, and possibly altered. It’s also a matter of not having blind spots about where your data is stored or what assets are on your network. Visibility must extend beyond a simple inventory of PCs in the offices. While many visibility projects encounter resistance when it comes to the servers and data centers, that’s exactly where they need to go.
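To make that log review concrete, here is a minimal Python sketch of a first pass: it scans a file-access audit log for entries inside a suspected breach window and reports which files were touched and by whom. The log file name, its columns (timestamp, path, account), and the timestamp format are assumptions for illustration, not any particular product's schema; adapt them to whatever your logging pipeline actually emits.

    import csv
    from datetime import datetime

    # Hypothetical breach window established by the incident team.
    BREACH_START = datetime(2023, 4, 28, 0, 0)
    BREACH_END = datetime(2023, 5, 2, 0, 0)

    touched = {}  # file path -> set of accounts that accessed it

    # "file_access_audit.csv" and its column names are illustrative assumptions.
    with open("file_access_audit.csv", newline="") as f:
        for row in csv.DictReader(f):
            ts = datetime.strptime(row["timestamp"], "%Y-%m-%d %H:%M:%S")
            if BREACH_START <= ts <= BREACH_END:
                touched.setdefault(row["path"], set()).add(row["account"])

    # First-pass scoping report: what was accessed during the window, and by whom.
    for path, accounts in sorted(touched.items()):
        print(f"{path}: accessed by {', '.join(sorted(accounts))}")

A real investigation would correlate this against baselines of normal access, but even a crude report like this narrows the scope of "what did they get?" quickly.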

Our Take: If the data center is not in scope for visibility and tracking tools, get it in scope! That includes cloud storage and backup data centers. Beyond that, logging needs to be configured to track what data is going where, so that when there’s a breach, we’ll know what has been touched. It’s also important in the event of a breach to preserve the integrity of the evidence. In other words, if damaged data is overwritten with a good copy, we lose the record of what the attackers got into and can’t track their movements any further.
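One way to act on that "preserve first, restore second" principle is to snapshot the affected files and record their hashes before any cleanup, so the compromised state stays available to forensics. Below is a minimal Python sketch under assumed paths; a real response would image entire volumes with dedicated forensic tooling rather than copy files one by one.

    import hashlib
    import shutil
    from pathlib import Path

    # Both paths are assumptions for this sketch.
    AFFECTED = Path("/data/affected")        # compromised data, pre-restore
    EVIDENCE = Path("/forensics/evidence")   # evidence store, ideally write-once

    EVIDENCE.mkdir(parents=True, exist_ok=True)

    # Copy each affected file into the evidence store and record its SHA-256
    # hash in a manifest, so later tampering or accidental changes are detectable.
    with open(EVIDENCE / "manifest.sha256", "w") as manifest:
        for src in AFFECTED.rglob("*"):
            if src.is_file():
                digest = hashlib.sha256(src.read_bytes()).hexdigest()
                dest = EVIDENCE / src.relative_to(AFFECTED)
                dest.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(src, dest)  # copy2 preserves timestamps/metadata
                manifest.write(f"{digest}  {src}\n")

Only after a snapshot like this exists should clean copies be restored over the damaged data.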

Also consider data in OT controllers. Attackers looking to manipulate physical supply chains through altered data from the field particularly enjoy tampering with those controllers. -- Dean Webb

[Image: the letters 'AI' in digital-style crosshairs]

White House ‘Very Active’ on AI Regulation Front

A top federal technology official said on May 1 that the Biden administration is “very active” in the realm of artificial intelligence (AI) regulation, and that we should expect to continue to see work coming from the White House in this area.

“This is a very active area. It’s taking a lot of my time and I’m working closely with a lot of my senior colleagues,” Office of Science and Technology Policy (OSTP) Director Arati Prabhakar said during the Milken Institute Global Conference on Monday. “It’s an intense area of focus, and you’ll continue to see work that we’ve done.”

“Artificial intelligence is a big, broad topic,” Prabhakar said. “It’s already in our lives today.”

“It’s burst into the public consciousness in a really powerful way, and because of its breadth and the pace at which it’s moving, I think it’s easy to see that it’s the most powerful technology of our time,” she said. “What we know about powerful technologies through all of human history is that they will be used for good, and they will be used for ill. The job for all of us is to make sure we manage that transition and make sure that it comes out in a way that advances our future.”

“Our North Star in all the work that we’re doing across the government dealing with this amazing new technology is to understand that to seize its benefits we have to start by managing its risks,” Prabhakar said. -- Via MeriTalk

Our Take: A few weeks back, I included an item in this newsletter about government efforts to regulate AI and the activities that were in progress, promising to come back with updates. There has been quite a lot of news on the subject in the last two weeks, summarized by this article. In fact, government efforts started a while ago.

In January, NIST published the Roadmap for the NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0), the Artificial Intelligence Risk Management Framework (AI RMF 1.0) itself, and the NIST AI RMF Playbook. In addition, this week the Biden administration announced new AI funding and policy initiatives, and hosted a meeting between executives from Alphabet, Anthropic, Microsoft, and OpenAI and Vice President Harris and senior administration officials to discuss the threats posed by rapidly advancing AI technologies and how the public and private sectors can work in tandem to mitigate risks. Back in Oct. 2022, the White House unveiled a blueprint dubbed the AI Bill of Rights to pave the way for best practices. We even had a bill introduced in Congress last week, the “Block Nuclear Launch by Autonomous Artificial Intelligence Act of 2023,” which aims to prevent the launch of any nuclear weapon by an automated system without “meaningful human control.”

This flurry of activity is how things get accomplished in government, but how effective will it be? There are many different perspectives. The White House is focused on civil rights and equity, to ensure people aren't negatively impacted (e.g., losing jobs to AI). Others in Congress seem focused on preventing the doomsday scenario. These are all valid perspectives, and it is important that guardrails are put in place on the use of AI. The good news is that the government is paying attention.

I'm concerned that much of this activity will be ineffective, at least initially: if these frameworks are too complex or too vague to be practical, they will not be used. Still, this is a good start, and incremental progress will be made as our understanding of these technologies' impacts grows. The innovation is coming rapidly from the private sector. While government technologists understand the technology, the more difficult task will be helping non-technical elected officials understand its impact; that understanding is the basis for creating prudent government policies. Additional government funding for research labs will deepen the government's understanding of the technology and the possibilities for innovation. We must also define the unethical uses that must be regulated. It is therefore important that these frameworks are clearly articulated and understood so they can be implemented effectively. -- Joe DiMarcantonio, PMP

Readers of our Newsletter: What’s working, what’s not, and what’s on your mind? Leave a comment below or email [email protected]. Thank you!
