AI deepfakes get very real as 2024 election season begins
[Photo: VA/Eugene Russell-Army Veteran/Flickr; Paolo Villanueva/Flickr]


Welcome to AI Decoded, Fast Company’s weekly LinkedIn newsletter that breaks down the most important news in the world of AI. I’m Mark Sullivan, a senior writer at Fast Company covering emerging tech, AI, and tech policy.

This week, I’m taking a look at the recent deepfakes of President Biden and Taylor Swift, and asking who’s to blame and what it all means in an election year. Plus: how the biotech and biology worlds are using AI, and Microsoft’s New Future of Work report.

If a friend or colleague shared this newsletter with you, you can sign up to receive it every week here. And if you have comments on this issue and/or ideas for future ones, drop me a line at [email protected], and follow me on X (formerly Twitter) @thesullivan.


AI DEEPFAKE TECH IS ADVANCING FASTER THAN LEGAL FRAMEWORKS TO CONTROL IT

Over the past two weeks the world got a preview of the kind of damage AI deepfakes are capable of inflicting. Some New Hampshire voters received robocalls featuring an AI-generated Joe Biden telling them not to vote in the state primary election. Just days later, 4chan and Telegram users generated explicit deepfakes of pop star Taylor Swift using a diffusion model-powered image generator; the images quickly spread across the internet. Though details remain scarce in both cases—we don’t yet know who created the fake Biden robocall, nor do we know what tool was used to make the Swift deepfakes—it’s clear we may just be at the beginning of a long and ugly road.

Former Facebook Public Policy director Katie Harbath tells me deepfakes might be an even bigger problem for people outside the celebrity class. AI-generated depictions of people like Biden and Swift get a lot of attention and are quickly debunked, but everyday people—say, someone running for city council, or an unpopular teacher—could be more vulnerable. “I’m especially worried about audio, as there are just less contextual clues to tell if it’s fake or not,” Harbath says.

The deepfakes are particularly troubling because they’re as much a product of the social media age as they are of the AI age. (The Swift images spread like wildfire on X, which struggled to contain any such posts in part because its owner, Elon Musk, decided to gut the platform’s content moderation teams when he bought the company in 2022.)

Social media platforms have little legal incentive to quickly extinguish such content, in large part because Congress has failed to regulate social media. And social platforms benefit from Section 230 of the 1996 Communications Decency Act, which shields “providers of interactive computer services” from liability for user-created content.

The Biden robocalls, on the other hand, underscore the fact that it’s possible to commit such dastardly AI crimes without leaving a lot of bread crumbs behind. Bad actors—domestic or foreign—may be emboldened to circulate even more damaging fake content as we move deeper into election season.

Several deepfake bills have been introduced in Congress, but none have come anywhere near the president’s desk. Last summer, Republicans on the Federal Election Commission blocked a proposal to more explicitly prohibit the deployment of AI-generated depictions. Biden has already assembled a legal task force to quickly address new deepfakes, but AI works at the lightning speed of social networks, not at the slower plod of courts. (If there’s a sliver of hope, it’s that some states, most recently Georgia, are considering classifying deepfakes as a felony.)

Even if the AI tool used to create a deepfake can be detected, it’s questionable whether the people who made the AI tool can be held liable.

A central legal question may be whether Section 230’s protections extend to AI tool makers, says First Amendment lawyer Ari Cohn at the tech policy think tank TechFreedom. Are generative AI companies such as Stability AI and OpenAI shielded from lawsuits related to content users create with image generators or chatbots? Section 230 aims to protect “providers of interactive computer services,” which could easily describe ChatGPT. Some argue that because generative AI tools create novel content, they’re not entitled to immunity under Section 230, while others claim that because the tool simply fulfills a content request, responsibility lies solely with the user.

It remains to be seen how the courts will decide that question, Cohn says. Even more interesting is whether the courts’ position will extend to makers of open-source generative AI tools. Deepfake makers prefer open-source tools because they can easily remove restrictions on what types of content can be produced, and remove watermarks or metadata that might make the content traceable to a tool or a creator.


AI IN BIOLOGY WILL BE USED FOR WAY MORE THAN DRUG DISCOVERY

Though science will find meaningful uses for large language models, it’ll likely be other kinds of models working with very different data sets that do the heavy lifting in solving the world’s big problems.

While LLMs deal in words, scientific problems are often expressed in other terms—numerical vectors defining things like DNA sequences and protein behaviors. Ginkgo Bioworks head of AI Anna Marie Wagner says that humans invented language, so it’s taken a long time for AI to be able to do things with language that humans can’t already do. With new LLMs, we now have a tool that can read 100 documents in five minutes and summarize their similarities and differences.

“Human beings did not invent biology—we are students of it, so AI is already much better at it than humans, and has been for a very long time, at certain types of tasks, like taking in massive amounts of biological data and making sense of it,” Wagner says.

The biology world uses AI in bioinformatics as a way of managing the vast amounts of information scientists collect to understand the behaviors of the most basic building blocks of life—DNA, RNA, and proteins. But unlike the field of natural language, Wagner says, biology is still very early in the process of discovering, and codifying, all the possible ways that various sequences of DNA can manifest (via RNA, then proteins) in the human body, or in the body of a microbe, or in a stalk of corn. Understanding the logic behind each possible step in that process implies a mind-bogglingly large body of information.
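The DNA-to-RNA-to-protein mapping Wagner describes is the so-called central dogma of molecular biology, and its basic mechanics can be sketched in a few lines of code. The snippet below is purely illustrative: the codon table is a tiny hypothetical subset of the real 64-codon genetic code, not a bioinformatics library.

```python
# Illustrative sketch of the DNA -> RNA -> protein mapping.
# CODON_TABLE is a small subset of the real 64-codon genetic code,
# included here only for demonstration.

CODON_TABLE = {
    "AUG": "Met",   # start codon (methionine)
    "UUU": "Phe",   # phenylalanine
    "GGC": "Gly",   # glycine
    "UAA": "Stop",  # stop codon
}

def transcribe(dna: str) -> str:
    """Transcribe a DNA coding strand into messenger RNA (T becomes U)."""
    return dna.upper().replace("T", "U")

def translate(rna: str) -> list[str]:
    """Read the RNA three bases (one codon) at a time until a stop codon."""
    protein = []
    for i in range(0, len(rna) - 2, 3):
        amino = CODON_TABLE.get(rna[i:i + 3], "???")
        if amino == "Stop":
            break
        protein.append(amino)
    return protein

rna = transcribe("ATGTTTGGCTAA")
print(rna)              # AUGUUUGGCUAA
print(translate(rna))   # ['Met', 'Phe', 'Gly']
```

Even this toy version hints at the combinatorial scale Wagner points to: with 4 bases, a sequence of just 30 bases already has over a trillion possible variants, each potentially encoding a different protein fragment.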

Ginkgo has been using AI for years to help design proteins that catalyze certain chemical reactions, to develop new drugs, and to design DNA sequences in synthetic biology. Wagner says people often associate biology with the pharma and biotech industries, and while that’s where the money is today, biology will be applied to a much wider set of challenges than drug discovery in the future.

“Biology is the only substrate, the only scientific discipline, that is capable of solving the great challenges of the world—food security, climate change, human health—all of those are biological problems,” says Wagner. “There has already been so much value created [with AI], even with the tiny little surface-scratching work that we’re doing now.”


MICROSOFT’S NEW FUTURE OF WORK REPORT IS ALL ABOUT AI

Not surprisingly, Microsoft’s recently released New Future of Work report focuses on the use of AI in the workplace. The report, which draws on surveys of people both within Microsoft and outside the company, yields some eye-catching stats and themes. For example, it took people 37% less time on average to complete common writing tasks when they used AI tools, and consultants produced work rated over 40% higher in quality on a simulated consulting project. Meanwhile, users solved simulated decision-making problems twice as fast when using LLM-based search rather than traditional search. However, on some tasks where the LLM made mistakes, BCG consultants with access to the tool were 19 percentage points more likely to produce incorrect solutions.

A few more findings from the Microsoft report:

  • Researchers think that as AI tools are more widely used at work, the role of human workers will shift toward “critical integration” of AI output, requiring expertise and judgment.

  • AI assistants might be used less as “assistants” and more as “provocateurs” that can promote critical thinking in knowledge work. AI provocateurs would challenge assumptions, encourage evaluation, and offer counterarguments.

  • Writing prompts for AI models remains hard. Prompt behavior can be brittle and nonintuitive: seemingly minor changes, including capitalization and spacing, can result in dramatically different LLM outputs.

You can read the full report here.




