Deepfake Technology Is Now a Threat to Everyone. What Do We Do?
Kartik Hosanagar
Legislation hasn’t kept up with the fast-moving technology, so the market may have to create its own solution
A version of this article appeared as an expert column in The Wall Street Journal.
Last month, MIT professor Sinan Aral tweeted a video of himself endorsing a scam investment fund's stock-trading algorithm. Except it wasn't Aral in the video but a highly persuasive, AI-generated deepfake in his likeness. Aral is a leading expert on the study of misinformation online, so it is striking that fraudsters targeted him. Less savvy victims are easier marks: in 2020, criminals used similar technology to simulate the voice of a company director and fool his Hong Kong banker into transferring $35 million out of the firm's account.
Deepfakes have been around for a few years, but we are now at an inflection point. With free deepfake apps just a Google search away, it is only a matter of time before someone you know becomes the victim of a deepfake scam.
The term deepfake has its origins in porn, where hackers inserted celebrity faces onto explicit content, but it has come to mean the use of AI to create synthetic media (images, audio, video) in which someone appears to do or say something they never actually did or said. The technology is not always misused. Cadbury, for example, partnered with Bollywood celebrity Shahrukh Khan on a marketing campaign for small businesses in India hit by COVID-19: Business owners uploaded details of their stores, and Cadbury used deepfake technology to create the effect of Khan promoting these businesses in tailored TV ads. The campaign was a hit (and transparent about its fakery).
But positive use cases are likely to be overshadowed in the coming years by the technology's potential role in financial fraud, identity theft, and worse—from the savaging of reputations to the stoking of civil and political unrest.
Time for a crackdown
Current laws targeting fraudulent impersonation were not designed for a world with deepfake technology, and efforts at the federal level to update them have so far stalled. Several bills have been put forward in Congress since 2018, but most have disappeared into subcommittee or failed to pass. A couple of less aggressive bills have progressed. A defense act signed by President Trump in late 2019 requires the reporting of deepfakes related to U.S. elections and established a prize to prompt research into the technology, but it will be years before that research delivers practical solutions. In August, a U.S. Senate committee unanimously voted to advance the bipartisan Deepfake Task Force Act to the full Senate. But the bill is preliminary, proposing only the creation of a team to investigate ways of curbing malevolent uses of deepfake technology. Any crackdown would still be years away.
A stumbling block to aggressive and decisive legislation is the need to also protect parodies and other free speech. Some states have skirted this by taking oblique angles: In 2019, Virginia criminalized the non-consensual sharing of deepfake pornography by expanding an existing revenge-porn law. But another challenge remains: in an online world where people can upload content anonymously, it can be hard even to identify the individuals behind deepfakes. Professors Danielle Citron of Boston University and Robert Chesney of the University of Texas propose putting the onus on platforms like Facebook and YouTube by making their legal protections for user-generated content provisional—conditional on their taking "reasonable steps" to police their own platforms.
Broad adoption of these kinds of laws could create meaningful deterrents … eventually. In the meantime, impersonation through deepfakes remains cheap and often goes unprosecuted. And the technology is moving so fast that lawmakers will likely always lag behind. So I believe it is technology that needs to do a better job of protecting us from a problem it helped create.
Fight fire with fire?
Many companies agree and are trying to find technical solutions for distinguishing fake content from authentic content. One approach is to detect deepfakes with machine learning. For instance, while deepfakes appear highly realistic, the technology is not yet capable of generating natural eye blinking in the impersonated individuals, so machine learning algorithms have been trained to detect deepfakes from eye-blinking patterns. While these ML-based detectors can succeed in the short term, people looking to evade them will likely just respond with better technology, creating an ongoing and expensive cat-and-mouse game. For example, research at UCSD has shown that creating several videos with slightly modified video frames successfully deceives deepfake detectors and doesn't even require the attacker to "be aware of the inner workings of the machine learning model used by the detector."
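To make the blink-based idea concrete, here is a minimal Python sketch of the underlying intuition, not the actual detectors cited above: compute a per-frame "eye aspect ratio" from eye landmarks, count blinks, and flag clips whose blink rate falls below a plausible human range. The thresholds and the upstream landmark detector (e.g., dlib or MediaPipe) are assumptions for illustration.

```python
# A minimal sketch of blink-based deepfake screening (illustrative only).
# Landmark extraction is assumed to happen upstream; this code only turns
# per-frame eye landmarks into a blink count and a crude suspicion flag.
import numpy as np

EAR_BLINK_THRESHOLD = 0.21   # assumed threshold; would be tuned on real data
MIN_BLINKS_PER_MINUTE = 5    # humans typically blink roughly 15-20 times/min

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: (6, 2) array of landmarks ordered around the eye contour."""
    v1 = np.linalg.norm(eye[1] - eye[5])   # first vertical distance
    v2 = np.linalg.norm(eye[2] - eye[4])   # second vertical distance
    h = np.linalg.norm(eye[0] - eye[3])    # horizontal distance
    return (v1 + v2) / (2.0 * h)           # small value => eye is closed

def count_blinks(ear_series: list[float]) -> int:
    """Count closed-then-open transitions in a per-frame EAR series."""
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < EAR_BLINK_THRESHOLD:
            closed = True
        elif closed:                       # eye reopened: one full blink
            blinks += 1
            closed = False
    return blinks

def looks_suspicious(ear_series: list[float], fps: float) -> bool:
    """Flag clips whose blink rate is implausibly low for a real human."""
    minutes = len(ear_series) / (fps * 60)
    return count_blinks(ear_series) / max(minutes, 1e-9) < MIN_BLINKS_PER_MINUTE
```

A production detector would learn such cues from labeled data rather than rely on fixed thresholds, which is exactly why the cat-and-mouse dynamic described above sets in so quickly: once a cue is known, forgers can train their generators to reproduce it.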
There is a better approach with a longer time horizon: media provenance or authentication systems that verify the origins of images and videos. For instance, Microsoft has developed a prototype of a system called AMP (Authentication of Media via Provenance), which enables media content creators to create and assign a certificate of authenticity to their content. Under this system, every time you watch a video of (say) the U.S. President, the technology would help your browser or media-viewing software verify the source of the video (e.g., a news network or the White House). The process wouldn't take more than a few seconds. For end users, this means assurance about the authenticity of content, and it could be delivered as simply as an icon—much like the current browser padlock icon, which indicates that information you send to a website is protected from third-party tampering en route. Creating such authentication and provenance systems is a good first step, but they can be effective in practice only if they are widely adopted by all content creators, which will take time.
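As a rough illustration of how such a provenance check could work, the sketch below signs a hash of a media file with a publisher's private key and verifies it on the viewer's side. This is the generic digital-signature pattern, not Microsoft's actual AMP protocol; the file name and key handling here are hypothetical, and a real system would distribute public keys through a trusted registry rather than alongside the video.

```python
# A minimal sketch of media provenance via digital signatures (illustrative
# only; not AMP). Publisher signs a hash of the file; viewer verifies it.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def file_digest(path: str) -> bytes:
    """Hash the media file in chunks so large videos fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.digest()

# Publisher side: sign the digest and distribute the "certificate" with
# the video ("broadcast.mp4" is a hypothetical file name).
publisher_key = Ed25519PrivateKey.generate()
certificate = publisher_key.sign(file_digest("broadcast.mp4"))

# Viewer side: verify against the publisher's public key, which in a real
# system would come from a trusted registry, not from the video itself.
def is_authentic(path: str, cert: bytes, pub: Ed25519PublicKey) -> bool:
    try:
        pub.verify(cert, file_digest(path))
        return True                # hash matches: show the "authentic" badge
    except InvalidSignature:
        return False               # tampered or unsigned content

print(is_authentic("broadcast.mp4", certificate, publisher_key.public_key()))
```

The viewer-side check is what the padlock-style icon mentioned above would be built on: any edit to the file changes its hash, the signature no longer verifies, and the badge disappears.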
While there is little that deepfake victims like Aral can do today, legislation may eventually change that. The market could be quicker—provided we, as consumers and citizens, care.
Kartik Hosanagar is a professor of Technology and Digital Business in the OID department of the Wharton School of the University of Pennsylvania, where he is also faculty co-lead of AI for Business. Professor Hosanagar is the author of "A Human's Guide to Machine Intelligence: How Algorithms Are Shaping Our Lives and How We Can Stay in Control." Twitter: @khosanagar.