Deepfakes – Is this the future of fake news?

All is not well with Artificial Intelligence. Like nuclear energy, in the wrong hands it can wreak havoc on people's lives. Recently, a deepfake video featuring Rashmika Mandanna went viral online. The AI-generated clip has sparked worries about the exploitation of Artificial Intelligence (AI) to disseminate fake news. The 'Pushpa' actress, traumatised and scared, went on to express her concern on social media, stating, "Something like this is honestly, extremely scary not only for me, but also for each one of us who today is vulnerable to so much harm because of how technology is being misused".

Rashmika Mandanna is not the first, and will not be the last, victim of the misuse of AI technology. Have you seen Mark Zuckerberg boast about having "total control of billions of people's stolen data", a viral video of Barack Obama calling Donald Trump a "total and complete dipshit", or Bruce Willis in a commercial for the Russian telecom company Megafon? If the answer is "yes", you have witnessed a deepfake. In the Russian advertisement, Bruce Willis' likeness is superimposed on the face of a Russian actor of about the same age and build, creating the actor's 'digital twin'. The advertisement was created by the Russian company Deepcake using an artificial neural network. The two-time Emmy award winner denied reports that he had sold the rights to his face, stating that he has no deal with Deepcake.

Deepfakes, the 21st-century equivalent of Photoshopping, create visuals of fictitious events using deep learning, a subset of machine learning and an artificial intelligence technique. Deep learning algorithms, which train themselves to solve problems using vast amounts of data, are used to swap out faces in pictures, videos, and other digital content, producing fake content that looks real.
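To make the idea concrete, a common face-swapping design uses one shared encoder and two person-specific decoders: the encoder learns pose and expression, each decoder learns to render one identity. The sketch below only illustrates that data flow; the "networks" are untrained random linear maps and every dimension and name is made up for illustration, not a real implementation.

```python
import numpy as np

# Hypothetical sketch of the shared-encoder / two-decoder face-swap
# architecture. Real systems train deep convolutional networks on
# thousands of face crops; here the layers are random linear maps.
rng = np.random.default_rng(0)

FACE_DIM, LATENT_DIM = 64 * 64, 128  # flattened 64x64 face, latent code size

encoder = rng.standard_normal((FACE_DIM, LATENT_DIM)) * 0.01    # shared encoder
decoder_a = rng.standard_normal((LATENT_DIM, FACE_DIM)) * 0.01  # renders person A
decoder_b = rng.standard_normal((LATENT_DIM, FACE_DIM)) * 0.01  # renders person B

def swap_face(face_of_a):
    """Encode a face of person A, then decode with person B's decoder.

    After training, the shared encoder captures pose and expression,
    while each decoder renders a specific identity, so decoding A's
    latent code with B's decoder yields B's face in A's pose.
    """
    latent = face_of_a @ encoder  # shared representation
    return latent @ decoder_b     # rendered as person B

fake = swap_face(rng.standard_normal(FACE_DIM))
print(fake.shape)  # (4096,): same size as the input face
```

Swapping which decoder is applied is the whole trick: one forward pass through `decoder_b` instead of `decoder_a` turns a reconstruction into a face swap.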

Deepfakes – Is this the future of fake news?

Many refer to this deepfake technology as the "future of fake news"; however, it has already been controversially applied, for example by using celebrity faces in pornography. According to DeepMedia, a business that develops algorithms to detect synthetic media, eight times as many voice deepfakes and three times as many deepfake videos of all kinds have been posted online this year compared with the same period in 2022. It is estimated that in 2023 approximately 500,000 voice and video deepfakes will be shared on social media platforms worldwide. Microsoft founder Bill Gates has also expressed concern about the misuse of AI, telling CNN that deepfakes and AI-generated fake news are a huge concern for elections and democracy. He stressed the need for laws with clear guidelines on deepfake usage, so that people can differentiate between fake and real digital content.

Deepfakes – beyond videos

Deepfakes are a subset of synthetic media in which content, usually photos, videos, or audio, is generated or altered through the use of deep learning algorithms. Deepfakes come in a variety of forms, each built on a distinct generation or manipulation technique. Face swapping replaces one person's face with another's in pictures or videos. Lip syncing alters the audio in a video to create the illusion that someone is saying something different from what they actually said. Voice cloning uses artificial intelligence to imitate the speech patterns and intonation of a target. Puppetry manipulates a person's movements so they appear to be speaking or doing things they never did. Emotion manipulation can make someone look sad in a picture or video where they actually had a happy expression.

It's crucial to remember that although deepfake technology has many uses in the creative and entertainment industries, it can also be employed maliciously, for identity theft or the dissemination of fake news. Worries regarding its moral and legal ramifications are thus becoming more and more prevalent.

How to spot it?

As the technology advances, it becomes more difficult to spot deepfakes. American researchers found that faces in most deepfake videos keep their eyes open and do not blink, because the algorithms, trained largely on photos of people with open eyes, never actually learn how to blink. At first this appeared to be the solution to the detection issue, but as soon as the study was released, blinking deepfakes started to surface. This is how the game works: once a vulnerability in deepfakes is identified, it is immediately addressed. Lower-grade deepfakes are simpler to identify. There may be uneven skin tone or poor lip synchronisation, and transposed faces may flicker at their edges. Deepfakes also have an especially difficult time rendering small details: hair, teeth, jewellery, and consistent lighting are often depicted inaccurately. But all said and done, a normal person surfing the Internet and bombarded by so much content has little time, inclination, or technology to detect deepfakes. According to recent research by University College London, humans fail to detect one out of every four deepfake speech samples. Another study, conducted by the University of Sydney, found that the human brain can detect deepfakes better than the conscious individual. The experiment used one behavioural method and one neuro-imaging method; the neuro-imaging method showed the brain identifying 54% of the deepfakes, whereas when people were asked verbally they could identify them only 37% of the time.
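The blink cue mentioned above was typically measured with the eye aspect ratio (EAR): six landmark points around an eye (two corners, two on the upper lid, two on the lower lid) give a ratio that stays high while the eye is open and drops sharply during a blink; a face whose EAR never drops is suspicious. The sketch below shows only the EAR formula on made-up landmark coordinates; a real detector would first extract landmarks from video frames with a face-landmark library.

```python
import numpy as np

# Sketch of the eye-aspect-ratio (EAR) blink heuristic used by early
# deepfake detectors. Points p1..p6 run around one eye: p1/p4 are the
# corners, p2/p3 the upper lid, p5/p6 the lower lid.
# EAR = (|p2 - p6| + |p3 - p5|) / (2 * |p1 - p4|)

def eye_aspect_ratio(points):
    """Ratio of eyelid opening to eye width; drops toward 0 when the eye closes."""
    p = np.asarray(points, dtype=float)
    vertical = np.linalg.norm(p[1] - p[5]) + np.linalg.norm(p[2] - p[4])
    horizontal = np.linalg.norm(p[0] - p[3])
    return vertical / (2.0 * horizontal)

# Illustrative landmark coordinates, not from a real detector:
open_eye = [(0, 0), (10, 6), (20, 6), (30, 0), (20, -6), (10, -6)]
closed_eye = [(0, 0), (10, 1), (20, 1), (30, 0), (20, -1), (10, -1)]

print(round(eye_aspect_ratio(open_eye), 2))    # 0.4: eye open
print(round(eye_aspect_ratio(closed_eye), 2))  # 0.07: eye closed (a blink)
```

A detector would track this ratio frame by frame: natural video shows periodic dips (blinks roughly every few seconds), while an early deepfake shows a flat, high EAR, which is exactly the vulnerability that blinking deepfakes were later trained to close.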

Does the law protect us from deepfakes?

Though deepfakes aren't unlawful in and of themselves, makers and distributors run the risk of breaking the law. Depending on its content, a deepfake may violate copyright, breach data protection laws, or be defamatory if it subjects the victim to mockery. Additionally, posting sexual and intimate photographs without permission is a distinct criminal offence known as revenge porn, for which offenders face a maximum two-year prison sentence. On this, British law is divided. Deepfakes are covered by Scotland's revenge porn statute, which makes it illegal to reveal, or threaten to reveal, any image or video that purports to represent someone in a private setting. Revenge pornography there is punishable with both jail time and fines, and some regard it as virtual rape. The US government forbids revenge pornography under rigorous guidelines, and in most states a number of laws are in place and enforced. Regrettably, India has not created a dedicated revenge pornography statute. Since revenge pornography is not specifically prohibited by law in India, the accused instead faces penalties under sections of the Indian Penal Code and the Information Technology Act.

The Solution

As deepfake technology develops, so do the methods and instruments for identifying and countering it. It's critical to remain watchful and proactive in counteracting deepfakes' detrimental effects on the digital environment. To spot and stop deepfakes created by AI, AI itself may be the answer: we need to stay ahead of the curve and develop algorithms that detect them. Media literacy helps too, but in countries like India it remains a challenge. Governments, regulatory bodies, and the law must tighten the screws on the creators and amplifiers of deepfakes.

Deepfake technology is becoming more and more dangerous, especially in the wrong hands, and the risks associated with it are growing in importance as artificial intelligence advances. But with the creation of new detection tools and a persistent emphasis on ethics and education, we can cooperate to reduce these hazards and ensure that deepfake technology is applied for the greater good. Fake or real? The scary future of AI-generated realities is a tricky balancing act: mitigating the dangers of deepfakes while paving the way for the good work that can be done.
