Deepfake – Baby-faced killer
Before discussing deepfake technology itself, it is worth pausing on the importance of image and sound in human communication. Since ancient times, people have used oral tradition as a form of education.
To understand why tools for changing one's virtual appearance or voice were created at all, we have to go back to the pre-computer era. People, whether actors or ordinary citizens, have long pretended to be someone else on occasion. This was rarely done to abuse; more often it was an element of art. Even so, people who physically resembled the famous had an easier time in many respects: they stood out in public spaces and often profited from it, and some turned to crime. That precedent continues, and in today's era of universal access to information, identification and identity verification have become harder than ever.
A joke, a prank, a bit of snark: where does the line run between innocent fun and crime? Should rights be respected in the virtual world on an equal footing with the real one? Who is the more dangerous criminal, the man running through a mall with a gun in his hand, or the kid sitting in front of a monitor impersonating another person? This debate should not be necessary at all, because every crime is a threat, and its escalation can end in tragedy. The days of the Internet nerd posting silly jokes online are over; today every harmful action has a real reflection in physical life. What has changed is that ever more sophisticated tools of harm keep appearing. In my opinion, deepfake software has entered the path of crime with a bang and is consolidating its position there. So what is this technology?
With the spread of personal computers and ever newer programs for processing image and sound, scams became easier to run into. The first image manipulations were made by hand in graphics programs.
In 2017, pornographic films featuring public figures began appearing on the web. A user posting under the nickname "deepfakes" ushered in a new, rapidly developing era of fraud, and deepfake technology, a portmanteau of "deep learning" and "fake," entered the Internet salons. At first, of course, downplaying dominated the comments: a joke, a bit of malice, nothing serious. Meanwhile, a series of applications appeared that let us transfer facial features between people, turn someone into a dog or a cat, and finally change voices and surroundings. Anyone, even a tech layman, could become a wizard of cinema-grade special effects in a few minutes. Amid all this fun, we missed the threat blossoming around us.

As mentioned earlier, much of our professional and personal life has moved online. It suddenly turned out that we no longer have time to verify everything and, worse, we either have to trust all the content around us or pretend that nothing threatens us, all for the sake of feeling better. This lasts until we become the victim of a direct attack.

Deepfake software uses AI to map a person's image and voice and build a copied "mask" from them, which can then be laid over the image of another person. The first fakes were fairly easy to expose: the image contained many errors, and human traits such as blinking, breathing, and gestures were distorted or missing altogether. Even so, new faces kept appearing, in pornographic films, in disinformation, or simply as jokes. Not every use bore the hallmarks of a crime; some harked back to the old art of acting and were part of a performance, and there were also staged demonstrations meant to draw attention to the dangers of the new technology. But there was no stopping the wave from flooding us.
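Early deepfakes, as noted above, often blinked unnaturally or not at all, and one classic detection heuristic exploits exactly that: the eye aspect ratio (EAR) of Soukupová and Čech collapses when an eye closes, so a video whose EAR never dips is suspicious. The sketch below is a minimal, self-contained illustration; the landmark coordinates and the 0.21 threshold are illustrative assumptions, and a real detector would take its landmarks from a face-tracking library rather than hard-coded points.

```python
import math

def ear(pts):
    """Eye aspect ratio from six (x, y) eye landmarks, ordered p1..p6:
    p1/p4 are the horizontal eye corners, p2/p3 the upper lid, p6/p5 the lower lid."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    # EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|): vertical openings over eye width.
    return (dist(pts[1], pts[5]) + dist(pts[2], pts[4])) / (2.0 * dist(pts[0], pts[3]))

def blink_count(ear_series, closed_thresh=0.21):
    """Count blinks as open-to-closed transitions in a per-frame EAR series."""
    blinks, closed = 0, False
    for e in ear_series:
        if e < closed_thresh and not closed:
            blinks += 1
            closed = True
        elif e >= closed_thresh:
            closed = False
    return blinks

# Hypothetical landmarks for an open eye and a nearly closed one.
open_eye = [(0, 3), (2, 5), (4, 5), (6, 3), (4, 1), (2, 1)]
closed_eye = [(0, 3), (2, 3.3), (4, 3.3), (6, 3), (4, 2.7), (2, 2.7)]

print(round(ear(open_eye), 3))    # wide-open eye: high ratio
print(round(ear(closed_eye), 3))  # nearly shut eye: low ratio
```

A detector built on this idea would compute the EAR for every frame of a clip and flag footage whose blink count stays implausibly low for its length. Modern deepfakes have largely fixed blinking, so this works only against the first generation of fakes the paragraph describes.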
The huge rise in the popularity of social networks, live broadcasts, and media messages certainly did not help. The desire to be seen online made us forget about protecting our biometric data.
What possibilities, threatening us more directly, does deepfake bring? Anyone with an exuberant imagination and a desire to "exist at any price" could, with a high degree of probability, achieve it with deepfake's help. Fake news delivered by a famous presenter? Here you go. Stealing funds from a random person's bank account? But of course. Disrupting a stock exchange by feeding it false data? It is already happening. Announcing the bankruptcy of a large corporation, or the apocalyptic scenario of a declared nuclear war? All this and much more is possible, and much of it has already happened.

In 2020, criminals used voice-cloning technology to steal $35 million through a bank in Hong Kong: impersonating the director of a UAE company, they had transfers for that amount sent to multiple accounts. The case gained publicity only after the fact, and tracking down the perpetrators proved very difficult. In 2022, criminals cloned the likeness of Patrick Hillman, Binance's chief communications officer, and used it in meetings with clients and prospective clients of the exchange to extract information about their plans, transactions, and accumulated capital. Russian pranksters have also called the Polish president Andrzej Duda and spoken with him while impersonating the French president Emmanuel Macron. These few examples show how deep the hands of criminals reach thanks to deepfake technology.

Today everyone is at risk. When we receive a suspicious email, a little effort usually protects us from the problems hidden inside it. The situation is different when we receive a voice message from someone close to us whom we trust, or a video in which someone we know gives us instructions and we, unaware of the threat, act against our own interests. Thanks to AI, today's phishing emails are even more dangerous.
They are better personalized, they slip past our vigilance more easily, and, what is worse, far more of them can be sent in the same span of time. So are we doomed?
It is worth adding that many countries still have not introduced appropriate regulations on deepfakes, and the existing ones do not cover the full scale of abuse, so they are easy to circumvent. Even countries such as Australia, which very quickly introduced a law protecting people's likenesses and prohibiting their use for impersonation, are not perfectly protected, simply because the law still lags behind a dynamically developing technology. Many organizations have been established to investigate and protect against abuses of deepfake technology; some work with governments, others are private or commercial ventures, but they all share one goal: to secure and protect. On one hand we have efforts such as DARPA's research programs or the Deepfake Detection Challenge (DFDC); on the other, the social networks themselves, aware of the risks, are introducing rules that prohibit distributing photos, audio, or video based on deepfake technology. Even sites in the porn industry, after a wave of scandals caused by the spread of illegal films, have launched campaigns to verify and remove unconfirmed content. All of this is still not enough. No unified law yet regulates the use of a technology that is generally available, and it is equally hard to find a wide-reaching information campaign about the risks of giving away one's biometric identity. It is enough to note that our phones can be unlocked with a face scan, which, once cloned, opens access to everything we keep on them: files, sensitive data, banking, cars, and our professional work.
It is impossible to turn back time and erase everything we have already put into our Internet history. Nor can we hide and be afraid to stick our heads out of our homes; building paranoia only favors criminals. What we can do is raise our awareness.
Thank you for reading our newsletter and stay tuned for more updates from Vault Security!