Deepfake – Baby-faced killer
Vault Security Team


Before discussing deepfake technology itself, it is worth considering the power of image and sound in communication. Since ancient times, humans have used oral tradition to educate future generations, and it soon turned out that reinforcing, or even replacing, the spoken word with an image is more effective. Cave paintings let us imagine today how prehistoric people lived. But do we learn the truth this way? When was the lie born? We can look to the Bible for an answer, but more pragmatic minds will likely reject it. And if we cannot date the invention of the lie, can we trust historical records at all? What separates a true story from an embellished one, a legend, and finally a lie? To heighten the impression, the truth was often colored: a ruler of average height with a pimply face is less epic than a broad-chested man with a noble countenance, and if Snow White had been of average beauty, who would want to read her story? For centuries we lived on faith rather than evidence, so it is no surprise that with the advent of photography, trust in such documentation was enormous. In time, film showed that a story can be delivered to the recipient almost without reflection, and forensic science immediately adopted this form of documentation in its operational work. Soon after, it turned out that photography can lie just as well as it can support the truth. What were the motives for such actions?

To understand why someone would build tools for changing one's appearance or voice, we have to go back to the pre-computer era. Actors and ordinary people alike have long pretended, on occasion, to be someone else. This was rarely done for abuse; more often it was an element of art. Still, people who physically resembled the famous had it easier in many respects: they stood out in public space, often profited from it, and some turned to crime. That pattern continues, but in today's era of widely available identification and identity-verification data, such activity has been severely limited. Image cloning likewise entered the public space with the development of cinema. One actor playing many characters, often twins, was an attraction and a technological novelty. Whether it is Arnold Schwarzenegger in "The 6th Day" or Michael Keaton in "Multiplicity", these are just two examples of technology used to "clone" characters of identical appearance. Today we have reached the point where, instead of employing a crew and investing huge sums to change an identity on screen, we can use easy-to-use software based on AI.

A joke, a prank, a bit of snark: where is the line between innocent fun and crime? Should rights be respected in the virtual world on an equal footing with the real one? Who is the more dangerous criminal: a man running through a mall with a gun in his hand, or a kid sitting in front of a monitor and impersonating another person? This should not even be a discussion, because every crime is a threat, and its escalation can end in tragedy. The days of the Internet nerd posting stupid jokes are over; today every harmful action online has a real reflection in physical life. And yet ever more effective tools for doing harm keep appearing. In my opinion, deepfake software has entered the criminal world with a bang and is strengthening its position there. So what is this technology?

With the spread of personal computers and ever newer programs for image and sound processing, scams became easier to come across. The first image manipulations in graphics programs drew more laughter than admiration, but time showed it was easier to hire a talented graphic designer than a make-up artist, or than to torment yourself in the gym. The world was flooded with fake photos with altered faces or backgrounds. Voice modulators appeared as well: they let people with health problems regain a voice, but also offered the option of changing one voice into another. It soon turned out that we were limited only by imagination, and a little later by the computing power of our PCs. And, like many of humanity's achievements, these tools could also be used for fraud: faked photos appeared in the tabloid press to monetize fake news. That state of affairs lasted for years, and although it was harmful and irritating, a little diligence was enough to defend against it. Years passed and the importance of cyberspace grew. Our finances moved to banks whose payments we now mostly make through phone apps; our social life largely moved to the virtual world. We have digital documents, remote work, and digital warfare, and it was only a matter of time before hackers sensed a new door for abuse, wide open at first.

In 2017, pornographic films featuring public figures appear on the web. A Reddit user nicknamed "deepfakes" begins a new, rapidly developing era of fraud. Deepfake technology, a portmanteau of "deep learning" and "fake," enters the Internet salons. At first, of course, downplaying dominates the comments: a joke, a bit of malice, nothing serious at all. Meanwhile, a series of applications appears that let us transfer facial features between people, turn someone into a dog or a cat, and finally change the voice and the surroundings. Anyone, even a tech layman, can become a wizard of cinema-grade special effects in a few minutes. In all this fun, we missed the threat blossoming around us. As mentioned earlier, much of our professional and personal life has moved online. It suddenly turned out that we no longer have time to verify everything and, worse, we must either trust all the content around us or pretend that nothing threatens us, all for our own peace of mind. This lasts until we become the victim of a direct attack. A deepfake uses AI to map a person's image and sound and build a copied "mask" from them, which can then be placed over the image of another person. The first fakes were quite easy to expose: the image contained many errors, and human traits such as blinking, breathing, and gestures were distorted or missing altogether. Despite that, new faces kept appearing, whether in pornographic films, in false news, or simply as jokes. Not every use bore the hallmarks of a crime; some referred back to the old art of acting and were part of a performance, and the first staged demonstrations appeared to draw attention to the dangers of the new technology. But there was no stopping the wave from flooding us.
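To make the "mask" idea concrete: the classic face-swap approach popularized by the original deepfakes code trains one shared encoder with a separate decoder per identity, so feeding person A's frame through B's decoder renders B's face with A's pose and expression. The toy NumPy sketch below shows only the wiring; the random weights and made-up layer sizes stand in for a real trained network and are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    # Random weights stand in for parameters learned from thousands of face crops.
    return rng.normal(0.0, 0.1, (n_in, n_out))

# Shared encoder: compresses any face into a small latent vector (pose, expression).
W_enc = layer(64, 8)
# Two decoders: one learns to reconstruct person A, the other person B.
W_dec_a = layer(8, 64)
W_dec_b = layer(8, 64)

def encode(face):
    return np.tanh(face @ W_enc)

def decode(latent, W_dec):
    return np.tanh(latent @ W_dec)

# Training (omitted here) teaches each decoder to rebuild its own person's face.
# The swap trick: encode a frame of person A, then decode with B's decoder,
# producing B's face wearing A's expression.
frame_of_a = rng.normal(0.0, 1.0, 64)   # stand-in for a flattened face crop
latent = encode(frame_of_a)
swapped = decode(latent, W_dec_b)

print(latent.shape, swapped.shape)
```

Because the encoder is shared between both identities, it is forced to capture only what faces have in common, which is exactly why the swap works once the decoders are trained.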
The huge rise of social networks, online broadcasts, and media messaging certainly did not help. The desire to be seen on the web made us forget about protecting the biometrics in our own image. To grasp the scale of the problem: in the first year of Google Photos' operation, about 200 million users registered and uploaded some 24 billion selfies. How many places on the web hold our photos? Facebook, Instagram, Twitter, and many more. We have been supplying the AI with training material ourselves, in unimaginable quantities. Add live coverage via YouTube, Twitch, Panda, and the like, and you have yet another distribution channel. Many people soon learned first-hand the danger they had exposed themselves to, and not only famous figures such as President Obama or the actors Tom Cruise and Nicolas Cage. Today it is fair to say that few media personalities have escaped this kind of ridicule or defamation. And this is still only a small part of what deepfakes can do.
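Taking those reported figures at face value, simple arithmetic shows how much raw material an average user handed over in a single year:

```python
users = 200_000_000        # reported registrations in Google Photos' first year
selfies = 24_000_000_000   # reported selfie uploads in the same period

per_user = selfies / users
print(per_user)  # 120.0
```

Roughly 120 selfies per registered user in one year, on one service alone, each image a potential training sample for a face-cloning model.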

What possibilities, more directly threatening our existence, does the deepfake bring? Anyone with an exuberant imagination and the desire to "exist at any price" could very probably achieve it with a deepfake. Fake news reported by a famous presenter? Here you go. Stealing funds from a random person's bank account? But of course. Disrupting a stock exchange by feeding it false data? Already happening. Announcing the bankruptcy of a large corporation, or the apocalyptic scenario of declaring nuclear war? All this and much more is possible, and much of it has already happened. In 2020, criminals used voice-cloning technology to steal $35 million: impersonating the director of a UAE company, they convinced a bank manager in Hong Kong to make transfers of that amount to accounts across several banks. The case gained publicity only after the fact, and the hunt for the perpetrators proved very difficult. In 2022, criminals cloned the image of Patrick Hillman, Chief Communications Officer of Binance: using his likeness in video calls, they met clients and prospective clients of the exchange to extract information about their plans, transactions, and holdings. In the same year, Russian pranksters called Polish president Andrzej Duda and spoke with him while impersonating French president Emmanuel Macron. These few examples show how dangerous deepfake technology is and how deep criminals can now reach. Today everyone is at risk. When we receive a suspicious email, a little effort is usually enough to protect ourselves from the trap hidden inside. The situation is different when we get a voice message from someone close to us whom we trust, or a video in which a person we know gives us instructions and we, unaware of the threat, act against our own interests. Thanks to AI, even phishing emails are more dangerous today: better personalized, better at evading our vigilance, and, on top of that, far more of them can be sent in the same amount of time. So are we doomed?


It is worth adding that many countries have still not introduced proper regulations on deepfakes, and the existing laws do not cover the full scale of abuse, so they are easy to circumvent. Even countries like Australia, which very quickly introduced a law protecting a person's image and prohibiting impersonation, are not perfectly protected. It is all because legislation still lags behind a dynamically developing technology. Many organizations have been set up to investigate and protect against abuses of deepfake technology; some work with governments, others are private or commercial ventures, but they all share one goal: to secure and protect. On one side we have efforts such as DARPA's media-forensics programs or the Deepfake Detection Challenge (DFDC); on the other, the social networks themselves, aware of the risks, are adopting rules prohibiting the distribution of deepfake photos, audio, or video. Even sites in the porn industry, after a wave of scandals caused by illegally produced films, have launched campaigns to verify and remove unconfirmed content. Yet all this is still not enough. No unified law regulating the use of the technology has been created, while the technology itself is freely available. It is also hard to find a wide-reaching information campaign on the risks of giving away one's biometric identity. It is enough to note that our phones can be unlocked with a face scan, which, once cloned, opens access to everything we keep on our drives: files, sensitive data, banking, cars, and our professional work.
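Detection research exploits exactly the artifacts mentioned earlier, such as unnatural blinking in early fakes. As a hedged illustration only (the eye-openness scores and every threshold below are invented for this sketch, not taken from any real detector), a simple heuristic counts blinks per minute and flags clips that fall far below human norms:

```python
def blink_rate(eye_openness, fps, closed_below=0.2):
    """Count blinks as open-to-closed transitions in per-frame eye-openness
    scores (0 = fully closed, 1 = fully open, e.g. from a landmark model).
    Returns blinks per minute."""
    blinks = 0
    was_closed = False
    for score in eye_openness:
        closed = score < closed_below
        if closed and not was_closed:
            blinks += 1
        was_closed = closed
    minutes = len(eye_openness) / fps / 60
    return blinks / minutes if minutes else 0.0

def looks_suspicious(eye_openness, fps, min_rate=6.0):
    # Humans blink roughly 15-20 times per minute; many early deepfakes far less.
    return blink_rate(eye_openness, fps) < min_rate

# Toy clip: 10 seconds at 30 fps containing two blinks -> about 12 blinks/min.
clip = [1.0] * 300
clip[50] = clip[51] = 0.1
clip[200] = clip[201] = 0.1
print(blink_rate(clip, fps=30))        # about 12 blinks per minute
print(looks_suspicious(clip, fps=30))  # False
```

Real detectors are far more sophisticated, but the principle of comparing a physiological signal against human baselines is the same.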

We cannot turn back time and erase everything we have already put on the Internet, nor can we hide at home, afraid to stick our heads outside; building paranoia only favors criminals. What we can do is raise our awareness and widen the scope of our cyber security, using newer and more reliable methods of detecting and preventing surveillance, such as Vaulter. And let us remember that it is no longer only those at the top of the social hierarchy who are at risk: these days, all of us are targets.

Thank you for reading our newsletter and stay tuned for more updates from Vault Security!

