AI Identity: AI imitating human mannerisms and the potential impact on society

In July 2019, Microsoft showcased an AI and augmented reality demo that allows a presenter to render a constructed image of themselves speaking a different language in their own tone of voice. The main goal seems to be preserving that unique personal touch without having to be in the same room or speak the same language. I'm not going into the detail of how this is achieved, but the basic idea is that a "hologram"[1] is rendered from your pre-recorded presentation, the speech is translated into another language, and the hologram then speaks that language with your same tone and mannerisms.
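
To make the moving parts concrete, here is a minimal sketch of what such a pipeline could look like. Every function name below is a hypothetical placeholder of mine, not Microsoft's actual API; the point is only the order of the stages.

```python
# Conceptual pipeline behind a "speak another language as yourself" demo.
# All functions are hypothetical stand-ins for real services/models.

def transcribe(recording: bytes) -> str:
    """Speech-to-text: turn the pre-recorded presentation into a transcript."""
    raise NotImplementedError("stand-in for a speech-recognition service")

def translate(text: str, target_language: str) -> str:
    """Machine-translate the transcript into the target language."""
    raise NotImplementedError("stand-in for a translation service")

def synthesize_in_own_voice(text: str, voice_samples: bytes) -> bytes:
    """Neural text-to-speech conditioned on the speaker's own recordings,
    so the output keeps their tone of voice."""
    raise NotImplementedError("stand-in for a voice-preserving TTS model")

def render_hologram(body_scan: bytes, audio: bytes) -> bytes:
    """Drive a volumetric capture of the presenter with the new audio."""
    raise NotImplementedError("stand-in for the AR rendering step")

def holographic_translation(recording: bytes, body_scan: bytes,
                            target_language: str) -> bytes:
    transcript = transcribe(recording)
    translated = translate(transcript, target_language)
    audio = synthesize_in_own_voice(translated, voice_samples=recording)
    return render_hologram(body_scan, audio)
```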

AI copying human behaviour is not entirely new. In May 2018, Google made big waves in the news when it showed off the Google Duplex feature for the Google Assistant. In that demo, the assistant was able to make a phone call on behalf of its owner to small businesses that don't have a booking system in place. What awed the audience was that the assistant sounded different from previous versions. While people might have gotten used to the robotic voices of mainstream personal assistants (Siri, Alexa, Google), in this demo the voice was subjectively much more human. That perception came from the assistant copying typical human mannerisms: saying "um-hum", making conversational pauses, and so on.
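
As a toy illustration of the "mannerism" part, the snippet below wraps an assistant's reply in SSML (a real W3C markup standard for speech synthesis), injecting pauses and the occasional filler word. The filler-insertion heuristic is my own assumption for illustration; Google has not published Duplex's actual method.

```python
import random

# Disfluencies of the kind the Duplex demo was noted for (my own pick).
FILLERS = ["um", "mm-hmm", "uh"]

def humanize(sentence: str, filler_probability: float = 0.5) -> str:
    """Wrap a plain sentence in SSML, inserting a short pause after each
    comma and, sometimes, a filler word at the start."""
    clauses = sentence.split(", ")
    # Pauses at clause boundaries mimic natural conversational phrasing.
    body = ', <break time="300ms"/> '.join(clauses)
    if random.random() < filler_probability:
        body = f'{random.choice(FILLERS)}, <break time="200ms"/> {body}'
    return f"<speak>{body}</speak>"

print(humanize("Sure, I can do Thursday, let me check the time"))
# e.g. <speak>um, <break time="200ms"/> Sure, <break time="300ms"/> I can do ...</speak>
```

Crude as this heuristic is, pauses and fillers at clause boundaries are plausibly a big part of why the demo's voice registered as human.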

In a utopian world, one could clearly see the benefits this area of AI can bring to society. Currently, AI still seems to lack some basic human skills. For example, social media has been flooded with funny depictions of assistants not getting it right; famously, there have been instances where kids' requests have taken on a different, more adult spin. Equally famously, in 2016 Microsoft hit the news thanks to its Twitter bot Tay.ai, which, shortly after being released, turned into a Hitler-loving, feminist-bashing troll. It seems, then, that having AI improve its interactions with humans is an immediate positive result of these technological advances.

more than 60% of small businesses don’t have a booking system - Sundar Pichai @ Google Duplex demo

Also, as Sundar Pichai, CEO of Google, stated while demoing Google Duplex, more than 60% of small businesses don't have a booking system. So having an AI that can mediate between its busy owner and a business that wants to connect with a broader customer base seems a noble goal. In fact, in general terms, artificial intelligence could add $13 trillion to the global economy by 2030. To put that into perspective, this amount represents more than half (roughly 65%) of the current US national debt, which sits at around $20 trillion.

Moreover, there are benefits to being able to express yourself regardless of language, as shown in Microsoft's recent demo. One can easily see how democratising it is to "be able" to talk in a different language without knowing the actual language and still sound like "you".

Potential dangers of AI imitating human behaviour

Online searches are full of articles, books and all sorts of media focusing on how technology, specifically machines and AI, can create job losses, shifts in industries and so on. For example, authors like Brunn & Duka painted, in 2018, a negative picture of machines and humans coexisting alongside each other. You can also find more positive takes on AI and job creation, like the one written by Daugherty & Wilson. Setting jobs aside, what I want to focus on is the ethical questions, the legal scope and what I believe could be a new topic: AI Identity.

Ethical questions

AI is not only beating us in game competitions or day-to-day tasks, but is now capable of deceiving humans into believing they are interacting with another human

Going back to the Google Duplex demo, Google faced some harsh backlash at the time. Mainly because, after the initial impression the technology made, people started asking what ethical questions this posed for Google. For the first time, AI was not only beating us in game competitions or day-to-day tasks, but was deceiving humans into believing they were interacting with another human. Should a human not know in advance that he or she is talking to a machine? For which use cases is this acceptable, and for which is it not? What laws should be put in place to regulate these developments?

These ethical questions about Google Duplex meant an official response from the company had to be released. Not long after the demo, Google used tech publication The Verge to put out an official statement. In it, Google highlighted the positives of this new piece of tech while emphasizing that the assistant will identify itself as a machine.
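
As a sketch of what enforcing such a promise could look like at the code level (my own illustration, not Google's implementation), a call script type can simply refuse to exist without the disclosure as its first utterance:

```python
# Hypothetical disclosure wording; the enforcement pattern is the point.
DISCLOSURE = ("Hi, I'm an automated assistant calling on behalf of a client. "
              "This call may be recorded.")

class CallScript:
    def __init__(self) -> None:
        # The disclosure is baked in as the first utterance and cannot be
        # removed or reordered by whoever builds the rest of the script.
        self._utterances = [DISCLOSURE]

    def add(self, utterance: str) -> "CallScript":
        self._utterances.append(utterance)
        return self

    def lines(self) -> tuple:
        # Immutable view: the disclosure always stays first.
        return tuple(self._utterances)

script = CallScript().add("I'd like to book a haircut for next Tuesday.")
assert script.lines()[0] == DISCLOSURE
```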

AI doesn't only copy humans; it can also copy you

It is one thing to have AI copy human behaviour, hold nuanced and contextual conversations, or chat with you over Twitter. It is another thing entirely when it can copy you. Surely we can see the value of having an AI-generated model mimic your voice in a different language. However, this can quickly become counterproductive. One instance of this happened to the relatively famous psychologist Jordan B. Peterson. In his blog, Dr. Peterson raises the question of the dangers of building models able to generate speech he never uttered. Similarly, in the porn industry, machine learning models are being used to swap celebrities' faces onto performers. Amid the rise of fake news, imagine the implications of not being able to tell the difference between real person X and an AI person resembling X.
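
For context on why this is technically so easy: the face-swap systems referred to above are widely reported to descend from a simple shared-encoder, dual-decoder autoencoder. The sketch below is a minimal PyTorch illustration of that general technique (an assumption about the approach, not any specific tool): one encoder learns a common face representation, one decoder is trained per identity, and "swapping" is just decoding person A's face with person B's decoder.

```python
import torch
import torch.nn as nn

def make_encoder() -> nn.Module:
    # Shared across identities: compresses a face into a latent representation.
    return nn.Sequential(
        nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
        nn.ReLU(),
        nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
        nn.ReLU(),
    )

def make_decoder() -> nn.Module:
    # One per identity: reconstructs that person's face from the latent code.
    return nn.Sequential(
        nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16x16 -> 32x32
        nn.ReLU(),
        nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32x32 -> 64x64
        nn.Sigmoid(),
    )

encoder = make_encoder()
decoder_a, decoder_b = make_decoder(), make_decoder()  # one per identity

# Training (not shown): minimise reconstruction loss of person A's faces
# through (encoder, decoder_a) and of person B's through (encoder, decoder_b).

# The swap: encode a face of person A, decode it as person B.
face_a = torch.rand(1, 3, 64, 64)     # stand-in for a real face crop
swapped = decoder_b(encoder(face_a))  # person B's features, person A's pose
```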

It is one thing to have AI copy human behaviour. It is another thing entirely when it can copy you!

I'm suggesting this problem be called AI Identity: when models resemble your voice or your face to the point that other humans can't tell the difference. We are still evaluating, and reacting to, scandals like Cambridge Analytica. Now, with AI Identity, you could sway even not-so-gullible audiences into believing a fabricated truth, thanks to believable "facts"[2] built with machine learning.

Not having enough regulation

Adding to the mix, and using Cambridge Analytica as an example, it took approximately four years (from 2014, when the app was published, to 2018, when the scandal broke) for lawmakers to take action. Moreover, nothing definitive has been decided in the US as of today.

It seems, then, that lawmakers and governments have always created regulation after something gets a little out of hand. The idea behind this is that less regulation allows more creativity. However, one could make the point that AI regulation should be looked at through a different lens: as recent scandals have proven, a more proactive approach should be considered.

Active lawmakers

This is not an easy task, as lawmakers often don't seem to fully grasp the new concepts technology is generating. Nevertheless, when done right, in a proactive way, the results can be significantly positive. That is the case mentioned by Baranov et al., with Estonia developing law in the field of robotics, creating new legal entities and determining the degree of responsibility of artificial intelligence.

Currently, we have tech companies asking for self-regulation. That is noble, but there is a reason for it: tech companies face a question with no clear correct answer. Either they keep developing technology that could prove harmful, or, as Google did with ICE, they pull the plug on some development. In the former case, we can't blame companies for pushing for more business value. In the latter, other less ethical companies could provide the same technology with worse consequences.

Is there a light at the end of the tunnel?

The only way we can assure a bright future for the next generations is if initiatives like GDPR stop being one-off success stories and become the norm.

We certainly live in exciting times for technology. We have never experienced such an explosion of knowledge, with groundbreaking discoveries around every corner. This gives us opportunities we never thought imaginable. Surely, it is exciting to think that soon we'll be able to talk to machines as we talk to our buddies. Being able to be almost omnipresent, with a rendered image of yourself speaking a different language, is also promising. All these use cases are a testament to how technology can be put to good use.

Similarly, as with every groundbreaking discovery, technology can also be used for ill. Jokingly, we now have the power to make somebody say anything and record that person in any video possible. More seriously, AI is proving that it can take anybody's identity. This new AI Identity can prove hard on society, broaden fake news and sway people to believe anything.

One could argue that tech companies need to be held accountable. That is partially true. It is good to see companies like Microsoft, Google and Facebook self-regulating. But, after all, that is not their job, nor will it be enough to prevent evil uses of technology. For that, it takes the proactive, multipartisan, expert-advised action of governments to make sure the legal system can identify transgressors. Some good initiatives, like GDPR, have already taken place. That is good! Nevertheless, the only way we can assure a bright future for the next generations is if initiatives like GDPR stop being one-off success stories and become the norm.



[1] The term hologram is used here descriptively, to portray the AI-rendered image of the presenter in the background.

[2] The use of the word facts in this context means that the audio or video built by AI models is so closely tied to reality that it could trick even the average person.
