War Of The Worlds: A Deepfake
Karen Jahnke PMP
Independent Consultant | Helping organizations and individuals on their digital transformation journey.
In 1938, Orson Welles and his troupe of radio actors interrupted the Columbia Broadcasting System’s programming to report that our planet had been invaded by aliens. The broadcast of War Of The Worlds was long believed to have terrified millions of people. But did it really create a panic? Dig deeper and the real story is that newspapers sensationalized the broadcast: radio had captured advertising revenue during the Depression, and this badly hurt the newspaper industry. Newspapers used the broadcast as an opportunity to discredit radio in the eyes of advertisers and regulators and to create the perception that radio management was irresponsible and could not be trusted. Media has always had the power to influence and the motive to exaggerate or lie. Today, social media levels the playing field for bad actors.
As we embark on a new frontier where Artificial Intelligence (AI) will be integrated into nearly every system, process, and media platform, I find myself spending a great deal of time pondering the positive and negative impacts of this stunning shift. Like climate change, there is no reversing course. Our focus now needs to be on how to navigate the change: how we can best learn from AI, and why we must remain vigilant so we are not duped by a deepfake.
Transforming Customer Experience
AI is transforming the customer experience. My own work has been focused on Contact Centers for the past five years, and I was part of a project team that implemented Nexidia analytics. In the Contact Center space, natural language processing identifies call drivers and sentiment to learn about customer interactions. These insights enable focused coaching that leads to improved customer experience, reduced handle time, higher first call resolution and CSAT, and gains in other key performance indicators that decrease operating costs and increase revenue. Contact Centers produce vast amounts of data, and AI is able to analyze that data at speeds not humanly possible, helping managers keep better track of agent performance. Last week I read about a poll showing that patients found AI responses to their questions more empathetic than those of physicians. Can AI support coaching humans to be more empathetic? Absolutely.
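To make that concrete, here is a minimal sketch of the kind of analysis I'm describing: scoring call transcripts for sentiment and tallying call drivers. This is not how Nexidia works under the hood; it uses an open-source sentiment model from the Hugging Face transformers library, and the sample transcripts and keyword list are hypothetical, purely for illustration.

```python
# Minimal sketch: sentiment scoring plus simple call-driver tagging on
# contact-center transcripts. Illustrative only, not a production pipeline.
from collections import Counter
from transformers import pipeline

# Hypothetical transcripts standing in for real contact-center data.
transcripts = [
    "I was double billed this month and nobody can tell me why.",
    "The agent fixed my login issue in two minutes, great service.",
    "I've called three times about my refund and it still isn't here.",
]

# Hypothetical call-driver keywords; a real system would learn these from data.
call_drivers = {
    "billing": ["billed", "charge", "invoice"],
    "refund": ["refund", "money back"],
    "login": ["login", "password", "locked out"],
}

sentiment = pipeline("sentiment-analysis")  # downloads a small default model

driver_counts = Counter()
for text in transcripts:
    result = sentiment(text)[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.99}
    drivers = [name for name, words in call_drivers.items()
               if any(w in text.lower() for w in words)]
    driver_counts.update(drivers)
    print(f"{result['label']:>8}  drivers={drivers}  text={text[:50]}")

print("Top call drivers:", driver_counts.most_common())
```

Even a toy example like this shows why the technology matters: the tagging and scoring that once required hours of call listening happens in seconds, freeing coaches to focus on the conversations that need attention.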
ChatGPT is a chatbot that was released in November 2022. The technology is based upon a foundational large language model. Simply stated, this means that the technology is continually gaining and applying knowledge to solve problems. Early adopters use it to quickly and easily complete mundane tasks. ChatGPT can summarize meeting notes, help craft personal correspondence, develop syllabi for educators, and even write a business plan.
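For readers who want to go beyond the chat window, the same capability is available programmatically. Below is a minimal sketch of the meeting-notes use case, assuming the OpenAI Python SDK and an API key set in the environment; the model name and the sample notes are illustrative, not a recommendation.

```python
# Minimal sketch: summarizing meeting notes with an LLM via the OpenAI
# Python SDK (openai>=1.0). Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

meeting_notes = """
Attendees: Karen, Priya, Tom.
Decisions: move the analytics rollout to Q3; Tom owns the coaching deck.
Open items: budget approval, vendor security review.
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name; use whichever you have access to
    messages=[
        {"role": "system", "content": "Summarize meeting notes as concise bullet points."},
        {"role": "user", "content": meeting_notes},
    ],
)

print(response.choices[0].message.content)
```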
Making The Mundane Magical
I check in on Marissa Mayer's career from time to time. I admire her as a successful businesswoman with deep technical knowledge, and she happens to be from my home state of Wisconsin. Great to see someone from rural Wisconsin succeed! Mayer recently cofounded a company called Sunshine Contacts. Their tagline is "Making the mundane magical". The problem the business is solving is making it easy for people to share the right information at the right time. The business focus appears to be at the intersection between simply generating large amounts of data and effectively managing knowledge and data to simplify lives. (https://sunshine.com/about/)
Interaction Learning
In all likelihood, businesses that do not invest in and understand how to leverage AI technology today will be left behind, particularly businesses that rely on data input. Some roles will be eliminated, and new skills and new roles will be needed. The reason it is important not to wait is that there is a period called interaction learning that happens in the early phase of adoption. It is the period when machine learning occurs through the interaction between the system, humans, and observers. During this critical step the organization learns how the system interacts with its ecosystem. This period can take months and even years. Businesses that wait for the technology to advance before investing will still need to move through the interaction learning phase.
Educators are encouraging students to embrace ChatGPT as another learning resource, with explicit rules around acceptable use. For example: "DO NOT copy ChatGPT answers and make them your own." "DO ask ChatGPT for clarification on a topic that you do not understand." "DO NOT assume your teacher will never know that you are using ChatGPT." For the moment, there are telltale signs that give away ChatGPT-generated content. We need to educate children early about AI and help them calibrate their moral compass. (Source: https://www.weareteachers.com/chatgpt-for-teachers/)
I do not believe we should assume that a student or individual will want to cheat or claim AI-generated content as their own. Some will, to be sure. However, I believe that the majority of individuals want to continue learning and value the importance of their own creative expression.
I've watched access to information evolve greatly in my lifetime. I used the card catalogue at the library, and my childhood home had a dictionary and a set of encyclopedias as the primary sources of learning. With the evolution of the internet I was grateful, and remain so, for quick access to information that previously took hours to locate and organize. Today, I can simply ask Alexa. Human learning is also accelerated by technology.
The Performance Of Empathy Is Not Empathy
The race to accelerate AI technology is moving at a dizzying pace, and alarms are sounding around ethical concerns and moral obligations. But the train has left the station, so how do we protect our privacy, ideas, and perceptions?
ChatGPT can write poems and stories, create art, and write lyrics and music using prompts that sound like a particular artist or band, raising ethical concerns around creativity and copyright. It's easy to foresee years of litigation and of wading through policy that does not yet exist.
Regarding patents, "Last year, in the case of Thaler v. Vidal, the Federal Circuit affirmed that only natural persons (i.e., human beings) can be named inventors on U.S. patents, thereby excluding artificial intelligence from being listed as an inventor per se. 43 F.4th 1207 (Fed. Cir. 2022)".
However, this does not mean that patents cannot be AI-assisted.
(Source: https://www.crowell.com)
Deepfakes use AI to create convincing images, audio, and video. They are used to spread hoaxes and false information about real people and events. Deepfakes may be used to meddle in elections or to impersonate real people in order to obtain personal and confidential information. Yet deepfakes are legal and have legitimate real-world applications in video gaming, entertainment, and customer support. With the line between reality and augmented reality beginning to blur, how can we balance learning with technology while protecting our privacy and our perceptions?
It is incumbent upon each individual to remain vigilant about how they share personal and confidential information. We need to continue to dig deeper and question everything. Just because you see it or read it on the internet does not make it true. As I peruse my social media platforms, I am sometimes blindsided by the number of individuals who will form and share an opinion about content that they presume to be true but clearly did not take the time to read. Educate yourself to spot deepfakes and bad actors.
Robots are being designed with deliberate cues to give the impression that they can feel and express emotion. I believe that there are valid use cases for robots that can complete simple tasks and provide companionship to improve overall health and wellbeing. But let's not be tricked into believing that technology is capable of human emotion.
Difficult as it has become, perhaps we need to move offline for longer periods and connect with each other again. It is sometimes easy to forget that augmented reality only exists through the device we are using to interact with it.
I will close with a fervent quote from Sherry Turkle, AI researcher and MIT professor: “The performance of empathy is not empathy,” she said. “Simulated thinking might be thinking, but simulated feeling is never feeling. Simulated love is never love.”
(Source: https://phys.org/news/2019-04-wary-robot-emotions-simulated.html)