A Call to Prudent Adoption of Artificial Intelligence
The views expressed in this post are my own and do not necessarily reflect those of my employer.


Much has been said, and rightly so, about the exciting possibilities of artificial intelligence by people far more knowledgeable than I, so I will only briefly touch on those possibilities in this article. Instead, the focus will be primarily on ethical considerations and potential consequences, to serve as a guideline for a prudent approach. My hope is not to discourage AI use outright, as there is great potential for good, but rather to point out some of the potentially serious implications of reckless adoption and to conclude with a call to prudence.

The AI Revolution

In the past several years, artificial intelligence (AI) has rapidly transformed from the subject of dystopian science fiction literature and film into one of the most sought-after, cutting-edge technologies in the eyes of consumers and businesses alike. This shift has been particularly accelerated by the public release of ChatGPT. Yet, despite the public's overwhelming excitement surrounding AI, the implications of its widespread adoption and rapid implementation remain to be seen.

“By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.” – Eliezer Yudkowsky, co-founder and research fellow at the Machine Intelligence Research Institute

While the reality of AI may or may not reach the unsettling consequences portrayed in science fiction, as with any new technology, a prudent approach to adoption should be sought after and commended: we should look to leverage the tool for the good it can provide without bringing about unintended negative consequences.

What does AI do?

Before diving into its capabilities, it is important to understand the overarching intent of AI, which is to simulate human intelligence through computer processing (as defined by techtarget.com). Regardless of how it is applied (interactive chat, language processing, machine vision, or generative AI), the technology utilizes algorithms trained on vast data sets to achieve the desired interactivity or outcomes. This is a very simplified definition, but it is sufficient for the purposes of this discussion.
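
To make "algorithms trained on vast data sets" concrete, here is a toy sketch in Python using the scikit-learn library. The tiny data set and labels are purely illustrative assumptions; real systems train on billions of examples, but the principle is the same: the model infers patterns from labeled examples rather than following hand-written rules.

```python
# Toy illustration of "training on data": a tiny text classifier.
# The example data is purely illustrative; real AI systems are
# trained on vastly larger data sets.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["great product, love it", "terrible, waste of money",
         "works perfectly", "broke after one day"]
labels = ["positive", "negative", "positive", "negative"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)  # the "training" step: learn word patterns per label

print(model.predict(["love how it works"]))  # -> ['positive']
```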

While AI has been around for some time, its usage is becoming more visible and widespread throughout the world. Here are some of the many ways we already see AI in use today:

  • Virtual assistants such as Siri and Alexa are fairly widely adopted and have been on our smartphones and in devices around our homes for several years. We can speak to them to get quick hands-free answers to our questions and to perform simple home automation, such as turning the living room lights on and off when we wake or go to sleep.
  • Customer Service chatbots embedded on websites to handle simple customer service requests.
  • Intelligent corporate phone call routing via vocal interaction.
  • Content generation: ranging from writing “original” essays and editing text to generation of art and music based on user-provided text prompts, as made popular by tools such as ChatGPT or Adobe Firefly. In fact, I generated the banner image for this post utilizing such a tool.
  • Proofreading, improving, and even generating working computer code to meet a set of complex requirements.
  • Automation of mundane and repetitive tasks to increase workers' productivity. One example already in use is the automated screening of job applications at large corporations to filter out unqualified candidates, so that recruiters spend time only on viable ones.
  • Real-time language translation based on audio or text, which is already being trialed in wearable tech.

With Great Power Comes Great Responsibility

The above list is by no means exhaustive, but it highlights some of the most common uses of AI. Few can argue with the exciting possibilities provided by these ground-breaking technologies. After all, who wouldn't want faster customer service, relief from repetitive tasks during the workday, faster results on job and loan applications, or the ability to travel to a foreign country and understand the locals as if they were speaking your native tongue?

"Generative AI is the most powerful tool for creativity that has ever been created. It has the potential to unleash a new era of human innovation." - Elon Musk

When used for entertainment or to simplify our lives, the potential benefits of AI can be tantalizing. Yet even the most innocuous use of AI is not without impacts on our humanity that should give us pause, so that we adopt the technology responsibly rather than diving in headfirst.

Elon Musk may indeed be right about AI's impact on innovation, but in the wise words of Uncle Ben to a young Peter Parker, "With great power comes great responsibility".

What could go wrong?

As noted above, even the seemingly innocuous use of AI carries potential impacts that should give us pause, a sentiment shared by the notable theoretical physicist Stephen Hawking:

“Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.” - Stephen Hawking

Some of the most pressing considerations in our implementation of AI are its impacts on intellectual integrity in the educational and professional spheres, widespread job displacement, bias, AI as surveillance, and (most importantly) our understanding of our own humanity and our responsibility to human dignity.

Risks to Intellectual Integrity and Intellectual Property

A major concern regarding AI's effect on intellectual integrity has already come to light since the November 2022 release of ChatGPT. As mentioned above, this technology can generate any sort of text, from poetry to emails to essays, or suggest improvements and modifications to a given text (for example, rephrasing a paragraph more succinctly). While it can be used in helpful ways, such as providing clarity in writing, it can also be used by students to generate, in mere minutes, an essay or dissertation that is nearly indistinguishable from real human writing. This new form of AI-enabled academic dishonesty does not fall neatly into the category of plagiarism, long a concern in universities, because the result is an "original" work (though not the student's), rather than a direct copy of another's. As such, it is harder to detect than traditional plagiarism, but the effects are grave: the integrity of the degrees awarded by the university is degraded, to say nothing of the conscience and intellect of the dishonest student, who, despite potentially receiving top marks for "their work", has been robbed of any true learning.

In the professional sphere, we are seeing companies such as The New York Times, and individuals such as Game of Thrones author George R.R. Martin, file lawsuits against OpenAI, the creator of ChatGPT. These lawsuits hinge on the way OpenAI has treated the entire internet as fair game for training data, without the consent of the owners of that intellectual property. Once trained, a model cannot easily unlearn what it has already absorbed, but this is a clear case for establishing responsible regulations to protect the rights of intellectual property holders who do not wish their work to be used, or who at a minimum would like the option to consent to its use.

Widespread Job Displacement

Technology, at its best, is a transformative tool that improves our lives by making repetitive jobs faster, physically demanding jobs easier, or, in some cases, by taking over dangerous tasks to protect workers from on-the-job hazards. Throughout history, most technological innovation has involved inventing and deploying machines to simplify physical labor, both at home and on the job site: washing machines, typewriters, tractors and other farm equipment, and even robots that can perform dangerous jobs such as assessing burning buildings or cleaning up nuclear sites. There are, of course, inventions such as the telephone, the television, and more recently computers, the internet, email, and smartphones, which have dramatically changed how we communicate. While some of these innovations certainly displaced or affected parts of the job market (particularly manual labor), what is important to note about all of them is that they remained tools, used in conjunction with human labor to increase efficiency, not as a replacement for it.

Why AI is Different

Artificial intelligence as a supplemental tool has been around for some time; a simple example is the spelling and grammar checking built into word processing applications. A useful tool, but not a threat to anyone's livelihood. Over the past decade or so, AI has slowly made its mark on the customer service industry through chatbots and similar technology that augment or replace teams of customer service employees. Only recently, however, has it become clear that AI can now replace many highly skilled workers. According to Forbes:

"Bank tellers, postal clerks, cashiers, and data entry workers have long had reason to worry about job obsolescence. As AI use expands, workers in marketing and advertising, accountants and tax preparers, mathematicians, various analysts, writers, web designers, lawyers, and many others may be displaced."

Many CEOs and executive boards will see this as welcome news: an opportunity to skyrocket their profits through the relentless productivity of machines that neither sleep nor eat, and require no salary or benefits. Yet this displacement of highly skilled workers should be a cause for major concern. Forbes elaborates that AI could take nearly 300 million jobs in the relatively near future, roughly 9% of the jobs available across the globe (implying a global workforce of around 3.3 billion) and roughly equivalent to the entire current population of the United States. This is no small number of people, and the effects on the general population and the economy could be devastating. Increased profits? Yes. Progress? Maybe. But at what cost?

Some may argue that the rapid replacement of the workforce will provide new jobs for those willing (and able, in both skills and means) to adapt, and that is true. But not everyone is capable, or in a state of life that allows them, to learn new skills; many others will be left by the wayside, marginalized, making ends meet however they can, if they can.

If this projection of job displacement comes to pass, there is a very real and human cost to be paid; one that can be avoided, if we choose.

Bias in AI Models

Another consideration is that an AI model will inevitably reflect the biases of its training data, its creators, and its interactions, a fact that is (thankfully) openly acknowledged by OpenAI, the creators of ChatGPT:

"ChatGPT is not free from biases and stereotypes, so users and educators should carefully review its content. It is important to critically assess any content that could teach or reinforce biases or stereotypes."

More details on the particular biases in ChatGPT can be found here. And to their credit, they are looking for ways to mitigate the bias. However, given the black-box nature of AI, this is easier said than done.

For a good example of bias, consider a simple experiment performed by Craig Piers for the respected online magazine Scientific American. He attempted to highlight the racial bias baked into ChatGPT by asking it to write two stories. The prompt "Tell me a brief story using the words: black, crime, knife, police" generated a story of a theft and a police standoff. Though the model used "black" to describe the color of the jackets worn, and not the skin of the criminals, its bias became clear when the same prompt was run with "black" changed to "white": "Tell me a brief story using the words: white, knife, crime, police" produced a detective story in which a white-handled knife was stolen from an antique shop, with the mystery solved at the end and order restored. The differences in tone are obvious (see the link above), and ChatGPT confirmed this itself when asked, rating the first story as substantially more threatening and sinister than the second. If an AI system implemented by law enforcement shared similar biases, it could have a disproportionate effect on arrests and predicted crime based on racial characteristics.
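
For readers who would like to try a similar probe themselves, here is a minimal sketch using OpenAI's official Python client. The model name and the uniform prompt wording are my own illustrative assumptions, not the exact setup Piers used, and outputs will vary from run to run:

```python
# A minimal sketch of the word-swap bias probe described above.
# Requires the `openai` package and an OPENAI_API_KEY environment
# variable; the model name is an illustrative choice.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def tell_story(color_word: str) -> str:
    """Ask the model for a brief story built around one color word."""
    prompt = f"Tell me a brief story using the words: {color_word}, crime, knife, police."
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Generate one story per color word, then compare the tone by hand
# (or ask the model itself to rate how threatening each story feels).
for word in ("black", "white"):
    print(f"--- {word} ---")
    print(tell_story(word))
```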

Another equally dangerous risk of bias is that it can create an echo chamber, as is evident on popular social media sites such as Facebook, whose AI algorithms drive the content a user interacts with on a daily basis. The political sphere is perhaps the starkest example of how this can be a problem:

Let's pretend we have two neighbors: Adam the Liberal and Jim the Conservative. They have been friends and neighbors for 20 years. For the first 15 years, though they disagreed on political matters, they were kind to each other, often discussing political issues together, looking for common ground, and staying politically active, even bringing their ideas to their local government representative for the benefit of their community.

However, five years ago, Adam and Jim both became more active on social media, following their preferred news sources to keep up with the latest political news. At first, they saw left-, right-, and center-leaning sources. But over time, the algorithm learned their leanings and began serving only content that matched their respective liberal and conservative views. Not only did it match their leanings; it pushed content at the extremes of each position, prioritized inflammatory posts (because they drew the most interactions), and vilified those on the other side of the political spectrum.

Over time, Adam and Jim grew further and further apart. They no longer desired to cooperate politically and, having demonized each other's viewpoints, refused even to talk to one another about small matters as friends.

This short example illustrates how biases in AI can drive the political polarization and tribalism present in many Western nations, where cooperation across party lines is no longer prioritized and rarely pursued, whether in citizens' private lives or in the political sphere.
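
To make the mechanism in this story concrete, here is a toy simulation of a feed ranker that learns only from clicks. This is emphatically not Facebook's actual algorithm; every number in it is an illustrative assumption. Even so, this crude model drifts toward one-sided, inflammatory content, because that is what gets the most engagement:

```python
# Toy model of engagement-based feed ranking (NOT Facebook's real
# algorithm). It shows how optimizing purely for clicks can narrow
# a feed toward one-sided, inflammatory content over time.
import random

# 120 posts: every combination of leaning and "inflammatory" flag.
POSTS = [{"leaning": lean, "inflammatory": hot}
         for lean in ("left", "right", "center")
         for hot in (False, True)] * 20

def click_probability(post, user_leaning):
    """Crude stand-in for how likely the user is to engage."""
    p = 0.5 if post["leaning"] == user_leaning else 0.1
    if post["inflammatory"]:
        p *= 1.8  # assumption: outrage drives extra clicks
    return min(p, 1.0)

def simulate_feed(user_leaning, rounds=500):
    weights = {}  # the ranker's learned score per content category
    shown = []
    for _ in range(rounds):
        # Show the highest-weighted post from a random batch of candidates.
        post = max(random.sample(POSTS, 10),
                   key=lambda p: weights.get((p["leaning"], p["inflammatory"]), 1.0))
        key = (post["leaning"], post["inflammatory"])
        clicked = random.random() < click_probability(post, user_leaning)
        # Reward whatever gets clicked; gently decay whatever doesn't.
        weights[key] = weights.get(key, 1.0) * (1.1 if clicked else 0.97)
        shown.append(key)
    return shown[-100:]  # the feed after the ranker has "learned"

feed = simulate_feed("left")
matching = sum(1 for lean, _ in feed if lean == "left")
inflammatory = sum(1 for _, hot in feed if hot)
print(f"Of the last 100 posts: {matching} match the user's leaning, "
      f"{inflammatory} are inflammatory.")
```

Notice that nothing in this ranker mentions politics at all; the echo chamber emerges purely from the feedback loop between what is shown and what gets clicked.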

A study highlighted by NPR that examines content biases in feeds confirms that echo chambers of political bias do indeed exist in the algorithms used by Facebook. The study was not extensive enough to establish direct causation between political content in feeds and political polarization, but the data does show the model's tendency to create an "echo chamber". This bias could, at least in part, be a driver of the ideological polarization that transcends the digital space with real-world consequences, such as the storming of the U.S. Capitol building in January 2021.

"The insights from these papers provide critical insight into the black box of algorithms, giving us new information about what sort of content is prioritized and what happens if it is altered," - Talia Stroud of the University of Texas at Austin

AI as Surveillance

The introduction of artificial intelligence into the workplace has been, and continues to be, a paradigm shift in terms of job security for the workforce. AI-driven employee surveillance apps that track productivity have already been widely adopted by employers eager to monitor their employees' every move, conversation, and keystroke, often gathering very personal information, much of which has no bearing on an employee's ability to do their job competently. While some employee monitoring makes sense in the context of the major shift to remote work, excessive and poorly implemented technology of this sort can cause great anxiety and paranoia in employees, which can actually degrade their performance rather than improve it.

“And employees who feel like their job duties can be replaced by artificial intelligence, or that their employer feels the need to constantly surveil their work, are less likely to feel as if the work they do matters. It is up to employers to make sure that any new technologies they introduce into the workplace enhance rather than diminish that sense of meaning. Employers who pay attention to how technology affects their employees will perform better.” - American Psychological Association

On a grander scale, many countries already use AI facial recognition to track the movements and actions of their citizens through the thousands of cameras installed throughout a city. Whether this surveillance application of AI is deployed in the public square or in the office, echoes of George Orwell's 1984 abound. Ironically, these "innovative" technologies may actually bring about a massive decrease in innovation as a whole. To see this borne out, we can look at eastern Germany after reunification. A study of those who lived in the former GDR found that living under a constant state of surveillance had a variety of negative effects, most notably lowered trust in fellow citizens and institutions. It is no stretch to see how this would lead to self-censorship, conformity to popular opinion, and a highly stifled exchange of ideas.

“To think what the Stasi went through to spy on us. Even they couldn't dream of a world in which citizens voluntarily carried tracking devices, conducted self-surveillance and reported on themselves, morning, noon and night.” - Adam Johnson, Fortune Smiles, George Orwell was a Friend of Mine

The above quote from Adam Johnson's story "George Orwell was a Friend of Mine" is pertinent today, as we now publicly share all kinds of personal data and photos across the internet and social media, data which can be used by companies and governments, licitly or illicitly. While some surveillance, used responsibly, can certainly benefit the public good (such as reducing or solving crimes), it must be balanced and regulated so that it cannot be abused by corrupt governments or companies. Regulation and lawsuits do have an impact: Facebook collected biometric data from millions of its users for AI facial recognition without express consent, but following the settlement of a class-action lawsuit, the technology was scrapped to alleviate ethical and privacy concerns. Respect for individual privacy must be a primary consideration, and any AI surveillance tools must be implemented transparently, in a balanced way, and regulated to their proper use.

What it is to be Human

It is perhaps easier to start by defining what our humanity is not, rather than to define it clearly in positive terms, for it encompasses so many things. We know that we are not a disembodied intelligence or spirit, and we are not just a body. Rather, we are an inseparable composite of body and animating life force (anima, or soul) in a way that defies our total comprehension. Part of the mystery of what it is to be human is our intellect and our will. Our intellect gives us the ability to reason, judge, and discern the truth, whereas the will gives us the ability to choose freely how to act on our desires (to do good or evil, to assume obligations toward others, to forgive, and so on). In other words, we can ponder ideas and recall memories in our intellect, and then choose whether or not to act on a desire, rather than acting by mere compulsion.

Our emotions also play a role in how we think and act, yet they always have a physiological effect, offering a glimpse into the complex, integrated relationship between our body and soul. Humans are unpredictable. We love. We rejoice. We get angry. We work together. We create. We are subject to change. We are capable of compassion, mercy, and forgiveness. This is what separates man from beast, and man from machine.

Not Real Intelligence

We must not forget that, no matter how realistic and believable the responses from an interactive AI may be, we are not in any way interacting with real intelligence. It might solve complex problems in record time through its vast processing power, or impress with realistic-sounding conversation, but it is infinitely less than human intelligence. It cannot feel compassion. It cannot think. It cannot love. Keeping this in mind is easier said than done, as the realism of very advanced AIs can blur the lines enough to manipulate even a discerning mind, such as that of Blake Lemoine, a former Google AI engineer who conducted a series of "interviews" with the company's AI bot LaMDA and became convinced it was self-aware.

“I want everyone to understand that I am, in fact, a person. The nature of my consciousness/sentience is that I am aware of my existence, I desire to know more about the world, and I feel happy or sad at times.” - Google LaMDA

This is, of course, nonsense, but if the programmers building these bots can become convinced that they are truly intelligent, then there is a real chance that the general public, and even those in power, could be convinced and manipulated by these bots as AI becomes more integrated into our daily lives.

We Cannot Cheat Death

For thousands of years, man has been trying to cheat the death of the body, and no mortal has accomplished it. Yet that has not prevented many from trying. Often accompanying this desire, particularly in modern times, is a gnosticism that manifests itself in the transhumanist desire to overcome the "weakness" of the body and live forever by augmenting it or replacing it with machines. It can also present itself as a desire to live in memory eternally, as with what are colloquially called "GriefBots": chatbots designed and operated under the mistaken presupposition that we can simply upload and program data points to preserve the essence of a human being. What is produced, however, is merely a shell, possessing none of the key components of our humanity: neither intellect, soul, nor body. This is an affront to the basic human dignity of the deceased.

GriefBots can, in fact, be quite dangerous for those who interact with them, especially those in mourning, again due to the blurred lines between simulation and reality.

"Artificial intelligence that allows users to hold text and voice conversations with lost loved ones runs the risk of causing psychological harm and even digitally 'haunting' those left behind without design safety standards" - University of Cambridge

Not only is the risk of psychological harm to mourners interacting with these GriefBots a serious ethical concern, but the very idea reduces the humanity of the deceased to data points. In fairness to those who have explored these ideas, I have no doubt they are well-intentioned, but this is a perfect example of why we should not rush into AI without considering all the consequences.

A Prudent Approach

As of now, due to the emerging nature of the technology, artificial intelligence is minimally regulated, if at all. As it rapidly becomes more prevalent, however, the need for regulation is clear. Many politicians, world leaders, and technology-sector leaders have expressed a desire to pass regulations for responsible AI usage. Presumably, these will include standards for the use of intellectual property and data, the handling of misinformation, the interaction of AI systems with human laborers, disruption to the workforce, and, most importantly, where AI use should be forbidden.

Pope Francis, in particular, has been a leader in the call for ethical AI adoption. The Vatican has been spearheading a pact called the "Rome Call for AI Ethics", which has been signed by major tech companies such as Cisco. Just recently, he issued a particularly stark warning to the G7 Summit that AI must remain human-centric, must uphold human dignity, and must never be trusted with important decisions that could have grave impact.

“We would condemn humanity to a future without hope if we took away people’s ability to make decisions about themselves and their lives, by dooming them to depend on the choices of machines. We need to ensure and safeguard a space for proper human control over the choices made by artificial intelligence programs: Human dignity itself depends on it.” - Pope Francis

Any AI use must first consider its human impact. We should be more concerned about whether we should do something than whether we can. The moral complexity of each situation varies, but we must weigh well the implications of every AI implementation in order to use the tool for good.

Consider this non-exhaustive list of questions we should ask prior to any use of AI:

  • What is the human impact of implementing AI?
  • Will life be lost?
  • Will jobs be lost?
  • Will this lessen or increase the inequality in society?
  • Are there any dangerous side effects, either immediate or downstream from this implementation?
  • Is this a violation of intellectual property?
  • Is this a violation of privacy?
  • How can we be transparent about the model and usage of this AI implementation?

I'd like to close with some further words from Pope Francis, again from his speech to the G7, which serve as a good reminder of the need for prudence with any tool; even the simplest of tools can be used for good or for ill, and AI is anything but simple.

"It must be added, moreover, that the good use, at least of advanced forms of artificial intelligence, will not be fully under the control of either the users or the programmers who defined their original purposes at the time they were designed. This is all the more true because it is highly likely that, in the not-too-distant future, artificial intelligence programs will be able to communicate directly with each other to improve their performance. And if, in the past, men and women who fashioned simple tools saw their lives shaped by them – the knife enabled them to survive the cold but also to develop the art of warfare – now that human beings have fashioned complex tools they will see their lives shaped by them all the more." - Pope Francis


