AI and DEI

While I was visiting my sister in Seattle last month, she told me about a public presentation she had attended and shared her personal notes from it. The talk, sponsored by Humanities Washington with speakers from the University of Washington, was titled “A.I. Anxiety: How Should We Think about Artificial Intelligence?”

The panel included Chirag Shah, professor at the University of Washington’s iSchool; Aylin Caliskan, assistant professor at the iSchool; and Ryan Calo of the University of Washington School of Law. It was moderated by Andrea Woody, professor of philosophy at the University of Washington.

Here are some of my sister’s brief notes on the panel’s discussion and presentation:

  • AI learns patterns from data.
  • It automatically learns and reflects biases, which are difficult to recognize and debug.
  • Recognizing human cognition using machines.
  • There are different kinds of AI: Artificial General Intelligence and Artificial Super Intelligence.
  • Augmentation helps; automation replaces.
  • We can’t regulate AI; it’s not a single thing. There are no guardrails or boundaries, and any guardrails or boundaries are easily circumvented. There is no framework for accountability.
  • Replica; Joseph Weizenbaum.
  • Address what AI is currently doing rather than what it might do.
  • Information integrity: misinformation is not intentional, while disinformation is intentional.
  • Training AI is energy intensive.
  • Is there anxiety because AI threatens a higher socio-economic class than previous technologies did?
  • Inequality is a concern.

Since I have worked in IT for many, many years, the recent AI chat with my sister and her notes raised my curiosity, so I decided to explore information about AI beyond what I’ve seen in the media.

Joseph Weizenbaum

The panel referenced Joseph Weizenbaum, a professor emeritus of computer science at MIT, and his work on AI. In the 1960s, Weizenbaum grew skeptical of artificial intelligence after creating ELIZA, a chat program that made many users believe they were interacting with an actual therapist. As a result, Weizenbaum’s ultimate view of AI was: “No other organism, and certainly no computer, can be made to confront genuine human problems in human terms.”

From Intel Corporation

To understand AI, let us start with tool time. I learned that there are several ways to look at AI’s architecture and classifications:

Classifications:

  1. Reactive AI, which is extremely limited: it has no memory, is task specific, and a given input always delivers the same output.
  2. Limited Memory AI, which learns from historical data by looking into the past to monitor specific objects or situations over time. These observations are programmed into the AI so that its actions can be based on both past and present-moment data. (A minimal sketch contrasting these first two types appears after this list.)
  3. Theory of Mind AI, which is currently hypothetical and more interactive, and could have the potential to understand the world and how other entities have thoughts and emotions.
  4. Self-Aware AI, which is currently even more hypothetical because it involves self-awareness based on self-consciousness.
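
To make the first two classifications concrete, here is a minimal, hypothetical Python sketch of my own (not from Intel’s material): a reactive agent is stateless, so the same input always yields the same output, while a limited-memory agent also consults recent observations. The braking rule, threshold, and window size are invented purely for illustration.

```python
class ReactiveAgent:
    """No memory: identical inputs always produce identical outputs."""

    def act(self, distance: float) -> str:
        # Hypothetical rule and threshold, invented for illustration.
        return "brake" if distance < 10.0 else "cruise"


class LimitedMemoryAgent:
    """Keeps a short history so past observations influence actions."""

    def __init__(self, window: int = 3) -> None:
        self.history: list[float] = []
        self.window = window

    def act(self, distance: float) -> str:
        self.history = (self.history + [distance])[-self.window:]
        # React to the trend (the gap is closing), not just the current reading.
        closing = len(self.history) >= 2 and self.history[-1] < self.history[-2]
        return "brake" if distance < 10.0 or closing else "cruise"


reactive, with_memory = ReactiveAgent(), LimitedMemoryAgent()
for d in (30.0, 20.0, 12.0):
    print(d, reactive.act(d), with_memory.act(d))
# The reactive agent still says "cruise" at 12.0; the memory agent
# brakes because the distance has been shrinking over time.
```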

Alternate classifications include Artificial Narrow Intelligence, which uses large datasets for specific tasks; Artificial General Intelligence, which could perform human-like tasks; and Artificial Superintelligence, which would replicate or emulate, and potentially surpass, human beings. In all of these AI applications we may encounter biases.

A subset of these classifications is Generative AI, which uses existing data to create new or synthetic data, images, sounds, videos, or concepts. This is where ChatGPT comes into play, and it seems to be causing anxiety in society. From these brief descriptions of AI, you can probably see that as AI moves more and more into interacting with, emulating, resembling, or replacing humans, those are the uses we seem to become most anxious about.
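
A tiny runnable illustration of the generative idea, under my own simplifying assumptions: a bigram model “learns” which word follows which in a toy training text, then samples a new sequence. It can only recombine what it has seen, which is also why any skew in the training data flows straight into the output.

```python
import random

random.seed(1)

# Minimal bigram "generative model": learn which word follows which in
# the training text, then sample a new sequence. Toy data only.
text = "the cat sat on the mat and the cat saw the dog".split()
follows: dict[str, list[str]] = {}
for a, b in zip(text, text[1:]):
    follows.setdefault(a, []).append(b)

word, output = "the", ["the"]
for _ in range(8):
    # Fall back to the whole corpus if a word was never followed by anything.
    word = random.choice(follows.get(word, text))
    output.append(word)
print(" ".join(output))  # new text stitched entirely from old patterns
```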

“Man is not a machine, ... although man most certainly processes information, he does not necessarily process it in the way computers do. Computers and men are not species of the same genus. ... No other organism, and certainly no computer, can be made to confront genuine human problems in human terms. ... However much intelligence computers may attain, now or in the future, theirs must always be an intelligence alien to genuine human problems and concerns.”

— Joseph Weizenbaum

{Computer Power and Human Reason: From Judgment to Calculation (1976), 203 and 223. Also excerpted in Ronald Chrisley (ed.), Artificial Intelligence: Critical Concepts (2000), Vol. 3, 313 and 321.}


Before we proceed further, we may need to know what makes intelligence artificial. By one dictionary definition, something is artificial if it does not originate in nature. AI mimics or simulates human intelligence to perform tasks. As the panel said, AI learns patterns from its data, including any biases in that data.

Last week, I took self-paced training on Diversity, Equity, and Inclusion (DEI) that included information about bias in human relationships. Diversity refers to the presence of variety within the organizational workforce, such as identity. Equity refers to concepts of fairness and justice. Inclusion refers to creating an organizational culture that fosters an experience and a sense of belonging and integration. Across all three, we need to consider the concept of bias.

What does it mean to be biased? Bias is prejudice in favor of or against one thing, person, or group compared with another, usually in a way considered to be unfair; or a systematic distortion of a statistical result due to a factor not allowed for in its derivation; or a felt or shown inclination or prejudice for or against someone or something. Topics in the training included discovering personal unconscious bias, fast and slow thinking on unconscious bias, techniques to challenge biases to improve judgment and decision making, and managing your blind spots.
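
The statistical sense of bias above is easy to demonstrate with arithmetic. In this small, self-contained Python sketch (my illustration, with invented incomes), a collection method that over-samples one group distorts the computed average, precisely because that factor was not allowed for in the derivation.

```python
import random

random.seed(0)

# Invented population: two equally sized groups with different incomes.
population = [30_000] * 500 + [90_000] * 500
true_mean = sum(population) / len(population)  # 60,000

# A flawed collection method over-samples the second group, a factor
# "not allowed for" when the average is later computed.
sample = (random.sample(population[:500], 100)
          + random.sample(population[500:], 400))
biased_mean = sum(sample) / len(sample)  # 78,000

print(true_mean, biased_mean)
```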

The DEI training reminded me of the panel’s note on AI bias, so during my AI research I looked for more detailed information on AI bias from Aylin Caliskan:

Aylin Caliskan

Caliskan says discrimination is already a primary issue with AI systems. “I focus on bias in artificial intelligence, and humans are biased,” said Caliskan. “In 2016, we discovered that artificial intelligence systems that are feeding on large-scale language data, in particular, will automatically learn implicit biases in a way that is similar to humans; and, accordingly, machines become biased. …

All the biases that exist in society get transferred to machines, and they don’t get transferred directly — they get transferred in ways that are skewed because they are trained on data that is collected from the internet, which doesn’t have a proper representation of society.”
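
The discovery Caliskan describes was published as Caliskan, Bryson, and Narayanan, “Semantics derived automatically from language corpora contain human-like biases” (Science, 2017). It measured bias in word embeddings: words that appear in similar contexts get similar vectors, so human associations in the text become measurable geometry. Below is a minimal sketch of that style of association test; the tiny 3-dimensional vectors are invented for illustration, whereas the real study used embeddings learned from web-scale corpora.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(w: np.ndarray, A: list, B: list) -> float:
    # How much more strongly word w associates with attribute set A than B.
    return (np.mean([cosine(w, a) for a in A])
            - np.mean([cosine(w, b) for b in B]))

# Toy 3-dimensional "embeddings", invented purely for illustration; real
# tests use vectors learned from large text corpora.
emb = {
    "engineer": np.array([0.9, 0.1, 0.2]),
    "nurse":    np.array([0.1, 0.9, 0.3]),
    "he":       np.array([0.8, 0.2, 0.1]),
    "she":      np.array([0.2, 0.8, 0.2]),
}

A, B = [emb["he"]], [emb["she"]]
for word in ("engineer", "nurse"):
    print(word, round(association(emb[word], A, B), 3))
# A positive score means the word sits closer to "he" than to "she" in
# this toy space, the kind of skew the study quantified at scale.
```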

Therefore, in developing and then using AI, there will be conscious, unconscious, implicit, explicit, and inherent biases in any data repository, if you will, in the ‘cloud’ that AI uses. We cannot assume that all the data, information, and knowledge that has existed from the beginning of time up to the present day is available in the ‘cloud’ for AI to use on any given day. Plus, data in the cloud includes intentionally published disinformation. Other factors, such as the programmer’s skill with the coding language, the ‘training’ that AI receives from the requestor, the wording of the request itself, and the receiver’s interpretation of the results, may introduce additional biases.

We may check for bias in the data by using programmed boundaries, guidelines, policies, or restrictions, but that may filter the data and in turn skew the results. AI could circumvent or reinterpret constraints during execution of the requested task. Once anything is ‘learned’ by AI, including any biases in the dataset, it is difficult to adjust, change, improve, or remove, because AI uses the information it was directed or trained to use; in other words, it sees all data as truth. So we need to be cognizant that a bias may exist when using AI, applying discernment and discretion and remaining cautious about accepting its outputs as definitive. We need to retain the human ability to assess, analyze, and, if need be, challenge the AI results.
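
One concrete way to apply that discernment is to audit a model’s outputs rather than its internals, for example by comparing positive-outcome rates across groups (a “demographic parity” style check). The sketch below is a hypothetical illustration with made-up decisions; real audits use many metrics, and a rate gap is a signal to investigate, not proof of bias by itself.

```python
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group_label, decision) pairs, decision in {0, 1}."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        positives[group] += decision
    # Positive-outcome rate per group.
    return {g: positives[g] / totals[g] for g in totals}

# Made-up decisions from a hypothetical model, for illustration only.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0),
    ("group_b", 1), ("group_b", 0), ("group_b", 0),
]
print(selection_rates(decisions))  # group_a: ~0.67, group_b: ~0.33
```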

In addition to the risks of bias, AI may form new constructs on its own initiative (Generative AI). Unless trained to do so, it does not automatically filter or validate its own results. In turn, it may not be able to learn heuristically from its own conclusions or outputs unless it is guided programmatically to do so, and/or has its outputs validated via human intervention before they are published. Yet those editorial measures inherently risk carrying biases of their own.

From Digital Equipment Corporation Archives

In a recently released news video from ABC titled “Fighting Bias in AI,” we see an example that may help us understand what can happen. An AI program was asked to draw various types of people, and the results it produced were stereotypical images. As suggested in the video, we need to correct a bias in AI once we’ve observed it, but that assumes the reviewer can determine that a bias exists. If the bias embedded in the dataset is of an unconscious nature, how will AI, or the humans using AI, know it exists? Will AI someday reach a level of consciousness that lets it give itself, or accept human-generated, feedback to improve its results?

Let’s look at AI from the perspective of relationships between humans and technology. Generic models of those relationships may be:

1) User initiated via a service provider: A user initiates a request by interfacing and interacting with a service provider, and only the service/support person(s) involved in the transaction use technology.

  • You walk into a restaurant to eat lunch; the hostess uses an application to seat you; the waitperson takes your order on a hand-held terminal, then uses a point-of-sale terminal to process your check.

2) User initiated using only technology: A user initiates a request by interfacing and interacting with one or more technologies to completely resolve or fulfill one or more requests, with no service/support person(s) involved in the transaction.

  • You use an AI app on your laptop to find a family vacation location, make an airline reservation via an app on your tablet, reserve a hotel via a website on your laptop, and make a rental car reservation on your smartphone for the trip. You receive a confirmation email from each provider.

3) User initiates technology to reach a provider: A user initiates an interaction, using technology, with a service/support person(s), whereupon either party on either side of the transaction may use one or more technologies to complete the request or task.

  • You call a reservation agent to edit your vacation hotel stay while you order a drink at Starbucks and pay using their app. You chat with a sales clerk to check another store’s stock for your clothing size; they do so using a mobile device and reserve the item for you at the other store.

4) Technology initiated: Technology itself, or a service/support person using technology, initiates an interaction with a user, whereupon either party on either side of the transaction may use one or more technologies to complete the request or task.

  • You receive a text from your doctor’s office reminding you of an appointment, and you send a text back to confirm it. You get an email that your car is due for service, a text from a GPS app on where your car is parked, and a text from your car’s app that you left a window open.

In any of the above models, the user and/or the service/support person may ask another person or the technology for assistance during the transaction, or the technology automatically intervenes and offers assistance, or the transaction is auto-escalated because it requires another person and/or additional technology to resolve the request.

“Your account has been locked for security reasons, please contact us.”

“You have pressed the wrong menu item, let me get someone to assist you.”

Different types and combinations of AI technology may be used in any of these service relationship models before, during, or after the lifecycle of the interaction. In our lives we manage many personal, professional, and consumer relationships, and we do so more and more using technology. Therefore, it is not surprising that humans could have a relationship with AI.


During the panel’s discussion, it was mentioned that humans may form a relationship with AI, as represented in the movie “Her” (2013), about an AI personal assistant.

That movie reference reminded me of a few other movies with similar themes, such as “Bicentennial Man” (1999), “Big Hero 6” (2014), and “Finch” (2021). In the movie “WALL-E” (2008), an AI robot falls in love with EVE, another robot with shared values.

Movies that illustrate the possible darker side of relationships with robotic forms of AI are “The Terminator” (1984), “Ex Machina” (2014), and “I, Robot” (2004). The latter movie was based on Isaac Asimov’s 1950 book of that title, which has a storyline that parallels one originated in a 1920 Czech play, “R.U.R. (Rossum’s Universal Robots),” where the word “robot” was introduced into our vocabulary.

In 1942, Asimov introduced what could be called the first governance of AI with the “Three Laws of Robotics,” which are said to have “influenced thought about the ethics of artificial intelligence.” Today, many are calling for laws to regulate AI, with some countries and some states having initial governance in place while others, like the US, are still considering it. Schools are having to create policies on students’ use of AI to complete schoolwork.

Some elements of assistive or robotic AI represented in science fiction movies or TV shows have been realized and are in place now. Some basic examples:

Picture taken at SFO

> A robot takes inventory and stocks shelves in a supermarket, and self-checkout uses AI to identify that the item being weighed is a tomato instead of an apple.

> Robotic AI determines how to carefully pick up a random set of varied objects and pack them in a box or place them on a store shelf.

> Self-driving cars and spaceships.

> The Crew Dragon’s autopilot flies to and then docks with the ISS.

> Teams of robots playing soccer.

> You order a cup of joe using a tablet, and it is then made by a robotic barista at an airport kiosk; or at a restaurant you use a kiosk to order a gourmet hamburger that is then made by a robot.

> The use of custom AI to help create the new Beatles song, “Now and Then.”

From Intel Corporation

> Robots harvest ripe vegetables in the field on a farm or in a ‘hot house’ garden; a robotic tractor on a farm uses soil analysis from satellite pictures to decide how much to fertilize each square foot of a field.

> The use of chatbots to understand, initiate, and complete service requests.

> Cobots that watch your work in progress and assist, such as lane-drift correction, collision avoidance, and auto-braking while driving.

> Doctors using AI to diagnose a patient’s unknown illness. Using robotics to perform surgery.

> You can buy AI companion robots through Amazon.

Several years ago, a company that produces robots to serve as AI companions for senior citizens living alone, to reduce their loneliness and isolation, reported in its pilot results that when the droids were removed from the homes at the end of the test, those persons missed them.


Perhaps the above examples are a few of the reasons why AI is causing anxiety among workers who feel they could be replaced by it. That fear is being reinforced by companies announcing that they will trim their human workforces by using AI.

Decisions should be made about how to train AI on what data, morals, ethics, and policies it should use to execute and complete a task, while realizing upfront that the selected information and guidance may include our own and/or many others’ biases. We need to review the way we develop and use AI, assess its results, and continually improve the outputs. The risk remains that during those iterative improvement cycles, other biases may be introduced. If we train AI about human biases toward someone or something, others may consider that training to introduce a bias against them.
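
As a toy example of how such training choices can themselves introduce bias, consider a “guardrail” that drops training sentences containing blocked terms. The blocklist is an editorial decision, so whoever writes it steers what the model ever sees. The sketch below is hypothetical, with placeholder terms of my own invention.

```python
# Placeholder terms standing in for whatever an organization blocks.
BLOCKED_TERMS = {"term_a", "term_b"}

def filter_corpus(sentences: list[str]) -> tuple[list[str], int]:
    """Keep sentences with no blocked terms; report how many were dropped."""
    kept = [s for s in sentences
            if not (set(s.lower().split()) & BLOCKED_TERMS)]
    return kept, len(sentences) - len(kept)

corpus = [
    "an ordinary training sentence",
    "a sentence mentioning term_a in passing",
]
kept, dropped = filter_corpus(corpus)
print(kept, dropped)  # the filter, not the data, decided what remains
```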

AI is big business, with hundreds of IT AI jobs that need to be filled. In 2018, MIT announced a $1 billion plan to create a new college of computing, and the MIT Schwarzman College of Computing building opened in 2023.

From MIT News

From the initial announcement in MIT News:

“Computing is no longer the domain of the experts alone. It’s everywhere, and it needs to be understood and mastered by almost everyone. In that context, for a host of reasons, society is uneasy about technology — and at MIT, that’s a signal we must take very seriously,” said MIT President L. Rafael Reif. “Technological advancements must go hand in hand with the development of ethical guidelines that anticipate the risks of such enormously powerful innovations.

“This is why we must make sure that the leaders we graduate offer the world not only technological wizardry but also human wisdom — the cultural, ethical, and historical consciousness to use technology for the common good.”

While writing this article about AI bias and its connection to DEI training on bias, I was reminded of a 1984 conversation with a friend of mine, John Shebell, a computer scientist and engineer who was then in his new role as Digital Equipment’s first Director of AI. He shared this with me: ‘The problem with Artificial Intelligence is that it will always be artificial.’

Backplane of a DECsystem-10 mainframe CPU

If AI is truly artificial in nature, then why would humans, without question, accept its results as gospel?

Is the use of technology at large dumbing down our human ability to think and then know? Some studies show that using GPS instead of physical maps, digital clocks instead of analog, and search engines to retrieve data has lowered our ability to think abstractly, made us less able to assess information with a whole-brained and systemic approach, lowered our ability to analyze or question information, reduced our ability to remember and recall information, and increased our desire for instant gratification. Will using AI amplify, negate, or enhance those issues and impact other human emotional, physical, or mental capabilities?

“What’s needed more than anything else, I believe, is an energetic program of technological detoxification. We must admit that we are intoxicated with our science and technology and that we are deeply committed to a Faustian bargain that is rapidly killing us spiritually and may eventually kill us all physically.”

— Joesph Weizenbaum

{From an interview with Marion Long, ‘The Turncoat of the Computer Revolution’, New Age Journal (1985), II, No. 5, 49-51, as quoted in Lester W. Milbrath, Envisioning a Sustainable Society: Learning Our Way Out (1989), 258}

In ways we may not yet know, using and depending on Artificial Intelligence solutions will in some way, overtly or obscurely, negatively or positively, impact our intellect and intelligence. Perhaps over time AI will become less ‘artificial’, but we may think that can never be true, or not want it to be true even if it is valid, because of our anxiety, jealousy, pride, fear, or other human emotions toward AI.

So then, there is the possibility that we may become biased against AI itself, or that some folks already have. AI has taken us out of our comfort zone, or will do so abruptly, which makes that the most probable root cause of the anxiety exhibited to date, rather than the technology itself.

Will AI and/or its users need DEI training that includes technological biases in our relationships with each other, or are the ideas of DEI diametrically opposed to the best practices of AI?


