Creating a transparent and unbiased AI for our collective organizational and societal betterment
Most of us live part of our lives in the epoch of digital, automated technology, yet nearly all of us have little or no voice or representation in how this new world is constructed and presented back to us, how it is governed, or how it is ultimately interpreted in a fair and representative way. This world is articulated and guided by data scientists and data engineers who, in many cases, speak a language that most of us are unfamiliar with. Worse, they do not always seem to understand how to represent how we live, either as individuals or as a collective society.
How, then, can these individuals write our ethnic, cultural, gender, age, geographic, and economic diversity into their AI models without bias or ethical imbalance?
But here is the thing: they do!
They do because they have no choice. Today, we struggle to assemble a balance of skilled programmers, developers, coders, and critical thinkers from the wide and diverse ecosystem of technology, society, culture, and gender that is required to write clean, open, ethical, and unbiased AI. An AI that is better for all of us.
As it stands today, the AI that we produce will benefit some of us far more than others, depending upon who we are, our gender and ethnic identities, how much income or power we have, where we are in the world, and what we want to do.
We live in a world today that carries a legacy largely unconcerned with diversity or equality, and as these systems become automated, untangling and teasing out their meaning for the rest of us becomes much more complicated.
The Role of AI Algorithms
The world of AI is fundamentally a world of advanced mathematical algorithms and functions built into neural networks and propelled by advanced technology architecture developed and delivered by companies and organizations such as Dell Technologies, IBM, Google, Apple, Microsoft, and many others.
AI algorithms now play an increasingly large role in modern society, and it has become increasingly important to develop AI algorithms that are not just powerful and scalable but also transparent to inspection and ethical scrutiny. Some challenges of machine ethics are much like other challenges involved in designing machines. Designing a robot arm to avoid injuring humans, for instance, is no more morally fraught than designing a flame-resistant piece of furniture. It involves new programming challenges, but no new ethical challenges.
When AI algorithms take on cognitive work with social dimensions (the "thinking tasks" previously performed by humans), the algorithm inherits the social requirements and responsibilities that come with that work. It will also become increasingly important that AI algorithms be robust against potential manipulation. And when an AI system fails, who takes the responsibility: the programmer, the developer, the coder, the scientist or engineer, or the end user?
The moral constraints to which we are now subject in our dealings with contemporary AI systems are grounded entirely in our responsibilities to other human beings, not in any way to the systems themselves. The artificial intelligence community is starting to awaken to the profound ways our algorithms will impact society and is now attempting to develop ethical guidelines for our increasingly automated world. This task is a razor's edge that the innovators, pioneers, and guardians of this new world must now walk responsibly.
The Two Basic Types of AI
There are two basic types of AI in this new world. There is what we call “Weak AI” and there is what we call “Strong AI”.
Weak AI is where the computer is merely used as an instrument for investigating cognitive processes. In other words, the computer simulates intelligence.
Strong AI can be defined as computer systems that are intellectual, self-learning, and able to 'understand' by means of the software or programming definitions used to build the AI initially. Such an AI is able to optimize its own behavior on the basis of its own former behavioral characteristics and the current, or even future, experience of the AI itself.
Here we experience the automatic networking of the computer system with other machines, which leads to a dramatic scaling effect within the AI ecosystem.
As a scientific discipline, AI includes several approaches and techniques, such as machine learning (of which deep learning and reinforcement learning are specific examples), machine reasoning (which includes planning, scheduling, knowledge representation and reasoning, search, and optimization), and robotics (which includes control, perception, sensors and actuators, as well as the integration of all other techniques into cyber-physical systems).
All of these techniques are, of course, open to potential bias when the software or program code is written and developed, and herein lies the challenge. Bias is a prejudice for or against something or somebody that may result in unfair decisions. It is known that humans are biased in their decision making. Since AI systems are designed by humans, it is very possible, and often probable, that humans inject their bias into them, even unintentionally.
Many current AI systems are based on machine learning, which is data-driven. Therefore, a predominant way bias can be injected is in the collection and selection of training data. If the training data is not inclusive and balanced enough, the system can learn to make unfair decisions. At the same time, AI can help humans identify their own biases and assist them in making less biased decisions.
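This mechanism can be made concrete with a minimal, purely illustrative sketch. All the data below is synthetic and hypothetical: two groups whose feature-label relationship happens to be opposite, where one group dominates the training pool. Any learner that simply minimizes average error will fit the majority group's pattern and fail the under-represented one, not through malice, but through data collection alone.

```python
# Hypothetical illustration: a single learned rule fits the majority
# group and fails the minority, purely because of how the training
# data was collected.

def train_threshold(data):
    """Pick the rule (x > 0.5 -> 1) or its inverse, whichever scores
    better on the pooled training data: a stand-in for any learner
    that minimizes average error."""
    rule = lambda x: int(x > 0.5)
    inverse = lambda x: int(x <= 0.5)
    score = lambda f: sum(f(x) == y for x, y in data) / len(data)
    return rule if score(rule) >= score(inverse) else inverse

# 900 majority-group samples: label is 1 when x > 0.5
majority = [(i / 1000, int(i / 1000 > 0.5)) for i in range(900)]
# 100 minority-group samples: the opposite relationship
minority = [(i / 100, int(i / 100 <= 0.5)) for i in range(100)]

model = train_threshold(majority + minority)

acc = lambda data: sum(model(x) == y for x, y in data) / len(data)
print(f"majority-group accuracy: {acc(majority):.2f}")  # perfect
print(f"minority-group accuracy: {acc(minority):.2f}")  # near zero
```

The pooled accuracy of this model looks excellent (around 90%), which is exactly why such failures hide behind aggregate metrics unless accuracy is reported per group.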
Towards Trustworthy AI
Ethics, of course, also becomes an obvious potential problem when programming software and code for AI. "Ethical purpose" indicates development, deployment, and use of AI that ensures compliance with fundamental rights and applicable regulation, and respects core principles and values. This is a critical element that helps us achieve trustworthy AI.
Trust is a prerequisite for people and societies to develop, deploy, and use AI. Without AI being demonstrably worthy of trust, subversive consequences may ensue, and its uptake by citizens and consumers may be hindered, undermining the realization of AI's vast economic and social benefits.
To ensure those benefits, our vision at Dell Technologies is to collaborate through both professional and societal communities to develop a safe and trusted ecosystem that supports and propels the ethical, unbiased use of technology and inspires trustworthy development, deployment, and use of AI. The collective aim is to foster a climate most favorable to AI's beneficial innovation and uptake.
Trust in AI includes trust in the technology through the way it is built and used by human beings, and trust in the rules, laws, and norms that govern AI. It is critically important, then, that we understand the method, and the importance of methodology, when building an AI model. This helps us see that transparency and education are essential, both for the data science community and for the wider public. It also reminds us that while the AI algorithm is making the decision, we humans initially taught it to do so.
However, the more an AI system leverages machine learning and neural networks to crunch immense amounts of data, the less likely a human is to understand how the AI arrived at its conclusion. This is what we call the black-box problem, found in most LLMs today. If a system is so complicated that a user doesn't understand how it works, how can we trust the decisions it makes?
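One way practitioners begin to address the black-box problem is to treat the model as an opaque function and probe it from the outside. The sketch below is hypothetical (the model, its weights, and the feature names are stand-ins, not any real system): it estimates each input's influence on the output using a simple finite-difference sensitivity probe, which is what an auditor might do to spot a feature acting as a proxy for a protected attribute.

```python
import math

def opaque_model(features):
    # Pretend this is a trained network whose internals we cannot read;
    # the weights are hidden from the "user" in practice.
    w = [0.9, 0.05, 0.05]
    s = sum(wi * xi for wi, xi in zip(w, features))
    return 1 / (1 + math.exp(-s))  # sigmoid score in (0, 1)

def sensitivity(model, x, delta=0.01):
    """Estimate how much each feature moves the output
    by bumping it slightly (finite differences)."""
    base = model(x)
    grads = []
    for i in range(len(x)):
        bumped = list(x)
        bumped[i] += delta
        grads.append((model(bumped) - base) / delta)
    return grads

# Hypothetical applicant: e.g. income, age, postcode score
applicant = [0.7, 0.4, 0.9]
influence = sensitivity(opaque_model, applicant)
top = max(range(len(influence)), key=lambda i: abs(influence[i]))
print("most influential feature index:", top)
```

Probes like this do not open the box, but they give a human something inspectable, which is a first step toward the transparency and trust discussed above.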
We see this more and more as AI enters our everyday lives. The talk of the town these days is rooted in the ethics of the autonomous vehicle rather than in the technology and mathematics that built the capability. "What will the machine decide to do in a life-or-death situation?" or "Who will the AI decide to save, the old man or the young child?" These are ethical questions that need to be addressed collectively, at a community level, not by individuals or individual groups. This is where we look to remove discrimination from algorithms, enhance fairness in outcomes, and write well-governed, unbiased, clean, new code into our automation systems. Not code steeped in the heritage of an "old world" belief system carried from one life system into another!
Collectively we need to start asking the right questions and create an "ethical purpose" for the AI. For instance, as business leaders we need to look at what we are asking the AI to do, and why.
We now need to choose carefully the tasks, objectives, and historical data that we assign to AI, considering questions such as "Does the AI free up my staff to take on more fulfilling human tasks?", "Does AI improve customer experience?", and "Does it allow us to offer a better product, improve an existing process, or expand our organization's capabilities?"
Building Emotion AI
At Dell Technologies, we look at these things very closely. We apply AI where we see short, medium, and long-term collective gain for the company, the company’s technology, and technology users, as well as the wider society and ecosystem that we connect with and live in.
We have spent over a decade and a half looking at and working with AI, and we are determined to drive an ecosystem that is open, transparent, safe, and massively differentiating by its very nature. An ecosystem that embraces human progress, potential, and possibilities through innovating with technology.
Our mission at Dell Technologies is to humanize technology, and one way we are doing this is by building "emotion artificial intelligence". This program is over 10 years old, and in that decade we have developed software that can recognize human emotions and apply that capability to everything from helping brands improve marketing and advertising messages to sentiment polling.
This is exciting for us, as we believe that emotion AI, or artificial emotional intelligence, isn't just going to change the way we connect and communicate with our devices. We believe it's going to fundamentally change how we communicate and connect as people.
The idea is that the technology will allow machines to identify emotions in the same way as humans can. We will now be working and living with machines that are able to tell the difference between a smile and a smirk, for example, or allow online learning apps to detect how engaged a student is and adapt the system accordingly.
Our passion and our aspiration at Dell Technologies is to encourage the creativity and intelligence of humans and machines in a way that propels our world into a new era: an era of diverse, open, collaborative, and positive innovation that will enhance our collective experience of life. The journey continues!
Author: Marc O’Regan – CTO EMEA - Dell Technologies