The Key Ingredient To AI Success: Our Values
Bruce Temkin
Human experience visionary, dynamic keynote speaker and executive advisor who helps organizations better understand and cater to human beings in ways that drive success and improve humanity
Artificial Intelligence (AI) can enhance humanity, and it can also destroy it. It all comes down to one thing: values.
Values Before Ethics
People are starting to talk about "Ethical AI," correctly identifying that we need some controls over how we use and deploy AI. But those discussions often miss a key point: ethics are subjective. They exist to reinforce a set of values, and those values can vary widely across people.
Of course, everyone tends to believe that there is only one ethical standard, as they confine their thinking to their own values. To demonstrate, I’ll jump into a very controversial topic: abortion.
A pro-lifer may see a woman's decision to have an abortion as ethically wrong, while a pro-choicer (I'm one of those) may see it as ethically right. Both positions are ethically consistent with their holders' values. Their differences aren't in ethics, but in values: one group values a strict interpretation of religious texts, while the other values an individual's right to make decisions about their own body.
As you can see, aligning values can be very difficult. But we can never agree on ethics until we expose and debate our values. Now let’s jump out of the abortion discussion (probably not soon enough), and get back into my discussion on AI.
Introducing Values-Driven AI
If we want AI to operate consistently with our values, then we need to adopt explicit practices to make it happen. That’s why I’m introducing a new approach called “Values-Driven AI” (VDAI), defined as:
Applying AI in an ethical manner based on a clearly defined set of values
There are four steps to VDAI:
Step #1: Values Clarifying
For most of our lives, we operate using an implicit set of values that we amass over time. Most of the people we work with have grown up with similar backgrounds, so they naturally share a lot of the same values. But AI does not "grow up" in the same way, as it does not have access to the subtle clues about right and wrong that human beings experience during their early lives. Lacking this grounding, AI will look for clues about right and wrong in how people behave. But, as we all know, human beings often make decisions and act in ways that are inconsistent with their values.
In order to drive AI in what we would consider an ethical direction, we need to be explicit about our values. Here are some steps for getting clarity around your values.
Step #2: Values Nurturing
We have to help our AI understand and behave consistently with our values. This is not a one-time effort; it is more like the ongoing process of raising a child.
AI is like an infant, eager to learn what the world teaches it. And just as with a child, we can't fully control what AI learns, so we need to instill it with a strong sense of values to guide its learning and actions.
As a simple example, let's say that we offload our executive recruiting to AI, asking it to find the best candidates for a new Senior VP role. If it is only looking for the candidates who are most likely to succeed, it might favor a white, middle-aged man, since that is the dominant profile among the company's senior executives. That's one of the flaws of generic AI models: they tend to reinforce ingrained biases.
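To make the recruiting example concrete, here is a minimal sketch of the kind of bias audit a values-driven team might run on a screening tool's outputs. The candidate data, group labels, and decisions are all invented for illustration; the 80% comparison is the "four-fifths rule" commonly used in US hiring-bias analysis, applied here only as one possible check.

```python
# Hypothetical sketch: auditing an AI screening tool for disparate selection
# rates across demographic groups. All data below is invented.

def selection_rates(candidates):
    """Compute the rate at which each demographic group is advanced."""
    totals, advanced = {}, {}
    for c in candidates:
        g = c["group"]
        totals[g] = totals.get(g, 0) + 1
        advanced[g] = advanced.get(g, 0) + (1 if c["advanced"] else 0)
    return {g: advanced[g] / totals[g] for g in totals}

def four_fifths_check(rates):
    """Flag groups selected at less than 80% of the highest group's rate
    (the 'four-fifths rule' heuristic from hiring-bias audits)."""
    best = max(rates.values())
    return {g: (r / best) >= 0.8 for g, r in rates.items()}

# Invented screening outcomes for two groups of candidates.
candidates = [
    {"group": "A", "advanced": True}, {"group": "A", "advanced": True},
    {"group": "A", "advanced": True}, {"group": "A", "advanced": False},
    {"group": "B", "advanced": True}, {"group": "B", "advanced": False},
    {"group": "B", "advanced": False}, {"group": "B", "advanced": False},
]

rates = selection_rates(candidates)   # A: 0.75, B: 0.25
print(four_fifths_check(rates))       # {'A': True, 'B': False}
```

A failing check like group B's would be exactly the kind of "ingrained bias" signal that should trigger human review before the model's recommendations are trusted.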
Think about a world where all of our worst tendencies are automated and replicated, and any bad actions can be amplified at a rapid pace. Pretty nasty! We can already see this happening on social media sites, where AI is used to serve people content that keeps them engaged, often without any consideration for the quality or accuracy of that information.
In order to nurture our values within our AI, we need to:
Step #3: Values Sharing
If I had to pick one recommendation to use in any situation, it would be to "be transparent." So many of the world's problems either stem from a lack of transparency or would be dramatically improved with greater transparency. AI is no different, and may even be one of the most important areas for transparency. Why? Because AI is a black box: most people have no idea how it operates.
While it is not feasible to explain to most people how AI works, it is important that they understand the parameters driving its actions: its values. This gives people the information they need to decide whether and how to interact with an AI-driven system, and will make them more comfortable with those interactions.
People who either use an application driven by your AI or are affected by decisions made by your AI should have easy access to information about the values the AI uses to shape its actions.
To ensure transparency, make it easy for all stakeholders to discover:
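One lightweight way to operationalize this kind of transparency is a published "values card," loosely modeled on the model-card idea: a single structured document, readable by both humans and machines, that states the system's guiding values. Everything below (the system name, values, contact address) is a hypothetical example, not a prescribed format.

```python
# Hypothetical sketch: a "values card" published alongside an AI system.
# All names and contents are invented for illustration.
import json

values_card = {
    "system": "ExampleCorp candidate-screening assistant",
    "values": [
        "Evaluate candidates only on job-relevant criteria",
        "Never use protected attributes (race, gender, age) as inputs",
        "Keep a human reviewer in the loop for every rejection",
    ],
    "data_sources": ["internal job-history records (anonymized)"],
    "review_cadence": "annual",
    "contact": "ai-governance@example.com",
}

def render_card(card):
    """Render the values card as plain text for end users."""
    lines = [f"System: {card['system']}", "Guiding values:"]
    lines += [f"  - {v}" for v in card["values"]]
    lines.append(f"Reviewed: {card['review_cadence']}; contact {card['contact']}")
    return "\n".join(lines)

print(render_card(values_card))       # human-readable disclosure
print(json.dumps(values_card, indent=2))  # machine-readable form
```

The point is less the format than the habit: if the values can't be written down this plainly, they probably aren't clear enough to guide the AI.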
Step #4: Values Maintaining
While an individual's values are relatively stable over time, an AI's guiding values need to be constantly re-evaluated. Why? First of all, since this is a brand-new practice, there will undoubtedly be ways to improve how you describe the organization's values. Also, organizations shift over time, so it's important to ensure that the values being used actually reflect the current state of the organization.
To ensure that your AI's guiding values remain aligned with the organization's actual values, it's critical to hold an annual review of your AI efforts where you:
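An annual review is easier to run if each declared value is paired with a measurable proxy. The sketch below shows one hypothetical way to do that: map each value to a metric and a minimum acceptable threshold, then report which checks currently fail. The metric names, thresholds, and numbers are all invented assumptions for illustration.

```python
# Hypothetical sketch of a periodic "values audit": each declared value is
# paired with a measurable check; the audit reports which checks fail.

def audit(checks, observations):
    """Return the names of checks whose observed metric misses its threshold
    (or is missing entirely)."""
    failures = []
    for name, (metric, threshold) in checks.items():
        value = observations.get(metric)
        if value is None or value < threshold:
            failures.append(name)
    return failures

# Declared values mapped to measurable proxies: (metric name, minimum value).
checks = {
    "fair selection across groups": ("min_group_selection_ratio", 0.8),
    "human review of rejections": ("rejections_human_reviewed", 1.0),
    "transparency page available": ("values_page_uptime", 0.99),
}

# Metrics gathered since the last review (invented numbers).
observations = {
    "min_group_selection_ratio": 0.62,
    "rejections_human_reviewed": 1.0,
    "values_page_uptime": 0.995,
}

print(audit(checks, observations))  # ['fair selection across groups']
```

A failure here doesn't decide anything by itself; it simply tells the review team where the AI's behavior and the organization's stated values have drifted apart.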
It’s Time To Establish AI Governance
I haven’t introduced VDAI as an exact blueprint, but instead as a model to spur critical discussions. AI is an important enough emerging capability that organizations need to proactively plan for it before it becomes widely used in uncontrolled ways.
Every large organization needs to establish some type of AI governance model with some resources and an executive overseeing the effort. This team needs to explore and establish guidelines around many different areas, such as:
The bottom line: AI will grow up to be whatever we teach it to be, so we need to be good guardians.