Can AI be Wise?
Welcome to Leading Disruption, a weekly letter about disruptive leadership in a transforming world. Every week, we'll discover how the best leaders set strategy, build culture, and manage uncertainty to drive disruptive, transformative growth. For more insights like these, join my private email list.
It’s no secret that generative AI technology is taking the world by storm. According to a recent joint study by Penn’s Wharton School of Business and OpenAI, bot technology will alter at least 80 percent of all jobs in the near future.
But the road to that future has its potholes. One primary issue with current iterations of AI is their tendency to hallucinate—inventing plausible-sounding facts and data that are outright fabrications. Indeed, LLM bots such as the popular ChatGPT frequently fall short when users ask them to provide information on esoteric topics or perform tasks that require factual accuracy.
This leads us to a crucial question I discussed on this week’s LinkedIn Livestream: can we build artificial intelligence that we can trust to make wise decisions and take wise actions?
Given my curiosity about generative AI and my research on how wisdom can improve organizations and their leaders, I believe it’s possible—but not without considering a few things.
Book Smart, Life Smart
To determine whether AI can ever be truly wise, I think it’s pertinent to examine what wisdom is in the first place.
Wisdom requires knowledge and intelligence, but those two things alone do not guarantee wisdom. We all know someone we identify as “book smart” who can rattle off facts and figures at an impressive rate but still struggles to apply that knowledge to real-world situations.
Experience plays a prominent role in wisdom—those “life smart” people who have seen and done it all and use those encounters to make sound decisions in new scenarios. Then again, there are always the “old souls” who are outliers and seem to make wise choices without relevant experience.
Likewise, our perception of wisdom can, at times, be fuzzy. Think about it: who are the people we typically deem wise in our society? Judges, teachers, and medical professionals work toward gaining knowledge, intelligence, and experience to impart their wisdom in critical situations.
But plenty of people in those positions should be wise and yet fall short. Often, the unwise members of these groups lack the emotional intelligence or self-awareness to set aside their own biases and pull in additional perspectives—another critical component of wisdom.
I’m optimistic about this area of AI-generated wisdom because AI can pull in different perspectives and process them in a way the human brain cannot. It’s possible that AI could develop a purer form of wisdom in that it can theoretically exist free of the emotional bias that so often clouds human thinking.
But how, then, can we develop AI to account for all of those competing—and sometimes conflicting—ideas and make a wise judgment in the moment?
That all comes down to context.
Context is Everything
Humans will often unconsciously consider the context of a specific situation when trying to make a decision. A medical professional, for example, can prescribe a wise course of action for their patient not just based on symptoms but on their previous relationship with the patient, their behavior and mental state, their age, race, cultural background, geography, and a multitude of other factors that are not easily quantified.
The problem is that wisdom, while it may have an expansive definition that I outlined above, means different things to different people because of the variability in our values. The practice of wisdom depends on the context of each interaction with a person or organization.
Understanding context is a skill that most AI models to date still lack. LLMs can only process the information you feed them, meaning context must be artificially built over time. I think it’s possible to do this, but it will be difficult.
Say I am using AI technology to plan a trip, and I want the bot to decide which flight to take. The bot will have to decide which I value more: the cost of the flight or the amount of time I have to spend flying. That trade-off must account for my definition of well-being—and it might differ greatly from someone else’s definition.
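This trade-off can be made concrete with a small sketch. Everything below is hypothetical and invented for illustration—the flight data, the weight values, and the scoring function are not from any real booking system—but it shows how a bot might rank the same flights differently depending on how a traveler weighs cost against time.

```python
def score_flight(flight, weights):
    """Lower score = better fit. Combines roughly normalized
    price (per $1,000) and duration (per 10 hours), weighted
    by what this traveler says they value."""
    return (weights["cost"] * flight["price_usd"] / 1000
            + weights["time"] * flight["hours"] / 10)

# Two invented flight options for the same route.
flights = [
    {"name": "red-eye", "price_usd": 250, "hours": 9},
    {"name": "nonstop", "price_usd": 600, "hours": 5},
]

# Two travelers with different definitions of well-being:
# one prizes savings, the other prizes time.
frugal = {"cost": 0.8, "time": 0.2}
hurried = {"cost": 0.2, "time": 0.8}

best_for_frugal = min(flights, key=lambda f: score_flight(f, frugal))
best_for_hurried = min(flights, key=lambda f: score_flight(f, hurried))

print(best_for_frugal["name"])   # the cheap red-eye
print(best_for_hurried["name"])  # the faster nonstop
```

The point of the sketch is that the "wise" answer is not in the flight data at all—it lives in the weights, which encode one person's values and would be wrong for someone else.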
Though this is a thorny problem, I do think it’s solvable. Developers must look at manufacturing AI wisdom through a hybrid approach—both defining wisdom in a general sense and building context from the bottom up. The more information we can give AI to process, the better.
Human Education
While I’m confident that future iterations of AI tech will begin to tackle these more significant problems, I think it’s also a good idea for us to start meeting it halfway. Just as media literacy—learning how to evaluate information for bias and accuracy—is now taught in schools, we should also consider developing AI literacy curricula. Knowing when to trust AI will be critical in the coming years, and we should begin educating ourselves on the markers of real or fake information—and AI-fueled wisdom.
Next week, I’ll discuss another way to incorporate wisdom into our decision-making: reflection through the post-mortem process. I’ve found that holding post-mortems in our organizations is one of the best ways to grow our learning and experience, and I’ll share tips on how to run good post-mortems for your project teams and yourself. Join me Tuesday, June 20, at 9 am PT to learn more.
Your Turn
How do you see AI technology adopting wisdom? Will we ever be able to trust its decision-making abilities fully?
Senior Design Verification Manager | Keen on IOT
1 month ago: I think the key lies in how well we can train AI models using human minds that have attained wisdom through practical life experience. This approach could even selectively strengthen an AI model's weaker areas, such as business wisdom, emotional wisdom, and cultural wisdom.
Corporate & Executive Communications Strategist
1 year ago: In the best-case scenario, AI extracts selective, existing human-produced data...both objective and subjective. While AI may be accurate on the surface...in the eye of the requester...it will still be flawed because AI needs human perspective, broad context, and emotional intelligence to feed into the algorithm, which will eventually drive baseline expectations. I argue that human wisdom doesn't exist in the world of AI; wisdom can only persist when humans project healthy, well-balanced personal experiences in the form of data that addresses all the variations of context and accurately suggests appropriate, human-based scenarios, including the ability to adjust guidance based on unique circumstances and contextual nuance. While AI prompts can get us part of the way there, insightful critical thinking, emotional maturity, and expected outcomes will bring it home. Otherwise, the result will be a short-sighted, unilateral, and subjective (and flawed) point of view. And realistically, I can't wait for the "magic" to present itself.
Strategic Business Services. EV Maven. TURO Maven. BuySellTrade4EVs . com. 10,000+ Hours EVSE & EVs. Entrepreneur, Author & Educator. Publisher: Print, eBooks, Mags & Apps. USMC Veteran. #IDme
1 year ago: Much food for thought, Charlene!
Certified ISO 9001 Lead Auditor
1 year ago: Thank you for this conversation. This is a very thoughtful and insightful approach. In my opinion, I would note, things like ethics, morality, and biases are human concepts. That is a fact. Whoever programs and produces these AI systems will (un)consciously shape/limit the AI by incorporating their own ethics, biases, and morality into the program. The nuances that come from individual experiences and emotional intelligence play heavily into the growth and expansion of wisdom. This is an aspect that will require many more generations of AI development to get close to what we regard as true wisdom.