50 Shades of AI

Artificial intelligence is hot. White hot. But AI is, both historically and currently, one half of a dichotomy. Like black vs white. Like human vs machine. And dichotomies -- to me, at least -- are a bright red flag signaling urgently that we have to stop and think not twice but three times. For anything, there are always more options than just two. 

Dichotomies make it easier to think about very complex things by describing them in terms of opposites. For example, artificial intelligence is the opposite of (augmented) human intelligence, and I'll call this the AI dichotomy. Since the very beginnings of work on AI (see the engaging historical account in John Markoff's Machines of Loving Grace), practitioners have used this dichotomy to think about their machines. And this dichotomy is a stark one:

  • computers that can do smart stuff with no humans needed -- i.e., human-free machine intelligence ("real" AI)

vs

  • computers doing smart stuff that only makes sense when there are humans around to use it -- i.e., machines to augment human expertise (intelligence augmentation or "IA", the opposite of AI)

Of course, we need humans to design, build, and test any of these systems. The "no humans needed" part refers to run time, when the systems are functioning in real life. The level of intelligent human involvement at run time is what defines the dichotomy here: none vs lots.

Translation technology has great examples of this dichotomy. On the one hand, machine translation (MT) systems are "real" AI: they ingest documents written in one language and spit out translations 24 hours a day, on any topic you want, with never a human translator in sight. And they've done so since the 1970s, with ever-increasing (and still unpredictable) accuracy. On the other hand, there are "translation memory" (TM) systems that augment translators' intelligence, or at least their memory. These TM systems store thousands of sentences along with their human-created translations, and for each new sentence they retrieve the best-matching suggestions from among those stored translations. Human translators choose from and adapt these suggestions to create their final translations. Clearly, translation memory systems don't do much at all without human users.
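To make the contrast concrete, here is a minimal sketch (in Python, with hypothetical names and a deliberately crude similarity measure) of the kind of lookup a TM system performs: it only stores and ranks human translations; a human still does all of the actual translating.

```python
# Minimal sketch of a translation-memory lookup (hypothetical names,
# deliberately crude similarity); real TM systems use far more
# sophisticated fuzzy matching and much larger stores.
from difflib import SequenceMatcher

class TranslationMemory:
    def __init__(self):
        self.entries = []  # (source_sentence, human_translation) pairs

    def add(self, source, translation):
        """Store a human-created translation for later reuse."""
        self.entries.append((source, translation))

    def suggest(self, new_sentence, top_n=3):
        """Rank stored translations by similarity to the new sentence.

        The system only retrieves; the human translator chooses from and
        adapts these suggestions to produce the final translation.
        """
        scored = [
            (SequenceMatcher(None, new_sentence, source).ratio(), source, translation)
            for source, translation in self.entries
        ]
        scored.sort(key=lambda item: item[0], reverse=True)
        return scored[:top_n]

# Usage: the machine suggests, the human decides.
tm = TranslationMemory()
tm.add("The cat is on the mat.", "Le chat est sur le tapis.")
tm.add("The dog is in the garden.", "Le chien est dans le jardin.")
print(tm.suggest("The cat is in the garden."))
```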

But a dichotomy is a very crude sketch of reality, and the price of this kind of simplification is very high: we leave out a great deal of information and many additional considerations. If you think of dichotomies not as mutually exclusive opposites but as the endpoints of a continuum, then you can see how opposites coerce us into not thinking about what's in the middle. As if the middle were not important or didn't exist. Humans or computers; no other options. As if the endpoints were the only options worth thinking about.

What could be in the middle between these total opposites: machine intelligence and human expertise? What's in Markoff's "common ground between humans and robots"? 

The middle is where you'll find the 50 shades of AI: the many ways to mix and match machine intelligence with varying amounts of human expertise. These are the options and opportunities that the AI dichotomy makes us blind to; the very many possibilities besides the focus on human-free intelligence that seems to have hypnotized and horrified so many people. 

Hybrid Intelligence

My favorite part of the 50 shades is what I call hybrid intelligence systems. These are systems where humans and machines do the same intelligent tasks and then combine results, each learning from the other. 

The key idea is to leverage the best of both machine intelligence and human expertise rather than exclude one or the other out of hubris or habit.

Hybrid intelligence is very different from AI because humans play a key role in hybrid systems. Real AI is designed to supplant humans -- to do their work -- not to learn with them. Self-driving cars, automated investing systems, and machine translation are not designed with affordances for real-time feedback or control. Humans can't guide them or teach them; only sit back and watch in awe or in horror. But guiding sophisticated AI through complex, variable, messy situations like traffic, unpredictable market fluctuations, and nuanced translations can only yield better solutions than AI alone. So where are the "driveable", hybrid systems that could help with these complex problems?

Hybrid intelligence is also very different from intelligence augmentation because in hybrid systems the technology plays a central, not a supporting, role. IA systems are designed to support human expertise by doing different work, not by doing the same tasks in a different way. Word processors don't write; translation memory systems don't translate; spreadsheets don't create formulas. They sit and wait for humans to do the real work, then help a bit. Humans can't ask these systems for an appropriate next sentence or for a better formula. But sophisticated technology exists that could guide humans through complex, variable processes like writing, translation, and many others. So where are the super-smart hybrid systems that could help in these settings?

Translation technology also offers a clear example of hybrid intelligence that's already in use: hybrid-intelligence translation, more often known as adaptive MT. In these systems (see my favorite example at lilt.com), both translator and machine translate the same sentences, each with their own resources and using different strategies. Humans show more nuanced judgements of translation adequacy, so they decide which of the possible translations (generated both by people and by computers) fits best in a particular context or document -- the human translators drive the system. This, of course, leads to a huge improvement in the final quality of the translations over machine translation alone.

Moreover, in these hybrid-intelligence translation systems, the computer registers, learns from, and re-uses human corrections and choices in real time: an error fixed in one sentence improves the machine's suggestions for the next sentence. This creates a quickly adapting virtuous cycle: the human improves the machine's translations, and the machine offers better candidates to the human. This has a huge, positive impact on the speed of translation while maintaining the best quality. Teams that use this technology all contribute to teaching the system, which repays them by ensuring much better consistency: the translations quickly converge on agreed-upon terminology and phrasing -- at the system's prodding. In the end, the quality of the translations is far superior to autonomous MT, and the speed and consistency are far superior to unaided human translation. Plus, the system accumulates all this expertise in a reusable form, which makes the next projects that much more efficient.
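Here is a toy sketch of that virtuous cycle, under stated assumptions: the machine_translate backend and the class and method names are hypothetical, and real adaptive MT systems adapt full translation models rather than a simple store of past corrections. The point is only the shape of the loop: the machine proposes, the human decides, and the machine learns immediately.

```python
# Toy sketch of the adaptive feedback loop (hypothetical names); real
# adaptive MT retrains or adapts the underlying model, not just a
# store of past corrections.
class AdaptiveTranslationLoop:
    def __init__(self, machine_translate):
        self.machine_translate = machine_translate   # assumed baseline MT engine
        self.corrections = {}                        # human choices learned so far

    def propose(self, sentence):
        """Machine proposes a translation, reusing earlier human corrections first."""
        return self.corrections.get(sentence) or self.machine_translate(sentence)

    def record(self, sentence, human_translation):
        """The human drives: the final choice is registered immediately,
        so the very next sentence already benefits from it."""
        self.corrections[sentence] = human_translation


# Usage: propose() -> human reviews and corrects -> record() -> better proposals.
loop = AdaptiveTranslationLoop(machine_translate=lambda s: f"<raw MT for: {s}>")
draft = loop.propose("Press the red button.")             # machine's candidate
loop.record("Press the red button.", "Appuyez sur le bouton rouge.")
print(loop.propose("Press the red button."))              # now the corrected version
```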

We learn the same lesson from other intellectually challenging activities like chess. IBM's Deep Blue computer beat world chess champion Garry Kasparov 20 years ago. Not long after that, centaur chess competitions appeared, pitting hybrid systems (teams with both computers and humans) against each other. Interestingly, the best AI did not win these competitions, nor did the very best humans. The winners were most often competent (not champion) players who were good at evaluating the options that different AI systems offered. The winners created hybrid intelligence to leverage the best of both humans and AI, and this strategy succeeded for tasks as disparate as translation and chess.

Humans + AI beats AI alone, so why focus on just AI?

Autonomous AI is a grandiose, inspiring goal that is driving mountains of investment and waves of new thinking. But it is, in principle, science fiction with a delivery date generations in the future. Today, focusing only on autonomous AI makes no sense, except on a small scale as an inspirational goal for risky pure research with some possible very-long-term payoff.

Hybrid intelligence systems, on the other hand, are down-to-earth goals with much more likely short-term payoff. And they're still very cool. Couldn't we have hybrid-intelligence cars that help the driver avoid accidents, plan trips, and stay awake rather than totally replace her? Hybrid-intelligence invest-o-tron AIs that guide, monitor, and learn from expert human investors instead of bumbling along on their own? Hybrid intelligence that helps you read about unfamiliar topics to learn faster and understand them more deeply? "Driveable" machine learning that humans could guide when there's not enough relevant, reliable data for autonomous learning -- a kind of machine teaching? These kinds of driveable AI would accelerate progress for practitioners everywhere by helping to create the vast quantities of high-quality data and reliable knowledge necessary for even more intelligent AI.

"AI" has the potential to mean far more than just autonomous intelligent machines. We need more investors, entrepreneurs, and engineers to exploit the opportunities that are waiting among the 50 shades of AI between machine intelligence and human expertise.

Steven Macramalla

Author of "Unleash the Dragon Within: Transform Your Life with the Kung-fu Animals of Ch'ien-Lung"

8 months

Great article. Refreshing perspective, and a message that needs to get out there more.

Thomas Mansūr

~ loading new project ~

6 years

Loved the article, dad! Reminds me of Elon's Neuralink project (adding an AI layer into our brains that we can interact with to increase human-computer information bandwidth) and also the not-so-clear race on assistive technologies, like Siri, Alexa, or Google. I've also been experimenting with Generative Design, and the possibilities of co-creation are truly amazing... Thanks for the read! :)

Andy Way

Retired | Emeritus Full Professor Computing @ DCU | Co-founder ADAPT Centre | MT journal editor 2007-21 | EAMT/IAMT President 2009-15 | IAMT Award of Honour 2019 | SFI Engaged Research Award 2023

6 years

As often before, Mike Dillinger hits the nail on the head: "Humans + AI beats AI alone, so why focus on just AI?" We're lucky to have him as advisor to my #MT team in @adaptcentre in @dublincityuni. #AIHype

Attapol Te Rutherford

Research Scientist in Natural Language Processing

6 years

Very well-written. Thank you!

Laura Martín-Pérez

Expert Computational Linguist | PLN & genAI Prompting | Cognitive AI & Virtual Agents | Information retrieval

6 years

As always, Mike, a pleasure to read your work.
