How To Create New AI Products: AI Types, LLMs and GAI (Part 2)
Shachindra Nath
Chief Product Officer | Chief Design Officer | Chief Digital Officer | Innovation & Design Management | Platform & Product Strategy
In part 1 of this series, we looked at the current state of AI projects, common misconceptions about AI, and how they may impact the project. I also discussed the fundamental limitations of cognitive AI, outlined how to approach AI-based products, and emphasised the importance of human-centricity in your project.
This article aims to provide you with a practical understanding of AI. I will discuss the different types of AI, their capabilities, and their implications for your product. Let’s review a typology of AI based on intelligence levels and functional capabilities.
AI Capability Levels
By capability level, AI is often categorised into Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Super Intelligence (ASI). The Altmans and Musks of the world have trillion-dollar stakes, and millions of followers, riding on the belief systems they promote. Companies like Microsoft and Anthropic (backed by Amazon) are quick to qualify their models as ‘near-human’ or ‘approaching’ AGI [1]. Form your own realistic perspective and be sceptical of such exaggerated claims.
Despite the hype [2], we are far away from AGI. The idea of ASI, an AI that will reason, learn, make judgements and have abilities beyond AGI, is an imaginative leap that even Science Fiction struggles with [3]. I remain sceptical of us achieving AI singularity any time soon. Even looking at it purely computationally, we cannot be sure that computers can match the human mind [4]. I will discuss the idea of Singularity in the concluding part of this series.
So, for now:
Don’t get swayed by grandiose claims and predictions of AI surpassing human intelligence. Don’t be tempted to raise or invest money with claims of AGI.
As I mentioned in Part 1:
If we are yet to fully understand human intelligence, then we have to be far from building machines that would equal or surpass it.
It’s paradoxical to think we are about to achieve something we can’t define. My advice is to focus on improving ANI-based products and avoiding problems like the ones faced by Microsoft’s Copilot and ChatGPT [5]. To do that, we must think about our AI products in grounded, realistic terms. Let’s look at AI from a functional perspective to understand how it works.
Functional Types of AI
AI is broadly categorised into four functional types. The Reactive and Limited Memory types are currently the more relevant ones. The Theory of Mind and Self-Aware types could eventually enhance applications, but both remain very limited at the moment. Their minimal applications, however, often get inflated to “nearing” AGI and ASI by the marketing machinery and the self-serving optimism of highly invested corporations.
Reactive AI
Reactive AI is rules or logic-based and does not retain learning from prior runs. If we can model a phenomenon precisely, accurately and completely, we don’t need the AI to learn how to react to a given input from vast amounts of data.
Reactive AI, or reactive machines, is appropriate for situations where the possible outcomes are limited and historical occurrences do not change what the right decision is.
Reactive AI was mainly implemented in the early days, when data and processing power were limited. Today, you can still see limited applications of it, such as in recommendation systems. If you have wondered why Netflix or Amazon Prime keeps recommending something you have already watched, or why YouTube will fill your page with cat videos after you watch one cat video, the reactive elements of the recommendation system are responsible.
A famous example of a reactive machine is IBM’s Deep Blue, the chess-playing AI. It did not need to observe thousands of chess matches to learn how Chess is played and derive a model of what to do in response to an opponent’s move. Its knowledge model (Chess rules and moves) and decision logic (calculating possible moves after each move and optimal paths to victory) enabled it to defeat a Chess Grandmaster in 1997 [6].
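To make the distinction concrete, here is a minimal Python sketch of a Reactive decision procedure in the spirit of Deep Blue: a hand-coded rules-and-evaluation model plus search, with nothing learned or carried over from previous games. The game states, moves and scores below are hypothetical placeholders, not an actual chess engine.

```python
# Reactive AI sketch: explicit rules + hand-crafted evaluation + search.
# Nothing is learned from data; every run is independent of prior runs.

def legal_moves(state):
    # In a real engine this would be generated from the encoded rules of Chess.
    return state.get("moves", [])

def evaluate(state):
    # Hand-crafted heuristic (e.g. material balance), not a learned model.
    return state.get("score", 0)

def minimax(state, depth, maximising=True):
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state), None
    best_move = None
    best_score = float("-inf") if maximising else float("inf")
    for move, next_state in moves:
        score, _ = minimax(next_state, depth - 1, not maximising)
        if (maximising and score > best_score) or (not maximising and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move

# Usage: a toy one-ply position with two hypothetical moves.
position = {"moves": [("a", {"score": 3}), ("b", {"score": 5})]}
print(minimax(position, depth=1))  # -> (5, 'b')
```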
In contrast to Deep Blue, Google’s AlphaGo, the Go-playing AI, used Deep Learning Neural Networks and Search Algorithms to defeat a human World Champion in 2016. AlphaGo is an example of the second category.
Limited Memory AI
Limited Memory AI can learn from historical data and use that learning in decision-making. One advantage of limited memory, or learning, AI systems is that the machine derives the knowledge model from large volumes of data instead of relying on human effort.
Limited Memory AI is useful when there are too many configuration possibilities to compute in real-time, and a learning curve (reducing mistakes) is acceptable in its operation.
Take AlphaGo, for instance. Go is an ancient strategy board game “with an astonishing 10 to the power of 170 possible board configurations. That’s more than the number of atoms in the known universe” [7]. Creating a decision model manually would be complex, time-consuming, and costly, perhaps impossibly so. AlphaGo learned by playing against itself over and over, refining its evaluation and search with each iteration.
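The defining trait of this category is that the decision model is derived from examples rather than hand-written. As a deliberately tiny, hedged illustration (ordinary supervised learning, not AlphaGo’s self-play reinforcement learning), the sketch below fits a model to synthetic “historical situations” and then generalises to a new one; the data and features are purely illustrative.

```python
# Limited Memory sketch: the decision rule is learned from historical data,
# and improves as more history is accumulated and the model is retrained.
from sklearn.linear_model import LogisticRegression

# Historical "situations" (two numeric features) and the decision that
# turned out to be right in each one.
X_history = [[0.1, 0.9], [0.8, 0.2], [0.2, 0.7], [0.9, 0.1]]
y_history = [1, 0, 1, 0]

model = LogisticRegression().fit(X_history, y_history)

# The learned model now handles a situation it has never seen before.
print(model.predict([[0.15, 0.85]]))  # likely -> [1]
```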
You might have heard about AlphaGo beating a World Champion; Google’s PR machinery is impressive. What you might not have heard is that an amateur beat a similar Limited Memory AI called KataGo by using tricks that distracted and confused the AI [8]. This case doesn’t prove that AlphaGo is better than KataGo (it might be). It reminds us that beating an AI may have less to do with conventional Go strategies than with an understanding of how the AI works and its limitations as a machine.
All Machine or Deep Learning based AI is Limited Memory AI. Most current applications are either pure Limited Memory AI or are mixed with Reactive Machine elements.
Theory of Mind AI
Theory of Mind AI would be capable of understanding and modelling the thoughts, intentions, and emotions of other agents (artificial or living). Let me be clear: this is not about you feeling that a ChatGPT virtual girlfriend “gets you” [9]. It is also not about the Turing Test, where you can be fooled into believing that the bot you are chatting with is a real person.
This type of AI would be capable of assessing your state of mind (and responding to it) in ways that we only attribute to humans and, to some extent, pets. Multi-modal inputs like galvanic skin responses, heart rate, pupil dilation, voice analysis, gait analysis, advanced linguistic analysis, brain imaging and chip implants might partially achieve such understanding in AI. Deciding how to respond to it is a whole different matter. It would bring in the AI’s motivation, personality traits, values, enculturation, experiential history, relationship with the subject, etc.
Avoid making Theory of Mind AI a primary or critical basis of your product’s functioning. Also, remain sceptical of any AI builder claiming to have achieved a robust understanding of its user’s states of mind.
Current applications of the optimistically named “Sentiment Analysis” capability in Natural Language Processing (NLP) are incomplete, often imprecise, and sometimes inaccurate: they can suggest emotional valence, but they remain far from identifying specific emotions, intentions, or states of mind.
By all means, use proven behavioural models and rough indications of sentiment in your AI product’s reasoning. It is not a bad idea for a customer support chatbot to use sentiment analysis to see whether the user is getting increasingly annoyed (negative), or to use behavioural economics models to present purchase options. But be prepared to handle the AI’s misjudgements appropriately, because they will happen.
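A hedged sketch of that chatbot idea: track a rolling sentiment score and hand the conversation to a human when frustration builds. Here `score_sentiment` is a hypothetical stand-in for whatever sentiment model or API your product uses (assumed to return a value between -1.0 and 1.0), and the threshold is an illustrative assumption.

```python
# Escalate to a human agent when recent messages trend strongly negative.
def should_escalate(messages, score_sentiment, window=3, threshold=-0.4):
    """Escalate when the average sentiment of recent messages drops too low."""
    recent = messages[-window:]
    if not recent:
        return False
    avg = sum(score_sentiment(m) for m in recent) / len(recent)
    return avg < threshold

# Usage with a dummy scorer; a real scorer will misjudge sometimes,
# so the escalation path must be designed to fail gracefully.
dummy_scorer = lambda text: -0.8 if "still broken" in text else 0.2
history = ["hi", "it's still broken", "this is still broken!!"]
print(should_escalate(history, dummy_scorer))  # -> True
```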
Self-Aware AI
Self-Aware AI would be conscious and have the capacity for self-perception. The optimism of Microsoft seeing ‘Sparks of AGI’ in GPT-4 and the sensationalism of the ex-Google engineer claiming sentience in LaMDA aside, sentient AI, or Artificial Consciousness, remains far from reality [10]. Researchers from various institutions have identified 14 “indicator properties” of sentient AI, but no AI system has met the benchmark as yet. And before we jump onto the “it’s coming soon” bandwagon, let’s acknowledge that the list of 14 indicator properties is not definitive [11].
Before you consider believing someone who claims sentient AI, ask them what they mean by self-awareness or consciousness.
That said, some level of internal monitoring is advisable and even necessary in a system. A successful AI will need to be “aware” of some of its internal states and adapt its functioning accordingly, in the same way your iPhone can pause battery recharging when it gets too warm. However, the mechanism in your AI product may need to be more complicated than just switching something off based on a thermostat reading.
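As a minimal sketch of that kind of lightweight self-monitoring, the snippet below checks a few of the system’s own signals and degrades gracefully instead of pressing on. The signal names and thresholds are illustrative assumptions, not a standard of any kind.

```python
# Self-monitoring sketch: pick an operating mode from the system's own telemetry.
def choose_mode(internal_state):
    if internal_state["gpu_temp_c"] > 85:
        return "throttle"            # protect the hardware, like pausing a charge
    if internal_state["model_confidence"] < 0.5:
        return "defer_to_human"      # the model 'knows' it is unsure
    if internal_state["input_drift"] > 0.3:
        return "fallback_rules"      # inputs no longer resemble the training data
    return "normal"

print(choose_mode({"gpu_temp_c": 70, "model_confidence": 0.42, "input_drift": 0.1}))
# -> "defer_to_human"
```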
AI in Your Product
Your ANI product or service most likely won’t exclusively fit into any of the functional types above. It is more likely to draw on elements of all four.
Ideally, your product will have elements where you implement a known model of a phenomenon. In other words, you know how to make the right decision, so you design the machine to Reactively execute the method regardless of what the learning might suggest. For example, you should curb specific selections, expressions or actions because it is the right thing to do, even if learning from skewed social media data would suggest otherwise. The main power of AI, however, lies in its ability to process vast amounts of historical data and identify patterns in it.
Your product will need this Limited Memory capability in certain areas. If the product is meant mainly for human users, it will also require models for interpreting and responding to human behaviour. It might not be a full-fledged Theory of Mind application, but it will need some understanding of human behaviour to deliver an effective user experience. Finally, your product may not be Sentient, but it must monitor some of its internal processes and ‘know’ what it can and cannot do.
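To illustrate the blend described above in the simplest possible terms: a learned model can propose, but a Reactive guardrail (a known, hand-written rule) has the final say. The `learned_rank` scorer and the blocked-item list below are hypothetical placeholders for whatever your product actually uses.

```python
# A learned ranker proposes; an explicit rule enforces "the right thing to do".
BLOCKED = {"payday_loan_ad"}          # non-negotiable policy, encoded explicitly

def recommend(candidates, learned_rank):
    """Rank with the learned model, then apply the Reactive guardrail."""
    ranked = sorted(candidates, key=learned_rank, reverse=True)
    return [c for c in ranked if c not in BLOCKED]

# Usage with a dummy learned scorer that happens to favour a blocked item.
scores = {"payday_loan_ad": 0.9, "savings_tips": 0.6, "budget_tool": 0.4}
print(recommend(list(scores), scores.get))  # -> ['savings_tips', 'budget_tool']
```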
On Creativity and GAI
Perhaps the most significant contributor to our misconceptions about AI is the popularity of Generative AI (GAI). The proliferation of Large Language Models (LLMs) built on transformer architectures, used by ChatGPT, Ernie, LLaMA, Claude and Command, and of image generators such as DALL-E 2, Stable Diffusion, Adobe Firefly and Midjourney (diffusion models that have largely superseded earlier Generative Adversarial Networks, or GANs), has captured the public and corporate imagination. GAI’s effects are real and widespread, from school and college students churning out reports, to lawyers preparing documents, to customer support professionals losing their jobs, to people opting for ChatGPT romantic partners [12]. The excitement and optimism have reached a fever pitch of over-attribution of human-like intelligence. And it’s not limited to social media. Academics are caught in the deluge, too.
Before you adopt such technology in the hope of disrupting the status quo of your company, industry or market, it’ll be useful to consider a few things:
In some ways, GAI’s impact on human creativity is similar to using a kaleidoscope toy: shake, rotate and see which image you like, though with a bit more influence over the generated image. The focus shifts from creative ideation to crafting prompts and selecting one of the variant outputs. Of course, there are many things you can do around the GAI core to make it more usable and useful, like building a graphical user interface that creates prompts or elicits user feedback to refine them, among other things. In short, your GAI product will need additional AI elements around the generative core to enhance its experience and usage value.
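As a hedged sketch of that wrapper idea, the snippet below turns structured user choices into a prompt and folds feedback back in on each round. The `generate` callable is a hypothetical stand-in for whichever image or text model your product calls; nothing here assumes a specific vendor API.

```python
# Prompt-building wrapper around a generic GAI core.
def build_prompt(subject, style, feedback_history=()):
    prompt = f"{subject}, in the style of {style}"
    for note in feedback_history:          # fold earlier user feedback back in
        prompt += f", {note}"
    return prompt

def iterate(subject, style, generate, feedback_history=()):
    prompt = build_prompt(subject, style, feedback_history)
    return prompt, generate(prompt)

# Usage with a dummy generator; a real product would show variants and let
# the user pick or annotate, feeding the notes into the next round.
dummy_generate = lambda p: f"<image generated from: {p}>"
print(iterate("a lighthouse at dawn", "watercolour", dummy_generate,
              feedback_history=["warmer colours", "less fog"]))
```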
Conclusion and Summary
In this article, we looked at broad categorisations to better understand the possibilities and limits of the technology. From a product concept and design perspective, here are the takeaways:
It’s critical for AI professionals to recognise that:
“AI does not just refer to a particular set of algorithms or computer programs but also to the attitude in which an algorithm or computer program is idealized to the extent that people think it’s ok for them to rely on it and not engage their brains.”
— Andrew Gelman, Professor of Statistics and Political Science, Columbia University [17]
I encourage you, indeed urge and challenge you, to conceive an AI application that does not dumb down its users but helps them do better, and become better, at what they do.
In the next part of this series, I will outline how to apply the understanding from this article: Part 3 presents an AI thinking framework and discusses how to conceptualise your AI product.
References & Notes
[1] See Ars Technica’s article “The AI wars heat up with Claude 3, claimed to have ‘near-human’ abilities” and Microsoft Research’s academic paper “Sparks of Artificial General Intelligence: Early experiments with GPT-4.”
[2] See the FT article “Huge AI funding leads to hype and ‘grifting’, warns DeepMind’s Demis Hassabis” and the HBR article “The AI Hype Cycle Is Distracting Companies.”
[3] See Discourse Magazine’s article “How Science Fiction Dystopianism Shapes the Debate over AI & Robotics” and Big Think’s “The 4 most terrifying AI systems in science fiction.”
[4] Eric Holloway makes a compelling argument that “there is a way to measure the mind that shows it is bigger than the universe — information”. Also see his post “Could our minds be bigger than even a multiverse? The relationship between information, entropy, and probability suggests startling possibilities.”
[5] See the Inc. article “Microsoft’s Copilot Offers Bizarre, Bullying Responses, the Latest AI Flaw” and The Independent’s report “ChatGPT has meltdown and starts sending alarming messages to users.”
[6] IBM’s Deep Blue.
[9] See Discover Magazine’s article “Have AI Language Models Achieved Theory of Mind?: Despite the eye-catching claim that large AI language models like ChatGPT have achieved theory of mind, some experts find their abilities lackluster.”
[10] A New Scientist article observes that “The latest generations of artificial intelligence models show little to no trace of 14 signs of self-awareness predicted by prominent theories of human consciousness”. Also see Robert Long’s post “What to think when a language model tells you it’s sentient.”
[11] Robert Long and associates’ academic paper “Consciousness in Artificial Intelligence: Insights from the Science of Consciousness.”
[12] See The Guardian’s article “AI girlfriends are here — but there’s a dark side to virtual companions” and Quartz’s post on the popularity of such applications, “AI girlfriend bots are already flooding OpenAI’s GPT store.”
[13] See Kambhampati’s research outlined in a post on ACM, “Can LLMs Really Reason and Plan?”
[14] Two PhD students from the University of Alberta tested GPT-4 on divergent thinking and concluded that it outperformed human beings in time-limited tests.
[15] See a World Economic Forum post, “AI is a powerful tool, but it’s not a replacement for human creativity”, and a PsyPost report on UC Berkeley research which found that “Kids outsmart leading artificial intelligence models in a simple creativity test.”
[16] See the entry on “Meaning and Context Sensitivity” in the IEP and VentureBeat’s article “Google’s new technique gives LLMs infinite context.”
[17] See Andrew Gelman’s post and conversation on “‘AI’ as shorthand for turning off our brains. (This is not an anti-AI post; it’s a discussion of how we think about AI.)”