Should Machines Use GenAI to Converse with Each Other in English?
Jason Bloomberg
Managing Director at Intellyx | Top Digital Transformation, Cloud Native Computing, Low-Code, and DevOps Influencer
On numerous occasions during Star Trek: The Next Generation’s run, Lt. Cmdr. Data, an android, communicates with the ship’s computer via spoken English.
The writers want us, the audience, to hear their conversation, of course. But would it make sense for two AI-based systems to interact with each other via a conversation in English (or any other natural, aka ‘human’ language*)?
Now that generative AI has taken off, this question no longer belongs in the realm of science fiction.
We’ve struggled with machine-to-machine (M2M) interactions since the dawn of distributed computing. Could teaching our machines to converse in English instead solve the still-perplexing challenges of application and data integration?
What Do We Want from an API?
Application programming interfaces (APIs) have become the standard approach to M2M communication. Whenever both ends of an interaction have a clear idea of its context, creating and consuming APIs is straightforward.
In other situations, semantic ambiguity, unclear or changing requirements, or shifting capabilities of back-end systems throw a wrench into API-based interactions.
Many technologies and protocols have evolved over the years to address these challenges. One of the more recent, GraphQL, lets each request specify exactly the information the requester wants, leaving it up to the provider to scramble to deliver it.
GraphQL is slowly gaining adoption, but wouldn’t it be better and faster for the requesting program to leverage genAI to phrase its request in English, expecting the provider of the information to leverage genAI in turn to interpret the request and respond accordingly?
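To make the contrast concrete, here’s a minimal sketch in Python. Everything in it is a hypothetical stand-in – ask_llm is a placeholder for whatever genAI completion API each side actually uses, and the schema and endpoint logic are invented for illustration:

```python
# A minimal sketch, not a working integration. ask_llm is a hypothetical
# stand-in for a call to any genAI completion API.

def ask_llm(prompt: str) -> str:
    """Placeholder for a call to a large language model."""
    return f"[LLM response to: {prompt!r}]"

# The GraphQL way: the requester must already know the provider's schema.
graphql_query = """
{
  customer(id: "42") {
    name
    openOrders { id total }
  }
}
"""

# The genAI way: the requester states its intent in English...
english_request = "For customer 42, list any orders that haven't shipped, with totals."

# ...and the provider uses its own LLM to map that intent onto whatever
# back end it happens to have, answering in English as well.
def provider(request_text: str) -> str:
    query_plan = ask_llm(
        "Translate this request into a query against our order database: "
        + request_text
    )
    # (execute query_plan against the actual back end here)
    return ask_llm("Summarize the results for the requester: " + query_plan)

print(provider(english_request))
```

The GraphQL route breaks whenever the schema shifts under the requester; the genAI route, at least in principle, only asks that both sides keep understanding English.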
Trading Precision for Saliency
The most obvious problem with this approach is genAI’s lack of precision. AI in general is rarely precise; in many cases, a 95% success rate is the best we can hope for.
Leveraging genAI for M2M interactions would suffer from this limitation, as well as any additional challenges with interpreting the intent of the interaction. After all, spoken English is never as precise as, say, RESTful interactions or SQL queries.
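To see the gap, compare the same intent expressed both ways (a purely illustrative, hypothetical example):

```python
# Precise: given the schema, the rows returned are fully determined.
sql_query = (
    "SELECT id, total FROM orders "
    "WHERE customer_id = 42 AND status = 'open'"
)

# Ambiguous: does "recent" mean this week or this month? Do "orders"
# include cancelled ones? An interpreting LLM has to decide, and it may
# decide differently on different runs.
english_request = "Show me customer 42's recent orders and what they're worth"
```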
What such interactions would lack in precision, however, they would make up for in saliency.
Saliency is the human predisposition to focus on information that is more prominent or relevant to the task at hand. When some query yields a flood of data, humans struggle to make sense of it – identifying which data are important and which are not. In other words, traditional data query approaches struggle with saliency.
If the querying application is using genAI, however, then saliency is built into its underlying large language model (LLM). By their nature, LLMs look for language patterns that better match the task at hand than others – and thus can deliver salient results even when the quantity of data is large, or the data are ambiguous or even of less-than-perfect quality.
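As a rough sketch of what that might look like (reusing the hypothetical ask_llm placeholder from the earlier example), the requester names the task and lets the LLM filter and rank the flood of data by relevance:

```python
# Hypothetical sketch: asking an LLM to surface what matters for a task.
# ask_llm is the same placeholder defined in the earlier sketch.

noisy_results = [
    {"id": 1, "note": "Routine maintenance, no issues"},
    {"id": 2, "note": "Outage affected checkout for 40 minutes"},
    {"id": 3, "note": "Typo fixed in internal wiki"},
    # ...imagine thousands more rows of mixed relevance
]

prompt = (
    "Task: assess risk to this quarter's revenue. "
    "From the following records, return only those relevant to the task, "
    f"most important first: {noisy_results}"
)

salient_summary = ask_llm(prompt)  # filtered and ranked by relevance, not just matched
```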
Advantages of GenAI-based M2M Interactions
In addition to improved saliency, there are several other potential benefits to implementing M2M integration with genAI:
Queries are more likely to conform to user intent. With traditional queries, the systems in question simply do what they’re told. Whether a particular query is what the user intended is not relevant to the interaction.
GenAI, in contrast, aligns its behavior to English-language user prompts. The results are more likely to conform to the original intent of those prompts.
Implicit biases are more likely to cancel each other out. Because genAI-based M2M interactions involve two separate AIs (requester and provider), any bias in the data set driving one AI may cancel out (or at least mitigate) the bias in another.
For example, if the source data include biased hiring information (favoring white men, for example), but the query prompt requests résumés with balanced ethnicity and gender, then the end result is less likely to retain the original bias than it would with a traditional query.
Interactions are human-readable and thus explainable. As long as users are happy with the results, they are unlikely to care about the data within the M2M interaction. If results are poor, or if there’s a reason to examine such interactions (as part of an audit, for example), then the fact that M2M interactions are in English will help with debugging and will also provide an audit trail that anyone can understand.
Joins will be more intuitive and accurate. Combining data from multiple sources is perhaps the most difficult part of data integration – especially when data sources have semantic differences.
GenAI can smooth over such differences and build queries that better align with human intent, based upon the English language prompts that express that intent.
Conversational interactions are more straightforward. In many cases, a single query won’t do. The desired interactions require more than one back-and-forth between endpoints, where a subsequent query depends upon the results of earlier ones.
With genAI, such interactions are simply conversations – two AIs generating responses to each other in turn. Such conversations are more likely to uncover the required information and may even do so faster than traditional M2M interactions (see the sketch following this list).
Better support for autonomous agents. In typical M2M interactions, nothing happens until a human takes an action that generates a query. Autonomous agents – pieces of software that act on their own initiative without a preceding human request – fall outside the scope of such M2M interactions.
With genAI, in contrast, there is a spectrum between an AI responding to a human query and an AI acting on its own. Even in the autonomous scenario, there is still a human kicking things off. As the AI gets more sophisticated (and as our trust in it grows), it is more likely to act autonomously in more circumstances.
Keep in mind, however, that in none of these scenarios are we expecting the AI to be perfect – especially now as the technology is still maturing. Will genAI yield salient results? Will it take actions in alignment with human intent? Perhaps, perhaps not.
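To tie several of these points together, here’s a minimal sketch of the conversational pattern, again built on the hypothetical ask_llm placeholder rather than any real genAI API. Two LLM-backed endpoints take turns, and because every turn is plain English, the transcript doubles as the human-readable audit trail described above:

```python
# Hypothetical sketch of a multi-turn M2M conversation between two
# LLM-backed endpoints, reusing the ask_llm placeholder from earlier.

def converse(goal: str, max_turns: int = 6) -> list[str]:
    transcript = [f"Requester: {goal}"]
    for _ in range(max_turns):
        # The provider answers based on the whole conversation so far.
        reply = ask_llm(
            "You are the data provider. Continue the conversation:\n"
            + "\n".join(transcript)
        )
        transcript.append(f"Provider: {reply}")

        # The requester decides whether to follow up or finish.
        follow_up = ask_llm(
            "You are the requester. If the goal is met, say DONE; "
            "otherwise ask the next question:\n" + "\n".join(transcript)
        )
        if "DONE" in follow_up:
            break
        transcript.append(f"Requester: {follow_up}")
    return transcript

# The transcript is plain English, so anyone can audit it after the fact.
for line in converse("Which suppliers are at risk of missing Q3 deliveries?"):
    print(line)
```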
The Intellyx Take
If you’ve been following my articles on genAI – especially the one where I call it out as bullsh!t – you’ll know I’m a skeptic. This article, however, explores potential capabilities of the technology that go beyond most of today’s discussions. What gives?
The answer is that M2M interactions largely avoid the bullsh!t problem that genAI faces when humans interact with it.
Given the human predilection for sense-making, when people interact with genAI they assign it empathy, intelligence, and other human characteristics it simply doesn’t have.
With M2M interactions, in contrast, we don’t care about those characteristics. What we care about is how well the AI represents the intent of the user and how well it delivers salient responses.
In other words, M2M interactions may very well be a better use case for genAI than writing résumés, providing therapy, and all the other bullsh!t applications that people think it’s good for.
*I’ll use English in this article for simplicity, but the argument would apply to any natural language.
Copyright © Intellyx LLC. Intellyx is an industry analysis and advisory firm focused on enterprise digital transformation. Covering every angle of enterprise IT from mainframes to artificial intelligence, our broad focus across technologies allows business executives and IT professionals to connect the dots among disruptive trends. As of the time of writing, none of the organizations mentioned in this article is an Intellyx customer. No AI was used to write this article. Image credit: Craiyon.