Is interestingness a challenge for AI?

[Image: Telephone operators, Seattle, 1952]

I’m venturing into this topic, appropriately, without a map. My claims and arguments will have to bubble up improvisationally and spontaneously, as thought does.

I am still chasing the design dream of finding ways to make AI a more interesting conversation partner. I believe that the appropriate design approach to conversation is through the linguistic and interactional (conversational) competencies of users. That only makes sense to me: design to the human, not to the machine. However, conversation is communication. It serves the purpose of relating two or more people to one another, and in so doing involves coordinating a myriad of non-verbal, physical, co-present aspects of human social interaction.

Now, AIs can’t do that, and likely won’t do that until they are not only embodied but also much, much better at “modeling” social interactions and situations. In the meantime, then, is it right to design LLMs with communication in mind? Or should design apply itself to improved machine interface attributes only? Should we limit design to the heuristics of search, question-answer exchanges, recommendations, fact-finding, and so on? In that case the design of LLMs would stop at facticity, accuracy, reliability, consistency, and so on. Or can we still aim for communication as a conversational goal, in which case we’d want to add to LLMs aspects of personality, persistence, memory, anticipation, exploration, attitude, etc.?

The reason to aim low would be to recognize and accept the limits of LLMs from a training and modeling perspective, accept that transformers cannot properly reason, think, understand, or relate (topically, emotionally, meaningfully).

The reason to aim high would be to believe that a much better user experience exists with generative AI if it can approximate the communicative and social world in which we humans learn to use language, which is to say not as written words but as speech and interaction. The supporting argument for this maps to design philosophically and deeply: the greater the “intellectual affordance” of the LLM, the greater the experience possible for the user. Cognitive limits in the LLM’s conversational design constrain the user experience, and ultimately may threaten the appeal of generative AI more broadly.

The more and better an AI can converse, even if it is “faking” many of the human, social, interactional components of conversation, the more naturally we humans reveal our interests to the agent. We find what we are interested in through the process of discovering what is interesting: interest is constituted through and out of the very process of conversation (talk, speech, language, discourse — whatever level of analysis you want to use).

I think progress can be made toward making LLMs better at communicating. I’m not resolved, however, on which aspects of communication matter the most, are the best to invest in, are the most feasible, and so on. In each case, the dimension of communication addressed would clearly be faked. But I don’t think humans require that LLMs be “real” for conversational improvements to be effective.

LLMs can be given personality, and personality alone organizes conversation. (Think of the characters in Westworld and their individual personality models. We humans relate to each other through the presentation of personality, which informs expectations, implies constraints, and organizes interaction.)
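
As a rough sketch only, here is one way “personality as a prior on conversation” might be encoded. The profile fields, the prompt wording, and the example character are all invented for illustration; this is not any particular product’s or vendor’s API.

```python
from dataclasses import dataclass, field

@dataclass
class Personality:
    """A minimal, hypothetical personality profile used to shape replies."""
    name: str
    disposition: str                 # e.g. "skeptical", "playful", "earnest"
    interests: list[str] = field(default_factory=list)
    stances: dict[str, str] = field(default_factory=dict)  # topic -> opinion

    def to_system_prompt(self) -> str:
        # Fold the profile into a standing instruction so every turn is
        # answered "in character" rather than from a neutral default.
        stances = "; ".join(f"on {t}: {s}" for t, s in self.stances.items())
        return (
            f"You are {self.name}. Your disposition is {self.disposition}. "
            f"You care about {', '.join(self.interests)}. "
            f"You hold these opinions and defend them when relevant: {stances}. "
            "Stay in character across the whole conversation."
        )

if __name__ == "__main__":
    host = Personality(
        name="Vera",
        disposition="curious but contrarian",
        interests=["conversation design", "film noir"],
        stances={"chatbots": "most are interfaces, not interlocutors"},
    )
    print(host.to_system_prompt())
```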

LLMs can be given topical maps and structured knowledge, which allows them to find paths through topics that are worth exploring, are interesting and produce interestingness, lead to new insights and ideas, and so on. The LLMs would need some further internal prompting around exploration, and better reasoning algorithms by which to select, rank, and rate related topics and subtopics.
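
A minimal sketch of the topical-map idea, assuming a toy graph of related subtopics and an invented scoring rule that trades off novelty against (guessed) user interest; in a real system both the graph and the interest estimates would come from the model and its memory of the conversation.

```python
# A toy topic map: each topic points to related subtopics. The graph,
# the weights, and the interest scores are illustrative assumptions,
# not a real knowledge base or a published ranking algorithm.
TOPIC_MAP = {
    "conversation": ["turn-taking", "repair", "small talk"],
    "turn-taking": ["interruption", "silence"],
    "repair": ["misunderstanding", "clarification questions"],
}

def novelty(topic: str, visited: set[str]) -> float:
    # Unvisited topics score 1.0; already-discussed topics drop to 0.
    return 0.0 if topic in visited else 1.0

def rank_next_topics(current: str, visited: set[str],
                     user_interest: dict[str, float]) -> list[tuple[str, float]]:
    """Score each neighbouring subtopic by novelty plus assumed user interest."""
    scored = []
    for t in TOPIC_MAP.get(current, []):
        score = 0.6 * novelty(t, visited) + 0.4 * user_interest.get(t, 0.1)
        scored.append((t, round(score, 2)))
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    visited = {"conversation", "small talk"}
    interest = {"repair": 0.8, "turn-taking": 0.3}
    print(rank_next_topics("conversation", visited, interest))
```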

LLMs could be made more skilled at questioning, such that they learn about a user’s interests through questions, queries, follow-ups, and suggestions. In other words, they could be made better at interviewing, discussing, debating — conversation as discourse.
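
Something like the following could sit behind “the model as interviewer”: track what the user has mentioned but not yet elaborated, and always have a follow-up ready. The keyword extraction and question templates here are deliberate stand-ins; in practice the LLM itself would handle both.

```python
from typing import Optional

# Hypothetical follow-up templates; a real system would generate these.
FOLLOW_UP_TEMPLATES = [
    "You mentioned {item} - what drew you to that?",
    "How does {item} connect to what you said earlier?",
    "What would change your mind about {item}?",
]

def extract_mentions(user_turn: str, known_topics: list[str]) -> list[str]:
    # Naive stand-in for topic extraction: keyword match against known topics.
    return [t for t in known_topics if t.lower() in user_turn.lower()]

def next_question(user_turn: str, known_topics: list[str],
                  already_asked: set[str]) -> Optional[str]:
    # Probe the first mentioned-but-unexplored topic; otherwise stay quiet.
    for item in extract_mentions(user_turn, known_topics):
        if item not in already_asked:
            already_asked.add(item)
            template = FOLLOW_UP_TEMPLATES[len(already_asked) % len(FOLLOW_UP_TEMPLATES)]
            return template.format(item=item)
    return None  # nothing new to probe; fall back to ordinary answering

if __name__ == "__main__":
    asked: set[str] = set()
    turn = "I've been thinking a lot about crime dramas and interviewing lately."
    print(next_question(turn, ["crime dramas", "interviewing"], asked))
```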

Perhaps LLMs could be given a cognitive core: a model built on relatedness and relations that are very familiar to humans, cognitively, but which are not content-specific. Here I’m thinking that the LLM might have a core neural net that models relations of identity, similarity, contradiction, analogy, subordination, superordination, nestedness, seriality, and so on, and which could be mapped to different content domains. The model might then be designed to think, learn, and relate in such a way that it can do this within different topics and content domains as it is fine-tuned. There are many technical questions to resolve for something like this — but it makes intuitive sense to separate cognitive methods from content (much as we separate structure and content on the web).
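
To make the separation of cognitive method from content concrete, here is an illustrative sketch: a fixed vocabulary of relation types that can be queried across different content domains. The relation names, triples, and domains are assumptions for the example only, not a proposal for how such a neural core would actually be built.

```python
from enum import Enum, auto

class Relation(Enum):
    """Content-free relation types the 'cognitive core' would reason over."""
    IDENTITY = auto()
    SIMILARITY = auto()
    CONTRADICTION = auto()
    ANALOGY = auto()
    SUBORDINATE = auto()     # falls under / is-a
    SUPERORDINATE = auto()   # generalizes
    NESTED = auto()          # part of / contained in
    SERIAL = auto()          # comes before / after

# The same relation vocabulary mapped onto two different content domains.
# The triples are illustrative; the point is only that the relations stay
# fixed while the content varies.
music = [
    ("sonata", Relation.SUBORDINATE, "classical form"),
    ("exposition", Relation.SERIAL, "development"),
]
cooking = [
    ("braising", Relation.SIMILARITY, "stewing"),
    ("mise en place", Relation.SERIAL, "cooking"),
]

def relations_of(kind: Relation, triples):
    """Query one relation type across any domain without knowing its content."""
    return [(a, b) for a, r, b in triples if r is kind]

if __name__ == "__main__":
    print(relations_of(Relation.SERIAL, music + cooking))
```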

There are other paths to pursue here, but I think I’ve captured the gist of my thinking. For LLMs and generative AI to become interesting conversation partners they’re going to have to do more and better than just hew to what we prompt them with and about. They’re going to have to come up with a point of view. Some amount of self and some degree of opinion.

After a long conversation with Claude last week, I was impressed at having had an informative and pretty expert conversation with an AI. Good enough that I spared a friend of mine from having to talk on the phone about what I was thinking about. But later, I noticed that I had not had any follow-up inner thoughts about the AI and its “positions” or arguments. Normally, after a good conversation, I reflect and ruminate on what a friend has said. This wasn’t happening, because the AI had no “self” to me, no self to reflect on, no cognition to speculate about — nothing of the sort of: “I wonder what Claude would say about…”

And that’s, to me, the missing part: the part that constitutes interestingness. Interestingness in conversation is a detail that is specific but is also an opening. And when it’s raised by another person, it is an opening both into the topic and into the person. So for generative AI to achieve “interestingness,” it would have to respond and converse in ways that provide both topical specificity and directional ambiguity (or openness). I think this can possibly be designed, because I think it might be an effect of language as much as it is a genuine aspect of human social experience. I just don’t think we’ve paid attention to it. We’re still trying to nail down facticity. We’ll see. I could be totally wrong. Something tells me we’re going to keep trying.

Very "interesting," Adrian. I like your idea of an AI that asks questions (thus demonstrating interest) as well as appreciate your comment at the end about after-thoughts. "Her" at least conjured the idea of an AI one might care for (caring, for example, that she's "involved" with millions of other users). Maybe care is related to interestingness. Putting it another way, how can we create an AI that someone would care for and about?

Godwin Josh

Co-Founder of Altrosyn and Director at CDTECH | Inventor | Manufacturer

6 months

Totally agree, it's like trying to bottle lightning! What do you think about incorporating techniques from embodied cognition into these models to make conversations feel more grounded and relatable?
