The future is here: Learning to live with conversational AI chatbots like ChatGPT
“A friendly 2-d cartoon character who is a conversational agent ready to answer any questions you put in a chat box” (generated with Stable Diffusion on You.com)

What goals should we have for Conversational-style AI bots like ChatGPT?

Imagine you can ask for anything you want. A picture of a unicorn eating a hamburger. An essay on the relative merits of a hot dog and whether it is a sandwich. A recipe that hallucinates a new combination of Frito pie.

Perhaps your mind turns to the process of solving problems. Thinking about potential solutions, you might want suggestions for the best way to introduce a new product feature. You recently read an article on Michael Porter’s competitive forces and you need a succinct five-paragraph essay explaining these ideas. Maybe you are stuck looking at an empty screen while you try to compose a persuasive 50-word email that fills only one screen of a mobile phone.

Conversational-style chatbots like ChatGPT can help you break this creative block. All of the suggestions above could be realized with this machine learning tool, which uses a generative process to predict, piece by piece, the text most likely to follow the text you provide.

I’m not a machine learning engineer, so I probably got the description a bit off. The point is that this technology behaves a bit like a magical genie that can take your prompt, examine its corpus of data for content that seems similar, and deliver human-readable content that is really quite good.
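
To make that hand-wavy description slightly more concrete, here is a toy sketch in Python. It is emphatically not how ChatGPT works internally; the two-word contexts and word weights below are invented for illustration. It only shows the basic generative loop: score candidate continuations of the text so far, pick one, append it, and repeat.

    import random

    # Toy "language model": a hand-written table of plausible next words and
    # their weights for a handful of two-word contexts. A real LLM learns
    # billions of parameters instead of a lookup table, but the generation
    # loop is similar: score continuations, pick one, append it, repeat.
    NEXT_WORD_WEIGHTS = {
        "the unicorn": {"ate": 5, "ran": 3, "smiled": 2},
        "unicorn ate": {"a": 6, "the": 4},
        "ate a": {"hamburger.": 7, "sandwich.": 3},
    }

    def generate(prompt, steps=3):
        words = prompt.split()
        for _ in range(steps):
            context = " ".join(words[-2:])          # look at the last two words
            candidates = NEXT_WORD_WEIGHTS.get(context)
            if not candidates:
                break
            choices, weights = zip(*candidates.items())
            words.append(random.choices(choices, weights=weights)[0])
        return " ".join(words)

    print(generate("the unicorn"))  # e.g. "the unicorn ate a hamburger."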

What is “good” for a chatbot?

Asking “what is a chatbot good at?” might be the wrong question. Since we’ve never had a chatbot quite like the ones that have emerged over the last few months, I’m not sure what questions to ask. To start, I feel we need to understand the current limitations of this technology to better understand how to use it responsibly.

What are chatbots good at doing?

  • Chatbots like ChatGPT are good at creating generalized summaries of information that has a lot of detail in the body of documents indexed by the model. For example, you can ask ChatGPT to summarize the article I mentioned earlier on Michael Porter’s competitive forces.

ChatGPT response to summarize Michael Porter's "The Five Forces that Shape Strategy"

It does a pretty good job of describing Porter’s forces, and can also compare this data against other well-known management theories.

  • These Chatbots are also solid at imaginative templating of content with a lot of training data. For example, Sitcom television has lots of screenplays with dialog that describe how individual characters speak, so it’s possible to get unusual situations overlaid onto cultural icons such as Star Trek, Seinfeld, or other pop culture that has generated interest on the Internet.

One goal for conversational chat should be summarizing information that is relatively unstructured while creating a linked bibliography of sources.

If we ask an agent to build a summary, we ought to know why it chose the summary items it did. Then, you can commercialize this sort of technology on a very narrow set of training data, e.g. Deepscribe for medical notation or Sybill to track emotional intelligence on sales calls.

Without a trained master model, AI chatbots might hallucinate stuff.

The concept of data lineage and attribution ought to be a core feature for artificial intelligence/machine learning technology. If nothing else, we need to be able to check its work to see where it is finding information, where it may be fabricating information, and how it is reaching conclusions. You could almost imagine a simplified state diagram that helps humans follow how closely related the pieces of information an LLM stitches together really are, so that its output can be decomposed and understood.
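
As one sketch of what an attribution layer might look like, here is a minimal Python data structure. The names (SourcePassage, AttributedClaim, AttributedSummary) and the similarity field are hypothetical; the idea is simply that every generated claim carries pointers back to the passages that informed it, so a reader can check the model's work.

    from dataclasses import dataclass, field

    @dataclass
    class SourcePassage:
        url: str
        excerpt: str
        similarity: float          # how "near" the passage was to the claim (0-1)

    @dataclass
    class AttributedClaim:
        text: str
        sources: list = field(default_factory=list)   # list of SourcePassage

    @dataclass
    class AttributedSummary:
        prompt: str
        claims: list = field(default_factory=list)    # list of AttributedClaim

        def bibliography(self):
            # De-duplicated list of every URL the summary leaned on.
            return sorted({s.url for c in self.claims for s in c.sources})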

What are chatbots lousy at doing?

  • Extending logic beyond a single instruction set - they can string together prompts but it’s unpredictable
  • In the current iteration, they need a human operator to follow tasks (this is probably good)

Chatbots struggle to infer intent from a few sets of instructions. The fact that we can even suggest this would be possible means there is a huge market here. But be cautious. I’m not yet suggesting that an untrained operator will be able to use chatbots to solve unsolvable problems. I am suggesting that improving, even by a small amount, the ability of people to work through most workflow processes on their own using faceted conversation makes chatbots inevitable.

This is going to get better really fast as models accept more tokens, but will be prone to more errors as well. The more information you put into an LLM, the better the chance it will develop "opinions" on the content you feed it.

It saves work everywhere you put it, even though the average experience for some people will be much, much worse when they don’t know how to answer the bot.

A better idea: make conversational chat smarter

Conversational chat is a concept that emerged about 5 years ago as sort of a next-level IVR (interactive voice response, for those who never experienced the horror of an unending phone tree). The goal of that interaction was to take a conversation and reduce it to a series of closed questions (facets) making it easier to draw a graph of potential outcomes from a process.

For example, when you fill out a form, the prompts are often organized in sequence. This sequence is partly a trick to progressively reveal information (validate your phone number, look up information about you, and so on) and partly because these systems are not great at figuring out what you need when you don’t supply answers in the sequence the process expects.

Smarter versions can interpret data types and take information slightly out of sequence. Natural language processing can score a reply for its likeness to known prompts and direct people along a tree of known answers.
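
Here is a minimal sketch of that faceted, tree-walking style of conversation in Python. The flow, the questions, and the word-overlap scoring are all stand-ins for a real NLP model and a real process graph; they only illustrate the mechanic of scoring a reply against known answers and following the matching branch.

    # Score the user's reply against the known answers for the current
    # question and follow the matching branch. Word overlap stands in here
    # for a real similarity model.
    FLOW = {
        "start": {
            "question": "Are you calling about billing or technical support?",
            "branches": {"billing": "billing_step", "technical support": "tech_step"},
        },
        "billing_step": {"question": "What is your account number?", "branches": {}},
        "tech_step": {"question": "Which product is giving you trouble?", "branches": {}},
    }

    def overlap_score(reply, label):
        reply_words, label_words = set(reply.lower().split()), set(label.split())
        return len(reply_words & label_words) / len(label_words)

    def next_node(node, reply):
        branches = FLOW[node]["branches"]
        if not branches:
            return None
        best = max(branches, key=lambda label: overlap_score(reply, label))
        return branches[best] if overlap_score(reply, best) > 0 else None

    print(FLOW["start"]["question"])
    print(next_node("start", "I think it's a billing question"))  # -> "billing_step"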

ChatGPT and its children and grandchildren will change all of this drastically

My first instinct upon understanding that ChatGPT can mimic templates was to think about the kinds of templates that would make responses fun. And then, to think about the kinds of templates that represent grids or tables we often build in a rote fashion but are kind of a pain (like building all of the test cases for the future, considering all of the combinations and permutations).
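
As a small example of that rote grid work, here is how the combinations could be enumerated in Python. The dimensions (browsers, locales, account types) are made up for illustration; the point is that the grid is pure mechanical expansion, which is exactly the kind of thing worth handing to a template-savvy bot.

    from itertools import product

    browsers = ["Chrome", "Firefox", "Safari"]
    locales = ["en-US", "de-DE"]
    account_types = ["free", "paid"]

    # Every combination of the dimensions above becomes one test case.
    test_cases = [
        {"browser": b, "locale": l, "account": a}
        for b, l, a in product(browsers, locales, account_types)
    ]

    print(len(test_cases))   # 3 * 2 * 2 = 12 combinations
    print(test_cases[0])     # {'browser': 'Chrome', 'locale': 'en-US', 'account': 'free'}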

Instead of rote templates, we should focus on the line of questioning that will help chatbots improve and reflect the intent of the questioner.

Some skills and features that need to be trained into bots like these include:

  • Building empathy - tuning AI to respond when people are upset and helping it use language and patterns that can calm down emotionally charged situations
  • Training the user - helping the conversation bot know how you normally like to chat. This also presents privacy challenges, so here are a few ideas for how to handle it at different exposure levels (a rough configuration sketch follows this list):
  • Answer a few questions, then forget everything after every interaction
  • Build a list of public preferences that can be indexed by every bot
  • Train the bot on a local corpus of information and do not allow your questions to reach the outside world (this one is interesting but risky; imagine a public key/private key handshake to allow this information to be used sparingly by an API)
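
Here is a rough sketch, in Python, of how those three exposure levels could be expressed as a configuration a conversational bot honors. The names (ExposureLevel, PrivacyPolicy) and the specific flags are hypothetical; the sketch just makes the trade-offs explicit.

    from enum import Enum
    from dataclasses import dataclass

    class ExposureLevel(Enum):
        EPHEMERAL = "forget after every interaction"
        PUBLIC_PREFERENCES = "public preference list any bot may index"
        LOCAL_ONLY = "local corpus only; questions never leave the device"

    @dataclass
    class PrivacyPolicy:
        level: ExposureLevel
        retain_history: bool            # may the bot remember past chats?
        allow_external_api_calls: bool  # may questions leave the device?

    POLICIES = {
        ExposureLevel.EPHEMERAL: PrivacyPolicy(ExposureLevel.EPHEMERAL, False, True),
        ExposureLevel.PUBLIC_PREFERENCES: PrivacyPolicy(ExposureLevel.PUBLIC_PREFERENCES, True, True),
        ExposureLevel.LOCAL_ONLY: PrivacyPolicy(ExposureLevel.LOCAL_ONLY, True, False),
    }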

How would you know whether any of these methods are successful? The same way you test any A/B experiment. Track the outcomes and determine whether they are successful based on the customer’s stated intent.
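
A minimal sketch of that kind of outcome tracking might look like the following; the variant names and the notion of "intent satisfied" are placeholders for whatever signal you actually collect.

    from collections import defaultdict

    # Log whether each conversation satisfied the customer's stated intent,
    # then compare success rates across variants.
    outcomes = defaultdict(lambda: {"success": 0, "total": 0})

    def record(variant, intent_satisfied):
        outcomes[variant]["total"] += 1
        outcomes[variant]["success"] += int(intent_satisfied)

    def success_rate(variant):
        stats = outcomes[variant]
        return stats["success"] / stats["total"] if stats["total"] else 0.0

    record("empathetic_tone", True)
    record("baseline", False)
    print(success_rate("empathetic_tone"), success_rate("baseline"))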

What are the downsides of this technology?

Any new technology that is sufficiently advanced has the capability to be misunderstood as being sentient. Benn Stancil wrote a great piece about some of the dangers of a chatbot that sounds really impressive even though it can easily spout incorrect information.

Stancil writes about an imaginary investment decision hinging on a pitch written by a bot:

Their new narrative, in other words, was manifest out of thin air. It wasn’t based on feedback from a visionary expert. It wasn’t built on research, logic, reason, or a deep understanding of anything. It was conjured by a giant computer, a really good autocomplete algorithm, and a person looking for an argument to support a conclusion.

Stancil concludes that when we know content is built by an AI, "we have to reassess." The former models for evaluating arguments don't seem to work anymore because we are going to be overwhelmed by clever, big-sounding words.

I'm starting to think that we need defensive AI trained on both our personal preferences and our delivery preferences. This might mean helping me filter out content that doesn’t have highly ranked sources or comes from people I don’t know, avoiding content from certain sites, and implementing a block list.
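
As a sketch of what such a defensive filter could look like, here is a small Python example. The block list, the known-author set, and the source-rank threshold are all invented placeholders; a real filter would be trained on personal preferences rather than hard-coded rules.

    # Drop items from blocked domains, items without highly ranked sources,
    # and items from authors the reader has never interacted with.
    BLOCKED_DOMAINS = {"example-content-farm.com"}
    KNOWN_AUTHORS = {"colleague@example.com"}
    MIN_SOURCE_RANK = 0.5

    def keep(item):
        # item is a dict like {"domain": ..., "author": ..., "source_rank": ...}
        if item["domain"] in BLOCKED_DOMAINS:
            return False
        if item["source_rank"] < MIN_SOURCE_RANK:
            return False
        return item["author"] in KNOWN_AUTHORS

    feed = [
        {"domain": "example.com", "author": "colleague@example.com", "source_rank": 0.9},
        {"domain": "example-content-farm.com", "author": "stranger@example.com", "source_rank": 0.2},
    ]
    print([i for i in feed if keep(i)])  # only the first item survives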

In addition, it would be very valuable to train ChatGPT on my own private corpus of data. The intent here is not just to be able to protect searches (whether they are sensitive or not, they involve lots of metadata that is personally identifiable), but also to create a version of ChatGPT trained for the way I write and process data, not the average outcome. If this worked I also might trust it to deliver new content on a schedule (almost like a newsletter).
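
Training a private model is beyond a blog post, but the simplest stand-in, retrieving from a local corpus so only the relevant notes ever reach a model, can be sketched in a few lines of Python. The file names and the word-overlap scoring are toy placeholders for real documents and real embeddings; nothing leaves the box.

    CORPUS = {
        "porter_notes.txt": "summary of porter five forces and competitive strategy",
        "newsletter_draft.txt": "ideas for the weekly newsletter on ai chatbots",
    }

    def retrieve(question, k=1):
        # Rank local documents by how many question words they share.
        q_words = set(question.lower().split())
        scored = sorted(
            CORPUS.items(),
            key=lambda kv: len(q_words & set(kv[1].split())),
            reverse=True,
        )
        return [name for name, _ in scored[:k]]

    print(retrieve("what did I write about porter five forces"))
    # -> ['porter_notes.txt']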

The cost to serve an AI result is high today (10x-100x that of a regular search result), and responses are too slow. But it’s going to get faster, and soon.

Without this layer of filtering, it will be easy to be overwhelmed by a layer of AI-generated BS that hallucinates something that sounds familiar to us. If you’re not familiar with Clay Shirky’s 2008 classic talk on Information Overload/Filter Failure, here it is. It seems even more prescient now.

What’s the takeaway? The technology in generative chatbots like ChatGPT is too attractive to ignore. Even if all it ever does is make relatively dumb processes incrementally better, the improvement potential is vast. Conversational chatbots are going to be everywhere, so we need to learn how to use them well. And "using them well" poses ethical, moral, and technical quandaries. We're going to need to evolve new ways of thinking about problems when they are informed by AI.

A version of this essay was originally published here.
