GenAI must ask questions, not just give answers
Gianni Giacomelli
Researcher | Consulting Advisor | Keynote | Chief Innovation / Learning Officer. AI to Transform People's Work and Products/Services through Skills, Knowledge, Collaboration Systems. AI Augmented Collective Intelligence.
Some of Generative AI's limitations, especially in the models that are currently widely available, stem from a very simple thing: they are calibrated for the "instant gratification" of their users. That is a real problem when you are trying to solve complex challenges with defined "right/wrong" answers, and it is one of the reasons why language models sometimes provide well-structured, polished, yet ultimately "middle-of-the-road" and unimaginative answers, or even grossly inaccurate ones. There are ways to address this challenge, and they have to do with rethinking what we mean by "generative".
The value of thinking slow
GenAI tools often remind us of over-eager interns who want to show off so they are perceived as smart. (To be sure, many other professionals fall into that trap: trying to impress colleagues by giving fast answers.) Our own, human, fast-answer mechanism, which Daniel Kahneman dubbed "system 1" thinking, is often misguided. We all, humans and machines, need "slow" thinking when looking for truly interesting answers. And that, I argue, starts with AI asking humans more questions.
Three examples illustrate the potential of this intuition.
First, research conducted in mid-2023 by Harvard and BCG showed that large language models (LLMs) could already improve the output of junior consultants on tasks that don't require much proprietary, private context, such as generating new product ideas for a business-to-consumer market segment. However, the models struggled more with providing cogent company strategy, whose quality depends heavily on understanding the broader company context.
A second set of examples comes from healthcare. LLMs are already showing a remarkable ability to provide answers when given a thorough anamnesis, but the reality is that most patients cannot supply a good enough account of their symptoms on their own. A large part of a doctor's role is (or should be) to ask questions and gather more input for the diagnosis. An LLM can be instructed to do the same, and even to use knowledge-graph databases to probe adjacent fields, something even doctors struggle to do. Google's AMIE does some of that, with solid results, and Perplexity.ai's Copilot mode asks specific questions about the cause of a health ailment before proposing solutions.
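As a rough illustration of that question-first pattern (not AMIE's or Perplexity's actual implementation; the prompt wording, model name, and loop structure below are my own assumptions), here is a minimal sketch of an intake assistant that gathers symptoms before doing anything else:

```python
# Minimal sketch of a "questions first" intake loop using the OpenAI
# chat-completions API. Prompt wording and model name are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM = (
    "You are a medical intake assistant. Do NOT propose a diagnosis. "
    "Ask exactly one clarifying question per turn until you have enough "
    "detail about the symptoms, then reply with the single word READY."
)

def intake_dialogue(first_complaint: str, max_turns: int = 5) -> list[dict]:
    """Collect context through questions; return the enriched transcript."""
    messages = [{"role": "system", "content": SYSTEM},
                {"role": "user", "content": first_complaint}]
    for _ in range(max_turns):
        reply = client.chat.completions.create(
            model="gpt-4o", messages=messages,
        ).choices[0].message.content
        if reply.strip() == "READY":
            break
        print("Assistant asks:", reply)
        messages.append({"role": "assistant", "content": reply})
        # In a real product the patient answers in the UI; stdin here.
        messages.append({"role": "user", "content": input("Patient: ")})
    return messages  # richer context to hand to a separate diagnostic step
```

The point is the shape of the loop: the model is rewarded for gathering context, not for answering fast.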
A third example comes from education. Khanmigo, the tool built by Sal Khan (of Khan Academy) in collaboration with OpenAI, uses a sophisticated Socratic flow: instead of letting children passively memorize answers, it asks them to complete and build on progressively more complete inputs from the machine, so that they develop a full understanding of the knowledge they absorb.
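Khanmigo's internals are not public, but the Socratic pattern itself is easy to approximate; the system-prompt wording below is an assumption of mine, not Khan Academy's actual prompt:

```python
# A Socratic-tutor system prompt in the spirit of the flow described
# above. Illustrative wording only; Khanmigo's real prompts are private.
SOCRATIC_TUTOR = """
You are a tutor. Never state the full answer.
1. Ask what the student already knows about the problem.
2. Offer a partial step and ask the student to complete it.
3. If the student errs, ask a question that exposes the contradiction.
4. Only confirm an answer the student has produced themselves.
"""
```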
To partially summarize these findings, the common threads are context, computational efficiency, and semantic versus symbolic reasoning. And yet abstraction, context, and taking the less-traveled path are exactly what problem-solving often requires when facing complex problems.
Three vectors for questions
Understanding the problem well is a big part of any solution. Asking probing questions widens the aperture and enables better focus; it is the core of the first diamond in design thinking's "double diamond", whose value lies in identifying the best angle of attack on a problem. But questions are extraordinarily useful in every other part of the ideation process too.
No question, whether human or digital, is a bad question, as long as it leads to at least one of three things.
First, in these ideation processes, the interplay between diverse participants makes the difference between success and failure. Therein lies the opportunity: augmenting our collective intelligence by amplifying the diversity of views, through better questioning facilitated by smart machines. AI can ask many types of questions, drawing on the many human-built frameworks that embed logical structures, for instance Socratic and other structured-thinking methods (I discuss some here).
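As a toy version of that framework-driven questioning (the framework list and prompt template below are illustrative assumptions, not a canonical set), one might fan a single problem statement out through several lenses:

```python
# Fan one problem statement out through several question frameworks.
# The framework list and prompt template are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

FRAMEWORKS = {
    "socratic":    "Question the assumptions hidden in this problem.",
    "five_whys":   "Ask 'why' about the stated goal, five levels deep.",
    "inversion":   "Ask what would guarantee failure here.",
    "stakeholder": "Ask how three different stakeholders would restate it.",
}

def probing_questions(problem: str) -> dict[str, str]:
    """Return framework-specific probing questions, not answers."""
    out = {}
    for name, lens in FRAMEWORKS.items():
        prompt = f"{lens}\n\nProblem: {problem}\n\nReply with questions only."
        out[name] = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": prompt}],
        ).choices[0].message.content
    return out
```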
Second, additional possibilities stem from using machines to radically open up the design space, by asking humans (or other machines) to look at the problem through the lens of distant analogies (e.g., "What communities, such as Wikipedia, look like good analogies for healthcare knowledge management?"), or by importing ideas from other fields, as Markus Buehler at MIT recently did when using GenAI to investigate engineering material properties (supersonic fractures) through the lens of biology. Even when machines don't have intelligent answers, they can tee up creative questions that humans, and other machines, can then try to address.
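A minimal sketch of that distant-analogy move, with the domain list and prompt wording as my own assumptions:

```python
# Open up the design space by forcing distant-domain analogies.
# Domains and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

DISTANT_DOMAINS = ["ant colonies", "Wikipedia governance",
                   "immune systems", "jazz improvisation"]

def analogy_questions(problem: str) -> list[str]:
    """For each distant domain, ask for questions the analogy suggests."""
    questions = []
    for domain in DISTANT_DOMAINS:
        prompt = (f"Treat {domain} as an analogy for this problem:\n"
                  f"{problem}\n"
                  "Ask three questions the analogy suggests. Do not answer.")
        questions.append(client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": prompt}],
        ).choices[0].message.content)
    return questions
```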
Third, an exciting emerging avenue is multi-agent generative AI: multiple agents that ask questions of each other, and of humans. That process can be curated by humans who decide which types of agents are useful and which threads of discussion are promising. Think, for instance, of a transformative idea at a large company, where much of the difficulty lies in change management and acceptance by employees. What about a tool that uses multiple personas to critique the idea, acting as a synthetic, multithreaded town hall? One could gather the (simplified, possibly stereotyped, yet readily available) views of junior and senior employees, of people based in different regions, and of professionals in different departments. Humans could decide where to enter that town hall, which personas to amplify or complement, and then draw conclusions and iterate on the transformative idea itself, as well as prepare a precise and thorough change management plan, including, for instance, a detailed stakeholder engagement strategy.
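Here is a minimal sketch of such a town hall; the personas, prompts, and round structure are illustrative assumptions, not a production design:

```python
# Sketch of a synthetic "town hall": several personas critique one idea
# and pose questions to each other. Personas, prompts, and the round
# structure are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

PERSONAS = [
    "a skeptical 25-year veteran in operations",
    "a junior engineer two months into the job",
    "a regional manager in a market with thin margins",
]

def town_hall(idea: str, rounds: int = 2) -> list[str]:
    """Run a few rounds of persona critique; return the transcript."""
    transcript = [f"Proposed change: {idea}"]
    for _ in range(rounds):
        for persona in PERSONAS:
            prompt = (f"You are {persona}. Read the discussion so far and "
                      "reply with one concern and one question addressed "
                      "to another participant.\n\n" + "\n".join(transcript))
            reply = client.chat.completions.create(
                model="gpt-4o",
                messages=[{"role": "user", "content": prompt}],
            ).choices[0].message.content
            transcript.append(f"[{persona}] {reply}")
    return transcript  # a human curator decides which threads to amplify
```

A human curator would read the transcript, amplify the threads worth pursuing, and feed the revised idea back in for another round.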
These insights led to research and prototyping of alternative human-machine interactions for innovation. In our work at MIT's Center for Collective Intelligence (the MIT Supermind Ideator), and subsequently in my collaboration with Luca Taroni on the "AI Collider" called Solver (a simple example of which is the "Innovation Assistant S" OpenAI GPT), we built machines that ask humans to refine the question before ideation. Solver also adds frameworks and theories from insightful researchers that put a lens on a problem, and through that prism it lets us ask ever more pertinent questions. These are just small, early prototypes hinting at a very large design space that will no doubt be explored further in the near term.
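The refine-before-ideate step can be approximated in a few lines. This is a loose sketch in the spirit of those tools, not their actual code, and the prompts are my own assumptions:

```python
# Sketch of "refine the question before ideating", loosely in the spirit
# of the tools above (not their actual code; prompts are assumptions).
from openai import OpenAI

client = OpenAI()

def refine_then_ideate(raw_problem: str) -> str:
    # Step 1: the machine questions the question.
    critique = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content":
                   "List what is ambiguous or merely assumed in this "
                   "problem statement, phrased as questions to its "
                   f"author:\n{raw_problem}"}],
    ).choices[0].message.content
    print(critique)
    # Step 2: a human answers before any ideas are generated.
    answers = input("Your answers to the questions above: ")
    # Step 3: only now does ideation start, with the enriched framing.
    return client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content":
                   f"Problem: {raw_problem}\nClarifications: {answers}\n"
                   "Propose five unconventional solution directions."}],
    ).choices[0].message.content
```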
As a not-so-small aside, these considerations should also remind us that it is dangerous to put humans in a position of dependency on the AI machine, as this might atrophy core cognitive traits that people have, such as symbolic, critical, and logical thinking. Designing for active interaction between humans and machines is crucial to maintaining the vitality of human intelligence.
Building question-asking machines today
As we stand, we already have the practical means to build interactions with machines that make human-machine collective intelligence more powerful. While there is much more that could be done, a few simple patterns apply whether you are using this approach for yourself and your team or building enterprise and client-facing applications.
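One building block that applies in both settings, sketched below, is a system prompt that makes an assistant ask before it answers; the wording, the 0-10 scale, and the threshold are assumptions, not a prescribed recipe:

```python
# A reusable "ask before answering" system prompt for an assistant.
# The wording, the 0-10 scale, and the threshold are assumptions.
from openai import OpenAI

client = OpenAI()

ASK_FIRST = (
    "Before answering, rate from 0 to 10 how much essential context is "
    "missing from the request. If the score is above 3, do not answer: "
    "reply only with the two or three questions whose answers would "
    "most improve your response."
)

def ask_first_assistant(user_request: str) -> str:
    """Returns either clarifying questions or a context-aware answer."""
    return client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "system", "content": ASK_FIRST},
                  {"role": "user", "content": user_request}],
    ).choices[0].message.content
```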
Picasso said that computers are useless because they can only give answers. His point was subtler than it sounds: for true creativity and nontrivial problem-solving to occur, one needs to ask unusual, uncomfortable questions. Computers couldn't do that then. And mostly, we don't allow them to do it today either.
But with generative AI, they could. Generation is fueled by the right input and dialectic: it requires questions from us to the machine, but also from the machine to us. We just need to design the right interactions and workflows, and embed them into the flow of our work.
Thanks for the article Gianni Giacomelli. I'm all for this approach of leaning on AI to *prompt us* with questions. As humans, we're still the source for true imagination, for surprising non-linear, non-vanilla ideas that lead to meaningful solutions.
Lead Product Manager
1y
There are ways of forcing a model to think "slow", such as:
- splitting an action into several steps,
- asking it to provide the steps it thinks of before executing them,
- or explicitly asking it to think for itself.
In short, it's possible. But perhaps the ability to think "slow" is unique to humans, and adding GenAI to the creative process would help reinforce this aspect.
7+ Years Experience Delivering eCommerce Results thru MarTech digital transformation
1y
Bingo. GenAI is nothing more than garbage. It will go down as one of the biggest disappointments in postmodern history.
Dad - Lucky Husband - Commercial Building Automation Specialist - AI Solutions Founder - Developer - Writer
1y
Do calculators suffer from being calibrated for instant gratification?
Research and research systems for marketing and strategy in green tech and sustainability.
1y
Yes, but I think there is big value in getting proposed answers as well. Perhaps a proposed answer is just a question where the "what do you think of this idea?" part is implicit.

I spent some time this weekend researching key inflection points in the rise of chemical agriculture, in response to a survey from ResilienceFrontiers.org. The way I did it was to ask ChatGPT for its "thoughts", assembled from its language map of the world. Then I selected from its answers, drilled down, and kept going until there were answers that felt plausible and valuable. Then I pasted those selected answers into a doc that I ran past Perplexity for fact-checking, like Azeem wrote about last week: https://www.exponentialview.co/p/ai-superpowers-and-mistakes-in-my. It validated the answers I selected, noted when some of them might be incomplete, and even caught a fake answer I tossed in as a test.

So in this case, I really appreciated ChatGPT's knowledge of, for example, the history of corporate "education centers" for teaching farmers to use their industrial inputs. I had no knowledge of these, but ChatGPT mentioned them, I asked for more specific details, and the output was pretty great. Both Perplexity and Google validated them.