Frame the problem, aim the AI.
John Dick
Digital delivery manager | Product lead | Delivery lead | Digital Product Management | Programme & Project Manager - expect momentum, speedier delivery and insights.
1. Introduction
To unlock economic growth and get more value from the investment in AI infrastructure, we have to identify and solve more business and consumer problems.
This means we have to get better and faster at (1) problem-framing (the process of identifying and defining these problems) and (2) understanding whether new AI applications help solve them.
This challenge is framed nicely in a recent blog post from Sequoia Capital, 'AI's $600bn question'.*
2. Background on problem-framing and AI
My recent research has focused on the latest thinking on problem-framing, alongside prompt engineering and interfaces into large language models (LLMs).
While there is an abundance of theory and hype surrounding these topics, there is a lack of content on real-world experiments, including both successes and failures.
3. Experiment overview
In this post I will share an experiment where I took the latest thinking from Harvard Business Review (HBR) on problem-framing (the E5 approach)* and the most recent release of Claude from Anthropic (Claude 3.5 Sonnet).
My goal was to see if AI could help me improve problem-framing using the latest thinking from HBR.
I can confidently say that it did, and quickly.
Yes, it took a lot of prompting and iteration.
And, no, the result is not going to be: 'here are 10 great prompts for....'
The experiment actually revealed a set of practical 'so-what' takeaways on working with an LLM, as well as a set of useful outputs for future problem-framing exercises.
My intent behind publishing this article is to encourage more problem-framing thinking as we consider new AI applications. It's where the value lies.
Key takeaways, examples of prompts used and some outputs are shared below for challenge and improvement.
4. Key takeaways
Takeaway 1:
Using the LLM as a supporting tool significantly reduced the time this would have taken me, compared with traditional search tools such as Google or reading through multiple documents. For example, the LLM's ability to summarise and pull out key parts of the E5 approach was excellent. Armed with the summary, I could drill down into specific areas and then explore them further by asking more questions.
The implication is important. Despite the ubiquity of documentation and know-how in our lives, understanding key points and asking relevant questions still demands considerable time and effort. For instance, when presented with search results on comparison websites or Google, I often find myself wanting to ask more in-depth questions about the available options. Independent AI applications that work for the customer, rather than the supplier, could potentially address this need.
Takeaway 2:
LLM results cannot be fully trusted to always provide the truth. We have to constantly refine and validate results by iterating prompts. LLMs don't understand the meaning of words, so there is an extra layer of work required to understand and master prompting techniques. In other words, the quality of the output is determined by the quality and quantity of the input prompts.
The consequences are clear: we cannot effectively solve business problems using incorrect facts or false information. While LLMs can enhance human creativity by helping us ask better questions, this approach is challenging to scale. However, future AI applications may assist in creating more effective prompts. Anthropic has recently released a tool to support prompting, which I plan to test in my next experiment.
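The refine-and-validate loop described above can be sketched in code. This is an illustrative outline only: `ask` is a stub standing in for a real LLM API call (e.g. a chat-completion request), and the validation wording is an assumed example, not a prescribed prompt.

```python
def validation_prompt(answer: str) -> str:
    """Wrap a previous answer in an auditing request, as in Takeaway 4."""
    return (
        "Act as an auditor reviewing the following answer. "
        "Assess its accuracy, provide supporting facts, and flag "
        "anything that may be fabricated.\n\n" + answer
    )

def refine(ask, initial_prompt: str, rounds: int = 2) -> list[str]:
    """Run an initial prompt, then alternate validation rounds.

    `ask` is any callable that takes a prompt string and returns the
    model's reply. Returns the full thread of responses so each
    iteration can be inspected and challenged by a human.
    """
    thread = [ask(initial_prompt)]
    for _ in range(rounds):
        thread.append(ask(validation_prompt(thread[-1])))
    return thread
```

In practice `ask` would call whichever LLM interface you use; keeping it as a parameter makes the iteration pattern itself testable without network access.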
Takeaway 3:
Domain knowledge is required to get going. Vague prompting won't deliver meaningful results. More useful results started to appear once I was direct and to the point, with some understanding of the topic.
This raises an important question that I was unable to definitively answer: does the quality of prompt results improve more with domain-specific experience in generating prompts, or do generalists with curiosity, fresh ideas, and diverse perspectives achieve better outcomes?
Takeaway 4:
Validation prompting techniques are a must. For example, asking the LLM to rephrase, provide supporting facts & figures and critically self-evaluate content became important.
The implications are significant. LLMs can generate content that appears authoritative and credible but is actually false or fabricated. In my experiment, the AI even cited non-existent research and sources. This underscores the crucial role of accuracy and truth validation in AI applications, potentially opening up a new domain for AI testing and development.
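One small, practical guard against fabricated sources is to mechanically extract anything that looks like a citation so it can be verified by hand. The sketch below is a crude heuristic of my own devising, not a complete solution: an LLM citing "(Smith, 2021)" is no guarantee the work exists.

```python
import re

# Matches citation-like patterns such as "(Smith, 2021)" or
# "(Jones et al. 2019)" so they can be pulled out for manual checking.
CITATION_PATTERN = re.compile(r"\(([A-Z][A-Za-z]+(?: et al\.)?),?\s*(\d{4})\)")

def extract_citations(text: str) -> list[tuple[str, str]]:
    """Return (author, year) pairs that look like academic citations.

    Every pair returned still needs independent verification; the
    pattern only finds candidates, it cannot confirm they are real.
    """
    return CITATION_PATTERN.findall(text)
```

A pass like this does not validate truth, but it turns "check the sources" from an open-ended task into a concrete checklist.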
Takeaway 5:
The overall outcome of this experiment was a measurable improvement in my problem-framing. New techniques were discovered, and I was able to mix and match, pulling together the best parts of what I found.
Getting useful outputs was valuable in itself, but it was the ability to mix and match knowledge and data that was particularly insightful. Previously this would only have been possible manually, with the right people in the room. This is no longer true.
I was able to take my data (the E5 approach) and feed it into the LLM to improve my knowledge and complete my task. We are already seeing this potential appear in retrieval-augmented generation (RAG) applications and, assuming we can solve data privacy issues, there is significant growth potential here. For example, mixing open banking and energy data with LLMs could open up a range of new applications.
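The retrieve-then-prompt pattern behind RAG can be shown in miniature. Real systems use embeddings and a vector store; the keyword-overlap ranking below is a deliberately simplified stand-in, purely to illustrate how your own data (like the E5 document) gets combined with a question before it reaches the model.

```python
def retrieve(question: str, documents: list[str], k: int = 1) -> list[str]:
    """Rank documents by word overlap with the question; return the top k.

    A toy substitute for embedding-based similarity search.
    """
    q_words = set(question.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str, documents: list[str]) -> str:
    """Combine the retrieved context with the question for the LLM."""
    context = "\n".join(retrieve(question, documents))
    return f"Use only this context:\n{context}\n\nQuestion: {question}"
```

The point of the pattern is the grounding step: the model answers from supplied context rather than from its training data alone, which is also why data privacy becomes the gating issue.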
5. Example of prompts used
I have pulled out a sequence of example prompts used.
These prompts were specifically designed for the problem-framing domain.
Together they reveal the iterative thinking behind the prompting and the validation I needed to perform. Remember, LLMs are just predicting the next words in response to a human input, i.e. our prompt.
The prompts used combine various prompting principles and techniques. For example, they are straight to the point, use affirmative actions and combine techniques.
To aid understanding, the prompts are presented in the order of the 'execution thread', each with a suitable label. I am showing these prompts as examples and recognise they could be improved.
Initialising prompt:
Act as a consultant advising me on improving my problem-framing capabilities. Harvard Business Review has published its E5 approach to problem-framing, see attached. Provide me with a summary of the approach. Critically evaluate the approach to problem-framing and suggest areas where there could be gaps. Be objective and do not make up information.
Validation prompt:
Act as an auditor reviewing the findings from the last prompt. Assess the accuracy of the summary and provide facts and figures to support your assessment. Critique the summary information and validate the gaps identified. Highlight 3 of the most important gaps I need to pay attention to.
Drill down prompt:
Back to enhancing problem-framing with Claude AI. From now on, act as an expert in problem framing - an expert who defines great problems. I am following the Harvard Business Review 5 E's (E5 framework) for problem framing. Problem-framing is a process for understanding and defining a problem. There are 5 steps to the Harvard Business Review approach; attached is the framework. List out up to 3 alternative problem-framing techniques for understanding and defining problems.
Challenge prompts:
The 3 alternatives you identified are all dependent on human definition of problems, which can be affected by biases and limitations of knowledge. What data-driven, objective problem-framing techniques could we use as an alternative to the E5 framework?
If you read the E5 framework already provided, there are clear steps and techniques to execute in order to frame the problem. The examples you have provided don't come with a prescriptive process to follow. Revisit the recommendations and present practical alternatives to the E5 framework.
Expanding prompts:
For each of these 3 techniques, act as an outline expander. Expand out each of the 3 to show the end to end process for the technique.
Output these 3 problem-framing techniques in tabular form.
Expander prompt:
And finally, I want a clear and concise problem statement. Act as a consultant, state what good looks like and the evaluation criteria we will use to assess the quality of the problem statement. If possible generate a template example that covers the most important features of a clear and good problem statement.
Validation prompt:
Evaluate the example of this good problem statement and identify areas for improvement. For example, 'clear, concise and comprehensive' are statements of the obvious. Create a definition of a clear problem statement in plain English, with no jargon.
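The execution thread above maps naturally onto the multi-turn message structure used by chat-style LLM APIs, where each prompt and reply is appended to a running conversation. The sketch below is illustrative: the labels are mine, the prompts are abbreviated, and `"<model response>"` is a placeholder where a real reply would go.

```python
def add_turn(thread: list[dict], prompt: str, reply: str) -> list[dict]:
    """Append one prompt/response pair to the running conversation."""
    thread.append({"role": "user", "content": prompt})
    thread.append({"role": "assistant", "content": reply})
    return thread

thread: list[dict] = []
for label, prompt in [
    ("initialising", "Act as a consultant... summarise the E5 approach."),
    ("validation", "Act as an auditor... assess the accuracy."),
    ("drill down", "List up to 3 alternative problem-framing techniques."),
]:
    add_turn(thread, f"[{label}] {prompt}", "<model response>")
```

Keeping the whole thread matters: each later prompt ("revisit the recommendations...") only makes sense because the model can see the earlier turns.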
6. Example outputs
Two unedited examples are shared as evidence of output from the experiment:
1. An improved problem-framing approach combining a number of techniques
2. A checklist for evaluating a problem statement
1. Improved problem-framing approach
This is an enhanced bullet summary following the E5 approach:
Expand:
- Gather diverse perspectives and challenge initial assumptions
- Use frame-storming techniques to generate alternative problem frames
- Identify and prioritise key stakeholders
- Apply the Cynefin Framework to categorise the problem
Examine:
- Define the problem clearly, including the scope, boundaries, and success criteria
- Measure the current state of the problem using data and metrics (DMAIC - Measure)
- Use tools like the Iceberg Model to identify underlying drivers/systemic contributors
- Peel back layers to understand the problem's roots
- Summarise findings into a clear, specific, and actionable problem statement
Empathise:
- Conduct interviews/surveys to gather stakeholder perspectives (Design Thinking)
- Create empathy maps to capture thoughts, feelings, actions, and desires
- Validate the problem statement with key stakeholders
Elevate:
- Explore the broader organisational context using the four-frame model
- Identify interconnected issues and their implications
- Test the problem definition against new information or changing circumstances
- Use the Cynefin Framework to guide appropriate problem-solving strategies
Envision:
- Define the desired goal or outcome
- Use backcasting to work backward from the goal to identify necessary actions
- Develop potential solutions through ideation/prototyping
- Refine the problem statement and solutions based on feedback and validation efforts
- Plan for the implementation and sustainability of the chosen solution (DMAIC - Control)
2. A checklist for evaluating a problem statement
1. Clarity
   - [ ] Is the problem statement easy to understand?
   - [ ] Does it avoid technical jargon and use plain language?
   - [ ] Can stakeholders from different backgrounds and levels of expertise understand the problem?
2. Specificity
   - [ ] Does the problem statement include specific details about the issue?
   - [ ] Are the affected parties or areas clearly identified?
   - [ ] Does it mention the scope and boundaries of the problem?
   - [ ] Are there any quantifiable metrics or indicators used to describe the problem?
3. Relevance
   - [ ] Is the problem aligned with the organisation's goals and objectives?
   - [ ] Does the statement explain why solving this problem matters to the organisation?
   - [ ] Are the consequences or impacts of not addressing the problem clearly stated?
4. Root Causes
   - [ ] Does the problem statement identify the root causes or contributing factors?
   - [ ] Are the causes stated based on evidence or data rather than assumptions?
5. Actionability
   - [ ] Does the problem statement suggest or imply potential actions to solve the problem?
   - [ ] Is the problem framed in a way that enables the generation of solutions?
   - [ ] Does it provide enough context to guide problem-solving efforts?
6. Conciseness
   - [ ] Is the problem statement concise and to the point?
   - [ ] Does it avoid unnecessary details or tangents?
   - [ ] Can the main points of the problem be quickly grasped?
7. Stakeholder Agreement
   - [ ] Do the key stakeholders agree with the problem statement?
   - [ ] Has the problem statement been validated with relevant parties?
   - [ ] Is there a shared understanding of the problem among the team?
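A checklist like this can also be turned into a simple score, which is useful when comparing draft problem statements. This is a toy helper of my own, not part of the experiment's output: the criterion names come from the checklist, while the yes/no answers would come from a human review (or a further validation prompt).

```python
# Top-level criteria taken from the checklist above.
CRITERIA = [
    "clarity", "specificity", "relevance", "root causes",
    "actionability", "conciseness", "stakeholder agreement",
]

def score_statement(answers: dict[str, bool]) -> float:
    """Return the fraction of checklist criteria the statement satisfies.

    Missing criteria are treated as not satisfied, so a partial review
    never inflates the score.
    """
    passed = sum(1 for c in CRITERIA if answers.get(c, False))
    return passed / len(CRITERIA)
```

A coarse score is no substitute for working through each tick-box, but it makes iteration visible: a reframed statement should score higher than the draft it replaces.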
7. Conclusion
This small experiment achieved its goal, enhancing my problem-framing skills using AI while revealing some practical takeaways.
Looking ahead, we must constantly return to the fundamental question: What business problem are we solving?
This simple yet powerful query should guide our actions, ensuring we deploy this technology not for its own sake, but to unlock real, business value.