Three skills get GenAI to do more for you

GenAI is currently overhyped as a standalone technology and, at the same time, much underappreciated as a complement to other technologies and to us. To benefit from its full and evolving potential, I argued elsewhere that we need to learn what I call “augmented thinking,” which encompasses several skills, including the one we will discuss today: the ability to guide your AI (or several tools at once, including various types of AI such as ChatGPT, Claude, and Perplexity) and the team working with it toward the best results.

Effective use of GenAI is not just about typing queries and hoping they return accurate or insightful results. I believe the solution is not generic "prompt engineering"—at least not in its current scope.

Many believe that the future of prompt engineering will not be overly complex, especially for non-technical individuals. Models are increasingly adept at interpreting our requests, which is reassuring: adapting to the technology should be manageable.

However, learning to engage more holistically and effectively with GenAI tools does, and will likely continue to, yield superior results - and it's not a skill that most people have today. I see "talking with machines" as part of tomorrow's core literacy, just like its human-engagement equivalent: managing processes, group dynamics, and individual colleagues to get to better problem-solving and idea generation. We learn the human version over formative years of management practice, and much literature exists about it. Some people are good at it; others are not; everyone can learn some of it - especially to make the results more consistent. So what is the equivalent when managing an artificial workforce composed of indefatigable artificial “interns”? Everyone is a manager of these machines now - and we had better become good at it.


Three lenses for harnessing GenAI tools: GenAI 3Ps

Three dimensions help us understand what needs to be learned. We will go over each.


  1. Personal, Human engagement. Ensuring that humans are still firmly in the loop and not "asleep at the wheel."
  2. Process of Thought. Leading the machine through a thinking process, a sequence of requests, data inputs, etc.
  3. Prompts, one by one. Engaging optimally with the machine through better prompts (requests), one at a time.

They overlap, just as they do in the non-AI-enabled world when we do the same with humans. Think of innovation workshops that use design thinking techniques: you need to manage the human factor (find the right people, motivate them, engage them), follow a process (have a directional sequence of steps, but also know when to change direction), and get the participants to perform individual activities (supported by artifacts for best results).

Let's learn how to harness them one at a time.

1. Engage with the technology

Let us quickly settle the "human in the loop" part, as I have written about it elsewhere and included references to work done by our MIT team and many other world-class researchers below. The key points are:

  1. It is a significant problem. People do fall asleep at the wheel, becoming over-reliant on AI's unreliable skills, and when they do, quality suffers - both in terms of accuracy and creativity.
  2. We often don't pinpoint the "where/when". Part of the issue stems from professionals' current inability to identify which subtasks fall within AI's frontier of the possible and which subtasks are so repetitive that humans struggle to maintain typical performance on them. The upshot is that people often give AI things that are too hard for it, or don't use it to lighten the burden of repetition. More on this here.
  3. Designing the UI/UX for people, including specific scaffolding of the human-computer interaction (more here), provides an exoskeleton that mitigates these issues.
  4. Designing a scalable human-in-the-loop people/process/tech stack is serious business; there is still much work to do in this space. I argued in the past that we might need something similar to what happened when Six Sigma and Lean were introduced into work processes, which is to say, we need to expend significant time and effort (and brains) on it.
  5. You drive. A straightforward rule for our current individual use of generative AI is always to have the initiative—don't let the machine have it. That means aggressively explaining, critiquing, skeptically asking for proof of logic, asking to be asked, etc. A fully self-driving car has yet to reach the market, but assisted driving is already beneficial. Similarly, we should not treat current GenAI technology as a fully autonomous self-answering bot, especially for complex problem-solving. A minimal sketch of what "keeping the initiative" can look like follows this list.
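To make point 5 concrete, here is a minimal sketch of a "you drive" scaffold. It assumes a hypothetical `ask_llm` helper standing in for whatever chat tool or API you use; the preamble wording is illustrative, not prescriptive.

```python
# A minimal "you drive" scaffold: the model must surface assumptions and ask
# clarifying questions before answering, and the human approves each step.
# `ask_llm` is a hypothetical stand-in for your preferred GenAI tool or API.

def ask_llm(prompt: str) -> str:
    """Hypothetical placeholder: replace with a real call to your chat tool/API."""
    return f"[model reply to: {prompt[:60]}...]"

DRIVER_PREAMBLE = (
    "Before answering, list your assumptions, ask me up to three clarifying "
    "questions, and outline the logic you plan to follow. Then wait."
)

def drive(task: str) -> str:
    # Step 1: the model exposes assumptions and questions instead of answering.
    plan = ask_llm(f"{DRIVER_PREAMBLE}\n\nTask: {task}")
    print(plan)
    # Step 2: the human stays in charge, correcting and clarifying.
    answers = input("Your corrections and answers to the model's questions: ")
    # Step 3: only then do we ask for the actual answer, with proof of logic.
    return ask_llm(
        f"Task: {task}\nMy clarifications: {answers}\n"
        "Now answer, and show the reasoning behind each conclusion."
    )
```

The same pattern works in a plain chat window: paste the preamble, read the model's questions, answer them, and only then ask for the final output.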


2. A process for your augmented thinking

GenAI can't think about everything all at once at inference time (as it responds to your query) and cannot know everything relevant to our circumstances. That's where a process is helpful.

Level set. First, today's GenAI tools mostly don't know who you are and what you're trying to do unless you explain it, just as you would with humans. We all - us and them - need context: data injected at the right time, remembered, and surfaced in the right places. Steps in a process help collect the relevant context and narrow the scope of possible answers. Compared to humans, the difference with GenAI is that if you use a chat tool (ChatGPT, Claude, Gemini), it loses context after a while, so you need to remind it periodically and/or stay concise. Thankfully, context windows are becoming larger.
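As an illustration, here is a minimal sketch of level setting, with the context front-loaded in one block you can re-send as a long chat drifts. The scenario, numbers, and the `ask_llm` helper are hypothetical stand-ins.

```python
# Level setting: state who you are, the goal, and the constraints up front, so the
# model does not have to guess. `ask_llm` is a hypothetical stand-in for your tool.

def ask_llm(prompt: str) -> str:
    """Hypothetical placeholder: replace with a real call to your chat tool/API."""
    return f"[model reply to: {prompt[:60]}...]"

CONTEXT = """You are helping the head of operations at a mid-size logistics firm.
Goal: cut invoice-processing time by 30% this quarter without new hires.
Constraints: the current ERP stays; budget under $50k; answer in plain language."""

question = "What are the three highest-leverage changes we should test first?"
print(ask_llm(f"{CONTEXT}\n\n{question}"))

# In a long conversation, re-send or summarize CONTEXT periodically so it is not
# pushed out as the exchange approaches the model's context window.
```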

Be aware of computational efficiency limitations. Second, GenAI models are limited in how much inference they can do in a single run and in how many tokens they can use while remaining cost-effective. As with humans, you can help the models by breaking the thought process into discrete steps.
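A minimal sketch of that decomposition, again assuming the hypothetical `ask_llm` helper: each call does one narrow job, and only the distilled output is carried forward.

```python
# Breaking a big ask into discrete steps so each call stays within the model's
# per-run reasoning and token budget. The task and prompts are illustrative only.

def ask_llm(prompt: str) -> str:
    """Hypothetical placeholder: replace with a real call to your chat tool/API."""
    return f"[model reply to: {prompt[:60]}...]"

task = "Customer churn rose 4 points last quarter; recommend a response plan."

# Step 1: ask only for the decomposition, not the answer.
parts = ask_llm(f"Break this problem into 3-5 distinct sub-problems, one per line:\n{task}")

# Step 2: work each sub-problem in its own call, carrying forward only the findings.
findings = [
    ask_llm(f"For the sub-problem '{part}', list likely causes and one cheap test.")
    for part in parts.splitlines()
]

# Step 3: a final call synthesizes the pieces into one recommendation.
plan = ask_llm("Combine these findings into a one-page plan:\n" + "\n".join(findings))
print(plan)
```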

Follow a (meta)process. There is much literature about ideation, but I suggest starting with this post and the chart below. The creativity meta-process is something that you and your teams have likely used already. You will want to follow that flow or a derivative of it, although recursions (back and forth, as opposed to a linear path) are always possible and often needed.

Source: Supermind.design

(Stay tuned for more on this topic in an upcoming post.)


3. One prompt at a time now

Next, the prompts. (This topic occupies most of the public discussion, and for a good reason: prompt engineering is still an art when scaling things in software production, and developers want more predictability. However, here, we focus on the application of these concepts to the work of non-technical people. For them, prompting, in a narrow sense, is not the only important part of an augmented thinking skillset.)

There have recently been some excellent meta-analyses of prompting techniques (see the paper “The Prompt Report: A Systematic Survey of Prompting Techniques” and Daniel Lopes's helpful blog "A Comprehensive Guide to Text Prompt Engineering Techniques"). I summarize some of their findings below and in the appendix, but I also categorize things slightly differently so that we stay focused on augmented thinking instead of narrow prompting.

Many of these methods leverage collective-intelligence concepts. They harness AI's ability to compare and contrast different viewpoints, whether from the same model at various steps (which I group under “optimal reasoning”) or from other models and benchmarks (“collective perspective”). That dialectic helps new, strong ideas emerge. AI researchers are latching onto these properties because they help machines become more effective. Our role, as humans, is to engage with the machines to make that dialectic happen.

There are many ways of doing that, and I group them into two clusters: optimal reasoning and taking a collective perspective.


Source: Supermind.design

Optimal Reasoning

  • Zero-Shot Techniques: These use no prior examples to generate responses, but they may assign roles, styles, or emotions to guide the AI's response (e.g., "You are a CFO reviewing this proposal"). Rewrite prompts for clarity. Encourage answers grounded in the known facts about a character or persona. (A sketch combining several of these techniques follows this list.)
  • In-Context Learning and Few-Shot Prompting: These use a few examples to guide the AI (e.g., showing "what good looks like"). The number and quality of the examples may affect performance (their order may, too, though less than in the past). Use well-chosen input formats when deriving examples from large datasets.
  • Thought Generation Techniques: These encourage the AI to explain its reasoning step by step. Use thought-inducing phrases and structured formats like tables. (Note that for OpenAI o1, asking the model to think step by step is not recommended, as it does so by itself.)
  • Decomposition Techniques: These break down complex problems into smaller parts (e.g., "What are the parts or types of my problem?"). Solve each part step by step or in parallel. Use both natural language and symbolic reasoning.
  • Prompting Alignment: These ensure that the AI's output aligns with user intentions. Address the AI's tendency to agree with users (sycophancy). Handle biases, cultural sensitivity, and ambiguous questions.
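Here is a minimal sketch that layers three of the techniques above in one prompt: a persona (zero-shot role), two worked examples (few-shot), and an explicit request for step-by-step reasoning (thought generation). The proposal, numbers, and the `ask_llm` helper are illustrative assumptions, not a prescribed method.

```python
# One prompt, three techniques layered: persona + few-shot examples + step-by-step
# reasoning. `ask_llm` is a hypothetical stand-in for your preferred GenAI tool.

def ask_llm(prompt: str) -> str:
    """Hypothetical placeholder: replace with a real call to your chat tool/API."""
    return f"[model reply to: {prompt[:60]}...]"

persona = "You are a CFO reviewing an investment proposal."  # zero-shot role

few_shot = (  # two examples of "what good looks like"
    "Proposal: New warehouse, payback 7 years -> Verdict: Decline, payback too long.\n"
    "Proposal: Automation line, payback 18 months -> Verdict: Approve, fund from capex.\n"
)

ask = (  # thought generation: reason step by step before the verdict
    "Review the proposal below. Think step by step: state your key assumptions, "
    "estimate the payback period, then give a one-line verdict in the same format.\n"
    "Proposal: Replace the legacy billing system; cost $2m; expected savings $600k/year."
)

print(ask_llm(f"{persona}\n\nExamples:\n{few_shot}\n{ask}"))
```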

Taking Collective Perspective

  • Ensembling Techniques: These combine responses from multiple prompts (or models) for better accuracy and use specialized artificial experts (e.g., personas simulating specific roles, skills, etc.) for different types of reasoning. (A sketch combining ensembling and self-criticism follows this list.)
  • Self-Criticism Techniques: These help the AI evaluate and improve its own responses. They have it provide feedback on its answers, make the necessary corrections, and verify answers with related questions for consistency. They can also prompt the AI to ask the human(s) to evaluate its input further.
  • Evaluation Techniques: These use structured prompts to evaluate text quality, employ multi-agent frameworks for diverse perspectives, and add automatic, standardized steps (e.g., rating scales and definitions) for a more thorough evaluation.
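A minimal sketch of the first two clusters working together: the same question goes to several artificial experts (ensembling), and the model then critiques and revises its own synthesis (self-criticism). The personas, question, and `ask_llm` helper are illustrative assumptions.

```python
# Ensembling (multiple personas) followed by self-criticism (critique and revise).
# `ask_llm` is a hypothetical stand-in for your preferred GenAI tool or API.

def ask_llm(prompt: str) -> str:
    """Hypothetical placeholder: replace with a real call to your chat tool/API."""
    return f"[model reply to: {prompt[:60]}...]"

question = "Should we move our customer-support knowledge base to a GenAI assistant?"

# Ensembling: the same question answered by three artificial experts.
personas = ["a skeptical CFO", "a customer-support team lead", "a data-privacy officer"]
views = [ask_llm(f"As {p}, answer in five bullet points: {question}") for p in personas]

# Self-criticism: draft a synthesis, have the model attack it, then revise.
draft = ask_llm("Synthesize these views into one recommendation:\n" + "\n---\n".join(views))
critique = ask_llm(f"List the three weakest points in this recommendation and why:\n{draft}")
final = ask_llm(
    f"Revise the recommendation to address the critique.\nDraft:\n{draft}\nCritique:\n{critique}"
)
print(final)
```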

The appendix below provides more details and examples; I recommend that you identify a few that you want to master and use repeatedly. Some are quite complex for non-technical use, but they can still inspire you. If you squint, you will see that many of these techniques and methods are constructs derived from classical logic (e.g., Socratic methods), metacognition (deliberate thinking about thinking), or other thinking methods you may already be familiar with (e.g., design thinking or lean). That should flatten your learning curve.

In the end, you will derive much value even from your own version of some of these techniques. What is critical is to deliberately focus on a few of them—more in the appendix.


Conclusions: we humans manage GenAI labor

GenAI tools will improve, just as workers tend to do over their careers. However, like many human workers, they need a proactive manager. That is you, me, and our teams. This requires learning skills that help us guide them as they augment our thought processes. In this essay, we focused on three aspects:

  1. keeping the human optimally engaged
  2. ensuring a deliberate process of the conversation with the GenAI tools
  3. talking to the tools in words that help them (and us) do the job more effectively

There's a learning curve in this. It is part of the new augmented thinking literacy that will harness the power of these technologies to give us superpowers. Experienced managers know that blaming their staff for poor results is pointless—learning how to guide them to the best results is much more effective. The same applies to our engagement with GenAI.


Appendix: an overview of the techniques that executives, not just coders, can leverage in their work

Source: Analysis derived from the paper “The Prompt Report: A Systematic Survey of Prompting Techniques” and Daniel Lopes's "A Comprehensive Guide to Text Prompt Engineering Techniques."

This essay is part of a series on AI-augmented Collective Intelligence and the organizational, process, and skill infrastructure design that delivers the best performance for today's organizations. More here. Get in touch if you want these capabilities to augment your organization's collective intelligence.
