Five Key AI Prompting Principles
[Image: DALL-E 3's impression of its own interface]


Do you find that GenAI agents too often give you outputs that “aren’t quite right”? Or worse, do you spend 30 minutes going back and forth with an agent only to realize you should’ve just “done it yourself”?

If so, you’re in good company: I’ve heard these same frustrations from the hundreds of GTM professionals I’ve advised. Often, they stem from a faulty assumption: that you can work with GenAI the way you would with a human.

Sure, you can use “natural language.” However, the secret to working with GenAI lies in understanding how agents “think.” By prompting them in a way that helps them “think better,” you’ll get the outputs you expect.

Here’s what you need to know about “how GenAI thinks” and what to do to get the best outputs:

The Pioneering Insight: “Attention is All You Need”

How do GenAI agents “think”? They think in weighted probabilities. In the seminal 2017 paper “Attention Is All You Need,” Vaswani et al. introduced the transformer model. In contrast to earlier models that roughly “parsed data word-by-word,” transformers interpret the words of an input all at once.

The core of this innovation is the self-attention mechanism. It enables the model to weigh the “importance” of different parts of the data it’s processing. That’s why transformer models can be both larger and faster than their predecessors: they allocate their “horsepower” to the most meaningful data.

For example, consider the GenAI chat agent prompt, “Please tell me which of these buyer personas has the most influence in purchasing payroll software: the CTO or the CHRO.” In this sentence, the self-attention mechanism might increase the weight, or importance, of words like “buyer, influence, payroll software, CTO, and CHRO” and decrease the weight of words like “please, tell me.” Only after it has assigned these weights will the model generate an output.

Understanding this self-attention mechanism allows you to craft prompts that “guide the AI’s focus.” You nudge the model to produce outputs that align with your expectations by specifying what’s essential through your prompts.
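To make that idea concrete, here is a toy sketch in Python. It is not the transformer’s actual attention computation (which derives scores from learned query, key, and value projections); the tokens and relevance scores below are made-up numbers, purely to show how a softmax turns raw scores into the normalized weights that decide where the model’s “attention” goes.

```python
# Toy illustration only: hand-picked relevance scores, not real attention scores.
import math

def softmax(scores):
    """Convert raw scores into weights that sum to 1 (higher score -> bigger weight)."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical tokens from the payroll-software prompt, with made-up relevance scores.
tokens = ["please", "tell", "me", "buyer", "influence", "payroll software", "CTO", "CHRO"]
scores = [0.1, 0.1, 0.1, 2.0, 2.5, 3.0, 2.2, 2.2]

for token, weight in zip(tokens, softmax(scores)):
    print(f"{token:>18}: {weight:.2f}")  # filler words get tiny weights; key terms dominate
```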

Best Practices for Mastering GenAI Prompts

Let’s explore how to apply the principle of attention to prompting using Christopher Penn’s “RACE” framework.

1. ROLE: Define the Agent’s Role

Start by giving the GenAI agent a specific role. Denoting a role helps contextualize the output and ensures that the language, tone, and content are appropriate for the intended function. Under the hood, you’re sending a strong signal to the agent’s self-attention mechanism that you’re only interested in material relevant to a given job.

Example: “You’re an expert Email Marketing Manager working at a B2B project management software startup.”

2. ACTION: Specify the Action Clearly

Next, clearly state the action you want the agent to undertake. A precise, action-oriented directive ensures the AI “pays attention to” the task’s nature and scope—anything outside scope is less relevant.

Example: “Create a compelling email marketing campaign to introduce our latest project management tool.”

3. CONTEXT: Balance the Context Provided

Context is a delicate balance. If you provide too much, you’ll overwhelm and misdirect the AI’s “attention”; too little, and the agent will miss your intent. Give the agent only the context it needs to get the job done.

Example: [Background on the project management tool, such as its target buyer and the key features and benefits; includes critical selling points you want emphasized in the campaign, like “improves team collaboration and productivity”; excludes unnecessary technical details.]

4. EXECUTE: Detail Your Execution Expectations

There’s a second balancing act you must manage: how you direct the agent to generate and format content. If you give highly explicit instructions, you’ll get an output that matches your ask, but you’ll also stymie the agent’s creativity and critical thinking. Be too vague, however, and you’ll often get the wrong output.

Example: “First, draft three subject line options that will compel the user to open the email. Then, select the subject line you believe best fits the target buyer, features, and benefits I provided. Then, write the body of the email: an engaging introduction highlighting our tool’s unique value proposition, followed by three key benefits, and a clear call-to-action.” This instruction provides a clear direction for content and structure without stifling the AI’s creative potential.

Crafting the Complete Prompt

Combining all these elements, we get a comprehensive prompt:

  • ROLE: You’re an expert Email Marketing Manager at a B2B project management software startup.
  • ACTION: Create a compelling email marketing campaign to introduce our latest project management tool.
  • CONTEXT: [Background on the project management tool, such as its target buyer and the key features and benefits; includes key selling points you want emphasized in the campaign, like “improves team collaboration and productivity”; excludes unnecessary technical details.]
  • EXECUTE: First, draft three subject line options that will compel the user to open the email. Then, select the subject line you believe best fits the target buyer, features, and benefits I provided. Then, write the body of the email: an engaging introduction highlighting our tool’s unique value proposition, followed by three key benefits, and a clear call-to-action.
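If you work with an agent through an API rather than the chat interface, you can assemble the same RACE elements in code. Here’s a minimal sketch, assuming the OpenAI Python client (v1+) with an OPENAI_API_KEY in your environment; the model name and bracketed context are illustrative placeholders, not specifics from this article.

```python
# Minimal sketch: assembling a RACE-structured prompt programmatically.
# Assumptions: OpenAI Python client v1+, OPENAI_API_KEY set, illustrative model name.
from openai import OpenAI

client = OpenAI()

role = "You're an expert Email Marketing Manager at a B2B project management software startup."
action = "Create a compelling email marketing campaign to introduce our latest project management tool."
context = "[Background on the tool: target buyer, key features and benefits, key selling points to emphasize.]"
execute = (
    "First, draft three subject line options that will compel the user to open the email. "
    "Then, select the subject line you believe best fits the target buyer, features, and benefits I provided. "
    "Then, write the body of the email: an engaging introduction highlighting our tool's unique value "
    "proposition, followed by three key benefits, and a clear call-to-action."
)

response = client.chat.completions.create(
    model="gpt-4-turbo",  # illustrative; substitute whichever chat model you use
    messages=[
        {"role": "system", "content": role},  # ROLE anchors the agent's "attention"
        {"role": "user", "content": f"{action}\n\nContext: {context}\n\n{execute}"},  # ACTION + CONTEXT + EXECUTE
    ],
)
print(response.choices[0].message.content)
```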

Why This Prompt Works

By specifying the role, action, context, and executional instructions, we’ve directed the GenAI agent’s “attention” to what matters most. This helps the agent align its output with your expectations.

We applied several key techniques throughout the example prompt that optimally direct the agent’s “attention.” For maximum control, draw on these mental models as you write your GenAI prompts:

Anchoring

You can quickly “narrow the span of attention” of an agent with brief but explicit signals of relevance. In the first two lines of our prompt, we quickly dialed in two critical elements of “attention.” I call them “anchors”: the ROLE and its ACTION (or “task”) flag to the agent that it should interpret the ensuing CONTEXT and EXECUTE instructions “through the lens” of an Email Marketing Manager writing a campaign promoting a B2B project management system.

[Image: Anchoring to narrow the agent's span of attention]


Balancing the Two Dimensions of Prompt Specificity

When prompting GenAI agents, you need to give just enough:

  • Context for the agent to produce an output relevant to your situation
  • Instruction on how to generate and format that output.

As the 2×2 model below suggests, how you balance context and instruction depends on what you want the agent to do. In our example prompt, we wanted the GenAI agent to “derive a structured object”: we knew who we were sending the email to, what we wanted the email to say, and how we wanted it structured. If we weren’t sure how best to structure a B2B marketing email, we might have removed those instructions and asked the agent to recommend a structure instead.

This mental model also helps answer a question we often get from GenAI users: “In ChatGPT, when should I start a new thread?” The answer depends on whether your prior interactions with the agent are (1) relevant and (2) the right amount of detail for your next ask. You must also consider the agent’s “context window,” which we’ll address shortly.

[Image: Balancing specificity in your prompting]


Stepwise Execution: Chain of Logic, Self-Solving, and Iterating with a Human in the Loop

In many use cases, GenAI agents produce better outputs by “thinking through” a task like a human might. This is especially the case for complex tasks.

By directing the agent to work in steps “out loud” (e.g., in the ChatGPT chat thread), you lay a roadmap for its “attention.” This roadmap defines a sequence; you’re telling the agent, “First observe what matters most about X. Then decide Y based on X,” and so on.

In our example, we guided the agent to consider the most compelling “hook” for our email: a subject line. We directed it to decide based on our target audience and the topic of our email. Then, after it generated subject line options and settled on the optimal one, we directed it to write a matching body for the email. This ensures the agent writes the email based on our strategy, and that the email it produces has a complementary subject line and body.

[Image: Joint- or self-solving with an AI agent]


Your “human experience” matters:

  • You should instruct the agent to think through the logic of building an output the way you would if you were doing the task yourself.
  • As an added control layer, you can instruct the agent to ask for your input at each step. In the example above, if you were to select a subject line from the options the agent provides, you’d give it an explicit signal as to what subject matter is most meaningful to you.

In a November 2023 research paper, Microsoft described using prompting techniques like these to drive a “generalist” model (GPT-4) to outperform specialized medical models on MedQA test accuracy.
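Here’s what stepwise execution with a human in the loop might look like if you script it against an API: the agent drafts subject lines, you pick one, and only then does it write the body. This is a sketch under the same assumptions as before (OpenAI Python client v1+, illustrative model name); in the ChatGPT interface you’d do the same thing conversationally.

```python
# Sketch: stepwise execution with a human in the loop.
# Assumptions: OpenAI Python client v1+, OPENAI_API_KEY set, illustrative model name.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4-turbo"  # illustrative

def ask(messages):
    """Send the running conversation to the chat model and return its reply."""
    response = client.chat.completions.create(model=MODEL, messages=messages)
    return response.choices[0].message.content

messages = [
    {"role": "system", "content": "You're an expert Email Marketing Manager at a B2B project management software startup."},
    {"role": "user", "content": "Step 1: Draft three subject line options for an email introducing our latest project management tool. Number them 1-3 and stop there."},
]

# Step 1: the agent proposes subject lines "out loud".
subject_options = ask(messages)
print(subject_options)

# Human in the loop: you pick the subject line instead of letting the agent decide.
choice = input("Which subject line should we use (1-3)? ")

# Step 2: the agent writes a body that matches the chosen subject line.
messages += [
    {"role": "assistant", "content": subject_options},
    {"role": "user", "content": f"Step 2: Using subject line {choice}, write the email body: an engaging introduction, three key benefits, and a clear call-to-action."},
]
print(ask(messages))
```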

Managing the Input (a.k.a. “Context Window”) and Output Limits

The final principle in managing GenAI’s “attention” is understanding its limits, both in terms of the amount of information it can ingest (the “input limit,” or “context window”) and how much it can produce (the “output limit”). To manage these limits, you can apply the principles discussed above.

First, more about the limits:

Input Limits (“Context Window”)

GenAI agents can take inputs of only a certain length. For instance, as of this writing (February 2024), ChatGPT-4 Turbo could take up to 128,000 tokens (parts of words) and Gemini 1 Ultra up to 1 million.

This limit is also known as the “context window,” which is composed of the earlier interactions in the chat thread plus your input (or “prompt”) for the next chat interaction.

In math terms, you can determine how much of your chat history the agent will ingest:

  • H = C – I
  • Where: C is the context window (the “input limit”); I is the length of your prompt; and H is the length of chat history the agent can ingest, i.e., the context window less the length of your input.
  • For example, ChatGPT-3.5’s input limit is 4,096 tokens (C = 4,096). If a user inputs a 2,000-token prompt (I = 2,000), then H = 4,096 – 2,000 = 2,096: ChatGPT will reference the user’s input and only 2,096 tokens from the prior conversation.

The implication is that you should expect a chat agent to reference only so much history of a single chat thread (once you’ve “hit the limit” of the context window). You can’t expect the agent to “remember” a very long thread.
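As a rough illustration, you can estimate H yourself with a token counter such as OpenAI’s tiktoken library. This sketch ignores the small per-message formatting overhead that chat APIs add, and the example prompt is made up:

```python
# Sketch: estimating how much chat history (H) still fits, given a context window (C)
# and your next prompt (I). Ignores per-message formatting overhead.
import tiktoken

def remaining_history_budget(prompt: str, context_window: int = 4096, model: str = "gpt-3.5-turbo") -> int:
    encoding = tiktoken.encoding_for_model(model)
    prompt_tokens = len(encoding.encode(prompt))   # I
    return max(context_window - prompt_tokens, 0)  # H = C - I

print(remaining_history_budget("Summarize our Q3 pipeline review notes and flag the top three risks."))
```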

Many GenAI agents allow you to upload a file to a single chat–or set of files to an “underlying” knowledge base–and reference it as “part of your input.” But you should know that the agent only “retrieves” the most relevant parts of said files—not the whole thing. This is called “Retrieval-Augmented Generation” (RAG).
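Conceptually, that retrieval step looks like the toy sketch below: score each chunk of the uploaded material against the question, keep only the top matches, and pass just those to the model. Real RAG systems use embedding-based similarity and vector search rather than this simplistic keyword overlap; the chunks and question here are invented examples.

```python
# Toy sketch of the retrieval step in RAG (simplified: real systems use embeddings
# and vector search, not keyword overlap).

def score(chunk: str, question: str) -> int:
    """Crude relevance score: how many question words appear in the chunk."""
    return sum(word in chunk.lower() for word in question.lower().split())

def retrieve(chunks: list[str], question: str, top_k: int = 2) -> list[str]:
    """Keep only the top_k most relevant chunks to include in the prompt."""
    return sorted(chunks, key=lambda c: score(c, question), reverse=True)[:top_k]

chunks = [
    "Pricing: the Pro plan is $29 per user per month, billed annually.",
    "Our tool integrates with Slack, Jira, and Google Calendar.",
    "Founded in 2019, the company is headquartered in Austin.",
]
question = "What does the Pro plan cost per user?"

prompt_context = "\n".join(retrieve(chunks, question))
print(prompt_context)  # only the most relevant chunks reach the model, not the whole file
```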

Output Limits

Theoretically, GenAI could generate text from a single prompt “forever.” But as it continues, an agent’s attention “shifts,” ultimately “losing track” of the original prompt’s intent. This “shifting” issue stems from the limits of the agent’s context window.

That’s one reason why ChatGPT-4 will pause a “long output” and ask if you want it to “continue generating,” with the max output of a single segment being 8,192 tokens as of January 2024.

[Image: Balancing the quantity of input to ensure high-quality output]


Techniques for Managing Limits

Earlier, we discussed two fundamental principles that you can also apply to manage a GenAI agent’s input and output limits:

  • Balancing prompt specificity, particularly the length of your context and instructional input
  • Stepwise Execution: Chain of logic, self-solving, and iterating with a human in the loop

As you balance prompt specificity, you also manage the length of your input: you can winnow down instructions to fit.

You use Stepwise Execution to “chunk out” big inputs and outputs, i.e., direct the agent to first derive an output from big input X, then derive an output from big input Y, and then put the two together. The same applies in reverse for big outputs: have the agent generate them piece by piece, then stitch the pieces together.
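Here’s a minimal sketch of “chunking out” a big input so each piece fits within the limits: process each chunk separately, then combine the partial results in a final pass. The chat-completion helper, model name, and word-count-based chunking are illustrative assumptions; a token counter would be more precise.

```python
# Sketch: chunking a big input to stay within input/output limits. Each chunk is
# processed separately, then the partial results are combined in a final pass.
# Assumptions: OpenAI Python client v1+, OPENAI_API_KEY set, illustrative model name.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4-turbo"  # illustrative

def complete(prompt: str) -> str:
    response = client.chat.completions.create(
        model=MODEL, messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content

def chunk(text: str, max_words: int = 1500) -> list[str]:
    """Naive word-count chunking; a token counter would be more precise."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def summarize_large_document(document: str) -> str:
    partial_summaries = [complete(f"Summarize the key points:\n\n{part}") for part in chunk(document)]
    return complete("Combine these partial summaries into one brief:\n\n" + "\n\n".join(partial_summaries))
```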

[Image: Chunking out your tasks]


Co-writing tools such as Jasper and Writer (Insight Partners portfolio companies) help you do this.

In applying these principles to your work with GenAI, remember the goal is not to restrict creativity but to channel it in a direction that effectively serves your GTM strategy. With practice, you’ll find that crafting such prompts becomes second nature, allowing you to leverage GenAI’s capabilities to its fullest extent in your marketing, sales, and customer success efforts.

