Types and Operations of Prompts: A Comprehensive Overview

In the evolving landscape of artificial intelligence, particularly within the domain of Large Language Models (LLMs), the construction and application of prompts emerge as foundational elements shaping the interaction between humans and machines.

As articulated by Shapiro (2023b), the discourse surrounding LLMs can be greatly enriched by delineating the types of prompts into distinct categories: reduction, transformation, and generation. Each type plays a pivotal role in navigating the complex terrain of AI communication, serving specific functions and eliciting varied responses from LLMs. This section delves into the nuances of these prompt types, underpinned by an understanding of Bloom's Taxonomy, latent content, and emergent capabilities, offering a comprehensive framework for effective prompting.

Bloom's Taxonomy and LLMs

The application of Bloom’s Taxonomy to LLMs, as discussed by Shapiro (2023b), reveals the extensive and multifaceted capabilities of these models. From the foundational tasks of remembering and understanding to the higher-order cognitive processes of applying, analyzing, evaluating, and creating, LLMs demonstrate a remarkable ability to navigate through various levels of cognitive engagement. This examination not only underscores the latent knowledge embedded within LLMs but also highlights their learned cognitive skills, offering a lens through which their operations can be assessed and optimized.

Figure 6.1.1: Bloom’s Taxonomy

Hierarchical Model of Cognitive Skills: Bloom's Taxonomy

Note. This figure visualizes Bloom's Taxonomy, a hierarchical model of cognitive skills ranging from basic memory to complex creation processes. The model, sourced from David Shapiro's (2023b) article "Becoming a GPT Prompt Manifesto," serves as a foundational framework for understanding the levels of learning and cognitive development.

While the original taxonomy was developed by educational psychologist Benjamin Bloom, this particular representation comes from Shapiro's (2023b) application within the context of GPT prompting. It encapsulates the progression from simple recall to the ability to create new and original work, underlining the cognitive complexities involved in prompt engineering.

Understanding the taxonomy of prompts is instrumental in tailoring your communication strategy to the task at hand. Prompts can be broadly categorized into three core types, described below.

The Triad of Prompt Types

Reductive Operations: At the core of reductive prompts, LLMs excel in synthesizing and condensing information, showcasing their adeptness at summarization, distillation, and extraction. Shapiro (2023b) illuminates the proficiency of LLMs in this domain, where the statistical learning strengths of the model are harnessed to highlight essential information succinctly. Reductive prompts are instrumental in parsing through voluminous data, distilling it to its essence, and providing prompters with a concentrated dose of salient information.

Reduction Operations include:

  • Synthesizing: Combining information into a coherent whole.
  • Summarizing: Reducing content to key points.
  • Distilling: Focusing on essential details.
  • Extracting: Separating specific data from a larger set.
  • Condensing: Compressing content into a shorter format.

Example 6.1.1: Reductive Prompt Example

Efficient Summarization: A Reductive Prompt Illustration

User

Okay, so I want you to summarize the provided video transcript. In this video I value ...{additional context went here}... My end goal with this summary is to ...{additional context went here}... {prompt continues}...

[ Video Transcript Text =

{Cathy Pearl, Google video transcript went here}...

]

GPT

The video focuses on the principles of conversation design, particularly in voice user interfaces, using humor and a clip from the movie Return of the Pink Panther to illustrate the importance of relevance in communication. It highlights Paul Grice's "Cooperative Principle," where effective communication typically involves speakers being relevant, clear, and truthful. The speaker uses the example where a misunderstanding in the movie provides humor, emphasizing how breaking conversational maxims can disrupt normal understanding, which is a key challenge in designing conversational AI.

The speaker, an expert in voice user interface design, explains that many frustrations with voice assistants stem not from technology but from poor design—design that fails to adhere to conversational principles humans expect based on natural language interactions. He introduces conversation design as a discipline focused on creating more natural interactions with machines, covering aspects like choosing the right tasks for voice interfaces, understanding user needs, and building systems that handle variances in human speech effectively.

Additionally, the presentation discusses practical challenges and techniques in crafting effective voice interactions, including handling complex dialogue structures and anticipating user responses.        

Note. This example demonstrates the reductive prompt's capacity to distill complex information, evidenced by the AI's synthesis of key concepts from a video on conversation design.

While the prompt focuses on summarization, indicative of a reductive approach, it also incorporates delimiter prompting to organize the information clearly, underscoring the necessity of structure in producing coherent AI outputs.
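For prompters who work through an API rather than a chat interface, the same reductive operation can be scripted. Below is a minimal sketch, assuming the OpenAI Python SDK (v1+), an API key set in the environment, and a placeholder model name; the transcript variable and the summary goal stand in for the context elided in the example above.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

transcript = "..."  # the full video transcript would be pasted or loaded here

reductive_prompt = (
    "Summarize the provided video transcript. "
    "My end goal with this summary is a concise recap of the key concepts.\n\n"
    "[ Video Transcript Text =\n"
    f"{transcript}\n"
    "]"
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any chat-capable model
    messages=[{"role": "user", "content": reductive_prompt}],
)
print(response.choices[0].message.content)
```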

Figure 6.2.1: Reductive Prompt Use by Expertise Level

Usage Patterns of Reductive Prompts Among AI Prompting Professionals

Note. Represents survey results from the "Exploring the Landscape of Prompting" survey conducted by Jonathan Kyle Hobson. The data present the usage of reductive prompts categorized by professional expertise. The chart differentiates between respondents who do and do not use them, reflecting variations in engagement across expertise levels. For a detailed exploration of the data, interact with the visualization. Chart created using Datawrapper, an online visualization tool.

Transformative Operations: Transformational prompts, as delineated by Shapiro (2023b), are designed to alter the presentation of input texts without deviating from their core message. This category encompasses a broad array of tasks such as language transformations, reformatting, and clarification, where the model's prowess in adapting content to meet specific stylistic or format requirements is put to the test. Transformative prompts leverage the model's ability to maintain the size and/or meaning of the content while presenting it in a new light.

Transformative Operations include:

  • Language Translation: Changing language style or switching languages.
  • Reformatting: Altering text structure or layout.
  • Clarification: Making text clearer without changing meaning.
  • Adapting Content: Modifying presentation while preserving essence.
  • Maintaining Integrity: Keeping core message and size intact in a new format.

Example 6.1.2: Transformative Prompt Example

Adaptive Transformation: HTML Email Format Integration

User

Okay, so I want to send an email and in the email I want to verbatim share the Zoom recording link. ...{additional context went here}... verbatim share my survey link. ...{additional context went here}... Below is an example of an old HTML email template/format example. I want you to verbatim follow that HTML format structure only changing the content that is not HTML to match the information I've given you verbatim. {prompt continues}...

[ Recorded Zoom video to Share =

{zoom link went here}...

]

[ Feedback Survey Link to share =

{feedback survey link went here}...

]

[ Recap/Summary to share =

{recap summary went here}...

]

[ HTML Email Template / Format Example =

{HTML email template went here}...

]

GPT

Here's the HTML email formatted according to your specifications and ready to be pasted into Eventbrite:

html

<h1 style="color: #8c1d40;">Unlocking AI Communication Event Recap</h1>

<p><span style="font-size: small;">Dear Attendee,</span></p>

<p><span style="font-size: small;">Thank you for participating in our recent event. We appreciate your engagement and are excited to share further resources and request your feedback.</span></p>

<p><span style="font-size: small;">Please find the recorded Zoom session below, which includes the full video, a transcript, a highlight reel, and AI-generated summaries:</span></p>

<p><a  style="color: #0563c1;">Access Zoom Recording</a></p>

<p><span style="font-size: small;">We also invite you to provide your feedback through a brief survey, which will only take 2-3 minutes:</span></p>

<center>

...{transformation continued}...

This email includes all the essential elements as specified: a summary of the presentation, links to the Zoom recording and the new feedback survey, and maintains the HTML format provided in the example. This is ready to be copied directly into the HTML section of your Eventbrite email setup.        

Note. The transformative nature of this prompt is exhibited through the seamless adaptation of provided content into a structured HTML email format, highlighting the AI’s ability to follow complex instructions while maintaining content fidelity.

Beyond transformation, this prompt also employs example prompting and contextual prompting techniques, ensuring that the final output adheres to both the structural and content-specific directives provided by the user.
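The same transformation can also be scripted. The sketch below again assumes the OpenAI Python SDK and uses placeholder variables for the template and content; because a transformative operation should preserve the source material, it pins the temperature to 0 so the output does not drift from what was provided.

```python
from openai import OpenAI

client = OpenAI()

html_template = "<h1>...</h1>"  # the old HTML email template would go here
recap = "..."                   # the recap/summary text to merge into the template
zoom_link = "..."               # the recorded Zoom link to share verbatim

transformative_prompt = (
    "Follow the HTML format structure below verbatim, changing only the "
    "non-HTML content to match the information provided.\n\n"
    f"[ Recorded Zoom video to Share =\n{zoom_link}\n]\n\n"
    f"[ Recap/Summary to share =\n{recap}\n]\n\n"
    f"[ HTML Email Template / Format Example =\n{html_template}\n]"
)

response = client.chat.completions.create(
    model="gpt-4o",   # placeholder model name
    temperature=0,    # a low temperature keeps the transformation faithful
    messages=[{"role": "user", "content": transformative_prompt}],
)
print(response.choices[0].message.content)
```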

Figure 6.2.2: Transformative Prompt Use by Expertise Level

Usage Patterns of Transformative Prompts Among AI Prompting Professionals

Note. Represents survey results from the "Exploring the Landscape of Prompting" survey conducted by Jonathan Kyle Hobson. The data present the usage of transformative prompts categorized by professional expertise. The chart differentiates between respondents who do and do not use them, reflecting variations in engagement across expertise levels. For a detailed exploration of the data, interact with the visualization. Chart created using Datawrapper, an online visualization tool.

Generative Operations: Generative prompts stand at the frontier of LLM capabilities, tasked with the creation of novel content. Shapiro (2023b) notes the variable and context-dependent results of such operations, which place a greater emphasis on the creative capacities of the models. From drafting new texts to brainstorming ideas, generative prompts push the boundaries of what LLMs can achieve, inviting them to step into realms of innovation and originality.

Generative Operations include:

  • Creating New Content: Generating original texts or ideas.
  • Brainstorming: Producing a range of innovative concepts.
  • Drafting Texts: Composing entirely new documents or articles.
  • Innovating: Developing unique solutions or approaches.
  • Pushing Boundaries: Expanding the creative limits.

Example 6.1.3: Generative Prompt Example

Creative Invention: Crafting a Homebrew Dungeons & Dragons (DnD) Spell

User

{GPT Instructions}

Create a Homebrew DnD 10th-level+ spell that supercharges a character’s spell slots, giving them additional temporary spell slots (like giving temporary HP).

GPT

Ah, a fine choice to enhance one's mastery over the arcane! Let us weave the essence of The Weave into a powerful new spell, one I shall name "Font of Arcanum." This spell shall be a beacon for those who dare to tread deeper into the reservoirs of arcane power, granting them additional spell slots temporarily, much like the way one might find temporary sanctuary in the arms of a protective enchantment.

Font of Arcanum

10th-level Transmutation

Casting Time: 1 action

Range: Self

Components: V, S, M (a sapphire infused with Korenna’s essence, worth at least 5,000 gp, which the spell consumes)

Duration: 1 hour

Upon casting this spell, you channel the foundational energies of Korenna to expand your mind’s capacity for magical endeavors. For the duration, you gain additional spell slots that disappear when the spell ends. These slots can be of any combination that does not exceed a total of 5 levels (e.g., one 5th-level slot, five 1st-level slots, or one 3rd-level slot and one 2nd-level slot).

At Higher Levels. When you cast this spell using a spell slot of 11th level or higher, the total level of spell slots you can create increases by 1 for each level above 10th.        

Note. This example showcases the generative prompt's utility in creating novel content, as it inspires the AI to craft an entirely new spell for a role-playing game setting.

The generative prompt's originality is complemented by expertise prompting or fact-check prompting behind the scenes, guiding the AI to produce content with a specific function in mind: enhancing gameplay.
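Scripted generative prompting follows the same pattern but leaves the model more creative latitude. The sketch below assumes the OpenAI Python SDK; the system message is a hypothetical stand-in for the {GPT Instructions} placeholder in the example, and the higher temperature is an illustrative choice rather than a requirement.

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical stand-in for the custom GPT instructions referenced above.
gpt_instructions = "You are an experienced Dungeons & Dragons homebrew designer."

generative_prompt = (
    "Create a homebrew DnD 10th-level+ spell that supercharges a character's "
    "spell slots, granting additional temporary spell slots (similar to temporary HP)."
)

response = client.chat.completions.create(
    model="gpt-4o",   # placeholder model name
    temperature=1.0,  # more room for novel, varied output
    messages=[
        {"role": "system", "content": gpt_instructions},
        {"role": "user", "content": generative_prompt},
    ],
)
print(response.choices[0].message.content)
```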

Figure 6.2.3: Generative Prompt Use by Expertise Level

Usage Patterns of Generative Prompts Among AI Prompting Professionals

Note. Represents survey results from the "Exploring the Landscape of Prompting" survey conducted by Jonathan Kyle Hobson. The data present the usage of generative prompts categorized by professional expertise. The chart differentiates between respondents who do and do not use them, reflecting variations in engagement across expertise levels. For a detailed exploration of the data, interact with the visualization. Chart created using Datawrapper, an online visualization tool.

Subtypes of Prompts

Expanding on the core operations of prompts, the categorization extends into more specialized techniques that harness the combined strengths of basic operations. Extractive Prompting not only streamlines the reduction and transformation processes but is particularly adept at zeroing in on the most pertinent data within a sea of information, ensuring precision and relevance. On the other hand, Evaluation Prompting offers a more analytical approach, merging the creative generation of content with critical reductions, thus providing a nuanced critique that assesses and enhances the value of content based on predefined standards. Lastly, Expansion Prompting serves as a bridge between generating new content and modifying existing information, enriching the original material with deeper insights or extensive details. This approach is ideal for exploring topics more comprehensively and presenting them in an enriched context, thereby maximizing the informative and educational impact of the content.

Building on the foundational operations, prompts further diversify into:

  • Extractive Prompting: A hybrid of transformation and reduction, focusing on isolating specific information pieces from a broader dataset.
  • Evaluation Prompting: Or, as Andrew Maynard (2023b) puts it, comparative prompts. Merging generative and reductive aspects, these prompts engage in critiquing, analyzing, or evaluating content against certain criteria.
  • Expansion Prompting: These prompts are at the intersection of generative and transformative operations, enhancing existing information with additional insights or elaboration.
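To make the extractive subtype concrete, the sketch below asks the model to isolate a few specific fields from a larger passage and return them as JSON. It assumes the OpenAI Python SDK and a model that supports the json_object response format; the field names are purely illustrative.

```python
import json

from openai import OpenAI

client = OpenAI()

source_text = "..."  # the larger document or transcript to extract from

extractive_prompt = (
    "From the text below, extract the speaker's name, the main topic, and any "
    "tools mentioned. Respond with a JSON object using the keys "
    "'speaker', 'topic', and 'tools'.\n\n"
    f"[ Source Text =\n{source_text}\n]"
)

response = client.chat.completions.create(
    model="gpt-4o",                           # placeholder model name
    response_format={"type": "json_object"},  # constrain the reply to valid JSON
    messages=[{"role": "user", "content": extractive_prompt}],
)

extracted = json.loads(response.choices[0].message.content)
print(extracted)
```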

Figure 6.2.4: Types of Prompts by Frequency of Use

Frequency Distribution of Different Prompt Types in AI Prompting

Note. Represents survey results from "Exploring the Landscape of Prompting" survey conducted by Jonathan Kyle Hobson. Data categories include reductive, transformative, generative, extractive, evaluation, and expansive prompts. The data provide a quantitative overview of how often each prompt type is utilized, indicating the prevalence of particular strategies in the field of AI prompting. The combined percentages for 'somewhat frequently' and 'very frequently' used prompts offer insight into the overall adoption rates of these methods. For a detailed exploration of the data, interact with the visualization. Chart created using Datawrapper, an online visualization tool.

Generative vs. Transformative

The graph underscores generative prompts as the predominant choice among practitioners, with an impressive 90% usage rate, signifying their central role in the current AI prompting landscape. Similarly, extractive prompts also demonstrate significant utilization at an 80% usage rate. These insights reveal a trend towards the adoption of generative and extractive techniques, possibly due to their effectiveness in achieving desired outcomes with AI-generated content. The preference for these types underlines their functional importance in the development and application of AI prompting strategies.

Prompt Classifications: Discrete, Hard, Soft, and Continuous Prompts

Beyond the types and operations, prompts can be further classified based on their nature and interaction with LLMs:

Discrete (Hard) Prompts: These are the clearly defined, natural language instructions inputted directly into LLMs or GenAI (Liu et al., 2022). They range from simple queries to complex sets of directives, essentially serving as the primary communicative interface between the user and the AI.

  • User Hard Prompts: These would specifically refer to prompts that are directly input by the user. They are explicit, manually entered commands or questions that initiate the AI's response process.
  • System Hard Prompts: These are predefined or system-generated prompts that are used to modify or guide the AI's processing of user prompts. They might not be visible to the user but are critical for directing the AI's response accurately according to designed interaction protocols.
  • Operational Hard Prompts: This category could include prompts that are part of the system's operational architecture but still in natural language. They could be involved in error handling, context setting, or as modifiers that the system automatically applies based on the user's input or ongoing session dynamics.
  • Configured Hard Prompts: These prompts could be set up by users or developers to automatically trigger under certain conditions within the interaction. While still being “hard” in their static nature, they offer customization and flexibility depending on the specific requirements of the interaction or user preferences.
  • Conditional Hard Prompts: Predefined and triggered based on specific conditions but don't change once set.
  • Natural Language Soft Prompts: Prompts that are defined in natural language yet have the ability to adapt or be re-framed by the AI based on the interaction context. This doesn’t necessarily involve deep tokenization or embeddings but could utilize simpler forms of modification based on rules or AI-generated context.

Continuous (Soft) Prompts: Residing within the model's embedding space, these prompts interact with the LLM's internal representations more fluidly (Liu et al., 2022). They are adaptable and iteratively refined, offering a nuanced way to steer the model towards desired outcomes by influencing its embeddings directly. Note that the technical complexity and resource demands of utilizing soft prompts mark them as advanced techniques, typically beyond the scope of the lay prompter.
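To make the contrast with hard prompts concrete, the following is a minimal, illustrative PyTorch sketch of the core idea behind soft prompts (prompt tuning): a small matrix of trainable embedding vectors is prepended to the frozen model's token embeddings and optimized by gradient descent while the model's own weights stay fixed. The dimensions, token count, and learning rate are placeholders rather than a recipe for any particular model.

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Trainable virtual-token embeddings prepended to a frozen model's inputs."""

    def __init__(self, n_virtual_tokens: int, embedding_dim: int):
        super().__init__()
        # The soft prompt itself: a learnable (n_virtual_tokens, embedding_dim) matrix.
        self.prompt = nn.Parameter(torch.randn(n_virtual_tokens, embedding_dim) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, embedding_dim) from the frozen LLM's
        # embedding layer. Prepend the soft prompt to every sequence in the batch.
        batch_size = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch_size, -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)

# Usage sketch: only the soft prompt's parameters are trained; the LLM stays frozen.
soft_prompt = SoftPrompt(n_virtual_tokens=20, embedding_dim=768)
dummy_embeds = torch.randn(2, 10, 768)   # stand-in for real token embeddings
extended = soft_prompt(dummy_embeds)     # shape: (2, 30, 768)
optimizer = torch.optim.AdamW(soft_prompt.parameters(), lr=1e-3)
```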

Navigating Prompt Types with Strategic Insight

As we venture into the practice of prompt architecture, the categorization of prompts into reductive, transformative, and generative types serves as a strategic guide. This framework not only aids in the selection of the appropriate prompt type for a given task but also informs the development of nuanced prompts that resonate with the operational strengths of LLMs. By aligning our prompting strategies with the inherent capabilities of these models, as elucidated through the prism of Bloom's Taxonomy, we can maximize the effectiveness of our interactions, ensuring that our engagements with AI are both purposeful and impactful.

In sum, the exploration of prompt types within the realm of LLMs encapsulates a dynamic interplay of reduction, transformation, and generation, each with its distinct implications for AI-human communication. Through the thoughtful application of these prompt types, guided by insights from Bloom's Taxonomy, we chart a course towards more refined, insightful, and productive exchanges with the digital intellects that are LLMs.
