Prompt Engineering: What It Is and Isn't

Is Prompt Engineering just an overly fancy way of saying, "Here's a better search query"? Maybe it should just be called Search 2.0? True enough, the output of an AI large language model is more than just a bunch of results, but the query itself is still 'just' an instruction of sorts, right?

In some cases, yes, it's essentially the same as a fancy query. But mostly not. There are obvious differences in the use cases for prompts with AI Large Language Models (LLMs) vs. how keywords get used in traditional search engines. And for all their potential faults and risks, LLMs can provide stunning new capabilities across a variety of use cases. At the same time, there seem to be some overblown expectations as to what prompts can do. For example, at least in some places, there's a misunderstanding that prompt engineering can make models better. While it may be true that prompts and responses can be iteratively honed and fed back into the fine-tuning of models to actually make models better, for the most part, they're not used this way.

I'd like to try to clear this up because I think it's important we understand how we can use our tools and where they're limited. Just to be clear, I'm not talking about the handful of folks who really are evaluating prompt output to adjust models. (If you're one of those folks, you're ideally operating more at the data science level of prompt engineering.) For our purposes here, I'm talking about the typical consumer or business use that seems to have some people believing prompt input alone changes how the models themselves work.

Prompts vs. Search Keywords


Both Prompt Engineering and... I suppose we can call it "Keyword Engineering" now... have the same goal: getting the best output from the tool being used. In either case, the output is an attempt to generate an answer of some sort. With that overarching goal in common, let's look at prompting vs. traditional keyword search.

Keywords are typically individual words or short phrases attempting to match content in search engine indexes. The target output is info snippets or links to appropriately matching content.

Prompts seek to generate robust responses to potentially complex natural language phrases or questions, and can provide context and specificity. LLMs interpret natural language queries, context, intent, and nuance much better than keyword-based search engines. And instead of text snippets, images, or links, the target output is fully formed answers, either synthesized from multiple sources or possibly newly generated. Perhaps the most obvious difference is LLM capabilities for specific tasks like coding, problem-solving, and - supposedly somewhat - creativity, not simply retrieval. (After all, the G is for Generative in Generative Pre-trained Transformer (GPT).)

Another difference is that prompts can be refined within a session, using prior context to hone results. While a user may iterate in a keyword search, most often context is lost. It may be possible to use faceted metadata to filter content, but this is more an attempt at increasing precision in a results set than at crafting a fully formed answer. It might be that, at least for now, one remaining benefit of traditional search is clear source attribution. And also, search engines don't just make things up. Even if something in a results set is wrong (by accident or purposely misleading), the results are existing information and target links, not complete fabrications.
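To make that session-context point concrete, here's a minimal sketch, in Python, of how chat-style LLM interfaces typically carry prior turns forward. The call_llm function is a hypothetical stand-in, not a real API:

    # Hypothetical stand-in for whatever chat-completion API you're using;
    # a real call would send these messages to an LLM and return its reply.
    def call_llm(messages):
        return "<model response placeholder>"

    # Session context in a nutshell: each new prompt rides along with the
    # prior turns, so the model can resolve follow-ups like "make it shorter."
    history = [
        {"role": "user", "content": "Summarize the causes of the 2008 financial crisis."},
        {"role": "assistant", "content": "<the model's first answer>"},
        {"role": "user", "content": "Now give me just three bullet points for a non-expert."},
    ]
    reply = call_llm(history)  # the model sees the whole conversation, not only the last line

A keyword search engine has no equivalent of that history list; each query starts from scratch.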

Why the Prompt Engineering Buzz?

Prompt engineering has gained attention as LLMs have advanced in capabilities. At its core, prompt engineering at a typical LLM user level involves crafting effective inputs to maximize the usefulness of a model's output. Yet while prompt engineering has its uses, it's not a magic bullet transforming how AI models work. It doesn't alter foundational models, substitute for fine-tuning, or fundamentally enhance complex processes like retrieval-augmented generation (RAG). Instead, its primary value for day-to-day usage lies in providing clarity in queries, much like quality search queries in search engines.

At its essence, prompt engineering is about effectively communicating with AI models to produce desired results. This process resembles crafting precise search queries, where a better-phrased input results in more accurate and relevant outputs. And prompts can do some fun and interesting things that keyword queries can't, such as requesting changes in context; e.g., "Write the answers to the following questions as if you were William Shakespeare. Or Captain Kirk. Or both."
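As a rough illustration of that kind of context-setting, here's how a persona instruction might be passed in the common system/user message format (the structure mirrors typical chat APIs; the model call itself is omitted):

    # A persona prompt sketched in the common system/user message format.
    # The "system" turn frames how the model should respond before the
    # actual question is asked.
    messages = [
        {"role": "system",
         "content": "Answer all questions as if you were William Shakespeare. "
                    "Or Captain Kirk. Or both, trading lines."},
        {"role": "user",
         "content": "What do you make of these large language models?"},
    ]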

In contrast to traditional search queries, prompts involve a more nuanced understanding of how models interpret language and respond. Poorly constructed prompts can lead to vague or incomplete responses, while a well-crafted one can elicit more useful results. This makes prompt engineering a worthwhile skill for those working closely with AI systems.

But despite its utility, prompt engineering cannot fundamentally change or improve the underlying capabilities of an AI model. It operates within the constraints of the model’s pre-trained knowledge base and architecture.

The Limits of Prompt Engineering

There is No Direct Impact on Foundational Models


Prompt engineering cannot change how a model processes information, understands language, or generates responses. Models are trained on massive datasets, with billions of parameters fine-tuned to capture relationships. These capabilities come from the model’s architecture, pre-training, and, in some cases, fine-tuning on specific tasks or subject matter.

Fine-tuning updates a model using specialized data to adjust its internal weights, leading to task-specific improvements. This is particularly useful for narrow applications: legal, medical, customer service, etc. Prompt engineering can guide the model within its existing capabilities, but it cannot replicate fine-tuning.

In short, prompt engineering can improve how the model uses its pre-existing knowledge, but it can't teach the model new information or alter its core behavior.

It’s more like giving better instructions to a machine, not upgrading its hardware. Feeding information into a prompt does not automatically make that info an inherent part of a foundational model, any form of fine-tuning, or even part of the augmented retrieval content used in RAG. It is possible for such info to be captured by an LLM provider and used later, but - if or where this is happening at all - it needs to be part of their LLM ops, and it requires time, effort, probably non-trivial costs, and so on. It doesn't just happen automagically. And it probably shouldn't without deep thought, given that there'd likely be a high risk of AI engine spam or a new form of prompt injection attack.

The Value of Prompt Engineering

Despite its limitations in altering models and processes like fine-tuning and RAG, prompt engineering can enhance clarity, control, and optimization. Following are some examples.

Enhancing Query Precision

The comparison to “Search Queries 2.0” is apt. Just as search engines respond better to clear, well-defined queries, AI models react more effectively to precise prompts. For instance, asking an LLM a vague question like “Explain climate change” will lead to a broad response, while a more structured query like “Explain how human activities may have contributed to climate change since the industrial revolution” should yield a focused and informative answer.
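The difference is easy to see when the two prompts sit side by side. A quick illustrative sketch (the wording is mine, not a benchmark of any particular model):

    # Same information need, two levels of precision. The focused version
    # names the audience, scope, and time frame; those constraints are the
    # levers prompt engineering actually pulls.
    vague_prompt = "Explain climate change."
    focused_prompt = (
        "Explain, for a general audience in about 300 words, how human "
        "activities may have contributed to climate change since the "
        "industrial revolution."
    )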

In complex or multi-step tasks, prompt engineering allows users to direct the model toward the desired level of detail or frame responses in a particular context. This helps avoid irrelevant or off-topic outputs. This is great for things like customer service chatbots, content generation tools where style, tone, and depth of information are adjusted, and more.
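For a chatbot-style use case, that often looks like a parameterized prompt template. A hypothetical sketch (the function and parameter names are illustrative, not from any real framework):

    # Hypothetical template where tone and depth are dialed in per use case.
    def support_prompt(question, tone="friendly", depth="brief"):
        return (
            f"You are a customer support assistant. Tone: {tone}. "
            f"Answer at a {depth} level of detail. If you don't know the "
            f"answer, say so and offer to escalate.\n\n"
            f"Customer question: {question}"
        )

    print(support_prompt("How do I reset my password?", tone="formal"))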

Cost-Effective Fine-Tuning Substitute for Some Use Cases

For many organizations, fine-tuning models is impractical due to the need for specialized data, technical expertise, and computational resources. Prompt engineering can possibly provide a cost-effective workaround in some cases, enabling models to deliver more tailored results without modifying the model itself. (Though there are limits due to the size of the "context window," i.e., how much content you can fit into a prompt.) You might not get the performance gains of full fine-tuning, but you can leverage pre-trained models for specific needs without significant investment, speeding up deployment and prototyping.
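One common version of this workaround is few-shot prompting: packing a handful of labeled examples into the prompt so the model imitates the pattern, with no retraining. A minimal sketch, with invented examples:

    # Few-shot prompting as a lightweight stand-in for fine-tuning. The
    # examples are invented for illustration; the context window caps how
    # many real ones you can include.
    examples = [
        ("The package never arrived.", "shipping"),
        ("I was charged twice this month.", "billing"),
        ("The app crashes when I upload a photo.", "technical"),
    ]

    def classification_prompt(ticket):
        shots = "\n\n".join(f"Ticket: {t}\nCategory: {c}" for t, c in examples)
        return f"Classify each support ticket.\n\n{shots}\n\nTicket: {ticket}\nCategory:"

    print(classification_prompt("My invoice shows the wrong amount."))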

Mitigating Risk and Bias in Outputs and Governance

One of the risks of LLMs is the generation of biased, harmful, or factually incorrect outputs. Prompt engineering can possibly help mitigate this. By embedding guidelines or safety measures within the prompt itself, users can nudge the model toward producing safer, more reliable responses. Moreover, you can guide models to produce more factual, evidence-based responses by instructing them to include references or supporting data. This can mitigate the problem of AI hallucinations, where models confidently provide incorrect or fabricated information.
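In practice, that often means prepending standing instructions to every request. A sketch of the idea (the wording of the rules is illustrative; this nudges behavior, it doesn't change the model):

    # Guardrails embedded directly in the prompt. This is mitigation, not a
    # substitute for real safety tooling or human review.
    GUARDRAILS = (
        "Rules: Only state facts you can support. If you are unsure, say "
        "'I'm not certain.' Cite a source where possible, or note when none "
        "is available. Do not speculate about private individuals."
    )

    def guarded_prompt(question):
        return f"{GUARDRAILS}\n\nQuestion: {question}"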

It's also possible that prompts can help guide outputs to align with standards, ethical and otherwise, though again, this is mitigation rather than any kind of change to core models.

Rapid Prototyping and Iteration, and MAYBE Some Creativity

Prompt engineering allows product managers and developers to iterate rapidly on features, experimenting with prompts to see how models respond and adjusting them. In these cases, prompt engineering speeds up the feedback loop. The Generative part of this technology can allow for anything from code to sonnets or music and such. Just how creative this is (or whether it's a horribly derivative thing that regresses to the average) is debatable, and the subject for another article, if not a whole conference discussion. And yes, better prompts (better questions) can help here.
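That feedback loop can be as simple as running a few prompt variants against the same test inputs and comparing results. A hypothetical sketch (call_llm again stands in for a real API call):

    # A bare-bones prompt A/B loop for prototyping. Scoring is left to the
    # human eyeball here; teams often add automated checks later.
    def call_llm(prompt):
        return "<model response placeholder>"  # hypothetical stand-in

    variants = [
        "Summarize this review in one sentence: {text}",
        "Summarize this review in one sentence, keeping the reviewer's tone: {text}",
    ]
    test_inputs = ["Great battery, terrible camera.", "Arrived late, but it works fine."]

    for template in variants:
        for text in test_inputs:
            print(template, "->", call_llm(template.format(text=text)))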

The Creative Potential of Prompt Engineering

Beyond business applications, prompt engineering has a unique role to play in creative industries. Prompt engineers sometimes become co-creators in art, music and literature, guiding AI models to produce novel, interesting, and even surprising outputs. Artists and designers can use it to explore new visual styles or concepts, for brainstorming and inspiration. Similarly, writers can create narratives or dialogues, setting tone, character behavior, and plot direction. And musicians might use prompt engineering to generate melodies, lyrics, or soundscapes based on specific parameters.

In these creative contexts, prompt engineering is not just a query optimization tool—it’s a form of interaction and collaboration with the AI, pushing the boundaries of what AI can create based on human input.

Is Prompt Engineering a Career?


Prediction? No. At least, not for long. This seems like a flash-in-the-pan meme. Maybe there's some value in the role now, and perhaps it will continue to have some part in highly specialized areas. But people who use LLMs to any great degree will quickly build these skills. Just as folks learned to swipe to delete, iterate on keywords, and so on, they will get better at prompting. Or rather, some will and others won't. Just as with any craft, some will wield their tools well and others not. But as a standalone role? If there's a junior role for this, it might be a great way to get into the industry and learn a lot. But I don't think we're going to start seeing Associate through Director and VP levels of Prompt Engineering. One exception might be the few who work directly on models and do use prompts and their outputs to study and potentially change foundational model weights or fine-tuning tools.

Conclusion: A Balanced Perspective on Prompt Engineering

While prompt engineering isn’t a revolutionary technology that changes the core architecture of AI models or replaces fine-tuning, it holds substantial value in practical applications. It’s a critical tool for improving query precision, guiding AI behavior, and enhancing user experience without the heavy cost and complexity of model training or fine-tuning. Just as with being a good search tool user, developing some degree of skill with LLM prompts is likely to be a valuable general skillset for any businessperson.

In low-resource environments, prompt engineering can be a proxy for fine-tuning, allowing companies to deploy AI solutions quickly. It also plays an essential role in managing risks, such as bias and inaccuracy, making it a valuable tool in ensuring safe and ethical AI use.

In essence, prompt engineering may not redefine the way AI models are built, but it certainly redefines how we interact with them. By treating it as “Search Queries 2.0,” we recognize its potential to enhance the clarity, accuracy, and usefulness of AI outputs—without overstating its ability to fundamentally transform AI technology.

The value of prompt engineering lies not in its ability to overhaul AI models, but in its capacity to bridge the gap between complex AI systems and human users. It empowers individuals to extract more meaningful and targeted outputs from models, making AI more accessible, efficient, and practical in everyday applications. While it may not reshape the core technology, it significantly enhances how we leverage and interact with AI, unlocking its potential in ways that are more immediate and impactful. Finally, never forget that the best prompts in the world are still just queries against data and models that will perform based on their core attributes. Prompting can optimize, enhance, and ideally mitigate downside; however, risks may remain. Explore with wonder, but you need to own the outcome.

If you're interested in some techniques for using prompting to get great output from a user perspective, please see my article on Prompt Iteration for Consumers - Getting the Best from Generative AI.

Bonus: Some Prompt Engineering Best Practices

Applied LLMs Mastery 2024: Prompting and Prompt Engineering

8 Prompt Engineering Best Practices and Techniques

Prompt Engineering: Examples and Best Practices

Tips to enhance your prompt-engineering abilities

Prompt Engineering Best Practices: Tips, Tricks, and Tools

CARE: Structure for Crafting AI Prompts

And of course... there's a whole Prompt Engineering subreddit.
