Teachers Already Know AI Prompt Design - They Just Don't Know It Yet
Instructional Design is Prompt Design


Human teachers are already trained in AI prompt design, even if they don't know it. Much like Machine Learning (ML), human teaching starts with objectives and instructions. Human educators call this Instructional Design (ID). While the learners may be different, the same principles apply, at least when both disciplines are applied optimally.

One of the major (internal) obstacles to educational excellence in the USA is conditional language: "After reading chapter 12, the student will be able to..." "Will be able to" is far less powerful than "Given the reading, the students will [powerful verb]..." To paraphrase Master Yoda: there is only do or do not; there is no "be able to." Only a product can be evaluated, and these power verbs are part of the overall objective.

Objectives

Both ID and ML start with an objective. Instructional Design uses a tool called Bloom's Taxonomy to describe a ladder of cognitive functions, from low-level skills (such as simple recall) up to synthesizing material into a new creation (we could call this generative intelligence). Here's a table of how we might think about objectives in each discipline.

[Table: ID-ML taxonomy]
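As a rough sketch of that ladder in code, here is one way to pair each level of the revised Bloom's Taxonomy with verbs a prompt (or an assignment) could open with. The level names are standard, but the verb pairings and the helper suggest_verbs are illustrative, not canonical:

    # Levels of the revised Bloom's Taxonomy paired with example prompt verbs.
    # The verb choices are illustrative; any strong action verb at the right
    # cognitive level serves both a human assignment and an AI prompt.
    BLOOMS_PROMPT_VERBS = {
        "Remember":   ["list", "define", "recall"],
        "Understand": ["summarize", "explain", "translate"],
        "Apply":      ["classify", "solve", "demonstrate"],
        "Analyze":    ["compare", "contrast", "categorize"],
        "Evaluate":   ["critique", "justify", "rank"],
        "Create":     ["generate", "compose", "design"],  # "generative intelligence"
    }

    def suggest_verbs(level: str) -> list[str]:
        """Return candidate opening verbs for an objective at a given level."""
        return BLOOMS_PROMPT_VERBS.get(level.title(), [])

    print(suggest_verbs("create"))  # ['generate', 'compose', 'design']

The same lookup works for writing a human objective ("Given the reading, the students will classify...") or an AI prompt ("Classify the following transactions...").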

A quick note: we often hear parents dismiss "rote memorization," the first level of the taxonomy. However, I contend that at least some memorization is necessary for both human learners and AI to have a solid base model. Good ML and ID leverage a blend of static data (things already known or in the model) and dynamic content (new data). When I was small, my class was asked to write on the topic "What is the E.R.A.?" Having no existing data in my brainbox and not given any dynamic content, I confabulated about the "extra running ability" of cheetahs and Olympic athletes. Much like an AI hallucinating, I readily admitted that this was false info but felt I had to answer the prompt as best I could.

AI hallucinations are like a child making up a story when they don’t have the right answer—it’s not deception, it's just a way of filling in missing information based on patterns and past exposure.

[E.R.A. was the still-unratified Equal Rights Amendment, FYI.]

Aspects of Instruction (Parts of a Good Prompt)

Both humans and AI work better (at any level of the taxonomy) when given clear instructions. It shouldn't surprise anyone that instructions for both kinds of learners have the same components (a sketch assembling all seven into a single prompt follows the list). They are:

  1. State the task as a verb. Bloom's Taxonomy provides a cluster of verbs for each cognitive level. They are things like: "Summarize," "Translate," "Generate," "Write," "Classify," "Explain," "Analyze," "Brainstorm," "Rewrite." I taught writing for 15 years and wanted to weep every time my students told me they had been assigned to "write about..." That kind of language is vague and yields poor results.
  2. Identify the content and subject: This specifies the subject matter or content the learner should work with. It provides the context for the task. For example: "this article," "the following text," "the topic of climate change," "these data points." If you have specific text, data, or a document you want the learner to use, include it or reference it clearly. The more specific you are, the better the output: a given chapter, say, or a date range of transactions.
  3. Constraints/Parameters: These are the rules or limitations the learner should follow when creating their output; they define the boundaries of the desired result. For example: "in a formal tone," "in Chicago style," "using only these keywords," "no more than 200 words," "in JSON format," etc. We can think of constraints as what the output is not: not in an informal tone, not in APA style, not 100 or 400 words, etc.
  4. Format/Structure: This specifies the desired format or structure of the output. If constraints describe what the output is not, format describes what the output is. For example: "as a list," "in a table," "as a poem," "as a Python function," "in markdown."
  5. Examples (Few-Shot Learning): Providing examples of the desired input-output relationship can significantly improve both AI and human learners' performance, especially for complex tasks (those higher on Bloom's Taxonomy). These examples demonstrate what you expect and help the learner identify the pattern. This is often called "few-shot learning" because you're giving the learner a few examples. This used to be much more common in human education: painters (such as Picasso), dancers (like Twyla Tharp), writers (Robert Louis Stevenson), and creators of all stripes started by imitating. As the saying goes, "good artists copy, great artists steal." Give your learners examples of what success looks like.
  6. Persona/Role: You can instruct the learner to adopt a specific persona or role. We see this in the infamous "story problems": "Imagine you are a train conductor... imagine you are an accountant..." This can influence the style and tone of the output as well as its quality. In a 1993 study about exercise intentions, participants who assumed a "fitness enthusiast" persona showed a significant increase in physical-fitness behavior (42.3%) compared to those who did not assume a persona (22.2%). Quantitative data on using personas in prompt design for AI is limited. What data there is doesn't seem to show a consistent pattern of improvement for AI when provided with a persona, but it does seem to help us as humans form better prompts and interpret the outcomes. I asked five LLMs (meta.ai, ChatGPT, Claude, Gemini, and Perplexity) for quantitative data on this subject, and while none of them could find any hard data, they all agreed that humans should use personas.
  7. Output Indicators and System Instructions: Sometimes, especially in code generation, you might include markers to indicate the start and end of the desired output. This makes an AI's response easier to parse, but it also seems to help human learners stay focused. You've probably seen "Do not turn this page until told to do so" on paper tests, or "The End" at the close of a book.
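To make all seven components concrete, here is a minimal sketch in Python that assembles them into a single prompt string. The function build_prompt and every sample value are hypothetical illustrations, and real prompts would be tuned per model:

    # A minimal prompt builder assembling the seven components above.
    # Numbered comments map each argument to the list items.
    def build_prompt(
        task_verb: str,                    # 1. task stated as a verb
        content: str,                      # 2. content and subject
        constraints: list[str],            # 3. constraints/parameters
        output_format: str,                # 4. format/structure
        examples: list[tuple[str, str]] | None = None,  # 5. few-shot examples
        persona: str = "",                 # 6. persona/role
        end_marker: str = "### END",       # 7. output indicator
    ) -> str:
        parts = []
        if persona:
            parts.append(f"You are {persona}.")
        parts.append(f"{task_verb.capitalize()} {content}.")
        for rule in constraints:
            parts.append(f"Constraint: {rule}.")
        parts.append(f"Format the output {output_format}.")
        for sample_in, sample_out in examples or []:
            parts.append(f"Example input: {sample_in}\nExample output: {sample_out}")
        parts.append(f"End your answer with the line: {end_marker}")
        return "\n".join(parts)

    print(build_prompt(
        task_verb="summarize",
        content="the following article on climate change",
        constraints=["use a formal tone", "no more than 200 words"],
        output_format="as a bulleted list",
        examples=[("a 500-word news story", "five bullets, one per key fact")],
        persona="an experienced science editor",
    ))

Note that the order mirrors good classroom practice: role first, then the task and content, then the rules, and finally the marker that tells the learner when to stop.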

The puzzle of good prompt design

Whether you're prompting human students or Machine Learners, the fundamental principles remain the same: start with clear objectives using action verbs from Bloom's Taxonomy, provide specific content and context, set clear constraints and format expectations, and demonstrate success through examples.

Both teachers and computer programmers have long shared the saying "garbage in; garbage out," and it's never been more true. Vague prompts lead to vague outputs, whether those outputs are predictions, data analyses, or generated text. By applying time-tested instructional design techniques, such as using powerful, actionable verbs, providing clear parameters, and leveraging examples, we can take yet another lesson from educators, who are already equipped to craft effective prompts. The skills developed in the classroom translate directly to this new frontier of technology.


Vincent Kovar has a Masters in Teaching and spent a decade in performance-based instructional design for companies like Adobe and T-Mobile. He previously taught at universities and writing centers in the USA and has more than 15 years in marketing. His current passion is the intersection of blockchain and AI.
