The "6 Step Prompt Engineering Process" you need to follow
Your output is only as good as your input.
This is something I continuously harp on in my keynotes and workshops. When using an LLM, the quality of the output you receive is only as good as the quality of the prompt you gave it.
I want to cover my 6-step prompting structure today that has helped me grow my business.
I am the solo founder of GoBananas.ai, an AI consulting company for sales teams in Kansas City. When I use this prompting process, I feel like I have five interns working for me. Using AI tools like ChatGPT is a massive time saver, but only if you know how to use them the right way.
Let's get into it.
1. Assign your LLM a role
This allows us to narrow the LLM's focus and have it think as an expert in our desired field. LLMs are trained on very wide datasets, so this helps it home in on only relevant information.
Example: Act as a social media marketing manager at a tech startup.
2. Provide some context
Here we want to tell our LLM who we are and what we are looking to do. This is yet another step we take to reduce the risk of hallucinations or irrelevant output. We will provide more context in a later step, but this is sufficient for now.
Example: I am a social media manager at OpenAI and am looking to generate some viral content on LinkedIn and X.
3. Give a command
This is where we are going to tell ChatGPT exactly what we want it to do. We have to be as clear as humanly possible here to (yet again) mitigate the risk of irrelevant output. Context is key with LLMs, so be sure to provide context around the command.
Example: Generate a week's worth of X posts that discuss the benefits of AI in the healthcare industry. Make the posts casual and informative. Keep the posts to fewer than six sentences.
4. Tell your LLM to ask you questions
This is the most important step. Here we give our LLM the ability to analyze our role, context, command, and then ask us further questions so it has all the information it needs to carry out our desired task. Depending on the LLM you use and the complexity of the task, you will generally get 5-8 questions.
Example: You must ask me questions before generating your output.
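If you prefer to keep these pieces reusable, steps 1 through 4 can be sketched as a small Python helper that stitches the role, context, command, and ask-questions instruction into one prompt. The function name `build_prompt` and the example values are my own illustration, not part of the process itself:

```python
def build_prompt(role, context, command):
    """Assemble steps 1-4 of the prompting process into a single prompt."""
    return "\n\n".join([
        f"Act as {role}.",  # Step 1: assign a role
        context,            # Step 2: provide context
        command,            # Step 3: give a command
        # Step 4: tell the LLM to ask you questions first
        "You must ask me questions before generating your output.",
    ])

prompt = build_prompt(
    role="a social media marketing manager at a tech startup",
    context=("I am a social media manager at OpenAI and am looking to "
             "generate some viral content on LinkedIn and X."),
    command=("Generate a week's worth of X posts that discuss the benefits "
             "of AI in the healthcare industry. Make the posts casual and "
             "informative. Keep the posts to fewer than six sentences."),
)
print(prompt)
```

Paste the assembled string into whichever LLM you use; the final line is what triggers the questions in the next step.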
5. Answer questions
Now we must answer the questions that our LLM gave us. This does not have to be super structured. It can look like:
Example: 1. Answer 2. Answer 3. Answer [...]
After doing so, the LLM will give us its first draft of the command we gave it.
6. Follow up and refine
Now that we have our first draft, we will need to review and refine the output using:
Commands
Questions
Feedback
Human Edits
Example: This is good, but let's cut out the second paragraph and make the tone a little more professional.
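Steps 5 and 6 work because chat LLMs see the whole conversation on every turn. A minimal sketch of that idea, using the role/content message format common to most chat APIs (the placeholder strings are illustrative, not real model output):

```python
# Each turn is appended to the same list, so every refinement request
# is sent with the full context of the conversation so far.
messages = [
    {"role": "user", "content": "<the full prompt from steps 1-4>"},
    {"role": "assistant", "content": "<the LLM's 5-8 questions>"},
    # Step 5: answer the questions in one loosely structured message.
    {"role": "user", "content": "1. Answer 2. Answer 3. Answer"},
    {"role": "assistant", "content": "<first draft of the posts>"},
    # Step 6: follow up and refine with commands, questions, or feedback.
    {"role": "user", "content": ("This is good, but let's cut out the "
                                 "second paragraph and make the tone a "
                                 "little more professional.")},
]

user_turns = [m for m in messages if m["role"] == "user"]
print(len(user_turns))  # the prompt, the answers, and one refinement
```

The takeaway: refinement is just another user turn in the same thread, so you can keep iterating until the draft is ready for your own human edits.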
Never, never, ever copy and paste an LLM's output without at least proofreading! This is very important. LLMs are great, but not perfect.
Try this process out for yourself and let me know what you think. This process may seem somewhat complex right now, but it becomes second nature soon after you start using it.
If you want to learn more about AI tools and use cases, join my online community here.