Tip #5: Chain-Of-Thought Prompting | Prompt Engineering Tips
Emmanuel Ubiebifaye
Creative Director and Host of Sayit - The Podcast || Co-Founder, Comme Chez Nous
Have you ever wondered how and why your AI chatbot gave you the answer it did?
When you follow up on a response by asking the model to explain itself in detail, you're probing its chain of thought.
That's one way to use this tip.
As the example above shows, ChatGPT presents the "illusion" of a chain of thought. I took it a tiny step further in the next example:
But here's what Chain-of-Thought prompting really is: pre-defining the shape of the language model's output within the prompt itself, so you get a more structured result.
Think of it this way: if you asked ChatGPT, "What's the primary ingredient for bread?", it would probably come back with a whole article when you might just need one word. With Chain-of-Thought prompting, you can predefine that.
Here's an example:
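If you wanted to script this idea, here's a minimal sketch. The `build_cot_prompt` helper and the exact prompt wording are my own illustration, not a fixed recipe; the point is simply that the prompt pre-defines both the reasoning steps and the final answer format.

```python
# A minimal sketch of Chain-of-Thought prompting: the prompt tells the model
# to reason step by step AND pre-defines the format of the final answer.
# The helper name and wording here are illustrative assumptions.

def build_cot_prompt(question: str) -> str:
    """Wrap a question in instructions that pre-define the model's output."""
    return (
        "Answer the question below. First, think through it step by step. "
        "Then give your final answer on its own line, in exactly this form: "
        "'Answer: <one word>'.\n\n"
        f"Question: {question}"
    )

prompt = build_cot_prompt("What's the primary ingredient for bread?")
print(prompt)
```

You would then send `prompt` to whichever chat model you're using; because the answer format is spelled out in advance, the reply is easy to parse with plain string handling.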
That's how it works. You can pre-define the model's "chain of thought" and get it to respond exactly how you want. That's pretty much programming, if you think about it.
There are so many possibilities and I'll be sharing a few tomorrow. Meanwhile, go try it out.
See you tomorrow!
- Your friend, Smiles.