Prompt Engineering Taxonomy: A View from the Sky (v1)
Alex Foster
Partner @Invisible. Foundation models lead. Name a large language model: we're probably helping to train it. Pursuing industry-leading data quality by (I know it sounds crazy) treating smart humans like smart humans.
Aren't taxonomies the best?
They give this feeling of top-down pseudo-understanding, of an entire frontier summarized in a diagram. Paired with a delicious aftertaste of overwhelm.
"That. Is a lot."
But they are useful, so let's dive in. A few caveats: the following is based on my current understanding; some categorizations are fairly objective, others are fuzzy; and yes, I'm aware LangChain does more than just prompt chaining. Some of these terms are also just what I use personally.
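Since prompt chaining comes up in that caveat (and in the diagram), here's a minimal illustrative sketch of the pattern itself, just to anchor the term: the output of one prompt is fed into the next. The `call_llm` helper is hypothetical, a stand-in for whichever model client you actually use, not any specific library's API.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a real model call (OpenAI, Anthropic, a local model, etc.)."""
    raise NotImplementedError("wire this up to your model provider of choice")


def summarize_then_translate(document: str) -> str:
    # Step 1: summarize the document.
    summary = call_llm(f"Summarize the following in three bullet points:\n\n{document}")
    # Step 2: feed that summary into a second prompt.
    translation = call_llm(f"Translate these bullet points into French:\n\n{summary}")
    return translation
```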
V1 of the flowchart taxonomy is below.
I included other GenAI concepts mostly for context.
You can zoom in and pan around by clicking here.
Alternatively, the Prompt Engineering section is cropped below.
I've already gone way over my time budget for this piece, so diving into this diagram and actually explaining it will have to wait for another sitting.
V2 -> coming soon.
If you're interested, here's a time-lapse of how I researched this using a combination of Arc, ChatGPT, Perplexity, big-agi, and Whimsical.