I need a 'prompt' orchestration engine
A Jacquard Loom - From National Museums of Scotland

(Updated June 23, 2023)

Like many people, I have been spending a lot of time using ChatGPT and thinking about how I will use it for my business and for personal creative pursuits.

For Ibbaka, I have experimented with generating value propositions using ChatGPT, and over time I think we may be able to generate initial value models that we can then evolve with data from our platform. An AI that can generate value models would let us generate and explore many more of them. In the same way that parametric design lets architects create and explore a design space, AI could let us generate, compare, combine and evolve value models. And if I can generate a value model, I can generate pricing models as well.

[A value model is a system of equations used to estimate the economic value of a solution for a specific customer or customer segment. Ibbaka Valio is a pricing and value management platform that integrates value models, pricing models and cost models and then optimizes pricing and configuration to make sure that Value > Price > Cost.]

Our current thinking is that a Large Language Model will not be enough to do this and that something like the Wolfram Language will be needed. See the Wolfram plugin for ChatGPT. The Wolfram Language will layer in the mathematical rigor we need.

On a personal level, I have been working with my granddaughter on a collaborative novel, The Enchanted Library (imagine Borges meets JK Rowling). We are trying to use ChatGPT to write the dialog for an AI that we have made one of the characters (it seems fair to have an AI write the dialog for an AI).

I am also planning (I have not done this yet) to take some of my experimental writing and play with it through ChatGPT. One area I want to explore is using ChatGPT to generate renga 連歌 (Japanese linked verse) sequences that I can also participate in. I plan to take some existing renga and use them as prompts.

All three of these have one thing in common: they use a sequence of complex prompts. They are also recursive, using the output of ChatGPT as an input into ChatGPT. And the goal is to cultivate a prompt library, where prompts can be combined, evolved and the best combinations associated and managed.
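The recursive pattern described above, feeding each output back in as part of the next prompt, can be sketched as a simple chain. This is a minimal sketch, not any particular product's API; `call_llm` and `echo_llm` are hypothetical stand-ins for a real model call.

```python
def chain_prompts(call_llm, prompts):
    """Run a sequence of prompts, feeding each output into the next prompt.

    call_llm: a function (prompt: str) -> str; a stand-in for any LLM API.
    prompts: prompt templates, each with a {previous} slot for the prior output.
    Returns the final output and the full (prompt, output) history,
    which is exactly what a prompt library would want to record.
    """
    output = ""
    history = []
    for template in prompts:
        prompt = template.format(previous=output)
        output = call_llm(prompt)
        history.append((prompt, output))  # keep pairs for the prompt library
    return output, history


# A toy "LLM" that just echoes its prompt, to make the recursion visible.
def echo_llm(prompt):
    return f"[response to: {prompt}]"
```

Each step's prompt contains the previous step's output, so the chain is recursive in the same sense the article describes.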


This is not all that different from the basic architecture of a Transformer (the architecture used to build GPT; see Attention Is All You Need by Vaswani et al.). As a generative model generates an output, it uses each step of the output as an input into the next step. This is one reason these models work so well at generating human-sounding text. (It also contributes to their wonderful ability to hallucinate: generating 'facts' not supported by the inputs used to train the underlying Large Language Model. I suspect there is no creative content generation without some hallucination.)

One of my basic assumptions is that I will not be happy with just one LLM and that I will want to be able to pick the alternative that best suits my needs, generate outputs from two or more different LLMs, and then combine and compare them. I may take an output from one LLM and feed it into another, in effect using an LLM to generate prompts. There may even be LLMs that are specialized for prompt generation.
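That multi-model workflow, fanning one prompt out to several LLMs and piping one model's output into another, can be sketched as follows. Again a minimal sketch under stated assumptions: the model functions are hypothetical stand-ins for real API calls.

```python
def fan_out(prompt, models):
    """Send one prompt to several LLMs and collect their outputs.

    models: dict mapping a model name to a function (prompt: str) -> str.
    Returns {model_name: output}, ready for comparison or combination.
    """
    return {name: call(prompt) for name, call in models.items()}


def cross_feed(prompt, generator, refiner):
    """Use one LLM's output as the prompt for another LLM.

    generator and refiner are both functions (prompt: str) -> str.
    """
    draft = generator(prompt)
    return refiner(f"Improve the following text:\n{draft}")
```

`fan_out` covers the compare-across-models case; `cross_feed` covers the output-of-one-as-input-to-another case.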

For a list of LLMs see Alan Thompson's excellent site Life Architect, where he is maintaining a list of models with some additional data about each model.

Prompt orchestration services will fit well with the trend toward longer and longer prompts. See From Deep to Long Learning.


Prompts can already be so long (up to 32K tokens for GPT-4) that a prompt is, or can be, a document in its own right.

Of course this assumes that over time LLMs will diversify and diverge, or perhaps modularize, and not converge on a boring sameness.

In the first version of this article I asked what is likely to happen to LLMs over time as more investment goes into their development and elaboration.


From the comments ...

"Howard Suissa If by diverge and differentiate you mean specialize alongside a basic foundational language model that handles the templateable language... Then yes, that."

"Andrew Cote Agree with Howard - base models will be commoditized and miniaturized, with specializations tacked on as external memory modules. This is coincidentally our current approach to educating humans - standard k-12 with subject matter add ons."

There are also likely to be open source LLMs, with more and more models like LLaMA (Large Language Model Meta AI) available to play with and evolve.

So I am in need of a prompt management service. Some of my requirements are:

  1. Support multiple LLMs
  2. Connect a Prompt and Set of Prompts to the Outputs generated across LLMs
  3. Record changes to the Output for the same Prompt or Set of Prompts over time
  4. Combine two Outputs
  5. Compare two Outputs
  6. Generate a new Prompt or Set of Prompts from an Output
  7. Infer the relative contribution of each Prompt in a Set of Prompts to the Output
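A minimal data model covering the first three requirements might look like the following sketch. All of the class and field names here are my own invention, not from any existing product.

```python
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class Prompt:
    text: str


@dataclass
class Output:
    llm_name: str   # requirement 1: track which LLM produced this output
    text: str
    prompts: list   # requirement 2: the Prompt(s) that generated it
    created: datetime = field(default_factory=datetime.utcnow)


@dataclass
class PromptRun:
    """Outputs recorded for the same set of prompts over time (requirement 3)."""
    prompts: list
    outputs: list = field(default_factory=list)

    def record(self, output):
        self.outputs.append(output)

    def history(self):
        # Outputs in time order, so changes for the same prompts can be compared.
        return sorted(self.outputs, key=lambda o: o.created)
```

Requirements 4 through 7 (combining, comparing, and generating from Outputs, and attributing contribution) would be operations layered on top of records like these.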

Martin Fowler has shared an example of prompt orchestration: an example of LLM prompting for programming.


This approach, of providing context and then an instruction, is likely to be one common pattern of prompt orchestration. Others are waiting to be discovered and shared. These patterns will then be implemented in a prompt orchestration engine.
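The context-then-instruction pattern can be captured as a small template function. This is an illustrative sketch of the pattern only, not Fowler's actual code, and the wording of the scaffolding text is my own.

```python
def context_then_instruction(context, instruction):
    """Assemble a prompt in the common context-then-instruction pattern:
    first ground the model in some context, then tell it what to do with it."""
    return (
        "Here is some context:\n"
        f"{context}\n\n"
        "Using only the context above, follow this instruction:\n"
        f"{instruction}"
    )
```

A prompt orchestration engine would presumably hold a library of such pattern templates and let users fill, combine, and version them.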

For more context on what prompt engineering really means see Mitchell Hashimoto's thought piece 'Prompt Engineering vs. Blind Prompting.' The platform I am imagining here would help us move from blind prompting to something closer to engineering.

In June 2023, Wolfram announced a prompt repository, which is a step towards what I need.

For a variety of reasons (too many projects underway already) I am not going to be able to take this on myself. But if you want to do this I will support it and if you need investment I may be able to pull together a syndicate to support you.

Shep Bryan

Still looking for this Steven Forth? I've got a platform that I'm about to do some alpha testing & demos with. It checks a lot of the boxes on what you mentioned in your article.

Andres Marquez

Steven Forth, Thank you for this... Do you know if anything like this has been built? I need it for some projects I'm working on... but if it doesn't exist, I might shift my focus to build this first. It's a real catalyst for the kind of things I want to build.

Monikaben Lala

Steven, thanks for sharing!
Brett Alexander

So you want AIP from Palantir?
