Building a Chatbot with Rivet from Ironclad and OpenAI

There are a bunch of "make a chatbot with OpenAI and x" tutorials out there. This is another one, and it only requires clicking, dragging, and prompt-writing. A couple of days ago, Ironclad open-sourced Rivet, their visual development environment for generative AI. This visual IDE has a robust set of tools for designing, iterating on, and deploying LLM Chains and Agents, as well as reporting and debugging tools. It's also a 1.0 release with incomplete documentation, and the bugs aren't hard to find. Still, it's a great environment for testing prompt chains, which are finicky, brittle, and unwieldy to develop in a text IDE.

My initial impression is that Rivet is a powerful, streamlined, and useful way to work with Agent/Chain prompts, but it's not a good way to learn how they work. For example, when putting the Chatbot together, I probably couldn't have figured out how to get Chat History to work if I didn't already know the API call structure backwards and forwards. Enough chitchat, gimme chatbot!

To get started, download the app and enter your OpenAI API key in the Settings. You should also go to the GitHub repo and download the sample .rivet-project files, as well as the text RPG examples. They provide a lot of practical guidance on how the "Nodes and Graphs" structure of Rivet works. Here's a link to my project file, containing the example I'm discussing below. It's fairly similar to a few of the example graphs from the official repo.

Initial Greeting Chat Message

Here's the first part of the Chatbot: an initial prompt that greets the new user and invites them to ask a question. This is two Prompt Nodes feeding into a Chat Node, with the response fed into another Prompt Node. The first two blocks are Prompt Nodes because they allow you to specify the message role and content, which is the normal way of building a chat history for an OpenAI call. If you know the OpenAI prompt structure, this should all look familiar, and you can click and drag them to the prompt inputs of an OpenAI chat endpoint call Node. These calls are very basic. The System Message is:

You are a helpful legal analyst who is detail oriented, and pays close attention to arguments, reasoning, and conclusions of the court.         

and the User Message is:

Greet the User, then ask how you can assist them with legal analysis, or other problems.        
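
Outside of Rivet, those two Prompt Nodes map onto the standard `messages` array of an OpenAI chat completion request. Here is a minimal Python sketch of the equivalent payload (the model name is just an illustrative placeholder, and no API call is made):

```python
# The two Prompt Nodes above correspond to one system message and one
# user message in the OpenAI chat-completion "messages" array.
system_message = {
    "role": "system",
    "content": (
        "You are a helpful legal analyst who is detail oriented, and pays "
        "close attention to arguments, reasoning, and conclusions of the court."
    ),
}
user_message = {
    "role": "user",
    "content": (
        "Greet the User, then ask how you can assist them with "
        "legal analysis, or other problems."
    ),
}

# This is roughly the payload a Chat Node builds before calling the endpoint.
payload = {
    "model": "gpt-4",  # example model name, not prescribed by the article
    "messages": [system_message, user_message],
}
```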

What might not be familiar to you is that Rivet explicitly requires a chat endpoint Node to have a "Prompt" input, which is not required by OpenAI to make a successful API call. For really simple calls like this, it likely doesn't matter, but for non-interactive chain prompts, I put everything into the system message because it has a predictable significance to the model.

Alternately, I may be misinterpreting how the Chat Node inputs work. "System Prompt" might be for inputting a Text Node but treating it as a "System Message", whereas Prompt Nodes can be input into the Prompt input, meaning I can still do my "system message only" calls by providing the Chat Node with a "System Message" role Prompt. I have no idea, and it's somewhat difficult to see what the inputs "do" once they are inside the Chat Node.

The Chat Node export defaults to just the response string, which in most cases is going to be the easiest thing to work with. Since we are building a chatbot, I wanted to reformat it into an Assistant Message, so that when we record the Chat History, each message has the correct role.
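
In plain code, that reformatting step is just wrapping the response string in an assistant-role message. A hypothetical helper (the function name is mine, not Rivet's):

```python
def to_assistant_message(response_text):
    """Wrap a Chat Node's raw response string in an assistant-role
    message so that, when appended to the Chat History, the message
    carries the correct role."""
    return {"role": "assistant", "content": response_text}
```

For example, `to_assistant_message("Hello! How can I help?")` yields a dict with `"role": "assistant"`, ready to append to the history.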

By the way, if you are testing some other part of this Graph repeatedly and don't want to waste API calls on this working portion of your Graph, you can Cache the results so as not to spend money on unneeded calls. Your parents would be so proud!

Loop Controller

The Assistant Message is fed to the Loop Controller twice, for reasons that will be explained soon. The Loop Controller is probably the most complicated Node I've gotten to work, and I'm sure that Ironclad will have documentation for it soon (Hint! Hint!). In the meantime, the Loop Controller behaves similarly to a "while" loop for a preset number of iterations. I'm also assuming there is some way to have it listen for break conditions through some form of conditional Node outputs, such as external code or OpenAI Function calls, although I haven't found it yet.

Once the break condition is met, the output of the Loop can be sent to an Output Graph Node, which then sends something to another Graph, or out to your application. Since we are only building a happy little Chatbot, the looping is all we care about. We still need to place an Output Graph Node here because the Graph will not run without an output Node attached to the Break connector.

I literally have no idea what "Continue" does. Perhaps restart a Loop after a Break Condition? I don't know.

The Loop Controller can manage multiple Loops, each composed of two inputs and one output. "Input 1 Default", "Input 1" and "Output 1" (not shown until some input is connected) are the components of the "Loop 1" loop. The Input and Output points will automatically rename themselves based on the Nodes connected to them, which is both helpful and confusing, but it means that giving your Nodes meaningful names might be a good idea (hey look, "naming things" rears its ugly head again...).

Unhelpful naming

"Input default" provides the initial state for the Loop the first time it is run. Connecting an Input will make the corresponding Output appear, which you connect to the downstream Nodes in the Loop. The final Node of the Loop feeds back into the non-default "Input", and the Loop continues for N iterations, or until some break condition.

A single Loop Controller can manage multiple Loops, and as soon as you connect Nodes, more inputs will appear to accommodate additional Loops, up to some presumable Node limit. We need two Loops in order to build a chatbot: a Chatbot Loop and a Chat History Loop. They both require the initial Assistant Message, which is why it feeds into both "Default" inputs.
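
As a rough mental model (not Rivet's actual implementation), a single Loop on the Controller behaves like the following Python sketch: the Default input seeds the first iteration, and each iteration's output feeds back in as the next input, until the iteration cap or a break condition is hit.

```python
def run_loop(step, default_input, max_iterations, should_break=None):
    """Rough analogue of one Loop on Rivet's Loop Controller.

    step          -- the chain of Nodes inside the Loop (Output -> ... -> Input)
    default_input -- the "Input Default" that seeds the first iteration
    should_break  -- optional break condition (assumed, per the article,
                     to exist in some form; names here are illustrative)
    """
    value = default_input                 # "Input 1 Default"
    for _ in range(max_iterations):
        value = step(value)               # run the Loop body once
        if should_break and should_break(value):
            break                         # value flows out the Break connector
    return value
```

For instance, `run_loop(lambda x: x + 1, 0, 5)` returns 5, and adding `should_break=lambda v: v >= 3` stops it early at 3.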

Chatbot Loop

An LLM Chatbot works by sending the existing conversation (or for the first exchange, some initial message), a new user input, and possibly a system message to an LLM model. The LLM model generates a response, which is then added to the existing conversation, often called the chat history, and then the process starts anew.

That is what we are doing here:

  • The last chatbot message (or the initial message for the first Loop run) feeds into the User Input, where the user question is captured.
  • This is composed into a chat history using the "Assemble Prompt" Node, starting with the previous conversation (or the initial message for the first Loop run), followed by the User Question.
  • A System Message is also introduced at this step to provide the Chatbot behavior. In this case, I borrowed an idea from Allison Morrell to make a chatbot that only recommends frameworks for solving problems, but does not provide direct answers: "If the User has provided a legal question, do not provide a direct answer. Instead, propose an analytical framework for understanding the question or problem from the User, or a systematic approach that breaks down the issue into smaller steps."

  • System Message and Assembled Prompts feed into the Chatbot, where a response is generated.
  • The Response is returned to the Loop Controller to begin the next Chatbot Loop.
  • Meanwhile, the Response is also formatted into an Assistant Message, which, along with the Previous Chat History, and User Message, is composed into the Chat History, and returned to the Loop Controller to begin the next Chat History Loop.
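
Stripped of the visual wiring, the steps above amount to something like the following sketch. `call_llm` and `get_user_input` are hypothetical stand-ins for the Chat Node and the User Input Node; the message dict structure is the standard OpenAI one.

```python
SYSTEM_MESSAGE = {
    "role": "system",
    "content": "If the User has provided a legal question, do not provide a "
               "direct answer. Instead, propose an analytical framework.",
}

def chat_loop(initial_assistant_message, call_llm, get_user_input, max_turns):
    """Sketch of the Chatbot Loop + Chat History Loop running together.

    call_llm(messages)        -> response string   (stand-in for the Chat Node)
    get_user_input(last_msg)  -> question string   (stand-in for User Input)
    """
    history = [initial_assistant_message]       # Chat History Loop state
    last_message = initial_assistant_message    # Chatbot Loop state
    for _ in range(max_turns):
        user_message = {"role": "user",
                        "content": get_user_input(last_message)}
        # Assemble Prompt: system message + full history + new user question
        prompt = [SYSTEM_MESSAGE] + history + [user_message]
        response = call_llm(prompt)
        last_message = {"role": "assistant", "content": response}
        # Record both sides of the exchange before the next iteration
        history += [user_message, last_message]
    return history
```

Running two turns with stubbed-out functions produces a five-message history: the initial greeting plus two user/assistant pairs, each with the correct role.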

And that's it, that's a Chatbot in Rivet!

The party continues for however many max iterations you specified in your Loop Controller.

A few notes:

  • This is a VERY BAD chatbot because the chat history will eventually accumulate until the API calls fail for exceeding the context window. There is also a "Trim Chat Messages" Node which we could put into the Chat History Loop. This would manage the context window by "forgetting" the oldest messages so that the conversation never has to end.
  • This "Advice System Prompt" means we now have a "Legal study bot" of some kind that doesn't try to tell you the correct answer, but only suggests analytical frameworks that help resolve your question. You can experiment with all sorts of System Message prompts to see what effect they have on the chat outputs, without having to manage variables, calls, and chat histories. Once we have these two components, the result can be fed back to the Loop Controller to complete the chat loop.
  • You can easily bifurcate the chain into multiple Chat API calls by connecting more Chat Nodes downstream from the Assembled Prompt Node of the Chatbot History. Compare different models, or use a series of System Messages or Document text to generate a variety of answers. Feed them all to GPT-4 and get some really smart answers!
  • This Chatbot is very similar to the "Loops" example provided in the Rivet Tutorials, but corrects the Node wiring. The Tutorial example wires the Assemble Prompt that feeds the Chat API Node from the output of the Chat Node alone. Even though the Chat History is being properly recorded, the LLM is only ever provided its own previous response and the current user question, instead of the entire conversation.
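
On the first note above: a crude stand-in for the "Trim Chat Messages" Node looks like this. The real Node manages the context window by token count; this sketch trims by message count for simplicity, and keeps the system message (if any) intact.

```python
def trim_history(history, max_messages):
    """Drop the oldest messages so the conversation fits a size budget,
    preserving a leading system message if one is present.
    (Illustrative only -- Rivet's Trim Chat Messages Node trims by tokens.)"""
    if history and history[0]["role"] == "system":
        system, rest = history[:1], history[1:]
    else:
        system, rest = [], history
    return system + rest[-max_messages:]   # keep only the most recent messages
```

With this in the Chat History Loop, the oldest exchanges are "forgotten" each turn, so the conversation never has to end for exceeding the context window.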
