Building a Chatbot with Rivet from Ironclad and OpenAI
Leonard Park
Experienced LegalTech Product Manager and Attorney | Passionate about leveraging AI/LLMs
There are plenty of "make a chatbot with OpenAI and X" tutorials out there. This is another one, and it only requires clicking, dragging, and prompt-writing. A couple of days ago, #ironclad open-sourced Rivet, its visual development environment for generative AI. This visual IDE has a robust set of tools for designing, iterating on, and deploying LLM chains and agents, as well as reporting and debugging tools. It's also a 1.0 release with incomplete documentation, and the bugs aren't hard to find. Still, it's a great environment for testing prompt chains, which are finicky, brittle, and unwieldy to develop in a text IDE.
My initial impression is that Rivet is a powerful, streamlined, and useful way to work with agent/chain prompts, but it's not a good way to learn how they work. For example, when putting the Chatbot together, I probably couldn't have figured out how to get Chat History working if I didn't already know the API call structure backwards and forwards. Enough chitchat, gimme chatbot!
To get started, download the app and enter your OpenAI API key in the Settings. You should also go to the GitHub repo and download the sample .rivet-project files, as well as the text RPG examples. They provide a lot of practical guidance on how Rivet's "Nodes and Graphs" structure works. Here's a link to my project file, containing the example I'm discussing below. It's fairly similar to a few of the example graphs from the official repo.
Initial Greeting Chat Message
Here's the first part of the Chatbot: an initial prompt that greets the new user and invites them to ask a question. This is two Prompt Nodes feeding into a Chat Node, with the response fed into another Prompt Node. The first two blocks are Prompt Nodes because they allow you to specify the message role and content, which is the normal way of building a chat history for an OpenAI call. If you know the OpenAI prompt structures, this should look familiar, and you can click and drag the Prompt Nodes to the prompt inputs of an OpenAI chat endpoint call Node. These calls are very basic. The System Message is:
You are a helpful legal analyst who is detail oriented, and pays close attention to arguments, reasoning, and conclusions of the court.
and the User Message is:
Greet the User, then ask how you can assist them with legal analysis, or other problems.
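Taken together, the two Prompt Nodes map onto the standard OpenAI chat-completions messages array. Here's a minimal Python sketch of the request body those nodes assemble (this is the shape of the API call, not Rivet's internals; `build_request` and the model name are illustrative assumptions):

```python
# The two Prompt Nodes correspond to a system-role and a user-role
# message in the OpenAI chat-completions request.
system_message = {
    "role": "system",
    "content": (
        "You are a helpful legal analyst who is detail oriented, and pays "
        "close attention to arguments, reasoning, and conclusions of the court."
    ),
}
user_message = {
    "role": "user",
    "content": (
        "Greet the User, then ask how you can assist them with legal "
        "analysis, or other problems."
    ),
}

def build_request(messages, model="gpt-3.5-turbo"):
    """Shape of the JSON body sent to the chat completions endpoint."""
    return {"model": model, "messages": messages}

request = build_request([system_message, user_message])
```

The Chat Node is essentially doing this assembly for you, with the role/content pairs taken from whatever Prompt Nodes you wire into it.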
What might not be familiar to you is that Rivet explicitly requires a chat endpoint Node to have a "Prompt" input, which is not required by OpenAI to make a successful API call. For really simple calls like this, it likely doesn't matter, but for non-interactive chain prompts, I put everything into the system message because it has a predictable significance to the model.
Alternatively, I may be misinterpreting how the Chat Node inputs work. "System Prompt" might be for inputting a Text Node that gets treated as a "System Message", whereas Prompt Nodes can be fed into the Prompt input, meaning I could still do my "system message only" calls by providing the Chat Node with a "System Message"-role Prompt. I have no idea, and it's somewhat difficult to see what the inputs "do" once they are inside the Chat Node.
The Chat Node's output defaults to just the response string, which in most cases is the easiest thing to work with. Since we are building a chatbot, I wanted to reformat it into an Assistant Message, so that when we record the Chat History, each message has the correct role.
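That reformatting step is just wrapping the raw response string in an assistant-role message. A one-function sketch (the function name and sample text are mine):

```python
def to_assistant_message(response_text):
    """Wrap the Chat Node's raw response string in an assistant-role
    message so the chat history records the correct role."""
    return {"role": "assistant", "content": response_text}

greeting = to_assistant_message(
    "Hello! How can I assist you with legal analysis today?"
)
```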
By the way, if you are testing some other part of this Graph repeatedly and don't want to waste API calls on a portion that already works, you can Cache the results and not spend money on unneeded calls. Your parents would be so proud!
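Conceptually, caching a node is just memoizing the model call on its inputs. A rough sketch of the idea (an assumption about what Rivet's cache setting does, not its implementation; `cached_chat` and `call_model` are hypothetical names):

```python
import json

_cache = {}

def cached_chat(messages, call_model):
    """Memoize a model call on its inputs: identical inputs return the
    stored result instead of spending another API call."""
    key = json.dumps(messages, sort_keys=True)
    if key not in _cache:
        _cache[key] = call_model(messages)
    return _cache[key]
```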
Loop Controller
The Assistant Message is fed to the Loop Controller twice, for reasons that will be explained soon. The Loop Controller is probably the most complicated Node I've gotten to work, and I'm sure that Ironclad will have documentation for it soon (hint! hint!). In the meantime, the Loop Controller behaves similarly to a "while" loop with a preset maximum number of iterations. I'm also assuming there is some way to have it listen for break conditions through some form of conditional Node outputs, such as external code or OpenAI Function calls, although I haven't found it yet.
Once the break condition is met, the output of the Loop can be sent to an Output Graph Node, which then sends something to another Graph, or out to your application. Since we are only building a happy little Chatbot, the looping is all we care about. We still need to place an Output Graph Node here because the Graph will not run without an output Node attached to the Break connector.
The Loop Controller can manage multiple Loops, each composed of two inputs and one output. "Input 1 Default", "Input 1", and "Output 1" (not shown until some input is connected) are the components of the "Loop 1" loop. The Input and Output points automatically rename themselves based on the Nodes connected to them, which is both helpful and confusing, but it means that giving your Nodes meaningful names is a good idea (hey look, "naming things" rears its ugly head again...).
"Input default" provides the initial state for the Loop the first time it is run. Connecting an Input will make the corresponding Output appear, which you connect to the downstream Nodes in the Loop. The final Node of the Loop feeds back into the non-default "Input", and the Loop continues for N iterations, or until some break condition.
A single Loop Controller can manage multiple Loops: as soon as you connect Nodes, more inputs appear to accommodate additional Loops, up to (presumably) some Node limit. We need two Loops to build a chatbot: a Chatbot Loop and a Chat History Loop. Both require the initial Assistant Message, which is why it feeds into both "Default" inputs.
Chatbot Loop
An LLM Chatbot works by sending the existing conversation (or for the first exchange, some initial message), a new user input, and possibly a system message to an LLM model. The LLM model generates a response, which is then added to the existing conversation, often called the chat history, and then the process starts anew.
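The cycle described above can be sketched as a single turn function. Here `fake_llm` is a stand-in for the real model call so the example is self-contained; in practice you'd send the history to the chat completions endpoint instead.

```python
def fake_llm(history):
    """Stand-in for the model call: echoes the latest user message."""
    last_user = next(m["content"] for m in reversed(history) if m["role"] == "user")
    return f"You said: {last_user}"

def chat_turn(history, user_input, llm=fake_llm):
    """One pass of the chatbot loop: append the user message, call the
    model on the full history, append the assistant reply."""
    history = history + [{"role": "user", "content": user_input}]
    reply = llm(history)
    return history + [{"role": "assistant", "content": reply}]

# Seed with the initial Assistant Message, then run one turn.
history = [{"role": "assistant", "content": "Hello! How can I assist you?"}]
history = chat_turn(history, "Summarize this opinion for me.")
```

Each call to `chat_turn` is one iteration of the Loop Controller: the history that comes out feeds back in as the next iteration's input.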
That is what we are doing here:
And that's it, that's a Chatbot in Rivet!
The party continues for however many max iterations you specified in your Loop Controller.
A few notes: