Chat That App Intro
Gerald Gibson
Principal Engineer @ Salesforce | Hyperscale, Machine Learning, Patented Inventor
Chat That App is a Python desktop app I created using ChatGPT from OpenAI to generate the classes, functions, and other pieces that I assembled into the app. Of course, I also wrote some of the code myself, and I had to review and fix ChatGPT's code when there were issues.
Chat That App not only allows you to send requests to ChatGPT, but it also uses the Functions API feature to give the LLM a list of functions it is allowed to call on your PC. Some of the functions it calls might retrieve information that goes back to the LLM, which it then uses to decide what OTHER functions it wants to call and what the parameters should be for those functions!
This means your initial request can turn into a series of back and forth messages between Chat That App and ChatGPT until your final request result has been achieved.
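The core of that back-and-forth is executing a function the LLM requested and packaging the result as a message it can read. Here is a minimal sketch of that dispatch step, assuming a hypothetical local function registry (the names and stub implementations are illustrative, not the app's actual code); the `{"name": ..., "arguments": ...}` shape and the `function` message role follow OpenAI's function-calling convention:

```python
import json

# Hypothetical registry mapping the function names advertised to the LLM
# to local Python callables; the stub below is purely illustrative.
FUNCTION_REGISTRY = {
    "get_free_disk_space": lambda drive: f"{drive}: 120 GB free",
}

def execute_function_call(function_call):
    """Run the function the LLM asked for and package the result as a
    message to send back, so the model can decide its next step."""
    name = function_call["name"]
    # The Functions API delivers arguments as a JSON-encoded string.
    args = json.loads(function_call.get("arguments") or "{}")
    result = FUNCTION_REGISTRY[name](**args)
    return {"role": "function", "name": name, "content": str(result)}
```

Appending the returned message to the conversation and calling the model again is what turns one request into a chain of steps.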
Creating a powerful LLM like ChatGPT is one of the important advancements of recent years; however, it is not of much use unless developers come up with ways to take advantage of this tool in their automations.
Chat That App is a foundation that makes it easy for a developer to get started discovering ways to build automations using an LLM. The current version of Chat That App supports a single workflow pattern (seen in the diagram below). However, I am currently working on different workflow patterns / engines that can be used to construct more powerful uses of an LLM that can call functions on the PC. Others can do the same by subscribing to my Substack, GG's Grymoire, to get the app, the code, and the 112 conversations I had with ChatGPT while creating the code for this app.
Chat That App has two modes: "Chat mode" and "Action mode". Chat mode is similar to how the official ChatGPT UI works, where you conduct a "chat" with the LLM to get answers. Action mode is where you do not just ask questions of the LLM, but also give it a list of function definitions it can call on your PC. This allows the LLM to either give you an answer or else specify a function to call and what the parameter values for that function should be. A function call request first appears in the UI's actions list and waits for the human to trigger it, rather than letting the LLM begin a function call process without human approval.
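The human-approval step described above can be sketched as a simple pending queue: function-call requests land in the actions list, and nothing runs until the user triggers it. This is a minimal illustration with hypothetical names, not the app's actual UI code:

```python
from collections import deque

# Illustrative "actions list": function-call requests from the LLM wait
# here until the user explicitly approves them.
pending_actions = deque()

def queue_action(function_call):
    """Called when the LLM responds with a function call instead of text."""
    pending_actions.append(function_call)

def approve_next(executor):
    """Invoked by a UI button press: runs the oldest pending request."""
    if not pending_actions:
        return None
    return executor(pending_actions.popleft())
```

The key design point is that the LLM only ever *requests* a call; the executor runs strictly on a human action.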
You execute the function calls on the LLM's behalf and give back the results so the LLM can complete the conversation up to that point. Within Action mode there are two different lists of functions the LLM can call: "retrieve functions" and "action functions". Retrieve functions are meant to get and return data to the LLM for it to use as additional context for its next response. Action functions are meant to make some change on your PC. The current version of Chat That App allows multiple retrieve functions to be called automatically: the LLM asks for a retrieve function call, Chat That App runs the function on your PC on the LLM's behalf, and the results go back to the LLM, which can then respond with another follow-up function call. A configuration value specifies a limit on how many of these back-and-forth retrieve requests will happen automatically. If the LLM makes an "action function" call request, that ends the two-way communication between the LLM and Chat That App.
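That bounded retrieve loop can be sketched as follows. Everything here is an assumption for illustration: `ask_llm` stands in for the call to ChatGPT, `retrieve_funcs` is the retrieve-function list, and `MAX_RETRIEVE_ROUNDS` plays the role of the configuration value mentioned above:

```python
MAX_RETRIEVE_ROUNDS = 5  # illustrative stand-in for the config value

def run_action_mode(ask_llm, retrieve_funcs, messages):
    """Auto-run retrieve functions up to a limit; stop on a plain answer
    or on any action-function request (which needs human approval)."""
    for _ in range(MAX_RETRIEVE_ROUNDS):
        reply = ask_llm(messages)
        call = reply.get("function_call")
        if call is None:
            return reply  # plain text answer: conversation is done
        if call["name"] not in retrieve_funcs:
            return reply  # action function: stop and wait for the human
        # Retrieve function: run it and feed the data back automatically.
        result = retrieve_funcs[call["name"]]()
        messages.append({"role": "function",
                         "name": call["name"],
                         "content": str(result)})
    return {"content": "retrieve limit reached"}
```

The limit keeps an over-eager model from looping through retrieve calls indefinitely, while action functions always break out of the automatic loop.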
This is only one possible workflow, however. You can add new agents with their own workflows, or modify the existing agents. It is possible to create a workflow with no human interaction for dozens or hundreds of steps, or a workflow where a human must verify every action before it is initiated.
I find the chat mode in Chat That App better than the official ChatGPT UI because individual parts of the conversation can be selected to create a "virtual conversation" that goes to the LLM as the conversation context along with your current request. The official ChatGPT UI just sends the maximum amount of text possible from the previous messages in the conversation. This can be a problem when, for example, the LLM gave an answer that was very wrong and you do not want that wrong answer sent back to the LLM with your next follow-up request. With the ChatGPT UI you have to start the conversation over if there are parts of the conversation you no longer want. With Chat That App you can use the checkboxes next to each chat bit to decide which ones are part of the "current conversation" and which ones are ignored.
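The checkbox mechanism amounts to filtering the message history before each request. A minimal sketch, assuming a hypothetical data model where each chat bit is stored with its checkbox state (the app's real structures will differ):

```python
def build_context(chat_bits, new_request):
    """Assemble the "virtual conversation": only checked chat bits are
    included as context, followed by the user's current request."""
    messages = [{"role": role, "content": text}
                for checked, role, text in chat_bits
                if checked]
    messages.append({"role": "user", "content": new_request})
    return messages
```

Unchecking a bad answer keeps it visible in the UI while guaranteeing it never reaches the model again.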