ChatGPT plugins

You are not really using ChatGPT to its full potential unless you augment it with your own data and your own tools for your custom use case.

You can use the plugins available in the plugin store, like Expedia's or Instacart's, or you can develop your own plugin.

OpenAI provides a retrieval plugin template for connecting ChatGPT to your datasets. Essentially, all you have to do is provide your data: the API skeleton and the overall logic for communicating with ChatGPT are already in place. Read on to find out how easy it is to achieve this.




1. Fork and specify configurations

Fork this repository, which contains an application template that provides a straightforward way to work with your documents.

To make your plugin available on ChatGPT's website, you need to configure some settings in ai-plugin.json.

The first step is to supply a name and description for your plugin in the name_for_human and description_for_human fields. The description_for_model field is especially important: it becomes part of the prompt the model is called with, and it is on the basis of this description that the model decides whether or not to use your API. A simple and effective prompt template is: "Use this when the query is about X and do not use it when the query is not about X".

Next, set the api.url field to the URL provided by your deployment service. You can also supply a logo_url pointing to a cute logo for your plugin.
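As a minimal sketch, a filled-in ai-plugin.json could look like the following. All names, URLs, and the email address are placeholders for your own values:

```json
{
  "schema_version": "v1",
  "name_for_human": "Acme Docs",
  "name_for_model": "acme_docs",
  "description_for_human": "Search Acme's internal documents.",
  "description_for_model": "Use this when the query is about Acme's internal documents and do not use it when the query is not about them.",
  "auth": {
    "type": "user_http",
    "authorization_type": "bearer"
  },
  "api": {
    "type": "openapi",
    "url": "https://your-deployment.example.com/.well-known/openai.yaml"
  },
  "logo_url": "https://your-deployment.example.com/.well-known/logo.png",
  "contact_email": "hello@example.com",
  "legal_info_url": "https://example.com/legal"
}
```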

You should also double-check https://github.com/openai/chatgpt-retrieval-plugin/blob/main/.well-known/openai.yaml to make sure the URL and the description for your API are up to date and make sense.

Once your deployment service gives you the endpoint, update the url field in both ai-plugin.json and openai.yaml and push to your forked git repository. Hopefully, the deployment service will pick everything up by reading from your repo.

2. Deploy

Depending on the vector database you plan to use, there is a set of environment variables you need to specify upon deployment. For example, if you set DATASTORE to pinecone, you also need to provide a PINECONE_API_KEY. Find the instructions for the supported providers here. In addition, you need to specify your authorization method, which can simply be a bearer token used as the access token. Lastly, specify your OPENAI_API_KEY so that the API can call an OpenAI embedding model to compute embeddings for your documents when storing them in the database of your choice.
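For a Pinecone-backed deployment, the environment might look something like this sketch (all values are placeholders; check the provider instructions for the exact variables your datastore needs):

```shell
# Environment for a Pinecone-backed deployment of the retrieval plugin.
export DATASTORE=pinecone
export BEARER_TOKEN=your-secret-bearer-token   # clients must present this token
export OPENAI_API_KEY=sk-your-openai-key       # used to embed documents and queries
export PINECONE_API_KEY=your-pinecone-key
export PINECONE_ENVIRONMENT=your-pinecone-environment
export PINECONE_INDEX=your-index-name
```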


3. Load your documents and test

Once deployed, augment ChatGPT with your own data by calling the /upsert endpoint exposed by the template from anywhere, e.g. a notebook. All you need to focus on is properly preprocessing your data, as /upsert takes care of calling the embedding model and storing the results in your index (Pinecone, Weaviate, etc.). The payload for /upsert is a list of JSON objects with at least an id and the text for each item. You also have room for extra fields in each item's metadata dictionary.
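As a sketch, the payload could be assembled and sent like this. The URL, token, document ids, and metadata fields are all placeholders for your own deployment:

```python
# Build and send a payload for the retrieval plugin's /upsert endpoint.
# PLUGIN_URL and BEARER_TOKEN are placeholders for your deployment's values.
PLUGIN_URL = "https://your-deployment.example.com"
BEARER_TOKEN = "your-secret-bearer-token"

def build_upsert_payload(docs):
    """Shape documents for /upsert: each needs an id and its text; metadata is optional."""
    return {"documents": [
        {"id": d["id"], "text": d["text"], "metadata": d.get("metadata", {})}
        for d in docs
    ]}

payload = build_upsert_payload([
    {"id": "report-2023-q1",
     "text": "Revenue grew 12% year over year.",
     "metadata": {"source": "file", "author": "finance-team"}},
])

# To actually send it (requires the deployment to be live):
# import json, urllib.request
# req = urllib.request.Request(
#     f"{PLUGIN_URL}/upsert",
#     data=json.dumps(payload).encode(),
#     headers={"Authorization": f"Bearer {BEARER_TOKEN}",
#              "Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())
```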


After all this, test your API in the notebook by calling the /query endpoint with questions about your documents.
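A query sketch along the same lines, again with placeholder URL and token:

```python
# Build and send a payload for the retrieval plugin's /query endpoint.
# PLUGIN_URL and BEARER_TOKEN are placeholders for your deployment's values.
PLUGIN_URL = "https://your-deployment.example.com"
BEARER_TOKEN = "your-secret-bearer-token"

# Each query asks a question about your documents; top_k caps the results returned.
query_body = {"queries": [
    {"query": "How much did revenue grow last year?", "top_k": 3},
]}

# To actually send it (requires the deployment to be live):
# import json, urllib.request
# req = urllib.request.Request(
#     f"{PLUGIN_URL}/query",
#     data=json.dumps(query_body).encode(),
#     headers={"Authorization": f"Bearer {BEARER_TOKEN}",
#              "Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())
```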


4. Register your plugin and start querying

Go to chat.openai.com/chat, click on plugins, and then install an unverified plugin. It will ask for your API URL and an access token, which is the bearer token you specified upon deployment. That's it. From now on, whenever you have a question about your data, you can select your custom plugin on the website and ask ChatGPT.


Thank you for reading. Please let me know if you have any questions.
