ChatGPT, we have an API now...
Note: The code example I created is for pre-screening a candidate for a job opening, but the use cases of ChatGPT are practically infinite.
The long-awaited announcement has finally arrived! On March 1st, OpenAI announced the availability of the ChatGPT API. As someone who has been eagerly anticipating this news, I'm thrilled to see this development. With the API now accessible, ChatGPT can be seamlessly integrated with a much larger ecosystem of technology and tools, greatly expanding its potential usage and power. The possibilities are endless, and I can't wait to see what innovations this breakthrough brings about.
In this article, I'll be delving into the details of this API and sharing techniques for leveraging its power in a way that is tailored to unique needs. By taking a more focused approach, we can unlock even greater value from ChatGPT and achieve our desired outcomes more efficiently.
The APIs to call OpenAI's models were already there; what was not available was a way to invoke the model powering ChatGPT, which until now was accessible only through the ChatGPT UI. The underlying model of ChatGPT is called gpt-3.5-turbo. We can now invoke this model in two ways: by calling the chat completions REST endpoint directly over HTTPS, or through one of OpenAI's client libraries, such as the openai Python package used in this article.
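If you prefer not to use the client library, a plain HTTPS call works too. Below is a minimal sketch of a direct call to the chat completions endpoint using the requests package (the API_TOKEN environment variable name is my own setup, explained later in this article):

import os

import requests

# Direct HTTPS call to the chat completions endpoint
response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.getenv('API_TOKEN')}"},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
)
print(response.json()["choices"][0]["message"]["content"])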
The ChatGPT model takes a list of messages as input (we will see how in the code sample below) and returns a model-generated response as output. The messages are provided as an array of JSON message objects, and each object must have a "role" and a "content" field, as shown below:
{"role":"system", "content":"<an initial context>"}
{"role":"user","content":"<user messages>"}
{"role":"assistant", "content":"<usually concatening the returned response>"
In a typical conversation, the "system" role is set on the first message to define the behavior of the model. This is where the specific-purpose bot comes into the picture: in the example below, the model has the specific purpose of pre-screening a potential candidate for a job opening. The subsequent messages alternate between the user and assistant roles. The user messages provide instructions or questions to the assistant, and the assistant responses are appended back to the list to keep the context of the conversation.
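To make this concrete, here is a minimal sketch of what the messages list might look like two turns into such a conversation (the wording is illustrative, not taken from the actual run later in this article):

messages = [
    {"role": "system", "content": "You are a recruiter pre-screening a candidate."},
    {"role": "user", "content": "Hi, I am applying for the data engineer role."},
    {"role": "assistant", "content": "Great! What is your experience with Snowflake?"},
    {"role": "user", "content": "I have used Snowflake for three years."},
]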
Let's see an example of how we can use this API. This particular use case creates a ChatGPT assistant that can help in pre-screening candidates for a potential job opportunity.
import os
import openai

openai.api_key = os.getenv('API_TOKEN')
The above piece of code imports the openai Python library and retrieves the OpenAI API key from an environment variable that I have set up in my PyCharm editor, as shown below.
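If you run the script outside PyCharm, the variable has to be set in that environment too. A small optional guard (my own addition, not part of the original script) makes a missing key fail loudly instead of producing a confusing authentication error later:

import os
import openai

# Fail fast if the environment variable was not set
openai.api_key = os.getenv('API_TOKEN')
if openai.api_key is None:
    raise RuntimeError("API_TOKEN environment variable is not set")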
You can generate an API token from your OpenAI account under "API keys".
In the piece of code below, I am setting up the initial context, i.e. the behavior of the model.
init_context = "You are a recruiter and pre-screening "
"a potential candidate for a " \
"data engineer position " \
"with Snowflake skills." \
"You need to ask some basic question " \
"to understand the candidates " \
"expectation and aspirations." \
"You need to finish after asking 5 questions " \
"followed by scheduing an appointment"
messages = [{"role":"system", "content": init_context}]\
Now, the below piece of code runs in a loop to keep the chat going between the ChatGPT recruiter and the candidate:
while True:
    # Read the candidate's reply and add it to the conversation history
    user_message = input("User: ")
    messages.append({"role": "user", "content": user_message})
    # Send the full history to the model
    chat = openai.ChatCompletion.create(
        model="gpt-3.5-turbo", messages=messages
    )
    # Extract the model's reply, store it for context, and show it
    reply = chat.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    print(reply)
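The loop above runs forever; in practice you will want a way out and some resilience against transient API failures. Here is a hedged variant of the same loop (the "quit" keyword and the error handling are my own additions, not part of the original script):

while True:
    user_message = input("User: ")
    # Let the user end the session by typing "quit"
    if user_message.strip().lower() == "quit":
        break
    messages.append({"role": "user", "content": user_message})
    try:
        chat = openai.ChatCompletion.create(
            model="gpt-3.5-turbo", messages=messages
        )
    except openai.error.OpenAIError as e:
        # Transient failures (rate limits, timeouts) should not kill the chat
        print(f"API call failed, please try again: {e}")
        messages.pop()  # drop the unanswered user message
        continue
    reply = chat.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    print(reply)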
And when I run the program, I get the output below.
I am really amazed by the output of this experiment. Now imagine integrating it with Whisper to enable voice-assisted prompts; how much more powerful would it become?
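To illustrate, here is a minimal sketch of what that integration could look like using OpenAI's Whisper API, which was made available alongside the ChatGPT API (the file name interview_answer.mp3 is hypothetical, and this is my speculation about the integration, not code from this experiment):

# Transcribe a spoken answer with the Whisper API,
# then feed it into the chat as a regular user message
with open("interview_answer.mp3", "rb") as audio_file:
    transcript = openai.Audio.transcribe("whisper-1", audio_file)

user_message = transcript["text"]
messages.append({"role": "user", "content": user_message})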
The entire code used above is available on my GitHub.