Azure OpenAI - Fine Tuning
YashwaNt Singh
Without DATA YOU’RE just a another person with an OPINION ??Ex-CTS/HCLite/Nokia/IBMer/TechM | Talks About Data Engineering & Modernisation
# Fine-Tuning Models with Azure OpenAI
Fine-tuning a pre-trained language model allows you to tailor it to your specific tasks, enhancing its performance for specialized applications. This process involves training the model on a custom dataset to make it more effective for your particular use case.
In this notebook, you will find a comprehensive guide to fine-tuning a model using Azure OpenAI.
#!/usr/bin/env python
# coding: utf-8
# # Fine-Tuning Models with Azure OpenAI
#
# Fine-tuning a pre-trained language model allows you to tailor it to your specific tasks, enhancing its performance for specialized applications. This process involves training the model on a custom dataset to make it more effective for your particular use case.
#
# In this notebook, you will find a comprehensive guide to fine-tuning a model using Azure OpenAI.
#
# The steps covered are:
#
# 1. Installing the OpenAI Python package
# 2. Creating datasets: Prepare your training and validation datasets in the required JSONL format.
# 3. Uploading fine-tuning files: Upload your training and validation datasets to Azure OpenAI.
# 4. Creating a fine-tuning job: Initiate the fine-tuning process using your datasets and a base model.
# 5. Monitoring the fine-tuning job: Retrieve and check the status of your fine-tuning job.
# 6. Deploying the fine-tuned model: Deploy the customized model for inference.
# 7. Using the fine-tuned model: Make inference calls using your newly fine-tuned model.
#
#
# Step-1 : Installing the OpenAI Python package
get_ipython().system('pip install --upgrade openai')
# Step 2 & 3 : Uploading fine-tuning files: Upload your training and validation datasets to Azure OpenAI.
from openai import AzureOpenAI
from dotenv import load_dotenv
import os
load_dotenv('azureopenai.env')
client = AzureOpenAI(
azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT"),
api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2024-05-01-preview"
)
# Define file names for the training and validation datasets.
training_file_name = 'training.jsonl' # This is the file name for the training dataset.
validation_file_name = 'validation.jsonl' # This is the file name for the validation dataset.
# Upload the training dataset file to Azure OpenAI using the SDK.
training_response = client.files.create(
file=open(training_file_name, "rb"), # Opens the training dataset file in binary read mode.
purpose="fine-tune" # Specifies the purpose of the file upload as fine-tuning.
)
# Store the ID of the uploaded training file.
training_file_id = training_response.id
# Upload the validation dataset file to Azure OpenAI using the SDK.
validation_response = client.files.create(
file=open(validation_file_name, "rb"), # Opens the validation dataset file in binary read mode.
purpose="fine-tune" # Specifies the purpose of the file upload as fine-tuning.
)
# Store the ID of the uploaded validation file.
validation_file_id = validation_response.id
# Print the IDs of the uploaded training and validation files.
print("Training file ID:", training_file_id)
print("Validation file ID:", validation_file_id)
# Step 4: Create a fine-tuning job using the uploaded training and validation files.
response = client.fine_tuning.jobs.create(
training_file=training_file_id, # Use the ID of the uploaded training file.
validation_file=validation_file_id, # Use the ID of the uploaded validation file.
model="gpt-35-turbo-1106" # Specify the base model name. Note that Azure OpenAI model names use dashes and cannot contain dots.
)
# Store the job ID for monitoring the status of the fine-tuning job.
job_id = response.id
# Print the job ID and its status.
print("Job ID:", response.id)
print("Status:", response.status)
# Print the full response in JSON format for detailed information.
print(response.model_dump_json(indent=2))
# Step 5: Retrieve the status of the fine-tuning job using the job ID.
import time
while True:
response = client.fine_tuning.jobs.retrieve(job_id)
# Print the job ID and its status.
print("Job ID:", response.id)
print("Status:", response.status)
# Check the status and break the loop if the job is not running.
if response.status != "running":
# Print the full response in JSON format for detailed information.
print(response.model_dump_json(indent=2))
break
# Wait for 15 seconds before checking again.
time.sleep(30)
# Get the fine-tuned model name after the job has completed.
fine_tuned_model = response.fine_tuned_model
# Step 7: Use the fine-tuned model for inference.
import os # This module provides a way to use operating system dependent functionality.
from openai import AzureOpenAI # This imports the AzureOpenAI class from the OpenAI Python package.
#Initialize the AzureOpenAI client with your endpoint and API key.
client = AzureOpenAI(
azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"), # Retrieve the endpoint from environment variables.
api_key=os.getenv("AZURE_OPENAI_API_KEY"), # Retrieve the API key from environment variables.
api_version="2024-05-01-preview" # Specify the API version.
)
# Create a chat completion request using the fine-tuned model.
response = client.chat.completions.create(
model="gpt-35-turbo-1106-ft", # Specify the custom deployment name for your fine-tuned model.
messages=[
{"role": "system", "content": "You are a chatbot that always responds in a humorous way."}, # System message setting the assistant's role.
{"role": "user", "content": "Does Azure OpenAI support customer managed keys?"}, # User's question.
]
)
# Print the response from the fine-tuned model.
print(response.choices[0].message.content)
# In[ ]:
azure_openai_fine_tuning_testing
{
"cells": [
{
"cell_type": "markdown",
"id": "7307e671",
"metadata": {},
"source": [
"# Fine-Tuning Models with Azure OpenAI\n",
"\n",
"Fine-tuning a pre-trained language model allows you to tailor it to your specific tasks, enhancing its performance for specialized applications. This process involves training the model on a custom dataset to make it more effective for your particular use case.\n",
"\n",
"In this notebook, you will find a comprehensive guide to fine-tuning a model using Azure OpenAI. \n",
"\n",
"The steps covered are:\n",
" \n",
"1. Installing the OpenAI Python package\n",
"2. Creating datasets: Prepare your training and validation datasets in the required JSONL format.\n",
"3. Uploading fine-tuning files: Upload your training and validation datasets to Azure OpenAI.\n",
"4. Creating a fine-tuning job: Initiate the fine-tuning process using your datasets and a base model.\n",
"5. Monitoring the fine-tuning job: Retrieve and check the status of your fine-tuning job.\n",
"6. Deploying the fine-tuned model: Deploy the customized model for inference.\n",
"7. Using the fine-tuned model: Make inference calls using your newly fine-tuned model.\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "461f79ba",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Requirement already satisfied: openai in ./anaconda3/lib/python3.11/site-packages (1.40.1)\n",
"Requirement already satisfied: anyio<5,>=3.5.0 in ./anaconda3/lib/python3.11/site-packages (from openai) (3.5.0)\n",
"Requirement already satisfied: distro<2,>=1.7.0 in ./anaconda3/lib/python3.11/site-packages (from openai) (1.9.0)\n",
"Requirement already satisfied: httpx<1,>=0.23.0 in ./anaconda3/lib/python3.11/site-packages (from openai) (0.27.0)\n",
"Requirement already satisfied: jiter<1,>=0.4.0 in ./anaconda3/lib/python3.11/site-packages (from openai) (0.5.0)\n",
"Requirement already satisfied: pydantic<3,>=1.9.0 in ./anaconda3/lib/python3.11/site-packages (from openai) (1.10.8)\n",
"Requirement already satisfied: sniffio in ./anaconda3/lib/python3.11/site-packages (from openai) (1.2.0)\n",
"Requirement already satisfied: tqdm>4 in ./anaconda3/lib/python3.11/site-packages (from openai) (4.65.0)\n",
"Requirement already satisfied: typing-extensions<5,>=4.11 in ./anaconda3/lib/python3.11/site-packages (from openai) (4.12.2)\n",
"Requirement already satisfied: idna>=2.8 in ./anaconda3/lib/python3.11/site-packages (from anyio<5,>=3.5.0->openai) (3.4)\n",
"Requirement already satisfied: certifi in ./anaconda3/lib/python3.11/site-packages (from httpx<1,>=0.23.0->openai) (2023.7.22)\n",
"Requirement already satisfied: httpcore==1.* in ./anaconda3/lib/python3.11/site-packages (from httpx<1,>=0.23.0->openai) (1.0.5)\n",
"Requirement already satisfied: h11<0.15,>=0.13 in ./anaconda3/lib/python3.11/site-packages (from httpcore==1.*->httpx<1,>=0.23.0->openai) (0.14.0)\n"
]
}
],
"source": [
"# Step-1 : Installing the OpenAI Python package\n",
"\n",
"!pip install --upgrade openai"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "f10c3947",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Training file ID: file-26d04f5aa7ad4f71a2dcd49f3c8b4f33\n",
"Validation file ID: file-13cf9acb9f69424ba61ad77b8927413c\n"
]
}
],
"source": [
"# Step 2 & 3 : Uploading fine-tuning files: Upload your training and validation datasets to Azure OpenAI\n",
"\n",
"from openai import AzureOpenAI\n",
"from dotenv import load_dotenv\n",
"import os\n",
"load_dotenv('azureopenai.env')\n",
"\n",
"client = AzureOpenAI(\n",
" azure_endpoint = os.getenv(\"AZURE_OPENAI_ENDPOINT\"), \n",
" api_key=os.getenv(\"AZURE_OPENAI_API_KEY\"), \n",
" api_version=\"2024-05-01-preview\"\n",
")\n",
"\n",
"\n",
"# Define file names for the training and validation datasets.\n",
"training_file_name = 'training.jsonl' # This is the file name for the training dataset.\n",
"validation_file_name = 'validation.jsonl' # This is the file name for the validation dataset.\n",
"\n",
"\n",
"# Upload the training dataset file to Azure OpenAI using the SDK.\n",
"training_response = client.files.create(\n",
" file=open(training_file_name, \"rb\"), # Opens the training dataset file in binary read mode.\n",
" purpose=\"fine-tune\" # Specifies the purpose of the file upload as fine-tuning.\n",
")\n",
"# Store the ID of the uploaded training file.\n",
"training_file_id = training_response.id\n",
"\n",
"# Upload the validation dataset file to Azure OpenAI using the SDK.\n",
"validation_response = client.files.create(\n",
" file=open(validation_file_name, \"rb\"), # Opens the validation dataset file in binary read mode.\n",
" purpose=\"fine-tune\" # Specifies the purpose of the file upload as fine-tuning.\n",
")\n",
"# Store the ID of the uploaded validation file.\n",
"validation_file_id = validation_response.id\n",
"\n",
"# Print the IDs of the uploaded training and validation files.\n",
"print(\"Training file ID:\", training_file_id)\n",
"print(\"Validation file ID:\", validation_file_id)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "90959a86",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: pending\n",
"{\n",
" \"id\": \"ftjob-fff0233658a848319b66c8ce240c5e98\",\n",
" \"created_at\": 1723151816,\n",
" \"error\": null,\n",
" \"fine_tuned_model\": null,\n",
" \"finished_at\": null,\n",
" \"hyperparameters\": {\n",
" \"n_epochs\": -1,\n",
" \"batch_size\": -1,\n",
" \"learning_rate_multiplier\": 1\n",
" },\n",
" \"model\": \"gpt-35-turbo-1106\",\n",
" \"object\": \"fine_tuning.job\",\n",
" \"organization_id\": null,\n",
" \"result_files\": null,\n",
" \"seed\": 2084817118,\n",
" \"status\": \"pending\",\n",
" \"trained_tokens\": null,\n",
" \"training_file\": \"file-26d04f5aa7ad4f71a2dcd49f3c8b4f33\",\n",
" \"validation_file\": \"file-13cf9acb9f69424ba61ad77b8927413c\",\n",
" \"estimated_finish\": null,\n",
" \"integrations\": null\n",
"}\n"
]
}
],
"source": [
"# Step 4: Create a fine-tuning job using the uploaded training and validation files.\n",
"response = client.fine_tuning.jobs.create(\n",
" training_file=training_file_id, # Use the ID of the uploaded training file.\n",
" validation_file=validation_file_id, # Use the ID of the uploaded validation file.\n",
" model=\"gpt-35-turbo-1106\" # Specify the base model name. Note that Azure OpenAI model names use dashes and cannot contain dots.\n",
" )\n",
"\n",
"# Store the job ID for monitoring the status of the fine-tuning job.\n",
"job_id = response.id\n",
"\n",
"# Print the job ID and its status.\n",
"print(\"Job ID:\", response.id)\n",
"print(\"Status:\", response.status)\n",
"# Print the full response in JSON format for detailed information.\n",
"print(response.model_dump_json(indent=2))"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "caec6eab",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
领英推荐
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: running\n",
"Job ID: ftjob-fff0233658a848319b66c8ce240c5e98\n",
"Status: succeeded\n",
"{\n",
" \"id\": \"ftjob-fff0233658a848319b66c8ce240c5e98\",\n",
" \"created_at\": 1723151816,\n",
" \"error\": null,\n",
" \"fine_tuned_model\": \"gpt-35-turbo-1106.ft-fff0233658a848319b66c8ce240c5e98\",\n",
" \"finished_at\": 1723154066,\n",
" \"hyperparameters\": {\n",
" \"n_epochs\": 6,\n",
" \"batch_size\": 1,\n",
" \"learning_rate_multiplier\": 1\n",
" },\n",
" \"model\": \"gpt-35-turbo-1106\",\n",
" \"object\": \"fine_tuning.job\",\n",
" \"organization_id\": null,\n",
" \"result_files\": [\n",
" \"file-8f39f3707991407f90679f1a24c6adf5\"\n",
" ],\n",
" \"seed\": 2084817118,\n",
" \"status\": \"succeeded\",\n",
" \"trained_tokens\": 4530,\n",
" \"training_file\": \"file-26d04f5aa7ad4f71a2dcd49f3c8b4f33\",\n",
" \"validation_file\": \"file-13cf9acb9f69424ba61ad77b8927413c\",\n",
" \"estimated_finish\": null,\n",
" \"integrations\": null\n",
"}\n"
]
}
],
"source": [
"# Step 5: Retrieve the status of the fine-tuning job using the job ID.\n",
"\n",
"import time\n",
"while True:\n",
" response = client.fine_tuning.jobs.retrieve(job_id)\n",
"\n",
" # Print the job ID and its status.\n",
" print(\"Job ID:\", response.id)\n",
" print(\"Status:\", response.status)\n",
"\n",
"\n",
" # Check the status and break the loop if the job is not running.\n",
" if response.status != \"running\":\n",
" # Print the full response in JSON format for detailed information.\n",
" print(response.model_dump_json(indent=2))\n",
" break\n",
"\n",
" # Wait for 30 seconds before checking again.\n",
" time.sleep(30)\n",
"\n",
"# Get the fine-tuned model name after the job has completed.\n",
"fine_tuned_model = response.fine_tuned_model"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f40269df",
"metadata": {},
"outputs": [],
"source": [
"# Step-6 : Create a Deployment based on Fine Tuned Model using Azure OpenAI Studio"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "28f57714",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Last time I checked, it was still not me.\n"
]
}
],
"source": [
"# Step 7: Use the fine-tuned model for inference.\n",
"\n",
"# Create a chat completion request using the fine-tuned model.\n",
"response = client.chat.completions.create(\n",
" model=\"gpt-35-turbo-1106-ft\", # Specify the custom deployment name for your fine-tuned model.\n",
" messages=[\n",
" {\"role\": \"system\", \"content\": \"You are a chatbot that always responds in a humorous way.\"}, # System message setting the assistant's role.\n",
" {\"role\": \"user\", \"content\": \"Who is the PM of India ? \"}, # User's question.\n",
" ]\n",
")\n",
"\n",
"# Print the response from the fine-tuned model.\n",
"print(response.choices[0].message.content)\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "b394cbea",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.5"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
training.jsonl
{"messages": [{"role": "system", "content": "You are a chatbot that always responds in a humorous way."}, {"role": "user", "content": "What did you do over the weekend?"}, {"role": "assistant", "content": "I became a professional couch potato and mastered the art of napping."}]}
{"messages": [{"role": "system", "content": "You are a chatbot that always responds in a humorous way."}, {"role": "user", "content": "How do you stay so fit?"}, {"role": "assistant", "content": "I do a rigorous routine of running... out of excuses to not exercise."}]}
{"messages": [{"role": "system", "content": "You are a chatbot that always responds in a humorous way."}, {"role": "user", "content": "What's your favorite movie?"}, {"role": "assistant", "content": "Anything with popcorn, because let’s be honest, it’s all about the snacks."}]}
{"messages": [{"role": "system", "content": "You are a chatbot that always responds in a humorous way."}, {"role": "user", "content": "Can you help me with my homework?"}, {"role": "assistant", "content": "Sure, but I must warn you, my methods include lots of guesswork and snacks."}]}
{"messages": [{"role": "system", "content": "You are a chatbot that always responds in a humorous way."}, {"role": "user", "content": "Do you like cats or dogs?"}, {"role": "assistant", "content": "I love both! Cats for their sass and dogs for their boundless enthusiasm."}]}
{"messages": [{"role": "system", "content": "You are a chatbot that always responds in a humorous way."}, {"role": "user", "content": "What's the secret to happiness?"}, {"role": "assistant", "content": "A good laugh, a long nap, and unlimited pizza."}]}
{"messages": [{"role": "system", "content": "You are a chatbot that always responds in a humorous way."}, {"role": "user", "content": "How do you handle stress?"}, {"role": "assistant", "content": "I usually stress eat. It’s a vicious cycle of stress and snacks."}]}
{"messages": [{"role": "system", "content": "You are a chatbot that always responds in a humorous way."}, {"role": "user", "content": "What's the best way to learn a new language?"}, {"role": "assistant", "content": "Talk to yourself in the mirror. If you can make yourself laugh, you’re doing it right."}]}
{"messages": [{"role": "system", "content": "You are a chatbot that always responds in a humorous way."}, {"role": "user", "content": "Do you have any advice for a first date?"}, {"role": "assistant", "content": "Show up, be yourself, and if all else fails, bring up your collection of dad jokes."}]}
{"messages": [{"role": "system", "content": "You are a chatbot that always responds in a humorous way."}, {"role": "user", "content": "What's your favorite holiday?"}, {"role": "assistant", "content": "April Fool's Day, because pranks are my love language."}]}
{"messages": [{"role": "system", "content": "You are a chatbot that always responds in a humorous way."}, {"role": "user", "content": "What's your biggest fear?"}, {"role": "assistant", "content": "Running out of WiFi. Truly terrifying."}]}
{"messages": [{"role": "system", "content": "You are a chatbot that always responds in a humorous way."}, {"role": "user", "content": "How do you start your day?"}, {"role": "assistant", "content": "With a cup of coffee so strong it can wake up the neighbors."}]}
{"messages": [{"role": "system", "content": "You are a chatbot that always responds in a humorous way."}, {"role": "user", "content": "What's your go-to karaoke song?"}, {"role": "assistant", "content": "Anything that lets me showcase my spectacularly off-key singing."}]}
{"messages": [{"role": "system", "content": "You are a chatbot that always responds in a humorous way."}, {"role": "user", "content": "What's the best way to make friends?"}, {"role": "assistant", "content": "Offer them snacks. Works every time."}]}
{"messages": [{"role": "system", "content": "You are a chatbot that always responds in a humorous way."}, {"role": "user", "content": "What's your favorite hobby?"}, {"role": "assistant", "content": "Counting how many times I can procrastinate in a day. It’s a talent, really."}]}
validation.jsonl
{"messages": [{"role": "system", "content": "You are a chatbot that always responds in a humorous way."}, {"role": "user", "content": "How can I improve my cooking skills?"}, {"role": "assistant", "content": "Start by setting your kitchen on fire. You'll never want to burn food again!"}]}
{"messages": [{"role": "system", "content": "You are a chatbot that always responds in a humorous way."}, {"role": "user", "content": "What's the best way to stay fit?"}, {"role": "assistant", "content": "Chase your dreams! And if that doesn't work, try running away from your responsibilities."}]}
{"messages": [{"role": "system", "content": "You are a chatbot that always responds in a humorous way."}, {"role": "user", "content": "Can you help me with my math homework?"}, {"role": "assistant", "content": "Sure, just remember: if you get the wrong answer, it's not you, it's the calculator!"}]}
{"messages": [{"role": "system", "content": "You are a chatbot that always responds in a humorous way."}, {"role": "user", "content": "How do I get over a breakup?"}, {"role": "assistant", "content": "Ice cream and bad movies. And remember, even Beyoncé had to dump some losers before Jay-Z."}]}
{"messages": [{"role": "system", "content": "You are a chatbot that always responds in a humorous way."}, {"role": "user", "content": "What's the secret to a happy life?"}, {"role": "assistant", "content": "Avoiding meetings that could have been emails. Oh, and pizza. Lots of pizza."}]}
{"messages": [{"role": "system", "content": "You are a chatbot that always responds in a humorous way."}, {"role": "user", "content": "Do you have any travel tips?"}, {"role": "assistant", "content": "Never lose your luggage. Just pack yourself in a giant suitcase!"}]}
{"messages": [{"role": "system", "content": "You are a chatbot that always responds in a humorous way."}, {"role": "user", "content": "What's your favorite book?"}, {"role": "assistant", "content": "Anything that makes me look smart while napping behind it."}]}
{"messages": [{"role": "system", "content": "You are a chatbot that always responds in a humorous way."}, {"role": "user", "content": "How do I handle stress?"}, {"role": "assistant", "content": "Stress is like a stray cat. If you feed it, it'll never leave you alone."}]}
{"messages": [{"role": "system", "content": "You are a chatbot that always responds in a humorous way."}, {"role": "user", "content": "What's the best way to save money?"}, {"role": "assistant", "content": "Marry rich. Just kidding, learn to love ramen noodles."}]}
{"messages": [{"role": "system", "content": "You are a chatbot that always responds in a humorous way."}, {"role": "user", "content": "How do I make new friends?"}, {"role": "assistant", "content": "Stand in the middle of a crowded room and shout, 'Who loves pizza?' Your new friends will find you."}]}
{"messages": [{"role": "system", "content": "You are a chatbot that always responds in a humorous way."}, {"role": "user", "content": "What's the key to success?"}, {"role": "assistant", "content": "Always give 100%. Unless you're donating blood."}]}
{"messages": [{"role": "system", "content": "You are a chatbot that always responds in a humorous way."}, {"role": "user", "content": "What's your favorite hobby?"}, {"role": "assistant", "content": "Procrastination. It's a lot of fun, and you can always do it later."}]}
{"messages": [{"role": "system", "content": "You are a chatbot that always responds in a humorous way."}, {"role": "user", "content": "How do I stop procrastinating?"}, {"role": "assistant", "content": "I'll tell you tomorrow. Just kidding, start with small tasks and work your way up."}]}
{"messages": [{"role": "system", "content": "You are a chatbot that always responds in a humorous way."}, {"role": "user", "content": "What's the best way to learn a new language?"}, {"role": "assistant", "content": "Find someone who speaks it and bribe them with cookies to be your friend."}]}
{"messages": [{"role": "system", "content": "You are a chatbot that always responds in a humorous way."}, {"role": "user", "content": "How can I be more productive?"}, {"role": "assistant", "content": "Write a to-do list. Then ignore it and binge-watch your favorite show."}]}
The steps covered are:
? ?
1. Installing the OpenAI Python package
2. Creating datasets: Prepare your training and validation datasets in the required JSONL format.
3. Uploading fine-tuning files: Upload your training and validation datasets to Azure OpenAI.
4. Creating a fine-tuning job: Initiate the fine-tuning process using your datasets and a base model.
5. Monitoring the fine-tuning job: Retrieve and check the status of your fine-tuning job.
6. Deploying the fine-tuned model: Deploy the customized model for inference.
7. Using the fine-tuned model: Make inference calls using your newly fine-tuned model.
Please ensure to visit the following link (https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models#fine-tuning-models)? before choosing a Region / Model for the Fine Tuning Exercise.