Introduction to Programming with Prompts
by @rUv, just because.
Prompt programming represents a significant shift in how developers interact with computers, moving beyond fixed, explicit syntax toward more dynamic and interactive methods.
Traditionally, programming involved writing explicit code with hardcoded inputs, such as defining a function to perform a basic addition. This approach, while straightforward, lacks flexibility and adaptability, especially in scenarios requiring user interaction or real-time data processing.
The advent of AI and language models has introduced new paradigms that significantly enhance programming capabilities. These advancements allow for the integration of structured outputs and user prompts, making programs more interactive and responsive. By incorporating prompts, developers can create software that adapts to user inputs, offering a more personalized and dynamic user experience.
The use of AI models, particularly large language models (LLMs), has opened up possibilities for automated code generation and structured data handling. These models can interpret natural language prompts to generate code, validate data, and produce structured outputs, reducing the need for manual coding and enhancing efficiency.
This shift towards AI-driven programming paradigms not only streamlines development processes but also democratizes coding, making it more accessible to non-experts. As a result, prompt programming is poised to transform the landscape of software development, enabling more sophisticated and adaptive applications.
Key Differences
Traditional Syntax
Traditional programming involves writing explicit code with predefined logic and hardcoded inputs. This approach is straightforward and efficient for tasks with fixed requirements but lacks flexibility when dealing with dynamic or user-driven scenarios. It requires manual coding for each specific task, which can be time-consuming and less adaptable to changes.
In traditional syntax, the function is defined and called directly with arguments:
# Traditional Syntax Example: Basic Addition
# Define a function named 'add' that takes two parameters, 'a' and 'b'.
def add(a, b):
    # The function returns the sum of 'a' and 'b'.
    return a + b

# Call the 'add' function with arguments 5 and 3, and store the result in the variable 'result'.
result = add(5, 3)
# Print the value of 'result', which is the output of the 'add' function.
print(result)  # Output: 8
This approach is straightforward, with inputs provided directly in the code.
Structured Output with Prompts
This paradigm introduces user interaction through prompts, allowing programs to receive inputs dynamically. It makes software more interactive and user-friendly, as the program can adapt its behavior based on user inputs. This approach is beneficial for applications where user preferences or real-time data need to be considered, enhancing the versatility of the software.
In a structured output with prompts approach, the program interacts with the user to get inputs:
# Define a prompt message asking the user to enter the first number.
prompt_a = 'Enter the first number: '
# Define a prompt message asking the user to enter the second number.
prompt_b = 'Enter the second number: '
# Use the input function to display the first prompt and capture the user's input.
# Convert the input from a string to an integer using the int() function.
a = int(input(prompt_a))
# Use the input function to display the second prompt and capture the user's input.
# Convert the input from a string to an integer using the int() function.
b = int(input(prompt_b))
# Calculate the sum of the two numbers entered by the user.
result = a + b
# Print the result of the addition.
print(result)
This method involves user interaction, where inputs are received through prompts. Note that this code might not execute in environments that do not support interactive input, such as some online interpreters.
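Where interactive input is unavailable, the prompt-driven version can be made to degrade gracefully. A minimal sketch, using a hypothetical `get_int` helper (not part of the original example) that falls back to a default value when reading fails:

```python
def get_int(prompt, default, reader=input):
    """Read an integer via `reader`; fall back to `default` when
    input is unavailable (EOFError/OSError) or not a number."""
    try:
        return int(reader(prompt))
    except (EOFError, OSError, ValueError):
        return default

# Simulate a non-interactive environment with a reader that always fails.
def no_stdin(prompt):
    raise EOFError

a = get_int('Enter the first number: ', 5, reader=no_stdin)
b = get_int('Enter the second number: ', 3, reader=no_stdin)
print(a + b)  # falls back to the defaults: 8
```

The same helper works unchanged in a real terminal, where `reader=input` reads from the user.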
Advanced Methods with AI and LLMs
Advancements in AI, particularly with large language models (LLMs), have led to new programming paradigms that leverage AI for code generation, structured inputs and outputs, and dynamic execution. These methods allow for JSON-formatted outputs, schema-constrained function calling, declarative code generation, and natural-language-guided development.
These advancements provide greater flexibility and power, enabling more sophisticated programming paradigms that can adapt to complex and evolving needs. They open up possibilities for more intuitive and efficient software development processes, where AI assists in automating repetitive tasks and enhancing decision-making capabilities.
JSON Mode with Structured Output
JSON Mode with Structured Output is a method that ensures the AI model generates outputs in a valid JSON format, which can be easily parsed and executed by other systems or applications. This approach is particularly useful for applications requiring structured data interchange, as JSON is a widely accepted and language-independent format. By leveraging JSON mode, developers can automate the creation of complex data structures, ensuring consistency and reducing the need for manual data formatting. This enhances the efficiency of data processing workflows and enables seamless integration with various software systems, facilitating smooth data exchange and interoperability.
Using JSON mode ensures the model outputs valid JSON, which can be parsed and executed:
# Define a prompt asking the AI to generate a JSON object with fields for name and age
prompt = "Generate a JSON object with fields 'name' and 'age'."
# Make a request to the OpenAI API using the ChatCompletion method
response = client.chat.completions.create(
    model="gpt-4o-2024-08-06",  # Specify the model to use for the chat completion
    messages=[{"role": "user", "content": prompt}],  # Provide the prompt as a user message
    response_format={"type": "json_object"},  # Enable JSON mode so the output is valid JSON
    max_tokens=50,  # Set the maximum number of tokens for the response
    temperature=0.2,  # Set the temperature for response variability
    n=1  # Number of completions to generate
)
# Extract the content of the response message, which should contain the generated JSON object
response_content = response.choices[0].message.content.strip()
# Print the generated JSON object
print(response_content)
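Before the returned string is handed to other systems, it should be parsed and validated. A short sketch using the standard json module, with a hard-coded example string standing in for a live `response_content`:

```python
import json

# Example model output (stands in for response_content from a live API call).
response_content = '{"name": "Emily Johnson", "age": 29}'

try:
    data = json.loads(response_content)  # raises JSONDecodeError on invalid JSON
    print(data["name"], data["age"])
except json.JSONDecodeError as e:
    print("Model did not return valid JSON:", e)
```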
Function Calling with Structured Outputs
Function Calling with Structured Outputs is a programming technique that enables the generation of outputs adhering to a predefined schema. This method leverages AI models to produce structured data that fits specific requirements, ensuring consistency and reliability. By defining a schema, developers can automate data processing tasks while maintaining data integrity, which is crucial for applications that rely on precise data formats. This approach is particularly beneficial in scenarios where structured data is essential, such as API responses, data validation, and integration with other systems. It enhances the robustness of software solutions by providing clear guidelines for data output.
Function calling allows for structured output that adheres to a specific schema:
from pydantic import BaseModel

# Define a Pydantic model for structured data with fields for name and age
class PersonInfo(BaseModel):
    name: str  # The 'name' field is a string representing a person's name
    age: int  # The 'age' field is an integer representing a person's age

# Use the OpenAI client to parse a chat completion with a structured response
completion = client.beta.chat.completions.parse(
    model="gpt-4o-2024-08-06",  # Specify the model to use for the chat completion
    messages=[{"role": "user", "content": "Provide your name and age."}],  # Provide the message content for the AI
    response_format=PersonInfo  # Specify the response format using the PersonInfo Pydantic model
)
# Extract the parsed response from the completion, which should be a PersonInfo object
person_info = completion.choices[0].message.parsed
# Print the structured data, which includes the name and age fields
print(person_info)
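Because `PersonInfo` is an ordinary Pydantic model, the same schema also validates data arriving from any other source, not just the API. A brief sketch (Pydantic v2 assumed):

```python
from pydantic import BaseModel, ValidationError

class PersonInfo(BaseModel):
    name: str
    age: int

# A well-formed payload parses into a typed object.
person = PersonInfo.model_validate_json('{"name": "Alice", "age": 30}')
print(person.age)  # 30

# A malformed payload (age is not an integer) raises ValidationError.
try:
    PersonInfo.model_validate_json('{"name": "Bob", "age": "unknown"}')
except ValidationError:
    print("Validation failed")
```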
Declarative Code Generation
Declarative code generation uses AI to create functional code based on natural language descriptions. This method allows developers to specify what they want to achieve without detailing how to implement it, enabling rapid prototyping and reducing development time. By translating high-level descriptions into executable code, AI can assist in automating repetitive coding tasks, freeing developers to focus on more complex problem-solving.
The model can generate and execute functional code based on text input:
# Define a prompt that instructs the AI to write a Python function for adding two numbers
prompt = "Write a Python function that adds two numbers and returns the result."
# Make a request to the OpenAI API using the ChatCompletion method
response = client.chat.completions.create(
    model="gpt-4o-2024-08-06",  # Specify the model to use for the chat completion
    messages=[{"role": "user", "content": prompt}],  # Provide the prompt as a user message
    max_tokens=100,  # Set the maximum number of tokens for the response
    temperature=0.2,  # Set the temperature for response variability
    n=1  # Number of completions to generate
)
# Extract the generated content from the response
response_content = response.choices[0].message.content.strip()
# Print the response content for inspection
print("Generated Response:\n", response_content)
# Attempt to extract only the Python code from the response
# This is a simple heuristic and may need to be adjusted based on actual response content
code_start = response_content.find("def ")
if code_start != -1:
    generated_code = response_content[code_start:]
    try:
        # Execute the extracted code
        exec(generated_code)
        # Call the generated 'add' function with arguments 5 and 3, and store the result
        result = add(5, 3)
        # Print the result of the addition, which should be 8
        print(result)  # Should output 8
    except Exception as e:
        print("Error executing code:", e)
else:
    print("No valid Python code found in the response.")
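Running exec against the module's global namespace, as above, is convenient but lets generated code overwrite existing names. Passing an explicit namespace dict keeps generated definitions contained — a sketch with a hard-coded snippet standing in for a live model response:

```python
# Stand-in for code returned by the model.
generated_code = "def add(a, b):\n    return a + b"

namespace = {}
exec(generated_code, namespace)  # definitions land in `namespace`, not globals()
result = namespace["add"](5, 3)
print(result)  # 8
```

Untrusted generated code still warrants a proper sandbox; a namespace dict only prevents accidental name collisions, not malicious behavior.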
Natural Language Style Development
Natural language style development allows developers to interact with AI using conversational prompts to generate structured responses. This method bridges the gap between human language and machine-readable outputs, making programming more accessible to non-experts. By guiding AI with natural language, developers can create structured data outputs that align with specific requirements, enhancing the flexibility and adaptability of software solutions.
Using natural language to guide the model in creating structured responses:
# Define a multi-line prompt that instructs the AI to create a structured output
prompt = """
Create a structured output with the following details:
- Title: 'AI in Healthcare'
- Author: 'Dr. Jane Doe'
- Summary: 'An exploration of AI applications in modern healthcare systems.'
"""
# Make a request to the OpenAI API using the ChatCompletion method
response = client.chat.completions.create(
    model="gpt-4o-2024-08-06",  # Specify the model to use for the chat completion
    messages=[{"role": "user", "content": prompt}],  # Provide the prompt as a user message
    max_tokens=150,  # Set the maximum number of tokens for the response
    temperature=0.2,  # Set the temperature for response variability
    n=1  # Number of completions to generate
)
# Print the content of the response message, which should contain the structured output
print(response.choices[0].message.content)
The choice between these approaches depends on the context and requirements of the program. Advanced methods using AI and LLMs provide greater flexibility and power, enabling more sophisticated programming paradigms.
Introduction: From Setup to Advanced Prompt-Based Programming
This notebook guides you through a comprehensive journey of learning prompt-based programming, starting from essential setup steps and progressing to advanced examples: basic LLM requests, JSON mode, function calling with structured outputs, declarative code generation, and structured data extraction.
Each example builds on the previous ones, showcasing how you can leverage natural language prompts to create increasingly complex programs and analyses. By the end of this notebook, you'll have a strong foundation in using language models to program interactively and generate structured outputs dynamically.
Install Requirements
This code example installs the necessary Python libraries (python-dotenv, colorama, and llama-index) for working with OpenAI and other advanced tools in your Jupyter notebook. These packages enable environment variable management, terminal styling, and advanced AI indexing functionalities. Run the command below if the packages are not already installed.
# @title Install requirements
# Install the required libraries (skip if already installed)
!pip install python-dotenv colorama llama-index
Configure OpenAI API Key
This code example demonstrates how to configure the OpenAI API key in a Jupyter notebook using Colab's userdata module. It securely retrieves and sets the API key, then initializes the OpenAI client. A verification step checks if the API key is correctly set, ensuring that the notebook is ready for further API interactions.
# @title Configure OpenAI API Key
# Import necessary libraries
import openai
from google.colab import userdata
from openai import OpenAI
# Retrieve and set the API key
api_key = userdata.get('OPENAI_API_KEY')
openai.api_key = api_key
# Initialize the OpenAI client, passing the API key
client = OpenAI(api_key=api_key)
# Verify the API key is set (this is just for demonstration and should not be used in production code)
if openai.api_key:
    print("OpenAI API key is set. Ready to proceed!")
else:
    print("OpenAI API key is not set. Please check your setup.")
OpenAI API key is set. Ready to proceed!
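Outside Colab, where google.colab.userdata is unavailable, the key is typically read from an environment variable (optionally populated from a .env file via the python-dotenv package installed earlier). A minimal sketch using only the standard library:

```python
import os

# Read the key from the environment instead of Colab's userdata.
api_key = os.environ.get("OPENAI_API_KEY")
if api_key:
    print("OpenAI API key is set. Ready to proceed!")
else:
    print("OpenAI API key is not set. Please check your setup.")
```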
LLM Response Configuration with Custom Prompt
This example demonstrates configuring an API call to a large language model (LLM) using various parameters, such as temperature, max tokens, and frequency penalties. The prompt is dynamically constructed with these settings and sent to the LLM to generate a BBS-style welcome message. The structured output is then printed, showing how flexibly API calls can be tailored to create diverse and creative responses.
# @title LLM Response Configuration Generation
# Define LLM parameters for the API call
model = "gpt-4o-2024-08-06" # @param ["gpt-4o-2024-08-06", "gpt-4-0613", "gpt-4-32k-0613", "gpt-4-turbo"] {type:"string"}
# The initial prompt (can be edited via the Colab UI)
base_prompt = "Initializing Jupyter Notebook, respond activated with a unique and creative BBS style design (no ascii logo) and welcome message. Append LLM settings after." # @param {type:"string"}
max_tokens = 300 # @param {type:"integer"}
temperature = 0.2 # @param {type:"number"}
n = 1 # @param {type:"integer"}
stop = None # @param {type:"string"}
top_p = 1.0 # @param {type:"number"}
frequency_penalty = 0.0 # @param {type:"number"}
presence_penalty = 0.0 # @param {type:"number"}
# Insert parameters into the prompt using Python f-strings
prompt = f"{base_prompt} (Max tokens: {max_tokens}, Temperature: {temperature}, Top_p: {top_p}, Frequency penalty: {frequency_penalty}, Presence penalty: {presence_penalty})"
# Make the API call to generate the response
response = client.chat.completions.create(
    model=model,
    messages=[{"role": "user", "content": prompt}],
    max_tokens=max_tokens,
    temperature=temperature,
    n=n,
    stop=stop,
    top_p=top_p,
    frequency_penalty=frequency_penalty,
    presence_penalty=presence_penalty
)
# Extract the structured output
structured_output = response.choices[0].message.content.strip()
# Print the structured output
print("Generated Code and Description:\n", structured_output)
Generated Code and Description:
```
╔══════════════════════════════════════════════════════════════════════════╗
║
║ Welcome to PySphere!
║
║ Embark on a journey of exploration and discovery within the realm of
║ data science and machine learning. Here, your ideas take shape, and
║ your code comes alive.
║
║ Whether you're analyzing data, building models, or visualizing results,
║ PySphere is your canvas. Dive into the world of Jupyter Notebooks and
║ let your creativity flow.
║
║ Remember, every great discovery starts with a single line of code.
║
╚══════════════════════════════════════════════════════════════════════════╝
LLM Settings:
- Max Tokens: 300
- Temperature: 0.2
- Top_p: 1.0
- Frequency Penalty: 0.0
- Presence Penalty: 0.0
```
Simple LLM Request Example with Response Formatting
This example demonstrates how to use a prompt-based request to generate a response in a specific format. The code clears any previous prompt and defines a new one that asks the large language model (LLM) to translate English text to French. The structured output is extracted and displayed, showcasing how natural language processing can handle language translation efficiently.
# @title Simple LLM Request Example with Response Formatting
# Reset the prompt to an empty string to clear any previous value
prompt = ""  # Reset the prompt
# Define a new prompt asking for an English-to-French translation
prompt = "Translate the following English text to French: 'Hello, how are you?'" # @param {type:"string"}
# Make the API call with the new prompt
response = client.chat.completions.create(
    model="gpt-4o-2024-08-06",
    messages=[{"role": "user", "content": prompt}],
    max_tokens=50,
    temperature=0.2,
    n=1
)
# Extract the structured output
structured_output = response.choices[0].message.content.strip()
# Print the structured output (the French translation)
print(structured_output)
Bonjour, comment ça va ?
Traditional Factorial Calculation Example
This example showcases a traditional Python function that calculates the factorial of 22. The function uses a loop to multiply numbers from 1 to 22, resulting in the factorial. This approach highlights a straightforward method for performing mathematical operations in Python without relying on dynamic prompts or natural language processing.
# @title Traditional Code Example: calculates the factorial of a number
def factorial_of_22():
    result = 1
    for i in range(1, 23):  # Loop from 1 to 22 inclusive
        result *= i
    return result

# Call the function and print the result
print(factorial_of_22())
# Call the function and print the result
print(factorial_of_22())
1124000727777607680000
Simple Function Generation Using Prompts
This example demonstrates how to use natural language prompts to dynamically generate Python functions, such as calculating factorials, roots, powers, and trigonometric functions. The code takes form parameters specifying the type of function and the value to calculate, then generates and executes the corresponding Python function. The example highlights the flexibility of programming with natural language and structured output.
# @title Simple Programming with Prompts calculates the {function} of a number using natural language
# Define the input for the calculation
factorial_input = 20 # @param {type:"integer"}
# Define the type of function to generate
function = "factorial" # @param ["factorial", "root", "power", "logarithm", "sin", "cos", "tan", "exp"] {type:"string"}
# Define a prompt with the function and factorial_input values inserted
prompt = f"Generate a Python function that calculates the {function} of {factorial_input}. The function should accept the number as a parameter. Do not include example usage code. Provide the function code and a brief description." # @param {type:"string"}
# Extract the response text (structured output)
structured_output = response.choices[0].message.content.strip()
# Print the structured output (function code and description)
print("Generated Code and Description:\n", structured_output)
# Extract the Python code block from the structured output
start = structured_output.find("```python") + len("```python")
end = structured_output.find("```", start)
python_code = structured_output[start:end].strip()
# Print the extracted Python code for debugging purposes
print("Extracted Python Code:\n", python_code)
try:
    # Execute the extracted Python code
    exec(python_code)
    # Dynamically identify the function name
    import re
    function_name_match = re.search(r"def\s+(\w+)\s*\(", python_code)
    if function_name_match:
        function_name = function_name_match.group(1)
        print(f"Function '{function_name}' found and executed.")
        # Call the dynamically identified function with factorial_input
        result = eval(f"{function_name}({factorial_input})")
        print(f"{function} of {factorial_input}: {result}")
    else:
        print("No function name could be identified in the generated code.")
except SyntaxError as e:
    print(f"Syntax Error in generated code: {e}")
except Exception as e:
    print(f"An error occurred during execution: {e}")
Generated Code and Description:
Here's a Python function that calculates the factorial of a given number. The function uses an iterative approach to compute the factorial:
```python
def factorial(n):
    if n < 0:
        raise ValueError("Factorial is not defined for negative numbers.")
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result
```
### Description:
- **Function Name**: `factorial`
- **Parameter**: `n` (an integer for which the factorial is to be calculated)
- **Returns**: The factorial of the given number `n`.
- **Logic**:
- The function first checks if the input number `n` is negative
Extracted Python Code:
def factorial(n):
    if n < 0:
        raise ValueError("Factorial is not defined for negative numbers.")
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result
Function 'factorial' found and executed.
factorial of 20: 2432902008176640000
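The fence-slicing heuristic above assumes exactly one code block and breaks when the fence is missing or repeated. A regex-based extractor is more robust; a sketch (the `extract_code_blocks` helper is illustrative, not part of the original notebook):

```python
import re

FENCE = "`" * 3  # the three-backtick Markdown fence, built to avoid literal nesting

def extract_code_blocks(text):
    """Return the contents of all fenced blocks, with or without a 'python' tag."""
    pattern = FENCE + r"(?:python)?\s*\n(.*?)" + FENCE
    return [m.group(1).strip() for m in re.finditer(pattern, text, re.DOTALL)]

sample = ("Here's the function:\n" + FENCE + "python\n"
          "def square(x):\n    return x * x\n" + FENCE + "\nDone.")
blocks = extract_code_blocks(sample)
print(blocks[0])
```

Iterating over every block (rather than slicing from the first match) also lets the caller decide what to do when the model returns several snippets.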
Generate a JSON Response using OpenAI and JSON Mode
This code snippet demonstrates how to use OpenAI's ChatCompletion API to generate a JSON object with specified fields. By defining a prompt that requests the model to output a JSON object with realistic values for fields like 'name' and 'age', developers can leverage AI to automate data generation tasks. The API call utilizes the latest OpenAI models, ensuring high-quality and contextually relevant outputs. The response is processed to remove any extraneous code block markers, providing a clean JSON output. This approach is particularly useful for applications requiring structured data generation, enhancing efficiency and reducing manual coding efforts.
# @title Generate a JSON Response using OpenAI and JSON Mode
# Define a prompt requesting the LLM to output a JSON object
prompt = "Generate a JSON object with fields 'name' and 'age'. The values should be realistic examples." # @param {type:"string"}
# Use the ChatCompletion API for the latest OpenAI models
response = client.chat.completions.create(
    model="gpt-4o-2024-08-06",  # or any other suitable model
    messages=[{"role": "user", "content": prompt}],
    # For strict JSON mode, response_format={"type": "json_object"} can also be passed
    max_tokens=150,
    temperature=0.2,
    n=1
)
# Extract the response content
response_content = response.choices[0].message.content.strip()
# Print the raw response for debugging purposes
print("Raw Response:\n", response_content)
# Clean up the response by removing code block markers
if response_content.startswith("```json"):
    response_content = response_content[len("```json"):].strip()
if response_content.endswith("```"):
    response_content = response_content[:-len("```")].strip()
Raw Response:
```json
{
"name": "Emily Johnson",
"age": 29
}
```
Step-by-Step Math Problem Solving with Structured Output
This example showcases the use of OpenAI to guide users through solving a math problem step by step. The model acts as a helpful tutor, providing explanations and solutions in a structured format. The output is parsed into clear steps and a final answer, making it easy to follow along and understand the problem-solving process.
# Import the model base class and List for type annotations
from typing import List
from pydantic import BaseModel

# Define the Pydantic models for structured output
class Step(BaseModel):
    explanation: str
    output: str

class MathReasoning(BaseModel):
    steps: List[Step]
    final_answer: str
# Define the parameters for the API call
model = "gpt-4o-2024-08-06" # @param ["gpt-4o-2024-08-06", "gpt-4-0613", "gpt-4-32k-0613"] {type:"string"}
system_message = "You are a helpful math tutor. Guide the user through the solution step by step." # @param {type:"string"}
user_message = "How can I solve 8x + 7 = -23?" # @param {type:"string"}
# Make the API call using the .parse() method and structured response
completion = client.beta.chat.completions.parse(
    model=model,
    messages=[
        {"role": "system", "content": system_message},
        {"role": "user", "content": user_message}
    ],
    response_format=MathReasoning,
)
# Extract the parsed message
math_reasoning = completion.choices[0].message
# If the model refuses to respond, you will get a refusal message
if math_reasoning.refusal:
print(math_reasoning.refusal)
else:
# Print the parsed output
for step in math_reasoning.parsed.steps:
print(f"{step.explanation}: {step.output}")
print(f"Final Answer: {math_reasoning.parsed.final_answer}")
To solve for x, we need to isolate it on one side of the equation.: Start with the equation: 8x + 7 = -23.
Subtract 7 from both sides to move the constant term to the right side of the equation.: 8x + 7 - 7 = -23 - 7
Simplify the equation by performing the subtraction.: 8x = -30
Divide both sides by 8 to solve for x.: x = -30 / 8
Simplify the fraction by dividing both the numerator and the denominator by 2.: x = -15 / 4
Simplify -15 / 4 to decimal form, if preferred.: x = -3.75
Final Answer: x = -15/4 or x = -3.75
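The tutor's final answer can be checked mechanically: substituting x = -15/4 into the left-hand side of 8x + 7 = -23 should recover -23. A quick check with exact fractions:

```python
from fractions import Fraction

x = Fraction(-15, 4)
lhs = 8 * x + 7   # substitute back into the left-hand side
print(lhs)        # -23
print(float(x))   # -3.75
assert lhs == -23
```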
Generate a Recipe with Ingredients and Steps
This code example demonstrates how to use OpenAI's capabilities to generate a detailed recipe, including ingredients and step-by-step instructions. By interacting with the model through natural language prompts, users can receive a structured output that guides them through creating a dish, such as a chocolate cake. The output is neatly organized into ingredients, steps, and the final dish.
# @title Generate a Recipe with Ingredients and Steps using Function Calling and Structured Output
# Define the Pydantic models for structured output
class Step(BaseModel):
    step: str
    description: str

class RecipeCreation(BaseModel):
    ingredients: List[str]
    steps: List[Step]
    final_dish: str
# Define parameters for the API call
model = "gpt-4o-2024-08-06" # @param ["gpt-4o-2024-08-06", "gpt-4-0613", "gpt-4-32k-0613"] {type:"string"}
system_message = "You are a helpful chef. Guide the user through creating a dish step by step." # @param {type:"string"}
user_message = "Can you help me make a chocolate cake?" # @param {type:"string"}
# Make the API call using the .parse() method and structured response
completion = client.beta.chat.completions.parse(
    model=model,
    messages=[
        {"role": "system", "content": system_message},
        {"role": "user", "content": user_message}
    ],
    response_format=RecipeCreation,
)
# Extract the parsed message
recipe_creation = completion.choices[0].message
# If the model refuses to respond, you will get a refusal message
if recipe_creation.refusal:
print(recipe_creation.refusal)
else:
# Print the parsed output
print("Ingredients:")
for ingredient in recipe_creation.parsed.ingredients:
print(f"- {ingredient}")
print("\nSteps:")
for step in recipe_creation.parsed.steps:
print(f"Step {step.step}: {step.description}")
print(f"\nFinal Dish: {recipe_creation.parsed.final_dish}")
Ingredients:
- 1 and 3/4 cups all-purpose flour
- 3/4 cup unsweetened cocoa powder
- 2 cups granulated sugar
- 1 and 1/2 teaspoons baking powder
- 1 and 1/2 teaspoons baking soda
- 1 teaspoon salt
- 2 large eggs
- 1 cup whole milk
- 1/2 cup vegetable oil
- 2 teaspoons vanilla extract
- 1 cup boiling water
Steps:
Step Preheat Oven: Preheat your oven to 350°F (175°C). Grease two 9-inch round cake pans and lightly dust them with flour to prevent sticking.
Step Mix Dry Ingredients: In a large mixing bowl, combine the flour, cocoa powder, sugar, baking powder, baking soda, and salt. Stir together until well blended.
Step Add Wet Ingredients: Add the eggs, milk, vegetable oil, and vanilla extract to the dry ingredients. Beat the mixture on medium speed for about 2 minutes until smooth and well combined.
Step Incorporate Boiling Water: Carefully stir in the boiling water. The batter will be quite thin, which is normal.
Step Pour Batter into Pans: Evenly divide the batter between the prepared cake pans.
Step Bake the Cakes: Bake in the preheated oven for 30-35 minutes or until a toothpick inserted into the center of the cakes comes out clean.
Step Let Cakes Cool: Remove the cakes from the oven and allow them to cool in the pans for about 10 minutes before transferring them to a wire rack to cool completely.
Step Frosting and Serving: Once the cakes are completely cooled, you can frost them with your favorite chocolate frosting. Serve and enjoy your homemade chocolate cake!
Final Dish: Chocolate Cake
Advanced Travel Itinerary with Conversational Guidance
This code example showcases how to generate a personalized travel itinerary using OpenAI's capabilities. By interacting with the model, users can create detailed day-by-day travel plans for their destination of choice, incorporating their interests and preferences. The structured output includes daily activities and a summary, making it an efficient tool for planning trips tailored to individual preferences.
# @title Advanced Travel Itinerary using Conversational Guidance and Structured Output
# Define the Pydantic models for structured output
class Day(BaseModel):
    day: str
    activities: List[str]

class TravelItinerary(BaseModel):
    days: List[Day]
    summary: str
# Define parameters for the API call
model = "gpt-4o-2024-08-06" # @param ["gpt-4o-2024-08-06", "gpt-4-0613", "gpt-4-32k-0613"] {type:"string"}
destination = "Paris" # @param {type:"string"}
duration = 3 # @param {type:"integer"}
interest = "historical landmarks, local cuisine, and cultural experiences" # @param {type:"string"}
# Construct an advanced natural language prompt
system_message = f"""You are a seasoned travel guide with years of experience curating detailed itineraries for travelers.
Your goal is to help the user plan a well-rounded {duration}-day trip to {destination}.
The user is particularly interested in {interest}.
Please provide a day-by-day itinerary, ensuring that each day balances sightseeing, relaxation, and local experiences.
The itinerary should also include a brief summary at the end that encapsulates the overall experience."""
user_message = f"Can you help me plan a {duration}-day trip to {destination}?"
# Make the API call using the .parse() method and structured response
completion = client.beta.chat.completions.parse(
    model=model,
    messages=[
        {"role": "system", "content": system_message},
        {"role": "user", "content": user_message}
    ],
    response_format=TravelItinerary,
)
# Extract the parsed message
travel_itinerary = completion.choices[0].message
# If the model refuses to respond, you will get a refusal message
if travel_itinerary.refusal:
print(travel_itinerary.refusal)
else:
# Print the parsed output in an advanced format
print(f"Travel Itinerary for a {duration}-day trip to {destination}:\n")
for day in travel_itinerary.parsed.days:
print(f"{day.day}:")
for activity in day.activities:
print(f"- {activity}")
print(f"\nSummary: {travel_itinerary.parsed.summary}")
Travel Itinerary for a 3-day trip to Paris:
Day 1: Historical Landmarks and Traditional French Cuisine:
- Morning: Visit the iconic Eiffel Tower and take the elevator to the top for stunning views of Paris.
- Lunch: Enjoy a classic French lunch at a nearby restaurant such as La Fontaine de Mars.
- Afternoon: Walk along the Seine to the historic Notre-Dame Cathedral. After your visit, explore Île de la Cité for a taste of medieval Paris.
- Evening: Dine at a traditional French bistro, like Le Procope, one of the oldest in the city.
Day 2: Art, Culture, and Montmartre Charm:
- Morning: Head to the Louvre Museum to explore its world-famous art collections. Focus on key pieces like the Mona Lisa and the Venus de Milo. Arrive early to beat the crowds.
- Lunch: Have lunch at Café Marly or any nearby restaurant offering a view of the Louvre's glass pyramid.
- Afternoon: Visit the Musée d’Orsay, located in a former railway station, to admire its vast collection of Impressionist and Post-Impressionist masterpieces.
- Evening: Take an evening stroll in Montmartre, visit the Sacré-Cœur Basilica for panoramic views of the city, and then enjoy dinner at a local Montmartre restaurant such as La Bonne Franquette.
Day 3: Local Life and Hidden Gems:
- Morning: Begin with a visit to the vibrant Le Marais district, exploring its charming streets and chic boutiques.
- Lunch: Taste the world-famous falafel at L'As du Fallafel or dine in one of the cozy local bistros.
- Afternoon: Explore the artists' enclave of Saint-Germain-des-Prés, where you can stop by famous cafés such as Café de Flore and Les Deux Magots. Visit the nearby Luxembourg Gardens for a relaxing walk.
- Evening: Dine at a Michelin-starred restaurant such as Le Cinq for a culinary experience.
Summary: Over three days in Paris, you'll experience a blend of the city's rich history, artistic masterpieces, and vibrant local life. From iconic landmarks like the Eiffel Tower and Notre-Dame to the charming neighborhoods of Montmartre and Le Marais, your itinerary is filled with opportunities to dive into Parisian culture and savor its renowned cuisine. Each day balances sightseeing with time to unwind and explore, ensuring you capture the essence of both historic and contemporary Paris.
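The `.parse()` flow above branches on `message.refusal` before touching `message.parsed`. Here is a minimal sketch of that branching, using a `SimpleNamespace` stub in place of the SDK's message object (no API call is made; the stub data is invented for illustration):

```python
from types import SimpleNamespace

def render_itinerary(message):
    """Mimic the refusal-vs-parsed branching used above (stub, no API call)."""
    if message.refusal:  # the model declined to answer
        return message.refusal
    lines = []
    for day in message.parsed.days:
        lines.append(f"{day.day}:")
        for activity in day.activities:
            lines.append(f"- {activity}")
    lines.append(f"Summary: {message.parsed.summary}")
    return "\n".join(lines)

# Stub standing in for completion.choices[0].message
stub = SimpleNamespace(
    refusal=None,
    parsed=SimpleNamespace(
        days=[SimpleNamespace(day="Day 1", activities=["Visit the Eiffel Tower"])],
        summary="A short trip.",
    ),
)
print(render_itinerary(stub))
```

Checking `refusal` first matters: when the model refuses, `parsed` is not populated, so any code that reads `parsed.days` unconditionally would fail.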
Structured Data Extraction from Research Papers
This example demonstrates how to transform unstructured text from research papers into a structured format using OpenAI's capabilities. By dynamically generating a schema with Pydantic, the system processes research paper data and extracts information such as the title, authors, abstract, and keywords. The structured output ensures that critical information is organized and easily accessible, making it an efficient tool for handling academic or technical documents.
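To make the dynamic schema generation concrete, here is a hand-written sketch of the `response_format` payload that the cell below builds from the `ResearchPaperExtraction` model. The field names mirror that model; the structure assumes strict mode's documented requirements (every property listed in `required`, `additionalProperties` set to `False`):

```python
# Hand-written equivalent of the schema the Pydantic model generates below
schema = {
    "type": "object",
    "properties": {
        "title": {"type": "string"},
        "authors": {"type": "array", "items": {"type": "string"}},
        "abstract": {"type": "string"},
        "keywords": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["title", "authors", "abstract", "keywords"],
    "additionalProperties": False,  # strict mode requires this to be False
}
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "ResearchPaperExtraction",
        "schema": schema,
        "strict": True,
    },
}
print(response_format["json_schema"]["name"])
```

Generating this dict from the Pydantic model instead of writing it by hand, as the cell below does, keeps the schema and the parsing model from drifting apart.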
In [ ]:
from pydantic import BaseModel
import openai
import json
# @title Define the Pydantic model for structured output
class ResearchPaperExtraction(BaseModel):
title: str
authors: list[str]
abstract: str
keywords: list[str]
# Define parameters using #param annotations for the API call
model = "gpt-4o-2024-08-06" # @param ["gpt-4o-2024-08-06", "gpt-4-0613", "gpt-4-32k-0613"] {type:"string"}
unstructured_text = """This research paper focuses on the advancements in AI and its applications. The main contributors are John Doe, Jane Smith, and Alan Turing. It explores various aspects of machine learning, deep learning, and their implications in industries such as healthcare and finance. Keywords include AI, machine learning, deep learning, healthcare, and finance.""" # @param {type:"string"}
system_message = "You are an expert at structured data extraction. You will be given unstructured text from a research paper and should convert it into the given structure." # @param {type:"string"}
response_format_name = "ResearchPaperExtraction" # @param {type:"string"}
# Generate the schema from the Pydantic model
schema = ResearchPaperExtraction.schema()  # Pydantic v1 API; in Pydantic v2, use .model_json_schema()
# Explicitly set `additionalProperties` to False
schema['additionalProperties'] = False
# Create the response format using the dynamically generated schema
response_format = {
"type": "json_schema",
"json_schema": {
"name": response_format_name,
"schema": schema,
"strict": True
}
}
# Make the API call using the OpenAI structure
response = openai.chat.completions.create(
model=model,
messages=[
{"role": "system", "content": system_message},
{"role": "user", "content": unstructured_text}
],
response_format=response_format # Use the dynamically generated schema with `additionalProperties: False`
)
# Extract the response content using dot notation
structured_output = response.choices[0].message.content
# Parse the structured output into the Pydantic model
research_paper = ResearchPaperExtraction.parse_raw(structured_output)  # Pydantic v1; in v2, use model_validate_json()
# Print the structured output
print(research_paper)
title='Advancements in AI and Its Applications' authors=['John Doe', 'Jane Smith', 'Alan Turing'] abstract='The paper explores various aspects of machine learning, deep learning, and their implications in industries such as healthcare and finance.' keywords=['AI', 'machine learning', 'deep learning', 'healthcare', 'finance']
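A strict schema rejects both missing and unexpected keys. The check below is a stdlib-only stand-in for that behavior (illustrative, not the SDK's or the API's actual validation):

```python
import json

REQUIRED = {"title", "authors", "abstract", "keywords"}

def validate_extraction(raw_json: str) -> dict:
    """Reject missing or extra keys, mirroring what `additionalProperties: False`
    plus `"strict": True` enforce server-side (illustrative stand-in)."""
    data = json.loads(raw_json)
    missing = REQUIRED - data.keys()
    extra = data.keys() - REQUIRED
    if missing or extra:
        raise ValueError(f"missing={missing}, extra={extra}")
    return data

ok = validate_extraction(
    '{"title": "T", "authors": ["A"], "abstract": "S", "keywords": ["k"]}'
)
print(ok["title"])
```

A local check like this is still useful as a safety net when a response is logged or cached and later replayed outside the strict-mode pipeline.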
Advanced Research Paper Extraction with Multi-Hop, Multi-Agent, and Dynamic Schema Generation
This example extends structured extraction to batch processing with concurrent, multi-step ("multi-hop, multi-agent") requests. It reuses dynamic schema generation from the Pydantic model, ensuring every output adheres to the expected structure. Multiple research papers are processed concurrently, each parsed into a structured record containing the title, authors, abstract, and keywords. The result is a scalable, automated pipeline for extracting key information from academic texts using OpenAI's structured-output capabilities.
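The concurrent fan-out in the cell below is a `ThreadPoolExecutor` pattern. This minimal sketch substitutes a stub worker for the OpenAI call and shows that iterating the futures in submission order preserves result order:

```python
from concurrent.futures import ThreadPoolExecutor

def process_paper_stub(text: str) -> str:
    """Stand-in for the OpenAI call: pretend to 'extract' a title."""
    return text.split(".")[0]

papers = ["Paper on AI. More text.", "Paper on quantum. More text."]

with ThreadPoolExecutor(max_workers=2) as executor:
    futures = [executor.submit(process_paper_stub, p) for p in papers]
    results = [f.result() for f in futures]  # iteration order == submission order

print(results)
```

If you prefer results as soon as each finishes (order not guaranteed), `concurrent.futures.as_completed(futures)` is the alternative iteration strategy.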
In [ ]:
from pydantic import BaseModel
import openai
from concurrent.futures import ThreadPoolExecutor
import json
# @title Advanced Research Paper Extraction with Multi-Hop, Multi-Agent, and Dynamic Schema Generation
# Define the Pydantic model for structured output
class ResearchPaperExtraction(BaseModel):
title: str
authors: list[str]
abstract: str
keywords: list[str]
# Define parameters using #param annotations for the API call
model = "gpt-4o-2024-08-06" # @param ["gpt-4o-2024-08-06", "gpt-4-0613", "gpt-4-32k-0613"] {type:"string"}
unstructured_text = "This research paper focuses on the advancements in AI and its applications. The main contributors are John Doe, Jane Smith, and Alan Turing. It explores various aspects of machine learning, deep learning, and their implications in industries such as healthcare and finance. Keywords include AI, machine learning, deep learning, healthcare, and finance." # @param {type:"string"}
system_message = "You are an expert at structured data extraction. You will be given unstructured text from a research paper and should convert it into the given structure." # @param {type:"string"}
response_format_name = "ResearchPaperExtraction" # @param {type:"string"}
max_concurrent_requests = 5 # @param {type:"integer"}
# Dynamic schema generation based on the Pydantic model
schema = ResearchPaperExtraction.schema()
schema['additionalProperties'] = False
# Create the response format using the dynamically generated schema
response_format = {
"type": "json_schema",
"json_schema": {
"name": response_format_name,
"schema": schema,
"strict": True
}
}
# Define a list of research papers for batch processing
research_papers = [
unstructured_text,
"This paper explores quantum computing and its impact on cryptography. Authors: Alice, Bob, Charlie.",
"The study focuses on climate change and its global effects. Contributors: Dr. Green, Dr. Blue."
] # You can add more papers to this list for batch processing
# Function to process a single paper
def process_paper(paper_text):
return openai.chat.completions.create(
model=model,
messages=[
{"role": "system", "content": system_message},
{"role": "user", "content": paper_text}
],
response_format=response_format # Use the dynamically generated schema
)
# Batch processing multiple papers using multi-agent system (concurrently)
def batch_process_papers(papers, max_concurrent_requests):
results = []
with ThreadPoolExecutor(max_workers=max_concurrent_requests) as executor:
futures = [executor.submit(process_paper, paper) for paper in papers]
for future in futures:
results.append(future.result())
return results
# Perform batch processing on the list of research papers
responses = batch_process_papers(research_papers, max_concurrent_requests)
# Process and display results
for response in responses:
structured_output = response.choices[0].message.content
research_paper = ResearchPaperExtraction.parse_raw(structured_output)
print(json.dumps(research_paper.dict(), indent=4)) # Print structured data in a readable format
{
"title": "Advancements in AI and its Applications",
"authors": [
"John Doe",
"Jane Smith",
"Alan Turing"
],
"abstract": "This research paper explores various aspects of machine learning, deep learning, and their implications in industries such as healthcare and finance.",
"keywords": [
"AI",
"machine learning",
"deep learning",
"healthcare",
"finance"
]
}
{
"title": "The Impact of Quantum Computing on Cryptography",
"authors": [
"Alice",
"Bob",
"Charlie"
],
"abstract": "This paper explores the transformative effects of quantum computing technology on the field of cryptography. It examines how quantum computing challenges current cryptographic protocols and discusses potential strategies to develop quantum-resistant encryption methods.",
"keywords": [
"Quantum Computing",
"Cryptography",
"Quantum-Resistant Encryption",
"Computational Security"
]
}
{
"title": "The Global Effects of Climate Change",
"authors": [
"Dr. Green",
"Dr. Blue"
],
"abstract": "The study investigates the impact of climate change on a global scale, examining environmental, economic, and social effects. It addresses the urgency of implementing solutions to mitigate these impacts and adapt to new challenges.",
"keywords": [
"climate change",
"global effects",
"environment",
"economics",
"societal impact",
"mitigation",
"adaptation"
]
}
Advanced Financial Analysis with Algorithm Code Generation
This code demonstrates a powerful financial analysis system that uses concurrent requests to process multiple stock symbols. It integrates risk analysis, quantitative strategies, and OpenAI-powered algorithm generation to create personalized trading algorithms in different programming languages. The system performs batch processing of stock data and generates comprehensive financial reports, including trading recommendations and custom algorithms.
In [ ]:
from pydantic import BaseModel
import openai
from concurrent.futures import ThreadPoolExecutor
import json
import random
# @title Advanced Financial Analysis with Algorithm Code Generation and Concurrent Requests
# Define the Pydantic model for structured output
class TradingAlgorithm(BaseModel):
code: str
description: str
class FinancialReport(BaseModel):
stock_name: str
average_price: float
risk_level: str
recommendations: list[str]
quant_analysis: str
strategy: str
trading_algorithm: TradingAlgorithm
# Define parameters using #param annotations for the API call
model = "gpt-4o-2024-08-06" # @param ["gpt-4o-2024-08-06", "gpt-4-0613", "gpt-4-32k-0613"] {type:"string"}
stock_symbol = "AAPL" # @param {type:"string"}
time_period = "1Y" # @param ["1M", "3M", "6M", "1Y"] {type:"string"}
max_concurrent_requests = 3 # @param {type:"integer"}
advanced_strategy = "Momentum-based trading with moving averages and RSI" # @param ["Momentum-based trading with moving averages and RSI", "Mean reversion", "Statistical arbitrage", "High-frequency trading", "Pairs trading", "Value investing", "Technical analysis with Bollinger Bands", "Event-driven trading", "Trend following", "Options pricing models", "Market making", "Swing trading", "Algorithmic trading with neural networks", "Sentiment analysis-based trading", "Arbitrage in futures and options markets", "Factor investing", "Dividend growth investing"] {type:"string"}
quantitative_factors = ["Moving Averages", "RSI", "Volatility Index (VIX)", "Beta Coefficient", "Bollinger Bands", "MACD", "Fibonacci Retracement", "Volume", "Sharpe Ratio"] # @param {type:"raw"}
algorithm_language = "JavaScript" # @param ["Python", "Java", "C++", "JavaScript", "R", "Matlab", "Scala", "Go", "Rust", "Julia"] {type:"string"}
# Simulated API call to get stock data
def get_stock_data(stock_symbol, time_period):
print(f"Retrieving data for {stock_symbol} over {time_period}")
return {"average_price": random.uniform(100, 200)} # Simulated data
# Simulated API call for risk analysis
def perform_risk_analysis(stock_symbol, average_price):
print(f"Performing risk analysis for {stock_symbol} with average price {average_price}")
risk_level = random.choice(["Low", "Medium", "High"]) # Simulated risk level
return {"risk_level": risk_level}
# Simulated API call for quantitative analysis
def perform_quant_analysis(stock_symbol, advanced_strategy, quantitative_factors):
print(f"Performing quant analysis for {stock_symbol} using strategy: {advanced_strategy}")
quant_analysis = f"Applied {advanced_strategy} considering {', '.join(quantitative_factors)}."
return quant_analysis
# OpenAI API call to generate trading algorithm Python code
def generate_trading_algorithm(stock_symbol, advanced_strategy, algorithm_language):
# Construct the prompt for OpenAI
system_message = f"You are an expert in algorithmic trading and {algorithm_language} development."
user_message = f"Generate a {algorithm_language} algorithm for {advanced_strategy} on {stock_symbol}. The algorithm should consider technical indicators and risk management strategies."
response = openai.chat.completions.create(
model=model,
messages=[
{"role": "system", "content": system_message},
{"role": "user", "content": user_message}
]
)
structured_output = response.choices[0].message.content.strip()
# Simulated structured output for the algorithm code generation
return TradingAlgorithm(
code=structured_output,
description=f"{algorithm_language} algorithm for {advanced_strategy} applied to {stock_symbol}."
)
# Simulated API call for generating recommendations
def generate_recommendations(stock_symbol, risk_level, advanced_strategy):
print(f"Generating recommendations for {stock_symbol} with risk level {risk_level} and strategy {advanced_strategy}")
recommendations = {
"Low": ["Buy more", "Hold", "Increase exposure to long-term call options"],
"Medium": ["Hold", "Review quarterly", "Use protective puts to hedge"],
"High": ["Sell", "Reduce exposure", "Consider short positions or covered calls"]
}
return recommendations[risk_level]
# Define a list of stocks for batch processing
stock_symbols = [stock_symbol] # Process only the user-defined stock symbol
# Function to process a single stock symbol
def process_stock(stock_symbol):
stock_data = get_stock_data(stock_symbol, time_period)
risk_analysis = perform_risk_analysis(stock_symbol, stock_data["average_price"])
quant_analysis = perform_quant_analysis(stock_symbol, advanced_strategy, quantitative_factors)
recommendations = generate_recommendations(stock_symbol, risk_analysis["risk_level"], advanced_strategy)
# Pass the `algorithm_language` parameter to `generate_trading_algorithm`
trading_algorithm = generate_trading_algorithm(stock_symbol, advanced_strategy, algorithm_language)
# Simulate generating a financial report
financial_report = FinancialReport(
stock_name=stock_symbol,
average_price=stock_data["average_price"],
risk_level=risk_analysis["risk_level"],
recommendations=recommendations,
quant_analysis=quant_analysis,
strategy=advanced_strategy,
trading_algorithm=trading_algorithm
)
return financial_report
# Batch processing multiple stock symbols using concurrent API calls
def batch_process_stocks(stock_symbols, max_concurrent_requests):
results = []
with ThreadPoolExecutor(max_workers=max_concurrent_requests) as executor:
futures = [executor.submit(process_stock, stock) for stock in stock_symbols]
for future in futures:
results.append(future.result())
return results
# Perform batch processing on the list of stock symbols
financial_reports = batch_process_stocks(stock_symbols, max_concurrent_requests)
# Process and display results
for report in financial_reports:
print(json.dumps(report.dict(), indent=4)) # Print structured data in a readable format
print(f"\nGenerated {algorithm_language} Code:\n{report.trading_algorithm.code}\n") # Print the generated algorithm in the chosen language
Retrieving data for AAPL over 1Y
Performing risk analysis for AAPL with average price 117.49720081250126
Performing quant analysis for AAPL using strategy: Momentum-based trading with moving averages and RSI
Generating recommendations for AAPL with risk level Low and strategy Momentum-based trading with moving averages and RSI
{
"stock_name": "AAPL",
"average_price": 117.49720081250126,
"risk_level": "Low",
"recommendations": [
"Buy more",
"Hold",
"Increase exposure to long-term call options"
],
"quant_analysis": "Applied Momentum-based trading with moving averages and RSI considering Moving Averages, RSI, Volatility Index (VIX), Beta Coefficient, Bollinger Bands, MACD, Fibonacci Retracement, Volume, Sharpe Ratio.",
"strategy": "Momentum-based trading with moving averages and RSI",
"trading_algorithm": {
"code": "Creating a JavaScript algorithm for momentum-based trading involves calculating technical indicators such as moving averages and the Relative Strength Index (RSI), as well as implementing risk management strategies. Below is a simplified version of such an algorithm, which assumes that data is fetched from a financial API. This demonstration does not include API specific details; you'd need to integrate it with your data source.\n\n```javascript\n// Import a technical indicators library\nconst tulipIndicators = require('tulip-indicators');\n\n// Define the configuration for the trading algorithm\nconst CONFIG = {\n shortTermMA: 10, // e.g., 10-day moving average\n longTermMA: 50, // e.g., 50-day moving average\n rsiPeriod: 14, // Period for RSI calculation\n rsiOverbought: 70, // Threshold for overbought RSI\n rsiOversold: 30, // Threshold for oversold RSI\n stopLossPercentage: 0.03, // 3% stop-loss\n takeProfitPercentage: 0.05, // 5% take-profit\n};\n\n// Define a function to calculate moving averages and RSI\nfunction calculateIndicators(data) {\n const closePrices = data.map(candle => candle.close);\n \n const shortTermMA = tulipIndicators.ma({\n close: closePrices,\n period: CONFIG.shortTermMA\n }).result[0];\n\n const longTermMA = tulipIndicators.ma({\n close: closePrices,\n period: CONFIG.longTermMA\n }).result[0];\n\n const rsi = tulipIndicators.rsi({\n close: closePrices,\n period: CONFIG.rsiPeriod\n }).result[0];\n\n return { shortTermMA, longTermMA, rsi };\n}\n\n// Define a function to make trading decisions\nfunction tradeDecision(indicators, currentPrice, position) {\n const { shortTermMA, longTermMA, rsi } = indicators;\n let action = 'HOLD';\n \n if (shortTermMA > longTermMA && rsi > CONFIG.rsiOversold) {\n action = 'BUY';\n } else if (shortTermMA < longTermMA && rsi < CONFIG.rsiOverbought) {\n action = 'SELL';\n }\n \n // Implement risk management\n if (position) {\n const priceChange = (currentPrice - position.entryPrice) / 
position.entryPrice;\n if (priceChange <= -CONFIG.stopLossPercentage) {\n action = 'SELL'; // Trigger stop-loss\n } else if (priceChange >= CONFIG.takeProfitPercentage) {\n action = 'SELL'; // Trigger take-profit\n }\n }\n\n return action;\n}\n\n// Example usage\nasync function executeTrading() {\n // Here you would typically fetch your data from an API\n const historicalData = await fetchMarketData('AAPL');\n \n for (let i = CONFIG.longTermMA; i < historicalData.length; i++) {\n const sliceOfData = historicalData.slice(i - CONFIG.longTermMA, i);\n const indicators = calculateIndicators(sliceOfData);\n const currentPrice = historicalData[i].close;\n const action = tradeDecision(indicators, currentPrice, currentPosition);\n\n if (action === 'BUY') {\n console.log(`Buy at ${currentPrice}`);\n // Assume a buy\n currentPosition = { entryPrice: currentPrice };\n } else if (action === 'SELL') {\n console.log(`Sell at ${currentPrice}`);\n // Assume a sell and close position\n currentPosition = null;\n }\n }\n}\n\n// Placeholder for making API requests\nasync function fetchMarketData(ticker) {\n // This function should fetch and return historical market data\n // In practice, provide implementation to connect to an actual data provider\n return [];\n}\n\n// Keep track of the current position\nlet currentPosition = null;\n\n// Execute the trading algorithm\nexecuteTrading();\n```\n\n**Notes:**\n1. **Data Source**: You'll need to replace the `fetchMarketData` function with actual code to get market data from a reliable source like Alpha Vantage, Yahoo Finance, or directly from a broker's API.\n2. **Technical Indicators Library**: I've used Tulip Indicators. You could use another library like `technicalindicators` if preferred.\n3. **Risk Management**: This example uses simple percentage-based stop-loss and take-profit. You can refine this using more advanced mechanisms based on volatility, ATR, or other metrics.\n4. 
**Execution**: In real-world scenarios, the buy/sell actions would interface with a trading API to place actual market orders.\n5. **Backtesting and Paper Trading**: Before deploying this strategy with real capital, thoroughly backtest it and run in a simulated environment to evaluate performance.",
"description": "JavaScript algorithm for Momentum-based trading with moving averages and RSI applied to AAPL."
}
}
Generated JavaScript Code:
Creating a JavaScript algorithm for momentum-based trading involves calculating technical indicators such as moving averages and the Relative Strength Index (RSI), as well as implementing risk management strategies. Below is a simplified version of such an algorithm, which assumes that data is fetched from a financial API. This demonstration does not include API specific details; you'd need to integrate it with your data source.
```javascript
// Import a technical indicators library
const tulipIndicators = require('tulip-indicators');
// Define the configuration for the trading algorithm
const CONFIG = {
shortTermMA: 10, // e.g., 10-day moving average
longTermMA: 50, // e.g., 50-day moving average
rsiPeriod: 14, // Period for RSI calculation
rsiOverbought: 70, // Threshold for overbought RSI
rsiOversold: 30, // Threshold for oversold RSI
stopLossPercentage: 0.03, // 3% stop-loss
takeProfitPercentage: 0.05, // 5% take-profit
};
// Define a function to calculate moving averages and RSI
function calculateIndicators(data) {
const closePrices = data.map(candle => candle.close);
const shortTermMA = tulipIndicators.ma({
close: closePrices,
period: CONFIG.shortTermMA
}).result[0];
const longTermMA = tulipIndicators.ma({
close: closePrices,
period: CONFIG.longTermMA
}).result[0];
const rsi = tulipIndicators.rsi({
close: closePrices,
period: CONFIG.rsiPeriod
}).result[0];
return { shortTermMA, longTermMA, rsi };
}
// Define a function to make trading decisions
function tradeDecision(indicators, currentPrice, position) {
const { shortTermMA, longTermMA, rsi } = indicators;
let action = 'HOLD';
if (shortTermMA > longTermMA && rsi > CONFIG.rsiOversold) {
action = 'BUY';
} else if (shortTermMA < longTermMA && rsi < CONFIG.rsiOverbought) {
action = 'SELL';
}
// Implement risk management
if (position) {
const priceChange = (currentPrice - position.entryPrice) / position.entryPrice;
if (priceChange <= -CONFIG.stopLossPercentage) {
action = 'SELL'; // Trigger stop-loss
} else if (priceChange >= CONFIG.takeProfitPercentage) {
action = 'SELL'; // Trigger take-profit
}
}
return action;
}
// Example usage
async function executeTrading() {
// Here you would typically fetch your data from an API
const historicalData = await fetchMarketData('AAPL');
for (let i = CONFIG.longTermMA; i < historicalData.length; i++) {
const sliceOfData = historicalData.slice(i - CONFIG.longTermMA, i);
const indicators = calculateIndicators(sliceOfData);
const currentPrice = historicalData[i].close;
const action = tradeDecision(indicators, currentPrice, currentPosition);
if (action === 'BUY') {
console.log(`Buy at ${currentPrice}`);
// Assume a buy
currentPosition = { entryPrice: currentPrice };
} else if (action === 'SELL') {
console.log(`Sell at ${currentPrice}`);
// Assume a sell and close position
currentPosition = null;
}
}
}
// Placeholder for making API requests
async function fetchMarketData(ticker) {
// This function should fetch and return historical market data
// In practice, provide implementation to connect to an actual data provider
return [];
}
// Keep track of the current position
let currentPosition = null;
// Execute the trading algorithm
executeTrading();
```
**Notes:**
1. **Data Source**: You'll need to replace the `fetchMarketData` function with actual code to get market data from a reliable source like Alpha Vantage, Yahoo Finance, or directly from a broker's API.
2. **Technical Indicators Library**: I've used Tulip Indicators. You could use another library like `technicalindicators` if preferred.
3. **Risk Management**: This example uses simple percentage-based stop-loss and take-profit. You can refine this using more advanced mechanisms based on volatility, ATR, or other metrics.
4. **Execution**: In real-world scenarios, the buy/sell actions would interface with a trading API to place actual market orders.
5. **Backtesting and Paper Trading**: Before deploying this strategy with real capital, thoroughly backtest it and run in a simulated environment to evaluate performance.
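The decision logic in the generated JavaScript is easy to experiment with in Python. This sketch transliterates `tradeDecision()` with the thresholds copied from its `CONFIG` block (illustrative only, not trading advice):

```python
# Thresholds matching the CONFIG block in the generated JavaScript
RSI_OVERBOUGHT = 70
RSI_OVERSOLD = 30
STOP_LOSS = 0.03     # 3% stop-loss
TAKE_PROFIT = 0.05   # 5% take-profit

def trade_decision(short_ma, long_ma, rsi, current_price, entry_price=None):
    """Python transliteration of the generated tradeDecision() (illustrative)."""
    action = "HOLD"
    if short_ma > long_ma and rsi > RSI_OVERSOLD:
        action = "BUY"
    elif short_ma < long_ma and rsi < RSI_OVERBOUGHT:
        action = "SELL"
    # Risk management: stop-loss / take-profit override the signal
    if entry_price is not None:
        change = (current_price - entry_price) / entry_price
        if change <= -STOP_LOSS or change >= TAKE_PROFIT:
            action = "SELL"
    return action

print(trade_decision(short_ma=105, long_ma=100, rsi=55, current_price=110))
```

Note the same caveat the generated code carries: the risk-management branch fires regardless of the crossover signal, so an open position is always closed once either threshold is breached.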
Advanced Medical Diagnosis with AI-Powered Simulated Tools
This example demonstrates an AI-driven medical diagnosis system that processes patient data, including symptoms and medical history, to generate structured diagnosis reports. The system uses concurrent requests to handle multiple patients simultaneously, providing detailed insights into probable diseases, recommended tests, treatment plans, and emergency levels. The use of LLM-based simulated tools ensures personalized and dynamic diagnostics, making it a powerful approach for healthcare applications.
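The structured report in the cell below is a Pydantic model; the same shape can be sketched with stdlib dataclasses to show the JSON round-trip that `json.dumps(report.dict(), indent=4)` performs (an illustrative stand-in, with invented sample values):

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class DiagnosisReportSketch:
    """Stdlib stand-in for the Pydantic DiagnosisReport used below (illustrative)."""
    disease: str
    probability: float
    recommended_tests: list
    treatment_plan: str
    risk_factors: list
    follow_up: str
    emergency_level: str

report = DiagnosisReportSketch(
    disease="COVID-19",
    probability=0.84,
    recommended_tests=["Chest X-Ray"],
    treatment_plan="Rest and fluids.",
    risk_factors=["Asthma"],
    follow_up="In 1 week",
    emergency_level="Low",
)
payload = json.dumps(asdict(report))                     # serialize, as json.dumps(report.dict()) does below
restored = DiagnosisReportSketch(**json.loads(payload))  # round-trip back into the dataclass
print(restored.disease)
```

Pydantic adds type coercion and validation on top of this; the dataclass version only captures the field layout and serialization.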
In [ ]:
from pydantic import BaseModel
import openai
from concurrent.futures import ThreadPoolExecutor
import json
import random
# @title Advanced Medical Diagnosis with LLM-Based Simulated Tools and Concurrent Requests
# Define the Pydantic model for structured output
class DiagnosisReport(BaseModel):
disease: str
probability: float
recommended_tests: list[str]
treatment_plan: str
risk_factors: list[str]
follow_up: str
emergency_level: str
# Define parameters using #param annotations for the API call
model = "gpt-4o-2024-08-06" # @param ["gpt-4o-2024-08-06", "gpt-4-0613", "gpt-4-32k-0613"] {type:"string"}
symptoms = ["fever", "cough", "shortness of breath"] # @param {type:"raw"}
medical_history = "Patient has a history of asthma." # @param {type:"string"}
age = 45 # @param {type:"integer"}
gender = "Male" # @param ["Male", "Female", "Other"] {type:"string"}
severity_level = "Moderate" # @param ["Mild", "Moderate", "Severe"] {type:"string"}
recent_travel = True # @param {type:"boolean"}
smoker = False # @param {type:"boolean"}
max_concurrent_requests = 3 # @param {type:"integer"}
# Simulated API call to generate medical diagnosis
def perform_diagnosis(symptoms, medical_history, age, gender, severity_level, recent_travel, smoker):
# Construct the prompt for OpenAI
system_message = "You are an advanced AI system specializing in medical diagnosis."
user_message = f"""
Based on the following patient data:
Symptoms: {symptoms}
Medical History: {medical_history}
Age: {age}
Gender: {gender}
Severity Level: {severity_level}
Recent Travel: {recent_travel}
Smoker: {smoker}
Generate a diagnosis report, including the probable disease, recommended tests, treatment plan, risk factors, follow-up recommendations, and emergency level. Provide the output in a structured format.
"""
response = openai.chat.completions.create(
model=model,
messages=[
{"role": "system", "content": system_message},
{"role": "user", "content": user_message}
]
)
structured_output = response.choices[0].message.content.strip()
# Simulated structured output for the diagnosis
return DiagnosisReport(
disease="COVID-19",
probability=random.uniform(0.7, 0.95),
recommended_tests=["Chest X-Ray", "Blood Test"],
treatment_plan="Rest, fluids, and over-the-counter medication.",
risk_factors=["Age", "History of Asthma"],
follow_up="In 1 week",
emergency_level=random.choice(["Low", "Moderate", "High"])
)
# Define a list of patients for batch processing
patients = [ # Example: Processing multiple patients
{"symptoms": symptoms, "medical_history": medical_history, "age": age, "gender": gender, "severity_level": severity_level, "recent_travel": recent_travel, "smoker": smoker}
]
# Function to process a single patient
def process_patient(patient):
return perform_diagnosis(
patient["symptoms"],
patient["medical_history"],
patient["age"],
patient["gender"],
patient["severity_level"],
patient["recent_travel"],
patient["smoker"]
)
# Batch processing multiple patients using concurrent API calls
def batch_process_patients(patients, max_concurrent_requests):
results = []
with ThreadPoolExecutor(max_workers=max_concurrent_requests) as executor:
futures = [executor.submit(process_patient, patient) for patient in patients]
for future in futures:
results.append(future.result())
return results
# Print message to indicate processing has started
print("Processing patients, please wait...")
# Perform batch processing on the list of patients
diagnosis_reports = batch_process_patients(patients, max_concurrent_requests)
# Process and display results
for report in diagnosis_reports:
print(json.dumps(report.dict(), indent=4)) # Print structured data in a readable format
Processing patients, please wait...
{
"disease": "COVID-19",
"probability": 0.7843169886079995,
"recommended_tests": [
"Chest X-Ray",
"Blood Test"
],
"treatment_plan": "Rest, fluids, and over-the-counter medication.",
"risk_factors": [
"Age",
"History of Asthma"
],
"follow_up": "In 1 week",
"emergency_level": "Low"
}
Personalized Education System for Prompt-Based Programming
This example demonstrates an AI-driven personalized education system for teaching prompt-based programming using Jupyter Notebooks. The system dynamically generates lesson content, including explanations, code examples, and quizzes, based on the student's preferences and learning style. It also provides personalized feedback and recommendations, ensuring a tailored learning experience. The notebook uses concurrent processing to handle multiple students efficiently.
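The lesson content generated below centers on an `input()`-driven example. A small, testable variant of that pattern (a hypothetical helper, not part of the lesson output) injects the input function so the flow can run without blocking on a terminal:

```python
def greet(prompt="What is your name?", read_input=input):
    """Prompt-driven greeting; read_input is injectable so the flow is testable."""
    name = read_input(prompt)
    return f"Hello, {name}!"

# Inject a fake input instead of reading from stdin
print(greet(read_input=lambda _prompt: "Alice"))
```

Dependency injection like this is also how the quiz and exercise snippets a lesson generates can be unit-tested automatically rather than run interactively.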
In [ ]:
from pydantic import BaseModel
import openai
from concurrent.futures import ThreadPoolExecutor
import json
# @title Personalized Education System for Prompt-Based Programming in Jupyter Notebooks
# Define the Pydantic models for educational output
class LessonContent(BaseModel):
topic: str
explanation: str
code_example: str
quiz_questions: list[str]
follow_up_exercises: list[str]
class StudentFeedback(BaseModel):
student_name: str
progress: str
personalized_recommendations: list[str]
class EducationReport(BaseModel):
student_name: str
lesson_content: LessonContent
feedback: StudentFeedback
# Define parameters using #param annotations for the API call
model = "gpt-4o-2024-08-06" # @param ["gpt-4o-2024-08-06", "gpt-4-0613", "gpt-4-32k-0613"] {type:"string"}
student_name = "Alice" # @param {type:"string"}
topic = "Introduction to Prompt-Based Programming" # @param {type:"string"}
difficulty_level = "Beginner" # @param ["Beginner", "Intermediate", "Advanced"] {type:"string"}
learning_style = "Hands-on" # @param ["Visual", "Auditory", "Hands-on"] {type:"string"}
max_concurrent_requests = 2 # @param {type:"integer"}
# Simulated API call to generate lesson content
def generate_lesson_content(topic, difficulty_level, learning_style):
    # Construct the prompt for OpenAI
    system_message = "You are an AI tutor specializing in personalized education."
    user_message = f"""
    I need a lesson on the topic: {topic}. The student has a {difficulty_level} level of knowledge and prefers a {learning_style} learning style.
    Please include an explanation, code example, quiz questions, and follow-up exercises in a structured format.
    """
    response = openai.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": system_message},
            {"role": "user", "content": user_message}
        ]
    )
    structured_output = response.choices[0].message.content.strip()
    # Simulated structured output for lesson content
    return LessonContent(
        topic=topic,
        explanation="This lesson introduces students to prompt-based programming using a Jupyter notebook...",
        code_example="""# Example: Basic Prompt-Based Programming in Python
prompt = "What is your name?"
name = input(prompt)
print(f'Hello, {name}!')""",
        quiz_questions=["What is prompt-based programming?", "How can you use inputs in a Jupyter notebook?"],
        follow_up_exercises=["Create a program that takes a user's age as input and calculates their birth year."]
    )
# Simulated API call for generating student feedback
def generate_student_feedback(student_name, progress, personalized_recommendations):
    return StudentFeedback(
        student_name=student_name,
        progress=progress,
        personalized_recommendations=personalized_recommendations
    )

# Define a list of students for batch processing
students = [{"student_name": student_name, "topic": topic, "difficulty_level": difficulty_level, "learning_style": learning_style}]

# Function to process a single student's lesson and feedback
def process_student(student):
    lesson_content = generate_lesson_content(student["topic"], student["difficulty_level"], student["learning_style"])
    feedback = generate_student_feedback(student["student_name"], "Making good progress", ["Practice more with prompt-based programming exercises."])
    # Generate a personalized education report
    education_report = EducationReport(
        student_name=student["student_name"],
        lesson_content=lesson_content,
        feedback=feedback
    )
    return education_report

# Batch processing multiple students using concurrent API calls
def batch_process_students(students, max_concurrent_requests):
    results = []
    with ThreadPoolExecutor(max_workers=max_concurrent_requests) as executor:
        futures = [executor.submit(process_student, student) for student in students]
        for future in futures:
            results.append(future.result())
    return results
# Print message to indicate processing has started
print("Processing students, please wait...")
# Perform batch processing on the list of students
education_reports = batch_process_students(students, max_concurrent_requests)
# Process and display results
for report in education_reports:
    print(json.dumps(report.model_dump(), indent=4))  # Print structured data in a readable format (model_dump is the Pydantic v2 replacement for the deprecated .dict())
    print(f"\nPersonalized Feedback for {report.student_name}:\n{report.feedback.personalized_recommendations}\n")  # Print personalized feedback
Processing students, please wait...
{
    "student_name": "Alice",
    "lesson_content": {
        "topic": "Introduction to Prompt-Based Programming",
        "explanation": "This lesson introduces students to prompt-based programming using a Jupyter notebook...",
        "code_example": "# Example: Basic Prompt-Based Programming in Python\nprompt = \"What is your name?\"\nname = input(prompt)\nprint(f'Hello, {name}!')",
        "quiz_questions": [
            "What is prompt-based programming?",
            "How can you use inputs in a Jupyter notebook?"
        ],
        "follow_up_exercises": [
            "Create a program that takes a user's age as input and calculates their birth year."
        ]
    },
    "feedback": {
        "student_name": "Alice",
        "progress": "Making good progress",
        "personalized_recommendations": [
            "Practice more with prompt-based programming exercises."
        ]
    }
}
Personalized Feedback for Alice:
['Practice more with prompt-based programming exercises.']
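In `generate_lesson_content` above, the live API response is fetched but then discarded in favor of a simulated `LessonContent`. In a non-simulated version, the model's JSON reply would be validated into the Pydantic model so that a malformed reply fails loudly. A minimal sketch of that missing step, assuming Pydantic v2 and a reply shaped like `LessonContent` (the class and the raw JSON here are illustrative stand-ins, not output from a real call):

```python
from pydantic import BaseModel, ValidationError

# Mirrors the LessonContent model used in the example above
class LessonContent(BaseModel):
    topic: str
    explanation: str
    code_example: str
    quiz_questions: list[str]
    follow_up_exercises: list[str]

# Hypothetical raw JSON reply from the model (no API call is made here)
raw_reply = """{
    "topic": "Introduction to Prompt-Based Programming",
    "explanation": "Prompts let a program adapt its behavior to user input.",
    "code_example": "name = input('What is your name? ')",
    "quiz_questions": ["What is a prompt?"],
    "follow_up_exercises": ["Ask the user for their age."]
}"""

try:
    # Validate the reply into the typed model instead of discarding it
    lesson = LessonContent.model_validate_json(raw_reply)
    print(lesson.topic)
except ValidationError as exc:
    # A reply that does not match the schema surfaces as a clear error
    print(f"Model reply did not match the schema: {exc}")
```

Validation at the boundary is what makes the structured-output approach trustworthy: downstream code can rely on `lesson` having exactly the declared fields and types.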
Advanced Document Management System Using OpenAI
This code demonstrates an advanced document management system that leverages the OpenAI API for extracting structured metadata from research papers. It utilizes Pydantic for data validation and networkx for dynamic graph management. The system processes multiple documents concurrently, extracting essential information such as titles, authors, abstracts, keywords, and references. By integrating OpenAI's structured output capabilities, the system ensures accurate metadata extraction, enhancing the efficiency and organization of research documentation. The implementation showcases the potential of AI in automating document analysis and management workflows.
from pydantic import BaseModel
from openai import OpenAI
import networkx as nx
from concurrent.futures import ThreadPoolExecutor
import json

# Initialize the OpenAI client (expects OPENAI_API_KEY in the environment)
client = OpenAI()
# Define the Pydantic model for document extraction output
class ResearchPaperExtraction(BaseModel):
    title: str
    authors: list[str]
    abstract: str
    keywords: list[str]
    references: list[str]
# Define parameters for the document processing
document_id = 1 # @param {type:"integer"}
document_text = """This paper focuses on the advancements in quantum computing and its implications in cryptography. The main contributors are Alice, Bob, and Charlie. Keywords include quantum computing, cryptography, encryption, and security.""" # @param {type:"string"}
related_documents = ["Quantum Cryptography: A Future Perspective", "The Impact of Quantum Computing on Security"] # @param {type:"raw"}
max_concurrent_requests = 3 # @param {type:"integer"}
# Initialize a graph for dynamic document management
graph = nx.Graph()
# Function to generate structured document extraction using OpenAI
def generate_document_extraction(document_text, related_documents):
    # Construct the prompt for OpenAI
    system_message = "You are an expert at structured data extraction. You will be given unstructured text from a research paper and should convert it into the given structure."
    user_message = f"""
    Extract metadata from the following document: {document_text}. Include related documents: {related_documents}.
    """
    # Make a request to the OpenAI API
    completion = client.beta.chat.completions.parse(
        model="gpt-4o-2024-08-06",
        messages=[
            {"role": "system", "content": system_message},
            {"role": "user", "content": user_message}
        ],
        response_format=ResearchPaperExtraction,
    )
    # Extract the structured output
    structured_output = completion.choices[0].message.parsed
    return structured_output
# Function to process and store a single document in the graph
def process_document(document_id, document_text, related_documents):
    # Add a node for the document with the extracted metadata
    graph.add_node(document_id, text=document_text, related_documents=related_documents)
    # Generate structured output for the document extraction
    structured_output = generate_document_extraction(document_text, related_documents)
    return structured_output
# Define a list of documents for batch processing
documents = [{"document_id": document_id, "document_text": document_text, "related_documents": related_documents}]
# Function to process a single document
def process_single_document(document):
    return process_document(document["document_id"], document["document_text"], document["related_documents"])
# Batch processing multiple documents using concurrent API calls
def batch_process_documents(documents, max_concurrent_requests):
    results = []
    with ThreadPoolExecutor(max_workers=max_concurrent_requests) as executor:
        futures = [executor.submit(process_single_document, doc) for doc in documents]
        for future in futures:
            results.append(future.result())
    return results
# Print message to indicate processing has started
print("Processing documents, please wait...")
# Perform batch processing on the list of documents
document_reports = batch_process_documents(documents, max_concurrent_requests)
# Process and display results
for report in document_reports:
    print(json.dumps(report.model_dump(), indent=4))  # Print structured data in a readable format (model_dump is the Pydantic v2 replacement for the deprecated .dict())
Processing documents, please wait...
{
    "title": "Advancements in Quantum Computing and Cryptography",
    "authors": [
        "Alice",
        "Bob",
        "Charlie"
    ],
    "abstract": "This paper focuses on the advancements in quantum computing and its implications in cryptography.",
    "keywords": [
        "quantum computing",
        "cryptography",
        "encryption",
        "security"
    ],
    "references": [
        "Quantum Cryptography: A Future Perspective",
        "The Impact of Quantum Computing on Security"
    ]
}
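As written, the code only ever adds isolated nodes to the networkx graph, so the "dynamic graph management" it promises never materializes: no edges connect a paper to its related documents. A minimal sketch of that linking step, assuming document titles are used as node keys (a hypothetical keying scheme; the titles below are taken from the example output):

```python
import networkx as nx

graph = nx.Graph()

# The processed paper, keyed by its extracted title
title = "Advancements in Quantum Computing and Cryptography"
related = [
    "Quantum Cryptography: A Future Perspective",
    "The Impact of Quantum Computing on Security",
]

graph.add_node(title, kind="paper")
for ref in related:
    # Each related document becomes its own node, linked by an edge;
    # add_edge creates missing endpoint nodes automatically
    graph.add_edge(title, ref, relation="references")

# The graph can now answer connectivity questions across the collection
neighbors = sorted(graph.neighbors(title))
print(neighbors)
```

With edges in place, standard networkx queries (shortest paths, connected components, degree centrality) become available for navigating the document collection, which is the payoff of storing extractions in a graph rather than a flat list.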