Function Calling with Large Language Models (LLMs)

Introduction to Function Calling in LLMs

Function calling within large language models is a powerful feature that allows developers to extend LLM capabilities by integrating code functions that the model can “call” when prompted. By using function calls, LLMs can generate more specific responses, interact with external APIs, perform calculations, or manage data transformations, making AI applications even more dynamic and practical.

This blog will explain the essentials of function calling in LLMs and provide hands-on examples to illustrate its potential.


Setting Up the Environment

Before diving into code examples, ensure you have an API key for accessing an LLM, such as OpenAI’s GPT models. You’ll also need to install the essential libraries: the openai client, requests for HTTP calls, and python-dotenv for managing environment variables.

pip install openai requests python-dotenv

Then, set up your .env file to store your API key securely.

OPENAI_API_KEY=your_openai_api_key

Load this API key in your Python environment:

from dotenv import load_dotenv
import os

load_dotenv()
API_KEY = os.getenv("OPENAI_API_KEY")

Function Calling Basics with LLMs

Example 1: Simple Function Call for Data Manipulation

Let’s start with a basic example where the LLM calls a function to manipulate data. Suppose we want our model to perform a calculation, such as finding the sum of a list of numbers. Here’s how we can define the function and integrate it with the LLM:

import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Function to calculate the sum of numbers in a list
def calculate_sum(numbers):
    return sum(numbers)

# Function to prompt the LLM
def prompt_llm_with_function_call():
    # Message prompt for the LLM
    messages = [{"role": "user", "content": "Calculate the sum of [2, 3, 5, 7, 11]."}]
    
    # LLM API call with function integration
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=messages,
        functions=[{
            "name": "calculate_sum",
            "description": "Calculates the sum of a list of numbers.",
            "parameters": {
                "type": "object",
                "properties": {
                    "numbers": {
                        "type": "array",
                        "items": {"type": "number"},
                        "description": "A list of numbers to sum"
                    }
                },
                "required": ["numbers"]
            }
        }],
        function_call={"name": "calculate_sum"}
    )
    print("Full Response:", response)
    response_message = response.choices[0].message
    print("Response Message:", response_message)

    if response.choices and response_message.function_call:
        # Extract the JSON-encoded arguments produced by the model
        function_call_data = response_message.function_call.arguments
        
        if function_call_data:
            # Parse the arguments JSON
            arguments = json.loads(function_call_data)
            numbers = arguments.get("numbers", [])
            
            # Call the calculate_sum function with extracted numbers
            result = calculate_sum(numbers)
            print("Calculated Sum:", result)
        else:
            print("Function call data is missing in the response.")
    else:
        print("No result returned from the model.")

prompt_llm_with_function_call()

In this example, the model interprets the user’s request and returns a function_call payload naming calculate_sum, with the list of numbers as its arguments. Our code then parses those arguments, runs the function, and prints the calculated sum.
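
One step this example skips is sending the computed result back to the model so it can phrase a final answer. Here is a minimal sketch of that round trip, assuming the same conversation as above: echo the model’s function_call as an assistant message, append the result under the "function" role, and call the API again.

# Hedged sketch: return the function result for a natural-language reply
messages = [
    {"role": "user", "content": "Calculate the sum of [2, 3, 5, 7, 11]."},
    # Echo the model's function call back into the conversation
    {"role": "assistant", "content": None,
     "function_call": {"name": "calculate_sum",
                       "arguments": '{"numbers": [2, 3, 5, 7, 11]}'}},
    # Supply the computed result under the "function" role
    {"role": "function", "name": "calculate_sum", "content": "28"},
]
follow_up = client.chat.completions.create(model="gpt-4-turbo", messages=messages)
print(follow_up.choices[0].message.content)  # e.g. "The sum of the numbers is 28."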

Example 2: Calling an External API Using an LLM

Another use case for function calling is integrating with external APIs. Suppose we want the model to fetch weather data for a specific location.

import requests

# Function to get current weather for a location
def get_weather(location):
    # Read the key from the environment (set WEATHER_API_KEY in .env) rather than hardcoding it
    weather_api_key = os.getenv("WEATHER_API_KEY")
    base_url = f"https://api.weatherapi.com/v1/current.json?key={weather_api_key}&q={location}"
    response = requests.get(base_url)
    if response.status_code == 200:
        return response.json()
    return {"error": "Unable to fetch weather data"}

# Prompt the model to call the weather function
def prompt_llm_for_weather():
    messages = [{"role": "user", "content": "Get the weather in New York City."}]
    
    response = client.chat.completions.create(
        model="gpt-4",
        messages=messages,
        functions=[{
            "name": "get_weather",
            "description": "Fetches current weather data for a specified location.",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The name of the city to fetch the weather for"
                    }
                },
                "required": ["location"]
            }
        }],
        function_call={"name": "get_weather"}  # Explicitly request the function call
    )
    print("Full Response:", response)


    # Check if response contains 'choices' with function call data
    if response.choices and response.choices[0].message:
        # Extract function call arguments
        function_call_data = response.choices[0].message.function_call.arguments
        
        if function_call_data:
            # Parse the arguments JSON
            arguments = json.loads(function_call_data)
            location = arguments.get("location", "")
            
            # Call the get_weather function with extracted location
            weather_data = get_weather(location)
            print("Weather Data:", weather_data)
        else:
            print("Function call data is missing in the response.")
    else:
        print("No result returned from the model.")

prompt_llm_for_weather()

The model recognizes that a call to get_weather is needed to answer the request for New York City’s weather and returns the location as a structured argument; our code then invokes the function and prints the live weather data fetched from the API.
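
As an aside, the functions and function_call parameters used in these examples are OpenAI’s original function-calling interface, which the SDK now marks as legacy; current versions expose the same capability through tools and tool_choice. A minimal sketch of the equivalent weather call under that newer interface:

# Hedged sketch: the same forced function call via the newer tools interface
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Get the weather in New York City."}],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Fetches current weather data for a specified location.",
            "parameters": {
                "type": "object",
                "properties": {"location": {"type": "string"}},
                "required": ["location"],
            },
        },
    }],
    tool_choice={"type": "function", "function": {"name": "get_weather"}},
)
tool_call = response.choices[0].message.tool_calls[0]
print(tool_call.function.name, tool_call.function.arguments)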

Use Cases of Function Calling with LLMs

  1. Data Processing: Function calls can be used for on-the-fly calculations, such as financial analysis, statistical calculations, or basic math operations, directly within the chat context.
  2. External API Interaction: The ability to query APIs directly allows models to fetch live data, like weather forecasts, stock prices, or other time-sensitive information (a routing sketch follows this list).
  3. File Handling: For applications that require file manipulation (e.g., reading, writing, or analyzing files), function calls enable the model to handle these requests without manual intervention.
  4. Automated Workflows: Function calls can be integrated with other automation workflows, such as scheduling tasks, sending notifications, or interacting with productivity tools.
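
To show how these use cases combine, here is a minimal routing sketch. It assumes the calculate_sum and get_weather functions (and the client) from the examples above, registers both function schemas, and uses function_call="auto" so the model decides which function, if any, fits the request:

# Hedged sketch: let the model route between several registered functions
function_specs = [
    {"name": "calculate_sum", "description": "Sum a list of numbers.",
     "parameters": {"type": "object",
                    "properties": {"numbers": {"type": "array",
                                               "items": {"type": "number"}}},
                    "required": ["numbers"]}},
    {"name": "get_weather", "description": "Current weather for a city.",
     "parameters": {"type": "object",
                    "properties": {"location": {"type": "string"}},
                    "required": ["location"]}},
]

def dispatch(user_prompt):
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[{"role": "user", "content": user_prompt}],
        functions=function_specs,
        function_call="auto",  # the model picks a function or answers directly
    )
    message = response.choices[0].message
    if message.function_call:
        arguments = json.loads(message.function_call.arguments)
        handlers = {"calculate_sum": calculate_sum, "get_weather": get_weather}
        return handlers[message.function_call.name](**arguments)
    return message.content

print(dispatch("What's the weather like in Tokyo?"))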

Conclusion

Function calling in LLMs opens up a new world of possibilities by allowing models to interact with specific functions, data, and external APIs. These examples show you how to set up and utilize function calls in practical applications. Whether you’re working on real-time data queries, building custom applications, or automating workflows, this feature provides a flexible way to enhance the interactivity and effectiveness of LLMs in your projects.

https://colab.research.google.com/drive/1V5Aw_i75Cc74BF6mFxjbZtmIm4lL4gTl?usp=sharing

