5 Minutes to Set Up Your Own LLM Agent System

I'm going to assume most of you are coders. If not, this can still be done, but it won't take 5 minutes; you'll have to work with ChatGPT on JSON objects, among other things.


At the end of the day, most agent systems built on LLMs are strong prompts that output in a JSON format. I needed an agent system, so I went to the AutoGPT and LangChain of the world, only to realize that they had major issues accomplishing what I wanted. So I custom-wrote some Python (because of course I did). I took that code and had ChatGPT build a single-shot prompt that would rebuild the codebase. I then took a normal biomedical informatics workflow, patient recruitment matching, and had the prompt modified to fit that use case.
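That "strong prompt plus JSON output" pattern needs no framework at all. Here's a minimal sketch of the idea, where a canned call_llm stub stands in for a real API call (the schema, field names, and values are invented for illustration, not from the system described below):

```python
import json

def call_llm(prompt):
    # Stand-in for a real OpenAI call; a production version would hit the API.
    # It returns a canned response shaped like the JSON the prompt demands.
    return '{"eligible": true, "reasons": ["age within range", "no exclusion meds"]}'

def screen_patient(ehr_record):
    # The prompt pins down an exact JSON schema so the output can be parsed
    prompt = (
        'Respond ONLY with JSON matching {"eligible": bool, "reasons": [str]}.\n'
        f"Patient chart: {json.dumps(ehr_record)}"
    )
    raw = call_llm(prompt)
    return json.loads(raw)  # every step consumes and emits plain JSON

result = screen_patient({"age": 54, "dx": ["type 2 diabetes"]})
```

Because every step parses and re-emits JSON, the output of one "agent" can be handed directly to the next, or looped back for re-evaluation.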

Here's the prompt:

Generate Python code that sets up a local client for an OpenAI API to interact with an LLM, designed for identifying eligible patients for clinical trials by analyzing EHR data. The code should pass all workflows between steps in JSON format, as different steps require unique evaluations and may loop back on themselves. The code should:

    Import necessary libraries, including os, json, time, argparse, and openai from OpenAI.
    Define logging levels like TRACE and DEBUGGING.
    Include functions for logging messages, sending API requests to OpenAI, and managing JSON files:
        log_message: Logs messages to both a primary and optional secondary log file.
        log_sent_message: Logs messages sent to the OpenAI API.
        openai_chat_completion: Sends a message to the OpenAI API and retrieves the response.
        load_json_file: Loads data from JSON files, such as EHR data, trial protocols, and section content.
        save_json_file: Saves content and results in JSON format to pass between different steps.
    Include functions to generate initial patient recruitment content and refine it based on feedback:
        generate_initial_content: Calls the OpenAI API using EHR data in JSON format to create initial recruitment content.
        get_improvement_suggestions: Retrieves suggestions for improving the initial content based on feedback from OpenAI.
        apply_improvements: Applies improvement suggestions and passes the improved content in JSON format for further evaluations.
    Handle consistency checks and ensure alignment with clinical trial protocols and future criteria:
        check_protocol_discrete_data: Pulls the discrete data we are interested in from the patient chart, passed in JSON.
        get_recruitment_issues: Identifies inconsistencies between different recruitment sections passed in JSON.
        verify_data: Checks whether the check_protocol_discrete_data output matches the original text and, if not, redoes the check until it does.
        check_alignment_with_protocol: Ensures that the content aligns with future clinical trial requirements (as specified in JSON format).
    Include functions to manage and save the final recruitment content and statistics:
        save_final_content: Saves the finalized recruitment content in JSON format.
        save_statistics: Records process statistics, such as the number of iterations and feedback loops, in a JSON file.
    Implement a multi_agent_system function to orchestrate the entire process:
        It should iterate through different recruitment sections (provided in JSON), ensuring that each section is consistent with the previous and aligns with future trial requirements.
        Apply improvements iteratively in several loops, passing content between steps in JSON format.
    The script should include a main() function that uses argparse to load configuration files and run the multi-agent system:
        Load EHR data, clinical trial protocols, and other configurations from JSON files.
        Ensure the entire workflow, from content generation to final alignment, is passed through the system using JSON format for data consistency and modularity.

Ensure the code structure is modular and integrates logging mechanisms to track the process. The workflow must allow for multiple recruitment sections, with the ability to check consistency, apply improvements iteratively, and pass information back in JSON format for re-evaluation where needed.


The code it output looks like the following:

import os
import json
import time
import openai
import argparse
import logging

# Define logging levels
TRACE = 5
DEBUGGING = 10

# Set up logging configuration
logging.basicConfig(level=logging.DEBUG, format='%(asctime)s - %(levelname)s - %(message)s')

# Read the API key from the environment rather than hard-coding it
openai.api_key = os.getenv("OPENAI_API_KEY")

# Function to log messages
def log_message(message, primary_log_file='logfile.log', secondary_log_file=None):
    logging.debug(message)
    with open(primary_log_file, 'a') as log_file:
        log_file.write(f'{time.strftime("%Y-%m-%d %H:%M:%S")} - {message}\n')
    if secondary_log_file:
        with open(secondary_log_file, 'a') as log_file:
            log_file.write(f'{time.strftime("%Y-%m-%d %H:%M:%S")} - {message}\n')

# Function to log messages sent to OpenAI API
def log_sent_message(message):
    log_message(f"Message sent to OpenAI API: {message}")

# OpenAI API function to send a request and get a response
def openai_chat_completion(prompt):
    try:
        log_sent_message(prompt)
        # Legacy completions endpoint (openai<1.0); newer SDKs use chat completions
        response = openai.Completion.create(
            engine="text-davinci-003",
            prompt=prompt,
            max_tokens=1000
        )
        return response.choices[0].text.strip()
    except Exception as e:
        log_message(f"Error with OpenAI API: {str(e)}")
        return None

# Functions for JSON management (load and save)
def load_json_file(filepath):
    try:
        with open(filepath, 'r') as file:
            return json.load(file)
    except Exception as e:
        log_message(f"Error loading JSON file {filepath}: {str(e)}")
        return None

def save_json_file(filepath, content):
    try:
        with open(filepath, 'w') as file:
            json.dump(content, file, indent=4)
    except Exception as e:
        log_message(f"Error saving JSON file {filepath}: {str(e)}")

# Generate initial content for patient recruitment
def generate_initial_content(ehr_data):
    prompt = f"Generate patient recruitment content for a clinical trial based on the following EHR data: {ehr_data}"
    return openai_chat_completion(prompt)

# Get improvement suggestions
def get_improvement_suggestions(content):
    prompt = f"Review and suggest improvements to the following recruitment content: {content}"
    return openai_chat_completion(prompt)

# Apply improvement suggestions
def apply_improvements(content, suggestions):
    improved_content = f"{content}\n\nImprovements:\n{suggestions}"
    return improved_content

# Check protocol discrete data from patient chart
def check_protocol_discrete_data(ehr_data):
    prompt = f"Pull the discrete data relevant to the clinical trial from the following patient chart: {ehr_data}"
    return openai_chat_completion(prompt)

# Verify recruitment issues between sections
def get_recruitment_issues(content, previous_sections):
    prompt = f"Identify inconsistencies between the following sections: {content} and {previous_sections}"
    return openai_chat_completion(prompt)

# Verify protocol consistency and loop back if needed
def verify_data(content, original_content):
    prompt = f"Check if the following content is consistent with the original protocol: {content} {original_content}"
    return openai_chat_completion(prompt)

# Check alignment with clinical trial protocol
def check_alignment_with_protocol(content, future_criteria):
    prompt = f"Ensure the following recruitment content aligns with these future clinical trial criteria: {future_criteria}\n\nContent: {content}"
    return openai_chat_completion(prompt)

# Save final recruitment content
def save_final_content(content, filepath='final_recruitment_content.json'):
    save_json_file(filepath, {'content': content})

# Save statistics of the process
def save_statistics(stats, filepath='statistics.json'):
    save_json_file(filepath, stats)

# Multi-agent system to orchestrate the process
def multi_agent_system(ehr_data, trial_protocol, previous_sections, future_criteria):
    # Step 1: Generate initial recruitment content
    initial_content = generate_initial_content(ehr_data)
    log_message(f"Generated initial content: {initial_content}")

    # Step 2: Get improvement suggestions and apply them
    suggestions = get_improvement_suggestions(initial_content)
    improved_content = apply_improvements(initial_content, suggestions)
    log_message(f"Applied improvements: {improved_content}")

    # Step 3: Check protocol discrete data
    discrete_data = check_protocol_discrete_data(ehr_data)
    log_message(f"Protocol discrete data check: {discrete_data}")

    # Step 4: Check recruitment issues and verify consistency
    inter_section_issues = get_recruitment_issues(improved_content, previous_sections)
    resolved_issues = verify_data(improved_content, inter_section_issues)

    # Step 5: Check alignment with clinical trial protocol
    alignment_check = check_alignment_with_protocol(resolved_issues, future_criteria)
    final_content = verify_data(resolved_issues, alignment_check)

    # Step 6: Save final content and statistics
    save_final_content(final_content)
    save_statistics({'steps_completed': 6})

# Main function to parse arguments and run the multi-agent system
def main():
    parser = argparse.ArgumentParser(description="Run the recruitment pipeline.")
    parser.add_argument('--ehr_data', required=True, help='Path to the EHR data JSON file')
    parser.add_argument('--trial_protocol', required=True, help='Path to the trial protocol JSON file')
    parser.add_argument('--previous_sections', help='Path to previous sections for consistency check', default=None)
    parser.add_argument('--future_criteria', help='Path to future criteria for alignment check', default=None)
    
    args = parser.parse_args()
    
    # Load necessary data
    ehr_data = load_json_file(args.ehr_data)
    trial_protocol = load_json_file(args.trial_protocol)
    previous_sections = load_json_file(args.previous_sections) if args.previous_sections else {}
    future_criteria = load_json_file(args.future_criteria) if args.future_criteria else {}

    # Run the multi-agent system
    multi_agent_system(ehr_data, trial_protocol, previous_sections, future_criteria)

if __name__ == '__main__':
    main()
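To try the script, you first need the JSON files that the argparse flags point at. A minimal sketch of creating them in Python (the file names, field names, and values here are invented placeholders, not from the article; real EHR data and protocols are far richer):

```python
import json

# Hypothetical minimal inputs for a dry run
ehr = {"patient_id": "P001", "age": 62, "conditions": ["hypertension"]}
protocol = {"trial_id": "NCT00000000", "inclusion": ["age 18-75"]}

# Write them where the CLI flags can find them
with open("ehr.json", "w") as f:
    json.dump(ehr, f, indent=4)
with open("protocol.json", "w") as f:
    json.dump(protocol, f, indent=4)

# Then run something like:
#   python agent_system.py --ehr_data ehr.json --trial_protocol protocol.json
```

The --previous_sections and --future_criteria flags are optional; main() substitutes empty dicts when they are omitted.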

So it's not perfect; you need to spend time on the prompt engineering portion, but that's the framework that can get you running in 5 minutes with your own custom agent system.
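One gap worth noting: the generated code never implements the loop-back the prompt asks of verify_data (redoing a check until verification passes). A generic sketch of that pattern, with hypothetical function names and a toy checker that succeeds on its second attempt:

```python
def verify_until_consistent(check, verify, data, max_rounds=3):
    # Generic loop-back: redo `check` until `verify` accepts its output,
    # capped at max_rounds so a stubborn model can't loop forever.
    result = check(data)
    for _ in range(max_rounds):
        if verify(result, data):
            return result
        result = check(data)
    return result

# Toy check/verify pair: the checker returns garbage once, then a valid answer
attempts = {"n": 0}
def flaky_check(data):
    attempts["n"] += 1
    return data.upper() if attempts["n"] >= 2 else "garbage"

final = verify_until_consistent(flaky_check, lambda r, d: r == d.upper(), "ehr text")
```

Capping the retries matters in practice; an LLM that keeps failing verification will otherwise burn tokens indefinitely.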

Alexander De Ridder

Founder of SmythOS.com | AI Multi-Agent Orchestration


Impressive prompt engineering. Streamlining complex tasks into one cohesive flow is revolutionary.
