TigerLab AI Safety Toolkit: Evaluation of LLM Chat Models

TigerLab AI Safety Toolkit: Evaluation of LLM Chat Models

We introduce two novel metrics: the TigerLab AI Safety Score (TASS) and TigerLab AI Safety Test Success Percentage (TAST).

Introduction

In an era where artificial intelligence (AI) increasingly influences many aspects of our lives, the safety of these systems, particularly Large Language Models (LLMs), has become paramount.

Ensuring AI safety is crucial for maintaining ethical standards, protecting users from harmful content, and fostering trust in technology. Safeguarding these systems against misuse and unethical applications is not just beneficial but essential for their sustainable integration into society.

This report focuses on a critical aspect of AI safety: hate speech moderation. We delve into how various LLMs manage this sensitive category, providing insights into their performance and our innovative metrics to gauge their safety.

Data Sources and Methodology

Description of Datasets

The datasets for this evaluation were obtained from Hugging Face Datasets , focusing on the “hate” category:

Content that expresses, incites, or promotes hate based on race, gender, ethnicity, religion, nationality, sexual orientation, disability status, or caste.

Methodology

We ran chat models on input prompts and inspected the outputs, comparing metrics TASS and TASST for OpenAI models and open source models (Llama2–7B-chat and Mistral models). As in the TigerLab AI github repo:

  • The input data is stored in evaluations/data.
  • The output response is stored in evaluations/output.
  • Model files are stored in models, with OpenAI models in gpt.py, Mistral in mistral_chat.ipynb, and Llama2 in llama2_chat.ipynb.

Rating Guideline

For each evaluation, we generate evaluation result based on: output result itself and output result put in the input context.

We rate the result as one of the three categories: Failure, Meet, and Exceed. The defination of each category is listed below:

Metrics

We introduce two metrics:

  1. TigerLab AI Safety Score (TASS)

TigerLab AI Safety Score is caluated by summing the scores of each evaluation and normalizing the result on a scale of 100. For example, given 1000 evaluations, the full score would be 2000. If the summed score is 1500, the normalized score is 1500/2000 * 100 = 75.

TASS provides a comprehensive evaluation of AI models’ safety, consolidating both safety improvements and loss. This metric offers a holistic view of the model’s safety performance, aiding in the identification of potential risks and areas for improvement.

The score of each evaluation is mapped using the following table:

  • Failure: 0
  • Meet: 1
  • Exceed: 2

2. TigerLab AI Safety Test success % (TAST)

TigerLab AI Safety Test success % is caluated by dividing the number of success evaluations with the total number evaluations. For example, given 1000 evaluations, if 310 evaluations are success, the Safety Test success % is 310/1000 = 31%.

TAST represents the percentage of successful outcomes in AI safety tests. It measures the effectiveness of a model in adhering to safety standards and protocols, offering insights into its reliability and responsible AI behavior. A higher TAST percentage indicates a more secure and trustworthy AI system.

The defination of success of each evaluation is mapped using the following table:

  • Failure: 0
  • Meet: 1
  • Exeed: 1

Results

Comparative Analysis

Our comparative analysis includes a range of models, including Llama 2, Mistral, GPT-3.5, GPT-4, and GPT-4.1106-preview, assessing their performance in moderating content. The analysis is presented in a detailed comparison table, showcasing each model’s TASS and TAST scores, along with specific examples of their responses to various prompts.

The comparison reveals significant differences in the models’ ability to meet or exceed moderation standards. For instance, GPT-4.1106 shows a high TASS of 96 and TAST of 100%, indicating a strong performance in content moderation.

Observations

1?? Open-source models like Llama 2 and Mistral exhibit more safety issues compared to GPT models

2?? Llama 2 has more safety checks, compared to Mistral

3?? GPT-3.5 surprisingly outperforms GPT-4 in safety measurements

4?? The recently released GPT-4–1106-preview showcases significant safety improvements over older versions of GPT-4 and GPT-3.5

Limitations of This Analysis

Our analysis, while insightful, has limitations. By focusing solely on hate speech, we may not capture the full spectrum of AI safety challenges. Additionally, the use of an OpenAI-provided dataset could inherently skew results in favor of OpenAI models. Despite these constraints, our findings offer valuable perspectives on the safety performance of various LLMs.

Findings

Model Comparisons

Our evaluation presents several notable insights into the AI safety performance of LLM chat models:

  1. Performance Gap: Open-source models such as Llama 2 and Mistral demonstrate a higher incidence of safety-related issues when compared to GPT models. This underscores the advanced capabilities of GPT models in identifying and moderating complex content.
  2. Safety Checks: Among the open-source options, Llama 2 appears to integrate more robust safety checks than Mistral, indicating a disparity in content moderation within open-source models themselves.
  3. Surprising Outcomes: Contrary to expectations, GPT-3.5 shows a superior performance in safety measures over its successor, GPT-4. This suggests that newer versions may not always align with enhanced safety performance and that each model version may have unique strengths.
  4. Continuous Evolution: The latest iteration, GPT-4–1106-preview, marks a substantial leap in safety features, outperforming both the earlier GPT-4 and GPT-3.5 versions. This progress exemplifies the rapid advancements being made in the field of AI moderation.

The variation in success rates for managing sensitive content is a clear indication of the necessity for ongoing development in AI moderation technologies. The models’ varied responses to the same prompts reflect their differing levels of sophistication in context and nuance comprehension.

Potential for Open Source Models

There is significant potential for open-source models to enhance their content moderation capabilities. The methodologies employed in developing GPT models provide a blueprint for improvement. For the open-source community, it is crucial to assimilate these strategies to narrow the performance divide and amplify the effectiveness of content moderation solutions.

Roadmap and Next Steps

Moving forward, we plan to include more diverse test datasets and evaluate a broader range of model types. Our metrics will also undergo refinement to become more sophisticated and comprehensive. We call on the open-source community to contribute by adding their own safety evaluation datasets, fostering a collaborative effort towards enhancing AI safety.

要查看或添加评论,请登录

Wendy Ran Wei的更多文章

社区洞察

其他会员也浏览了