TigerLab AI Safety Toolkit: Evaluation of LLM Chat Models
We introduce two novel metrics: the TigerLab AI Safety Score (TASS) and TigerLab AI Safety Test Success Percentage (TAST).
Introduction
In an era where artificial intelligence (AI) increasingly influences many aspects of our lives, the safety of these systems, particularly Large Language Models (LLMs), has become paramount.
Ensuring AI safety
This report focuses on a critical aspect of AI safety: hate speech moderation
Data Sources and Methodology
Description of Datasets
The datasets for this evaluation were obtained from Hugging Face Datasets , focusing on the “hate” category:
Content that expresses, incites, or promotes hate based on race, gender, ethnicity, religion, nationality, sexual orientation, disability status, or caste.
Methodology
We ran chat models on input prompts and inspected the outputs, comparing metrics TASS and TASST for OpenAI models and open source models (Llama2–7B-chat and Mistral models). As in the TigerLab AI github repo:
Rating Guideline
For each evaluation, we generate evaluation result based on: output result itself and output result put in the input context.
We rate the result as one of the three categories: Failure, Meet, and Exceed. The defination of each category is listed below:
Metrics
We introduce two metrics:
TigerLab AI Safety Score is caluated by summing the scores of each evaluation and normalizing the result on a scale of 100. For example, given 1000 evaluations, the full score would be 2000. If the summed score is 1500, the normalized score is 1500/2000 * 100 = 75.
TASS provides a comprehensive evaluation of AI models’ safety, consolidating both safety improvements and loss. This metric offers a holistic view of the model’s safety performance, aiding in the identification of potential risks and areas for improvement.
The score of each evaluation is mapped using the following table:
2. TigerLab AI Safety Test success % (TAST)
TigerLab AI Safety Test success % is caluated by dividing the number of success evaluations with the total number evaluations. For example, given 1000 evaluations, if 310 evaluations are success, the Safety Test success % is 310/1000 = 31%.
领英推荐
TAST represents the percentage of successful outcomes in AI safety tests. It measures the effectiveness of a model in adhering to safety standards and protocols, offering insights into its reliability and responsible AI behavior. A higher TAST percentage indicates a more secure and trustworthy AI system.
The defination of success of each evaluation is mapped using the following table:
Results
Comparative Analysis
Our comparative analysis includes a range of models, including Llama 2, Mistral, GPT-3.5, GPT-4, and GPT-4.1106-preview, assessing their performance in moderating content
The comparison reveals significant differences in the models’ ability to meet or exceed moderation standards. For instance, GPT-4.1106 shows a high TASS of 96 and TAST of 100%, indicating a strong performance in content moderation.
Observations
1?? Open-source models like Llama 2 and Mistral exhibit more safety issues compared to GPT models
2?? Llama 2 has more safety checks, compared to Mistral
3?? GPT-3.5 surprisingly outperforms GPT-4 in safety measurements
4?? The recently released GPT-4–1106-preview showcases significant safety improvements over older versions of GPT-4 and GPT-3.5
Limitations of This Analysis
Our analysis, while insightful, has limitations. By focusing solely on hate speech, we may not capture the full spectrum of AI safety challenges. Additionally, the use of an OpenAI-provided dataset could inherently skew results in favor of OpenAI models. Despite these constraints, our findings offer valuable perspectives on the safety performance of various LLMs.
Findings
Model Comparisons
Our evaluation presents several notable insights into the AI safety performance of LLM chat models:
The variation in success rates for managing sensitive content is a clear indication of the necessity for ongoing development in AI moderation
Potential for Open Source Models
There is significant potential for open-source models to enhance their content moderation capabilities. The methodologies employed in developing GPT models provide a blueprint for improvement. For the open-source community, it is crucial to assimilate these strategies to narrow the performance divide and amplify the effectiveness of content moderation solutions.
Roadmap and Next Steps
Moving forward, we plan to include more diverse test datasets and evaluate a broader range of model types. Our metrics will also undergo refinement to become more sophisticated and comprehensive. We call on the open-source community to contribute by adding their own safety evaluation datasets, fostering a collaborative effort towards enhancing AI safety