Beginner's Guide to LLM/RAG Evaluation
I frequently discuss strategies for LLM/RAG evaluation, including real-time fine-tuning and measuring the quality of the taxonomy reconstructed by your RAG system, by comparing it either to knowledge graphs imported from external sources or to the taxonomy embedded in your crawled corpus. The following presentation covers the most important evaluation metrics, how to implement and interpret them, and illustrates them with case studies.
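As a minimal sketch of the taxonomy comparison mentioned above, assume (hypothetically) that both the reconstructed taxonomy and the reference knowledge graph are represented as {parent: [children]} mappings; the function and metric names below are illustrative, not the ones used in the presentation.

```python
# Sketch: score a RAG-reconstructed taxonomy against a reference knowledge graph,
# assuming both are represented as {parent: [children]} mappings (an assumption,
# not the format used in the presentation).

def taxonomy_edges(taxonomy):
    """Flatten a {parent: [children]} mapping into a set of (parent, child) edges."""
    return {(parent, child) for parent, children in taxonomy.items() for child in children}

def taxonomy_scores(reconstructed, reference):
    """Edge-level precision, recall, and Jaccard similarity between two taxonomies."""
    rec, ref = taxonomy_edges(reconstructed), taxonomy_edges(reference)
    overlap = rec & ref
    return {
        "precision": len(overlap) / len(rec) if rec else 0.0,
        "recall": len(overlap) / len(ref) if ref else 0.0,
        "jaccard": len(overlap) / len(rec | ref) if (rec | ref) else 0.0,
    }

if __name__ == "__main__":
    reconstructed = {"machine learning": ["clustering", "regression", "rag"],
                     "databases": ["vector db"]}
    reference = {"machine learning": ["clustering", "regression"],
                 "databases": ["vector db", "graph db"]}
    print(taxonomy_scores(reconstructed, reference))
```

Edge-level overlap is only one possible design choice; node-level or path-level comparisons are equally plausible depending on how the crawled corpus encodes its taxonomy.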
RAG evaluation is a complex topic, similar to evaluating clustering techniques, because there is no "perfect" answer to compare to: RAG/LLM is typically an unsupervised machine learning problem. It is easier to evaluate RAG models that perform supervised tasks, such as classification or prediction based on training and validation sets.
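For the supervised case, a minimal sketch follows: when the RAG pipeline performs classification (for instance, assigning a category to each query), it can be scored against a labeled validation set. The `predict_category` argument is a hypothetical placeholder for whatever RAG/LLM call produces the label.

```python
# Sketch: evaluating a RAG-backed classifier on a labeled validation set.
# predict_category is a hypothetical stand-in for the actual RAG/LLM call.

from collections import Counter

def evaluate_classifier(predict_category, validation_set):
    """validation_set: list of (query, true_label) pairs. Returns accuracy and a confusion counter."""
    confusion = Counter()
    correct = 0
    for query, true_label in validation_set:
        predicted = predict_category(query)   # hypothetical RAG/LLM prediction
        confusion[(true_label, predicted)] += 1
        correct += int(predicted == true_label)
    accuracy = correct / len(validation_set) if validation_set else 0.0
    return accuracy, confusion

if __name__ == "__main__":
    # Toy stand-in for a RAG-backed classifier, for illustration only.
    toy_predict = lambda q: "metric" if "score" in q else "architecture"
    val = [("how to score retrieval", "metric"),
           ("vector database layout", "architecture"),
           ("which embedding model", "architecture")]
    acc, conf = evaluate_classifier(toy_predict, val)
    print(f"accuracy = {acc:.2f}")
    print(dict(conf))
```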
Overview
Join us for an enlightening webinar on the innovative technology of Retrieval Augmented Generation (RAG) with Professor Tom Yeh from the University of Colorado Boulder. As AI continues to evolve, understanding technologies like RAG is crucial for anyone looking to stay ahead in the field. This webinar will introduce you to the basics of RAG, demonstrating how it enhances the capabilities of AI systems by integrating retrieval mechanisms into generative models.
You’ll learn:
This hands-on workshop is aimed at developers and AI professionals, featuring state-of-the-art technology, case studies, code sharing, and live demos. The recording and GitHub material will be available to registrants who cannot attend the free 60-minute session.
To learn about the backbones of RAG/LLM (fast, scalable databases), see also this presentation: https://mltblog.com/3T4rGoF