Wisecube's posts


1,176 followers

Explore the Hidden Aspects of AI Hallucinations

The Wisecube AI Team invites you to a webinar delving into a critical, yet often overlooked, aspect of AI reliability: hallucinations in large language models (LLMs). Learn how text features impact model accuracy, uncover methods for detecting hallucinations, and gain insights into identifying weaknesses in LLMs. This webinar offers practical knowledge for AI practitioners and data scientists, helping you enhance model reliability and performance.

Watch the recording on YouTube in higher quality: https://lnkd.in/eReJVQHq

Referenced Materials:
- Pythia Leaderboard Document: https://hubs.ly/Q02Zg81J0
- Webinar Slides: https://hubs.ly/Q02Zg7_Q0
- Seeing Through the Fog: A Cost-Effectiveness Analysis of Hallucination Detection Systems: https://hubs.ly/Q02ZCqVb0

Let us know your thoughts or questions in the comments! We're excited to continue the conversation about improving AI reliability.

#AI #MachineLearning #LLM #ArtificialIntelligence #NLP #AIHallucinations #DataScience #BigData #GenerativeAI #Webinar

Beyond Accuracy: Unmasking Hallucinations in Large Language Models


Wendy Charles, PhD

Digital Health Scientist / Consultant / Educator / Global Speaker

1w

Thank you so much, Steven Paul Sanderson II, MPH, for providing a link to the Dynamo AI documentation. Will the speaker's document be available after the presentation?

Oksana Meier

Connecting Banking and Blockchain / Digital Assets

1w

Hello everyone! Excited to learn something important today.

Wendy Charles, PhD

Digital Health Scientist / Consultant / Educator / Global Speaker

1w

Can anyone read the text presented? The text is really, really small and blurry.

Alan Knox

COO @ CtiPath | MLOps | Cloud | Contact Center | Coffee

1w

Sounds like a version of "LLM as a judge" so far?

Donald Presnell, Jr., Executive MBA, MIT IDSS

Principal Managing Consultant | Machine Learning Engineer | Data Science | Deep Learning | Generative AI @ TCG, LLC | Mentor Post-Graduate AIFL | Risk Solutions & Management | Entrepreneur

1w

The example is more indicative of a “fabrication” than a “hallucination.”

Steven Paul Sanderson II, MPH

R - install.packages("healthyverse") | SQL | some Python | Author > packt.link/oTyZJ

1w
Anatoly Alexandrovich

Curious Mind in the Realm of AI

1w

Same here! Really interested to see how Wisecube approaches AI hallucinations.

Alan Knox

COO @ CtiPath | MLOps | Cloud | Contact Center | Coffee

1w

Steven Paul Sanderson II, MPH Looks like there are some nuanced differences. For example, fabrications can be caused by gaps in training data, while hallucinations are caused by weaknesses in generalizations. But... like I said... still researching.

Wendy Charles, PhD

Digital Health Scientist / Consultant / Educator / Global Speaker

1w

OK. Thanks for confirming. The speaker keeps discussing features and data on the screen, but the meaning is completely lost. (Too bad because I am very eager to learn.)

Matas Račkauskas

Student of Computer Science at Kaunas University of Technology | Member of KTU SKILLed AI program

1w

Aren't hallucination problems just fabrications with a bit more truth to them?
