Yesterday two new major model families became available for fine-tuning: Llama 3.1, which comes in 8B, 70B and 405B(!) variants, and GPT-4o mini. We've added them to the OpenPipe platform and ran all of them (except Llama 3.1 405B) through our evaluation harness. The good news is that all 3 models are extremely high quality. The bad news is that they saturate most of the standard evals we ran, which makes comparing them difficult! In fact, both Llama 3.1 variants we tried saturate all 3 of the standard evals we ran, and GPT-4o mini saturated 2 of the 3.

What do we mean by saturate? For any given input, you can imagine there is a potential "perfect" output (or set of outputs) that cannot be improved upon. The more complex the task, the more difficult it is for a model to generate a perfect output. However, once a model is strong enough to consistently generate a perfect output for a task, we consider the task saturated for that model. In our LLM-as-judge evals, this usually shows up as a cluster of models all doing about the same on the task, without any model significantly outperforming.

And in fact, that's what we see in the evaluations below. All 3 fine-tuned models do about as well as each other (win rates within 6%) on both the "Resume Summarization" and "Data Extraction" tasks. On "Chatbot Responses", however, both Llama 3.1 variants significantly outperform GPT-4o mini. So the "Chatbot Responses" task isn't saturated for GPT-4o mini, but all other task/model combinations are. This is very significant: we chose these tasks explicitly because older models on our platform, like Mistral 7B and Llama 3 8B, did not saturate them!

There are two main reasons why we're seeing this saturation now:

- The new models we're testing here are stronger than the previous generation of models available on-platform.
- Our benchmark models are now all trained on datasets relabeled with Mixture of Agents, which substantially improves the quality of the dataset and thus the fine-tuned model.

We're working on developing better benchmarks, and once we have some higher-difficulty ones we'll analyze Llama 3.1 405B as well. And again, you can try all these out today on OpenPipe to run your own evaluations!
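To make the saturation pattern concrete, here is a minimal sketch of the kind of head-to-head LLM-as-judge comparison that produces win rates like the ones discussed above. This is not OpenPipe's actual evaluation harness; the judge model, the judge prompt, and the data format are illustrative assumptions. When a task is saturated for both models, the judge mostly returns ties and the win rate clusters near 50%.

```python
# Illustrative sketch (not OpenPipe's evaluation harness): a minimal
# LLM-as-judge head-to-head comparison that produces a win rate.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

JUDGE_PROMPT = """You are comparing two candidate outputs for the same task.
Task input:
{input}

Output A:
{output_a}

Output B:
{output_b}

Reply with exactly one word: "A" if Output A is better, "B" if Output B is
better, or "TIE" if they are equally good."""


def judge_pair(task_input: str, output_a: str, output_b: str) -> str:
    """Ask a judge model which of two outputs is better for one input."""
    response = client.chat.completions.create(
        model="gpt-4o",  # judge model; an assumption for this sketch
        temperature=0,
        messages=[{
            "role": "user",
            "content": JUDGE_PROMPT.format(
                input=task_input, output_a=output_a, output_b=output_b),
        }],
    )
    return response.choices[0].message.content.strip().upper()


def win_rate(rows: list[dict]) -> float:
    """Fraction of comparisons won by model A.

    Each row needs "input", "model_a_output", and "model_b_output" keys.
    Ties count as half a win, so two saturated models land near 0.5.
    """
    score = 0.0
    for row in rows:
        verdict = judge_pair(
            row["input"], row["model_a_output"], row["model_b_output"])
        if verdict == "A":
            score += 1.0
        elif verdict == "TIE":
            score += 0.5
    return score / len(rows)
```

Counting ties as half a win keeps the metric symmetric between the two models, which is why a cluster of near-50% win rates is a reasonable signal that a task is saturated rather than that one model is winning.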
OpenPipe
Software Development
Automatically convert unreliable LLM prompts into high-quality, fast fine-tuned models.
About us
Fine-tune models to replace your LLM prompts.
- Website: https://openpipe.ai
- Industry: Software Development
- Company size: 2-10 employees
- Type: Privately held
Posts
-
OpenPipe reposted
Fine-tuned Llama 3.1 8B is completely cracked. Just ran it through our fine-tuning test suite and it blows GPT-4o mini out of the water on every task. There has never been an open model this small, this good.
-
OpenPipe reposted
One week away from All About Fine-Tuning LLMs! Join us next Tuesday, June 25th at 11 AM PDT on Zoom! We're excited to announce two new panelists: Sophia Yang, Ph.D.: Head of Developer Relations at Mistral AI, and Aditya Jain: Applied Research Scientist at Meta. They'll be joining alongside Kyle Corbitt: Co-founder, OpenPipe; Wing Lian: Founder, Axolotl AI; and Benjamin Hamm: Senior Principal Product Manager at OctoAI. And our host: Naomi Chetrit Band, The GenAI Collective! Don't miss this deep dive into fine-tuning models for optimal performance; level up your tuning knowledge and gain the strategies to help you tailor open-source models. Sign up here: https://lnkd.in/e_3dasMu
-
OpenPipe reposted
Not saying Ascend is the only VC in Seattle investing exclusively in startups using AI to disrupt - but we are the only one throwing a hackathon June 15 with our good friends at AI Tinkerers (Jay Allen, Vik K.) and the fine folks at OpenPipe. Get involved!
The AI Tinkerers - Seattle Summer Hackathon is ON! https://lnkd.in/g7QJgTxZ This event brings two of the most interesting and innovative early-stage AI companies -- Moondream & OpenPipe -- together with the AI Tinkerers community to imagine what's possible with open-source generative AI models through fine-tuning and vision. We invite you to bring your team, or form a team at the event. Judges, prizes, food and the founders of those two companies will be on hand. June 15th. Huge thanks to the generous sponsorship of Ascend.vc - Kirby Winfield, Jen Haller, Kyle Corbitt, David Corbitt, Jason Allen, Vik
AI Tinkerers Summer Hackathon 2024 [AI Tinkerers - Seattle]
seattle.aitinkerers.org
-
Love seeing OpenPipe users sharing their learnings!
Here is a sample math explainer video that my team and I generated at the #Llama3Hackathon this past weekend: https://lnkd.in/gBN4U63n I fine-tuned #Llama3 8B with OpenPipe to generate code using the Manim Python library, creating videos from user-submitted math questions, and performed inference on OctoAI. My dataset came from code generated by Gemini 1.5. (A sketch of this kind of pipeline is included after the video link below.) You can find our project here: https://math.auto.movie Blog description of the project: https://lnkd.in/gqmJiFNM GitHub repo: https://lnkd.in/gnXH7hwW
Solving Math Problems Easy: Addition of 3+5=8 for Beginners
https://www.youtube.com/
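For readers curious what a pipeline like the one described in the post above can look like, here is a hypothetical sketch: it asks a fine-tuned model served behind an OpenAI-compatible endpoint for Manim scene code, then renders the result with the Manim CLI. The endpoint URL, model name, file names, and prompts are placeholders, not the project's actual code.

```python
# Hypothetical sketch of the workflow described above: ask a fine-tuned model
# (served behind any OpenAI-compatible endpoint) for Manim scene code, then
# render the generated scene to video with the Manim Community CLI.
import subprocess
from openai import OpenAI

client = OpenAI(
    base_url="https://example.com/v1",  # placeholder inference endpoint
    api_key="YOUR_API_KEY",             # placeholder credential
)

question = "Explain the addition 3 + 5 = 8 for beginners."

completion = client.chat.completions.create(
    model="my-finetuned-llama-3-8b",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Write a Manim Community scene named Explainer that "
                    "animates an explanation of the user's math question. "
                    "Return only Python code."},
        {"role": "user", "content": question},
    ],
)

# Save the generated scene and render it at low quality for a quick preview.
with open("explainer_scene.py", "w") as f:
    f.write(completion.choices[0].message.content)

subprocess.run(["manim", "-ql", "explainer_scene.py", "Explainer"], check=True)
```

A real pipeline would also validate the generated code (for example, stripping markdown fences and retrying on render failures) before rendering; the sketch omits that for brevity.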
-
OpenPipe reposted
Our Deep Dive with OpenPipe and Kyle Corbitt is live! OpenPipe fine-tunes your faster, cheaper, better model. Kyle walks us through OpenPipe, fine-tuning AI models, and their $6.7M seed round from Costanoa Ventures, with involvement from Y Combinator, Logan Kilpatrick and more... https://lnkd.in/eSz8iXN4
OpenPipe fine-tunes your faster, cheaper, better model
cerebralvalley.ai
-
OpenPipe reposted
Thrilled to announce that OpenPipe just closed our $6.7M seed round to help you replace GPT-4 with your own fine-tuned models! Huge shout-out to our investors, including Tony Liu and Rebecca Li at Costanoa Ventures, Y Combinator, Logan Kilpatrick, Alex Graveley, Flo Crivello, Tom Preston-Werner, Immad Akhund, Austen Allred, Massimo Sgrelli, Luigi Bajetti, Lombardstreet Ventures, Pioneer Fund and many others. We're extremely grateful for your vote of confidence. More details in the post!
We Raised $6.7M to Replace GPT-4 with Your Own Fine-Tuned Models - OpenPipe
openpipe.ai
-
We're #hiring a new Founding Software Engineer (Backend) in Bellevue, Washington. Apply today or share this post with your network.
-
OpenPipe (YC S23) has raised $6.7M in seed funding to replace GPT-4 with your own fine-tuned models. Founded by Kyle Corbitt and David Corbitt, OpenPipe makes it easy to replace your GPT-4 prompt with a fine-tuned model. This is especially beneficial for projects or businesses making over 1,000 OpenAI prompt calls a day, as it offers improvements in speed, cost, and quality without compromises.

If you have a prompt already in use, OpenPipe can help you collect and refine your data, train a new model, and deploy it quickly, often in less than an hour, even for those without machine learning experience.

Once your fine-tuned model is operational, OpenPipe enhances its value by automating the process of improving your model with growing user data, a concept known as a data flywheel. This strategy, used by major companies like Amazon for superior product recommendations, allows your model to improve continuously as your user base expands. OpenPipe enables any company to begin implementing machine learning workflows and building a sustainable competitive edge by leveraging early data for ongoing enhancement.

Congrats on the round, Kyle and David!
Seattle startup OpenPipe raises $6.7M to help companies reduce costs for LLM models
geekwire.com
-
Fantastic interview with some of the inside backstory on why we started OpenPipe!
Thrilled to feature Kyle Corbitt, co-founder and lead engineer at OpenPipe, a fellow YC company, on our latest episode of the accelerometer! Here are a few highlights from the conversation: #FineTuningCosts: "Cost-wise, we can come in at between 1/10th and 1/50th of what you're paying right now, depending on how small a model you can get away with." #StartupFamily: "I chose to work with my brother because he was the best co-founder I knew, and so it's been great." #ClassifyEverything: "They're actually classifying every regulation in the world, which I just think is super cool, that that's like a thing that's possible now." Check the comments below for a link to the full video!