Predibase

Software Development

San Francisco, CA · 9,828 followers

The highest quality models with the fastest throughput tailored to your use case—served in your cloud or ours.

About us

As the first platform for reinforcement fine-tuning, Predibase makes it easy for AI teams to customize and serve any open-source LLM on state-of-the-art cloud infrastructure, with no labeled data required. Built by the team that created the internal AI platforms at Apple and Uber, Predibase is fast, efficient, and scalable for jobs of any size. It pairs an easy-to-use declarative interface for training models with high-end GPU capacity on serverless infrastructure for production serving. Most importantly, Predibase is built on open-source foundations, including Ludwig and LoRAX, and can be deployed in your private cloud so all of your data and models stay under your control. Predibase is helping industry leaders, including Checkr, Qualcomm, Marsh McLennan, Convirza, Sense, Forethought AI, and more, deliver AI-driven value back to their organizations in days, not months. Try Predibase for free: https://predibase.com/free-trial.

Website
https://www.predibase.com
Industry
Software Development
Company size
11–50 employees
Headquarters
San Francisco, CA
Type
Privately held

Locations

Predibase employees

Updates

  • View Predibase's organization page

    9,828 followers

    Introducing the first end-to-end platform for #Reinforcement Fine-Tuning! With just a dozen labeled examples, you can fine-tune models that outperform OpenAI and #DeepSeek on complex tasks. Built on the GRPO methodology that DeepSeek-R1 popularized, our platform lets you turn any open-source LLM into a reasoning powerhouse. In our real-world PyTorch-to-Triton transpilation case study, we achieved 3x higher accuracy than OpenAI o1 and DeepSeek-R1 when writing GPU code, unlocking smarter, more efficient AI models. What's next?
    • Read the blog to see how it works.
    • Join us for the launch webinar on 3/27 to dive deep into RFT.
    • Try the #RFT Playground.
    All links are in the comments. Let's redefine what's possible with fine-tuned AI!

  • View Predibase's organization page

    9,828 followers

    Yesterday we launched Reinforcement Fine-Tuning! Use it to fine-tune open-source LLMs that outperform commercial models with only a handful of labeled examples. RFT helps align models to your specific needs through reward-based learning. Curious what it looks like? Take a look at our interactive RFT playground. You can:
    • See reward scores improve over time
    • Explore each reward function
    • Inspect model generations throughout training
    Check it out for yourself! (Link in comments)

  • Predibase reposted

    View Devvret Rishi's profile

    CEO @ Predibase | Co-Founder

    Starting today, you can build your own custom AI models without collecting labeled data, using the first end-to-end reinforcement fine-tuning platform. DeepSeek-R1 showed the world how RL can solve challenging reasoning tasks, and now we've baked these capabilities into an intuitive platform so anyone can leverage self-improving models for their use cases. RFT guides a model with reward functions and unlabeled data in a new interactive experience that's a game-changer for tasks like code generation, complex RAG, and more. Our early results with the platform have blown us away. We used it ourselves to build a 32B-parameter model, which we're open-sourcing today, that writes CUDA code 3x more accurately and faster than OpenAI's o1 or DeepSeek-R1. And we're already seeing great traction with early customers. I'm very excited to launch the first version of this experience and give you an early look at what we're building. Check out how it works in the thread below, and let's shape the future of custom AI together.
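    To make the reward-based learning described above concrete, here is a minimal sketch of the kind of programmatic reward function RFT relies on. This is illustrative only, not Predibase's actual API: the scoring criteria (syntax validity, presence of a required function name) are assumptions chosen for a code-generation task.

```python
# Hypothetical sketch of a reward function for reinforcement fine-tuning.
# In GRPO-style training, each sampled completion receives a scalar reward;
# the policy is then updated to favor higher-reward completions.
import ast

def reward(completion: str, required_fn: str = "kernel") -> float:
    """Score a generated Python snippet between 0.0 and 1.0."""
    score = 0.0
    try:
        tree = ast.parse(completion)  # 0.5 for syntactically valid code
        score += 0.5
    except SyntaxError:
        return score
    # 0.5 more if the snippet defines the function the task asked for
    if any(isinstance(node, ast.FunctionDef) and node.name == required_fn
           for node in ast.walk(tree)):
        score += 0.5
    return score
```

    Because rewards are computed from checks rather than labels, this is how RFT can train on unlabeled data: any property you can verify programmatically can drive learning.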

  • View Predibase's organization page

    9,828 followers

    1000s of fine-tuned #LLMs on 1 GPU, because why would you do anything else? LLMs as they were meant to be served, with open-source #LoRAX. Thanks for the shoutout, Akshay Pachaar

    View Akshay Pachaar's profile

    Co-Founder DailyDoseOfDS | BITS Pilani | 3 Patents | X (187K+)

    Serve 1000s of fine-tuned LLMs on a single GPU! (100% open-source, Apache 2.0) LoRAX by Predibase enables users to serve thousands of fine-tuned models on one GPU, cutting costs without sacrificing speed or performance. Here's what makes it a game-changer:
    • OpenAI-compatible API
    • Merge multiple adapters on the fly
    • Handle requests for different adapters simultaneously
    • Dynamically load adapters from HF, Predibase, or local files
    • Enhance performance with quantization & custom CUDA kernels
    • Production-ready with Docker, Helm charts, & OpenTelemetry
    Here's the best part: it's 100% open-source (Apache 2.0 license). I've shared the link to their GitHub repo in the comments!
    Find me → Akshay Pachaar for more insights & tutorials on AI and Machine Learning.
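    Since the post highlights LoRAX's OpenAI-compatible API and dynamic adapter loading, here is a hedged sketch of what selecting one fine-tuned adapter per request might look like. The adapter ID and endpoint path are illustrative placeholders, not a real deployment; LoRAX's own documentation is the authority on exact fields.

```python
# Sketch: with an OpenAI-compatible endpoint, choosing one of the thousands
# of hosted LoRA adapters is done via the standard "model" field of a
# chat-completions request. Adapter ID and endpoint are placeholders.
import json

def chat_payload(adapter_id: str, prompt: str, max_tokens: int = 128) -> dict:
    """Build an OpenAI-style chat-completions request body for one adapter."""
    return {
        "model": adapter_id,  # the fine-tuned adapter to load dynamically
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

# POST this JSON to <lorax-host>/v1/chat/completions with any HTTP client
payload = chat_payload("acme/sentiment-lora", "Classify: great product!")
print(json.dumps(payload, indent=2))
```

    Because each request names its adapter, many different fine-tunes can share one base model on one GPU, which is the cost win the post describes.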

  • View Predibase's organization page

    9,828 followers

    What does it take to build #agentic AI workflows at massive scale? Check out our latest story from Marsh McLennan, one of the world's largest professional services firms, to learn how they built LenAI, an internal #copilot that helps their 90,000 employees automate time-intensive tasks. Here are a few highlights:
    • Over 1 million hours saved with AI-driven #efficiency
    • Improved model #accuracy by up to 12% with fine-tuned SLMs
    • Reduced response times to mere seconds with #TurboLoRA
    Links in the comments

  • View Predibase's organization page

    9,828 followers

    The future of #AI isn't about massive, one-size-fits-all models. It's about #specialized, domain-specific models that excel at their tasks, and reinforcement fine-tuning (#RFT) is key to achieving peak results. Curious about advanced LLM #customization methods, how RFT works in practice, and real-world applications? Then you won't want to miss the latest episode of The Data Exchange Podcast with AI visionary Ben Lorica and Predibase CTO & Cofounder Travis Addair. Link in the comments.



Funding

Predibase: 2 total rounds

Last round

Series A

US$12,200,000.00

Investors

Felicis
See more on Crunchbase