Today, we’re announcing Scale has closed $1B of financing at a $13.8B valuation, led by existing investor Accel. For 8 years, Scale has been the leading AI data foundry helping fuel the most exciting advancements in AI, including autonomous vehicles, defense applications, and generative AI. With today’s funding, we’re moving into the next phase of our journey: accelerating the abundance of frontier data to pave the road to Artificial General Intelligence (AGI).

“Our vision is one of data abundance, where we have the means of production to continue scaling frontier LLMs many more orders of magnitude. We should not be data-constrained in getting to GPT-10.” - Alexandr Wang, CEO and founder of Scale AI

This new funding also enables Scale to build on our prior model evaluation work with enterprise customers, the U.S. Department of Defense, and our collaboration with the White House, deepening our capabilities and offerings for both public and private evaluations.

There’s a lot left to do. If this challenge excites you, join us: https://scale.com/careers

Read the full announcement: https://lnkd.in/gVBhaPZ5
Scale AI
Software Development
San Francisco, California · 187,345 followers
The Data Engine that powers the most advanced AI models.
About us
At Scale, our mission is to accelerate the development of AI applications. We believe that to make the best models, you need the best data. The Scale Generative AI Platform leverages your enterprise data to customize powerful base generative models to safely unlock the value of AI. The Scale Data Engine consists of all the tools and features you need to collect, curate, and annotate high-quality data, in addition to robust tools to evaluate and optimize your models. Scale powers the most advanced LLMs and generative models in the world through world-class RLHF, data generation, model evaluation, safety, and alignment. Scale is trusted by leading technology companies like Microsoft and Meta, enterprises like Fox and Accenture, generative AI companies like OpenAI and Cohere, U.S. government agencies like the U.S. Army and the U.S. Air Force, and startups like Brex and OpenSea.
- Website
- https://scale.com
- Industry
- Software Development
- Company size
- 501-1,000 employees
- Headquarters
- San Francisco, California
- Type
- Privately held
- Founded
- 2016
- Specialties
- Computer Vision, Data Annotation, Sensor Fusion, Machine Learning, Autonomous Driving, APIs, Ground Truth Data, Training Data, Deep Learning, Robotics, Drones, NLP, and Document Processing
Locations
Primary
303 2nd St
South Tower, 5th FL
San Francisco, CA 94107, US
Scale AI employees
Updates
-
Scale AI reposted
Harris-Stowe State University and the scholars of the HBCU Immersion in GEOINT thank Scale AI for supporting our trip to New York to attend the American Geographical Society GEO2050 Symposium.
-
How does last week’s GPT-4o release compare to Claude 3.5 Sonnet for writing? In the latest blog from Scale, we dive into the differences between the two models’ writing styles, how those styles interact with AI safety, and which model you should use for your needs: https://lnkd.in/g3r-UtZq
-
“It’s me using every inch and every little bit of knowledge that I have taken a whole lifetime to learn and using it in different and very creative ways,” says Gabriela Sanders, an Outlier contributor. Generating high-quality training data to fuel the GenAI revolution is critical for the next generation of LLMs and advanced AI capabilities. Get a behind-the-scenes look at Outlier, our platform powering the future of GenAI: https://lnkd.in/gzybuCMx
-
We’re on Fortune's 50 AI Innovators list again! The list recognizes leading AI startups that exemplify the innovation, progress, and potential of AI. Join us on our mission to accelerate the development of AI applications: scale.com/careers https://lnkd.in/gx-JDdHV
-
Join us for our next webinar introducing a novel approach to improving LLM alignment. This talk is based on a NeurIPS 2024 main track paper from Scale’s research team.

Webinar: Aligning LLMs with Representation Learning
When: Dec 4, 10 AM PT
Who: Sean Hendryx, Head of Generative AI/ML at Scale, and Vaskar Nath, ML Research Engineer at Scale

What you’ll learn:
- A novel architecture for reward modeling that achieves large improvements on both math and natural language tasks
- How to implement goal-conditioned rewards into post-training and decoding pipelines to reduce computational costs
- New directions for developing more reliable, efficient, and interpretable AI systems

Save your spot today! https://lnkd.in/gDt4qHiP
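Ahead of the webinar, a minimal, hypothetical sketch of the general idea behind goal-conditioned reward modeling may help: score a candidate continuation conditioned on a representation of the desired outcome, so decoding can prune unpromising branches early. Everything below (the class name, dimensions, and the concat-then-MLP scorer) is an illustrative assumption, not the architecture from the NeurIPS paper.

```python
import torch
import torch.nn as nn

class GoalConditionedRewardModel(nn.Module):
    """Hypothetical sketch of a goal-conditioned reward model: it scores a
    partial response *given* a representation of the desired outcome, rather
    than scoring the response in isolation. Illustrative only."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, state_emb: torch.Tensor, goal_emb: torch.Tensor) -> torch.Tensor:
        # Concatenate the embedding of the partial generation (state) with
        # the goal embedding, then score the pair with a small MLP.
        return self.scorer(torch.cat([state_emb, goal_emb], dim=-1))

# Usage sketch: rank 4 candidate continuations against the same goal during
# decoding and keep the highest-scoring one.
model = GoalConditionedRewardModel(hidden_dim=768)
candidates = torch.randn(4, 768)          # embeddings of candidate partial responses
goal = torch.randn(1, 768).expand(4, -1)  # one goal embedding, broadcast per candidate
scores = model(candidates, goal)          # shape (4, 1)
best = scores.squeeze(-1).argmax().item() # index of the continuation to keep
```

Ranking candidates against the goal at each decoding step is one plausible way conditioning could reduce compute: low-scoring branches get dropped before they are extended further.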
-
Today, we are excited to announce a collaboration with Microsoft to accelerate enterprise generative AI adoption with fine-tuned Azure AI models. With Scale’s expertise in data transformation, fine-tuning, and building end-to-end Generative AI solutions combined with the power of Microsoft Azure AI services, enterprises benefit from a complete AI solution offering. Learn more: https://lnkd.in/gNUnVsmK
-
"Many organizations have access to AI tools but lack the skilled personnel needed for effective integration. This underscores the importance of partnerships to help bridge this knowledge gap and facilitate smoother transitions to AI-driven solutions." Read more from our CFO?Dennis Cinelli in the Algorithms vs Applications report from Economist Impact. The report examines the factors influencing investment decisions in today's AI ecosystem. ?? https://lnkd.in/grG5jtCZ
-
“As we reflect on the value of service at the heart of Veterans Day, perhaps the most impactful way to celebrate is by dedicating ourselves to pursuing more opportunities for service in our personal and professional lives and remembering that service can take many forms.” We thank all veterans for their service and are proud to highlight the contributions made by Scaliens who have served. Bryan Lee reflects on rekindling a sense of mission and purpose after military service in today’s blog. Read the full story here → https://lnkd.in/gb-5agAN
-
Contrary to prior work, new research from Scale finds that LLMs continue to learn new knowledge during post-training, following a power law similar to well-known pre-training scaling laws. Let’s dive in.

The Superficial Alignment Hypothesis suggests that most of a language model's knowledge and skills come from its initial training, and that post-training merely gives the model the right style and format. However, our research team found that when evaluated appropriately on reasoning benchmarks, LLMs continue to learn and apply new information to better tackle complex questions. Specifically, they found that, just like pre-training scaling laws, post-training performance scales as a power law against the number of fine-tuning examples.

This implies that the Superficial Alignment Hypothesis is an oversimplification of how models learn. Relying on human preference votes alone can be misleading, especially for complex reasoning tasks. Evaluating models using both human preference and objective reasoning benchmarks provides a more holistic picture of a model's true capabilities.

Read the full paper here from authors Mohit Raghavendra, Vaskar Nath, and Sean Hendryx: https://lnkd.in/gXNzCgvD
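To make the power-law claim concrete, here is a minimal sketch of fitting such a curve to fine-tuning results; the data points and the exact functional form (accuracy ≈ a·N^b) are illustrative assumptions, not numbers from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical results: benchmark accuracy after fine-tuning on increasing
# numbers of examples. Illustrative values, not data from the paper.
n_examples = np.array([1_000, 3_000, 10_000, 30_000, 100_000], dtype=float)
accuracy = np.array([0.42, 0.47, 0.53, 0.58, 0.64])

def power_law(n, a, b):
    # performance = a * N**b; on log-log axes this is a straight line,
    # the signature of power-law scaling.
    return a * n ** b

(a, b), _ = curve_fit(power_law, n_examples, accuracy, p0=(0.05, 0.2))
print(f"Fitted: accuracy ~ {a:.3f} * N^{b:.3f}")

# If the law held, accuracy at 1M fine-tuning examples would be roughly:
print(f"Predicted at N=1e6: {power_law(1e6, a, b):.3f}")
```

Running the same fit on real benchmark numbers is what lets you ask whether additional fine-tuning data is still buying capability, which is the crux of the argument against the Superficial Alignment Hypothesis.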