Scale AI


Software Development

San Francisco, California · 187,345 followers

The Data Engine that powers the most advanced AI models.

About us

At Scale, our mission is to accelerate the development of AI applications. We believe that to make the best models, you need the best data. The Scale Generative AI Platform leverages your enterprise data to customize powerful base generative models and safely unlock the value of AI. The Scale Data Engine consists of all the tools and features you need to collect, curate, and annotate high-quality data, along with robust tools to evaluate and optimize your models. Scale powers the most advanced LLMs and generative models in the world through world-class RLHF, data generation, model evaluation, safety, and alignment. Scale is trusted by leading technology companies like Microsoft and Meta, enterprises like Fox and Accenture, generative AI companies like OpenAI and Cohere, U.S. government agencies like the U.S. Army and the U.S. Air Force, and startups like Brex and OpenSea.

Website
https://scale.com
Industry
Software Development
Company size
501-1,000 employees
Headquarters
San Francisco, California
Type
Privately held
Founded
2016
Specialties
Computer Vision, Data Annotation, Sensor Fusion, Machine Learning, Autonomous Driving, APIs, Ground Truth Data, Training Data, Deep Learning, Robotics, Drones, NLP, and Document Processing

Locations

  • Primary

    303 2nd St

    South Tower, 5th FL

    San Francisco, California 94107, US


Scale AI employees

Updates

  • Scale AI

    Today, we’re announcing Scale has closed $1B of financing at a $13.8B valuation, led by existing investor Accel. For 8 years, Scale has been the leading AI data foundry, helping fuel the most exciting advancements in AI, including autonomous vehicles, defense applications, and generative AI. With today’s funding, we’re moving into the next phase of our journey: accelerating the abundance of frontier data to pave the road to Artificial General Intelligence (AGI).

    “Our vision is one of data abundance, where we have the means of production to continue scaling frontier LLMs many more orders of magnitude. We should not be data-constrained in getting to GPT-10.” - Alexandr Wang, CEO and founder of Scale AI

    This new funding also enables Scale to build upon our prior model evaluation work with enterprise customers, the U.S. Department of Defense, and our collaboration with the White House, deepening our capabilities and offerings for both public and private evaluations.

    There’s a lot left to do. If this challenge excites you, join us: https://scale.com/careers

    Read the full announcement: https://lnkd.in/gVBhaPZ5

    Scale’s Series F: Expanding the Data Foundry for AI


    scale.com

  • Scale AI

    “It’s me using every inch and every little bit of knowledge that I have taken a whole lifetime to learn and using it in different and very creative ways,” says Gabriela Sanders, an Outlier contributor. Fueling the GenAI revolution by generating high-quality training data is critical for the next generation of LLMs and advanced AI capabilities. Get a behind-the-scenes look at Outlier, our platform powering the future of GenAI: https://lnkd.in/gzybuCMx

  • Scale AI

    Join us for our next webinar introducing a novel approach to improving LLM alignment. This talk is based on a NeurIPS 2024 main-track paper from Scale’s research team. Save your spot!

    Webinar: Aligning LLMs with Representation Learning
    When: Dec 4, 10 AM PT
    Who: Sean Hendryx, Head of Generative AI/ML at Scale, and Vaskar Nath, ML Research Engineer at Scale

    What you’ll learn:
    • a novel architecture for reward modeling that achieves large improvements on both math and natural language tasks
    • how to implement goal-conditioned rewards into post-training and decoding pipelines to reduce computational costs
    • new directions for developing more reliable, efficient, and interpretable AI systems

    Save your spot today! https://lnkd.in/gDt4qHiP

  • Scale AI

    "Many organizations have access to AI tools but lack the skilled personnel needed for effective integration. This underscores the importance of partnerships to help bridge this knowledge gap and facilitate smoother transitions to AI-driven solutions." Read more from our CFO Dennis Cinelli in the Algorithms vs Applications report from Economist Impact. The report examines the factors influencing investment decisions in today's AI ecosystem: https://lnkd.in/grG5jtCZ

  • Scale AI

    “As we reflect on the value of service at the heart of Veterans Day, perhaps the most impactful way to celebrate is by dedicating ourselves to pursuing more opportunities for service in our personal and professional lives and remembering that service can take many forms.” We thank all veterans for their service and are proud to highlight the contributions made by Scaliens who have served. Bryan Lee reflects on rekindling a sense of mission and purpose after military service in today’s blog. Read the full story here → https://lnkd.in/gb-5agAN

  • Scale AI

    Contrary to prior work, new research from Scale finds that LLMs continue to learn new knowledge during post-training, following a power law similar to well-known pre-training scaling laws. Let’s dive in.

    The Superficial Alignment Hypothesis suggests that most of a language model's knowledge and skills come from its initial training, and that post-training is mostly about giving the model the right style and format. However, our research team found that, when evaluated appropriately on reasoning benchmarks, LLMs continue to learn and apply new information to better tackle complex questions. Specifically, they found that, just like pre-training scaling laws, post-training performance scales as a power law in the number of fine-tuning examples.

    This implies the Superficial Alignment Hypothesis is an oversimplification of how models learn. Relying on human preference votes alone can be misleading, especially for complex reasoning tasks; evaluating models on both human preference and objective reasoning benchmarks provides a more holistic picture of a model's true capabilities.

    Read the full paper from authors Mohit Raghavendra, Vaskar Nath, and Sean Hendryx: https://lnkd.in/gXNzCgvD

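The post above describes post-training performance scaling as a power law in the number of fine-tuning examples. As a rough sketch of what that claim means (the symbols and constants below are illustrative, not taken from the paper):

```latex
% Hypothetical form of the power law described in the post:
%   n     = number of fine-tuning examples
%   P(n)  = benchmark performance after post-training on n examples
%   C, \alpha = fitted constants (values not reported in this post)
P(n) \approx C \cdot n^{\alpha}
% Taking logarithms gives a linear relation,
%   \log P(n) \approx \log C + \alpha \log n,
% which is why such scaling appears as a straight line on a log-log plot,
% mirroring the familiar pre-training scaling-law presentation.
```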


Funding

Scale AI: 9 total rounds

Last round

Series F

US$1,000,000,000.00

See more on Crunchbase