Anyscale

Software Development

San Francisco, California · 35,684 followers

Scalable compute for AI and Python

About us

Anyscale enables developers of all skill levels to easily build applications that run at any scale, from a laptop to a data center.
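
A minimal sketch of the "laptop to data center" claim, using the open-source Ray API that Anyscale builds on (this example is ours, not from the page):

```python
import ray

ray.init()  # On a laptop this starts a local cluster; pass an address to join a remote one.

@ray.remote
def square(x):
    # Each call becomes a task that Ray schedules across available cores or nodes.
    return x * x

# The same code runs unchanged whether Ray has one machine or hundreds.
futures = [square.remote(i) for i in range(100)]
print(sum(ray.get(futures)))
```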

Website
https://anyscale.com
Industry
Software Development
Company size
51-200 employees
Headquarters
San Francisco, California
Type
Privately held
Founded
2019

Products

Locations

Employees at Anyscale

Updates

  • Anyscale

    35,684 followers

    Will you be at AWS re:Invent 2024? Join MongoDB, Anyscale, Cohere, and Fireworks AI for an exclusive panel on how to leverage the best tools to create scalable, high-performance #genAI applications.
    - Date: Wednesday, December 4, 2024
    - Time: 1pm
    - Location: Bollinger @ Wynn Las Vegas
    Don't miss your chance to hear insights from the top leaders driving innovation in the AI space. See you there!

  • Anyscale reposted

    Robert Nishihara

    Co-founder at Anyscale

    Patrick Ames from AWS has more experience than just about anyone with managing thousands of Ray clusters and Spark clusters and processing exabytes of data. There are tremendous operational and performance challenges:
    - managing thousands of clusters
    - scaling to many petabytes or exabytes
    - latency and cost
    The original blog post is here: https://lnkd.in/g-pJhFei https://lnkd.in/gQsCrXPH

  • Anyscale

    35,684 followers

    Are you headed to #reinvent2024? Join us on Dec. 4th for a hands-on workshop with Amazon Web Services (AWS) and learn how to scale generative AI workloads! This session will cover:
    - Building LLM apps with RayTurbo, Anyscale's optimized Ray runtime.
    - Advanced scaling: autoscaling, CPU/GPU integration, replica compaction.
    - Fine-tuning LLMs with LLM-Forge.
    - Debugging and profiling with Anyscale observability tools.
    Seats are limited. Register here: https://lnkd.in/gqRYFPz6 (A sketch of the autoscaling idea, using open-source Ray Serve, follows below.)

    Generative AI Workloads with Anyscale and AWS

    learn.anyscale.com
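
    For readers curious what the autoscaling bullet looks like in code, here is a hedged sketch using open-source Ray Serve. RayTurbo, replica compaction, and LLM-Forge are Anyscale products whose APIs are not shown on this page, so a plain Serve deployment stands in, and the EchoModel stub replaces a real LLM.

    ```python
    from ray import serve

    @serve.deployment(
        # Ray Serve scales the replica count between these bounds based on traffic;
        # the target-load key is named target_ongoing_requests in recent Ray versions.
        autoscaling_config={
            "min_replicas": 1,
            "max_replicas": 8,
            "target_ongoing_requests": 5,
        },
        # CPU/GPU integration: a replica could instead reserve a GPU with
        # ray_actor_options={"num_gpus": 1}.
    )
    class EchoModel:
        async def __call__(self, request) -> str:
            # A real LLM app would run inference here; echoing keeps the sketch runnable.
            return (await request.body()).decode().upper()

    serve.run(EchoModel.bind())  # Serves over HTTP on localhost:8000 by default.
    ```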

  • Anyscale reposted

    Robert Nishihara

    Co-founder at Anyscale

    Talked with John Schulman about the ChatGPT backstory last year. John co-founded OpenAI and created ChatGPT. He sheds light on where the idea came from and their surprise at the world's reaction.

    **How did you get the idea for ChatGPT?**

    I personally was working on a project called WebGPT before, which was a question answering system that would go and find relevant sources by doing a web search and browsing some of the pages and then writing a one or two paragraph answer with citations. What we were trying to get at with that project was getting language models to use tools. We were trying to work on this problem of truthfulness: how do you get models to not make things up?

    After this project, we were trying to figure out what the next version of it was. For question answering, chat starts to make a lot of sense because you need to do a lot of things like follow-up questions and clarifying questions. We'd been playing with chat internally, and it seemed like the models were quite good at it. So we decided to have a dialogue-based system for the next iteration. We started collecting data in early 2022 that was specific for chat. Originally it was going to be a successor to WebGPT, but the whole retrieval part ended up being complicated, so we dropped that and focused on the chat models.

    We had an internal demo, and I used it a lot for coding help. We started to think it was a good idea to do a public release and let other people try out the model. That ended up getting delayed a bit because GPT-4 finished training and everyone got excited about that. The chat model we had trained was based on GPT-3.5, and that got sidelined for a while. We decided to do a release anyway and ended up launching it late in November.

    **Were you surprised by the world's reaction to ChatGPT?**

    We were very surprised. We did have beta testers, and there were some enthusiastic users, especially people using it for code, but people weren't that excited, and not all the users ended up coming back to it. Only a few of the people we gave access to ended up using it regularly. I think what happened was when everyone got access, people taught each other how to use it and what use cases ended up working, so the social aspect of it was really important. I think the fact that it was really easy to use and people could share their use cases with each other caused this mass excitement. https://lnkd.in/g2qUzDXe

  • Anyscale

    35,684 followers

    Heading to #reinvent2024 next month? Don't miss our hands-on workshop on Dec. 4th on scaling generative AI workloads with Anyscale & Amazon Web Services (AWS). The hands-on session includes:
    - Building end-to-end LLM applications with RayTurbo, Anyscale's optimized Ray runtime.
    - Applying advanced performance techniques like dynamic autoscaling, heterogeneous hardware integration (CPU/GPU), and replica compaction.
    - Fine-tuning large language models with Anyscale's LLM-Forge solution.
    - Using Anyscale observability tools to debug, diagnose, and profile distributed AI workloads effectively.
    ...and more! Register now: https://lnkd.in/gqRYFPz6 Seats are limited, so sign up today!

    Generative AI Workloads with Anyscale and AWS

    learn.anyscale.com

  • Anyscale reposted

    Robert Nishihara

    Co-founder at Anyscale

    John Schulman co-founded OpenAI and created ChatGPT. Some highlights from a conversation last year about *scaling laws* (diminishing returns?) and *ChatGPT* (what was the backstory?).

    **Where did the belief in the importance of scaling models and compute come from?**

    The idea that bigger models are better was a bit in the zeitgeist. The founding team of OpenAI leaned more toward this aesthetic of scaling up simple things as opposed to complicated clever things. We believed doing the simple thing right tended to win in machine learning. Scaling looks obvious when you see the final result, but often there's a lot of complexity in getting there. The engineering is difficult. There are usually all these details. You have to scale your learning rates just right. You have to scale your data up along with the model size. It took several years to figure out the right recipes for scaling things. You usually have a ton of hyperparameters that have to be scaled properly, and you have to do a lot of science to figure out how to scale them.

    **Are there diminishing returns to scaling?**

    Often the returns diminish as you scale the current thing, but then there are other innovations that let you continue. So I don't see deep learning in general reaching a plateau. Maybe doing the most basic thing reaches diminishing returns. https://lnkd.in/g2qUzDXe
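
    For concreteness, a worked version of the "scale your data up along with the model size" point, taken from the published Chinchilla scaling-law analysis (Hoffmann et al., 2022), not from Schulman's remarks:

    ```latex
    % Fitted parametric loss; N = parameter count, D = training tokens.
    L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
    % Minimizing under a fixed compute budget C \approx 6ND gives
    % N \propto C^{0.5} and D \propto C^{0.5}: compute-optimal training grows the
    % dataset in proportion to the model (roughly 20 tokens per parameter).
    ```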

  • Anyscale

    35,684 followers

    Attending KubeCon? Visit us at booth S43 to connect with our team and explore KubeRay, an advanced Kubernetes operator designed to streamline the deployment and management of Ray applications on Kubernetes. (A minimal sketch of what the operator manages follows below.)

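    As a sketch of what KubeRay manages: the operator watches RayCluster custom resources and reconciles them into pods. A minimal RayCluster could be created with the official Kubernetes Python client as below; the field names follow the public KubeRay CRD (group ray.io), but treat the exact spec as an assumption to check against the KubeRay docs.

    ```python
    from kubernetes import client, config

    config.load_kube_config()  # Assumes kubectl access and an installed KubeRay operator.

    ray_cluster = {
        "apiVersion": "ray.io/v1",
        "kind": "RayCluster",
        "metadata": {"name": "demo-cluster"},
        "spec": {
            "headGroupSpec": {
                "rayStartParams": {},
                "template": {"spec": {"containers": [
                    {"name": "ray-head", "image": "rayproject/ray:latest"},
                ]}},
            },
            "workerGroupSpecs": [{
                "groupName": "workers",
                "replicas": 2, "minReplicas": 1, "maxReplicas": 5,
                "rayStartParams": {},
                "template": {"spec": {"containers": [
                    {"name": "ray-worker", "image": "rayproject/ray:latest"},
                ]}},
            }],
        },
    }

    # The KubeRay operator turns this resource into head and worker pods.
    client.CustomObjectsApi().create_namespaced_custom_object(
        group="ray.io", version="v1", namespace="default",
        plural="rayclusters", body=ray_cluster,
    )
    ```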
  • Anyscale reposted

    Anyscale

    35,684 followers

    Anyscale is excited to be named a 2024 Gartner® Cool Vendor in AI Engineering. Our mission to simplify scalable AI and enable AI workloads is central to our approach. We're proud to be named a Cool Vendor by Gartner® and believe this recognition validates how our Unified AI Platform (which supports any model, any framework, any accelerator, and integrates with any cloud) sets us apart. Read the full Gartner® report here: https://lnkd.in/gwQJy7Jt


Similar pages

View jobs

Funding