New: Organize your environments with Workspaces

Workspaces allow you to create individual environments in Beam. For example, you can create separate workspaces for development, staging, and production. It's live now! Log in to your dashboard and give it a try: https://beam.cloud
About us
Bring your code, we'll scale it to hundreds of servers in the cloud.
- Website
- https://beam.cloud
- Industry
- Data Infrastructure and Analytics
- Company size
- 2-10 employees
- Type
- Privately Held
- Founded
- 2022
- Specialties
- infrastructure and serverless
Updates
-
Announcing Beam Shell: SSH Into Your Containers

Many of you have asked whether you can SSH into your Beam containers. Well, now you can. Just run `beam shell` and you'll connect to the same Beam container that's running your code. This helps a lot with debugging, because you can see everything installed in your container:
- nvcc --version
- which python
- echo $PATH
...anything you want. Try it out and let us know what you think!
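The checks listed above can also be run from Python inside the shell session. A minimal sketch using only the standard library (nothing Beam-specific is assumed here):

```python
import os
import shutil
import sys

# Interpreter actually running your code (the "which python" check)
print("python:", sys.executable)

# Locate nvcc on PATH; None means no CUDA toolkit in the image
print("nvcc:", shutil.which("nvcc"))

# The container's search path (the "echo $PATH" check)
print("PATH:", os.environ.get("PATH", ""))
```

This is handy when you want one script that captures the whole environment instead of running the commands one by one.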
-
It's been a crazy year! For our last release of 2024, we shipped:

Multi-GPU support
You can now run workloads across multiple GPUs! This lets you run workloads that might not fit on a single GPU. For example, you could run a 13B parameter LLM on 2x A10Gs, which normally would only fit on a single A100-40.

Invocations from the dashboard
We added a "Run Now" button to the dashboard to instantly invoke an app and warm up the container.

Import local Dockerfiles
We wanted to make it easier to use existing Docker images on Beam. You can now use a Dockerfile that you have locally to create your Beam image.

Pass secrets to image builds
You can now pass secrets into your image builds, useful for accessing private repos or running build steps that require credentials of some kind.

And we've got more amazing new features coming in January. It's been an exciting year, and we can't wait to ship more stuff for you in 2025. Happy New Year!
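The 13B example above comes down to simple memory arithmetic. A rough back-of-the-envelope check in Python (fp16 weights only, ignoring activation and KV-cache overhead, with nominal VRAM sizes):

```python
params = 13e9          # 13B parameter LLM
bytes_per_param = 2    # fp16 weights
weights_gb = params * bytes_per_param / 1e9

a10g_gb = 24           # A10G VRAM
a100_40_gb = 40        # A100-40 VRAM

print(f"weights: {weights_gb:.0f} GB")                    # 26 GB
print(f"fits on one A10G? {weights_gb <= a10g_gb}")       # False
print(f"fits on 2x A10G?  {weights_gb <= 2 * a10g_gb}")   # True
print(f"fits on A100-40?  {weights_gb <= a100_40_gb}")    # True
```

26 GB of weights overflows a single 24 GB A10G but fits comfortably across two of them (48 GB total), or on a single A100-40.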
-
Launch Week, Day 5

It's the final day of MEGA LAUNCH WEEK! Today we're shipping a CLI for our self-hosted users to manage their Beam cluster. At Beam, we run a globally distributed cloud with a lot of GPUs. We're constantly adding hardware to keep up with demand. And if you self-host Beam, you'll have access to the same tools we use to add GPU capacity internally. For more details on the new CLI (and a breakdown of how we add GPU capacity), make sure to check out our blog! Link below.
-
Launch Week, Day 4

Today we're launching our JavaScript SDK. This makes it even easier to integrate Beam into your Next.js or React apps. With the JavaScript SDK, you can...
- Invoke deployed apps
- Open realtime websocket connections
- Retrieve the status of a running task
- Cancel tasks
The SDK makes it easy to do all of these things without having to write any boilerplate! More details in our blog post, linked below.
-
It's Day 3 of our Mega Launch Week, and we're shipping our biggest feature of the week. Today we're releasing Beam Bots. It's a new bot framework with sandboxed compute and concurrency built in. We get it, you've seen hundreds of agent frameworks. But this one is different. Using Beam Bots, you can...
- Build research analysts to look up 1,000s of companies
- Deploy automated customer support agents that can multi-task
- Run any agent task on a GPU-backed container
- Execute untrusted agent code in sandboxed compute environments
Bots are available today, and you can start building right away. We'll link to the documentation in the comments below.
-
Launch Week, Day 2

It's Day 2 of our MEGA LAUNCH WEEK with Supabase, PropelAuth, Magic Patterns, Jamsocket, and more. Today we're releasing realtime apps. By using realtime, you can:
- Deploy serverless websockets with SSL
- Handle hundreds of concurrent requests
- Build interactivity into your apps
Customers are already using realtime to deploy realtime transcription apps with OpenAI Whisper, streaming LLM responses with Mistral AI models, and much more!
-
Launch Week, Day 1

Welcome to the first launch in our first-ever launch week! Today we're releasing our new vLLM engine. We've built the vLLM serving engine directly into the Beam SDK:
- Run vLLM models on the cloud with one command
- Deploy vLLM inference endpoints
- Scale out to 100s of GPUs
- Automatic scale-to-zero
vLLM is available in the Beam SDK today, and you can check out our docs for an end-to-end tutorial (link in comments).
-
Realtime serverless backends, made easy

We just made it a lot easier to build real-time apps on Beam. Just add a @realtime decorator to your Python code and you'll get...
- SSL-terminated websocket URL
- Job concurrency
- GPU support
- Globally distributed file storage
Give it a try: https://lnkd.in/ebPEUkSy