hey hey - here's what's been cooking with us this past month.
#Hamilton highlights: crossed 2,000 GitHub stars!
> Released multithreading-based DAG parallelism. Want to easily parallelize a DAG that makes lots of LLM calls? Easy: try our multithreading-based approach.
> RichProgressBar adapter. Want a better command-line experience? Check out this adapter, which tells you where Hamilton is with respect to computation. Thanks to Charles S. for the contribution.
> Hamilton installs in a WASM environment. We're excited to enable more WASM-based use of Hamilton with tools like marimo. We see more future use of Python in the browser, and we're excited to support using Hamilton in that context.
#Burr highlights: crossed 1,500 GitHub stars!
> Async persisters. We added asynchronous lifecycle hooks and implementations to enable more performant persisters when using asynchronous Python with Burr. Thanks to Jernej Frank for the contributions here!
> Tag aliases for actions. You can now tag all actions that require input, or responses, etc., and then use that tag when running your Burr application. As graphs get larger, this is one way to manage your control-flow logic more easily without having to update it each time your actions change.
> Burr installs in a WASM environment. Note: Burr has always worked in a WASM environment, so no new features here.
For more details, jump into the newsletter.
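The idea behind multithreading-based DAG parallelism is that independent branches of a DAG (for example, one LLM call per document) can run concurrently on threads, since the work is network-bound. This is a minimal stdlib sketch of that idea using `concurrent.futures`, not Hamilton's actual API; the function names are hypothetical stand-ins.

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_summary(doc: str) -> str:
    # Stand-in for an LLM call; network-bound work like this is
    # exactly where thread-based parallelism pays off.
    return f"summary of {doc}"

def summarize_all(docs: list[str], max_workers: int = 4) -> list[str]:
    # Each document is an independent DAG branch; the pool runs them
    # concurrently and map() preserves input order for the collect step.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(fetch_summary, docs))

results = summarize_all(["a.txt", "b.txt", "c.txt"])
```

The same shape generalizes: replace `fetch_summary` with any per-item node, and the pool with whatever executor your framework plugs in.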
DAGWorks Inc.
Data Infrastructure & Analytics
San Francisco, California · 511 followers
Empowering developers to build reliable AI Agents & AI/ML Applications.
About us
Join hundreds of companies and ship 2x-4x faster with our OSS. We're on a mission to provide an integrated development & observability experience for those building and maintaining data, ML, and AI agents & products. This is the first step toward laying the foundations for Composable AI Systems; all AI systems need observability and introspection to be first class.
How? We're standardizing how people write Python to express data, ML, LLM, & agent workflows / pipelines / applications with lightweight frameworks, so that no matter the author, it'll be easy to collaborate, connect, and, importantly, integrate observability and datastore needs in one line. This speeds up time to production and reduces TCO because code remains easy to maintain and your data flywheel stays manageable, so you can increase the top line & bottom line of your business by delivering on AI that is reliable.
We've got two open source projects:
- one focused on pipelines/workflows, called Hamilton (https://github.com/dagworks-inc/hamilton); see https://www.tryhamilton.dev
- one focused on applications, called Burr (https://github.com/dagworks-inc/burr).
Both Hamilton & Burr come with self-hostable UIs (+ enterprise & SaaS offerings). With a one-line code change, you get versioning, lineage / tracing, cataloging, and observability out of the box with Hamilton. With Burr you get tracing, observability, and persistence in a single-line addition.
Subscribe to our updates via blog.dagworks.io, or check out the products at www.dagworks.io.
- Website
- https://www.dagworks.io
- Industry
- Data Infrastructure & Analytics
- Company size
- 2-10 employees
- Headquarters
- San Francisco, California
- Type
- Privately held
- Founded
- 2022
- Specialties
- MLOps, LLMOps, Python, Open Source, Feature Engineering, RAG, Data Engineering, Data Science, Machine Learning, GenAIOps, and Agents
Locations
- Primary: US, California, San Francisco, 94107
Employees at DAGWorks Inc.
Updates
-
Fun collaboration with ScrapeGraphAI! In this post we talk about building a simple web-scraper/chat engine with #Burr, LanceDB, and #scrapegraph. We leverage the scrapegraph SDK, but it can all be done with open-source + free tools. https://lnkd.in/gyACTJ9c
-
Happy New Year! Here's a recap of the last month and some of 2024. Thanks to the community and our contributors this year.
#Hamilton + #Burr 2024 stats:
> 35M+ telemetry events (10x growth)
> 100K+ unique IPs (10x growth) from 1,000+ companies
> 1M+ total downloads
#Hamilton December release highlights:
> TypedDict support
> Snowflake example
#Burr December release highlights:
> Anyscale's Ray executor for parallelism
> Persistence, i.e. fault tolerance, for parallelism
Blogs:
> We have four this past month! They cover Hamilton internals, parallelism, and TDD LLM application development! Links in the newsletter!
In the wild:
> Burr called out by the AI Agent Ops Alliance (AOA)
> Jernej Frank's recording on the internals of Hamilton decorators
> How to deploy the Hamilton UI on Snowflake
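For context on the TypedDict support mentioned above: `typing.TypedDict` lets a function declare the exact keys and value types of the dict it returns, which a framework can then inspect and validate. This is a plain-Python sketch of the pattern (not Hamilton-specific code; the names are illustrative).

```python
from typing import TypedDict

class UserFeatures(TypedDict):
    # Declares the shape of the returned dict: exactly these keys,
    # with these value types, visible to type checkers and frameworks.
    user_id: str
    age: int

def user_features(user_id: str, age: int) -> UserFeatures:
    # A tool that understands TypedDict can check this return value
    # against the declared schema instead of treating it as an opaque dict.
    return {"user_id": user_id, "age": age}

row = user_features("u1", 42)
```

At runtime a TypedDict is still an ordinary `dict`; the payoff is that the schema is machine-readable rather than implied by docs.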
-
Happy New Year! We wrote a fun post on building parallel agents with #burr and #ray to get the year started. https://lnkd.in/grbTQnqv
The hard problems at the center of it: How do you handle parallel tasks in your agent's workflow? What if the orchestrator dies or loses its connection? What if the tasks are flaky?
Burr provides a host of tools to solve these without much additional effort -- most of this just *works* out of the box. The idea is that spawned sub-applications inherit their parent's persistence capabilities, enabling you to restart any failed applications where you left off, and to restart the entire suite of applications from their prior state if the parent dies, affording you an eventually completing workflow!
We use:
> #Burr to orchestrate/persist
> #Ray by Anyscale to distribute execution
> Any DB you want to persist data between runs
The distribution engine and database we use are pluggable -- while the Burr library provides a few plugins, you can (and should!) easily write your own.
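The restart-from-prior-state behavior described above boils down to checkpointing: each task persists its progress under a stable id, and a resumed run loads that checkpoint and skips completed steps. This is a minimal stdlib sketch of the pattern (a JSON file standing in for the pluggable database; not Burr's persister API, and all names are hypothetical).

```python
import json
import os
import tempfile

def save_checkpoint(path: str, app_id: str, state: dict) -> None:
    # Persist this sub-application's state keyed by its id, so a
    # crashed run can be resumed instead of recomputed from scratch.
    checkpoints = {}
    if os.path.exists(path):
        with open(path) as f:
            checkpoints = json.load(f)
    checkpoints[app_id] = state
    with open(path, "w") as f:
        json.dump(checkpoints, f)

def load_checkpoint(path: str, app_id: str):
    if not os.path.exists(path):
        return None
    with open(path) as f:
        return json.load(f).get(app_id)

def run_task(path: str, app_id: str, steps: list[str]) -> dict:
    # Resume from prior state if a previous run died mid-way.
    state = load_checkpoint(path, app_id) or {"done": []}
    for step in steps:
        if step in state["done"]:
            continue  # already completed before the crash; skip it
        state["done"].append(step)
        save_checkpoint(path, app_id, state)  # checkpoint after each step
    return state

path = os.path.join(tempfile.mkdtemp(), "checkpoints.json")
first = run_task(path, "app-1", ["fetch", "summarize"])
```

Swapping the JSON file for a real database and the loop for an agent's action sequence gives you the same eventually-completing property.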
-
The choice of Agent Ops tooling depends on the specific use case needs and preferences.
For state machine modeling & implementation:
1. If you need precise control over state and detailed visualization, DAGWorks Inc.'s Burr might be the best choice.
2. If you prefer a higher-level abstraction and seamless integration with LangChain, LangGraph could be a good fit.
3. And if you need a more versatile tool for various NLP tasks, including agent development, Haystack might be the way to go.
Burr:
> Explicit State Machine Focus: Burr is built with state machines as the core concept. You define "actions" as Python functions that explicitly read from and write to a central state object.
> Fine-grained Control: Burr gives you very granular control over state transitions and actions, allowing for complex logic and conditional branching.
> Visualization: Burr's telemetry UI provides a clear visual representation of the state machine, making it easy to understand and debug agent behavior.
LangGraph:
> Graph-based Representation: LangGraph uses a graph structure to define the state machine, where nodes represent actions or tools, and edges represent transitions.
> Higher-level Abstraction: LangGraph provides a higher level of abstraction compared to Burr, focusing on the flow of actions and data between different components.
> Integration with LangChain: LangGraph is tightly integrated with LangChain, making it easy to incorporate various LLM chains and tools into your agent workflow.
Haystack:
> Pipelines with State Machine Elements: Haystack's Pipelines feature incorporates some state machine concepts, allowing for conditional routing and branching within a workflow.
> Less Explicit Focus: However, Haystack doesn't explicitly enforce a state machine structure for all applications. It's more flexible and can be used for a wider range of NLP tasks beyond agent development.
> Limited Visualization: Haystack doesn't have a dedicated UI for visualizing state machines like Burr does.
Agentic Systems are the future of AI - the AI Agent Ops Framework (AOF) unlocks the potential. Join the industry's only dedicated AI Agent Ops Community: https://lnkd.in/dMDFZMJa
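To make the "actions read from and write to a central state object" idea concrete, here is a tiny hand-rolled state machine in that spirit: plain functions as actions, a dict as the central state, and an explicit transition table. This is a stdlib illustration of the concept, not Burr's actual API.

```python
# Actions: plain Python functions that read the central state and
# return an updated copy of it.
def get_input(state: dict) -> dict:
    return {**state, "prompt": "hello"}

def respond(state: dict) -> dict:
    # Reads "prompt" written by the previous action, writes "response".
    return {**state, "response": state["prompt"].upper()}

# Explicit transition table: which action follows which (None = halt).
TRANSITIONS = {"get_input": "respond", "respond": None}
ACTIONS = {"get_input": get_input, "respond": respond}

def run(entrypoint: str) -> dict:
    # Walk the state machine from the entrypoint, threading the
    # central state through each action until a terminal transition.
    state: dict = {}
    current = entrypoint
    while current is not None:
        state = ACTIONS[current](state)
        current = TRANSITIONS[current]
    return state
```

Because both the state and the transitions are explicit data, this style is easy to inspect, visualize, and branch on conditions, which is the trade-off the comparison above is describing.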
-
I'm really excited about some new capabilities we launched in #Burr! You can now easily run parallel, multi-agent workflows with a set of abstractions that enables a simple map/reduce pattern -- e.g., generating multiple candidates to evaluate, running a variety of simultaneous tools, scraping multiple web pages, etc. We wrote about it here -- I'm happy with how Burr has grown to enable more and more powerful agentic systems:
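The map/reduce pattern mentioned above, in its simplest form: "map" spawns several candidate generations in parallel, and "reduce" scores them and keeps the best. A stdlib sketch of that shape (the generator and scorer are hypothetical stand-ins, not Burr abstractions):

```python
from concurrent.futures import ThreadPoolExecutor

def generate_candidate(seed: int) -> str:
    # Stand-in for a sub-agent producing one candidate answer.
    return f"candidate-{seed}"

def score(candidate: str) -> int:
    # Stand-in evaluator: here, just the numeric suffix of the name.
    return int(candidate.split("-")[1])

def best_of_n(n: int) -> str:
    # Map: run n candidate generations in parallel.
    with ThreadPoolExecutor() as pool:
        candidates = list(pool.map(generate_candidate, range(n)))
    # Reduce: pick the highest-scoring candidate.
    return max(candidates, key=score)
```

The same skeleton covers the other use cases in the post: swap the generator for a tool call or a page scrape, and the reducer for whatever aggregation you need.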
-
Really excited for this guest blog post by Jernej Frank. This is something new -- he's one of the #OS contributors that dove deepest into #Hamilton, and has had a massive impact on some of the core aspects, including graph execution and lifecycle adapters. In this post, he digs into an abstraction that we designed to make building ergonomic DAGs easier -- how Hamilton leverages Python decorators to enable modular, readable, and extensible data transforms. Worth a read -- there's a bit for everyone! He goes over the basics of Hamilton, the trade-off between explicit/verbose code, how Hamilton leverages decorators to mitigate that trade-off, and the architecture details of how that works. On a personal note -- I've also really enjoyed working with Jernej Frank as he dove into the deep end of Hamilton's implementation. Goes to show how #opensource can build a community and bring a ton of folks along for the ride! https://lnkd.in/gZfZXCs8
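The decorator idea the post explores is that one decorated function can expand into several named transforms, trading a little indirection for a lot less copy-paste. This is a stdlib sketch of that concept only; it is not Hamilton's actual `@parameterize` implementation, and the names are illustrative.

```python
def parameterize(**variants):
    # Each keyword argument names one variant and supplies the kwargs
    # that variant should be called with.
    def wrapper(fn):
        # Attach one concrete zero-argument transform per variant.
        fn.variants = {
            name: (lambda kw=kwargs: fn(**kw))
            for name, kwargs in variants.items()
        }
        return fn
    return wrapper

@parameterize(
    doubled={"multiplier": 2},
    tripled={"multiplier": 3},
)
def scaled(multiplier: int, base: int = 10) -> int:
    # Written once; expanded into two named transforms by the decorator.
    return base * multiplier

results = {name: f() for name, f in scaled.variants.items()}
```

This mirrors the explicit-vs-verbose trade-off the post discusses: the logic stays in one place, while the decorator generates the per-variant nodes.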
-
Tired of the deployment headaches with #LLM agents? BentoML takes care of the heavy lifting - from REST endpoints to scaling. Great post showing how to combine #Burr and #BentoML for streamlined agent deployment. Read the blog post to learn more: https://lnkd.in/gJNccqw9
Deploying LLM agents shouldn't be a painful process, but it often is. Even though you successfully built an agent and it performs well on your evaluations, you still have to: - create a service (e.g., REST endpoints) - package the agent, service, and dependencies - deploy the service on infrastructure - handle incoming traffic and appropriately scale - and more. BentoML is an open source project built to solve the challenges of deployment and inference. It has been around since "traditional ML" and was shaped by real-world requirements and challenges. My latest blog shows how to build the *Application* and *Serving* layers of an LLM agent using Burr and BentoML. Link to the post and the full code example in the comments!
-
Excited to see #burr / DAGWorks Inc. mentioned here!
Founder @ TheAiEdge | Follow me to learn about Machine Learning Engineering, Machine Learning System Design, MLOps, and the latest techniques and news about the field.
The real problem with Agentic frameworks like CrewAI or Autogen is AGENTS! The idea of agents is quite a seductive one! We break down specific problems into the underlying subtasks needed to solve them, and we assign agents to take care of each of them. The multi-agent systems are usually designed by assigning roles to those agents that resemble the ones we see in corporate organizations. The problem with that is it leads to over-engineered systems.
Frameworks like CrewAI and Autogen provide a fully autonomous agentic system. That is great, but that may not be what we need! Being able to rely on LLMs to make some of the simple decisions at some nodes of the application flow is useful, but why do we need the full software to be agentic? Agentic frameworks push the whole application design to be organized into agents, tasks, and teams of collaborating agents. That is a very opinionated way to design software! We often have to unnecessarily rethink simple application flows into agentic ones. LLM-based agents are risky and should be used with extreme care, so sprinkling the codebase with uncontrollable LLMs making decisions is a recipe for disaster!
Those frameworks are also very opaque and limited in the way agents actually interact with each other. The documentation isn't always transparent on that side, and you can find yourself spending hours in their GitHub repos trying to figure out how one task is being handed off from one agent to the next.
As amazing as some demos can be, I would not advise using autonomous agentic frameworks for production purposes. Personally, I recommend using event-based stateful graph orchestration tools like LangGraph, Burr, LlamaIndex Workflows, and even Haystack for anything agentic, as they are much more transparent!
-
Really excited to talk about building out #AIAgents reliably with #Burr at PyData Global next week! https://lnkd.in/gafKvBM9 If you haven't already, make sure to register -- great roster of speakers!