Langfuse (YC W23)

Software Development

Open Source LLM Engineering Platform

About us

Langfuse is the open source LLM engineering platform. It helps teams collaboratively develop, monitor, evaluate, and debug AI applications. Langfuse can be self-hosted in minutes and is battle-tested: it is used in production by thousands of users, from YC startups to large companies like Khan Academy or Twilio, and builds on a proven track record of reliability and performance. Developers can trace any large language model or framework using our SDKs for Python and JS/TS, our open API, or our native integrations (OpenAI, Langchain, Llama-Index, Vercel AI SDK). Beyond tracing, developers use prompt management, evaluations, and datasets for testing and experimentation to improve the quality of their applications. Product managers can monitor, analyze, and act on insights by accessing detailed metrics on costs, latencies, and user feedback in the Langfuse Dashboard. They can bring humans in the loop by setting up annotation workflows for human labelers to score their application. Langfuse can also be used to mitigate security risks through security frameworks and evaluation pipelines. Langfuse enables non-technical team members to iterate on prompts and model configurations directly within the Langfuse UI, or to use the Langfuse Playground for fast prompt testing. Langfuse is open source, and we are proud to have a fantastic community on GitHub and Discord that provides help and feedback. Do get in touch with us!
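The tracing described above can be pictured with a small sketch. This is a hedged, stdlib-only toy, not the Langfuse SDK API: the `observe` decorator, `TRACES` store, and `summarize` function are illustrative stand-ins that show the kind of span data (name, inputs, output, latency) a tracer collects per call.

```python
import functools
import time

# In-memory trace store; a real observability backend would persist these spans.
TRACES = []

def observe(fn):
    """Toy tracing decorator: records name, inputs, output, and latency per call."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        TRACES.append({
            "name": fn.__name__,
            "input": {"args": args, "kwargs": kwargs},
            "output": result,
            "latency_s": time.perf_counter() - start,
        })
        return result
    return wrapper

@observe
def summarize(text: str) -> str:
    # Stand-in for an LLM call: return the first sentence.
    return text.split(".")[0]

summary = summarize("Langfuse traces LLM calls. It records inputs and outputs.")
```

The same shape of data (plus model, token counts, and cost) is what makes downstream dashboards and evaluations possible.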

Website
https://langfuse.com
Industry
Software Development
Company size
2-10 employees
Headquarters
San Francisco
Type
Privately held
Founded
2022
Specialties
Langfuse, Large Language Models, Observability, Prompt Management, Evaluations, Testing, Open Source, LLM, AI, Analytics, and Artificial Intelligence

Products

Locations

Langfuse (YC W23) employees

Posts

  • Langfuse (YC W23)

    4,079 followers

    Last day of our Launch Week #2. We released some incredible features in the last couple of days. Here is the full overview:

    Marc Klingen

    co-founder langfuse.com (YC W23) – hiring – open source llm engineering platform

    New feature in Langfuse (YC W23): Prompt Experimentation! See it in action here: langfuse.com/ph We believe this will have a big impact on how you develop LLM applications with Langfuse. Here's what you can look forward to:

    • Test and Evaluate Simultaneously: Experiment with different prompt versions and models on hundreds of dataset items at once.
    • Live LLM-as-a-Judge Evaluations: Perform real-time evaluations to see how your prompts and models stack up.
    • New Dataset Comparison View: Compare results side-by-side to optimize prompts for your specific use case.

    Check out our Product Hunt page to learn how you can incorporate Langfuse Prompt Experiments into your development workflow: langfuse.com/ph A huge thank you to Marlies Mayerhofer for this!
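The prompt-experimentation idea above (many prompt versions crossed with many models over many dataset items) can be sketched in a few lines. Everything here (`run_experiments`, the stub `app`) is a hypothetical illustration, not the Langfuse API:

```python
from itertools import product

def run_experiments(prompts, models, dataset, app):
    """Run every prompt/model combination against every dataset item (toy sketch)."""
    results = {}
    for prompt, model in product(prompts, models):
        results[(prompt, model)] = [app(prompt, model, item) for item in dataset]
    return results

# Hypothetical application under test: a deterministic stub instead of an LLM call.
def app(prompt, model, item):
    return f"{model}:{prompt.format(**item)}"

dataset = [{"question": "What is Langfuse?"}, {"question": "What is tracing?"}]
results = run_experiments(
    prompts=["Answer briefly: {question}", "Answer in detail: {question}"],
    models=["model-a", "model-b"],
    dataset=dataset,
    app=app,
)
```

With 2 prompts and 2 models over 2 items, the grid yields four runs of two outputs each, which is exactly the shape the comparison view then renders.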

  • Langfuse (YC W23)

    Day 4 of Langfuse Launch Week #2 -> All New Documentation for Datasets, Experiments, and Evals

    Marc Klingen

    Day 4 of Langfuse (YC W23) Launch Week #2: Elevating Developer Experience with New Documentation! At Langfuse, we believe that documentation is product. As part of Day 4 of our Launch Week, we're shining a spotlight on an often overlooked but critical element of great developer experience: documentation. We've completely rebuilt many of our docs to be more thorough and user-friendly than ever before, helping teams accelerate the development of their LLM applications.

    What's new in our documentation?

    • Guidance on using Datasets and Evals: Dive deep into the effective evaluation of your LLM applications during development.
    • Introduction to Core Data Models: Get acquainted with our foundational data structures: datasets, experiment runs, and scores.
    • End-to-End Examples: Explore common workflows with our Jupyter Notebook examples and see Langfuse in action.
    • Visuals and Explanations: Enjoy more GIFs and interactive elements throughout the docs for an engaging learning experience.

    As our community continues to grow, best-in-class documentation has become essential for teams adopting Langfuse. To celebrate Launch Week #2, we've also summarized all the documentation improvements we've made over the past year. We think it's an interesting read and welcome any feedback you may have! Fun fact: this update marks the 1,000th PR to the Langfuse Docs! Dive into the new documentation and let us know what you think [link in comments]

  • Langfuse (YC W23)

    We are very excited to release multimodality in Langfuse, as this has been one of the top requests from the community. Collaborate with us on our public Langfuse roadmap:

    Marc Klingen

    Launch Week Day 3: Full multi-modal support in Langfuse (YC W23), including images, audio files, and attachments. Back in August, we took the first step by enabling multi-modal traces that reference external files. Since then, expanding this support to include base64-encoded images, audio files, and attachments has been one of the top requests from our community. We heard you loud and clear, and we're excited to finally release it.

    What's new?

    • Enhanced media support: Integrate images (PNG, JPG, WEBP), audio files (MPEG, MP3, WAV), and other attachments (PDF, plain text) directly into your Langfuse traces.
    • In-UI rendering: Multi-modal content is now rendered inline in the Langfuse UI for images and audio, providing a richer, more interactive experience.
    • Custom attachments: Have specific needs? Upload arbitrary media attachments using the new LangfuseMedia class in our SDKs.

    Check out our changelog post to see how this works under the hood and to get started [link in comments]. Shout-out to Hassieb Pakzad for shipping this in no time!
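As a rough illustration of what base64-encoded multi-modal trace payloads involve, here is a stdlib-only sketch. The `media_payload` helper is a made-up name for illustration; the real SDK exposes the LangfuseMedia class mentioned in the post:

```python
import base64

def media_payload(data: bytes, content_type: str) -> dict:
    """Wrap raw media bytes as a base64 payload, roughly the shape a trace carries."""
    return {
        "content_type": content_type,
        "base64": base64.b64encode(data).decode("ascii"),
    }

# The first eight bytes of any PNG file, used here as sample data.
payload = media_payload(b"\x89PNG\r\n\x1a\n", "image/png")
```

Keeping the content type alongside the encoded bytes is what lets a UI decide whether to render the payload inline (image, audio) or offer it as a download.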

  • Langfuse (YC W23) reposted

    Marc Klingen

    Fun conversation (& podcast) about building open source, LLM applications, and devtools/infra, Berlin vs. SF, and what we've learned from working with our rapidly growing community. Thanks Bela and LLM Studios for hosting. Podcast link in comments

    Bela Wiertz

    Investing in Dev / Infra @First Momentum | Building {Tech: Berlin}

    AI Fireside Chat #1 with Marc Klingen from Langfuse (YC W23): On Monday evening, {Tech: Berlin} & LLM Studios hosted AI Fireside Chat #1 at the Merantix AI Campus in Berlin in front of 80+ attendees. As we were not able to fit everyone in, we decided to publish the chat as a podcast. You can listen to it via the link in the comments. Give it a follow, as we are now turning this into a monthly series, with AI Fireside Chat #2 already in planning for early December. The sign-up link is in the comments. Leonard Off Leo Schoberwalter Matteo von Haxthausen Francesco Ricciuti Let's build a proper tech ecosystem in Berlin! Call for partners: if you would like to partner for an upcoming edition, please reach out; it is not possible to set up these events without a great partner network (this time it was Runa Capital).

  • Langfuse (YC W23)

    Second day of our Launch Week #2! Quick recap:

    • Day 0: Langfuse Prompt Management now integrates natively with the Vercel AI SDK. Version and release prompts in Langfuse, use them via the Vercel AI SDK, and monitor metrics in Langfuse: https://lnkd.in/eqpi9R3E
    • Day 1: Langfuse Datasets now offers a powerful comparison view for dataset experiment runs. It's perfect for teams iterating on their AI applications with different prompts, models, or parameters who want to quickly see the results of a change side-by-side with their benchmark version: https://lnkd.in/eEjWHdzA

    Marc Klingen

    Launch Week Day 2: LLM-as-a-judge evaluators for Dataset Experiments! Super excited that Langfuse (YC W23) now helps you run model-based evaluations on Dataset Experiments. Building reliable AI applications can feel like playing whack-a-mole with bugs and regressions, all while having way too many ideas of how they could potentially be solved. Our new LLM-as-a-judge evaluators for Datasets help turn this uncertainty into a more structured engineering process.

    How this helps:

    • Measure impact before deployment: Assess changes before they reach production.
    • Detect regressions early: Spot and fix issues early in development.
    • Compare dataset items across runs: Track specific items over time with reliable scoring.
    • Strengthen your test datasets: Identify gaps between testing and production to make your datasets more robust.
    • Create reliable feedback loops: Continuously improve your AI applications with structured feedback.

    More on this feature in the comments. P.S.: Check out our new Dataset Comparison View for improved eval visibility [link in comments]. Kudos to Marlies Mayerhofer for shipping this.
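A minimal sketch of the LLM-as-a-judge loop over a dataset. All names here are illustrative, not the Langfuse API, and a crude token-overlap score stands in for a real LLM judge call:

```python
from statistics import mean

def judge(expected: str, actual: str) -> float:
    """Toy stand-in for an LLM judge: token-overlap score in [0, 1]."""
    expected_tokens = set(expected.lower().split())
    actual_tokens = set(actual.lower().split())
    if not expected_tokens:
        return 0.0
    return len(expected_tokens & actual_tokens) / len(expected_tokens)

def run_and_score(app, dataset):
    """Run the app on every dataset item and return the mean judge score for the run."""
    return mean(judge(item["expected"], app(item["input"])) for item in dataset)

dataset = [{"input": "Capital of France?",
            "expected": "Paris is the capital of France"}]

# Two hypothetical application versions, scored against the same dataset.
baseline = run_and_score(lambda q: "Paris", dataset)
candidate = run_and_score(lambda q: "Paris is the capital of France", dataset)
```

Comparing per-run aggregate scores like `baseline` vs. `candidate` before deployment is what lets regressions surface in development rather than in production.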

  • Langfuse (YC W23)

    Day 1 of Langfuse Launch Week #2: Side-by-side comparison view for dataset experiments

    Marc Klingen

    Day 1 of Langfuse (YC W23) Launch Week #2: We're adding a side-by-side comparison view for dataset experiments. When building LLM applications, structured datasets and evaluations are key to continuously iterating on prompts, model configurations, or other elements of the LLM application. Today, we launch an intuitive way for both technical and non-technical users to compare dataset experiment runs side-by-side.

    What's new?

    • Overviews: Get a clear picture of each item in your dataset.
    • Summaries: Dive into metrics like latency, cost, and scores, and see the application's output response for each dataset item across selected experiment runs. This means you can compare how different prompts, models, or parameters perform against the same dataset.

    How to get started:

    1. Create a dataset and populate it with items (e.g. combine sanitized production data and synthetic data).
    2. Try different prompts or application configurations and run them against the dataset as an experiment.
    3. Analyze your dataset experiment runs directly in the Langfuse UI and share the results with your team.

    For code snippets and detailed guides, check out our docs and changelog [links in comments]. Thank you, Marlies Mayerhofer, for shipping this in no time. P.S. Don't miss our Langfuse Town Hall meeting on Wednesday!
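The side-by-side comparison described in this post can be sketched as a simple pivot: one row per dataset item, one column per experiment run. This is an illustrative toy with made-up names, not how Langfuse implements its view:

```python
def compare_runs(dataset, runs):
    """Pivot experiment outputs into side-by-side rows: item per row, run per column."""
    rows = []
    for i, item in enumerate(dataset):
        row = {"input": item["input"]}
        for run_name, outputs in runs.items():
            row[run_name] = outputs[i]
        rows.append(row)
    return rows

dataset = [{"input": "Q1"}, {"input": "Q2"}]
# Outputs of two hypothetical experiment runs, aligned by dataset-item index.
runs = {
    "prompt-v1": ["answer 1a", "answer 2a"],
    "prompt-v2": ["answer 1b", "answer 2b"],
}
table = compare_runs(dataset, runs)
```

Aligning runs by dataset-item index is what makes per-item regressions visible: the same input sits in one row with every run's output next to it.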

  • Langfuse (YC W23)

    Want to learn more about Langfuse? Check out the all-new https://langfuse.com/docs

    Marc Klingen

    Just in time for launch week, we've updated the most important page of every devtool: `/docs`. It now features a holistic, concise, and visual overview of the most important Langfuse (YC W23) features. [Link in comments]

Similar pages

Funding