Today, we're publicly releasing all of our LLM usage statistics. 1.5B+ requests, 1108B+ tokens, 16+ TB of data, now available for your curiosity or research. All anonymized. Explore one of the largest public AI conversation datasets ever: https://lnkd.in/eeD7Xg75
About us
The open-source LangSmith alternative for logging, monitoring, and debugging AI applications. 1-line integration: simply change the base URL to access metrics, prompt management, and more. Support us on Product Hunt: www.producthunt.com/products/helicone-ai | Docs: docs.helicone.ai | GitHub: github.com/Helicone | Open stats: us.helicone.ai/open-stats
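The 1-line integration above can be sketched as follows. This is a minimal illustration, not official sample code: the proxy URL (`https://oai.helicone.ai/v1`) and the `Helicone-Auth` header follow Helicone's public docs, but confirm the exact values at docs.helicone.ai before use.

```typescript
// Sketch of the "change the base URL" integration: the same OpenAI-style
// request, routed through Helicone's proxy so it gets logged and metered.

const OPENAI_URL = "https://api.openai.com/v1";
const HELICONE_URL = "https://oai.helicone.ai/v1"; // the one-line change

function chatRequest(baseUrl: string, openaiKey: string, heliconeKey: string) {
  return {
    url: `${baseUrl}/chat/completions`,
    method: "POST",
    headers: {
      Authorization: `Bearer ${openaiKey}`,
      "Helicone-Auth": `Bearer ${heliconeKey}`, // identifies your Helicone account
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: "Hello" }],
    }),
  };
}

// Swap OPENAI_URL for HELICONE_URL and nothing else changes; Helicone
// forwards the call to OpenAI and records the request/response.
const req = chatRequest(HELICONE_URL, "sk-...", "hk-...");
```

In an application you would pass `req` to `fetch(req.url, req)`; omitted here since it needs live API keys.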
- Website
- https://www.helicone.ai/
- Industry
- Software Development
- Company size
- 2-10 employees
- Headquarters
- San Francisco
- Type
- Privately Held
- Founded
- 2023
- Specialties
- Observability and Monitoring

Locations
- Primary: San Francisco, US
Helicone (YC W23) employees
Posts
-
A practical workflow for running prompt experiments. Fine-tuning prompts helps you get the AI to respond the way you want. At Helicone, we've developed a simple workflow to get the best prompts into production. Let's break it down.
1. Create multiple prompt variations. Adjust tone, structure, and context. The more versions you test, the better your chances of finding the optimal one for your task.
2. Test with real production inputs. Skip the synthetic data; use actual user inputs to see which prompt handles real-world cases best.
3. Analyze key metrics. Response time, accuracy, token usage: the numbers reveal which prompt is the most effective and efficient.
4. Choose the top-performing prompt. After analyzing the data, select the prompt that consistently delivers quality and speed for your use case.
5. Deploy to production. Push the best prompt live, but remember: optimization is iterative. Continuously improving the prompt keeps performance at its peak.
Prompt engineering is an ongoing cycle of learning, testing, and refining. We just launched the waitlist for Prompt Experiments to 10x your prompt experimentation workflow. Join the waitlist! #AI #PromptEngineering https://lnkd.in/gi6sHKBi
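Steps 2-4 of the workflow above can be sketched in code. This is a hypothetical illustration, not a Helicone API: the types, metric names, and scoring weights are ours, chosen only to show how metrics from real-input runs might pick a winning prompt.

```typescript
// Hypothetical sketch: score each prompt variant on metrics gathered from
// real production inputs, then pick the best one to deploy.

interface PromptResult {
  promptId: string;
  accuracy: number;     // fraction of outputs judged correct (step 3)
  avgLatencyMs: number; // mean response time per request
  avgTokens: number;    // mean token usage per request
}

// Fold the metrics into a single score; the weights are illustrative and
// would be tuned per use case (e.g. latency-sensitive vs. cost-sensitive).
function score(r: PromptResult): number {
  return r.accuracy - r.avgLatencyMs / 10_000 - r.avgTokens / 10_000;
}

function pickBest(results: PromptResult[]): PromptResult {
  return results.reduce((best, r) => (score(r) > score(best) ? r : best));
}

// Example metrics for three prompt variations (step 1) after a test run.
const results: PromptResult[] = [
  { promptId: "v1", accuracy: 0.82, avgLatencyMs: 900, avgTokens: 450 },
  { promptId: "v2", accuracy: 0.91, avgLatencyMs: 750, avgTokens: 380 },
  { promptId: "v3", accuracy: 0.88, avgLatencyMs: 1200, avgTokens: 520 },
];

const winner = pickBest(results); // v2: highest accuracy, lowest latency and tokens
```

The winner then goes to production (step 5), and the loop repeats with fresh production data.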
-
We're excited to introduce two new npm packages and officially deprecate @helicone/helicone. Why the change?
- Leaner package size: @helicone/helicone was bulky and wrapped around OpenAI.
- More efficient: the new packages focus only on the essential, up-to-date functions.
Key updates:
- @helicone/async: now home to the HeliconeAsyncLogger class for seamless async logging.
- @helicone/helpers: featuring the HeliconeManualLogger class with a more functional design.
Plus, we've added vector database support and the ability to log external tool calls. Docs: https://lnkd.in/dKeHKbkF
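To make the manual-logging idea concrete, here is a generic sketch of what logging a non-proxied LLM call involves. The `buildLogPayload` helper and its field names are hypothetical illustrations, not the actual `@helicone/helpers` API; see the linked docs for the real `HeliconeManualLogger` interface.

```typescript
// Illustrative only: manual logging means you capture the request, response,
// and timing yourself, then ship that record to the observability backend.

interface LLMCall {
  model: string;
  request: unknown;   // what you sent to the provider
  response: unknown;  // what the provider returned
  latencyMs: number;  // measured around the call
}

// Hypothetical payload shape -- the real SDK defines its own schema.
function buildLogPayload(call: LLMCall) {
  return {
    providerRequest: { json: call.request, meta: { model: call.model } },
    providerResponse: { json: call.response, status: 200 },
    timing: { durationMs: call.latencyMs },
  };
}

const payload = buildLogPayload({
  model: "gpt-4o-mini",
  request: { messages: [{ role: "user", content: "hi" }] },
  response: { choices: [{ message: { content: "hello" } }] },
  latencyMs: 420,
});
// An app would then POST `payload` to the logging endpoint; omitted here
// since it needs an API key and network access.
```

The same pattern extends to vector-database queries and external tool calls: wrap the call, record inputs/outputs and timing, and log the record.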
-
We ran 900 prompt experiments simultaneously. Why? To showcase how powerful combining multiple features into one seamless workflow can be. Playground + Experiments V1 + Evaluations → Experiments V2. Sometimes, the real differentiator is how you bring everything together. As Steve Jobs believed, innovation is often about making things simpler. The result? Better prompts, faster. --- We're now rolling out Experiments V2 to a select group of users. The feedback so far? They love it. If you want in, join the waitlist by clicking the link in the comments.
-
Last month, our team crushed nearly 900 push-ups at Salesforce Park in SF, turning our first-ever Product Hunt launch day into an epic team bonding experience. We didn't know exactly what to expect, but we faced the challenge together! Check out this quick recap and our top tips for making the most of your own Product Hunt launch! #producthunt #launch
-
I'm beyond excited to announce Helicone Experiments, a new way to perfect your prompts. Crafting the perfect prompt is extremely difficult. Testing, tweaking, and iterating, the process is tedious and time-consuming. But there is a better way. Today, we are redefining prompt engineering to help you 10x your workflow. Comment for early access or sign up using the link below.