Latitude reposted
Lovely video by Santiago Valdarrama showcasing Latitude!
This is next-level smart: an open-source platform that evaluates your prompts and automatically refines them based on the results.

Of course, it feels obvious after you see it:
- You write a prompt
- The system evaluates it across different scenarios
- Based on the results, it refines the prompt to improve it

I recorded a quick video to show you how it works. It's pretty cool stuff!

Here are some of the problems and best practices for teams building AI applications:
1. Testing your prompts manually doesn't scale
2. Prompts should not be spread throughout the codebase
3. Non-technical people need easy access to your prompts
4. Prompts can always use a version history to track changes
5. Monitoring the performance of prompts over time is critical

Evaluating prompts is the item on this list that keeps me up at night. Of all the conversations I've had with companies and people building AI applications, this is the area causing the most pain.

Testing a prompt is difficult. Think about how you'd test the response of a model subjectively: what do you account for? "Tone," "objectivity," "completeness," "creativity," "readability"? (I've included a rough sketch of one approach at the end of this post.)

Last week, I met the developers behind Latitude, an open-source prompt engineering platform trying to solve all of these issues.

You can try the platform in two ways:
- You can self-host it. Free and open-source.
- If you want to try their online product, their free tier is generous.

Here is the link: https://lnkd.in/eDwKk7AE

Thanks to the Latitude team for collaborating with me on this post, and congratulations on going live with your product!
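
For the technically curious, here is roughly what that evaluate-and-refine loop looks like in code. This is a minimal sketch using an OpenAI-compatible client, not Latitude's actual API; the model name, scoring scale, criteria, and refinement instruction are all illustrative assumptions.

# A minimal sketch of the evaluate-and-refine loop described above, assuming an
# OpenAI-compatible client. This is NOT Latitude's API -- the model name,
# criteria, and refinement prompt are illustrative assumptions only.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
MODEL = "gpt-4o-mini"  # assumption: any capable model works here
CRITERIA = ["tone", "objectivity", "completeness", "creativity", "readability"]

def run(prompt: str, scenario: str) -> str:
    """Run the prompt under test against one scenario input."""
    out = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "system", "content": prompt},
                  {"role": "user", "content": scenario}],
    )
    return out.choices[0].message.content

def judge(scenario: str, response: str) -> dict:
    """Score a response from 1 to 5 on each criterion, using an LLM as judge."""
    out = client.chat.completions.create(
        model=MODEL,
        temperature=0,
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content":
                f"Score the response from 1 to 5 on: {', '.join(CRITERIA)}. "
                "Reply with JSON only."},
            {"role": "user", "content": f"Input:\n{scenario}\n\nResponse:\n{response}"},
        ],
    )
    return json.loads(out.choices[0].message.content)

def refine(prompt: str, scenarios: list[str], threshold: float = 4.0) -> str:
    """Evaluate the prompt across scenarios; rewrite it if the scores fall short."""
    scores = [judge(s, run(prompt, s)) for s in scenarios]
    avg = sum(v for sc in scores for v in sc.values()) / (len(scores) * len(CRITERIA))
    if avg >= threshold:
        return prompt  # good enough, keep it as-is
    out = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content":
            f"Rewrite this prompt to improve these weak scores {scores}:\n\n{prompt}"}],
    )
    return out.choices[0].message.content

You would call refine("You are a helpful support agent...", scenarios) with a handful of representative scenario inputs. The hard part in practice is exactly what the post points out: deciding which subjective criteria matter and making the judge's scores trustworthy enough to refine against.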