Radiant Update: November 3 This week we're excited to announce SOC 2 Type I Compliance, We've also introduced Topic-based Categorization and Investigation Metrics. If you're in Austin this week catch us at Generative AI World. Where are CTO Jakob Frick is giving a talk! #LLMs #Generative #AI
Radiant AI
软件开发
San Francisco,CA 916 位关注者
Radiant builds software to run AI applications at production scale.
关于我们
Radiant detects unforeseen Generative AI anomalies in real time. Use Radiant to detect, diagnose, and remediate issues quickly. Companies with mission-critical AI applications use Radiant to securely scale to millions of users.
- 网站
-
www.radiantai.com
Radiant AI的外部链接
- 所属行业
- 软件开发
- 规模
- 2-10 人
- 总部
- San Francisco,CA
- 类型
- 私人持股
- 创立
- 2023
地点
-
主要
San Francisco
US,CA,San Francisco
Radiant AI员工
动态
-
We're proud to announce that Radiant is now SOC 2 Type I Compliant! We take security seriously so we partnered with Modern Assurance and Secureframe to evaluate our controls and processes for data security, processing integrity and confidentiality. This certification reflects our commitment to a robust information security program and will allow current and future customers to deploy Radiant with confidence.??We plan to pursue Type II compliance to demonstrate our ongoing commitment to security. Read more about this ongoing commitment: https://lnkd.in/gdNsXdri
Radiant AI | Radiant Announces SOC 2 Type I Compliance
blog.radiantai.com
-
Radiant Update | Introducing Investigations In our discussions with leading product development teams we noticed that once an issue is suspected, significant time is spent investigating the scale of these issues, marking examples and then tracking whether or not engineering changes improve the issue over time. We created investigations as a way for product teams to easily find recurring issues within their LLM applications and surface those issues to automatically track how product quality changes week over week. #LLMs #MLOps #GenerativeAI
Radiant Updates | October 28
updates.radiantai.com
-
New on the Radiant AI Blog: The best use cases for AI are where there's asymmetric risk We explore how to plan successful Generative AI projects and the kinds of use cases that make for better business outcomes: when the risk is low and the reward is high. Read on our website: https://lnkd.in/gRdESJ6z #LLMs #GenerativeAI #MLOps #GenAI
-
Radiant Product Updates: This week our product team released several new features to meet the needs of our growing customer base. - Radiant now supports OTEL open telemetry format, making it easier to migrate from existing telemetry platforms. - We also added a new data sensitivity function to restrict who can see the contents of model interactions in a specific project. - We added support for Open AI structured outputs - We built additional filters to pinpoint specific user groups and user across traces and messages Check out latest features now available in Radiant: demo.deployradiant.com #GenerativeAI #LLMs #Evaluations #ModelQuality
Radiant Updates | August 12th
updates.radiantai.com
-
Radiant Update: Introducing Traces to LLM logs This week our product team released message traces to the platform. This capability allows users look at turn-based model interactions grouped in one place, making it easier to identify when issues arise. #GenerativeAI #LLMs #MLOps https://lnkd.in/gk-FdNUt
Radiant Updates | July 29
updates.radiantai.com
-
Last night our CEO Nitish gave a talk GenAI Night w/ Cloudflare & Jam.dev where he shared our latest thinking on quality in Generative AI: "Once you characterize normal performance, everything is easy". Our big idea? Looking at your data first is a better way to build reliable AI systems than taking things off the shelf. #GenerativeAI #LLMs #MLOps #Evals
-
Yesterday we were excited by Meta's announcement of LLama 3.1. It's quickly becoming the best option for open source models. Today we're happy to share you can try it for yourself on our demo stack with all of our latest features: https://lnkd.in/gQmP9cb5 #GenerativeAI #LLMS #Llama31
-
This we’re proud to announce Custom Evaluations and Example Datasets in our latest release that make it even easier to detect anomalies and take action. Radiant custom evaluators: Users can quickly set up a specific evaluation backed by an LLM. Users can define their evaluation by asking the model to categorize each response with a specific output format. For example: “Evaluate this input and output and if they are in a language other than English, set the output as ‘true’. Saved Datasets: Once Radiant has flagged potential anomalies, users can quickly confirm anomalies or mark events as expected behavior. Radiant now allows users to save any number of events as a dataset. Datasets make it easier to collect events in one place to provide examples to LLMs for fine tuning, to share with collaborators or to use to improve prompts and evaluations. #LLMs #GenerativeAI #LLMOps #Observability
Radiant Updates | July 22
updates.radiantai.com
-
This week our product team is excited to release a major update to our anomaly detection capabilities. We’re building Radiant to make it easier to understand AI product performance in qualitative and quantitative sense, using insights to build better products. Our anomaly detection tool identifies potential anomalies based on message similarity, allowing users to identify potential issues from thousands of events. This workflow allows human reviewers to create datasets of positive or negative examples that can be used to fine tune models, provide feedback to engineering teams and other remedial actions. Check out our full demo video in the link below or sign up to receive updates directly here: https://lnkd.in/gf6VuMGf #LLMs #GenerativeAI #MLOPs
Radiant Updates | July 15
updates.radiantai.com