Today, we’re excited to introduce three state-of-the-art models that work together to transform human-AI interactions:

Phoenix-3: Our flagship replica model, now with full-face rendering, emotions, and micro-expressions
Raven-0: A real-time perception system that gives AI human-like eyes to understand visual context and emotions
Sparrow-0: A transformer-based model for natural dialogue and turn-taking

You can now build even more realistic AI agents with emotional intelligence. They not only look human, but can engage, perceive, listen, and understand in a deeply human way. This evolves our Conversational Video Interface into a complete operating system for human-AI interaction. AI isn’t just responding anymore. It’s thinking, perceiving, and evolving; it's a big step closer to feeling like true face-to-face communication. See the magic for yourself: talk to our live demo agent Charlie at www.tavus.io. We can’t wait to see what conversational AI video experiences you build with CVI.
Tavus, can you use the new Raven model with Pipecat? It seems as if you could do it via the persona, but then we need to understand whether we still control the stack when using the persona. In other words, we want our protocol to govern the communication fully and just use the video layer. We have a great, easy setup with Daily right now, but we could easily use the replica as the video layer if we could get some extra leverage out of it. A lot of our data comes from sensors, for example, so we really need to govern our own communication layer and context state.
Congratulations to the Tavus team!
Let's go!!
This is amazing! I'm so grateful to have had a chance to take part here!
Curious about how emotional intelligence will change user interactions.
State of the art models from a state of the art team.
This is such a wonderful step for more human-like interactions.
This was the most pleasant conversation I’ve had with anyone in a long while