Hey SF! We're at the GTC Hackathon tomorrow. Come say hey to the Tavus crew (Brian Johnson, Mert Gerdan, Karthik Ragunath Ananda Kumar, Alex Behrens)! Building AI agents? Tavus is sponsoring with a cash prize for the best use of our conversational video APIs. March 22 | 9–7 | SF | $1K Tavus prize | Free API credits | Cloudflare HQ #GDC2025 #AIHackathon
Tavus
Software Development
San Francisco, CA · 9,922 followers
AI video research company that enables product development teams to add a human layer to AI with easy-to-use APIs
About us
Tavus is a generative AI video research company that enables developers to build digital twin video experiences through easy-to-use APIs.
- Website
- https://www.tavus.io
- Industry
- Software Development
- Company size
- 11-50 employees
- Headquarters
- San Francisco, CA
- Type
- Privately held
- Founded
- 2020
Locations
Tavus employees
Updates
Tavus reposted
I can see a world where humans need a sentient being to coach them through their relationship with digital coworkers. Will HR advocate for only their human employees? Yesterday, several of my colleagues and I had a conversation with Tavus' Charlie. Charlie was engaging; he noticed the dog bed in the background of my screen. I introduced him to Lucy, my dog, and he acknowledged the affection she was showing me. We then spent time in Asana AI Studio automating workflows, having the AI enrich our leads, classify resumes, and deliver a number of other "wow" moments. One of my colleagues said, "That does what we do." I could hear the anxiety in his voice; it was similar to the feeling I had earlier in the week talking with an AI recruiter. The future is here. Now we need to integrate.
Tavus reposted
Today at 2:30 pm, as part of the Tavus team, I will be demoing how you can use Tavus replicas as AI Agents at this event: https://lu.ma/ai-bootcamp Should be an informative and super fun demo.
Big moment at our booth… Tim Draper just dropped by with his crew to talk to Charlie on our Windows XP demo! #HumanX
Tavus reposted
Tomorrow (March 11, 2025), as part of Tavus, I will be demoing our latest SOTA conversational video model, "phoenix-3," and talking a bit about it at this event organized by sampleapp.ai and Amazon Web Services (AWS): https://lu.ma/h3qpiaqg at 6 pm PT. Would love to meet you all there. Should be super fun!
Today, we're excited to introduce three state-of-the-art models that work together to transform human-AI interactions:
- Phoenix-3: Our flagship replica model, now with full-face rendering, emotions, and micro expressions
- Raven-0: A real-time perception system, giving AI human-like eyes to understand visual context and emotions
- Sparrow-0: A transformer-based turn-taking model for natural dialogue
You can now build even more realistic AI Agents with emotional intelligence. They not only look human, but can engage, perceive, listen, and understand in a deeply human way. This is an evolution of our Conversational Video Interface into a complete operating system for human-AI interaction. AI isn't just responding anymore. It's thinking, perceiving, and evolving: a big step closer to feeling like true face-to-face communication. See the magic for yourself and talk to our live demo agent Charlie: www.tavus.io We can't wait to see what conversational AI video experiences you build with CVI.
Huge congrats to our amazing partners Cartesia on their record-breaking raise! We've been using Sonic 2.0 for a while now, and it's hands down the best real-time audio model out there. Want to hear it in action? Talk to Charlie on our homepage; he's powered by it!
We've raised a $64M Series A led by Kleiner Perkins to build the platform for real-time voice AI. We'll use this funding to expand our team and to build the next generation of models, infrastructure, and products for voice, starting with Sonic 2.0, available today. Link below to try 20,000 credits free today.
Tavus reposted
Back in November, we had a Tavus retreat in a remote location in Georgia. We have an incredible team, and I felt inspired as we headed home after a week of collaboration and talking about our vision.

There was one issue that so many people were having with early versions of CVI: "It interrupts me!" They wished it knew when to speak and when not to. I ran the idea past Ari Korin and Hassaan Raza: what if we could predict when the user was done speaking? As I recall it, they both kinda shrugged and said: build it. That's one of the things I like about our culture: build it. My favorite two words.

On the plane ride back I started building the training pipeline for a new turn detection model. It started with some research, chatting with AI about how to build AI. What's the state of the art? How can I do X? What can I expect from Y? What's the fastest way to produce Z? By the time I landed I had trained the first version of what would become Sparrow. It was an LSTM trained on one thousand utterances: a proof of concept that showed there was something learnable. At the time we had to "shine the flashlight" on the unknown.

So the task changed: how could we deploy something that starts improving conversations right away? This led to the very first beta version of Sparrow, and this is when it earned its name. That first version had a 300 ms latency and was built on top of a prompted LLM and function calling. For CVI, that might have been problematically slow. However, at the time we had to slow down responses so users could finish their thought. So, we shipped it to our demo. The result was appreciable: Carter (soon to be replaced by Charlie), our demo persona, would actually wait for you to finish your thought, then respond almost immediately when you were done. It was fantastic. By December we had this in production.

Sparrow-0 has been in private beta for the past two weeks; yesterday it went GA. Sparrow-0 represents a step change from that first version (Sparrow -1?). It has a 10 ms latency and produces smoother, more natural turn transitions. How? It uses a combination of a customized Transformer architecture, our hyper-fast voice recognition system, and human response-time modeling with a sort of naturalness function. Sparrow-0 models the Gaussian distribution of human response times in conversation and chooses the appropriate response time based on semantic, lexical, and contextual information. The result is fantastic in action!

I work among some of the smartest and most talented engineers, designers, marketers, salespeople, support, ops, researchers, managers, and leaders. The Tavus team and our investors, their support for our vision, and the inspiration, guidance, access, and resources at every step: those are the raw materials for success. From retreats in remote locations to the best equipment, the best team, and the best vision: it's been a really inspiring moment in time for me, and what a time to be alive! :)
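For readers curious what the idea described above might look like in code, here is a minimal, purely illustrative Python sketch: sample a human-like response delay from a Gaussian, conditioned on how confident a turn-taking model is that the user has finished speaking. Every name, number, and threshold here is an assumption for illustration; this is not Tavus's actual Sparrow-0 implementation.

import random

# Illustrative sketch only; all statistics and thresholds are made up.
# Hypothetical per-context response-time priors (mean seconds, std dev).
RESPONSE_TIME_PRIORS = {
    "backchannel": (0.2, 0.05),
    "answer":      (0.6, 0.15),
    "topic_shift": (1.0, 0.30),
}

def choose_response_delay(end_of_turn_prob: float, context: str) -> float:
    """Pick a human-like pause before the agent speaks.

    end_of_turn_prob: score from a turn-taking model (e.g. a Transformer
    over recent audio/text) estimating that the user is actually done.
    context: a coarse label for the kind of reply that is coming next.
    """
    if end_of_turn_prob < 0.5:
        # Probably mid-thought: keep listening instead of interrupting.
        return float("inf")

    mean, std = RESPONSE_TIME_PRIORS.get(context, (0.6, 0.15))
    # Sample from a Gaussian over human response times; lower confidence
    # in end-of-turn adds a little extra waiting time before jumping in.
    delay = random.gauss(mean, std) + (1.0 - end_of_turn_prob) * 0.5
    return max(0.05, delay)

# Example: the model is fairly sure the user finished and expects an answer.
print(choose_response_delay(0.92, "answer"))

The design point the post makes is the same one this toy captures: instead of replying at a fixed latency, the agent waits a context-appropriate, human-like amount of time, which is what makes the turn transitions feel natural.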
Tavus reposted
Heading to HumanX or Fintech Meetup next week in Las Vegas? Adam Coccari and Nate Morgan will be there representing HubSpot Ventures! It's going to be a great event, and we're excited to connect with our portfolio companies who are sponsoring, including Tavus, ElevenLabs, Artisan, and TwelveLabs. Be sure to check out their booths and see firsthand how they're pushing the boundaries of AI technology! Drop Adam or Nate a note if you'll be in Vegas. If you're missing out on the conference and have FOMO, don't worry! Just grab the link in the comments to see Tavus' groundbreaking new models that dropped today.
Tavus reposted
It's my birthday. Present to myself? Talked to Charlie from Tavus to assess the current state of my appearance. (At least he was gentle.) https://lnkd.in/emQyBUrf