Excited to announce that Twelve Labs will be at Amazon Web Services (AWS) re:Invent in Las Vegas, December 2-6! Join us for a week of innovation as we dive into the latest in generative AI, witness new product launches, and learn from industry leaders. Don't miss the chance to meet our team, see live demos, and discuss how our technology is shaping the future of AI in media. Book a meeting with us: https://lnkd.in/gjcu9n_q #AWSreInvent #VideoAI
Twelve Labs
Software Development
San Francisco, California · 7,650 followers
Help developers build programs that can see, listen, and understand the world as we do.
About us
Helping developers build programs that can see, hear, and understand the world as we do by giving them the world's most powerful video-understanding infrastructure.
- Website: https://www.twelvelabs.io
- Industry: Software Development
- Company size: 11-50 employees
- Headquarters: San Francisco, California
- Type: Privately held
- Founded: 2021
Locations
- Primary: 55 Green St, San Francisco, California 94111, US
Employees at Twelve Labs
Updates
-
Are you struggling to understand video content in different languages? Or finding it difficult to make your content accessible to a global audience? In this tutorial, we introduce the MultiLingual Video Transcriber application and explain how it was developed as a solution. Built by Hrishikesh Yadav, VidScribe uses video foundation models from Twelve Labs to understand videos and provide seamless transcription across multiple languages. What sets this application apart is its ability to adjust transcriptions to a user-selected proficiency level: beginner, intermediate, or advanced. Users get transcriptions or translations tailored to their chosen level. The application also provides accurate timestamps, so users can follow the spoken words alongside the transcription, making the content easy to navigate and understand.
- Read the complete tutorial: https://lnkd.in/g6sZ9djz
- Watch the demo video: https://lnkd.in/gmW2nccm
- Explore the application demo: https://lnkd.in/gS2hUEx2
- Experiment with the app via Replit: https://lnkd.in/gEusiaz6
- Find the code on GitHub: https://lnkd.in/g4p3t47S
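As a rough illustration of how a proficiency-aware transcription flow like this can be wired up, here is a minimal sketch in Python. It assumes the Twelve Labs Python SDK and prompt-based text generation against an already-indexed video; the method names, the video ID, and the prompt wording are illustrative assumptions, not VidScribe's actual implementation (see the tutorial and GitHub links above for the real code).

```python
# Minimal sketch of a proficiency-aware transcription request.
# Assumes the Twelve Labs Python SDK (`pip install twelvelabs`) and an
# already-indexed video; method names and the prompt are illustrative.
from twelvelabs import TwelveLabs

client = TwelveLabs(api_key="YOUR_API_KEY")  # placeholder key


def transcribe(video_id: str, language: str, level: str) -> str:
    """Request a leveled, timestamped transcription of one video."""
    prompt = (
        f"Transcribe this video into {language} at a {level} proficiency "
        "level. Prefix every line with its start timestamp."
    )
    res = client.generate.text(video_id=video_id, prompt=prompt)
    return res.data


# Example: a beginner-level Spanish transcription (hypothetical video ID).
print(transcribe("VIDEO_ID", "Spanish", "beginner"))
```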
-
Last week was a significant one for Twelve Labs, with appearances at two major global tech events. Our Founder & CEO, Jae Lee, joined a panel discussion at the SK AI Summit 2024 exploring transformative AI technologies and the globalization of Korea's AI ecosystem. Concurrently, our Co-founder & CTO, Aiden L., delivered an insightful talk at Amazon Web Services (AWS) Industry Week 2024 on advancing video understanding and video RAG technologies in the industry. We're excited to join forces with amazing partners like SK Telecom and Amazon Web Services (AWS) at major events. These collaborations not only boost our presence but also let us showcase our latest advancements in AI. It's been an incredible journey, and we're thrilled to be part of the global conversation on AI innovation. #VideoAI #SKAISummit2024
-
In the 66th session of #MultimodalWeekly, we have Manish Maheshwari and Hrishikesh Yadav from Twelve Labs giving a masterclass on the newly released Embed API product. Check out the following resources about the Embed API:
- Blog post: https://lnkd.in/gDdTqzVS
- API docs: https://lnkd.in/gcEzTNQZ
- Quickstart: https://lnkd.in/gtXgFm5B
Register for the webinar here: https://lnkd.in/gJGtscSH
Join the Multimodal Minds community to connect with Twelve Labs users: https://lnkd.in/gDvse-ii
-
Twelve Labs reposted this
We are excited to introduce our new Embed API in Open Beta, enabling customers to generate state-of-the-art multimodal embeddings. Here are the key highlights:
- Powered by our state-of-the-art video foundation model, Marengo-2.6
- Up to 70% cheaper on a cost/performance basis than other solutions, including CLIP-based models
- Spatial-temporal understanding that identifies and localizes objects, actions, or events in both space (where they occur in the frame) and time (when they happen across frames) within a video
- Integrations with MongoDB, Pinecone, Databricks, Milvus, LanceDB, and ApertureData for easy vector storage
To see how to create video, audio, image, and text embeddings all in the same latent space, read the blog announcement: https://lnkd.in/gDdTqzVS
Twelve Labs Embeddings is now available through the API and the Playground. To get started quickly with the Embed API, here are a few resources:
- Documentation: https://lnkd.in/gcEzTNQZ
- Quickstart cookbook: https://lnkd.in/gtXgFm5B
- Landing page: https://lnkd.in/gxJiHAHs
Big thanks to the project leads Manish Maheshwari, Yeonhoo Park, and Hyeongmin Lee, and to the other team members from our Research, Product, Design, Engineering, and Go-To-Market teams for their efforts!
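For developers who want to see what this looks like end to end, here is a minimal sketch of creating a text embedding and an asynchronous video embedding in the shared latent space with the Twelve Labs Python SDK. The model identifier, method names, response fields, and the video URL below are assumptions based on the linked documentation; treat the docs above as authoritative.

```python
# Minimal sketch of the Embed API flow. Assumes the Twelve Labs Python
# SDK; model name, method names, and the video URL are illustrative.
from twelvelabs import TwelveLabs

client = TwelveLabs(api_key="YOUR_API_KEY")  # placeholder key

# Text embeddings return synchronously.
text_res = client.embed.create(
    model_name="Marengo-retrieval-2.6",  # assumed model identifier
    text="a last-second buzzer beater",
)
text_vec = text_res.text_embedding.segments[0].embeddings_float

# Video embeddings run as an asynchronous task.
task = client.embed.task.create(
    model_name="Marengo-retrieval-2.6",
    video_url="https://example.com/game.mp4",  # placeholder URL
)
task.wait_for_done(sleep_interval=5)  # poll until the task finishes
video_res = task.retrieve()
video_vecs = [s.embeddings_float for s in video_res.video_embedding.segments]

# Because text and video share one latent space, cosine similarity
# between text_vec and any segment vector supports cross-modal search;
# the vectors can be stored in Pinecone, Milvus, MongoDB, and so on.
```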
-
~ New Webinar ~ The recording of #MultimodalWeekly 62 is up! Watch here: https://lnkd.in/gheDinp3 They discussed:
1. Temporal Action Localization (Benedetta Liberatori)
2. Hallucination Benchmarks for Vision-Language Models (Tianrui Guan and Fuxiao Liu)
3. Structural Self-Attention for Transformers (Manjin Kim)
Enjoy!
Temporal Action Localization, Hallucination Benchmark, and Attention for ViTs | Multimodal Weekly 62
-
We had a great time at #SVGNEXT: AI, XR, and Beyond! The panelists, along with our Senior Solutions Architect Simran Butalia, had a fantastic discussion on the role of AI in sports. AI is definitely here, and it's already making an impact. At Twelve Labs, we're helping creative workflows move faster and driving more fan engagement, bringing fans closer to those unforgettable Caitlin Clark and Angel Reese moments—as well as finding out who Steph Curry "put to sleep" tonight. Thank you Rick Hack, Rachel Joy Victor, Kenny Lauer, and David Shapiro for joining the discussion and providing your valuable insights! The future of sports is AI-powered, and we're just getting started!
Rounding out our morning with "AI in Sports Production: Content Creation, Broadcasting, and Fan Engagement in the Age of AI". Panelists include: Simran Butalia, Twelve Labs Rick Hack, Intel Corporation David Shapiro, Pixellot - AI-Automated Sports Video and Analytics Rachel Joy Victor, FBRC.ai Moderated by Kenny Lauer, Y2AI #SVGNEXT
-
What if AI could cut your dailies management time in half? Our latest blog post reveals how Twelve Labs' AI technology is revolutionizing film production workflows, helping directors and editors focus on what truly matters: storytelling. From automatic scene categorization to intelligent footage organization, we're transforming how filmmakers work. Because creativity shouldn't be bottlenecked by manual tasks. Read more about the future of filmmaking: https://lnkd.in/ghndYHWF #Filmmaking #VideoAI
-
In the 65th session of #MultimodalWeekly, we have Hyeongmin Lee from the Twelve Labs Science team presenting our recent work on evaluating video foundation models. Check out the following resources about TWLV-I:
- Blog post: https://lnkd.in/g2VJjPui
- arXiv: https://lnkd.in/grbrxh7X
- Hugging Face: https://lnkd.in/gDvWrrYp
- GitHub: https://lnkd.in/gcANeufq
Register for the webinar here: https://lnkd.in/gJGtscSH