Ready to prove your AI abilities on a real-world robotics problem using real-life robotic arms? Intrinsic is pleased to sponsor OpenCV's Perception Challenge for Bin-Picking, with $60K in prizes. Learn more and register your team here: https://bpc.opencv.org/
Intrinsic's activity
Most relevant posts
-
My research group, the Computational Imaging and Robotic Perception (CIRP) Lab at the University of Hawaii at Manoa, is co-hosting a new #CVPR2025 bin-picking workshop and competition with OpenCV and Intrinsic. It's an exciting chance to tackle real-world industrial robotic perception challenges and compete for a share of $60k in prizes! Learn more at bpc.opencv.org #ComputerVision #AI #Robotics
New Year, new OpenCV Competition! The OpenCV Perception Challenge For Bin-Picking (prizes sponsored by Intrinsic and Orbbec) is a robotics and AI competition focused on solving a real-world robotics problem using real-life robot arms. Join a team, work together to create the most accurate model, and win a share of the $60,000 in prizes! Submissions begin February 1st; registration is open now. Learn more at https://bpc.opencv.org #OpenCV #ComputerVision #AI #Robotics #Competition #CVPR2025 #OpenSource
-
Heads up! If you're attending the FOSDEM conference in Brussels, don't miss Agustin Alba Chicar's (Ekumen) talk about how to accelerate your robot development (and decrease your time to market) using advanced simulation techniques. #FOSDEM25 #Brussels #Ekumen #Robotics #Simulation #PoweringYourIngenuity
FOSDEM ‘25 is around the corner and there will be a brand new Robotics and Simulation Devroom this year. Thank you Arnaud Taffanel, Fred G., Lucas Chiesa, Kimberly McGuire and Mateusz Sadowski for making it happen! Together with Ignacio Davila Gallesio, we will deliver a presentation that we hope will complement Jan Hanca's on the topic of robotic simulation. These will be just two of 14 talks in the devroom, covering middleware, testing tools, infrastructure, tooling, robotics projects, and the governance of open source robotics projects. I kindly invite everyone to participate and to check out the devroom website: https://lnkd.in/diaGs56S
-
Hello everyone! Excited to share a project I made a while back. The project is an interactive robotic arm controlled with potentiometers, letting users manipulate the arm's position in real time — ideal for simple pick-and-place tasks. Here is the code: https://lnkd.in/eTKbNRyx
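The core of a potentiometer-driven arm is a linear mapping from each pot's ADC reading to a servo angle. A minimal sketch of that mapping, assuming a 10-bit ADC (0–1023) and a standard 0–180° hobby servo — the function name and ranges here are illustrative, not taken from the linked repository:

```python
# Sketch: map a potentiometer's raw 10-bit ADC reading to a servo angle.
# One potentiometer per joint; a control loop would read each ADC channel
# and write the mapped angle to the matching servo.

def reading_to_angle(raw, in_max=1023, out_max=180):
    """Linearly scale a raw ADC value (0..in_max) to an angle (0..out_max),
    clamping out-of-range readings to the valid interval."""
    raw = max(0, min(in_max, raw))
    return raw * out_max / in_max

# Pot at rest, pot at full turn, pot at mid-travel:
print(reading_to_angle(0), reading_to_angle(1023), round(reading_to_angle(512), 1))
# → 0.0 180.0 90.1
```

On a microcontroller the same idea is usually written in C++ (e.g. Arduino's `map()` plus `Servo.write()`); the clamping step matters because noisy analog reads can briefly exceed the nominal range.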
-
I'm thrilled to share my latest project: a vehicle counting system using the YOLOv9-t-converted.pt model!

How It Works:
1. Model initialization: the system loads the YOLOv9 model for object detection and sets up the video source, ready for analysis.
2. Video frame processing: each frame of the video is captured and passed to the model, which identifies candidate vehicles.
3. Detection and filtering: detected objects are filtered by confidence score, so only the most reliable detections are kept.
4. Vehicle tracking: the DeepSort algorithm tracks each vehicle across frames, maintaining identity and continuity.
5. Counting logic: as vehicles cross designated entry and exit lines, the system increments the count for each vehicle type, providing real-time statistics.

#YOLO #ComputerVision #VehicleCounting
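Step 5 is the subtle part: each track must be counted exactly once when its centroid crosses the line. A minimal sketch of that crossing check, assuming the DeepSort stage supplies a stable track ID and a per-frame centroid — the line position, function name, and class labels below are illustrative, not from the actual project:

```python
# Sketch of the counting logic: count a tracked vehicle once when its
# centroid crosses a horizontal counting line moving downward.

ENTRY_LINE_Y = 300  # y-coordinate of the counting line in pixels (assumed)

def update_counts(prev_y, curr_y, track_id, counted, counts, vehicle_type):
    """Increment counts[vehicle_type] the first time this track's centroid
    crosses ENTRY_LINE_Y top-to-bottom; later crossings are ignored."""
    if track_id in counted:
        return  # this track has already been counted
    if prev_y < ENTRY_LINE_Y <= curr_y:  # crossed the line this frame
        counts[vehicle_type] = counts.get(vehicle_type, 0) + 1
        counted.add(track_id)

counted, counts = set(), {}
update_counts(prev_y=290, curr_y=305, track_id=7, counted=counted,
              counts=counts, vehicle_type="car")   # crosses: counted
update_counts(prev_y=305, curr_y=310, track_id=7, counted=counted,
              counts=counts, vehicle_type="car")   # same track: ignored
print(counts)  # → {'car': 1}
```

A mirrored check (`prev_y > EXIT_LINE_Y >= curr_y`) handles the exit line; the `counted` set is what prevents a slow vehicle straddling the line from being counted every frame.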
-
How I Rooted the Mr. Robot Room: I recently documented the entire process in a detailed write-up. Check out the blog post linked below!
-
This semester, as part of my MTRX3760 (Mechatronic System Design) unit, I completed a project with Andrew Andrew, Shun Ying W., Kaitlyn Truong and Joon Suh, where we designed an autonomous inventory management system for a mock convenience store using ROS2, C++, and a TurtleBot. We programmed the robot to scan AprilTags on various items, track inventory, and map item locations using SLAM (Simultaneous Localization and Mapping). The interactive GUI we developed in C++ allowed us to monitor stock levels in real time. The store was equipped with AprilTags throughout, enabling the robot to scan IDs and manage inventory efficiently. Here's a video showing the robot in action: scanning items, detecting AprilTags, and keeping the inventory organized!
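The inventory bookkeeping behind a system like this is a mapping from tag IDs to items, plus a record of where SLAM last placed each detection. A minimal Python sketch of that idea (the actual project used C++ and ROS2; the tag IDs, item names, and class shape below are made up for illustration):

```python
# Sketch: track stock counts and last-seen map poses from AprilTag detections.
# Assumes the detector yields (tag_id, map_pose) pairs, where map_pose is the
# tag's position in the SLAM map frame.

TAG_TO_ITEM = {3: "instant noodles", 7: "bottled water"}  # assumed mapping

class Inventory:
    def __init__(self):
        self.stock = {}      # item name -> count of tags seen
        self.locations = {}  # item name -> last map pose it was seen at

    def record_detection(self, tag_id, map_pose):
        item = TAG_TO_ITEM.get(tag_id)
        if item is None:
            return  # unknown tag: ignore
        self.stock[item] = self.stock.get(item, 0) + 1
        self.locations[item] = map_pose

inv = Inventory()
inv.record_detection(3, (1.2, 0.4))
inv.record_detection(3, (1.3, 0.4))
inv.record_detection(7, (2.0, -0.5))
print(inv.stock)  # → {'instant noodles': 2, 'bottled water': 1}
```

A real system would also deduplicate repeat sightings of the same physical tag across frames (e.g. by tag ID plus pose proximity) before incrementing stock; this sketch counts every detection for simplicity.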
-
We've found a cozy home for our robots in the world of bits! RoboCasa: a place where robot arms, dogs, and humanoids can train safely for daily tasks in procedurally generated simulations. RoboCasa uses LLMs, diffusion, and text-to-3D models to compose a diverse range of indoor environments and tasks. The release provides over 2,500 3D assets across 150+ object categories and dozens of interactable furniture and appliances. The more you randomize during training, the better your robots will learn and transfer from simulation to the real world! This work is led by Yuke Zhu's lab at UT Austin. I'm not part of the dev team, but plan to be the first customer! It's all open-source: https://robocasa.ai/ Paper in RSS 2024: https://lnkd.in/g7KtKT4s
-
Can you solve this brainteaser? Suppose a robot is walking in a straight line starting at position 0. At each step, the robot moves forward 1 spot with probability 1/3 and moves backwards with probability 2/3. Find the probability the robot reaches position 1. Want more problems like this? Check out https://lnkd.in/g7JC2zmi
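This yields to first-step analysis. Let p be the probability the robot ever reaches +1 from its current position; by the walk's translation invariance, reaching +1 from −1 requires two independent one-step climbs, each with probability p. Conditioning on the first step gives p = 1/3 + (2/3)p², whose roots are 1/2 and 1; since the walk drifts toward −∞, reaching +1 is not certain, so the answer is the smaller root, p = 1/2. A short check of that algebra:

```python
# First-step analysis for the random walk brainteaser:
#   p = P(ever reach +1 from 0) satisfies p = 1/3 + (2/3) p^2,
# i.e. (2/3) p^2 - p + 1/3 = 0. Solve the quadratic and take the
# smaller root (the walk has negative drift, so p < 1).
import math

a, b, c = 2 / 3, -1.0, 1 / 3
disc = b * b - 4 * a * c                      # discriminant = 1/9
roots = sorted(((-b - math.sqrt(disc)) / (2 * a),
                (-b + math.sqrt(disc)) / (2 * a)))
p = roots[0]
print(roots)  # → [0.5, 1.0] (up to floating-point rounding)
```

The root p = 1 would be correct if the walk were recurrent (forward probability ≥ 1/2); here the 2/3 backward bias rules it out.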
-
1. If you're interested in #AI and #GenAI, you should follow Jim Fan. 2. Second-order effects of AI are coming fast — right into the home. #robotics 3. #OpenSourceAI is a force to be reckoned with.
NVIDIA Senior Research Manager & Lead of Embodied AI (GEAR Lab). Stanford Ph.D. Building Humanoid Robots and Physical AI. OpenAI's first intern. Sharing insights on the bleeding edge of AI.