Calling all researchers in the fields of Computer Vision and Wearable Robotics! We are excited to announce our upcoming workshop, “Enhancing Human Mobility: From Computer Vision-Based Motion Tracking to Wearable Assistive Robot Control,” on Friday, May 23 at the 2025 IEEE International Conference on Robotics and Automation (ICRA) in Atlanta, GA. Organized by Soyong Shin, Enrica Tricomi, Daekyum (David) Kim, and Letizia Gionfrida.
We have a fantastic lineup of seminar speakers, along with a panel discussion featuring senior and junior researchers as well as industry representatives from both fields. The event will showcase live demonstrations of industry-grade and research-grade technologies, including computer vision-based human pose tracking from Meshcapade and a state-of-the-art robotic exoskeleton from Skip. We invite junior researchers to submit a short abstract for participation in either a poster session or a lightning talk. The best presenter in each category (CV and WR) will receive a monetary prize, sponsored by the RAS Technical Committee on Computer Vision and Wearable Robotics.
We are now accepting abstract submissions through the workshop webpage (deadline April 6):
Submission Link: https://lnkd.in/enDyPzqK
Workshop Info: https://lnkd.in/ePvAbzwK
Abstract: As wearable robotic devices for human movement assistance and rehabilitation become a technology translatable to the real world, their ability to autonomously and seamlessly adapt to varying environmental conditions and user needs is crucial. Lower-limb exoskeletons and prostheses, for instance, must dynamically adjust their assistance profiles to accommodate different motor activities, such as level-ground walking or stair climbing. Achieving this requires not only recognizing user intentions but also gathering comprehensive information about the surroundings. Computer vision offers richer, more direct, and more interpretable data than non-visual sensors such as encoders and inertial measurement units, making it a promising tool for enhancing context awareness in wearable robots. However, integrating computer vision into wearable robotic control poses several challenges, including ensuring the real-time feasibility of vision model outputs, maintaining model robustness across diverse mobility contexts and dynamic user movements, and effectively fusing onboard sensor data with visual information. This workshop aims to address these challenges by exploring the latest engineering solutions for computer vision-based human motion tracking and for control strategies of wearable robotic systems designed to augment human locomotion. By bridging the gap between researchers in wearable robotics and computer vision, as well as between academia and industry, we seek to provide a roadmap for developing robust, adaptable, and context-aware vision-based control frameworks that can be effectively translated from the lab to real-world applications.