We had two incredibly fun and full days running live demos at RoboBusiness. We showed end-to-end speech interaction and open-vocabulary manipulation, a.k.a. robomagic. Big thanks to Steve Crowe and the RoboBusiness crew for having us in the Pitchfire event.
About us
We build AI for a new generation of dexterous robots that supercharge human creativity and initiative.
- Website
- https://dexman.ai
- Industry
- Software Development
- Company size
- 2-10 employees
- Type
- Privately Held
- Founded
- 2023
Posts
-
Setting up for demos at RoboBusiness. Stop by if you are attending (booth 882).
-
What do you get when you combine speech-to-speech, grounded conversational reasoning, and open-set manipulation? Magic! The fluid experience of simply placing some objects in front of the robot, having a conversation about them, describing the task, hearing the robot articulate an execution plan, and then watching it actually perform the task (all zero-shot!) is surreal. More videos here: https://lnkd.in/d7M7H_mN
-
Zero code. Zero demonstrations. Zero fine-tuning. Zero-shot. Our robot plays Tic-Tac-Toe from language instructions. It was never trained to play the game. It has never seen these pieces or this board before. It has no computer-vision code specific to this task. This is all done with large pretrained vision and language models, without any adaptation or fine-tuning. The only task-specific bit in the entire system is a paragraph of text (in English) explaining the game, the setup, and the available motion primitives. This text is provided as context to the LLM.
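A minimal sketch of how such a setup might be wired, assuming a plain-text context paragraph that names the motion primitives and an LLM whose replies are parsed into primitive calls. The context text, primitive names, and `query_llm` stub below are all hypothetical illustrations, not the actual dexman.ai system.

```python
import re

# Hypothetical task-specific context: the only per-task input,
# describing the game, the setup, and the available primitives.
TASK_CONTEXT = """
You control a robot arm playing Tic-Tac-Toe on a 3x3 board.
Cells are named A1..C3. Available motion primitives:
  pick(piece_id)  -- grasp a free piece
  place(cell)     -- put the held piece on a board cell
Respond with one primitive call per line.
"""

def query_llm(context: str, observation: str) -> str:
    """Stand-in for a call to a large pretrained language model.

    A real system would send context + observation to the LLM and
    return its reply; here we return a canned response.
    """
    return "pick(x_piece_3)\nplace(B2)"

def parse_plan(reply: str):
    """Extract (primitive, argument) pairs from an LLM reply."""
    plan = []
    for line in reply.strip().splitlines():
        m = re.fullmatch(r"(pick|place)\((\w+)\)", line.strip())
        if m:
            plan.append((m.group(1), m.group(2)))
    return plan

plan = parse_plan(query_llm(TASK_CONTEXT, "board is empty"))
# plan == [("pick", "x_piece_3"), ("place", "B2")]
```

The point of the sketch is the division of labor: everything task-specific lives in the English context paragraph, while the parsing and execution machinery stays generic.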
-
Our generalist robot can manipulate objects it has never seen before, in an unfamiliar setting, solely from voice commands. The AI powering the robot was never trained on any of the objects in this scene, and the entire environment is completely new. Yet, given the language prompt "put the orange ball in the box", it correctly identifies and locates the box and the orange ball (despite confounding objects like the similarly colored apple, the orange, and the other ball), and chooses the right manipulation skills to execute (pick and place). These capabilities are the result of using models pretrained on internet-scale data.
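The grounding step described above can be sketched roughly as follows. This is an illustrative toy, not the actual pipeline: the detection list, the `ground` helper, and the exact-match scoring are all assumptions standing in for a pretrained open-vocabulary vision model.

```python
# Hypothetical sketch: ground the noun phrases of a voice command
# ("put the orange ball in the box") against candidate detections,
# then map the result to generic pick-and-place skills.

def ground(phrase, detections):
    """Return the detection whose label best matches the phrase.

    Stub similarity: exact label match scores 1.0, anything else 0.0.
    A real system would use learned vision-language similarity scores.
    """
    return max(detections, key=lambda d: 1.0 if d["label"] == phrase else 0.0)

# Candidate detections, including confounders with similar colors.
detections = [
    {"label": "apple", "position": (0.2, 0.1)},
    {"label": "orange ball", "position": (0.5, 0.3)},
    {"label": "orange", "position": (0.6, 0.2)},
    {"label": "box", "position": (0.8, 0.4)},
]

target = ground("orange ball", detections)
destination = ground("box", detections)

# Map grounded objects onto generic manipulation skills.
plan = [("pick", target["position"]), ("place", destination["position"])]
```

The confounders (apple, orange, the other ball) only get filtered out because matching happens at the level of the full phrase, which is the property the post attributes to internet-scale pretraining.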
-
dexman.ai reposted
Can robots learn by playing with toys? After all, great toys are specifically designed to help children improve their motor and perceptual skills. dexman.ai is developing AI for general-purpose robotic manipulation. We created a preschool-style pretraining curriculum and dataset designed to bootstrap the motor and perceptual skills of our AI to toddler level. Below is a sped-up example from our 20TB DexPlay dataset, collected via teleoperation.