Robots Without Bodies: Learning from CISRU
Brad Edwards
SecOps and Platform Leader | Major Incident Manager | DFIR | Developer | Ex-RCMP Major Crime | MSc CS (AI) Student | Leader Who Ships
Autonomous multi-agent systems? Robots? Space? Implications for the architecture of powerful neurosymbolic agents? Yes, please!
In "CISRU: A robotics software suite to enable complex rover-rover and astronaut-rover interaction", Silvia Romero-Azpitarte, Alba Guerra, Mercedes Alonso, Marina L. Seoane, Daniel Olayo, Almudena Moreno, Pablo Castellanos, Cristina Luna, and Gianfranco Visentin describe a software system for robots that could support multi-robot and astronaut collaboration in lunar or Martian surface operations. Their project is an excellent example of how to build autonomous multi-agent systems, and of how multiple machine learning models can be used within a single agent to enable complex behaviours. And on edge compute devices, no less!
I keep coming back to this theme: we hear a lot about agents in the AI engineering world right now, and this paper has real implications for how we design them. Let's face it: an LLM agent is a disembodied robot.
Now, roboticists, please don't skewer me. I know embodiment is a cornerstone of robotics. But my point is that the robotics literature could teach us a lot about agent design in other AI domains.
What is CISRU?
CISRU is a modular software system abstracted from the underlying robotics platform. It is meant to create a highly autonomous robotic agent that can operate collaboratively with other robots and astronauts. The authors targeted ECSS E4 level autonomy, which means their system should be goal-oriented and able to execute goals using onboard computing. The suite is organized into the following components (a rough sketch of how they might compose into a single agent follows the list):
- Multi-agent Component
- Guidance, Navigation, and Controls Component
- Manipulation Component
- Cooperative Behaviour Components
- Perception Component
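To make that modular layout concrete, here is a minimal sketch of a rover agent composed of those components. This is my own illustration in Python, not the actual CISRU implementation: the interfaces, class names, and the shared-state wiring are assumptions made for the example.

```python
# Hypothetical sketch of a CISRU-style modular agent. Component names mirror
# the paper's module list, but the interfaces and wiring are illustrative only.
from dataclasses import dataclass, field
from typing import Protocol


class Component(Protocol):
    def step(self, state: dict) -> dict:
        """Read shared state, do work, and return updates to merge back."""
        ...


class PerceptionComponent:
    def step(self, state: dict) -> dict:
        # In a real system this would fuse camera/LiDAR data; here it is stubbed.
        return {"obstacles": [], "terrain_map": state.get("terrain_map", {})}


class GuidanceNavigationControl:
    def step(self, state: dict) -> dict:
        # Plan a path around whatever perception reported.
        return {"waypoints": [] if state.get("obstacles") else [(1.0, 2.0)]}


class ManipulationComponent:
    def step(self, state: dict) -> dict:
        return {"arm_goal": state.get("sample_target")}


class CooperativeBehaviour:
    def step(self, state: dict) -> dict:
        # Decide what to share with other rovers and astronauts.
        return {"outbox": [{"type": "status", "waypoints": state.get("waypoints")}]}


@dataclass
class RoverAgent:
    components: list[Component] = field(default_factory=list)
    state: dict = field(default_factory=dict)

    def tick(self, observations: dict) -> dict:
        """One control cycle: merge observations, then let each module update state."""
        self.state.update(observations)
        for component in self.components:
            self.state.update(component.step(self.state))
        return self.state


rover = RoverAgent(components=[
    PerceptionComponent(),
    GuidanceNavigationControl(),
    ManipulationComponent(),
    CooperativeBehaviour(),
])
rover.tick({"sample_target": (4.2, -1.3)})
```

The point of the sketch is the shape, not the stubs: each module owns a narrow responsibility, and the agent is just the loop that lets them cooperate through shared state.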
Internal Cooperation
The headline-grabbing aspect of the article is the robot-robot and robot-astronaut collaboration. But the more exciting part is the collaboration within the robot. A significant number of machine learning models, AI algorithms, and non-AI systems interact onboard a CISRU rover, within and across multiple modules, and the rover itself is designed to interact with external systems that are just as complex and heterogeneous in their cognitive architectures (i.e., astronauts and other CISRU robots). Complex systems interacting with complex systems.
Designing Agents
This modular, multi-model architecture is how I envision autonomous multi-agent systems. I am heavily influenced by both loving robotics and being a software engineer by training. This kind of modular design and interconnectivity is how I model systems. And if we look around, a lot of biology is organized that way, not to mention human organizations. It's efficient and comprehensible and helps us isolate changes and points of failure.
But is that the future? Someone may develop a gigantic LLM that can perform all of the tasks in the CISRU software. But one that is fast enough at inference time, efficient to test, and with training costs that don't exceed the annual revenue of most companies? It's not viable, and I don't see it becoming feasible for a long time. I don't even think it's desirable from an engineering point of view.
Instead, let's engineer our AI systems similarly to CISRU. We should be thinking in terms of modular cognition. This approach naturally steers us towards architectures that could leverage smaller, cheaper, faster models that are less compute-intensive to train and maintain. And as a bonus, when a model has a smaller domain, it is easier to test and monitor.
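Here is a minimal sketch of what that "modular cognition" could look like on the software side: route each request to a small, specialised module instead of one giant generalist model. The task names, registry, and placeholder functions are my own assumptions for illustration, not taken from CISRU or any particular agent framework.

```python
# Illustrative dispatch layer: each task goes to a narrow, cheap module.
from typing import Callable

# Each "model" is just a callable here; in practice these could be small
# fine-tuned models, classical planners, or plain deterministic code.
ModelFn = Callable[[str], str]


def summariser(text: str) -> str:
    return text[:100]  # placeholder for a small summarisation model


def route_planner(text: str) -> str:
    return f"plan for: {text}"  # placeholder for a navigation/planning module


def anomaly_checker(text: str) -> str:
    return "ok"  # placeholder for a lightweight monitoring model


REGISTRY: dict[str, ModelFn] = {
    "summarise": summariser,
    "plan": route_planner,
    "check": anomaly_checker,
}


def dispatch(task: str, payload: str) -> str:
    """Pick the narrow module that owns the task; fail loudly if none does."""
    try:
        model = REGISTRY[task]
    except KeyError:
        raise ValueError(f"No module registered for task '{task}'")
    return model(payload)


print(dispatch("plan", "drive to the sample site avoiding the crater rim"))
```

Because each module has a small, well-defined domain, you can test it in isolation, monitor it with domain-specific checks, and swap it out without retraining everything else.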
The Control Centre
But don't get me wrong. There is a place in CISRU for large, compute-intensive models: the control centre. Here is an area where significant reasoning, knowledge, and context would be highly advantageous. The control centre is a natural place to put a large (modular, multi-model!) system. We would want a neurosymbolic system that can reason over the agents' states, activities, and other data streams to help coordinate work and ensure safety. We could leverage the additional off-rover compute for map and sensor fusion that gives the rovers and astronauts enhanced context awareness. The control centre can also be a collaborative partner, offering cognitive services that the other agents can use as tools. Hit a failure mode an individual rover cannot reason through? Need a calculation that would be impractical with onboard computing? Need access to external knowledge that is not economical to maintain in hardened onboard storage? The control centre agent is the perfect system to house these capabilities and more.
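A rough sketch of the "control centre as cognitive service" idea: rovers escalate to a heavier, off-board agent only when they hit something they cannot handle locally. The service names and the request/response shapes here are assumptions of mine, not anything specified in the paper.

```python
# Illustrative off-board control-centre agent offering services rovers can
# call like tools. Service names and payload shapes are hypothetical.
from dataclasses import dataclass


@dataclass
class ServiceRequest:
    rover_id: str
    service: str          # e.g. "diagnose_failure", "heavy_planning", "lookup"
    payload: dict


class ControlCentreAgent:
    """Off-board agent with the compute budget for large models and big knowledge bases."""

    def handle(self, request: ServiceRequest) -> dict:
        if request.service == "diagnose_failure":
            # In practice: a large reasoning model plus symbolic fault models.
            return {"advice": "power-cycle the arm controller, then re-home joints"}
        if request.service == "heavy_planning":
            # In practice: global map fusion and long-horizon planning.
            return {"route": [(0.0, 0.0), (12.5, 3.0), (18.2, 7.4)]}
        if request.service == "lookup":
            # In practice: external knowledge not worth keeping on rad-hard storage.
            return {"answer": f"no entry for {request.payload.get('query')!r}"}
        return {"error": f"unknown service {request.service!r}"}


# A rover that cannot reason through a fault escalates instead of guessing.
centre = ControlCentreAgent()
print(centre.handle(ServiceRequest("rover-1", "diagnose_failure", {"fault": "arm stall"})))
```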
Go Read!
In any event, please read the paper. It's exciting research and a great example of multi-agent, multi-model systems architecture for autonomous systems. Thanks to the authors for their great work!
You can find more detail in these papers (links shared by the CISRU team at GMV):
https://www.researchgate.net/publication/374157256_CISRU_a_robotics_software_suite_to_enable_complex_rover-rover_and_astronaut-rover_interaction
https://www.researchgate.net/publication/374451757_Enabling_In-Situ_Resources_Utilisation_by_leveraging_collaborative_robotics_and_astronaut-robot_interaction