Robots Self-Discovering Their Capabilities: A New Paradigm for Product Design
Nicola Rohrseitz
Global AI Director | Merck, Cisco, boards | National innovation expert | ETH Zurich PhD
Every time we see a robot, the first question that comes to mind is, “What can it do?” It was no different when Boston Dynamics, Tesla, Agility, Figure AI, 1x, and others announced their humanoid robots. Parkour is very impressive, but the excitement dies down cruelly fast. So what if robots and other products could learn what they can do on their own? In this post, I'll show you how to build a robot that self-discovers its capabilities, shifting the design paradigm from one where form follows function to one where function is discovered through exploration. In other words: form enabling exploration.
Products and Purpose
All products and services are designed to fulfill one or more purposes. Sometimes the purpose is to serve as a status symbol, a cultural marker, or purely to entertain. Here we’ll focus on products that are designed to perform one or more functions. Although aesthetics drives use, we take Steve Jobs’ perspective that “design is how it works.”
Whether it’s a simple tool like a hammer or a complex machine like a car, the design follows its purpose. However, as anyone who has ever developed a product can attest, users will use it in surprising ways, beyond the spectrum that was intended… or simply wrongly! Fields like user experience (UX) research, human-centered design, ergonomics, behavioral psychology, and interaction design exist to bridge this gap: to ensure that products not only fulfill their intended function but do so seamlessly, providing an intuitive, efficient, and satisfying experience for users. In essence, product design adheres to the principle of “form follows function,” as espoused by icons like Dieter Rams.
Although it sounds simple, designing great products and experiences is always challenging. And the challenge grows when the intended functions are numerous. The more functions a product aims to perform, the greater the risk of diluting its quality, leading to mediocre outcomes.
While focus is sometimes the right answer, products like computers and smartphones are inherently multi-functional. To avoid falling into aurea mediocritas, the winning pattern for manufacturers has been to provide a few fundamental functionalities (email clients, word processors, spreadsheets, presentations, drawing tools, …) and enable third parties to create new programs that provide additional ones. This capability leads to a different class of products: developer kits and tools. And just like any other product, these kits must be well-designed to encourage productive use, especially if the goal is to foster innovation and great experiences for end users… although they too will inevitably be “misused”!
What If Products Told Us Their Functions?
But what if, instead of us designing a physical product for a specific function, we could let it tell us what it can do? This sounds radical, but with Artificial Intelligence it’s becoming possible. Imagine a different approach to design: giving products the ability to self-discover their capabilities, to experiment with their components, sensors, and environment until they uncover new functionalities. This approach would make products much more useful while optimizing the use of resources along the way. New design patterns would emerge, and many existing products could suddenly fulfill many more functions. While mechanics and electronics are usually fixed, software allows for continuous evolution, giving products the chance to improve incrementally. Often, however, the bottleneck for improved functionality is not mechanical or electronic constraints, but simply the lack of resources to program it. Artificial Intelligence, and especially GPTs (Generative Pretrained Transformers), is capable of analysis and of writing software. Hence, by enabling a program to look at what it can do, it might be able to propose and implement new functionalities on its own.
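As a minimal sketch of the idea, assuming the official openai Python package (v1+) and a hypothetical component manifest, a program could describe its own parts and ask a GPT what they add up to; none of the names below come from a real product:

from openai import OpenAI  # assumes the official openai package, v1+

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical manifest: the components the program can "see" about itself.
manifest = {
    "actuators": ["servo motor, 0-180 degrees, variable speed"],
    "sensors": ["RGB color sensor mounted at the arm tip"],
}

reply = client.chat.completions.create(
    model="gpt-4o-mini",  # an assumption; any chat model works
    messages=[{
        "role": "user",
        "content": "Given these components: {}. "
                   "Propose three useful functions this device could perform.".format(manifest),
    }],
)
print(reply.choices[0].message.content)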
Making a Robot Discover What It Can Do
Given GPTs’ ability to generate text across an incredibly wide range of topics, even from limited initial information (if you want to gain an intuitive understanding of how they work, read here), even the simplest robot should be able to explore and find useful functions. And if that is true, scaling the approach to more complex systems has significant potential to yield more flexible, powerful, and adaptive products, drastically expanding the range of their applications.
To test this hypothesis and explore the concept, I designed and conducted an experiment with the simplest robot I could think of: a mechanical arm equipped with a color sensor at its end.
To let the robot explore its environment and self-discover what it could do, I connected it to OpenAI’s GPT API (how to do this is quite interesting; I’ll write a separate post about it). This is the prompt I used:
# System prompt: frames the model as a robot reasoning about its own abilities.
system_prompt = (
    "You are a robot exploring your abilities. "
    "Based on your actions and observations, deduce your capabilities. "
    "Respond in JSON format with keys 'action', 'observation', and 'deduction'. "
    "Keep the response concise."
)

# User prompt: filled in with the latest action and sensor reading.
prompt = (
    "I performed the action: '{}'. "
    "My observation is: '{}'. "
    "What can I deduce about my abilities?"
).format(action_description, color_reading)
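For context, here is a minimal sketch of how one exploration step could be wired to the API. It assumes the official openai Python package (v1+); the move_to and read_color helpers, the model name, and the random action space are my illustrative assumptions, not the experiment’s actual code.

import json
import random
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def explore_step(move_to, read_color):
    # Choose a random action: an arm angle and a speed.
    angle = random.uniform(0, 180)
    speed = random.choice(["slow", "fast"])
    action_description = "rotated arm to {:.0f} degrees at {} speed".format(angle, speed)

    move_to(angle, speed)         # hypothetical actuator call
    color_reading = read_color()  # hypothetical sensor call

    user_prompt = (
        "I performed the action: '{}'. "
        "My observation is: '{}'. "
        "What can I deduce about my abilities?"
    ).format(action_description, color_reading)

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # an assumption; any chat model works
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    )
    # The system prompt asks for JSON, so parse the reply directly.
    return json.loads(response.choices[0].message.content)

Calling a step like this in a loop and accumulating the ‘deduction’ fields gives the robot a growing picture of its own abilities.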
The robot started to explore by moving to random positions. After the first reading, it provided a first deduction:
I am capable of precise rotational movement and can perform actions at varying speeds. I can execute tasks accurately, as indicated by the ‘Green’ observation.
After just five movements, it had figured out its simple configuration:
I possess the ability to rotate my arm with precision and speed, and I have visual processing capabilities to detect colors.
I was stunned. Not because the abilities are very complex, but because of how easily and quickly it got there. But what happened next was even better: the robot proposed potential applications.
I can answer questions by using my arm, for instance green for “yes” and red for “no”.
I hooked it up to a weather API, and now I have a very simple robot that tells me whether to grab an umbrella by shifting from red to green depending on the latest forecast.
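As a rough illustration of the hookup (again, an assumed sketch rather than the actual code), the forecast check could use a free service like Open-Meteo; move_to_color is a hypothetical helper that rotates the arm to the matching colored marker.

import requests  # assumes the requests package

LAT, LON = 47.37, 8.54  # example coordinates (Zurich)

def umbrella_needed(threshold=30):
    # Ask Open-Meteo for today's maximum precipitation probability (in percent).
    resp = requests.get(
        "https://api.open-meteo.com/v1/forecast",
        params={
            "latitude": LAT,
            "longitude": LON,
            "daily": "precipitation_probability_max",
            "forecast_days": 1,
            "timezone": "auto",
        },
        timeout=10,
    )
    prob = resp.json()["daily"]["precipitation_probability_max"][0]
    return prob >= threshold

# Green means "yes, grab an umbrella"; red means "no".
move_to_color("green" if umbrella_needed() else "red")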
A New Design Paradigm to Create a Future That Looks Like the Future
Science fiction is full of big and small robots. Some of them are part of smart systems, like homes. In my simple experiment, I’ve basically created one of those small, handy systems: a robotic arm that sits next to the entrance and signals me to grab an umbrella if it will rain. This suggests that the future will likely be full of robots, big and small, in all shapes and forms. Initially still designed by us, but not programmed by us. They’ll explore their capabilities, ask us what we need, propose solutions we might not even have thought of, and implement them for us.
A new era of design is upon us: one where functionality, aesthetics, and creativity are less at odds, forced into compromise, and instead coexist more easily, allowing ideals to flourish. Today, inquiry drives the design process, and making provides the means to explore ideas and discover the best solutions. Through sketching and prototyping, good designers have an intuition for where to intentionally leave ambiguity, allowing for reinterpretation and for new relationships to emerge, even for the original creator. Tomorrow, in the new design paradigm, the inquiry will expand beyond understanding user behaviors and needs to also encompass capabilities and behaviors self-discovered by robots, products, and even prototypes! It will be a cybernetic collaboration, in which the role of the designer becomes even more important: selecting the best patterns and behaviors and deciding which experiences are truly great.
In the new paradigm, we will be able to ask robots, “What can you do?” and receive surprising, innovative answers. This represents a fundamental shift towards a design era where products themselves become active participants, capable of self-discovery, adaptation, and expanding their own capabilities. As designers, our role will be to nurture these explorations, shape the interactions, and harness the creative potential that emerges from this collaboration between humans and intelligent systems. This will create a future that not only looks futuristic but truly embodies innovation and creativity, driven by continuous discovery and co-evolution.