Are "data" networks still "high-tech"?
Collective Artificial Intelligence Technology (CAIT) is a new technological frontier that enables robots and AI agents to work together, maximizing their individual and collective performance through teamwork.
This is not just a network of devices connected through a router, and it is very different from the Internet of Things (IoT).
CAIT is a substrate for interaction between AI agents, an expansive step in AI evolution and in the future of how things connect to each other.
AI agents choose and learn how to behave toward and collaborate with other AI agents in ways that earn rewards that would otherwise be unavailable to them individually. Machine learning in the context of CAIT lets a robot experiment with which social behaviors to exhibit toward other AI agents, maximizing its reward over time.
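As a rough illustration of that idea (not CAIT's actual implementation; the behavior names and the epsilon-greedy strategy are assumptions made for this sketch), an agent could keep a running reward estimate for each social behavior and gradually favor the one that pays off most:

```python
import random

# Hypothetical social behaviors an agent might exhibit toward a peer.
BEHAVIORS = ["share_data", "ignore", "request_help"]

class SocialBehaviorLearner:
    """Epsilon-greedy sketch: try behaviors, keep the ones that earn reward."""

    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.value = {b: 0.0 for b in BEHAVIORS}  # running reward estimates
        self.count = {b: 0 for b in BEHAVIORS}

    def choose(self):
        # Explore occasionally; otherwise exploit the best-known behavior.
        if random.random() < self.epsilon:
            return random.choice(BEHAVIORS)
        return max(BEHAVIORS, key=lambda b: self.value[b])

    def update(self, behavior, reward):
        # Incremental mean of the rewards observed for the chosen behavior.
        self.count[behavior] += 1
        self.value[behavior] += (reward - self.value[behavior]) / self.count[behavior]
```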
In a world of connected AI agents, how do they know which peers to pair with and which relationships to keep in order to increase efficiency? Our approach is deep reinforcement learning. AI agents form, strengthen, or end Data Transfer Channels (carrying control data or information data) with other AI agents to maximize reward. A Data Transfer Channel (DTC) is a unidirectional logical connection between an Observer and an Actuator, two Actuators, or two Observers. Any participant can establish, strengthen, weaken, or terminate a DTC at any given state, and a CAIT-enabled agent may have many Observers or Actuators, and therefore many DTCs.

Using Hebbian learning, a weight is assigned to each DTC. The weight initializes to a real scalar value and evolves based on the outcomes and rewards of previous states. Both sides of a DTC measure the reward at the next state or, more commonly, over a series of states, and based on those rewards each participant decides whether to strengthen or weaken the weight of that DTC. Analogous to the adage that "neurons that fire together, wire together," AI agents that help each other attain more reward bond together more strongly.
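A minimal sketch of a DTC with a Hebbian-style weight, assuming illustrative field names, a simple learning rate, and a termination threshold that are not part of CAIT's published interface:

```python
class DataTransferChannel:
    """Illustrative unidirectional DTC whose weight evolves with observed reward."""

    def __init__(self, source, sink, initial_weight=1.0,
                 learning_rate=0.05, terminate_below=0.1):
        self.source = source          # e.g. an Observer agent id
        self.sink = sink              # e.g. an Actuator agent id
        self.weight = initial_weight  # real scalar, evolves with rewards
        self.learning_rate = learning_rate
        self.terminate_below = terminate_below

    def update(self, rewards):
        """Strengthen or weaken the channel from rewards seen over recent states.

        A positive average reward strengthens the bond ("fire together, wire
        together"); a negative average weakens it. Returns False once the
        weight decays below the threshold, signaling the DTC should be ended.
        """
        avg_reward = sum(rewards) / len(rewards)
        self.weight = max(self.weight + self.learning_rate * avg_reward, 0.0)
        return self.weight >= self.terminate_below
```

In this sketch, either endpoint would call `update` with the rewards it measured over a series of states and drop the channel once `update` returns False.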
Artificial intelligence is driven by a reward system, where higher reward corresponds to higher accuracy and efficiency in an individual AI agent's performance. In environments crowded with traditional artificial intelligence agents, such as the internet, each agent competes with the others to maximize its own reward, and thus its own performance. A group of self-driving cars in a garage, for example, would compete with one another to exit first.
But this does not always maximize overall efficiency. Just as collaboration often yields better results in crowded human environments, machines may be better off collaborating with each other. This becomes especially important when the machines are working on behalf of their human owners, as in the example of autonomous cars. AI Inc.'s Collaborative Artificial Intelligence Technology (CAIT) is a new technological frontier that tackles this issue. Our robots can work together to maximize their individual and collective performance through teamwork. They can form groups, share information, build tasks, and even transfer their skills to one another.
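A toy calculation, with numbers invented purely for this sketch, of why greedy competition for the garage exit can cost more total time than an agreed order:

```python
# One exit services one car per time step. When every car rushes the exit,
# each contested step wastes extra time resolving the conflict; when the
# cars agree on an order, nothing is wasted.
def total_exit_time(n_cars, contention_penalty):
    # Car i leaves at step i (0-indexed), plus accumulated penalty steps.
    return sum(range(n_cars)) + contention_penalty * n_cars

competitive = total_exit_time(5, contention_penalty=2)  # cars jostle for the exit
cooperative = total_exit_time(5, contention_penalty=0)  # cars agree on an order
print(competitive, cooperative)  # 20 vs 10: coordination lowers total wait
```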
The simplest experiment can be designed with cleaning robots such as bObsweep working in the same vicinity: after authorization and pairing, they split the area between them to avoid overlapping coverage.
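A minimal sketch of that splitting step, assuming the room is modeled as a rectangular grid and pairing has already happened (the function and its parameters are hypothetical, not bObsweep's actual interface):

```python
def split_area(width, height, n_robots):
    """Assign each robot a vertical strip of the room so coverage never overlaps."""
    strip = width // n_robots
    zones = []
    for i in range(n_robots):
        x_start = i * strip
        # The last robot absorbs any leftover columns.
        x_end = width if i == n_robots - 1 else (i + 1) * strip
        zones.append(((x_start, 0), (x_end, height)))
    return zones

# Two paired robots cleaning a 10 x 6 grid:
print(split_area(10, 6, 2))  # [((0, 0), (5, 6)), ((5, 0), (10, 6))]
```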
Another example contrasts traditional AI with CAIT: AlphaGo, the AI agent from Alphabet Inc.'s Google DeepMind, is a standalone agent. It does not form relationships with other AI agents, and it cannot distinguish which other AI agent is helping it win. In a CAIT system, the individual AI agents collaborate and share intelligence, in addition to data, to achieve a common goal.
Power Product Evangelist · 6 years ago
Since machines don't have an ego, at least I hope not, collaboration might be easier to obtain this way. Scaling up, this would be a wonderful solution for optimizing traffic congestion and accident avoidance. Commonly, multiple failures have to stack up for a catastrophic failure. What is needed now is a very low-latency network and compute. In the '90s we had nested differentiated feedback loops, which were hard to stabilize. A "louder channel" could mean more feedback or more feedforward. While the former reduces gain and could cause system instability as phase margin dwindles in a second- or higher-order system, feedforward may lead to a runaway condition. Proper partitioning between cloud and edge computing devices to manage dead time would be critical, I suppose.