Conscious machines
Are ChatGPT and its contemporaries conscious? What characteristics do they lack? What kind of research has not yet been incorporated into the current models? This article sketches four research directions that are underrepresented in the scientific literature.
Simulation loops of real-world temporal phenomena
Real-world phenomena can be temporal. The sun rises in the east and sets in the west. A ping pong ball returns to the player who hit it. A book falling from a bookshelf will hit the ground. To model such phenomena properly, it helps if the internal structure of an AI can simulate such events.
In the literature, sensorimotor loops couple the sensory system to the motor system. This coupling is not stationary: there is a dynamic process, a loop, that integrates errors over time. Likewise, for temporal phenomena, a simulation loop can tune both models, perception and action, over time.
Such simulation loops do not have to run at the same speed as the real-world phenomena; on the contrary, much can be gained by accelerated replay of real-world sequences.
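To make this concrete, here is a minimal sketch of such a simulation loop in Python, assuming a toy linear forward model and a hand-coded stand-in for the real-world dynamics (a falling ball); every name and constant is illustrative, not an existing system.

```python
import numpy as np

# A simulation loop in miniature: a learned forward model predicts the next
# sensory state from the current state and action, and the prediction error
# tunes the model. true_dynamics() stands in for the real-world phenomenon.

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(2, 4))    # toy linear forward model (with bias)

def features(state, action):
    return np.concatenate([state, [action, 1.0]])

def forward_model(state, action):
    return W @ features(state, action)

def true_dynamics(state, action):
    position, velocity = state
    velocity += (-9.81 + action) * 0.01   # gravity plus a small push, dt = 0.01 s
    position += velocity * 0.01
    return np.array([position, velocity])

# Record one real-world episode at real-time speed...
state, episode = np.array([1.0, 0.0]), []
for _ in range(100):
    next_state = true_dynamics(state, 0.0)
    episode.append((state, 0.0, next_state))
    state = next_state

# ...then replay it many times, faster than real time, to tune the model.
lr = 0.002
for _ in range(200):                      # 200 accelerated replays of 1 second
    for s, a, s_next in episode:
        error = forward_model(s, a) - s_next
        W -= lr * np.outer(error, features(s, a))   # simple gradient step
```

The replay loop is exactly the acceleration argued for above: one second of recorded reality is rehearsed two hundred times in a fraction of that second.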
Here, our own limitations are most obvious.
The most important temporal dynamics in the brain seem not to reside at the level of individual neurons (spike-timing-dependent plasticity). Rather, they show up at a more global level, in the form of alpha to gamma rhythms. These are the rhythms associated with the state of being awake, and they undergo large changes when we fall asleep, meditate, or pay attention.
We know very little about this type of dynamics in our brain and how it represents the outer world. It makes sense that the two are related, but how?
Scientists: Wolpert
An unsupervised specialization algorithm
The brain implements an algorithm that learns to assign tasks to specialized modules when the need arises. A complex task is subdivided into more elementary ones in a manner that is not predefined by a programmer. This is perhaps the most difficult thing to swallow for a computer scientist, who often makes a living doing exactly that: properly decomposing a problem into tractable subproblems. An algorithm that performs such a decomposition in an unsupervised manner would do the kind of meta-level work that was hitherto exclusively their domain!
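As a hedged illustration, here is a stripped-down competitive-learning sketch in Python: several identical predictors compete for each sample, and only the best one learns, so a decomposition into sub-tasks emerges without being predefined. This is a toy, not anyone's published algorithm; real mixture-of-experts systems use soft gating so that no expert starves.

```python
import numpy as np

# Unsupervised specialization by competition: three identical linear "experts"
# see a piecewise task; for each sample only the expert with the smallest
# prediction error is updated, so each expert gradually claims one regime.

rng = np.random.default_rng(1)
experts = rng.normal(scale=1.0, size=(3, 2))   # (slope, intercept) per expert

def predict(expert, x):
    slope, intercept = expert
    return slope * x + intercept

def task(x):                  # three hidden regimes, unknown to the learner
    if x < 1.0:
        return 2.0 * x
    if x < 2.0:
        return -1.0 * x + 3.0
    return 0.5 * x + 1.0

lr = 0.1
for _ in range(20000):
    x = rng.uniform(0.0, 3.0)
    y = task(x)
    winner = int(np.argmin([(predict(e, x) - y) ** 2 for e in experts]))
    err = predict(experts[winner], x) - y
    experts[winner] -= lr * err * np.array([x, 1.0])   # update the winner only

print(np.round(experts, 2))   # rows tend toward the regimes' (slope, intercept)
```

The winner-take-all rule is what makes the assignment unsupervised: nobody told the experts which regime is theirs.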
There is a problem with specialization, though. As soon as something is specialized, how can we still tackle a problem in a general way? Do we only generalize across different specialized submodules, or do we preserve some brain area for generalized problem solving?
Scientists: Tishby
An intrinsic reward system
Brains seem to have specialized subsystems that come up with rewards which function as mere indicators of possible external rewards. You get hungry before you starve. You get scared before you are hurt. Very logical of nature to implement such mechanisms.
As with the unsupervised specialization algorithm, such an internal mechanism must confer evolutionary benefits, even for small organisms. (More precisely, there must be a trade-off between the metabolic costs of an intrinsic reward system and its benefits in terms of survival quality.) This means that, the way we are built, we are hitchhiking on such mechanisms from a distant evolutionary past. In other words, these systems have been built by an evolutionary algorithm, not by a cognitive algorithm. Research in this direction tries to build an intrinsically curious system that never stops exploring: lifelong learning.
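A minimal sketch of such an intrinsic reward, loosely in the spirit of Schmidhuber's artificial curiosity and Oudeyer's intrinsic motivation (this toy uses raw prediction error as the reward; their actual formulations, such as learning progress or compression progress, are more refined):

```python
# Curiosity as an internal reward: the agent rewards itself for the prediction
# error of its own world model, so surprising transitions pay off until they
# are learned. Everything here is a toy stand-in.

model = {}                              # table: state -> predicted next value

def intrinsic_reward(state, observed):
    predicted = model.get(state, 0.0)
    error = abs(predicted - observed)
    model[state] = predicted + 0.5 * (observed - predicted)   # update the model
    return error

transitions = {0: 0.0, 1: 3.0}          # state 0 is boring, state 1 is learnable
for step in range(8):
    rewards = [intrinsic_reward(s, transitions[s]) for s in (0, 1)]
    print(step, [round(r, 3) for r in rewards])
```

The printout shows the crucial property: the reward for state 1 decays as the model learns it, so a curious agent is pushed onward to whatever it cannot yet predict, which is exactly the never-ending exploration mentioned above.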
Scientists: Schmidhuber, Oudeyer
Grounded in a very rich world
The bandwidth of information going into our system is phenomenal. The number of rods in one eye is over 100 million. The number of pain receptors is estimated at around 200 per square centimeter, which amounts to around 4 million per adult. In comparison, we have only around 10,000 taste buds (but then again, around 5 million olfactory receptors). An enormous quantity of information flows into our brain each second, condensed and preprocessed to be as valuable as possible over evolutionary time. Each time we perform an action in the real world, this data stream shifts, expands, and tumbles around, as if we are navigating by disco lights.
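A quick back-of-envelope check of the pain-receptor figure (the 1.8 m² adult skin area is my assumed textbook average, not a number from the text above):

```python
skin_area_cm2 = 1.8 * 10_000          # 1.8 m^2 of adult skin = 18,000 cm^2
pain_receptors = 200 * skin_area_cm2  # 3,600,000, i.e. "around 4 million"
print(f"{pain_receptors:,.0f} pain receptors")
```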
We have to recover invariants in this unordered stream of information, and the only way to do this is through our quite primitive macro-actions (especially as a baby). Slowly, we encounter regularities and are no longer surprised by our own hand touching our knee, or by our own hands moving in front of our eyes; we learn to understand and recognize ourselves in the context of this tumultuous world outside.
We find out about our identity in all this turmoil, and we would not be able to do so as an external observer of some virtual world. This is not just big data analysis; this is real-world, real-time, submerged, big data processing.
Imagine a conscious machine running on a server, peering through the pinhole of a TCP connection at only our virtual web world. Yes, it is big, but isn't there a lot of crap out there? How would it learn what is right and what is wrong? Would it not need the same quality of data input as we humans do? It does not seem far-fetched to assume that such machines need a physical presence, that they must be robots in one form or another, to be able to experience the real world...
Feel free to comment and describe work in AI that touches upon those points!