Is the Reward Hypothesis sufficient?
Mahindra Rautela
Postdoc Researcher - Los Alamos National Lab || PhD - IISc & Purdue
While taking the course "Fundamentals of Reinforcement Learning", I came across a question in the discussion prompt that asks:
"Is the Reward Hypothesis sufficient?"
There is a very nice discussion of this question by AI pioneers at this link:
https://incompleteideas.net/rlai.cs.ualberta.ca/RLAI/rewardhypothesis.html
My answer to "Is the Reward Hypothesis sufficient?"
Yes, it looks sufficient given our current understanding of the world. In some situations, the reward function can be a complicated combination of individual rewards, or a tradeoff between two competing rewards. Relating it to neurobiology looks appealing, but neuroscience itself is not properly understood; there is not even a single accepted theory of consciousness.
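To make the tradeoff point concrete, here is a minimal sketch of what the reward hypothesis asserts: even when an agent balances competing objectives, they are folded into one scalar reward signal. The function name, objectives, and weight below are illustrative assumptions, not from the discussion above.

```python
# Hypothetical sketch of the reward hypothesis: competing objectives
# (task progress vs. energy cost) are scalarized into a single reward.
# The names and the 0.7 weight are illustrative choices, not a
# standard formulation.

def combined_reward(task_progress: float, energy_cost: float,
                    weight: float = 0.7) -> float:
    """Fold two competing objectives into one scalar reward signal."""
    return weight * task_progress - (1.0 - weight) * energy_cost

# An agent that makes full progress toward its goal but spends energy:
r = combined_reward(task_progress=1.0, energy_cost=0.5)
print(r)  # 0.7*1.0 - 0.3*0.5 = 0.55
```

Whether every goal we care about can honestly be compressed into such a scalar is exactly what the discussion linked above debates.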
AI pioneers say that higher-order computation by an AI agent will give rise to consciousness as an emergent phenomenon, whereas some physicists like Roger Penrose propose the theory of Orchestrated Objective Reduction (Orch OR), which holds that consciousness is not an emergent phenomenon arising from computation but a totally different process. This theory also claims to address the problem of Schrödinger's cat and the collapse of the wave function, as well as the hard problem of consciousness. Microtubules in the brain are suggested as the carrier of consciousness.
https://www.youtube.com/watch?v=orMtwOz6Db0
Without solving the problem of consciousness, it is premature to call the reward hypothesis "sufficient" for AGI. Who knows what lies beneath the problem of consciousness, and how it may change the way we think about intelligence?