The Singularity and the Future of Humanity

The 'Singularity' is technology's vision of the future. Will we simply accept the Singularity as our destiny, or will humanity map out its own journey?

Every day we wake to news from a troubled world. We are so consumed by just getting through each day, against a backdrop of wars, cyber attacks, political strife, and economic insecurity, that we have no collective vision for the future, at least not one with any kind of soul.

Daily we read about technological advances which, against the dust and noise of life, act as a beacon of excitement at times, and of uncertainty at others.

Technology's developments and vision feed a narrative of technological inevitability. And, indifferent to the apparent chaos that forms the backdrop to our lives, technology forges its own vision of the future.

The 'Singularity' is technology's futuristic vision. For many technology zealots it is the Holy Grail.

And so, in the absence of alternatives, technology's vision is becoming the de facto vision of the future, in a world that has lost its own compass and bearings.

Where do ordinary people fit into the future vision of the Singularity? Well, they don't. The language of the Singularity is not human centric.

The picture at the head of this article has nothing to do with the Singularity, and that is the point of this article!

To me, this painting by Renoir could re-invigorate hope in our own destiny if we allow it to symbolise the soul and compass of our future.


What is the ‘Singularity’?


Initially, it was a label given to the accelerating rate of technology developments.

That view was later updated with a vision of some form of human level Artificial General Intelligence that could improve its own learning to such an extent that it would trigger an unpredictable transformation in society.

Other variations include the prediction that the human mind could be uploaded into an advanced computer, in this way arriving at a computer with human level intelligence.

Many descriptions of the Singularity share the vision of an artificial intelligence that exceeds all human capability.

The Singularity is more of a futuristic vision than a realistic forecast based upon a well defined goal.


What is a Super-intelligent AI, Beyond Human Intelligence?


Like most futuristic visions, the Singularity does not have a foundation of well scoped definitions.

So, I propose that a human level artificial intelligence should satisfy at least three broad criteria:

  • It would need to display understandable, predictable, and trustable human like behaviours;
  • At least match all human cognitive capabilities; and
  • It should adequately function in most everyday real world scenarios.

So, any AI with capability beyond these three criteria would qualify as a beyond human artificial intelligence.


How likely is the Singularity?


So, it seems that the Singularity is a future period where an advanced form of artificial intelligence becomes the dominant force for transformation in society.

As with most grandiose visions, the Singularity is blessed with an over-optimistic assessment of what can be achieved, especially considering what it is dealing with: the nature of mind and human intelligence.

The vision has a staggering scope, of unfathomable complexity, and requires an enormous dedicated effort to understand human mind and intelligence, and then translate that understanding into a computer or AI that would emulate human intelligence.

Ultimately, the core of the Singularity vision probably will not be realised. However, along the way, there may be useful discoveries.

Below, I outline some limitations that may collapse the scope of what can be achieved.

An assumption underlying the Singularity is that science and technology are the only force for the evolution of the future.

This assumption is fundamentally flawed. The 'Singularity' and the narrative around it ignore the historically observable evolution of human consciousness. Below, I give reasons why this will likely disrupt the Singularity, and technology's increasingly dominant voice in shaping the future.

The Singularity is the mystical culmination of technology's quest for the future.

However, a futuristic vision that does not have humanity at the core of its narrative is doomed to fail.

Ironically, as great an intellectual challenge as the Singularity is, it will not be an intellectual challenge that defeats it.

The gods have always frowned upon hubris.


The Limits of AI, and Constraints on the ‘Singularity'


In a recent study (Yehudayoff et al.; Nature, 09 Jan 2019), the authors showed that whether an ML algorithm can extract a pattern from a limited training dataset can be undecidable. The problem is linked to the Continuum Hypothesis, which Kurt Gödel and Paul Cohen showed can be neither proved nor refuted from the standard axioms of set theory.

That is, regardless of the size of the training dataset, a machine learning (ML) algorithm may not be able to generalise its knowledge. The ML algorithm's knowledge would be constrained by the scope of the training dataset and, at best, may only be generalised within a single category of knowledge. But we would never be able to prove that it has generalised its knowledge.
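To make the point concrete, here is a minimal, hypothetical sketch in Python (invented for illustration, not from the study): a model fitted on a narrow training range can look accurate in-distribution, yet fail badly the moment it is asked about the world outside that range.

```python
# Illustrative sketch: a model fitted on a narrow training range can look
# accurate in-distribution yet fail badly outside it.
def fit_line(xs, ys):
    # ordinary least-squares fit of y = a*x + b
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

true_f = lambda x: x * x                 # the "real world" pattern
train_x = [i / 10 for i in range(11)]    # training data only covers [0, 1]
a, b = fit_line(train_x, [true_f(x) for x in train_x])

in_dist_err = abs((a * 0.5 + b) - true_f(0.5))   # inside the training range
out_dist_err = abs((a * 10 + b) - true_f(10))    # far outside it

print(in_dist_err, out_dist_err)  # small error inside, huge error outside
```

Nothing in the training data can tell the model (or us) whether its pattern holds beyond the data it has seen; that is the author's point about generalisation being unprovable.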

In a real world situation, an AI cannot necessarily recognise a new situation, may not know what data is relevant to it, and cannot improvise in a situation that its training algorithm has not been programmed for. Self-driving cars are demonstrating this limitation.

That is, the 'understanding' that the AI has learned is constrained by the world-view that has been defined and encoded a priori into the training algorithm. Improvisation, relevance and meaning are beyond the algorithm.

An underlying assumption of AI learning is that all of the information needed to derive a pattern and generalise knowledge is available in the training data. However, in real world strategic scenarios not all relevant information is available; it is hidden. An AI cannot train itself on information that is hidden.

The learning algorithm needs an 'objective function' which guides the learning process. However, crafting an 'objective function' becomes increasingly difficult in complex real world situations that are varying, strategic, contextual, and suffer from incomplete information.
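As a toy illustration of how brittle a hand-crafted objective can be, consider this Python sketch (the plans, counts, and weighting are invented for the example): a proxy objective that counts only deliveries selects a plan that the objective we actually intended would reject.

```python
# Illustrative sketch: an objective function is easy to write for a
# well-defined task, but as a proxy for a real-world goal it can pick
# an outcome a human would reject.
plans = [
    {"name": "rush",    "deliveries": 10, "late": 6},
    {"name": "careful", "deliveries": 8,  "late": 0},
]

proxy = lambda p: p["deliveries"]                     # easy to write down
intended = lambda p: p["deliveries"] - 3 * p["late"]  # closer to what we meant

best_by_proxy = max(plans, key=proxy)
best_intended = max(plans, key=intended)
print(best_by_proxy["name"], best_intended["name"])   # the two objectives disagree
```

In a complex, shifting, strategic context there may be no fixed formula like `intended` at all, which is precisely the difficulty the paragraph describes.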

As the AI increases its scope of training and sophistication, the understandability, predictability, and trustability of its behaviours will critically decline. This has already been demonstrated by the IBM Watson Oncology project, which made incorrect and unsafe recommendations.

Real world situations are contextual, may require understanding outside of the scope of the ML algorithm's training data, are often ambiguous, and may require creativity, risk assessment, judgement, and wisdom.

Training data is not wisdom. Human understanding is typically much more than basic data, formulae, and the principles of a domain of knowledge. Learning algorithmically, incrementally, is not the same as learning conceptually.

If it is not possible to prove that an ML algorithm can extract a pattern and generalise its knowledge, the associated risk may restrict the application of the AI. There will be regulatory and legal consequences across many fields of application: capital markets forecasting, systemic risk management, medicine, the use of AI in law, Biometric recognition, behaviour prediction, and more. And the issue of understandability, predictability, and trustability will restrict the application of AI in many business applications, especially those subject to regulatory compliance.

If limited AI is subject to such significant limitations, then we must expect Artificial General Intelligence to face even more difficult constraints.

The language that describes the vision of the Singularity does not reflect in-depth analysis of the inevitable complexity and constraints.


The 'Singularity' Depends Upon Unlikely Assumptions


The accelerating pace of technological advance has been projected forward to culminate in the 'Singularity'. This seems to assume that technology is the overriding force shaping the future; it ignores the broader social context.

If mass job displacement, economic disruption, and growing inequality are the outcomes of Robotics, 3D Printing, Robotic Process Automation, and other technologies, then there will be a reaction against not only technology but the whole paradigm that governs society; and so there will be no further development of technology towards the 'Singularity' or super-intelligent AI.

The acceleration of technological advance may be the result of positive feedback loops that drive the advance. However, studies in Economics, and global population studies have shown that periods of accelerating growth fuelled by positive feedback loops have been disrupted, and even reversed. (Korotayev, Andrey (2018); Journal of Big History: 2)

Studies of the number of patents per thousand of population have shown a decline since the mid 1800s. Increasing complexity makes progress more difficult. Creativity and innovation do not necessarily deliver accelerating progress. (Huebner, Jonathan (2005); Technological Forecasting & Social Change, October 2005)

In order for the Singularity to happen, human level AI would need to arrive at a theoretical framework, and contextual narrative from which understandable, predictable, and trustable human like behaviours could be derived in real-time. There is no possibility of a scientific framework for morality (James Davison Hunter and Paul Nedelisky; 'Science and the Good'; Yale; 2018), and so, it could never be incorporated into some form of AI. The same could be said of human intuition, aesthetic judgement, analogical reasoning, a-causal reasoning, and much more that defines our humanity: curiosity, imagination, intuition, emotions, passion, desires, pleasure, aesthetics, joy, purpose, telos, values, experience, wisdom, judgment, and even humour.

If there are human capabilities that can not be learned by an AI, and if there are real world situations that can not be learned by an algorithm, then in what sense would an AI have super-intelligence beyond a human?

A super-intelligent AI may be able to organise and search vast amounts of data in real time; it may support advanced Predictive Analytics and diagnostics; it would offer powerful processing power, enabling it to compute all possible options for well defined problems (for example board games); it could also integrate with Robotics, RPA, VR, Biometric recognition, IoT, and scale, to provide security, anomaly detection, control and optimisation of processes, and even optimal functioning of Smart Cities. It may also emulate planning, design, language use, and creative activities (for example, compose music, create a painting).
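The board game point can be made concrete with a short Python sketch (illustrative, not from the article): for a fully defined game such as noughts and crosses, a machine can simply enumerate every line of play. Exhaustive minimax shows that neither side can force a win, without any "understanding" of the game at all.

```python
# Illustrative sketch: exhaustive minimax over noughts and crosses.
# For a well-defined game, brute-force enumeration suffices.
def winner(b):
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for i, j, k in lines:
        if b[i] and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, player):
    w = winner(b)
    if w:
        return 1 if w == "X" else -1    # score from X's point of view
    if all(b):
        return 0                        # board full: draw
    scores = []
    for i in range(9):
        if not b[i]:
            b[i] = player
            scores.append(minimax(b, "O" if player == "X" else "X"))
            b[i] = None
    return max(scores) if player == "X" else min(scores)

value = minimax([None] * 9, "X")
print(value)  # 0: with best play by both sides, neither can force a win
```

This is exactly the kind of closed, fully specified problem where machine search excels; the surrounding paragraphs argue that open-ended real world situations are nothing like this.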

A super-intelligent AI, beyond human intelligence, must overcome the need for an a priori definition of the world-view; it would need to recognise new real world situations and be able to determine the relevant data for that situation; it would need to effectively overcome strategic situations involving hidden information; it would also need to recognise scenarios for which there may be no predefined objective, but for which a strategic advantage is sought. And a beyond human, super-intelligent AI must have the ability to learn conceptually.

The 'Singularity' might be a beyond human, super-intelligent AI that has achieved embodied cognition. However, if human cognition, and human cognitive capabilities are emergent features in what way could they be learnt by an AI?

The consensus neuroscience view is that mind is an emergent property of the dynamical behaviour of the living biological neural network.

It has been suggested that the human mind could be 'uploaded' into a super-intelligent AI; or, the mind could be enhanced by the application of biotechnology.

The basic idea is to take a particular brain, scan its structure in detail, and construct a software model of it that is so faithful to the original that, when executed, it will emulate the original brain.

The mind is dependent upon a living brain. A scanned copy of the brain is not alive. If the copy is uploaded into a computer it would need to be re-animated.

The re-animated uploaded brain would only be some form of simulation; it would not be alive, and it would not have a conscious mind.

An assumption of the mind upload proposal is that mind, and intelligence can be extracted from the architectural state of the brain at the moment it is scanned.

However, emergent properties of complex systems may be expressed as a co-ordinated behaviour of dynamical processes, or components. This has been observed in insect societies: ant colonies have shown collective behaviours emergent from the unknowing activities of the individuals. Emergent properties, such as mind, may not be accessible in a static scan.

Neuroscientists have been able to manipulate Qualia; however, there is no scientific explanation in terms of brain activity for Qualia. There is no scientific explanation of consciousness, or felt subjective experience.

Therefore, those variations of the Singularity that involve uploading the mind and emulating human intelligence are based on complete unknowns. We have no idea of what an explanation of mind, consciousness, Qualia, or intelligence would look like. And that renders these versions of the Singularity empty, and unrealistic.

Also, we are immersed in lived experience, and we cannot leave it behind. A description of life is not direct experience of life.

That is, if the Singularity narrative can not be understood in terms of lived experience it becomes meaningless.


How Major Realignments in Society Will Disrupt the ‘Singularity’


There is a growing awareness of the adverse influences associated with recent technologies. Every day there are reports about privacy violations, the impact of Robots on jobs, the impact of digital technology on children, threats of increased surveillance using Biometric recognition technology, concerns over adverse impacts of gene editing, and more.

And a growing number of published articles acknowledge the significant, global scale disruptive potential associated with AI, Robotics, and related technologies. The impacts are wide ranging: global employment; privacy; economic disruption, including adverse impacts on taxation and the potential collapse of economic sectors due to 3-D and 4-D printing; global security, as a consequence of the militarisation of AI and robotics; the psychological and social development of children; and growing inequality and wealth concentration.

The suggestion that AI, Robotics, and other technologies could be employed to solve many of the world's problems is not supported by experience so far. The internet could have already delivered free education to everyone on the planet, but that has not happened. And, today, we are experiencing the destructive impact of digital technology, especially social media.

These technologies have not been developed for the greater human good; AI, and Robotics are no different.

Historically, social progress at scale has been driven by new perspectives and values that underpinned a totally new world-view. Scholars have recognised (since early last century) that this shift in the evolution in our consciousness has been developing for some time.

The historically observable co-evolution of human consciousness and society is not an evolution driven by science and technology; it predates all recent technologies.

So, it is unlikely that Biometrics, AI, Robotics, RPA, and 3-D Printing, and other 4IR technologies are the results of a new emergent level of human awareness.

A new transformative evolution in human awareness would be looking for solutions to these social issues, not more of the same.

The rapid rise, and worrying evangelical narrative of these technologies, together with their apparent accelerating development and deployment, and the growing inventory of adverse social impacts, may further fuel the evolution of human consciousness as it seeks a new level of awareness, and a new social order.

These transitions in human consciousness to new and expanded levels of awareness may see through the false narrative that promotes AI, Robotics, other recent technologies, and even the Singularity.

If this expanded level of awareness sees technology as part of the paradigm supporting these social concerns then technology may be disempowered by new perspectives. And that could be the end of the Singularity!

Hunter Logan

Weird art, unusual designs. Open to commissions and projects.

5y

Quite an article, Mark Timberlake. Food for thought.

Reply
Felix Hovsepian, PhD

Real CTOs Disrupt | Complexity Science |

5y

Mark Timberlake, you hit on so many points in this article that it's difficult to know where to start! So let me start by saying, I agree with the sentiment of what you have shared, even if there are individual points on which I may have a different opinion. One example is the notion that Yehudayoff et al. showed something could not be true or false. What they actually show is that it cannot be proved nor refuted. On the face of it I imagine many must believe I am just nitpicking, which is not the case; the reason for making this (technical) distinction is that 'truth' and 'provability' are distinct notions. This distinction is not only important from a technical standpoint, but it also adds weight to the concerns you have expressed. My second point is related to the first. The pioneers of the field of computation vehemently objected to people misquoting their work, and thereby suggesting that technologies based on these kinds of formal systems (namely, the computer systems we use today) could somehow be used to simulate the human mind. Which, in and of itself, also adds weight to the concerns you expressed. Here I am thinking of Kurt Gödel & John von Neumann in particular.

Todd Dearing may be of interest.

Reply
Benjamin Tremblay

Senior Principal Engineer @ Constant Contact | Front End Specialist

5y

Power is power. If we give it away, we should not expect to get it back. It has always been a dangerous mistake to reduce life to a game. Our machines are able to beat us at games but they cannot comprehend life. AI is here to show us the pointlessness of playing life by the rules, and show us, once and for all, that most human life strategies are pyramid schemes, but this time we all lose.

Gary Arabian

Senior Systems Engineer

5y

Well written Mark. Thanks for posting it.

Reply
