AI - Why You Should Be Concerned
Mark Timberlake
Analytics | BI | Digital | Mobile Applications | Cyber Security | Senior Project Manager
This article continues the themes of my previous review of AI and Robotics:
https://www.dhirubhai.net/pulse/technology-data-social-responsibility-mark-timberlake/
An alarming percentage of the articles that I have read about AI, robotics, and the other technologies that make up 4IR are of two types: they are either superficial and evangelical, or they cast AI and robotics as a discrete alien presence, which is then subjected to some clever analysis that finally proclaims that we can safely ignore it and continue worrying about our mortgages.
Those articles that do refer to previous industrial revolutions blithely proclaim that each and every industrial revolution generates good, but they offer no critical assessment of why the future should be the same as the past.
These technologies are not discrete alien presences that we can choose to ignore; they are already deeply embedded within our society, and that integration and intrusion into every aspect of society and our personal lives will accelerate.
In my previous article about 4IR, I gave reasons to believe that this technology revolution is different from previous industrial revolutions; the potential dislocations are globally significant.
I also argue that the issues are deeper than the individual technologies themselves; these issues include structural problems with our society, and the way that these technologies are developed and globally deployed. These technology developments involve global risk transference with almost zero global governance over their development and deployment.
That risk transference is not a carefully profiled risk, subjected to effective governance that has assessed it as low and limited in scope. In the case of 4IR technologies (the technologies capable of collecting unprecedented amounts of data in real time), the risks are likely to be large-scale and global, subject to no serious level of impact analysis and no global governance.
The Observer Bias Effect
Scientists have wondered why the earth has not had a cataclysmic, life-exterminating asteroid impact. The earth has had asteroid impacts, and while they have probably caused mass extinctions, they have not totally exterminated life on earth.
Now, after more than four billion years of earth history, many would conclude, based on the fact that we are still here, that the likelihood of a totally life-exterminating event on earth is extremely remote.
That conclusion is an example of the Observer Bias Effect: only surviving observers are around to draw it, so our continued existence tells us little about the underlying risk. Indeed, as time progresses, the cumulative likelihood of a life-vaporising asteroid strike on earth only increases!
Similarly, we have progressed through several previous industrial revolutions without a globally catastrophic economic, social, or political result; but we should not be blind to the Observer Bias Effect.
The point is, we need to be critical, and not just assume that everything will be fine.
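The selection effect described above can be made concrete with a small simulation. The numbers below are purely illustrative assumptions, not estimates of any real extinction risk:

```python
import random

random.seed(42)

P_EXTINCTION = 0.01  # hypothetical per-epoch probability of a sterilising impact
EPOCHS = 100         # length of each simulated planetary history
TRIALS = 20_000      # number of simulated histories

survived = 0
for _ in range(TRIALS):
    # A history "produces observers" only if no epoch ends in extinction.
    if all(random.random() >= P_EXTINCTION for _ in range(EPOCHS)):
        survived += 1

# Every observer, by definition, lives on a surviving timeline and so
# looks back on a history with zero sterilising impacts -- regardless
# of how large P_EXTINCTION actually is. Survival alone is weak evidence.
print(f"Fraction of histories that produce observers: {survived / TRIALS:.3f}")
print("Sterilising impacts in any observer's past: 0")
```

Even though roughly a third of the simulated histories end in sterilisation, no observer ever records one, which is exactly why "we are still here" is an unreliable basis for concluding the risk is remote.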
The Fallacy of Absolute Progress
The suggestion that AI could help solve complex global problems ignores history and does not address the fundamental causes. First, it suffers from the fallacy that scientific, technological, and industrial developments constitute absolute progress that can simply be applied to the issues of the day; we have serious global unemployment and environmental problems precisely because of technology innovations that were unleashed on a global scale.
Science and technology developments are not absolute advances. In time, many have been shown to deliver only relative benefit, with significant adverse side-effects and trade-offs.
Jobs Displacement and Global Upheaval
I am concerned about the power of AI and Robotics individually, and in their potential for global-scale integration with the other 4IR technologies, such as IoT and VR/AR.
What are the effects on society when significant, multi-industry scale disruptions are unfolding, with potentially non-linear cascading effects on other industrial and economic systems? And for industries dominated by siloed, reductionist thinking, those disruptions will have amplified effect.
There have been several studies forecasting that up to 50% of employment could be affected by AI or Robotics. That jobs displacement could happen quickly, and before affected governments wake up to what is happening: due to a lag effect, there will be no noticeable impact until some critical level of displacement has been reached. The impact on the affected countries will not be immediately felt in Europe or the US, and may not even be noticed until regional chaos becomes a global issue.
So there will be a significant global impact, driven by the fragmented geographic and economic-sector nature of this jobs displacement, the lag before any noticeable impact, and the likelihood that governments have no proactive governance framework in place.
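The lag argument above can be sketched as a toy model. Every number here is a hypothetical, chosen only to show the shape of the effect, not to forecast anything:

```python
# Toy model: displacement accumulates each year, but only registers
# politically once it crosses a "noticeability" threshold.
DISPLACEMENT_PER_YEAR = 0.03  # hypothetical: 3% of remaining jobs displaced/year
NOTICE_THRESHOLD = 0.15       # hypothetical: level at which governments react

displaced = 0.0
year_noticed = None
level_at_notice = None
for year in range(1, 21):
    # Displacement draws from the shrinking pool of remaining jobs.
    displaced += DISPLACEMENT_PER_YEAR * (1 - displaced)
    if year_noticed is None and displaced >= NOTICE_THRESHOLD:
        year_noticed, level_at_notice = year, displaced

print(f"Threshold crossed in year {year_noticed}; displacement already at "
      f"{level_at_notice:.0%} before any policy response can even begin.")
```

The point of the sketch is structural: by the time the threshold is crossed, years of displacement have already accumulated, so any reactive policy starts well behind the curve.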
Now, consider the backdrop of nations in a competitive arms race to dominate AI and Robotics. I am concerned about the possibility of regional chaos, resulting from jobs displacement, being used as a pretext for intervention by these states. I feel this possibility is very real in South-East Asia.
The Forecast of New Jobs
I have read conclusions that there will be new jobs created. If people have a future role, it will likely be highly specialised, advanced analytical, or creative; but that is not going to guarantee employment for the majority.
One of these forecast job types is the ‘AI Explainer’. But what is the reliability of an ‘AI Explainer’ faced with unknowns such as: an AI design based upon flawed logic; inadvertent bias incorporated into the design; a system that has been hacked; an unexplainable, self-learning black box; or an unknown capability boundary?
Your Mundane Job, and Whose Decision is it?
Many armchair consultants and techies love to pontificate about AI and Robotics freeing you from your mundane job. But that is NOT their decision. Their prognostications are framed in terms of the inevitability of the AI, Robotics, Industry 4.0 revolution - so you might as well get used to it!
Questions about job satisfaction are completely irrelevant to the whole issue of the deployment of these technologies. What is relevant is the question of whether we set the direction of our future global society, and NOT some AI or Robot revolution. And that decision process must take into account that work is a structural feature of our society; it is the mechanism by which we support our families and their futures.
Income Inequality, and the Concentration of Global Wealth
A recent study, published in the Harvard Business Review, concluded that Globalisation has had a detrimental effect on wealth distribution, in regions that did not actively counter those effects with protective legislation.
Global scale jobs displacement can only accelerate that concentration of wealth. It is highly unlikely that a Universal Basic Income will change that outcome.
AI as a Black Box
If an algorithm is biased, how would we know? How could we distinguish between a design flaw, human bias, or a malicious hack of the AI? A recent article in the MIT Technology Review described an algorithm that taught itself how to drive a car by watching a human drive. The author commented that the developers of the AI were unable to explain exactly how it was making decisions; that is, the AI was NOT explainable. This is a general problem with all self-learning technologies.
The issue with AI is much more involved than a cartoonish view of robots with malicious intent. Consider, for example, an AI that is able to autonomously fly an aircraft under all conditions, at all altitudes. From a systems design point of view, the AI could have boundary conditions in its capabilities. These could be deliberate design constraints, or inadvertent constraints imposed by software/hardware/sensor interfaces, or other design factors. So the AI will have a boundary that circumscribes its zone of control.
Now consider the situation where flight conditions push the AI to the edge of that control boundary, and that boundary is a 'hard boundary'. While the AI is within the boundary there is flight control; once the boundary is crossed, the AI is outside its zone of control. A very real design flaw would be that the human element is not part of the AI design, so that, as the AI is pushed over the hard boundary, it simply throws control of the aircraft to the 'pilot'. That is, there is no design feature that enables a gradual transition of control within a safe zone; the pilot is forced to take control of the aircraft in what is effectively a catastrophic failure zone.
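The two handoff designs can be contrasted in a few lines. This is a deliberately simplified model, not any real autopilot architecture; the function names, thresholds, and the "stress" variable are all invented for illustration:

```python
def hard_handoff(stress: float) -> str:
    """Flawed design: the AI holds control right up to its boundary,
    then dumps control on the pilot with no warning and no transition."""
    BOUNDARY = 1.0  # hypothetical edge of the AI's zone of control
    return "AI" if stress < BOUNDARY else "PILOT (abrupt, mid-crisis)"

def graduated_handoff(stress: float) -> str:
    """Safer design: a buffer zone inside the boundary where the pilot
    is alerted and control is shared before the edge is ever reached."""
    BOUNDARY, BUFFER = 1.0, 0.8
    if stress < BUFFER:
        return "AI"
    if stress < BOUNDARY:
        return "SHARED (pilot alerted, transition in progress)"
    return "PILOT (already engaged)"

# Compare the two designs as conditions deteriorate.
for stress in (0.5, 0.9, 1.1):
    print(f"stress={stress}: hard={hard_handoff(stress)!r}, "
          f"graduated={graduated_handoff(stress)!r}")
```

The difference shows at `stress=0.9`: the hard design reports everything is fine until the instant it fails, while the graduated design has already begun transferring control inside the safe zone.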
We could apply the same line of thought to the use of AI and Robotics in Medicine, Law, Military applications, and more.
Already, an Uber self-drive vehicle has killed someone. Let me guess: a design flaw!
The Incorporation of AI, Robotics, VR, AR, IoT in Abuses of Power
A few years ago, I remember reading some government-prepared information encouraging people to complete their census forms, innocently advising that ‘…we need the results to decide where to place new schools and churches.’ At the beginning of the last century, Jews, being good German citizens, unsuspectingly complied - the rest is history.
A recent article about the evolutionary displacement of the smartphone by wearable technologies contained a comment that the smartphone is a passive device that we activate when we choose. That observation sets this technology trend in a larger, more significant context: the displacement of the smartphone by wearable technology, especially AR devices, is the beginning of active biological integration. Once that Rubicon has been crossed, the way is open to manipulation and control on an unprecedented level.
Another recent article described the way that new technologies are being readily incorporated into abuses of power. It described the situation in one country where a nightmare vision is unfolding of AI directly enabling total surveillance, leading to an ultimate goal of total thought control of the entire population.
That country has also deployed data collection technologies and AI to run a system of social control that profiles each and every person in the country, and applies a score of trustworthiness.
What is so arresting, is the complete absence of any moral reflection, or social consciousness, on the part of the individuals involved in deploying these technologies against an entire people.
Facebook and Cambridge Analytica have been embroiled in the harvesting of the personal data of 50 million people. That data was later used for manipulative purposes of significant national consequence.
Conclusion
We would be wise to be prudent; AI and the other 4IR technologies will deliver trade-offs and side-effects alongside their initially observed and hailed benefits.
The issue of AI explainability is a significant risk associated with this technology.
The whole matter of Technology Risk Transference needs to be addressed as part of an urgently needed 4IR Global Governance Framework.
AI, Robotics, IoT, and VR/AR offer a whole new order of capability whose deployment will have significant global effects. There could be non-linear global side-effects on economies and societies. And these technologies are already being incorporated into criminal activity and state-sponsored abuses of power. Again, in one country, technology and authoritarian government are already delivering a dystopian future to the entire population; and numerous articles have exposed how that country is indirectly exporting that dystopian nightmare.
I welcome your comments on this important topic.
If you feel that this is an important issue then please ‘Like’ this post, and please share it.
Also, thank you to everyone who has read and liked my first article on this topic. If you have not read it yet, then I beseech you to do so.