We Need To Reset The Relationship With Technology
Image: Gerd Altmann, Pixabay

Taking ownership of the future will positively reset our relationship with technology.

You look out of the window, and it's all climate change. You start wondering about the future, and you remember watching a video about AI, robotics and the other technologies of the Fourth Industrial Revolution, and how they will fundamentally change the way we live and work. The presenter solemnly predicts that many of today's jobs will be automated and millions of workers could be displaced, but that new types of work will replace the old; she gives no description, however, of what the new jobs of the future will be like. With predictions of up to 800 million people displaced from their jobs, you start to wonder what will happen if most of them do not find other work.

Still trying to imagine what the new jobs of the future may be like, you remember Elon Musk's plans to build a city on Mars; Inverse (20th Nov 2020). What about Earth, and what is going to happen to the hundreds of millions of people whose lives have been disrupted?

On the internet you read news that, in Brazil, rumours of children being kidnapped were circulating via WhatsApp. Someone photographed two innocent people 'matching' the description of the kidnappers in a car. The photo quickly spread in WhatsApp messages and Facebook posts. Within hours a crazed mob had located the car and the couple; the car was set on fire and the couple were beaten; BuzzFeed News (31st May 2017).

There is another news item detailing how Myanmar military personnel launched a systematic campaign on Facebook targeting the Muslim Rohingya minority, inciting violence and leading to the largest forced human migration in recent history. A report to the UN Human Rights Council emphasised the determining role of Facebook in the Rohingya genocide; TIME (13th Mar 2018).

Following on, you read that in the U.K. a flaw in a voice recognition system, used in immigration fraud detection, triggered thousands of student deportations; Quartz (4th May 2018).

In this article, I try to show that the examples above illustrate that technology has its own motivations and its own driving narrative for the future, one not necessarily aligned with the greater good of society. And if we are asking ourselves why there are so many adverse reports involving technology, and what will happen to the 800 million people who lose their jobs to automation, AI and robots, then we could be asking the wrong questions.

Korea JoongAng Daily (17th Nov 2020) reported that cable channel MBN became the first broadcaster in Korea to present the news, on 6th Nov, with an AI announcer named AI Kim.

“I was created through deep learning 10 hours of video of Kim Ju-ha, learning the details of her voice, the way she talks, facial expressions, the way her lips move and the way she moves her body,” said AI Kim. “I am able to report news exactly the way that anchor Kim Ju-ha would.”

There is no compelling need for this; but close to the surface is the influence of the quest for the Singularity, the Holy Grail of technology's future vision: the uncontrollable and irreversible explosion of technological developments that drives major transformations in human civilization. Images of androids advancing towards human-level intelligence and behaviour, and of cyborgs blurring the distinction between human and machine, are powerful motivational images.

Embedded in this description of the Singularity is the inevitability of a technology-dominated future. The human story does not seem to be part of the narrative; concerns over the possibility of adverse impacts, and over the moral status of developments, have no place in the larger mystical vision of the Singularity.

Elon Musk has outlined his vision to build a city on Mars, and to terraform the planet to support life; Inverse (20th Nov 2020). The vision includes references to scientific feasibility, a high-level plan of the major infrastructure components, dates for some milestones, and even the names of the associated companies providing detailed designs for some components. In all, the overriding impression is that the vision is already in progress, very much reflecting the inevitability of technology's advance.

It is a powerful, motivational vision that will inspire others working across technology. However, it is also a technochauvinist vision, in which every issue and consideration is purely technical and must therefore have a technology solution.

But it is not a human-centred story: it is a future conceived entirely in terms of, and dependent entirely upon, technological possibility, in which a human presence is only possible because of that technology. This is not a future resulting from an exploration of human concerns and aspirations, moral considerations, and a philosophical position about desirable futures.

The potency of this vision is maintained precisely because it is untainted by human concerns and aspirations, by moral reflection on the disastrous impact humans have had on the earth, or by philosophical positions about desirable futures; there is no alternative to the technology narrative.

Werner Herzog raises a moral objection to the plan to colonize Mars, describing the proposal as "an obscenity" and saying humans should "not be like the locusts".

Herzog proposes instead that we should adopt a socially responsible path for the future by restoring the habitability of the earth.

Herzog recognises the idea of colonizing Mars as a quest for a 'technological utopia', and compares it to other failed utopias: communism and fascism.

The Jungian psychoanalyst James Hollis, in 'Creating A Life' (Inner City Books, 2002), offers an analysis of the problems of modern life. He assesses that our addiction to techno-materialism is a diversion from the distress of our existential angst. Once, myths and narrative fictions were part of lived experience, and transcendence was part of felt experience, intuited throughout the natural world. These psychological foundations have been swept aside, but no sustaining replacement has filled the void. Our addiction to techno-materialism locks us into a short-range vision that extends only to the next innovation, entrenches the technology narrative as the default narrative for the future, and so renders us complicit in the subordination of human needs, concerns and aspirations to the images of the future that technology can materialise.

The pursuit of technological possibility pushes into every potential application domain, including those where the likelihood of moral issues arising seems self-evident. Consider Neuralink, which is developing implantable brain-computer interfaces (BCIs).

Musings about potential applications, such as telepathically communicating with your Tesla or capturing and replaying memories; comments to the effect that gaps in neuroscience should not stop progress because 'it's just an engineering problem'; and the outright dismissal of the inherent risks of AI as a black-box technology, on the grounds that they should not stop the use of whatever functionality it can provide: all of this demonstrates that these developments are driven by what is possible rather than by a well-defined and specific need, and none of it gives confidence that there is much understanding of the potential adverse outcomes or the associated moral issues; ZDNET (15th Sept 2020).

Tech dives into brain-computer interfaces because it subscribes to the ideology that the advance of technology is inevitable, and it has abrogated any moral responsibility for adverse outcomes by taking the position that this inevitability makes the advance not contingent on the status of moral issues. Even when moral issues are acknowledged, they are passed off to society to resolve as a fait accompli; for example, following growing disquiet over the adverse impacts of AI-based applications, both Facebook and Microsoft have called for the regulation of AI. Having dumped the moral issues onto society, the tech companies absolve themselves of any consequent responsibility, and march on.

This abrogation of social responsibility seems to have given technology companies carte blanche to pursue any application domain, even those that are plainly anti-human.

For example, HireVue employs AI/ML-based cognitive games that capture vast amounts of data, which can be combined with video interviews to generate predictive psychometrics. A report by the Washington Post (7th Nov 2019) raised concerns over HireVue's 'face-scanning' algorithm, which uses candidates' computer or cellphone cameras to analyse their facial movements, word choice and speaking voice before ranking them against other applicants based on an automatically generated 'employability' score. As of 7th Nov 2019, at least 100 large corporations were using the system.
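
To make the mechanics concrete, here is a minimal sketch of how such an automated ranking might work. Everything in it is hypothetical: the features, weights and scoring formula are invented for illustration and are not HireVue's actual model.

```python
# Hypothetical sketch of an automated "employability" ranking.
# Features and weights are invented for illustration; this is
# not HireVue's actual model.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    facial_expressivity: float  # 0..1, assumed output of video analysis
    word_choice_score: float    # 0..1, assumed output of transcript analysis
    voice_steadiness: float     # 0..1, assumed output of audio analysis

# Arbitrary weights: small changes here silently reorder the candidates.
WEIGHTS = {
    "facial_expressivity": 0.40,
    "word_choice_score": 0.35,
    "voice_steadiness": 0.25,
}

def employability_score(c: Candidate) -> float:
    """Opaque weighted sum; the candidate never sees it."""
    return (WEIGHTS["facial_expressivity"] * c.facial_expressivity
            + WEIGHTS["word_choice_score"] * c.word_choice_score
            + WEIGHTS["voice_steadiness"] * c.voice_steadiness)

candidates = [
    Candidate("A", 0.82, 0.64, 0.71),
    Candidate("B", 0.55, 0.90, 0.66),
    Candidate("C", 0.70, 0.70, 0.70),
]

# Rank applicants against one another, as described above.
for c in sorted(candidates, key=employability_score, reverse=True):
    print(f"{c.name}: {employability_score(c):.3f}")
```

The sketch shows how much judgement hides in the weights; none of it is visible to the person being ranked.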

Spurred on by ideological zeal, the march of tech insinuates itself deeper into all aspects of our humanity. For example, a number of corporations have deployed algorithmic solutions in their call centres that monitor and analyse, in real time, the quality of compassion or empathy displayed by their customer service agents while interacting with a caller. The algorithm looks for patterns of behaviour, such as a raised voice, a quickened rate of speech or a long silence, that could signal a lack of empathy or be tied to customer frustration.
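
Here is a minimal sketch of the kind of rule-based pattern matching this implies, assuming per-utterance audio features have already been extracted upstream; the feature names and thresholds are hypothetical, not any vendor's actual product.

```python
# Hypothetical sketch of real-time "empathy" monitoring on a call.
# Assumes an upstream system supplies per-utterance audio features;
# the thresholds are invented for illustration.

from typing import Iterable

def empathy_alerts(utterances: Iterable[dict]) -> list[str]:
    """Flag the patterns described above: a raised voice,
    a quickened rate of speech, a long silence."""
    alerts = []
    for u in utterances:
        if u["volume_db"] > 70:          # raised voice
            alerts.append(f'{u["t"]}s: raised voice')
        if u["words_per_min"] > 180:     # quickened rate of speech
            alerts.append(f'{u["t"]}s: speaking too fast')
        if u["pause_sec"] > 8:           # long silence
            alerts.append(f'{u["t"]}s: long silence')
    return alerts

stream = [
    {"t": 12, "volume_db": 64, "words_per_min": 140, "pause_sec": 1.0},
    {"t": 47, "volume_db": 73, "words_per_min": 195, "pause_sec": 0.5},
    {"t": 88, "volume_db": 60, "words_per_min": 120, "pause_sec": 9.5},
]
for alert in empathy_alerts(stream):
    print(alert)
```

What the sketch makes plain is the reduction: 'empathy' becomes three thresholded numbers against which the agent is scored continuously.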

Couples may not be able to have children because one or both partners carry an inherited disorder. But the issue is framed as a problem that can only be solved by biotechnology, in this case CRISPR. The whole narrative spins around the exclusive 'benefits' of the technology, the current state of the technical issues, and the moral and governance issues; MIT Technology Review (3rd Sept 2020), 'The "staged rollout" of gene-modified babies could start with sickle-cell disease'.

It is a compelling narrative, and another example of the way technology insinuates itself into the human story, but one that simply excludes socially responsible alternatives that may not involve technology at all. UNICEF estimates that there are about 153 million orphaned children throughout the world; a socially responsible programme would be to facilitate the adoption of these children.

Technochauvinism promotes the superiority of technology solutions over human-centred approaches in performing activities and delivering results. It evangelizes technology solutions by highlighting human 'flaws' and inefficiencies, or by promoting their superior cost-effectiveness: for example, the argument that technology solutions offer consistent results free of human bias, or consistency in high-volume processing with lower error rates.

For another example, socially assistive robots incorporating AI/ML are being developed to augment human therapists working with children on the autism spectrum. The argument put forward to justify these technologies invokes the usual economic case: that such human-based interventions can be expensive and time-intensive, and that the technology could take over the repetitive training activities.

The point about the call centre compassion solution is its underlying assumption: human interactions are variable, and technology can optimise them. Likewise, the case for socially assistive AI/ML robots chips away at human therapy by casting some of its activities as merely repetitive, and therefore amenable to a more cost-effective approach.

In the absence of an alternative, technology is writing the narrative, in which its developments are framed as a given, a fait accompli, inevitable, and so unconstrained by moral issues. One problem with this, as the examples above suggest, is that recent technologies are insinuating their way into every aspect of our lives. The friction between the march of technology and our lives has rendered every aspect of life negotiable, an object of manipulation and commercialisation: our privacy, our ethics, our freedom, our biology. The human organism has been reduced to an assemblage of parts that can be negotiated, invaded, modified and commercialised.

I wonder about a future where these attitudes become omnipresent, default perspectives that eventually harden and become institutionalised.

If we continue to frame the relationship between the march of technology and humanity, with its concerns and aspirations for the future, in this way, then the future will be dystopian.

The only alternative is to constrain technology and its developments so that they are subservient to the greater human good. 

It is time to look up from our smartphones and realise that the technochauvinist narrative is blinkered by a hardened belief in the inevitability of technology's dominance of the future, and by its unfounded assumptions: that the benefits outweigh any adverse impacts, and that there is a consensus on shared benefits and on the idea of progress.

Tech solutions will always have limitations when deployed in real-world scenarios, but that does not stop them from being unleashed upon society. There are numerous cases of bias and incorrect matches in facial recognition systems, and of bias in automated job application and screening systems; and autonomous vehicles (AVs) have been involved in several fatalities.

There are levels of autonomy up to level 5; but even at level 5, does an AV have full situational awareness, and out to what distance? Can it read intent (e.g., the intention of a pedestrian considering a dash across the road)?

Researchers have shown that AVs can be confused by small stickers placed over a road's line markings: they caused Tesla's Enhanced Autopilot to steer towards oncoming traffic by adding three inconspicuous stickers to the lane markings, demonstrating the brittleness of the AI/ML in the face of minute changes in environmental conditions; Ars Technica (4th Feb 2019).

Pedestrian detection is similarly brittle. Edge Case Research (ECR) investigated an open-source pedestrian detector and discovered that it frequently missed people in wheelchairs; EET Asia (8th May 2019).

ECR noted that a detector trained on adults could miss children, who are typically shorter. Certain scenarios and lighting conditions may be under-represented in training data; for example, people wearing dark clothing photographed against dark backgrounds, or the presence of sun glare.

ECR also uncovered instances where the AI/ML failed to detect construction workers wearing high-visibility vests: from a statistical point of view, only a fraction of pedestrians wear yellow vests, so a bright yellow colour is not correlated with the presence of a person. And neural networks can lose track of people standing near vertical edges, such as poles, so a child standing next to a street post may not be detectable separately from the post.
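
That statistical point can be made concrete with a toy calculation, using invented frequencies: if bright yellow appears in only a small fraction of pedestrian examples, and also appears without pedestrians (signs, cones, machinery), the cue carries almost no signal.

```python
# Toy calculation (invented frequencies) of how weakly a rare cue,
# such as a bright-yellow vest, correlates with "pedestrian present".
from math import sqrt

n = 10_000   # hypothetical training images
a = 100      # yellow present, pedestrian present  (2% of 5,000 pedestrians)
b = 4_900    # no yellow,      pedestrian present
c = 75       # yellow present, no pedestrian       (signs, cones, machinery)
d = 4_925    # no yellow,      no pedestrian
assert a + b + c + d == n

# Phi coefficient: Pearson correlation for two binary variables.
phi = (a * d - b * c) / sqrt((a + b) * (c + d) * (a + c) * (b + d))
print(f"phi = {phi:.3f}")  # ~0.02: bright yellow carries almost no signal
```

With numbers like these, a detector has no statistical reason to treat a high-visibility vest as evidence of a person, which is exactly the failure ECR reported.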

Judging by the road fatalities that have occurred, AVs have little ability to assess that they are imminently in trouble, and no design feature that hands control back to the driver in anticipation of an accident.

Yet AVs are driving on the roads today. Tesla once commented, in effect, that accidents were a necessary part of progress. But the claim of 'progress' masks the real limitations of the technology, and the transfer of risk onto an unsuspecting public.

This illustrates how social responsibility is subverted by technological 'progress', underpinned by the unfounded assumption that there is a consensus that this is progress.

Adverse impacts, and requirements for tech companies to ensure the socially responsible use of their platforms, are frequently ignored, or are treated as merely technical problems requiring technical solutions.

Initially, Facebook ignored, or made token efforts to address, the mounting problems of extremist content, racial violence, fake news, disinformation and mob violence. Under growing government scrutiny and threats of regulatory action, Facebook then treated the issues as technical problems and deployed AI solutions. But these solutions do not resolve the moral issues: they work only for a limited range of scenarios, can be easily defeated, and at best mitigate the worst of the adverse impacts. Ironically, Facebook acknowledged the limited usefulness of its AI solutions and augmented them with human assessors.

Like many technology companies, Facebook seems not to fully understand and accept its role as an enabler of scenarios that give rise to moral issues (e.g., AI and discrimination, social media and racial violence), including the way the technology's use facilitates its users' complicity in moral nihilism.

The blinkered drive towards creating androids and cyborgs speeds along a different lane from the human path, and every new report adds to alarming visions out of harmony with human aspirations. To be sure, there is a growing tsunami of pathological images of the future: robots displacing people; job applicants forced to be interviewed by AI and robots; forecasts of human gene-editing giving rise to a human sub-species; human brain cells transplanted into pigs, where they 'successfully' integrate; revelations about surveillance capitalism; authoritarian surveillance states; and more.

Digital technology, and data analytics for behaviour profiling and prediction, are already delivering dystopian scenarios.

Consider, for example, the policing of the George Floyd protests in the U.S.; The Intercept (27th June 2020):

The police approached the protests with the following world-view: extremist groups were poised to surge imminently across the country, plunging it into chaos, and the only way to prevent this was to meet protest, and chaos, with extreme force.

With this world-view informing their actions, agencies collected demographic data and 'sentiment' in near real time from mobile messaging and social media: a wide data set, from multiple sources, at scale.

The agencies had preconceived, biased profiles of extremist groups and were searching for data to populate those profiles: a recipe for confirmation bias.

As the biased profiles were populated, mobile communications, social media posts and comments were all interpreted in the context of a presumed threat.

Unaware of these biases, the agencies 'confirmed' the presence of extremist groups and of threatening communications, thereby reinforcing the world-view and its narrative.

Data analytics delivered a crisp picture of the extremists supposedly threatening society with chaotic collapse, reinforcing the narrative further.

The police were operating within a world-view that had little basis in reality; through a series of biases they 'confirmed' that world-view and its narrative. Digital media amplified the problem by providing a broad, seemingly reliable set of data, in near real time and at scale, that 'confirmed' the presence of extremists, and by supplying communications content and sentiment from multiple sources that reinforced the threat context.
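
Here is a toy sketch of that feedback loop, with invented posts and keywords (not any agency's actual tooling): a filter built from a preconceived threat profile will always find 'confirmations', and collecting more data at scale simply produces more of them.

```python
# Hypothetical sketch of keyword-driven "threat confirmation".
# Posts and keywords are invented; the point is the feedback loop:
# a biased profile guarantees matches, and every match is read as
# evidence for the profile.

THREAT_PROFILE = {"antifa", "riot", "burn", "surge"}  # preconceived profile

posts = [
    "Selling my old riot gear paintball kit, barely used",
    "The crowd sang and marched peacefully until curfew",
    "Heat surge expected this weekend, stay hydrated",
    "Candlelight vigil at the courthouse tonight",
]

def matches(post: str) -> set[str]:
    return {w for w in THREAT_PROFILE if w in post.lower()}

flagged = [(p, m) for p in posts if (m := matches(p))]

# Two of four innocuous posts are 'confirmed' threats; at the scale
# of millions of posts the absolute count of 'confirmations' looks huge.
print(f"{len(flagged)}/{len(posts)} posts flagged")
for post, hits in flagged:
    print(f"  {sorted(hits)}: {post!r}")
```

Nothing in the loop ever tests the profile itself, which is how the biases described above go undetected.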

The marginalisation of human concerns and aspirations from the core narrative of technology's visions, the belief that these concerns can be reduced to technical problems for which there must be a technology solution, and the acceptance of technology's inevitability together open the way for technology to embrace moral nihilism, and to deflect moral concerns and adverse impacts onto society to deal with.

And so the development of androids runs on, oblivious to forecasts of mass-scale job losses due to automation, robots and AI.

Whatever the future will be like, if it means the abandonment of our collective responsibility for the greater good, the exaltation of hubris at the expense of reflection and restraint, the marginalisation of our intuitive sensitivity to ethics and intrinsic value, and the outsourcing of conscience to clear the way for technological ‘progress’, it could not be a moral achievement.

The questions we should be asking ourselves, collectively, are these: what does a human-centred future look like, and how do we subordinate technology so that it supports human aspirations for that future? Humanity owns the future, not technology.
