On Machines of Morality

The next wave of technological disruption, spearheaded by artificial intelligence (AI), has already landed at a moment of discontent and disconnect in global societies. It is little surprise that in our latest global joint study with the World Economic Forum, which explored societal perceptions of AI, widespread concern is clearly emerging: what stands out is the significant proportion of citizens who express apprehension rather than optimism. The increasingly worried public sends strong signals that it wants governments’ and companies’ use of AI to be more strictly controlled, AI technology to be restricted and AI businesses to be more tightly regulated. Worldwide, one in five even thinks that AI should be banned altogether.

The complexity and obscurity of AI are hard for the public to grasp; nevertheless, the emotions it stirs are already rampant. Many may fear that the rapid development and commercialization of AI do not merely perpetuate the many societal challenges we face today: they put them on steroids. Our democratic societies and leaders are in catch-up mode vis-à-vis the widening gap between Tech/Data/AI “haves” and “have-nots”. The data that fuels the future AI-powered world is already the currency of the present paradigm: owned by the very few and extracted from the many, who are unaware of its value or of the means to protect it. This is also increasingly becoming a major civil rights issue: who protects the most vulnerable in the Algorithmic Economy?

If we believe some of the prophets of Silicon Valley, an AI-powered paradise for all is almost inevitable, and unhindered technological progress is the way there. But so much needs to change in our institutions, power structures and mindsets for this to happen, all in concert and in global orchestration, that we should be cautious about accepting such rosy promises. Looking at the societal impacts of AI development that are already visible, it is hard to escape the impression that, at the current state of play, the positive scenarios are losing.

Take just one aspect, the most widely publicized: AI’s effects on jobs in the present context, even without getting into futuristic speculation about whether workers can be retrained as fast as the technological pace dictates.

One of the most widely criticized aspects of brick-and-mortar capitalism is planned obsolescence, which pushes us to consume more by setting mandatory but undisclosed expiration dates on goods: they are designed by the manufacturer to fail after a certain time, pushing you to buy a new one. At the dawn of the AI age, workers may also have an expiration date: how else can you explain ride-sharing companies trying to attract underpaid, uninsured drivers while heavily testing the driverless technology that will make them obsolete? Or, little known to the public, social media giants using armies of low-cost human contract workers to label data, check hate-speech detection AI and so on? The connecting element is called “ghost work”: some of these are low-skilled workers in “Tech farms”, some hold Master’s degrees. These people are currently needed to fill gaps caused by AI technology’s present shortcomings, and they will likely be phased out automatically as it improves. A gentle reminder that with drastic AI improvements, we are often counting not in years and decades, but in days and weeks.

Or take courtrooms. Little known to the public, or to the defendants for that matter, so-called “sentencing AI assistants” are now used in 20% of parole cases in the US. The original aim was to alleviate the burden on courts inundated with parole cases, but what we got is AI algorithms practically deciding on the freedom of an individual based on a wide variety of data inputs (such as income, social media activity, zip code and criminal history), and not on the character or life situation of the person that only a human judge could weigh. But how can we look into the black box of the AI’s decision-making here? Will an already overworked judge stand up against the AI’s recommendation? Will biased data or biased judges do more harm? The questions keep multiplying.
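
To make the black-box concern concrete, here is a deliberately tiny, purely hypothetical sketch in Python. It is not any real sentencing tool; the features, weights and threshold are invented for illustration. It simply shows how a fixed, opaque scoring rule over proxy inputs like zip code and income can flip a parole recommendation while the reasoning stays invisible to the defendant.

```python
# Purely illustrative, hypothetical "risk score": NOT any real sentencing system.
# It shows how opaque weights over proxy features (zip code, income) can decide
# a parole recommendation without any reference to character or life situation.

from dataclasses import dataclass

@dataclass
class ParoleCase:
    income: float            # annual income in USD
    prior_offenses: int      # count of prior convictions
    zip_code_risk: float     # 0..1, a learned "neighborhood risk" proxy (a common source of bias)
    social_media_flags: int  # count of flagged posts

# Hypothetical weights a vendor might have fitted on historical data.
WEIGHTS = {"income": -0.00001, "prior_offenses": 0.4, "zip_code_risk": 1.2, "social_media_flags": 0.2}
THRESHOLD = 1.0  # above this score, the tool recommends denying parole

def risk_score(case: ParoleCase) -> float:
    """Linear score over proxy features; the defendant never sees these weights."""
    return (WEIGHTS["income"] * case.income
            + WEIGHTS["prior_offenses"] * case.prior_offenses
            + WEIGHTS["zip_code_risk"] * case.zip_code_risk
            + WEIGHTS["social_media_flags"] * case.social_media_flags)

def recommendation(case: ParoleCase) -> str:
    return "deny parole" if risk_score(case) > THRESHOLD else "grant parole"

if __name__ == "__main__":
    # Two identical records; only the zip-code proxy differs.
    a = ParoleCase(income=25_000, prior_offenses=1, zip_code_risk=0.9, social_media_flags=0)
    b = ParoleCase(income=25_000, prior_offenses=1, zip_code_risk=0.2, social_media_flags=0)
    print(recommendation(a))  # "deny parole"  (score ≈ 1.23)
    print(recommendation(b))  # "grant parole" (score ≈ 0.39)
```

Even in this toy version, the person being judged cannot inspect the weights or the threshold, which is exactly the opacity problem an overworked judge is asked to override.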

When it comes to early attempts at regulation, there is a growing consensus that human values should be hardcoded into AI development as much as possible. At a moment of decreasing empathy, rising tribalism and ideological fragmentation, it is only natural to ask: what are these values? How much of them is innate and how much learned? To what extent can they be seen as universal? And ultimately, how can you teach machines morality?

In the music industry, a cynical quip I have heard is: “It does not matter if a piece of music is good or bad: the only thing that matters is whether people dance to it.” That amorality permeates many of the industries surrounding us today. With AI designed to make moral decisions, this cannot go on; it seems, though, that many Tech and business leaders still need to be awakened. In a similar vein to the music industry, ethical questions about technology were for a long time met with a single dismissive quip: “Technology is neutral.” I am sure you have heard it. It was not true then, and it is definitely false when it comes to AI. Self-learning and self-developing technology cannot be treated as neutral, because:

  • AI perpetuates the logic and objectives of the AI constructor/owner and their customers
  • Current AI technologies base their decisions on data that is almost always flawed in one way or another, creating inherent biases from the outset (see the sketch after this list)
  • AI does not grasp context or morality but still has a moral impact: the line between amorality and immorality is a very fine one
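
The second point, flawed data, is not abstract. Below is a minimal, dependency-free Python sketch with toy data invented purely for illustration: a “model” that simply learns approval rates from historical decisions will faithfully reproduce whatever bias those decisions contained, and then apply it to new, equally qualified people.

```python
# Minimal, dependency-free illustration (toy data invented for this sketch):
# a "model" that just learns approval rates per group from historical decisions
# will reproduce whatever bias those decisions contained.

from collections import defaultdict

# Hypothetical historical decisions: (group, qualified, approved)
history = [
    ("A", True, True), ("A", True, False), ("A", True, False), ("A", False, False),
    ("B", True, True), ("B", True, True),  ("B", True, True),  ("B", False, False),
]

def train(rows):
    """Learn P(approved | group, qualified) by simple counting."""
    counts = defaultdict(lambda: [0, 0])  # key -> [approved, total]
    for group, qualified, approved in rows:
        key = (group, qualified)
        counts[key][0] += int(approved)
        counts[key][1] += 1
    return {key: approved / total for key, (approved, total) in counts.items()}

model = train(history)

# Two equally qualified applicants, differing only by group:
print(model[("A", True)])  # ~0.33: the learned "policy" carries the historical bias
print(model[("B", True)])  # 1.0
```

Nothing in the code is malicious; the bias lives entirely in the data it was handed, which is precisely why self-learning technology cannot be called neutral.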

AI is the first “moral technology” in the sense that it is empowered to make autonomous or semi-autonomous moral decisions. Some of these moral conundrums are well known, e.g. an autonomous vehicle having to decide whom to hit in case of an unavoidable collision. What is an edge case for self-driving vehicles (albeit one that sets a regulatory minimum for establishing liability and responsibility) is the very core of autonomous weapon systems, which are designed to kill with zero or little human intervention.

Coming back to the general level: even with the best intentions, AI cannot be taught morality without a thorough understanding of what morality actually entails and of how gut feelings, moral instincts and social norms can be converted into data input. So, provided we reach alignment on these values and overcome these coding conundrums: can we hardwire them and create new “digital stone tablets” for machines? Or is it better to stay flexible and let machines learn morals through interacting with people?

The mantra is that technological progress is inevitable, yet its purpose remains rather undefined: this amorality leaves us rudderless as a civilization. The core direction of AI development should be set as “AI is to serve the fundamental needs of humanity”, and not interests that lead to exploitation. The more “Artificial” this machine intelligence becomes, the farther removed it will be from truly understanding human needs; this is why proponents of ethical AI talk more and more about Augmented rather than Artificial Intelligence. This presupposes a conscious moral choice: prioritizing machine and human intelligence working symbiotically, with humans in charge and AI optimized for understanding and serving human needs.

AI technology that interprets and simulates our emotions is already a burgeoning field, but it also sits at a great moral intersection: should the ability to interpret our micro-expressions en masse and in real time serve the purpose of “personalizing” prices and ads in stores based on our mood, or should it be used to provide emotional support to those in need?

Even beyond “Emotive AI” as a stepping stone, a promising emerging field is “Compassionate AI”. It is broadly defined as designing AI with a focus on alleviating human suffering and on social impact at both the individual and the societal level. Hardwiring compassion seems even harder, though, than instilling machines with a basic understanding of morality, as it expects machines to understand, empathize with and express genuine human love, kindness and concern.

In Japan, an aging society that is resistant to the idea of mass immigration to tackle the country’s massive eldercare needs, Compassionate AI applications, especially home care robots, are gaining ground. Efforts like these are the conceptual antithesis of the Skynet/Terminator vision of AI that Hollywood has implanted in our brains. Compassionate AI advocates see evolving AI agents more as a “companion race” to humanity, be they nurses, mates, chaperones and so on. Can this go wrong? Of course. But what if regulators worldwide went bold and prescribed Compassionate AI as the preferred direction and objective for AI constructors to follow? Wouldn’t we be far better off if, instead of lagging behind exploitative interests in a laissez-faire way, the role of AI were defined in the societal subconscious as serving the most fundamental needs of people, society and humanity?

#AI #artificialintelligence #augmentedintelligence #ethics #compassionateAI

Charles Paré

Chief Integrity Officer, Head of Legal & Compliance – Entrepreneur, Servant Leader

4y

thanks a lot George - in particular the piece "AI algorithms practically deciding on the freedom of an individual based on a wide variety of data inputs"... is very reflective. Keeping AI as an asset in research and behavioral science becomes every day more aspirational

Greg Holmsen

The Philippines Recruitment Company - HD & LV Mechanic | Welder | Metal Fabricator | Fitter | CNC Machinist | Engineers | Agriculture Worker | Plant Operator | Truck Driver | Driller | Linesman | Riggers and Dogging

5y

A well-researched piece on Artificial Intelligence, George.
