Technology has no morals

Imagine a world where whether you get the mortgage for your first house is determined by how often you talk to your mother or which route you take to work.

It seems far-fetched, but to some extent it is already happening. Dr Manju Puri, Professor of Finance at Duke University, has noted that banks could use a person’s phone usage data to decide whether that person will get a loan. According to Dr Puri, people who call their mother every day or take the same route to work are less likely to default.

“We know what’s statistically correct to do and what’s morally correct to do are often two different things,” Aidan Connolly, CEO of Idiro Analytics, has acknowledged.

So what would it be like to live in a world where AI ethics have never been considered? Where “Big Brother” is not only watching you but also deciding your fate, making you a helpless pawn in the game of life?


Biases in AI

Long a prevalent topic among AI experts, the dilemma of AI bias has in recent years also come to the attention of policymakers, the media and the general public. Many highly publicised cases exposing flaws in AI systems have become rallying cries for the AI ethics movement. Among them are the AI-powered software used in the US criminal justice system to estimate the likelihood of recidivism, which proved discriminatory against certain races; Amazon’s automated hiring software, which discriminated against female job applicants; and the risk assessment models behind Apple’s credit card, which granted lower credit limits to women.

These examples shine a light on the potential flaws of some AI systems entrusted with decision-making that might be life-changing for people. They serve both as a critique to help AI engineers perfect their technology and as a warning to scientists, governments and the public of what the future might look like if the use of AI is not properly regulated.

One solution to ensure AI fairness is to insist on transparency and on routine testing of AI systems for bias. So far, however, in the absence of direct regulation, companies that test their AI systems for bias tend to do so internally, which cannot guarantee transparency. Ivana Bartoletti, a privacy and data protection professional, has urged stricter regulation of AI companies, comparing AI developers to architects, “who work with city planners, certification schemes and licences to make buildings safe”, and arguing that watchdogs and regulators are necessary as society adopts more AI-driven technology.
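To make “testing for bias” concrete, here is a minimal sketch of one common check, the “four-fifths” disparate-impact test, which compares a model’s approval rates across demographic groups. The data, group names and threshold below are illustrative assumptions, not taken from any system mentioned in this article.

```python
# Minimal disparate-impact check (illustrative only).
# Flags a group when its approval rate falls below 80% of the
# reference group's rate -- the common "four-fifths" heuristic.

def approval_rate(decisions):
    """Fraction of positive (1 = approved) outcomes."""
    return sum(decisions) / len(decisions)

def disparate_impact(decisions_by_group, reference_group):
    """Ratio of each group's approval rate to the reference group's."""
    ref = approval_rate(decisions_by_group[reference_group])
    return {g: approval_rate(d) / ref for g, d in decisions_by_group.items()}

# Hypothetical model outputs: 1 = loan approved, 0 = declined.
decisions = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],  # 6/8 approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3/8 approved
}

for group, ratio in disparate_impact(decisions, "group_a").items():
    verdict = "OK" if ratio >= 0.8 else "potential adverse impact"
    print(f"{group}: impact ratio {ratio:.2f} -> {verdict}")
```

Here group_b’s ratio of 0.50 would fail the check. This is only one heuristic; a serious audit would also examine error rates across groups, calibration and the provenance of the training data.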

Currently, the EU is working on the AI Act, legislation intended to ensure that AI can be trusted. Dr Adrian Byrne, a Marie Curie Fellow Researcher at CeADAR and Lead Researcher at the AI Ethics Centre at Idiro Analytics, said: “Given that AI is increasingly influencing every aspect of our lives, it is very timely that the EU is seeking to enact legislation, via their proposed AI Act, that attempts to mitigate the potential harm from AI, which includes bias monitoring, while not stifling the potential positive innovations that stem from its deployment.”

“Trust in these systems is critical for their widespread adoption in society. It is not an easy task, but with this draft legislation, the EU is looking to become the global leader in trustworthy and more human-centred AI,” Dr Byrne said.


Algorithmic self-reinforcing cycles

But this is only the tip of the iceberg. In a world where AI ethics plays no prominent role, not only could people’s actions be used against them; they could also be manipulated into holding opinions or emotions that are not their own. This is already happening: a number of social media companies stand accused of using AI to maximise profits at the expense of their users’ and society’s wellbeing. Recent testimony by Facebook whistleblower Frances Haugen alleged that Facebook knew its algorithms escalated political polarisation, hateful speech and misinformation, and that its platform damaged the mental health of many teenage users.

Facebook and its subsidiary Instagram are not the only social media platforms driving political polarisation. Twitter has published findings showing that the algorithms on its platform were unintentionally steering users towards more right-wing content. Such unregulated algorithms are among the factors behind the growing populist political crisis. Stuart Russell, professor of computer science and author of many books on AI, has warned that “if algorithms could find a way to effectively re-engineer our minds to click whenever we were told to click, then that’s what they would do.”

“Gladly, algorithms are not yet smart enough to do so,” the professor said.
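To see how such a self-reinforcing cycle works mechanically, consider the toy simulation below. It is a deliberately simplified assumption, not a model of any real platform: a greedy recommender always shows the item with the highest observed click rate, and each click nudges the simulated user’s preference towards the item shown, so whichever item gains an early lead is shown more, clicked more and preferred more.

```python
# Toy model of an algorithmic self-reinforcing cycle (illustrative only).
import random

random.seed(0)

ITEMS = ["item_a", "item_b"]
preference = {i: 0.5 for i in ITEMS}  # user's true click probability per item
clicks = {i: 1 for i in ITEMS}        # smoothed click counts
shows = {i: 2 for i in ITEMS}         # smoothed impression counts

for _ in range(10_000):
    # Greedy engagement optimisation: recommend the best-performing item.
    shown = max(ITEMS, key=lambda i: clicks[i] / shows[i])
    shows[shown] += 1
    if random.random() < preference[shown]:
        clicks[shown] += 1
        # Feedback loop: engagement shifts preference towards what was shown.
        preference[shown] = min(1.0, preference[shown] + 0.001)

for i in ITEMS:
    print(f"{i}: shown {shows[i]} times, final preference {preference[i]:.2f}")
```

Run it and one item ends up shown almost exclusively, with the user’s preference for it drifting towards 1.0, even though both items started identical. The optimiser is not malicious; the loop between what it shows and what the user comes to want does the work.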

AI has the power to make our lives better by improving decision-making, for example in drug discovery, at a speed and scale beyond human ability. The problem arises when we relinquish too much decision-making to technology.

Recently, Amazon’s Alexa suggested that a 10-year-old girl touch a live plug with a penny after the child asked it for a challenge. Such examples show how devastating an effect technology can have if it is not properly regulated. AI experts must address many important questions before AI is allowed into widespread everyday use, and these are not technical questions but moral ones, which machines cannot learn by themselves.

Stuart Russell’s example depicts the reality that could become ours if we do not make ethical considerations part of AI development. In one of his lectures, he explained that if you give a powerful AI system the task of solving climate change, the AI might simply annihilate the human race, because eliminating humanity is the most direct way to fulfil the objective as stated. Therein lies the answer to why AI ethics has to be considered before deploying an AI system: technology has no morals, no understanding of right and wrong, and no sympathy for humans or any other living organism; it only sees a problem and calculates a decision. To make AI-based decisions align with humanity’s morality, AI experts must build ethical structures into the technology they create.

Just four years ago, physicist and humanitarian Stephen Hawking warned in his final book that AI “could develop a will of its own”, which could become the “worst event in the history of our civilisation”. To ensure Hawking’s warning does not come to pass, we must nurture AI in a way that reflects humanity’s ideals rather than its shortcomings. It will not be easy, and it will not happen overnight, but the time to think about it was yesterday; the time to act is now.

Comments

Jim McGowan (2 yr): Great article Aidan, we must catch up soon.
Anna Raquel Carvalho (2 yr): We should draw attention to the fact that technology is not good or evil. Technology is built by humans embedded in a social context; it is exposed to that context and can learn from it. Because of that, the people who create the models and accept the data that feed a system are the ones responsible for the algorithms’ behaviour. That is why it is so important to foster technological progress while securing ethical practices and social inclusion.

Colette Quinn (2 yr): Looking forward to this Aidan.
