[ANALYSIS] Is artificial intelligence moral?

Like technology in general, artificial intelligence, which includes automation, machine learning and hyper-automation, should be considered morally neutral. At least, that is our working hypothesis today.

Let us assume that AI, like any technology, can be used in an ethically responsible manner or, conversely, in a way that harms others, whether deliberately or not.

AI, A TECHNOLOGY SO DIFFERENT FROM THE OTHERS?

Some technologies work directly for the good of society, such as advances in medicine or environmental science. Their societal impacts are beneficial, observable and, above all, intentional. This is largely because the mission of work in these fields serves an honorable cause. Other technologies are built for harmful ends, such as war; still others simply aim to maximize profit. Yet in none of these cases is the technology itself put in the dock. So why does AI seem to stand accused today?

We need to understand what makes AI a particular kind of technology. Is it because it is intelligent? And if so, how intelligent? One could answer that it is because it can predict events, or because it can make decisions: because it is autonomous, let us say. That is when fears arise. Something autonomous and able to make decisions will face ethical choices among the choices it must make. And where a human encounters a dilemma, the machine simply decides, because in its ability to choose, the machine has no moral doubt to hold it back.

WHY TALK ABOUT AI BIAS?

But before we go that far in our fears and assumptions, we must start from the fact that we are responsible for the direction a given technology takes. In setting that direction, we judge the technology's impact on society by analyzing the positive or negative consequences it entails.

For example, progress in medical imaging allows us to intervene and predict with far more subtlety; the impact is positive and measurable. But this progress in imaging is not limited to the medical field. The intelligent pixel opens up a huge field of research, of which facial recognition is one application. It is also well documented that facial recognition systems are sometimes biased (intentionally or not) with respect to skin color, which can amount to racial selection.
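This kind of bias can at least be made measurable. Below is a minimal sketch in Python (the data and names are purely illustrative and refer to no real system) of how one might audit a classifier by comparing its accuracy across demographic groups; a systematic gap is a signal that the model treats groups unequally.

```python
# A minimal sketch (illustrative names only): break a classifier's
# accuracy down by demographic group to make disparities visible.
from collections import defaultdict

def per_group_accuracy(y_true, y_pred, groups):
    """Return accuracy per demographic group."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        correct[group] += int(truth == pred)
    return {g: correct[g] / total[g] for g in total}

# Hypothetical labels and predictions for two groups, "A" and "B".
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(per_group_accuracy(y_true, y_pred, groups))
# {'A': 0.75, 'B': 0.5} -- a gap like this would call for investigation
```

Such an audit does not explain where the bias comes from, but it turns a vague suspicion into a number that can be tracked and debated.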

Thus, algorithmic progress is not tied to any particular field; it is freely accessible to any good programmer. In these circumstances, it is difficult to assess the consequences of a given algorithm. How can we not fear this new intelligence?

The threat is real and the fears are justified, but we can act. Just as we are learning to be responsible for our planet, we will become responsible for this machine intelligence. The advantage of fear is that it pushes us out of our comfort zone and forces us to think about how to defend ourselves. It makes us evolve.

I proposed that AI Ethics is an invitation to humanity to take stock, re-examine itself and choose what it truly wishes to become.
– Matthew James Bailey, Inventing World 3.0, p.237

AI ETHICS

But why not simply enforce the laws we know? Because those laws do not yet exist. And what do we find upstream of law? Ethics, the evaluation of moral values. That is why we are examining the ethics of AI today. Opinions differ and the ground is very fertile: there is currently no consensus on the approach we should take to AI ethics.

Some theories advocate a plurality of values; others focus on the universality of certain values. This ethics of AI is a work in progress. Observatories are being created to advance the debate on the ethics of artificial intelligence, such as OBVIA and AlgoraLab in Quebec, or the Turing Institute, among many others. The mission of these new ecosystems is to promote the adoption of responsible AI.

My doctoral research in AI ethics hypothesizes that we can work internationally to build standards, and that we will have no choice but to adopt an evolutionary, perhaps even agile, methodology.

Original article in French: [ANALYSE] L'intelligence artificielle fait-elle preuve de morale? - CScience IA


BIBLIOGRAPHY

Ansgar Koene, "Algorithmic Bias: Addressing Growing Concerns", IEEE Technology and Society Magazine, June 2017.

Bernard Marr, The Intelligent Revolution: Transforming Your Business with AI, ed. Bernard Marr, 2020.

Harini Suresh, John V. Guttag, "A Framework for Understanding Unintended Consequences of Machine Learning", Cornell University, February 2020.

European Commission, White Paper: Artificial Intelligence, A European Approach to Excellence and Trust, Brussels, PDF, 2020.

Matthew James Bailey, Inventing World 3.0: Evolutionary Ethics for Artificial Intelligence, ed. Matthew James Bailey, 2020.

Pascal Bornet, Ian Barkin, Jochen Wirtz, Intelligent Automation: Welcome to the World of Hyperautomation, ed. Pascal Bornet, October 2020.

Marie Settipani

Production Coordinator at EA Motive Montreal

Thank you for this article. Really interesting to read.

Matthew James Bailey

Pioneer - Ethical AGI, Human Evolution, Consciousness, Spirituality; Visiting Scholar, Serial Entrepreneur, Awards, Author, Headline Speaker, Inventing WORLD 3.0 initiatives

Outstanding article. It’s up on AiEthics.world - https://aiethics.world/blog/f/analyse-l’intelligence-artificielle-fait-elle-preuve-de-morale

Michel Langelier, EMBA, M.ed.

National Executive Director/Directeur général national

Standards should embrace and promote diversity in order to ensure, upstream, this needed neutrality. The federal government is interested in the advancement of such ethical standards to enhance its immigration policies and decision-making processes. Very interesting and pertinent article Patricia Gautrin.

Dr. Elisa Maria Entschew

Digital Ethics Expert (specialized in Artificial Intelligence Ethics & Ethics of Digital Human-to-Human Communication)

Thank you for sharing! I am thinking about the last suggestion, to build standards by adopting an evolutionary, and why not, agile methodology. If I recall the 'agile manifesto', it says that working software is more important than comprehensive documentation. If we want responsible AI, I wonder whether we should be able to audit documentation. So maybe we even need comprehensive documentation and auditing in order to enable responsible AI models, even though it might initially compromise working software.
