From AI to OI: the hard way forward to a sustainable balance between progress and human dignity.
Fortunato Costantino
Human Resources, Legal and Corporate Affairs Director at Q8. Adjunct Professor of General Theory of Corporate Sustainability and Social Innovation at the European School of Economics.
Two authors have left a lasting impression on me with regard to the difficult topic of AI, intelligent technologies and algorithms: Nick Bostrom and Miguel Benasayag.
Nick Bostrom is a professor at Oxford University, where he directs the Future of Humanity Institute and the Strategic Artificial Intelligence Research Centre. In January 2015, Bostrom, together with Stephen Hawking, co-signed a famous open letter warning about the potential dangers of Artificial Intelligence. Given his curriculum and the research institutes he directs, we can rule out that he signed that letter out of fogeydom, obscurantism or Luddism.
Bostrom, as a scientist and free thinker, follows a linear line of philosophical reasoning.
Artificial Intelligence is one of humanity's greatest promises and opportunities for development and betterment; thanks to its current and future advances, mankind will probably be able to do things that are unthinkable today, and perhaps we will live better, longer and happier lives.
But according to Bostrom there is a threatening cloud over the sky of Artificial Intelligence, one that raises important questions about mankind's actual ability to manage the infinite potential of AI wisely and with the right controls. Bostrom discusses these questions in his bestseller "Superintelligence: Paths, Dangers, Strategies", which is worth reading even if the scenarios he describes are not always comforting.
Miguel Benasayag is a philosopher, psychoanalyst and epistemology researcher, author of "The Tyranny of Algorithms: Freedom, Democracy and the Challenge of AI".
Starting from the basic assumption that the human being is not analogous to a machine, Benasayag observes that the prevailing thinking of computer scientists and AI specialists is nevertheless shaped by a terrifying epistemological error: that the brain is nothing more than a computer. The consequence is the reduction of the human being to a merely "functioning body", pure efficiency and performance, instead of properly considering the human being a biological and spiritual unicum made of moments of consciousness, creativity, imagination, unpredictability and freedom, but also of error, doubt and boredom.
According to Benasayag, the expression "artificial intelligence" is an oxymoron, a contradiction in terms with no real points of contact with the complex phenomenon we call "intelligence".
Of course, AI's computational ability exceeds that of human beings. But AI is incapable of giving meaning to its own calculations, even when it operates in a predictive mode.
Human intelligence is not a calculating machine. It is a complex process that articulates analytical ability, creativity, imagination, affectivity, doubt and error, and presupposes the presence of desire, consciousness, empathy and mercy. Above all, to be intelligent in any real and proper sense, we must have corporeality: the body is the seat of the passions, the place where the memory of our parents and grandparents is reincarnated and where even the memory of the species' evolution is carried.
Therefore, while humankind must be protected from a brutal and massive colonization by intelligent machines, AI nonetheless remains an important driver for developing a new concept of post-modernity in which machines are at the service of humankind and can enhance human well-being.
No one can fail to see that AI-driven technology is entering every single aspect of individuals' lives and is increasingly being used by public authorities to evaluate people's personality, habits and attitudes, to distribute services and allocate resources, and otherwise to make decisions that can have real and serious consequences for individuals' human rights.
Finding the right balance between technological development and the protection of human rights is therefore urgent, to ensure that human rights are strengthened and not undermined by AI. This also means that human oversight of AI and algorithms is possible only if individuals have proper knowledge of how AI and algorithms work, ruling over them instead of being ruled by them.
On the other hand, the ever-increasing delegation of decision-making to AI, outside strong human control, is paving the way for an era of "algorithmic governmentality": the economy, the financial markets and the States themselves are subject to decisions suggested by machines and algorithms. So how can we still talk about democracy, freedom and social sustainability, individually and collectively, in a world governed to a large extent by algorithms? Is this the evidence of an imminent Algocracy?
Within this perimeter, to ward off the risk of Algocracy, a strong AI governance model must be implemented as soon as possible by States and Governments, in order to build the experience, criteria and rules able to steer and govern the process of hybridization between human lives and technology while respecting the singularity of living beings, their culture, freedom, privacy and dignity. This also means creating a framework in which the ESG objectives of the 2030 Agenda, particularly the "S" dimension, can be properly and meaningfully implemented.
This is the way forward chosen by the EU, where a new and impactful season of "governance rules", centered on respect for the individual's rights to personal freedom, dignity and safety prevailing over AI uses, is about to become effective and mandatory for private and public companies as well as for States, Governments and Institutions. A few months ago, the Members of the European Parliament approved the draft negotiating mandate for the AI Act with an overwhelming majority of 499 votes in favour, 28 against and 93 abstentions. This means the AI Act has entered the last legislative step, the so-called "trilogue", the interinstitutional negotiation among the EU Parliament, the EU Council and the EU Commission to agree on the final shape of the law. The AI Act will then be the world's first set of rules on AI (expected by the end of 2023) to ensure a human-centric and ethical development of Artificial Intelligence, with AI systems that are overseen by people, safe, transparent, traceable, non-discriminatory and environmentally friendly.
The AI Act will follow a risk-based approach and establish obligations for providers and users depending on the level of risk an AI system can generate. AI systems posing an unacceptable level of risk to people's dignity, freedom and safety would be strictly prohibited, including systems that deploy subliminal or purposefully manipulative techniques, exploit people's vulnerabilities, or are used for social scoring (classifying people based on their social behaviour, socio-economic status or personal characteristics). The AI Act will, for example, include bans on biometric surveillance, emotion recognition and predictive policing AI systems, tailor-made regimes for general-purpose AI and foundation models like GPT, and the right to lodge complaints about AI systems, as sketched below.
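To make the idea of a risk-based approach concrete, here is a minimal, purely illustrative sketch of a tiered classification. The attribute names, tiers and decision rules below are hypothetical simplifications chosen to mirror the examples in this article; they are not the AI Act's actual legal definitions or criteria.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations before deployment"
    LIMITED = "transparency obligations"
    MINIMAL = "no additional obligations"


@dataclass
class AISystem:
    # Hypothetical attributes, loosely mirroring the practices named in the article.
    name: str
    social_scoring: bool = False
    subliminal_manipulation: bool = False
    exploits_vulnerabilities: bool = False
    biometric_surveillance: bool = False
    predictive_policing: bool = False
    safety_critical_use: bool = False
    interacts_with_people: bool = False


def classify(system: AISystem) -> RiskTier:
    """Toy risk-based classification: obligations scale with the level of risk."""
    if (system.social_scoring or system.subliminal_manipulation
            or system.exploits_vulnerabilities or system.biometric_surveillance
            or system.predictive_policing):
        return RiskTier.UNACCEPTABLE
    if system.safety_critical_use:
        return RiskTier.HIGH
    if system.interacts_with_people:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL


# Example usage with two hypothetical systems:
print(classify(AISystem("citizen scoring platform", social_scoring=True)))        # RiskTier.UNACCEPTABLE
print(classify(AISystem("customer-service chatbot", interacts_with_people=True)))  # RiskTier.LIMITED
```

The point of the sketch is simply that the regulatory burden is not uniform: it is graduated according to the danger a system poses to dignity, freedom and safety, with the most harmful practices excluded from the market altogether.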
The hope is that the AI Act will be complete enough to cover the next, unpredictable evolutions of AI itself.
In this regard, it may be useful to recall the legal adage that "the law is born old": the capacity of a norm to effectively and sufficiently regulate a given situation is, at the very moment it sees the light of day, already potentially inadequate as a consequence of the incessant evolution of the social and economic phenomena the norm addresses.
A similar condition also afflicts technology, and particularly artificial intelligence, which is already preparing to make way for incredible new forms and modes of brain-machine interface.
Imagine, in fact, a computer in which the transistors in the chips, instead of being made of silicon, are made of biological brain material, in particular special cultures of neuronal cells grown in the laboratory, called organoids, capable of transforming a computer into a biocomputer: a real living computer with calculation and reasoning capacities ever closer to those of the human brain.
This is the next evolution of AI, so much so that people are already talking about Organoid Intelligence (OI), capable of overcoming the two most objective limits of AI. The path of the evolution from AI to OI is clearly addressed in the article published in Frontiers in Science, which I suggest reading (https://lnkd.in/dEQgaw82).
Meanwhile, a group of scientists at Johns Hopkins University (USA) is already experimenting with a prototype of an organoid-based computer, proof that this is not the plot of a cyborg novel.
Now, if AI has already posed the difficult ethical question of human governance over the power of the algorithm, the future OI will all the more require appropriate Governance and Risk Management models for human-machine interaction, especially considering that OI, being based on biological neuronal material, shares the same biological composition as the human brain. Indeed, it is almost an extension of human nature.
Because of this peculiar circumstance, reflection on the ethical, social and legal implications of OI will be decidedly more complex and exposed, as never before, to the age-old ideological clash between innovators and conservatives, tradition and modernity.