Algorethics: Who should govern AI?
Harriet Gaywood
An expert in PR, strategic communications, and crisis management with over 25 years of experience in China and APAC.
Once upon a time, technology was viewed as a support and ‘aid’ to our lives. That is surely the fundamental premise behind ‘artificial intelligence’ (AI). Now the term AI is bandied around, variously causing both fear and excitement, so it is always useful to remind ourselves what it is, even if the answers aren’t definitive, and to consider why it is causing concern.
IBM defines AI as “technology that enables computers and machines to simulate human intelligence and problem-solving capabilities.”
Coursera defines it as “the theory and development of computer systems capable of performing tasks that historically required human intelligence, such as recognizing speech, making decisions, and identifying patterns. AI is an umbrella term that encompasses a wide variety of technologies, including machine learning, deep learning, and natural language processing (NLP).”
Based on the above, which organizations are best placed to govern AI, and how can we define good decision-making? How can we ensure that AI is ethical, or judge what is right and wrong? Not surprisingly, governments are not united in their focus on AI: their misalignment reflects their individual geopolitical concerns, while global corporations are struggling to take a ‘global approach’ and hitting cultural and national obstacles. So, what kinds of organizations should lead in defining the values that can shape ‘good’ decision-making in AI?
Last week Chuck Robbins, CEO of Cisco, was in the Vatican City to sign the Rome Call for AI Ethics, joining signatories including Microsoft, IBM, the FAO and the Pontifical Academy for Life. The Rome Call operates under the auspices of the RenAIssance Foundation, established by and run out of the Vatican City, and “it aims to promote a sense of responsibility among organizations, governments, institutions and the private sector with the aim to create a future in which digital innovation and technological progress serve human genius and creativity and not their gradual replacement.” This sounds like a laudable and suitably universal ambition that most people would support – but to achieve this, the representation needs to be broad. If one religious organization is leading discussions on this topic, then other religious voices must be included to create diversity of views. This has happened to some extent: in January 2023, Jewish and Muslim religious leaders signed the Rome Call. So which other religions should be included if religious leaders are going to steer the direction?
It was also announced (Vatican News) that Pope Francis will become the first pope ever to speak at a G7 meeting, at the upcoming summit. G7 members include Canada, France, Germany, Italy, Japan, the UK and the US; he is speaking as part of Italy’s G7 presidency. In September 2023, Italy’s Prime Minister Giorgia Meloni spoke at the UN, saying: “We cannot make the mistake of considering this domain [AI] a ‘free zone’ without rules. We need global governance mechanisms capable of ensuring that these technologies respect ethical barriers, that the evolution of technology remains at the service of humans and not vice versa. We need to give practical application to the concept of ‘algorethics’, that is, giving ethics to algorithms.”
So, can we define values and ethical behavior that transcend religions? Or will certain governments and politics always dominate?
Business leaders are also being challenged at a national level. For example, last week Chuck Robbins also became a board member of the US Department of Homeland Security (DHS) AI Safety and Security Board, saying “we work to strengthen America’s resilience in today’s rapidly evolving threat landscape.”
Corporations tend to lead R&D in new technologies, so governments need to engage with business among other stakeholders. The full list of current DHS board members includes academics, policymakers and civil-society organisations, alongside companies such as Adobe, Alphabet, Amazon Web Services, AMD, Microsoft and Nvidia. So how should a global corporation balance its business interests with protecting national security?
The US is clear about its concerns. Last week DHS released guidelines to mitigate AI risks to critical infrastructure, together with a report (link) about the misuse of AI in the development and production of chemical, biological, radiological and nuclear (CBRN) threats. This builds on the AI Risk Management Framework (RMF) published by the National Institute of Standards and Technology (NIST).
Contrast this with the UK’s AI Bill (just seven pages), which includes the establishment of an AI Authority and defines AI as follows: “(i) In this Act ‘artificial intelligence’ and ‘AI’ mean technology enabling the programming or training of a device or software to— (a) perceive environments through the use of data; (b) interpret data using automated processing designed to approximate cognitive abilities; and (c) make recommendations, predictions or decisions; with a view to achieving a specific objective. (ii) generative AI, meaning deep or large language models able to generate text and other content based on the data on which they were trained.” This feels vague and limited in its reach, given the number and types of AI it does not cover.
A House of Lords research briefing about the AI Bill, dated 18 March 2024, states that “at present, it is too soon in the evolution of AI technology to legislate effectively and to do so now may be counterproductive.” A ‘voluntary’ approach of self-governance is therefore encouraged. A 2017 report by PwC (still cited in connection with the recent AI Bill) suggests the economic benefits include AI adding GBP 232 billion by 2030, around 10% of the UK’s GDP.
The UK approach contrasts with the recently released EU AI Act, which takes a more regulatory approach across all areas of AI and “aims to provide AI developers and deployers with clear requirements and obligations regarding specific uses of AI. At the same time, the regulation seeks to reduce administrative and financial burdens for business, in particular small and medium-sized enterprises (SMEs).”
When is it ‘too early’ for governance? Meta, for example, frames its new open-source Llama 3 model as democratizing AI early in the development process. An article in Fast Company weighs the pros and cons, suggesting that open sourcing may facilitate deepfakes but could also encourage transparency.
So the question is simply: what is a good, or ethical, decision by AI? Our answer is, of course, based on our experiences, our education, our culture, our families, our subject expertise, and the values that result. Algorethics is simply how we reflect on the ethical use of algorithms. How should human empathy, experience, and emotional intelligence play a role in decision-making and shape algorithms? Can our values be structured in a logical manner to guide the development of AI, and who should have the final say?
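To make the idea of ‘giving ethics to algorithms’ slightly more concrete, here is a minimal, purely illustrative sketch in Python. It imagines a hypothetical decision-making system whose outputs are vetted against explicitly encoded values before being acted on; the value names, the `Decision` fields, and the rules themselves are all invented for illustration, not drawn from any real framework.

```python
# A toy sketch of "algorethics": human-defined values encoded as explicit
# constraints that vet an algorithm's proposed decision. Everything here
# (field names, rules, thresholds) is a hypothetical illustration.

from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class Decision:
    """A hypothetical decision proposed by some AI system."""
    action: str
    affects_safety: bool
    discloses_personal_data: bool
    human_review_available: bool

# Each "value" is a named predicate returning True when the decision
# respects it. Which values appear here, and who gets to write them,
# is exactly the governance question this article is asking.
VALUES: Dict[str, Callable[[Decision], bool]] = {
    # Safety-affecting decisions must keep a human in the loop.
    "human oversight": lambda d: d.human_review_available or not d.affects_safety,
    # Decisions must not expose personal data.
    "privacy": lambda d: not d.discloses_personal_data,
}

def vet(decision: Decision) -> Tuple[bool, List[str]]:
    """Return (approved, names of any violated values)."""
    violated = [name for name, respects in VALUES.items() if not respects(decision)]
    return (len(violated) == 0, violated)

if __name__ == "__main__":
    proposal = Decision(
        action="auto-approve loan application",
        affects_safety=True,
        discloses_personal_data=False,
        human_review_available=False,
    )
    approved, violations = vet(proposal)
    print(f"approved={approved}, violated={violations}")
    # Prints: approved=False, violated=['human oversight']
```

Even this toy example surfaces the hard part: code can enforce values once they are written down, but it cannot decide whose values they should be.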
I would love to hear your thoughts on who should govern the ethics of AI and the existential risk it may present.
Comment from the author, 4 months later: Religious leaders from across the world meet in Hiroshima, Japan, to sign the “Rome Call for AI Ethics”, emphasizing the vital importance of guiding the development of artificial intelligence with ethical principles that promote peace. An update to my previous article: https://www.vaticannews.va/en/vatican-city/news/2024-07/world-religions-to-commit-to-rome-call-on-ai-in-hiroshima.html
Comment (Sustainability and ESG for the Built Environment), 6 months ago: History shows that humans are a greater risk to humanity than AI. Any human governance of AI created by humans is doomed. Human governance can only seek to influence the timing, shape and scale of the AI doom. Discuss.