Key links between the legal, ethical, procedural and technological tetrad in facilitating trustworthy artificial intelligence
Submitted for assessment as part of the Master of Digital Law


Introduction -

Big data, fast processors, ingenious programming, and extensive networking capabilities have paved the way for significant advances in artificial intelligence (AI).[1] As a new kind of autonomous smart agency, AI could bring enormous benefits. However, its development has endured several winters[2] caused by zealous overestimation and repeated touting of AI’s abilities,[3] quickly followed by pragmatic, reactive re-evaluations.

For AI to be adopted, it must work reliably, in ways anyone can trust. Fourtané argued that the lack of trust in AI emanates in part from a general fear of the unknown.[4] According to Birhane et al., another reason for not trusting AI is that inexpert users cannot gauge its potential adverse effects, personally or collectively, because their needs were not always considered in the design, development and deployment of AI.[5]

Reflecting on the collective work of Fourtané and Birhane et al., this essay examines what it means to be trustworthy by consulting the literature. It does this by examining five prominent frameworks to identify the attributes of trustworthy AI (TAI). To prevent principle proliferation, repetitive and redundant attributes are consolidated to establish eight desiderata of TAI.

The eight desiderata are then grouped by complementary undertakings to form a tetrad consisting of Legal, Ethical, Procedural and Technological (LEPT) drivers for TAI. To provide context, functional explanations are provided for each driver. This is followed by a description of how a high level of understanding and a low level of perceived risk are the key links within the tetrad in facilitating TAI, which is the focus of this essay.

The essay explains why it is important for AI to be trustworthy as a precondition for achieving its optimal potential, and why applied[6] AI must abide by legal obligations, adhere to ethical standards, be procedurally sound, and be technically reliable, by

  • Introducing AI and the concept of trustworthiness, and why it is necessary for AI adoption;
  • Examining the attributes of TAI and how to advocate their discernment amongst policymakers, creators and users of AI; and
  • Discussing the key links between the LEPT tetrad in facilitating TAI.

AI and trustworthiness and why it is necessary for AI adoption -

Russell[7] and Norvig[8] defined AI as a multidisciplinary undertaking to define, design and deploy technological artefacts consisting of software and hardware that learn, make decisions and act intelligently[9] with some degree of autonomy to achieve specific goals.[10] Considered to be the next generation of software, AI can process and analyse the agglomeration of digital data faster, more consistently and more accurately than any human brain could, making it an appealing capability to help game out the possible consequences of each action and streamline, if not automate, decision-making.[11]

To say AI is pervasive is trite. However, according to Hao, for TAI to prevail, AI needs to overcome the coalescing of material issues such as bias, transparency, security, accountability and privacy, which can be summed up by the principle of trust:[12] trust by everyone on all sides of the AI transaction, not just the experts in the field who understand and influence its application, development, design and purpose.[13]

If money is the currency of trade transactions, then trust is the currency of AI interactions. Hasselbring and Reussner suggest there is no single definition or manifestation of TAI, preferring a broader interpretation.[14] However, it is becoming generally accepted that TAI, as defined by the AI-HLEG,[15] is lawful,[16] ethical[17] and robust.[18] This definition takes a human-centric approach to AI, where fundamental human rights are identified as the foundation of TAI, premised on an underlying assumption of “no trust, no use”.[19]

Trust is essential because it underpins essential aspects of our lives in an illusory, quasi-trust, yet no less fundamental way: without a minimal amount of trust in others, we would become paranoid and isolationist for fear of deceit and harm.[20] Putting trust in a person or in a technological artefact such as AI requires a presumption about the trustworthiness of both, but the two forms are not synonymous.[21] When it comes to AI, what needs to be established is not the micro detail of how AI works but how humans behave when they build, train, test, deploy and monitor it. It is essential to show that humans did the right things when they built, deployed and tested the AI.

Hardin argued that we place our trust in people we believe have strong reasons to act in our best interests, and that much of what we call trust can best be described as “encapsulated interest”.[22] According to O’Neill, trusting someone is about placing a level of confidence in them to carry out a particular action that will result in an outcome in our best interests.[23] Therefore, trustworthy entities are those capable of honouring the duty to fulfil the trust placed in them.[24]

Coeckelbergh[25] argued that it is irrelevant whether AI can be trusted. What is essential is whether the trustor believes in the impact and exchange[26] as being lawful, ethical and robust, that is, an agent endowed with trust. Coeckelbergh also considered trust placed in AI as a “contribution to the establishment of ‘virtual trust’ or ‘quasi trust’”, where quasi trust is misplaced, unearned or illusory trust that is non-existent or non-performative and, as such, has the potential to deceive individuals about the actual capacities of AI. Trading on user and consumer trust can obfuscate the responsibility gap,[27] creating the need for TAI to be mandatory and for AI developments to be lawful, ethical and robust.

Furthermore, Santoni de Sio and Mecacci intimated that the responsibility gap is not an isolated problem but a set of at least four interconnected problems.[28] These gaps in culpability, moral and public accountability, and active responsibility are caused by different sources, some technical, others political, profit-based, organisational, legal, ethical and societal,[29] and must be addressed to meet the definition outlined by the AI-HLEG. Research by the University of Queensland revealed that most people are unwilling or ambivalent about trusting AI systems and are more willing to use them if assurance mechanisms are in place to support ethical and trustworthy use.[30] For example, the perceived adequacy of current regulations and laws protecting users from the risks associated with AI use, independent AI ethics reviews, AI ethics certifications, national standards for transparency, and AI codes of conduct are necessary to strengthen trust in and acceptance of AI.[31]

In contemporary society, there is a general understanding that AI-powered digital services[32] depend on trust to succeed, and that if people do not trust AI, they will not feel safe using it.[33] PwC analysts estimated AI would add USD 15.7 trillion to the world economy by 2030,[34] a gain that cannot materialise if AI is not considered trustworthy. With this in mind, the following section examines the attributes of TAI and how to advocate their discernment amongst policymakers.

Attributes of TAI and how to advocate their discernment amongst policymakers, creators and users of AI -

This section explores the attributes of TAI and how they imbue artefacts, agents and their decisions with trust. It also offers a complementary way to assist those who create software and the technical policies and regulations related to TAI, including developers and technically oriented policymakers, in effectuating TAI.

As TAI gains popularity in shaping society, democracy and emerging technologies,[35] and as its relevance grows, policymakers for TAI face the challenge of reconciling concepts and precepts that are conceptually dense and hard to penetrate.[36] To grasp the concept of TAI, it was necessary to identify and understand the attributes forming TAI by consulting existing frameworks that have thoroughly examined the development and deployment of TAI.[37][38] Five prominent frameworks were consulted, and their TAI attributes are summarised in Table 1.

Table 1: Summary of attributes for TAI sourced from five prominent frameworks presented in the literature.

European Union Principles for Trustworthy AI[39] - AI-HLEG

  • Technical robustness and safety
  • Data privacy, security and governance
  • Explainability - human agency and oversight
  • Accuracy, traceability and transparency
  • Safety - fairness and non-discrimination
  • Accountability and contestability
  • AI literacy
  • Risk and impact mitigation

Floridi et al.[40]

  • Beneficence – promoting well-being, preserving dignity, and sustaining the planet
  • Non-maleficence – privacy, security and capability caution
  • Meta-autonomy – the power to decide
  • Justice – promoting prosperity and preserving solidarity
  • Explicability – enabling the other principles through intelligibility and accountability

Hasselbring and Reussner[41]

  • Correctness
  • Safety
  • Quality of service
  • Security
  • Privacy

NIST proposed model to measure and enhance user trust in AI systems[42]

  • Accuracy
  • Reliability
  • Resiliency
  • Objectivity
  • Security
  • Explainability
  • Safety
  • Accountability
  • Privacy

OECD AI Principles[43]

  • Inclusive growth
  • Human-centred values and fairness
  • Transparency and explainability
  • Robustness, security and safety
  • Accountability

The five frameworks were chosen for their broad consideration and extensive citation of other literature on TAI. Their collective, considered perspectives provide reliable attestations on the topic. Furthermore, they share the hallmarks of being collaborative, exhaustive and academically reliable: written in collaboration between prominent professionals with wide influence from various backgrounds (regulatory, academic and industry) who are highly regarded by academicians and respected by professionals alike.

Analysis of the frameworks revealed a degree of coherence and overlap between the 31 attributes in terms of context, dimensions and applications. In other words, they try to achieve the same outcome. Whether it is AI for consumer use, the application of AI in business, or the effect of AI across the sensitivity spectrum from human rights to ethical considerations, they all seek to determine what constitutes TAI and to gain social acceptance.[44]

Drawing on the extant literature and supported by practitioner experience, there is a commonsensical justification for consolidating TAI attributes to prevent principle proliferation. The consolidation of the 31 attributes into eight desiderata of TAI is based on transdisciplinary considerations encompassing legal, ethical, procedural and technological factors, culminating in the LEPT tetrad. The tetrad offers policymakers, creators and users of AI a clear perspective and a straightforward understanding of TAI. Comparative analysis work by Floridi and Cowls supports such a consolidation effort, as it groups similar attributes, thus avoiding unnecessary repetition and redundancy, and keeps significantly different attributes distinct, preventing confusion and ambiguity.[45]

The LEPT tetrad consists of eight desiderata of TAI, providing a complementary view of the AI-HLEG notion of TAI. It draws attention to the “legal” and “ethical” elements and gives added attention to what constitutes “robust” by expanding it to explicitly address the “procedural” and “technological” elements, as shown in Figure 1, which is followed by an illustrative code sketch.

Figure 1: LEPT tetrad consisting of eight desiderata of TAI, developed by the author and deduced from the literature.

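To make the consolidation concrete, the following minimal Python sketch illustrates how attributes drawn from Table 1 might be grouped under the four LEPT drivers. The specific attribute assignments are illustrative assumptions made for this sketch, not the definitive eight desiderata depicted in Figure 1.

# Minimal sketch: grouping sample Table 1 attributes under the LEPT drivers.
# The assignments below are illustrative assumptions, not the definitive
# eight desiderata shown in Figure 1.

LEPT_TETRAD = {
    "Legal": ["Accountability", "Contestability", "Data privacy and governance"],
    "Ethical": ["Fairness and non-discrimination", "Human agency and oversight",
                "Human-centred values"],
    "Procedural": ["Transparency", "Explainability", "Traceability",
                   "Risk and impact mitigation"],
    "Technological": ["Technical robustness", "Safety", "Security",
                      "Accuracy", "Reliability"],
}

def classify(attribute: str) -> str:
    """Return the LEPT driver a given attribute belongs to, or 'Unmapped'."""
    for driver, attributes in LEPT_TETRAD.items():
        if attribute in attributes:
            return driver
    return "Unmapped"

if __name__ == "__main__":
    for attr in ("Explainability", "Safety", "Accountability"):
        print(f"{attr} -> {classify(attr)}")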

Enlightened by the literature, the justifications for consolidating the 31 attributes into the LEPT tetrad to collectively explain TAI are as follows:

  • Legal consideration is about staying accountable. This driver emphasises preventing antitrust issues, data privacy violations, discrimination, anti-competitive dynamics and possible predatory abuses. Zarrabi et al. presented a meta-model[46] combining legal and trust-related concepts to enable information systems developers to model and reason about the trustworthiness of a system in terms of its legal compliance. They demonstrated the correlative relations between duties and rights, where a duty is the fulfilment of a goal or secure goal by the information system because it is ordered by law and cannot be refused or ignored, such as data privacy, security and explainability. An example of such consideration is the recent European Commission (EC) proposal for an AI Regulation laying down harmonised rules on AI and amending certain EU legislative acts. However, according to Smuha et al., even such a comprehensive initiative fails to adequately address the enforcement of legal rights and duties and does not ensure the legal certainty and consistency essential for the rule of law.[47]
  • Ethical consideration ensures a human-centred ontology. This driver centres on adherence to fundamental rights and applicable regulation when developing and deploying AI, including meeting core principles and values to ensure an ethical purpose. Yeung, Howes and Pogrebna argued that, with the increasing use of algorithmic decision-making systems, a human rights-centred design, deliberation and oversight approach to the governance of AI offers a concrete proposal capable of delivering genuinely ethical AI consisting of four[48] core principles.[49] These principles address the issues of safety, encompassing fairness and non-discrimination, and provide a set of competencies enabling individuals to critically evaluate AI technologies, including being able to communicate and collaborate effectively with AI during the decision-making, evidence and query process.[50] An example of such consideration is the AI-HLEG’s published ethics guidelines for TAI, which is the foundational framework for the LEPT tetrad developed in this essay.
  • Procedural consideration is about ensuring the rules are transparent and explainable. This driver effectuates both the prospective and retrospective elements of transparency, a core concern connecting AI to automated decision-making processes or decisions delegated to a machine or system. One example is the ratio legis of the transparency requirement under Art. 5(1)(a) of the GDPR and its ethical underpinnings, with their focus on the provision of information and explanation. Another concern is the protection of data subjects from high-risk inferential analytics. Mohammadi and Heisel contended that trustworthiness properties are qualities of a system that support complex collaborative processes; they represent activities such as risk and impact mitigation, documentation of accountability, and the contestability available to the people involved in a process, including the execution constraints between them, to ensure the system performs as expected.[51]
  • Technological consideration focuses on performance. This driver helps to ensure the technology is robust and reliable. Li, Hess and Valacich asserted that trust[52] in AI and trust in the provider influence each other, building up gradually (initial trust formation) and requiring ongoing two-way interactions (continuous trust development). Siau[53] and Wang[54] found that initial and continuous trust formation requires AI to be technically robust and safe (defined as consideration of usability and reliability), accurate in function, traceable (goal congruence) and transparent (exhibiting interpretability).[55] Toreini et al. demonstrated this in their development of FEAS,[56] four categories of trustworthiness technologies for machine learning;[57] a code sketch follows this list.
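
As one illustration of how the technological driver might be operationalised, the following minimal Python sketch encodes Toreini et al.’s four FEAS categories (Fairness, Explainability, Auditability and Safety) as a pass/fail checklist for a stage of a machine learning pipeline. The checklist structure and boolean scoring are assumptions of this sketch, not part of the FEAS framework itself.

# Minimal sketch: a FEAS (Fairness, Explainability, Auditability, Safety)
# checklist for a machine learning pipeline stage. The four categories follow
# Toreini et al.; the boolean pass/fail structure is illustrative only.

from dataclasses import dataclass

@dataclass
class FEASAssessment:
    fairness: bool        # e.g. bias testing across protected groups performed
    explainability: bool  # e.g. model decisions can be explained to affected users
    auditability: bool    # e.g. training data and decisions are logged for review
    safety: bool          # e.g. failure modes identified and mitigated

    def passed(self) -> bool:
        """All four categories must hold for the stage to be deemed trustworthy."""
        return all((self.fairness, self.explainability,
                    self.auditability, self.safety))

    def gaps(self) -> list:
        """Name the categories that still need attention."""
        checks = {"fairness": self.fairness, "explainability": self.explainability,
                  "auditability": self.auditability, "safety": self.safety}
        return [name for name, ok in checks.items() if not ok]

if __name__ == "__main__":
    stage = FEASAssessment(fairness=True, explainability=False,
                           auditability=True, safety=True)
    print(stage.passed())  # False
    print(stage.gaps())    # ['explainability']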

Given the absence of a unified understanding of AI-specific trustworthiness that a non-expert in the field can grasp,[58] the motivation behind this grouping[59] is to propose a simpler view, making it easier to examine the commensurable impact of, and interrelationships between, the attributes, and to quickly discern the notion of TAI in policy development, AI development and use.

Key links between the LEPT tetrad in facilitating TAI -

Siau and Wang proffered that trust-building is a dynamic process involving movement from initial trust to continuous trust development.[60] For example, it starts with rigorous engineering procedures, built on best practices, mandated by laws and regulation[61] and reinforced with industry standards, most of which are currently largely absent.[62]

Given the numerous levels of abstraction making up an AI system, which are complex in themselves and in their interactions,[63] an AI that, when called upon, can explain the rationale behind its conclusions, actions and decisions (legal), takes social considerations into account by providing assured security and privacy protection (ethical), collaborates and interfaces well with humans (procedural), and is easy to use and reliable (technological) will facilitate trustworthiness.[64]

Based on the work of Fürnkranz et al., Kühl et al., and Gillespie et al., to achieve TAI it is necessary to understand[65][66][67] AI ethically and consequentially and to determine perceived risks,[68][69][70] these being the key links in the combinatorial effect of the LEPT tetrad. The goal is to imbue the trustor with confidence that the AI system is fit for purpose, performs reliably and produces accurate output as intended. In other words, the AI system functions within a defined legislative framework, adheres to ethical guidelines, and is procedurally sound and technically reliable.

As the factors linking the LEPT tetrad, the broad fields of human understanding[71] and personal risk perception[72] are widely studied. Gordon argued that the meaning of human understanding[73] is closely related to knowing[74] and explaining,[75] and Wolff et al. described personal risk perception[76] as closely related to harm[77] and loss.[78] Given the limited understanding of the inner workings of an opaque and esoteric AI system[79] (a domain abstruse even to field professionals[80]), a simpler perspective is needed.[81] Thiebes et al.,[82] supported by a similar point of view from Chatila et al.,[83] argued that the goal is to help the trustor, who may range from the layperson to the professional, to determine how their understanding and risk perception contribute to establishing the level of trustworthiness, where a high level of human understanding and a low level of actual personal risk or risk perception are favourable for TAI, as shown in Figure 2; a code sketch of this model follows the figure.

Figure 2: Model of the key links of TAI drivers, developed by the author based on the work of Thiebes et al.[84] and Chatila et al.[85]

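As a minimal sketch of the Figure 2 model, the quadrant logic can be expressed in Python as follows. The 0-to-1 scoring scale and the quadrant labels are assumptions of this illustration, not measures proposed by Thiebes et al. or Chatila et al.

# Minimal sketch of the Figure 2 model: trustworthiness is favoured when
# human understanding is high and perceived risk is low. The 0.0-1.0 scale
# and the quadrant labels are assumptions of this illustration.

def tai_outlook(understanding: float, perceived_risk: float) -> str:
    """Classify the trust outlook from two scores in the range 0.0 to 1.0."""
    high_understanding = understanding >= 0.5
    low_risk = perceived_risk < 0.5

    if high_understanding and low_risk:
        return "Favourable for TAI: informed trust"
    if high_understanding:
        return "Guarded: understood, but perceived as risky"
    if low_risk:
        return "Fragile: trust without understanding (quasi trust)"
    return "Unfavourable: distrust driven by opacity and perceived risk"

if __name__ == "__main__":
    # Example: an inexpert user of an opaque credit-scoring service
    print(tai_outlook(understanding=0.3, perceived_risk=0.7))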

Conclusion -

The prima facie support for the adoption of AI in multiple domains is compelling due to AI’s ability to perform complex tasks and support decisions. Human agency is core to how AI is designed, built and integrated, making corporate accountability real and enforceable. Therefore, attitudes toward AI development will determine how much privacy, safety, trust, transparency and fairness is created for society, be it online or in the real world. Evidence from the literature shows that, to trust AI and use it, users need to be assured of confidentiality, security, responsibility, equality, accountability, transparency and safety, which coalesce as the prominent attributes of TAI.

To keep pace with the speed of AI development and deployment, policymakers and AI creators need to consider and address the trust challenge of TAI. It is necessary for them to perpend the nature and dynamics of trustworthiness in the presence of human-AI interactions by focusing on the properties that regulate, formulate and assimilate AI in society. To help facilitate an understanding of the nature and dynamics of TAI, this essay introduced a complementary tetrad model for TAI consisting of legal, ethical, procedural and technological drivers, or LEPT, for TAI.

Applying LEPT to human-AI interactions facilitates the cognition of trust, helping users achieve a higher level of understanding and a lower perception of risk in AI, making these the essential determinants linking the LEPT drivers for TAI. The LEPT tetrad consists of eight desiderata of TAI formed from a consolidated consideration of TAI attributes from five prominent frameworks. When the LEPT tetrad is considered[86] collectively across the design, development, deployment and operational phases of AI system implementation, linked by better understanding and lower personal risk perception (both present-day gaps), it can help build and safeguard TAI.


REFERENCES

1. Anna Jobin, Marcello Ienca and Effy Vayena, ‘The global landscape of AI ethics guidelines’ (2 Sep 2019) Nature Machine Intelligence <https://go.nature.com/3hLjDuQ>

2. Banu Buruk, Perihan Elif Ekmekci and Berna Arda, ‘A critical perspective on guidelines for responsible and trustworthy artificial intelligence’ (31 Mar 2020) Medicine, Health Care and Philosophy <https://bit.ly/3znxa1R>

3. Meredith Broussard, Artificial Unintelligence: How Computers Misunderstand the World (The MIT Press, 2018)

4. Raja Chatila et al., ‘Trustworthy AI’ (2021) Springer International Publishing <https://bit.ly/3eTbd2H>

5. Mark Coeckelbergh, ‘Can we trust robots?’ (2012) Ethics and Information Technology <https://bit.ly/3kscMrZ>

6. Mark Coeckelbergh, ‘Virtual moral agency, virtual moral responsibility: on the moral significance of the appearance, perception, and performance of artificial agents’ (2009) AI & Society <https://bit.ly/2UjFzEu>

7. Ehsan Toreini et al., ‘The relationship between trust in AI and trustworthy machine learning technologies’ (27 Jan 2020) FAT* ’20: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency <https://bit.ly/3irkL7g>

8. Emma Gordon, ‘Understanding in Epistemology’ (12 Jul 2021) The Internet Encyclopedia of Philosophy <https://bit.ly/3BCBtYJ>

9. F. Fernhout, ‘Towards a European legal framework for the development and use of Artificial Intelligence’, Stibbe (Article, 22 Jul 2021) <https://bit.ly/3i4CNfh>

10. Luciano Floridi and Josh Cowls, ‘A Unified Framework of Five Principles for AI in Society’ (2 Jul 2019) Harvard Data Science Review <https://bit.ly/3CkbAgT>

11. Luciano Floridi et al., ‘AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations’ (26 Nov 2018) Minds & Machines <https://bit.ly/3et745v>

12. Michelle Greenwood and Harry Van Buren, ‘Trust and Stakeholder Theory: Trustworthiness in the Organisation-Stakeholder Relationship’ (2010) Journal of Business Ethics <https://bit.ly/3ifeBpA>

13. Russell Hardin, Trust and Trustworthiness (Russell Sage Foundation, 2002)

14. Wilhelm Hasselbring and Ralf Reussner, ‘Toward Trustworthy Software Systems’ (4 Apr 2006) Computer, Vol. 39, No. 4, pp. 91-92 <https://bit.ly/3ijf6yX>

15. AI-HLEG (High-Level Expert Group on Artificial Intelligence), ‘A definition of AI: Main capabilities and scientific disciplines’ (8 Apr 2019) European Commission <https://bit.ly/3xRwwsU>

16. Johannes Fürnkranz, Tomáš Kliegr and Heiko Paulheim, ‘On cognitive preferences and the plausibility of rule-based models’ (9 Apr 2020) Machine Learning <https://bit.ly/3zsrVhb>

17. Karen Yeung, Andrew Howes and Ganna Pogrebna, ‘AI Governance by Human Rights-Centred Design, Deliberation and Oversight: An End to Ethics Washing’ (Jul 2020) The Oxford Handbook of AI Ethics, Oxford University Press <https://bit.ly/3iCbBUd>

18. Katharina Wolff, Svein Larsen and Torvald Ogaard, ‘How to define and measure risk perception’ (Nov 2019) Annals of Tourism Research <https://bit.ly/3zDe7Rf>

19. Niklas Kühl et al., ‘Machine Learning in Artificial Intelligence: Towards a Common Understanding’ (25 Mar 2020) Cornell University <https://bit.ly/3iD5n6v>

20. Duri Long and Brian Magerko, ‘What is AI Literacy? Competencies and Design Considerations’ (25-30 Apr 2020) in Proceedings of CHI ’20: CHI Conference on Human Factors in Computing Systems <https://bit.ly/36YNLNb>

21. Gary Marcus and Ernest Davis, Rebooting AI: Building Artificial Intelligence We Can Trust (Pantheon Books, 2019)

22. Matthias Braun, Hannah Bleher and Patrik Hummel, ‘A Leap of Faith: Is There a Formula for “Trustworthy” AI?’ (19 Feb 2021) The Hastings Center <https://bit.ly/3BbQgJO>

23. Melanie Ehren, Andrew Paterson and Jacqueline Baxter, ‘Accountability and trust: Two sides of the same coin?’ (11 Nov 2019) Journal of Educational Change <https://bit.ly/3kpS9wC>

24. Nazila Gol Mohammadi and Maritta Heisel, ‘Enhancing Business Process Models with Trustworthiness Requirements’ (2 Jul 2016) IFIP Advances in Information and Communication Technology <https://bit.ly/3eMyzag>

25. Nicole Gillespie, Steve Lockey and Caitlin Curtis, ‘Trust in Artificial Intelligence: A Five Country Study’ (Mar 2021) The University of Queensland and KPMG Australia <https://bit.ly/3euWElD>

26. Onora O’Neill, Autonomy and Trust in Bioethics (Cambridge University Press, 2002)

27. OECD Legal Instruments, ‘Principles on AI’ (22 May 2019) OECD <https://bit.ly/2UTgkZq>

28. Gerald Postema, ‘Trust, Distrust, and the Rule of Law’ (12 Jun 2019) University of North Carolina Legal Studies Research Paper <https://bit.ly/3erZRmc>

29. Proposal, ‘Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts’ (21 Apr 2021) European Union <https://bit.ly/3zEdNRV>

30. Publications Office of the EU, ‘Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics’ (19 Feb 2020) European Union <https://bit.ly/36L4yTD>

31. Report, ‘Ethics guidelines for trustworthy AI’ (8 Apr 2019) European Union <https://bit.ly/3esXvmY>

32. Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach (3rd ed, Prentice Hall Press, 2009)

33. Filippo Santoni de Sio and Giulio Mecacci, ‘Responsibility Gaps with Artificial Intelligence: Why they Matter and How to Address them’ (14 May 2021) Philosophy & Technology, SpringerLink <https://bit.ly/2TkV9zg>

34. Scott Thiebes, Sebastian Lins and Ali Sunyaev, ‘Trustworthy artificial intelligence’ (1 Oct 2020) Electronic Markets <https://bit.ly/3kTKS8k>

35. Alexander Serb and Themistoklis Prodromakis, ‘A system of different layers of abstraction for artificial intelligence’ (Jul 2019) ResearchGate <https://bit.ly/36XfRIK>

36. Keng Siau and Weiyu Wang, ‘Building Trust in Artificial Intelligence, Machine Learning, and Robotics’ (Mar 2018) Cutter Business Technology Journal <https://bit.ly/3BxzYv2>

37. Brian Stanton and Theodore Jensen, ‘Trust and Artificial Intelligence’ (Mar 2021) National Institute of Standards and Technology (NIST) <https://bit.ly/2ThSRRr>

38. Anton Vedder, ‘Accountability of Internet access and service providers – strict liability entering ethics?’ (Mar 2001) Ethics and Information Technology <https://bit.ly/3hOBuBg>

39. John Villasenor, ‘Products liability law as a way to address AI harms’ (31 Oct 2019) Brookings <https://brook.gs/36KZ0ZA>

40. Aaron Wildavsky and Karl Dake, ‘Theories of Risk Perception: Who Fears What and Why?’ (1990) Daedalus, Vol. 119, No. 4 <https://bit.ly/3x0mq7L>

41. Xin Li, Traci Hess and Joseph Valacich, ‘Why do we trust new technology? A study of initial trust formation with organizational information systems’ (5 Mar 2008) The Journal of Strategic Information Systems <https://bit.ly/3iHyIg6>

42. Fatemeh Zarrabi et al., ‘A Meta-model for Legal Compliance and Trustworthiness of Information Systems’ (Jun 2012) in M. Bajec and J. Eder (eds), Advanced Information Systems Engineering Workshops, CAiSE 2012, Lecture Notes in Business Information Processing, vol 112 (Springer, Berlin, Heidelberg) <https://bit.ly/3wWjh8Z>



[1] Gary Marcus and Ernest Davis, Rebooting AI: Building artificial intelligence we can trust (Pantheon Books, 2019).

[2] Meredith Broussard, Artificial Unintelligence: How Computers Misunderstand the World (The MIT Press, 2018).

[3] Sam Shead, ‘Researchers: Are we on the cusp of an “AI winter”?’, BBC News (Article, 12 Jan 2020) <https://bbc.in/3xJKmNL>

[4] Susan Fourtané, ‘Artificial Intelligence and the Fear of the Unknown’, Interesting Engineering, (Article, 4 Mar 2019) <https://bit.ly/2UOYAP0>

[5] Abeba Birhane et al., ‘The Values Encoded in Machine Learning Research’ (29 Jun 2021) Cornell University <https://bit.ly/3C5IQbE>

[6] Encompasses the design, development, and deployment of AI

[7] Stuart Jonathan Russell OBE is an English computer scientist known for his contributions to artificial intelligence. He is a Professor of Computer Science at the University of California, Berkeley and Adjunct Professor of Neurological Surgery at the University of California, San Francisco. <https://bit.ly/2TkPSYu>

[8] Peter Norvig is an American computer scientist. He is a director of research at Google, LLC, and used to be its director of search quality. <https://norvig.com>

[9] Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach (3rd ed) (Upper Saddle River, NJ, USA: Prentice Hall Press, 2009).

[10] AI-HLEG (High-Level Expert Group on Artificial Intelligence), A definition of AI: Main capabilities and scientific disciplines (European Commission, 2019) <https://bit.ly/3BfBSA8>

[11] Tara Balakrishnan, Michael Chui, Bryce Hall and Nicolaus Henke, ‘Global survey: The state of AI in 2020’ (Nov 2020) McKinsey Digital <https://mck.co/3hIZeq5>

[12] Karen Hao, ‘We need to design distrust into AI systems to make them safer’ (13 May 2021) MIT Technology Review <https://bit.ly/3hMsdtu>

[13] Ibid.

[14] Wilhelm Hasselbring and Ralf Reussner, ‘Toward Trustworthy Software Systems’ (4 Apr 2006) Computer, Vol. 39, No. 4, pp. 91-92 <https://bit.ly/3ijf6yX>

[15] AI-HLEG (High-Level Expert Group on Artificial Intelligence), A definition of AI: Main capabilities and scientific disciplines (European Commission, 2019) <https://bit.ly/3BfBSA8>

[16] Respecting all applicable laws and regulations.

[17] Respecting ethical principles and values.

[18] Both from a technical perspective while taking into account its social environment.

[19] Michael Burkhardt, ‘How-To Build Trust in Artificial Intelligence Solutions’, Towards Data Science, (Article, 2 Jun 2020) <https://bit.ly/3xMAYJn>

[20] Onora O’Neill, Autonomy and Trust in Bioethics (Cambridge University Press, 2002).

[21] Russell Hardin, Trust and Trustworthiness (Russell Sage Foundation, 2002).

[22] Ibid.

[23] Ibid (n20).

[24] Michelle Greenwood and Harry J. van Buren, ‘Trust and Stakeholder Theory: Trustworthiness in the Organisation-Stakeholder Relationship’ (2010) Journal of Business Ethics <https://bit.ly/3ifeBpA>

[25] Mark Coeckelbergh is a Belgian philosopher of technology. He is Professor of Philosophy of Media and Technology at the Department of Philosophy of the University of Vienna and former President of the Society for Philosophy and Technology. <https://bit.ly/3euWRoX>

[26] Mark Coeckelbergh, ‘Virtual moral agency, virtual moral responsibility: On the moral significance of the appearance, perception, and performance of artificial agents’ (2009) AI & Society <https://bit.ly/2UjFzEu>

[27] Mark Coeckelbergh, ‘Can we trust robots?’ (2012) Ethics and Information Technology <https://bit.ly/3kscMrZ>

[28] Filippo Santoni de Sio and Giulio Mecacci, ‘Responsibility Gaps with Artificial Intelligence: Why they Matter and How to Address them’ (14 May 2021) Philosophy & Technology, SpringerLink <https://bit.ly/2TkV9zg>

[29] Ibid.

[30] Nicole Gillespie, Steve Lockey and Caitlin Curtis, ‘Trust in Artificial Intelligence: A Five Country Study’ (Mar 2021) The University of Queensland and KPMG Australia <https://bit.ly/3euWElD>

[31] Ibid.

[32] Examples of AI-powered digital services: online dating platforms, credit (or risk) scoring tools, automated financial trading services, digital financial coach/advisors, digital job assistants, automated insurance claim processing bots, online pricing algorithms, autonomous driving and ride-sharing services, local policing and recidivism algorithms, news ranking and social media bots, intelligent weapons, elderly care services and smart home services.

[33] Ibid (n30).

[34] Anand Rao and Gerard Verweij, ‘Sizing the prize: What’s the real value of AI for your business and how can you capitalise?’ (Article, Jun 2017) PwC <https://pwc.to/3xNrTjE>

[35] Matthias Braun, Hannah Bleher and Patrik Hummel, ‘A Leap of Faith: Is There a Formula for ‘Trustworthy’ AI?’ (19 Feb 2021) The Hastings Center <https://bit.ly/3BbQgJO>

[36] Alon Jacovi, Ana Marasović, Tim Miller and Yoav Goldberg, ‘Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI’ (3 Mar 2021) In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT '21) <https://bit.ly/3wH2Hd3>

[37] Anna Jobin, Marcello Ienca, and Effy Vayena, ‘The global landscape of AI ethics guidelines’ (2 Sep 2019) Nature Machine Intelligence <https://go.nature.com/3hLjDuQ>

[38] Banu Buruk, Perihan Elif Ekmekci and Berna Arda, ‘A critical perspective on guidelines for responsible and trustworthy artificial intelligence’ (31 Mar 2020) Medicine, Health Care and Philosophy <https://bit.ly/3znxa1R>

[39] Report, Ethics guidelines for trustworthy AI, (8 Apr 2019) European Union <https://bit.ly/3esXvmY>

[40] Luciano Floridi et al., ‘AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations’ (26 Nov 2018) Minds & Machines <https://bit.ly/3et745v>

[41] Wilhelm Hasselbring and Ralf Reussner, ‘Toward Trustworthy Software Systems’ (4 Apr 2006) Computer, Vol. 39, No. 4, pp. 91-92 <https://bit.ly/3ijf6yX>

[42] Brian Stanton and Theodore Jensen, ‘Trust and Artificial Intelligence’ (Mar 2021) National Institute of Standards and Technology (NIST) <https://bit.ly/2ThSRRr>

[43] OECD Legal Instruments, Principles on AI, (22 May 2019) OECD <https://bit.ly/2UTgkZq>

[44] Ibid (n35).

[45] Luciano Floridi and Josh Cowls, ‘A Unified Framework of Five Principles for AI in Society’ (2 Jul 2019) Harvard Data Science Review <https://bit.ly/3CkbAgT>

[46] Fatemeh Zarrabi et al., ‘A Meta-model for Legal Compliance and Trustworthiness of Information Systems’ (Jun 2012) in: Bajec M., Eder J. (eds) Advanced Information Systems Engineering Workshops. CAiSE 2012. Lecture Notes in Business Information Processing, vol 112. Springer, Berlin, Heidelberg <https://bit.ly/3wWjh8Z>

[47] Nathalie Smuha et al., ‘How the EU can achieve Legally Trustworthy AI: A Response to the European Commission’s Proposal for an Artificial Intelligence Act’ (5 Aug 2021) Legal, Ethical & Accountable Digital Society (LEADS) Lab, University of Birmingham <https://bit.ly/3s2kJWA>

[48] (i) Design and deliberation (ii) Assessment, testing, and evaluation (iii) Independent oversight, investigation, and sanction (iv) Traceability, evidence, and proof.

[49] Karen Yeung, Andrew Howes, and Ganna Pogrebna, ‘AI Governance by Human Rights-Centred Design, Deliberation and Oversight: An End to Ethics Washing’ (July 2020) The Oxford Handbook of AI Ethics - Oxford University Press <https://bit.ly/3iCbBUd>

[50] Duri Long and Brian Magerko, ‘What is AI Literacy? Competencies and Design Considerations’ (25th to 30th Apr 2020) In proceedings of CHI '20: CHI Conference on Human Factors in Computing Systems <https://bit.ly/36YNLNb>

[51] Nazila Gol Mohammadi and Maritta Heisel, ‘Enhancing Business Process Models with Trustworthiness Requirements’ (2 Jul 2016), IFIP Advances in Information and Communication Technology <https://bit.ly/3eMyzag>

[52] Xin Li, Traci J. Hess and Joseph S. Valacich, ‘Why do we trust new technology? A study of initial trust formation with organizational information systems’ (5 Mar 2008) The Journal of Strategic Information Systems <https://bit.ly/3iHyIg6>

[53] Keng Siau is Chair of the Department of Business and Information Technology at the Missouri University of Science and Technology. He is Editor-in-Chief of Journal of Database Management, has written more than 300 academic publications, and is consistently ranked as one of the top IS researchers in the world based on h-index and productivity rate.

[54] Weiyu Wang holds a master of science degree in information science and technology and an MBA from the Missouri University of Science and Technology. Her research focuses on the impact of artificial intelligence on economy, society, and mental well-being. She is also interested in the governance, ethical, and trust issues related to AI.

[55] Keng Siau and Weiyu Wang, ‘Building Trust in Artificial Intelligence, Machine Learning, and Robotics’ (Mar 2018) Cutter Business Technology Journal <https://bit.ly/3BxzYv2>

[56] The four categories are Fairness, Explainability, Auditability and Safety

[57] Ehsan Toreini et al., ‘The relationship between trust in AI and trustworthy machine learning technologies’ (27 Jan 2020) FAT* '20: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency <https://bit.ly/3irkL7g>

[58] Sebastian Gießler et al., ‘Trustworthy AI is not an appropriate framework’ (6 Feb 2019) AlgorithmWatch <https://bit.ly/3BhvSHp>

[59] Interpretation by the author after examining the similarities and overlapping elements of the identified attributes.

[60] Ibid (n55).

[61] Ibid (n30).

[62] KNC, ‘Don’t trust AI until we build systems that earn trust’, The Economist (Article, 18 Dec 2019) <https://econ.st/2Uke1Pa>

[63] Alexander Serb and Themistoklis Prodromakis, ‘A system of different layers of abstraction for artificial intelligence’ (Jul 2019) Researchgate <https://bit.ly/36XfRIK>

[64] Ibid (n35).

[65] Johannes Fürnkranz, Tomáš Kliegr and Heiko Paulheim, ‘On cognitive preferences and the plausibility of rule-based models’ (9 Apr 2020) Machine Learning <https://bit.ly/3zsrVhb>

[66] Niklas Kühl et al., ‘Machine Learning in Artificial Intelligence: Towards a Common Understanding’ (25 Mar 2020) Cornell University <https://bit.ly/3iD5n6v>

[67] Ibid (n30).

[68] Luca Ferri et al., ‘How risk perception influences CEOs' technological decisions: extending the technology acceptance model to small and medium-sized enterprises' technology decision makers’ (27 May 2021) European Journal of Innovation Management <https://bit.ly/2V8gZ9K>

[69] Paul Pavlou, ‘Consumer Acceptance of Electronic Commerce: Integrating Trust and Risk with the Technology Acceptance Model’ (23 Dec 2014) International Journal of Electronic Commerce <https://bit.ly/3zG8nGr>

[70] Ibid (n30).

[71] John Locke, ‘An Essay concerning Human Understanding’ (27 Sep 1924) Nature <https://bit.ly/3iEVoOa>

[72] Aaron Wildavsky and Karl Dake, ‘Theories of Risk Perception: Who Fears What and Why?’ (1990) Daedalus, Vol. 119, No. 4 <https://bit.ly/3x0mq7L>

[73] Emma Gordon, ‘Understanding in Epistemology’ (12 Jul 2021) The Internet Encyclopedia of Philosophy <https://bit.ly/3BCBtYJ>

[74] Ibid.

[75] Ibid (n73).

[76] Katharina Wolff, Svein Larsen and Torvald Ogaard, ‘How to define and measure risk perception’ (Nov 2019) Annals of Tourism Research <https://bit.ly/3zDe7Rf>

[77] Ibid.

[78] Ibid (n76).

[79] Charles McLellan, ‘Inside the black box: Understanding AI decision-making’, ZDNET (Article, 1 Dec 2016) <https://zd.net/36YCo7T>

[80] Thomas Davenport, Jeff Loucks, and David Schatsky, ‘Bullish on the Business Value of Cognitive’ (2017), Deloitte, pg. 3 <https://bit.ly/3x7sNqh>

[81] National Academies of Sciences, Engineering, and Medicine, ‘Assessing and Improving AI Trustworthiness: Current Contexts and Concerns: Proceedings of a Workshop—in Brief’ (May 2021) The National Academies Press <https://bit.ly/3kXgDxp>

[82] Scott Thiebes, Sebastian Lins and Ali Sunyaev, ‘Trustworthy artificial intelligence’ (1 Oct 2020) Electronic Markets <https://bit.ly/3kTKS8k>

[83] Raja Chatila et al., ‘Trustworthy AI’ (2021) Springer International Publishing <https://bit.ly/3eTbd2H>

[84] Ibid (n82).

[85] Ibid (n83).

[86] It is worth noting that an alternative to the TAI paradigm is the concept of reliable AI, which is more befitting and prevents the anthropomorphising of a technological artefact. Trust is a human concept between trusted parties, whereas AI is a systematic grouping of legal, ethical, procedural and technological factors enabling a collection of hardware and software to fulfil designated computing tasks. The concept of reliable AI can be considered through the LEPT tetrad, placing the burden of responsibility on those designing, developing and deploying AI technologies and putting greater attention on ensuring those deploying and using AI are trustworthy.
