Defining Safe & Ethical AI: Enabled by Constraints
Dr Sarah Bell
APAC Executive Team at MRI - Director, Strategic Partnerships. Innovative Industry Leader. Creative Business Strategy. Proptech Leader of the Year 2023. REB Excellence Award 2023. Researcher. Speaker. Author.
Patrick Henry Winston was a prominent leader of contemporary artificial intelligence. He succeeded Marvin Minsky as Director of the MIT Artificial Intelligence Laboratory. Winston’s seminal definition of artificial intelligence is preferred by many in the field because it balances a technical description of artificial intelligence with room for further developments, whilst remaining technology-neutral.
This is important because the tools that “do” artificial intelligence are evolving, iterative, and undemocratic. A unified, technology-neutral definition helps combat the lack of technical literacy and the blurring between product and application brands; company brands; technical descriptions of tools (established, emerging, and novel); and unclear definitions of the overarching term “artificial intelligence”, which remains the subject of some contention. We see this manifest in the contemporary example of OpenAI, which is a brand name whose product, ChatGPT, became synonymous for many with artificial intelligence, when it is in fact a Large Language Model (LLM): one emergent tool in a suite of technologies grouped under the category of artificial intelligence (AI).
Winston’s definition of AI is this:
“algorithms, enabled by constraints, exposed by representations that support models targeted at loops that tie thinking, perception and action together.”
The phrase to focus on for the purposes of positioning this discussion is: “enabled by constraints”.
It has always been intended that this technology be constrained. If we extend the notion of technical constraint to the notion of regulation, and we frame constraints as enabling, then we do nothing to dampen the provision of expert systems and their capabilities to the community of Australian innovators seeking to transcend our human capabilities. At the same time, we preserve the unique domain of the human: to monitor the systems, to interpret their outputs and create meaning within a generalised context of ethics and experience, and then to act.
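The idea of constraints as enabling, with the human retained as monitor, interpreter, and final decision-maker, can be sketched in code. This is a minimal illustrative sketch, not a real system: the constraint rules, names, and thresholds below are all hypothetical assumptions chosen for the example.

```python
# A toy "enabled by constraints" gate: an AI recommendation only proceeds if it
# satisfies every explicit constraint, and even then it is routed to a human
# for interpretation and action rather than executed autonomously.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Recommendation:
    action: str        # what the system proposes to do
    confidence: float  # the system's self-reported confidence, 0.0-1.0


# A constraint is a predicate over a recommendation: True means "satisfied".
Constraint = Callable[[Recommendation], bool]


def within_confidence_floor(rec: Recommendation) -> bool:
    """Refuse to act on low-confidence outputs (threshold is illustrative)."""
    return rec.confidence >= 0.8


def is_reversible(rec: Recommendation) -> bool:
    """Toy rule: only allow actions a human operator could later undo."""
    return not rec.action.startswith("delete")


def gate(rec: Recommendation, constraints: List[Constraint]) -> str:
    """Constraints here enable rather than merely restrict: a recommendation
    passing every check is forwarded to a human reviewer; anything failing
    is escalated instead of silently executed."""
    if all(check(rec) for check in constraints):
        return "forward-to-human-review"
    return "escalate"


if __name__ == "__main__":
    checks = [within_confidence_floor, is_reversible]
    print(gate(Recommendation("approve tenancy application", 0.92), checks))
    print(gate(Recommendation("delete applicant record", 0.95), checks))
```

The design point is that the constraints do not replace human judgment; they define the envelope within which the system may hand work to a human, mirroring the division of labour described above.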
In the first wave of artificial intelligence, the science fiction writer Isaac Asimov developed three laws of robotics. It is not problematic that these laws originated in science fiction and have been transposed to real-world ethical thinking and practice; in fact, it is an example within the domain of AI of how powerful storytelling can be as an abstracted means of conveying complex thinking into other knowledge cultures and practices. The laws are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
Despite, or perhaps because of, their origin in fictional storytelling, these laws have informed the work and practice of many computer engineers, even though numerous critics recognise that they remain grossly wanting for real-world applications due to the non-technical and ordinary nature of the language used. They are, simply put, “not scientifically plausible”. This issue with the plausibility of ethics based on ordinary language is a legacy of Asimov that appears to have repeated itself in the contemporary search for laws and regulations to support ethical AI.
Within AI, there are - and will always be - new tools that will nest within Winston’s definition (see introduction), and so an approach developed today must account for a range of emergent technologies, unknown to us at the point of adopting any regulatory framework. Just as Winston did with his definition, our conceptualisation of AI must therefore be flexible to cater for the characteristics of emergent technology. Rotolo, Hicks and Martin (2015) propose five criteria for the classification of emergent technology, being “(i) radical novelty, (ii) relatively fast growth, (iii) coherence, (iv) prominent impact; and (v) uncertainty and ambiguity”. They recognise that many solutions are grouped together for understanding and convenience under the general label of ‘emergent technology’; while this includes AI, it also extends to applications such as robotics, cryptocurrency, nanotechnology, and indeed any technology with AI as an enabling technology.
The novel dilemma of AI, compared to traditional software, originates in its ability to function autonomously - algorithmically, without a human user, in a way that imitates a human decision making process. With the advent of Generative AI, these systems are accelerating in the sophistication of that execution.
Understanding AI (including advanced automation) in this paradigm introduces the concept of an ‘artificial actor’, described by Floridi and Taddeo, who go further than Winston in capturing the capability and anthropomorphised conception of AI as:
“A growing resource of interactive, autonomous, self-learning agency, which enables computational artefacts to perform tasks that would otherwise require human intelligence to be executed successfully”.
These capabilities have created the humanisation of software, so that the software itself becomes an ‘artificial actor’ in the social world, interacting and interrelating with human social actors. The manifestation of AI as an ‘artificial actor’ is the basis of a movement to expand the concept of agency to these objects. Navon; Gunkel; Gerdes and Coeckelbergh; and Darling use the term ‘social robot’ to highlight our seemingly innate desire to anthropomorphise these artificial actors.
The evolution of technology and the emergence of this concept of an artificial actor to describe the leap from the mechanisation of muscle to the mechanisation of cognition has opened the door to imagining these machines as actors in our own image.
Humanisation is a trademark of the way we conceive intelligent machines compared to the other machines that function more as electrified versions of physical machines. Rand quoting his MIT colleague Epstein states, “The way we allocate responsibility is complicated when AI is involved. AI is simply a tool created and used by humans, but when we describe it with human characteristics, people tend to view it very differently. It can be seen more as an agent with independent thought and the ability to create.”
If we can accept that these objects are different, and that our relationship with AI is also social, then we must accept that it is also political. Trust architecture must be extended and applied to these new types of objects: structures of clarity and confidence that will benefit every Australian and balance a thriving innovation sector with the rights and access of every citizen. To return to Winston’s original definition, we must enable a future Australia powered by the capabilities of AI by constraining it just enough to provide every Australian with safe and ethical systems that are worthy of the public trust.
Disclaimer: The views and opinions expressed in this article are solely those of the author and do not necessarily reflect the official policy or position of the company they work for. The author is writing in a personal capacity for the purpose of sharing insights and perspectives on AI and related topics. Any information provided in this article is based on the author's individual experience, research, and understanding.