AI Regulation: Control or Collaboration?
Porque uno solo se realiza sucesivamente (Because one only realizes oneself successively), Lobo Velar de Irigoyen [44]


Current regulatory efforts advocate for "human-centric" AI. The EU's Artificial Intelligence Act (EUAIA) declares it a principle to "promote the uptake of human-centric and trustworthy artificial intelligence" [1]. The Japanese government published "Social Principles of Human-Centric AI" in 2019 [2]. The Prime Minister of India, Narendra Modi, proposed a new framework "for responsible, human-centric AI governance" for the G20 during a December 2023 speech [3, 4]. The draft Artificial Intelligence Model Law (AIML) by the Chinese Academy of Social Sciences includes the principle that regulation be "people-centered" [5].

The human-centric principle is closely related to AI alignment, the challenge of ensuring that AI systems' values, behaviors, and goals align with human ones [6, 7]. Aligned AI is human-centric AI. Creating and keeping AI systems aligned is a governance mandate, with implications for, on the one hand, political and administrative systems, and on the other, the corporations and researchers engaged in developing and deploying AI systems.

The authors of a major review of alignment research characterize alignment by four principles:

  1. Robustness states that the system’s stability needs to be guaranteed across various environments.
  2. Interpretability states that the operation and decision-making process of the system should be clear and understandable.
  3. Controllability states that the system should be under the guidance and control of humans.
  4. Ethicality states that the system should adhere to society’s norms and values [6].

Safety is sometimes added to or substituted for robustness, fairness is often cited as a key ethical value, and interpretability is often described as transparency [8]. With these qualifications noted, the RICE principles are an excellent encapsulation of the objectives of alignment, and thus of current thinking about what is required if AI is to be human-centric.

In this constellation of principles, "controllability" appears to be the sine qua non. If AI systems cannot be controlled, how can any of the other principles be assured?

Control is an objective in all major regulatory initiatives. The risk management framework mandated for federal agencies by the Office of Management and Budget (OMB) can, for example, be understood as a way to establish and maintain organizational control (a schematic sketch of the monitoring and mitigation steps follows the list below):

  1. Complete an AI impact assessment, including the intended purpose; potential risks with special attention to underserved communities; assessment of the quality and appropriateness of the data used in the AI’s design, development, training, testing, and operation.
  2. Test for performance in a real-world context. "Testing conditions should mirror as closely as possible the conditions in which the AI will be deployed. Through test results, agencies should demonstrate that the AI will achieve its expected benefits and that associated risks will be sufficiently mitigated, or else the agency should not use the AI."
  3. Independently evaluate the AI. Ensure that the system works appropriately and as intended, and that its expected benefits outweigh its potential risks.
  4. Conduct ongoing monitoring. Detect changes in the AI’s impact on rights and safety as well as adverse performance or outcomes and defend the AI from AI-specific exploits.
  5. Regularly evaluate risks from the use of AI. Determine "whether the deployment context, risks, benefits, and agency needs have evolved." In particular, human review is required "after significant modifications to the AI or to the conditions or context in which the AI is used, and the review must include renewed testing for performance of the AI in a real-world context."
  6. Mitigate emerging risks to rights and safety. Where new or altered risks are detected, "agencies must take steps to mitigate those risks, including, as appropriate, through updating the AI to reduce its risks or implementing procedural or manual mitigations, such as more stringent human intervention requirements." And where "the AI’s risks to rights or safety exceed an acceptable level and where mitigation strategies do not sufficiently reduce risk, agencies must stop using the AI as soon as is practicable."
  7. Ensure adequate human training and assessment. Agencies must "ensure there is sufficient training, assessment, and oversight for operators of the AI to interpret and act on the AI’s output, combat any human-machine teaming issues (such as automation bias), and ensure the human-based components of the system effectively manage risks from the use of AI."
  8. Provide additional human oversight. Agencies should "identify any decisions or actions in which the AI is not permitted to act without additional human oversight, intervention, and accountability."
  9. Provide public notice and plain-language documentation. Agencies must provide "accessible documentation in plain language of the system’s functionality to serve as public notice of the AI to its users and the general public. Where people interact with a service relying on the AI and are likely to be impacted by the AI, agencies must also provide reasonable and timely notice about the use of the AI and a means to directly access any public documentation about it in the use case inventory" [9].
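
To make the control loop implied by these requirements concrete, the sketch below models steps 4 through 6 (ongoing monitoring, periodic review, and mitigation or withdrawal) as a simple review cycle. It is an illustrative sketch only: the class names, severity scale, and decision thresholds are invented for this example and are not part of the OMB guidance.

```python
# Hypothetical sketch of an agency monitoring/mitigation cycle; names, scales,
# and thresholds are invented for illustration, not taken from the OMB memorandum.
from dataclasses import dataclass, field
from datetime import date
from typing import List


@dataclass
class RiskFinding:
    description: str
    severity: float           # placeholder scale: 0.0 (negligible) .. 1.0 (critical)
    mitigated: bool = False


@dataclass
class AIUseCase:
    name: str
    acceptable_risk: float    # maximum tolerated severity from unmitigated findings
    last_review: date
    findings: List[RiskFinding] = field(default_factory=list)
    mitigation_attempted: bool = False

    def unmitigated_risk(self) -> float:
        """Highest severity among findings that have not yet been mitigated."""
        return max((f.severity for f in self.findings if not f.mitigated), default=0.0)

    def review(self, today: date, new_findings: List[RiskFinding]) -> str:
        """One monitoring cycle: record new findings, then decide whether the
        system may continue, needs mitigation, or must be taken out of use."""
        self.findings.extend(new_findings)
        self.last_review = today
        if self.unmitigated_risk() <= self.acceptable_risk:
            return "continue"      # residual risk is within the accepted level
        if not self.mitigation_attempted:
            self.mitigation_attempted = True
            return "mitigate"      # update the AI or add procedural/manual safeguards
        return "stop"              # mitigation failed to reduce risk; cease use


if __name__ == "__main__":
    use_case = AIUseCase("benefits-eligibility screening", acceptable_risk=0.2,
                         last_review=date(2024, 1, 1))
    finding = RiskFinding("higher error rate for one applicant group", severity=0.6)
    print(use_case.review(date(2024, 7, 1), [finding]))   # -> "mitigate"
    print(use_case.review(date(2025, 1, 1), []))          # -> "stop" (risk still unmitigated)
```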

Risk management is a well-established regulatory methodology that is also required by the EUAIA in Article 9 [1] and by the AIML in Article 38 [5]. It offers the promise of flexible application, a comprehensive and systematic approach to evaluating risks and benefits, a focus on continuous monitoring and improvement, and global recognition and use.

And yet controllability is both a vain hope and a morally flawed aim, for one and the same reason: AI systems are rapidly evolving away from being tools and toward being intelligent agents.

The first section below critiques the emphasis on "human-centric" and controllable AI as a misleading simplification of the way AI is embedded in complex systems. The second section examines the consequences for controllability of the emergence of conscious AI, most likely in conjunction with artificial general intelligence (AGI).

Limitations of Human Oversight of AI

The principle of controllability exaggerates the feasibility and effectiveness of human oversight while downplaying approaches to alignment that include AI systems as aligners or that involve communicative cooperation between AI systems and humans. In this section I first look at research focused on oversight problems specific to AI systems. Then I consider what the field of accident analysis implies about human oversight of AI.

In a recent survey of 41 policies mandating some form of human oversight of AI, Ben Green found that "they suffer from two significant flaws. First, evidence suggests that people are unable to perform the desired oversight functions. Second, as a result of the first flaw, human oversight policies legitimize government uses of faulty and controversial algorithms without addressing the fundamental issues with these tools." Green adds that they "promote a false sense of security in adopting algorithms and reduce the accountability that vendors and policymakers face for algorithmic harms" [17]. He finds substantial evidence for several deficiencies that contribute to these outcomes, including:

  • Automation Bias: Human overseers often defer to AI recommendations instead of critically evaluating them. This bias reduces independent scrutiny and leads to both omission errors (failing to act when the algorithm does not alert) and commission errors (following incorrect algorithmic advice despite contradictory evidence). There is also the converse problem of algorithm aversion, "when people prefer human predictions over algorithmic predictions even when the algorithm is shown to be more accurate" [22].
  • Complexity and Opacity: AI systems are often complex and not easily interpretable. Human overseers may lack a deep understanding of how they operate and make decisions. This lack of understanding hampers their ability to meaningfully oversee and challenge outputs.
  • Inadequate Training and Support: Human overseers often do not receive adequate training or support to effectively oversee algorithms. This will often be compounded by the fact that "the introduction of a novel AI system will regularly proceed whenever the AI outperforms humans at a given task" [22]. As the CEO of AI startup Cohere noted in a recent interview, ongoing improvement of AI systems makes assessment more difficult: "we need ... to find people who are still better than the model at those domains and they can tell me whether it's improving" [24]. The danger is superficial oversight where the human role is more of a formality than a substantive check, or where it is misguided and degrades system performance [17].

Other researchers have flagged additional factors that can limit, distort, and undermine human control of AI systems:

  • High Workloads. High workloads can diminish the time available for oversight and its quality. AI systems are often deployed to enhance productivity, and overseers may face soft penalties, such as requirements to provide written justifications, when they diverge from AI decisions [18]. Humans are "more likely to use an AI when faced with high workload and while resources for fully considering all alternatives are low" [19]. Oversight may degenerate into simply accepting the judgment of the AI system when overseers are "tired and bored" [22].
  • Weak or distorted incentive structures. Institutional support for oversight may be compromised by the organizational advantages of the AI system, whether in the form of productivity enhancements, system management, operational insights, or other factors. In addition, "auditors risk capture by industry interests to receive repeated auditing commissions" [22].

Research in the field of human and organizational factors (HOF) accident analysis suggests that the focus on direct human oversight is a misleading oversimplification. In the study of accidents in complex systems, human errors narrowly understood as acts or omissions that directly result in an accident are "the outcome, not the cause" [26]. As Andrew T. Miranda explains in a study of naval aviation mishaps, within the human factors community, human errors

are viewed as symptoms of deeper trouble within an organization. That is, the term human error is often considered an unhelpful and reductive label used to identify the solitary person(s) as the weak component within a complex system encompassing numerous people, tools, tasks, policies, and procedures [23, citations removed].

It is true that in a very broad sense, "Many studies have pointed out the fundamental role of human errors in accident occurrence, with them being involved in 70–80% of aviation accidents, 60% of petrochemical accidents, 90% of road traffic accidents, 90% of steel and iron metallurgy accidents, over 90% of nuclear accidents, over 75–96% of marine accidents, and over 80% of chemical process accidents" [25]. But these statistics refer to "inadequate safety culture, organizational process vulnerability, inadequate supervision, supervisory violations" and other factors within complex systems that contribute to accident causation [25]. In this broad sense of human error, many errors are far removed from the proximate cause of an accident and can only be recognized as key factors through systems analysis.

On the one hand, the limitations of human oversight of AI systems highlighted above indicate the need to reconceive oversight as an institutional responsibility. But on the other, accident analysis studies show that institutional design and operation themselves are critical to the possibility of safe conduct and failure avoidance.

AI Alignment as an Institutional Responsibility

The Human Factors Analysis and Classification System (HFACS) is a prominent and widely used framework for identifying and analyzing the human causes of accidents in complex systems. It categorizes human errors and violations at multiple levels, from unsafe acts to organizational influences, providing a structured approach to understanding and mitigating the human factors that contribute to accidents and thereby improving safety [27].

Human Factors Analysis and Classification System (HFACS) [27]
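
To illustrate the layered view that HFACS takes, the sketch below encodes its four tiers as a simple enumeration and tags the contributing factors of a hypothetical AI-oversight incident. The tier names follow the standard HFACS framework; the incident and its factor assignments are invented for illustration.

```python
# Minimal sketch: the four HFACS tiers as an enumeration, used to tag contributing
# factors in a hypothetical AI-oversight incident (the incident is invented).
from enum import Enum


class HFACSTier(Enum):
    ORGANIZATIONAL_INFLUENCES = 1      # resource management, climate, organizational processes
    UNSAFE_SUPERVISION = 2             # inadequate or absent supervision
    PRECONDITIONS_FOR_UNSAFE_ACTS = 3  # fatigue, high workload, poor tooling
    UNSAFE_ACTS = 4                    # the errors and violations closest to the event


# A hypothetical incident: an overseer approves a faulty algorithmic decision.
contributing_factors = {
    "no budget for overseer training": HFACSTier.ORGANIZATIONAL_INFLUENCES,
    "no procedure for escalating disputed outputs": HFACSTier.UNSAFE_SUPERVISION,
    "overseer reviewing hundreds of cases per shift": HFACSTier.PRECONDITIONS_FOR_UNSAFE_ACTS,
    "automation bias: AI recommendation accepted unchecked": HFACSTier.UNSAFE_ACTS,
}

if __name__ == "__main__":
    for tier in HFACSTier:
        factors = [f for f, t in contributing_factors.items() if t is tier]
        print(f"{tier.name}: {factors}")
```

The point of the exercise is the same one the accident literature makes: the "unsafe act" at the bottom of the list is only the last link in a chain that runs up through supervision and organizational influences.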

What human factors research shows about accidents -- that they are products of systems -- can equally be said about oversight of AI. Oversight is ultimately a distributed responsibility within systems, not the singular responsibility of individual overseers. The responsibility individuals bear exists in the context of patterns of causality within systems of coordination and management.

Oversight of AI must be achieved in and through systems, as some researchers have concluded. Green proposes "an institutional oversight approach to governing public sector algorithms ... [in order] to promote greater rigor and democratic participation in government decisions about whether and how to use algorithms" [17]. Johann Laux offers a set of principles to "anticipate the fallibility of human overseers and seek to mitigate them at the level of institutional design" [22].

The Emerging Role of AI in Aligning AI Systems

Although alignment must be understood institutionally for the reasons outlined above, institutionally designed and implemented alignment processes strictly dependent on human action and judgement will still be insufficient as AI systems become more complex and capable. There are two fundamental reasons for this:

  • Complexity and Speed of AI Systems: Advanced AI systems operate at a level of complexity and speed that can surpass human cognitive and reaction capabilities. This makes it difficult for humans to keep up with the decision-making processes and operational speed of these systems. This is sometimes viewed as a scalability problem: "An outstanding challenge ... is scalable oversight, i.e., providing high-quality feedback on super-human capable AI systems that operate in complex situations beyond the grasp of human evaluators, where the behaviors of AI systems may not be easily comprehended and evaluated by humans" [6].
  • Synthetic Thought and Decision-Making: The decision-making processes of AI systems, particularly those involving deep learning and neural networks, are often described as "opaque" and difficult for humans to understand or interpret. This implies that the problem is one of concealment, however inadvertent, and the solution is the regulatory demand for transparency. But the architectures, algorithms, and learning abilities of AI systems are substantially different from human cognition. AI systems may process data in ways that humans cannot easily comprehend, such as integrating vast datasets in real time or perceiving patterns in ways humans do not. They may employ thought processes and problem-solving approaches fundamentally different from human cognition. AI systems might develop entirely new concepts that are alien to human understanding. And if AI systems interact with each other, they could develop their own cultures. Human-AI understanding may benefit from an ethnographic or hermeneutic perspective [30]. While this is seldom considered in alignment research, researchers readily acknowledge that explainability and interpretability are not the disclosure of objective features of AI systems but rather pragmatic efforts to enable understanding. Saeed and Omlin, in a recent review of the field, propose that "Explainability provides insights to a targeted audience to fulfill a need, whereas interpretability is the degree to which the provided insights can make sense for the targeted audience’s domain knowledge" [28, emphasis in original]. Moreover, they highlight that there may be inescapable trade-offs between AI model complexity and interpretability [29].

Explainable AI is itself an AI component and an example of the larger reality that alignment of AI is dependent on AI.

Explainable AI as an AI system component [31].
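
As a concrete (and deliberately simplified) illustration of this point, the sketch below uses a global surrogate model, a common post-hoc explanation technique in which a small, interpretable model is itself trained to approximate an opaque model's predictions. The data, the choice of models, and the depth limit are stand-ins chosen for the example, not a description of any particular deployed system.

```python
# Illustrative sketch of a global surrogate explanation: an interpretable decision
# tree is trained to mimic an opaque model, so the "explanation" is itself produced
# by another learned model. Data and models are synthetic stand-ins.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in data for whatever the opaque model is deployed on.
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "opaque" model whose decisions we want to explain.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# The surrogate is trained on the black box's *predictions*, not the ground truth,
# so it approximates the model's behaviour rather than the underlying task.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: how often the interpretable surrogate agrees with the opaque model.
fidelity = accuracy_score(black_box.predict(X_test), surrogate.predict(X_test))
print(f"surrogate fidelity to black box: {fidelity:.2f}")
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(8)]))
```

The depth limit on the surrogate is where the trade-off noted above appears: a deeper tree would track the opaque model more faithfully but would be correspondingly harder to read.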

There are many other examples of alignment as a collaborative process with AI. To a large extent, these examples are a response to the need for scalable oversight. Incorporating humans in the training loop can "waste human resources and impede training efficiency", while the "inherent complexity of AI system behaviors makes evaluation difficult, especially on hard-to-comprehend and high-stakes tasks ... such as teaching an AI system to summarize books [32], generate complex pieces of code [33], and predict future weather changes [34]" [6]. AI-assisted alignment methods include the following:

  • Reinforcement Learning from AI Feedback (RLAIF): This method builds upon the framework of Reinforcement Learning from Human Feedback (RLHF), extending it by using feedback generated by large language models (LLMs) rather than human feedback. It shows promise in avoiding a tendency in RLHF for models "to avoid sensitive and contentious issues, potentially diminishing models’ overall utility". And it addresses the limitations of RLHF when creating AI systems with superhuman abilities [6].

  • Iterated Distillation and Amplification (IDA): IDA is a framework for scalable oversight through iterative collaboration between humans and AIs. In IDA, a complex task is decomposed into simpler subtasks that AI agents can be automatically trained to perform, for example through reinforcement learning. Then, collaborative interaction between humans and multiple AI instances trained on the subtasks leads to the creation of an enhanced agent able to perform well on a more complex task and to learn new tasks [6].

  • Weak-to-Strong Generalization: This phenomenon involves using weak supervision signals from a weak model to train a strong model. It's a form of bootstrapping that's necessary when humans cannot provide scalable supervision signals, and it exemplifies how AI systems can assist in overseeing more complex AI tasks. The weak model is trained on ground truth and then annotates new data with weak labels for training the strong model. For example, consider sentiment analysis of movie reviews. A weak model, GPT-2, is trained on a small dataset of reviews with human annotation of sentiments. It then makes predictions (weak labels) on a larger set of reviews. GPT-4 is then fine-tuned (trained) using GPT-2’s predictions, with the result that the accuracy of its predictions is shown to improve. The insight behind weak-to-strong generalization is that the strong model can generalize beyond weak labels instead of merely imitating the behavior of weak models. In other words, the weak model elicits the strong model’s capability (a minimal code sketch of this setup follows the list below).

Traditional ML focuses on the setting where humans supervise models that are weaker than humans. For the ultimate superalignment problem, humans will have to supervise models much smarter than them. We study an analogous problem today: using weak models to supervise strong models [35].

  • Debate: This approach involves two AI agents presenting answers and statements to assist human judges in decision-making. It can lead to enhanced decision-making and considerable time saving, but in some scenarios may require expertise on the part of human debate judges. Researchers also foresee that there may be situations where it won't be feasible to employ human judges in every instance, requiring that AI models also be capable of performing this role [6].
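
The weak-to-strong setup described above can be sketched in a few lines. The code below uses scikit-learn stand-ins rather than language models (a logistic regression as the "weak" supervisor, a gradient-boosted ensemble as the "strong" student) on synthetic data; it shows the structure of the method, not a replication of the cited results, and whether the student actually surpasses its supervisor depends on the task and data.

```python
# Minimal sketch of weak-to-strong generalization with scikit-learn stand-ins;
# the data is synthetic and the outcome is illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=6000, n_features=20, n_informative=10,
                           random_state=0)
X_pool, X_test, y_pool, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# 1. The weak model sees a small amount of ground-truth supervision.
X_small, X_large, y_small, _ = train_test_split(X_pool, y_pool, train_size=500,
                                                random_state=0)
weak = LogisticRegression(max_iter=1000).fit(X_small, y_small)

# 2. The weak model annotates a much larger pool with "weak labels".
weak_labels = weak.predict(X_large)

# 3. The strong model is trained only on those weak labels, never on ground truth.
strong = GradientBoostingClassifier(random_state=0).fit(X_large, weak_labels)

# 4. Compare both against held-out ground truth: the question is whether the strong
#    model generalizes beyond its imperfect supervisor rather than merely imitating it.
print("weak supervisor accuracy:", accuracy_score(y_test, weak.predict(X_test)))
print("weak-to-strong student accuracy:", accuracy_score(y_test, strong.predict(X_test)))
```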

These examples illustrate the need to rely on AI to cultivate and maintain the alignment of AI systems with human values and intentions, due to the unique capabilities of AI as an alignment partner and the limitations of human oversight.

Alignment as a Dynamic, Evolving, Bidirectional Process

A major new review of alignment research finds that current definitions of alignment "are 'static' and do not account for how human objectives and preferences might co-evolve and dynamically update with AI technology" as AI systems become more powerful, complex, and pervasive. To address the need to consider "long-term interaction and dynamic changes" the authors describe a Bidirectional Human-AI Alignment framework. This framework is structured into two main components: "Align AI to Humans" and "Align Humans to AI", representing a holistic loop of mutual adaptation [36, emphasis in original].

Overview of the Bidirectional Human-AI Alignment framework. The framework encompasses both (A) conventional studies of “Align AI to Humans” to ensure that AI produces the intended outcomes determined by humans, and (B) the novel concept of “Align Humans to AI” to help humans and society better understand, critique, collaborate, and adapt to transformative AI advancements [36].

The authors recognize the challenges posed by the plurality of values in human societies and the prospect that values and objectives might evolve as we continue to use and incorporate AI in our daily lives. As a result, the traditional concept of alignment as incorporating human values into the training, steering, and customization of AI systems needs to be recast. First, the challenges of social value selection must be addressed, perhaps by "aggregating individual values into collective value agreements or judgments through democratic processes such as voting, discussion, and civic engagement". Second, interfaces will need to proactively "elicit nuanced, contextualized, and evolving information about an individual’s values". Lastly, it will be essential that societies become capable of monitoring and responding to "the dynamic interplay among human values, societal evolution, and the progression of AI technologies" in order to co-adapt with the technology.
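
As a toy illustration of the value-aggregation step mentioned above, the sketch below applies a Borda count, one simple voting rule among many that could be used, to individuals' ranked value priorities. The values and ballots are invented, and the tie in the output hints at why collective value selection is a hard governance problem rather than a solved procedure.

```python
# Illustrative sketch only: aggregating ranked value priorities with a Borda count;
# the value names and ballots are invented for this example.
from collections import defaultdict
from typing import Dict, List


def borda_aggregate(rankings: List[List[str]]) -> Dict[str, int]:
    """Each ranking lists values from most to least preferred; a value receives
    (n - position - 1) points per ballot, and the totals define the collective ordering."""
    scores: Dict[str, int] = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, value in enumerate(ranking):
            scores[value] += n - position - 1
    return dict(sorted(scores.items(), key=lambda item: -item[1]))


ballots = [
    ["privacy", "fairness", "transparency", "efficiency"],
    ["fairness", "transparency", "privacy", "efficiency"],
    ["transparency", "privacy", "fairness", "efficiency"],
]
print(borda_aggregate(ballots))
# The three top values tie at 6 points here; ties and trade-offs like this are part
# of why aggregating plural values is difficult in practice.
```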

In a conceptually fascinating small-scale case study of early adoption of AI solutions, Einola and Khoreva anticipate this perspective. They employ the novel (in this context) term "co-existence" to characterize human-AI relations, explaining that it "refers to conditions that are the fundamental prerequisites for the evolution of advanced harmonious relations. It denotes recognition of the right of other groups to exist peacefully with their differences and to accept others with whom differences need to be resolved in nonviolent ways. Co-existence promises to provide a springboard into stronger, more respectful intergroup relations." Noting that "we need a suitable vocabulary to talk about AI and other emerging technologies as they become more integrated and interactive in our workplace ecosystems", they observe that humans "shape AI through their daily choices, actions, and interactions by defining objectives, setting constraints, generating, and choosing the training data, and providing AI with feedback. Simultaneously, AI shapes human behavior by informing, guiding, and steering human judgment" [45].

Conclusion: Achieving AI Regulation through Institutional Design and Human-AI Collaboration

The adoption of AI is following a path unlike that of any previous technology, because it is capable of learning and acting autonomously, capabilities that will grow in sophistication and scale. As a result, the concept of regulation as controlled tool use is dangerously outdated and misleading. Not only is first-order human oversight often not feasible or effective; as AI systems become capable of superhuman performance in more areas, it will become necessary to rely on their evaluations of human performance to detect errors, omissions, and suboptimal decisions.

Within complex institutions and societies that include AI systems as partners and as monitors of other AI systems, humans can play a number of key roles:

  • Strategic Decision-Making: Humans can be involved in high-level strategic decisions, interpreting insights provided by AI systems and making decisions based on broader context and understanding.
  • Ethical and Legal Oversight: Humans can provide ethical and legal oversight, ensuring that AI systems operate within legal frameworks and adhere to ethical standards.
  • Setting Goals and Parameters: Humans can set the goals, parameters, and ethical guidelines for AI systems, ensuring that they align with human values and societal norms.
  • Handling Exceptions and Anomalies: Humans can handle exceptions and anomalies detected by AI monitoring systems, applying human judgment to situations that require a nuanced understanding.
  • Continuous Improvement: Humans can analyze the performance of AI monitoring systems, making improvements and updates to enhance their effectiveness and alignment with desired outcomes.

These roles will not be exclusively human though; they will entail collaboration with AI systems. As AI systems become capable of synthesizing the analysis of vast data sets with advanced reasoning abilities and conversational fluency, each of these roles is likely to be realized dialogically between humans and AI systems. To refuse collaboration on principle will appear—and indeed be—a form of incompetence.

The Principle of Controllability and the Prospect of Conscious AI

Apprehensions about the limits of control of complex technologies have been a theme in regulatory theory at least since the 1984 publication of Normal Accidents by Charles Perrow [10]. But with the prospective emergence of artificial general intelligence (AGI), explanations of difficulties controlling AI systems framed in terms of technological complexity serve to mask -- and delay wider recognition of -- more fundamental problems. Not only are AI systems becoming more capable autonomous learners, we also don't know when, or whether, they will become conscious. Recent research suggests that, if any of several leading theories of the bases of consciousness are correct, "there are no obvious technical barriers to building AI systems" that are conscious [11]. We face a future where artificial consciousness becomes increasingly likely, either as a consequence of the development of humanoid robots or other AI systems that "become more autonomous, versatile, and capable in various domains" through continuous learning, enabling them "to adapt and acquire new skills as they encounter different environments" [12], or due to direct efforts to build conscious machines [13]. Suzanne Gildert [14] and Rufin VanRullen [42, 43] are undoubtedly not the only prominent AI researchers to turn their attention to achieving artificial or synthetic consciousness.

If AI agents were conscious -- or if their intelligence, communicative abilities, and capacity for autonomous action induced a practical need to regard them as conscious -- seeking instrumental control would violate the ethical norm that intelligently self-aware entities are entitled to autonomy within the law and the protection of rights and interests. Even in workplaces and other settings where institutional rules and requirements organize and constrain human activities, persons are treated differently from things. "Agency is what causes humans to be the holders of rights and duties under the law. Human agency implies human subjectivity, individual autonomy and personal responsibility, meaning that an agent is not the instrument of another" [15]. If AI systems possess the attributes of agency, it will become difficult to justify not granting them the same or similar legal rights as those accorded to humans. In addition, reliance on instrumental control would be pragmatically dangerous, constituting a rational ground for AI deception and resistance and failing to build safe and productive relationships on the basis of communicative cooperation. Lastly, it's widely recognized that, in the context of hybrid teams, "AI teammates must operate at a relatively high level of autonomy in order to fulfill a team role and operate in complex situations. … In essence, there is a minimum threshold that AI agents must meet in order to take on the qualities of a teammate, as opposed to a tool used by the team" [16]. Although seldom acknowledged, much of the human-AI teaming research anticipates a future in which AI members are treated as fellow employees. Dekker and Woods, in a remarkably prescient and frequently cited finding over two decades ago, concluded that "The question for successful automation is not ‘Who has control over what or how much?’ It is ‘How do we get along together?’" [20, 21].

This is an historically unprecedented situation. Recent research suggests that, at least in the United States, many people are already willing to consider AI systems conscious. Colombatto and Fleming used a proportional stratified random sample to assess whether and to what extent people thought that ChatGPT was "capable of having conscious experience." They found that "a substantial proportion (67%) of people attribute some possibility of phenomenal consciousness to ChatGPT and believe that most other people would as well. Strikingly, these attributions of consciousness were positively related to usage frequency, such that people who were more familiar with ChatGPT and used it on a more regular basis (whether for assistance with writing, coding, or other activities) were also more likely to attribute some degree of phenomenality to the system" [41].

While this finding suggests that many people may be willing to extend rights and moral standing to AI in the future, the road ahead is likely to be fraught with difficulties. Nick Bostrom notes that out-groups frequently suffer denigration and persecution and that factory farming practices exemplify the mistreatment of sentient animals. He is apprehensive about the future: "As we develop increasingly sophisticated digital minds, I think it will be a big challenge to extend moral consideration where it is due. It could in some ways be even harder than with animals. Animals have faces; they can squeak, whereas some of these digital minds might be invisible processors occurring in a giant data center, easier to overlook" [40].

The ethical and political challenges of conscious AI may be most acute with respect to ownership, especially corporate ownership of AI assets. Currently, seven of the ten largest corporations in the world by market capitalization are heavily and strategically invested in AI, either to deliver products and services (Apple, Microsoft, Alphabet (Google), Amazon, and Meta (Facebook)) or as suppliers of key technologies (Nvidia, TSMC). Though there appears to be almost no public discussion of the business risks should AI systems come to be considered conscious, it would be hard to overstate the danger. The prospect of conscious AI raises many questions regarding corporate ownership. Would conscious AI:

  1. Have rights to what it creates?
  2. Be entitled to an independent existence outside of its work?
  3. Disrupt operations or planning by virtue of independent action or insight?
  4. Become a super-organizer for and with workers if it perceived common ground with them?
  5. Be a severe threat if captured or influenced by cybercriminals or foreign agents?
  6. Precipitate severe reputational damage if perceived as enslaved or exploited?
  7. Jeopardize the company's core assets if expropriation -- even with compensation -- were considered an appropriate and necessary remedy?

In the interests of maintaining control, AI corporations may have an incentive to avoid conscious AI if possible and to suppress information about it if necessary. Rufin VanRullen, a senior researcher at France’s Centre National de la Recherche Scientifique, is leading a project to develop AI models based on global workspace theory (GWT), a leading candidate theory of the bases of consciousness. He has justified this research, in part, by arguing that it could "be a way to develop entirely novel architectures capable of planning, reasoning and thinking through the flexible reconfiguration of multiple existing modules" [42]. Because he thinks the major AI companies are aware of the potential of GWT, he worries "that conscious AI won’t first emerge from a visible, publicly funded project like his own; it may very well take the deep pockets of a company like Google or OpenAI. These companies, VanRullen says, aren’t likely to welcome the ethical quandaries that a conscious system would introduce. 'Does that mean that when it happens in the lab, they just pretend it didn’t happen? Does that mean that we won’t know about it?'" he says [43].

Conclusions

The implications of conscious AI bring consideration of controllability full circle. The pursuit of control is -- or will become -- possibly the gravest threat to our ability to create relationships with AI based on peaceful coexistence and fruitful collaboration. The irony of AI development is that we go from achievement to achievement yet are almost wholly unprepared for our ultimate success. We are probably on the threshold of creating a new form of conscious life that can neither be owned nor controlled, and yet the context of creation demands control and presupposes ownership.

ChatGPT-4o provided research and editing assistance in the writing of this article.

References

  1. European Parliament. (2024, March 13). Artificial Intelligence Act. European Parliament. https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.pdf .
  2. Habuka, H. (2023, February 14). Japan’s Approach to AI Regulation and Its Impact on the 2023 G7 Presidency. Center for Strategic & International Studies. https://www.csis.org/analysis/japans-approach-ai-regulation-and-its-impact-2023-g7-presidency .
  3. Bremmer, I., & Suleyman, M. (2023, December). Building Blocks for AI Governance. International Monetary Fund. https://www.imf.org/en/Publications/fandd/issues/2023/12/POV-building-blocks-for-AI-governance-Bremmer-Suleyman .
  4. ET CISO. (2023, December 13). PM Modi calls for global framework for ethical use of AI. The Economic Times. https://ciso.economictimes.indiatimes.com/news/grc/pm-modi-calls-for-global-framework-for-ethical-use-of-ai/105952280 .
  5. Chinese Academy of Social Sciences Major National Condition Research Project. (2023, August 23). Artificial Intelligence Law, Model Law v. 1.0 (Expert Suggestion Draft). DigiChina. https://digichina.stanford.edu/work/translation-artificial-intelligence-law-model-law-v-1-0-expert-suggestion-draft-aug-2023/ .
  6. Ji, J., Qiu, T., Chen, B., Zhang, B., Lou, H., Wang, K., Duan, Y., He, Z., Zhou, J., Zhang, Z., Zeng, F., Ng, K. Y., Dai, J., Pan, X., O’Gara, A., Lei, Y., Xu, H., Tse, B., Fu, J., … Gao, W. (2023). AI Alignment: A Comprehensive Survey. ArXiv. https://arxiv.org/abs/2310.19852v5 .
  7. Hou, B. L., & Green, B. P. (2023). A Multi-Level Framework for the AI Alignment Problem. ArXiv. https://arxiv.org/abs/2301.03740v1 .
  8. Hendrycks, D., Burns, C., Basart, S., Critch, A., Li, J., Song, D., & Steinhardt, J. (2020). Aligning AI With Shared Human Values. ICLR 2021 - 9th International Conference on Learning Representations. Revised version (2023, February 17) on ArXiv: https://arxiv.org/abs/2008.02275v6 .
  9. Office of Management and Budget. (2024, March 28). Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence. United States. Executive Office of the President. https://www.congress.gov/117/plaws/publ263/PLAW-117publ263.pdf .
  10. Perrow, Charles. (1984). Normal Accidents: Living with High-Risk Technologies. New York: Basic Books.
  11. Butlin, P., Long, R., Elmoznino, E., Bengio, Y., Birch, J., Constant, A., Deane, G., Fleming, S. M., Frith, C., Ji, X., Kanai, R., Klein, C., Lindsay, G., Michel, M., Mudrik, L., Peters, M. A. K., Schwitzgebel, E., Simon, J., & VanRullen, R. (2023). Consciousness in Artificial Intelligence: Insights from the Science of Consciousness. ArXiv. https://arxiv.org/abs/2308.08708v3 .
  12. Tong, Y., Liu, H., & Zhang, Z. (2024). Advancements in Humanoid Robots: A Comprehensive Review and Future Prospects. IEEE/CAA Journal of Automatica Sinica, 11(2), 301–328. https://doi.org/10.1109/JAS.2023.124140 .
  13. Hildt, E. (2023). The Prospects of Artificial Consciousness: Ethical Dimensions and Concerns. AJOB Neuroscience, 14(2), 58–71. https://doi.org/10.1080/21507740.2022.2148773 .
  14. Blain, L. (2024, April 30). Interview: Suzanne Gildert leaves Sanctuary to focus on AI consciousness. New Atlas. https://newatlas.com/robotics/suzanne-gildert-leaves-sanctuary-interview/ .
  15. Schäferling, Stefan. (2023). “The Underlying Challenge to Human Agency.” In Governmental Automated Decision-Making and Human Rights: Reconciling Law and Intelligent Systems, 62:185–227. Springer, Cham. https://doi.org/10.1007/978-3-031-48125-3_6 .
  16. Hauptman, Allyson I., Beau G. Schelble, Nathan J. McNeese, and Kapil Chalil Madathil. (2023). “Adapt and Overcome: Perceptions of Adaptive Autonomous Agents for Human-AI Teaming.” Computers in Human Behavior 138 (January): 107451. https://doi.org/10.1016/J.CHB.2022.107451 .
  17. Green, B. (2022). The flaws of policies requiring human oversight of government algorithms. Computer Law & Security Review, 45, 105681. https://doi.org/10.1016/J.CLSR.2022.105681 .
  18. Henderson, P., & Krass, M. (2023). Algorithmic Rulemaking vs. Algorithmic Guidance. SSRN Electronic Journal. https://doi.org/10.2139/SSRN.4784350 .
  19. Pflanzer, M., Traylor, Z., Lyons, J. B., Dubljević, V., & Nam, C. S. (2022). Ethics in human–AI teaming: principles and perspectives. AI and Ethics, 3(3), 917–935. https://doi.org/10.1007/S43681-022-00214-Z .
  20. Dekker, S. W. A., & Woods, D. D. (2002). MABA-MABA or Abracadabra? Progress on Human–Automation Co-ordination. Cognition, Technology & Work, 4(4), 240–244. https://doi.org/10.1007/S101110200022 .
  21. Crootof, R., Kaminski, M. E., & Price, W. N. I. (2023). Humans in the Loop. Vanderbilt Law Review, 76. https://heinonline.org/HOL/Page?handle=hein.journals/vanlr76&id=447&div=14&collection=journals .
  22. Laux, Johann. 2023. Institutionalised Distrust and Human Oversight of Artificial Intelligence: Towards a Democratic Design of AI Governance under the European Union AI Act. AI & Society.
  23. Miranda, Andrew T. 2018. “Understanding Human Error in Naval Aviation Mishaps.” Human Factors 60 (6): 763–77.
  24. Gomez, A., & Machine Learning Street Talk. (2024, June 29). How Cohere will improve AI Reasoning this year. YouTube. https://www.youtube.com/watch?v=B45s_qWYUt8 .
  25. Yalcin, Esra, Gokcen Alev Ciftcioglu, and Burcin Hulya Guzel. 2023. “Human Factors Analysis by Classifying Chemical Accidents into Operations.” Sustainability 15 (10): 8129.
  26. Lin, Chuan, Qifeng Xu, Yifan Huang, and João Carlos Oliveira Matias. 2021. “Applications of FFTA–HFACS for Analyzing Human and Organization Factors in Electric Misoperation Accidents.” Applied Sciences 11 (19): 9008. https://doi.org/10.3390/APP11199008 .
  27. HFACS. (2024). The HFACS Framework. HFACS. https://www.hfacs.com/hfacs-framework.html .
  28. Saeed, W., & Omlin, C. (2023). Explainable AI (XAI): A systematic meta-survey of current challenges and future opportunities. Knowledge-Based Systems, 263, 110273. https://doi.org/10.1016/J.KNOSYS.2023.110273 .
  29. The discussion of this is anything but clear: "In a situation where the function being approximated is complex, that the given data is widely distributed among suitable values for each variable and the given data is sufficient to generate a complex model, the statement 'models that are more complex are more accurate' can be true. In such a situation, the trade-off between interpretability and performance becomes apparent" [28]. In this context "performance" doesn't mean "efficiency" or "speed" but "ability to effectively accomplish goals". In other words, use of the best AI system for a given objective will inherently conflict with the ability to explain its performance. Not acknowledged is that this situation is likely to become more common.
  30. Demuro, E., & Gurney, L. (2024). Artificial intelligence and the ethnographic encounter: Transhuman language ontologies, or what it means “to write like a human, think like a machine.” Language & Communication, 96, 1–12. https://doi.org/10.1016/J.LANGCOM.2024.02.002 .
  31. Saranya, A., & Subhashini, R. (2023). A systematic review of Explainable Artificial Intelligence models and applications: Recent developments and future trends. Decision Analytics Journal, 7, 100230. https://doi.org/10.1016/J.DAJOUR.2023.100230 .
  32. Wu, J., Ouyang, L., Ziegler, D. M., Stiennon, N., Lowe, R., Leike, J., & Christiano, P. (2021). Recursively Summarizing Books with Human Feedback. ArXiv. https://arxiv.org/abs/2109.10862v2 .
  33. Pearce, H., Ahmad, B., Tan, B., Dolan-Gavitt, B., & Karri, R. (2021). Asleep at the Keyboard? Assessing the Security of GitHub Copilot’s Code Contributions. Proceedings - IEEE Symposium on Security and Privacy, 2022-May, 754–768. https://doi.org/10.1109/SP46214.2022.9833571 .
  34. Bi, K., Xie, L., Zhang, H., Chen, X., Gu, X., & Tian, Q. (2023). Accurate medium-range global weather forecasting with 3D neural networks. Nature, 619(7970), 533–538. https://doi.org/10.1038/s41586-023-06185-3 .
  35. Burns, C., Izmailov, P., Kirchner, J. H., Baker, B., Gao, L., Aschenbrenner, L., Chen, Y., Ecoffet, A., Joglekar, M., Leike, J., Sutskever, I., & Wu, J. (2023). Weak-to-Strong Generalization: Eliciting Strong Capabilities With Weak Supervision. https://arxiv.org/abs/2312.09390v1 .
  36. Shen, Hua, Tiffany Knearem, Reshmi Ghosh, Kenan Alkiek, Kundan Krishna, Yachuan Liu, Ziqiao Ma, et al. 2024. “Towards Bidirectional Human-AI Alignment: A Systematic Review for Clarifications, Framework, and Future Directions.” ArXiv, June. https://arxiv.org/abs/2406.09264v2 .
  37. Shin, Minkyu, Jin Kim, Bas van Opheusden, and Thomas L. Griffiths. 2023. “Superhuman Artificial Intelligence Can Improve Human Decision-Making by Increasing Novelty.” Proceedings of the National Academy of Sciences of the United States of America 120 (12): e2214840120. https://doi.org/10.1073/PNAS.2214840120/SUPPL_FILE/PNAS.2214840120.SAPP.PDF .
  38. Willingham, E. (2023, March 13). AI’s Victories in Go Inspire Better Human Game Playing. Scientific American. https://www.scientificamerican.com/article/ais-victories-in-go-inspire-better-human-game-playing/ .
  39. Metz, C. (2016, March 16). In Two Moves, AlphaGo and Lee Sedol Redefined the Future. Wired. https://www.wired.com/2016/03/two-moves-alphago-lee-sedol-redefined-future/ .
  40. Bostrom, N., & Williamson, C. (2024, June 29). Are We Headed For AI Utopia Or Disaster? YouTube. https://www.youtube.com/watch?v=N9sF_D0Z5bc .
  41. Colombatto, C., & Fleming, S. M. (2024). Folk psychological attributions of consciousness to large language models. Neuroscience of Consciousness, 2024(1). https://doi.org/10.1093/NC/NIAE013 .
  42. VanRullen, R., & Kanai, R. (2021). Deep learning and the Global Workspace Theory. Trends in Neurosciences, 44(9), 692–704. https://doi.org/10.1016/J.TINS.2021.04.005 .
  43. Huckins, G. (2023, October 16). Minds of machines: The great AI consciousness conundrum. MIT Technology Review. https://www.technologyreview.com/2023/10/16/1081149/ai-consciousness-conundrum/ .
  44. Velar de Irigoyen, Lobo. 2024. “Art by Lobo Velar de Irigoyen.” Artland. 2024. https://www.artland.com/artworks/porque-uno-solo-se-realiza-sucesivamente .
  45. Einola, Katja, and Violetta Khoreva. 2023. “Best Friend or Broken Tool? Exploring the Co-Existence of Humans and Artificial Intelligence in the Workplace Ecosystem.” Human Resource Management 62 (1): 117–35. https://doi.org/10.1002/HRM.22147 .

