The Blueprint for AI Governance and Navigating Foreseeable Harms
Jean Ng
AI Changemaker | AI Influencer Creator | Book Author | Promoting Inclusive RAI and Sustainable Growth | AI Course Facilitator
It is easy to assume that the government is oblivious to advances in artificial intelligence (AI) while innovation races ahead in places like Silicon Valley. That notion, however, is not entirely accurate.
In 2022, the White House released a significant document titled "Blueprint for an AI Bill of Rights." The word "blueprint" does more work in that title than "rights": the document is not enforceable and does not provide legal protection for the rights it outlines. Nevertheless, its release marked the government's recognition that creating enforceable rules around AI is an imperative matter, and it offers a starting point for contemplating the AI-driven society we envision.
Reading the blueprint, one cannot help but notice that if it were truly enforceable, it would transform how AI systems are built, deployed, and operated. Currently, none of the major systems align with the principles it outlines, and it remains uncertain whether they could comply even if their developers wanted to. The document therefore sits in a peculiar position: radical in what it would demand if implemented, yet open to questions about its efficacy given its lack of enforcement power.
The blueprint's development was led by Alondra Nelson, a renowned scholar of science, technology, and society who served as deputy director and later acting director of the Biden administration's Office of Science and Technology Policy (OSTP). If anyone within the government has given sustained thought to AI and tried to forge a consensus across the vast federal machinery, it is Nelson. Although she has since left the administration, she continues to contribute her expertise as a distinguished senior fellow at the Center for American Progress.
In an interview, Nelson discussed the blueprint's purpose, its limitations, and her current perspective on AI. While she emphasized that she no longer speaks for the government, she shed light on how the government perceives AI and the challenges it aims to address. The foundational statute of the White House Office of Science and Technology Policy, dating to the 1970s, centers on spurring innovation while mitigating foreseeable harm; that charge captures the essence of what science and technology policy strives to achieve and how the government regards its role. Moreover, the current administration holds that advances in science and technology should ultimately improve people's lives, anchoring innovation to a mission and a value-based purpose.
Navigating the concept of foreseeable harm in relation to AI reveals two divergent schools of thought. One school highlights the many harms that are already foreseeable, such as bias, opacity, and fallibility. The other emphasizes that AI is an unprecedentedly versatile technology whose potential harms are difficult to predict or interpret, creating a landscape of unknown unknowns that poses significant challenges for regulation. Nelson avoids aligning strictly with either school. As a scholar and researcher, she emphasizes gathering more information and forming an empirical understanding before planting a flag in either camp. Most likely, foreseeable and unforeseeable harms coexist across different use cases, and acknowledging that spectrum is crucial.
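To make the first school's point concrete, here is a minimal, purely illustrative Python sketch of how one narrowly foreseeable harm, unequal selection rates across groups, can be measured empirically. The function names, toy data, and the choice of demographic parity as the metric are assumptions made here for illustration; they are not drawn from the blueprint or from Nelson's remarks.

```python
# Illustrative sketch only: measuring disparate selection rates across groups.
# Group labels and toy data are hypothetical, not from the blueprint.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Return the fraction of positive predictions for each group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(predictions, groups):
    """Largest difference in selection rates between any two groups."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Toy data: 1 = selected by the automated system, 0 = rejected
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(selection_rates(preds, groups))         # {'A': 0.75, 'B': 0.25}
print(demographic_parity_gap(preds, groups))  # 0.5
```

A single metric like this cannot capture unknown unknowns, which is exactly the second school's concern, but it shows that some harms are measurable today.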
The need for safeguards against the misuse and harms of AI and other emerging technologies has become increasingly critical. Recognizing this urgency, Nelson and Professor Suresh Venkatasubramanian, along with a team of experts, set out to lay the groundwork for an AI Bill of Rights: an initiative intended to protect individuals and uphold their civil rights, privacy, and access to resources and services in an AI-powered world. The effort was set in motion by an op-ed the two co-authored, which argued for shifting the focus from fostering innovation alone to safeguarding individuals and their rights, a pivotal perspective in AI governance.
The blueprint's development spanned a series of public convenings, listening sessions, and information-gathering initiatives, drawing insights from individuals, organizations, and experts across sectors. After extensive interagency collaboration, numerous revisions, and careful review of feedback, the final document was released by the White House Office of Science and Technology Policy (OSTP) in October 2022.
The blueprint's core principles outline guiding tenets for the design, use, and deployment of automated systems. While they align with existing sets of global guidelines, they offer some intriguing departures, and each reflects a natural, intuitive expectation of what individuals should rightfully be able to demand in an AI-driven world. The five core principles are:
1. Safe and Effective Systems
2. Algorithmic Discrimination Protections
3. Data Privacy
4. Notice and Explanation
5. Human Alternatives, Consideration, and Fallback
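As a rough illustration of how a team might operationalize these principles internally, here is a minimal Python sketch that represents them as a review checklist for a single automated system. The class, field names, and example system are hypothetical assumptions for this sketch, not anything specified by the blueprint.

```python
# Hypothetical sketch: tracking which blueprint principles a given automated
# system has been reviewed against. Structure and names are illustrative only.
from dataclasses import dataclass, field

PRINCIPLES = [
    "Safe and Effective Systems",
    "Algorithmic Discrimination Protections",
    "Data Privacy",
    "Notice and Explanation",
    "Human Alternatives, Consideration, and Fallback",
]

@dataclass
class SystemReview:
    """Records reviewer notes per principle for one automated system."""
    system_name: str
    findings: dict = field(default_factory=dict)  # principle -> notes

    def record(self, principle: str, notes: str) -> None:
        if principle not in PRINCIPLES:
            raise ValueError(f"Unknown principle: {principle}")
        self.findings[principle] = notes

    def unreviewed(self) -> list:
        """Principles with no recorded assessment yet."""
        return [p for p in PRINCIPLES if p not in self.findings]

# Example usage with a hypothetical system
review = SystemReview("resume-screening-model")
review.record("Data Privacy", "Training data audited; no PII retained.")
print(review.unreviewed())  # the four principles still awaiting assessment
```

The point of the sketch is simply that the principles are concrete enough to be tracked system by system, even though the blueprint itself does not mandate any particular tooling.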
The blueprint for an AI Bill of Rights represents a significant milestone in our collective journey toward responsible and ethical technology implementation. Its development involved diverse perspectives, expert insights, and a commitment to protect individual rights, liberties, and privacy in an AI-powered world. By prioritizing the prevention of harms and ensuring equity and accountability, this blueprint lays the foundation for a future where AI technologies coexist with respect for human values and societal well-being. As we move forward, it is our collective responsibility to engage in ongoing discussions, refine the blueprint, and embrace these principles to shape a better future for all.
Here are some questions for AI and ML specialists:
The content above is a quick summary of a YouTube video.
Credit to: Yale ISP
Co-hosted by the Yale–Wikimedia Initiative on Intermediaries and Information (WIII) & the Georgetown Institute for Technology Law & Policy
December 2, 2022
Guest Speaker: Suresh Venkatasubramanian, Professor of Computer Science and Data Science, Brown University
Moderator: Mehtab Khan, Program Director, WIII, Information Society Project, Yale Law School
Panelists:
Anupam Chander, Scott K. Ginsburg Professor of Law and Technology at Georgetown University
Nikolas Guggenberger, Assistant Professor of Law, University of Houston Law Center
Artur Pericles L. Monteiro, Wikimedia Fellow, Information Society Project, Yale Law School
Join the AI Leaders Alliance LinkedIn group via the link.
The World Blocktech Forum 2023 is a must-attend event of the year.
By using the promo code [WBF003], you can enjoy a 5% discount on your World Blocktech Forum 2023 tickets. This presents a remarkable opportunity to be part of an extraordinary gathering of global blockchain experts, entrepreneurs, and innovators. For detailed information about the event schedule, speakers, and other important particulars, please visit our official website at www.blocktech.world.
We look forward to meeting you at the World Blocktech Forum 2023. You can also follow our LinkedIn page, World Blocktech Forum.
#WBF2023