Governance and Safety of Generative AI Systems

The World Economic Forum brought together stakeholders with a diversity of perspectives for a workshop on the development and deployment of responsible generative AI systems. See the Presidio Recommendations on Responsible Generative AI below, also available here: https://lnkd.in/eQ3XHMCW

At the workshop, I co-led a discussion group and panel presentation on the governance of generative AI systems, together with Jamie Berryhill from the OECD - OCDE.

One of the outputs from the workshop was a call to establish a global AI governance initiative. Others, such as Prof. Stuart Russell, called for the creation of a global agency (backed by national ones) to regulate the deployment of generative AI solutions, along the lines of the International Civil Aviation Organization (ICAO) and the Federal Aviation Administration (FAA), for instance.

Another output is the call to ensure that foundation models, and the content used to train them and generated by them, are traceable. I bring particular attention to this point because it is exactly what my colleagues and I are working on at infinitio AI, the first responsible multimodal (text, images, sound, video...) generative AI platform built and backed by a blockchain. All the training data of student models are registered on a blockchain, which enables recognition of ownership, paid access and acknowledgements, protection of artists, and more. It also offers a solid answer to the generative AI IP conundrum: the USPTO cannot, understandably, patent every creation generated by AI systems, but Distributed Ledger Technologies (DLTs) can be leveraged for proof of ownership and monetization.
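
To make the idea concrete, here is a minimal, purely illustrative sketch of how training-data provenance can be anchored in a hash-linked, append-only ledger. It is not infinitio AI's actual implementation; all class and field names are hypothetical.

```python
import hashlib
import json
import time
from dataclasses import dataclass, field


def sha256(data: bytes) -> str:
    """Hex digest used both for content fingerprints and for block linking."""
    return hashlib.sha256(data).hexdigest()


@dataclass
class ProvenanceBlock:
    """One ledger entry: who contributed which asset, linked to the previous entry."""
    index: int
    timestamp: float
    asset_hash: str      # fingerprint of the training asset (image, audio, text...)
    owner: str           # identifier of the rights holder
    licence: str         # terms under which the asset may be used for training
    prev_hash: str       # hash of the previous block, making the chain tamper-evident
    block_hash: str = field(init=False)

    def compute_hash(self) -> str:
        payload = json.dumps(
            [self.index, self.timestamp, self.asset_hash,
             self.owner, self.licence, self.prev_hash]
        ).encode()
        return sha256(payload)

    def __post_init__(self):
        self.block_hash = self.compute_hash()


class ProvenanceLedger:
    """Append-only chain of provenance records for training data."""

    def __init__(self):
        self.blocks: list[ProvenanceBlock] = []

    def register_asset(self, raw_asset: bytes, owner: str, licence: str) -> ProvenanceBlock:
        prev_hash = self.blocks[-1].block_hash if self.blocks else "0" * 64
        block = ProvenanceBlock(
            index=len(self.blocks),
            timestamp=time.time(),
            asset_hash=sha256(raw_asset),
            owner=owner,
            licence=licence,
            prev_hash=prev_hash,
        )
        self.blocks.append(block)
        return block

    def verify(self) -> bool:
        """Recompute every block hash and link; any edit to an earlier record breaks the chain."""
        prev = "0" * 64
        for block in self.blocks:
            if block.prev_hash != prev or block.block_hash != block.compute_hash():
                return False
            prev = block.block_hash
        return True


# Example: register an artist's image before it enters a training set.
ledger = ProvenanceLedger()
ledger.register_asset(b"<image bytes>", owner="artist:alice", licence="paid-training-licence")
print(ledger.verify())  # True while the chain is untampered
```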

It only makes sense to have 1) truly global coordination to make sure these systems are deployed ethically AND safely (since all the large platforms are unequivocally working toward Artificial General Intelligence (#AGI)), and 2) agreement on what international safeguards and solutions could be put in place as we release an intelligence capable of influencing and making decisions at scale.

Following the last G7 meeting, which set up the Hiroshima AI Process and asked OECD.AI and GPAI to propose policy frameworks and solutions respectively, both organizations are working diligently toward this.

If you want to hear more of my thoughts on the topic, Amir Banifatemi at AI Commons and I spoke with Robin Pomeroy on the latest WEF podcast #RadioDavos here: https://www.weforum.org/podcasts/radio-davos/episodes/generative-ai-episode-4-governance

Spotify: https://open.spotify.com/episode/2bTpufTRkW7vuylavhjy06

Apple: https://podcasts.apple.com/ie/podcast/ai-will-either-compete-with-us-or-augment-us-so-how-do/id1504682164?i=1000617135524


The Presidio Recommendations on Responsible Generative AI

Responsible Development and Release of Generative AI

This section critically assesses the necessity to protect our society from unforeseen outcomes induced by the swiftly developing generative AI systems, and accordingly advocates for responsible strategies concerning their development and deployment. These recommendations are intended for a broad spectrum of stakeholders, ranging from AI developers to policy-makers and users. The objective is to foster accountable and inclusive processes for AI development and deployment, thereby enhancing trust and transparency as generative AI systems continue to proliferate.

1. Establish precise and shared terminology

All stakeholders are called upon to use precise terminology when discussing the design, development, evaluation and measurement of generative AI models’ capabilities, limitations and issues. It is the responsibility of experts to define and standardize this language. As soon as a consensus is reached, consistent adoption of this terminology by all stakeholders is essential. This approach will boost clarity and promote effective communication, leading to a shared understanding among different parties. Ultimately, it will facilitate the establishment of strong standards, guidelines and regulations for a range of generative AI applications.

2. Build public awareness of AI capabilities and their limitations

Public and private stakeholders should prioritize the task of enhancing public understanding. This includes making the terminology related to generative AI models understandable to the general public. Additionally, stakeholders should inform users about the probabilistic (meaning their outputs are not deterministic but based on probability) and stochastic (implying their operation involves a degree of random behavior) nature of generative AI models, while setting accurate expectations for their performance.
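
As a concrete illustration of that probabilistic behaviour, the toy sketch below samples a "next token" from a made-up distribution using a temperature parameter. The numbers and tokens are invented for illustration only and do not come from any real model.

```python
import math
import random

# Toy next-token distribution: pretend logits for a handful of candidate tokens.
# Illustrative numbers only; a real model scores its entire vocabulary.
logits = {"Paris": 4.2, "London": 2.1, "Lyon": 1.3, "a": 0.2}


def sample_next_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    """Apply softmax with temperature, then draw at random: the output is probabilistic."""
    scaled = {tok: v / temperature for tok, v in logits.items()}
    max_v = max(scaled.values())  # subtract the max for numerical stability
    weights = {tok: math.exp(v - max_v) for tok, v in scaled.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    cumulative = 0.0
    for tok, w in weights.items():
        cumulative += w
        if r <= cumulative:
            return tok
    return tok  # floating-point edge case: fall back to the last token


# Two identical calls can produce different continuations; raising the temperature
# flattens the distribution, so less likely tokens are sampled more often.
print([sample_next_token(logits, temperature=0.7) for _ in range(5)])
print([sample_next_token(logits, temperature=1.5) for _ in range(5)])
```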

3. Focus on human values and preferences

The challenge of aligning generative AI models with human values and preferences needs to be further acknowledged and addressed. Developers of AI systems should be engaged in discussions about normative values and preferences when designing AI models.

4. Encourage alignment and participation

Public and private sector stakeholders should recognize that AI systems necessitate quality feedback that is diverse and representative of the user base to be truly aligned. Policy-makers should promote the involvement of diverse stakeholders, including non-technical stakeholders, in AI research and development to ensure alignment with human values. AI developers should work to facilitate interactions and feedback from a broad range of participants to create a more inclusive and human-centric development process.

5. Uphold AI accountability with rigorous benchmarking and use case-specific testing while exploring new metrics and standards

AI developers should commit not only to holding models accountable against the highest established benchmarks, but also to finding new metrics beyond traditional ones, oriented towards other human-centric dimensions. Benchmarking should be complemented by application-specific and task-defined testing to ensure a comprehensive evaluation of generative AI models.

6. Employ diverse red teams

Red teaming, a method of critically analysing a system to identify potential weaknesses, vulnerabilities and areas for improvement, should be integral from model design to application and release. Diversity here implies incorporating members from varied genders, backgrounds, experiences and perspectives for a more comprehensive critique. The public and private sectors should implement frameworks and methodologies to facilitate thorough red teaming.

7. Adopt transparent release strategies

Producers of AI should be held accountable to release AI models responsibly, making them available to the public without compromising safety. Responsible release strategies should be initiated upstream during project ideation and product design to ensure that potential risks are identified and mitigated throughout the development process.

8. Enable user feedback

Users should be empowered with robust controls that allow them to provide real-time feedback on model outputs. Additionally, users should be given a comprehensive understanding of the limits and responsibilities associated with the generated content.

9. Embed model and system traceability

Developers and policy-makers should align on the importance of creating formal evaluation and auditing structures surrounding traceability throughout the entire AI life cycle, from data provenance to training scenarios and post-implementation.

10. Ensure content traceability

To increase transparency and accountability, companies developing AI-generated content should be responsible for tracing how content is generated and documenting its provenance. This will help users discern the difference between human-generated and AI-generated content.
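
One illustrative way to document provenance is to attach a small manifest that binds a fingerprint of the generated asset to the generator that produced it. The sketch below is a simplified, hypothetical example (conceptually similar in spirit to content-credential schemes, but not any specific standard); the field and model names are invented.

```python
import hashlib
import json
from datetime import datetime, timezone


def build_provenance_manifest(content: bytes, model_id: str, prompt_hash: str) -> dict:
    """Attach a provenance record to a generated asset.

    The manifest binds the content's fingerprint to the generator that produced it,
    so anyone holding the asset can check that it is declared as AI-generated and
    has not been altered since generation.
    """
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": {"model_id": model_id, "type": "ai-generated"},
        "prompt_sha256": prompt_hash,   # hash only, so the prompt itself stays private
        "created_at": datetime.now(timezone.utc).isoformat(),
    }


def matches(content: bytes, manifest: dict) -> bool:
    """True if the asset in hand is the one described by the manifest."""
    return hashlib.sha256(content).hexdigest() == manifest["content_sha256"]


generated_image = b"<png bytes produced by the model>"
manifest = build_provenance_manifest(
    generated_image,
    model_id="example-diffusion-v1",  # hypothetical model name
    prompt_hash=hashlib.sha256(b"a cat in a hat").hexdigest(),
)
print(json.dumps(manifest, indent=2))
print(matches(generated_image, manifest))  # True for the unmodified asset
```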

11. Disclose non-human interaction

In virtual environments, humans should know whether they are interacting with a human or a machine. AI providers should develop mechanisms to support this, for example, via watermarking.
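
As a rough illustration of how text watermarking can support such disclosure, the sketch below implements a deliberately simplified "green list" check inspired by published statistical watermarking research: a generator biased towards green-listed tokens leaves a statistical trace that a detector can measure. Real schemes operate over full model vocabularies and apply proper statistical tests; everything here (vocabulary, tokens, thresholds) is a toy assumption.

```python
import hashlib
import random

VOCAB_FRACTION_GREEN = 0.5  # share of the vocabulary marked "green" at each step


def green_list(prev_token: str, vocab: list[str]) -> set[str]:
    """Pseudo-randomly split the vocabulary, seeded by the previous token."""
    seed = int.from_bytes(hashlib.sha256(prev_token.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    k = int(len(vocab) * VOCAB_FRACTION_GREEN)
    return set(rng.sample(vocab, k))


def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    """Fraction of tokens drawn from their step's green list.

    Human-written text hovers around VOCAB_FRACTION_GREEN by chance; text from a
    watermarking generator that favours green tokens scores noticeably higher.
    """
    hits = sum(
        1 for prev, tok in zip(tokens, tokens[1:]) if tok in green_list(prev, vocab)
    )
    return hits / max(len(tokens) - 1, 1)


vocab = ["the", "cat", "sat", "on", "a", "mat", "dog", "ran", "fast", "slept"]
suspect_text = ["the", "cat", "sat", "on", "a", "mat"]
print(f"green fraction: {green_fraction(suspect_text, vocab):.2f}")
```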

12. Build human-AI trust

To build trust in AI systems, developers and companies should prioritize transparency, consistency, and meeting and managing user expectations. AI developers should be transparent in their processes and decision-making, providing users with an understanding of how they reach their results. By focusing on these aspects, AI developers can create systems that foster trust and facilitate positive human-AI interactions.

13. Implement a step-by-step review process

Policy-makers and businesses should create a step-by-step review process for AI models and products. This should be similar to the detailed checks used in clinical trials or car manufacturing, both before and after a product goes live. There should be an independent auditor or international agency to oversee this to ensure uniform evaluations and continuous monitoring. To help limit potential risks and negative impacts, a certification or licensing system could be used.

14. Develop comprehensive, multi-level measurement frameworks

Policy-makers should emphasize ongoing efforts and incentivize developers and standardization bodies to focus on creating and employing measurement frameworks with an emphasis on socio-technical aspects rather than solely technical performance.

15. Adopt sandbox processes

AI developers, standard-setting bodies and regulators should cooperate on more flexible “sandbox” development environments along with new and associated processes of governance and oversight. Sandboxing could help build trust by demonstrating that AI systems have undergone rigorous testing and evaluation to ensure safety, reliability and compliance.

16. Adapt to the evolving landscape of creativity and intellectual property

With generative AI impacting content creation, it is essential for policy-makers and legislators to re-examine and update copyright laws to enable appropriate attribution, and ethical and legal reuse of existing content.

Open Innovation and International Collaboration

This section focuses on the importance of sharing scientific knowledge and enhancing international collaboration. As frontier research capabilities tend to be concentrated in private sector companies in a select few countries, it is vital that academic researchers remain an integral part of the exploratory process, while countries worldwide participate in and influence the governance of generative AI systems. These recommendations are designed for a range of stakeholders, including researchers, AI developers, standard-setting bodies and policy-makers. The overarching goal is to cultivate transparency, accountability and inclusivity in the development, implementation and governance of generative AI.

17. Incentivize public-private research coordination

Public and private stakeholders should actively work to design incentive structures that facilitate greater coordination between academic researchers and the private sector throughout the technology development lifecycle. Possible mechanisms to be considered include joint research programmes, data-sharing protocols and joint IP ownership.

18. Build a common registry of models, tools, benchmarks and best practices

Producers and researchers of generative AI should contribute to a common and open registry of source code, models, datasets, tools, benchmarks and best practice guidelines, shared within the research community, so that academia and the private sector have a platform on which to collaborate in building future models and systems that are transparent and accountable to the public.

19. Support responsible open innovation and knowledge sharing

Policy-makers and AI providers should contribute to frameworks to democratize AI through responsible sharing of resources, including data, source code, models and research findings; they should also encourage the sharing of certification processes, ensuring transparency and trust among stakeholders. A public-private long-term initiative could be developed to build public-facing platforms that provide open access to compute, data and pre-trained models. Such a platform could be treated as a digital public good, and its usage could be promoted across borders.

20. Enhance international collaboration on AI standards

Standards bodies must foster international collaboration on AI standards, ensuring the participation of all AI stakeholders from all geographical regions.

21. Establish a global AI governance initiative

To address the challenges and potential risks posed by AI technologies, policy-makers should consider devoting efforts towards creating a global AI governance initiative. This initiative should bring together experts from a wide array of fields. The key focus should be on promoting global understanding of responsible generative AI, ensuring broad inclusion, facilitating access to infrastructure, and fostering collaboration to harmonize response structures at the national level against AI challenges and risks.

Social Progress

This section examines the hurdles tied to AI-driven transformations, spanning from workforce transitions to educational shifts, as well as the necessity of championing AI for societal benefit and advocating for equitable AI access in developing nations. The recommendations are intended for a broad array of stakeholders, including educational institutions, community organizations, corporations, individuals, policy-makers and governments.

The primary objective is to cultivate a society that is more informed, engaged and resilient in the face of these emerging changes.

22. Prioritize social progress in generative AI development and adoption

All stakeholders must ensure that the technology’s societal implications remain front and centre. This involves a focus beyond technical proficiency towards the technology’s role in enhancing social progress.

Comprehensive support must be provided to communities and workers affected by the shift to an AI-enabled society, encompassing learning initiatives, guidance on surmounting generative AI-specific challenges and assistance in navigating the ethical, social and technical shifts inherent in an AI-influenced environment, with the active participation of workers throughout the process.

23. Drive AI literacy across society

Educational bodies and community institutions must take the initiative to increase AI literacy among the general public. A proactive approach is needed to demystify generative AI tools, outline their potential uses and discuss their ethical implications. This will empower individuals to better understand, interact with and contribute to the evolving landscape of AI, fostering a more informed and participative society.

24. Foster holistic thought approaches in AI-driven environments

Foster diverse modes of thinking – critical, computational and responsible – to better equip society for the generative AI era. Encourage these core competencies across sectors and communities to empower individuals to engage critically with AI-generated content, understand the underlying technology and make responsible decisions about its use.

25. Steer generative AI’s transformative impact

Address the transformative influence of generative AI on societal systems. Understand its effect on human interactions, knowledge dissemination and evaluation mechanisms. Proactively adapt to the evolving landscape, supporting roles that may transform due to generative AI, and explore innovative ways to evaluate its impacts within our rapidly evolving digital ecosystem, to harness its potential for driving positive societal transformation.

26. Incentivize innovation for social good

Policy-makers should encourage the development and implementation of generative AI technologies that prioritize social good and address complex and unmet societal needs, such as in healthcare and climate change, to improve the overall quality of life.

27. Address resource and infrastructure disparities

Policy-makers should increase public investment in national and international research infrastructure. That includes work to ensure greater access to computing resources for researchers, especially those from underrepresented regions and institutions. The private sector is encouraged to contribute to the development of datasets and support governments in making more resources available to researchers.

28. Promote generative AI expertise within governments

Governments should invest in fostering AI expertise, ensuring an informed, effective and responsible approach to public policies and regulation of these transformative technologies. By leveraging mechanisms such as targeted incentives, private sector collaborations and exchange programmes, governments can nurture AI talent. This commitment to expanding in-house AI proficiency is crucial in securing a future where these technologies advance societal progress and serve the public interest effectively.

29. Increase equitable access to AI in developing countries

To ensure that the benefits of generative AI technology are accessible to all, public and private stakeholders should focus on establishing initiatives that can provide support and resources at scale, particularly in developing countries where there may be limited access to digital infrastructures. Efforts should focus on providing resources, training, and expertise to make AI more accessible and inclusive, fostering national and international partnerships across sectors to promote diversity and inclusion in the development and deployment of generative AI technology.

30. Preserve cultural heritage

All stakeholders need to contribute to preserving cultural heritage. The public and private sectors should invest in creating curated datasets and developing language models for underrepresented languages, leveraging the expertise of local communities and researchers and making these resources available. This will improve access to AI technologies and help preserve linguistic diversity and cultural heritage.

