Achieve Safe, Secure, and Trustworthy AI through Industry Standards (ISO 42001, ISO 23894, NIST AI RMF) and Legislation (the EU AI Act)

Artificial intelligence (AI) is expected to add $500 billion to India's GDP by 2025, and several trillion dollars to global GDP.

You may be wondering what AI GRC standards have to do with GDP, and how companies can support humanity through voluntary commitments.

Never before in the annals of human history have we, as a generation, birthed machines endowed with the unprecedented ability to wield decisions once solely within the domain of individuals of towering intellect. Their precision holds the power to shape the destiny of nations, forging paths to either prosperity or peril.

Beyond its capacity to streamline our daily endeavors, AI holds the key to unraveling humanity's most pressing challenges. From preserving eyesight and pioneering cancer treatments to unraveling the mysteries of proteins and safeguarding against cyber threats and civil unrest, its potential knows no bounds. With AI, we stand poised to confront the unknown and protect against the unpredictable, ushering in a new era of innovation and advancement.

On the other hand, AI can create a chaotic world: job displacement and inequality, ethical concerns such as privacy violations and bias, a widening digital divide, escalating cyber threats, unchecked risks, geopolitical competition, and even existential danger.

GRC (Governance, Risk, and Compliance) academics and think tanks want companies to adhere to AI standards so that, globally, AI is designed and used ethically, responsibly, and transparently.

I am confident this can be done, since companies have spent years aligning certifications and compliance programs with hundreds of standards, frameworks, and regulations. But let's ask ourselves: is compliance alone a great solution?

Quick fix: yes, implement ISO 42001, ISO 23894, and NIST AI RMF and achieve compliance. For the long term, however, we need "voluntary commitments" to promote the safe, secure, and transparent development and use of AI technology.

These commitments encompass various aspects of AI governance, including safety, security, and trust. Here's a breakdown of the key points:

  1. Safety: Companies commit to internal and external red-teaming of AI models or systems to assess potential risks, including bio, cyber, and societal risks. They also commit to advancing ongoing research in AI safety and publicly disclosing their safety procedures.
  2. Information Sharing: Companies pledge to work toward information sharing among companies and governments regarding trust and safety risks, emergent capabilities, and attempts to circumvent safeguards. This involves establishing or joining a forum or mechanism to develop shared standards and best practices.
  3. Security: Companies commit to investing in cybersecurity and insider threat safeguards to protect proprietary AI model weights. They also agree to incentivize third-party discovery and reporting of issues and vulnerabilities through bug bounty programs or similar reward systems.
  4. Trust: Companies agree to develop mechanisms that enable users to understand whether audio or visual content is AI-generated, including provenance or watermarking systems. They also commit to publicly reporting model capabilities, limitations, and societal risks, and to prioritizing research on the societal risks posed by AI systems.
  5. Deployment for Societal Challenges: Companies pledge to develop and deploy frontier AI systems to help address society's greatest challenges, such as climate change mitigation, early cancer detection, and combating cyber threats. They also commit to supporting initiatives for education and training in AI.
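The provenance and watermarking mechanisms in point 4 above can be sketched very loosely in code. This is a non-normative illustration only: real-world systems (such as C2PA's signed content manifests) use public-key signatures and rich metadata rather than a shared secret, and every name and key below is invented for the example.

```python
import hmac
import hashlib

# Assumption: a provider-held secret key; real provenance systems use
# asymmetric signatures, not shared-secret HMACs. Illustrative only.
SECRET_KEY = b"provider-signing-key"

def tag_content(content: bytes) -> str:
    """Attach a provenance tag: an HMAC-SHA256 over the content bytes."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify_tag(content: bytes, tag: str) -> bool:
    """Check whether a tag matches the content, using a constant-time compare."""
    expected = hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)
```

The key design property is that any alteration of the content invalidates the tag, which is what lets downstream users trust a "this is AI-generated" label.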

I applaud the proactive stance taken by companies, demonstrating their dedication to the responsible development and deployment of AI technologies. Emphasizing safety, security, and trust underscores their commitment to ethical AI practices. It also highlights the significance of collaborative efforts among diverse stakeholders – governments, civil society, academia, and industry – in shaping robust governance frameworks and standards to guide the evolution of AI.

Companies should commit to investing in the future of responsible AI and to helping inform international standards, in the interest of our customers and the communities in which we all live and operate.

Let's explore ISO 42001

ISO 42001 underscores the importance of responsible AI practices, urging organizations to tailor controls to their specific AI systems. By promoting global interoperability and establishing clear guidelines, this standard lays a solid foundation for the responsible adoption of AI technologies.
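Tailoring controls to specific AI systems usually starts from a risk register, which is where ISO 42001's companion standard, ISO 23894 (AI risk management), comes in. As a purely illustrative sketch, a minimal likelihood-times-impact register might look like the following; the field names, scoring scale, and treatment options are my assumptions, not anything mandated by either standard.

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    """One entry in an AI risk register (structure is illustrative,
    not prescribed verbatim by ISO 42001 or ISO 23894)."""
    risk_id: str
    description: str
    likelihood: int   # 1 (rare) .. 5 (almost certain) -- assumed scale
    impact: int       # 1 (negligible) .. 5 (severe)   -- assumed scale
    treatment: str    # e.g. "mitigate", "transfer", "accept", "avoid"

    @property
    def score(self) -> int:
        """Simple likelihood-times-impact risk score."""
        return self.likelihood * self.impact

def prioritize(register: list[AIRisk]) -> list[AIRisk]:
    """Order risks for treatment, highest score first."""
    return sorted(register, key=lambda r: r.score, reverse=True)
```

Even a toy structure like this makes the standard's intent concrete: controls are selected and justified against recorded, scored risks rather than applied uniformly.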

Trust in AI is paramount, and adherence to standards such as ISO 42001 is instrumental in building and maintaining public confidence. As a prominent figure in the field, I have long advocated for the integration of robust governance and risk management practices in AI initiatives.

Professionals who have actively participated in the development of ISO 42001 since its inception in 2021 recognize the significance of international collaboration in shaping the future of AI governance. The involvement of diverse stakeholders, including industry leaders, policymakers, and academics, is crucial in addressing the multifaceted challenges posed by AI.

International standards serve as invaluable tools for organizations, enabling them to navigate complex regulatory landscapes and demonstrate compliance with globally recognized norms. By embracing ISO 42001, organizations can showcase their commitment to excellence in AI governance and risk management.

Moving forward, companies should remain dedicated to advancing the adoption of ISO 42001 and other relevant standards, leveraging professionals' extensive experience and influence to drive positive change in the AI landscape. Collaboration across disciplines and sectors will be essential in ensuring the responsible and ethical development of AI technologies for the benefit of society as a whole.

In the future, we'll continue exploring ISO 42001, ISO 23894, NIST AI RMF, and legislation like the EU AI Act to support the safe, secure, ethical, and trustworthy development of frontier AI. These standards and laws guide us in understanding AI's nature, capabilities, limitations, and impact, ensuring responsible innovation and fostering public trust in AI.
