Achieve Safe, Secure, and Trustworthy AI through Industry Standards (ISO 42001, ISO 23894, and the NIST AI RMF) and Legislation (EU AI Act)
Prashant Kamani
Business Head (Pro Services) @ ControlCase | Helping Organizations Create an Information Security Universe | 14 Years' Experience in GRC, Security & Privacy | Certification, Consulting, and Advisory | Sales | B2B | Partnership | Ops
Artificial intelligence (AI) is expected to add $500 billion to India's GDP by 2025, and several trillion dollars globally.
You may be wondering what AI GRC standards have to do with GDP, and how companies can support humanity through voluntary commitments.
Never before in the annals of human history have we, as a generation, birthed machines endowed with the unprecedented ability to wield decisions once solely within the domain of individuals of towering intellect. Their precision holds the power to shape the destiny of nations, forging paths to either prosperity or peril.
Beyond its capacity to streamline our daily endeavors, AI holds the key to unraveling humanity's most pressing challenges. From preserving eyesight and pioneering cancer treatments to unraveling the mysteries of proteins and safeguarding against cyber threats and civil unrest, its potential knows no bounds. With AI, we stand poised to confront the unknown and protect against the unpredictable, ushering in a new era of innovation and advancement.
On the other hand, AI could create a chaotic world: job displacement and inequality, ethical concerns such as privacy and bias, the digital divide, escalating cyber threats, unchecked risks, geopolitical competition, and even existential catastrophe.
GRC (Governance, Risk, and Compliance) academics and think tanks want companies to adhere to AI standards so that, globally, AI is designed and used ethically, responsibly, and transparently.
I am confident this can be done, since companies have spent years aligning their certifications and compliance with hundreds of standards, frameworks, and regulations. But let's ask ourselves: is it a complete solution?
The quick fix: yes, implement ISO 42001, ISO 23894, and the NIST AI RMF and achieve compliance. For the long term, however, we need voluntary commitments to promote the safe, secure, and transparent development and use of AI technology.
These commitments encompass various aspects of AI governance, including safety, security, and trust.
I applaud the proactive stance taken by companies demonstrating their dedication to the responsible development and deployment of AI technologies. Emphasizing safety, security, and trust underscores their commitment to ethical AI practices. It also highlights the significance of collaborative efforts among diverse stakeholders – governments, civil society, academia, and industry – in shaping robust governance frameworks and standards to guide the evolution of AI.
Companies should commit to investing in the future of responsible AI, and to helping inform international standards in the interest of their customers and the communities in which we all live and operate.
Let's explore ISO 42001
ISO 42001 underscores the importance of responsible AI practices, urging organizations to tailor controls to their specific AI systems. By promoting global interoperability and establishing clear guidelines, this standard lays a solid foundation for the responsible adoption of AI technologies.
Trust in AI is paramount, and adherence to standards such as ISO 42001 is instrumental in building and maintaining public confidence. As a prominent figure in the field, I have long advocated for the integration of robust governance and risk management practices in AI initiatives.
Professionals who have actively participated in the development of ISO 42001 since its inception in 2021 recognize the significance of international collaboration in shaping the future of AI governance. The involvement of diverse stakeholders, including industry leaders, policymakers, and academics, is crucial in addressing the multifaceted challenges posed by AI.
International standards serve as invaluable tools for organizations, enabling them to navigate complex regulatory landscapes and demonstrate compliance with globally recognized norms. By embracing ISO 42001, organizations can showcase their commitment to excellence in AI governance and risk management.
Moving forward, companies should remain dedicated to advancing the adoption of ISO 42001 and other relevant standards, leveraging professionals' extensive experience and influence to drive positive change in the AI landscape. Collaboration across disciplines and sectors will be essential in ensuring the responsible and ethical development of AI technologies for the benefit of society as a whole.
In the future, we'll continue exploring ISO 42001, ISO 23894, NIST AI RMF, and legislation like the EU AI Act to support the safe, secure, ethical, and trustworthy development of frontier AI. These standards and laws guide us in understanding AI's nature, capabilities, limitations, and impact, ensuring responsible innovation and fostering public trust in AI.