End-of-Year Reflections: AI Standards, the AI Act, and Building a Better Playbook for the Future
@Infinitus


As we wrap up another year, it’s hard not to marvel at how artificial intelligence (AI) continues to reshape our world. From education and healthcare to finance, AI is transforming industries at a pace that’s both exciting and, at times, overwhelming. But with great power comes great responsibility, and the rapid growth of AI has brought with it some big questions: How do we ensure AI is ethical? How do we keep it safe? And how do we make sure it works for everyone, not just a select few?

These are the kinds of questions I’ve been reflecting on as we close out the year. And they’re not just theoretical. Around the world, governments, organisations, and experts are working hard to create rules and standards to guide AI’s development. One of the most significant efforts is in Europe, where the AI Act sets the stage for regulating AI. But creating rules for something as complex as AI isn’t easy, and that’s where standards like ISO/IEC 42001 come in.

Let me take you on a journey through what these standards mean, why they matter, and how they’re shaping the future of AI.

(a) The AI Act and the Role of Standards: Let’s start with the AI Act. If you’re unfamiliar with it, the AI Act is Europe’s ambitious attempt to regulate AI in a way that balances innovation with safety and ethics. It’s a bold move influencing how other countries think about AI regulation. But here’s the thing: writing laws is one thing; putting them into practice is another. That’s where standards come in. Standards are like rulebooks that help organisations comply with laws like the AI Act. In this case, the European Commission has asked standardisation bodies to create harmonised standards to support the AI Act. One of the most talked-about standards is ISO/IEC 42001, which focuses on making AI systems trustworthy, ethical, and aligned with societal values. Think of it as a guidebook for building AI systems that don’t just work but work responsibly.

(b) What is ISO/IEC 42001? ISO/IEC 42001 is a relatively new standard. It was introduced in December 2023, and it’s already making waves. At its core, it’s a framework for organisations to manage AI ethically, securely, and in alignment with their goals. But what does that mean? Let me break it down. Imagine you’re running a company that’s developing an AI system. Maybe it’s a chatbot for customer service or something more complex, like an AI tool for diagnosing diseases. Either way, you want to ensure your AI is safe and fair and doesn’t accidentally (or intentionally) cause harm. ISO/IEC 42001 helps you do that by providing a structured approach to managing AI. It’s not just about fixing problems when they happen; it’s about building a system that prevents problems in the first place.

(c) Core Components of ISO/IEC 42001: To give you a better sense of what this standard involves, here are some of its key components:

*Context and Leadership: Every organisation is different, and ISO/IEC 42001 recognises that. It asks organisations to align their AI strategies with their unique context: regulatory requirements, stakeholder expectations, and business goals. Leadership is also a big focus. Top management needs to be on board, ensuring resources are available and fostering a culture of accountability.

*Risk Management: AI has unique risks, like bias, lack of transparency, and unintended societal impacts. ISO/IEC 42001 builds on existing risk management principles (like those in ISO 31000) to help organisations identify and address these risks. It even includes an AI System Impact Assessment, which evaluates how AI might affect individuals and society.

*Policies and Documentation: A good AI policy is like a company’s mission statement for responsible AI. ISO/IEC 42001 requires organisations to create and regularly review their AI policies, ensuring they reflect the company’s commitment to ethical practices. Detailed documentation of AI systems is also necessary, covering everything from data sources to compliance measures.

*Competence and Resources: AI is only as good as the people behind it. That’s why the standard emphasises the importance of training, resource allocation, and continuous expertise evaluation.

*Monitoring and Improvement: AI isn’t static; like other innovative technologies, it continuously evolves. ISO/IEC 42001 encourages organisations to monitor their AI systems, measure their performance, and constantly improve. It’s about staying ahead of the curve, not just keeping up.

*Ethical AI: ISO/IEC 42001 also strongly focuses on ethics. It draws on principles like fairness, transparency, and human oversight to ensure AI systems align with societal values. In a world where trust in technology is often shaky, this focus on ethics is a game changer.
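The components above lend themselves to a structured record. Here is a minimal sketch in Python of what an AI management-system record might look like; the field names and completeness checks are my own hypothetical illustration, not terminology taken from ISO/IEC 42001 itself:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an AI management-system record in the spirit of
# ISO/IEC 42001. Field names are illustrative assumptions, not clauses
# or controls from the standard.

@dataclass
class AISystemRecord:
    name: str
    purpose: str                                        # context: what the system is for
    policy_reviewed: bool = False                       # AI policy reviewed this cycle
    data_sources: list = field(default_factory=list)    # documentation of inputs
    identified_risks: list = field(default_factory=list)    # e.g. bias, opacity
    monitoring_metrics: dict = field(default_factory=dict)  # KPI name -> latest value

    def documentation_gaps(self) -> list:
        """Return the record sections still missing, mirroring the
        'Policies and Documentation' component above."""
        gaps = []
        if not self.policy_reviewed:
            gaps.append("policy review")
        if not self.data_sources:
            gaps.append("data sources")
        if not self.identified_risks:
            gaps.append("risk register")
        if not self.monitoring_metrics:
            gaps.append("monitoring metrics")
        return gaps

# A freshly registered system starts with every section outstanding.
record = AISystemRecord(name="triage-chatbot", purpose="customer service")
print(record.documentation_gaps())
```

A real implementation would map each field to the standard’s actual clauses and controls; the point here is simply that documentation completeness can be tracked and checked mechanically rather than left to memory.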

You might rightfully ask: how does ISO/IEC 42001 fit into the bigger AI picture?

Let's answer that question and take a peek at the larger picture. ISO/IEC 42001 doesn’t exist in a vacuum. It works alongside other standards to create a comprehensive framework for AI governance. For example:

(a) ISO/IEC 27001: This focuses on information security. When paired with ISO/IEC 42001, you cover AI’s ethical and technical sides, keeping data safe while ensuring the AI is responsible.

(b) ISO 9001: This standard revolves around quality management and is being updated to better address AI. It perfectly complements ISO 42001’s focus on ethics and trustworthiness. ISO 9001 provides a framework for continuous improvement and consistent quality in processes and products. As it undergoes revisions to reflect new business practices and technologies, including AI, it will likely become even more relevant to the AI Act’s requirements.

(c) ISO/IEC TR 24368: This standard is like the moral compass that supports ISO/IEC 42001. It is an overarching technical report addressing ethical and societal issues for AI systems. By identifying core themes and principles, it helps organisations align their AI activities with societal values and human rights. Notably, the standard is deliberately not prescriptive; it guides organisations to create AI systems that mitigate harm while promoting beneficial outcomes.

These standards will be pivotal for implementing the AI Act; however, it's worth noting that the latter requires a broad Safety Risk Management approach, considering the combination of the probability of harm and the severity of that harm. It aims to mitigate risks specific to AI technologies, which may not be adequately addressed by the current risk management frameworks of ISO/IEC 27001 and ISO/IEC 42001, particularly since both standards allow organisations considerable flexibility in implementing controls.
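The Act’s safety-risk view (risk as the combination of the probability of harm and the severity of that harm) can be sketched as a simple classification matrix. The scales, thresholds, and labels below are illustrative assumptions of mine, not values taken from the AI Act or any standard:

```python
# Minimal sketch of a probability x severity risk matrix, the kind of
# safety-risk reasoning the AI Act calls for. All scales and thresholds
# here are hypothetical, chosen only to illustrate the mechanics.

PROBABILITY = {"rare": 1, "possible": 2, "likely": 3}
SEVERITY = {"minor": 1, "serious": 2, "critical": 3}

def risk_level(probability: str, severity: str) -> str:
    """Classify a probability/severity pair on a 3x3 matrix."""
    score = PROBABILITY[probability] * SEVERITY[severity]
    if score >= 6:
        return "high"    # e.g. likely + serious, possible + critical
    if score >= 3:
        return "medium"
    return "low"

print(risk_level("likely", "critical"))  # a worst-case combination
```

The contrast with ISO/IEC 27001 and ISO/IEC 42001 is that those standards let each organisation choose its own scales and acceptance criteria, whereas a regulator needs the classification to be consistent across organisations; that is precisely the flexibility gap discussed next.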

Unless this flexibility adheres to the requirements of the AI Act, it could result in quality management system (QMS) implementations that do not fully align with or support the AI Act’s conformity requirements, which vary according to the high-risk AI category. This misalignment could be particularly problematic for high-risk AI applications already regulated under a lex specialis, such as medical devices. In such instances, the AI Act’s conformity requirements may need to operate alongside existing regulations for high-risk AI in specific sectors. Determining which QMS should apply (AI Act or lex specialis) remains challenging, requiring further clarity and guidance.

So, what does all this mean as we head into the new year? Rejoice: a new playbook for AI is slowly coming to life. Together, these standards provide a toolkit for organisations navigating the complex world of AI regulation. It’s not all smooth sailing, but the message is clear: AI governance isn’t just about avoiding risks. It’s about creating a culture where organisations strive to improve, not because they have to but because it’s the right thing to do. Standards like ISO/IEC 42001 are a step in the right direction, but they’re just one piece of the playbook. As the AI landscape evolves, we’ll need to keep asking tough questions, challenging assumptions, and working together to build a future where AI benefits everyone.

As someone who’s been following these developments closely, I’m optimistic. The road ahead won’t be easy, but with the right tools, standards, frameworks, and mindset, we can help shape an AI playbook that’s not just about compliance but about advancing societal well-being and welfare.

Here’s to a new year of innovation, responsibility, and progress.


Article by Dr Ian Gauci

Dr Gauci is the Managing Partner of GTG, a technology-focused corporate and commercial law firm at the forefront of major developments in fintech, cybersecurity, telecommunications, and technology-related legislation.

Disclaimer: This article is not intended to impart legal advice, and readers are asked to seek verification of statements made before acting on them.

Conrad Chircop

Information Security Manager | Skilled in Security Programme Execution & Proactive Risk Mitigation | Keen interest in Fintech and iGaming sectors

2 months ago

Excellently penned article Ian Gauci. I appreciated, in particular, how you correlated the different ISO standards while highlighting the unique aspects that AI technologies and their deployment bring to risk management. Incidentally, quality management (traditionally, covering the manufacturing industry) has been drawn, in recent times, into the 'security by design' concept in code development.

Ambrose Muscat CAMS, CCAS, CFCP

Fintech | Cryptoassets | MLRO | Compliance

2 months ago

Insightful as always. Are you noticing any meaningful volumes of opt-out requests and similar challenges? I remember wondering how one would go about this; I still do.

Refat Ametov

Driving Business Automation & AI Integration | Co-founder of Devstark and SpreadSimple | Stoic Mindset

2 months ago

Your point about the AI Act's conformity requirements and the role of standards like ISO/IEC 42001 is so relevant. How can smaller organizations with limited resources ensure they keep pace with these complex regulatory and ethical demands?
