A GRC leader at a $5B revenue global fintech company asked me this about AI governance frameworks:
"Do we start with the EU AI Act first or do we do all three [AI Act, ISO/IEC 42001, and NIST AI RMF] together?"
Here's how I think of each:
1. EU AI Act
Adopted in 2024, the European Union (EU) AI Act forbids:
-> Inference of non-obvious traits from biometrics
-> Real-time biometric identification in public
-> Predicting criminal risk based solely on profiling, not conduct
-> Purposefully manipulative or deceptive techniques
-> Inferring emotions in school/workplace
-> Blanket facial image collection
-> Social scoring
It heavily regulates AI systems involved in:
-> Safety components of products; and
-> Products already subject to EU regulation
-> Criminal behavior risk assessment
-> Education admissions/decisions
-> Job recruitment/advertisement
-> Exam cheating identification
-> Public benefit decisions
-> Emergency call routing
-> Migration and asylum
-> Election management
-> Critical infrastructure
-> Health/life insurance
-> Law enforcement
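To make the tiering concrete, here's a minimal first-pass triage sketch. The category strings and the lookup logic are mine, purely illustrative; the Act's actual legal tests are far more nuanced and need counsel review:

```python
# Illustrative (hypothetical) category labels -- not the Act's legal definitions.
PROHIBITED = {"social_scoring", "realtime_public_biometric_id", "workplace_emotion_inference"}
HIGH_RISK = {"hiring", "education_admissions", "credit_or_insurance", "law_enforcement"}

def triage(use_case: str) -> str:
    """Rough first-pass classification of an AI use case under the EU AI Act.
    A sketch only: real classification requires legal analysis of the system's
    intended purpose, not a string lookup."""
    if use_case in PROHIBITED:
        return "prohibited"
    if use_case in HIGH_RISK:
        return "high-risk: conformity assessment and risk management required"
    return "limited/minimal risk: transparency obligations may still apply"
```

Even a crude mapping like this is useful as an intake filter: it forces teams to name the use case before legal review starts.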
Fines can reach €35,000,000 or 7% of worldwide annual turnover, whichever is higher. So ignoring the EU AI Act’s requirements can be costly.
It's mandatory for anyone qualifying (according to the AI Act) as a:
-> Provider
-> Deployer
-> Importer
-> Distributor
-> Product Manufacturer
-> Authorized Representative
2. ISO/IEC 42001:2023
Published by the International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC) in December 2023.
ISO 42001 requires building an AI management system (AIMS) to measure and treat risks to:
-> Safety
-> Privacy
-> Security
-> Health and welfare
-> Societal disruption
-> Environmental impact
An external auditor can certify this.
Also, compliance with a “harmonised standard” under the EU AI Act (which ISO 42001 may become) gives you a presumption of conformity with certain AI Act requirements.
But ISO 42001 is not a silver bullet.
A U.S.-based company offering facial recognition for public places could be ISO 42001 certified but banned from operating in the EU.
In any case, it's one of the few ways a third party can bless your AI governance program. It's best for:
-> AI-powered B2B startups
-> Companies training on customer data
-> Heavily-regulated enterprises (healthcare/finance)
3. NIST AI RMF
The National Institute of Standards and Technology (NIST) Artificial Intelligence (AI) Risk Management Framework (RMF) launched in January 2023. ISO 42001 also names it as a reference document.
The AI RMF has four functions:
-> Map
-> Measure
-> Manage
-> Govern
These lay out best practices at a high level. But like other NIST frameworks, there is no way to be “certified.”
Still, because of NIST’s credibility and the fact that the AI RMF was the first major AI framework published, using it is a good way for any company to build trust.
BOTTOM LINE
Stack AI frameworks to meet:
-> Regulatory requirements
-> Customer demands
-> Risk profile
How are you doing it?