AI Act: How to Ensure Your Compliance – and Your Documentation
In this month’s newsletter, you get to dive into AI Act compliance and, most importantly, the documentation you need to prove your compliance to regulatory authorities, customers, etc.
We look forward to taking you through:
- Stats from our latest webinar, indicating how many organizations know how to document their AI compliance
- The 6 steps towards AI compliance
- Words of wisdom from our CEO, Martin Folke Vasehus: "Without documentation, all your compliance efforts are wasted"
- The main AI documentation you need
- The documentation you need as a deployer of high-risk AI
- Our product launch: Kickstart your AI compliance and documentation now
Happy reading!
Did you know that...
Even though the AI Act is expected to come into force at the end of this month, only 23% of the attendees at our latest webinar had decided on how to document their AI Act compliance.
That’s our cue for focusing on documentation of AI compliance now.
But you can’t document your compliance without having actually done the footwork. So, let’s help you get off on the right foot.
AI Act Compliance: Get Off on the Right Foot with These 6 Steps
You should start your compliance process by leaning on ISO 42001.
It’s the first international standard for the governance and management of AI. It can help you handle AI in a responsible, transparent, and ethical manner.
Also, it gives you and your organization a structured way to balance risks, benefits, and innovation when it comes to AI.
However, we also know that ISO 42001 can be quite a mouthful if you don’t ‘speak’ law fluently.
Therefore, our team of in-house lawyers has ‘translated’ the ISO 42001 standard into these 6 compliance steps for you.
Words of wisdom from our CEO: “Without documentation, all your compliance efforts are wasted”
The main AI documentation you should have in place
All organizations should document and be transparent about their use of AI. Here is the main documentation you should have in place if you use AI:
Records and categorization
You need to have a list of the AI systems you use and whether they are high-risk, limited-risk, or minimal-risk.
Risk assessment
You must prove that you comply with the requirements for appropriate security measures.
Also, be aware that most AI systems process personal data, meaning that the GDPR applies to the processing in the AI system. For that reason, you should do a risk assessment and often also a data protection impact assessment.
Data processing agreement (DPA)
You should conclude and keep a data processing agreement. Among other things, you need to show that the agreement safeguards the data subjects' rights.
Privacy policy
You should have a privacy policy that explains how you process personal data in your AI systems.
Responsible use policy
You should specify how employees and, where relevant, customers should use AI.
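To make the "Records and categorization" item above concrete, here is a minimal sketch of what such an AI-system inventory could look like in code. All names and the example entries are illustrative assumptions, not part of the AI Act or any standard:

```python
from dataclasses import dataclass
from enum import Enum

# The three risk tiers mentioned above (illustrative labels)
class RiskTier(Enum):
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"

@dataclass
class AISystemRecord:
    name: str        # what the system is called internally
    purpose: str     # what it is used for
    risk_tier: RiskTier

# A minimal inventory of the AI systems an organization uses
inventory = [
    AISystemRecord("CV screening tool", "rank job applicants", RiskTier.HIGH),
    AISystemRecord("Support chatbot", "answer customer questions", RiskTier.LIMITED),
]

# High-risk systems carry the most documentation obligations,
# so it is useful to be able to list them at a glance
high_risk = [s.name for s in inventory if s.risk_tier is RiskTier.HIGH]
```

A simple record like this makes it easy to show a regulator or customer which systems you use and how each one is categorized.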
What you need to document as a deployer of high-risk AI
High-risk AI systems are on everybody’s lips because they carry the most obligations. If you’re a deployer of high-risk AI systems, here’s a checklist of what you must document:
1. Have appropriate technical and organizational measures.
2. Ensure human oversight.
3. Monitor the operation of the high-risk AI system.
4. Ensure relevant and appropriately representative data input.
5. Keep log files.
6. Inform employees and their representatives that they will be subject to the use of a high-risk AI system.
Besides the above, you must also do an impact assessment according to the GDPR if relevant. Check out our guide for doing a data protection risk assessment here.
Product launch: Kickstart your AI compliance and documentation now
As with any new regulation, it can be tricky to 1) know if you're doing it right, and 2) keep track of your compliance efforts.
That’s why we just launched our free AI Compliance solution.
With this, you can: