Chapter 3: Getting Started with AI Act Compliance
Mert Can Boyar
Founder at Compliance Detective | Director at Bilgi University Privacy Innovation Lab | Author of Hitchhiker's Guide to Privacy Engineering
As we venture into the ever-changing and sometimes unpredictable world of AI, it’s essential to remember that we aren’t starting from scratch.
In the Beginning, There Was GDPR
But let’s rewind even more. Back in 1981, the Council of Europe’s Convention 108 laid the groundwork for privacy and data protection rules and principles.
From there, various regulations evolved, shaping how we treat personal data. Flash forward to today, and we’ve got the AI Act and the GDPR. Yet there are several other significant frameworks we should also consider to fill the gaps these EU regulations leave when it comes to handling personal information ethically in AI and data protection compliance.
The EU AI Act clearly ties AI systems that deal with personal data to GDPR compliance first.
Now, here’s the key takeaway: if your AI system uses personal data, it falls under GDPR. Full stop. Article 2 of the EU AI Act tells you that GDPR applies if your AI system is involved in personal data processing. Not a suggestion, not a guideline—mandatory.
Understanding GDPR Requirements for AI Systems
Here are the basics to get you covered and make sure your AI doesn’t get you into hot water:
Roles and Responsibilities
In the world of GDPR, you’ll encounter two main roles: data controllers and data processors.
Data Controller: This is your organization, the one determining what data is collected, how it’s processed, and for what purpose. In GDPR lingo, the controller makes the calls on data use, and if something goes sideways, the controller takes the heat.
Data Processor: This is the entity that processes data on behalf of the controller, simply carrying out its instructions. For example, if you're using a cloud service like Digital Ocean to host your app, you (the startup) are the data controller and Digital Ocean is the processor: it hosts your data, but it doesn’t control it.
An example of an everyday data flow: the patient is the data subject, the hospital is the data controller, and the HMS (hospital management system) vendor is the processor.
When it comes to AI systems under the EU AI Act, however, these roles can shift. The system provider (say, your third-party AI platform) bears a bigger share of the responsibility, while your startup, as the “deployer,” has its own set of duties to ensure everything’s in line.
Example: As a deployer of AI (i.e., the one using an AI Assistant API like the one from OpenAI), your organization may now fall under the term "processor," while the cloud provider becomes more of a "controller" with higher compliance obligations on their end.
It’s essential to understand these shifts in roles and responsibilities, as they affect your data protection strategy and the obligations set out in your DPAs (Data Processing Agreements) with AI providers.
Implement Privacy by Design and Default Strategies
- Consent or Legal Basis: Users must know if their data is being used, and they need to consent, or you need another valid legal basis to process it. No sneaky stuff.
- Data Security: Data shouldn’t just be “protected” on paper; it should be properly secured. People expect that.
- Monitoring: Inform your users about the ML models they're interacting with, and monitor the system for unauthorized access and vulnerabilities (e.g., LLM attacks, data poisoning).
- Data Minimization: Only collect what you actually need. No unnecessary data hoarding (see the redaction sketch after this list).
- Transparency: Be upfront with your users about what personal data you’re collecting, why, how long you’ll keep it, and what rights they have. People respect clarity.
- Proportionality: Just because you can collect data doesn’t mean you should. Don’t go overboard; design your AI system to use only the data necessary to achieve the desired outcome.
- Purpose Limitation: Use the data only for the purpose it was collected for, and don’t hold on to it longer than necessary.
- Lawfulness, Fairness, and Transparency: Play by the rules. Your AI use case should not breach any laws, and you should be clear and open about how data is processed.
- Respect for Data Subject Rights: Give users control over their data, whether that means correcting it or deleting it, and make it easy for them to do so.
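To make the minimization point concrete, here’s a minimal sketch in Python of redacting obvious identifiers from user input before it leaves your infrastructure for a third-party model API. The patterns and placeholder format are illustrative assumptions; a production system would pair this with a dedicated PII-detection tool, not regexes alone.

```python
import re

# Illustrative patterns for common identifiers; regexes alone are
# not a complete PII-detection strategy.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def minimize(text: str) -> str:
    """Replace obvious personal identifiers with placeholders
    before the prompt is sent to an external AI service."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

prompt = "Contact Jane at jane.doe@example.com or +44 20 7946 0958."
print(minimize(prompt))
# Contact Jane at [EMAIL REDACTED] or [PHONE REDACTED].
```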
Data Mapping and Third-Party Risk Management
Know your data. Where is it? Who’s using it? How’s it being used? It’s like tracking your socks. Know where they all are.
It’s vital to understand and map the personal data flow in and out of your app. Start by creating an inventory of the personal data you're collecting, where it resides, who has access, and whether sensitive data is included.
As part of your risk management strategy, ensure that third-party vendors and subprocessors have appropriate data protection measures in place and are included in your data maps. Use DPAs to establish the legal framework for data processing, and monitor compliance throughout the contract.
If you’re working with vendors like Deepseek, Stable Diffusion, or OpenAI, check their data policies. Get that DPA signed, and ensure they follow it. If they mess up, you’re the one who looks bad.
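Here’s a minimal sketch of what such an inventory could look like in code. The schema and field names are my own illustration, not a mandated format; the point is that each piece of personal data is tied to a location, a purpose, a legal basis, and the processors who touch it.

```python
from dataclasses import dataclass, field

@dataclass
class DataAsset:
    """One row in a personal-data inventory (illustrative schema)."""
    name: str                 # e.g., "support_chat_logs"
    location: str             # system or vendor where the data lives
    purpose: str              # why it is processed
    legal_basis: str          # consent, contract, legitimate interest, ...
    sensitive: bool = False   # special-category data (GDPR Art. 9)?
    processors: list = field(default_factory=list)  # third parties with access
    dpa_signed: bool = False  # is a DPA in place with those processors?

inventory = [
    DataAsset(
        name="support_chat_logs",
        location="OpenAI Assistants API",
        purpose="AI-assisted customer support",
        legal_basis="contract",
        processors=["OpenAI"],
        dpa_signed=True,
    ),
    DataAsset(
        name="training_feedback",
        location="internal Postgres",
        purpose="model fine-tuning",
        legal_basis="consent",
        sensitive=True,
    ),
]

# Flag any asset shared with a processor but not covered by a DPA.
gaps = [a.name for a in inventory if a.processors and not a.dpa_signed]
print("DPA gaps:", gaps)
```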
Automated Decision-Making and Explainability
Automated decision-making is a hot topic. If your AI system makes decisions without human intervention, tell your users. Transparency is key here—don’t try to pull a fast one.
Article 22 of the GDPR applies to AI systems that make solely automated decisions with legal or similarly significant effects. It means individuals are entitled to meaningful information about the logic behind decisions made by AI models.
This is where explainable AI (XAI) becomes crucial. XAI is still an emerging field, but the ability to explain AI decisions is vital for protecting user rights.
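As a toy illustration (one approach among many, not the definitive XAI method), here’s a sketch using scikit-learn: a linear model whose per-feature contributions can be surfaced as “meaningful information about the logic.” The features and numbers are invented for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented toy data: [income_k, debt_ratio] -> loan approved (1) or declined (0).
X = np.array([[60, 0.2], [35, 0.6], [80, 0.1], [20, 0.9], [50, 0.4], [90, 0.3]])
y = np.array([1, 0, 1, 0, 1, 1])
features = ["income_k", "debt_ratio"]

model = LogisticRegression().fit(X, y)

def explain(applicant):
    """Print the decision plus each feature's (simplified)
    contribution to the score: coefficient * feature value."""
    decision = "approved" if model.predict([applicant])[0] == 1 else "declined"
    print(f"Decision: {decision}")
    for name, contrib in zip(features, model.coef_[0] * np.asarray(applicant)):
        print(f"  {name}: {contrib:+.3f}")

explain([45, 0.5])
```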
Moving Beyond GDPR: Integrating AI Act Requirements
While GDPR provides a strong foundation for data protection, the EU AI Act introduces additional requirements that focus on the safe and ethical deployment of AI systems. As you integrate these requirements into your AI system, remember that the principles outlined in GDPR still apply, now with added complexity due to the AI Act’s scope.
Treat the following principles as guidelines during your planning:
- Transparency: Be upfront about how your AI models operate, especially in cases where AI decisions affect individuals’ lives.
- Accountability: Be ready to take responsibility for the AI systems you deploy and the outcomes they produce.
- Risk Management: Identify, assess, and mitigate AI risks at every lifecycle stage. This could involve continuous monitoring of AI models to ensure they comply with regulatory requirements (see the monitoring sketch after this list).
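To give the monitoring idea some shape, here’s a minimal sketch: track the model’s recent positive-prediction rate against the rate observed at validation time, and raise an alert when it drifts. The thresholds, window size, and alerting channel are all illustrative assumptions.

```python
from collections import deque

class DriftMonitor:
    """Sketch of continuous monitoring: alert when the live
    positive-prediction rate drifts from the validation baseline."""

    def __init__(self, baseline_rate, tolerance=0.10, window=500):
        self.baseline = baseline_rate   # rate seen at validation time
        self.tolerance = tolerance      # allowed absolute deviation
        self.recent = deque(maxlen=window)

    def record(self, prediction: int) -> None:
        self.recent.append(prediction)
        if len(self.recent) == self.recent.maxlen:
            rate = sum(self.recent) / len(self.recent)
            if abs(rate - self.baseline) > self.tolerance:
                # In production: notify a human reviewer and write to the
                # audit trail required by your risk-management process.
                print(f"ALERT: live rate {rate:.2f} vs baseline {self.baseline:.2f}")

monitor = DriftMonitor(baseline_rate=0.60)
for pred in [1, 0, 1, 1] * 125:  # feed 500 simulated predictions
    monitor.record(pred)
```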
The Road Ahead
As you embark on AI Act compliance, remember: this is a journey, not a destination. The principles may seem daunting at first, but think of them as a guiding light through the complexities of AI development. With careful planning, transparency, and accountability, you’ll ensure that your AI system remains compliant and trustworthy.
Stay tuned, and go play some of our cool privacy games at https://play.compliancedetective.com/