Ambit Compliance Newsletter: Looking Back on January 2025!

Here's what caught our eye in January 2025


1. Italy Fines OpenAI €15m over ChatGPT GDPR Violations

Italy’s data protection authority, Garante, has fined OpenAI €15 million after concluding that ChatGPT processed users’ personal data without a valid legal basis and lacked transparency in its data handling practices. The regulator also found that OpenAI failed to implement adequate age verification measures, exposing children to inappropriate AI-generated content.

As part of the enforcement, OpenAI has been ordered to run a six-month public awareness campaign in Italy on how ChatGPT processes data. OpenAI has called the fine disproportionate and plans to appeal, arguing that its approach to privacy is industry-leading.


2. Volkswagen Data Breach Exposes 800k EV Customers’ Information

Volkswagen Group has suffered a major data breach affecting 800,000 electric vehicle (EV) owners across brands including Volkswagen, Audi, Seat, and Skoda. The breach, first reported by Der Spiegel, was caused by a misconfigured Amazon cloud storage system managed by Volkswagen’s software subsidiary, Cariad.

The exposed data included vehicle location history, timestamps of when EVs were switched on and off, as well as email addresses, phone numbers, and home addresses of affected customers. Among those impacted were two German politicians and members of the Hamburg police. While the majority of affected vehicles were in Germany, researchers also found data on cars in Norway, Sweden, the UK, the Netherlands, France, Belgium, and Denmark.
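
The underlying failure here, publicly reachable cloud storage, is one of the most common causes of large-scale breaches. Cariad's actual configuration has not been published, so the following Python script is purely illustrative: it uses boto3 to flag any S3 buckets in an account that lack a full bucket-level public access block (the audit logic and output are our own, hypothetical choices).

```python
# Minimal sketch: flag S3 buckets without a full Public Access Block.
# Assumes AWS credentials are already configured; illustrative only.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def bucket_is_locked_down(bucket_name: str) -> bool:
    """Return True if all four public-access-block settings are enabled."""
    try:
        config = s3.get_public_access_block(Bucket=bucket_name)[
            "PublicAccessBlockConfiguration"
        ]
    except ClientError as err:
        # A bucket with no configuration at all is treated as not locked down.
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            return False
        raise
    return all(config.values())

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    if not bucket_is_locked_down(name):
        print(f"WARNING: {name} may allow public access - review its policy and ACLs")
```

Routine automated checks like this, alongside periodic access reviews, are a cheap control compared with the cost of a breach notification exercise.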


3. ICO Expands Cookie Compliance Crackdown to UK’s Top 1,000 Websites

The UK’s Information Commissioner’s Office (ICO) has announced a major expansion of its enforcement efforts to bring the top 1,000 UK websites into compliance with data protection laws. This follows an initial review of the top 200 websites, where 134 organisations were contacted for failing to provide users with a meaningful choice over tracking.

As part of its 2025 online tracking strategy, the ICO aims to curb harmful online tracking practices and ensure individuals have greater control over their personal information. Concerns include cases where tracking data is used to exploit vulnerable individuals, such as gambling addicts receiving targeted betting ads or LGBTQ+ individuals modifying online behaviour out of fear of unintended disclosure.

Key measures include:

  • New guidance on ‘consent or pay’ models, ensuring organisations offer a fair choice between accepting personalised ads or paying for access without tracking.
  • Engagement with Consent Management Platforms (CMPs) to improve transparency and user control.
  • Public guidance on online tracking, helping users understand their rights and how to report concerns.

The ICO emphasises a balanced approach, combining enforcement, guidance, and support to encourage privacy-friendly business models while holding non-compliant organisations accountable.
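
For web teams asking what a "meaningful choice" looks like in practice, the core pattern is straightforward: no tracking scripts load until the user actively opts in, and rejecting must be as easy as accepting. The snippet below is a minimal, hypothetical Flask sketch of that pattern; the cookie name, routes, and markup are ours, and it is not ICO-endorsed code.

```python
# Minimal sketch: serve tracking scripts only after an explicit opt-in.
# Hypothetical cookie name "analytics_consent"; not ICO-endorsed code.
from flask import Flask, make_response, request

app = Flask(__name__)

TRACKING_SNIPPET = '<script src="/static/analytics.js"></script>'  # placeholder

@app.route("/")
def index():
    consented = request.cookies.get("analytics_consent") == "granted"
    page = "<h1>Welcome</h1>"
    if consented:
        page += TRACKING_SNIPPET  # tracker loads only after an affirmative choice
    else:
        page += ('<a href="/consent?choice=granted">Accept analytics</a> | '
                 '<a href="/consent?choice=denied">Reject</a>')
    return page

@app.route("/consent")
def consent():
    choice = request.args.get("choice", "denied")
    resp = make_response('Preference saved. <a href="/">Back</a>')
    # Record either answer so the user is not re-prompted; rejecting must be
    # as easy as accepting.
    resp.set_cookie("analytics_consent", choice, max_age=60 * 60 * 24 * 180)
    return resp

if __name__ == "__main__":
    app.run()
```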


4. DORA Comes into Force: Strengthening Digital Resilience in Financial Services

The Digital Operational Resilience Act (DORA) became fully applicable on 17 January 2025, introducing a harmonised EU-wide framework to strengthen the digital resilience of financial services firms and critical ICT providers.

DORA aims to ensure that banks, insurers, investment firms, and other financial entities can withstand, respond to, and recover from ICT-related disruptions, reducing systemic risks in the financial sector. It applies to a broad range of financial institutions as well as third-party ICT providers, such as cloud service providers that support financial operations.

Key compliance obligations under DORA include:

  • Robust ICT Risk Management – Firms must implement strong security measures, conduct regular risk assessments, and maintain detailed incident response plans.
  • Incident Reporting Requirements – Organisations must report major ICT-related incidents to regulators in a timely manner (a simplified illustration follows this list).
  • Resilience Testing – Firms must conduct regular penetration testing and cyber resilience assessments.
  • Third-Party Risk Management – Stricter rules govern how financial entities manage and oversee ICT service providers, ensuring contractual safeguards and oversight mechanisms.
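
To make the incident-reporting obligation concrete, the sketch below shows a simplified internal incident record with a crude escalation check. The official reporting templates and classification criteria are set out in DORA's regulatory technical standards, so every field name and threshold here is illustrative only.

```python
# Simplified, hypothetical internal record for an ICT incident log.
# DORA's real reporting templates and classification criteria come from
# the regulatory technical standards; these fields are illustrative only.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class Severity(Enum):
    MINOR = "minor"
    SIGNIFICANT = "significant"
    MAJOR = "major"  # major incidents trigger regulatory reporting

@dataclass
class IctIncident:
    title: str
    detected_at: datetime
    severity: Severity
    affected_services: list[str] = field(default_factory=list)
    clients_affected: int = 0
    reported_to_regulator: bool = False

    def requires_regulatory_report(self) -> bool:
        # Placeholder rule; real thresholds are defined in the technical
        # standards, not in this sketch.
        return self.severity is Severity.MAJOR

incident = IctIncident(
    title="Payment gateway outage",
    detected_at=datetime.now(timezone.utc),
    severity=Severity.MAJOR,
    affected_services=["card payments"],
    clients_affected=12_000,
)
if incident.requires_regulatory_report() and not incident.reported_to_regulator:
    print("Escalate: initial notification due to the competent authority")
```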

DORA has applied in full since 17 January 2025, meaning financial institutions and their ICT partners must already be compliant. Firms that fail to meet DORA's requirements risk regulatory penalties and enforcement action.

The moral of the story is...

Ensuring DPO Independence

A recent decision by the Austrian Data Protection Authority (DSB) highlights the risks of appointing a Data Protection Officer (DPO) with a conflict of interest. A company operating a diagnostic laboratory during the Covid-19 pandemic was fined €5,000 for designating its managing director as the DPO, violating Article 38(6) GDPR.

What happened?

  • The company never notified the DSB of the DPO appointment.
  • The managing director was given dual responsibility as both DPO and company leader.
  • The company argued that combining the roles was more efficient for handling Covid-19 test results and communicating with public entities.
  • The DSB launched an investigation, reviewing company records and conducting an oral hearing.

Why was this a problem?

  • The DSB found that no measures were in place to avoid a conflict of interest.
  • As managing director, the individual had decision-making authority within the organisation, which could create a conflict with the DPO’s obligation to independently monitor compliance.
  • The large-scale processing of health data (Article 9 GDPR) heightened the risk of non-compliance.
  • The company was found to be negligent for failing to inform itself about DPO appointment requirements.

What does it mean for your organisation?

This case serves as an important reminder that appointing a DPO in compliance with the GDPR is not just a formality: it requires careful consideration to ensure independence, expertise, and sufficient resources. The DSB's decision highlights the risks of failing to meet these requirements and demonstrates the potential for regulatory action and fines when a DPO appointment does not align with Articles 37-39 of the GDPR.


Under the GDPR, organisations must ensure their DPO:

  • Operates independently (Article 38(3) GDPR) – The DPO must be able to perform their duties free from influence or interference. They should not be pressured or instructed by senior management when monitoring compliance or handling data protection matters.
  • Avoids conflicts of interest (Article 38(6) GDPR) – The DPO should not hold a position within the organisation that determines the purposes or means of data processing. Senior leadership roles, such as CEO, Managing Director, Head of IT, or Head of HR, create an inherent conflict because they involve making decisions about data processing that the DPO is meant to oversee independently.
  • Has sufficient resources (Article 38(2) GDPR) – Organisations must provide the DPO with adequate support, funding, and access to relevant information and staff. A DPO must have enough time, training, and authority to carry out their role effectively, rather than being burdened with competing responsibilities.
  • Is appointed on the basis of professional qualities (Article 37(5) GDPR) – A DPO must have expert knowledge of data protection law and practices. Simply assigning someone internally without verifying their qualifications can be seen as negligence.

The takeaway? Appointing a DPO with a conflict of interest exposes organisations to regulatory scrutiny, fines, and reputational damage. This case serves as a clear warning that the DPO’s role must be independent, well-resourced, and free from conflicting duties. Organisations should proactively assess DPO appointments to ensure compliance before regulators step in.

Tales from the coalface

Each month, we examine Data Protection issues encountered by our clients. This month, we focus on AI in the workplace, and in particular the data protection and compliance risks of generative AI adoption.

Generative AI tools like Microsoft Copilot, ChatGPT, and other AI assistants are being adopted at an unprecedented rate, with 78% of knowledge workers using their own AI tools and 60% of Fortune 500 companies deploying AI-driven solutions. While these technologies offer significant efficiency and productivity benefits, they also introduce data protection, security, and compliance challenges that organisations cannot afford to ignore.


The Risks of Generative AI in the Workplace:

Organisations rushing to integrate AI into their workflows often encounter several key risks:

  • Shadow AI (Unregulated AI Use) – Employees using unapproved or unvetted AI tools create risks, as organisations may not know what data is being processed, stored, or shared externally.
  • Lack of Transparency in Data Processing – AI tools often retain, learn from, and repurpose inputted data, potentially exposing personal, sensitive, or proprietary business information.
  • Inadvertent Data Leaks – Without clear guidelines, employees may unknowingly input confidential data into AI tools, which could be stored or used for training future models.
  • GDPR and Compliance Challenges – Organisations must ensure that AI processes personal data lawfully, respects data subject rights, and aligns with GDPR principles such as data minimisation, purpose limitation, and transparency.
  • Bias and Decision-Making Risks – AI-generated content may introduce bias, inaccuracies, or lack of explainability, leading to potential discrimination, misinformation, or unfair decision-making in HR, recruitment, and business operations.
  • Regulatory Changes: The EU AI Act – The recently approved EU Artificial Intelligence Act introduces a risk-based classification for AI systems, imposing stricter transparency, documentation, and compliance obligations for high-risk AI applications in areas such as HR, recruitment, finance, and healthcare. Organisations deploying AI must now assess the risk category of their AI tools and implement appropriate safeguards.


Strategies for Managing AI Risks in the Workplace:

To adopt AI responsibly while maintaining compliance, organisations should consider the following best practices:

  1. Establish AI Governance Policies – Develop a clear AI usage policy outlining approved tools, acceptable use, and prohibited data inputs.
  2. Implement Strict Access Controls – Restrict AI usage to verified enterprise solutions with built-in data protection and security safeguards.
  3. Conduct AI Impact Assessments – Evaluate AI tools to ensure they comply with GDPR, AI regulations, and internal security standards.
  4. Monitor and Audit AI Use – Regularly assess AI-generated outputs for accuracy, bias, and compliance.
  5. Train Employees on AI Risks – Educate staff on responsible AI use, including what data should and should not be inputted into AI systems (a minimal input-screening sketch follows this list).
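
As an illustration of point 5, the sketch below screens text for obvious personal data before it reaches an external AI tool. Real deployments would rely on dedicated DLP tooling; the patterns, labels, and example prompt here are hypothetical and catch only the crudest cases.

```python
# Minimal sketch of pre-prompt screening: hold back text containing obvious
# personal data before it is sent to an external AI tool. Real deployments
# would use dedicated DLP tooling; these regexes are deliberately crude.
import re

BLOCKED_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "UK-style phone number": re.compile(r"\b(?:\+44\s?|0)\d{10}\b"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def screen_prompt(text: str) -> list[str]:
    """Return a list of reasons the prompt should be held for review."""
    return [
        f"possible {label} detected"
        for label, pattern in BLOCKED_PATTERNS.items()
        if pattern.search(text)
    ]

prompt = "Summarise this complaint from jane.doe@example.com about her refund."
issues = screen_prompt(prompt)
if issues:
    print("Prompt blocked:", "; ".join(issues))  # route to human review instead
else:
    print("Prompt allowed")
```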

Why this Matters:

Failing to address AI data protection and compliance risks can result in:

  • Data Protection Breaches – Unauthorised AI use can expose sensitive company and personal data to external parties.
  • Regulatory Scrutiny – AI tools that process personal data without a lawful basis or transparency could breach GDPR and lead to fines.
  • EU AI Act Non-Compliance – Organisations using AI in high-risk areas (e.g., HR, finance, legal) may face enforcement action for failing to meet risk assessment and transparency obligations.
  • Reputational Risks – Misuse of AI in decision-making can erode trust, create ethical concerns, and damage brand credibility.

AI is transforming workplaces, but without governance, it can quickly become a compliance and security liability. Organisations must take proactive steps to ensure that AI adoption aligns with data protection laws, ethical standards, and business policies.

If your organisation needs support in assessing AI risks, developing AI governance frameworks, or ensuring compliance with GDPR, Ambit Compliance can help.


Contributors to this newsletter:

Gillian Traynor and Dwayne Morgan
