DPDP Rules vs. AI: Are We Protecting Privacy or Playing Catch-Up?

The Digital Privacy Dilemma in the Age of AI

As Artificial Intelligence (AI) and Machine Learning (ML) redefine digital interactions, India’s Digital Personal Data Protection (DPDP) Rules, 2025, attempt to safeguard privacy. However, do they go far enough? As part of the Scribere program offered by TechReg Bridge, Scribere Samyukta, under the mentorship of Anupam Sanghi, prepared a Quick Guide to the Digital Personal Data Protection Rules, 2025.

Encompassing everything from the procedural history of operationalizing data privacy as a fundamental right to a rundown of the Consent Manager Framework, the Guide provides a comprehensive overview of the key provisions of the Draft Rules, identifies regulatory gaps, and explores potential fixes.

For a deeper dive into the tussle between the DPDP Rules and AI, check out our blog article here.

While the DPDP Rules aim to safeguard personal data, they fall short in addressing key challenges posed by AI and ML. This TB Quest explores these gaps and how they can be bridged, drawing lessons from the EU’s General Data Protection Regulation (GDPR), a global benchmark for data protection.

Governance is crucial in mitigating regulatory grey areas in digital markets, requiring a techno-legal framework with a multi-stakeholder perspective. Anupam Sanghi highlighted these regulatory gaps in a White Paper at the 37th LAWASIA Conference, proposing a hybrid techno-legal approach that integrates legal and technological tools for fair governance.

The Unaddressed AI Challenge

While the DPDP Rules focus on data collection, storage, and consent, they lack clear provisions on automated profiling, AI-driven decision-making, and re-identification risks. These gaps pose significant concerns in an AI-driven world where data privacy is increasingly at risk:

  • AI-Powered Profiling: AI-driven algorithms determine access to financial services, employment opportunities, and even medical treatments. However, the Rules do not specify whether individuals can challenge, opt out of, or demand transparency in such profiling. Unlike Article 22 of the GDPR, which grants individuals the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects, India’s framework remains ambiguous.


  • Re-identification Risks: While anonymization is recognized as a privacy-preserving measure, the Rules do not explicitly regulate AI’s ability to de-anonymize data. Techniques such as pattern recognition and cross-referencing multiple datasets can reconstruct personal identities, putting individuals at risk (a short illustrative sketch of such a linkage attack follows this list). The absence of specific safeguards against re-identification leaves room for potential privacy violations.


  • Accountability Gaps: AI decision-making can perpetuate discrimination and unfair treatment, particularly in credit scoring, hiring, and law enforcement. The lack of mandated fairness audits and transparency measures in the Rules means that biases in AI models may go unchecked. Without independent oversight or mechanisms to contest unfair AI decisions, individuals have limited recourse against algorithmic discrimination.
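
To make the re-identification concern concrete, here is a minimal, hypothetical sketch of a linkage attack in Python (using pandas): an "anonymized" health dataset that retains quasi-identifiers (pincode, birth year, gender) is cross-referenced with a public list containing names, re-attaching identities to the records. All dataset names, fields, and values are illustrative assumptions, not drawn from the Rules or any real data.

```python
# Hypothetical illustration of a linkage (re-identification) attack:
# joining an "anonymized" dataset to a public one on shared quasi-identifiers.
import pandas as pd

# "Anonymized" health records: direct identifiers removed,
# but quasi-identifiers (pincode, birth_year, gender) retained.
health = pd.DataFrame({
    "pincode": ["110001", "560034", "400050"],
    "birth_year": [1984, 1991, 1978],
    "gender": ["F", "M", "F"],
    "diagnosis": ["diabetes", "hypertension", "asthma"],
})

# Public dataset (e.g. an electoral-roll-style list) containing names
# alongside the same quasi-identifiers.
public = pd.DataFrame({
    "name": ["A. Sharma", "R. Iyer", "S. Khan"],
    "pincode": ["110001", "560034", "400050"],
    "birth_year": [1984, 1991, 1978],
    "gender": ["F", "M", "F"],
})

# Cross-referencing the two datasets on quasi-identifiers
# re-attaches identities to supposedly anonymous medical records.
reidentified = health.merge(public, on=["pincode", "birth_year", "gender"])
print(reidentified[["name", "diagnosis"]])
```

Even a handful of coarse attributes can be enough to single out an individual, which is why anonymization alone, without explicit re-identification safeguards, may not be sufficient protection.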

Closing the Gaps

To align with global best practices like the GDPR, the DPDP Rules could adopt:

  • Explicit Regulation of Automated Decisions: Similar to Article 22 of GDPR, individuals should have the right to contest AI-based decisions that significantly impact them (legally or otherwise), with clear guidelines on appeal processes and human oversight requirements.


  • Stronger Protections Against Re-identification: The Rules should introduce robust anonymization standards, requiring periodic audits and re-identification risk assessments when AI processes large datasets. Additionally, stricter encryption controls and differential privacy techniques could enhance safeguards (a minimal differential-privacy sketch follows this list). However, a balanced, well-structured approach is important to keep the compliance burden manageable.


  • Transparency & Accountability in AI/ML Systems: Data Protection Impact Assessments (DPIAs) should be expanded to cover AI-driven profiling and decision-making, with regular bias and fairness audits mandated for algorithms (a simple fairness-audit sketch also follows this list). Further, requiring AI models to maintain explainability mechanisms would help ensure transparency in automated decision-making.
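
As one illustration of the differential-privacy techniques mentioned in the second recommendation, here is a minimal sketch of the standard Laplace mechanism: calibrated noise is added to an aggregate count so that any single individual's presence or absence has only a bounded effect on the published statistic. The epsilon value, query, and data are illustrative assumptions, not prescriptions from the Rules.

```python
# Minimal sketch of the Laplace mechanism from differential privacy:
# release a noisy count so no single record materially changes the output.
import numpy as np

def dp_count(records, predicate, epsilon: float = 1.0) -> float:
    """Return a differentially private count of records matching `predicate`.

    The true count has sensitivity 1 (adding or removing one person changes
    it by at most 1), so Laplace noise with scale 1/epsilon suffices.
    """
    true_count = sum(1 for r in records if predicate(r))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Illustrative usage: count users above a credit-score threshold
# without exposing whether any specific individual is in the data.
users = [{"score": 640}, {"score": 720}, {"score": 810}]
print(dp_count(users, lambda u: u["score"] > 700, epsilon=0.5))
```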
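
The bias and fairness audits proposed in the last recommendation can also begin with very simple checks. The sketch below computes a demographic-parity gap, the difference in approval rates across groups, for a hypothetical credit-scoring model's decisions; the group labels and records are assumptions made purely for illustration.

```python
# Minimal fairness-audit sketch: demographic parity gap between two groups.
# A large gap in approval rates is a signal for deeper investigation,
# not proof of unlawful discrimination on its own.
from collections import defaultdict

def approval_rates(decisions):
    """Approval rate per group for records like {"group": "A", "approved": True}."""
    totals, approved = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        approved[d["group"]] += int(d["approved"])
    return {g: approved[g] / totals[g] for g in totals}

decisions = [
    {"group": "A", "approved": True}, {"group": "A", "approved": True},
    {"group": "A", "approved": False}, {"group": "B", "approved": True},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]

rates = approval_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates, f"demographic parity gap = {gap:.2f}")
```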

The Road Ahead

While the DPDP Rules are a step forward, they risk becoming obsolete before enforcement if AI-related gaps remain unaddressed. Regulations must balance innovation with individual rights, ensuring India’s data protection framework remains robust and future-ready.

As part of the Scribere program, some of the stakeholder discussions we have been a part of include CCAOI Manthan: A Stakeholder Discussion on the Draft Digital Personal Data Protection Rules, 2025, and Medianama: Understanding the Draft Data Protection Rules, 2025. For a complete roundup of the discussions, you can access the summary report here.
