# ai-dpia

Data Protection Impact Assessment for AI/ML Systems
## Overview
AI and ML systems present privacy challenges that traditional DPIA methodologies do not adequately address. The EDPB Guidelines 04/2025 on processing personal data through AI systems establish a specialized framework that supplements the general DPIA requirements of GDPR Article 35 and the WP248rev.01 guidelines. AI-specific DPIAs must evaluate the entire ML pipeline — from training data collection through model deployment and inference — assessing risks that emerge from statistical learning, emergent model behaviours, and the opacity of algorithmic decision-making. This skill implements the EDPB's AI-specific DPIA methodology, integrated with the EU AI Act's risk classification framework.
## AI-Specific DPIA Triggers

### Mandatory DPIA Triggers for AI Systems
Any AI processing operation that meets one or more of the following criteria requires a DPIA before deployment:
| Trigger | Legal Basis | Description |
|---|---|---|
| AI-based profiling with legal effects | Art. 35(3)(a) GDPR | ML models that produce decisions with legal or similarly significant effects on natural persons (credit scoring, hiring, insurance pricing) |
| Training on special category data | Art. 35(3)(b) GDPR | Models trained on health, biometric, genetic, racial, political, religious, sexual orientation, or trade union data at scale |
| AI-powered surveillance | Art. 35(3)(c) GDPR | Computer vision, facial recognition, behavioural analytics, or anomaly detection in public spaces |
| High-risk AI systems | Art. 6 EU AI Act | Systems listed in Annex III of the AI Act (biometric identification, critical infrastructure, employment, law enforcement, migration, justice) |
| Foundation models processing personal data | EDPB Guidelines 04/2025 | LLMs and foundation models trained on datasets containing personal data, regardless of downstream use |
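The trigger check above can be sketched as a simple screening function. This is a minimal illustration, not an official schema: the `AISystemProfile` fields and the `dpia_required` helper are hypothetical names chosen here to mirror the table rows.

```python
from dataclasses import dataclass


@dataclass
class AISystemProfile:
    """Hypothetical descriptor of an AI system under assessment.

    Field names are illustrative; map them to your own intake
    questionnaire when adapting this sketch.
    """
    profiling_with_legal_effects: bool = False   # Art. 35(3)(a) GDPR
    special_category_training_data: bool = False # Art. 35(3)(b) GDPR
    public_space_surveillance: bool = False      # Art. 35(3)(c) GDPR
    annex_iii_high_risk: bool = False            # Art. 6 EU AI Act
    foundation_model_on_personal_data: bool = False  # EDPB Guidelines 04/2025


# Each entry mirrors one row of the trigger table:
# (profile attribute, trigger name, legal basis).
TRIGGERS = [
    ("profiling_with_legal_effects",
     "AI-based profiling with legal effects", "Art. 35(3)(a) GDPR"),
    ("special_category_training_data",
     "Training on special category data", "Art. 35(3)(b) GDPR"),
    ("public_space_surveillance",
     "AI-powered surveillance", "Art. 35(3)(c) GDPR"),
    ("annex_iii_high_risk",
     "High-risk AI system", "Art. 6 EU AI Act"),
    ("foundation_model_on_personal_data",
     "Foundation model processing personal data", "EDPB Guidelines 04/2025"),
]


def dpia_required(profile: AISystemProfile) -> list[tuple[str, str]]:
    """Return the (trigger, legal basis) pairs that apply.

    A non-empty result means a DPIA must be completed before deployment.
    """
    return [(name, basis)
            for attr, name, basis in TRIGGERS
            if getattr(profile, attr)]
```

For example, `dpia_required(AISystemProfile(annex_iii_high_risk=True))` returns a single hit citing Art. 6 EU AI Act, while an empty profile returns no triggers. A real screening tool would of course add free-text justification and reviewer sign-off; this only captures the mandatory-trigger logic.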