
# Conducting AI System Privacy Assessment

## Overview

AI systems that process personal data require a combined privacy and conformity assessment addressing both GDPR obligations and the EU AI Act (Regulation 2024/1689). This skill integrates the GDPR Art. 35 DPIA framework with AI-specific risk assessment, encompassing training data lawfulness, Art. 22 automated decision-making implications, algorithmic fairness, and the NIST AI Risk Management Framework MAP function. The assessment methodology draws from the EDPB-EDPS Joint Opinion 5/2021 on the AI Act proposal and subsequent EDPB Guidelines 06/2025 on AI and data protection.
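
The combined GDPR/AI Act scoping described above can be sketched as a simple screening record. This is a minimal illustration only: the class, field names, and trigger logic below are assumptions for demonstration, not part of any official EDPB or NIST framework.

```python
from dataclasses import dataclass, field

@dataclass
class AIAssessmentScope:
    """Hypothetical screening record combining GDPR DPIA and AI Act triggers."""
    system_name: str
    processes_personal_data: bool
    automated_decisions_art22: bool  # solely automated, legal or similarly significant effect
    ai_act_high_risk: bool           # e.g. an Annex III use case under Regulation 2024/1689
    findings: list[str] = field(default_factory=list)

    def dpia_required(self) -> bool:
        # Art. 35(3)(a) GDPR: systematic and extensive automated evaluation
        # producing legal or similarly significant effects triggers a DPIA.
        return self.processes_personal_data and (
            self.automated_decisions_art22 or self.ai_act_high_risk
        )

scope = AIAssessmentScope(
    system_name="credit-scoring-model",
    processes_personal_data=True,
    automated_decisions_art22=True,
    ai_act_high_risk=True,
)
print(scope.dpia_required())  # True
```

In practice the screening questions come from the supervisory authority's DPIA blacklist and Annex III of the AI Act; the point of the sketch is only that both regimes feed one combined scoping decision.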

## Legal Framework

### GDPR Provisions Applicable to AI

| Provision | Application to AI Systems |
|---|---|
| Art. 5(1)(a) — Lawfulness, fairness, transparency | AI processing must have a lawful basis; the logic of AI decisions must be explainable to data subjects |
| Art. 5(1)(b) — Purpose limitation | Training data collected for one purpose cannot be used to train AI models for an incompatible purpose without a further lawful basis |
| Art. 5(1)(c) — Data minimisation | AI models should not require more personal data than necessary; synthetic data and anonymisation should be considered |
| Art. 5(1)(d) — Accuracy | AI outputs affecting individuals must be accurate; model drift must be monitored |
| Art. 6(1) — Lawful basis | Each stage of AI processing (data collection, model training, inference, output use) requires a lawful basis |
| Art. 9 — Special categories | Training on health, biometric, genetic, racial, political, religious, sexual orientation, or trade union data requires an Art. 9(2) exemption |
| Art. 13-14 — Transparency | Privacy notices must disclose the existence of automated decision-making, meaningful information about the logic involved, and the significance and envisaged consequences |
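
For assessment tooling, the provision table above can be encoded as a machine-readable checklist. The question wording below is an illustrative paraphrase of each row, not official regulatory text, and the helper function is a hypothetical example.

```python
# Illustrative mapping of GDPR provisions to assessment questions;
# the wording paraphrases the table above and is not official EDPB text.
GDPR_AI_CHECKLIST = {
    "Art. 5(1)(a)": "Is there a lawful basis, and can the decision logic be explained to data subjects?",
    "Art. 5(1)(b)": "Is model training compatible with the original collection purpose?",
    "Art. 5(1)(c)": "Could synthetic or anonymised data reduce the personal data required?",
    "Art. 5(1)(d)": "Are outputs affecting individuals accurate, and is model drift monitored?",
    "Art. 6(1)":    "Does each stage (collection, training, inference, output use) have a lawful basis?",
    "Art. 9":       "If special-category data is processed, which Art. 9(2) exemption applies?",
    "Art. 13-14":   "Do privacy notices disclose automated decision-making and its logic?",
}

def open_items(answers: dict[str, bool]) -> list[str]:
    """Return the provisions whose checklist question is unanswered or failed."""
    return [prov for prov in GDPR_AI_CHECKLIST if not answers.get(prov, False)]

answers = {prov: True for prov in GDPR_AI_CHECKLIST}
answers["Art. 9"] = False  # no Art. 9(2) exemption identified yet
print(open_items(answers))  # ['Art. 9']
```

Keeping the checklist as data rather than prose makes it easy to track open findings per provision across successive assessment iterations.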