# AI Privacy Inference and Derived Data
## Overview
AI systems routinely generate inferences about individuals — predictions about creditworthiness, health risks, personality traits, political opinions, or behavioural patterns that were never directly provided by the data subject. These AI-derived inferences raise critical privacy questions: Are inferences personal data? When does inference become profiling under GDPR Article 22? What accuracy obligations apply to AI predictions? Can data subjects access, rectify, or object to inferences drawn about them? The CJEU, EDPB, and national DPAs have progressively clarified that inferences are personal data when they relate to an identified or identifiable person, and that GDPR rights extend to derived and inferred data. Cerebrum AI Labs must classify, govern, and provide transparency over all inferences its AI systems generate about individuals.
## Legal Framework for AI Inferences
### When Inferences Are Personal Data
| Criterion | Analysis | Example |
|---|---|---|
| Relates to an identified person | Inference is linked to a specific customer record or user profile | "Customer C-12345 has 78% churn probability" |
| Relates to an identifiable person | Inference can be linked to a person through combination with other data | "User with session token X-789 is likely aged 25-34" |
| Used to evaluate a person | Inference is used to assess, classify, or make decisions about someone | Credit score derived from transaction patterns |
| Has impact on a person | Inference affects how the person is treated or what options are available | Insurance premium adjusted based on predicted health risk |
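The criteria above can be encoded as a simple classification check. This is a minimal sketch, not an established compliance library: the `InferenceRecord` type, its field names, and the `is_personal_data` function are hypothetical illustrations of how an inference inventory might apply the table's tests. Under the GDPR's broad definition, satisfying any one criterion is sufficient to treat the inference as personal data.

```python
from dataclasses import dataclass


@dataclass
class InferenceRecord:
    """Hypothetical record of an AI-generated inference and its linkage/use context."""
    statement: str                     # e.g. "Customer C-12345 has 78% churn probability"
    linked_to_identified_person: bool  # tied to a specific customer record or profile
    linkable_via_other_data: bool      # identifiable when combined with other data
    used_to_evaluate: bool             # used to assess, classify, or decide about someone
    affects_treatment: bool            # changes how the person is treated or their options


def is_personal_data(rec: InferenceRecord) -> bool:
    """Apply the table's criteria: any single satisfied criterion is enough."""
    return any([
        rec.linked_to_identified_person,
        rec.linkable_via_other_data,
        rec.used_to_evaluate,
        rec.affects_treatment,
    ])


# Example from the table: a churn prediction tied to an identified customer
churn = InferenceRecord(
    statement="Customer C-12345 has 78% churn probability",
    linked_to_identified_person=True,
    linkable_via_other_data=False,
    used_to_evaluate=True,
    affects_treatment=False,
)
print(is_personal_data(churn))  # True
```

A purely aggregate inference (e.g. a market-level trend with no linkage, evaluation, or individual impact) would fail all four tests and fall outside this check; in practice, a governance process would record the rationale alongside each classification.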
CJEU C-434/16 (Nowak, 2017): Personal data includes "any information" relating to a data subject — this encompasses opinions, assessments, and inferences, not only factual data directly provided by the individual.