PRD-STD-014: AI Product Privacy & Data Rights

Standard ID: PRD-STD-014 Version: 1.0 Status: Active Compliance Level: Level 2 (Managed) Effective Date: 2026-02-22 Last Reviewed: 2026-02-22

How To Use This Standard

This page is the normative source of requirements for this control area. Use it to define policy, evidence expectations, and audit/compliance criteria.

For implementation and rollout support:

  • Use the Compliance Level metadata on this page to sequence adoption with other PRD-STDs.

1. Purpose

This standard defines mandatory privacy and data rights controls for AI-powered products that process personal data, conversation logs, behavioral signals, or user-generated content through AI inference, fine-tuning, or evaluation pipelines. Without explicit privacy controls, AI products risk unauthorized data processing, cross-border transfer violations, inability to honor erasure requests, and non-compliance with GDPR, PDPL, CCPA, and emerging AI-specific privacy regulations.

2. Scope

This standard applies to:

  • Any AI product feature that processes personal data or user-generated content for inference, training, fine-tuning, evaluation, or analytics
  • Conversational AI, recommendation systems, classification services, content generation, and any AI feature where user data enters the inference pipeline
  • Both first-party AI features and third-party AI service integrations

This standard does not replace PRD-STD-011 (Model & Data Governance). It adds privacy-specific controls required for AI products processing personal data.

3. Definitions

  • Data Protection Impact Assessment (DPIA): A structured assessment of privacy risks arising from AI data processing, required before launching AI features that process personal data at scale or make automated decisions
  • Data Processing Agreement (DPA): A contractual agreement governing the processing of personal data by AI services, including sub-processor obligations and data handling requirements
  • Cross-Border Data Transfer: The movement of personal data across national or regional boundaries for AI processing, subject to adequacy determinations and transfer safeguards
  • Right to Erasure: The data subject's right to request deletion of their personal data, including data used in AI training, fine-tuning, or evaluation datasets
  • Machine Unlearning: Techniques to remove the influence of specific data points from trained models without full retraining, or verified exclusion through retraining from filtered datasets
  • Consent Record: A timestamped, auditable record of a data subject's consent for specific AI data processing purposes
  • Automated Decision: A decision made by an AI system without meaningful human intervention that produces legal effects or similarly significant effects on individuals
  • Purpose Limitation: The principle that personal data collected for a specific AI purpose MUST NOT be repurposed for materially different AI processing without additional legal basis

4. Requirements

4.1 Privacy-by-Design for AI Features

MANDATORY

REQ-014-01: A Data Protection Impact Assessment (DPIA) MUST be completed before launching any Tier 2 or Tier 3 AI feature that processes personal data at scale or makes automated decisions affecting individuals.

REQ-014-02: AI features MUST implement data minimization: inference inputs MUST contain only the minimum personal data necessary for the AI task.

REQ-014-03: Privacy requirements MUST be documented as first-class acceptance criteria in the feature specification, not added retrospectively.

REQ-014-04: Personal data MUST be pseudonymized or anonymized before use in AI evaluation, benchmarking, or debugging datasets unless explicit consent or legal basis exists for identifiable data.
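The minimization and pseudonymization requirements above can be sketched as a pre-inference filter. This is an illustrative example, not a mandated implementation: the field names, the salted-hash pseudonymization scheme, and the email-only redaction pattern are assumptions, and a production system would cover more PII categories.

```python
import hashlib
import re

# Illustrative sketch of REQ-014-02 and REQ-014-04: keep only the fields
# the AI task needs, pseudonymize the user identifier, and redact emails
# before the record enters an inference or evaluation pipeline.

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def minimize_for_inference(record: dict, needed_fields: set, salt: str) -> dict:
    """Drop unneeded fields; pseudonymize user_id; redact emails in text."""
    out = {k: v for k, v in record.items() if k in needed_fields}
    if "user_id" in out:
        # One-way salted hash stands in for the raw identifier.
        digest = hashlib.sha256((salt + out["user_id"]).encode()).hexdigest()
        out["user_id"] = digest[:16]
    if "text" in out:
        out["text"] = EMAIL_RE.sub("[EMAIL_REDACTED]", out["text"])
    return out
```

A real deployment would typically run a dedicated PII-detection service at this boundary; the point of the sketch is that minimization happens before, not after, data reaches the model.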

4.2 Data Processing Agreements for AI Inference

MANDATORY

REQ-014-05: When using third-party model providers or AI inference services, a Data Processing Agreement (DPA) MUST be in place before personal data is transmitted.

REQ-014-06: DPAs MUST specify data retention limits, sub-processor notification obligations, audit rights, and incident notification timelines.

REQ-014-07: The organization MUST maintain an inventory of all AI sub-processors with documented data flows, processing purposes, and contractual controls.

RECOMMENDED

REQ-014-08: Organizations SHOULD negotiate DPA terms that prohibit third-party model providers from using customer data for model training unless explicit opt-in consent is obtained.
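The sub-processor inventory and DPA gate in this section can be combined so that transmission is refused unless a signed DPA is on file. The structure below is a hedged sketch: the class names, fields, and the training-use flag are illustrative choices, not requirements of this standard.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Illustrative inventory for REQ-014-07, with a gate enforcing REQ-014-05:
# no personal data is transmitted to a sub-processor without a signed DPA.

@dataclass
class SubProcessor:
    name: str
    processing_purpose: str
    data_categories: list
    processing_location: str
    dpa_signed: Optional[date] = None
    # Should stay False absent explicit opt-in consent (REQ-014-08).
    trains_on_customer_data: bool = False

class AIProcessorInventory:
    def __init__(self):
        self._registry = {}

    def register(self, sp: SubProcessor) -> None:
        self._registry[sp.name] = sp

    def may_transmit_personal_data(self, name: str) -> bool:
        """Unknown sub-processors and missing DPAs both block transmission."""
        sp = self._registry.get(name)
        return sp is not None and sp.dpa_signed is not None
```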

4.3 Cross-Border Data Transfer Controls

MANDATORY

REQ-014-09: Before transmitting personal data to AI services in another jurisdiction, a transfer impact assessment MUST be completed documenting the legal basis, adequacy status, and supplementary safeguards.

REQ-014-10: The organization MUST maintain a data-flow inventory for AI inference pipelines that maps data origins, processing locations, and transfer mechanisms.

REQ-014-11: Standard contractual clauses, binding corporate rules, or equivalent transfer mechanisms MUST be in place for cross-border AI data flows where no adequacy determination exists.

RECOMMENDED

REQ-014-12: Organizations SHOULD implement data residency controls that allow AI inference to be routed to region-specific infrastructure aligned with data subject jurisdiction.
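The residency control recommended above amounts to a routing decision keyed on the data subject's jurisdiction. A minimal sketch, assuming hypothetical region codes, endpoint URLs, and an adequacy-based fallback table (none of which are defined by this standard):

```python
# Illustrative sketch of REQ-014-12: route inference to region-pinned
# endpoints aligned with the data subject's jurisdiction, and fail closed
# when no compliant route exists.

REGION_ENDPOINTS = {
    "EU": "https://inference.eu.example.internal",
    "UK": "https://inference.uk.example.internal",
    "US": "https://inference.us.example.internal",
}

# Jurisdictions routed to another region under an adequacy determination
# (hypothetical mapping for illustration only).
ADEQUACY_FALLBACKS = {"CH": "EU"}

def route_inference(subject_region: str) -> str:
    """Return a region-aligned endpoint, or raise if no compliant route exists."""
    if subject_region in REGION_ENDPOINTS:
        return REGION_ENDPOINTS[subject_region]
    fallback = ADEQUACY_FALLBACKS.get(subject_region)
    if fallback is not None:
        return REGION_ENDPOINTS[fallback]
    raise ValueError(
        f"No compliant inference route for region {subject_region!r}; "
        "complete a transfer impact assessment first (REQ-014-09)"
    )
```

Failing closed is the design choice worth noting: an unmapped jurisdiction blocks the request rather than silently defaulting to an arbitrary region.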

4.4 Retention & Deletion for AI Data

MANDATORY

REQ-014-13: Conversation logs, inference inputs, and AI-generated outputs containing personal data MUST have defined retention periods aligned with the data processing purpose and regulatory requirements.

REQ-014-14: Personal data used for model fine-tuning MUST be subject to documented retention governance, including criteria for when fine-tuning datasets must be refreshed, anonymized, or deleted.

REQ-014-15: When a data subject exercises the right to erasure, the organization MUST delete their personal data from inference logs, evaluation datasets, and fine-tuning datasets within regulatory timelines (typically 30 days).

REQ-014-16: If personal data has been incorporated into a trained model through fine-tuning, the organization MUST either perform verified machine unlearning, retrain the model from a filtered dataset, or document the technical infeasibility with compensating controls and regulatory notification.

REQ-014-17: Deletion verification MUST be documented with evidence that confirms removal from all applicable data stores and pipelines.
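The erasure and verification requirements above imply a workflow that sweeps every registered data store and emits auditable evidence. The sketch below assumes each store exposes hypothetical `delete` and `count` operations; real pipelines (inference logs, evaluation sets, fine-tuning corpora) would each need an adapter.

```python
from datetime import datetime, timezone

# Illustrative erasure workflow for REQ-014-15 and REQ-014-17: delete the
# subject's data from each registered store, re-check that nothing remains,
# and return a timestamped evidence record for the audit trail.

class ErasureWorkflow:
    def __init__(self, stores: dict):
        # stores maps a store name to an adapter with:
        #   delete(subject_id) -> int   (records removed)
        #   count(subject_id)  -> int   (records remaining)
        self.stores = stores

    def erase(self, subject_id: str) -> dict:
        evidence = {
            "subject_id": subject_id,
            "completed_at": datetime.now(timezone.utc).isoformat(),
            "stores": {},
        }
        for name, store in self.stores.items():
            removed = store.delete(subject_id)
            # Verification step (REQ-014-17): confirm the store is now empty
            # for this subject rather than trusting the delete call.
            evidence["stores"][name] = {
                "records_removed": removed,
                "verified_empty": store.count(subject_id) == 0,
            }
        evidence["all_verified"] = all(
            s["verified_empty"] for s in evidence["stores"].values()
        )
        return evidence
```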

4.5 Consent Management

MANDATORY

REQ-014-18: When consent is the legal basis for AI data processing, consent MUST be collected with granular purpose specification (e.g., separate consent for inference vs. training vs. analytics).

REQ-014-19: Consent withdrawal MUST be technically enforceable within 72 hours and MUST trigger downstream data processing cessation for the withdrawn purpose.

REQ-014-20: Consent records MUST be retained for the duration of data processing plus the applicable regulatory retention period.

RECOMMENDED

REQ-014-21: Organizations SHOULD implement consent preference centers that allow data subjects to view and modify AI-specific processing consents.
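The consent requirements in this section can be modeled as an append-only ledger keyed by subject and purpose, where the latest event wins. The purpose taxonomy ("inference", "training", "analytics") follows the example in REQ-014-18; everything else in the sketch is an illustrative assumption.

```python
from datetime import datetime, timezone

# Illustrative consent ledger for REQ-014-18 and REQ-014-19: consent is
# recorded per purpose with a timestamp, records are append-only (which
# also supports the retention duty in REQ-014-20), and a withdrawal event
# immediately overrides earlier grants for that purpose.

class ConsentLedger:
    def __init__(self):
        self._events = []  # append-only, auditable history

    def record(self, subject_id: str, purpose: str, granted: bool) -> None:
        self._events.append({
            "subject_id": subject_id,
            "purpose": purpose,   # granular purpose, never blanket consent
            "granted": granted,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })

    def has_consent(self, subject_id: str, purpose: str) -> bool:
        """Latest event for this subject and purpose wins; default is no consent."""
        for ev in reversed(self._events):
            if ev["subject_id"] == subject_id and ev["purpose"] == purpose:
                return ev["granted"]
        return False
```

Downstream pipelines would call `has_consent` at processing time, which is what makes withdrawal technically enforceable rather than merely recorded.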

4.6 Automated Decision-Making Rights

MANDATORY

REQ-014-22: AI features that make automated decisions producing legal effects or similarly significant effects on individuals MUST provide a mechanism for data subjects to request human review.

REQ-014-23: When automated decisions are made, the organization MUST provide meaningful information about the logic involved, the significance of the processing, and the envisaged consequences to affected data subjects.

RECOMMENDED

REQ-014-24: Organizations SHOULD implement explainability mechanisms that provide decision-specific reasoning rather than generic descriptions of the AI system.
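One way to operationalize the disclosure and human-review requirements above is to attach a disclosure record to every automated decision. The field names, the factor/weight representation, and the review URL below are hypothetical illustrations, not a format mandated by this standard.

```python
# Illustrative disclosure record for REQ-014-22 and REQ-014-23: each
# automated decision carries decision-specific reasoning, a statement of
# consequences, and a route to request human review.

def build_decision_disclosure(decision: str,
                              top_factors: list,
                              consequence: str,
                              review_url: str) -> dict:
    """Assemble the information owed to the data subject for one decision."""
    return {
        "decision": decision,
        # Decision-specific factors (per REQ-014-24), not a generic
        # description of the AI system.
        "key_factors": [{"factor": f, "weight": w} for f, w in top_factors],
        "consequence": consequence,
        "human_review": {"available": True, "request_url": review_url},
    }
```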

5. Implementation Guidance

Minimum Privacy Control Pack

Teams SHOULD establish:

  1. DPIA template for AI features with risk scoring matrix
  2. DPA addendum template for AI sub-processors
  3. Cross-border transfer impact assessment template
  4. AI data retention schedule by data type and purpose
  5. Consent collection UI patterns for AI processing
  6. Automated decision disclosure template
  7. Erasure request processing workflow with verification checklist

Example DPIA Summary

AI Feature DPIA Summary
Feature: Customer Intent Classification
Data Subjects: End-user customers (estimated 50,000/month)
Personal Data Categories: Conversation text, metadata, user identifiers
Processing Purpose: Real-time intent classification for routing
Legal Basis: Legitimate interest (documented balancing test)
Risk Level: Moderate (Tier 2)
Key Risks: Misclassification leading to service denial, PII in inference logs
Mitigations: PII redaction pre-inference, 30-day log retention, human escalation path
Residual Risk: Low (with mitigations)
Approved By: DPO + Product Owner
Date: 2026-02-22
Review Date: 2026-08-22

Minimum Operational Metrics

Track at least:

  • DPIA completion rate for qualifying AI features
  • Erasure request fulfillment time (median, P95)
  • Consent collection rate by AI processing purpose
  • Cross-border transfer inventory completeness
  • DPA coverage for active AI sub-processors
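The fulfillment-time metric above needs a percentile convention to be comparable across teams. A minimal sketch using the nearest-rank method for P95 (the choice of convention is an assumption; any documented one works if applied consistently):

```python
import math
import statistics

def fulfillment_percentiles(days: list) -> tuple:
    """Return (median, P95) erasure turnaround in days, nearest-rank P95."""
    ordered = sorted(days)
    # Nearest-rank P95: the value at ceil(0.95 * n), 1-indexed.
    p95 = ordered[math.ceil(0.95 * len(ordered)) - 1]
    return statistics.median(ordered), p95
```

Tracking P95 alongside the median matters here because a single slow request can breach the regulatory deadline even when the median looks healthy.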

6. Exceptions & Waiver Process

Waivers are limited to non-privacy procedural controls and MUST include:

  • business justification
  • compensating controls
  • named approver
  • expiration date (maximum 30 days)

No waivers are permitted for:

  • missing DPIA for Tier 2/3 features processing personal data at scale
  • absent DPA with active AI sub-processors
  • inability to process erasure requests
  • missing automated decision disclosure for qualifying AI features

7. Revision History

  • Version 1.0, 2026-02-22, AEEF Standards Committee: Initial release