EU AI Act Regulatory Profile

This profile maps the requirements of Regulation (EU) 2024/1689 (the EU AI Act) to AEEF controls, enabling organizations deploying AI systems in or affecting the European Union to align their AEEF implementation with EU legal obligations. The EU AI Act entered into force on 1 August 2024 with phased enforcement through 2027.

Assessment date: February 22, 2026

Applicability

This profile applies when any of the following conditions exist:

  • AI systems are placed on the market or put into service in the European Union
  • AI system outputs affect natural persons located in the EU, regardless of provider location
  • The organization is established in the EU or uses an EU-based authorized representative
  • The AI system falls under General-Purpose AI (GPAI) model obligations

EU AI Act Risk Classification

| EU AI Act Category | Description | AEEF Risk Tier Mapping | Key Obligations |
| --- | --- | --- | --- |
| Unacceptable Risk (Prohibited) | Art. 5 prohibited practices | No AEEF equivalent — MUST NOT deploy | Pre-deployment screening required |
| High-Risk | Annex III listed systems | AEEF Tier 3 | Full compliance with Chapter III obligations |
| Limited Risk | Transparency obligations | AEEF Tier 2 | Art. 50 transparency and disclosure |
| Minimal Risk | Voluntary codes of practice | AEEF Tier 1 | Voluntary AEEF governance recommended |

EU AI Act Overlay Control Set

| Control ID | EU AI Act Article | Control Title | AEEF Mapping | Priority |
| --- | --- | --- | --- | --- |
| EU-AI-01 | Art. 5 | Prohibited Practices Screening | PRD-STD-010 REQ-010-02 | Immediate |
| EU-AI-02 | Art. 6, Annex III | High-Risk Classification | PRD-STD-010 REQ-010-01 | Immediate |
| EU-AI-03 | Art. 9 | Risk Management System | Pillar 2 + PRD-STD-010 | High |
| EU-AI-04 | Art. 10 | Data Governance | PRD-STD-011, PRD-STD-014 | High |
| EU-AI-05 | Art. 11, Annex IV | Technical Documentation | PRD-STD-005, PRD-STD-011 | High |
| EU-AI-06 | Art. 12 | Record-Keeping and Logging | Pillar 2 Retention Policy | High |
| EU-AI-07 | Art. 13 | Transparency to Deployers | PRD-STD-010, PRD-STD-014 | High |
| EU-AI-08 | Art. 14 | Human Oversight | Pillar 1 Human-in-the-Loop | High |
| EU-AI-09 | Art. 15 | Accuracy, Robustness, Cybersecurity | PRD-STD-003, PRD-STD-004, PRD-STD-007 | High |
| EU-AI-10 | Art. 43 | Conformity Assessment | Pillar 5 Maturity + ISO 42001 | Medium |
| EU-AI-11 | Art. 49, 71 | EU Database Registration | New obligation | Medium |
| EU-AI-12 | Art. 72 | Post-Market Monitoring | Transformation Lifecycle | High |
| EU-AI-13 | Art. 73 | Serious Incident Reporting | Pillar 2 Incident Response | High |
| EU-AI-14 | Art. 53-56 | GPAI Model Obligations | PRD-STD-011, PRD-STD-008 | High |
| EU-AI-15 | Art. 50 | AI-Generated Content Transparency | Pillar 2 Code Provenance | Medium |

EU-AI-01: Prohibited Practices Screening (Art. 5)

Organizations MUST screen AI systems against the prohibited practices list before deployment. Systems that perform social scoring, exploit vulnerabilities of specific groups, use real-time remote biometric identification in public spaces (with limited exceptions), or engage in any other practice prohibited by Art. 5 MUST NOT be deployed.

AEEF Mapping: Extend PRD-STD-010 REQ-010-02 policy boundaries to include an explicit Art. 5 screening checklist.
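The screening gate can be expressed as a small pre-deployment check that also produces an audit record. The following is an illustrative sketch only: the practice labels and the `ScreeningRecord` type are assumptions for demonstration, not AEEF identifiers or terms from the Act.

```python
# Hypothetical Art. 5 pre-deployment screening gate (illustrative sketch).
from dataclasses import dataclass, field
from datetime import date

# Shorthand labels for Art. 5 prohibited practices (not official identifiers).
ART_5_PROHIBITED = {
    "social_scoring",
    "vulnerability_exploitation",
    "realtime_remote_biometric_id_public",  # limited exceptions apply
    "subliminal_manipulation",
}

@dataclass
class ScreeningRecord:
    system_id: str
    flagged_practices: set = field(default_factory=set)
    screened_on: date = field(default_factory=date.today)

    @property
    def may_deploy(self) -> bool:
        # Any overlap with the Art. 5 list blocks deployment outright.
        return not (self.flagged_practices & ART_5_PROHIBITED)
```

A system flagged for any listed practice fails the gate, and the retained record doubles as evidence that screening took place before deployment.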

EU-AI-02: High-Risk Classification (Art. 6, Annex III)

Organizations MUST classify each AI system against Annex III high-risk categories. Classification MUST be documented with rationale, reviewing body, and date. Annex III categories include: biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, and administration of justice.

AEEF Mapping: Extend PRD-STD-010 REQ-010-01 risk tiering with EU AI Act classification determination.

EU-AI-03: Risk Management System (Art. 9)

High-risk AI systems MUST implement a continuous risk management system that identifies, analyzes, evaluates, and mitigates risks throughout the lifecycle. The system MUST be proportionate to the risks and regularly updated.

AEEF Mapping: Security Risk Framework combined with PRD-STD-010 safety controls.

EU-AI-04: Data Governance (Art. 10)

Training, validation, and testing datasets MUST meet quality criteria: relevance, representativeness, absence of errors, and completeness. Bias examination procedures MUST be documented. Data governance practices MUST address data collection, preparation, and annotation.

AEEF Mapping: PRD-STD-011 and PRD-STD-014 data governance controls, extended with Training Data Governance.

EU-AI-05: Technical Documentation (Art. 11, Annex IV)

High-risk systems MUST maintain technical documentation covering: general description, design specifications, development process, monitoring and control measures, risk management measures, validation and testing results, and operational information.

AEEF Mapping: PRD-STD-005 documentation requirements combined with PRD-STD-011 model cards.

EU-AI-06: Record-Keeping and Logging (Art. 12)

High-risk AI systems MUST enable automatic logging of events relevant to identifying risks and facilitating post-market monitoring. Logs MUST be retained for the system's lifecycle or the minimum regulatory retention period.

AEEF Mapping: Retention & Audit Policy retention controls.
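Art. 12-style event logging can be implemented as structured records carrying an explicit retention marker. A minimal sketch, assuming a Python deployment; the field names, event types, and retention label are illustrative assumptions, not Act-mandated schema.

```python
# Sketch of Art. 12-style structured audit logging (field names illustrative).
import json
import logging
import sys
from datetime import datetime, timezone

logger = logging.getLogger("ai_system.audit")
handler = logging.StreamHandler(sys.stdout)   # production: append-only store
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def log_event(system_id: str, event_type: str, detail: dict) -> dict:
    """Emit one audit record as JSON and return it for further handling."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "event": event_type,              # e.g. inference, override, anomaly
        "detail": detail,
        "retention": "system_lifetime",   # retention marker per this control
    }
    logger.info(json.dumps(record))
    return record

entry = log_event("credit-scorer-v3", "inference", {"latency_ms": 42})
```

Tagging each record with its retention class at write time makes the lifecycle-long retention requirement enforceable by the log store rather than by convention.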

EU-AI-07: Transparency and Information to Deployers (Art. 13)

High-risk systems MUST be designed for sufficient transparency for deployers to interpret outputs. Instructions for use MUST include: intended purpose, accuracy levels, known limitations, human oversight measures, and expected operational lifetime.

AEEF Mapping: PRD-STD-010 trust controls and PRD-STD-014 REQ-014-23 automated decision disclosure.

EU-AI-08: Human Oversight (Art. 14)

High-risk systems MUST be designed for effective human oversight including the ability to: understand system capabilities and limitations, monitor operation, interpret outputs, decide not to use or override outputs, and interrupt operation.

AEEF Mapping: Pillar 1 Human-in-the-Loop controls.
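The override and interrupt capabilities required by Art. 14 can be modeled as a thin wrapper around the model call. A hypothetical sketch; the class and method names are assumptions, not part of AEEF or the Act.

```python
# Illustrative Art. 14 oversight hooks: override and interrupt (hypothetical).
class OversightWrapper:
    """Wraps a model callable so a human overseer can override or halt it."""

    def __init__(self, model_fn):
        self.model_fn = model_fn
        self.halted = False

    def halt(self) -> None:
        # Art. 14 "stop button": interrupt the system's operation.
        self.halted = True

    def predict(self, x, reviewer_override=None):
        if self.halted:
            raise RuntimeError("system interrupted by human overseer")
        output = self.model_fn(x)
        # The reviewer may substitute their own decision for the model output.
        return reviewer_override if reviewer_override is not None else output
```

The key design point is that both controls live outside the model itself, so oversight remains effective regardless of model behavior.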

EU-AI-09: Accuracy, Robustness, and Cybersecurity (Art. 15)

High-risk systems MUST achieve appropriate levels of accuracy, robustness, and cybersecurity. Resilience against errors, faults, and manipulation attempts MUST be addressed through technical redundancy and security measures.

AEEF Mapping: PRD-STD-003 testing, PRD-STD-004 security scanning, and PRD-STD-007 quality gates.

EU-AI-10: Conformity Assessment (Art. 43)

High-risk systems MUST undergo conformity assessment before placement on the market. Internal self-assessment is permitted for most Annex III systems; third-party assessment by a notified body is required for biometric systems unless harmonised standards have been applied in full.

AEEF Mapping: ISO 42001 Certification Readiness and Pillar 5 Maturity Assessment.

EU-AI-11: EU Database Registration (Art. 49, 71)

Providers and deployers of high-risk AI systems MUST register the system in the EU database before market placement. Registration MUST include system description, intended purpose, conformity status, and member states of deployment.

AEEF Mapping: No direct AEEF equivalent — new compliance obligation. Organizations MUST add EU database registration to their AI product launch checklist.

EU-AI-12: Post-Market Monitoring (Art. 72)

Providers MUST establish a post-market monitoring system proportionate to the nature and risks. The system MUST actively and systematically collect, document, and analyze data on performance throughout the AI system's lifetime.

AEEF Mapping: Production Monitoring & Drift lifecycle controls.

EU-AI-13: Serious Incident Reporting (Art. 73)

Providers MUST report serious incidents to market surveillance authorities without undue delay and no later than 15 days after becoming aware of the incident (10 days in the event of a death; 2 days for a widespread infringement or serious and irreversible disruption of critical infrastructure). Serious incidents include death, serious damage to health, property, or the environment, and infringement of fundamental rights obligations.

AEEF Mapping: Incident Response procedures, extended with EU-specific reporting timelines and authority notification.
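The tiered reporting windows of Art. 73 (15 days generally, 10 for a death, 2 for a widespread infringement) lend themselves to a simple deadline calculator in the incident-response runbook. A sketch; the classification labels are illustrative shorthand.

```python
# Hypothetical Art. 73 reporting-deadline helper (labels are shorthand).
from datetime import date, timedelta

# Maximum reporting windows in days, per incident classification.
REPORTING_DAYS = {
    "serious_incident": 15,
    "death": 10,
    "widespread_infringement": 2,
}

def report_deadline(awareness: date, kind: str) -> date:
    """Latest permissible authority-notification date for an incident."""
    return awareness + timedelta(days=REPORTING_DAYS[kind])

assert report_deadline(date(2026, 3, 1), "serious_incident") == date(2026, 3, 16)
```

Note these are outer limits: reporting is still required without undue delay, so the computed date is a breach threshold, not a target.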

EU-AI-14: General-Purpose AI Model Obligations (Art. 53-56)

GPAI model providers MUST: maintain technical documentation, provide information to downstream providers, comply with copyright law, and publish a training content summary. Systemic risk GPAI models face additional obligations including adversarial testing and incident reporting to the AI Office.

AEEF Mapping: PRD-STD-011 model documentation and PRD-STD-008 supply chain controls.

EU-AI-15: Transparency for AI-Generated Content (Art. 50)

AI systems generating synthetic content (text, audio, image, or video) MUST mark outputs as artificially generated in a machine-readable format. Deployers of emotion recognition or biometric categorization systems MUST inform exposed persons. AI-generated deepfakes MUST be labeled as such.

AEEF Mapping: Extend Code Provenance principles to AI product output marking.
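Machine-readable marking can be as simple as attaching a provenance record to each generated output. A minimal sketch, assuming a Python service; the payload schema and field names are assumptions for illustration, and a real deployment would follow an established provenance standard such as C2PA rather than an ad hoc format.

```python
# Illustrative machine-readable "AI-generated" marker (schema is hypothetical).
from datetime import datetime, timezone

def mark_output(text: str, model_id: str) -> dict:
    """Bundle generated content with a machine-readable provenance record."""
    return {
        "content": text,
        "provenance": {
            "ai_generated": True,          # Art. 50 machine-readable flag
            "generator": model_id,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }

payload = mark_output("Draft summary text.", "internal-llm-v1")
```

Emitting the marker at generation time, rather than downstream, keeps provenance intact across any later copying or redistribution of the payload.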

High-Risk System Implementation Checklist

  • Art. 5 prohibited practices screening completed
  • High-risk classification determination documented (Art. 6)
  • Risk management system established (Art. 9)
  • Data governance requirements met (Art. 10)
  • Technical documentation prepared (Art. 11, Annex IV)
  • Automatic logging enabled (Art. 12)
  • Transparency and instructions for use provided (Art. 13)
  • Human oversight measures designed (Art. 14)
  • Accuracy, robustness, cybersecurity validated (Art. 15)
  • Conformity assessment completed (Art. 43)
  • EU database registration submitted (Art. 49)
  • Post-market monitoring system operational (Art. 72)
  • Serious incident reporting process documented (Art. 73)
  • AI-generated content marking implemented (Art. 50)
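The checklist above can be enforced programmatically as a release gate that reports any outstanding items. A sketch; the shorthand keys are illustrative, not AEEF control IDs.

```python
# Hypothetical release gate over the high-risk checklist (keys are shorthand).
HIGH_RISK_CHECKLIST = [
    "art5_screening", "art6_classification", "art9_risk_mgmt",
    "art10_data_gov", "art11_tech_doc", "art12_logging",
    "art13_transparency", "art14_oversight", "art15_robustness",
    "art43_conformity", "art49_registration", "art72_monitoring",
    "art73_incident_reporting", "art50_content_marking",
]

def missing_items(completed: set) -> list:
    """Return checklist items not yet evidenced, in checklist order."""
    return [item for item in HIGH_RISK_CHECKLIST if item not in completed]

def release_approved(completed: set) -> bool:
    # Every item must be complete before market placement.
    return not missing_items(completed)
```

Wiring such a gate into the launch pipeline turns the checklist from documentation into an enforced precondition for deployment.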

Enforcement Timeline

| Date | Milestone |
| --- | --- |
| 1 Aug 2024 | EU AI Act enters into force |
| 2 Feb 2025 | Prohibited practices (Art. 5) enforceable |
| 2 Aug 2025 | GPAI obligations enforceable; governance provisions apply |
| 2 Aug 2026 | Full enforcement for high-risk AI systems; penalties active (up to EUR 35M or 7% of global turnover) |
| 2 Aug 2027 | High-risk systems embedded in regulated products (Annex I) enforceable |

Penalties

Non-compliance penalties under the EU AI Act:

  • Prohibited practices: Up to EUR 35 million or 7% of global annual turnover
  • High-risk system obligations: Up to EUR 15 million or 3% of global annual turnover
  • Incorrect information to authorities: Up to EUR 7.5 million or 1% of global annual turnover

External Sources