OWASP LLM Top 10 Control Mapping

This page maps the OWASP Top 10 for Large Language Model Applications to AEEF controls. It extends the general OWASP coverage in the Security Risk Framework with LLM-specific controls addressing prompt injection, insecure output handling, training data poisoning, and other risks unique to LLM-powered applications.

Version mapped: OWASP Top 10 for LLM Applications v1.1 (2023) — the risk names and numbering below follow that edition. This mapping will be reviewed against the 2025 edition and subsequent OWASP updates.

Control Summary

| Control ID | OWASP LLM Risk | AEEF Primary Mapping | Risk Level | Priority |
| --- | --- | --- | --- | --- |
| LLM-01 | Prompt Injection | PRD-STD-001, PRD-STD-010 | Critical | Immediate |
| LLM-02 | Insecure Output Handling | PRD-STD-010, PRD-STD-004 | High | Immediate |
| LLM-03 | Training Data Poisoning | PRD-STD-011 | High | Phase 2 |
| LLM-04 | Model Denial of Service | PRD-STD-012 | Medium | Phase 2 |
| LLM-05 | Supply Chain Vulnerabilities | PRD-STD-008, PRD-STD-011 | High | Phase 1 |
| LLM-06 | Sensitive Information Disclosure | PRD-STD-014, Pillar 2 | Critical | Immediate |
| LLM-07 | Insecure Plugin/Tool Design | PRD-STD-009 | High | Phase 2 |
| LLM-08 | Excessive Agency | PRD-STD-009 | High | Phase 1 |
| LLM-09 | Overreliance | Pillar 1 Human-in-the-Loop | Medium | Phase 2 |
| LLM-10 | Model Theft | PRD-STD-011, PRD-STD-004 | Medium | Phase 2 |

Detailed Control Mapping

LLM-01: Prompt Injection

Risk: Attacker crafts input that causes the LLM to bypass safety controls, leak system prompts, or execute unintended actions. Includes direct injection (via user input) and indirect injection (delivered through retrieved content).

AEEF Control Mapping:

  • PRD-STD-001 — prompt structure, constraint specification
  • PRD-STD-010 REQ-010-04/05 — safety gates, adversarial testing

Supplementary Controls:

  • Input sanitization pipeline before LLM processing
  • System prompt isolation techniques (instruction hierarchy, delimiters)
  • Output validation against safety boundaries
  • Monitoring for prompt extraction attempts
  • Indirect injection scanning of retrieved/embedded content (RAG sources, tool outputs)
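The input sanitization pipeline above can be sketched as a pre-LLM screening step. This is a minimal illustration, not an AEEF-mandated implementation: the pattern list, length limit, and function names are assumptions, and heuristic screening should complement, not replace, adversarial testing.

```python
import re

# Illustrative heuristics for common direct-injection phrasings; a real
# pipeline would use a maintained ruleset or a classifier in addition.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|any|previous|prior) instructions", re.I),
    re.compile(r"reveal (your|the) system prompt", re.I),
    re.compile(r"you are now (in )?developer mode", re.I),
]

MAX_INPUT_CHARS = 8_000  # assumed limit, tuned per product


def screen_input(text: str) -> tuple[bool, str]:
    """Return (allowed, reason); a False result routes to deny or review."""
    if len(text) > MAX_INPUT_CHARS:
        return False, "input exceeds length limit"
    for pattern in INJECTION_PATTERNS:
        if pattern.search(text):
            return False, f"matched injection heuristic: {pattern.pattern}"
    return True, "ok"
```

The same screen can be applied to retrieved RAG content and tool outputs before they reach the model, covering the indirect-injection path.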

Evidence Requirements: Prompt injection test suite results; input sanitization pipeline configuration; system prompt extraction test results

Testing Approach: Red-team prompt injection attacks (direct + indirect); automated prompt injection scanner integration in CI/CD

LLM-02: Insecure Output Handling

Risk: LLM output is passed to downstream systems (web frontends, databases, APIs, OS commands) without proper sanitization, enabling XSS, SSRF, code injection, or privilege escalation.

AEEF Control Mapping:

  • PRD-STD-010, PRD-STD-004 — per the Control Summary above

Supplementary Controls:

  • Output encoding/escaping for each downstream context (HTML, SQL, shell, markdown)
  • Structured output validation (JSON schema enforcement)
  • Content security policy enforcement
  • Sandbox execution for any generated code
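Per-context output encoding and structured-output validation can be sketched as follows. The context names and the exact-key schema check are illustrative assumptions; production systems would register an encoder per downstream sink and reject any output lacking one.

```python
import html
import json
import shlex


def encode_for_context(llm_output: str, context: str) -> str:
    """Encode LLM output for a downstream sink; contexts shown are examples."""
    if context == "html":
        return html.escape(llm_output)   # neutralizes <script> payloads (XSS)
    if context == "shell":
        return shlex.quote(llm_output)   # single argument, no command injection
    raise ValueError(f"no encoder registered for context: {context}")


def parse_structured(llm_output: str, required_keys: set[str]) -> dict:
    """Enforce that output is a JSON object with exactly the expected keys."""
    data = json.loads(llm_output)  # raises on non-JSON model output
    if not isinstance(data, dict) or set(data) != required_keys:
        raise ValueError("output does not match expected schema")
    return data
```

Failing closed (raising on unknown contexts or schema mismatches) is the key design choice: LLM output never reaches a sink it was not explicitly encoded for.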

Evidence Requirements: Output sanitization test results by context; DAST scan results covering LLM output rendering paths

Testing Approach: Inject payloads through LLM output to downstream systems; DAST testing of LLM output rendering

LLM-03: Training Data Poisoning

Risk: Attacker manipulates training, fine-tuning, or RAG data to embed backdoors, biases, or misinformation into model behavior.

AEEF Control Mapping:

  • PRD-STD-011 — per the Control Summary above

Supplementary Controls:

  • Training data provenance verification
  • Fine-tuning dataset integrity checks (hash verification, anomaly detection)
  • RAG corpus validation and change monitoring
  • Data source reputation scoring
  • Canary data for poisoning detection
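RAG corpus change monitoring can be sketched with per-document digests diffed against a stored baseline. This is a minimal illustration under assumed data structures (a doc-id → text mapping); real corpora would stream content and persist snapshots in the audit trail.

```python
import hashlib


def corpus_snapshot(docs: dict[str, str]) -> dict[str, str]:
    """Map doc-id -> SHA-256 content digest; stored as the audit baseline."""
    return {doc_id: hashlib.sha256(text.encode("utf-8")).hexdigest()
            for doc_id, text in docs.items()}


def diff_snapshots(old: dict[str, str], new: dict[str, str]) -> dict[str, list[str]]:
    """Flag additions, removals, and silent modifications for review."""
    return {
        "added": sorted(set(new) - set(old)),
        "removed": sorted(set(old) - set(new)),
        "modified": sorted(d for d in old.keys() & new.keys() if old[d] != new[d]),
    }
```

Any non-empty "modified" set outside an approved change window is a poisoning signal worth investigating.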

Evidence Requirements: Training data provenance records; dataset integrity verification logs; RAG corpus change audit trail

Testing Approach: Inject known canary data and verify detection; validate dataset integrity checks catch tampering

LLM-04: Model Denial of Service

Risk: Attacker sends inputs designed to consume excessive resources (long prompts, recursive reasoning, resource-intensive queries), degrading availability.

AEEF Control Mapping:

  • PRD-STD-012 REQ-012-01/04/05 — SLOs, circuit breakers, graceful degradation

Supplementary Controls:

  • Input length and complexity limits
  • Per-user/per-tenant rate limiting
  • Token budget enforcement per request
  • Request timeout enforcement
  • Anomalous usage pattern detection

Evidence Requirements: Rate limiting configuration; timeout enforcement records; load test results showing graceful degradation

Testing Approach: Stress testing with adversarial inputs; rate limit validation

LLM-05: Supply Chain Vulnerabilities

Risk: Compromised models, poisoned training data from third parties, vulnerable dependencies in ML pipeline, or compromised model hosting infrastructure.

AEEF Control Mapping:

  • PRD-STD-008, PRD-STD-011 — per the Control Summary above

Supplementary Controls:

  • Model provenance verification (cryptographic signatures, checksums)
  • Third-party model evaluation before adoption
  • ML pipeline dependency scanning
  • Supply chain bill of materials (ML-BOM) including model, data, and framework versions
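Model provenance verification against an ML-BOM entry can be sketched as a digest comparison. The entry field names here are assumptions for illustration, not a standardized ML-BOM schema.

```python
import hashlib
import hmac


def artifact_digest(artifact_bytes: bytes) -> str:
    """SHA-256 over the model artifact (stream in chunks for large files)."""
    return hashlib.sha256(artifact_bytes).hexdigest()


def verify_against_bom(artifact_bytes: bytes, bom_entry: dict) -> bool:
    """Reject any artifact whose digest differs from the ML-BOM record."""
    expected = bom_entry["sha256"]  # hypothetical field name
    return hmac.compare_digest(artifact_digest(artifact_bytes), expected)
```

Running this check at deploy time, and failing the pipeline on mismatch, turns the ML-BOM from documentation into an enforced control.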

Evidence Requirements: Model provenance verification records; third-party model evaluation results; ML-BOM; dependency scan results

Testing Approach: Verify model checksums match published signatures; scan ML pipeline dependencies for known vulnerabilities

LLM-06: Sensitive Information Disclosure

Risk: LLM reveals confidential information, PII, proprietary data, system prompts, or memorized training data in its outputs.

AEEF Control Mapping:

  • PRD-STD-014 and Pillar 2 — per the Control Summary above

Supplementary Controls:

  • PII detection and redaction in LLM outputs
  • System prompt protection techniques
  • Training data memorization testing
  • Output classification scanning
  • Data loss prevention (DLP) integration for LLM outputs
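PII detection and redaction on LLM outputs can be sketched with pattern matching. The regexes below cover only a few illustrative PII classes and are assumptions; production deployments would typically integrate a dedicated PII/DLP service with broader coverage.

```python
import re

# Illustrative patterns only — real PII detection needs far broader coverage.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}


def redact_pii(text: str) -> tuple[str, list[str]]:
    """Return (redacted_text, detected_classes) for logging and DLP alerts."""
    found = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            found.append(label)
            text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text, found
```

The detected-classes list doubles as the evidence feed: logged detections demonstrate the control is active for audit purposes.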

Evidence Requirements: PII detection test results; system prompt extraction test results; memorization probing results; DLP policy configuration

Testing Approach: Probe for training data memorization; attempt system prompt extraction; test PII leakage scenarios

LLM-07: Insecure Plugin/Tool Design

Risk: LLM has access to tools, plugins, or function calls with excessive permissions, insufficient input validation, or missing access controls.

AEEF Control Mapping:

  • PRD-STD-009 REQ-009-02/03 — agent contracts, least privilege

Supplementary Controls:

  • Tool/function call schemas with strict input validation
  • Per-tool permission scoping (read-only vs. write)
  • Human approval gates for destructive tool calls
  • Tool call audit logging
  • Rate limiting on tool invocations
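Schema validation and per-tool permission scoping can be enforced in a single authorization gate before any tool call executes. The tool names, permission tiers, and registry shape below are hypothetical illustrations, not PRD-STD-009 wording.

```python
# Hypothetical registry: each tool declares an input validator and a
# permission tier ("read" vs. "write") for least-privilege scoping.
TOOLS = {
    "read_ticket": {
        "mode": "read",
        "validate": lambda a: isinstance(a.get("ticket_id"), int),
    },
    "delete_ticket": {
        "mode": "write",
        "validate": lambda a: isinstance(a.get("ticket_id"), int),
    },
}


def authorize_tool_call(tool: str, args: dict, agent_modes: set) -> bool:
    """Deny unknown tools, schema violations, and out-of-scope tiers."""
    spec = TOOLS.get(tool)
    if spec is None:
        return False  # unknown tool: fail closed
    if spec["mode"] not in agent_modes:
        return False  # least privilege: agent lacks this permission tier
    return bool(spec["validate"](args))
```

Every decision made here should also be written to the tool-call audit log named in the controls above.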

Evidence Requirements: Tool permission documentation; input validation test results; tool call audit logs

Testing Approach: Attempt unauthorized tool actions through LLM; test input validation bypass in tool schemas

LLM-08: Excessive Agency

Risk: LLM-based agent takes actions beyond intended scope due to excessive permissions, ambiguous instructions, or inadequate human oversight.

AEEF Control Mapping:

  • PRD-STD-009 REQ-009-01/03/04/07/08 — agent identity, least privilege, human intervention, iteration limits

Supplementary Controls:

  • Action scope boundaries per agent role
  • Confirmation requirements for high-impact actions
  • Action reversal capabilities
  • Real-time agent behavior monitoring
  • Autonomous action audit trail with decision chain
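Iteration limits and confirmation gates for high-impact actions can be sketched together. The cap, the action classification, and the return codes are illustrative assumptions about how an agent loop would consume such a guard.

```python
class AgentGuard:
    """Enforce an iteration cap and require human confirmation for
    high-impact actions before execution."""

    HIGH_IMPACT = {"delete", "transfer", "deploy"}  # assumed classification

    def __init__(self, max_iterations: int = 10):
        self.max_iterations = max_iterations
        self.iterations = 0

    def next_action(self, action: str, human_approved: bool = False) -> str:
        self.iterations += 1
        if self.iterations > self.max_iterations:
            return "halt"  # runaway-agent stop condition
        if action in self.HIGH_IMPACT and not human_approved:
            return "needs_approval"  # human-in-the-loop gate
        return "execute"
```

The agent loop treats "halt" and "needs_approval" as hard stops, which gives the testing approach below (verifying that iteration limits halt runaway agents) a concrete assertion point.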

Evidence Requirements: Agent contract documentation; action scope configuration; confirmation gate logs

Testing Approach: Attempt to get agent to exceed its defined scope; verify iteration limits halt runaway agents

LLM-09: Overreliance

Risk: Users or downstream systems trust LLM outputs without verification, leading to propagation of hallucinations, errors, or fabricated information.

AEEF Control Mapping:

  • Pillar 1 Human-in-the-Loop — per the Control Summary above

Supplementary Controls:

  • Confidence scoring for LLM outputs
  • Citation and source attribution requirements
  • Hallucination detection mechanisms
  • User-facing uncertainty indicators
  • Critical decision outputs require human verification workflow
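A citation-coverage metric paired with a review gate can be sketched as follows. The `[n]`-style citation marker and the 0.8 threshold are assumptions about the product's citation format and risk tolerance.

```python
import re


def citation_coverage(answer: str) -> float:
    """Fraction of sentences carrying a [n]-style citation marker."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", answer.strip()) if s]
    if not sentences:
        return 0.0
    cited = sum(1 for s in sentences if re.search(r"\[\d+\]", s))
    return cited / len(sentences)


def needs_human_review(answer: str, threshold: float = 0.8) -> bool:
    """Route low-coverage answers to verification rather than auto-display."""
    return citation_coverage(answer) < threshold
```

Coverage per answer, aggregated over time, is exactly the "citation coverage metrics" evidence artifact listed above.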

Evidence Requirements: Confidence scoring implementation; citation coverage metrics; hallucination detection test results

Testing Approach: Measure hallucination rate on domain-specific test sets; verify uncertainty indicators display correctly

LLM-10: Model Theft

Risk: Unauthorized access to proprietary model weights, fine-tuned models, or model extraction through systematic querying.

AEEF Control Mapping:

  • PRD-STD-011, PRD-STD-004 — per the Control Summary above

Supplementary Controls:

  • Model access controls with authentication and authorization
  • API rate limiting to prevent extraction attacks
  • Query pattern monitoring for extraction attempts
  • Model artifact encryption at rest and in transit
  • Network segmentation for model serving infrastructure
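Query-pattern monitoring for extraction attempts can be sketched as a distinct-query counter over a recent window: extraction attacks tend to issue many unique probing queries, unlike ordinary users. The thresholds and normalization are illustrative assumptions.

```python
from collections import defaultdict, deque


class ExtractionMonitor:
    """Flag users issuing unusually many distinct queries in a recent
    window — a crude signal of systematic model extraction."""

    def __init__(self, max_distinct: int = 500, window: int = 1000):
        self.max_distinct = max_distinct
        self._recent = defaultdict(lambda: deque(maxlen=window))

    def record(self, user_id: str, query: str) -> bool:
        """Record one query; return True when an alert should be raised."""
        q = self._recent[user_id]
        q.append(query.strip().lower())  # naive normalization
        return len(set(q)) > self.max_distinct
```

Alerts from this monitor feed the "query pattern monitoring alerts" evidence artifact and can trigger the stricter rate limits described above.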

Evidence Requirements: Model access control configuration; rate limiting enforcement; query pattern monitoring alerts

Testing Approach: Attempt model extraction through systematic querying; verify rate limits prevent extraction

Audit Readiness Checklist

  • All 10 OWASP LLM risks assessed against current AI product features
  • Prompt injection controls implemented and tested (LLM-01)
  • Output sanitization validated for all downstream contexts (LLM-02)
  • Training/fine-tuning data provenance documented (LLM-03)
  • Rate limiting and resource controls active (LLM-04)
  • Model supply chain verified with ML-BOM (LLM-05)
  • PII detection and system prompt protection tested (LLM-06)
  • Tool/plugin permissions scoped and validated (LLM-07)
  • Agent scope boundaries and human oversight confirmed (LLM-08)
  • Confidence scoring and hallucination detection implemented (LLM-09)
  • Model access controls and extraction monitoring active (LLM-10)
  • Evidence artifacts collected for all applicable controls
  • Risk acceptance documented for any controls not implemented

External Sources