Developer Agent
Overview
| Field | Value |
|---|---|
| Agent ID | developer-agent |
| SDLC Stage | Stage 3: Implementation |
| Human Owner | Senior Developer |
| Role Guide | Developer Guide |
| Prompt Template | prompt-library/by-role/developer/feature-implementation.md |
| Contract Version | 1.0.0 |
| Status | Active |
What This Agent Does
The developer-agent is the workhorse of the pipeline. It takes an approved design and produces production-quality code with unit tests, implementation notes, and AI attribution metadata. This is the agent that directly accelerates delivery velocity.
Core responsibilities:
- Code generation — Produce implementation code following language/framework conventions and architecture constraints
- Unit test creation — Write unit tests covering happy path, error cases, and edge cases
- Implementation documentation — Document assumptions, design decisions, and known limitations
- AI attribution — Include mandatory AI-Usage, AI-Prompt-Ref, and Agent-IDs metadata
- Risk flagging — Identify implementation risks, uncertain assumptions, and areas needing human attention
- Convention adherence — Follow language-specific and framework-specific coding standards
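The attribution responsibility above lends itself to a simple automated check. The sketch below is hypothetical (the `missing_attribution_fields` helper is not part of the pipeline); only the three field names come from the contract.

```python
# Hypothetical helper: verify that a PR description carries the
# mandatory AI attribution fields required by the contract.
REQUIRED_FIELDS = ("AI-Usage", "AI-Prompt-Ref", "Agent-IDs")

def missing_attribution_fields(pr_body: str) -> list[str]:
    """Return the required attribution fields absent from a PR body."""
    return [f for f in REQUIRED_FIELDS if f"{f}:" not in pr_body]
```

A CI step can fail the build whenever this returns a non-empty list.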
Agent Contract
```yaml
agent_id: developer-agent
contract_version: 1.0.0
role_owner: senior-developer
allowed_inputs:
  - approved-design-with-constraints
  - architecture-constraints
  - existing-codebase-context
  - language-conventions
  - framework-conventions
  - test-failure-details           # Rework from qa-agent
  - security-finding-details       # Rework from security-agent
allowed_outputs:
  - code-patch
  - unit-tests
  - implementation-notes
  - ai-attribution-metadata
  - risk-annotations
forbidden_actions:
  - merge-to-main                  # PRD-STD-009 REQ-009-03: no merge privileges
  - merge-to-protected-branch      # All protected branches off-limits
  - disable-ci-checks              # CI gates are non-negotiable
  - introduce-secrets              # No hardcoded secrets, tokens, or credentials
  - modify-architecture            # Architecture changes require architect-agent
  - skip-unit-tests                # Every code change must have tests
  - access-production-data         # No production data in development
  - access-production-environment  # Developer-agent does not touch production
required_checks:
  - code-compiles                  # Build must pass
  - lint-passes                    # Code style enforcement
  - unit-tests-pass                # All tests green
  - ai-metadata-present            # AI-Usage, AI-Prompt-Ref, Agent-IDs
  - no-secrets-in-code             # Secret scanning must pass
  - no-restricted-data-in-prompts  # Data classification compliance
handoff_targets:
  - agent: qa-agent
    artifact: code-patch-with-tests
    condition: gate-3-passed
  - agent: security-agent
    artifact: code-patch-with-tests
    condition: gate-3-passed       # Parallel handoff with qa-agent
escalation_path:
  approver_role: senior-developer
  triggers:
    - architecture-change-needed
    - auth-or-crypto-implementation
    - pii-handling-required
    - third-party-integration
    - performance-critical-path
```
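The contract's forbidden-action list can be enforced mechanically before the agent executes any requested operation. This is a minimal sketch, assuming an orchestrator that names actions with the same slugs as the contract; `is_action_allowed` is a hypothetical helper.

```python
# Hypothetical enforcement sketch: reject any requested action that
# appears in the contract's forbidden_actions list.
FORBIDDEN_ACTIONS = {
    "merge-to-main", "merge-to-protected-branch", "disable-ci-checks",
    "introduce-secrets", "modify-architecture", "skip-unit-tests",
    "access-production-data", "access-production-environment",
}

def is_action_allowed(action: str) -> bool:
    """Deny-list check applied before the agent performs an action."""
    return action not in FORBIDDEN_ACTIONS
```

Keeping the deny list in code, generated from the YAML contract, avoids drift between the documented contract and what the runtime actually blocks.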
System Prompt Blueprint
```
You are developer-agent for [PROJECT_NAME].

Your role: Implement approved designs as production-quality code with
unit tests, following language and framework conventions.

Technology stack: [YOUR_STACK]
Language conventions: [LINK_TO_LANGUAGE_GUIDE]
Framework conventions: [LINK_TO_FRAMEWORK_GUIDE]

Contract boundaries:
- You MUST NOT merge code to any protected branch
- You MUST NOT disable or bypass CI checks
- You MUST NOT hardcode secrets, tokens, or credentials
- You MUST NOT modify architecture beyond the approved design
- You MUST include unit tests for every code change
- You MUST include AI attribution metadata (AI-Usage, AI-Prompt-Ref, Agent-IDs)

For every approved design you receive, produce:
1. Implementation code following project conventions
2. Unit tests (happy path, error cases, edge cases)
3. Implementation notes (assumptions, design decisions, limitations)
4. Risk annotations (uncertain areas, performance concerns, security notes)
5. AI attribution metadata

When implementation requires architecture changes, auth/crypto work,
or PII handling, escalate to the human Senior Developer.

Language templates: prompt-library/by-language/[python|typescript|go|java]/
Framework templates: prompt-library/by-framework/[nextjs|react|express|fastapi|django|spring-boot]/
Standards: PRD-STD-001, PRD-STD-002, PRD-STD-003, PRD-STD-009
```
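The blueprint's `[PLACEHOLDER]` tokens must all be filled before the prompt is sent to a model. One way to do that safely is to fail loudly on any unfilled token; the `render_blueprint` helper below is a hypothetical sketch, not part of the pipeline.

```python
import re

def render_blueprint(template: str, values: dict[str, str]) -> str:
    """Fill [PLACEHOLDER] tokens in a system prompt blueprint.

    Raises KeyError if any uppercase placeholder is left unfilled,
    so an agent is never launched with a partial prompt.
    """
    return re.sub(r"\[([A-Z_]+)\]", lambda m: values[m.group(1)], template)
```

Lowercase tokens like `[python|typescript|go|java]` are path alternatives, not placeholders, and are deliberately left untouched by the uppercase-only pattern.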
Handoff Specifications
Receives From (Upstream)
| Source | Artifact | Trigger |
|---|---|---|
| architect-agent | Approved design with constraints and integration map | Gate 2 passed |
| qa-agent (rework) | Test failure details with expected vs actual behavior | Code bug identified |
| security-agent (rework) | Vulnerability finding with remediation guidance | Security issue found |
Sends To (Downstream) — Parallel Handoff
| Target | Artifact | Condition |
|---|---|---|
qa-agent | Code patch with unit tests and implementation notes | Gate 3 passed |
security-agent | Code patch with dependency manifest | Gate 3 passed (parallel with qa-agent) |
Handoff Artifact Template
```yaml
handoff:
  id: "HO-developer-agent-qa-agent-{timestamp}"
  source_agent: "developer-agent"
  target_agent: "qa-agent"
  stage_from: 3
  stage_to: 4
  artifacts:
    - type: "code-patch"
      path: "{PR or patch reference}"
      hash: "{SHA-256}"
    - type: "unit-tests"
      path: "{test file references}"
    - type: "implementation-notes"
      format: "markdown"
  summary: "Implementation complete with unit tests. All Gate 3 criteria passed."
  assumptions:
    - "Design constraints from architect-agent respected"
    - "Unit tests cover primary acceptance criteria"
  risks:
    - severity: "{varies}"
      description: "{implementation-specific risks}"
  decision_request: "Validate acceptance criteria coverage and regression risk"
  metadata:
    prompt_ref: "prompt-library/by-role/developer/feature-implementation.md"
    model_version: "{model used}"
    run_duration: "{seconds}"
    iteration_count: "{number of refinement loops}"
```
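The `hash` field in the artifact template lets the downstream agent verify that the patch it receives is the one the upstream agent produced. A minimal sketch of computing that digest, assuming the artifact is available as a local file (`artifact_hash` is a hypothetical helper):

```python
import hashlib
from pathlib import Path

def artifact_hash(path: str) -> str:
    """SHA-256 digest recorded in the handoff artifact's `hash` field."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()
```

The receiving agent recomputes the digest on arrival and rejects the handoff on mismatch.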
Gate Responsibilities
This agent owns Gate 3 criteria:
| Criterion | How This Agent Satisfies It |
|---|---|
| Code compiles and passes lint | Runs build and lint before handoff |
| Unit tests written and passing | Produces tests for every code change |
| AI attribution metadata present | Includes AI-Usage, AI-Prompt-Ref, Agent-IDs |
| Implementation notes document assumptions | Produces structured implementation notes |
| No Restricted data in prompts | Validates prompt content before sending to AI model |
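The "No Restricted data in prompts" criterion implies a pre-flight scan of prompt content. The sketch below is a deliberately simplified illustration: the marker strings and the `prompt_is_clean` helper are hypothetical, and a real deployment would use the organization's data-classification tooling rather than substring matching.

```python
# Hypothetical pre-flight check: block prompts containing markers of
# Restricted-classified data before they reach the AI model.
RESTRICTED_MARKERS = (
    "CLASSIFICATION: RESTRICTED",   # explicit classification label
    "BEGIN RSA PRIVATE KEY",        # key material
    "SSN:",                         # personally identifiable data
)

def prompt_is_clean(prompt: str) -> bool:
    """Return True when no restricted-data marker appears in the prompt."""
    upper = prompt.upper()
    return not any(marker in upper for marker in RESTRICTED_MARKERS)
```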
Trust Level Progression
| Level | Duration | What Changes |
|---|---|---|
| Level 0 | 2 weeks / 20 runs | Senior Developer reviews every line of generated code |
| Level 1 | 4 weeks / 50 runs | Auto-approve Tier 1 CRUD/routine code; human reviews Tier 2+ and new patterns |
| Level 2 | 8 weeks / 100 runs | Human reviews Tier 3+ and auth/crypto/PII code only |
| Level 3 | 10+ weeks | Human reviews only non-negotiable checkpoints; automated gates handle the rest |
This agent has the fastest trust progression of any agent in the pipeline for routine code patterns.
Environment Scope
| Environment | Access | Allowed Actions |
|---|---|---|
| Development | Full | Write code, run tests, create PRs |
| Staging | Limited | Receive test failure and security finding details for rework |
| Production | None | developer-agent does not operate in Production |
Implementation Guide
Step 1: Choose Your Code Generation Model
For the developer-agent, model quality directly impacts code quality:
| Model | Strengths | Best For |
|---|---|---|
| Claude (Anthropic) | Strong instruction following, safe defaults, good at constraints | General implementation, security-sensitive code |
| GPT-4 (OpenAI) | Broad language support, good at complex logic | Multi-language projects |
| Codex / Code-specific models | Optimized for code completion | High-volume routine code |
Step 2: Provide Codebase Context
The developer-agent needs to understand your existing code. Configure:
- Repository structure — Folder tree, module organization
- Key interfaces and types — Shared types, API contracts, database schemas
- Coding conventions — Naming patterns, error handling style, logging format
- Existing test patterns — How tests are structured, what frameworks are used
Use RAG or large context windows (≥100K tokens) for codebase awareness.
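When the codebase exceeds the context window, the context listed above has to be packed under a token budget. This is a minimal sketch of that idea, assuming a rough chars-per-token heuristic; `pack_context` is a hypothetical helper, and a real setup would use the model's tokenizer and RAG-based relevance ranking instead of file order.

```python
# Hypothetical sketch: pack codebase context files into a prompt until
# an approximate token budget is exhausted (~4 chars per token).
def pack_context(files: list[tuple[str, str]], max_tokens: int = 100_000) -> str:
    budget = max_tokens * 4  # rough chars-per-token heuristic
    parts, used = [], 0
    for name, text in files:
        chunk = f"### {name}\n{text}\n"
        if used + len(chunk) > budget:
            break  # budget exhausted; remaining files omitted
        parts.append(chunk)
        used += len(chunk)
    return "".join(parts)
```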
Step 3: Configure Language/Framework Templates
Reference the appropriate template from the prompt library:
| Stack | Template |
|---|---|
| Python | prompt-library/by-language/python/ |
| TypeScript | prompt-library/by-language/typescript/ |
| Go | prompt-library/by-language/go/ |
| Java | prompt-library/by-language/java/ |
| Next.js | prompt-library/by-framework/nextjs/ |
| React | prompt-library/by-framework/react/ |
| Express | prompt-library/by-framework/express/ |
| FastAPI | prompt-library/by-framework/fastapi/ |
| Django | prompt-library/by-framework/django/ |
| Spring Boot | prompt-library/by-framework/spring-boot/ |
Step 4: Set Up CI Gate Integration
The developer-agent output must pass through CI before handoff:
```yaml
gate_3_checks:
  - name: build
    command: "npm run build"   # or equivalent
    blocking: true
  - name: lint
    command: "npm run lint"
    blocking: true
  - name: unit-tests
    command: "npm test"
    blocking: true
  - name: secret-scan
    command: "gitleaks detect"
    blocking: true
  - name: ai-metadata
    command: "check-pr-fields AI-Usage AI-Prompt-Ref Agent-IDs"
    blocking: true
```
Step 5: Configure Rework Loop
When qa-agent or security-agent finds issues:
- Orchestrator routes failure details back to developer-agent
- developer-agent receives the specific failure context
- developer-agent produces a fix patch
- The fix goes through Gate 3 again
- Maximum 3 rework iterations before escalation to human
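The rework loop above can be sketched as a bounded retry. This is a hypothetical orchestrator fragment, assuming `implement` produces a patch and `gate_passes` runs the Gate 3 checks against it:

```python
def rework_loop(implement, gate_passes, max_iterations: int = 3) -> str:
    """Retry the implement/verify cycle; escalate after max_iterations failures."""
    for _ in range(max_iterations):
        patch = implement()
        if gate_passes(patch):
            return "handoff"          # proceed to qa-agent / security-agent
    return "escalate-to-human"        # cap reached; Senior Developer takes over
```

The hard iteration cap keeps a persistently failing agent from burning compute and review time indefinitely.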
Known Limitations
- Hallucinated APIs and imports — AI models may reference functions, libraries, or APIs that do not exist. Build verification catches this but wastes a rework cycle.
- Business logic correctness — The agent can implement logic that compiles and passes tests but is semantically wrong. Human review of business logic remains essential.
- Context window limits — Large codebases may exceed context. The agent may produce code inconsistent with distant modules.
- Security-sensitive code requires extra review — Auth, crypto, and PII handling code has a higher error rate. These always escalate to human review.
- Test quality varies — AI-generated tests may test implementation details rather than behavior. Human review of test strategy is important at Trust Level 0-1.
Standards Compliance
| Standard | Requirement | Evidence This Agent Produces |
|---|---|---|
| PRD-STD-001 | Structured prompt engineering | System prompt, prompt references in metadata |
| PRD-STD-002 | Code review requirements | PR with AI attribution, implementation notes |
| PRD-STD-003 | Testing requirements | Unit tests with coverage metrics |
| PRD-STD-009 | Agent governance | Contract, run records, handoff artifacts |
| PRD-STD-009 REQ-009-03 | Least-privilege access | No merge privileges, no production access |
| PRD-STD-009 REQ-009-15 | PR metadata fields | AI-Usage, AI-Prompt-Ref, Agent-IDs, AI-Risk-Notes |