Scenario: Go Microservice with AEEF Standards

This walkthrough shows how to apply AEEF production standards together on a Go microservice. It follows an entire feature from prompt to production, showing how the standards create a governed delivery pipeline — with specific attention to Go concurrency patterns, struct validation, and table-driven testing.

Time required: 60-90 minutes (reading + doing)
Prerequisites: Familiarity with Go 1.22+, HTTP routers (Chi/Gin), and basic AEEF concepts from the Startup Quick-Start.

Composite Scenario

This is a realistic composite scenario showing how standards apply together. Adapt the specifics to your stack — the governance workflow is universal.


The Project

An internal rate-limiting service built with:

  • Language: Go 1.22+
  • Router: Chi v5
  • Database: PostgreSQL via pgx (connection pool)
  • Cache: Redis for sliding-window counters
  • Observability: OpenTelemetry + structured logging (slog)
  • Testing: Go standard library + testify + testcontainers-go
  • CI: GitHub Actions with golangci-lint + govulncheck

The team has 4 engineers managing 6 microservices and has completed the Starter Config Files and CI/CD Pipeline Starter tutorials.

The Feature

User story: As a platform engineer, I can define rate-limit policies per tenant so that no single tenant can exhaust shared API capacity.

This feature touches:

  • Database schema (rate-limit policy model)
  • REST API (CRUD for policies + rate-check endpoint)
  • Redis integration (sliding-window counter)
  • Middleware (rate-limiting middleware for downstream services)
  • Concurrency (atomic counter operations under high throughput)

Phase 1: Prompt Engineering (PRD-STD-001)

Step 1.1: Structured Prompt for API Endpoint

Using the Go Secure REST Endpoint template (prompt-library/by-language/go/secure-endpoint.md):

You are generating production-grade Go API code.

**Context:**
- Go 1.22+ with strict linting (golangci-lint, revive, staticcheck)
- Router: Chi v5
- Database: PostgreSQL via pgx/v5 pool
- Validation: go-playground/validator/v10
- Logging: log/slog with structured context
- Config: envconfig or viper for environment-based config

**Task:** Create a rate-limit policy CRUD API.

**Requirements:**
1. Policy model: tenant_id, endpoint_pattern, max_requests, window_seconds,
burst_limit, created_at, updated_at
2. Unique constraint on (tenant_id, endpoint_pattern)
3. Tenant isolation: JWT claim carries tenant_id — enforce at query level
4. Policies can be enabled/disabled without deletion
5. List endpoint supports filtering by tenant_id with pagination

**Constraints:**
- Use struct tags for validation (validate:"required,min=1")
- Use parameterized queries only — never fmt.Sprintf into SQL
- Return proper HTTP status codes (201 created, 409 conflict, 404 not found)
- All errors return a structured JSON envelope:
{"error": {"code": "RATE_POLICY_NOT_FOUND", "message": "..."}}
- Use context.Context for cancellation and timeout propagation
- Close all resources with defer — database rows, response bodies
- All exported functions must have GoDoc comments
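What the generated code might look like for the validation-tag and error-envelope constraints. This is a sketch: the field names come from requirement 1, while `errorBody` and `writeError` are hypothetical helper names.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// Policy mirrors the fields in requirement 1; the validate tags follow
// the go-playground/validator convention named in the prompt context.
type Policy struct {
	TenantID        string `json:"tenant_id" validate:"required"`
	EndpointPattern string `json:"endpoint_pattern" validate:"required"`
	MaxRequests     int    `json:"max_requests" validate:"required,min=1"`
	WindowSeconds   int    `json:"window_seconds" validate:"required,min=1"`
	BurstLimit      int    `json:"burst_limit" validate:"min=0"`
}

// errorBody builds the structured envelope required by the constraints.
// (Hypothetical helper; json.Marshal sorts map keys, so output is stable.)
func errorBody(code, msg string) []byte {
	b, _ := json.Marshal(map[string]any{
		"error": map[string]string{"code": code, "message": msg},
	})
	return b
}

// writeError sends the envelope with the appropriate status code.
func writeError(w http.ResponseWriter, status int, code, msg string) {
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(status)
	w.Write(errorBody(code, msg))
}

func main() {
	fmt.Println(string(errorBody("RATE_POLICY_NOT_FOUND", "no policy for tenant")))
	// {"error":{"code":"RATE_POLICY_NOT_FOUND","message":"no policy for tenant"}}
}
```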

Step 1.2: Structured Prompt for Rate-Check Logic

You are implementing a sliding-window rate limiter in Go.

**Context:**
- Go 1.22+, Redis via go-redis/v9
- Must handle 50,000+ checks per second per instance

**Task:** Implement the rate-check logic.

**Requirements:**
1. Sliding window algorithm using Redis sorted sets (ZRANGEBYSCORE + ZADD + ZREMRANGEBYSCORE)
2. Atomic operation — entire check-and-increment must be a single Redis pipeline or Lua script
3. Return: allowed (bool), remaining (int), reset_at (time.Time)
4. Fallback to allow if Redis is unreachable (open-circuit, not closed)
5. Metrics: counter for allowed/denied, histogram for check latency

**Constraints:**
- Use context.Context with timeout (50ms max for Redis call)
- Use sync.Pool for reusable buffers if needed
- No race conditions — the Lua script ensures atomicity in Redis
- All operations must respect context cancellation
- Log Redis errors with slog.Warn, not slog.Error (expected during failover)

Step 1.3: Record Prompt References

AI-Usage: claude
AI-Prompt-Ref: by-language/go/secure-endpoint (policy CRUD),
by-language/go/secure-endpoint (rate-check, adapted)
AI-Confidence: high — CRUD endpoints, medium — Redis sliding window

Phase 2: Human-in-the-Loop Review (PRD-STD-002)

Step 2.1: Review AI Output Against Checklist

Using the Go PR Risk Review prompt (prompt-library/by-language/go/pr-risk-review.md):

Critical items for this feature:

| Check | What to Verify | Status |
| --- | --- | --- |
| Auth bypass | Does every handler extract tenant_id from JWT and filter queries by it? | |
| SQL injection | All queries use $1, $2 placeholders — no fmt.Sprintf into SQL? | |
| Race condition | Is the Redis rate-check atomic (Lua script or pipeline)? | |
| Resource leaks | Are pgx.Rows closed with defer rows.Close()? Are HTTP response bodies closed? | |
| Context propagation | Do all database and Redis calls receive ctx from the request? | |
| Goroutine leaks | Any goroutines started without cancellation or WaitGroup? | |
| Error wrapping | Are errors wrapped with fmt.Errorf("...: %w", err) for stack context? | |
| Nil pointer | Are all struct pointers checked before dereference? | |

Step 2.2: Go-Specific AI Pitfalls to Check

From the Go anti-patterns table (prompt-library/by-language/go.md):

  • No panic in library code — return errors instead
  • No init() functions with side effects (database connections, HTTP calls)
  • No interface{} / any where a concrete type is known
  • No ignoring errors with _ on error-returning functions
  • No sync.Mutex when a channel-based design is cleaner (or vice versa)
  • Errors wrapped with %w for errors.Is/errors.As compatibility
  • Context passed as first parameter to all functions that do I/O

Phase 3: Testing (PRD-STD-003)

Step 3.1: Generate Test Matrix

Use the Go Risk-Based Test Matrix prompt (prompt-library/by-language/go/test-matrix.md):

Feature: Rate-limit policy CRUD + sliding-window rate checker
Changes: Policy handlers, Redis rate-limiter, Chi middleware

Generate a risk-based test matrix covering:
1. Table-driven unit tests for validation, policy logic, rate-check math
2. Integration tests for API endpoints (auth states, validation, CRUD)
3. Redis integration tests for atomic rate-check behavior
4. Concurrency tests for race conditions under high throughput

Expected test coverage:

| Test Type | Count | What It Covers |
| --- | --- | --- |
| Unit (table-driven) | 12-18 | Struct validation, rate-check math, error wrapping |
| API integration | 8-12 | CRUD endpoints + auth boundary + pagination |
| Redis integration | 6-8 | Sliding window correctness, atomic operations, expiry |
| Concurrency | 3-5 | Parallel rate checks, no race conditions (-race flag) |

Step 3.2: Verify AI-Generated Tests

Verify the AI-generated tests follow these conventions (AI output often misses them):

  • Tests use t.Parallel() where safe for faster execution
  • Tests use table-driven pattern (note: the tt := tt loop-variable capture is no longer needed as of Go 1.22)
  • Tests use testify/assert or testify/require consistently (not mixed)
  • Integration tests use testcontainers-go for real PostgreSQL and Redis
  • No time.Sleep() — use ticker, context timeout, or require.Eventually
  • Concurrent tests run with go test -race in CI
  • Cleanup with t.Cleanup() instead of manual defer where appropriate

Phase 4: Security Scanning (PRD-STD-004)

Step 4.1: Automated CI Checks

Your CI pipeline catches:

# These run automatically on every PR
- golangci-lint: staticcheck, gosec, revive, govet
- govulncheck: Known CVEs in Go module dependencies
- Semgrep: SQL injection, command injection, SSRF
- Trivy: Container image vulnerabilities

Step 4.2: Manual Security Review

Rate-limiting services are security-critical. Verify:

  • Rate-check is atomic — no TOCTOU between check and increment
  • Redis auth credentials are not hardcoded (loaded from env/secret store)
  • Tenant isolation is enforced at the query level (WHERE tenant_id = $1)
  • Rate-limit bypass is not possible by omitting headers or spoofing tenant_id
  • Open-circuit fallback (allow on Redis failure) is logged and alerted
  • No timing side-channel in policy lookup (constant-time tenant lookup)

Phase 5: Quality Gates (PRD-STD-007)

Step 5.1: PR Checklist

| Gate | Tool | Pass Criteria |
| --- | --- | --- |
| Type safety | go vet ./... | Zero issues |
| Lint | golangci-lint run | Zero errors (warnings reviewed) |
| Unit tests | go test ./... | 100% passing, new code covered |
| Race detector | go test -race ./... | Zero race conditions detected |
| Security scan | gosec + govulncheck | Zero high/critical findings |
| Integration tests | testcontainers-go | All passing with real Postgres + Redis |
| Build | go build ./... | Successful, binary runs |

Step 5.2: PR Metadata

## Changes
- Add rate-limit policy model with pgx migration
- Add CRUD handlers: POST/GET/PATCH/DELETE /api/v1/policies
- Add sliding-window rate-check with Redis Lua script
- Add Chi middleware for downstream rate enforcement
- Add OpenTelemetry spans for rate-check latency

## AI Disclosure
- AI-Usage: claude
- AI-Prompt-Ref: by-language/go/secure-endpoint (CRUD + rate-check)
- AI-Review: Used by-language/go/pr-risk-review for self-review
- Human-Review: Redis atomicity manually verified, concurrency tests reviewed

## Testing
- 15 unit tests (table-driven: validation, rate math, error wrapping)
- 10 API integration tests (CRUD + auth boundary + pagination)
- 7 Redis integration tests (sliding window, atomicity, expiry)
- 4 concurrency tests (parallel rate-checks with -race flag)

Phase 6: Dependency Compliance (PRD-STD-008)

Use the Go Dependency Risk Check (prompt-library/by-language/go/dependency-check.md) if new modules were added:

Review these dependency additions:
- github.com/redis/go-redis/v9 (Redis client)
- github.com/testcontainers/testcontainers-go (test infrastructure)

Check: license, CVEs, maintenance status, module size, alternatives.

Phase 7: Documentation (PRD-STD-005)

Use the Go Change Runbook (prompt-library/by-language/go/change-runbook.md) to generate:

  1. Migration notes: PostgreSQL migration must run before deployment
  2. Environment variables: REDIS_URL, REDIS_PASSWORD, RATE_LIMIT_DEFAULT_WINDOW, RATE_LIMIT_REDIS_TIMEOUT_MS
  3. Rollback procedure: Revert migration, redeploy previous binary, Redis counters auto-expire
  4. Monitoring:
    • Alert on rate-check Redis error rate > 1% (indicates Redis connectivity issue)
    • Alert on rate-check latency p99 > 30ms
    • Dashboard: allowed/denied ratio by tenant, Redis hit rate, open-circuit events
  5. Operational notes:
    • Open-circuit mode allows all requests when Redis is unreachable — monitor for abuse during Redis outages
    • Rate-limit policies take effect immediately — no cache delay

Summary: Standards Applied

| Standard | How It Was Applied | Evidence |
| --- | --- | --- |
| PRD-STD-001 (Prompt Engineering) | Structured prompts from Go templates | PR description AI-Prompt-Ref |
| PRD-STD-002 (Code Review) | AI + human review with concurrency focus | Review comments on PR |
| PRD-STD-003 (Testing) | Table-driven tests + race detector, 36+ tests | CI test results |
| PRD-STD-004 (Security) | Automated scans + atomicity review | CI scan output + review notes |
| PRD-STD-005 (Documentation) | Generated runbook from template | PR description + runbook |
| PRD-STD-007 (Quality Gates) | All gates passing including -race | CI status checks |
| PRD-STD-008 (Dependencies) | Dependency risk check for new modules | PR comment with assessment |

What This Demonstrates

  1. Concurrency is Go's biggest AI risk — AI-generated Go code frequently has race conditions; the -race flag and explicit concurrency tests catch these
  2. Table-driven tests are the Go idiom — AI often generates one-test-per-case instead of table-driven; the testing strategy prompt corrects this
  3. Resource leaks are subtle — defer rows.Close() and context cancellation are easy for AI to miss; the review checklist catches them
  4. Open-circuit decisions need governance — allowing all requests when Redis fails is a business decision that needs documentation and alerting, not just code
  5. Go's simplicity makes governance lightweight — minimal framework magic means code review is straightforward; the standards add structure without bureaucracy

Apply This Pattern in Your Repo

Use this scenario as a reference pattern, then choose an implementation path:

Next Steps