Scenario: Django Monolith with AEEF Standards
This walkthrough shows how to apply AEEF production standards together on a Django monolith. It follows an entire feature from prompt to production, demonstrating how the standards create a governed delivery pipeline — with specific attention to DRF serializers, Celery background tasks, admin hardening, and N+1 query prevention.
Time required: 60-90 minutes (reading + doing).
Prerequisites: Familiarity with Python 3.12+, Django 5, Django REST Framework, and basic AEEF concepts from the Startup Quick-Start.
This is a realistic composite scenario showing how standards apply together. Adapt the specifics to your stack — the governance workflow is universal.
The Project
A B2B analytics platform built as a Django monolith:
- Framework: Django 5.1 with Django REST Framework
- Database: PostgreSQL via Django ORM
- Background tasks: Celery with Redis broker
- Admin: Django Admin with custom dashboards
- Cache: Redis via django-redis
- Testing: pytest-django + factory_boy + pytest-celery
- CI: GitHub Actions with Semgrep + pip-audit + ruff
- Deployment: Docker + AWS ECS
The team has 8 engineers working on a single monolith with 40+ Django apps and has completed the CI/CD Pipeline Starter.
The Feature
User story: As an account manager, I can schedule recurring report generation so that clients receive weekly performance summaries without manual intervention.
This feature touches:
- Database models (report schedule, generated report)
- REST API (CRUD for schedules + download endpoint for reports)
- Celery beat (periodic task for weekly generation)
- Celery worker (report generation with PDF export)
- Admin interface (view/manage schedules, inspect failed reports)
- Authorization (account managers can manage their clients' schedules; clients can view their own reports)
Phase 1: Prompt Engineering (PRD-STD-001)
Step 1.1: Structured Prompt for API Endpoint
Using the Python Secure REST Endpoint template (prompt-library/by-language/python/secure-endpoint.md) combined with the Django Endpoint Implementation template (prompt-library/by-framework/django/endpoint-implementation.md):
You are a senior Django engineer working in a production Python 3.12+ codebase.
**Context:**
- Django 5.1 with Django REST Framework
- Database: PostgreSQL via Django ORM
- Auth: JWT via SimpleJWT with role-based permissions (client, account_manager, admin)
- Testing: pytest-django + factory_boy
- Task queue: Celery with Redis broker
**Task:** Create CRUD endpoints for recurring report schedules.
**Requirements:**
1. ReportSchedule model: account_id (FK to Account), report_type (choices),
frequency (weekly/monthly), day_of_week (0-6 for weekly), day_of_month (1-28 for monthly),
recipients (JSONField — list of email addresses), active, created_by
2. Account managers can manage schedules for their assigned accounts only
3. Admins can manage any schedule
4. Clients can view schedules for their own account (read-only)
5. Max 10 schedules per account
6. Recipients must be valid email addresses (validated in serializer)
7. Report types: performance_summary, traffic_analysis, conversion_report
**Constraints:**
- Use DRF serializers for all request validation — never trust raw request.data
- Use DRF permission classes: custom IsAccountManagerForAccount, IsClientForAccount
- Use select_related('account', 'created_by') to prevent N+1 queries
- Return proper HTTP status codes (201 Created, 403 Forbidden, 409 Conflict when the schedule limit is reached)
- Use timezone.now() not datetime.now() for timestamps
- All model fields must have help_text for admin documentation
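The serializer rules in the prompt above (valid recipient emails, bounded day fields, a fixed set of report types) can be sketched framework-free. This is a minimal illustration of the validation logic only — `validate_schedule_payload` and its simple email regex are hypothetical stand-ins for DRF serializer validators, not any library API:

```python
import re

REPORT_TYPES = {"performance_summary", "traffic_analysis", "conversion_report"}
# Deliberately simple pattern for illustration; DRF's EmailValidator is stricter.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_schedule_payload(payload: dict) -> list[str]:
    """Return a list of validation errors (empty when the payload is valid)."""
    errors = []
    if payload.get("report_type") not in REPORT_TYPES:
        errors.append("report_type must be one of the supported choices")
    frequency = payload.get("frequency")
    if frequency == "weekly":
        if payload.get("day_of_week") not in range(0, 7):
            errors.append("day_of_week must be 0-6 for weekly schedules")
    elif frequency == "monthly":
        if payload.get("day_of_month") not in range(1, 29):
            errors.append("day_of_month must be 1-28 for monthly schedules")
    else:
        errors.append("frequency must be 'weekly' or 'monthly'")
    recipients = payload.get("recipients") or []
    if not recipients or not all(
        isinstance(r, str) and EMAIL_RE.match(r) for r in recipients
    ):
        errors.append("recipients must be a non-empty list of valid email addresses")
    return errors
```

In the real implementation these rules live in the DRF serializer's field validators and `validate()` method, so every code path that touches `request.data` goes through them.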
Step 1.2: Structured Prompt for Celery Task
You are implementing a Celery background task for report generation.
**Context:**
- Django 5.1, Celery 5.4 with Redis broker
- Report generation: query analytics data, format as PDF (weasyprint)
- Storage: S3 via django-storages
**Task:** Implement the recurring report generation pipeline.
**Requirements:**
1. Celery Beat schedule: check for due reports every hour
2. For each due schedule: create a GeneratedReport record, dispatch generation task
3. Generation task: query analytics data, generate PDF, upload to S3, update record status
4. Handle failures: retry 3 times with exponential backoff (60s, 300s, 900s)
5. Mark schedule as errored after 3 consecutive failures; notify admins
6. Email report to all recipients after successful generation
7. Track generation time and file size for monitoring
**Constraints:**
- Use @shared_task(bind=True) for retry access
- Use task_id as idempotency key — prevent duplicate generation
- Never pass Django model instances to Celery — pass IDs only
- Use select_for_update() when updating report status (prevent race conditions)
- Log structured context: schedule_id, report_id, generation_time_ms, file_size_bytes
- Set task soft_time_limit=300, time_limit=360 (5/6 minute limits)
- Close database connections in task finally block
Step 1.3: Record Prompt References
AI-Usage: claude
AI-Prompt-Ref: by-language/python/secure-endpoint + by-framework/django/endpoint-implementation (schedule CRUD),
by-framework/django/component-implementation (Celery task)
AI-Confidence: high — CRUD endpoints, medium — Celery retry/scheduling logic
Phase 2: Human-in-the-Loop Review (PRD-STD-002)
Step 2.1: Review AI Output Against Checklist
Using the Python PR Risk Review prompt (prompt-library/by-language/python/pr-risk-review.md):
Critical items for this feature:
| Check | What to Verify | Status |
|---|---|---|
| Auth bypass | Do permission classes check account ownership on every endpoint? | |
| Object-level auth | Is get_queryset() filtered by user's accounts, not just get_object()? | |
| N+1 queries | Does get_queryset() use select_related and prefetch_related? | |
| Celery serialization | Are model instances passed as IDs, never pickled objects? | |
| Race condition | Does the generation task use select_for_update() for status changes? | |
| Task idempotency | Can the same report be generated twice if Celery retries? (idempotency key) | |
| Resource limits | Are soft_time_limit and time_limit set on the generation task? | |
| Timezone | All datetime operations use timezone.now(), not datetime.now()? | |
| Email validation | Are recipient emails validated server-side in the serializer? | |
| Admin safety | Are sensitive fields readonly_fields in admin? | |
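The object-level auth row is the one AI output most often gets wrong: scoping must happen when the queryset is built, not only in per-object checks. A framework-free sketch of the rule (in Django this logic belongs in `get_queryset()`; the role names follow this scenario, and `visible_schedules` itself is a hypothetical helper):

```python
def visible_schedules(
    user_role: str, user_account_ids: set[int], schedules: list[dict]
) -> list[dict]:
    """Return only the schedules this user may see; admins see everything."""
    if user_role == "admin":
        return schedules
    # account_manager and client alike are scoped to their own accounts;
    # write access for clients is denied separately by permission classes.
    return [s for s in schedules if s["account_id"] in user_account_ids]
```

Filtering at the queryset level means list endpoints, detail endpoints, and any future endpoint reusing the queryset all inherit the boundary automatically.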
Step 2.2: Django-Specific AI Pitfalls to Check
From the Python anti-patterns table (prompt-library/by-language/python.md) and Django pitfalls (prompt-library/by-framework/django.md):
- No `fields = '__all__'` in serializers — explicit field lists only
- No bare `except:` clauses — specific exception types
- No `raw()` or `extra()` with string formatting
- No `@csrf_exempt` without justification (API uses JWT, so DRF handles this)
- No `mark_safe()` on user-generated content
- No mutable default arguments in function signatures
- `select_related`/`prefetch_related` in all list querysets
- Django signals used sparingly — prefer explicit service functions
Phase 3: Testing (PRD-STD-003)
Step 3.1: Generate Test Matrix
Use the Python Risk-Based Test Matrix prompt (prompt-library/by-language/python/test-matrix.md) combined with Django Testing Strategy (prompt-library/by-framework/django/testing-strategy.md):
Feature: Recurring report schedule CRUD + Celery generation pipeline
Changes: Django models, DRF views/serializers, Celery tasks, admin config
Generate a layered test strategy using pytest-django + factory_boy:
1. Model tests: constraints, validators, custom managers
2. Serializer tests: validation rules, nested data, email validation
3. View tests: CRUD operations, permission enforcement, pagination
4. Celery task tests: generation flow, retry logic, failure handling
5. Performance tests: N+1 query detection with assertNumQueries
Expected test coverage:
| Test Type | Count | What It Covers |
|---|---|---|
| Model (pytest-django) | 8-10 | Constraints, validators, `__str__`, Meta ordering |
| Serializer | 8-12 | Validation rules, email list, report type choices |
| View/API | 12-16 | CRUD + permission boundary (3 roles x endpoints) |
| Celery tasks | 8-10 | Generation, retry, failure, idempotency, time limits |
| Admin | 3-5 | List display, filters, readonly enforcement |
| Performance | 3-5 | N+1 detection with django_assert_num_queries |
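To make the N+1 row concrete, here is a toy query counter mimicking the property that `django_assert_num_queries` asserts. `QueryLog` and the two fetch functions are illustrative only, not real ORM or pytest-django APIs:

```python
class QueryLog:
    """Counts 'queries' the way pytest-django's django_assert_num_queries does."""
    def __init__(self) -> None:
        self.count = 0

def fetch_schedules_naive(log: QueryLog, n: int) -> None:
    log.count += 1          # SELECT * FROM report_schedule
    for _ in range(n):
        log.count += 1      # one extra SELECT per row for .account (the N+1)

def fetch_schedules_joined(log: QueryLog, n: int) -> None:
    log.count += 1          # single JOINed query via select_related('account')
```

The real test asserts a fixed query count on the list endpoint, so a dropped `select_related` turns the constant into 1+N and fails CI instead of degrading production.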
Step 3.2: Verify AI-Generated Tests
Common issues with AI-generated Django tests (from prompt-library/by-framework/django/testing-strategy.md):
- Tests use `@pytest.mark.django_db` — not `django.test.TestCase`
- Tests use factory_boy — not manual `Model.objects.create()`
- Tests use `APIClient.force_authenticate()` for auth, not custom token setup
- Tests verify N+1 queries with `django_assert_num_queries`
- Tests cover all three role boundaries (client, account_manager, admin)
- Celery tests use `@pytest.mark.celery` or mock task dispatch
- No `time.sleep()` — use freezegun for time-dependent tests
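The "no `time.sleep()`" point can be shown framework-free: pass fixed datetimes into the due-check instead of waiting for real time (freezegun generalizes the same idea to code that calls `timezone.now()` internally). `is_due` is a hypothetical helper shaped after this scenario's weekly/monthly fields:

```python
from datetime import datetime, timezone

def is_due(schedule: dict, now: datetime) -> bool:
    """A weekly schedule is due on its day_of_week; a monthly one on its day_of_month."""
    if not schedule["active"]:
        return False
    if schedule["frequency"] == "weekly":
        return now.weekday() == schedule["day_of_week"]
    return now.day == schedule["day_of_month"]

# Fixed instants instead of sleeping: 2024-01-01 is a Monday (weekday() == 0).
monday = datetime(2024, 1, 1, 9, 0, tzinfo=timezone.utc)
weekly = {"active": True, "frequency": "weekly", "day_of_week": 0}
assert is_due(weekly, monday)
assert not is_due(weekly, datetime(2024, 1, 2, 9, 0, tzinfo=timezone.utc))  # Tuesday
```

Deterministic timestamps make the Celery Beat due-check testable in milliseconds and keep the suite free of flaky timing assumptions.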
Phase 4: Security Scanning (PRD-STD-004)
Step 4.1: Automated CI Checks
Your CI pipeline catches:
# These run automatically on every PR
- Semgrep: SQL injection, Django-specific misconfigurations
- pip-audit: Known CVEs in Python dependencies
- ruff: Linting including security rules
- bandit: Python-specific security analysis (raw SQL, exec, pickle)
- mypy: Type safety (optional strict mode)
Step 4.2: Manual Security Review
Use the Django Security Review prompt (prompt-library/by-framework/django/security-review.md) to check:
- Report download endpoint validates that the requesting user has access to the account
- Generated PDF URLs are signed/temporary (S3 presigned URLs with expiry)
- Report content does not include raw SQL query results or internal system data
- Email recipient list cannot be manipulated to send reports to unauthorized addresses
- Admin actions for schedule management are logged in Django admin log
- Generated reports stored in S3 use server-side encryption
- Celery task logs do not contain PII from report content
Phase 5: Quality Gates (PRD-STD-007)
Step 5.1: PR Checklist
| Gate | Tool | Pass Criteria |
|---|---|---|
| Type safety | mypy (optional) | Zero errors if enabled |
| Lint | ruff | Zero errors |
| Unit tests | pytest | 100% passing, new code covered |
| Migration check | python manage.py makemigrations --check | No missing migrations |
| Security scan | Semgrep + bandit | Zero high/critical findings |
| Dependency audit | pip-audit | Zero high/critical CVEs |
| N+1 detection | django_assert_num_queries | All list endpoints verified |
| Build | Docker build | Successful |
Step 5.2: PR Metadata
## Changes
- Add ReportSchedule and GeneratedReport models with migrations
- Add CRUD endpoints: POST/GET/PATCH/DELETE /api/v1/report-schedules
- Add report download endpoint: GET /api/v1/reports/{id}/download
- Add Celery Beat schedule for hourly due-report check
- Add Celery task for PDF generation with retry logic
- Add custom DRF permission classes for account-level authorization
- Add Django Admin configuration for schedule and report management
## AI Disclosure
- AI-Usage: claude
- AI-Prompt-Ref: by-language/python/secure-endpoint + by-framework/django/endpoint-implementation (CRUD),
by-framework/django/component-implementation (Celery task)
- AI-Review: Used by-language/python/pr-risk-review for self-review
- Human-Review: Permission classes manually verified, Celery retry logic reviewed
## Testing
- 9 model tests (constraints, validators, ordering)
- 10 serializer tests (validation, email list, choices)
- 14 API view tests (CRUD + 3 role boundaries)
- 9 Celery task tests (generation, retry, failure, idempotency)
- 4 admin tests (list display, filters, readonly)
- 4 performance tests (N+1 query detection)
Phase 6: Dependency Compliance (PRD-STD-008)
Use the Python Dependency Risk Check (prompt-library/by-language/python/dependency-check.md) if new packages were added:
Review these dependency additions:
- weasyprint>=62.0 (PDF generation from HTML/CSS)
- django-celery-beat>=2.6 (periodic task scheduling)
- freezegun>=1.4 (time mocking in tests)
Check: license, CVEs, system dependencies (weasyprint needs libpango), maintenance status, alternatives.
Note: weasyprint requires system libraries (libpango, libcairo) — ensure these are in the Docker image.
Phase 7: Documentation (PRD-STD-005)
Use the Python Change Runbook (prompt-library/by-language/python/change-runbook.md) to generate:
- Migration notes: Django migration must run before deployment; Celery Beat schedule auto-registers
- Environment variables: `CELERY_BROKER_URL`, `AWS_STORAGE_BUCKET_NAME`, `REPORT_GENERATION_TIMEOUT_SECONDS`, `REPORT_MAX_RETRIES`, `REPORT_PRESIGNED_URL_EXPIRY_SECONDS`
- System dependencies: Docker image must include `libpango-1.0-0`, `libcairo2`, `libgdk-pixbuf2.0-0` for weasyprint
- Rollback procedure: revert the migration, stop the Celery Beat schedule, redeploy the previous image; existing generated reports remain accessible
- Monitoring:
- Alert on report generation failure rate > 10%
- Alert on report generation duration p99 > 4 minutes (approaching 5-min soft limit)
- Alert on schedule error count > 3 (schedule auto-disabled)
- Dashboard: reports generated per day, generation time distribution, failure reasons, S3 storage usage
- Operational notes:
- Errored schedules require manual reactivation via admin after root cause fix
- Reports are retained in S3 for 90 days, then lifecycle-deleted
- Presigned download URLs expire after 1 hour
Summary: Standards Applied
| Standard | How It Was Applied | Evidence |
|---|---|---|
| PRD-STD-001 (Prompt Engineering) | Structured prompts from Python/Django templates | PR description AI-Prompt-Ref |
| PRD-STD-002 (Code Review) | AI + human review with permission focus | Review comments on PR |
| PRD-STD-003 (Testing) | Layered tests with factory_boy + N+1 detection, 50+ tests | CI test results |
| PRD-STD-004 (Security) | Automated scans + download auth review | CI scan output + review notes |
| PRD-STD-005 (Documentation) | Generated runbook with system dependency notes | PR description + runbook |
| PRD-STD-007 (Quality Gates) | All gates including migration check and N+1 | CI status checks |
| PRD-STD-008 (Dependencies) | Dependency risk check including system deps | PR comment with assessment |
What This Demonstrates
- Django monoliths need object-level auth — DRF permission classes must check account ownership, not just role; `get_queryset()` filtering is the first line of defense
- Celery tasks have unique testing challenges — serialization (pass IDs, not objects), idempotency, retry logic, and time limits all need explicit tests
- N+1 queries are the silent monolith killer — `django_assert_num_queries` in list endpoint tests catches missing `select_related` before production
- System dependencies create deployment risk — weasyprint's native library requirements must be documented in the runbook, not just in requirements.txt
- Admin hardening is not optional — `readonly_fields`, `list_select_related`, and custom permissions prevent the Django Admin from becoming a security backdoor
Apply This Pattern in Your Repo
Use this scenario as a reference pattern, then choose an implementation path:
- Day 1 / small team: Starter Config Files + CI/CD Pipeline Starter
- Live role-based workflow (same repo, 4-role baseline): AEEF CLI Wrapper
- Transformation rollout (Python teams): Tier 2: Transformation Apply Path then Tier 2 Python
- Production rollout (regulated / enterprise): Tier 3: Production Apply Path then Tier 3 Python
Next Steps
- Walk through the Next.js Full-Stack Scenario for a frontend-inclusive example
- Walk through the Python Microservice Scenario for a backend microservice example
- Review the full Production Standards to identify any gaps for your team