
Task, Agent & Documentation Manager

Multi-Agent Task Orchestration & Documentation Sync

Status: Production Ready (v1.2.0) Date: January 2026


🎯 Overview

A system that:

  1. Manages tasks in multi-agent workflow
  2. Assigns agents automatically based on expertise
  3. Coordinates execution in parallel with approval gates
  4. Extracts decisions as Architecture Decision Records (ADRs)
  5. Maintains documentation automatically synchronized

📋 Task Structure

Task Metadata

Tasks are stored in SurrealDB; the metadata below is shown in TOML for readability:

[task]
id = "task-089"
type = "feature"                    # feature | bugfix | enhancement | tech-debt
title = "Implement learning profiles"
description = "Agent expertise tracking with recency bias"

[status]
state = "in-progress"               # todo | in-progress | review | done | archived
progress = 60                       # 0-100%
created_at = "2026-01-11T10:15:30Z"
updated_at = "2026-01-11T14:30:22Z"

[assignment]
priority = "high"                   # high | medium | low
assigned_agent = "developer"        # Or null if unassigned
assigned_team = "infrastructure"

[estimation]
estimated_hours = 8
actual_hours = null                 # Updated when complete

[context]
related_tasks = ["task-087", "task-088"]
blocking_tasks = []
blocked_by = []

Task Lifecycle

┌─────────┐     ┌──────────────┐     ┌────────┐     ┌──────────┐
│  TODO   │────▶│ IN-PROGRESS  │────▶│ REVIEW │────▶│   DONE   │
└─────────┘     └──────────────┘     └────────┘     └──────────┘
       △                                  │
       │                                  │
       └───────────── ARCHIVED ◀──────────┘
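The lifecycle can also be encoded as an explicit transition table. The arrows below are read directly off the diagram (including review → archived and reopening archived → todo); the guard helper itself is a hypothetical sketch, not the system's API:

```python
# Lifecycle states from the task metadata; allowed moves from the diagram.
TRANSITIONS = {
    "todo": {"in-progress"},
    "in-progress": {"review"},
    "review": {"done", "archived"},
    "done": set(),
    "archived": {"todo"},  # archived tasks can be reopened
}

def can_transition(current: str, target: str) -> bool:
    """True if the lifecycle permits moving current -> target."""
    return target in TRANSITIONS.get(current, set())

print(can_transition("todo", "in-progress"))  # True
print(can_transition("todo", "done"))         # False
```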

🤖 Agent Assignment

Automatic Selection

When a task is created, SwarmCoordinator assigns the best agent:

  1. Capability Matching: Filter agents by role matching task type
  2. Learning Profile Lookup: Get expertise scores for task-type
  3. Load Balancing: Check current agent load (tasks in progress)
  4. Scoring: final_score = 0.3*load + 0.5*expertise + 0.2*confidence
  5. Notification: Agent receives job via NATS JetStream
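A hypothetical sketch of the scoring step (step 4). The 0.3/0.5/0.2 weights come from the formula above; everything else is an assumption — inputs are taken as normalized to [0, 1], with `load` expressed as availability (1.0 = fully idle) so that a busier agent scores lower:

```python
def final_score(load: float, expertise: float, confidence: float) -> float:
    # Weights from the text: 0.3*load + 0.5*expertise + 0.2*confidence
    return 0.3 * load + 0.5 * expertise + 0.2 * confidence

def pick_agent(candidates: dict[str, tuple[float, float, float]]) -> str:
    """Return the candidate with the highest combined score."""
    return max(candidates, key=lambda name: final_score(*candidates[name]))

agents = {
    "developer-1": (0.8, 0.90, 0.7),  # mostly idle, strong track record
    "developer-2": (0.2, 0.95, 0.9),  # slightly stronger, but heavily loaded
}
print(pick_agent(agents))  # developer-1
```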

Agent Roles

| Role | Specialization | Primary Tasks |
|------|----------------|---------------|
| Architect | System design | Feature planning, ADRs, design reviews |
| Developer | Implementation | Code generation, refactoring, debugging |
| Reviewer | Quality assurance | Code review, test coverage, style checks |
| Tester | QA & Benchmarks | Test suite, performance benchmarks |
| Documenter | Documentation | Guides, API docs, README updates |
| Marketer | Marketing content | Blog posts, case studies, announcements |
| Presenter | Presentations | Slides, deck creation, demo scripts |
| DevOps | Infrastructure | CI/CD setup, deployment, monitoring |
| Monitor | Health & Alerting | System monitoring, alerts, incident response |
| Security | Compliance & Audit | Code security, access control, compliance |
| ProjectManager | Coordination | Roadmap, tracking, milestone management |
| DecisionMaker | Conflict Resolution | Tie-breaking, escalation, ADR creation |

🔄 Multi-Agent Workflow Execution

Sequential Workflow (Phases)

Phase 1: Design
  └─ Architect creates ADR
     └─ Move to Phase 2 (auto on completion)

Phase 2: Development
  └─ Developer implements
  └─ (Parallel) Documenter writes guide
     └─ Move to Phase 3

Phase 3: Review
  └─ Reviewer checks code quality
  └─ Security audits for compliance
     └─ If approved: Move to Phase 4
     └─ If rejected: Back to Phase 2

Phase 4: Testing
  └─ Tester creates test suite
  └─ Tester runs benchmarks
     └─ If passing: Move to Phase 5
     └─ If failing: Back to Phase 2

Phase 5: Completion
  └─ DevOps deploys
  └─ Monitor sets up alerts
  └─ ProjectManager marks done

Parallel Coordination

Multiple agents work simultaneously when independent:

Task: "Add learning profiles"

├─ Architect (ADR)          ▶ Created in 2h
├─ Developer (Code)         ▶ Implemented in 8h
│  ├─ Reviewer (Review)     ▶ Reviewed in 1h (parallel)
│  └─ Documenter (Guide)    ▶ Documented in 2h (parallel)
│
└─ Tester (Tests)           ▶ Tests in 3h
   └─ Security (Audit)      ▶ Audited in 1h (parallel)

Approval Gates

Critical decision points require manual approval:

  • Security Gate: Must approve if code touches auth/secrets
  • Breaking Changes: Architect approval required
  • Production Deployment: DevOps + ProjectManager approval
  • Major Refactoring: Architect + Lead Developer approval
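A hedged sketch of the gate rules above as a lookup. The change-attribute names are hypothetical; only the gate-to-approver pairing comes from the list:

```python
def required_approvers(change: dict) -> set[str]:
    """Map a change's attributes to the approvers who must sign off."""
    approvers: set[str] = set()
    if change.get("touches_auth_or_secrets"):
        approvers.add("security")                          # Security Gate
    if change.get("breaking_change"):
        approvers.add("architect")                         # Breaking Changes
    if change.get("production_deploy"):
        approvers.update({"devops", "projectmanager"})     # Production Deployment
    if change.get("major_refactor"):
        approvers.update({"architect", "lead-developer"})  # Major Refactoring
    return approvers

print(sorted(required_approvers({"breaking_change": True, "production_deploy": True})))
# ['architect', 'devops', 'projectmanager']
```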

πŸ“ Decision Extraction (ADRs)

Every design decision is automatically captured:

ADR Template

# ADR-042: Learning-Based Agent Selection

## Context

Previous agent assignment used simple load balancing (min tasks),
ignoring historical performance data. This led to poor agent-task matches.

## Decision

Implement per-task-type learning profiles with recency bias.

### Key Points
- Success rate weighted by recency (7-day window, 3× weight)
- Confidence scoring prevents small-sample overfitting
- Supports adaptive recovery from temporary degradation

## Consequences

**Positive**:
- 30-50% improvement in task success rate
- Agents improve continuously

**Negative**:
- Requires KG data collection (startup period)
- Learning period ~20 tasks per task-type

## Alternatives Considered

1. Rule-based routing (rejected: no learning)
2. Pure random assignment (rejected: no improvement)
3. Rolling average (rejected: no recency bias)

## Decision Made

Option A: Learning profiles with recency bias

ADR Extraction Process

  1. Automatic: Each task completion generates execution record
  2. Learning: If decision had trade-offs, extract as ADR candidate
  3. Curation: ProjectManager/Architect reviews and approves
  4. Archival: Stored in docs/architecture/adr/ (numbered, immutable)
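Step 2 could look like this in outline: a completed execution becomes an ADR candidate only when a real decision was made. The record field names (`alternatives_considered`, `tradeoffs`) are hypothetical; only the "decision with trade-offs" criterion comes from the text:

```python
def is_adr_candidate(execution: dict) -> bool:
    """True when at least one alternative was weighed and trade-offs recorded."""
    return bool(execution.get("alternatives_considered")) and bool(execution.get("tradeoffs"))

record = {
    "task_id": "task-089",
    "alternatives_considered": ["rule-based routing", "rolling average"],
    "tradeoffs": ["requires KG data collection at startup"],
}
print(is_adr_candidate(record))  # True
```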

📚 Documentation Synchronization

Automatic Updates

When tasks complete, documentation is auto-updated:

| Task Type | Auto-Updates |
|-----------|--------------|
| Feature | CHANGELOG.md, feature overview, API docs |
| Bugfix | CHANGELOG.md, troubleshooting guide |
| Tech-Debt | Architecture docs, refactoring guide |
| Enhancement | Feature docs, user guide |
| Documentation | Indexed in RAG, updated in search |

Documentation Lifecycle

Task Created
    │
    ▼
Documentation Context Extracted
    │
    ├─ Decision/ADR created
    ├─ Related docs identified
    └─ Change summary prepared
    │
    ▼
Task Execution
    │
    ├─ Code generated
    ├─ Tests created
    └─ Examples documented
    │
    ▼
Task Complete
    │
    ├─ ADR finalized
    ├─ Docs auto-generated
    ├─ CHANGELOG entry created
    └─ Search index updated (RAG)
    │
    ▼
Archival (if stale)
    │
    └─ Moved to docs/archive/
       (kept for historical reference)

πŸ” Search & Retrieval (RAG Integration)

Document Indexing

All generated documentation is indexed for semantic search:

  • Architecture decisions (ADRs)
  • Feature guides (how-tos)
  • Code examples (patterns)
  • Execution history (knowledge graph)

Query Examples

User asks: "How do I implement learning profiles?"

System searches:

  1. ADRs mentioning "learning"
  2. Implementation guides with "learning"
  3. Execution history with similar task type
  4. Code examples for "learning profiles"

Returns ranked results with sources.
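As a toy stand-in for the retrieval step: production search is semantic (over the RAG index), but a simple keyword-overlap score makes the ranking shape visible. The document titles here are invented for the example:

```python
def rank(query: str, docs: list[str]) -> list[str]:
    """Order docs by how many query terms they share (a crude relevance proxy)."""
    terms = set(query.lower().split())
    return sorted(docs, key=lambda d: len(terms & set(d.lower().split())), reverse=True)

docs = [
    "ADR-042: Learning-Based Agent Selection",
    "Guide: implement learning profiles step by step",
    "Runbook: NATS JetStream operations",
]
print(rank("How do I implement learning profiles?", docs)[0])
# Guide: implement learning profiles step by step
```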


📊 Metrics & Monitoring

Task Metrics

  • Success Rate: % of tasks completed successfully
  • Cycle Time: Average time from todo → done
  • Agent Utilization: Tasks per agent per role
  • Decision Quality: ADRs implemented vs. abandoned

Agent Metrics (per role)

  • Task Success Rate: % tasks completed successfully
  • Learning Curve: Expertise improvement over time
  • Cost per Task: Average LLM spend per completed task
  • Task Coverage: Breadth of task-types handled

Documentation Metrics

  • Coverage: % of features documented
  • Freshness: Days since last update
  • Usage: Search queries hitting each doc
  • Accuracy: User feedback on doc correctness

πŸ—οΈ Implementation Details

SurrealDB Schema

-- Tasks table
DEFINE TABLE tasks SCHEMAFULL;
DEFINE FIELD id ON tasks TYPE string;
DEFINE FIELD type ON tasks TYPE string;
DEFINE FIELD state ON tasks TYPE string;
DEFINE FIELD assigned_agent ON tasks TYPE option<string>;

-- Executions (for learning)
DEFINE TABLE executions SCHEMAFULL;
DEFINE FIELD task_id ON executions TYPE string;
DEFINE FIELD agent_id ON executions TYPE string;
DEFINE FIELD success ON executions TYPE bool;
DEFINE FIELD duration_ms ON executions TYPE number;
DEFINE FIELD cost_cents ON executions TYPE number;

-- ADRs table
DEFINE TABLE adrs SCHEMAFULL;
DEFINE FIELD id ON adrs TYPE string;
DEFINE FIELD task_id ON adrs TYPE string;
DEFINE FIELD title ON adrs TYPE string;
DEFINE FIELD status ON adrs TYPE string; -- draft|approved|archived

NATS Topics

  • tasks.{type}.{priority} – Task assignments
  • agents.{role}.ready – Agent heartbeats
  • agents.{role}.complete – Task completion
  • adrs.created – New ADR events
  • docs.updated – Documentation changes
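The subject-naming scheme above reduces to string construction, sketched here separately from any actual NATS client. The values are illustrative:

```python
def task_subject(task_type: str, priority: str) -> str:
    """Build a task-assignment subject: tasks.{type}.{priority}."""
    return f"tasks.{task_type}.{priority}"

def agent_subject(role: str, event: str) -> str:
    """Build an agent subject: agents.{role}.{ready|complete}."""
    if event not in {"ready", "complete"}:
        raise ValueError(f"unknown agent event: {event}")
    return f"agents.{role}.{event}"

print(task_subject("feature", "high"))      # tasks.feature.high
print(agent_subject("developer", "ready"))  # agents.developer.ready
```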

🎯 Key Design Patterns

1. Event-Driven Coordination

  • Task creation → Agent assignment (async via NATS)
  • Task completion → Documentation update (eventual consistency)
  • No direct API calls between services (loosely coupled)

2. Learning from Execution History

  • Every task stores execution metadata (success, duration, cost)
  • Learning profiles updated from execution data
  • Better assignments improve continuously
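A hedged sketch of a recency-biased success rate matching the "7-day window, 3× weight" described in the ADR example earlier; the function shape is an assumption, not the system's actual API:

```python
from datetime import datetime, timedelta, timezone

def weighted_success_rate(executions, now, window_days=7, recent_weight=3.0):
    """executions: iterable of (timestamp, success: bool) pairs."""
    cutoff = now - timedelta(days=window_days)
    num = den = 0.0
    for ts, success in executions:
        w = recent_weight if ts >= cutoff else 1.0  # recent runs count 3x
        num += w * success
        den += w
    return num / den if den else 0.0

now = datetime(2026, 1, 11, tzinfo=timezone.utc)
history = [
    (now - timedelta(days=1), True),    # recent success, weight 3
    (now - timedelta(days=30), False),  # old failure, weight 1
]
print(weighted_success_rate(history, now))  # 0.75
```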

3. Decision Extraction

  • Design decisions captured as ADRs
  • Immutable record of architectural rationale
  • Serves as organizational memory

4. Graceful Degradation

  • NATS offline: In-memory queue fallback
  • Agent unavailable: Task re-assigned to next best
  • Doc generation failed: Manual entry allowed
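The NATS-offline fallback above could be sketched as a thin wrapper: failed publishes queue in memory and drain on reconnect. The `FallbackPublisher` shape is hypothetical; a real implementation would sit in front of a NATS client's publish call:

```python
from collections import deque

class FallbackPublisher:
    def __init__(self, publish):
        self._publish = publish   # callable(subject, payload); raises when offline
        self._backlog = deque()

    def publish(self, subject: str, payload: bytes) -> None:
        try:
            self._publish(subject, payload)
        except ConnectionError:
            self._backlog.append((subject, payload))  # degrade, never drop

    def drain(self) -> int:
        """Re-send the backlog after reconnect; returns how many were sent."""
        sent = 0
        while self._backlog:
            self._publish(*self._backlog.popleft())
            sent += 1
        return sent
```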


Status: ✅ Production Ready Version: 1.2.0 Last Updated: January 2026