Task, Agent & Documentation Manager
Multi-Agent Task Orchestration & Documentation Sync
Status: Production Ready (v1.2.0) Date: January 2026
Overview
A system that:
- Manages tasks in multi-agent workflow
- Assigns agents automatically based on expertise
- Coordinates execution in parallel with approval gates
- Extracts decisions as Architecture Decision Records (ADRs)
- Maintains documentation automatically synchronized
Task Structure
Task Metadata
Tasks are stored in SurrealDB with the following structure:
```toml
[task]
id = "task-089"
type = "feature" # feature | bugfix | enhancement | tech-debt
title = "Implement learning profiles"
description = "Agent expertise tracking with recency bias"

[status]
state = "in-progress" # todo | in-progress | review | done | archived
progress = 60 # 0-100%
created_at = "2026-01-11T10:15:30Z"
updated_at = "2026-01-11T14:30:22Z"

[assignment]
priority = "high" # high | medium | low
assigned_agent = "developer" # Or null if unassigned
assigned_team = "infrastructure"

[estimation]
estimated_hours = 8
actual_hours = null # Updated when complete

[context]
related_tasks = ["task-087", "task-088"]
blocking_tasks = []
blocked_by = []
```
Task Lifecycle
```
TODO ──▶ IN-PROGRESS ──▶ REVIEW ──▶ DONE
  │                                   │
  └───────────▶ ARCHIVED ◀────────────┘
```
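The lifecycle above can be sketched as a transition table. This is an illustrative model only (the state names come from the diagram; the helper function and the exact set of allowed transitions, such as review returning to in-progress on rejection, are assumptions based on the workflow described later):

```python
# Allowed task-state transitions, mirroring the lifecycle diagram.
# The review -> in-progress edge reflects the "rejected: back to Phase 2" rule.
ALLOWED = {
    "todo":        {"in-progress", "archived"},
    "in-progress": {"review", "archived"},
    "review":      {"done", "in-progress", "archived"},
    "done":        {"archived"},
    "archived":    set(),  # terminal state
}

def can_transition(current: str, target: str) -> bool:
    """Return True if the lifecycle allows moving current -> target."""
    return target in ALLOWED.get(current, set())
```

For example, `can_transition("review", "done")` is allowed, while reopening an archived task is not.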
Agent Assignment
Automatic Selection
When a task is created, SwarmCoordinator assigns the best agent:
- Capability Matching: Filter agents by role matching task type
- Learning Profile Lookup: Get expertise scores for task-type
- Load Balancing: Check current agent load (tasks in progress)
- Scoring: `final_score = 0.3*load + 0.5*expertise + 0.2*confidence`
- Notification: Agent receives the job via NATS JetStream
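The scoring step is a direct weighted sum. A minimal sketch, assuming all three inputs are normalized to [0, 1] and that `load` is an availability term (higher means less busy), so that a larger final score is better; the function name is illustrative:

```python
def assignment_score(load: float, expertise: float, confidence: float) -> float:
    """Combine the three signals with the documented weights:
    final_score = 0.3*load + 0.5*expertise + 0.2*confidence
    """
    return 0.3 * load + 0.5 * expertise + 0.2 * confidence
```

The agent with the highest `assignment_score` among the capability-matched candidates would win the assignment.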
Agent Roles
| Role | Specialization | Primary Tasks |
|---|---|---|
| Architect | System design | Feature planning, ADRs, design reviews |
| Developer | Implementation | Code generation, refactoring, debugging |
| Reviewer | Quality assurance | Code review, test coverage, style checks |
| Tester | QA & Benchmarks | Test suite, performance benchmarks |
| Documenter | Documentation | Guides, API docs, README updates |
| Marketer | Marketing content | Blog posts, case studies, announcements |
| Presenter | Presentations | Slides, deck creation, demo scripts |
| DevOps | Infrastructure | CI/CD setup, deployment, monitoring |
| Monitor | Health & Alerting | System monitoring, alerts, incident response |
| Security | Compliance & Audit | Code security, access control, compliance |
| ProjectManager | Coordination | Roadmap, tracking, milestone management |
| DecisionMaker | Conflict Resolution | Tie-breaking, escalation, ADR creation |
Multi-Agent Workflow Execution
Sequential Workflow (Phases)
```
Phase 1: Design
├─ Architect creates ADR
└─ Move to Phase 2 (auto on completion)

Phase 2: Development
├─ Developer implements
├─ (Parallel) Documenter writes guide
└─ Move to Phase 3

Phase 3: Review
├─ Reviewer checks code quality
├─ Security audits for compliance
├─ If approved: Move to Phase 4
└─ If rejected: Back to Phase 2

Phase 4: Testing
├─ Tester creates test suite
├─ Tester runs benchmarks
├─ If passing: Move to Phase 5
└─ If failing: Back to Phase 2

Phase 5: Completion
├─ DevOps deploys
├─ Monitor sets up alerts
└─ ProjectManager marks done
```
Parallel Coordination
Multiple agents work simultaneously when independent:
```
Task: "Add learning profiles"
├─ Architect (ADR)         ▶ Created in 2h
├─ Developer (Code)        ▶ Implemented in 8h
│   ├─ Reviewer (Review)   ▶ Reviewed in 1h (parallel)
│   └─ Documenter (Guide)  ▶ Documented in 2h (parallel)
│
├─ Tester (Tests)          ▶ Tests in 3h
└─ Security (Audit)        ▶ Audited in 1h (parallel)
```
Approval Gates
Critical decision points require manual approval:
- Security Gate: Must approve if code touches auth/secrets
- Breaking Changes: Architect approval required
- Production Deployment: DevOps + ProjectManager approval
- Major Refactoring: Architect + Lead Developer approval
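A gate can be modeled as a set of required approver roles that must all sign off. The mapping below restates the list above; the function, the gate keys, and the `"LeadDeveloper"` role identifier are hypothetical names for illustration:

```python
# Required approvers per gate (hypothetical keys and role names,
# derived from the approval-gate list above).
GATE_APPROVERS = {
    "security":    {"Security"},
    "breaking":    {"Architect"},
    "production":  {"DevOps", "ProjectManager"},
    "refactoring": {"Architect", "LeadDeveloper"},
}

def gate_cleared(gate: str, approvals: set[str]) -> bool:
    """A gate clears only when every required role has approved."""
    return GATE_APPROVERS[gate] <= approvals
```

Note that a production deployment with only the DevOps approval would still be blocked until the ProjectManager also approves.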
Decision Extraction (ADRs)
Every design decision is automatically captured:
ADR Template
```markdown
# ADR-042: Learning-Based Agent Selection

## Context
Previous agent assignment used simple load balancing (min tasks),
ignoring historical performance data. This led to poor agent-task matches.

## Decision
Implement per-task-type learning profiles with recency bias.

### Key Points
- Success rate weighted by recency (7-day window, 3× weight)
- Confidence scoring prevents small-sample overfitting
- Supports adaptive recovery from temporary degradation

## Consequences
**Positive**:
- 30-50% improvement in task success rate
- Agents improve continuously

**Negative**:
- Requires KG data collection (startup period)
- Learning period ~20 tasks per task-type

## Alternatives Considered
1. Rule-based routing (rejected: no learning)
2. Pure random assignment (rejected: no improvement)
3. Rolling average (rejected: no recency bias)

## Decision Made
Option A: Learning profiles with recency bias
```
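The recency weighting described in the ADR (7-day window, 3× weight) could be computed roughly as follows. This is a sketch under stated assumptions: the function name, the `(timestamp, success)` input shape, and the linear weighting form are all illustrative, not the system's actual implementation:

```python
from datetime import datetime, timedelta, timezone

def recency_weighted_success(executions, now=None,
                             window_days=7, recent_weight=3.0):
    """Weighted success rate: executions inside the recency window
    count `recent_weight` times as much as older ones.

    `executions` is an iterable of (timestamp, success) pairs.
    """
    now = now or datetime.now(timezone.utc)
    window = timedelta(days=window_days)
    num = den = 0.0
    for ts, ok in executions:
        w = recent_weight if now - ts <= window else 1.0
        num += w * (1.0 if ok else 0.0)
        den += w
    return num / den if den else 0.0
```

A recent success thus outweighs an old failure: one success from yesterday (weight 3) against one failure from a month ago (weight 1) yields a rate of 0.75 rather than 0.5.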
ADR Extraction Process
- Automatic: Each task completion generates execution record
- Learning: If decision had trade-offs, extract as ADR candidate
- Curation: ProjectManager/Architect reviews and approves
- Archival: Stored in docs/architecture/adr/ (numbered, immutable)
Documentation Synchronization
Automatic Updates
When tasks complete, documentation is auto-updated:
| Task Type | Auto-Updates |
|---|---|
| Feature | CHANGELOG.md, feature overview, API docs |
| Bugfix | CHANGELOG.md, troubleshooting guide |
| Tech-Debt | Architecture docs, refactoring guide |
| Enhancement | Feature docs, user guide |
| Documentation | Indexed in RAG, updated in search |
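The auto-update matrix above amounts to a dispatch table from task type to documentation targets. The data is taken from the table; the helper function and target strings are a hypothetical sketch:

```python
# Task type -> documentation artifacts to regenerate (from the matrix above).
AUTO_UPDATES = {
    "feature":       ["CHANGELOG.md", "feature overview", "API docs"],
    "bugfix":        ["CHANGELOG.md", "troubleshooting guide"],
    "tech-debt":     ["architecture docs", "refactoring guide"],
    "enhancement":   ["feature docs", "user guide"],
    "documentation": ["RAG index", "search index"],
}

def docs_to_update(task_type: str) -> list[str]:
    """Return the documentation targets for a completed task type."""
    return AUTO_UPDATES.get(task_type, [])
```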
Documentation Lifecycle
```
Task Created
    │
    ▼
Documentation Context Extracted
    ├─ Decision/ADR created
    ├─ Related docs identified
    └─ Change summary prepared
    │
    ▼
Task Execution
    ├─ Code generated
    ├─ Tests created
    └─ Examples documented
    │
    ▼
Task Complete
    ├─ ADR finalized
    ├─ Docs auto-generated
    ├─ CHANGELOG entry created
    └─ Search index updated (RAG)
    │
    ▼
Archival (if stale)
    └─ Moved to docs/archive/
       (kept for historical reference)
```
Search & Retrieval (RAG Integration)
Document Indexing
All generated documentation is indexed for semantic search:
- Architecture decisions (ADRs)
- Feature guides (how-tos)
- Code examples (patterns)
- Execution history (knowledge graph)
Query Examples
User asks: "How do I implement learning profiles?"
System searches:
- ADRs mentioning "learning"
- Implementation guides with "learning"
- Execution history with similar task type
- Code examples for "learning profiles"
Returns ranked results with sources.
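The query flow can be illustrated with a stand-in ranker. The real system performs semantic (embedding-based) search; this keyword-overlap sketch only shows the shape of the query-to-ranked-sources step and is not the actual retrieval code:

```python
def rank_docs(query: str, docs: dict[str, str]) -> list[tuple[str, int]]:
    """Rank documents by naive keyword overlap with the query.

    `docs` maps a source name (e.g. an ADR id) to its text;
    returns (source, score) pairs, best match first, zero scores dropped.
    """
    terms = set(query.lower().split())
    scored = [
        (src, sum(1 for t in terms if t in text.lower()))
        for src, text in docs.items()
    ]
    return sorted((s for s in scored if s[1] > 0), key=lambda s: -s[1])
```

Asking "learning profiles" against a corpus containing ADR-042 would surface the ADR first, with its source name attached, mirroring the "ranked results with sources" behavior described above.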
Metrics & Monitoring
Task Metrics
- Success Rate: % of tasks completed successfully
- Cycle Time: Average time from todo → done
- Agent Utilization: Tasks per agent per role
- Decision Quality: ADRs implemented vs. abandoned
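Cycle time, for instance, reduces to a timestamp difference over the ISO-8601 format used in the task metadata (`created_at`/`updated_at`). This helper is an illustrative assumption, not the system's actual metric code:

```python
from datetime import datetime

def cycle_time_hours(created_at: str, done_at: str) -> float:
    """Hours between two ISO-8601 UTC timestamps ("...Z" suffix),
    matching the timestamp format shown in the task metadata."""
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    delta = datetime.strptime(done_at, fmt) - datetime.strptime(created_at, fmt)
    return delta.total_seconds() / 3600.0
```

Averaging this over all tasks that reached the done state yields the Cycle Time metric.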
Agent Metrics (per role)
- Task Success Rate: % tasks completed successfully
- Learning Curve: Expertise improvement over time
- Cost per Task: Average LLM spend per completed task
- Task Coverage: Breadth of task-types handled
Documentation Metrics
- Coverage: % of features documented
- Freshness: Days since last update
- Usage: Search queries hitting each doc
- Accuracy: User feedback on doc correctness
Implementation Details
SurrealDB Schema
```sql
-- Tasks table
DEFINE TABLE tasks SCHEMAFULL;
DEFINE FIELD id ON tasks TYPE string;
DEFINE FIELD type ON tasks TYPE string;
DEFINE FIELD state ON tasks TYPE string;
DEFINE FIELD assigned_agent ON tasks TYPE option<string>;

-- Executions (for learning)
DEFINE TABLE executions SCHEMAFULL;
DEFINE FIELD task_id ON executions TYPE string;
DEFINE FIELD agent_id ON executions TYPE string;
DEFINE FIELD success ON executions TYPE bool;
DEFINE FIELD duration_ms ON executions TYPE number;
DEFINE FIELD cost_cents ON executions TYPE number;

-- ADRs table
DEFINE TABLE adrs SCHEMAFULL;
DEFINE FIELD id ON adrs TYPE string;
DEFINE FIELD task_id ON adrs TYPE string;
DEFINE FIELD title ON adrs TYPE string;
DEFINE FIELD status ON adrs TYPE string; -- draft|approved|archived
```
NATS Topics
- `tasks.{type}.{priority}`: Task assignments
- `agents.{role}.ready`: Agent heartbeats
- `agents.{role}.complete`: Task completion
- `adrs.created`: New ADR events
- `docs.updated`: Documentation changes
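Subject names can be built from the patterns above with simple formatters. These helpers are illustrative only; they reproduce the subject patterns listed here, not an actual client API:

```python
def task_subject(task_type: str, priority: str) -> str:
    """Build a task-assignment subject, e.g. tasks.feature.high."""
    return f"tasks.{task_type}.{priority}"

def agent_subject(role: str, event: str) -> str:
    """Build an agent subject; event must be 'ready' or 'complete'."""
    if event not in ("ready", "complete"):
        raise ValueError(f"unknown agent event: {event}")
    return f"agents.{role}.{event}"
```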
Key Design Patterns
1. Event-Driven Coordination
- Task creation → Agent assignment (async via NATS)
- Task completion → Documentation update (eventual consistency)
- No direct API calls between services (loosely coupled)
2. Learning from Execution History
- Every task stores execution metadata (success, duration, cost)
- Learning profiles updated from execution data
- Better assignments improve continuously
3. Decision Extraction
- Design decisions captured as ADRs
- Immutable record of architectural rationale
- Serves as organizational memory
4. Graceful Degradation
- NATS offline: In-memory queue fallback
- Agent unavailable: Task re-assigned to next best
- Doc generation failed: Manual entry allowed
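The NATS-offline fallback can be sketched as a publisher that buffers in memory and replays on reconnect. This is a minimal illustration: the `publish` callable stands in for the real NATS client, and the class name and interface are assumptions, not the system's actual code:

```python
from collections import deque

class FallbackPublisher:
    """Publish via the given transport; buffer in memory when it is down."""

    def __init__(self, publish):
        self._publish = publish   # real transport, e.g. a NATS publish call
        self._buffer = deque()    # in-memory fallback queue

    def send(self, subject: str, payload: bytes) -> bool:
        """Try to publish; on connection failure, buffer and report False."""
        try:
            self._publish(subject, payload)
            return True
        except ConnectionError:
            self._buffer.append((subject, payload))
            return False

    def drain(self) -> int:
        """Replay buffered messages in order once the transport is back.
        Stops at the first failure; returns how many were sent."""
        sent = 0
        while self._buffer:
            subject, payload = self._buffer[0]
            try:
                self._publish(subject, payload)
            except ConnectionError:
                break
            self._buffer.popleft()
            sent += 1
        return sent
```

Messages are replayed in arrival order, so downstream consumers see the same sequence they would have seen without the outage.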
Related Documentation
- VAPORA Architecture: System overview
- Agent Registry & Coordination: Agent patterns
- Multi-Agent Workflows: Workflow execution
- Multi-IA Router: LLM provider selection
- Roles, Permissions & Profiles: RBAC
Status: Production Ready | Version: 1.2.0 | Last Updated: January 2026