<div align="center">
<img src="assets/vapora.svg" alt="Vapora Logo" width="400">
🌊 Intelligent Development Orchestration Platform <br>
🎯 Specialized agents orchestrate pipelines for your team <br>
🤖 Multi-IA router (Claude, OpenAI, Gemini, Ollama) <br>
⚡ Built with Rust from backend to frontend <br>
[![License](https://img.shields.io/badge/license-MIT%2FApache--2.0-blue.svg)](LICENSE)
[![Rust](https://img.shields.io/badge/rust-1.75%2B-orange.svg)](https://www.rust-lang.org)
[![Kubernetes](https://img.shields.io/badge/kubernetes-ready-326CE5.svg)](https://kubernetes.io)
[![Istio](https://img.shields.io/badge/istio-service%20mesh-466BB0.svg)](https://istio.io)
[![Tests](https://img.shields.io/badge/tests-316%20passing-green.svg)](crates/)
[Features](#features) • [Quick Start](#quick-start) • [Architecture](#architecture) • [Docs](docs/) • [Contributing](#contributing)
</div>
---
```txt
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
╦ ╦╔═╗╔═╗╔═╗╦═╗╔═╗
╚╗╔╝╠═╣╠═╝║ ║╠╦╝╠═╣
╚╝ ╩ ╩╩ ╚═╝╩╚═╩ ╩
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Vaporate complexity
Where ideas vaporize into reality
```
## 🌟 What is Vapora v1.2?
**VAPORA** is a **17-crate Rust workspace** (316 tests, 100% pass rate) delivering an **intelligent development orchestration platform** where teams and AI agents collaborate seamlessly to solve four critical development problems in parallel:
- **Context Switching** (developers unified in one system instead of jumping between tools)
- **Knowledge Fragmentation** (team decisions, code, and docs discoverable with RAG)
- **Manual Coordination** (specialized agents orchestrate pipelines automatically)
- **Dev-Ops Friction** (unified visibility from code changes through deployment)

Additional capabilities:
- 🤖 **Specialized agents** (Architect, Developer, Reviewer, Tester, Documenter, Marketer, Presenter, DevOps, Monitor, Security, ProjectManager, DecisionMaker - customizable & extensible)
- 🧠 **Multi-IA intelligent routing** (Claude, OpenAI, Gemini, Ollama selected per task)
- ☁️ **Self-hosted on Kubernetes** (no SaaS, complete control, Provisioning-based deployment)
### Key Highlights
- **🎨 Glassmorphism UI**: Vaporwave aesthetics with UnoCSS, Leptos reactive WASM
- **🤖 Specialized Agent Orchestration**: NATS JetStream coordination, parallel pipeline execution with approval gates
- **🧠 Multi-IA Router**: Hybrid routing (rules + dynamic + manual override) for optimal LLM per task
- **🚀 Full-Stack Rust**: Frontend (Leptos), Backend (Axum), Agents, Infrastructure
- **☁️ K8s Native**: Istio service mesh, Rook Ceph storage, Cedar policy engine RBAC
- **📚 RAG Integration**: Semantic search over Vapora docs, guides, code, and project materials
- **🔒 Multi-Tenant**: SurrealDB scope isolation, fine-grained Cedar policies, complete audit trails
---
## Features
### 📋 Project Management
- **Kanban Board**: Drag-and-drop tasks across Todo → Doing → Review → Done columns
- **Real-time Collaboration**: Optimistic updates for instant UI feedback
- **Smart Organization**: Feature tags, priority levels, task ordering
- **Responsive Design**: Works seamlessly from mobile to ultra-wide displays
### 🔄 Workflow Orchestrator (v1.2.0)
- **Multi-Stage Pipelines**: Orchestrate complex workflows with approval gates
- **Short-Lived Agent Contexts**: 95% cache token reduction (from $840/month to $110/month)
- **Artifact Passing**: ADR, Code, TestResults, Review, Documentation artifacts between stages
- **Kogral Integration**: Automatic context enrichment from `.kogral/` filesystem (guidelines, patterns, ADRs)
- **CLI Management**: 6 commands (start, list, status, approve, cancel, templates)
- **Event-Driven Coordination**: NATS pub/sub for workflow stage progression
- **4 Workflow Templates** (one sketched below):
  - `feature_development` (5 stages: design → implementation → testing → review → deployment)
  - `bugfix` (4 stages: investigation → fix → testing → deployment)
  - `documentation_update` (3 stages: creation → review → publish)
  - `security_audit` (4 stages: analysis → testing → remediation → verification)
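
As a rough illustration, a template is just an ordered list of stages, each owned by an agent role and optionally gated by human approval. The sketch below models the `feature_development` template; the struct layout, role names, and which stages carry approval gates are illustrative assumptions, not the actual `vapora-workflow-engine` API.

```rust
/// Illustrative sketch only: models a workflow template as described above.
/// Field names and which stages require approval are assumptions.
struct Stage {
    name: &'static str,
    agent_role: &'static str,
    /// Whether a human approval gate must pass before the next stage starts.
    requires_approval: bool,
}

struct WorkflowTemplate {
    name: &'static str,
    stages: Vec<Stage>,
}

/// The 5-stage feature_development template from the list above.
fn feature_development() -> WorkflowTemplate {
    WorkflowTemplate {
        name: "feature_development",
        stages: vec![
            Stage { name: "design", agent_role: "architect", requires_approval: true },
            Stage { name: "implementation", agent_role: "developer", requires_approval: false },
            Stage { name: "testing", agent_role: "tester", requires_approval: false },
            Stage { name: "review", agent_role: "code_reviewer", requires_approval: true },
            Stage { name: "deployment", agent_role: "devops", requires_approval: true },
        ],
    }
}
```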
### 🧠 Intelligent Learning & Cost Optimization (Phase 5.3 + 5.4)
- **Per-Task-Type Learning**: Agents build expertise profiles from execution history (see the scoring sketch below)
- **Recency Bias**: Recent performance weighted 3x (last 7 days) for adaptive selection
- **Budget Enforcement**: Hard caps per role (monthly/weekly) with automatic fallback
- **Cost-Efficient Routing**: Quality/cost ratio optimization for provider selection
- **Learning Curves**: Track agent improvement over time with confidence scoring
- **Prometheus Metrics**: Real-time budget utilization, fallback triggers, cost per provider
- **Gradual Production Rollout**: 4-week canary deployment with automated rollback
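
A minimal sketch of how such a learning-based score could be combined, assuming the 3x recency weighting over the last 7 days and a confidence ramp that saturates after about 20 executions; the exact weights and field names are illustrative, not the `vapora-agents` API:

```rust
/// Illustrative sketch only: expertise tracked per task type for one agent.
struct TaskTypeExpertise {
    /// Success rate over all recorded executions for this task type.
    success_rate: f64,
    /// Success rate over the last 7 days (weighted 3x in the blend below).
    recent_success_rate: f64,
    /// Number of executions observed for this task type.
    executions: u32,
}

impl TaskTypeExpertise {
    /// Confidence grows with sample size and saturates at 1.0 (~20 executions).
    fn confidence(&self) -> f64 {
        (f64::from(self.executions) / 20.0).min(1.0)
    }

    /// Blend long-term and recent performance, biased 3:1 toward recency.
    fn effective_success_rate(&self) -> f64 {
        (3.0 * self.recent_success_rate + self.success_rate) / 4.0
    }
}

/// Combined agent score: 30% load headroom, 50% expertise, 20% confidence
/// (assumed weights for illustration).
fn combined_score(base_load_score: f64, expertise: &TaskTypeExpertise) -> f64 {
    0.3 * base_load_score
        + 0.5 * expertise.effective_success_rate()
        + 0.2 * expertise.confidence()
}
```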
### 🤖 Specialized AI Agents (Customizable & Extensible)
Select, tune, or extend agents for your pipeline orchestration needs. Default agents include:
**Parallel Agent Coordination**:
- 🏗️ **Architect** (Claude Opus): System design & ADRs
- 💻 **Developer** (Claude Sonnet): Code implementation [scales 5-20]
- 👀 **CodeReviewer** (Claude Sonnet): Quality assurance & CR
- 🧪 **Tester** (Claude Sonnet): Tests & benchmarks
- 📚 **Documenter** (GPT-4): Docs & root files
- 📢 **Marketer** (Claude Sonnet): Marketing content
- 🎤 **Presenter** (Claude Sonnet): Presentations & slides
- ⚙️ **DevOps** (Claude Sonnet): CI/CD & deployment
- 👁️ **Monitor** (Gemini Flash): Health & alerting [real-time]
- 🔒 **Security** (Claude Opus): Audit & compliance [can block]
- 📊 **ProjectManager** (Claude Sonnet): Roadmap & tracking
- 🎯 **DecisionMaker** (Claude Opus): Conflict resolution
**Multi-IA Router**: Transparent LLM selection per task context
- ✅ Automatic selection based on task complexity/latency/cost
- ✅ Manual override capability with audit logging
- ✅ Fallback chains (e.g., Claude Opus → GPT-4 → Claude Sonnet)
- ✅ Cost tracking & budget alerts per agent role
- **Learning-based agent selection**: Agents improve from execution history
- **Budget enforcement**: Per-role cost limits with automatic fallback
- **Cost-efficient routing**: Quality/cost optimization per task type (see the routing sketch below)
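
The sketch below shows one way budget-aware fallback could work: filter the fallback chain by remaining per-role budget, then prefer the best quality/cost ratio among affordable providers. Provider names, prices, and struct shapes are placeholders, not the `vapora-llm-router` API or the values in `config/agent-budgets.toml`.

```rust
/// Illustrative sketch only; prices and provider names are placeholders.
struct Provider {
    name: &'static str,
    quality: f64,       // relative output quality for this task type (0..1)
    cost_per_call: f64, // estimated USD per request
}

struct RoleBudget {
    monthly_cap: f64,
    spent: f64,
}

impl RoleBudget {
    fn can_afford(&self, cost: f64) -> bool {
        self.spent + cost <= self.monthly_cap
    }
}

/// Among providers in the fallback chain that still fit the role's budget,
/// pick the one with the best quality/cost ratio.
fn select_provider<'a>(chain: &'a [Provider], budget: &RoleBudget) -> Option<&'a Provider> {
    chain
        .iter()
        .filter(|p| budget.can_afford(p.cost_per_call))
        .max_by(|a, b| {
            let ra = a.quality / a.cost_per_call;
            let rb = b.quality / b.cost_per_call;
            ra.partial_cmp(&rb).unwrap_or(std::cmp::Ordering::Equal)
        })
}

fn main() {
    // Fallback chain: Claude Opus → GPT-4 → Claude Sonnet (illustrative costs).
    let chain = [
        Provider { name: "claude-opus", quality: 0.95, cost_per_call: 0.60 },
        Provider { name: "gpt-4", quality: 0.90, cost_per_call: 0.40 },
        Provider { name: "claude-sonnet", quality: 0.85, cost_per_call: 0.12 },
    ];
    // A nearly exhausted budget forces a fallback to the cheaper provider.
    let budget = RoleBudget { monthly_cap: 50.0, spent: 49.7 };
    if let Some(p) = select_provider(&chain, &budget) {
        println!("routing to {}", p.name);
    }
}
```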
### 🏗️ Infrastructure
- **Kubernetes Native**: Built for K8s (K3s, RKE2, or vanilla)
- **Istio Service Mesh**: mTLS, circuit breakers, rate limiting out of the box
- **Secrets Management**: RustyVault or Cosmian KMS (Rust-native)
- **Storage**: Rook Ceph for distributed persistent volumes
- **CI/CD**: Tekton Pipelines with Gitea integration
- **Registry**: Zot for lightweight OCI artifact storage
- **Observability**: Prometheus + Grafana + Loki + Tempo + Kiali
---
## 🛠️ Tech Stack
### Backend
<pre>
Rust            Async runtime with Tokio
Axum            Fast, ergonomic web framework
SurrealDB       Multi-model database with scopes
NATS JetStream  Message queue for agent jobs
Rig             AI agent framework
fastembed       Local embeddings for RAG
</pre>
### Frontend
<pre>
Leptos      Reactive Rust WASM framework
UnoCSS      Instant on-demand atomic CSS
leptos-use  Hooks and utilities
thaw        Component library
</pre>
### Infrastructure
<pre>
Kubernetes  Orchestration (K3s/RKE2)
Istio       Service mesh + ingress
Rook Ceph   Distributed storage
Tekton      Cloud-native CI/CD
RustyVault  Secrets management
Zot         OCI registry
Gitea       Self-hosted Git
</pre>
### Agents & LLM
<pre>
Rig                LLM agent framework with tool calling
fastembed          Local embeddings for semantic search (RAG)
NATS JetStream     Message queue for async agent coordination
Cedar              Policy engine for fine-grained RBAC
MCP Gateway        Model Context Protocol plugin system
Learning Profiles  Per-task-type expertise with recency bias
Budget Manager     Per-role cost limits with automatic fallback
Knowledge Graph    Temporal execution history with learning curves
Claude API         Anthropic Claude (Opus, Sonnet, Haiku)
OpenAI API         GPT-4, GPT-4o, GPT-3.5-turbo
Gemini API         Google Gemini (2.0 Pro, Flash, 1.5 Pro)
Ollama             Local LLMs (Llama 2, Mistral, etc.)
</pre>
---
## Quick Start
### Prerequisites
- **Rust 1.75+** ([rustup](https://rustup.rs))
- **Node.js 20+** (for UnoCSS)
- **Docker** (for local development)
- **Kubernetes cluster** (for production)
### Local Development
```bash
# Clone the repository
git clone https://github.com/vapora-platform/vapora
cd vapora
# Install dependencies
cargo build
# Available Just recipes (50+ commands)
just help # Show all available recipes
just help build # Show build recipes
just help test # Show test recipes
just help ci # Show CI recipes
# Setup SurrealDB (Docker)
docker run -d --name surrealdb \
-p 8000:8000 \
surrealdb/surrealdb:latest \
start --bind 0.0.0.0:8000 file://data/database.db
# Run migrations
cd migrations && surrealdb import --conn http://localhost:8000 *.surql
# Start backend
cd crates/vapora-backend
cargo run
# Start frontend (new terminal)
cd crates/vapora-frontend
trunk serve
# Install CLI (optional - for workflow management)
cd crates/vapora-cli
cargo build --release
cp target/release/vapora ~/.local/bin/
# CLI Usage
vapora workflow start --template feature_development
vapora workflow list
vapora workflow status <id>
vapora workflow approve <id> --approver "Your Name"
# Visit http://localhost:3000 🎉

# Docker Compose (Full Stack)
# Includes: Backend, Frontend, SurrealDB, NATS, MCP Gateway
docker compose up -d
```
### Production Deployment (Kubernetes)
```bash
# Build and push Docker images
nu scripts/build-docker.nu --registry docker.io --tag v1.2.0 --push
# Update secrets
edit kubernetes/03-secrets.yaml # Add your API keys
# Deploy to Kubernetes
nu scripts/deploy-k8s.nu
# Or use Provisioning (advanced)
cd provisioning/vapora-wrksp
provisioning workflow run workflows/deploy-full-stack.yaml
```
**See full deployment guide**: [`Deployment Guide`](docs/setup/deployment.md)
---
## Architecture
```
┌─────────────────────────────────────────────────────┐
│ Frontend (Leptos + UnoCSS) │
│ Glassmorphism UI • Kanban Board • Drag & Drop │
└────────────────────┬────────────────────────────────┘
┌─────────────────────────────────────────────────────┐
│ Istio Ingress Gateway │
│ mTLS • Rate Limiting • Circuit Breaker • Telemetry │
└────────────────────┬────────────────────────────────┘
┌────────────┼────────────┐
▼ ▼ ▼
┌────────┐ ┌──────────┐ ┌───────────────┐
│ Axum │ │ Agent │ │ MCP Gateway │
│ API │ │ Runtime │ │ │
└───┬────┘ └────┬─────┘ └───────┬───────┘
│ │ │
│ │ ▼
│ │ ┌──────────────┐
│ │ │ MCP Plugins │
│ │ │ - Code │
│ │ │ - RAG │
│ │ │ - GitHub │
│ │ │ - Jira │
│ │ └──────────────┘
│ │ │
▼ ▼ ▼
┌─────────────────────────────────────┐
│ SurrealDB Cluster │
│ (Rook Ceph Persistent Vol) │
└─────────────────────────────────────┘
┌─────────────────────────────────────┐
│ RustyVault / Cosmian KMS │
│ (Secrets + Key Management) │
└─────────────────────────────────────┘
```
### Data Flow
1. User interacts with Leptos UI (Kanban board)
2. API calls go through Istio Ingress with mTLS
3. Axum backend handles CRUD operations
4. SurrealDB stores projects, tasks, agents (multi-tenant scopes)
5. Agent jobs queued in NATS JetStream (see the publishing sketch below)
6. Agent Runtime invokes MCP Gateway
7. MCP Gateway routes to OpenAI/Claude with plugin tools
8. Results streamed back to UI with optimistic updates
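
For step 5, a hedged sketch of what queueing an agent job could look like with the `async_nats` JetStream client; the stream name, subject, and payload shape are assumptions, not Vapora's actual wire format:

```rust
use async_nats::jetstream;

#[tokio::main]
async fn main() -> Result<(), async_nats::Error> {
    // Connect to the NATS server used by the agent runtime.
    let client = async_nats::connect("nats://nats:4222").await?;
    let js = jetstream::new(client);

    // Ensure a stream exists for agent jobs (name and subjects are illustrative).
    js.get_or_create_stream(jetstream::stream::Config {
        name: "AGENT_JOBS".to_string(),
        subjects: vec!["agents.jobs.>".to_string()],
        ..Default::default()
    })
    .await?;

    // Publish a job for a developer agent and wait for the JetStream ack.
    let job = serde_json::json!({
        "task_id": "task-123",
        "task_type": "implementation",
        "agent_role": "developer",
    });
    js.publish("agents.jobs.developer", serde_json::to_vec(&job)?.into())
        .await?
        .await?;

    Ok(())
}
```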
---
## 📸 Screenshots

### Kanban Board with Glassmorphism
```
┌─────────────────────────────────────────────────────────┐
│ Vapora 🔍 Search Projects Agents + New │
├─────────────────────────────────────────────────────────┤
│ │
│ 📌 🔵 🟣 ✅ │
│ ┌─────────┬─────────┬─────────┬─────────┐ │
│ │ TODO │ DOING │ REVIEW │ DONE │ │
│ │ (5) │ (3) │ (2) │ (12) │ │
│ ├─────────┼─────────┼─────────┼─────────┤ │
│ │ ┌─────┐ │ ┌─────┐ │ ┌─────┐ │ ┌─────┐ │ │
│ │ │Task │ │ │Task │ │ │Task │ │ │Task │ │ │
│ │ │Card │ │ │Card │ │ │Card │ │ │Card │ │ │
│ │ └─────┘ │ └─────┘ │ └─────┘ │ └─────┘ │ │
│ │ │ │ │ │ │
│ │ Glass │ Neon │ Vapor │ Fluid │ │
│ │ Blur │ Glow │ Waves │ Motion │ │
│ └─────────┴─────────┴─────────┴─────────┘ │
└─────────────────────────────────────────────────────────┘
```
### Agent Plugin Marketplace
```
┌─────────────────────────────────────────────────────────┐
│ MCP Plugins 🔌 Install New │
├─────────────────────────────────────────────────────────┤
│ ┌──────────────────────────────────────────┐ │
│ │ 🔍 Code Analysis Plugin │ │
│ │ Analyze code with tree-sitter + LLM │ │
│ │ • Rust, TypeScript, Python │ │
│ │ • Complexity metrics │ │
│ │ [✓ Installed] │ │
│ └──────────────────────────────────────────┘ │
│ │
│ ┌──────────────────────────────────────────┐ │
│ │ 📚 RAG Search Plugin │ │
│ │ Semantic search over Vapora docs & code │ │
│ │ • Local embeddings (fastembed) │ │
│ │ • Indexes: guides/, FEATURES, .coder/ │ │
│ │ [✓ Installed] │ │
│ └──────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────┘
```
---
## 📦 Project Structure
```text
vapora/
├── crates/
│ ├── vapora-shared/ # Core models, errors, types
│ ├── vapora-backend/ # Axum REST API (40+ endpoints, 161 tests)
│ ├── vapora-agents/ # Agent orchestration + learning profiles (71 tests)
│ ├── vapora-llm-router/ # Multi-provider routing + budget (53 tests)
│ ├── vapora-swarm/ # Swarm coordination + Prometheus (6 tests)
│ ├── vapora-knowledge-graph/ # Temporal KG + learning curves (20 tests)
│ ├── vapora-workflow-engine/ # Multi-stage workflows + Kogral integration (26 tests)
│ ├── vapora-a2a/ # Agent-to-Agent protocol server (7 integration tests)
│ ├── vapora-a2a-client/ # A2A client library (5 tests)
│ ├── vapora-cli/ # CLI commands (start, list, approve, cancel, etc.)
│ ├── vapora-frontend/ # Leptos WASM UI (Kanban)
│ ├── vapora-leptos-ui/ # Leptos component library (16 components, 4 tests)
│ ├── vapora-mcp-server/ # MCP protocol gateway (1 test)
│ ├── vapora-tracking/ # Task/project storage layer (1 test)
│ ├── vapora-telemetry/ # OpenTelemetry integration (16 tests)
│ ├── vapora-analytics/ # Event pipeline + usage stats (5 tests)
│ ├── vapora-worktree/ # Git worktree management (4 tests)
│ └── vapora-doc-lifecycle/ # Documentation management (15 tests)
├── assets/
│ ├── web/ # Landing page (optimized + minified)
│ │ ├── src/index.html # Source (readable, 26KB)
│ │ ├── index.html # Production (minified, 18KB)
│ │ ├── minify.sh # Auto-minification script
│ │ └── README.md # Web assets guide
│ └── vapora.svg # Logo
├── kubernetes/ # K8s manifests (base, overlays, platform)
├── migrations/ # SurrealDB migrations
├── config/ # Configuration files (TOML)
│ ├── vapora.toml # Backend configuration
│ ├── agents.toml # Agent roles and limits
│ ├── workflows.toml # Workflow templates
│ └── agent-budgets.toml # Budget enforcement config
└── docs/ # Product documentation
├── adrs/ # Architecture Decision Records
├── features/ # Feature documentation
└── setup/ # Installation and CLI guides
# Total: 17 crates, 316 tests (100% pass rate)
```
---
## 🔌 MCP Plugin Development
Create custom agents with the MCP plugin system:
```rust
use anyhow::{anyhow, Result};
use async_trait::async_trait;
use serde_json::json;
use vapora_mcp::{Plugin, Tool, ToolExecutor};

pub struct MyCustomPlugin;

impl Plugin for MyCustomPlugin {
    fn name(&self) -> &str {
        "my-custom-plugin"
    }

    fn tools(&self) -> Vec<Tool> {
        vec![Tool::new("my_tool")
            .description("Does something cool")
            .parameter("input", "string")]
    }
}

#[async_trait]
impl ToolExecutor for MyCustomPlugin {
    async fn execute_tool(
        &self,
        tool: &str,
        params: serde_json::Value,
    ) -> Result<serde_json::Value> {
        match tool {
            "my_tool" => {
                // Your tool logic here; `params` carries the JSON arguments.
                Ok(json!({"result": "success"}))
            }
            _ => Err(anyhow!("Unknown tool: {tool}")),
        }
    }
}
```
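
A hypothetical call-site for the plugin above; in practice the MCP Gateway performs this dispatch after parsing the MCP request:

```rust
// Hypothetical usage sketch; the gateway normally drives execute_tool.
async fn demo() -> anyhow::Result<()> {
    let plugin = MyCustomPlugin;
    let out = plugin
        .execute_tool("my_tool", serde_json::json!({ "input": "hello" }))
        .await?;
    assert_eq!(out["result"], "success");
    Ok(())
}
```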
See [docs/mcp-plugins.md](docs/mcp-plugins.md) for the full guide.
---
## 🚢 Deployment with Provisioning
Using Provisioning (not Helm) for declarative K8s infrastructure:
```bash
# Navigate to Provisioning workspace
cd provisioning/vapora-wrksp
# Deploy full VAPORA stack (KCL + Taskservs + Workflows)
provisioning workflow run workflows/deploy-full-stack.yaml
# This creates:
# - K8s cluster (K3s/RKE2) with Cilium CNI
# - Istio service mesh with mTLS
# - Rook Ceph storage
# - SurrealDB (3 replicas, 50Gi storage)
# - Redis (3 replicas, 20Gi storage)
# - NATS JetStream (3 replicas, 30Gi storage)
# - Axum backend (3-10 replicas)
# - Leptos frontend (2 replicas)
# - Agent runtime (3-50 replicas, HPA enabled)
# - LLM Router (2-5 replicas)
# - MCP Gateway (2-5 replicas)
# Access UI
open https://vapora.example.com
# Scale developer agents
provisioning taskserv scale vapora-agents --agent developer --replicas 15
# Monitor health
provisioning health-check --services all
# Environment variables

# Backend
SURREAL_URL=http://surrealdb:8000
NATS_URL=nats://nats:4222
MCP_GATEWAY_URL=http://mcp-gateway:8080
RUSTYVAULT_ADDR=https://rustyvault:8200

# Frontend
VITE_API_URL=https://api.vapora.dev

# Secrets (stored in RustyVault)
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
```
---
## 🧪 Testing

```bash
# Backend tests
cd crates/vapora-backend
cargo test

# Frontend tests
cd crates/vapora-frontend
cargo test --target wasm32-unknown-unknown
# Integration tests
cargo test --workspace
# E2E tests (requires running services)
./scripts/run-e2e-tests.sh
```
---
## 📊 Monitoring

Access Grafana dashboards with `kubectl port-forward -n observability svc/grafana 3000:80`.
Pre-configured dashboards:
- Vapora Overview: Request rates, latencies, errors
- Agent Metrics: Job queue depth, execution times, token usage
- Istio Service Mesh: Traffic flows, mTLS status
- Ceph Storage: Capacity, IOPS, health
---
## 📚 Documentation & Specifications
VAPORA is built on comprehensive specifications organized by domain:
**Core Specifications** (guides/core/):
- **VAPORA-ARCHITECTURE-V2.md**: 4-layer architecture (Frontend, API, Service, Data, Infrastructure)
- **AGENT-REGISTRY-COORDINATION.md**: 12 agent roles, NATS message patterns, health checking
- **MULTI-IA-ROUTER.md**: Hybrid routing system (rules + dynamic + override), cost tracking
- **MULTI-AGENT-WORKFLOWS.md**: End-to-end workflows with parallel execution & approval gates
- **ROLES-PERMISSIONS-PROFILES.md**: Cedar policy engine, team profiles, fine-grained RBAC
- **TASK-AGENT-DOC-MANAGER-V2.md**: Multi-agent task orchestration & decision extraction
**Integration Specifications** (guides/integration/):
- **DOC-LIFECYCLE-INTEGRATION.md**: Dual-mode (plugin + standalone) documentation management
- **PROVISIONING-INTEGRATION.md**: KCL schemas, taskservs, workflows for K8s deployment
- **RAG-INTEGRATION.md**: Semantic search with hybrid retrieval & embedding strategy
**Component Specifications** (guides/):
- **SESSION-LIFECYCLE-MANAGER.md**: .coder/ organization, metadata extraction, decision capture
- **ROOT-FILES-KEEPER.md**: Keep README, CHANGELOG, ROADMAP always updated
- **MIGRATION-PLAN.md**: Gradual migration path for existing projects
**Deployment Guide** (provisioning/vapora-wrksp/):
- workspace.toml: Master configuration (agents, LLM, monitoring, backup)
- taskservs/: Service definitions for backend, frontend, agents, LLM router, MCP gateway
- kcl/: Infrastructure as Code schemas (cluster, services, storage, agents, multi-ia)
- workflows/: Batch operations (deploy, scale, upgrade, disaster-recovery)
- README.md: Quick start and operation guide
**References**:
- guides/archive/v1-single-agent/: Legacy v1.0 specifications for historical reference
---
## Contributing
We welcome contributions! Please see [CONTRIBUTING.md](CONTRIBUTING.md).

### Development Workflow
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing`)
5. Open a Pull Request
### Code Style
- Rust: Follow `rustfmt` and `clippy` recommendations
- Frontend: Use UnoCSS shortcuts, avoid inline styles
- Commits: Conventional commits (`feat:`, `fix:`, `docs:`, etc.)
---
## 📄 License
Dual-licensed under:
- MIT License (LICENSE-MIT)
- Apache License 2.0 (LICENSE-APACHE)
You may choose either license.
---
## 🙏 Acknowledgments
Inspired by:
- [Archon](https://github.com/transcend-engineering/archon) - Original glassmorphism UI concept
- [Docker MCP Gateway](https://github.com/docker/mcp-gateway) - MCP integration pattern
- [Rook](https://rook.io) - Ceph storage operator
- [Istio](https://istio.io) - Service mesh architecture
Built with ❤️ using Rust 🦀
---
## 🔗 Links
- Website: https://vapora.dev
- Documentation: https://docs.vapora.dev
- GitHub: https://github.com/vapora-platform/vapora
- Discord: https://discord.gg/vapora
- Twitter: https://twitter.com/VaporaPlatform
---
Made with vaporwave dreams and Rust reality ✨
---
## Recommended Additional Files

### CONTRIBUTING.md
# Contributing to Vapora
Thank you for your interest in contributing to Vapora!
## Code of Conduct
Be respectful, inclusive, and constructive.
## Development Setup
See [README.md](README.md#-quick-start) for setup instructions.
## Pull Request Process
1. Update documentation for any API changes
2. Add tests for new features
3. Ensure `cargo clippy` and `cargo fmt` pass
4. Update CHANGELOG.md
## Coding Standards
- **Rust**: Idiomatic Rust, avoid `unwrap()`, use `?` operator
- **Error Handling**: Use `anyhow::Result` for services, custom errors for API
- **Documentation**: All public items must have doc comments
- **Tests**: Unit tests for services, integration tests for API
## Commit Messages
Follow [Conventional Commits](https://www.conventionalcommits.org/):
- `feat: add code analysis plugin`
- `fix: resolve race condition in task reordering`
- `docs: update MCP plugin guide`
- `chore: bump dependencies`
### LICENSE-MIT and LICENSE-APACHE
---
⚠️ **Brand Assets Location**: See [`./assets/`](./assets/) for complete brand system including logo variations, color palettes, and interactive preview → [Open Logo Preview](./assets/vapora_logo.html)
---