Implement intelligent agent learning from Knowledge Graph execution history with per-task-type expertise tracking, recency bias, and learning curves.

## Phase 5.3 Implementation

### Learning Infrastructure (✅ Complete)
- LearningProfileService with per-task-type expertise metrics
- TaskTypeExpertise model tracking success_rate, confidence, and learning curves
- Recency bias weighting: recent 7 days weighted 3x higher (exponential decay)
- Confidence scoring prevents overfitting: min(1.0, executions / 20)
- Learning curves computed from daily execution windows

### Agent Scoring Service (✅ Complete)
- Unified AgentScore combining SwarmCoordinator + learning profiles
- Scoring formula: 0.3*base + 0.5*expertise + 0.2*confidence
- Rank agents by combined score for intelligent assignment
- Support for recency-biased scoring (recent_success_rate)
- Methods: rank_agents, select_best, rank_agents_with_recency

### KG Integration (✅ Complete)
- KGPersistence::get_executions_for_task_type() - query by agent + task type
- KGPersistence::get_agent_executions() - all executions for an agent
- Coordinator::load_learning_profile_from_kg() - core KG→Learning integration
- Coordinator::load_all_learning_profiles() - batch load for multiple agents
- Convert PersistedExecution → ExecutionData for learning calculations

### Agent Assignment Integration (✅ Complete)
- AgentCoordinator uses learning profiles for task assignment
- extract_task_type() infers task type from title/description
- assign_task() scores candidates using AgentScoringService
- Fallback to load-based selection if no learning data is available
- Learning profiles stored in the coordinator.learning_profiles RwLock

### Profile Adapter Enhancements (✅ Complete)
- create_learning_profile() - initialize empty profiles
- add_task_type_expertise() - set task-type expertise
- update_profile_with_learning() - update swarm profiles from learning

## Files Modified

### vapora-knowledge-graph/src/persistence.rs (+30 lines)
- get_executions_for_task_type(agent_id, task_type, limit)
- get_agent_executions(agent_id, limit)

### vapora-agents/src/coordinator.rs (+100 lines)
- load_learning_profile_from_kg() - core KG integration method
- load_all_learning_profiles() - batch loading for agents
- assign_task() already uses learning-based scoring via AgentScoringService

### Existing Complete Implementation
- vapora-knowledge-graph/src/learning.rs - calculation functions
- vapora-agents/src/learning_profile.rs - data structures and expertise
- vapora-agents/src/scoring.rs - unified scoring service
- vapora-agents/src/profile_adapter.rs - adapter methods

## Tests Passing
- learning_profile: 7 tests ✅
- scoring: 5 tests ✅
- profile_adapter: 6 tests ✅
- coordinator: learning-specific tests ✅

## Data Flow
1. Task arrives → AgentCoordinator::assign_task()
2. Extract task_type from the description
3. Query the KG for task-type executions (load_learning_profile_from_kg)
4. Calculate expertise with recency bias
5. Score candidates (SwarmCoordinator + learning)
6. Assign to the top-scored agent
7. Execution result → KG → update learning profiles

## Key Design Decisions
- ✅ Recency bias: 7-day half-life with 3x weight for recent performance
- ✅ Confidence scoring: min(1.0, total_executions / 20) prevents overfitting
- ✅ Hierarchical scoring: 30% base load, 50% expertise, 20% confidence
- ✅ KG query limit: 100 recent executions per task type for performance
- ✅ Async loading: load_learning_profile_from_kg supports concurrent loads

## Next: Phase 5.4 - Cost Optimization
Ready to implement budget enforcement and cost-aware provider selection.
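The scoring and confidence rules above are simple enough to sketch directly. Below is a minimal, self-contained Rust sketch of the documented formulas; function names such as `confidence`, `combined_score`, and `recency_weighted_success_rate` are illustrative only, and the real implementations in vapora-agents/src/scoring.rs and vapora-knowledge-graph/src/learning.rs will differ in structure.

```rust
// Minimal sketch of the documented Phase 5.3 formulas (illustrative names,
// not the actual AgentScoringService / LearningProfileService code).

/// Confidence grows with sample size and saturates at 1.0 after 20 executions.
fn confidence(total_executions: u32) -> f64 {
    (total_executions as f64 / 20.0).min(1.0)
}

/// Combined score: 30% base load score, 50% task-type expertise, 20% confidence.
fn combined_score(base: f64, expertise: f64, confidence: f64) -> f64 {
    0.3 * base + 0.5 * expertise + 0.2 * confidence
}

/// Recency-biased success rate: executions from the last 7 days weigh 3x as
/// much as older ones (a step-function simplification of the exponential decay).
/// Each tuple is (succeeded, age_in_days).
fn recency_weighted_success_rate(executions: &[(bool, u64)]) -> f64 {
    let mut weighted_success = 0.0;
    let mut weight_sum = 0.0;
    for &(succeeded, age_in_days) in executions {
        let weight = if age_in_days <= 7 { 3.0 } else { 1.0 };
        weight_sum += weight;
        if succeeded {
            weighted_success += weight;
        }
    }
    if weight_sum == 0.0 {
        0.0
    } else {
        weighted_success / weight_sum
    }
}

fn main() {
    // An agent with 12 recorded executions, 0.85 expertise, and a 0.6 base load score.
    let score = combined_score(0.6, 0.85, confidence(12));
    println!("combined score: {score:.3}"); // 0.3*0.6 + 0.5*0.85 + 0.2*0.6 = 0.725

    let recent = recency_weighted_success_rate(&[(true, 1), (true, 3), (false, 30)]);
    println!("recency-biased success rate: {recent:.3}");
}
```

Note how the confidence term damps a strong expertise score until an agent has enough history (12 executions → 0.6 confidence), which is the overfitting guard described above.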
| title | date | status | version |
|---|---|---|---|
| Vapora Project - Quick Start Guide | 2025-11-10 | READY | 1.0 |
🚀 Vapora - Quick Start Guide
⏱️ Time to get running: 15-20 minutes
This guide walks you through building and running the complete Vapora project in the simplest way possible.
📋 Prerequisites
You need:
- ✅ Rust 1.75+ (install from https://rustup.rs)
- ✅ Cargo (comes with Rust)
- ✅ Git
- ✅ NuShell 0.95+ (for scripts)
- ✅ 2GB free disk space
- ✅ Bash or Zsh shell
Check if you have everything:
rustc --version # Should show Rust 1.75+
cargo --version # Should show Cargo 1.75+
which git # Should show /usr/bin/git or similar
nu --version # Should show NuShell 0.95+
Install NuShell if needed:
# Using Homebrew (macOS)
brew install nu
# Or download from: https://www.nushell.sh/
🎯 15-Minute Quick Start
Step 1: Navigate to Vapora
# From the Vapora repository root, verify the structure
ls crates/
# Should show: vapora-backend, vapora-frontend, vapora-shared, vapora-agents, vapora-llm-router, vapora-mcp-server, vapora-tracking
Step 2: Install Dependencies
# Update Rust (optional but recommended)
rustup update stable
# Install workspace dependencies
cargo fetch
Step 3: Build All Crates
# Build the complete workspace
cargo build
# This builds all 7 crates:
# - vapora-shared (shared utilities)
# - vapora-agents (agent framework)
# - vapora-llm-router (LLM routing)
# - vapora-tracking (change tracking system)
# - vapora-backend (REST API)
# - vapora-frontend (WASM UI)
# - vapora-mcp-server (MCP protocol support)
Build time: 2-5 minutes (first time)
Expected output:
Finished `dev` profile [unoptimized + debuginfo] target(s) in XXXs
Step 4: Run Tests
# Run all tests in the workspace
cargo test --lib
# Run tests for specific crate
cargo test -p vapora-backend --lib
cargo test -p vapora-tracking --lib
# Expected output:
# test result: ok. XXX passed; 0 failed
Step 5: Start the Backend Service
# Run the backend server (development mode)
cargo run -p vapora-backend
# Expected output:
# 🚀 Vapora Backend Server running on http://127.0.0.1:3000
# Available endpoints:
# GET /api/v1/health
# GET /api/v1/tracking/summary
# POST /api/v1/agents/orchestrate
# GET /api/v1/projects
The server will be available at: http://localhost:3000
Step 6: (In Another Terminal) Start Frontend Development
cd crates/vapora-frontend
# Install the Trunk build tool (one-time setup)
cargo install trunk
# Run frontend with hot-reload
trunk serve
# Trunk serves the app at:
# http://127.0.0.1:8080
The UI will be available at: http://localhost:8080
Step 7: Verify Everything Works
# Check health of backend
curl http://localhost:3000/api/v1/health
# Expected response:
# {
# "status": "ok",
# "service": "vapora-backend",
# "timestamp": "2025-11-10T14:30:00Z"
# }
# Check tracking system
curl http://localhost:3000/api/v1/tracking/summary
# Expected response:
# {
# "total_entries": 0,
# "changes": 0,
# "todos": 0
# }
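If you prefer to run the same checks from Rust instead of curl, here is a minimal sketch using reqwest (with the "json" feature), tokio (with macros and a runtime), and serde with derive. The struct fields simply mirror the example responses above and are assumptions about the actual API.

```rust
use serde::Deserialize;

// Field names mirror the example responses above; adjust if the API differs.
#[derive(Debug, Deserialize)]
struct Health {
    status: String,
    service: String,
    timestamp: String,
}

#[derive(Debug, Deserialize)]
struct TrackingSummary {
    total_entries: u64,
    changes: u64,
    todos: u64,
}

#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    let health: Health = reqwest::get("http://localhost:3000/api/v1/health")
        .await?
        .json()
        .await?;
    println!("{} is {} at {}", health.service, health.status, health.timestamp);

    let summary: TrackingSummary = reqwest::get("http://localhost:3000/api/v1/tracking/summary")
        .await?
        .json()
        .await?;
    println!(
        "tracking: {} entries ({} changes, {} todos)",
        summary.total_entries, summary.changes, summary.todos
    );
    Ok(())
}
```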
🏗️ Project Structure Overview
├── Cargo.toml (workspace config)
├── crates/
│ ├── vapora-shared/ ← Shared utilities & types
│ ├── vapora-agents/ ← Agent orchestration framework
│ ├── vapora-llm-router/ ← Multi-LLM routing (Claude, OpenAI, Gemini, Ollama)
│ ├── vapora-tracking/ ← Change & TODO tracking system (NEW)
│ ├── vapora-backend/ ← REST API (Axum)
│ ├── vapora-frontend/ ← Web UI (Leptos + WASM)
│ └── vapora-mcp-server/ ← MCP protocol server
├── scripts/
│ ├── sync-tracking.nu ← Sync tracking data
│ ├── export-tracking.nu ← Export reports
│ └── start-tracking-service.nu ← Start tracking service
└── docs/
└── (API docs, architecture, etc.)
📊 Available Commands
Build Commands
# Build specific crate
cargo build -p vapora-backend
cargo build -p vapora-tracking
# Build for production (optimized)
cargo build --release
# Check without building
cargo check
# Clean build artifacts
cargo clean
Test Commands
# Run all tests
cargo test --lib
# Run tests for specific crate
cargo test -p vapora-tracking --lib
# Run tests with output
cargo test -- --nocapture
# Run specific test
cargo test -p vapora-backend test_health_endpoint -- --exact
Development Commands
# Run backend server
cargo run -p vapora-backend
# Run with verbose logging (RUST_LOG is read by the logging setup; see the sketch below)
RUST_LOG=debug cargo run -p vapora-backend
# Format code
cargo fmt
# Lint code
cargo clippy -- -W clippy::all
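The RUST_LOG filter works because the binary initializes a tracing subscriber from the environment. The exact setup in vapora-backend is not shown in this guide; the following is a hypothetical sketch of the typical pattern with tracing and tracing-subscriber (env-filter feature), not the project's actual code.

```rust
// Hypothetical sketch: honoring RUST_LOG via tracing-subscriber.
use tracing_subscriber::EnvFilter;

fn main() {
    tracing_subscriber::fmt()
        .with_env_filter(
            // Use RUST_LOG when set, otherwise default to "info".
            EnvFilter::try_from_default_env().unwrap_or_else(|_| EnvFilter::new("info")),
        )
        .init();

    tracing::info!("always visible at the default level");
    tracing::debug!("visible only when RUST_LOG=debug (or finer)");
}
```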
Documentation
# Generate and open documentation
cargo doc -p vapora-backend --open
# Generate for specific crate
cargo doc -p vapora-tracking --open
🎯 What You Can Do Now
After the quick start, you have:
✅ Backend API running at http://localhost:3000
- Health checks
- Tracking system endpoints
- Agent orchestration API
✅ Frontend UI running at http://localhost:8080
- Real-time project dashboard
- Agent status monitoring
- Change tracking interface
✅ Tracking System
- Log changes: /log-change "description"
- Create TODOs: /add-todo "task"
- Check status: /track-status
- Export reports: ./scripts/export-tracking.nu
✅ Agent Framework
- Orchestrate AI agents
- Multi-LLM routing
- Parallel pipeline execution
🔗 Integration Points
Using the Tracking System
The tracking system integrates with the backend:
# Log a change
/log-change "Implemented user authentication" \
--impact backend \
--files 5
# Create a TODO
/add-todo "Review code changes" \
--priority H \
--estimate M
# Check tracking status
/track-status --limit 10
# Export to report
./scripts/export-tracking.nu json --output report.json
Using the Agent Framework
# Orchestrate agents for a task
curl -X POST http://localhost:3000/api/v1/agents/orchestrate \
-H "Content-Type: application/json" \
-d '{
"task": "Code review",
"agents": ["developer", "reviewer"],
"context": "Review the authentication module"
}'
Using the LLM Router
# Query the LLM router for optimal model selection
curl http://localhost:3000/api/v1/llm-router/select \
-H "Content-Type: application/json" \
-d '{
"task_type": "code_implementation",
"complexity": "high"
}'
🐛 Troubleshooting
Build Fails
# Update Rust
rustup update stable
# Clean and rebuild
cargo clean
cargo build
# Check specific error
cargo build --verbose
Tests Fail
# Run with output
cargo test --lib -- --nocapture --test-threads=1
# Check Rust version
rustc --version # Should be 1.75+
Backend Won't Start
# Check if port 3000 is in use
lsof -i :3000
# Use different port
VAPORA_PORT=3001 cargo run -p vapora-backend
# Check logs
RUST_LOG=debug cargo run -p vapora-backend
Frontend Build Issues
# Update trunk
cargo install --locked trunk
# Clear build cache
rm -rf crates/vapora-frontend/target
# Rebuild
cd crates/vapora-frontend && trunk build
📚 Next Steps
Short Term (This Session)
- ✅ Build and run the complete project
- ✅ Visit frontend at http://localhost:8080
- ✅ Test API endpoints
- ✅ Create first tracking entry
Medium Term (This Week)
- Read SETUP.md - complete setup with configuration
- Explore crate documentation: cargo doc --open
- Set up development environment
- Configure tracking system
Long Term (Ongoing)
- Contribute to the project
- Deploy to production (see INTEGRATION.md)
- Customize agents and LLM routing
- Integrate with external services
📖 Learning Resources
| Resource | Location | Time |
|---|---|---|
| Project README | README.md | 10 min |
| Complete Setup | SETUP.md | 20 min |
| Tracking System | QUICKSTART_TRACKING.md | 10 min |
| Architecture | .coder/ | 30 min |
| Source Code | crates/ | varies |
| API Docs | cargo doc --open | varies |
🎬 Quick Reference
# One-command build and test
cargo build && cargo test --lib
# Run backend in one terminal
cargo run -p vapora-backend
# Run frontend in another terminal
cd crates/vapora-frontend && trunk serve
# Check everything is working
curl http://localhost:3000/api/v1/health
# View logs
RUST_LOG=debug cargo run -p vapora-backend
# Format and lint all code
cargo fmt && cargo clippy --all -- -W clippy::all
🆘 Getting Help
Issues during quick start?
- Check SETUP.md - troubleshooting section
- Read crate-specific docs in crates/*/README.md
- Check inline code documentation: cargo doc --open
- Review .coder/ documentation
✅ Success Checklist
- Rust 1.75+ installed
- Git repository available
- cargo build succeeds
- cargo test --lib shows all tests passing
- Backend runs at http://localhost:3000
- Frontend runs at http://localhost:8080
- Health endpoint responds
- Can create tracking entries
All checked? ✅ You're ready to develop with Vapora!
For complete setup with configuration options: See SETUP.md
For tracking system specific guide: See QUICKSTART_TRACKING.md