Implement intelligent agent learning from Knowledge Graph execution history with per-task-type expertise tracking, recency bias, and learning curves.

## Phase 5.3 Implementation

### Learning Infrastructure (✅ Complete)

- LearningProfileService with per-task-type expertise metrics
- TaskTypeExpertise model tracking success_rate, confidence, learning curves
- Recency bias weighting: recent 7 days weighted 3x higher (exponential decay)
- Confidence scoring prevents overfitting: min(1.0, executions / 20)
- Learning curves computed from daily execution windows

### Agent Scoring Service (✅ Complete)

- Unified AgentScore combining SwarmCoordinator + learning profiles
- Scoring formula: 0.3*base + 0.5*expertise + 0.2*confidence (sketched below)
- Rank agents by combined score for intelligent assignment
- Support for recency-biased scoring (recent_success_rate)
- Methods: rank_agents, select_best, rank_agents_with_recency
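The following sketch shows how these pieces combine. It is illustrative only, not the actual `vapora-agents` API: `ExecutionData`, the field names, and the exact decay form (a single 3x-weighted curve with a 7-day half-life) are assumptions drawn from the bullets above.

```rust
// Illustrative sketch of the scoring described above; NOT the real
// vapora-agents API. The decay form is one plausible reading of
// "7-day half-life with 3x weight for recent performance".

struct ExecutionData {
    success: bool,
    age_days: f64, // days since the execution completed
}

/// Recency-biased success rate: each execution's weight starts at 3.0
/// and halves every 7 days, so last week's results dominate.
fn recency_weighted_success_rate(execs: &[ExecutionData]) -> f64 {
    let mut weighted_ok = 0.0;
    let mut total_weight = 0.0;
    for e in execs {
        let weight = 3.0 * (0.5_f64).powf(e.age_days / 7.0);
        total_weight += weight;
        if e.success {
            weighted_ok += weight;
        }
    }
    if total_weight == 0.0 { 0.0 } else { weighted_ok / total_weight }
}

/// Confidence ramps linearly and saturates at 20 executions, so a
/// lucky streak on a tiny sample cannot dominate the score.
fn confidence(total_executions: usize) -> f64 {
    (total_executions as f64 / 20.0).min(1.0)
}

/// Combined score: 30% base load score, 50% task-type expertise, 20% confidence.
fn combined_score(base: f64, expertise: f64, conf: f64) -> f64 {
    0.3 * base + 0.5 * expertise + 0.2 * conf
}

fn main() {
    let history = vec![
        ExecutionData { success: true, age_days: 1.0 },
        ExecutionData { success: true, age_days: 3.0 },
        ExecutionData { success: false, age_days: 30.0 },
    ];
    let expertise = recency_weighted_success_rate(&history);
    let conf = confidence(history.len());
    println!("score = {:.3}", combined_score(0.6, expertise, conf));
}
```

Folding the 3x recency weight into the half-life decay keeps the weighting continuous; the real implementation may instead use a hard 7-day window.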
### KG Integration (✅ Complete)

- KGPersistence::get_executions_for_task_type() - query by agent + task type
- KGPersistence::get_agent_executions() - all executions for agent
- Coordinator::load_learning_profile_from_kg() - core KG→Learning integration
- Coordinator::load_all_learning_profiles() - batch load for multiple agents
- Convert PersistedExecution → ExecutionData for learning calculations

### Agent Assignment Integration (✅ Complete)

- AgentCoordinator uses learning profiles for task assignment
- extract_task_type() infers task type from title/description
- assign_task() scores candidates using AgentScoringService
- Fallback to load-based selection if no learning data available
- Learning profiles stored in coordinator.learning_profiles RwLock

### Profile Adapter Enhancements (✅ Complete)

- create_learning_profile() - initialize empty profiles
- add_task_type_expertise() - set task-type expertise
- update_profile_with_learning() - update swarm profiles from learning

## Files Modified

### vapora-knowledge-graph/src/persistence.rs (+30 lines)

- get_executions_for_task_type(agent_id, task_type, limit)
- get_agent_executions(agent_id, limit)

### vapora-agents/src/coordinator.rs (+100 lines)

- load_learning_profile_from_kg() - core KG integration method
- load_all_learning_profiles() - batch loading for agents
- assign_task() already uses learning-based scoring via AgentScoringService

### Existing Complete Implementation

- vapora-knowledge-graph/src/learning.rs - calculation functions
- vapora-agents/src/learning_profile.rs - data structures and expertise
- vapora-agents/src/scoring.rs - unified scoring service
- vapora-agents/src/profile_adapter.rs - adapter methods

## Tests Passing

- learning_profile: 7 tests ✅
- scoring: 5 tests ✅
- profile_adapter: 6 tests ✅
- coordinator: learning-specific tests ✅

## Data Flow

1. Task arrives → AgentCoordinator::assign_task()
2. Extract task_type from description
3. Query KG for task-type executions (load_learning_profile_from_kg)
4. Calculate expertise with recency bias
5. Score candidates (SwarmCoordinator + learning)
6. Assign to top-scored agent
7. Execution result → KG → Update learning profiles

## Key Design Decisions

- ✅ Recency bias: 7-day half-life with 3x weight for recent performance
- ✅ Confidence scoring: min(1.0, total_executions / 20) prevents overfitting
- ✅ Hierarchical scoring: 30% base load, 50% expertise, 20% confidence
- ✅ KG query limit: 100 recent executions per task-type for performance
- ✅ Async loading: load_learning_profile_from_kg supports concurrent loads

## Next: Phase 5.4 - Cost Optimization

Ready to implement budget enforcement and cost-aware provider selection.
# VAPORA Workspace for Provisioning

**Multi-Agent Multi-IA Platform Deployment**

**Version**: 0.2.0 | **Status**: Specification + Structure (Ready for Implementation)

This workspace contains all configuration for deploying VAPORA using Provisioning.

## 📁 Structure
```
vapora-wrksp/
├── workspace.toml              # Workspace configuration
├── kcl/                        # Infrastructure-as-Code (KCL schemas)
│   ├── cluster.k               # K8s cluster definition
│   ├── services.k              # Microservices
│   ├── storage.k               # Databases + storage
│   ├── agents.k                # Agent pools
│   └── multi-ia.k              # LLM Router configuration
├── taskservs/                  # Taskserv definitions (TOML)
│   ├── vapora-backend.toml
│   ├── vapora-frontend.toml
│   ├── vapora-agents.toml
│   ├── vapora-mcp-gateway.toml
│   └── vapora-llm-router.toml
├── workflows/                  # Batch operations
│   ├── deploy-full-stack.yaml
│   ├── scale-agents.yaml
│   ├── upgrade-vapora.yaml
│   └── disaster-recovery.yaml
└── README.md                   # This file
```
## 🚀 Quick Start

### 1. Prerequisites

```bash
# Install Provisioning
./vapora-install.sh

# Verify installation
provisioning --version

# Ensure kubectl access
kubectl cluster-info
```
### 2. Deploy Cluster

```bash
cd provisioning/vapora-wrksp

# Create K8s cluster from KCL
provisioning cluster create --config kcl/cluster.k

# This will:
# - Deploy K3s or RKE2 (auto-detect)
# - Install Istio service mesh
# - Install Rook Ceph for storage
# - Set up monitoring (Prometheus + Grafana)
```
### 3. Deploy Full Stack

```bash
# Deploy all VAPORA services
provisioning workflow run workflows/deploy-full-stack.yaml

# This will:
# - Deploy SurrealDB (3 replicas)
# - Deploy NATS JetStream (3 replicas)
# - Deploy Redis
# - Deploy Axum backend (3 replicas)
# - Deploy Leptos frontend (2 replicas)
# - Deploy Agent Runtime (3 replicas)
# - Deploy MCP Gateway (2 replicas)
# - Deploy LLM Router (2 replicas)
# - Initialize database
# - Configure ingress
```
### 4. Verify Deployment

```bash
# Check all services
provisioning health-check --services all

# Get status
provisioning taskserv list

# View logs
provisioning logs tail -f vapora-backend

# Access frontend
open https://vapora.example.com
```
### 5. Scale Agents

```bash
# Scale developer agents to 10
provisioning taskserv scale vapora-agents --agent developer --replicas 10

# View autoscaling status
provisioning taskserv autoscale status vapora-agents
```
## 📋 Configuration

### workspace.toml

Master configuration for the entire deployment:

- Cluster settings
- Agent pool sizes
- LLM providers
- Monitoring & scaling
- Backup policies

Edit this file to customize:
```toml
[agents]
developer = { min = 5, max = 20 }  # Scale developers to 20 max

[llm_router]
warn_threshold_daily = 5000  # Alert if daily LLM cost > $50
```
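For orientation, here is a fuller hypothetical shape of `workspace.toml`, covering the areas listed above. Every section and key name below is an assumption; the shipped `workspace.toml` is the authoritative schema.

```toml
# Hypothetical shape only -- consult the shipped workspace.toml for real keys.
[cluster]
name = "vapora-prod"
distribution = "k3s"        # or "rke2"

[agents]
developer = { min = 5, max = 20 }
reviewer  = { min = 2, max = 8 }

[llm_router]
providers = ["openai", "anthropic", "ollama"]
warn_threshold_daily = 5000

[monitoring]
prometheus = true
grafana = true

[backup]
schedule = "0 2 * * *"      # nightly at 02:00
retention_days = 30
```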
### Taskservs

Each taskserv (in `taskservs/`) defines:

- Source repository
- Build configuration
- Deployment resources
- Health checks
- Scaling policies
- Environment variables
- Secrets

Customize per-service as needed; a hypothetical sketch of the format follows.
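A minimal sketch of what a taskserv definition might look like, using the bullets above as the outline. All field names, the repository URL, and the vault path are hypothetical; consult the shipped `taskservs/*.toml` files for the real format.

```toml
# Hypothetical sketch of taskservs/vapora-backend.toml -- field names are
# illustrative; see the shipped taskservs/*.toml for the real format.
[source]
repo = "https://github.com/example/vapora"   # placeholder URL
path = "vapora-backend"

[build]
dockerfile = "Dockerfile"
image = "vapora/backend:0.2.0"

[deploy]
replicas = 3
cpu = "500m"
memory = "512Mi"

[health]
path = "/health"
interval_seconds = 10

[scaling]
min = 3
max = 10
target_cpu_percent = 70

[env]
RUST_LOG = "info"

[secrets]
database_url = "vault:vapora/db-url"         # resolved via RustyVault
```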
## 🔧 Common Operations

### Deploy New Version

```bash
# Update image version in workspace.toml or taskserv
# Then upgrade service:
provisioning taskserv upgrade vapora-backend --image vapora/backend:0.3.0

# This uses rolling update strategy (no downtime)
```
### Rollback Service

```bash
provisioning taskserv rollback vapora-backend --to-version 0.2.0
```
### View Agent Logs

```bash
provisioning logs tail -f vapora-agents --agent developer-001
```
### Access Database

```bash
# Port-forward to SurrealDB
kubectl port-forward -n vapora-system svc/surrealdb 8000:8000

# Then access at http://localhost:8000
```
### Monitor Costs

```bash
# View LLM usage & costs
provisioning metrics llm-router --daily

# Output shows:
# - Cost per provider
# - Tokens used per agent
# - Cost per task type
```
## 🎯 Workflows

Pre-built batch operations in `workflows/`:

- `deploy-full-stack.yaml`: Complete deployment from scratch (1 hour)
- `scale-agents.yaml`: Dynamic agent scaling based on queue depth (a hypothetical sketch follows this list)
- `upgrade-vapora.yaml`: Rolling upgrade of all VAPORA services
- `disaster-recovery.yaml`: Restore from backups and verify
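As a rough illustration of the queue-depth-driven scaling workflow, here is a hypothetical `scale-agents.yaml`. The actual workflow schema is defined by Provisioning; every key below is an assumption.

```yaml
# Purely illustrative sketch of scale-agents.yaml -- the real workflow
# schema is defined by Provisioning; keys here are assumptions.
name: scale-agents
trigger:
  metric: agent_queue_depth
  above: 50
steps:
  - taskserv: vapora-agents
    action: scale
    agent: developer
    replicas: "+2"          # add two developer agents per trigger
  - action: notify
    channel: ops
```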
## 🔒 Security

VAPORA deployment includes:

- **mTLS**: Mutual TLS between all services (Istio); see the example policy below
- **Network Policies**: Namespace isolation
- **RBAC**: Kubernetes role-based access
- **Secrets**: RustyVault integration
- **Audit Logs**: Full audit trail via Cedar
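For reference, mesh-wide strict mTLS in Istio is typically enabled with a `PeerAuthentication` resource like the one below. The resource kind and fields are standard Istio API; whether the VAPORA deployment generates exactly this manifest is an assumption.

```yaml
# Standard Istio resource for strict mTLS; shown for reference only.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: vapora-system
spec:
  mtls:
    mode: STRICT            # reject plaintext traffic inside the namespace
```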
## 📊 Monitoring

Post-deployment, access:

- **Grafana Dashboard**: https://vapora.example.com/grafana
  - Agent health & queue depth
  - API latency & error rates
  - LLM costs & usage
- **Prometheus**: https://vapora.example.com:9090 (raw metrics scraping)
- **Logs**: `provisioning logs tail vapora-backend`
## 🆘 Troubleshooting

### Pods not starting?

```bash
kubectl describe pod -n vapora-system <pod-name>
provisioning logs tail vapora-backend
```
### Service unavailable?

```bash
# Check health
provisioning health-check vapora-backend

# Check ingress
kubectl get ingress -n vapora-system

# Check Istio VirtualService
kubectl get vs -n vapora-system
```
### Database connection issues?

```bash
# Check SurrealDB
provisioning logs tail vapora-system surrealdb

# Port-forward and test
kubectl port-forward svc/surrealdb 8000:8000
curl http://localhost:8000/health
```
## 📚 References

- **Workspace Configuration**: `workspace.toml`
- **Full Architecture**: `../../guides/core/VAPORA-ARCHITECTURE-V2.md`
- **Provisioning Integration**: `../../guides/integration/PROVISIONING-INTEGRATION.md`
- **KCL Schemas**: read the `.k` files in the `kcl/` directory
- **Taskserv Format**: read the `.toml` files in the `taskservs/` directory
## 🚀 Next Steps

- ✅ Review `workspace.toml` and customize
- ✅ Run `provisioning cluster create`
- ✅ Run `provisioning workflow run deploy-full-stack.yaml`
- ✅ Access VAPORA at https://vapora.example.com
- ✅ Scale agents as needed for your workload
**Version**: 0.2.0 | **Status**: Ready for Implementation | **Maintained by**: VAPORA Team