Implement intelligent agent learning from Knowledge Graph execution history, with per-task-type expertise tracking, recency bias, and learning curves.

## Phase 5.3 Implementation

### Learning Infrastructure (✅ Complete)

- LearningProfileService with per-task-type expertise metrics
- TaskTypeExpertise model tracking success_rate, confidence, and learning curves
- Recency bias weighting: the most recent 7 days are weighted 3x higher (exponential decay)
- Confidence scoring prevents overfitting: min(1.0, executions / 20)
- Learning curves computed from daily execution windows

### Agent Scoring Service (✅ Complete)

- Unified AgentScore combining SwarmCoordinator + learning profiles
- Scoring formula: 0.3*base + 0.5*expertise + 0.2*confidence
- Ranks agents by combined score for intelligent assignment
- Support for recency-biased scoring (recent_success_rate)
- Methods: rank_agents, select_best, rank_agents_with_recency

### KG Integration (✅ Complete)

- KGPersistence::get_executions_for_task_type() - query by agent + task type
- KGPersistence::get_agent_executions() - all executions for an agent
- Coordinator::load_learning_profile_from_kg() - core KG→Learning integration
- Coordinator::load_all_learning_profiles() - batch load for multiple agents
- Converts PersistedExecution → ExecutionData for learning calculations

### Agent Assignment Integration (✅ Complete)

- AgentCoordinator uses learning profiles for task assignment
- extract_task_type() infers the task type from title/description
- assign_task() scores candidates using AgentScoringService
- Falls back to load-based selection when no learning data is available
- Learning profiles stored in the coordinator.learning_profiles RwLock

### Profile Adapter Enhancements (✅ Complete)

- create_learning_profile() - initialize empty profiles
- add_task_type_expertise() - set task-type expertise
- update_profile_with_learning() - update swarm profiles from learning

## Files Modified

### vapora-knowledge-graph/src/persistence.rs (+30 lines)

- get_executions_for_task_type(agent_id, task_type, limit)
- get_agent_executions(agent_id, limit)

### vapora-agents/src/coordinator.rs (+100 lines)

- load_learning_profile_from_kg() - core KG integration method
- load_all_learning_profiles() - batch loading for agents
- assign_task() already uses learning-based scoring via AgentScoringService

### Existing Complete Implementation

- vapora-knowledge-graph/src/learning.rs - calculation functions
- vapora-agents/src/learning_profile.rs - data structures and expertise
- vapora-agents/src/scoring.rs - unified scoring service
- vapora-agents/src/profile_adapter.rs - adapter methods

## Tests Passing

- learning_profile: 7 tests ✅
- scoring: 5 tests ✅
- profile_adapter: 6 tests ✅
- coordinator: learning-specific tests ✅

## Data Flow

1. Task arrives → AgentCoordinator::assign_task()
2. Extract task_type from the description
3. Query the KG for task-type executions (load_learning_profile_from_kg)
4. Calculate expertise with recency bias
5. Score candidates (SwarmCoordinator + learning)
6. Assign to the top-scored agent
7. Execution result → KG → Update learning profiles

## Key Design Decisions

- ✅ Recency bias: 7-day half-life with a 3x weight for recent performance
- ✅ Confidence scoring: min(1.0, total_executions / 20) prevents overfitting
- ✅ Hierarchical scoring: 30% base load, 50% expertise, 20% confidence
- ✅ KG query limit: 100 recent executions per task type, for performance
- ✅ Async loading: load_learning_profile_from_kg supports concurrent loads
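The recency weighting, confidence cap, and combined score summarized above can be condensed into a short sketch. This is a minimal illustration, not the actual vapora-agents implementation: the names (`ExecutionSample`, `recency_weight`, `confidence`, `weighted_success_rate`, `combined_score`) are hypothetical and the exact decay shape is assumed; only the constants (7-day half-life, 3x recent weight, executions / 20 confidence cap, 0.3/0.5/0.2 blend) come from the summary above.

```rust
use std::time::{Duration, SystemTime};

/// Hypothetical execution record used only for this illustration.
struct ExecutionSample {
    finished_at: SystemTime,
    success: bool,
}

/// Recency bias (assumed shape): recent executions start at 3x weight and
/// decay exponentially with a 7-day half-life.
fn recency_weight(age_days: f64) -> f64 {
    3.0 * 0.5f64.powf(age_days / 7.0)
}

/// Confidence grows with sample size and saturates after 20 executions,
/// so a few lucky runs cannot dominate the score.
fn confidence(total_executions: usize) -> f64 {
    (total_executions as f64 / 20.0).min(1.0)
}

/// Recency-weighted success rate over one agent's executions for one task type.
fn weighted_success_rate(samples: &[ExecutionSample], now: SystemTime) -> f64 {
    let (mut successes, mut total) = (0.0_f64, 0.0_f64);
    for s in samples {
        let age_days = now
            .duration_since(s.finished_at)
            .unwrap_or(Duration::ZERO)
            .as_secs_f64()
            / 86_400.0;
        let w = recency_weight(age_days);
        successes += if s.success { w } else { 0.0 };
        total += w;
    }
    if total == 0.0 { 0.0 } else { successes / total }
}

/// Combined score: 30% base load score + 50% task-type expertise + 20% confidence.
fn combined_score(base: f64, expertise: f64, confidence: f64) -> f64 {
    0.3 * base + 0.5 * expertise + 0.2 * confidence
}
```

Under these assumptions, an agent's score for a task type is `combined_score(base_load_score, weighted_success_rate(&samples, now), confidence(samples.len()))`.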
## Next: Phase 5.4 - Cost Optimization

Ready to implement budget enforcement and cost-aware provider selection.
# SecretumVault Integration
VAPORA integrates with SecretumVault, a post-quantum-ready secrets management system, for secure credential and API key management across all microservices.
## Overview
SecretumVault provides:
- Post-quantum-ready cryptography for future-proof security
- Multi-backend storage (filesystem, SurrealDB, PostgreSQL, etcd)
- Fine-grained access control with Cedar policy engine
- Secrets server for centralized credential management
- CLI tools for operations and development
## Integration Points
SecretumVault is integrated into these VAPORA services:
| Service | Purpose | Features |
|---|---|---|
| vapora-backend | REST API credentials, database secrets, JWT keys | Central secrets management |
| vapora-agents | Agent authentication, service credentials | Secure agent-to-service auth |
| vapora-llm-router | LLM provider API keys (Claude, OpenAI, Gemini, Ollama) | Cost tracking + credential rotation |
## Architecture
```
┌─────────────────────────────────────────────────────────────┐
│                       VAPORA Services                        │
├───────────────────┬─────────────────────┬───────────────────┤
│    Backend API    │ Agent Orchestration │    LLM Router     │
└─────────┬─────────┴──────────┬──────────┴─────────┬─────────┘
          │                    │                    │
          └────────────────────┼────────────────────┘
                               │
                               ▼
                ┌─────────────────────────────┐
                │    SecretumVault Server     │
                ├─────────────────────────────┤
                │ • Credential storage        │
                │ • Policy enforcement        │
                │ • Audit logging             │
                │ • Key rotation              │
                └──────────────┬──────────────┘
                               │
                   ┌───────────┴────────────┐
                   ▼                        ▼
             Storage Layer            Policy Engine
              (SurrealDB)                (Cedar)
```
## Configuration

### Environment Variables
```bash
# SecretumVault server connection
SECRETUMVAULT_URL=http://secretumvault:3030
SECRETUMVAULT_TOKEN=<identity-token>

# Storage backend
SECRETUMVAULT_STORAGE=surrealdb
SURREAL_URL=ws://surrealdb:8000
SURREAL_DB=secretumvault

# Crypto backend
SECRETUMVAULT_CRYPTO=openssl  # or aws-lc for post-quantum
```
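A service can read these variables into a small config struct at startup. The sketch below is illustrative only: `SecretumVaultConfig` and `from_env` are hypothetical names for this document, not part of the SecretumVault crate, and the fallback defaults simply mirror the example values above.

```rust
use std::env;

// Hypothetical config holder for the variables listed above.
struct SecretumVaultConfig {
    url: String,     // SECRETUMVAULT_URL
    token: String,   // SECRETUMVAULT_TOKEN
    storage: String, // SECRETUMVAULT_STORAGE, e.g. "surrealdb"
    crypto: String,  // SECRETUMVAULT_CRYPTO, "openssl" or "aws-lc"
}

impl SecretumVaultConfig {
    fn from_env() -> Result<Self, env::VarError> {
        Ok(Self {
            url: env::var("SECRETUMVAULT_URL")?,
            token: env::var("SECRETUMVAULT_TOKEN")?,
            // Fall back to the example values documented above when unset.
            storage: env::var("SECRETUMVAULT_STORAGE").unwrap_or_else(|_| "surrealdb".into()),
            crypto: env::var("SECRETUMVAULT_CRYPTO").unwrap_or_else(|_| "openssl".into()),
        })
    }
}
```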
### Cargo Features
SecretumVault is integrated with these features enabled:
```toml
secretumvault = { workspace = true }
# Automatically uses: "server", "surrealdb-storage"
```
## Usage Examples

### In vapora-backend
```rust
use std::env;

use secretumvault::SecretClient;

// Initialize client
let client = SecretClient::new(
    &env::var("SECRETUMVAULT_URL")?,
    &env::var("SECRETUMVAULT_TOKEN")?,
)
.await?;

// Retrieve API key
let api_key = client.get_secret("llm/claude-api-key").await?;

// Store credential securely (`password` is assumed to be in scope)
client
    .store_secret(
        "database/postgres-password",
        &password,
        Some("postgres-creds"),
    )
    .await?;
```
### In vapora-llm-router
```rust
use secretumvault::SecretClient;

// `client` is a SecretClient initialized as in the vapora-backend example above

// Get LLM provider credentials
let openai_key = client.get_secret("llm/openai-api-key").await?;
let claude_key = client.get_secret("llm/claude-api-key").await?;
let gemini_key = client.get_secret("llm/gemini-api-key").await?;

// Fallback to Ollama (local, no key needed)
```
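The fallback noted in the last comment could be wired up roughly as follows. This is a sketch, not router code: `Provider` and `select_provider` are hypothetical, and it assumes `get_secret` returns a `Result<String, _>` whose error covers a missing key.

```rust
use secretumvault::SecretClient;

// Hypothetical provider choice for this illustration.
enum Provider {
    OpenAi { api_key: String },
    Ollama, // local, no credentials required
}

async fn select_provider(client: &SecretClient) -> Provider {
    match client.get_secret("llm/openai-api-key").await {
        Ok(api_key) => Provider::OpenAi { api_key },
        // Any lookup failure (missing secret, vault unreachable) degrades to local Ollama.
        Err(_) => Provider::Ollama,
    }
}
```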
## Running SecretumVault

### Local Development
```bash
# Terminal 1: Start SecretumVault server
cd /Users/Akasha/Development/secretumvault
cargo run --bin secretumvault-server --features server,surrealdb-storage

# Terminal 2: Initialize with default policies
cargo run --bin secretumvault-cli -- init-policies
```
### Production (Kubernetes)
```bash
# Will be added to kubernetes/
kubectl apply -f kubernetes/secretumvault/
```
## Security Best Practices
- **Token Management**
  - Use identity-based tokens (not basic auth)
  - Rotate tokens regularly
  - Store the token in `.env.local` (not in git)
- **Secret Storage**
  - Never commit credentials to git
  - Use SecretumVault for all sensitive data
  - Enable audit logging for compliance
- **Policy Enforcement**
  - Define Cedar policies per role/service
  - Restrict access by the principle of least privilege
  - Review policies during security audits
- **Crypto Backend**
  - Use `aws-lc` for post-quantum readiness
  - Plan migration as quantum threats evolve
## Related Documentation

- **Integration Status**: ✅ Active
- **Services**: Backend, Agents, LLM Router
- **Features**: server, surrealdb-storage, cedar-policies