feat: Phase 5.3 - Multi-Agent Learning Infrastructure
Implement intelligent agent learning from Knowledge Graph execution history
with per-task-type expertise tracking, recency bias, and learning curves.
## Phase 5.3 Implementation
### Learning Infrastructure (✅ Complete)
- LearningProfileService with per-task-type expertise metrics
- TaskTypeExpertise model tracking success_rate, confidence, learning curves
- Recency bias weighting: executions from the last 7 days weighted 3x higher (exponential decay; see the sketch after this list)
- Confidence scoring prevents overfitting: min(1.0, executions / 20)
- Learning curves computed from daily execution windows
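
A minimal sketch of the two metrics above, assuming a hypothetical `ExecutionData` shape (field names are illustrative, and the recency weighting is reduced to a simple 3x step for executions in the last 7 days rather than the full exponential decay used in the implementation):

```rust
use chrono::{DateTime, Duration, Utc};

// Illustrative stand-in; the real ExecutionData in the learning module may differ.
struct ExecutionData {
    success: bool,
    executed_at: DateTime<Utc>,
}

/// Recency-weighted success rate: executions from the last 7 days count 3x.
fn recency_weighted_success_rate(executions: &[ExecutionData], now: DateTime<Utc>) -> f64 {
    let recent_cutoff = now - Duration::days(7);
    let (mut weighted_successes, mut total_weight) = (0.0, 0.0);
    for exec in executions {
        let weight = if exec.executed_at >= recent_cutoff { 3.0 } else { 1.0 };
        total_weight += weight;
        if exec.success {
            weighted_successes += weight;
        }
    }
    if total_weight == 0.0 { 0.0 } else { weighted_successes / total_weight }
}

/// Confidence grows linearly with sample size and saturates at 20 executions.
fn confidence(total_executions: usize) -> f64 {
    (total_executions as f64 / 20.0).min(1.0)
}
```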
### Agent Scoring Service (✅ Complete)
- Unified AgentScore combining SwarmCoordinator + learning profiles
- Scoring formula: 0.3*base + 0.5*expertise + 0.2*confidence (see the sketch after this list)
- Rank agents by combined score for intelligent assignment
- Support for recency-biased scoring (recent_success_rate)
- Methods: rank_agents, select_best, rank_agents_with_recency
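
A sketch of the combined scoring and ranking described above; the tuple-based candidate representation is illustrative, not the real `AgentScore` type:

```rust
use std::cmp::Ordering;

/// Combined score per the documented weights: 0.3*base + 0.5*expertise + 0.2*confidence.
/// All inputs are assumed to be normalized to [0, 1].
fn combined_score(base: f64, expertise: f64, confidence: f64) -> f64 {
    0.3 * base + 0.5 * expertise + 0.2 * confidence
}

/// Rank candidates (id, base, expertise, confidence) by combined score, highest first.
fn rank_agents(candidates: Vec<(String, f64, f64, f64)>) -> Vec<(String, f64)> {
    let mut scored: Vec<(String, f64)> = candidates
        .into_iter()
        .map(|(id, base, expertise, conf)| (id, combined_score(base, expertise, conf)))
        .collect();
    scored.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap_or(Ordering::Equal));
    scored
}
```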
### KG Integration (✅ Complete)
- KGPersistence::get_executions_for_task_type() - query by agent + task type
- KGPersistence::get_agent_executions() - all executions for agent
- Coordinator::load_learning_profile_from_kg() - core KG→Learning integration
- Coordinator::load_all_learning_profiles() - batch load for multiple agents
- Convert PersistedExecution → ExecutionData for learning calculations (conversion sketched after this list)
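
A rough sketch of the PersistedExecution → ExecutionData conversion step; the field names below are assumptions for illustration, not the actual struct definitions in the two crates:

```rust
use chrono::{DateTime, Utc};

// Illustrative stand-ins for the real types in vapora-knowledge-graph / vapora-agents.
#[allow(dead_code)]
struct PersistedExecution {
    agent_id: String,
    task_type: String,
    success: bool,
    duration_ms: u64,
    executed_at: DateTime<Utc>,
}

struct ExecutionData {
    success: bool,
    duration_ms: u64,
    executed_at: DateTime<Utc>,
}

impl From<PersistedExecution> for ExecutionData {
    fn from(p: PersistedExecution) -> Self {
        // Drop the KG-specific identifiers; keep only what the learning math needs.
        Self {
            success: p.success,
            duration_ms: p.duration_ms,
            executed_at: p.executed_at,
        }
    }
}
```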
### Agent Assignment Integration (✅ Complete)
- AgentCoordinator uses learning profiles for task assignment
- extract_task_type() infers task type from title/description
- assign_task() scores candidates using AgentScoringService (see the sketch after this list)
- Fallback to load-based selection if no learning data available
- Learning profiles stored in coordinator.learning_profiles RwLock
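
A keyword-heuristic sketch of task-type extraction and the load-based fallback; the real extract_task_type() and assign_task() logic in coordinator.rs may differ:

```rust
/// Infer a coarse task type from the task title/description (keyword heuristic sketch).
fn extract_task_type(title: &str, description: &str) -> String {
    let text = format!("{} {}", title, description).to_lowercase();
    for (keyword, task_type) in [
        ("test", "testing"),
        ("refactor", "refactoring"),
        ("bug", "bugfix"),
        ("doc", "documentation"),
    ] {
        if text.contains(keyword) {
            return task_type.to_string();
        }
    }
    "general".to_string()
}

/// Assignment sketch: prefer learning-based scores, fall back to the lowest-load agent
/// when no learning data is available.
fn pick_agent(scored: &[(String, f64)], loads: &[(String, f64)]) -> Option<String> {
    if let Some((id, _)) = scored.first() {
        return Some(id.clone()); // `scored` is already ranked, highest first
    }
    loads
        .iter()
        .min_by(|a, b| a.1.partial_cmp(&b.1).unwrap_or(std::cmp::Ordering::Equal))
        .map(|(id, _)| id.clone())
}
```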
### Profile Adapter Enhancements (✅ Complete)
- create_learning_profile() - initialize empty profiles
- add_task_type_expertise() - set task-type expertise
- update_profile_with_learning() - update swarm profiles from learning metrics (sketched below)
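
A sketch of the adapter direction, copying a learning-derived metric onto the swarm-facing AgentProfile (the learning-side struct here is a stand-in, not the real LearningProfile):

```rust
// Stand-in for the learning-side summary; the real LearningProfile has richer fields.
struct LearningSummary {
    overall_success_rate: f64,
}

// Mirrors only the swarm AgentProfile fields relevant to this sketch.
struct AgentProfile {
    id: String,
    success_rate: f64,
}

/// Push learning-derived metrics into the swarm profile used by SwarmCoordinator.
fn update_profile_with_learning(profile: &mut AgentProfile, learning: &LearningSummary) {
    profile.success_rate = learning.overall_success_rate;
}
```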
## Files Modified
### vapora-knowledge-graph/src/persistence.rs (+30 lines)
- get_executions_for_task_type(agent_id, task_type, limit)
- get_agent_executions(agent_id, limit)
### vapora-agents/src/coordinator.rs (+100 lines)
- load_learning_profile_from_kg() - core KG integration method
- load_all_learning_profiles() - batch loading for agents
- assign_task() already uses learning-based scoring via AgentScoringService
### Existing Complete Implementation
- vapora-knowledge-graph/src/learning.rs - calculation functions
- vapora-agents/src/learning_profile.rs - data structures and expertise
- vapora-agents/src/scoring.rs - unified scoring service
- vapora-agents/src/profile_adapter.rs - adapter methods
## Tests Passing
- learning_profile: 7 tests ✅
- scoring: 5 tests ✅
- profile_adapter: 6 tests ✅
- coordinator: learning-specific tests ✅
## Data Flow
1. Task arrives → AgentCoordinator::assign_task()
2. Extract task_type from description (see the end-to-end sketch after this list)
3. Query KG for task-type executions (load_learning_profile_from_kg)
4. Calculate expertise with recency bias
5. Score candidates (SwarmCoordinator + learning)
6. Assign to top-scored agent
7. Execution result → KG → Update learning profiles
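
A compact sketch stringing steps 2–6 together using the helper sketches above; the coordinator's real API differs, and the (base, expertise, confidence) triples here are assumed to be precomputed from steps 3–4:

```rust
/// Steps 2-6 in miniature: infer the task type, then pick the top-ranked candidate.
/// Relies on extract_task_type() and rank_agents() from the sketches above.
fn assign_sketch(
    title: &str,
    description: &str,
    candidates: Vec<(String, f64, f64, f64)>,
) -> Option<String> {
    let _task_type = extract_task_type(title, description); // step 2
    // Steps 3-4 (KG query + expertise calculation) are assumed done upstream.
    let ranked = rank_agents(candidates); // step 5
    ranked.first().map(|(id, _)| id.clone()) // step 6
}
```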
## Key Design Decisions
✅ Recency bias: 7-day half-life with 3x weight for recent performance
✅ Confidence scoring: min(1.0, total_executions / 20) prevents overfitting
✅ Hierarchical scoring: 30% base load, 50% expertise, 20% confidence
✅ KG query limit: 100 recent executions per task-type for performance
✅ Async loading: load_learning_profile_from_kg supports concurrent loads
## Next: Phase 5.4 - Cost Optimization
Ready to implement budget enforcement and cost-aware provider selection.
2026-01-11 13:03:53 +00:00
use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};

/// Message type for agent-to-agent communication
#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum SwarmMessage {
    TaskProposal {
        task_id: String,
        proposed_by: String,
        task_description: String,
        required_capabilities: Vec<String>,
    },
    BidRequest {
        task_id: String,
        task_description: String,
    },
    BidSubmission {
        task_id: String,
        bidder_id: String,
        bid_value: f64,
        estimated_duration_ms: u64,
    },
    TaskAssignment {
        task_id: String,
        assigned_to: String,
        priority: u32,
    },
    ConsensusVote {
        proposal_id: String,
        voter_id: String,
        vote: Vote,
        reasoning: String,
    },
    CoalitionInvite {
        coalition_id: String,
        coordinator_id: String,
        required_roles: Vec<String>,
    },
    CoalitionAccept {
        coalition_id: String,
        agent_id: String,
    },
    StatusUpdate {
        agent_id: String,
        current_load: f64,
        available: bool,
    },
}

/// Vote in consensus mechanism
#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]
pub enum Vote {
    Agree,
    Disagree,
    Abstain,
}

/// Bid for task execution
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Bid {
    pub task_id: String,
    pub bidder_id: String,
    pub bid_value: f64,
    pub estimated_duration_ms: u64,
    pub submitted_at: DateTime<Utc>,
}

/// Coalition of agents working together
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Coalition {
    pub id: String,
    pub coordinator_id: String,
    pub members: Vec<String>,
    pub required_roles: Vec<String>,
    pub status: CoalitionStatus,
    pub created_at: DateTime<Utc>,
}

/// Coalition status
#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]
pub enum CoalitionStatus {
    Forming,
    Active,
    Executing,
    Completed,
    Failed,
}

/// Agent capability profile in swarm
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct AgentProfile {
    pub id: String,
    pub roles: Vec<String>,
    pub capabilities: Vec<String>,
    pub current_load: f64,
    pub success_rate: f64,
    pub availability: bool,
}

impl Bid {
    pub fn new(task_id: String, bidder_id: String, bid_value: f64, duration_ms: u64) -> Self {
        Self {
            task_id,
            bidder_id,
            bid_value,
            estimated_duration_ms: duration_ms,
            submitted_at: Utc::now(),
        }
    }
}

impl Coalition {
    pub fn new(coordinator_id: String, required_roles: Vec<String>) -> Self {
        Self {
            id: format!("coal_{}", uuid::Uuid::new_v4()),
            coordinator_id,
            members: Vec::new(),
            required_roles,
            status: CoalitionStatus::Forming,
            created_at: Utc::now(),
        }
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_bid_creation() {
        let bid = Bid::new("task-1".to_string(), "agent-1".to_string(), 0.8, 5000);
        assert_eq!(bid.task_id, "task-1");
        assert_eq!(bid.bid_value, 0.8);
    }

    #[test]
    fn test_coalition_creation() {
        let coal = Coalition::new("agent-1".to_string(), vec!["developer".to_string()]);
        assert_eq!(coal.coordinator_id, "agent-1");
        assert_eq!(coal.status, CoalitionStatus::Forming);
    }
}