feat: Phase 5.3 - Multi-Agent Learning Infrastructure
Implement intelligent agent learning from Knowledge Graph execution history
with per-task-type expertise tracking, recency bias, and learning curves.
## Phase 5.3 Implementation
### Learning Infrastructure (✅ Complete)
- LearningProfileService with per-task-type expertise metrics
- TaskTypeExpertise model tracking success_rate, confidence, learning curves
- Recency bias weighting: executions from the most recent 7 days weighted 3x higher, with exponential decay beyond that window (see the sketch after this list)
- Confidence scoring prevents overfitting: min(1.0, executions / 20)
- Learning curves computed from daily execution windows
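
A minimal sketch of the two metrics above, assuming one plausible reading of the decay (3x weight inside the 7-day window, 7-day half-life beyond it); the real implementation lives in vapora-knowledge-graph/src/learning.rs:

```rust
// Sketch only: constants and shapes are assumptions, not the real API.
const RECENT_WINDOW_DAYS: f64 = 7.0;
const RECENT_WEIGHT: f64 = 3.0;

/// Weight an execution by age: 3x inside the recent window,
/// decaying exponentially (7-day half-life) beyond it.
fn recency_weight(age_days: f64) -> f64 {
    if age_days <= RECENT_WINDOW_DAYS {
        RECENT_WEIGHT
    } else {
        0.5f64.powf((age_days - RECENT_WINDOW_DAYS) / RECENT_WINDOW_DAYS)
    }
}

/// Confidence saturates at 1.0 after 20 executions, so a short lucky
/// streak cannot dominate the score.
fn confidence(total_executions: u32) -> f64 {
    (f64::from(total_executions) / 20.0).min(1.0)
}
```
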
### Agent Scoring Service (✅ Complete)
- Unified AgentScore combining SwarmCoordinator + learning profiles
- Scoring formula: 0.3*base + 0.5*expertise + 0.2*confidence (sketched after this list)
- Rank agents by combined score for intelligent assignment
- Support for recency-biased scoring (recent_success_rate)
- Methods: rank_agents, select_best, rank_agents_with_recency
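
Given the formula above, a sketch of the combined score; this AgentScore shape is illustrative, the real service is in vapora-agents/src/scoring.rs:

```rust
// Sketch only: field names are assumptions standing in for the real AgentScore.
struct AgentScore {
    base: f64,       // SwarmCoordinator load-based score
    expertise: f64,  // task-type success rate from the learning profile
    confidence: f64, // min(1.0, executions / 20)
}

impl AgentScore {
    /// 0.3*base + 0.5*expertise + 0.2*confidence, per the formula above.
    fn combined(&self) -> f64 {
        0.3 * self.base + 0.5 * self.expertise + 0.2 * self.confidence
    }
}

/// Rank candidates by combined score, best first; `select_best`
/// would take the head of this ranking.
fn rank_agents(mut scores: Vec<AgentScore>) -> Vec<AgentScore> {
    scores.sort_by(|a, b| b.combined().total_cmp(&a.combined()));
    scores
}
```
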
### KG Integration (✅ Complete)
- KGPersistence::get_executions_for_task_type() - query by agent + task type
- KGPersistence::get_agent_executions() - all executions for agent
- Coordinator::load_learning_profile_from_kg() - core KG→Learning integration
- Coordinator::load_all_learning_profiles() - batch load for multiple agents
- Convert PersistedExecution → ExecutionData for learning calculations (sketched below)
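
The conversion in the last bullet is plain field mapping; a sketch with assumed fields on both sides (the real structs live in the KG and learning crates):

```rust
// Sketch only: both structs are stand-ins with assumed fields.
struct PersistedExecution { task_type: String, success: bool, duration_ms: u64 }
struct ExecutionData { task_type: String, success: bool, duration_ms: u64 }

impl From<PersistedExecution> for ExecutionData {
    fn from(e: PersistedExecution) -> Self {
        // One-to-one mapping from the persisted KG record to the
        // in-memory form the learning calculations consume.
        Self { task_type: e.task_type, success: e.success, duration_ms: e.duration_ms }
    }
}
```
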
### Agent Assignment Integration (✅ Complete)
- AgentCoordinator uses learning profiles for task assignment
- extract_task_type() infers task type from title/description (see the sketch after this list)
- assign_task() scores candidates using AgentScoringService
- Fallback to load-based selection if no learning data available
- Learning profiles stored in coordinator.learning_profiles RwLock
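
A plausible shape for the task-type inference, assuming simple keyword matching; the real heuristics are in coordinator.rs:

```rust
// Sketch only: the keyword table is an assumption, not the shipped heuristic.
fn extract_task_type(title: &str, description: &str) -> String {
    let text = format!("{title} {description}").to_lowercase();
    for (keyword, task_type) in [
        ("refactor", "refactoring"),
        ("test", "testing"),
        ("bug", "bugfix"),
        ("doc", "documentation"),
    ] {
        if text.contains(keyword) {
            return task_type.to_string();
        }
    }
    "general".to_string() // fallback when nothing matches
}
```
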
### Profile Adapter Enhancements (✅ Complete)
- create_learning_profile() - initialize empty profiles
- add_task_type_expertise() - set task-type expertise
- update_profile_with_learning() - update swarm profiles from learning
## Files Modified
### vapora-knowledge-graph/src/persistence.rs (+30 lines)
- get_executions_for_task_type(agent_id, task_type, limit)
- get_agent_executions(agent_id, limit)
### vapora-agents/src/coordinator.rs (+100 lines)
- load_learning_profile_from_kg() - core KG integration method
- load_all_learning_profiles() - batch loading for agents
- assign_task() already uses learning-based scoring via AgentScoringService
### Existing Complete Implementation
- vapora-knowledge-graph/src/learning.rs - calculation functions
- vapora-agents/src/learning_profile.rs - data structures and expertise
- vapora-agents/src/scoring.rs - unified scoring service
- vapora-agents/src/profile_adapter.rs - adapter methods
## Tests Passing
- learning_profile: 7 tests ✅
- scoring: 5 tests ✅
- profile_adapter: 6 tests ✅
- coordinator: learning-specific tests ✅
## Data Flow
1. Task arrives → AgentCoordinator::assign_task()
2. Extract task_type from title/description
3. Query KG for task-type executions (load_learning_profile_from_kg)
4. Calculate expertise with recency bias
5. Score candidates (SwarmCoordinator + learning)
6. Assign to top-scored agent (fallback behavior sketched after this list)
7. Execution result → KG → Update learning profiles
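
Steps 5-6, including the load-based fallback from the assignment section, as a self-contained sketch with assumed types:

```rust
// Sketch only: Candidate is a stand-in combining both score sources.
struct Candidate { id: String, base: f64, expertise: f64, confidence: f64 }

fn combined(c: &Candidate) -> f64 {
    0.3 * c.base + 0.5 * c.expertise + 0.2 * c.confidence
}

/// Pick the top-scored candidate; when no candidate has any learning
/// data for this task type, fall back to pure load-based selection.
fn assign(candidates: &[Candidate]) -> Option<&Candidate> {
    if candidates.iter().all(|c| c.confidence == 0.0) {
        candidates.iter().max_by(|a, b| a.base.total_cmp(&b.base))
    } else {
        candidates.iter().max_by(|a, b| combined(a).total_cmp(&combined(b)))
    }
}
```
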
## Key Design Decisions
✅ Recency bias: 7-day half-life with 3x weight for recent performance
✅ Confidence scoring: min(1.0, total_executions / 20) prevents overfitting
✅ Weighted scoring: 30% base load, 50% expertise, 20% confidence
✅ KG query limit: 100 recent executions per task-type for performance
✅ Async loading: load_learning_profile_from_kg supports concurrent loads (see the sketch below)
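
A sketch of the concurrent batch load, assuming the futures crate and a stubbed per-agent loader:

```rust
use futures::future::join_all;

// Hypothetical stub standing in for the real KG-backed loader.
async fn load_learning_profile_from_kg(agent_id: &str) -> (String, f64) {
    (agent_id.to_string(), 0.0) // placeholder profile
}

/// Issue all per-agent loads concurrently so batch latency is close to
/// one KG round-trip instead of N sequential ones.
async fn load_all_learning_profiles(agent_ids: &[String]) -> Vec<(String, f64)> {
    join_all(agent_ids.iter().map(|id| load_learning_profile_from_kg(id))).await
}
```
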
## Next: Phase 5.4 - Cost Optimization
Ready to implement budget enforcement and cost-aware provider selection.
2026-01-11 13:03:53 +00:00
// vapora-backend: Workflow state machine
// Phase 3: State management for workflow lifecycle

use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};

/// Lifecycle states a workflow moves through; transitions are
/// validated by `Workflow::can_transition`.
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub enum WorkflowStatus {
    Created,
    Planning,
    InProgress,
    Blocked,
    Completed,
    Failed,
    RolledBack,
}

/// Execution states for an individual phase or step.
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub enum StepStatus {
    Pending,
    Running,
    Completed,
    Failed,
    Skipped,
    Blocked,
}

/// A workflow: an ordered set of phases plus lifecycle timestamps.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Workflow {
    pub id: String,
    pub title: String,
    pub status: WorkflowStatus,
    pub phases: Vec<Phase>,
    pub created_at: DateTime<Utc>,
    pub started_at: Option<DateTime<Utc>>,
    pub completed_at: Option<DateTime<Utc>>,
    pub estimated_completion: Option<DateTime<Utc>>,
}

/// A group of steps; `parallel` marks whether its steps may run concurrently.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Phase {
    pub id: String,
    pub name: String,
    pub status: StepStatus,
    pub steps: Vec<WorkflowStep>,
    pub parallel: bool,
    pub estimated_hours: f32,
}

/// A single unit of work assigned to an agent role.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct WorkflowStep {
    pub id: String,
    pub name: String,
    pub agent_role: String,
    pub status: StepStatus,
    pub depends_on: Vec<String>,
    pub can_parallelize: bool,
    pub started_at: Option<DateTime<Utc>>,
    pub completed_at: Option<DateTime<Utc>>,
    pub result: Option<String>,
    pub error: Option<String>,
}

impl Default for WorkflowStep {
    fn default() -> Self {
        Self {
            id: String::new(),
            name: String::new(),
            agent_role: String::new(),
            status: StepStatus::Pending,
            depends_on: Vec::new(),
            can_parallelize: false,
            started_at: None,
            completed_at: None,
            result: None,
            error: None,
        }
    }
}

#[allow(dead_code)]
impl Workflow {
    /// Create a new workflow
    pub fn new(id: String, title: String, phases: Vec<Phase>) -> Self {
        Self {
            id,
            title,
            status: WorkflowStatus::Created,
            phases,
            created_at: Utc::now(),
            started_at: None,
            completed_at: None,
            estimated_completion: None,
        }
    }

    /// Check if transition is allowed
    pub fn can_transition(&self, to: &WorkflowStatus) -> bool {
        matches!(
            (&self.status, to),
            (WorkflowStatus::Created, WorkflowStatus::Planning)
                | (WorkflowStatus::Planning, WorkflowStatus::InProgress)
                | (WorkflowStatus::InProgress, WorkflowStatus::Completed)
                | (WorkflowStatus::InProgress, WorkflowStatus::Failed)
                | (WorkflowStatus::InProgress, WorkflowStatus::Blocked)
                | (WorkflowStatus::Blocked, WorkflowStatus::InProgress)
                | (WorkflowStatus::Failed, WorkflowStatus::RolledBack)
        )
    }

    /// Transition to new state
    pub fn transition(&mut self, to: WorkflowStatus) -> Result<(), String> {
        if !self.can_transition(&to) {
            return Err(format!(
                "Cannot transition from {:?} to {:?}",
                self.status, to
            ));
        }

        match &to {
            WorkflowStatus::InProgress => {
                self.started_at = Some(Utc::now());
            }
            WorkflowStatus::Completed | WorkflowStatus::Failed | WorkflowStatus::RolledBack => {
                self.completed_at = Some(Utc::now());
            }
            _ => {}
        }

        self.status = to;
        Ok(())
    }

    /// Check if all steps are completed
    pub fn all_steps_completed(&self) -> bool {
        self.phases.iter().all(|p| {
            p.steps
                .iter()
                .all(|s| matches!(s.status, StepStatus::Completed | StepStatus::Skipped))
        })
    }

    /// Check if any step has failed
    pub fn any_step_failed(&self) -> bool {
        self.phases.iter().any(|p| {
            p.steps
                .iter()
                .any(|s| matches!(s.status, StepStatus::Failed))
        })
    }

    /// Get workflow progress percentage
    pub fn progress_percent(&self) -> u32 {
        let total_steps: usize = self.phases.iter().map(|p| p.steps.len()).sum();
        if total_steps == 0 {
            return 0;
        }

        let completed_steps: usize = self
            .phases
            .iter()
            .flat_map(|p| &p.steps)
            .filter(|s| matches!(s.status, StepStatus::Completed | StepStatus::Skipped))
            .count();

        ((completed_steps as f64 / total_steps as f64) * 100.0) as u32
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_workflow_creation() {
        let workflow = Workflow::new("wf-1".to_string(), "Test Workflow".to_string(), vec![]);

        assert_eq!(workflow.id, "wf-1");
        assert_eq!(workflow.status, WorkflowStatus::Created);
        assert!(workflow.started_at.is_none());
    }

    #[test]
    fn test_valid_transitions() {
        let mut workflow = Workflow::new("wf-1".to_string(), "Test".to_string(), vec![]);

        assert!(workflow.transition(WorkflowStatus::Planning).is_ok());
        assert_eq!(workflow.status, WorkflowStatus::Planning);

        assert!(workflow.transition(WorkflowStatus::InProgress).is_ok());
        assert_eq!(workflow.status, WorkflowStatus::InProgress);
        assert!(workflow.started_at.is_some());

        assert!(workflow.transition(WorkflowStatus::Completed).is_ok());
        assert_eq!(workflow.status, WorkflowStatus::Completed);
        assert!(workflow.completed_at.is_some());
    }

    #[test]
    fn test_invalid_transition() {
        let mut workflow = Workflow::new("wf-1".to_string(), "Test".to_string(), vec![]);

        let result = workflow.transition(WorkflowStatus::Completed);
        assert!(result.is_err());
    }

    #[test]
    fn test_progress_calculation() {
        let mut workflow = Workflow::new(
            "wf-1".to_string(),
            "Test".to_string(),
            vec![Phase {
                id: "p1".to_string(),
                name: "Phase 1".to_string(),
                status: StepStatus::Running,
                parallel: false,
                estimated_hours: 2.0,
                steps: vec![
                    WorkflowStep {
                        id: "s1".to_string(),
                        status: StepStatus::Completed,
                        ..Default::default()
                    },
                    WorkflowStep {
                        id: "s2".to_string(),
                        status: StepStatus::Running,
                        ..Default::default()
                    },
                ],
            }],
        );

        assert_eq!(workflow.progress_percent(), 50);

        workflow.phases[0].steps[1].status = StepStatus::Completed;
        assert_eq!(workflow.progress_percent(), 100);
    }
}