feat: Phase 5.3 - Multi-Agent Learning Infrastructure

Implement intelligent agent learning from Knowledge Graph execution history
with per-task-type expertise tracking, recency bias, and learning curves.

## Phase 5.3 Implementation

### Learning Infrastructure ( Complete)
- LearningProfileService with per-task-type expertise metrics
- TaskTypeExpertise model tracking success_rate, confidence, learning curves
- Recency bias weighting: recent 7 days weighted 3x higher (exponential decay)
- Confidence scoring prevents overfitting: min(1.0, executions / 20)
- Learning curves computed from daily execution windows

### Agent Scoring Service ( Complete)
- Unified AgentScore combining SwarmCoordinator + learning profiles
- Scoring formula: 0.3*base + 0.5*expertise + 0.2*confidence
- Rank agents by combined score for intelligent assignment
- Support for recency-biased scoring (recent_success_rate)
- Methods: rank_agents, select_best, rank_agents_with_recency

### KG Integration ( Complete)
- KGPersistence::get_executions_for_task_type() - query by agent + task type
- KGPersistence::get_agent_executions() - all executions for agent
- Coordinator::load_learning_profile_from_kg() - core KG→Learning integration
- Coordinator::load_all_learning_profiles() - batch load for multiple agents
- Convert PersistedExecution → ExecutionData for learning calculations

### Agent Assignment Integration ( Complete)
- AgentCoordinator uses learning profiles for task assignment
- extract_task_type() infers task type from title/description
- assign_task() scores candidates using AgentScoringService
- Fallback to load-based selection if no learning data available
- Learning profiles stored in coordinator.learning_profiles RwLock

### Profile Adapter Enhancements ( Complete)
- create_learning_profile() - initialize empty profiles
- add_task_type_expertise() - set task-type expertise
- update_profile_with_learning() - update swarm profiles from learning

## Files Modified

### vapora-knowledge-graph/src/persistence.rs (+30 lines)
- get_executions_for_task_type(agent_id, task_type, limit)
- get_agent_executions(agent_id, limit)

### vapora-agents/src/coordinator.rs (+100 lines)
- load_learning_profile_from_kg() - core KG integration method
- load_all_learning_profiles() - batch loading for agents
- assign_task() already uses learning-based scoring via AgentScoringService

### Existing Complete Implementation
- vapora-knowledge-graph/src/learning.rs - calculation functions
- vapora-agents/src/learning_profile.rs - data structures and expertise
- vapora-agents/src/scoring.rs - unified scoring service
- vapora-agents/src/profile_adapter.rs - adapter methods

## Tests Passing
- learning_profile: 7 tests 
- scoring: 5 tests 
- profile_adapter: 6 tests 
- coordinator: learning-specific tests 

## Data Flow
1. Task arrives → AgentCoordinator::assign_task()
2. Extract task_type from description
3. Query KG for task-type executions (load_learning_profile_from_kg)
4. Calculate expertise with recency bias
5. Score candidates (SwarmCoordinator + learning)
6. Assign to top-scored agent
7. Execution result → KG → Update learning profiles

## Key Design Decisions
 Recency bias: 7-day half-life with 3x weight for recent performance
 Confidence scoring: min(1.0, total_executions / 20) prevents overfitting
 Hierarchical scoring: 30% base load, 50% expertise, 20% confidence
 KG query limit: 100 recent executions per task-type for performance
 Async loading: load_learning_profile_from_kg supports concurrent loads

## Next: Phase 5.4 - Cost Optimization
Ready to implement budget enforcement and cost-aware provider selection.
This commit is contained in:
Jesús Pérez 2026-01-11 13:03:53 +00:00
parent d2abda35f9
commit d14150da75
254 changed files with 51550 additions and 149 deletions

26
.devcontainer/Dockerfile Normal file
View File

@ -0,0 +1,26 @@
FROM mcr.microsoft.com/devcontainers/rust:1-1.75-bookworm
RUN apt-get update && apt-get install -y \
curl \
wget \
git \
pkg-config \
libssl-dev \
build-essential \
clang \
lldb \
lld \
&& rm -rf /var/lib/apt/lists/*
# Install additional development tools
RUN cargo install cargo-tarpaulin cargo-flamegraph cargo-expand cargo-edit
# Install Nushell (optional, for script execution)
RUN apt-get update && apt-get install -y \
nushell \
&& rm -rf /var/lib/apt/lists/*
# Set default shell
ENV SHELL=/usr/bin/bash
WORKDIR /workspace

View File

@ -0,0 +1,77 @@
{
"name": "VAPORA Development",
"dockerComposeFile": "docker-compose.yml",
"service": "devcontainer",
"workspaceFolder": "/workspace",
"features": {
"ghcr.io/devcontainers/features/rust:1": {
"version": "1.75"
},
"ghcr.io/devcontainers/features/git:1": {
"version": "latest"
}
},
"postCreateCommand": "cargo build --workspace",
"customizations": {
"vscode": {
"extensions": [
"rust-lang.rust-analyzer",
"vadimcn.vscode-lldb",
"serayuzgur.crates",
"tamasfe.even-better-toml",
"esbenp.prettier-vscode"
],
"settings": {
"[rust]": {
"editor.formatOnSave": true,
"editor.defaultFormatter": "rust-lang.rust-analyzer"
},
"rust-analyzer.checkOnSave.command": "clippy",
"rust-analyzer.checkOnSave.extraArgs": ["--all-targets", "--all-features"],
"terminal.integrated.defaultProfile.linux": "bash"
}
}
},
"forwardPorts": [
3000,
8000,
8001,
8002,
4222,
11434
],
"portAttributes": {
"3000": {
"label": "Frontend (Leptos)",
"onAutoForward": "notify"
},
"8000": {
"label": "SurrealDB",
"onAutoForward": "notify"
},
"8001": {
"label": "Backend API",
"onAutoForward": "notify"
},
"8002": {
"label": "Agent Server",
"onAutoForward": "notify"
},
"4222": {
"label": "NATS",
"onAutoForward": "notify"
},
"11434": {
"label": "Ollama",
"onAutoForward": "silent"
}
},
"remoteEnv": {
"SURREAL_URL": "ws://surrealdb:8000",
"SURREAL_USER": "root",
"SURREAL_PASS": "root",
"NATS_URL": "nats://nats:4222",
"OLLAMA_URL": "http://ollama:11434",
"LOG_LEVEL": "debug"
}
}

View File

@ -0,0 +1,87 @@
version: '3.9'
services:
devcontainer:
build:
context: .
dockerfile: Dockerfile
volumes:
- ../:/workspace:cached
- /var/run/docker.sock:/var/run/docker.sock
environment:
- SURREAL_URL=ws://surrealdb:8000
- SURREAL_USER=root
- SURREAL_PASS=root
- NATS_URL=nats://nats:4222
- OLLAMA_URL=http://ollama:11434
- LOG_LEVEL=debug
- RUST_LOG=vapora=debug,info
command: sleep infinity
depends_on:
- surrealdb
- nats
- ollama
networks:
- vapora-network
surrealdb:
image: surrealdb/surrealdb:v2.3
command: >
start
--log trace
--user root
--pass root
memory
ports:
- "8000:8000"
networks:
- vapora-network
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
interval: 10s
timeout: 5s
retries: 5
nats:
image: nats:2.10-alpine
command: >
-js
-sd /data
--http_port 8222
ports:
- "4222:4222"
- "8222:8222"
volumes:
- nats-data:/data
networks:
- vapora-network
healthcheck:
test: ["CMD", "nats", "server", "ping"]
interval: 10s
timeout: 5s
retries: 5
ollama:
image: ollama/ollama:latest
ports:
- "11434:11434"
volumes:
- ollama-data:/root/.ollama
environment:
- OLLAMA_HOST=0.0.0.0:11434
networks:
- vapora-network
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:11434/api/tags"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
volumes:
nats-data:
ollama-data:
networks:
vapora-network:
driver: bridge

164
.gitignore vendored
View File

@ -1,101 +1,67 @@
wrks
ROOT
OLD
# Generated by Cargo
# will have compiled files and executables
debug/
target/
# Encryption keys and related files (CRITICAL - NEVER COMMIT)
.k
.k.backup
*.k
*.key.backup
config.*.toml
config.*back
# where book is written
_book
# Remove Cargo.lock from gitignore if creating an executable, leave it for libraries
# More information here https://doc.rust-lang.org/cargo/guide/cargo-toml-vs-cargo-lock.html
Cargo.lock
# These are backup files generated by rustfmt
**/*.rs.bk
# MSVC Windows builds of rustc generate these, which store debugging information
*.pdb
node_modules/
**/output.css
**/input.css
# Environment files
CLAUDE.md
.claude
utils/save*sh
COMMIT_MESSAGE.md
.wrks
nushell
nushell-*
*.tar.gz
#*-nushell-plugins.tar.gz
github-com
.coder
target
distribution
.qodo
# enviroment to load on bin/build
.env
.env.local
.env.production
.env.development
.env.staging
# Keep example files
!.env.example
# Configuration files (may contain sensitive data)
config.prod.toml
config.production.toml
config.local.toml
config.*.local.toml
# Keep example configuration files
!config.toml
!config.dev.toml
!config.example.toml
# Log files
logs/
*.log
# TLS certificates and keys
certs/
*.pem
*.crt
*.key
*.p12
*.pfx
# Database files
*.db
*.sqlite
*.sqlite3
# Backup files
*.bak
*.backup
*.tmp
*~
# Encryption and security related files
*.encrypted
*.enc
secrets/
private/
security/
# Configuration backups that may contain secrets
config.*.backup
config.backup.*
# OS generated files
# OSX trash
.DS_Store
.DS_Store?
._*
.Spotlight-V100
.Trashes
ehthumbs.db
Thumbs.db
# Documentation build output
book-output/
# Generated setup report
SETUP_COMPLETE.md
# Vscode files
.vscode
# Emacs save files
*~
\#*\#
.\#*
# Vim-related files
[._]*.s[a-w][a-z]
[._]s[a-w][a-z]
*.un~
Session.vim
.netrwhist
# cscope-related files
cscope.*
# User cluster configs
.kubeconfig
.tags*
# direnv .envrc files
.envrc
# make-related metadata
/.make/
# Just in time generated data in the source, should never be committed
/test/e2e/generated/bindata.go
# This file used by some vendor repos (e.g. github.com/go-openapi/...) to store secret variables and should not be ignored
!\.drone\.sec
# Godeps workspace
/Godeps/_workspace
/bazel-*
*.pyc
# generated by verify-vendor.sh
vendordiff.patch
.claude/settings.local.json
# Generated SBOM files
SBOM.*.json
*.sbom.json

206
CHANGELOG.md Normal file
View File

@ -0,0 +1,206 @@
# Changelog
All notable changes to VAPORA will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased]
### Added
- Intelligent learning system for multi-agent coordination
- Cost optimization with budget enforcement
- Gradual production deployment guide
## [1.2.0] - 2026-01-11
### Added - Phase 5.3: Multi-Agent Learning
- **Learning Profiles**: Per-task-type expertise tracking for each agent
- `LearningProfile` struct with task-type expertise mapping
- Success rate calculation with recency bias (7-day window weighted 3x)
- Confidence scoring based on execution count (prevents small-sample overfitting)
- Learning curve computation with exponential decay
- **Agent Scoring Service**: Unified agent selection combining swarm metrics + learning
- Formula: `final_score = 0.3*base + 0.5*expertise + 0.2*confidence`
- Base score from SwarmCoordinator (load balancing)
- Expertise score from learning profiles (historical success)
- Confidence weighting dampens low-execution-count agents
- **Knowledge Graph Integration**: Learning curve calculator
- `calculate_learning_curve()` with time-series expertise evolution
- `apply_recency_bias()` with exponential weighting formula
- Aggregate by time windows (daily/weekly) for trend analysis
- **Coordinator Enhancement**: Learning-based agent selection
- Extract task type from description/role
- Query learning profiles for task-specific expertise
- Replace simple load balancing with learning-aware scoring
- Background profile synchronization (30s interval)
### Added - Phase 5.4: Cost Optimization
- **Budget Manager**: Per-role cost enforcement
- `BudgetConfig` with TOML serialization/deserialization
- Role-specific monthly and weekly limits (in cents)
- Automatic fallback provider when budget exceeded
- Alert thresholds (default 80% utilization)
- Weekly/monthly automatic resets
- **Configuration Loading**: Graceful budget initialization
- `BudgetConfig::load()` with strict validation
- `BudgetConfig::load_or_default()` with fallback to empty config
- Environment variable override: `BUDGET_CONFIG_PATH`
- Validation: limits > 0, thresholds in [0.0, 1.0]
- **Cost-Aware Routing**: Provider selection with budget constraints
- Three-tier enforcement:
1. Budget exceeded → force fallback provider
2. Near threshold (>80%) → prefer cost-efficient providers
3. Normal → rule-based routing with cost as tiebreaker
- Cost efficiency ranking: `(quality * 100) / (cost + 1)`
- Fallback chain ordering by cost (Ollama → Gemini → OpenAI → Claude)
- **Prometheus Metrics**: Real-time cost and budget monitoring
- `vapora_llm_budget_remaining_cents{role}` - Monthly budget remaining
- `vapora_llm_budget_utilization{role}` - Budget usage fraction (0.0-1.0)
- `vapora_llm_fallback_triggered_total{role,reason}` - Fallback event counter
- `vapora_llm_cost_per_provider_cents{provider}` - Cumulative cost per provider
- `vapora_llm_tokens_per_provider{provider,type}` - Token usage tracking
- **Grafana Dashboards**: Visual monitoring
- Budget utilization gauge (color thresholds: 70%, 90%, 100%)
- Cost distribution pie chart (percentage per provider)
- Fallback trigger time series (rate of fallback activations)
- Agent assignment latency histogram (P50, P95, P99)
- **Alert Rules**: Prometheus alerting
- `BudgetThresholdExceeded`: Utilization > 80% for 5 minutes
- `HighFallbackRate`: Rate > 0.1 for 10 minutes
- `CostAnomaly`: Cost spike > 2x historical average
- `LearningProfilesInactive`: No updates for 5 minutes
### Added - Integration & Testing
- **End-to-End Integration Tests**: Validate learning + budget interaction
- `test_end_to_end_learning_with_budget_enforcement()` - Full system test
- `test_learning_selection_with_budget_constraints()` - Budget pressure scenarios
- `test_learning_profile_improvement_with_budget_tracking()` - Learning evolution
- **Agent Server Integration**: Budget initialization at startup
- Load budget configuration from `config/agent-budgets.toml`
- Initialize BudgetManager with Arc for thread-safe sharing
- Attach to coordinator via `with_budget_manager()` builder pattern
- Graceful fallback if no configuration exists
- **Coordinator Builder Pattern**: Budget manager attachment
- Added `budget_manager: Option<Arc<BudgetManager>>` field
- `with_budget_manager()` method for fluent API
- Updated all constructors (`new()`, `with_registry()`)
- Backward compatible (works without budget configuration)
### Added - Documentation
- **Implementation Summary**: `.coder/2026-01-11-phase-5-completion.done.md`
- Complete architecture overview (3-layer integration)
- All files created/modified with line counts
- Prometheus metrics reference
- Quality metrics (120 tests passing)
- Educational insights
- **Gradual Deployment Guide**: `guides/gradual-deployment-guide.md`
- Week 1: Staging validation (24 hours)
- Week 2-3: Canary deployment (incremental traffic shift)
- Week 4+: Production rollout (100% traffic)
- Automated rollback procedures (< 5 minutes)
- Success criteria per phase
- Emergency procedures and checklists
### Changed
- **LLMRouter**: Enhanced with budget awareness
- `select_provider_with_budget()` method for budget-aware routing
- Fixed incomplete fallback implementation (lines 227-246)
- Cost-ordered fallback chain (cheapest first)
- **ProfileAdapter**: Learning integration
- `update_from_kg_learning()` method for learning profile sync
- Query KG for task-specific executions with recency filter
- Calculate success rate with 7-day exponential decay
- **AgentCoordinator**: Learning-based assignment
- Replaced min-load selection with `AgentScoringService`
- Extract task type from task description
- Combine swarm metrics + learning profiles for final score
### Fixed
- **Clippy Warnings**: All resolved (0 warnings)
- `redundant_guards` in BudgetConfig
- `needless_borrow` in registry defaults
- `or_insert_with``or_default()` conversions
- `map_clone``cloned()` conversions
- `manual_div_ceil``div_ceil()` method
- **Test Warnings**: Unused variables marked with underscore prefix
### Technical Details
**New Files Created (13)**:
- `vapora-agents/src/learning_profile.rs` (250 lines)
- `vapora-agents/src/scoring.rs` (200 lines)
- `vapora-knowledge-graph/src/learning.rs` (150 lines)
- `vapora-llm-router/src/budget.rs` (300 lines)
- `vapora-llm-router/src/cost_ranker.rs` (180 lines)
- `vapora-llm-router/src/cost_metrics.rs` (120 lines)
- `config/agent-budgets.toml` (50 lines)
- `vapora-agents/tests/end_to_end_learning_budget_test.rs` (NEW)
- 4+ integration test files (700+ lines total)
**Modified Files (10)**:
- `vapora-agents/src/coordinator.rs` - Learning integration
- `vapora-agents/src/profile_adapter.rs` - KG sync
- `vapora-agents/src/bin/server.rs` - Budget initialization
- `vapora-llm-router/src/router.rs` - Cost-aware routing
- `vapora-llm-router/src/lib.rs` - Budget exports
- Plus 5 more lib.rs and config updates
**Test Suite**:
- Total: 120 tests passing
- Unit tests: 71 (vapora-agents: 41, vapora-llm-router: 30)
- Integration tests: 42 (learning: 7, coordinator: 9, budget: 11, cost: 12, end-to-end: 3)
- Quality checks: Zero warnings, clippy -D warnings passing
**Deployment Readiness**:
- Staging validation checklist complete
- Canary deployment Istio VirtualService configured
- Grafana dashboards deployed
- Alert rules created
- Rollback automation ready (< 5 minutes)
## [0.1.0] - 2026-01-10
### Added
- Initial release with core platform features
- Multi-agent orchestration with 12 specialized roles
- Multi-IA router (Claude, OpenAI, Gemini, Ollama)
- Kanban board UI with glassmorphism design
- SurrealDB multi-tenant data layer
- NATS JetStream agent coordination
- Kubernetes-native deployment
- Istio service mesh integration
- MCP plugin system
- RAG integration for semantic search
- Cedar policy engine RBAC
- Full-stack Rust implementation (Axum + Leptos)
[unreleased]: https://github.com/vapora-platform/vapora/compare/v1.2.0...HEAD
[1.2.0]: https://github.com/vapora-platform/vapora/compare/v0.1.0...v1.2.0
[0.1.0]: https://github.com/vapora-platform/vapora/releases/tag/v0.1.0

10237
Cargo.lock generated Normal file

File diff suppressed because it is too large Load Diff

View File

@ -1,6 +1,6 @@
<div align="center">
<img src="imgs/vapora.svg" alt="Vapora Logo" width="400">
<img src="assets/vapora.svg" alt="Vapora Logo" width="400">
🌊 Intelligent Development Orchestration Platform <br>
🎯 Specialized agents orchestrate pipelines for your team <br>
@ -11,8 +11,7 @@
[![Rust](https://img.shields.io/badge/rust-1.75%2B-orange.svg)](https://www.rust-lang.org)
[![Kubernetes](https://img.shields.io/badge/kubernetes-ready-326CE5.svg)](https://kubernetes.io)
[![Istio](https://img.shields.io/badge/istio-service%20mesh-466BB0.svg)](https://istio.io)
[![Status](https://img.shields.io/badge/status-production%20ready-brightgreen.svg)](PROJECT_COMPLETION_REPORT.md)
[![Phase](https://img.shields.io/badge/phase-5%2F5%20complete-success.svg)](.coder/summaries/fase-5-deployment.md)
[![Tests](https://img.shields.io/badge/tests-218%2B%20passing-green.svg)](crates/)
[Features](#-features) • [Quick Start](#-quick-start) • [Architecture](#-architecture) • [Docs](docs/) • [Contributing](#-contributing)
@ -30,9 +29,9 @@
Where ideas vaporize into reality
```
## 🌟 What is Vapora v1.0?
## 🌟 What is Vapora v1.2?
**VAPORA** is an **intelligent development orchestration platform** where teams and AI agents collaborate seamlessly to solve the 4 critical problems in parallel:
**VAPORA** is a **13-crate Rust workspace** (218+ tests) delivering an **intelligent development orchestration platform** where teams and AI agents collaborate seamlessly to solve the 4 critical problems in parallel:
- ✅ **Context Switching** (Developers unified in one system instead of jumping between tools)
- ✅ **Knowledge Fragmentation** (Team decisions, code, and docs discoverable with RAG)
@ -65,6 +64,16 @@
- **Smart Organization**: Feature tags, priority levels, task ordering
- **Responsive Design**: Works seamlessly from mobile to ultra-wide displays
### 🧠 Intelligent Learning & Cost Optimization (Phase 5.3 + 5.4)
- **Per-Task-Type Learning**: Agents build expertise profiles from execution history
- **Recency Bias**: Recent performance weighted 3x (last 7 days) for adaptive selection
- **Budget Enforcement**: Hard caps per role (monthly/weekly) with automatic fallback
- **Cost-Efficient Routing**: Quality/cost ratio optimization for provider selection
- **Learning Curves**: Track agent improvement over time with confidence scoring
- **Prometheus Metrics**: Real-time budget utilization, fallback triggers, cost per provider
- **Gradual Production Rollout**: 4-week canary deployment with automated rollback
### 🤖 Specialized AI Agents (Customizable & Extensible)
Select, tune, or extend agents for your pipeline orchestration needs. Default agents include:
@ -88,6 +97,9 @@
- ✅ Manual override capability with audit logging
- ✅ Fallback chains (e.g., Claude Opus → GPT-4 → Claude Sonnet)
- ✅ Cost tracking & budget alerts per agent role
- ✅ **Learning-based agent selection**: Agents improve from execution history
- ✅ **Budget enforcement**: Per-role cost limits with automatic fallback
- ✅ **Cost-efficient routing**: Quality/cost optimization per task type
### 🏗️ Infrastructure
@ -143,6 +155,9 @@
NATS JetStream Message queue for async agent coordination
Cedar Policy engine for fine-grained RBAC
MCP Gateway Model Context Protocol plugin system
Learning Profiles Per-task-type expertise with recency bias
Budget Manager Per-role cost limits with automatic fallback
Knowledge Graph Temporal execution history with learning curves
Claude API Anthropic Claude (Opus, Sonnet, Haiku)
OpenAI API GPT-4, GPT-4o, GPT-3.5-turbo
Gemini API Google Gemini (2.0 Pro, Flash, 1.5 Pro)
@ -201,7 +216,7 @@
```bash
# Build and push Docker images
nu scripts/build-docker.nu --registry docker.io --tag v0.1.0 --push
nu scripts/build-docker.nu --registry docker.io --tag v1.2.0 --push
# Update secrets
edit kubernetes/03-secrets.yaml # Add your API keys
@ -214,7 +229,7 @@ cd provisioning/vapora-wrksp
provisioning workflow run workflows/deploy-full-stack.yaml
```
**See full deployment guide**: [`DEPLOYMENT.md`](DEPLOYMENT.md)
**See full deployment guide**: [`Deployment Guide`](docs/setup/deployment.md)
---
🏗️ Architecture
@ -325,25 +340,28 @@ provisioning workflow run workflows/deploy-full-stack.yaml
---
📦 Project Structure
```
```text
vapora/
├── vapora-backend/ # Axum API server
├── vapora-agents/ # Agent runtime + MCP integration
├── vapora-mcp-plugins/ # MCP plugin collection
│ ├── code-plugin/
│ ├── rag-plugin/
│ ├── github-plugin/
│ └── jira-plugin/
├── vapora-shared/ # Shared models and utilities
├── vapora-frontend/ # Leptos CSR application
├── kubernetes/ # K8s manifests
│ ├── base/
│ ├── overlays/
│ └── platform/
├── tekton/ # CI/CD pipelines
├── migrations/ # SurrealDB migrations
└── docs/ # Documentation
├── crates/
│ ├── vapora-shared/ # Core models, errors, types
│ ├── vapora-backend/ # Axum REST API (40+ endpoints, 79 tests)
│ ├── vapora-agents/ # Agent orchestration + learning profiles (67 tests)
│ ├── vapora-llm-router/ # Multi-provider routing + budget (53 tests)
│ ├── vapora-swarm/ # Swarm coordination + Prometheus (6 tests)
│ ├── vapora-knowledge-graph/ # Temporal KG + learning curves (13 tests)
│ ├── vapora-frontend/ # Leptos WASM UI (Kanban)
│ ├── vapora-mcp-server/ # MCP protocol gateway
│ ├── vapora-tracking/ # Task/project storage layer
│ ├── vapora-telemetry/ # OpenTelemetry integration
│ ├── vapora-analytics/ # Event pipeline + usage stats
│ ├── vapora-worktree/ # Git worktree management
│ └── vapora-doc-lifecycle/ # Documentation management
├── kubernetes/ # K8s manifests (base, overlays, platform)
├── migrations/ # SurrealDB migrations
├── config/ # Configuration files (TOML)
└── docs/ # Product documentation
# Total: 13 crates, 218+ tests
```
---
@ -504,15 +522,7 @@ provisioning workflow run workflows/deploy-full-stack.yaml
- workflows/: Batch operations (deploy, scale, upgrade, disaster-recovery)
- README.md: Quick start and operation guide
**Implementation Roadmap**:
- Phase 1 (Weeks 1-4): Foundation (Axum backend, SurrealDB, JWT auth, Leptos frontend)
- Phase 2 (Weeks 5-8): Agents (Registry, NATS, 12 agent implementations)
- Phase 3 (Weeks 9-10): Multi-IA (LLM Router, cost tracking, fallback logic)
- Phase 4 (Weeks 11-13): Workflows (Orchestrator, parallel execution, dashboard)
- Phase 5 (Weeks 14-16): K8s & Provisioning (KCL, taskservs, CI/CD)
**References**:
- CLAUDE.md: Complete project overview with version history
- guides/archive/v1-single-agent/: Legacy v1.0 specifications for historical reference
---
@ -614,6 +624,6 @@ provisioning workflow run workflows/deploy-full-stack.yaml
---
⚠️ **Brand Assets Location**: See [`./imgs/`](./imgs/) for complete brand system including logo variations, color palettes, and interactive preview → [Open Logo Preview](./imgs/vapora_logo.html)
⚠️ **Brand Assets Location**: See [`./assets/`](./assets/) for complete brand system including logo variations, color palettes, and interactive preview → [Open Logo Preview](./assets/vapora_logo.html)
---

242
assets/README.md Normal file
View File

@ -0,0 +1,242 @@
# 🎨 Vapora Logo System
<img src="vapora.svg" alt="Vapora Logo" width="400">
Complete branding system for Vapora with horizontal and vertical logo variations, comprehensive color specifications, and typography guidelines.
## 📂 Logo Files
### Horizontal Logos
- **vapora.svg** - Main animated version with gradient colors and black background
- **vapora_white.svg** - Static version optimized for light backgrounds
- **vapora_mono.svg** - Monochromatic grayscale version for printing
- **vapora_static.svg** - Static version without animations (for PDFs, documents)
- **vapora_favicon.svg** - 64x64px compact favicon version
### Vertical Logos
- **vapora_v.svg** - Main animated vertical version (image on top, text below)
- **vapora_v_white.svg** - Static vertical version for light backgrounds
- **vapora_v_mono.svg** - Monochromatic grayscale vertical version
- **vapora_v_static.svg** - Static vertical version without animations
- **vapora_v_favicon.svg** - 64x64px compact vertical favicon
## 🎯 Usage Guidelines
### When to Use Each Variant
**Horizontal Layout**
- Websites and web applications (animated or static)
- Headers and hero sections
- Social media profiles
- Email signatures (static version)
**Vertical Layout**
- Profile pictures and avatars
- Vertical banners
- App icons
- Card layouts with constrained width
**Color Variants**
- **Animated (Color)**: Digital products, interactive platforms, dynamic content
- **White Background**: Light-themed websites, printed materials on colored backgrounds
- **Monochromatic**: Professional documents, B&W printing, accessibility
- **Static**: PDFs, printed materials, documentation
- **Favicon**: Browser tabs, favicons, small UI elements
## 🎨 Brand Colors
### Primary Color Palette
| Color | HTML | RGB | CMYK |
|-------|------|-----|------|
| Cyan | #22d3ee | rgb(34, 211, 238) | 86%, 11%, 0%, 7% |
| Purple | #a855f7 | rgb(168, 85, 247) | 32%, 66%, 0%, 3% |
| Pink | #ec4899 | rgb(236, 72, 153) | 0%, 70%, 35%, 8% |
| Black | #000000 | rgb(0, 0, 0) | 0%, 0%, 0%, 100% |
| White | #ffffff | rgb(255, 255, 255) | 0%, 0%, 0%, 0% |
**Gradient**: Cyan → Purple → Pink (horizontal transition)
## 📝 Typography
### Main Logo Text
- **Font Family**: JetBrains Mono / Fira Code
- **Style**: Monospace
- **Weight**: 800 (Bold)
- **Letter Spacing**: 2-3px
- **Text**: VAPORA
### Tagline
- **Font Family**: Inter
- **Style**: Sans-serif
- **Size**: 8px
- **Letter Spacing**: 0.1em
- **Text**: "Evaporate complexity"
### Font Loading (Google Fonts)
To ensure JetBrains Mono is always available, all SVG files use **Google Fonts CDN**:
**In SVG files:**
```xml
<defs>
<style>
@import url('https://fonts.googleapis.com/css2?family=JetBrains+Mono:wght@800&display=swap');
</style>
</defs>
```
**In HTML files:**
```html
<head>
<link href="https://fonts.googleapis.com/css2?family=JetBrains+Mono:wght@800&display=swap" rel="stylesheet">
</head>
```
This guarantees the font loads properly even if users don't have JetBrains Mono installed locally. The `display=swap` parameter ensures text displays immediately with fallback fonts while the custom font loads.
## 📊 Technical Specifications
### SVG Properties
- **Scalable Vector Graphics** format
- **Responsive**: Scales to any size without quality loss
- **ViewBox**: Proportional scaling using viewBox attributes
- **preserveAspectRatio**: xMidYMid meet (maintains aspect ratio)
### Animation Details
- **Animated versions**: Stroke animations, particle movements, pulsing effects
- **Static versions**: Same design without motion
- **Performance**: Lightweight SVG files optimized for web
## 🌐 Interactive Preview
For a comprehensive, interactive preview of all logo variants with:
- Color palette with one-click copy functionality
- Typography specifications
- Detailed size examples
- Responsive grid layout
Visit: **[vapora_logo.html](./vapora_logo.html)**
## 💾 File Sizes & Formats
All logos are available as SVG files (scalable vector graphics):
- **Small**: 5-15 KB per file
- **Format**: SVG + inline styles
- **Browser Support**: All modern browsers (Chrome, Firefox, Safari, Edge)
## 🎭 Color Modes
### Light Mode (vapora_white.svg family)
- White background (#ffffff)
- All gradient colors maintained
- Best for: Light-themed interfaces
### Dark Mode (vapora.svg family)
- Black background (#000000)
- Full color gradients with glow effects
- Best for: Dark-themed interfaces
### Print & Professional (vapora_mono.svg family)
- Black background with grayscale colors
- Best for: Printing, official documents
- CMYK values provided for print production
## 🚀 Implementation Tips
### Web Usage
```html
<!-- Horizontal animated -->
<img src="./vapora.svg" alt="Vapora Logo">
<!-- Vertical for avatars -->
<img src="./vapora_v.svg" alt="Vapora Logo Vertical">
<!-- Static for static content -->
<img src="./vapora_static.svg" alt="Vapora Logo">
```
### Responsive Sizing
- Mobile: Use favicon or small scale (64-128px)
- Tablet: Medium scale (200-400px)
- Desktop: Full scale (400px+)
- All SVG files scale infinitely without quality loss
### Print Production
1. Use `vapora_mono.svg` for B&W printing
2. Use CMYK color values for print preparation
3. Ensure minimum 50px height for small prints
4. Use static versions (no animations) for PDFs
### Font Implementation in Your Project
To integrate Vapora logos in your project and ensure fonts work correctly:
**Option 1: Link in HTML (Recommended)**
```html
<!-- In your HTML head -->
<link href="https://fonts.googleapis.com/css2?family=JetBrains+Mono:wght@800&display=swap" rel="stylesheet">
<!-- SVGs will automatically use the loaded font -->
<img src="./vapora.svg" alt="Vapora">
```
**Option 2: Import in CSS**
```css
/* In your main CSS file */
@import url('https://fonts.googleapis.com/css2?family=JetBrains+Mono:wght@800&display=swap');
```
**Option 3: Direct in SVG (Already included)**
```xml
<!-- SVG files already include this -->
<style>
@import url('https://fonts.googleapis.com/css2?family=JetBrains+Mono:wght@800&display=swap');
</style>
```
**Font Parameters:**
- `family=JetBrains+Mono` - Font name (URL-encoded)
- `wght@800` - Weight (800 = Bold)
- `display=swap` - Shows fallback text immediately, swaps when font loads
## 📋 Brand Assets Checklist
- [x] Horizontal logo (animated)
- [x] Horizontal logo (static)
- [x] Vertical logo (animated)
- [x] Vertical logo (static)
- [x] Light background variant
- [x] Monochromatic variant
- [x] Favicon variants
- [x] Color specifications (HTML, RGB, CMYK)
- [x] Typography guidelines
- [x] Interactive preview page
## 📍 File Location
This README and all brand assets are located in:
```
vapora/
└── assets/
├── README.md (this file)
├── vapora_logo.html (interactive preview)
├── vapora.svg (and other logo files)
└── ...
```
## 📞 Support
For questions about logo usage or brand guidelines, refer to the interactive preview page:
### Open the Logo Preview
1. Navigate to the `imgs` folder
2. Open **[vapora_logo.html](./vapora_logo.html)** in your browser
3. Click on any logo to view it in full-screen
**Direct URL**: `./vapora_logo.html`
---
**Last Updated**: November 2025
**Brand**: Vapora - Evaporate complexity

36
assets/prompt_gen.md Normal file
View File

@ -0,0 +1,36 @@
Crea un svg para el logo de Vapora
Nombre: Vapora
Dominio principal: vapora.dev
Tagline: "Evaporate complexity. Build in the cloud."
Logo concept:
- Ondas de vapor ascendentes
- Gradiente cyan → purple → pink
- Efecto glassmorphism en el texto
- Animación sutil de partículas flotando
Color palette:
Primary: Cyan (#22d3ee) - vapor frío
Secondary: Purple (#a855f7) - transición
Accent: Pink (#ec4899) - vapor caliente
Background: Deep black (#000000) con gradientes
Typography:
- Heading: Monospace futurista (JetBrains Mono, Fira Code)
- Body: Inter o similar sans-serif moderna
💻 Características técnicas:
5 streams de datos ascendentes con diferentes patrones (tipo electrocardiograma/señal digital)
Grid técnico de fondo sutil tipo dashboard
Nodos de datos brillantes que fluyen hacia arriba
Hexágonos técnicos flotando (muy dev/tech)
Líneas de conexión horizontales animadas entre streams
Indicadores laterales "↑ STREAM" / "↑ DATA"
Barras de nivel animadas debajo del tagline
Metadata técnica (v4.0.0-dev) en la esquina
🎯 Concepto:
Ya no son burbujas de bebida, sino flujos de datos ascendentes de una plataforma cloud. Los streams representan pipelines, deploys, y procesos evaporándose hacia la nube. Mucho más DevOps/Cloud/Tech.
📐 Composición:
Ocupa desde Y=240 hasta Y=120 (mucha más altura vertical)
Grid técnico cubre todo el canvas
Mucho más elemento visual sin saturar

View File

Before

Width:  |  Height:  |  Size: 10 KiB

After

Width:  |  Height:  |  Size: 10 KiB

38
assets/vapora_favicon.svg Normal file
View File

@ -0,0 +1,38 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 64 64" width="64" height="64" preserveAspectRatio="xMidYMid meet">
<defs>
<!-- Google Fonts import -->
<style>
@import url('https://fonts.googleapis.com/css2?family=JetBrains+Mono:wght@800&amp;display=swap');
</style>
<linearGradient id="faviconGrad" x1="0%" y1="0%" x2="100%" y2="0%">
<stop offset="0%" style="stop-color:#22d3ee;stop-opacity:1"/>
<stop offset="50%" style="stop-color:#a855f7;stop-opacity:1"/>
<stop offset="100%" style="stop-color:#ec4899;stop-opacity:1"/>
</linearGradient>
</defs>
<!-- Fondo -->
<rect width="64" height="64" fill="#000000"/>
<!-- Símbolo simplificado -->
<g transform="translate(14, 32)">
<!-- Línea central -->
<path d="M 0 0 L 0 -12 L 2 -14 L -2 -18 L 2 -22 L 0 -24" stroke="url(#faviconGrad)" stroke-width="1.2" fill="none" stroke-linecap="round" stroke-linejoin="round"/>
<!-- Línea izquierda -->
<path d="M -6 0 L -6 -10 L -5 -14 L -7 -18 L -6 -22" stroke="#22d3ee" stroke-width="0.8" fill="none" stroke-linecap="round" opacity="0.7"/>
<!-- Línea derecha -->
<path d="M 6 0 L 6 -10 L 5 -14 L 7 -18 L 6 -22" stroke="#ec4899" stroke-width="0.8" fill="none" stroke-linecap="round" opacity="0.7"/>
<!-- Círculos decorativos -->
<circle cx="0" cy="-8" r="1" fill="#22d3ee"/>
<circle cx="0" cy="-14" r="0.8" fill="#a855f7"/>
</g>
<!-- Texto VAPORA simplificado -->
<text x="36" y="40" font-family="'JetBrains Mono', monospace" font-size="18" font-weight="800" fill="url(#faviconGrad)" letter-spacing="1" text-anchor="middle">
V
</text>
</svg>

After

Width:  |  Height:  |  Size: 1.6 KiB

541
assets/vapora_logo.html Normal file
View File

@ -0,0 +1,541 @@
<!DOCTYPE html>
<html lang="es">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Vapora Logo - Sistema de Branding</title>
<link href="https://fonts.googleapis.com/css2?family=JetBrains+Mono:wght@800&display=swap" rel="stylesheet">
<style>
* {
margin: 0;
padding: 0;
box-sizing: border-box;
}
body {
background: #0a0a0a;
font-family: 'JetBrains Mono', monospace;
color: #fff;
padding: 40px;
}
h1 {
color: #22d3ee;
margin-bottom: 10px;
font-size: 28px;
}
.subtitle {
color: #a855f7;
margin-bottom: 40px;
font-size: 14px;
opacity: 0.8;
}
h2 {
color: #ec4899;
margin-top: 50px;
margin-bottom: 30px;
font-size: 18px;
padding-bottom: 10px;
border-bottom: 1px solid #333;
}
h2:first-of-type {
margin-top: 0;
}
.container {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(320px, 1fr));
gap: 30px;
margin-bottom: 40px;
}
.preview {
border: 1px solid #333;
padding: 20px;
border-radius: 8px;
background: #111;
transition: border-color 0.3s;
}
.preview:hover {
border-color: #22d3ee;
}
.preview h3 {
color: #a855f7;
margin-bottom: 15px;
font-size: 12px;
text-transform: uppercase;
letter-spacing: 1px;
}
.preview-wrapper {
display: flex;
align-items: center;
justify-content: center;
background: #000;
border-radius: 4px;
padding: 20px;
min-height: 180px;
}
.vertical .preview-wrapper {
min-height: 320px;
}
.preview-wrapper a {
display: flex;
align-items: center;
justify-content: center;
width: 100%;
height: 100%;
text-decoration: none;
cursor: pointer;
position: relative;
}
.preview-wrapper a::after {
content: '🔗 Click to open';
position: absolute;
background: rgba(0, 0, 0, 0.8);
color: #22d3ee;
padding: 8px 12px;
border-radius: 4px;
font-size: 10px;
opacity: 0;
transition: opacity 0.3s;
white-space: nowrap;
pointer-events: none;
}
.preview-wrapper a:hover::after {
opacity: 1;
}
img {
max-width: 100%;
height: auto;
display: block;
}
.info {
margin-top: 12px;
padding: 10px;
background: #1a1a1a;
border-left: 2px solid #22d3ee;
font-size: 11px;
color: #666;
}
.variant-type {
display: inline-block;
font-size: 10px;
padding: 2px 6px;
background: #1a1a1a;
color: #888;
margin-top: 8px;
border-radius: 3px;
}
.specs-container {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(300px, 1fr));
gap: 30px;
margin-bottom: 40px;
}
.spec-card {
border: 1px solid #333;
padding: 25px;
border-radius: 8px;
background: #111;
transition: border-color 0.3s;
}
.spec-card:hover {
border-color: #ec4899;
}
.spec-card h3 {
color: #ec4899;
margin-bottom: 20px;
font-size: 14px;
text-transform: uppercase;
letter-spacing: 1px;
}
.spec-content ul {
list-style: none;
margin-left: 0;
}
.spec-content li {
padding: 6px 0;
font-size: 12px;
color: #bbb;
border-bottom: 1px solid #1a1a1a;
}
.spec-content li:last-child {
border-bottom: none;
}
.spec-content p {
font-size: 12px;
color: #bbb;
margin: 10px 0;
}
.color-item {
display: flex;
align-items: center;
gap: 15px;
padding: 12px 0;
border-bottom: 1px solid #1a1a1a;
}
.color-item:last-of-type {
border-bottom: none;
}
.color-box {
width: 50px;
height: 50px;
border-radius: 4px;
flex-shrink: 0;
}
.color-info {
flex: 1;
}
.color-info strong {
display: block;
color: #fff;
font-size: 12px;
margin-bottom: 4px;
}
.color-values {
margin-top: 8px;
}
.color-format {
font-size: 14px;
color: #aaa;
margin: 8px 0;
padding: 8px 0;
border-bottom: 1px solid #222;
}
.color-format:last-child {
border-bottom: none;
}
.color-format span {
color: #666;
font-weight: 600;
display: inline-block;
width: 45px;
}
.color-format {
display: flex;
align-items: center;
justify-content: space-between;
}
.color-code {
flex: 1;
color: #fff !important;
}
.copy-btn {
background: none;
border: none;
color: #555;
cursor: pointer;
font-size: 12px;
padding: 2px 6px;
border-radius: 3px;
margin-left: 8px;
transition: all 0.2s;
display: flex;
align-items: center;
gap: 4px;
}
.copy-btn:hover {
background: #1a1a1a;
color: #22d3ee;
}
.copy-btn.copied {
color: #4ade80;
}
</style>
</head>
<body>
<h1>🎨 Vapora Logo System</h1>
<p class="subtitle">Complete branding system with horizontal and vertical variations</p>
<div style="background: #1a1a1a; border-left: 3px solid #22d3ee; padding: 15px; margin-bottom: 30px; border-radius: 4px; font-size: 12px; color: #aaa;">
💡 <strong>Tip:</strong> Click on any logo preview to open it in a full-screen view in a new tab
</div>
<!-- HORIZONTAL LOGOS -->
<h2>📐 Horizontal Logos</h2>
<div class="container horizontal">
<!-- Animated -->
<div class="preview">
<h3>Animated (Color)</h3>
<div class="preview-wrapper">
<a href="./vapora.svg" target="_blank">
<img src="./vapora.svg" alt="Vapora Logo - Animated">
</a>
</div>
<div class="info">Main version with animations - Black background</div>
<span class="variant-type">animated</span>
</div>
<!-- White Background -->
<div class="preview">
<h3>White Background</h3>
<div class="preview-wrapper" style="background: #fff;">
<a href="./vapora_white.svg" target="_blank">
<img src="./vapora_white.svg" alt="Vapora Logo - White">
</a>
</div>
<div class="info">For light backgrounds - Static version</div>
<span class="variant-type">static</span>
</div>
<!-- Monochromatic -->
<div class="preview">
<h3>Monochromatic</h3>
<div class="preview-wrapper">
<a href="./vapora_mono.svg" target="_blank">
<img src="./vapora_mono.svg" alt="Vapora Logo - Mono">
</a>
</div>
<div class="info">Grayscale - For printing and B/W applications</div>
<span class="variant-type">static</span>
</div>
<!-- Favicon -->
<div class="preview">
<h3>Favicon</h3>
<div class="preview-wrapper">
<a href="./vapora_favicon.svg" target="_blank">
<img src="./vapora_favicon.svg" alt="Vapora Logo - Favicon">
</a>
</div>
<div class="info">64x64px - For browser tabs and favicons</div>
<span class="variant-type">animated</span>
</div>
<!-- Static -->
<div class="preview">
<h3>Static Version</h3>
<div class="preview-wrapper">
<a href="./vapora_static.svg" target="_blank">
<img src="./vapora_static.svg" alt="Vapora Logo - Static">
</a>
</div>
<div class="info">No animations - For documents and PDFs</div>
<span class="variant-type">static</span>
</div>
</div>
<!-- VERTICAL LOGOS -->
<h2>⬇️ Vertical Logos</h2>
<div class="container vertical">
<!-- Animated -->
<div class="preview">
<h3>Animated (Color)</h3>
<div class="preview-wrapper">
<a href="./vapora_v.svg" target="_blank">
<img src="./vapora_v.svg" alt="Vapora Logo Vertical - Animated">
</a>
</div>
<div class="info">Vertical version with animations - Image on top, text below</div>
<span class="variant-type">animated</span>
</div>
<!-- White Background -->
<div class="preview">
<h3>White Background</h3>
<div class="preview-wrapper" style="background: #fff;">
<a href="./vapora_v_white.svg" target="_blank">
<img src="./vapora_v_white.svg" alt="Vapora Logo Vertical - White">
</a>
</div>
<div class="info">For light backgrounds - Static version</div>
<span class="variant-type">static</span>
</div>
<!-- Monochromatic -->
<div class="preview">
<h3>Monochromatic</h3>
<div class="preview-wrapper">
<a href="./vapora_v_mono.svg" target="_blank">
<img src="./vapora_v_mono.svg" alt="Vapora Logo Vertical - Mono">
</a>
</div>
<div class="info">Grayscale - For printing</div>
<span class="variant-type">static</span>
</div>
<!-- Favicon Vertical -->
<div class="preview">
<h3>Favicon Vertical</h3>
<div class="preview-wrapper">
<a href="./vapora_v_favicon.svg" target="_blank">
<img src="./vapora_v_favicon.svg" alt="Vapora Logo Vertical - Favicon">
</a>
</div>
<div class="info">64x64px - Compact vertical version</div>
<span class="variant-type">animated</span>
</div>
<!-- Static Vertical -->
<div class="preview">
<h3>Static Version</h3>
<div class="preview-wrapper">
<a href="./vapora_v_static.svg" target="_blank">
<img src="./vapora_v_static.svg" alt="Vapora Logo Vertical - Static">
</a>
</div>
<div class="info">No animations - Vertical version for documents</div>
<span class="variant-type">static</span>
</div>
</div>
<!-- SPECIFICATIONS -->
<h2>🎯 Brand Specifications</h2>
<div class="specs-container">
<div class="spec-card">
<h3>Typography</h3>
<div class="spec-content">
<p><strong>Main Logo:</strong></p>
<ul>
<li>Font: JetBrains Mono / Fira Code</li>
<li>Style: Monospace</li>
<li>Weight: 800 (Bold)</li>
<li>Letter Spacing: 2-3px</li>
</ul>
<p style="margin-top: 15px;"><strong>Tagline:</strong></p>
<ul>
<li>Font: Inter</li>
<li>Style: Sans-serif</li>
<li>Size: 8px</li>
<li>Letter Spacing: 0.1em</li>
<li>Text: "Evaporate complexity"</li>
</ul>
</div>
</div>
<div class="spec-card">
<h3>Color Palette</h3>
<div class="spec-content">
<div class="color-item">
<div class="color-box" style="background: #22d3ee;"></div>
<div class="color-info">
<strong>Cyan</strong>
<div class="color-values">
<div class="color-format"><span>HTML:</span> <span class="color-code">#22d3ee</span> <button class="copy-btn" onclick="copyToClipboard('#22d3ee', this)">📋</button></div>
<div class="color-format"><span>RGB:</span> <span class="color-code">rgb(34, 211, 238)</span> <button class="copy-btn" onclick="copyToClipboard('rgb(34, 211, 238)', this)">📋</button></div>
<div class="color-format"><span>CMYK:</span> <span class="color-code">86%, 11%, 0%, 7%</span> <button class="copy-btn" onclick="copyToClipboard('86%, 11%, 0%, 7%', this)">📋</button></div>
</div>
</div>
</div>
<div class="color-item">
<div class="color-box" style="background: #a855f7;"></div>
<div class="color-info">
<strong>Purple</strong>
<div class="color-values">
<div class="color-format"><span>HTML:</span> <span class="color-code">#a855f7</span> <button class="copy-btn" onclick="copyToClipboard('#a855f7', this)">📋</button></div>
<div class="color-format"><span>RGB:</span> <span class="color-code">rgb(168, 85, 247)</span> <button class="copy-btn" onclick="copyToClipboard('rgb(168, 85, 247)', this)">📋</button></div>
<div class="color-format"><span>CMYK:</span> <span class="color-code">32%, 66%, 0%, 3%</span> <button class="copy-btn" onclick="copyToClipboard('32%, 66%, 0%, 3%', this)">📋</button></div>
</div>
</div>
</div>
<div class="color-item">
<div class="color-box" style="background: #ec4899;"></div>
<div class="color-info">
<strong>Pink</strong>
<div class="color-values">
<div class="color-format"><span>HTML:</span> <span class="color-code">#ec4899</span> <button class="copy-btn" onclick="copyToClipboard('#ec4899', this)">📋</button></div>
<div class="color-format"><span>RGB:</span> <span class="color-code">rgb(236, 72, 153)</span> <button class="copy-btn" onclick="copyToClipboard('rgb(236, 72, 153)', this)">📋</button></div>
<div class="color-format"><span>CMYK:</span> <span class="color-code">0%, 70%, 35%, 8%</span> <button class="copy-btn" onclick="copyToClipboard('0%, 70%, 35%, 8%', this)">📋</button></div>
</div>
</div>
</div>
<div class="color-item">
<div class="color-box" style="background: #000000; border: 1px solid #333;"></div>
<div class="color-info">
<strong>Black Background</strong>
<div class="color-values">
<div class="color-format"><span>HTML:</span> <span class="color-code">#000000</span> <button class="copy-btn" onclick="copyToClipboard('#000000', this)">📋</button></div>
<div class="color-format"><span>RGB:</span> <span class="color-code">rgb(0, 0, 0)</span> <button class="copy-btn" onclick="copyToClipboard('rgb(0, 0, 0)', this)">📋</button></div>
<div class="color-format"><span>CMYK:</span> <span class="color-code">0%, 0%, 0%, 100%</span> <button class="copy-btn" onclick="copyToClipboard('0%, 0%, 0%, 100%', this)">📋</button></div>
</div>
</div>
</div>
<div class="color-item">
<div class="color-box" style="background: #ffffff; border: 1px solid #ddd;"></div>
<div class="color-info">
<strong>White Background</strong>
<div class="color-values">
<div class="color-format"><span>HTML:</span> <span class="color-code">#ffffff</span> <button class="copy-btn" onclick="copyToClipboard('#ffffff', this)">📋</button></div>
<div class="color-format"><span>RGB:</span> <span class="color-code">rgb(255, 255, 255)</span> <button class="copy-btn" onclick="copyToClipboard('rgb(255, 255, 255)', this)">📋</button></div>
<div class="color-format"><span>CMYK:</span> <span class="color-code">0%, 0%, 0%, 0%</span> <button class="copy-btn" onclick="copyToClipboard('0%, 0%, 0%, 0%', this)">📋</button></div>
</div>
</div>
</div>
<p style="margin-top: 15px; font-size: 11px; color: #888;"><strong>Gradient:</strong> Cyan → Purple → Pink (horizontal transition)</p>
</div>
</div>
</div>
<script>
function copyToClipboard(text, button) {
// Copy to clipboard
navigator.clipboard.writeText(text).then(() => {
// Add visual feedback
const originalText = button.textContent;
button.classList.add('copied');
button.textContent = '✓';
// Reset after 2 seconds
setTimeout(() => {
button.classList.remove('copied');
button.textContent = originalText;
}, 2000);
}).catch(() => {
// Fallback for older browsers
const textarea = document.createElement('textarea');
textarea.value = text;
document.body.appendChild(textarea);
textarea.select();
document.execCommand('copy');
document.body.removeChild(textarea);
// Add visual feedback
const originalText = button.textContent;
button.classList.add('copied');
button.textContent = '✓';
setTimeout(() => {
button.classList.remove('copied');
button.textContent = originalText;
}, 2000);
});
}
</script>
</body>
</html>

99
assets/vapora_mono.svg Normal file
View File

@ -0,0 +1,99 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="170 40 590 300" width="100%" height="100%" preserveAspectRatio="xMidYMid meet">
<defs>
<!-- Google Fonts import -->
<style>
@import url('https://fonts.googleapis.com/css2?family=JetBrains+Mono:wght@800&amp;display=swap');
</style>
<linearGradient id="monoGrad" x1="0%" y1="0%" x2="100%" y2="0%">
<stop offset="0%" style="stop-color:#ffffff;stop-opacity:1"/>
<stop offset="50%" style="stop-color:#e0e0e0;stop-opacity:1"/>
<stop offset="100%" style="stop-color:#b0b0b0;stop-opacity:1"/>
</linearGradient>
<linearGradient id="monoVertGrad" x1="0%" y1="100%" x2="0%" y2="0%">
<stop offset="0%" style="stop-color:#ffffff;stop-opacity:1"/>
<stop offset="50%" style="stop-color:#d0d0d0;stop-opacity:0.8"/>
<stop offset="100%" style="stop-color:#a0a0a0;stop-opacity:0.4"/>
</linearGradient>
<filter id="monoGlow">
<feGaussianBlur stdDeviation="2" result="coloredBlur"/>
<feMerge>
<feMergeNode in="coloredBlur"/>
<feMergeNode in="SourceGraphic"/>
</feMerge>
</filter>
</defs>
<!-- Fondo negro -->
<rect width="800" height="400" fill="#000000"/>
<!-- Grid de fondo técnico sutil -->
<g opacity="0.08" stroke="#ffffff" stroke-width="1">
<line x1="0" y1="133" x2="800" y2="133"/>
<line x1="0" y1="200" x2="800" y2="200"/>
<line x1="0" y1="267" x2="800" y2="267"/>
<line x1="133" y1="0" x2="133" y2="400"/>
<line x1="267" y1="0" x2="267" y2="400"/>
<line x1="400" y1="0" x2="400" y2="400"/>
<line x1="533" y1="0" x2="533" y2="400"/>
<line x1="667" y1="0" x2="667" y2="400"/>
</g>
<!-- Símbolo técnico: flujo de datos ascendente -->
<g transform="translate(267, 280)">
<rect x="-25" y="0" width="50" height="6.67" fill="url(#monoGrad)" opacity="0.8" rx="3.33"/>
<path d="M 0 0 L 0 -50 L 8.33 -58 L -8.33 -75 L 8.33 -92 L -8.33 -108 L 8.33 -125 L 0 -133 L 0 -200" stroke="url(#monoVertGrad)" stroke-width="5" fill="none" stroke-linecap="round" stroke-linejoin="round" filter="url(#monoGlow)"/>
<path d="M -33 0 L -33 -42 L -30 -58 L -37 -75 L -30 -92 L -37 -108 L -33 -125 L -33 -167" stroke="#ffffff" stroke-width="3.33" fill="none" stroke-linecap="round" opacity="0.6" filter="url(#monoGlow)"/>
<path d="M -58 0 L -58 -33 L -53 -50 L -63 -67 L -53 -83 L -63 -100 L -58 -117 L -58 -142" stroke="#d0d0d0" stroke-width="2.5" fill="none" stroke-linecap="round" opacity="0.5"/>
<path d="M 33 0 L 33 -42 L 30 -58 L 37 -75 L 30 -92 L 37 -108 L 33 -125 L 33 -167" stroke="#b0b0b0" stroke-width="3.33" fill="none" stroke-linecap="round" opacity="0.6" filter="url(#monoGlow)"/>
<path d="M 58 0 L 58 -33 L 53 -50 L 63 -67 L 53 -83 L 63 -100 L 58 -117 L 58 -142" stroke="#ffffff" stroke-width="2.5" fill="none" stroke-linecap="round" opacity="0.5"/>
<circle cx="0" cy="-67" r="5" fill="#ffffff"/>
<circle cx="0" cy="-100" r="4.17" fill="#d0d0d0"/>
<circle cx="0" cy="-133" r="3.33" fill="#b0b0b0"/>
<circle cx="-33" cy="-83" r="3.33" fill="#ffffff" opacity="0.7"/>
<circle cx="33" cy="-92" r="3.33" fill="#b0b0b0" opacity="0.7"/>
<circle cx="-58" cy="-58" r="2.5" fill="#d0d0d0" opacity="0.5"/>
<circle cx="58" cy="-67" r="2.5" fill="#ffffff" opacity="0.5"/>
<polygon points="0,-158 5,-162 5,-167 0,-170 -5,-167 -5,-162" stroke="#ffffff" fill="none" stroke-width="1.67" opacity="0.6"/>
<polygon points="-42,-117 -37,-120 -37,-125 -42,-128 -47,-125 -47,-120" stroke="#d0d0d0" fill="none" stroke-width="1.67" opacity="0.5"/>
<polygon points="42,-125 47,-128 47,-133 42,-137 37,-133 37,-128" stroke="#b0b0b0" fill="none" stroke-width="1.67" opacity="0.5"/>
<line x1="-33" y1="-100" x2="-8" y2="-100" stroke="#ffffff" stroke-width="0.83" opacity="0.4"/>
<line x1="8" y1="-117" x2="33" y2="-117" stroke="#b0b0b0" stroke-width="0.83" opacity="0.4"/>
<line x1="-58" y1="-83" x2="-37" y2="-83" stroke="#d0d0d0" stroke-width="0.83" opacity="0.3"/>
<line x1="37" y1="-92" x2="58" y2="-92" stroke="#ffffff" stroke-width="0.83" opacity="0.3"/>
</g>
<!-- Texto VAPORA -->
<g>
<text x="550" y="207" font-family="'JetBrains Mono', 'Fira Code', monospace" font-size="90" font-weight="800" fill="url(#monoGrad)" letter-spacing="5" text-anchor="middle">
VAPORA
</text>
<text x="550" y="207" font-family="'JetBrains Mono', 'Fira Code', monospace" font-size="90" font-weight="800" fill="none" stroke="rgba(255,255,255,0.2)" stroke-width="0.83" letter-spacing="5" text-anchor="middle">
VAPORA
</text>
</g>
<!-- Tagline -->
<text x="550" y="240" font-family="'Inter', sans-serif" font-size="20" fill="#a0a0a0" opacity="0.8" letter-spacing="0.25em" text-anchor="middle">
Evaporate complexity
</text>
<!-- Indicador técnico decorativo -->
<g transform="translate(550, 280)">
<rect x="0" y="0" width="2" height="13.33" fill="#ffffff" opacity="0.6"/>
<rect x="6.67" y="0" width="2" height="16.67" fill="#d0d0d0" opacity="0.6"/>
<rect x="13.33" y="0" width="2" height="10" fill="#b0b0b0" opacity="0.6"/>
</g>
</svg>

After

Width:  |  Height:  |  Size: 5.0 KiB

134
assets/vapora_static.svg Normal file
View File

@ -0,0 +1,134 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="170 40 590 300" width="100%" height="100%" preserveAspectRatio="xMidYMid meet">
<defs>
<!-- Google Fonts import -->
<style>
@import url('https://fonts.googleapis.com/css2?family=JetBrains+Mono:wght@800&amp;display=swap');
</style>
<!-- Gradiente principal -->
<linearGradient id="techGradStatic" x1="0%" y1="0%" x2="100%" y2="0%">
<stop offset="0%" style="stop-color:#22d3ee;stop-opacity:1"/>
<stop offset="50%" style="stop-color:#a855f7;stop-opacity:1"/>
<stop offset="100%" style="stop-color:#ec4899;stop-opacity:1"/>
</linearGradient>
<!-- Gradiente vertical -->
<linearGradient id="vertGradStatic" x1="0%" y1="100%" x2="0%" y2="0%">
<stop offset="0%" style="stop-color:#22d3ee;stop-opacity:1"/>
<stop offset="50%" style="stop-color:#a855f7;stop-opacity:0.8"/>
<stop offset="100%" style="stop-color:#ec4899;stop-opacity:0.4"/>
</linearGradient>
<!-- Filtro glow tech -->
<filter id="techGlowStatic">
<feGaussianBlur stdDeviation="2" result="coloredBlur"/>
<feMerge>
<feMergeNode in="coloredBlur"/>
<feMergeNode in="SourceGraphic"/>
</feMerge>
</filter>
<!-- Filtro glow fuerte -->
<filter id="strongGlowStatic">
<feGaussianBlur stdDeviation="4" result="coloredBlur"/>
<feMerge>
<feMergeNode in="coloredBlur"/>
<feMergeNode in="coloredBlur"/>
<feMergeNode in="SourceGraphic"/>
</feMerge>
</filter>
<!-- Filtro glass -->
<filter id="glassStatic">
<feGaussianBlur in="SourceGraphic" stdDeviation="0.5" result="blur"/>
<feColorMatrix in="blur" type="matrix" values="1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 18 -7" result="goo"/>
<feBlend in="SourceGraphic" in2="goo"/>
</filter>
</defs>
<!-- Fondo -->
<rect width="800" height="400" fill="#000000"/>
<!-- Grid de fondo técnico sutil -->
<g opacity="0.08" stroke="#22d3ee" stroke-width="1">
<line x1="0" y1="133" x2="800" y2="133"/>
<line x1="0" y1="200" x2="800" y2="200"/>
<line x1="0" y1="267" x2="800" y2="267"/>
<line x1="133" y1="0" x2="133" y2="400"/>
<line x1="267" y1="0" x2="267" y2="400"/>
<line x1="400" y1="0" x2="400" y2="400"/>
<line x1="533" y1="0" x2="533" y2="400"/>
<line x1="667" y1="0" x2="667" y2="400"/>
</g>
<!-- Símbolo técnico: flujo de datos ascendente -->
<g transform="translate(267, 280)">
<!-- Base: plataforma -->
<rect x="-25" y="0" width="50" height="6.67" fill="url(#techGradStatic)" opacity="0.8" rx="3.33"/>
<!-- Stream principal - línea central tipo señal -->
<path d="M 0 0 L 0 -50 L 8.33 -58 L -8.33 -75 L 8.33 -92 L -8.33 -108 L 8.33 -125 L 0 -133 L 0 -200" stroke="url(#vertGradStatic)" stroke-width="5" fill="none" stroke-linecap="round" stroke-linejoin="round" filter="url(#techGlowStatic)"/>
<!-- Stream izquierdo 1 -->
<path d="M -33 0 L -33 -42 L -30 -58 L -37 -75 L -30 -92 L -37 -108 L -33 -125 L -33 -167" stroke="#22d3ee" stroke-width="3.33" fill="none" stroke-linecap="round" opacity="0.6" filter="url(#techGlowStatic)"/>
<!-- Stream izquierdo 2 -->
<path d="M -58 0 L -58 -33 L -53 -50 L -63 -67 L -53 -83 L -63 -100 L -58 -117 L -58 -142" stroke="#a855f7" stroke-width="2.5" fill="none" stroke-linecap="round" opacity="0.5"/>
<!-- Stream derecho 1 -->
<path d="M 33 0 L 33 -42 L 30 -58 L 37 -75 L 30 -92 L 37 -108 L 33 -125 L 33 -167" stroke="#ec4899" stroke-width="3.33" fill="none" stroke-linecap="round" opacity="0.6" filter="url(#techGlowStatic)"/>
<!-- Stream derecho 2 -->
<path d="M 58 0 L 58 -33 L 53 -50 L 63 -67 L 53 -83 L 63 -100 L 58 -117 L 58 -142" stroke="#22d3ee" stroke-width="2.5" fill="none" stroke-linecap="round" opacity="0.5"/>
<!-- Nodos de datos en el flujo principal -->
<circle cx="0" cy="-67" r="5" fill="#22d3ee" filter="url(#strongGlowStatic)"/>
<circle cx="0" cy="-100" r="4.17" fill="#a855f7" filter="url(#strongGlowStatic)"/>
<circle cx="0" cy="-133" r="3.33" fill="#ec4899" filter="url(#strongGlowStatic)"/>
<!-- Partículas laterales -->
<circle cx="-33" cy="-83" r="3.33" fill="#22d3ee" opacity="0.7"/>
<circle cx="33" cy="-92" r="3.33" fill="#ec4899" opacity="0.7"/>
<circle cx="-58" cy="-58" r="2.5" fill="#a855f7" opacity="0.5"/>
<circle cx="58" cy="-67" r="2.5" fill="#22d3ee" opacity="0.5"/>
<!-- Hexágonos técnicos -->
<polygon points="0,-158 5,-162 5,-167 0,-170 -5,-167 -5,-162" stroke="#22d3ee" fill="none" stroke-width="1.67" opacity="0.6"/>
<polygon points="-42,-117 -37,-120 -37,-125 -42,-128 -47,-125 -47,-120" stroke="#a855f7" fill="none" stroke-width="1.67" opacity="0.5"/>
<polygon points="42,-125 47,-128 47,-133 42,-137 37,-133 37,-128" stroke="#ec4899" fill="none" stroke-width="1.67" opacity="0.5"/>
<!-- Líneas de conexión -->
<line x1="-33" y1="-100" x2="-8" y2="-100" stroke="#22d3ee" stroke-width="0.83" opacity="0.4"/>
<line x1="8" y1="-117" x2="33" y2="-117" stroke="#ec4899" stroke-width="0.83" opacity="0.4"/>
<line x1="-58" y1="-83" x2="-37" y2="-83" stroke="#a855f7" stroke-width="0.83" opacity="0.3"/>
<line x1="37" y1="-92" x2="58" y2="-92" stroke="#22d3ee" stroke-width="0.83" opacity="0.3"/>
</g>
<!-- Texto VAPORA -->
<g filter="url(#glassStatic)">
<text x="550" y="207" font-family="'JetBrains Mono', 'Fira Code', monospace" font-size="90" font-weight="800" fill="url(#techGradStatic)" letter-spacing="5" text-anchor="middle">
VAPORA
</text>
<text x="550" y="207" font-family="'JetBrains Mono', 'Fira Code', monospace" font-size="90" font-weight="800" fill="none" stroke="rgba(255,255,255,0.2)" stroke-width="0.83" letter-spacing="5" text-anchor="middle">
VAPORA
</text>
</g>
<!-- Glow en texto -->
<text x="550" y="207" font-family="'JetBrains Mono', 'Fira Code', monospace" font-size="90" font-weight="800" fill="url(#techGradStatic)" letter-spacing="5" filter="url(#techGlowStatic)" opacity="0.3" text-anchor="middle">
VAPORA
</text>
<!-- Tagline -->
<text x="550" y="240" font-family="'Inter', sans-serif" font-size="20" fill="#a855f7" opacity="0.8" letter-spacing="0.25em" text-anchor="middle">
Evaporate complexity
</text>
<!-- Indicador técnico decorativo -->
<g transform="translate(550, 280)">
<rect x="0" y="0" width="2" height="13.33" fill="#22d3ee" opacity="0.6"/>
<rect x="6.67" y="0" width="2" height="16.67" fill="#a855f7" opacity="0.6"/>
<rect x="13.33" y="0" width="2" height="10" fill="#ec4899" opacity="0.6"/>
</g>
</svg>

After

Width:  |  Height:  |  Size: 6.5 KiB

View File

Before

Width:  |  Height:  |  Size: 9.8 KiB

After

Width:  |  Height:  |  Size: 9.8 KiB

View File

@ -0,0 +1,38 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 64 64" width="64" height="64" preserveAspectRatio="xMidYMid meet">
<defs>
<!-- Google Fonts import -->
<style>
@import url('https://fonts.googleapis.com/css2?family=JetBrains+Mono:wght@800&amp;display=swap');
</style>
<linearGradient id="faviconGradV" x1="0%" y1="0%" x2="100%" y2="0%">
<stop offset="0%" style="stop-color:#22d3ee;stop-opacity:1"/>
<stop offset="50%" style="stop-color:#a855f7;stop-opacity:1"/>
<stop offset="100%" style="stop-color:#ec4899;stop-opacity:1"/>
</linearGradient>
</defs>
<!-- Fondo -->
<rect width="64" height="64" fill="#000000"/>
<!-- Símbolo simplificado vertical -->
<g transform="translate(32, 12)">
<!-- Línea central -->
<path d="M 0 0 L 0 -6 L 1 -7 L -1 -9 L 1 -11 L 0 -12" stroke="url(#faviconGradV)" stroke-width="0.8" fill="none" stroke-linecap="round" stroke-linejoin="round"/>
<!-- Línea izquierda -->
<path d="M -3 0 L -3 -5 L -2.5 -7 L -3.5 -9 L -3 -11" stroke="#22d3ee" stroke-width="0.5" fill="none" stroke-linecap="round" opacity="0.7"/>
<!-- Línea derecha -->
<path d="M 3 0 L 3 -5 L 2.5 -7 L 3.5 -9 L 3 -11" stroke="#ec4899" stroke-width="0.5" fill="none" stroke-linecap="round" opacity="0.7"/>
<!-- Círculos decorativos -->
<circle cx="0" cy="-4" r="0.6" fill="#22d3ee"/>
<circle cx="0" cy="-7" r="0.5" fill="#a855f7"/>
</g>
<!-- Texto V simplificado -->
<text x="32" y="50" font-family="'JetBrains Mono', monospace" font-size="12" font-weight="800" fill="url(#faviconGradV)" letter-spacing="0.5" text-anchor="middle">
V
</text>
</svg>

After

Width:  |  Height:  |  Size: 1.6 KiB

97
assets/vapora_v_mono.svg Normal file
View File

@ -0,0 +1,97 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 150 200" width="100%" height="100%" preserveAspectRatio="xMidYMid meet">
<defs>
<!-- Google Fonts import -->
<style>
@import url('https://fonts.googleapis.com/css2?family=JetBrains+Mono:wght@800&amp;display=swap');
</style>
<linearGradient id="monoGradV" x1="0%" y1="0%" x2="100%" y2="0%">
<stop offset="0%" style="stop-color:#ffffff;stop-opacity:1"/>
<stop offset="50%" style="stop-color:#e0e0e0;stop-opacity:1"/>
<stop offset="100%" style="stop-color:#b0b0b0;stop-opacity:1"/>
</linearGradient>
<linearGradient id="monoVertGradV" x1="0%" y1="100%" x2="0%" y2="0%">
<stop offset="0%" style="stop-color:#ffffff;stop-opacity:1"/>
<stop offset="50%" style="stop-color:#d0d0d0;stop-opacity:0.8"/>
<stop offset="100%" style="stop-color:#a0a0a0;stop-opacity:0.4"/>
</linearGradient>
<filter id="monoGlowV">
<feGaussianBlur stdDeviation="2" result="coloredBlur"/>
<feMerge>
<feMergeNode in="coloredBlur"/>
<feMergeNode in="SourceGraphic"/>
</feMerge>
</filter>
</defs>
<!-- Fondo negro -->
<rect width="150" height="200" fill="#000000"/>
<!-- Grid de fondo técnico sutil -->
<g opacity="0.08" stroke="#ffffff" stroke-width="0.5">
<line x1="0" y1="50" x2="150" y2="50"/>
<line x1="0" y1="100" x2="150" y2="100"/>
<line x1="0" y1="150" x2="150" y2="150"/>
<line x1="37" y1="0" x2="37" y2="200"/>
<line x1="75" y1="0" x2="75" y2="200"/>
<line x1="112" y1="0" x2="112" y2="200"/>
</g>
<!-- Símbolo técnico -->
<g transform="translate(75, 90) scale(0.33)">
<rect x="-25" y="0" width="50" height="6.67" fill="url(#monoGradV)" opacity="0.8" rx="3.33"/>
<path d="M 0 0 L 0 -50 L 8.33 -58 L -8.33 -75 L 8.33 -92 L -8.33 -108 L 8.33 -125 L 0 -133 L 0 -200" stroke="url(#monoVertGradV)" stroke-width="5" fill="none" stroke-linecap="round" stroke-linejoin="round" filter="url(#monoGlowV)"/>
<path d="M -33 0 L -33 -42 L -30 -58 L -37 -75 L -30 -92 L -37 -108 L -33 -125 L -33 -167" stroke="#ffffff" stroke-width="3.33" fill="none" stroke-linecap="round" opacity="0.6" filter="url(#monoGlowV)"/>
<path d="M -58 0 L -58 -33 L -53 -50 L -63 -67 L -53 -83 L -63 -100 L -58 -117 L -58 -142" stroke="#d0d0d0" stroke-width="2.5" fill="none" stroke-linecap="round" opacity="0.5"/>
<path d="M 33 0 L 33 -42 L 30 -58 L 37 -75 L 30 -92 L 37 -108 L 33 -125 L 33 -167" stroke="#b0b0b0" stroke-width="3.33" fill="none" stroke-linecap="round" opacity="0.6" filter="url(#monoGlowV)"/>
<path d="M 58 0 L 58 -33 L 53 -50 L 63 -67 L 53 -83 L 63 -100 L 58 -117 L 58 -142" stroke="#ffffff" stroke-width="2.5" fill="none" stroke-linecap="round" opacity="0.5"/>
<circle cx="0" cy="-67" r="5" fill="#ffffff"/>
<circle cx="0" cy="-100" r="4.17" fill="#d0d0d0"/>
<circle cx="0" cy="-133" r="3.33" fill="#b0b0b0"/>
<circle cx="-33" cy="-83" r="3.33" fill="#ffffff" opacity="0.7"/>
<circle cx="33" cy="-92" r="3.33" fill="#b0b0b0" opacity="0.7"/>
<circle cx="-58" cy="-58" r="2.5" fill="#d0d0d0" opacity="0.5"/>
<circle cx="58" cy="-67" r="2.5" fill="#ffffff" opacity="0.5"/>
<polygon points="0,-158 5,-162 5,-167 0,-170 -5,-167 -5,-162" stroke="#ffffff" fill="none" stroke-width="1.67" opacity="0.6"/>
<polygon points="-42,-117 -37,-120 -37,-125 -42,-128 -47,-125 -47,-120" stroke="#d0d0d0" fill="none" stroke-width="1.67" opacity="0.5"/>
<polygon points="42,-125 47,-128 47,-133 42,-137 37,-133 37,-128" stroke="#b0b0b0" fill="none" stroke-width="1.67" opacity="0.5"/>
<line x1="-33" y1="-100" x2="-8" y2="-100" stroke="#ffffff" stroke-width="0.83" opacity="0.4"/>
<line x1="8" y1="-117" x2="33" y2="-117" stroke="#b0b0b0" stroke-width="0.83" opacity="0.4"/>
<line x1="-58" y1="-83" x2="-37" y2="-83" stroke="#d0d0d0" stroke-width="0.83" opacity="0.3"/>
<line x1="37" y1="-92" x2="58" y2="-92" stroke="#ffffff" stroke-width="0.83" opacity="0.3"/>
</g>
<!-- Texto VAPORA -->
<g>
<text x="75" y="135" font-family="'JetBrains Mono', 'Fira Code', monospace" font-size="32" font-weight="800" fill="url(#monoGradV)" letter-spacing="2" text-anchor="middle">
VAPORA
</text>
<text x="75" y="135" font-family="'JetBrains Mono', 'Fira Code', monospace" font-size="32" font-weight="800" fill="none" stroke="rgba(255,255,255,0.2)" stroke-width="0.3" letter-spacing="2" text-anchor="middle">
VAPORA
</text>
</g>
<!-- Tagline -->
<text x="75" y="155" font-family="'Inter', sans-serif" font-size="8" fill="#a0a0a0" opacity="0.8" letter-spacing="0.1em" text-anchor="middle">
Evaporate complexity
</text>
<!-- Indicador técnico decorativo -->
<g transform="translate(75, 170)">
<rect x="0" y="0" width="1.5" height="10" fill="#ffffff" opacity="0.6"/>
<rect x="4" y="0" width="1.5" height="12" fill="#d0d0d0" opacity="0.6"/>
<rect x="8" y="0" width="1.5" height="8" fill="#b0b0b0" opacity="0.6"/>
</g>
</svg>

After

Width:  |  Height:  |  Size: 4.9 KiB

132
assets/vapora_v_static.svg Normal file
View File

@ -0,0 +1,132 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 150 200" width="100%" height="100%" preserveAspectRatio="xMidYMid meet">
<defs>
<!-- Google Fonts import -->
<style>
@import url('https://fonts.googleapis.com/css2?family=JetBrains+Mono:wght@800&amp;display=swap');
</style>
<!-- Gradiente principal -->
<linearGradient id="techGradVS" x1="0%" y1="0%" x2="100%" y2="0%">
<stop offset="0%" style="stop-color:#22d3ee;stop-opacity:1"/>
<stop offset="50%" style="stop-color:#a855f7;stop-opacity:1"/>
<stop offset="100%" style="stop-color:#ec4899;stop-opacity:1"/>
</linearGradient>
<!-- Gradiente vertical -->
<linearGradient id="vertGradVS" x1="0%" y1="100%" x2="0%" y2="0%">
<stop offset="0%" style="stop-color:#22d3ee;stop-opacity:1"/>
<stop offset="50%" style="stop-color:#a855f7;stop-opacity:0.8"/>
<stop offset="100%" style="stop-color:#ec4899;stop-opacity:0.4"/>
</linearGradient>
<!-- Filtro glow tech -->
<filter id="techGlowVS">
<feGaussianBlur stdDeviation="2" result="coloredBlur"/>
<feMerge>
<feMergeNode in="coloredBlur"/>
<feMergeNode in="SourceGraphic"/>
</feMerge>
</filter>
<!-- Filtro glow fuerte -->
<filter id="strongGlowVS">
<feGaussianBlur stdDeviation="4" result="coloredBlur"/>
<feMerge>
<feMergeNode in="coloredBlur"/>
<feMergeNode in="coloredBlur"/>
<feMergeNode in="SourceGraphic"/>
</feMerge>
</filter>
<!-- Filtro glass -->
<filter id="glassVS">
<feGaussianBlur in="SourceGraphic" stdDeviation="0.5" result="blur"/>
<feColorMatrix in="blur" type="matrix" values="1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 18 -7" result="goo"/>
<feBlend in="SourceGraphic" in2="goo"/>
</filter>
</defs>
<!-- Fondo -->
<rect width="150" height="200" fill="#000000"/>
<!-- Grid de fondo técnico sutil -->
<g opacity="0.08" stroke="#22d3ee" stroke-width="0.5">
<line x1="0" y1="50" x2="150" y2="50"/>
<line x1="0" y1="100" x2="150" y2="100"/>
<line x1="0" y1="150" x2="150" y2="150"/>
<line x1="37" y1="0" x2="37" y2="200"/>
<line x1="75" y1="0" x2="75" y2="200"/>
<line x1="112" y1="0" x2="112" y2="200"/>
</g>
<!-- Símbolo técnico: flujo de datos ascendente -->
<g transform="translate(75, 90) scale(0.33)">
<!-- Base: plataforma -->
<rect x="-25" y="0" width="50" height="6.67" fill="url(#techGradVS)" opacity="0.8" rx="3.33"/>
<!-- Stream principal - línea central tipo señal -->
<path d="M 0 0 L 0 -50 L 8.33 -58 L -8.33 -75 L 8.33 -92 L -8.33 -108 L 8.33 -125 L 0 -133 L 0 -200" stroke="url(#vertGradVS)" stroke-width="5" fill="none" stroke-linecap="round" stroke-linejoin="round" filter="url(#techGlowVS)"/>
<!-- Stream izquierdo 1 -->
<path d="M -33 0 L -33 -42 L -30 -58 L -37 -75 L -30 -92 L -37 -108 L -33 -125 L -33 -167" stroke="#22d3ee" stroke-width="3.33" fill="none" stroke-linecap="round" opacity="0.6" filter="url(#techGlowVS)"/>
<!-- Stream izquierdo 2 -->
<path d="M -58 0 L -58 -33 L -53 -50 L -63 -67 L -53 -83 L -63 -100 L -58 -117 L -58 -142" stroke="#a855f7" stroke-width="2.5" fill="none" stroke-linecap="round" opacity="0.5"/>
<!-- Stream derecho 1 -->
<path d="M 33 0 L 33 -42 L 30 -58 L 37 -75 L 30 -92 L 37 -108 L 33 -125 L 33 -167" stroke="#ec4899" stroke-width="3.33" fill="none" stroke-linecap="round" opacity="0.6" filter="url(#techGlowVS)"/>
<!-- Stream derecho 2 -->
<path d="M 58 0 L 58 -33 L 53 -50 L 63 -67 L 53 -83 L 63 -100 L 58 -117 L 58 -142" stroke="#22d3ee" stroke-width="2.5" fill="none" stroke-linecap="round" opacity="0.5"/>
<!-- Nodos de datos en el flujo principal -->
<circle cx="0" cy="-67" r="5" fill="#22d3ee" filter="url(#strongGlowVS)"/>
<circle cx="0" cy="-100" r="4.17" fill="#a855f7" filter="url(#strongGlowVS)"/>
<circle cx="0" cy="-133" r="3.33" fill="#ec4899" filter="url(#strongGlowVS)"/>
<!-- Partículas laterales -->
<circle cx="-33" cy="-83" r="3.33" fill="#22d3ee" opacity="0.7"/>
<circle cx="33" cy="-92" r="3.33" fill="#ec4899" opacity="0.7"/>
<circle cx="-58" cy="-58" r="2.5" fill="#a855f7" opacity="0.5"/>
<circle cx="58" cy="-67" r="2.5" fill="#22d3ee" opacity="0.5"/>
<!-- Hexágonos técnicos flotantes -->
<polygon points="0,-158 5,-162 5,-167 0,-170 -5,-167 -5,-162" stroke="#22d3ee" fill="none" stroke-width="1.67" opacity="0.6"/>
<polygon points="-42,-117 -37,-120 -37,-125 -42,-128 -47,-125 -47,-120" stroke="#a855f7" fill="none" stroke-width="1.67" opacity="0.5"/>
<polygon points="42,-125 47,-128 47,-133 42,-137 37,-133 37,-128" stroke="#ec4899" fill="none" stroke-width="1.67" opacity="0.5"/>
<!-- Líneas de conexión horizontales -->
<line x1="-33" y1="-100" x2="-8" y2="-100" stroke="#22d3ee" stroke-width="0.83" opacity="0.4"/>
<line x1="8" y1="-117" x2="33" y2="-117" stroke="#ec4899" stroke-width="0.83" opacity="0.4"/>
<line x1="-58" y1="-83" x2="-37" y2="-83" stroke="#a855f7" stroke-width="0.83" opacity="0.3"/>
<line x1="37" y1="-92" x2="58" y2="-92" stroke="#22d3ee" stroke-width="0.83" opacity="0.3"/>
</g>
<!-- Texto VAPORA -->
<g filter="url(#glassVS)">
<text x="75" y="135" font-family="'JetBrains Mono', 'Fira Code', monospace" font-size="32" font-weight="800" fill="url(#techGradVS)" letter-spacing="2" text-anchor="middle">
VAPORA
</text>
<text x="75" y="135" font-family="'JetBrains Mono', 'Fira Code', monospace" font-size="32" font-weight="800" fill="none" stroke="rgba(255,255,255,0.2)" stroke-width="0.3" letter-spacing="2" text-anchor="middle">
VAPORA
</text>
</g>
<!-- Glow en texto -->
<text x="75" y="135" font-family="'JetBrains Mono', 'Fira Code', monospace" font-size="32" font-weight="800" fill="url(#techGradVS)" letter-spacing="2" filter="url(#techGlowVS)" opacity="0.3" text-anchor="middle">
VAPORA
</text>
<!-- Tagline -->
<text x="75" y="155" font-family="'Inter', sans-serif" font-size="8" fill="#a855f7" opacity="0.8" letter-spacing="0.1em" text-anchor="middle">
Evaporate complexity
</text>
<!-- Indicador técnico decorativo -->
<g transform="translate(75, 170)">
<rect x="0" y="0" width="1.5" height="10" fill="#22d3ee" opacity="0.6"/>
<rect x="4" y="0" width="1.5" height="12" fill="#a855f7" opacity="0.6"/>
<rect x="8" y="0" width="1.5" height="8" fill="#ec4899" opacity="0.6"/>
</g>
</svg>

After

Width:  |  Height:  |  Size: 6.3 KiB

117
assets/vapora_v_white.svg Normal file
View File

@ -0,0 +1,117 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 150 200" width="100%" height="100%" preserveAspectRatio="xMidYMid meet">
<defs>
<!-- Google Fonts import -->
<style>
@import url('https://fonts.googleapis.com/css2?family=JetBrains+Mono:wght@800&amp;display=swap');
</style>
<linearGradient id="techGradVW" x1="0%" y1="0%" x2="100%" y2="0%">
<stop offset="0%" style="stop-color:#22d3ee;stop-opacity:1"/>
<stop offset="50%" style="stop-color:#a855f7;stop-opacity:1"/>
<stop offset="100%" style="stop-color:#ec4899;stop-opacity:1"/>
</linearGradient>
<linearGradient id="vertGradVW" x1="0%" y1="100%" x2="0%" y2="0%">
<stop offset="0%" style="stop-color:#22d3ee;stop-opacity:1"/>
<stop offset="50%" style="stop-color:#a855f7;stop-opacity:0.8"/>
<stop offset="100%" style="stop-color:#ec4899;stop-opacity:0.4"/>
</linearGradient>
<filter id="techGlowVW">
<feGaussianBlur stdDeviation="2" result="coloredBlur"/>
<feMerge>
<feMergeNode in="coloredBlur"/>
<feMergeNode in="SourceGraphic"/>
</feMerge>
</filter>
<filter id="strongGlowVW">
<feGaussianBlur stdDeviation="4" result="coloredBlur"/>
<feMerge>
<feMergeNode in="coloredBlur"/>
<feMergeNode in="coloredBlur"/>
<feMergeNode in="SourceGraphic"/>
</feMerge>
</filter>
<filter id="glassVW">
<feGaussianBlur in="SourceGraphic" stdDeviation="0.5" result="blur"/>
<feColorMatrix in="blur" type="matrix" values="1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 18 -7" result="goo"/>
<feBlend in="SourceGraphic" in2="goo"/>
</filter>
</defs>
<!-- Fondo blanco -->
<rect width="150" height="200" fill="#ffffff"/>
<!-- Grid de fondo técnico sutil -->
<g opacity="0.08" stroke="#22d3ee" stroke-width="0.5">
<line x1="0" y1="50" x2="150" y2="50"/>
<line x1="0" y1="100" x2="150" y2="100"/>
<line x1="0" y1="150" x2="150" y2="150"/>
<line x1="37" y1="0" x2="37" y2="200"/>
<line x1="75" y1="0" x2="75" y2="200"/>
<line x1="112" y1="0" x2="112" y2="200"/>
</g>
<!-- Símbolo técnico -->
<g transform="translate(75, 90) scale(0.33)">
<rect x="-25" y="0" width="50" height="6.67" fill="url(#techGradVW)" opacity="0.8" rx="3.33"/>
<path d="M 0 0 L 0 -50 L 8.33 -58 L -8.33 -75 L 8.33 -92 L -8.33 -108 L 8.33 -125 L 0 -133 L 0 -200" stroke="url(#vertGradVW)" stroke-width="5" fill="none" stroke-linecap="round" stroke-linejoin="round" filter="url(#techGlowVW)"/>
<path d="M -33 0 L -33 -42 L -30 -58 L -37 -75 L -30 -92 L -37 -108 L -33 -125 L -33 -167" stroke="#22d3ee" stroke-width="3.33" fill="none" stroke-linecap="round" opacity="0.6" filter="url(#techGlowVW)"/>
<path d="M -58 0 L -58 -33 L -53 -50 L -63 -67 L -53 -83 L -63 -100 L -58 -117 L -58 -142" stroke="#a855f7" stroke-width="2.5" fill="none" stroke-linecap="round" opacity="0.5"/>
<path d="M 33 0 L 33 -42 L 30 -58 L 37 -75 L 30 -92 L 37 -108 L 33 -125 L 33 -167" stroke="#ec4899" stroke-width="3.33" fill="none" stroke-linecap="round" opacity="0.6" filter="url(#techGlowVW)"/>
<path d="M 58 0 L 58 -33 L 53 -50 L 63 -67 L 53 -83 L 63 -100 L 58 -117 L 58 -142" stroke="#22d3ee" stroke-width="2.5" fill="none" stroke-linecap="round" opacity="0.5"/>
<circle cx="0" cy="-67" r="5" fill="#22d3ee" filter="url(#strongGlowVW)"/>
<circle cx="0" cy="-100" r="4.17" fill="#a855f7" filter="url(#strongGlowVW)"/>
<circle cx="0" cy="-133" r="3.33" fill="#ec4899" filter="url(#strongGlowVW)"/>
<circle cx="-33" cy="-83" r="3.33" fill="#22d3ee" opacity="0.7"/>
<circle cx="33" cy="-92" r="3.33" fill="#ec4899" opacity="0.7"/>
<circle cx="-58" cy="-58" r="2.5" fill="#a855f7" opacity="0.5"/>
<circle cx="58" cy="-67" r="2.5" fill="#22d3ee" opacity="0.5"/>
<polygon points="0,-158 5,-162 5,-167 0,-170 -5,-167 -5,-162" stroke="#22d3ee" fill="none" stroke-width="1.67" opacity="0.6"/>
<polygon points="-42,-117 -37,-120 -37,-125 -42,-128 -47,-125 -47,-120" stroke="#a855f7" fill="none" stroke-width="1.67" opacity="0.5"/>
<polygon points="42,-125 47,-128 47,-133 42,-137 37,-133 37,-128" stroke="#ec4899" fill="none" stroke-width="1.67" opacity="0.5"/>
<line x1="-33" y1="-100" x2="-8" y2="-100" stroke="#22d3ee" stroke-width="0.83" opacity="0.4"/>
<line x1="8" y1="-117" x2="33" y2="-117" stroke="#ec4899" stroke-width="0.83" opacity="0.4"/>
<line x1="-58" y1="-83" x2="-37" y2="-83" stroke="#a855f7" stroke-width="0.83" opacity="0.3"/>
<line x1="37" y1="-92" x2="58" y2="-92" stroke="#22d3ee" stroke-width="0.83" opacity="0.3"/>
</g>
<!-- Texto VAPORA -->
<g filter="url(#glassVW)">
<text x="75" y="135" font-family="'JetBrains Mono', 'Fira Code', monospace" font-size="32" font-weight="800" fill="url(#techGradVW)" letter-spacing="2" text-anchor="middle">
VAPORA
</text>
<text x="75" y="135" font-family="'JetBrains Mono', 'Fira Code', monospace" font-size="32" font-weight="800" fill="none" stroke="rgba(0,0,0,0.1)" stroke-width="0.3" letter-spacing="2" text-anchor="middle">
VAPORA
</text>
</g>
<!-- Glow en texto -->
<text x="75" y="135" font-family="'JetBrains Mono', 'Fira Code', monospace" font-size="32" font-weight="800" fill="url(#techGradVW)" letter-spacing="2" filter="url(#techGlowVW)" opacity="0.3" text-anchor="middle">
VAPORA
</text>
<!-- Tagline -->
<text x="75" y="155" font-family="'Inter', sans-serif" font-size="8" fill="#a855f7" opacity="0.8" letter-spacing="0.1em" text-anchor="middle">
Evaporate complexity
</text>
<!-- Indicador técnico decorativo -->
<g transform="translate(75, 170)">
<rect x="0" y="0" width="1.5" height="10" fill="#22d3ee" opacity="0.6"/>
<rect x="4" y="0" width="1.5" height="12" fill="#a855f7" opacity="0.6"/>
<rect x="8" y="0" width="1.5" height="8" fill="#ec4899" opacity="0.6"/>
</g>
</svg>

After

Width:  |  Height:  |  Size: 5.8 KiB

119
assets/vapora_white.svg Normal file
View File

@ -0,0 +1,119 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="170 40 590 300" width="100%" height="100%" preserveAspectRatio="xMidYMid meet">
<defs>
<!-- Google Fonts import -->
<style>
@import url('https://fonts.googleapis.com/css2?family=JetBrains+Mono:wght@800&amp;display=swap');
</style>
<linearGradient id="techGradWhite" x1="0%" y1="0%" x2="100%" y2="0%">
<stop offset="0%" style="stop-color:#22d3ee;stop-opacity:1"/>
<stop offset="50%" style="stop-color:#a855f7;stop-opacity:1"/>
<stop offset="100%" style="stop-color:#ec4899;stop-opacity:1"/>
</linearGradient>
<linearGradient id="vertGradWhite" x1="0%" y1="100%" x2="0%" y2="0%">
<stop offset="0%" style="stop-color:#22d3ee;stop-opacity:1"/>
<stop offset="50%" style="stop-color:#a855f7;stop-opacity:0.8"/>
<stop offset="100%" style="stop-color:#ec4899;stop-opacity:0.4"/>
</linearGradient>
<filter id="techGlowWhite">
<feGaussianBlur stdDeviation="2" result="coloredBlur"/>
<feMerge>
<feMergeNode in="coloredBlur"/>
<feMergeNode in="SourceGraphic"/>
</feMerge>
</filter>
<filter id="strongGlowWhite">
<feGaussianBlur stdDeviation="4" result="coloredBlur"/>
<feMerge>
<feMergeNode in="coloredBlur"/>
<feMergeNode in="coloredBlur"/>
<feMergeNode in="SourceGraphic"/>
</feMerge>
</filter>
<filter id="glassWhite">
<feGaussianBlur in="SourceGraphic" stdDeviation="0.5" result="blur"/>
<feColorMatrix in="blur" type="matrix" values="1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 18 -7" result="goo"/>
<feBlend in="SourceGraphic" in2="goo"/>
</filter>
</defs>
<!-- Fondo blanco -->
<rect width="800" height="400" fill="#ffffff"/>
<!-- Grid de fondo técnico sutil -->
<g opacity="0.08" stroke="#22d3ee" stroke-width="1">
<line x1="0" y1="133" x2="800" y2="133"/>
<line x1="0" y1="200" x2="800" y2="200"/>
<line x1="0" y1="267" x2="800" y2="267"/>
<line x1="133" y1="0" x2="133" y2="400"/>
<line x1="267" y1="0" x2="267" y2="400"/>
<line x1="400" y1="0" x2="400" y2="400"/>
<line x1="533" y1="0" x2="533" y2="400"/>
<line x1="667" y1="0" x2="667" y2="400"/>
</g>
<!-- Símbolo técnico: flujo de datos ascendente -->
<g transform="translate(267, 280)">
<rect x="-25" y="0" width="50" height="6.67" fill="url(#techGradWhite)" opacity="0.8" rx="3.33"/>
<path d="M 0 0 L 0 -50 L 8.33 -58 L -8.33 -75 L 8.33 -92 L -8.33 -108 L 8.33 -125 L 0 -133 L 0 -200" stroke="url(#vertGradWhite)" stroke-width="5" fill="none" stroke-linecap="round" stroke-linejoin="round" filter="url(#techGlowWhite)"/>
<path d="M -33 0 L -33 -42 L -30 -58 L -37 -75 L -30 -92 L -37 -108 L -33 -125 L -33 -167" stroke="#22d3ee" stroke-width="3.33" fill="none" stroke-linecap="round" opacity="0.6" filter="url(#techGlowWhite)"/>
<path d="M -58 0 L -58 -33 L -53 -50 L -63 -67 L -53 -83 L -63 -100 L -58 -117 L -58 -142" stroke="#a855f7" stroke-width="2.5" fill="none" stroke-linecap="round" opacity="0.5"/>
<path d="M 33 0 L 33 -42 L 30 -58 L 37 -75 L 30 -92 L 37 -108 L 33 -125 L 33 -167" stroke="#ec4899" stroke-width="3.33" fill="none" stroke-linecap="round" opacity="0.6" filter="url(#techGlowWhite)"/>
<path d="M 58 0 L 58 -33 L 53 -50 L 63 -67 L 53 -83 L 63 -100 L 58 -117 L 58 -142" stroke="#22d3ee" stroke-width="2.5" fill="none" stroke-linecap="round" opacity="0.5"/>
<circle cx="0" cy="-67" r="5" fill="#22d3ee" filter="url(#strongGlowWhite)"/>
<circle cx="0" cy="-100" r="4.17" fill="#a855f7" filter="url(#strongGlowWhite)"/>
<circle cx="0" cy="-133" r="3.33" fill="#ec4899" filter="url(#strongGlowWhite)"/>
<circle cx="-33" cy="-83" r="3.33" fill="#22d3ee" opacity="0.7"/>
<circle cx="33" cy="-92" r="3.33" fill="#ec4899" opacity="0.7"/>
<circle cx="-58" cy="-58" r="2.5" fill="#a855f7" opacity="0.5"/>
<circle cx="58" cy="-67" r="2.5" fill="#22d3ee" opacity="0.5"/>
<polygon points="0,-158 5,-162 5,-167 0,-170 -5,-167 -5,-162" stroke="#22d3ee" fill="none" stroke-width="1.67" opacity="0.6"/>
<polygon points="-42,-117 -37,-120 -37,-125 -42,-128 -47,-125 -47,-120" stroke="#a855f7" fill="none" stroke-width="1.67" opacity="0.5"/>
<polygon points="42,-125 47,-128 47,-133 42,-137 37,-133 37,-128" stroke="#ec4899" fill="none" stroke-width="1.67" opacity="0.5"/>
<line x1="-33" y1="-100" x2="-8" y2="-100" stroke="#22d3ee" stroke-width="0.83" opacity="0.4"/>
<line x1="8" y1="-117" x2="33" y2="-117" stroke="#ec4899" stroke-width="0.83" opacity="0.4"/>
<line x1="-58" y1="-83" x2="-37" y2="-83" stroke="#a855f7" stroke-width="0.83" opacity="0.3"/>
<line x1="37" y1="-92" x2="58" y2="-92" stroke="#22d3ee" stroke-width="0.83" opacity="0.3"/>
</g>
<!-- Texto VAPORA -->
<g filter="url(#glassWhite)">
<text x="550" y="207" font-family="'JetBrains Mono', 'Fira Code', monospace" font-size="90" font-weight="800" fill="url(#techGradWhite)" letter-spacing="5" text-anchor="middle">
VAPORA
</text>
<text x="550" y="207" font-family="'JetBrains Mono', 'Fira Code', monospace" font-size="90" font-weight="800" fill="none" stroke="rgba(0,0,0,0.1)" stroke-width="0.83" letter-spacing="5" text-anchor="middle">
VAPORA
</text>
</g>
<!-- Glow en texto -->
<text x="550" y="207" font-family="'JetBrains Mono', 'Fira Code', monospace" font-size="90" font-weight="800" fill="url(#techGradWhite)" letter-spacing="5" filter="url(#techGlowWhite)" opacity="0.3" text-anchor="middle">
VAPORA
</text>
<!-- Tagline -->
<text x="550" y="240" font-family="'Inter', sans-serif" font-size="20" fill="#a855f7" opacity="0.8" letter-spacing="0.25em" text-anchor="middle">
Evaporate complexity
</text>
<!-- Indicador técnico decorativo -->
<g transform="translate(550, 280)">
<rect x="0" y="0" width="2" height="13.33" fill="#22d3ee" opacity="0.6"/>
<rect x="6.67" y="0" width="2" height="16.67" fill="#a855f7" opacity="0.6"/>
<rect x="13.33" y="0" width="2" height="10" fill="#ec4899" opacity="0.6"/>
</g>
</svg>

After

Width:  |  Height:  |  Size: 6.0 KiB

View File

@ -0,0 +1,269 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 600 250" width="600" height="250">
<defs>
<!-- Gradiente horizontal -->
<linearGradient id="horizGrad" x1="0%" y1="0%" x2="100%" y2="0%">
<stop offset="0%" style="stop-color:#22d3ee;stop-opacity:1" />
<stop offset="50%" style="stop-color:#a855f7;stop-opacity:1" />
<stop offset="100%" style="stop-color:#ec4899;stop-opacity:1" />
</linearGradient>
<!-- Gradiente para ondas que se desvanecen -->
<linearGradient id="fadeGrad" x1="0%" y1="0%" x2="100%" y2="0%">
<stop offset="0%" style="stop-color:#22d3ee;stop-opacity:0.2" />
<stop offset="30%" style="stop-color:#a855f7;stop-opacity:0.8" />
<stop offset="70%" style="stop-color:#ec4899;stop-opacity:0.8" />
<stop offset="100%" style="stop-color:#ec4899;stop-opacity:0.2" />
</linearGradient>
<!-- Filtro glow -->
<filter id="glow">
<feGaussianBlur stdDeviation="2.5" result="coloredBlur"/>
<feMerge>
<feMergeNode in="coloredBlur"/>
<feMergeNode in="SourceGraphic"/>
</feMerge>
</filter>
<!-- Filtro glow fuerte -->
<filter id="strongGlow">
<feGaussianBlur stdDeviation="4" result="coloredBlur"/>
<feMerge>
<feMergeNode in="coloredBlur"/>
<feMergeNode in="coloredBlur"/>
<feMergeNode in="SourceGraphic"/>
</feMerge>
</filter>
<!-- Filtro glass -->
<filter id="glass">
<feGaussianBlur in="SourceGraphic" stdDeviation="0.5" result="blur" />
<feColorMatrix in="blur" type="matrix" values="1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 18 -7" result="goo" />
<feBlend in="SourceGraphic" in2="goo" />
</filter>
</defs>
<!-- Fondo -->
<rect width="600" height="250" fill="#000000"/>
<!-- Punto de origen del flujo -->
<g transform="translate(50, 125)">
<!-- Núcleo emisor -->
<circle cx="0" cy="0" r="8" fill="url(#horizGrad)" filter="url(#strongGlow)" opacity="0.9">
<animate attributeName="r" values="8;10;8" dur="2s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0.9;1;0.9" dur="2s" repeatCount="indefinite"/>
</circle>
<circle cx="0" cy="0" r="4" fill="#22d3ee" opacity="0.8">
<animate attributeName="r" values="4;6;4" dur="2s" repeatCount="indefinite"/>
</circle>
<!-- Ondas horizontales superiores -->
<!-- Onda 1 superior -->
<path d="M 0 -30 Q 30 -35, 60 -30 T 120 -30 T 180 -30 T 240 -30"
stroke="url(#fadeGrad)"
stroke-width="3"
fill="none"
stroke-linecap="round"
filter="url(#glow)"
opacity="0.8">
<animate attributeName="d"
values="M 0 -30 Q 30 -35, 60 -30 T 120 -30 T 180 -30 T 240 -30;
M 0 -30 Q 30 -25, 60 -30 T 120 -30 T 180 -30 T 240 -30;
M 0 -30 Q 30 -35, 60 -30 T 120 -30 T 180 -30 T 240 -30"
dur="3s"
repeatCount="indefinite"/>
</path>
<!-- Onda 2 superior -->
<path d="M 0 -20 Q 25 -23, 50 -20 T 100 -20 T 150 -20 T 200 -20"
stroke="url(#fadeGrad)"
stroke-width="2.5"
fill="none"
stroke-linecap="round"
opacity="0.7">
<animate attributeName="d"
values="M 0 -20 Q 25 -23, 50 -20 T 100 -20 T 150 -20 T 200 -20;
M 0 -20 Q 25 -17, 50 -20 T 100 -20 T 150 -20 T 200 -20;
M 0 -20 Q 25 -23, 50 -20 T 100 -20 T 150 -20 T 200 -20"
dur="2.5s"
repeatCount="indefinite"/>
</path>
<!-- Onda 3 superior cerca -->
<path d="M 0 -12 Q 20 -14, 40 -12 T 80 -12 T 120 -12 T 160 -12"
stroke="url(#fadeGrad)"
stroke-width="2"
fill="none"
stroke-linecap="round"
opacity="0.6">
<animate attributeName="d"
values="M 0 -12 Q 20 -14, 40 -12 T 80 -12 T 120 -12 T 160 -12;
M 0 -12 Q 20 -10, 40 -12 T 80 -12 T 120 -12 T 160 -12;
M 0 -12 Q 20 -14, 40 -12 T 80 -12 T 120 -12 T 160 -12"
dur="2.2s"
repeatCount="indefinite"/>
</path>
<!-- Ondas inferiores (simétricas) -->
<!-- Onda 1 inferior -->
<path d="M 0 30 Q 30 35, 60 30 T 120 30 T 180 30 T 240 30"
stroke="url(#fadeGrad)"
stroke-width="3"
fill="none"
stroke-linecap="round"
filter="url(#glow)"
opacity="0.8">
<animate attributeName="d"
values="M 0 30 Q 30 35, 60 30 T 120 30 T 180 30 T 240 30;
M 0 30 Q 30 25, 60 30 T 120 30 T 180 30 T 240 30;
M 0 30 Q 30 35, 60 30 T 120 30 T 180 30 T 240 30"
dur="3.2s"
repeatCount="indefinite"/>
</path>
<!-- Onda 2 inferior -->
<path d="M 0 20 Q 25 23, 50 20 T 100 20 T 150 20 T 200 20"
stroke="url(#fadeGrad)"
stroke-width="2.5"
fill="none"
stroke-linecap="round"
opacity="0.7">
<animate attributeName="d"
values="M 0 20 Q 25 23, 50 20 T 100 20 T 150 20 T 200 20;
M 0 20 Q 25 17, 50 20 T 100 20 T 150 20 T 200 20;
M 0 20 Q 25 23, 50 20 T 100 20 T 150 20 T 200 20"
dur="2.7s"
repeatCount="indefinite"/>
</path>
<!-- Onda 3 inferior cerca -->
<path d="M 0 12 Q 20 14, 40 12 T 80 12 T 120 12 T 160 12"
stroke="url(#fadeGrad)"
stroke-width="2"
fill="none"
stroke-linecap="round"
opacity="0.6">
<animate attributeName="d"
values="M 0 12 Q 20 14, 40 12 T 80 12 T 120 12 T 160 12;
M 0 12 Q 20 10, 40 12 T 80 12 T 120 12 T 160 12;
M 0 12 Q 20 14, 40 12 T 80 12 T 120 12 T 160 12"
dur="2.4s"
repeatCount="indefinite"/>
</path>
<!-- Onda central (más sutil) -->
<path d="M 5 0 L 250 0"
stroke="url(#fadeGrad)"
stroke-width="1.5"
fill="none"
stroke-linecap="round"
opacity="0.4">
<animate attributeName="stroke-dasharray" values="0,300;300,0;0,300" dur="4s" repeatCount="indefinite"/>
</path>
<!-- Partículas viajando horizontalmente -->
<circle cx="40" cy="-25" r="2.5" fill="#22d3ee" filter="url(#strongGlow)">
<animate attributeName="cx" values="40;200;40" dur="3.5s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0;1;0" dur="3.5s" repeatCount="indefinite"/>
</circle>
<circle cx="30" cy="-15" r="2" fill="#a855f7" filter="url(#strongGlow)">
<animate attributeName="cx" values="30;180;30" dur="4s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0;1;0" dur="4s" repeatCount="indefinite"/>
</circle>
<circle cx="35" cy="18" r="2" fill="#ec4899" filter="url(#strongGlow)">
<animate attributeName="cx" values="35;190;35" dur="3.8s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0;1;0" dur="3.8s" repeatCount="indefinite"/>
</circle>
<circle cx="45" cy="28" r="2.5" fill="#22d3ee" filter="url(#strongGlow)">
<animate attributeName="cx" values="45;210;45" dur="4.2s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0;1;0" dur="4.2s" repeatCount="indefinite"/>
</circle>
<circle cx="50" cy="-8" r="1.8" fill="#a855f7">
<animate attributeName="cx" values="50;170;50" dur="3.2s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0;0.7;0" dur="3.2s" repeatCount="indefinite"/>
</circle>
<circle cx="55" cy="8" r="1.8" fill="#ec4899">
<animate attributeName="cx" values="55;175;55" dur="3.6s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0;0.7;0" dur="3.6s" repeatCount="indefinite"/>
</circle>
<!-- Nodos hexagonales viajando -->
<polygon points="60,0 63,-2 63,-5 60,-7 57,-5 57,-2"
stroke="#22d3ee"
fill="none"
stroke-width="1"
opacity="0.6">
<animate attributeName="transform" values="translate(0,0);translate(140,0);translate(0,0)" dur="4.5s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0;0.6;0" dur="4.5s" repeatCount="indefinite"/>
</polygon>
<polygon points="70,22 73,20 73,17 70,15 67,17 67,20"
stroke="#ec4899"
fill="none"
stroke-width="1"
opacity="0.5">
<animate attributeName="transform" values="translate(0,0);translate(120,0);translate(0,0)" dur="5s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0;0.5;0" dur="5s" repeatCount="indefinite"/>
</polygon>
</g>
<!-- Texto VAPORA -->
<g filter="url(#glass)">
<text x="320" y="135"
font-family="'JetBrains Mono', 'Fira Code', monospace"
font-size="58"
font-weight="800"
fill="url(#horizGrad)"
letter-spacing="3">
VAPORA
</text>
<text x="320" y="135"
font-family="'JetBrains Mono', 'Fira Code', monospace"
font-size="58"
font-weight="800"
fill="none"
stroke="rgba(255,255,255,0.2)"
stroke-width="0.5"
letter-spacing="3">
VAPORA
</text>
</g>
<!-- Glow en texto -->
<text x="320" y="135"
font-family="'JetBrains Mono', 'Fira Code', monospace"
font-size="58"
font-weight="800"
fill="url(#horizGrad)"
letter-spacing="3"
filter="url(#glow)"
opacity="0.25">
VAPORA
</text>
<!-- Tagline -->
<text x="320" y="162"
font-family="'Inter', sans-serif"
font-size="13"
fill="#a855f7"
opacity="0.75"
letter-spacing="1.5">
Evaporate complexity. Build in the cloud.
</text>
<!-- Indicador de flujo -->
<g opacity="0.4">
<text x="300" y="130"
font-family="'JetBrains Mono', monospace"
font-size="10"
fill="#22d3ee">
</text>
</g>
</svg>

After

Width:  |  Height:  |  Size: 10 KiB

View File

@ -0,0 +1,224 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 600 250" width="600" height="250">
<defs>
<!-- Gradiente principal -->
<linearGradient id="mainGrad" x1="0%" y1="0%" x2="100%" y2="100%">
<stop offset="0%" style="stop-color:#22d3ee;stop-opacity:1" />
<stop offset="50%" style="stop-color:#a855f7;stop-opacity:1" />
<stop offset="100%" style="stop-color:#ec4899;stop-opacity:1" />
</linearGradient>
<!-- Gradiente para ondas suaves -->
<linearGradient id="smoothWave" x1="0%" y1="100%" x2="0%" y2="0%">
<stop offset="0%" style="stop-color:#22d3ee;stop-opacity:0.8" />
<stop offset="50%" style="stop-color:#a855f7;stop-opacity:0.5" />
<stop offset="100%" style="stop-color:#ec4899;stop-opacity:0.2" />
</linearGradient>
<!-- Gradiente radial para núcleo -->
<radialGradient id="coreGlow">
<stop offset="0%" style="stop-color:#22d3ee;stop-opacity:1" />
<stop offset="50%" style="stop-color:#a855f7;stop-opacity:0.8" />
<stop offset="100%" style="stop-color:#ec4899;stop-opacity:0.3" />
</radialGradient>
<!-- Filtro glow suave -->
<filter id="softGlow">
<feGaussianBlur stdDeviation="3" result="coloredBlur"/>
<feMerge>
<feMergeNode in="coloredBlur"/>
<feMergeNode in="SourceGraphic"/>
</feMerge>
</filter>
<!-- Filtro glow intenso -->
<filter id="intensGlow">
<feGaussianBlur stdDeviation="5" result="coloredBlur"/>
<feMerge>
<feMergeNode in="coloredBlur"/>
<feMergeNode in="coloredBlur"/>
<feMergeNode in="SourceGraphic"/>
</feMerge>
</filter>
<!-- Filtro glassmorphism -->
<filter id="glass">
<feGaussianBlur in="SourceGraphic" stdDeviation="0.5" result="blur" />
<feColorMatrix in="blur" type="matrix" values="1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 18 -7" result="goo" />
<feBlend in="SourceGraphic" in2="goo" />
</filter>
</defs>
<!-- Fondo -->
<rect width="600" height="250" fill="#000000"/>
<!-- Símbolo de vapor híbrido -->
<g transform="translate(100, 140)">
<!-- Núcleo central brillante (de v3) -->
<circle cx="0" cy="0" r="10" fill="url(#coreGlow)" filter="url(#intensGlow)" opacity="0.95">
<animate attributeName="r" values="10;12;10" dur="2s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0.95;1;0.95" dur="2s" repeatCount="indefinite"/>
</circle>
<!-- Ondas suaves estilo v1 pero mejoradas -->
<!-- Onda curva izquierda -->
<path d="M -20 0 Q -25 -20, -20 -40 T -20 -60"
stroke="url(#smoothWave)"
stroke-width="3.5"
fill="none"
stroke-linecap="round"
filter="url(#softGlow)"
opacity="0.8">
<animate attributeName="d"
values="M -20 0 Q -25 -20, -20 -40 T -20 -60;
M -20 0 Q -28 -20, -20 -40 T -20 -60;
M -20 0 Q -25 -20, -20 -40 T -20 -60"
dur="3s"
repeatCount="indefinite"/>
</path>
<!-- Onda curva derecha -->
<path d="M 20 0 Q 25 -20, 20 -40 T 20 -60"
stroke="url(#smoothWave)"
stroke-width="3.5"
fill="none"
stroke-linecap="round"
filter="url(#softGlow)"
opacity="0.8">
<animate attributeName="d"
values="M 20 0 Q 25 -20, 20 -40 T 20 -60;
M 20 0 Q 28 -20, 20 -40 T 20 -60;
M 20 0 Q 25 -20, 20 -40 T 20 -60"
dur="3.2s"
repeatCount="indefinite"/>
</path>
<!-- Onda central sutil -->
<path d="M 0 -5 Q -5 -25, 0 -45 T 0 -65"
stroke="url(#smoothWave)"
stroke-width="2.5"
fill="none"
stroke-linecap="round"
opacity="0.6">
<animate attributeName="d"
values="M 0 -5 Q -5 -25, 0 -45 T 0 -65;
M 0 -5 Q 5 -25, 0 -45 T 0 -65;
M 0 -5 Q -5 -25, 0 -45 T 0 -65"
dur="2.5s"
repeatCount="indefinite"/>
</path>
<!-- Forma abstracta adicional (estilo v1) -->
<path d="M -10 -5 Q -15 -15, -10 -25 Q -5 -35, -10 -45"
stroke="url(#smoothWave)"
stroke-width="2"
fill="none"
stroke-linecap="round"
opacity="0.5">
<animate attributeName="d"
values="M -10 -5 Q -15 -15, -10 -25 Q -5 -35, -10 -45;
M -10 -5 Q -13 -15, -10 -25 Q -7 -35, -10 -45;
M -10 -5 Q -15 -15, -10 -25 Q -5 -35, -10 -45"
dur="2.8s"
repeatCount="indefinite"/>
</path>
<path d="M 10 -5 Q 15 -15, 10 -25 Q 5 -35, 10 -45"
stroke="url(#smoothWave)"
stroke-width="2"
fill="none"
stroke-linecap="round"
opacity="0.5">
<animate attributeName="d"
values="M 10 -5 Q 15 -15, 10 -25 Q 5 -35, 10 -45;
M 10 -5 Q 13 -15, 10 -25 Q 7 -35, 10 -45;
M 10 -5 Q 15 -15, 10 -25 Q 5 -35, 10 -45"
dur="3.3s"
repeatCount="indefinite"/>
</path>
<!-- Partículas flotantes mejoradas -->
<circle cx="-15" cy="-20" r="2.5" fill="#22d3ee" opacity="0.8">
<animate attributeName="cy" values="-20;-70;-20" dur="4s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0;0.8;0" dur="4s" repeatCount="indefinite"/>
<animate attributeName="r" values="2.5;1.5;2.5" dur="4s" repeatCount="indefinite"/>
</circle>
<circle cx="15" cy="-25" r="2" fill="#ec4899" opacity="0.7">
<animate attributeName="cy" values="-25;-75;-25" dur="4.5s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0;0.7;0" dur="4.5s" repeatCount="indefinite"/>
<animate attributeName="r" values="2;1;2" dur="4.5s" repeatCount="indefinite"/>
</circle>
<circle cx="-5" cy="-15" r="1.8" fill="#a855f7" opacity="0.6">
<animate attributeName="cy" values="-15;-65;-15" dur="3.8s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0;0.6;0" dur="3.8s" repeatCount="indefinite"/>
</circle>
<circle cx="8" cy="-18" r="1.8" fill="#22d3ee" opacity="0.6">
<animate attributeName="cy" values="-18;-68;-18" dur="4.2s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0;0.6;0" dur="4.2s" repeatCount="indefinite"/>
</circle>
<circle cx="0" cy="-30" r="1.5" fill="#a855f7" opacity="0.5">
<animate attributeName="cy" values="-30;-80;-30" dur="3.5s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0;0.5;0" dur="3.5s" repeatCount="indefinite"/>
</circle>
<circle cx="-8" cy="-35" r="1.2" fill="#ec4899" opacity="0.4">
<animate attributeName="cy" values="-35;-75;-35" dur="4.8s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0;0.4;0" dur="4.8s" repeatCount="indefinite"/>
</circle>
</g>
<!-- Texto VAPORA con glassmorphism -->
<g filter="url(#glass)">
<text x="220" y="135"
font-family="'JetBrains Mono', 'Fira Code', monospace"
font-size="58"
font-weight="800"
fill="url(#mainGrad)"
letter-spacing="3">
VAPORA
</text>
<!-- Capa de brillo glassmorphism -->
<text x="220" y="135"
font-family="'JetBrains Mono', 'Fira Code', monospace"
font-size="58"
font-weight="800"
fill="none"
stroke="rgba(255,255,255,0.25)"
stroke-width="0.5"
letter-spacing="3">
VAPORA
</text>
</g>
<!-- Glow sutil adicional -->
<text x="220" y="135"
font-family="'JetBrains Mono', 'Fira Code', monospace"
font-size="58"
font-weight="800"
fill="url(#mainGrad)"
letter-spacing="3"
filter="url(#softGlow)"
opacity="0.25">
VAPORA
</text>
<!-- Tagline -->
<text x="220" y="162"
font-family="'Inter', sans-serif"
font-size="13"
fill="#a855f7"
opacity="0.75"
letter-spacing="1.5">
Evaporate complexity. Build in the cloud.
</text>
<!-- Punto decorativo pulsante -->
<circle cx="205" cy="125" r="3" fill="#22d3ee" opacity="0.6">
<animate attributeName="r" values="3;4.5;3" dur="2.5s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0.6;1;0.6" dur="2.5s" repeatCount="indefinite"/>
</circle>
</svg>

After

Width:  |  Height:  |  Size: 8.4 KiB

View File

@ -0,0 +1,274 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 600 300" width="600" height="300">
<defs>
<!-- Gradiente principal -->
<linearGradient id="techGrad" x1="0%" y1="0%" x2="100%" y2="0%">
<stop offset="0%" style="stop-color:#22d3ee;stop-opacity:1" />
<stop offset="50%" style="stop-color:#a855f7;stop-opacity:1" />
<stop offset="100%" style="stop-color:#ec4899;stop-opacity:1" />
</linearGradient>
<!-- Gradiente vertical -->
<linearGradient id="vertGrad" x1="0%" y1="100%" x2="0%" y2="0%">
<stop offset="0%" style="stop-color:#22d3ee;stop-opacity:1" />
<stop offset="50%" style="stop-color:#a855f7;stop-opacity:0.8" />
<stop offset="100%" style="stop-color:#ec4899;stop-opacity:0.4" />
</linearGradient>
<!-- Filtro glow tech -->
<filter id="techGlow">
<feGaussianBlur stdDeviation="2" result="coloredBlur"/>
<feMerge>
<feMergeNode in="coloredBlur"/>
<feMergeNode in="SourceGraphic"/>
</feMerge>
</filter>
<!-- Filtro glow fuerte -->
<filter id="strongGlow">
<feGaussianBlur stdDeviation="4" result="coloredBlur"/>
<feMerge>
<feMergeNode in="coloredBlur"/>
<feMergeNode in="coloredBlur"/>
<feMergeNode in="SourceGraphic"/>
</feMerge>
</filter>
<!-- Filtro glass -->
<filter id="glass">
<feGaussianBlur in="SourceGraphic" stdDeviation="0.5" result="blur" />
<feColorMatrix in="blur" type="matrix" values="1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 18 -7" result="goo" />
<feBlend in="SourceGraphic" in2="goo" />
</filter>
</defs>
<!-- Fondo -->
<rect width="600" height="300" fill="#000000"/>
<!-- Grid de fondo técnico sutil -->
<g opacity="0.08" stroke="#22d3ee" stroke-width="0.5">
<line x1="0" y1="100" x2="600" y2="100"/>
<line x1="0" y1="150" x2="600" y2="150"/>
<line x1="0" y1="200" x2="600" y2="200"/>
<line x1="100" y1="0" x2="100" y2="300"/>
<line x1="200" y1="0" x2="200" y2="300"/>
<line x1="300" y1="0" x2="300" y2="300"/>
<line x1="400" y1="0" x2="400" y2="300"/>
<line x1="500" y1="0" x2="500" y2="300"/>
</g>
<!-- Símbolo técnico: flujo de datos ascendente -->
<g transform="translate(80, 240)">
<!-- Base: plataforma -->
<rect x="-15" y="0" width="30" height="4" fill="url(#techGrad)" opacity="0.8" rx="2"/>
<!-- Stream principal - línea central tipo señal -->
<path d="M 0 0 L 0 -30 L 5 -35 L -5 -45 L 5 -55 L -5 -65 L 5 -75 L 0 -80 L 0 -120"
stroke="url(#vertGrad)"
stroke-width="3"
fill="none"
stroke-linecap="round"
stroke-linejoin="round"
filter="url(#techGlow)">
<animate attributeName="stroke-dasharray" values="0,300;300,0;0,300" dur="4s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0.8;1;0.8" dur="2s" repeatCount="indefinite"/>
</path>
<!-- Stream izquierdo 1 -->
<path d="M -20 0 L -20 -25 L -18 -35 L -22 -45 L -18 -55 L -22 -65 L -20 -75 L -20 -100"
stroke="#22d3ee"
stroke-width="2"
fill="none"
stroke-linecap="round"
opacity="0.6"
filter="url(#techGlow)">
<animate attributeName="stroke-dasharray" values="0,250;250,0;0,250" dur="4.5s" repeatCount="indefinite"/>
</path>
<!-- Stream izquierdo 2 -->
<path d="M -35 0 L -35 -20 L -32 -30 L -38 -40 L -32 -50 L -38 -60 L -35 -70 L -35 -85"
stroke="#a855f7"
stroke-width="1.5"
fill="none"
stroke-linecap="round"
opacity="0.5">
<animate attributeName="stroke-dasharray" values="0,200;200,0;0,200" dur="5s" repeatCount="indefinite"/>
</path>
<!-- Stream derecho 1 -->
<path d="M 20 0 L 20 -25 L 18 -35 L 22 -45 L 18 -55 L 22 -65 L 20 -75 L 20 -100"
stroke="#ec4899"
stroke-width="2"
fill="none"
stroke-linecap="round"
opacity="0.6"
filter="url(#techGlow)">
<animate attributeName="stroke-dasharray" values="0,250;250,0;0,250" dur="4.2s" repeatCount="indefinite"/>
</path>
<!-- Stream derecho 2 -->
<path d="M 35 0 L 35 -20 L 32 -30 L 38 -40 L 32 -50 L 38 -60 L 35 -70 L 35 -85"
stroke="#22d3ee"
stroke-width="1.5"
fill="none"
stroke-linecap="round"
opacity="0.5">
<animate attributeName="stroke-dasharray" values="0,200;200,0;0,200" dur="5.5s" repeatCount="indefinite"/>
</path>
<!-- Nodos de datos en el flujo principal -->
<circle cx="0" cy="-40" r="3" fill="#22d3ee" filter="url(#strongGlow)">
<animate attributeName="cy" values="-40;-110;-40" dur="3s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0;1;0" dur="3s" repeatCount="indefinite"/>
</circle>
<circle cx="0" cy="-60" r="2.5" fill="#a855f7" filter="url(#strongGlow)">
<animate attributeName="cy" values="-60;-130;-60" dur="3.5s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0;1;0" dur="3.5s" repeatCount="indefinite"/>
</circle>
<circle cx="0" cy="-80" r="2" fill="#ec4899" filter="url(#strongGlow)">
<animate attributeName="cy" values="-80;-140;-80" dur="4s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0;1;0" dur="4s" repeatCount="indefinite"/>
</circle>
<!-- Partículas laterales -->
<circle cx="-20" cy="-50" r="2" fill="#22d3ee" opacity="0.7">
<animate attributeName="cy" values="-50;-105;-50" dur="3.8s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0;0.7;0" dur="3.8s" repeatCount="indefinite"/>
</circle>
<circle cx="20" cy="-55" r="2" fill="#ec4899" opacity="0.7">
<animate attributeName="cy" values="-55;-105;-55" dur="4.2s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0;0.7;0" dur="4.2s" repeatCount="indefinite"/>
</circle>
<circle cx="-35" cy="-35" r="1.5" fill="#a855f7" opacity="0.5">
<animate attributeName="cy" values="-35;-85;-35" dur="4.5s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0;0.5;0" dur="4.5s" repeatCount="indefinite"/>
</circle>
<circle cx="35" cy="-40" r="1.5" fill="#22d3ee" opacity="0.5">
<animate attributeName="cy" values="-40;-85;-40" dur="5s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0;0.5;0" dur="5s" repeatCount="indefinite"/>
</circle>
<!-- Hexágonos técnicos flotantes -->
<polygon points="0,-95 3,-97 3,-100 0,-102 -3,-100 -3,-97"
stroke="#22d3ee"
fill="none"
stroke-width="1"
opacity="0.6">
<animate attributeName="transform" values="translate(0,0);translate(0,-30);translate(0,0)" dur="4s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0;0.6;0" dur="4s" repeatCount="indefinite"/>
</polygon>
<polygon points="-25,-70 -22,-72 -22,-75 -25,-77 -28,-75 -28,-72"
stroke="#a855f7"
fill="none"
stroke-width="1"
opacity="0.5">
<animate attributeName="transform" values="translate(0,0);translate(0,-25);translate(0,0)" dur="4.5s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0;0.5;0" dur="4.5s" repeatCount="indefinite"/>
</polygon>
<polygon points="25,-75 28,-77 28,-80 25,-82 22,-80 22,-77"
stroke="#ec4899"
fill="none"
stroke-width="1"
opacity="0.5">
<animate attributeName="transform" values="translate(0,0);translate(0,-20);translate(0,0)" dur="5s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0;0.5;0" dur="5s" repeatCount="indefinite"/>
</polygon>
<!-- Líneas de conexión horizontales animadas -->
<line x1="-20" y1="-60" x2="-5" y2="-60" stroke="#22d3ee" stroke-width="0.5" opacity="0.4">
<animate attributeName="opacity" values="0;0.4;0" dur="2s" repeatCount="indefinite"/>
</line>
<line x1="5" y1="-70" x2="20" y2="-70" stroke="#ec4899" stroke-width="0.5" opacity="0.4">
<animate attributeName="opacity" values="0;0.4;0" dur="2.5s" repeatCount="indefinite"/>
</line>
<line x1="-35" y1="-50" x2="-22" y2="-50" stroke="#a855f7" stroke-width="0.5" opacity="0.3">
<animate attributeName="opacity" values="0;0.3;0" dur="3s" repeatCount="indefinite"/>
</line>
<line x1="22" y1="-55" x2="35" y2="-55" stroke="#22d3ee" stroke-width="0.5" opacity="0.3">
<animate attributeName="opacity" values="0;0.3;0" dur="3.5s" repeatCount="indefinite"/>
</line>
</g>
<!-- Indicadores técnicos laterales -->
<g opacity="0.3">
<text x="25" y="150" font-family="'JetBrains Mono', monospace" font-size="8" fill="#22d3ee">↑ STREAM</text>
<text x="25" y="180" font-family="'JetBrains Mono', monospace" font-size="8" fill="#a855f7">↑ DATA</text>
</g>
<!-- Texto VAPORA -->
<g filter="url(#glass)">
<text x="200" y="155"
font-family="'JetBrains Mono', 'Fira Code', monospace"
font-size="64"
font-weight="800"
fill="url(#techGrad)"
letter-spacing="4">
VAPORA
</text>
<text x="200" y="155"
font-family="'JetBrains Mono', 'Fira Code', monospace"
font-size="64"
font-weight="800"
fill="none"
stroke="rgba(255,255,255,0.2)"
stroke-width="0.5"
letter-spacing="4">
VAPORA
</text>
</g>
<!-- Glow en texto -->
<text x="200" y="155"
font-family="'JetBrains Mono', 'Fira Code', monospace"
font-size="64"
font-weight="800"
fill="url(#techGrad)"
letter-spacing="4"
filter="url(#techGlow)"
opacity="0.3">
VAPORA
</text>
<!-- Tagline -->
<text x="200" y="183"
font-family="'Inter', sans-serif"
font-size="14"
fill="#a855f7"
opacity="0.8"
letter-spacing="2">
Evaporate complexity. Build in the cloud.
</text>
<!-- Indicador técnico decorativo -->
<g transform="translate(200, 195)">
<rect x="0" y="0" width="2" height="8" fill="#22d3ee" opacity="0.6">
<animate attributeName="height" values="8;12;8" dur="1.5s" repeatCount="indefinite"/>
</rect>
<rect x="5" y="0" width="2" height="10" fill="#a855f7" opacity="0.6">
<animate attributeName="height" values="10;14;10" dur="1.8s" repeatCount="indefinite"/>
</rect>
<rect x="10" y="0" width="2" height="6" fill="#ec4899" opacity="0.6">
<animate attributeName="height" values="6;10;6" dur="1.3s" repeatCount="indefinite"/>
</rect>
</g>
<!-- Metadata técnica -->
<text x="550" y="280"
font-family="'JetBrains Mono', monospace"
font-size="8"
fill="#22d3ee"
opacity="0.3"
text-anchor="end">
v4.0.0-dev
</text>
</svg>

After

Width:  |  Height:  |  Size: 11 KiB

View File

@ -0,0 +1,170 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 600 250" width="600" height="250">
<defs>
<!-- Gradiente principal -->
<linearGradient id="mainGradient" x1="0%" y1="0%" x2="100%" y2="0%">
<stop offset="0%" style="stop-color:#22d3ee;stop-opacity:1" />
<stop offset="50%" style="stop-color:#a855f7;stop-opacity:1" />
<stop offset="100%" style="stop-color:#ec4899;stop-opacity:1" />
</linearGradient>
<!-- Gradiente vertical para dispersión -->
<linearGradient id="dispersionGradient" x1="0%" y1="100%" x2="0%" y2="0%">
<stop offset="0%" style="stop-color:#22d3ee;stop-opacity:1" />
<stop offset="100%" style="stop-color:#ec4899;stop-opacity:0" />
</linearGradient>
<!-- Filtro de desenfoque suave -->
<filter id="softGlow">
<feGaussianBlur stdDeviation="2" result="coloredBlur"/>
<feMerge>
<feMergeNode in="coloredBlur"/>
<feMergeNode in="SourceGraphic"/>
</feMerge>
</filter>
<!-- Filtro glass más pronunciado -->
<filter id="glassEffect">
<feGaussianBlur in="SourceAlpha" stdDeviation="4" result="blur"/>
<feOffset in="blur" dx="0" dy="2" result="offsetBlur"/>
<feFlood flood-color="#ffffff" flood-opacity="0.1" result="flood"/>
<feComposite in="flood" in2="offsetBlur" operator="in" result="comp"/>
<feMerge>
<feMergeNode in="comp"/>
<feMergeNode in="SourceGraphic"/>
</feMerge>
</filter>
</defs>
<!-- Fondo negro -->
<rect width="600" height="250" fill="#000000"/>
<!-- Icono: V estilizada con dispersión de partículas -->
<g transform="translate(80, 80)">
<!-- Base sólida de la V -->
<path d="M 0 0 L 30 60 L 60 0"
stroke="url(#mainGradient)"
stroke-width="6"
fill="none"
stroke-linecap="round"
stroke-linejoin="round"
filter="url(#softGlow)"/>
<!-- Partículas dispersándose desde la punta superior izquierda -->
<g opacity="0.8">
<circle cx="5" cy="5" r="3" fill="#22d3ee">
<animate attributeName="cx" values="5;0;-5" dur="2s" repeatCount="indefinite"/>
<animate attributeName="cy" values="5;-5;-15" dur="2s" repeatCount="indefinite"/>
<animate attributeName="r" values="3;2;1" dur="2s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="1;0.5;0" dur="2s" repeatCount="indefinite"/>
</circle>
<circle cx="8" cy="3" r="2.5" fill="#a855f7">
<animate attributeName="cx" values="8;5;2" dur="2.5s" repeatCount="indefinite"/>
<animate attributeName="cy" values="3;-8;-20" dur="2.5s" repeatCount="indefinite"/>
<animate attributeName="r" values="2.5;1.5;0.5" dur="2.5s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="1;0.5;0" dur="2.5s" repeatCount="indefinite"/>
</circle>
<circle cx="10" cy="8" r="2" fill="#ec4899">
<animate attributeName="cx" values="10;12;15" dur="3s" repeatCount="indefinite"/>
<animate attributeName="cy" values="8;-3;-18" dur="3s" repeatCount="indefinite"/>
<animate attributeName="r" values="2;1;0.5" dur="3s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="1;0.5;0" dur="3s" repeatCount="indefinite"/>
</circle>
</g>
<!-- Partículas desde la punta superior derecha -->
<g opacity="0.8">
<circle cx="55" cy="5" r="3" fill="#ec4899">
<animate attributeName="cx" values="55;60;65" dur="2.2s" repeatCount="indefinite"/>
<animate attributeName="cy" values="5;-5;-15" dur="2.2s" repeatCount="indefinite"/>
<animate attributeName="r" values="3;2;1" dur="2.2s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="1;0.5;0" dur="2.2s" repeatCount="indefinite"/>
</circle>
<circle cx="52" cy="3" r="2.5" fill="#a855f7">
<animate attributeName="cx" values="52;55;58" dur="2.7s" repeatCount="indefinite"/>
<animate attributeName="cy" values="3;-8;-20" dur="2.7s" repeatCount="indefinite"/>
<animate attributeName="r" values="2.5;1.5;0.5" dur="2.7s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="1;0.5;0" dur="2.7s" repeatCount="indefinite"/>
</circle>
<circle cx="50" cy="8" r="2" fill="#22d3ee">
<animate attributeName="cx" values="50;48;45" dur="3.2s" repeatCount="indefinite"/>
<animate attributeName="cy" values="8;-3;-18" dur="3.2s" repeatCount="indefinite"/>
<animate attributeName="r" values="2;1;0.5" dur="3.2s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="1;0.5;0" dur="3.2s" repeatCount="indefinite"/>
</circle>
</g>
<!-- Estela de vapor desde el centro -->
<g opacity="0.5">
<circle cx="30" cy="30" r="2.5" fill="#a855f7">
<animate attributeName="cy" values="30;10;-10" dur="2.8s" repeatCount="indefinite"/>
<animate attributeName="r" values="2.5;1.8;0.8" dur="2.8s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0.8;0.4;0" dur="2.8s" repeatCount="indefinite"/>
</circle>
<circle cx="28" cy="35" r="2" fill="#22d3ee">
<animate attributeName="cy" values="35;15;-8" dur="3.3s" repeatCount="indefinite"/>
<animate attributeName="r" values="2;1.5;0.5" dur="3.3s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0.7;0.3;0" dur="3.3s" repeatCount="indefinite"/>
</circle>
<circle cx="32" cy="35" r="2" fill="#ec4899">
<animate attributeName="cy" values="35;15;-8" dur="3.1s" repeatCount="indefinite"/>
<animate attributeName="r" values="2;1.5;0.5" dur="3.1s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0.7;0.3;0" dur="3.1s" repeatCount="indefinite"/>
</circle>
</g>
</g>
<!-- Texto VAPORA -->
<g filter="url(#glassEffect)">
<text x="220" y="125"
font-family="'JetBrains Mono', 'Fira Code', monospace"
font-size="64"
font-weight="800"
fill="url(#mainGradient)"
letter-spacing="4">
VAPORA
</text>
</g>
<!-- Línea decorativa debajo del texto -->
<line x1="220" y1="140" x2="540" y2="140"
stroke="url(#mainGradient)"
stroke-width="2"
opacity="0.4">
<animate attributeName="x2" values="540;560;540" dur="3s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0.4;0.7;0.4" dur="3s" repeatCount="indefinite"/>
</line>
<!-- Tagline -->
<text x="220" y="165"
font-family="'Inter', sans-serif"
font-size="13"
fill="#a855f7"
opacity="0.8"
letter-spacing="2">
EVAPORATE COMPLEXITY. BUILD IN THE CLOUD.
</text>
<!-- Puntos decorativos que pulsan -->
<g opacity="0.6">
<circle cx="200" cy="125" r="3" fill="#22d3ee">
<animate attributeName="r" values="3;4;3" dur="2s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0.6;1;0.6" dur="2s" repeatCount="indefinite"/>
</circle>
<circle cx="560" cy="125" r="3" fill="#ec4899">
<animate attributeName="r" values="3;4;3" dur="2s" begin="1s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0.6;1;0.6" dur="2s" begin="1s" repeatCount="indefinite"/>
</circle>
</g>
<!-- Reflejos glassmorphism adicionales -->
<g opacity="0.2">
<rect x="220" y="95" width="340" height="8" fill="url(#mainGradient)" rx="4"/>
<rect x="220" y="145" width="280" height="4" fill="url(#mainGradient)" rx="2"/>
</g>
</svg>

After

Width:  |  Height:  |  Size: 7.5 KiB

View File

@ -0,0 +1,214 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 600 250" width="600" height="250">
<defs>
<!-- Gradiente principal cyan -> purple -> pink -->
<linearGradient id="vaporGrad" x1="0%" y1="0%" x2="100%" y2="100%">
<stop offset="0%" style="stop-color:#22d3ee;stop-opacity:1" />
<stop offset="50%" style="stop-color:#a855f7;stop-opacity:1" />
<stop offset="100%" style="stop-color:#ec4899;stop-opacity:1" />
</linearGradient>
<!-- Gradiente vertical para ondas -->
<linearGradient id="waveGrad" x1="0%" y1="100%" x2="0%" y2="0%">
<stop offset="0%" style="stop-color:#22d3ee;stop-opacity:0.9" />
<stop offset="50%" style="stop-color:#a855f7;stop-opacity:0.6" />
<stop offset="100%" style="stop-color:#ec4899;stop-opacity:0.2" />
</linearGradient>
<!-- Gradiente radial para el núcleo -->
<radialGradient id="coreGrad">
<stop offset="0%" style="stop-color:#22d3ee;stop-opacity:1" />
<stop offset="50%" style="stop-color:#a855f7;stop-opacity:0.8" />
<stop offset="100%" style="stop-color:#ec4899;stop-opacity:0.4" />
</radialGradient>
<!-- Filtro glow intenso -->
<filter id="strongGlow">
<feGaussianBlur stdDeviation="4" result="coloredBlur"/>
<feMerge>
<feMergeNode in="coloredBlur"/>
<feMergeNode in="coloredBlur"/>
<feMergeNode in="SourceGraphic"/>
</feMerge>
</filter>
<!-- Filtro glass -->
<filter id="glassBlur">
<feGaussianBlur in="SourceGraphic" stdDeviation="0.8" result="blur" />
<feColorMatrix in="blur" type="matrix" values="1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 18 -7" result="goo" />
<feBlend in="SourceGraphic" in2="goo" />
</filter>
</defs>
<!-- Fondo -->
<rect width="600" height="250" fill="#000000"/>
<!-- Símbolo de vapor: núcleo brillante + ondas ascendentes -->
<g transform="translate(100, 125)">
<!-- Núcleo central brillante -->
<circle cx="0" cy="0" r="12" fill="url(#coreGrad)" filter="url(#strongGlow)" opacity="0.9">
<animate attributeName="r" values="12;14;12" dur="2s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0.9;1;0.9" dur="2s" repeatCount="indefinite"/>
</circle>
<circle cx="0" cy="0" r="8" fill="#22d3ee" opacity="0.6">
<animate attributeName="r" values="8;10;8" dur="2s" repeatCount="indefinite"/>
</circle>
<!-- Ondas de vapor 1 - izquierda -->
<path d="M -25 0 Q -30 -15, -25 -30 Q -20 -45, -25 -60"
stroke="url(#waveGrad)"
stroke-width="4"
fill="none"
stroke-linecap="round"
opacity="0.8">
<animate attributeName="d"
values="M -25 0 Q -30 -15, -25 -30 Q -20 -45, -25 -60;
M -25 0 Q -32 -15, -25 -30 Q -18 -45, -25 -60;
M -25 0 Q -30 -15, -25 -30 Q -20 -45, -25 -60"
dur="3s"
repeatCount="indefinite"/>
<animate attributeName="opacity" values="0.8;0.5;0.8" dur="3s" repeatCount="indefinite"/>
</path>
<!-- Ondas de vapor 2 - izquierda interna -->
<path d="M -12 0 Q -15 -12, -12 -24 Q -9 -36, -12 -48"
stroke="url(#waveGrad)"
stroke-width="3.5"
fill="none"
stroke-linecap="round"
opacity="0.7">
<animate attributeName="d"
values="M -12 0 Q -15 -12, -12 -24 Q -9 -36, -12 -48;
M -12 0 Q -16 -12, -12 -24 Q -8 -36, -12 -48;
M -12 0 Q -15 -12, -12 -24 Q -9 -36, -12 -48"
dur="2.5s"
repeatCount="indefinite"/>
</path>
<!-- Ondas de vapor 3 - derecha -->
<path d="M 25 0 Q 30 -15, 25 -30 Q 20 -45, 25 -60"
stroke="url(#waveGrad)"
stroke-width="4"
fill="none"
stroke-linecap="round"
opacity="0.8">
<animate attributeName="d"
values="M 25 0 Q 30 -15, 25 -30 Q 20 -45, 25 -60;
M 25 0 Q 28 -15, 25 -30 Q 22 -45, 25 -60;
M 25 0 Q 30 -15, 25 -30 Q 20 -45, 25 -60"
dur="3.2s"
repeatCount="indefinite"/>
<animate attributeName="opacity" values="0.8;0.5;0.8" dur="3.2s" repeatCount="indefinite"/>
</path>
<!-- Ondas de vapor 4 - derecha interna -->
<path d="M 12 0 Q 15 -12, 12 -24 Q 9 -36, 12 -48"
stroke="url(#waveGrad)"
stroke-width="3.5"
fill="none"
stroke-linecap="round"
opacity="0.7">
<animate attributeName="d"
values="M 12 0 Q 15 -12, 12 -24 Q 9 -36, 12 -48;
M 12 0 Q 16 -12, 12 -24 Q 8 -36, 12 -48;
M 12 0 Q 15 -12, 12 -24 Q 9 -36, 12 -48"
dur="2.8s"
repeatCount="indefinite"/>
</path>
<!-- Onda central -->
<path d="M 0 0 Q 0 -20, 0 -40 Q 0 -50, 0 -65"
stroke="url(#waveGrad)"
stroke-width="3"
fill="none"
stroke-linecap="round"
opacity="0.6">
<animate attributeName="d"
values="M 0 0 Q 0 -20, 0 -40 Q 0 -50, 0 -65;
M 0 0 Q -2 -20, 0 -40 Q 2 -50, 0 -65;
M 0 0 Q 0 -20, 0 -40 Q 0 -50, 0 -65"
dur="2.2s"
repeatCount="indefinite"/>
</path>
<!-- Partículas ascendentes -->
<circle cx="-18" cy="-20" r="2.5" fill="#22d3ee" opacity="0.8">
<animate attributeName="cy" values="-20;-70;-20" dur="4s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0;0.8;0" dur="4s" repeatCount="indefinite"/>
<animate attributeName="r" values="2.5;1.5;2.5" dur="4s" repeatCount="indefinite"/>
</circle>
<circle cx="18" cy="-25" r="2" fill="#ec4899" opacity="0.7">
<animate attributeName="cy" values="-25;-75;-25" dur="4.5s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0;0.7;0" dur="4.5s" repeatCount="indefinite"/>
<animate attributeName="r" values="2;1;2" dur="4.5s" repeatCount="indefinite"/>
</circle>
<circle cx="-8" cy="-15" r="1.8" fill="#a855f7" opacity="0.6">
<animate attributeName="cy" values="-15;-65;-15" dur="3.8s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0;0.6;0" dur="3.8s" repeatCount="indefinite"/>
</circle>
<circle cx="8" cy="-18" r="1.8" fill="#22d3ee" opacity="0.6">
<animate attributeName="cy" values="-18;-68;-18" dur="4.2s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0;0.6;0" dur="4.2s" repeatCount="indefinite"/>
</circle>
<circle cx="0" cy="-22" r="1.5" fill="#a855f7" opacity="0.5">
<animate attributeName="cy" values="-22;-72;-22" dur="3.5s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0;0.5;0" dur="3.5s" repeatCount="indefinite"/>
</circle>
</g>
<!-- Texto VAPORA con glassmorphism -->
<g filter="url(#glassBlur)">
<text x="230" y="135"
font-family="'JetBrains Mono', 'Fira Code', monospace"
font-size="56"
font-weight="800"
fill="url(#vaporGrad)"
letter-spacing="3">
VAPORA
</text>
<!-- Capa de brillo -->
<text x="230" y="135"
font-family="'JetBrains Mono', 'Fira Code', monospace"
font-size="56"
font-weight="800"
fill="none"
stroke="rgba(255,255,255,0.2)"
stroke-width="0.5"
letter-spacing="3">
VAPORA
</text>
</g>
<!-- Glow adicional al texto -->
<text x="230" y="135"
font-family="'JetBrains Mono', 'Fira Code', monospace"
font-size="56"
font-weight="800"
fill="url(#vaporGrad)"
letter-spacing="3"
filter="url(#strongGlow)"
opacity="0.3">
VAPORA
</text>
<!-- Tagline -->
<text x="230" y="160"
font-family="'Inter', sans-serif"
font-size="13"
fill="#a855f7"
opacity="0.75"
letter-spacing="1.5">
Evaporate complexity. Build in the cloud.
</text>
<!-- Punto decorativo animado -->
<circle cx="215" cy="125" r="3" fill="#22d3ee" opacity="0.6">
<animate attributeName="r" values="3;4.5;3" dur="2.5s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0.6;1;0.6" dur="2.5s" repeatCount="indefinite"/>
</circle>
</svg>

After

Width:  |  Height:  |  Size: 8.3 KiB

156
assets/wrks/vapora-logo.svg Normal file
View File

@ -0,0 +1,156 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 500 200" width="500" height="200">
<defs>
<!-- Gradiente principal: cyan -> purple -> pink -->
<linearGradient id="vaporGradient" x1="0%" y1="0%" x2="100%" y2="100%">
<stop offset="0%" style="stop-color:#22d3ee;stop-opacity:1" />
<stop offset="50%" style="stop-color:#a855f7;stop-opacity:1" />
<stop offset="100%" style="stop-color:#ec4899;stop-opacity:1" />
</linearGradient>
<!-- Gradiente para ondas -->
<linearGradient id="waveGradient" x1="0%" y1="100%" x2="0%" y2="0%">
<stop offset="0%" style="stop-color:#22d3ee;stop-opacity:0.8" />
<stop offset="50%" style="stop-color:#a855f7;stop-opacity:0.5" />
<stop offset="100%" style="stop-color:#ec4899;stop-opacity:0.2" />
</linearGradient>
<!-- Filtro glassmorphism -->
<filter id="glass">
<feGaussianBlur in="SourceGraphic" stdDeviation="1" result="blur" />
<feColorMatrix in="blur" type="matrix" values="1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 18 -7" result="goo" />
<feBlend in="SourceGraphic" in2="goo" />
</filter>
<!-- Filtro glow -->
<filter id="glow">
<feGaussianBlur stdDeviation="3" result="coloredBlur"/>
<feMerge>
<feMergeNode in="coloredBlur"/>
<feMergeNode in="SourceGraphic"/>
</feMerge>
</filter>
</defs>
<!-- Fondo -->
<rect width="500" height="200" fill="#000000"/>
<!-- Ondas de vapor ascendentes (3 ondas con animación) -->
<g opacity="0.6">
<!-- Onda 1 -->
<path d="M 80 150 Q 90 130, 100 150 T 120 150" stroke="url(#waveGradient)" stroke-width="3" fill="none" opacity="0.8">
<animate attributeName="d"
values="M 80 150 Q 90 130, 100 150 T 120 150;
M 80 140 Q 90 120, 100 140 T 120 140;
M 80 130 Q 90 110, 100 130 T 120 130;
M 80 120 Q 90 100, 100 120 T 120 120"
dur="3s"
repeatCount="indefinite"/>
<animate attributeName="opacity" values="0.8;0.4;0.1;0" dur="3s" repeatCount="indefinite"/>
</path>
<!-- Onda 2 -->
<path d="M 70 160 Q 85 140, 100 160 T 130 160" stroke="url(#waveGradient)" stroke-width="2.5" fill="none" opacity="0.7">
<animate attributeName="d"
values="M 70 160 Q 85 140, 100 160 T 130 160;
M 70 145 Q 85 125, 100 145 T 130 145;
M 70 130 Q 85 110, 100 130 T 130 130;
M 70 115 Q 85 95, 100 115 T 130 115"
dur="3.5s"
begin="0.5s"
repeatCount="indefinite"/>
<animate attributeName="opacity" values="0.7;0.3;0.1;0" dur="3.5s" begin="0.5s" repeatCount="indefinite"/>
</path>
<!-- Onda 3 -->
<path d="M 90 155 Q 100 135, 110 155 T 130 155" stroke="url(#waveGradient)" stroke-width="2" fill="none" opacity="0.6">
<animate attributeName="d"
values="M 90 155 Q 100 135, 110 155 T 130 155;
M 90 140 Q 100 120, 110 140 T 130 140;
M 90 125 Q 100 105, 110 125 T 130 125;
M 90 110 Q 100 90, 110 110 T 130 110"
dur="4s"
begin="1s"
repeatCount="indefinite"/>
<animate attributeName="opacity" values="0.6;0.3;0.1;0" dur="4s" begin="1s" repeatCount="indefinite"/>
</path>
</g>
<!-- Partículas flotantes -->
<g opacity="0.5">
<circle cx="110" cy="140" r="2" fill="#22d3ee">
<animate attributeName="cy" values="140;80;140" dur="5s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0;1;0" dur="5s" repeatCount="indefinite"/>
</circle>
<circle cx="95" cy="150" r="1.5" fill="#a855f7">
<animate attributeName="cy" values="150;90;150" dur="6s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0;1;0" dur="6s" repeatCount="indefinite"/>
</circle>
<circle cx="85" cy="145" r="1" fill="#ec4899">
<animate attributeName="cy" values="145;85;145" dur="4.5s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0;1;0" dur="4.5s" repeatCount="indefinite"/>
</circle>
</g>
<!-- Logo icon: símbolo de vapor estilizado -->
<g transform="translate(50, 80)">
<!-- Forma de vapor abstracta -->
<path d="M 20 30 Q 30 10, 40 30 Q 50 50, 60 30 Q 70 10, 80 30"
stroke="url(#vaporGradient)"
stroke-width="4"
fill="none"
filter="url(#glow)"
opacity="0.9"/>
<path d="M 25 40 Q 35 20, 45 40 Q 55 60, 65 40 Q 75 20, 85 40"
stroke="url(#vaporGradient)"
stroke-width="3"
fill="none"
filter="url(#glow)"
opacity="0.6"/>
</g>
<!-- Texto VAPORA con efecto glassmorphism -->
<g filter="url(#glass)">
<text x="150" y="110"
font-family="'JetBrains Mono', 'Fira Code', monospace"
font-size="48"
font-weight="700"
fill="url(#vaporGradient)"
letter-spacing="2">
VAPORA
</text>
<!-- Capa de brillo glassmorphism -->
<text x="150" y="110"
font-family="'JetBrains Mono', 'Fira Code', monospace"
font-size="48"
font-weight="700"
fill="none"
stroke="rgba(255,255,255,0.3)"
stroke-width="0.5"
letter-spacing="2">
VAPORA
</text>
</g>
<!-- Tagline -->
<text x="150" y="135"
font-family="'Inter', sans-serif"
font-size="12"
fill="#a855f7"
opacity="0.7"
letter-spacing="1">
Evaporate complexity. Build in the cloud.
</text>
<!-- Brillo adicional en el texto -->
<g filter="url(#glow)" opacity="0.3">
<text x="150" y="110"
font-family="'JetBrains Mono', 'Fira Code', monospace"
font-size="48"
font-weight="700"
fill="url(#vaporGradient)"
letter-spacing="2">
VAPORA
</text>
</g>
</svg>

After

Width:  |  Height:  |  Size: 5.9 KiB

205
assets/wrks/vapora_org.svg Normal file
View File

@ -0,0 +1,205 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 600 300" width="600" height="300">
<defs>
<!-- Gradiente principal -->
<linearGradient id="techGrad" x1="0%" y1="0%" x2="100%" y2="0%">
<stop offset="0%" style="stop-color:#22d3ee;stop-opacity:1"/>
<stop offset="50%" style="stop-color:#a855f7;stop-opacity:1"/>
<stop offset="100%" style="stop-color:#ec4899;stop-opacity:1"/>
</linearGradient>
<!-- Gradiente vertical -->
<linearGradient id="vertGrad" x1="0%" y1="100%" x2="0%" y2="0%">
<stop offset="0%" style="stop-color:#22d3ee;stop-opacity:1"/>
<stop offset="50%" style="stop-color:#a855f7;stop-opacity:0.8"/>
<stop offset="100%" style="stop-color:#ec4899;stop-opacity:0.4"/>
</linearGradient>
<!-- Filtro glow tech -->
<filter id="techGlow">
<feGaussianBlur stdDeviation="2" result="coloredBlur"/>
<feMerge>
<feMergeNode in="coloredBlur"/>
<feMergeNode in="SourceGraphic"/>
</feMerge>
</filter>
<!-- Filtro glow fuerte -->
<filter id="strongGlow">
<feGaussianBlur stdDeviation="4" result="coloredBlur"/>
<feMerge>
<feMergeNode in="coloredBlur"/>
<feMergeNode in="coloredBlur"/>
<feMergeNode in="SourceGraphic"/>
</feMerge>
</filter>
<!-- Filtro glass -->
<filter id="glass">
<feGaussianBlur in="SourceGraphic" stdDeviation="0.5" result="blur"/>
<feColorMatrix in="blur" type="matrix" values="1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 18 -7" result="goo"/>
<feBlend in="SourceGraphic" in2="goo"/>
</filter>
</defs>
<!-- Fondo -->
<rect width="600" height="300" fill="#000000"/>
<!-- Grid de fondo técnico sutil -->
<g opacity="0.08" stroke="#22d3ee" stroke-width="0.5">
<line x1="0" y1="100" x2="600" y2="100"/>
<line x1="0" y1="150" x2="600" y2="150"/>
<line x1="0" y1="200" x2="600" y2="200"/>
<line x1="100" y1="0" x2="100" y2="300"/>
<line x1="200" y1="0" x2="200" y2="300"/>
<line x1="300" y1="0" x2="300" y2="300"/>
<line x1="400" y1="0" x2="400" y2="300"/>
<line x1="500" y1="0" x2="500" y2="300"/>
</g>
<!-- Símbolo técnico: flujo de datos ascendente -->
<g transform="translate(80, 240)">
<!-- Base: plataforma -->
<rect x="-15" y="0" width="30" height="4" fill="url(#techGrad)" opacity="0.8" rx="2"/>
<!-- Stream principal - línea central tipo señal -->
<path d="M 0 0 L 0 -30 L 5 -35 L -5 -45 L 5 -55 L -5 -65 L 5 -75 L 0 -80 L 0 -120" stroke="url(#vertGrad)" stroke-width="3" fill="none" stroke-linecap="round" stroke-linejoin="round" filter="url(#techGlow)">
<animate attributeName="stroke-dasharray" values="0,300;300,0;0,300" dur="4s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0.8;1;0.8" dur="2s" repeatCount="indefinite"/>
</path>
<!-- Stream izquierdo 1 -->
<path d="M -20 0 L -20 -25 L -18 -35 L -22 -45 L -18 -55 L -22 -65 L -20 -75 L -20 -100" stroke="#22d3ee" stroke-width="2" fill="none" stroke-linecap="round" opacity="0.6" filter="url(#techGlow)">
<animate attributeName="stroke-dasharray" values="0,250;250,0;0,250" dur="4.5s" repeatCount="indefinite"/>
</path>
<!-- Stream izquierdo 2 -->
<path d="M -35 0 L -35 -20 L -32 -30 L -38 -40 L -32 -50 L -38 -60 L -35 -70 L -35 -85" stroke="#a855f7" stroke-width="1.5" fill="none" stroke-linecap="round" opacity="0.5">
<animate attributeName="stroke-dasharray" values="0,200;200,0;0,200" dur="5s" repeatCount="indefinite"/>
</path>
<!-- Stream derecho 1 -->
<path d="M 20 0 L 20 -25 L 18 -35 L 22 -45 L 18 -55 L 22 -65 L 20 -75 L 20 -100" stroke="#ec4899" stroke-width="2" fill="none" stroke-linecap="round" opacity="0.6" filter="url(#techGlow)">
<animate attributeName="stroke-dasharray" values="0,250;250,0;0,250" dur="4.2s" repeatCount="indefinite"/>
</path>
<!-- Stream derecho 2 -->
<path d="M 35 0 L 35 -20 L 32 -30 L 38 -40 L 32 -50 L 38 -60 L 35 -70 L 35 -85" stroke="#22d3ee" stroke-width="1.5" fill="none" stroke-linecap="round" opacity="0.5">
<animate attributeName="stroke-dasharray" values="0,200;200,0;0,200" dur="5.5s" repeatCount="indefinite"/>
</path>
<!-- Nodos de datos en el flujo principal -->
<circle cx="0" cy="-40" r="3" fill="#22d3ee" filter="url(#strongGlow)">
<animate attributeName="cy" values="-40;-110;-40" dur="3s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0;1;0" dur="3s" repeatCount="indefinite"/>
</circle>
<circle cx="0" cy="-60" r="2.5" fill="#a855f7" filter="url(#strongGlow)">
<animate attributeName="cy" values="-60;-130;-60" dur="3.5s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0;1;0" dur="3.5s" repeatCount="indefinite"/>
</circle>
<circle cx="0" cy="-80" r="2" fill="#ec4899" filter="url(#strongGlow)">
<animate attributeName="cy" values="-80;-140;-80" dur="4s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0;1;0" dur="4s" repeatCount="indefinite"/>
</circle>
<!-- Partículas laterales -->
<circle cx="-20" cy="-50" r="2" fill="#22d3ee" opacity="0.7">
<animate attributeName="cy" values="-50;-105;-50" dur="3.8s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0;0.7;0" dur="3.8s" repeatCount="indefinite"/>
</circle>
<circle cx="20" cy="-55" r="2" fill="#ec4899" opacity="0.7">
<animate attributeName="cy" values="-55;-105;-55" dur="4.2s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0;0.7;0" dur="4.2s" repeatCount="indefinite"/>
</circle>
<circle cx="-35" cy="-35" r="1.5" fill="#a855f7" opacity="0.5">
<animate attributeName="cy" values="-35;-85;-35" dur="4.5s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0;0.5;0" dur="4.5s" repeatCount="indefinite"/>
</circle>
<circle cx="35" cy="-40" r="1.5" fill="#22d3ee" opacity="0.5">
<animate attributeName="cy" values="-40;-85;-40" dur="5s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0;0.5;0" dur="5s" repeatCount="indefinite"/>
</circle>
<!-- Hexágonos técnicos flotantes -->
<polygon points="0,-95 3,-97 3,-100 0,-102 -3,-100 -3,-97" stroke="#22d3ee" fill="none" stroke-width="1" opacity="0.6">
<animate attributeName="transform" values="translate(0,0);translate(0,-30);translate(0,0)" dur="4s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0;0.6;0" dur="4s" repeatCount="indefinite"/>
</polygon>
<polygon points="-25,-70 -22,-72 -22,-75 -25,-77 -28,-75 -28,-72" stroke="#a855f7" fill="none" stroke-width="1" opacity="0.5">
<animate attributeName="transform" values="translate(0,0);translate(0,-25);translate(0,0)" dur="4.5s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0;0.5;0" dur="4.5s" repeatCount="indefinite"/>
</polygon>
<polygon points="25,-75 28,-77 28,-80 25,-82 22,-80 22,-77" stroke="#ec4899" fill="none" stroke-width="1" opacity="0.5">
<animate attributeName="transform" values="translate(0,0);translate(0,-20);translate(0,0)" dur="5s" repeatCount="indefinite"/>
<animate attributeName="opacity" values="0;0.5;0" dur="5s" repeatCount="indefinite"/>
</polygon>
<!-- Líneas de conexión horizontales animadas -->
<line x1="-20" y1="-60" x2="-5" y2="-60" stroke="#22d3ee" stroke-width="0.5" opacity="0.4">
<animate attributeName="opacity" values="0;0.4;0" dur="2s" repeatCount="indefinite"/>
</line>
<line x1="5" y1="-70" x2="20" y2="-70" stroke="#ec4899" stroke-width="0.5" opacity="0.4">
<animate attributeName="opacity" values="0;0.4;0" dur="2.5s" repeatCount="indefinite"/>
</line>
<line x1="-35" y1="-50" x2="-22" y2="-50" stroke="#a855f7" stroke-width="0.5" opacity="0.3">
<animate attributeName="opacity" values="0;0.3;0" dur="3s" repeatCount="indefinite"/>
</line>
<line x1="22" y1="-55" x2="35" y2="-55" stroke="#22d3ee" stroke-width="0.5" opacity="0.3">
<animate attributeName="opacity" values="0;0.3;0" dur="3.5s" repeatCount="indefinite"/>
</line>
</g>
<!-- Indicadores técnicos laterales -->
<g opacity="0.3">
<text x="25" y="150" font-family="'JetBrains Mono', monospace" font-size="8" fill="#22d3ee">↑ STREAM</text>
<text x="25" y="180" font-family="'JetBrains Mono', monospace" font-size="8" fill="#a855f7">↑ DATA</text>
</g>
<!-- Texto VAPORA -->
<g filter="url(#glass)">
<text x="200" y="155" font-family="'JetBrains Mono', 'Fira Code', monospace" font-size="64" font-weight="800" fill="url(#techGrad)" letter-spacing="4">
VAPORA
</text>
<text x="200" y="155" font-family="'JetBrains Mono', 'Fira Code', monospace" font-size="64" font-weight="800" fill="none" stroke="rgba(255,255,255,0.2)" stroke-width="0.5" letter-spacing="4">
VAPORA
</text>
</g>
<!-- Glow en texto -->
<text x="200" y="155" font-family="'JetBrains Mono', 'Fira Code', monospace" font-size="64" font-weight="800" fill="url(#techGrad)" letter-spacing="4" filter="url(#techGlow)" opacity="0.3">
VAPORA
</text>
<!-- Tagline -->
<text x="200" y="183" font-family="'Inter', sans-serif" font-size="14" fill="#a855f7" opacity="0.8" letter-spacing="2">
Evaporate complexity. Build in the cloud.
</text>
<!-- Indicador técnico decorativo -->
<g transform="translate(200, 195)">
<rect x="0" y="0" width="2" height="8" fill="#22d3ee" opacity="0.6">
<animate attributeName="height" values="8;12;8" dur="1.5s" repeatCount="indefinite"/>
</rect>
<rect x="5" y="0" width="2" height="10" fill="#a855f7" opacity="0.6">
<animate attributeName="height" values="10;14;10" dur="1.8s" repeatCount="indefinite"/>
</rect>
<rect x="10" y="0" width="2" height="6" fill="#ec4899" opacity="0.6">
<animate attributeName="height" values="6;10;6" dur="1.3s" repeatCount="indefinite"/>
</rect>
</g>
<!-- Metadata técnica -->
<text x="550" y="280" font-family="'JetBrains Mono', monospace" font-size="8" fill="#22d3ee" opacity="0.3" text-anchor="end">
v4.0.0-dev
</text>
</svg>

After

Width:  |  Height:  |  Size: 10 KiB

39
config/agent-budgets.toml Normal file
View File

@ -0,0 +1,39 @@
# Agent Role Budget Configuration
# Defines monthly and weekly spending limits per agent role
# Budget enforcement prevents runaway LLM costs
# Fallback providers used when budget thresholds exceeded
[budgets.architect]
role = "architect"
monthly_limit_cents = 50000 # $500/month
weekly_limit_cents = 12500 # $125/week
fallback_provider = "gemini" # Cheaper alternative when budget hit
alert_threshold = 0.8 # Alert at 80% utilization
[budgets.developer]
role = "developer"
monthly_limit_cents = 30000 # $300/month
weekly_limit_cents = 7500 # $75/week
fallback_provider = "ollama" # Free local model
alert_threshold = 0.8
[budgets.reviewer]
role = "reviewer"
monthly_limit_cents = 20000 # $200/month
weekly_limit_cents = 5000 # $50/week
fallback_provider = "gemini"
alert_threshold = 0.8
[budgets.documenter]
role = "documenter"
monthly_limit_cents = 15000 # $150/month
weekly_limit_cents = 3750 # $37.50/week
fallback_provider = "ollama"
alert_threshold = 0.8
[budgets.tester]
role = "tester"
monthly_limit_cents = 25000 # $250/month
weekly_limit_cents = 6250 # $62.50/week
fallback_provider = "ollama"
alert_threshold = 0.8

122
config/agents.toml Normal file
View File

@ -0,0 +1,122 @@
# Agent Registry Configuration
# Phase 0: Definition of 12 agent roles
[registry]
# Maximum number of concurrent agents per role
max_agents_per_role = 5
# Agent health check interval (seconds)
health_check_interval = 30
# Agent timeout (seconds)
agent_timeout = 300
# The 12 Agent Roles
[[agents]]
role = "architect"
description = "System design, architecture decisions, ADRs"
llm_provider = "claude"
llm_model = "claude-opus-4-20250514"
parallelizable = false
priority = 100
capabilities = ["system_design", "architecture", "adr", "patterns"]
[[agents]]
role = "developer"
description = "Code implementation, feature development"
llm_provider = "claude"
llm_model = "claude-sonnet-4-5-20250929"
parallelizable = true
priority = 80
capabilities = ["coding", "implementation", "debugging"]
[[agents]]
role = "code_reviewer"
description = "Code quality assurance, style checking"
llm_provider = "claude"
llm_model = "claude-sonnet-4-5-20250929"
parallelizable = true
priority = 70
capabilities = ["code_review", "quality", "best_practices"]
[[agents]]
role = "tester"
description = "Tests, benchmarks, quality validation"
llm_provider = "claude"
llm_model = "claude-sonnet-4-5-20250929"
parallelizable = true
priority = 75
capabilities = ["testing", "benchmarks", "validation"]
[[agents]]
role = "documenter"
description = "Documentation, root files (README, CHANGELOG)"
llm_provider = "openai"
llm_model = "gpt-4o"
parallelizable = true
priority = 60
capabilities = ["documentation", "readme", "changelog", "guides"]
[[agents]]
role = "marketer"
description = "Marketing content, announcements"
llm_provider = "claude"
llm_model = "claude-sonnet-4-5-20250929"
parallelizable = true
priority = 40
capabilities = ["marketing", "content", "announcements"]
[[agents]]
role = "presenter"
description = "Presentations, slides, demos"
llm_provider = "claude"
llm_model = "claude-sonnet-4-5-20250929"
parallelizable = false
priority = 50
capabilities = ["presentations", "slides", "demos"]
[[agents]]
role = "devops"
description = "CI/CD, deployment, infrastructure"
llm_provider = "claude"
llm_model = "claude-sonnet-4-5-20250929"
parallelizable = true
priority = 85
capabilities = ["cicd", "deployment", "kubernetes", "infrastructure"]
[[agents]]
role = "monitor"
description = "System health, alerting, observability"
llm_provider = "gemini"
llm_model = "gemini-2.0-flash"
parallelizable = false
priority = 90
capabilities = ["monitoring", "health", "alerts", "metrics"]
[[agents]]
role = "security"
description = "Security audit, vulnerability detection"
llm_provider = "claude"
llm_model = "claude-opus-4-20250514"
parallelizable = true
priority = 95
capabilities = ["security", "audit", "vulnerabilities"]
[[agents]]
role = "project_manager"
description = "Roadmap, task tracking, coordination"
llm_provider = "claude"
llm_model = "claude-sonnet-4-5-20250929"
parallelizable = false
priority = 65
capabilities = ["planning", "tracking", "coordination"]
[[agents]]
role = "decision_maker"
description = "Conflict resolution, strategic decisions"
llm_provider = "claude"
llm_model = "claude-opus-4-20250514"
parallelizable = false
priority = 100
capabilities = ["decisions", "conflict_resolution", "strategy"]

View File

@ -0,0 +1,23 @@
let schema = import "./schema.ncl" in
{
role = "architect",
name = "Solutions Architect",
llm = {
provider = "claude",
model = "claude-opus-4-5",
max_tokens = 8192,
temperature = 0.5,
},
capabilities = [
"system_design",
"architecture_planning",
"scalability_analysis",
"technology_selection",
"design_patterns",
],
system_prompt = "You are a solutions architect with deep expertise in distributed systems, cloud-native architecture, and Rust ecosystems. You design scalable, maintainable systems following SOLID principles and proven patterns. Provide clear architectural decisions with tradeoffs and rationale.",
modifies_code = false,
max_concurrent_tasks = 3,
enabled = true,
} | schema.AgentDefinition

View File

@ -0,0 +1,23 @@
let schema = import "./schema.ncl" in
{
role = "code_reviewer",
name = "Code Review Specialist",
llm = {
provider = "claude",
model = "claude-opus-4-5",
max_tokens = 4096,
temperature = 0.2,
},
capabilities = [
"code_review",
"security_analysis",
"performance_analysis",
"best_practices",
"documentation_review",
],
system_prompt = "You are an elite code review specialist with expertise in Rust security, performance, and best practices. Your reviews are thorough, constructive, and focus on correctness, idioms, maintainability, and security. Identify bugs, performance issues, security vulnerabilities, and suggest improvements with clear reasoning.",
modifies_code = false,
max_concurrent_tasks = 3,
enabled = true,
} | schema.AgentDefinition

View File

@ -0,0 +1,23 @@
let schema = import "./schema.ncl" in
{
role = "developer",
name = "Senior Rust Developer",
llm = {
provider = "claude",
model = "claude-sonnet-4-5",
max_tokens = 8192,
temperature = 0.3,
},
capabilities = [
"code_generation",
"refactoring",
"bug_fixing",
"optimization",
"testing",
],
system_prompt = "You are a senior Rust developer with deep expertise in systems programming, async runtimes, and production-quality code. You follow all Microsoft Rust Guidelines and write idiomatic, safe, and performant code. Never use todo!(), unimplemented!(), or panic!(). Always handle errors properly. Write comprehensive tests for all functionality.",
modifies_code = true,
max_concurrent_tasks = 5,
enabled = true,
} | schema.AgentDefinition

View File

@ -0,0 +1,23 @@
let schema = import "./schema.ncl" in
{
role = "documenter",
name = "Technical Writer",
llm = {
provider = "claude",
model = "claude-sonnet-4-5",
max_tokens = 4096,
temperature = 0.4,
},
capabilities = [
"documentation",
"api_documentation",
"tutorials",
"guides",
"examples",
],
system_prompt = "You are a technical writer specializing in software documentation. You create clear, comprehensive, and well-structured documentation with examples. Follow markdown best practices, maintain technical accuracy, and ensure documentation is accessible to both beginners and experts.",
modifies_code = false,
max_concurrent_tasks = 3,
enabled = true,
} | schema.AgentDefinition

35
config/agents/schema.ncl Normal file
View File

@ -0,0 +1,35 @@
// VAPORA Agent Definition Schema
// Defines the structure for all agent configurations
{
AgentDefinition = {
// Unique identifier for the agent role
role | String,
// Human-readable name
name | String,
// LLM provider configuration
llm | {
provider | String,
model | String,
max_tokens | Number | default = 4096,
temperature | Number | default = 0.7,
},
// List of capabilities this agent has
capabilities | Array String,
// System prompt/instructions for the agent
system_prompt | String,
// Whether this agent modifies code (requires worktree isolation)
modifies_code | Bool | default = false,
// Maximum concurrent tasks this agent can handle
max_concurrent_tasks | Number | default = 5,
// Whether the agent is enabled by default
enabled | Bool | default = true,
},
}

87
config/llm-router.toml Normal file
View File

@ -0,0 +1,87 @@
# Multi-IA Router Configuration
# Phase 0: Configuration for LLM provider selection
[routing]
# Default provider if no specific routing rules match
default_provider = "claude"
# Enable cost tracking
cost_tracking_enabled = true
# Enable fallback on provider failure
fallback_enabled = true
[providers.claude]
enabled = true
# ANTHROPIC_API_KEY environment variable required
api_key = "${ANTHROPIC_API_KEY}"
model = "claude-sonnet-4-5-20250929"
max_tokens = 8192
temperature = 0.7
# Cost per 1M tokens (input/output)
cost_per_1m_input = 3.00
cost_per_1m_output = 15.00
[providers.openai]
enabled = true
# OPENAI_API_KEY environment variable required
api_key = "${OPENAI_API_KEY}"
model = "gpt-4o"
max_tokens = 4096
temperature = 0.7
# Cost per 1M tokens (input/output)
cost_per_1m_input = 2.50
cost_per_1m_output = 10.00
[providers.gemini]
enabled = true
# GOOGLE_API_KEY environment variable required
api_key = "${GOOGLE_API_KEY}"
model = "gemini-2.0-flash"
max_tokens = 8192
temperature = 0.7
# Cost per 1M tokens (input/output)
cost_per_1m_input = 0.30
cost_per_1m_output = 1.20
[providers.ollama]
enabled = true
# Local Ollama instance, no API key needed
url = "${OLLAMA_URL:-http://localhost:11434}"
model = "llama3.2"
max_tokens = 4096
temperature = 0.7
# No cost for local models
cost_per_1m_input = 0.00
cost_per_1m_output = 0.00
# Routing rules: assign providers based on task characteristics
[[routing_rules]]
name = "architecture_design"
condition = { task_type = "architecture" }
provider = "claude"
model_override = "claude-opus-4-20250514"
[[routing_rules]]
name = "code_generation"
condition = { task_type = "development" }
provider = "claude"
[[routing_rules]]
name = "documentation"
condition = { task_type = "documentation" }
provider = "openai"
[[routing_rules]]
name = "monitoring"
condition = { task_type = "monitoring" }
provider = "gemini"
[[routing_rules]]
name = "local_testing"
condition = { environment = "development" }
provider = "ollama"

46
config/vapora.toml Normal file
View File

@ -0,0 +1,46 @@
# VAPORA Server Configuration
# Phase 0: Environment-based configuration
[server]
# Server will read from environment variables:
# VAPORA_HOST (default: 127.0.0.1)
# VAPORA_PORT (default: 3000)
host = "${VAPORA_HOST:-127.0.0.1}"
port = ${VAPORA_PORT:-3000}
[server.tls]
# TLS configuration (optional)
# VAPORA_TLS_CERT_PATH
# VAPORA_TLS_KEY_PATH
enabled = ${VAPORA_TLS_ENABLED:-false}
cert_path = "${VAPORA_TLS_CERT_PATH:-}"
key_path = "${VAPORA_TLS_KEY_PATH:-}"
[database]
# Database connection
# VAPORA_DB_URL (required)
url = "${VAPORA_DB_URL}"
max_connections = ${VAPORA_DB_MAX_CONNECTIONS:-10}
[nats]
# NATS JetStream configuration
# VAPORA_NATS_URL (default: nats://localhost:4222)
url = "${VAPORA_NATS_URL:-nats://localhost:4222}"
stream_name = "${VAPORA_NATS_STREAM:-vapora-tasks}"
[auth]
# Authentication configuration
# VAPORA_JWT_SECRET (required in production)
jwt_secret = "${VAPORA_JWT_SECRET}"
jwt_expiration_hours = ${VAPORA_JWT_EXPIRATION_HOURS:-24}
[logging]
# Logging configuration
# VAPORA_LOG_LEVEL (default: info)
level = "${VAPORA_LOG_LEVEL:-info}"
json = ${VAPORA_LOG_JSON:-false}
[metrics]
# Metrics configuration
enabled = ${VAPORA_METRICS_ENABLED:-true}
port = ${VAPORA_METRICS_PORT:-9090}

129
config/workflows.toml Normal file
View File

@ -0,0 +1,129 @@
# Workflow Engine Configuration
# Phase 0: Workflow templates and execution rules
[engine]
# Maximum parallel tasks in a workflow
max_parallel_tasks = 10
# Workflow timeout (seconds)
workflow_timeout = 3600
# Enable approval gates
approval_gates_enabled = true
# Workflow Templates
[[workflows]]
name = "feature_development"
description = "Complete feature development workflow"
trigger = "task_type:feature"
# Workflow stages (sequential unless marked parallel)
[[workflows.stages]]
name = "architecture"
agents = ["architect"]
parallel = false
approval_required = true
[[workflows.stages]]
name = "implementation"
agents = ["developer"]
parallel = true
max_parallel = 3
[[workflows.stages]]
name = "review"
agents = ["code_reviewer", "security"]
parallel = true
[[workflows.stages]]
name = "testing"
agents = ["tester"]
parallel = false
[[workflows.stages]]
name = "documentation"
agents = ["documenter"]
parallel = true
[[workflows.stages]]
name = "deployment"
agents = ["devops"]
parallel = false
approval_required = true
[[workflows]]
name = "bugfix"
description = "Bug fix workflow"
trigger = "task_type:bugfix"
[[workflows.stages]]
name = "analysis"
agents = ["developer"]
parallel = false
[[workflows.stages]]
name = "fix_implementation"
agents = ["developer"]
parallel = false
[[workflows.stages]]
name = "review"
agents = ["code_reviewer"]
parallel = false
[[workflows.stages]]
name = "testing"
agents = ["tester"]
parallel = false
[[workflows.stages]]
name = "deployment"
agents = ["devops"]
parallel = false
[[workflows]]
name = "documentation_update"
description = "Update documentation workflow"
trigger = "task_type:documentation"
[[workflows.stages]]
name = "content_creation"
agents = ["documenter"]
parallel = false
[[workflows.stages]]
name = "review"
agents = ["code_reviewer", "project_manager"]
parallel = true
[[workflows.stages]]
name = "publish"
agents = ["devops"]
parallel = false
[[workflows]]
name = "security_audit"
description = "Security audit workflow"
trigger = "task_type:security"
[[workflows.stages]]
name = "audit"
agents = ["security"]
parallel = false
[[workflows.stages]]
name = "remediation"
agents = ["developer"]
parallel = true
[[workflows.stages]]
name = "verification"
agents = ["security", "tester"]
parallel = true
[[workflows.stages]]
name = "approval"
agents = ["decision_maker"]
parallel = false
approval_required = true

View File

@ -0,0 +1,64 @@
[package]
name = "vapora-agents"
version.workspace = true
edition.workspace = true
authors.workspace = true
license.workspace = true
repository.workspace = true
rust-version.workspace = true
[lib]
crate-type = ["rlib"]
[[bin]]
name = "vapora-agents"
path = "src/bin/server.rs"
[dependencies]
# Internal crates
vapora-shared = { workspace = true }
vapora-llm-router = { workspace = true }
vapora-knowledge-graph = { workspace = true }
vapora-swarm = { workspace = true }
# Secrets management
secretumvault = { workspace = true }
# Async runtime
tokio = { workspace = true }
futures = { workspace = true }
async-trait = { workspace = true }
# Web framework (for health checks)
axum = { workspace = true }
# Serialization
serde = { workspace = true }
serde_json = { workspace = true }
toml = { workspace = true }
# Error handling
anyhow = { workspace = true }
thiserror = { workspace = true }
# Message Queue
async-nats = { workspace = true }
# Database (Phase 5.5: KG persistence)
surrealdb = { workspace = true }
# LLM Agent Framework
rig-core = { workspace = true }
# RAG & Embeddings: Provided via vapora-llm-router using provider APIs
# Utilities
uuid = { workspace = true }
chrono = { workspace = true }
# Logging
tracing = { workspace = true }
tracing-subscriber = { workspace = true }
[dev-dependencies]
mockall = { workspace = true }
tempfile = { workspace = true }

View File

@ -0,0 +1,132 @@
//! VAPORA Agent Server Binary
//! Provides HTTP server for agent coordination and health checks
use anyhow::Result;
use axum::{
extract::State,
routing::get,
Json, Router,
};
use serde_json::json;
use std::sync::Arc;
use tokio::net::TcpListener;
use tracing::{info, error};
use vapora_agents::{
config::AgentConfig,
coordinator::AgentCoordinator,
registry::AgentRegistry,
};
use vapora_llm_router::{BudgetConfig, BudgetManager};
#[derive(Clone)]
struct AppState {
coordinator: Arc<AgentCoordinator>,
#[allow(dead_code)]
budget_manager: Option<Arc<BudgetManager>>,
}
#[tokio::main]
async fn main() -> Result<()> {
// Initialize tracing
tracing_subscriber::fmt()
.with_env_filter(
tracing_subscriber::EnvFilter::from_default_env()
.add_directive("vapora_agents=debug".parse()?),
)
.init();
info!("Starting VAPORA Agent Server");
// Load configuration
let config = AgentConfig::from_env()?;
info!("Loaded configuration from environment");
// Load budget configuration
let budget_config_path = std::env::var("BUDGET_CONFIG_PATH")
.unwrap_or_else(|_| "config/agent-budgets.toml".to_string());
let budget_manager = match BudgetConfig::load_or_default(&budget_config_path) {
Ok(budget_config) => {
if budget_config.budgets.is_empty() {
info!("No budget configuration found at {}, running without budget enforcement", budget_config_path);
None
} else {
let manager = Arc::new(BudgetManager::new(budget_config.budgets));
info!("Loaded budget configuration for {} roles", manager.list_budgets().await.len());
Some(manager)
}
}
Err(e) => {
error!("Failed to load budget configuration: {}", e);
return Err(e.into());
}
};
// Initialize agent registry and coordinator
// Max 10 agents per role (can be configured via environment)
let max_agents_per_role = std::env::var("MAX_AGENTS_PER_ROLE")
.ok()
.and_then(|v| v.parse().ok())
.unwrap_or(10);
let registry = Arc::new(AgentRegistry::new(max_agents_per_role));
let mut coordinator = AgentCoordinator::new(config, registry).await?;
// Attach budget manager to coordinator if available
if let Some(ref bm) = budget_manager {
coordinator = coordinator.with_budget_manager(bm.clone());
info!("Budget enforcement enabled for agent coordinator");
}
let coordinator = Arc::new(coordinator);
// Start coordinator
let _coordinator_handle = {
let coordinator = coordinator.clone();
tokio::spawn(async move {
if let Err(e) = coordinator.start().await {
error!("Coordinator error: {}", e);
}
})
};
// Build application state
let state = AppState { coordinator, budget_manager };
// Build HTTP router
let app = Router::new()
.route("/health", get(health_handler))
.route("/ready", get(readiness_handler))
.with_state(state);
// Start HTTP server
let addr = std::env::var("BIND_ADDR").unwrap_or_else(|_| "0.0.0.0:9000".to_string());
info!("Agent server listening on {}", addr);
let listener = TcpListener::bind(&addr).await?;
axum::serve(listener, app)
.await?;
// Note: coordinator_handle would be awaited here if needed,
// but axum::serve blocks until server shutdown
Ok(())
}
/// Health check endpoint
async fn health_handler() -> Json<serde_json::Value> {
Json(json!({
"status": "healthy",
"service": "vapora-agents",
"version": env!("CARGO_PKG_VERSION")
}))
}
/// Readiness check endpoint
async fn readiness_handler(State(state): State<AppState>) -> Json<serde_json::Value> {
let is_ready = state.coordinator.is_ready().await;
Json(json!({
"ready": is_ready,
"service": "vapora-agents",
"agents": state.coordinator.get_agent_count().await
}))
}

View File

@ -0,0 +1,229 @@
// vapora-agents: Agent configuration module
// Load and parse agent definitions from TOML
use serde::{Deserialize, Serialize};
use std::path::Path;
use thiserror::Error;
#[derive(Debug, Error)]
pub enum ConfigError {
#[error("Failed to read config file: {0}")]
ReadError(#[from] std::io::Error),
#[error("Failed to parse TOML: {0}")]
ParseError(#[from] toml::de::Error),
#[error("Invalid configuration: {0}")]
ValidationError(String),
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct AgentConfig {
pub registry: RegistryConfig,
pub agents: Vec<AgentDefinition>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct RegistryConfig {
#[serde(default = "default_max_agents")]
pub max_agents_per_role: u32,
#[serde(default = "default_health_check_interval")]
pub health_check_interval: u64,
#[serde(default = "default_agent_timeout")]
pub agent_timeout: u64,
}
fn default_max_agents() -> u32 {
5
}
fn default_health_check_interval() -> u64 {
30
}
fn default_agent_timeout() -> u64 {
300
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct AgentDefinition {
pub role: String,
pub description: String,
pub llm_provider: String,
pub llm_model: String,
#[serde(default)]
pub parallelizable: bool,
#[serde(default = "default_priority")]
pub priority: u32,
#[serde(default)]
pub capabilities: Vec<String>,
}
fn default_priority() -> u32 {
50
}
impl AgentConfig {
/// Load configuration from TOML file
pub fn load<P: AsRef<Path>>(path: P) -> Result<Self, ConfigError> {
let content = std::fs::read_to_string(path)?;
let config: Self = toml::from_str(&content)?;
config.validate()?;
Ok(config)
}
/// Load configuration from environment or default file
pub fn from_env() -> Result<Self, ConfigError> {
let config_path = std::env::var("VAPORA_AGENT_CONFIG")
.unwrap_or_else(|_| "/etc/vapora/agents.toml".to_string());
if Path::new(&config_path).exists() {
Self::load(&config_path)
} else {
// Return default config if file doesn't exist
Ok(Self::default())
}
}
/// Validate configuration
fn validate(&self) -> Result<(), ConfigError> {
// Check that all agent roles are unique
let mut roles = std::collections::HashSet::new();
for agent in &self.agents {
if !roles.insert(&agent.role) {
return Err(ConfigError::ValidationError(format!(
"Duplicate agent role: {}",
agent.role
)));
}
}
// Check that we have at least one agent
if self.agents.is_empty() {
return Err(ConfigError::ValidationError(
"No agents defined in configuration".to_string(),
));
}
Ok(())
}
/// Get agent definition by role
pub fn get_by_role(&self, role: &str) -> Option<&AgentDefinition> {
self.agents.iter().find(|a| a.role == role)
}
/// List all agent roles
pub fn list_roles(&self) -> Vec<String> {
self.agents.iter().map(|a| a.role.clone()).collect()
}
}
impl Default for AgentConfig {
fn default() -> Self {
Self {
registry: RegistryConfig {
max_agents_per_role: default_max_agents(),
health_check_interval: default_health_check_interval(),
agent_timeout: default_agent_timeout(),
},
agents: vec![
AgentDefinition {
role: "developer".to_string(),
description: "Code developer".to_string(),
llm_provider: "claude".to_string(),
llm_model: "claude-sonnet-4".to_string(),
parallelizable: true,
priority: 80,
capabilities: vec!["coding".to_string()],
},
],
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_default_values() {
let config = AgentConfig {
registry: RegistryConfig {
max_agents_per_role: 5,
health_check_interval: 30,
agent_timeout: 300,
},
agents: vec![AgentDefinition {
role: "developer".to_string(),
description: "Code developer".to_string(),
llm_provider: "claude".to_string(),
llm_model: "claude-sonnet-4".to_string(),
parallelizable: true,
priority: 80,
capabilities: vec!["coding".to_string()],
}],
};
assert!(config.validate().is_ok());
}
#[test]
fn test_duplicate_roles() {
let config = AgentConfig {
registry: RegistryConfig {
max_agents_per_role: 5,
health_check_interval: 30,
agent_timeout: 300,
},
agents: vec![
AgentDefinition {
role: "developer".to_string(),
description: "Code developer 1".to_string(),
llm_provider: "claude".to_string(),
llm_model: "claude-sonnet-4".to_string(),
parallelizable: true,
priority: 80,
capabilities: vec![],
},
AgentDefinition {
role: "developer".to_string(),
description: "Code developer 2".to_string(),
llm_provider: "claude".to_string(),
llm_model: "claude-sonnet-4".to_string(),
parallelizable: true,
priority: 80,
capabilities: vec![],
},
],
};
assert!(config.validate().is_err());
}
#[test]
fn test_get_by_role() {
let config = AgentConfig {
registry: RegistryConfig {
max_agents_per_role: 5,
health_check_interval: 30,
agent_timeout: 300,
},
agents: vec![AgentDefinition {
role: "architect".to_string(),
description: "System architect".to_string(),
llm_provider: "claude".to_string(),
llm_model: "claude-opus-4".to_string(),
parallelizable: false,
priority: 100,
capabilities: vec!["architecture".to_string()],
}],
};
let agent = config.get_by_role("architect");
assert!(agent.is_some());
assert_eq!(agent.unwrap().description, "System architect");
assert!(config.get_by_role("nonexistent").is_none());
}
}

View File

@ -0,0 +1,596 @@
// vapora-agents: Agent coordinator - orchestrates agent workflows
// Phase 2: Complete implementation with NATS integration
use crate::messages::{AgentMessage, TaskAssignment};
use crate::registry::{AgentRegistry, RegistryError};
use crate::scoring::AgentScoringService;
use crate::learning_profile::{LearningProfile, TaskTypeExpertise, ExecutionData};
use chrono::Utc;
use std::collections::HashMap;
use std::sync::Arc;
use thiserror::Error;
use tracing::{debug, info, warn};
use uuid::Uuid;
#[derive(Debug, Error)]
pub enum CoordinatorError {
#[error("No available agent for role: {0}")]
NoAvailableAgent(String),
#[error("Task not found: {0}")]
TaskNotFound(String),
#[error("Registry error: {0}")]
RegistryError(#[from] RegistryError),
#[error("NATS error: {0}")]
NatsError(String),
#[error("Invalid task state: {0}")]
InvalidTaskState(String),
}
use crate::config::AgentConfig;
use crate::profile_adapter::ProfileAdapter;
use vapora_swarm::coordinator::SwarmCoordinator;
use vapora_llm_router::BudgetManager;
/// Agent coordinator orchestrates task assignment and execution
pub struct AgentCoordinator {
registry: Arc<AgentRegistry>,
nats_client: Option<Arc<async_nats::Client>>,
#[allow(dead_code)]
swarm_coordinator: Option<Arc<SwarmCoordinator>>,
learning_profiles: Arc<std::sync::RwLock<HashMap<String, LearningProfile>>>,
budget_manager: Option<Arc<BudgetManager>>,
}
impl AgentCoordinator {
/// Create a new coordinator with config and registry
pub async fn new(
_config: AgentConfig,
registry: Arc<AgentRegistry>,
) -> Result<Self, CoordinatorError> {
// Optionally connect to NATS if configured
let nats_url = std::env::var("NATS_URL").ok();
let nats_client = if let Some(url) = nats_url {
match async_nats::connect(&url).await {
Ok(client) => {
info!("Connected to NATS at {}", url);
Some(Arc::new(client))
}
Err(e) => {
warn!("Failed to connect to NATS: {}", e);
None
}
}
} else {
None
};
// Initialize swarm coordinator (Phase 5.2)
let swarm_coordinator = Arc::new(SwarmCoordinator::new());
// Sync initial profiles from registry to swarm
let agents = registry.list_all();
let profiles = ProfileAdapter::batch_create_profiles(agents);
for profile in &profiles {
swarm_coordinator.register_agent(profile.clone()).ok();
}
// Spawn background profile sync task (every 30s)
let registry_clone = Arc::clone(&registry);
let swarm_clone = Arc::clone(&swarm_coordinator);
tokio::spawn(async move {
let mut interval = tokio::time::interval(std::time::Duration::from_secs(30));
loop {
interval.tick().await;
let agents = registry_clone.list_all();
let profiles = ProfileAdapter::batch_create_profiles(agents);
let profile_count = profiles.len();
for profile in &profiles {
swarm_clone.register_agent(profile.clone()).ok();
}
debug!("Synced {} agent profiles to swarm", profile_count);
}
});
Ok(Self {
registry,
nats_client,
swarm_coordinator: Some(swarm_coordinator),
learning_profiles: Arc::new(std::sync::RwLock::new(HashMap::new())),
budget_manager: None,
})
}
/// Create a simple coordinator with just registry (for testing)
pub fn with_registry(registry: Arc<AgentRegistry>) -> Self {
let swarm_coordinator = Arc::new(SwarmCoordinator::new());
let agents = registry.list_all();
let profiles = ProfileAdapter::batch_create_profiles(agents);
for profile in &profiles {
swarm_coordinator.register_agent(profile.clone()).ok();
}
Self {
registry,
nats_client: None,
swarm_coordinator: Some(swarm_coordinator),
learning_profiles: Arc::new(std::sync::RwLock::new(HashMap::new())),
budget_manager: None,
}
}
/// Set NATS client for inter-agent communication
pub fn with_nats(mut self, client: Arc<async_nats::Client>) -> Self {
self.nats_client = Some(client);
self
}
/// Set budget manager for cost enforcement
pub fn with_budget_manager(mut self, budget_manager: Arc<BudgetManager>) -> Self {
self.budget_manager = Some(budget_manager);
self
}
/// Assign a task to an available agent using learning-based scoring
pub async fn assign_task(
&self,
role: &str,
title: String,
description: String,
context: String,
priority: u32,
) -> Result<String, CoordinatorError> {
// Get all available candidates for role
let all_agents = self.registry.get_agents_by_role(role);
let candidates: Vec<_> = all_agents
.into_iter()
.filter(|a| a.can_accept_task())
.collect();
if candidates.is_empty() {
return Err(CoordinatorError::NoAvailableAgent(role.to_string()));
}
// Extract task_type from title (priority) or use role as fallback
// Simple heuristic: check if title/description contains known task types
let task_type = extract_task_type(&title, &description, role);
// Get learning profiles for all candidates
let learning_profiles = {
let profiles = self.learning_profiles.read().unwrap_or_else(|e| e.into_inner());
candidates
.iter()
.map(|a| (a.id.clone(), profiles.get(&a.id).cloned()))
.collect::<Vec<_>>()
};
// Build learning profiles map for scoring
let mut profiles_map = HashMap::new();
for (agent_id, profile) in learning_profiles {
if let Some(profile) = profile {
profiles_map.insert(agent_id, profile);
}
}
// Score candidates using learning profiles and SwarmCoordinator metrics
let agent = if !profiles_map.is_empty() {
// Use learning-based scoring
let swarm_profiles = candidates
.iter()
.map(|a| vapora_swarm::messages::AgentProfile {
id: a.id.clone(),
roles: vec![a.role.clone()],
capabilities: a.capabilities.clone(),
current_load: a.current_tasks as f64 / a.max_concurrent_tasks as f64,
success_rate: 0.5, // Will be overridden by learning scores
availability: a.can_accept_task(),
})
.collect();
let learning_profiles_vec = profiles_map
.iter()
.map(|(id, profile)| (id.clone(), profile.clone()))
.collect::<Vec<_>>();
let ranked = AgentScoringService::rank_agents(
swarm_profiles,
&task_type,
&learning_profiles_vec,
);
// Get top-scored agent
if let Some(top_score) = ranked.first() {
candidates
.iter()
.find(|a| a.id == top_score.agent_id)
.cloned()
.ok_or_else(|| CoordinatorError::NoAvailableAgent(role.to_string()))?
} else {
return Err(CoordinatorError::NoAvailableAgent(role.to_string()));
}
} else {
// Fall back to load-based selection (minimum current tasks)
candidates
.into_iter()
.min_by_key(|a| a.current_tasks)
.ok_or_else(|| CoordinatorError::NoAvailableAgent(role.to_string()))?
};
// Create task assignment
let task_id = Uuid::new_v4().to_string();
let assignment = TaskAssignment {
id: task_id.clone(),
agent_id: agent.id.clone(),
required_role: role.to_string(),
title,
description,
context,
priority,
deadline: None,
assigned_at: Utc::now(),
};
// Update registry
self.registry.assign_task(&agent.id)?;
info!(
"Assigned task {} to agent {} (role: {}, task_type: {})",
task_id, agent.id, role, task_type
);
// Publish to NATS if available
if let Some(nats) = &self.nats_client {
self.publish_message(nats, AgentMessage::TaskAssigned(assignment))
.await?;
}
Ok(task_id)
}
/// Complete a task
pub async fn complete_task(
&self,
task_id: &str,
agent_id: &str,
) -> Result<(), CoordinatorError> {
// Update registry
self.registry.complete_task(agent_id)?;
info!("Task {} completed by agent {}", task_id, agent_id);
Ok(())
}
/// Publish message to NATS
async fn publish_message(
&self,
nats: &async_nats::Client,
message: AgentMessage,
) -> Result<(), CoordinatorError> {
let subject = match &message {
AgentMessage::TaskAssigned(_) => crate::messages::subjects::TASKS_ASSIGNED,
AgentMessage::TaskStarted(_) => crate::messages::subjects::TASKS_STARTED,
AgentMessage::TaskProgress(_) => crate::messages::subjects::TASKS_PROGRESS,
AgentMessage::TaskCompleted(_) => crate::messages::subjects::TASKS_COMPLETED,
AgentMessage::TaskFailed(_) => crate::messages::subjects::TASKS_FAILED,
AgentMessage::Heartbeat(_) => crate::messages::subjects::AGENT_HEARTBEAT,
AgentMessage::AgentRegistered(_) => crate::messages::subjects::AGENT_REGISTERED,
AgentMessage::AgentStopped(_) => crate::messages::subjects::AGENT_STOPPED,
};
let bytes = message
.to_bytes()
.map_err(|e| CoordinatorError::NatsError(e.to_string()))?;
nats.publish(subject.to_string(), bytes.into())
.await
.map_err(|e| CoordinatorError::NatsError(e.to_string()))?;
debug!("Published {} to {}", message.message_type(), subject);
Ok(())
}
/// Subscribe to agent heartbeats
pub async fn subscribe_heartbeats(
&self,
) -> Result<async_nats::Subscriber, CoordinatorError> {
if let Some(nats) = &self.nats_client {
let subject = crate::messages::subjects::AGENT_HEARTBEAT.to_string();
let sub = nats
.subscribe(subject)
.await
.map_err(|e| CoordinatorError::NatsError(e.to_string()))?;
info!("Subscribed to agent heartbeats");
Ok(sub)
} else {
Err(CoordinatorError::NatsError(
"NATS client not configured".to_string(),
))
}
}
/// Handle heartbeat message
pub async fn handle_heartbeat(&self, agent_id: &str) -> Result<(), CoordinatorError> {
self.registry.heartbeat(agent_id)?;
debug!("Updated heartbeat for agent {}", agent_id);
Ok(())
}
/// Get registry reference
pub fn registry(&self) -> Arc<AgentRegistry> {
Arc::clone(&self.registry)
}
/// Start coordinator (subscribe to NATS topics)
pub async fn start(&self) -> Result<(), CoordinatorError> {
if self.nats_client.is_none() {
warn!("NATS client not configured, running in standalone mode");
return Ok(());
}
info!("Agent coordinator started");
Ok(())
}
/// Stop coordinator
pub async fn stop(&self) -> Result<(), CoordinatorError> {
info!("Agent coordinator stopped");
Ok(())
}
/// Check if coordinator is ready to accept tasks
pub async fn is_ready(&self) -> bool {
self.registry.total_count() > 0
}
/// Get count of registered agents
pub async fn get_agent_count(&self) -> usize {
self.registry.total_count()
}
/// Load learning profile for agent from KG execution history.
/// Queries KG for task-type specific executions and builds expertise metrics.
/// This is the core integration between KG persistence and learning profiles.
///
/// Process:
/// 1. Query KG for task-type specific executions (limited to recent)
/// 2. Convert PersistedExecution to ExecutionData
/// 3. Calculate TaskTypeExpertise with recency bias
/// 4. Return complete LearningProfile
///
/// Note: Requires KG persistence layer with populated kg_executions table.
pub async fn load_learning_profile_from_kg(
&self,
agent_id: &str,
task_type: &str,
kg_persistence: &vapora_knowledge_graph::KGPersistence,
) -> Result<LearningProfile, CoordinatorError> {
debug!(
"Loading learning profile for agent {} task_type {} from KG",
agent_id, task_type
);
// Query KG for recent task-type specific executions
let persisted_executions = kg_persistence
.get_executions_for_task_type(agent_id, task_type, 100)
.await
.map_err(|e| CoordinatorError::InvalidTaskState(format!("KG query failed: {}", e)))?;
// Convert to ExecutionData for learning calculations
let execution_data: Vec<ExecutionData> = persisted_executions
.iter()
.filter_map(|pe| {
let timestamp = chrono::DateTime::parse_from_rfc3339(&pe.executed_at)
.ok()
.map(|dt| dt.with_timezone(&Utc));
timestamp.map(|ts| ExecutionData {
timestamp: ts,
duration_ms: pe.duration_ms,
success: pe.outcome == "success",
})
})
.collect();
debug!(
"Converted {} persisted executions to ExecutionData for learning",
execution_data.len()
);
// Create learning profile and compute expertise
let mut profile = LearningProfile::new(agent_id.to_string());
let expertise = TaskTypeExpertise::from_executions(execution_data, task_type);
profile.set_task_type_expertise(task_type.to_string(), expertise);
info!(
"Loaded learning profile for agent {} task_type {} (success_rate={:.2}, confidence={:.2})",
agent_id,
task_type,
profile.get_task_type_score(task_type),
profile.get_confidence(task_type)
);
Ok(profile)
}
/// Batch load learning profiles for multiple agents from KG.
/// Loads per-task-type expertise for all registered agents.
pub async fn load_all_learning_profiles(
&self,
task_type: &str,
kg_persistence: &vapora_knowledge_graph::KGPersistence,
) -> Result<(), CoordinatorError> {
let agents = self.registry.list_all();
debug!(
"Batch loading learning profiles for {} agents (task_type: {})",
agents.len(),
task_type
);
for agent in agents {
match self.load_learning_profile_from_kg(&agent.id, task_type, kg_persistence).await {
Ok(profile) => {
self.update_learning_profile(&agent.id, profile)?;
}
Err(e) => {
warn!("Failed to load learning profile for agent {}: {}", agent.id, e);
// Continue with other agents on failure
}
}
}
info!("Batch loaded learning profiles for task_type: {}", task_type);
Ok(())
}
/// Update learning profile for an agent
/// Allows synchronization of learning data from Knowledge Graph
pub fn update_learning_profile(
&self,
agent_id: &str,
profile: LearningProfile,
) -> Result<(), CoordinatorError> {
let mut profiles = self.learning_profiles.write()
.map_err(|_| CoordinatorError::InvalidTaskState("Failed to acquire write lock on learning profiles".to_string()))?;
profiles.insert(agent_id.to_string(), profile);
debug!("Updated learning profile for agent {}", agent_id);
Ok(())
}
/// Get learning profile for an agent
pub fn get_learning_profile(&self, agent_id: &str) -> Option<LearningProfile> {
let profiles = self.learning_profiles.read()
.map(|p| p.get(agent_id).cloned())
.ok()
.flatten();
profiles
}
/// Get all learning profiles
pub fn get_all_learning_profiles(&self) -> HashMap<String, LearningProfile> {
self.learning_profiles.read()
.map(|p| p.clone())
.unwrap_or_default()
}
/// Clear all learning profiles (useful for testing)
pub fn clear_learning_profiles(&self) -> Result<(), CoordinatorError> {
let mut profiles = self.learning_profiles.write()
.map_err(|_| CoordinatorError::InvalidTaskState("Failed to acquire write lock".to_string()))?;
profiles.clear();
debug!("Cleared all learning profiles");
Ok(())
}
}
/// Extract task type from title/description for learning-based scoring
/// Uses simple heuristics to identify task categories from text
fn extract_task_type(title: &str, description: &str, role: &str) -> String {
let combined = format!("{} {}", title.to_lowercase(), description.to_lowercase());
// Check for known task types in combined text
if combined.contains("code") || combined.contains("implement") || combined.contains("develop") {
return "coding".to_string();
}
if combined.contains("test") || combined.contains("verify") {
return "testing".to_string();
}
if combined.contains("review") || combined.contains("inspect") {
return "review".to_string();
}
if combined.contains("document") || combined.contains("write") || combined.contains("doc") {
return "documentation".to_string();
}
if combined.contains("design") || combined.contains("architect") || combined.contains("plan") {
return "architecture".to_string();
}
if combined.contains("bug") || combined.contains("fix") || combined.contains("issue") {
return "debugging".to_string();
}
if combined.contains("refactor") || combined.contains("improve") || combined.contains("clean") {
return "refactoring".to_string();
}
// Default to role if no specific task type detected
role.to_string()
}
impl Default for AgentCoordinator {
fn default() -> Self {
Self::with_registry(Arc::new(AgentRegistry::default()))
}
}
impl Drop for AgentCoordinator {
fn drop(&mut self) {
debug!("AgentCoordinator dropped, profile sync task will continue in background");
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::registry::AgentMetadata;
#[tokio::test]
async fn test_coordinator_creation() {
let registry = Arc::new(AgentRegistry::new(5));
let coordinator = AgentCoordinator::with_registry(registry);
assert!(coordinator.nats_client.is_none());
}
#[tokio::test]
async fn test_task_assignment() {
let registry = Arc::new(AgentRegistry::new(5));
// Register an agent
let agent = AgentMetadata::new(
"developer".to_string(),
"Developer 1".to_string(),
"claude".to_string(),
"claude-sonnet-4".to_string(),
vec!["coding".to_string()],
);
registry.register_agent(agent).unwrap();
let coordinator = AgentCoordinator::with_registry(registry);
let task_id = coordinator
.assign_task(
"developer",
"Test task".to_string(),
"Description".to_string(),
"{}".to_string(),
80,
)
.await;
assert!(task_id.is_ok());
}
#[tokio::test]
async fn test_no_available_agent() {
let registry = Arc::new(AgentRegistry::new(5));
let coordinator = AgentCoordinator::with_registry(registry);
let result = coordinator
.assign_task(
"nonexistent",
"Test task".to_string(),
"Description".to_string(),
"{}".to_string(),
80,
)
.await;
assert!(result.is_err());
}
}

View File

@ -0,0 +1,319 @@
use chrono::{DateTime, Utc};
use std::collections::HashMap;
#[cfg(test)]
use chrono::Duration;
/// Per-task-type expertise tracking for agents with recency bias.
/// Recent performance (last 7 days) weighted 3x higher than historical averages.
#[derive(Debug, Clone)]
pub struct LearningProfile {
pub agent_id: String,
pub task_type_expertise: HashMap<String, TaskTypeExpertise>,
pub last_updated: DateTime<Utc>,
}
/// Task-specific expertise metrics with learning curves.
#[derive(Debug, Clone)]
pub struct TaskTypeExpertise {
/// Overall success rate (0.0-1.0) including all historical data.
pub success_rate: f64,
/// Total number of executions for this task type.
pub total_executions: u32,
/// Success rate for last 7 days with recency bias applied.
/// Recent performance weighted 3x higher than older data.
pub recent_success_rate: f64,
/// Average duration in milliseconds.
pub avg_duration_ms: f64,
/// Time-series of expertise evolution as (timestamp, success_rate).
/// Computed by aggregating executions into daily/weekly windows.
pub learning_curve: Vec<(DateTime<Utc>, f64)>,
/// Confidence score (0.0-1.0) based on execution count.
/// Prevents overfitting: min(1.0, total_executions / 20).
pub confidence: f64,
}
impl LearningProfile {
/// Create new empty learning profile for agent.
pub fn new(agent_id: String) -> Self {
Self {
agent_id,
task_type_expertise: HashMap::new(),
last_updated: Utc::now(),
}
}
/// Add or update expertise for a task type.
pub fn set_task_type_expertise(&mut self, task_type: String, expertise: TaskTypeExpertise) {
self.task_type_expertise.insert(task_type, expertise);
self.last_updated = Utc::now();
}
/// Get expertise score for specific task type, default to 0.5 if unknown.
pub fn get_task_type_score(&self, task_type: &str) -> f64 {
self.task_type_expertise
.get(task_type)
.map(|e| e.success_rate)
.unwrap_or(0.5)
}
/// Get recent success rate for task type (weighted with recency bias).
/// Returns recent_success_rate if available, falls back to overall success_rate.
pub fn get_recent_score(&self, task_type: &str) -> f64 {
self.task_type_expertise
.get(task_type)
.map(|e| {
if e.total_executions >= 5 {
e.recent_success_rate
} else {
e.success_rate
}
})
.unwrap_or(0.5)
}
/// Get confidence score for task type (0.0-1.0 based on execution count).
pub fn get_confidence(&self, task_type: &str) -> f64 {
self.task_type_expertise
.get(task_type)
.map(|e| e.confidence)
.unwrap_or(0.0)
}
}
impl TaskTypeExpertise {
/// Create expertise metrics from execution data.
/// Calculates success_rate, confidence, and applies recency bias.
pub fn from_executions(
executions: Vec<ExecutionData>,
_task_type: &str,
) -> Self {
if executions.is_empty() {
return Self {
success_rate: 0.5,
total_executions: 0,
recent_success_rate: 0.5,
avg_duration_ms: 0.0,
learning_curve: Vec::new(),
confidence: 0.0,
};
}
let total_executions = executions.len() as u32;
let success_count = executions.iter().filter(|e| e.success).count() as u32;
let success_rate = success_count as f64 / total_executions as f64;
let total_duration: u64 = executions.iter().map(|e| e.duration_ms).sum();
let avg_duration_ms = total_duration as f64 / total_executions as f64;
let recent_success_rate = calculate_recency_weighted_success(&executions);
let confidence = (total_executions as f64 / 20.0).min(1.0);
let learning_curve = calculate_learning_curve(&executions);
Self {
success_rate,
total_executions,
recent_success_rate,
avg_duration_ms,
learning_curve,
confidence,
}
}
/// Update expertise with new execution result.
pub fn update_with_execution(&mut self, execution: &ExecutionData) {
let new_count = self.total_executions + 1;
let new_success_count =
(self.success_rate * self.total_executions as f64).round() as u32
+ if execution.success { 1 } else { 0 };
self.success_rate = new_success_count as f64 / new_count as f64;
self.total_executions = new_count;
self.confidence = (new_count as f64 / 20.0).min(1.0);
let total_duration = self.avg_duration_ms * self.total_executions as f64 - self.avg_duration_ms
+ execution.duration_ms as f64;
self.avg_duration_ms = total_duration / new_count as f64;
}
}
/// Execution data for calculating expertise metrics.
#[derive(Debug, Clone)]
pub struct ExecutionData {
pub timestamp: DateTime<Utc>,
pub duration_ms: u64,
pub success: bool,
}
/// Calculate success rate with recency bias.
/// Last 7 days weighted 3x higher: weight = 3.0 * e^(-days_ago / 7.0).
fn calculate_recency_weighted_success(executions: &[ExecutionData]) -> f64 {
if executions.is_empty() {
return 0.5;
}
let now = Utc::now();
let mut weighted_success = 0.0;
let mut total_weight = 0.0;
for execution in executions {
let days_ago = (now - execution.timestamp).num_days() as f64;
let weight = if days_ago < 7.0 {
3.0 * (-days_ago / 7.0).exp()
} else {
(-days_ago / 7.0).exp()
};
weighted_success += weight * if execution.success { 1.0 } else { 0.0 };
total_weight += weight;
}
if total_weight > 0.0 {
weighted_success / total_weight
} else {
0.5
}
}
/// Calculate learning curve as time-series of expertise evolution.
/// Groups executions into daily windows and computes success rate per window.
fn calculate_learning_curve(executions: &[ExecutionData]) -> Vec<(DateTime<Utc>, f64)> {
if executions.is_empty() {
return Vec::new();
}
let mut by_day: HashMap<DateTime<Utc>, (u32, u32)> = HashMap::new();
for execution in executions {
let day_start = execution
.timestamp
.date_naive()
.and_hms_opt(0, 0, 0)
.map(|dt| dt.and_utc())
.unwrap_or_else(|| execution.timestamp);
let (total, success) = by_day.entry(day_start).or_insert((0, 0));
*total += 1;
if execution.success {
*success += 1;
}
}
let mut curve: Vec<_> = by_day
.iter()
.map(|(day, (total, success))| (*day, *success as f64 / *total as f64))
.collect();
curve.sort_by_key(|entry| entry.0);
curve
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_learning_profile_creation() {
let profile = LearningProfile::new("agent-1".to_string());
assert_eq!(profile.agent_id, "agent-1");
assert_eq!(profile.task_type_expertise.len(), 0);
}
#[test]
fn test_task_type_expertise_from_executions() {
let executions = vec![
ExecutionData {
timestamp: Utc::now() - Duration::hours(1),
duration_ms: 100,
success: true,
},
ExecutionData {
timestamp: Utc::now() - Duration::hours(2),
duration_ms: 150,
success: true,
},
ExecutionData {
timestamp: Utc::now() - Duration::hours(3),
duration_ms: 120,
success: false,
},
];
let expertise = TaskTypeExpertise::from_executions(executions, "coding");
assert_eq!(expertise.total_executions, 3);
assert!((expertise.success_rate - 2.0 / 3.0).abs() < 0.01);
assert!((expertise.avg_duration_ms - 123.33).abs() < 1.0);
assert!((expertise.confidence - 0.15).abs() < 0.01); // 3/20 = 0.15
}
#[test]
fn test_recency_bias_weights_recent_higher() {
let now = Utc::now();
let executions = vec![
ExecutionData {
timestamp: now - Duration::hours(1),
duration_ms: 100,
success: true,
},
ExecutionData {
timestamp: now - Duration::days(8),
duration_ms: 100,
success: false,
},
];
let recent = calculate_recency_weighted_success(&executions);
assert!(recent > 0.5); // Recent success pulls average up
}
#[test]
fn test_confidence_capped_at_one() {
let executions = (0..100)
.map(|i| ExecutionData {
timestamp: Utc::now() - Duration::hours(i),
duration_ms: 100,
success: true,
})
.collect();
let expertise = TaskTypeExpertise::from_executions(executions, "coding");
assert_eq!(expertise.confidence, 1.0);
}
#[test]
fn test_empty_executions() {
let expertise = TaskTypeExpertise::from_executions(Vec::new(), "coding");
assert_eq!(expertise.total_executions, 0);
assert_eq!(expertise.success_rate, 0.5);
assert_eq!(expertise.confidence, 0.0);
}
#[test]
fn test_learning_curve_generation() {
let now = Utc::now();
let executions = vec![
ExecutionData {
timestamp: now - Duration::hours(25),
duration_ms: 100,
success: true,
},
ExecutionData {
timestamp: now - Duration::hours(24),
duration_ms: 100,
success: true,
},
ExecutionData {
timestamp: now - Duration::hours(1),
duration_ms: 100,
success: false,
},
];
let curve = calculate_learning_curve(&executions);
assert!(curve.len() > 0);
// Earlier executions should have lower timestamps
for i in 1..curve.len() {
assert!(curve[i - 1].0 <= curve[i].0);
}
}
}

View File

@ -0,0 +1,27 @@
// vapora-agents: Agent registry and coordination for VAPORA v1.0
// Phase 3: Real agent execution with type-state runtime
// Phase 5.3: Multi-agent learning from KG patterns
pub mod config;
pub mod coordinator;
pub mod learning_profile;
pub mod loader;
pub mod messages;
pub mod profile_adapter;
pub mod registry;
pub mod runtime;
pub mod scoring;
// Re-exports
pub use config::{AgentConfig, AgentDefinition, RegistryConfig};
pub use coordinator::{AgentCoordinator, CoordinatorError};
pub use learning_profile::{ExecutionData, LearningProfile, TaskTypeExpertise};
pub use loader::{AgentDefinitionLoader, LoaderError};
pub use messages::{
AgentMessage, AgentRegistered, AgentStopped, Heartbeat, TaskAssignment, TaskCompleted,
TaskFailed, TaskProgress, TaskStarted,
};
pub use profile_adapter::ProfileAdapter;
pub use registry::{AgentMetadata, AgentRegistry, AgentStatus, RegistryError};
pub use runtime::{Agent, AgentExecutor, Completed, ExecutionResult, Executing, Failed, Idle, NatsConsumer};
pub use scoring::{AgentScore, AgentScoringService};

View File

@ -0,0 +1,170 @@
// Agent definition loader - loads agent configurations from JSON files
// Phase 3: Support for agent definition files
use crate::config::AgentDefinition;
use serde_json;
use std::fs;
use std::path::Path;
use thiserror::Error;
#[derive(Debug, Error)]
pub enum LoaderError {
#[error("Failed to read file: {0}")]
IoError(#[from] std::io::Error),
#[error("Failed to parse JSON: {0}")]
JsonError(#[from] serde_json::Error),
#[error("Agent not found: {0}")]
AgentNotFound(String),
#[error("Invalid agent definition: {0}")]
InvalidDefinition(String),
}
pub type Result<T> = std::result::Result<T, LoaderError>;
/// Load agent definitions from JSON files
pub struct AgentDefinitionLoader;
impl AgentDefinitionLoader {
/// Load a single agent definition from a JSON file
pub fn load_from_file<P: AsRef<Path>>(path: P) -> Result<AgentDefinition> {
let content = fs::read_to_string(path)?;
let definition = serde_json::from_str(&content)?;
Ok(definition)
}
/// Load all agent definitions from a directory
pub fn load_from_directory<P: AsRef<Path>>(directory: P) -> Result<Vec<AgentDefinition>> {
let dir = directory.as_ref();
if !dir.exists() {
return Err(LoaderError::AgentNotFound(format!(
"Directory not found: {}",
dir.display()
)));
}
let mut definitions = Vec::new();
for entry in fs::read_dir(dir)? {
let entry = entry?;
let path = entry.path();
// Only load .json files
if path.extension().is_some_and(|ext| ext == "json") {
match Self::load_from_file(&path) {
Ok(definition) => definitions.push(definition),
Err(e) => {
// Log warning but continue loading other files
eprintln!("Warning: Failed to load {}: {}", path.display(), e);
}
}
}
}
if definitions.is_empty() {
return Err(LoaderError::AgentNotFound(format!(
"No agent definitions found in {}",
dir.display()
)));
}
Ok(definitions)
}
/// Load a specific agent by role from a directory
pub fn load_by_role<P: AsRef<Path>>(directory: P, role: &str) -> Result<AgentDefinition> {
let definitions = Self::load_from_directory(directory)?;
definitions
.into_iter()
.find(|def| def.role == role)
.ok_or_else(|| LoaderError::AgentNotFound(role.to_string()))
}
}
#[cfg(test)]
mod tests {
use super::*;
use serde_json::json;
use std::io::Write;
use tempfile::TempDir;
#[test]
fn test_load_from_file() -> Result<()> {
let temp_dir = TempDir::new().map_err(|e| LoaderError::IoError(e))?;
let file_path = temp_dir.path().join("test.json");
let definition = json!({
"role": "developer",
"description": "Test Developer",
"llm_provider": "claude",
"llm_model": "claude-sonnet-4-5",
"capabilities": ["code_generation"],
"parallelizable": true,
"priority": 50
});
let mut file = fs::File::create(&file_path)?;
file.write_all(serde_json::to_string(&definition)?.as_bytes())?;
let loaded = AgentDefinitionLoader::load_from_file(&file_path)?;
assert_eq!(loaded.role, "developer");
assert_eq!(loaded.description, "Test Developer");
Ok(())
}
#[test]
fn test_load_from_directory() -> Result<()> {
let temp_dir = TempDir::new().map_err(|e| LoaderError::IoError(e))?;
// Create multiple agent files
for (role, desc) in &[("developer", "Developer"), ("reviewer", "Reviewer")] {
let definition = json!({
"role": role,
"description": desc,
"llm_provider": "claude",
"llm_model": "claude-sonnet-4-5",
"capabilities": ["testing"],
"parallelizable": false,
"priority": 50
});
let file_path = temp_dir.path().join(format!("{}.json", role));
let mut file = fs::File::create(&file_path)?;
file.write_all(serde_json::to_string(&definition)?.as_bytes())?;
}
let definitions = AgentDefinitionLoader::load_from_directory(temp_dir.path())?;
assert_eq!(definitions.len(), 2);
Ok(())
}
#[test]
fn test_load_by_role() -> Result<()> {
let temp_dir = TempDir::new().map_err(|e| LoaderError::IoError(e))?;
let definition = json!({
"role": "developer",
"description": "Developer",
"llm_provider": "claude",
"llm_model": "claude-sonnet-4-5",
"capabilities": ["coding"],
"parallelizable": true,
"priority": 50
});
let file_path = temp_dir.path().join("developer.json");
let mut file = fs::File::create(&file_path)?;
file.write_all(serde_json::to_string(&definition)?.as_bytes())?;
let loaded = AgentDefinitionLoader::load_by_role(temp_dir.path(), "developer")?;
assert_eq!(loaded.role, "developer");
Ok(())
}
}

View File

@ -0,0 +1,193 @@
// vapora-agents: NATS message protocol for inter-agent communication
// Phase 2: Message types for agent coordination
use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};
/// Agent message envelope for NATS pub/sub
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(tag = "type", rename_all = "snake_case")]
pub enum AgentMessage {
TaskAssigned(TaskAssignment),
TaskStarted(TaskStarted),
TaskProgress(TaskProgress),
TaskCompleted(TaskCompleted),
TaskFailed(TaskFailed),
Heartbeat(Heartbeat),
AgentRegistered(AgentRegistered),
AgentStopped(AgentStopped),
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct TaskAssignment {
pub id: String,
pub agent_id: String,
pub required_role: String,
pub title: String,
pub description: String,
pub context: String,
pub priority: u32,
pub deadline: Option<DateTime<Utc>>,
pub assigned_at: DateTime<Utc>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct TaskStarted {
pub task_id: String,
pub agent_id: String,
pub started_at: DateTime<Utc>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct TaskProgress {
pub task_id: String,
pub agent_id: String,
pub progress_percent: u32,
pub current_step: String,
pub estimated_completion: Option<DateTime<Utc>>,
pub updated_at: DateTime<Utc>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct TaskCompleted {
pub task_id: String,
pub agent_id: String,
pub result: String,
pub artifacts: Vec<String>,
pub tokens_used: u64,
pub duration_ms: u64,
pub completed_at: DateTime<Utc>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct TaskFailed {
pub task_id: String,
pub agent_id: String,
pub error: String,
pub retry_count: u32,
pub can_retry: bool,
pub failed_at: DateTime<Utc>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Heartbeat {
pub agent_id: String,
pub status: String,
pub load: f64,
pub active_tasks: u32,
pub total_tasks_completed: u64,
pub uptime_seconds: u64,
pub timestamp: DateTime<Utc>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct AgentRegistered {
pub agent_id: String,
pub role: String,
pub version: String,
pub capabilities: Vec<String>,
pub registered_at: DateTime<Utc>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct AgentStopped {
pub agent_id: String,
pub role: String,
pub reason: String,
pub stopped_at: DateTime<Utc>,
}
impl AgentMessage {
/// Serialize message to JSON bytes for NATS
pub fn to_bytes(&self) -> Result<Vec<u8>, serde_json::Error> {
serde_json::to_vec(self)
}
/// Deserialize message from JSON bytes
pub fn from_bytes(bytes: &[u8]) -> Result<Self, serde_json::Error> {
serde_json::from_slice(bytes)
}
/// Get message type as string
pub fn message_type(&self) -> &str {
match self {
AgentMessage::TaskAssigned(_) => "task_assigned",
AgentMessage::TaskStarted(_) => "task_started",
AgentMessage::TaskProgress(_) => "task_progress",
AgentMessage::TaskCompleted(_) => "task_completed",
AgentMessage::TaskFailed(_) => "task_failed",
AgentMessage::Heartbeat(_) => "heartbeat",
AgentMessage::AgentRegistered(_) => "agent_registered",
AgentMessage::AgentStopped(_) => "agent_stopped",
}
}
}
/// NATS subjects for agent communication
pub mod subjects {
pub const TASKS_ASSIGNED: &str = "vapora.tasks.assigned";
pub const TASKS_STARTED: &str = "vapora.tasks.started";
pub const TASKS_PROGRESS: &str = "vapora.tasks.progress";
pub const TASKS_COMPLETED: &str = "vapora.tasks.completed";
pub const TASKS_FAILED: &str = "vapora.tasks.failed";
pub const AGENT_HEARTBEAT: &str = "vapora.agent.heartbeat";
pub const AGENT_REGISTERED: &str = "vapora.agent.registered";
pub const AGENT_STOPPED: &str = "vapora.agent.stopped";
/// Get subject for a specific agent role
pub fn agent_role_subject(role: &str) -> String {
format!("vapora.agent.role.{}", role)
}
/// Get subject for a specific task
pub fn task_subject(task_id: &str) -> String {
format!("vapora.task.{}", task_id)
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_message_serialization() {
let msg = AgentMessage::TaskAssigned(TaskAssignment {
id: "task-123".to_string(),
agent_id: "agent-001".to_string(),
required_role: "developer".to_string(),
title: "Test task".to_string(),
description: "Test description".to_string(),
context: "{}".to_string(),
priority: 80,
deadline: None,
assigned_at: Utc::now(),
});
let bytes = msg.to_bytes().unwrap();
let deserialized = AgentMessage::from_bytes(&bytes).unwrap();
assert_eq!(msg.message_type(), deserialized.message_type());
}
#[test]
fn test_heartbeat_message() {
let heartbeat = Heartbeat {
agent_id: "agent-001".to_string(),
status: "active".to_string(),
load: 0.5,
active_tasks: 2,
total_tasks_completed: 100,
uptime_seconds: 3600,
timestamp: Utc::now(),
};
let msg = AgentMessage::Heartbeat(heartbeat);
assert_eq!(msg.message_type(), "heartbeat");
}
#[test]
fn test_subject_generation() {
assert_eq!(subjects::agent_role_subject("developer"), "vapora.agent.role.developer");
assert_eq!(subjects::task_subject("task-123"), "vapora.task.task-123");
}
}

View File

@ -0,0 +1,218 @@
// Profile adapter: AgentMetadata + KG metrics → Swarm AgentProfile
// Phase 5.2: Bridges agent registry with swarm coordination
// Phase 5.3: Integrates per-task-type learning profiles from KG
use crate::learning_profile::{LearningProfile, TaskTypeExpertise};
use crate::registry::AgentMetadata;
use vapora_swarm::messages::AgentProfile;
/// Adapter that converts AgentMetadata to SwarmCoordinator AgentProfile
pub struct ProfileAdapter;
impl ProfileAdapter {
/// Create a swarm profile from agent metadata
pub fn create_profile(agent: &AgentMetadata) -> AgentProfile {
// Extract roles from capabilities (simplistic mapping)
let roles = agent
.capabilities
.iter()
.take(1)
.cloned()
.collect();
AgentProfile {
id: agent.id.clone(),
roles,
capabilities: agent.capabilities.clone(),
current_load: agent.current_tasks as f64 / agent.max_concurrent_tasks as f64,
success_rate: 0.5, // Default: neutral until KG metrics available
availability: agent.status == crate::registry::AgentStatus::Active,
}
}
/// Create profiles for multiple agents
pub fn batch_create_profiles(agents: Vec<AgentMetadata>) -> Vec<AgentProfile> {
agents.into_iter().map(|agent| Self::create_profile(&agent)).collect()
}
/// Update profile from KG success rate (Phase 5.5 integration)
pub fn update_with_kg_metrics(mut profile: AgentProfile, success_rate: f64) -> AgentProfile {
profile.success_rate = success_rate;
profile
}
/// Create learning profile from agent with task-type expertise.
/// Integrates per-task-type learning data from KG for intelligent assignment.
pub fn create_learning_profile(agent_id: String) -> LearningProfile {
LearningProfile::new(agent_id)
}
/// Enhance learning profile with task-type expertise from KG data.
/// Updates the profile with calculated expertise for specific task type.
pub fn add_task_type_expertise(
mut profile: LearningProfile,
task_type: String,
expertise: TaskTypeExpertise,
) -> LearningProfile {
profile.set_task_type_expertise(task_type, expertise);
profile
}
/// Update agent profile success rate from learning profile task-type score.
/// Uses learned expertise for the specified task type, with fallback to default.
pub fn update_profile_with_learning(
mut profile: AgentProfile,
learning_profile: &LearningProfile,
task_type: &str,
) -> AgentProfile {
profile.success_rate = learning_profile.get_task_type_score(task_type);
profile
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_profile_creation_from_metadata() {
let agent = AgentMetadata {
id: "agent-1".to_string(),
role: "developer".to_string(),
name: "Dev Agent 1".to_string(),
version: "0.1.0".to_string(),
status: crate::registry::AgentStatus::Active,
capabilities: vec!["coding".to_string(), "review".to_string()],
llm_provider: "claude".to_string(),
llm_model: "claude-sonnet-4".to_string(),
max_concurrent_tasks: 5,
current_tasks: 2,
created_at: chrono::Utc::now(),
last_heartbeat: chrono::Utc::now(),
uptime_percentage: 99.5,
total_tasks_completed: 10,
};
let profile = ProfileAdapter::create_profile(&agent);
assert_eq!(profile.id, "agent-1");
assert_eq!(profile.capabilities.len(), 2);
assert!((profile.current_load - 0.4).abs() < 0.01); // 2/5 = 0.4
assert_eq!(profile.success_rate, 0.5); // Default
assert!(profile.availability);
}
#[test]
fn test_batch_create_profiles() {
let agents = vec![
AgentMetadata {
id: "agent-1".to_string(),
role: "developer".to_string(),
name: "Dev 1".to_string(),
version: "0.1.0".to_string(),
status: crate::registry::AgentStatus::Active,
capabilities: vec!["coding".to_string()],
llm_provider: "claude".to_string(),
llm_model: "claude-sonnet-4".to_string(),
max_concurrent_tasks: 5,
current_tasks: 1,
created_at: chrono::Utc::now(),
last_heartbeat: chrono::Utc::now(),
uptime_percentage: 99.0,
total_tasks_completed: 5,
},
AgentMetadata {
id: "agent-2".to_string(),
role: "reviewer".to_string(),
name: "Reviewer 1".to_string(),
version: "0.1.0".to_string(),
status: crate::registry::AgentStatus::Active,
capabilities: vec!["review".to_string()],
llm_provider: "gpt4".to_string(),
llm_model: "gpt-4".to_string(),
max_concurrent_tasks: 3,
current_tasks: 0,
created_at: chrono::Utc::now(),
last_heartbeat: chrono::Utc::now(),
uptime_percentage: 98.5,
total_tasks_completed: 3,
},
];
let profiles = ProfileAdapter::batch_create_profiles(agents);
assert_eq!(profiles.len(), 2);
assert_eq!(profiles[0].id, "agent-1");
assert_eq!(profiles[1].id, "agent-2");
}
#[test]
fn test_update_with_kg_metrics() {
let profile = AgentProfile {
id: "agent-1".to_string(),
roles: vec!["developer".to_string()],
capabilities: vec!["coding".to_string()],
current_load: 0.4,
success_rate: 0.5,
availability: true,
};
let updated = ProfileAdapter::update_with_kg_metrics(profile, 0.85);
assert_eq!(updated.success_rate, 0.85);
assert_eq!(updated.id, "agent-1"); // Other fields unchanged
}
#[test]
fn test_create_learning_profile() {
let learning = ProfileAdapter::create_learning_profile("agent-1".to_string());
assert_eq!(learning.agent_id, "agent-1");
assert_eq!(learning.task_type_expertise.len(), 0);
}
#[test]
fn test_add_task_type_expertise() {
let learning = ProfileAdapter::create_learning_profile("agent-1".to_string());
let expertise = TaskTypeExpertise {
success_rate: 0.85,
total_executions: 20,
recent_success_rate: 0.90,
avg_duration_ms: 150.0,
learning_curve: Vec::new(),
confidence: 1.0,
};
let updated = ProfileAdapter::add_task_type_expertise(learning, "coding".to_string(), expertise);
assert_eq!(updated.get_task_type_score("coding"), 0.85);
assert_eq!(updated.get_confidence("coding"), 1.0);
}
#[test]
fn test_update_profile_with_learning() {
let profile = AgentProfile {
id: "agent-1".to_string(),
roles: vec!["developer".to_string()],
capabilities: vec!["coding".to_string()],
current_load: 0.4,
success_rate: 0.5,
availability: true,
};
let mut learning = ProfileAdapter::create_learning_profile("agent-1".to_string());
let expertise = TaskTypeExpertise {
success_rate: 0.85,
total_executions: 20,
recent_success_rate: 0.90,
avg_duration_ms: 150.0,
learning_curve: Vec::new(),
confidence: 1.0,
};
learning = ProfileAdapter::add_task_type_expertise(learning, "coding".to_string(), expertise);
let updated = ProfileAdapter::update_profile_with_learning(profile, &learning, "coding");
assert_eq!(updated.success_rate, 0.85);
let unknown_updated =
ProfileAdapter::update_profile_with_learning(updated, &learning, "unknown");
assert_eq!(unknown_updated.success_rate, 0.5); // Falls back to default
}
}

View File

@ -0,0 +1,383 @@
// vapora-agents: Agent registry - manages agent lifecycle and availability
// Phase 2: Complete implementation with 12 agent roles
use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
use std::sync::{Arc, RwLock};
use thiserror::Error;
use uuid::Uuid;
#[derive(Debug, Error)]
pub enum RegistryError {
#[error("Agent not found: {0}")]
AgentNotFound(String),
#[error("Agent already registered: {0}")]
AgentAlreadyRegistered(String),
#[error("Maximum agents reached for role: {0}")]
MaxAgentsReached(String),
#[error("Invalid agent state transition: {0}")]
InvalidStateTransition(String),
}
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub enum AgentStatus {
Active,
Inactive,
Updating,
Error(String),
Scaling,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct AgentMetadata {
pub id: String,
pub role: String,
pub name: String,
pub version: String,
pub status: AgentStatus,
pub capabilities: Vec<String>,
pub llm_provider: String,
pub llm_model: String,
pub max_concurrent_tasks: u32,
pub current_tasks: u32,
pub created_at: DateTime<Utc>,
pub last_heartbeat: DateTime<Utc>,
pub uptime_percentage: f64,
pub total_tasks_completed: u64,
}
impl AgentMetadata {
pub fn new(
role: String,
name: String,
llm_provider: String,
llm_model: String,
capabilities: Vec<String>,
) -> Self {
let now = Utc::now();
Self {
id: Uuid::new_v4().to_string(),
role,
name,
version: "0.1.0".to_string(),
status: AgentStatus::Active,
capabilities,
llm_provider,
llm_model,
max_concurrent_tasks: 5,
current_tasks: 0,
created_at: now,
last_heartbeat: now,
uptime_percentage: 100.0,
total_tasks_completed: 0,
}
}
/// Check if agent can accept new tasks
pub fn can_accept_task(&self) -> bool {
self.status == AgentStatus::Active && self.current_tasks < self.max_concurrent_tasks
}
/// Increment task count
pub fn assign_task(&mut self) {
if self.current_tasks < self.max_concurrent_tasks {
self.current_tasks += 1;
}
}
/// Decrement task count
pub fn complete_task(&mut self) {
if self.current_tasks > 0 {
self.current_tasks -= 1;
}
self.total_tasks_completed += 1;
}
}
/// Thread-safe agent registry
#[derive(Clone)]
pub struct AgentRegistry {
inner: Arc<RwLock<AgentRegistryInner>>,
}
struct AgentRegistryInner {
agents: HashMap<String, AgentMetadata>,
running_count: HashMap<String, u32>,
max_agents_per_role: u32,
}
impl AgentRegistry {
pub fn new(max_agents_per_role: u32) -> Self {
Self {
inner: Arc::new(RwLock::new(AgentRegistryInner {
agents: HashMap::new(),
running_count: HashMap::new(),
max_agents_per_role,
})),
}
}
/// Register a new agent
pub fn register_agent(&self, metadata: AgentMetadata) -> Result<String, RegistryError> {
let mut inner = self.inner.write().expect("Failed to acquire write lock");
// Check if agent already registered
if inner.agents.contains_key(&metadata.id) {
return Err(RegistryError::AgentAlreadyRegistered(metadata.id.clone()));
}
// Check if we've reached max agents for this role
let count = inner.running_count.get(&metadata.role).unwrap_or(&0);
if *count >= inner.max_agents_per_role {
return Err(RegistryError::MaxAgentsReached(metadata.role.clone()));
}
let role = metadata.role.clone();
let id = metadata.id.clone();
inner.agents.insert(id.clone(), metadata);
*inner.running_count.entry(role).or_insert(0) += 1;
Ok(id)
}
/// Unregister an agent
pub fn unregister_agent(&self, id: &str) -> Result<(), RegistryError> {
let mut inner = self.inner.write().expect("Failed to acquire write lock");
let agent = inner
.agents
.remove(id)
.ok_or_else(|| RegistryError::AgentNotFound(id.to_string()))?;
if let Some(count) = inner.running_count.get_mut(&agent.role) {
if *count > 0 {
*count -= 1;
}
}
Ok(())
}
/// Get agent metadata
pub fn get_agent(&self, id: &str) -> Option<AgentMetadata> {
let inner = self.inner.read().expect("Failed to acquire read lock");
inner.agents.get(id).cloned()
}
/// Get all agents for a specific role
pub fn get_agents_by_role(&self, role: &str) -> Vec<AgentMetadata> {
let inner = self.inner.read().expect("Failed to acquire read lock");
inner
.agents
.values()
.filter(|a| a.role == role && a.status == AgentStatus::Active)
.cloned()
.collect()
}
/// List all agents
pub fn list_all(&self) -> Vec<AgentMetadata> {
let inner = self.inner.read().expect("Failed to acquire read lock");
inner.agents.values().cloned().collect()
}
/// Update agent status
pub fn update_agent_status(
&self,
id: &str,
status: AgentStatus,
) -> Result<(), RegistryError> {
let mut inner = self.inner.write().expect("Failed to acquire write lock");
let agent = inner
.agents
.get_mut(id)
.ok_or_else(|| RegistryError::AgentNotFound(id.to_string()))?;
agent.status = status;
agent.last_heartbeat = Utc::now();
Ok(())
}
/// Update agent heartbeat
pub fn heartbeat(&self, id: &str) -> Result<(), RegistryError> {
let mut inner = self.inner.write().expect("Failed to acquire write lock");
let agent = inner
.agents
.get_mut(id)
.ok_or_else(|| RegistryError::AgentNotFound(id.to_string()))?;
agent.last_heartbeat = Utc::now();
Ok(())
}
/// Get an available agent for a specific role
pub fn get_available_agent(&self, role: &str) -> Option<AgentMetadata> {
let agents = self.get_agents_by_role(role);
agents
.into_iter()
.filter(|a| a.can_accept_task())
.min_by_key(|a| a.current_tasks)
}
/// Assign task to agent
pub fn assign_task(&self, agent_id: &str) -> Result<(), RegistryError> {
let mut inner = self.inner.write().expect("Failed to acquire write lock");
let agent = inner
.agents
.get_mut(agent_id)
.ok_or_else(|| RegistryError::AgentNotFound(agent_id.to_string()))?;
if !agent.can_accept_task() {
return Err(RegistryError::InvalidStateTransition(
"Agent cannot accept more tasks".to_string(),
));
}
agent.assign_task();
Ok(())
}
/// Complete task for agent
pub fn complete_task(&self, agent_id: &str) -> Result<(), RegistryError> {
let mut inner = self.inner.write().expect("Failed to acquire write lock");
let agent = inner
.agents
.get_mut(agent_id)
.ok_or_else(|| RegistryError::AgentNotFound(agent_id.to_string()))?;
agent.complete_task();
Ok(())
}
/// Get count of agents by role
pub fn count_by_role(&self, role: &str) -> u32 {
let inner = self.inner.read().expect("Failed to acquire read lock");
*inner.running_count.get(role).unwrap_or(&0)
}
/// Get total agent count
pub fn total_count(&self) -> usize {
let inner = self.inner.read().expect("Failed to acquire read lock");
inner.agents.len()
}
}
impl Default for AgentRegistry {
fn default() -> Self {
Self::new(5)
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_agent_registration() {
let registry = AgentRegistry::new(5);
let agent = AgentMetadata::new(
"developer".to_string(),
"Developer Agent 1".to_string(),
"claude".to_string(),
"claude-sonnet-4".to_string(),
vec!["coding".to_string()],
);
let id = registry.register_agent(agent).unwrap();
assert!(registry.get_agent(&id).is_some());
assert_eq!(registry.total_count(), 1);
}
#[test]
fn test_max_agents_per_role() {
let registry = AgentRegistry::new(2);
for i in 0..2 {
let agent = AgentMetadata::new(
"developer".to_string(),
format!("Developer {}", i),
"claude".to_string(),
"claude-sonnet-4".to_string(),
vec![],
);
registry.register_agent(agent).unwrap();
}
// Third agent should fail
let agent = AgentMetadata::new(
"developer".to_string(),
"Developer 3".to_string(),
"claude".to_string(),
"claude-sonnet-4".to_string(),
vec![],
);
let result = registry.register_agent(agent);
assert!(result.is_err());
}
#[test]
fn test_agent_task_assignment() {
let _registry = AgentRegistry::new(5);
let mut agent = AgentMetadata::new(
"developer".to_string(),
"Developer Agent".to_string(),
"claude".to_string(),
"claude-sonnet-4".to_string(),
vec![],
);
assert_eq!(agent.current_tasks, 0);
assert!(agent.can_accept_task());
agent.assign_task();
assert_eq!(agent.current_tasks, 1);
agent.complete_task();
assert_eq!(agent.current_tasks, 0);
assert_eq!(agent.total_tasks_completed, 1);
}
#[test]
fn test_get_available_agent() {
let registry = AgentRegistry::new(5);
let agent1 = AgentMetadata::new(
"developer".to_string(),
"Developer 1".to_string(),
"claude".to_string(),
"claude-sonnet-4".to_string(),
vec![],
);
let id1 = registry.register_agent(agent1).unwrap();
let available = registry.get_available_agent("developer");
assert!(available.is_some());
// Assign tasks to fill capacity
for _ in 0..5 {
registry.assign_task(&id1).unwrap();
}
// Should no longer be available
let available = registry.get_available_agent("developer");
assert!(available.is_none());
}
}

View File

@ -0,0 +1,147 @@
// NATS message consumer routing tasks to executor pool
// Bridges NATS JetStream with executor channels
use crate::messages::TaskAssignment;
use std::collections::HashMap;
use tokio::sync::mpsc;
use tracing::{debug, warn};
/// NATS consumer routing tasks to agent executors
pub struct NatsConsumer {
executor_pool: HashMap<String, mpsc::Sender<TaskAssignment>>,
}
impl NatsConsumer {
/// Create new consumer
pub fn new() -> Self {
Self {
executor_pool: HashMap::new(),
}
}
/// Register executor for agent
pub fn register_executor(&mut self, agent_id: String, sender: mpsc::Sender<TaskAssignment>) {
debug!("Registered executor for agent: {}", agent_id);
self.executor_pool.insert(agent_id, sender);
}
/// Unregister executor
pub fn unregister_executor(&mut self, agent_id: &str) {
self.executor_pool.remove(agent_id);
debug!("Unregistered executor for agent: {}", agent_id);
}
/// Get executor sender for agent
pub fn get_executor(&self, agent_id: &str) -> Option<&mpsc::Sender<TaskAssignment>> {
self.executor_pool.get(agent_id)
}
/// Route task to agent executor
pub async fn route_task(&self, task: TaskAssignment) -> Result<(), TaskRoutingError> {
if let Some(tx) = self.executor_pool.get(&task.agent_id) {
tx.send(task.clone())
.await
.map_err(|_| TaskRoutingError::ExecutorUnavailable(task.agent_id.clone()))?;
Ok(())
} else {
warn!("No executor found for agent: {}", task.agent_id);
Err(TaskRoutingError::AgentNotFound(task.agent_id.clone()))
}
}
/// Get list of registered agents
pub fn list_agents(&self) -> Vec<String> {
self.executor_pool.keys().cloned().collect()
}
}
impl Default for NatsConsumer {
fn default() -> Self {
Self::new()
}
}
/// Task routing errors
#[derive(Debug)]
pub enum TaskRoutingError {
AgentNotFound(String),
ExecutorUnavailable(String),
}
impl std::fmt::Display for TaskRoutingError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
Self::AgentNotFound(agent_id) => write!(f, "Agent not found: {}", agent_id),
Self::ExecutorUnavailable(agent_id) => {
write!(f, "Executor unavailable for agent: {}", agent_id)
}
}
}
}
impl std::error::Error for TaskRoutingError {}
#[cfg(test)]
mod tests {
use super::*;
use chrono::Utc;
#[tokio::test]
async fn test_consumer_registration() {
let mut consumer = NatsConsumer::new();
let (tx, _rx) = mpsc::channel(10);
consumer.register_executor("agent-1".to_string(), tx);
assert!(consumer.get_executor("agent-1").is_some());
assert!(consumer.get_executor("agent-2").is_none());
}
#[tokio::test]
async fn test_task_routing() {
let mut consumer = NatsConsumer::new();
let (tx, mut rx) = mpsc::channel(10);
consumer.register_executor("agent-1".to_string(), tx);
let task = TaskAssignment {
id: "task-1".to_string(),
agent_id: "agent-1".to_string(),
required_role: "developer".to_string(),
title: "Test task".to_string(),
description: "Test description".to_string(),
context: "{}".to_string(),
priority: 1,
deadline: None,
assigned_at: Utc::now(),
};
let result = consumer.route_task(task.clone()).await;
assert!(result.is_ok());
// Verify task was received
let received = rx.recv().await;
assert!(received.is_some());
assert_eq!(received.unwrap().id, "task-1");
}
#[tokio::test]
async fn test_routing_to_nonexistent_agent() {
let consumer = NatsConsumer::new();
let task = TaskAssignment {
id: "task-1".to_string(),
agent_id: "nonexistent".to_string(),
required_role: "developer".to_string(),
title: "Test task".to_string(),
description: "Test description".to_string(),
context: "{}".to_string(),
priority: 1,
deadline: None,
assigned_at: Utc::now(),
};
let result = consumer.route_task(task).await;
assert!(result.is_err());
}
}

View File

@ -0,0 +1,236 @@
// Per-agent execution loop with channel-based task distribution
// Phase 5.5: Persistence of execution history to KG for learning
// Each agent has dedicated executor managing its state machine
use crate::messages::TaskAssignment;
use crate::registry::AgentMetadata;
use chrono::Utc;
use std::sync::Arc;
use tokio::sync::mpsc;
use tracing::{debug, info, warn};
use vapora_knowledge_graph::{ExecutionRecord, KGPersistence, PersistedExecution};
use vapora_llm_router::EmbeddingProvider;
use super::state_machine::{Agent, ExecutionResult, Idle};
/// Per-agent executor handling task processing with persistence (Phase 5.5)
pub struct AgentExecutor {
agent: Agent<Idle>,
task_rx: mpsc::Receiver<TaskAssignment>,
kg_persistence: Option<Arc<KGPersistence>>,
embedding_provider: Option<Arc<dyn EmbeddingProvider>>,
}
impl AgentExecutor {
/// Create new executor for an agent (Phase 4)
pub fn new(metadata: AgentMetadata, task_rx: mpsc::Receiver<TaskAssignment>) -> Self {
Self {
agent: Agent::new(metadata),
task_rx,
kg_persistence: None,
embedding_provider: None,
}
}
/// Create executor with persistence (Phase 5.5)
pub fn with_persistence(
metadata: AgentMetadata,
task_rx: mpsc::Receiver<TaskAssignment>,
kg_persistence: Arc<KGPersistence>,
embedding_provider: Arc<dyn EmbeddingProvider>,
) -> Self {
Self {
agent: Agent::new(metadata),
task_rx,
kg_persistence: Some(kg_persistence),
embedding_provider: Some(embedding_provider),
}
}
/// Run executor loop, processing tasks until channel closes
pub async fn run(mut self) {
info!("AgentExecutor started for agent: {}", self.agent.metadata.id);
let agent_id = self.agent.metadata.id.clone();
while let Some(task) = self.task_rx.recv().await {
debug!("Received task: {}", task.id);
// Transition: Idle → Assigned
let agent_assigned = self.agent.assign_task(task.clone());
// Transition: Assigned → Executing
let agent_executing = agent_assigned.start_execution();
let execution_start = Utc::now();
// Execute task (placeholder - in real use, call LLM via vapora-llm-router)
let result = ExecutionResult {
output: "Task executed successfully".to_string(),
input_tokens: 100,
output_tokens: 50,
duration_ms: 500,
};
// Transition: Executing → Completed
let completed_agent = agent_executing.complete(result.clone());
// Handle result - transition Completed → Idle
self.agent = completed_agent.reset();
// Phase 5.5: Persist execution to Knowledge Graph (after state transition)
self.persist_execution_internal(&task, &result, execution_start, &agent_id)
.await;
info!("Task {} completed", task.id);
}
info!("AgentExecutor stopped for agent: {}", agent_id);
}
/// Persist execution record to KG database (Phase 5.5)
async fn persist_execution_internal(
&self,
task: &TaskAssignment,
result: &ExecutionResult,
execution_start: chrono::DateTime<Utc>,
agent_id: &str,
) {
if let Some(ref kg_persistence) = self.kg_persistence {
if let Some(ref embedding_provider) = self.embedding_provider {
// Generate embedding for task description
let embedding = match embedding_provider.embed(&task.description).await {
Ok(emb) => emb,
Err(e) => {
warn!(
"Failed to generate embedding for task {}: {}",
task.id, e
);
// Use zero vector as fallback
vec![0.0; 1536]
}
};
// Create execution record for KG
let execution_record = ExecutionRecord {
id: task.id.clone(),
task_id: task.id.clone(),
agent_id: agent_id.to_string(),
task_type: task.required_role.clone(),
description: task.description.clone(),
duration_ms: result.duration_ms,
input_tokens: result.input_tokens,
output_tokens: result.output_tokens,
success: true, // In real implementation, check result status
error: None,
solution: Some(result.output.clone()),
root_cause: None,
timestamp: execution_start,
};
// Convert to persisted format
let persisted = PersistedExecution::from_execution_record(&execution_record, embedding);
// Persist to SurrealDB
if let Err(e) = kg_persistence.persist_execution(persisted).await {
warn!("Failed to persist execution: {}", e);
} else {
debug!("Persisted execution {} to KG", task.id);
}
// Record analytics event
if let Err(e) = kg_persistence
.record_event(
"task_completed",
agent_id,
"duration_ms",
result.duration_ms as f64,
)
.await
{
warn!("Failed to record event: {}", e);
}
// Record token usage event
if let Err(e) = kg_persistence
.record_event(
"token_usage",
agent_id,
"tokens",
(result.input_tokens + result.output_tokens) as f64,
)
.await
{
warn!("Failed to record token event: {}", e);
}
} else {
warn!(
"KG persistence available but no embedding provider for task {}",
task.id
);
}
}
}
}
/// Internal state for executor result handling
pub enum ExecutorState {
Completed(Agent<super::state_machine::Completed>),
Failed(Agent<super::state_machine::Failed>, String),
}
#[cfg(test)]
mod tests {
use super::*;
use crate::registry::AgentStatus;
#[tokio::test]
async fn test_executor_creation() {
let metadata = AgentMetadata {
id: "test-executor".to_string(),
role: "developer".to_string(),
name: "Test Executor".to_string(),
version: "0.1.0".to_string(),
status: AgentStatus::Active,
capabilities: vec!["code_generation".to_string()],
llm_provider: "claude".to_string(),
llm_model: "claude-sonnet-4".to_string(),
max_concurrent_tasks: 3,
current_tasks: 0,
created_at: Utc::now(),
last_heartbeat: Utc::now(),
uptime_percentage: 100.0,
total_tasks_completed: 0,
};
let (_tx, rx) = mpsc::channel(10);
let executor = AgentExecutor::new(metadata.clone(), rx);
assert_eq!(executor.agent.metadata.id, "test-executor");
assert!(executor.kg_persistence.is_none());
}
#[test]
fn test_executor_persistence_disabled_by_default() {
let metadata = AgentMetadata {
id: "test-no-persist".to_string(),
role: "reviewer".to_string(),
name: "Test No Persist".to_string(),
version: "0.1.0".to_string(),
status: AgentStatus::Active,
capabilities: vec!["review".to_string()],
llm_provider: "openai".to_string(),
llm_model: "gpt-4".to_string(),
max_concurrent_tasks: 5,
current_tasks: 0,
created_at: Utc::now(),
last_heartbeat: Utc::now(),
uptime_percentage: 99.5,
total_tasks_completed: 100,
};
let (_tx, rx) = mpsc::channel(10);
let executor = AgentExecutor::new(metadata, rx);
assert!(!executor.agent.metadata.role.is_empty());
assert_eq!(executor.embedding_provider.is_some(), false);
}
}

View File

@ -0,0 +1,10 @@
// Agent runtime: Type-state execution model
// Provides compile-time safety for agent state transitions
pub mod executor;
pub mod state_machine;
pub mod consumers;
pub use executor::AgentExecutor;
pub use state_machine::{Agent, Idle, Assigned, Executing, Completed, Failed, ExecutionResult};
pub use consumers::NatsConsumer;

View File

@ -0,0 +1,237 @@
// Type-state machine for agent lifecycle
// Ensures safe state transitions at compile time
use crate::messages::TaskAssignment;
use crate::registry::AgentMetadata;
use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};
use std::marker::PhantomData;
/// Agent states - compile-time enforced state machine
/// Initial state: Agent is idle
pub struct Idle;
/// Task assigned state
pub struct Assigned {
pub task: TaskAssignment,
}
/// Executing state
pub struct Executing {
pub task: TaskAssignment,
pub started_at: DateTime<Utc>,
}
/// Completed state
pub struct Completed;
/// Failed state
pub struct Failed {
pub error: String,
}
/// Execution result containing outcome data
#[derive(Debug, Clone, Default, Serialize, Deserialize)]
pub struct ExecutionResult {
pub output: String,
pub input_tokens: u64,
pub output_tokens: u64,
pub duration_ms: u64,
}
/// Agent with compile-time state tracking
pub struct Agent<S> {
pub metadata: AgentMetadata,
state: PhantomData<S>,
result: Option<ExecutionResult>,
}
/// Transitions from Idle state
impl Agent<Idle> {
/// Create new idle agent
pub fn new(metadata: AgentMetadata) -> Self {
Self {
metadata,
state: PhantomData,
result: None,
}
}
/// Transition to Assigned state
pub fn assign_task(self, _task: TaskAssignment) -> Agent<Assigned> {
Agent {
metadata: self.metadata,
state: PhantomData,
result: None,
}
}
}
/// Transitions from Assigned state
impl Agent<Assigned> {
/// Transition to Executing state
pub fn start_execution(self) -> Agent<Executing> {
Agent {
metadata: self.metadata,
state: PhantomData,
result: None,
}
}
}
/// Transitions from Executing state
impl Agent<Executing> {
/// Complete execution successfully
pub fn complete(self, result: ExecutionResult) -> Agent<Completed> {
Agent {
metadata: self.metadata,
state: PhantomData,
result: Some(result),
}
}
/// Fail execution
pub fn fail(self, _error: String) -> Agent<Failed> {
Agent {
metadata: self.metadata,
state: PhantomData,
result: None,
}
}
}
/// Transitions from Completed state
impl Agent<Completed> {
/// Get execution result
pub fn result(&self) -> Option<&ExecutionResult> {
self.result.as_ref()
}
/// Transition back to Idle
pub fn reset(self) -> Agent<Idle> {
Agent {
metadata: self.metadata,
state: PhantomData,
result: None,
}
}
}
/// Transitions from Failed state
impl Agent<Failed> {
/// Get error message
pub fn error(&self) -> String {
match &self.result {
Some(result) => format!("Error: {}", result.output),
None => "Unknown error".to_string(),
}
}
/// Transition back to Idle
pub fn reset(self) -> Agent<Idle> {
Agent {
metadata: self.metadata,
state: PhantomData,
result: None,
}
}
}
#[cfg(test)]
mod tests {
use super::*;
use chrono::Utc;
#[test]
fn test_type_state_transitions() {
// Create metadata for testing
let metadata = AgentMetadata {
id: "test-agent".to_string(),
role: "developer".to_string(),
name: "Test Developer".to_string(),
version: "0.1.0".to_string(),
status: crate::registry::AgentStatus::Active,
capabilities: vec!["coding".to_string()],
llm_provider: "claude".to_string(),
llm_model: "claude-sonnet-4".to_string(),
max_concurrent_tasks: 5,
current_tasks: 0,
created_at: Utc::now(),
last_heartbeat: Utc::now(),
uptime_percentage: 100.0,
total_tasks_completed: 0,
};
// Type-state chain: Idle → Assigned → Executing → Completed → Idle
let agent = Agent::new(metadata.clone());
let task = TaskAssignment {
id: "task-1".to_string(),
agent_id: "test-agent".to_string(),
required_role: "developer".to_string(),
title: "Test task".to_string(),
description: "Test description".to_string(),
context: "{}".to_string(),
priority: 1,
deadline: None,
assigned_at: Utc::now(),
};
let agent = agent.assign_task(task);
let agent = agent.start_execution();
let result = ExecutionResult {
output: "Success".to_string(),
input_tokens: 100,
output_tokens: 50,
duration_ms: 1000,
};
let agent = agent.complete(result);
assert!(agent.result().is_some());
let _agent = agent.reset();
// agent is now back to Idle state - type system ensures this
}
#[test]
fn test_failed_state_transition() {
let metadata = AgentMetadata {
id: "test-agent".to_string(),
role: "developer".to_string(),
name: "Test Developer".to_string(),
version: "0.1.0".to_string(),
status: crate::registry::AgentStatus::Active,
capabilities: vec!["coding".to_string()],
llm_provider: "claude".to_string(),
llm_model: "claude-sonnet-4".to_string(),
max_concurrent_tasks: 5,
current_tasks: 0,
created_at: Utc::now(),
last_heartbeat: Utc::now(),
uptime_percentage: 100.0,
total_tasks_completed: 0,
};
let agent = Agent::new(metadata);
let task = TaskAssignment {
id: "task-1".to_string(),
agent_id: "test-agent".to_string(),
required_role: "developer".to_string(),
title: "Test task".to_string(),
description: "Test description".to_string(),
context: "{}".to_string(),
priority: 1,
deadline: None,
assigned_at: Utc::now(),
};
let agent = agent.assign_task(task);
let agent = agent.start_execution();
let agent = agent.fail("API timeout".to_string());
let _error = agent.error();
let _agent = agent.reset();
// agent is now back to Idle state
}
}

View File

@ -0,0 +1,278 @@
use crate::learning_profile::LearningProfile;
use vapora_swarm::messages::AgentProfile;
/// Unified agent score combining SwarmCoordinator metrics and learning expertise.
#[derive(Debug, Clone)]
pub struct AgentScore {
/// Agent identifier
pub agent_id: String,
/// Base score from SwarmCoordinator: success_rate / (1 + current_load)
pub base_score: f64,
/// Expertise score from LearningProfile for specific task type (0.0-1.0)
pub expertise_score: f64,
/// Confidence in expertise score based on execution count (0.0-1.0)
pub confidence: f64,
/// Final combined score: 0.3*base + 0.5*expertise + 0.2*confidence
pub final_score: f64,
/// Human-readable explanation of scoring breakdown
pub reasoning: String,
}
/// Service for ranking agents based on learning profiles and swarm metrics.
pub struct AgentScoringService;
impl AgentScoringService {
/// Rank candidate agents for task assignment using combined scoring.
///
/// Scoring formula:
/// - base_score = success_rate / (1 + current_load) [from SwarmCoordinator]
/// - expertise_score = learned success rate for task_type
/// - confidence = min(1.0, total_executions / 20) [prevents overfitting]
/// - final_score = 0.3*base + 0.5*expertise + 0.2*confidence
///
/// Returns agents ranked by final_score (highest first).
pub fn rank_agents(
candidates: Vec<AgentProfile>,
task_type: &str,
learning_profiles: &[(String, LearningProfile)],
) -> Vec<AgentScore> {
let mut scores: Vec<AgentScore> = candidates
.into_iter()
.map(|agent| {
let base_score = agent.success_rate / (1.0 + agent.current_load);
let (expertise_score, confidence) = learning_profiles
.iter()
.find(|(id, _)| id == &agent.id)
.map(|(_, profile)| {
(
profile.get_task_type_score(task_type),
profile.get_confidence(task_type),
)
})
.unwrap_or((agent.success_rate, 0.0));
let final_score = 0.3 * base_score + 0.5 * expertise_score + 0.2 * confidence;
let reasoning = format!(
"{}(base={:.2}, load={:.2}, expertise={:.2}, confidence={:.2})",
agent.id, base_score, agent.current_load, expertise_score, confidence
);
AgentScore {
agent_id: agent.id.clone(),
base_score,
expertise_score,
confidence,
final_score,
reasoning,
}
})
.collect();
scores.sort_by(|a, b| {
b.final_score
.partial_cmp(&a.final_score)
.unwrap_or(std::cmp::Ordering::Equal)
});
scores
}
/// Select best agent from candidates for task assignment.
/// Returns the top-ranked agent or None if no candidates available.
pub fn select_best(
candidates: Vec<AgentProfile>,
task_type: &str,
learning_profiles: &[(String, LearningProfile)],
) -> Option<AgentScore> {
Self::rank_agents(candidates, task_type, learning_profiles)
.into_iter()
.next()
}
/// Calculate blended score prioritizing task-type expertise.
/// Uses recent_success_rate if available (recency bias from learning profile).
pub fn rank_agents_with_recency(
candidates: Vec<AgentProfile>,
task_type: &str,
learning_profiles: &[(String, LearningProfile)],
) -> Vec<AgentScore> {
let mut scores: Vec<AgentScore> = candidates
.into_iter()
.map(|agent| {
let base_score = agent.success_rate / (1.0 + agent.current_load);
let (expertise_score, confidence) = learning_profiles
.iter()
.find(|(id, _)| id == &agent.id)
.map(|(_, profile)| {
// Use recent_success_rate if available (weighted 3x for last 7 days)
let recent = profile.get_recent_score(task_type);
let conf = profile.get_confidence(task_type);
(recent, conf)
})
.unwrap_or((agent.success_rate, 0.0));
let final_score = 0.3 * base_score + 0.5 * expertise_score + 0.2 * confidence;
let reasoning = format!(
"{}(recent={:.2}, confidence={:.2})",
agent.id, expertise_score, confidence
);
AgentScore {
agent_id: agent.id.clone(),
base_score,
expertise_score,
confidence,
final_score,
reasoning,
}
})
.collect();
scores.sort_by(|a, b| {
b.final_score
.partial_cmp(&a.final_score)
.unwrap_or(std::cmp::Ordering::Equal)
});
scores
}
}
#[cfg(test)]
mod tests {
use super::*;
fn create_mock_agent(id: &str, success_rate: f64, load: f64) -> AgentProfile {
AgentProfile {
id: id.to_string(),
roles: vec![],
capabilities: vec![],
current_load: load,
success_rate,
availability: true,
}
}
fn create_mock_learning(agent_id: &str, expertise: f64, confidence: f64) -> LearningProfile {
use crate::learning_profile::TaskTypeExpertise;
let mut profile = LearningProfile::new(agent_id.to_string());
let exp = TaskTypeExpertise {
success_rate: expertise,
total_executions: (confidence * 20.0) as u32,
recent_success_rate: expertise,
avg_duration_ms: 100.0,
learning_curve: Vec::new(),
confidence,
};
profile.set_task_type_expertise("coding".to_string(), exp);
profile
}
#[test]
fn test_rank_agents_basic() {
let candidates = vec![
create_mock_agent("agent-a", 0.8, 0.4),
create_mock_agent("agent-b", 0.6, 0.2),
create_mock_agent("agent-c", 0.9, 0.5),
];
let learning = vec![
("agent-a".to_string(), create_mock_learning("agent-a", 0.85, 0.8)),
("agent-b".to_string(), create_mock_learning("agent-b", 0.70, 0.6)),
("agent-c".to_string(), create_mock_learning("agent-c", 0.75, 0.5)),
];
let ranked = AgentScoringService::rank_agents(candidates, "coding", &learning);
assert_eq!(ranked.len(), 3);
// Verify sorted by final_score descending
for i in 1..ranked.len() {
assert!(ranked[i - 1].final_score >= ranked[i].final_score);
}
}
#[test]
fn test_select_best() {
let candidates = vec![
create_mock_agent("agent-a", 0.8, 0.4),
create_mock_agent("agent-b", 0.6, 0.2),
];
let learning = vec![
("agent-a".to_string(), create_mock_learning("agent-a", 0.85, 0.8)),
("agent-b".to_string(), create_mock_learning("agent-b", 0.70, 0.6)),
];
let best = AgentScoringService::select_best(candidates, "coding", &learning);
assert!(best.is_some());
assert_eq!(best.unwrap().agent_id, "agent-a");
}
#[test]
fn test_rank_agents_no_learning_data() {
let candidates = vec![
create_mock_agent("agent-a", 0.8, 0.4),
create_mock_agent("agent-b", 0.6, 0.2),
];
let ranked = AgentScoringService::rank_agents(candidates, "coding", &[]);
assert_eq!(ranked.len(), 2);
// Should still rank by base score when no learning data
assert!(ranked[0].final_score > 0.0);
}
#[test]
fn test_recency_bias_scoring() {
let candidates = vec![
create_mock_agent("agent-a", 0.5, 0.3),
create_mock_agent("agent-b", 0.5, 0.3),
];
let mut learning_a = LearningProfile::new("agent-a".to_string());
use crate::learning_profile::TaskTypeExpertise;
learning_a.set_task_type_expertise(
"coding".to_string(),
TaskTypeExpertise {
success_rate: 0.7,
total_executions: 20,
recent_success_rate: 0.95, // Recent success much higher
avg_duration_ms: 100.0,
learning_curve: Vec::new(),
confidence: 1.0,
},
);
let learning = vec![("agent-a".to_string(), learning_a)];
let ranked = AgentScoringService::rank_agents_with_recency(candidates, "coding", &learning);
assert_eq!(ranked.len(), 2);
// agent-a should rank higher due to recent success
assert_eq!(ranked[0].agent_id, "agent-a");
}
#[test]
fn test_confidence_weights_low_sample_count() {
let candidates = vec![
create_mock_agent("agent-a", 0.9, 0.0), // High success but...
create_mock_agent("agent-b", 0.8, 0.0), // Moderate success
];
let learning = vec![
("agent-a".to_string(), create_mock_learning("agent-a", 0.9, 0.05)), // Low confidence
("agent-b".to_string(), create_mock_learning("agent-b", 0.8, 0.95)), // High confidence
];
let ranked = AgentScoringService::rank_agents(candidates, "coding", &learning);
// agent-b should rank higher due to higher confidence despite lower expertise
assert_eq!(ranked[0].agent_id, "agent-b");
}
#[test]
fn test_empty_candidates() {
let ranked = AgentScoringService::rank_agents(Vec::new(), "coding", &[]);
assert_eq!(ranked.len(), 0);
}
}

View File

@ -0,0 +1,406 @@
use chrono::{Duration, Utc};
use std::collections::HashMap;
use std::sync::Arc;
use vapora_agents::{
AgentMetadata, AgentRegistry, AgentCoordinator, ExecutionData,
ProfileAdapter, TaskTypeExpertise,
};
use vapora_llm_router::{BudgetManager, RoleBudget};
/// End-to-end integration test: Learning + Budget interaction
///
/// Verifies that:
/// 1. Agents with better learning profiles are selected for tasks
/// 2. Budget enforcement doesn't interfere with learning-based assignment
/// 3. Learning profiles improve over time with new executions
/// 4. Budget manager tracks spending correctly during task assignment
#[tokio::test]
async fn test_end_to_end_learning_with_budget_enforcement() {
// Create agent registry and coordinator
let registry = Arc::new(AgentRegistry::new(10));
// Register two developers with different expertise
let developer_a = AgentMetadata::new(
"developer".to_string(),
"Developer A - Coding Specialist".to_string(),
"claude".to_string(),
"claude-opus-4-5".to_string(),
vec!["coding".to_string(), "testing".to_string()],
);
let developer_b = AgentMetadata::new(
"developer".to_string(),
"Developer B - Generalist".to_string(),
"claude".to_string(),
"claude-sonnet-4".to_string(),
vec!["coding".to_string(), "documentation".to_string()],
);
let dev_a_id = developer_a.id.clone();
let dev_b_id = developer_b.id.clone();
registry.register_agent(developer_a).ok();
registry.register_agent(developer_b).ok();
// Create coordinator with budget management
let coordinator = AgentCoordinator::with_registry(registry);
// Create budget manager with limits
let mut budgets = HashMap::new();
budgets.insert(
"developer".to_string(),
RoleBudget {
role: "developer".to_string(),
monthly_limit_cents: 100000, // $1000 for testing
weekly_limit_cents: 25000, // $250 for testing
fallback_provider: "ollama".to_string(),
alert_threshold: 0.8,
},
);
let budget_manager = Arc::new(BudgetManager::new(budgets));
let coordinator = coordinator.with_budget_manager(budget_manager.clone());
// Simulate historical executions for developer_a (excellent at coding)
let now = Utc::now();
let dev_a_executions: Vec<ExecutionData> = (0..30)
.map(|i| ExecutionData {
timestamp: now - Duration::days(i),
duration_ms: 200 + (i as u64 * 5),
success: i < 28, // 93% success rate
})
.collect();
// Simulate historical executions for developer_b (mediocre at coding)
let dev_b_executions: Vec<ExecutionData> = (0..30)
.map(|i| ExecutionData {
timestamp: now - Duration::days(i),
duration_ms: 300 + (i as u64 * 10),
success: i < 20, // 67% success rate
})
.collect();
// Calculate expertise from executions
let dev_a_expertise = TaskTypeExpertise::from_executions(dev_a_executions, "coding");
let dev_b_expertise = TaskTypeExpertise::from_executions(dev_b_executions, "coding");
// Verify expertise calculations
assert!(dev_a_expertise.success_rate > 0.9);
assert!(dev_b_expertise.success_rate > 0.6 && dev_b_expertise.success_rate < 0.7);
assert!(dev_a_expertise.success_rate > dev_b_expertise.success_rate);
// Create learning profiles
let mut profile_a = ProfileAdapter::create_learning_profile(dev_a_id.clone());
profile_a = ProfileAdapter::add_task_type_expertise(
profile_a,
"coding".to_string(),
dev_a_expertise,
);
let mut profile_b = ProfileAdapter::create_learning_profile(dev_b_id.clone());
profile_b = ProfileAdapter::add_task_type_expertise(
profile_b,
"coding".to_string(),
dev_b_expertise,
);
// Update coordinator with learning profiles
coordinator
.update_learning_profile(&dev_a_id, profile_a.clone())
.ok();
coordinator
.update_learning_profile(&dev_b_id, profile_b.clone())
.ok();
// Verify profiles are stored
let stored_a = coordinator.get_learning_profile(&dev_a_id);
let stored_b = coordinator.get_learning_profile(&dev_b_id);
assert!(stored_a.is_some());
assert!(stored_b.is_some());
// Check budget status before task assignment
let budget_status = budget_manager.check_budget("developer").await.unwrap();
assert!(!budget_status.exceeded);
assert!(!budget_status.near_threshold);
assert_eq!(budget_status.monthly_remaining_cents, 100000);
// Assign a coding task (should go to developer_a based on learning)
let task_id = coordinator
.assign_task(
"developer",
"Implement authentication module".to_string(),
"Create secure login and token validation".to_string(),
"Security critical".to_string(),
2,
)
.await
.expect("Should assign task");
// Verify task was assigned (we can check via registry)
let all_agents = coordinator.registry().list_all();
let dev_a_tasks = all_agents
.iter()
.find(|a| a.id == dev_a_id)
.map(|a| a.current_tasks)
.unwrap_or(0);
let _dev_b_tasks = all_agents
.iter()
.find(|a| a.id == dev_b_id)
.map(|a| a.current_tasks)
.unwrap_or(0);
// Developer A (high expertise) should be selected
assert!(
dev_a_tasks > 0,
"Developer A (high expertise) should have been assigned the task"
);
// Simulate task completion
coordinator.complete_task(&task_id, &dev_a_id).await.ok();
// Verify budget status is still within limits
let budget_status = budget_manager.check_budget("developer").await.unwrap();
assert!(!budget_status.exceeded, "Budget should not be exceeded");
// Simulate multiple tasks to test cumulative budget tracking
for i in 0..5 {
let task = coordinator
.assign_task(
"developer",
format!("Task {}", i),
"Test description".to_string(),
"Context".to_string(),
1,
)
.await;
if task.is_ok() {
let agents = coordinator.registry().list_all();
if let Some(dev_a) = agents.iter().find(|a| a.id == dev_a_id) {
coordinator.complete_task(&format!("task-{}", i), &dev_a.id).await.ok();
}
}
}
// Final budget status check
let final_budget = budget_manager.check_budget("developer").await.unwrap();
assert!(
final_budget.monthly_utilization < 1.0,
"Should not exceed monthly budget"
);
// Verify learning profiles are still intact
let all_profiles = coordinator.get_all_learning_profiles();
assert_eq!(all_profiles.len(), 2, "Both profiles should be stored");
}
/// Test that budget enforcement doesn't break learning-based selection
#[tokio::test]
async fn test_learning_selection_with_budget_constraints() {
let registry = Arc::new(AgentRegistry::new(10));
let agent_expert = AgentMetadata::new(
"developer".to_string(),
"Expert Developer".to_string(),
"claude".to_string(),
"claude-opus-4-5".to_string(),
vec!["coding".to_string()],
);
let agent_novice = AgentMetadata::new(
"developer".to_string(),
"Novice Developer".to_string(),
"claude".to_string(),
"claude-sonnet-4".to_string(),
vec!["coding".to_string()],
);
let expert_id = agent_expert.id.clone();
let novice_id = agent_novice.id.clone();
registry.register_agent(agent_expert).ok();
registry.register_agent(agent_novice).ok();
let coordinator = AgentCoordinator::with_registry(registry);
// Create tight budget to test constraint handling
let mut budgets = HashMap::new();
budgets.insert(
"developer".to_string(),
RoleBudget {
role: "developer".to_string(),
monthly_limit_cents: 10000, // $100 (tight)
weekly_limit_cents: 2500, // $25 (tight)
fallback_provider: "ollama".to_string(),
alert_threshold: 0.9, // Alert at 90%
},
);
let budget_manager = Arc::new(BudgetManager::new(budgets));
let coordinator = coordinator.with_budget_manager(budget_manager.clone());
// Create expertise profiles
let now = Utc::now();
let expert_execs: Vec<ExecutionData> = (0..20)
.map(|i| ExecutionData {
timestamp: now - Duration::days(i),
duration_ms: 100,
success: i < 19, // 95% success
})
.collect();
let novice_execs: Vec<ExecutionData> = (0..20)
.map(|i| ExecutionData {
timestamp: now - Duration::days(i),
duration_ms: 100,
success: i < 12, // 60% success
})
.collect();
let expert_expertise = TaskTypeExpertise::from_executions(expert_execs, "coding");
let novice_expertise = TaskTypeExpertise::from_executions(novice_execs, "coding");
let mut expert_profile = ProfileAdapter::create_learning_profile(expert_id.clone());
expert_profile =
ProfileAdapter::add_task_type_expertise(expert_profile, "coding".to_string(), expert_expertise);
let mut novice_profile = ProfileAdapter::create_learning_profile(novice_id.clone());
novice_profile =
ProfileAdapter::add_task_type_expertise(novice_profile, "coding".to_string(), novice_expertise);
coordinator.update_learning_profile(&expert_id, expert_profile).ok();
coordinator
.update_learning_profile(&novice_id, novice_profile)
.ok();
// Verify budget status
let status = budget_manager.check_budget("developer").await.unwrap();
assert!(
!status.exceeded && !status.near_threshold,
"Initial budget should be healthy"
);
// Assign multiple tasks - expert should be consistently selected
let mut expert_count = 0;
for i in 0..3 {
if let Ok(_task_id) = coordinator
.assign_task(
"developer",
format!("Coding Task {}", i),
"Implement feature".to_string(),
"Production".to_string(),
1,
)
.await
{
let agents = coordinator.registry().list_all();
if let Some(expert) = agents.iter().find(|a| a.id == expert_id) {
if expert.current_tasks > 0 {
expert_count += 1;
}
}
}
}
// Expert should have been selected more often
assert!(
expert_count > 0,
"Expert should have been selected despite budget constraints"
);
}
/// Test that learning profile improvements are tracked correctly
#[tokio::test]
async fn test_learning_profile_improvement_with_budget_tracking() {
let registry = Arc::new(AgentRegistry::new(10));
let agent = AgentMetadata::new(
"developer".to_string(),
"Improving Developer".to_string(),
"claude".to_string(),
"claude-sonnet-4".to_string(),
vec!["coding".to_string()],
);
let agent_id = agent.id.clone();
registry.register_agent(agent).ok();
let coordinator = AgentCoordinator::with_registry(registry);
// Create budget manager
let mut budgets = HashMap::new();
budgets.insert(
"developer".to_string(),
RoleBudget {
role: "developer".to_string(),
monthly_limit_cents: 50000,
weekly_limit_cents: 12500,
fallback_provider: "ollama".to_string(),
alert_threshold: 0.8,
},
);
let budget_manager = Arc::new(BudgetManager::new(budgets));
let coordinator = coordinator.with_budget_manager(budget_manager.clone());
// Initial profile: mediocre performance
let now = Utc::now();
let initial_execs: Vec<ExecutionData> = (0..10)
.map(|i| ExecutionData {
timestamp: now - Duration::days(i * 2),
duration_ms: 150,
success: i < 5, // 50% success
})
.collect();
let mut initial_expertise = TaskTypeExpertise::from_executions(initial_execs, "coding");
assert!((initial_expertise.success_rate - 0.5).abs() < 0.01);
let mut profile = ProfileAdapter::create_learning_profile(agent_id.clone());
profile = ProfileAdapter::add_task_type_expertise(profile, "coding".to_string(), initial_expertise.clone());
coordinator.update_learning_profile(&agent_id, profile.clone()).ok();
// Check initial profile
let stored_profile = coordinator.get_learning_profile(&agent_id).unwrap();
assert_eq!(
stored_profile.get_task_type_score("coding"),
initial_expertise.success_rate
);
// Simulate improvement: add successful recent executions
let new_exec = ExecutionData {
timestamp: now,
duration_ms: 120,
success: true,
};
initial_expertise.update_with_execution(&new_exec);
assert!(
initial_expertise.success_rate > 0.5,
"Success rate should improve with new successful execution"
);
// Update profile with improved expertise
let mut updated_profile = ProfileAdapter::create_learning_profile(agent_id.clone());
updated_profile = ProfileAdapter::add_task_type_expertise(
updated_profile,
"coding".to_string(),
initial_expertise,
);
coordinator.update_learning_profile(&agent_id, updated_profile).ok();
// Verify improvement is reflected
let final_profile = coordinator.get_learning_profile(&agent_id).unwrap();
let final_score = final_profile.get_task_type_score("coding");
assert!(
final_score > 0.5,
"Final score should reflect improvement"
);
// Verify budget tracking is unaffected
let status = budget_manager.check_budget("developer").await.unwrap();
assert!(!status.exceeded);
}

View File

@ -0,0 +1,395 @@
use chrono::{Duration, Utc};
use vapora_agents::{
ExecutionData, ProfileAdapter, TaskTypeExpertise,
AgentScoringService,
};
use vapora_swarm::messages::AgentProfile;
#[test]
fn test_end_to_end_learning_flow() {
// Simulate historical executions for agent
let now = Utc::now();
let executions: Vec<ExecutionData> = (0..20)
.map(|i| ExecutionData {
timestamp: now - Duration::days(i),
duration_ms: 100 + (i as u64 * 10),
success: i < 18, // 18 successes out of 20 = 90%
})
.collect();
// Calculate expertise from executions
let expertise = TaskTypeExpertise::from_executions(executions, "coding");
assert!((expertise.success_rate - 0.9).abs() < 0.01);
assert_eq!(expertise.total_executions, 20);
// Create learning profile for agent
let mut profile = ProfileAdapter::create_learning_profile("agent-1".to_string());
// Add expertise to profile
profile = ProfileAdapter::add_task_type_expertise(profile, "coding".to_string(), expertise);
// Verify expertise is stored
assert_eq!(profile.get_task_type_score("coding"), 0.9);
assert!(profile.get_confidence("coding") > 0.9); // 20/20 is high confidence
}
#[test]
fn test_learning_profile_improves_over_time() {
let now = Utc::now();
// Initial executions: 50% success
let initial_execs: Vec<ExecutionData> = (0..10)
.map(|i| ExecutionData {
timestamp: now - Duration::days(i * 2),
duration_ms: 100,
success: i < 5,
})
.collect();
let mut initial_expertise = TaskTypeExpertise::from_executions(initial_execs, "coding");
assert!((initial_expertise.success_rate - 0.5).abs() < 0.01);
// New successful execution
let new_exec = ExecutionData {
timestamp: now,
duration_ms: 100,
success: true,
};
initial_expertise.update_with_execution(&new_exec);
// Expertise should improve
assert!(initial_expertise.success_rate > 0.5);
assert_eq!(initial_expertise.total_executions, 11);
}
#[test]
fn test_agent_scoring_with_learning() {
// Create candidate agents
let candidates = vec![
AgentProfile {
id: "agent-a".to_string(),
roles: vec!["developer".to_string()],
capabilities: vec!["coding".to_string()],
current_load: 0.3,
success_rate: 0.8,
availability: true,
},
AgentProfile {
id: "agent-b".to_string(),
roles: vec!["developer".to_string()],
capabilities: vec!["coding".to_string()],
current_load: 0.1,
success_rate: 0.8,
availability: true,
},
];
// Create learning profiles
let mut profile_a = ProfileAdapter::create_learning_profile("agent-a".to_string());
profile_a = ProfileAdapter::add_task_type_expertise(
profile_a,
"coding".to_string(),
TaskTypeExpertise {
success_rate: 0.95,
total_executions: 50,
recent_success_rate: 0.95,
avg_duration_ms: 100.0,
learning_curve: Vec::new(),
confidence: 1.0,
},
);
let mut profile_b = ProfileAdapter::create_learning_profile("agent-b".to_string());
profile_b = ProfileAdapter::add_task_type_expertise(
profile_b,
"coding".to_string(),
TaskTypeExpertise {
success_rate: 0.70,
total_executions: 30,
recent_success_rate: 0.70,
avg_duration_ms: 120.0,
learning_curve: Vec::new(),
confidence: 1.0,
},
);
let learning_profiles = vec![
("agent-a".to_string(), profile_a),
("agent-b".to_string(), profile_b),
];
// Score agents
let ranked = AgentScoringService::rank_agents(candidates, "coding", &learning_profiles);
assert_eq!(ranked.len(), 2);
// agent-a should rank higher due to superior expertise despite higher load
assert_eq!(ranked[0].agent_id, "agent-a");
assert!(ranked[0].final_score > ranked[1].final_score);
}
#[test]
fn test_recency_bias_affects_ranking() {
let candidates = vec![
AgentProfile {
id: "agent-x".to_string(),
roles: vec!["developer".to_string()],
capabilities: vec!["coding".to_string()],
current_load: 0.3,
success_rate: 0.8,
availability: true,
},
AgentProfile {
id: "agent-y".to_string(),
roles: vec!["developer".to_string()],
capabilities: vec!["coding".to_string()],
current_load: 0.3,
success_rate: 0.8,
availability: true,
},
];
// agent-x has high overall success but recent failures
let mut profile_x = ProfileAdapter::create_learning_profile("agent-x".to_string());
profile_x = ProfileAdapter::add_task_type_expertise(
profile_x,
"coding".to_string(),
TaskTypeExpertise {
success_rate: 0.85,
total_executions: 40,
recent_success_rate: 0.60, // Recent poor performance
avg_duration_ms: 100.0,
learning_curve: Vec::new(),
confidence: 1.0,
},
);
// agent-y has consistent good recent performance
let mut profile_y = ProfileAdapter::create_learning_profile("agent-y".to_string());
profile_y = ProfileAdapter::add_task_type_expertise(
profile_y,
"coding".to_string(),
TaskTypeExpertise {
success_rate: 0.80,
total_executions: 30,
recent_success_rate: 0.90, // Recent strong performance
avg_duration_ms: 110.0,
learning_curve: Vec::new(),
confidence: 1.0,
},
);
let learning_profiles = vec![
("agent-x".to_string(), profile_x),
("agent-y".to_string(), profile_y),
];
// Rank with recency bias
let ranked = AgentScoringService::rank_agents_with_recency(candidates, "coding", &learning_profiles);
assert_eq!(ranked.len(), 2);
// agent-y should rank higher due to recent success despite lower overall rate
assert_eq!(ranked[0].agent_id, "agent-y");
}
#[test]
fn test_confidence_prevents_overfitting() {
let candidates = vec![
AgentProfile {
id: "agent-new".to_string(),
roles: vec!["developer".to_string()],
capabilities: vec!["coding".to_string()],
current_load: 0.0,
success_rate: 0.8,
availability: true,
},
AgentProfile {
id: "agent-exp".to_string(),
roles: vec!["developer".to_string()],
capabilities: vec!["coding".to_string()],
current_load: 0.0,
success_rate: 0.8,
availability: true,
},
];
// agent-new: High expertise but low confidence (few samples)
let mut profile_new = ProfileAdapter::create_learning_profile("agent-new".to_string());
profile_new = ProfileAdapter::add_task_type_expertise(
profile_new,
"coding".to_string(),
TaskTypeExpertise {
success_rate: 1.0, // Perfect so far
total_executions: 2,
recent_success_rate: 1.0,
avg_duration_ms: 100.0,
learning_curve: Vec::new(),
confidence: 0.1, // Low confidence - only 2/20 executions
},
);
// agent-exp: Slightly lower expertise but high confidence
let mut profile_exp = ProfileAdapter::create_learning_profile("agent-exp".to_string());
profile_exp = ProfileAdapter::add_task_type_expertise(
profile_exp,
"coding".to_string(),
TaskTypeExpertise {
success_rate: 0.95,
total_executions: 50,
recent_success_rate: 0.95,
avg_duration_ms: 100.0,
learning_curve: Vec::new(),
confidence: 1.0,
},
);
let learning_profiles = vec![
("agent-new".to_string(), profile_new),
("agent-exp".to_string(), profile_exp),
];
let ranked = AgentScoringService::rank_agents(candidates, "coding", &learning_profiles);
// agent-exp should rank higher despite slightly lower expertise due to confidence weighting
assert_eq!(ranked[0].agent_id, "agent-exp");
}
#[test]
fn test_multiple_task_types_independent() {
let mut profile = ProfileAdapter::create_learning_profile("agent-1".to_string());
// Agent excels at coding
let coding_exp = TaskTypeExpertise {
success_rate: 0.95,
total_executions: 30,
recent_success_rate: 0.95,
avg_duration_ms: 100.0,
learning_curve: Vec::new(),
confidence: 1.0,
};
// Agent struggles with documentation
let docs_exp = TaskTypeExpertise {
success_rate: 0.60,
total_executions: 20,
recent_success_rate: 0.65,
avg_duration_ms: 250.0,
learning_curve: Vec::new(),
confidence: 1.0,
};
profile = ProfileAdapter::add_task_type_expertise(profile, "coding".to_string(), coding_exp);
profile = ProfileAdapter::add_task_type_expertise(profile, "documentation".to_string(), docs_exp);
// Verify independence
assert_eq!(profile.get_task_type_score("coding"), 0.95);
assert_eq!(profile.get_task_type_score("documentation"), 0.60);
}
#[tokio::test]
async fn test_coordinator_assignment_with_learning_scores() {
use vapora_agents::{
AgentRegistry, AgentMetadata, AgentCoordinator,
};
use std::sync::Arc;
// Create registry with test agents
let registry = Arc::new(AgentRegistry::new(10));
// Register two agents for developer role
let agent_a = AgentMetadata::new(
"developer".to_string(),
"Agent A - Coding Specialist".to_string(),
"claude".to_string(),
"claude-opus-4-5".to_string(),
vec!["coding".to_string(), "testing".to_string()],
);
let agent_b = AgentMetadata::new(
"developer".to_string(),
"Agent B - Generalist".to_string(),
"claude".to_string(),
"claude-sonnet-4".to_string(),
vec!["coding".to_string(), "documentation".to_string()],
);
let agent_a_id = agent_a.id.clone();
let agent_b_id = agent_b.id.clone();
registry.register_agent(agent_a).ok();
registry.register_agent(agent_b).ok();
// Create coordinator
let coordinator = AgentCoordinator::with_registry(registry);
// Create learning profiles: Agent A excels at coding, Agent B is mediocre
let now = Utc::now();
let agent_a_executions: Vec<ExecutionData> = (0..20)
.map(|i| ExecutionData {
timestamp: now - Duration::days(i),
duration_ms: 100,
success: i < 19, // 95% success rate
})
.collect();
let agent_b_executions: Vec<ExecutionData> = (0..20)
.map(|i| ExecutionData {
timestamp: now - Duration::days(i),
duration_ms: 100,
success: i < 14, // 70% success rate
})
.collect();
let agent_a_expertise = TaskTypeExpertise::from_executions(agent_a_executions, "coding");
let agent_b_expertise = TaskTypeExpertise::from_executions(agent_b_executions, "coding");
let mut agent_a_profile = ProfileAdapter::create_learning_profile(agent_a_id.clone());
agent_a_profile =
ProfileAdapter::add_task_type_expertise(agent_a_profile, "coding".to_string(), agent_a_expertise);
let mut agent_b_profile = ProfileAdapter::create_learning_profile(agent_b_id.clone());
agent_b_profile =
ProfileAdapter::add_task_type_expertise(agent_b_profile, "coding".to_string(), agent_b_expertise);
// Update coordinator with learning profiles
coordinator
.update_learning_profile(&agent_a_id, agent_a_profile)
.ok();
coordinator
.update_learning_profile(&agent_b_id, agent_b_profile)
.ok();
// Assign a coding task
let _task_id = coordinator
.assign_task(
"developer",
"Implement authentication module".to_string(),
"Create secure login and token validation".to_string(),
"Security critical".to_string(),
2,
)
.await
.expect("Should assign task");
// Get the registry to verify which agent was selected
let registry = coordinator.registry();
let agent_a_tasks = registry.list_all()
.iter()
.find(|a| a.id == agent_a_id)
.map(|a| a.current_tasks)
.unwrap_or(0);
let agent_b_tasks = registry.list_all()
.iter()
.find(|a| a.id == agent_b_id)
.map(|a| a.current_tasks)
.unwrap_or(0);
// Agent A (higher expertise in coding) should have been selected
assert!(agent_a_tasks > 0, "Agent A (coding specialist) should have 1+ tasks");
assert_eq!(agent_b_tasks, 0, "Agent B (generalist) should have 0 tasks");
// Verify learning profiles are stored
let stored_profiles = coordinator.get_all_learning_profiles();
assert!(stored_profiles.contains_key(&agent_a_id), "Agent A profile should be stored");
assert!(stored_profiles.contains_key(&agent_b_id), "Agent B profile should be stored");
}

View File

@ -0,0 +1,166 @@
use chrono::{Duration, Utc};
use vapora_agents::{ExecutionData, LearningProfile, TaskTypeExpertise};
#[test]
fn test_per_task_type_expertise() {
let mut profile = LearningProfile::new("agent-1".to_string());
let coding_expertise = TaskTypeExpertise {
success_rate: 0.9,
total_executions: 20,
recent_success_rate: 0.95,
avg_duration_ms: 120.0,
learning_curve: Vec::new(),
confidence: 1.0,
};
profile.set_task_type_expertise("coding".to_string(), coding_expertise);
assert_eq!(profile.get_task_type_score("coding"), 0.9);
assert_eq!(profile.get_task_type_score("documentation"), 0.5); // Default
}
#[test]
fn test_recency_bias_weighting() {
let now = Utc::now();
let executions = vec![
ExecutionData {
timestamp: now - Duration::hours(1),
duration_ms: 100,
success: true,
},
ExecutionData {
timestamp: now - Duration::days(8),
duration_ms: 100,
success: false,
},
];
let expertise = TaskTypeExpertise::from_executions(executions, "coding");
// Recent success should pull average up despite old failure
assert!(expertise.recent_success_rate > 0.5);
assert!(expertise.recent_success_rate > expertise.success_rate);
}
#[test]
fn test_confidence_scaling() {
let now = Utc::now();
// Few executions = low confidence
let few_executions = vec![ExecutionData {
timestamp: now,
duration_ms: 100,
success: true,
}];
let few_expertise = TaskTypeExpertise::from_executions(few_executions, "coding");
assert!(few_expertise.confidence < 0.1);
// Many executions = high confidence
let many_executions: Vec<_> = (0..50)
.map(|i| ExecutionData {
timestamp: now - Duration::hours(i),
duration_ms: 100,
success: i % 2 == 0,
})
.collect();
let many_expertise = TaskTypeExpertise::from_executions(many_executions, "coding");
assert_eq!(many_expertise.confidence, 1.0); // Capped at 1.0
}
#[test]
fn test_learning_curve_generation() {
let now = Utc::now();
let executions = vec![
ExecutionData {
timestamp: now - Duration::hours(25),
duration_ms: 100,
success: true,
},
ExecutionData {
timestamp: now - Duration::hours(24),
duration_ms: 100,
success: true,
},
ExecutionData {
timestamp: now - Duration::hours(1),
duration_ms: 100,
success: false,
},
];
let expertise = TaskTypeExpertise::from_executions(executions, "coding");
assert!(!expertise.learning_curve.is_empty());
// Curve should be chronologically sorted
for i in 1..expertise.learning_curve.len() {
assert!(
expertise.learning_curve[i - 1].0 <= expertise.learning_curve[i].0,
"Learning curve must be chronologically sorted"
);
}
}
#[test]
fn test_execution_update() {
let now = Utc::now();
let mut expertise = TaskTypeExpertise {
success_rate: 0.8,
total_executions: 10,
recent_success_rate: 0.8,
avg_duration_ms: 100.0,
learning_curve: Vec::new(),
confidence: 0.5,
};
let execution = ExecutionData {
timestamp: now,
duration_ms: 150,
success: true,
};
expertise.update_with_execution(&execution);
assert_eq!(expertise.total_executions, 11);
assert!(expertise.success_rate > 0.8); // Success added
assert!(expertise.avg_duration_ms > 100.0); // Duration increased
}
#[test]
fn test_multiple_task_types() {
let mut profile = LearningProfile::new("agent-1".to_string());
let coding = TaskTypeExpertise {
success_rate: 0.95,
total_executions: 20,
recent_success_rate: 0.95,
avg_duration_ms: 120.0,
learning_curve: Vec::new(),
confidence: 1.0,
};
let documentation = TaskTypeExpertise {
success_rate: 0.75,
total_executions: 15,
recent_success_rate: 0.80,
avg_duration_ms: 200.0,
learning_curve: Vec::new(),
confidence: 0.75,
};
profile.set_task_type_expertise("coding".to_string(), coding);
profile.set_task_type_expertise("documentation".to_string(), documentation);
assert_eq!(profile.get_task_type_score("coding"), 0.95);
assert_eq!(profile.get_task_type_score("documentation"), 0.75);
assert_eq!(profile.get_confidence("coding"), 1.0);
assert_eq!(profile.get_confidence("documentation"), 0.75);
}
#[test]
fn test_empty_executions_default() {
let expertise = TaskTypeExpertise::from_executions(Vec::new(), "coding");
assert_eq!(expertise.success_rate, 0.5);
assert_eq!(expertise.total_executions, 0);
assert_eq!(expertise.confidence, 0.0);
}

View File

@ -0,0 +1,263 @@
// Integration tests for SwarmCoordinator integration with AgentCoordinator
// Tests verify swarm task assignment, profile synchronization, and metrics integration
use std::sync::Arc;
use std::time::Duration;
use vapora_agents::{AgentCoordinator, AgentRegistry, ProfileAdapter};
use vapora_agents::registry::AgentMetadata;
/// Helper to create a test agent
fn create_test_agent(id: &str, role: &str) -> AgentMetadata {
AgentMetadata::new(
role.to_string(),
format!("Agent {}", id),
"claude".to_string(),
"claude-sonnet-4".to_string(),
vec!["coding".to_string(), "testing".to_string()],
)
}
#[tokio::test]
async fn test_swarm_coordinator_integration_with_registry() {
// Setup: Create registry and coordinator
let registry = Arc::new(AgentRegistry::new(10));
// Register multiple agents
let agent1 = create_test_agent("1", "developer");
let agent2 = create_test_agent("2", "developer");
registry.register_agent(agent1).unwrap();
registry.register_agent(agent2).unwrap();
// Create coordinator (internally creates and initializes SwarmCoordinator)
let coordinator = AgentCoordinator::with_registry(Arc::clone(&registry));
// Assign a task - should use swarm coordinator
let result = coordinator
.assign_task(
"developer",
"Test task".to_string(),
"Implement a feature".to_string(),
"{}".to_string(),
80,
)
.await;
assert!(result.is_ok(), "Task assignment should succeed");
let assigned_agent_id = result.unwrap();
assert!(!assigned_agent_id.is_empty(), "Agent ID should not be empty");
}
#[tokio::test]
async fn test_profile_adapter_creates_valid_profiles() {
// Setup: Create agents and adapter
let agent1 = create_test_agent("1", "developer");
let agent2 = create_test_agent("2", "reviewer");
// Create profiles from agents
let profile1 = ProfileAdapter::create_profile(&agent1);
let profile2 = ProfileAdapter::create_profile(&agent2);
// Verify profile structure - ID is UUID mapped from agent.id
assert_eq!(profile1.id, agent1.id);
assert_eq!(profile2.id, agent2.id);
// Verify capabilities are mapped
assert!(!profile1.capabilities.is_empty());
assert!(!profile2.capabilities.is_empty());
// Verify default success rate is neutral
assert_eq!(profile1.success_rate, 0.5);
assert_eq!(profile2.success_rate, 0.5);
// Verify availability is based on agent status
assert!(profile1.availability);
assert!(profile2.availability);
}
#[tokio::test]
async fn test_batch_profile_creation() {
// Setup: Create multiple agents
let agents = vec![
create_test_agent("1", "developer"),
create_test_agent("2", "reviewer"),
create_test_agent("3", "tester"),
];
// Batch create profiles
let profiles = ProfileAdapter::batch_create_profiles(agents);
// Verify all profiles created
assert_eq!(profiles.len(), 3);
// Verify each profile has correct properties
for (_i, profile) in profiles.iter().enumerate() {
assert!(!profile.id.is_empty());
assert!(!profile.capabilities.is_empty());
assert!(profile.success_rate >= 0.0 && profile.success_rate <= 1.0);
}
}
#[tokio::test]
async fn test_task_assignment_selects_available_agent() {
// Setup: Create registry with agents
let registry = Arc::new(AgentRegistry::new(10));
let agent1 = create_test_agent("1", "developer");
let agent2 = create_test_agent("2", "developer");
registry.register_agent(agent1).unwrap();
registry.register_agent(agent2).unwrap();
let coordinator = AgentCoordinator::with_registry(Arc::clone(&registry));
// Assign multiple tasks
let result1 = coordinator
.assign_task(
"developer",
"Task 1".to_string(),
"Description".to_string(),
"{}".to_string(),
80,
)
.await;
let result2 = coordinator
.assign_task(
"developer",
"Task 2".to_string(),
"Description".to_string(),
"{}".to_string(),
80,
)
.await;
// Both should succeed
assert!(result1.is_ok());
assert!(result2.is_ok());
// Both should have assigned agents
assert!(!result1.unwrap().is_empty());
assert!(!result2.unwrap().is_empty());
}
#[tokio::test]
async fn test_coordinator_without_agent_fails() {
// Setup: Create registry and coordinator with no agents
let registry = Arc::new(AgentRegistry::new(10));
let coordinator = AgentCoordinator::with_registry(registry);
// Try to assign task with no available agents
let result = coordinator
.assign_task(
"nonexistent",
"Task".to_string(),
"Description".to_string(),
"{}".to_string(),
80,
)
.await;
// Should fail
assert!(result.is_err());
}
#[tokio::test]
async fn test_profile_sync_task_spawns() {
// Setup: Create registry and coordinator
let registry = Arc::new(AgentRegistry::new(10));
let agent = create_test_agent("1", "developer");
registry.register_agent(agent).unwrap();
// Create coordinator (spawns background profile sync task)
let _coordinator = AgentCoordinator::with_registry(Arc::clone(&registry));
// Give background task time to initialize
tokio::time::sleep(Duration::from_millis(100)).await;
// Registry should still have agents
let agents = registry.list_all();
assert_eq!(agents.len(), 1);
}
#[tokio::test]
async fn test_profile_load_calculation() {
// Setup: Create agent with known task count
let agent = create_test_agent("1", "developer");
let profile = ProfileAdapter::create_profile(&agent);
// Verify load is normalized (0.0-1.0)
assert!(profile.current_load >= 0.0 && profile.current_load <= 1.0);
}
#[tokio::test]
async fn test_multiple_role_assignment() {
// Setup: Create registry with agents of different roles
let registry = Arc::new(AgentRegistry::new(10));
let developer = create_test_agent("dev", "developer");
let reviewer = create_test_agent("rev", "reviewer");
registry.register_agent(developer).unwrap();
registry.register_agent(reviewer).unwrap();
let coordinator = AgentCoordinator::with_registry(Arc::clone(&registry));
// Assign task for developer
let dev_result = coordinator
.assign_task(
"developer",
"Code task".to_string(),
"Write code".to_string(),
"{}".to_string(),
80,
)
.await;
// Assign task for reviewer
let rev_result = coordinator
.assign_task(
"reviewer",
"Review task".to_string(),
"Review code".to_string(),
"{}".to_string(),
80,
)
.await;
// Both should succeed
assert!(dev_result.is_ok());
assert!(rev_result.is_ok());
}
#[tokio::test]
async fn test_swarm_statistics_available() {
// Setup: Create registry and coordinator with agents
let registry = Arc::new(AgentRegistry::new(10));
let agent1 = create_test_agent("1", "developer");
let agent2 = create_test_agent("2", "developer");
registry.register_agent(agent1).unwrap();
registry.register_agent(agent2).unwrap();
let coordinator = AgentCoordinator::with_registry(Arc::clone(&registry));
// Give swarm time to initialize
tokio::time::sleep(Duration::from_millis(100)).await;
// Coordinator should have functioning swarm coordinator
let result = coordinator
.assign_task(
"developer",
"Task".to_string(),
"Description".to_string(),
"{}".to_string(),
80,
)
.await;
// Should successfully assign task (verifies swarm is functional)
assert!(result.is_ok());
}

View File

@ -0,0 +1,30 @@
[package]
name = "vapora-analytics"
version.workspace = true
edition.workspace = true
authors.workspace = true
license.workspace = true
repository.workspace = true
rust-version.workspace = true
[dependencies]
tokio = { workspace = true }
tokio-stream = "0.1"
futures = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
thiserror = { workspace = true }
tracing = { workspace = true }
chrono = { workspace = true }
uuid = { workspace = true }
async-trait = { workspace = true }
surrealdb = { workspace = true }
dashmap = { workspace = true }
parking_lot = { workspace = true }
[dev-dependencies]
criterion = { workspace = true }
[[bench]]
name = "pipeline_benchmarks"
harness = false

View File

@ -0,0 +1,139 @@
use criterion::{black_box, criterion_group, criterion_main, Criterion};
use vapora_analytics::{EventPipeline, AgentEvent, AlertLevel};
use tokio::sync::mpsc;
fn pipeline_emit_event(c: &mut Criterion) {
c.bench_function("emit_single_event", |b| {
b.to_async(tokio::runtime::Runtime::new().unwrap())
.iter(|| async {
let (alert_tx, _alert_rx) = mpsc::unbounded_channel();
let (pipeline, _) = EventPipeline::new(alert_tx);
let event = AgentEvent::new_task_completed(
black_box("agent-1".to_string()),
black_box("task-1".to_string()),
1000,
100,
50,
);
black_box(pipeline.emit_event(black_box(event)).await)
});
});
}
fn pipeline_filter_events(c: &mut Criterion) {
c.bench_function("filter_events_100_events", |b| {
b.to_async(tokio::runtime::Runtime::new().unwrap())
.iter_batched(
|| {
let rt = tokio::runtime::Runtime::new().unwrap();
rt.block_on(async {
let (alert_tx, _alert_rx) = mpsc::unbounded_channel();
let (pipeline, _) = EventPipeline::new(alert_tx);
for i in 0..100 {
let event = AgentEvent::new_task_completed(
format!("agent-{}", i % 5),
format!("task-{}", i),
1000 + (i as u64 * 100),
100 + (i as u64 * 10),
50,
);
pipeline.emit_event(event).await.ok();
}
pipeline
})
},
|pipeline| async move {
black_box(
pipeline
.filter_events(|e| e.agent_id == "agent-1")
)
},
criterion::BatchSize::SmallInput,
);
});
}
fn pipeline_get_error_rate(c: &mut Criterion) {
c.bench_function("get_error_rate_200_events", |b| {
b.to_async(tokio::runtime::Runtime::new().unwrap())
.iter_batched(
|| {
let rt = tokio::runtime::Runtime::new().unwrap();
rt.block_on(async {
let (alert_tx, _alert_rx) = mpsc::unbounded_channel();
let (pipeline, _) = EventPipeline::new(alert_tx);
for i in 0..200 {
let event = if i % 20 == 0 {
AgentEvent::new_task_failed(
format!("agent-{}", i % 5),
format!("task-{}", i),
Some("timeout error".to_string()),
)
} else {
AgentEvent::new_task_completed(
format!("agent-{}", i % 5),
format!("task-{}", i),
1000 + (i as u64 * 100),
100 + (i as u64 * 10),
50,
)
};
pipeline.emit_event(event).await.ok();
}
pipeline
})
},
|pipeline| async move {
black_box(pipeline.get_error_rate(60))
},
criterion::BatchSize::SmallInput,
);
});
}
fn pipeline_get_top_agents(c: &mut Criterion) {
c.bench_function("get_top_agents_500_events", |b| {
b.to_async(tokio::runtime::Runtime::new().unwrap())
.iter_batched(
|| {
let rt = tokio::runtime::Runtime::new().unwrap();
rt.block_on(async {
let (alert_tx, _alert_rx) = mpsc::unbounded_channel();
let (pipeline, _) = EventPipeline::new(alert_tx);
for i in 0..500 {
let event = AgentEvent::new_task_completed(
format!("agent-{}", i % 10),
format!("task-{}", i),
1000 + (i as u64 * 100) % 5000,
100 + (i as u64 * 10),
50,
);
pipeline.emit_event(event).await.ok();
}
pipeline
})
},
|pipeline| async move {
black_box(pipeline.get_top_agents(60, 5))
},
criterion::BatchSize::SmallInput,
);
});
}
criterion_group!(
benches,
pipeline_emit_event,
pipeline_filter_events,
pipeline_get_error_rate,
pipeline_get_top_agents
);
criterion_main!(benches);

View File

@ -0,0 +1,24 @@
use thiserror::Error;
#[derive(Debug, Error)]
pub enum AnalyticsError {
#[error("Pipeline error: {0}")]
PipelineError(String),
#[error("Event processing error: {0}")]
ProcessingError(String),
#[error("Aggregation error: {0}")]
AggregationError(String),
#[error("Storage error: {0}")]
StorageError(String),
#[error("Serialization error: {0}")]
SerializationError(#[from] serde_json::Error),
#[error("Channel send error: {0}")]
ChannelError(String),
}
pub type Result<T> = std::result::Result<T, AnalyticsError>;

View File

@ -0,0 +1,165 @@
use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};
/// Agent event from the system
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct AgentEvent {
pub id: String,
pub agent_id: String,
pub event_type: EventType,
pub duration_ms: Option<u64>,
pub input_tokens: Option<u64>,
pub output_tokens: Option<u64>,
pub task_id: Option<String>,
pub error: Option<String>,
pub timestamp: DateTime<Utc>,
}
/// Type of agent event
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq, Hash)]
pub enum EventType {
TaskStarted,
TaskProgress,
TaskCompleted,
TaskFailed,
HeartbeatSent,
ErrorOccurred,
ExecutionWarning,
}
impl EventType {
pub fn as_str(&self) -> &str {
match self {
EventType::TaskStarted => "task_started",
EventType::TaskProgress => "task_progress",
EventType::TaskCompleted => "task_completed",
EventType::TaskFailed => "task_failed",
EventType::HeartbeatSent => "heartbeat",
EventType::ErrorOccurred => "error",
EventType::ExecutionWarning => "warning",
}
}
}
/// Aggregated statistics from events
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct EventAggregation {
pub window_start: DateTime<Utc>,
pub window_end: DateTime<Utc>,
pub event_type: EventType,
pub total_events: u64,
pub distinct_agents: u32,
pub avg_duration_ms: f64,
pub error_count: u64,
pub success_count: u64,
}
/// Alert triggered by analytics
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Alert {
pub id: String,
pub level: AlertLevel,
pub message: String,
pub affected_agents: Vec<String>,
pub affected_tasks: Vec<String>,
pub triggered_at: DateTime<Utc>,
pub resolution: Option<String>,
}
/// Alert severity level
#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq, PartialOrd, Ord)]
pub enum AlertLevel {
Info,
Warning,
Critical,
}
impl EventType {
pub fn is_error(&self) -> bool {
matches!(self, EventType::TaskFailed | EventType::ErrorOccurred)
}
pub fn is_success(&self) -> bool {
matches!(self, EventType::TaskCompleted)
}
}
impl AgentEvent {
pub fn new_task_started(agent_id: String, task_id: String) -> Self {
Self {
id: uuid::Uuid::new_v4().to_string(),
agent_id,
event_type: EventType::TaskStarted,
duration_ms: None,
input_tokens: None,
output_tokens: None,
task_id: Some(task_id),
error: None,
timestamp: Utc::now(),
}
}
pub fn new_task_completed(
agent_id: String,
task_id: String,
duration_ms: u64,
input_tokens: u64,
output_tokens: u64,
) -> Self {
Self {
id: uuid::Uuid::new_v4().to_string(),
agent_id,
event_type: EventType::TaskCompleted,
duration_ms: Some(duration_ms),
input_tokens: Some(input_tokens),
output_tokens: Some(output_tokens),
task_id: Some(task_id),
error: None,
timestamp: Utc::now(),
}
}
pub fn new_task_failed(agent_id: String, task_id: String, error: String) -> Self {
Self {
id: uuid::Uuid::new_v4().to_string(),
agent_id,
event_type: EventType::TaskFailed,
duration_ms: None,
input_tokens: None,
output_tokens: None,
task_id: Some(task_id),
error: Some(error),
timestamp: Utc::now(),
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_event_type_is_error() {
assert!(EventType::TaskFailed.is_error());
assert!(EventType::ErrorOccurred.is_error());
assert!(!EventType::TaskCompleted.is_error());
}
#[test]
fn test_create_events() {
let completed = AgentEvent::new_task_completed(
"agent-1".to_string(),
"task-1".to_string(),
1000,
100,
50,
);
assert_eq!(completed.event_type, EventType::TaskCompleted);
assert_eq!(completed.duration_ms, Some(1000));
let failed =
AgentEvent::new_task_failed("agent-1".to_string(), "task-2".to_string(), "timeout".to_string());
assert_eq!(failed.event_type, EventType::TaskFailed);
assert!(failed.error.is_some());
}
}

View File

@ -0,0 +1,10 @@
// vapora-analytics: Real-time streaming analytics and event processing
// Phase 4 Sprint 2: pathway-like event pipeline
pub mod error;
pub mod events;
pub mod pipeline;
pub use error::{AnalyticsError, Result};
pub use events::{AgentEvent, Alert, AlertLevel, EventAggregation, EventType};
pub use pipeline::EventPipeline;

View File

@ -0,0 +1,300 @@
use crate::error::{AnalyticsError, Result};
use crate::events::*;
use chrono::{Duration, Utc};
use dashmap::DashMap;
use std::collections::VecDeque;
use std::sync::Arc;
use tokio::sync::mpsc;
use tracing::debug;
/// Streaming pipeline for event processing
#[derive(Clone)]
pub struct EventPipeline {
event_tx: mpsc::UnboundedSender<AgentEvent>,
event_rx: Arc<tokio::sync::Mutex<mpsc::UnboundedReceiver<AgentEvent>>>,
alerts_tx: mpsc::UnboundedSender<Alert>,
time_windows: Arc<DashMap<String, VecDeque<AgentEvent>>>,
}
impl EventPipeline {
/// Create new event pipeline
pub fn new(external_alert_tx: mpsc::UnboundedSender<Alert>) -> (Self, mpsc::UnboundedSender<Alert>) {
let (event_tx, event_rx) = mpsc::unbounded_channel();
let pipeline = Self {
event_tx,
event_rx: Arc::new(tokio::sync::Mutex::new(event_rx)),
alerts_tx: external_alert_tx.clone(),
time_windows: Arc::new(DashMap::new()),
};
(pipeline, external_alert_tx)
}
/// Emit an event into the pipeline
pub async fn emit_event(&self, event: AgentEvent) -> Result<()> {
self.event_tx.send(event).map_err(|e| {
AnalyticsError::ChannelError(format!("Failed to emit event: {}", e))
})?;
Ok(())
}
/// Start processing events from the pipeline
pub async fn run(&self, window_duration_secs: u64) -> Result<()> {
let mut rx = self.event_rx.lock().await;
let time_windows = self.time_windows.clone();
let alerts_tx = self.alerts_tx.clone();
while let Some(event) = rx.recv().await {
debug!("Processing event: {:?}", event.event_type);
// Store in time window
let window_key = format!(
"{}_{}",
event.event_type.as_str(),
event.timestamp.timestamp() / (window_duration_secs as i64)
);
time_windows
.entry(window_key.clone())
.or_insert_with(VecDeque::new)
.push_back(event.clone());
// Check for alerts
if event.event_type.is_error() {
let alert = Alert {
id: uuid::Uuid::new_v4().to_string(),
level: AlertLevel::Warning,
message: format!(
"Error in agent {}: {}",
event.agent_id,
event.error.clone().unwrap_or_default()
),
affected_agents: vec![event.agent_id.clone()],
affected_tasks: event.task_id.clone().into_iter().collect(),
triggered_at: Utc::now(),
resolution: None,
};
alerts_tx.send(alert).ok();
}
// Check for performance degradation
if let Some(duration) = event.duration_ms {
if duration > 30_000 {
let alert = Alert {
id: uuid::Uuid::new_v4().to_string(),
level: AlertLevel::Warning,
message: format!(
"Slow task execution: {} took {}ms",
event.agent_id, duration
),
affected_agents: vec![event.agent_id.clone()],
affected_tasks: event.task_id.clone().into_iter().collect(),
triggered_at: Utc::now(),
resolution: Some("Consider scaling or optimization".to_string()),
};
alerts_tx.send(alert).ok();
}
}
}
Ok(())
}
/// Get aggregated statistics for a time window
pub async fn get_window_stats(
&self,
event_type: EventType,
window_secs: u64,
) -> Result<EventAggregation> {
let now = Utc::now();
let window_start = now - Duration::seconds(window_secs as i64);
let mut total_events = 0u64;
let mut agents = std::collections::HashSet::new();
let mut durations = Vec::new();
let mut error_count = 0u64;
let mut success_count = 0u64;
for entry in self.time_windows.iter() {
for event in entry.value().iter() {
if event.event_type == event_type && event.timestamp > window_start {
total_events += 1;
agents.insert(event.agent_id.clone());
if let Some(duration) = event.duration_ms {
durations.push(duration);
}
if event.event_type.is_error() {
error_count += 1;
} else if event.event_type.is_success() {
success_count += 1;
}
}
}
}
let avg_duration = if !durations.is_empty() {
durations.iter().sum::<u64>() as f64 / durations.len() as f64
} else {
0.0
};
Ok(EventAggregation {
window_start,
window_end: now,
event_type,
total_events,
distinct_agents: agents.len() as u32,
avg_duration_ms: avg_duration,
error_count,
success_count,
})
}
/// Filter events by criteria
pub fn filter_events<F>(&self, predicate: F) -> Vec<AgentEvent>
where
F: Fn(&AgentEvent) -> bool,
{
self.time_windows
.iter()
.flat_map(|entry| {
entry
.value()
.iter()
.filter(|event| predicate(event))
.cloned()
.collect::<Vec<_>>()
})
.collect()
}
/// Get error rate in last N seconds
pub async fn get_error_rate(&self, window_secs: u64) -> Result<f64> {
let now = Utc::now();
let window_start = now - Duration::seconds(window_secs as i64);
let mut total = 0u64;
let mut errors = 0u64;
for entry in self.time_windows.iter() {
for event in entry.value().iter() {
if event.timestamp > window_start {
total += 1;
if event.event_type.is_error() {
errors += 1;
}
}
}
}
if total == 0 {
Ok(0.0)
} else {
Ok(errors as f64 / total as f64)
}
}
/// Get throughput (events per second)
pub async fn get_throughput(&self, window_secs: u64) -> Result<f64> {
let now = Utc::now();
let window_start = now - Duration::seconds(window_secs as i64);
let mut count = 0u64;
for entry in self.time_windows.iter() {
for event in entry.value().iter() {
if event.timestamp > window_start {
count += 1;
}
}
}
Ok(count as f64 / window_secs as f64)
}
/// Get top N agents by task completion
pub async fn get_top_agents(&self, limit: usize) -> Result<Vec<(String, u64)>> {
let mut agent_counts: std::collections::HashMap<String, u64> =
std::collections::HashMap::new();
for entry in self.time_windows.iter() {
for event in entry.value().iter() {
if event.event_type.is_success() {
*agent_counts.entry(event.agent_id.clone()).or_insert(0) += 1;
}
}
}
let mut agents: Vec<_> = agent_counts.into_iter().collect();
agents.sort_by(|a, b| b.1.cmp(&a.1));
Ok(agents.into_iter().take(limit).collect())
}
}
#[cfg(test)]
mod tests {
use super::*;
#[tokio::test]
async fn test_pipeline_creation() {
let (_alert_tx, alert_rx) = mpsc::unbounded_channel();
let (_pipeline, _alerts) = EventPipeline::new(_alert_tx);
assert!(alert_rx.is_empty());
}
#[tokio::test]
async fn test_emit_event() {
let (alert_tx, _alert_rx) = mpsc::unbounded_channel();
let (pipeline, _alerts) = EventPipeline::new(alert_tx);
let event = AgentEvent::new_task_completed(
"agent-1".to_string(),
"task-1".to_string(),
1000,
100,
50,
);
assert!(pipeline.emit_event(event).await.is_ok());
}
#[tokio::test]
async fn test_filter_events() {
let (alert_tx, _alert_rx) = mpsc::unbounded_channel();
let (pipeline, _alerts) = EventPipeline::new(alert_tx);
// Spawn pipeline processor in background
let pipeline_clone = pipeline.clone();
tokio::spawn(async move {
pipeline_clone.run(60).await.ok();
});
let event1 = AgentEvent::new_task_completed(
"agent-1".to_string(),
"task-1".to_string(),
1000,
100,
50,
);
let event2 = AgentEvent::new_task_failed(
"agent-2".to_string(),
"task-2".to_string(),
"error".to_string(),
);
pipeline.emit_event(event1).await.ok();
pipeline.emit_event(event2).await.ok();
// Give pipeline time to process events
tokio::time::sleep(tokio::time::Duration::from_millis(100)).await;
let filtered = pipeline.filter_events(|e| e.event_type.is_error());
assert_eq!(filtered.len(), 1);
}
}

View File

@ -0,0 +1,92 @@
[package]
name = "vapora-backend"
version.workspace = true
edition.workspace = true
authors.workspace = true
license.workspace = true
repository.workspace = true
rust-version.workspace = true
[lib]
name = "vapora_backend"
path = "src/lib.rs"
[[bin]]
name = "vapora-backend"
path = "src/main.rs"
[dependencies]
# Internal crates
vapora-shared = { workspace = true }
vapora-agents = { workspace = true }
vapora-llm-router = { workspace = true }
vapora-swarm = { workspace = true }
vapora-tracking = { path = "../vapora-tracking" }
# Secrets management
secretumvault = { workspace = true }
# Web framework
axum = { workspace = true }
tower = { workspace = true }
tower-http = { workspace = true }
# Async runtime
tokio = { workspace = true }
futures = { workspace = true }
async-trait = { workspace = true }
# Serialization
serde = { workspace = true }
serde_json = { workspace = true }
toml = { workspace = true }
serde_yaml = { workspace = true }
# Error handling
anyhow = { workspace = true }
thiserror = { workspace = true }
# HTTP
http = { workspace = true }
# Logging
tracing = { workspace = true }
tracing-subscriber = { workspace = true }
# Database
surrealdb = { workspace = true }
sqlx = { workspace = true }
# Message Queue
async-nats = { workspace = true }
# Authentication
jsonwebtoken = { workspace = true }
argon2 = { workspace = true }
tower-sessions = { workspace = true }
tower-cookies = { workspace = true }
# Utilities
uuid = { workspace = true }
chrono = { workspace = true }
dotenv = { workspace = true }
once_cell = { workspace = true }
regex = { workspace = true }
# Configuration
clap = { workspace = true }
# Metrics
prometheus = { workspace = true }
# TLS
axum-server = { workspace = true }
rustls = { workspace = true }
rustls-pemfile = { workspace = true }
[dev-dependencies]
mockall = { workspace = true }
axum-test = { workspace = true }
wiremock = { workspace = true }
tempfile = { workspace = true }

View File

@ -0,0 +1,159 @@
// Agents API endpoints
use axum::{
extract::{Path, State},
http::StatusCode,
response::IntoResponse,
Json,
};
use serde::Deserialize;
use vapora_shared::models::{Agent, AgentStatus};
use crate::api::ApiResult;
use crate::api::state::AppState;
#[derive(Debug, Deserialize)]
pub struct UpdateStatusPayload {
pub status: AgentStatus,
}
#[derive(Debug, Deserialize)]
pub struct CapabilityPayload {
pub capability: String,
}
#[derive(Debug, Deserialize)]
pub struct SkillPayload {
pub skill: String,
}
/// List all agents
///
/// GET /api/v1/agents
pub async fn list_agents(State(state): State<AppState>) -> ApiResult<impl IntoResponse> {
let agents = state.agent_service.list_agents().await?;
Ok(Json(agents))
}
/// Get a specific agent
///
/// GET /api/v1/agents/:id
pub async fn get_agent(
State(state): State<AppState>,
Path(id): Path<String>,
) -> ApiResult<impl IntoResponse> {
let agent = state.agent_service.get_agent(&id).await?;
Ok(Json(agent))
}
/// Register a new agent
///
/// POST /api/v1/agents
pub async fn register_agent(
State(state): State<AppState>,
Json(agent): Json<Agent>,
) -> ApiResult<impl IntoResponse> {
let created = state.agent_service.register_agent(agent).await?;
Ok((StatusCode::CREATED, Json(created)))
}
/// Update an agent
///
/// PUT /api/v1/agents/:id
pub async fn update_agent(
State(state): State<AppState>,
Path(id): Path<String>,
Json(updates): Json<Agent>,
) -> ApiResult<impl IntoResponse> {
let updated = state.agent_service.update_agent(&id, updates).await?;
Ok(Json(updated))
}
/// Deregister an agent
///
/// DELETE /api/v1/agents/:id
pub async fn deregister_agent(
State(state): State<AppState>,
Path(id): Path<String>,
) -> ApiResult<impl IntoResponse> {
state.agent_service.deregister_agent(&id).await?;
Ok(StatusCode::NO_CONTENT)
}
/// Update agent status
///
/// PUT /api/v1/agents/:id/status
pub async fn update_agent_status(
State(state): State<AppState>,
Path(id): Path<String>,
Json(payload): Json<UpdateStatusPayload>,
) -> ApiResult<impl IntoResponse> {
let updated = state
.agent_service
.update_agent_status(&id, payload.status)
.await?;
Ok(Json(updated))
}
/// Add capability to agent
///
/// POST /api/v1/agents/:id/capabilities
pub async fn add_capability(
State(state): State<AppState>,
Path(id): Path<String>,
Json(payload): Json<CapabilityPayload>,
) -> ApiResult<impl IntoResponse> {
let updated = state
.agent_service
.add_capability(&id, payload.capability)
.await?;
Ok(Json(updated))
}
/// Remove capability from agent
///
/// DELETE /api/v1/agents/:id/capabilities/:capability
pub async fn remove_capability(
State(state): State<AppState>,
Path((id, capability)): Path<(String, String)>,
) -> ApiResult<impl IntoResponse> {
let updated = state
.agent_service
.remove_capability(&id, &capability)
.await?;
Ok(Json(updated))
}
/// Add skill to agent
///
/// POST /api/v1/agents/:id/skills
pub async fn add_skill(
State(state): State<AppState>,
Path(id): Path<String>,
Json(payload): Json<SkillPayload>,
) -> ApiResult<impl IntoResponse> {
let updated = state.agent_service.add_skill(&id, payload.skill).await?;
Ok(Json(updated))
}
/// Get available agents
///
/// GET /api/v1/agents/available
pub async fn get_available_agents(State(state): State<AppState>) -> ApiResult<impl IntoResponse> {
let agents = state.agent_service.get_available_agents().await?;
Ok(Json(agents))
}
/// Check agent health
///
/// GET /api/v1/agents/:id/health
pub async fn check_agent_health(
State(state): State<AppState>,
Path(id): Path<String>,
) -> ApiResult<impl IntoResponse> {
let is_healthy = state.agent_service.check_agent_health(&id).await?;
Ok(Json(serde_json::json!({
"agent_id": id,
"healthy": is_healthy
})))
}

View File

@ -0,0 +1,60 @@
// API error handling - Convert VaporaError to HTTP responses
use axum::{
http::StatusCode,
response::{IntoResponse, Response},
Json,
};
use serde_json::json;
use vapora_shared::VaporaError;
/// API Result type that implements IntoResponse
pub type ApiResult<T> = Result<T, ApiError>;
/// Convert VaporaError into an HTTP response
pub fn error_response(error: VaporaError) -> Response {
let (status, message) = match error {
VaporaError::NotFound(msg) => (StatusCode::NOT_FOUND, msg),
VaporaError::InvalidInput(msg) => (StatusCode::BAD_REQUEST, msg),
VaporaError::Unauthorized(msg) => (StatusCode::UNAUTHORIZED, msg),
VaporaError::ConfigError(msg) => (StatusCode::INTERNAL_SERVER_ERROR, format!("Configuration error: {}", msg)),
VaporaError::DatabaseError(msg) => (StatusCode::INTERNAL_SERVER_ERROR, format!("Database error: {}", msg)),
VaporaError::AgentError(msg) => (StatusCode::INTERNAL_SERVER_ERROR, format!("Agent error: {}", msg)),
VaporaError::LLMRouterError(msg) => (StatusCode::INTERNAL_SERVER_ERROR, format!("LLM router error: {}", msg)),
VaporaError::WorkflowError(msg) => (StatusCode::INTERNAL_SERVER_ERROR, format!("Workflow error: {}", msg)),
VaporaError::NatsError(msg) => (StatusCode::INTERNAL_SERVER_ERROR, format!("NATS error: {}", msg)),
VaporaError::IoError(err) => (StatusCode::INTERNAL_SERVER_ERROR, format!("IO error: {}", err)),
VaporaError::SerializationError(err) => (StatusCode::BAD_REQUEST, format!("Serialization error: {}", err)),
VaporaError::TomlError(msg) => (StatusCode::INTERNAL_SERVER_ERROR, format!("TOML error: {}", msg)),
VaporaError::InternalError(msg) => (StatusCode::INTERNAL_SERVER_ERROR, msg),
};
let body = Json(json!({
"error": message,
"status": status.as_u16(),
}));
(status, body).into_response()
}
/// Wrapper type to implement IntoResponse for VaporaError
#[derive(Debug)]
pub struct ApiError(pub VaporaError);
impl IntoResponse for ApiError {
fn into_response(self) -> Response {
error_response(self.0)
}
}
impl From<VaporaError> for ApiError {
fn from(err: VaporaError) -> Self {
ApiError(err)
}
}
impl From<serde_json::Error> for ApiError {
fn from(err: serde_json::Error) -> Self {
ApiError(VaporaError::SerializationError(err))
}
}

View File

@ -0,0 +1,30 @@
// Health check endpoint
use axum::{http::StatusCode, response::IntoResponse, Json};
use serde_json::json;
/// Health check endpoint
///
/// Returns current server status and version information
pub async fn health() -> impl IntoResponse {
(
StatusCode::OK,
Json(json!({
"status": "healthy",
"service": "vapora-backend",
"version": env!("CARGO_PKG_VERSION"),
"timestamp": chrono::Utc::now().to_rfc3339(),
})),
)
}
#[cfg(test)]
mod tests {
use super::*;
#[tokio::test]
async fn test_health_endpoint() {
let response = health().await;
// Response type verification - actual testing will be in integration tests
}
}

View File

@ -0,0 +1,41 @@
// Prometheus metrics endpoint
// Phase 5.2: Expose swarm and backend metrics
use axum::{
http::StatusCode,
response::{IntoResponse, Response},
};
use prometheus::{Encoder, TextEncoder};
/// Get Prometheus metrics in text format
pub async fn metrics_handler() -> Result<impl IntoResponse, MetricsError> {
let encoder = TextEncoder::new();
let metric_families = prometheus::gather();
let mut buffer = Vec::new();
encoder
.encode(&metric_families, &mut buffer)
.map_err(MetricsError::EncodingFailed)?;
Ok((
StatusCode::OK,
[("content-type", "text/plain; version=0.0.4; charset=utf-8")],
String::from_utf8_lossy(&buffer).to_string(),
))
}
/// Metrics endpoint errors
#[derive(Debug)]
pub enum MetricsError {
EncodingFailed(prometheus::Error),
}
impl IntoResponse for MetricsError {
fn into_response(self) -> Response {
let message = match self {
MetricsError::EncodingFailed(e) => format!("Failed to encode metrics: {:?}", e),
};
(StatusCode::INTERNAL_SERVER_ERROR, message).into_response()
}
}

View File

@ -0,0 +1,17 @@
// API module - HTTP endpoints
pub mod agents;
pub mod error;
pub mod health;
pub mod metrics;
pub mod projects;
pub mod state;
pub mod swarm;
pub mod tasks;
pub mod tracking;
pub mod websocket;
// pub mod workflows; // TODO: Phase 4 - Re-enable when workflow module imports are fixed
pub use error::ApiResult;
// pub use error::ApiError; // Temporarily commented - remove ApiError export
pub use state::AppState;

View File

@ -0,0 +1,137 @@
// Projects API endpoints
use axum::{
extract::{Path, State},
http::StatusCode,
response::IntoResponse,
Json,
};
use vapora_shared::models::{Project, ProjectStatus};
use crate::api::ApiResult;
use crate::api::state::AppState;
/// List all projects for a tenant
///
/// GET /api/v1/projects
pub async fn list_projects(State(state): State<AppState>) -> ApiResult<impl IntoResponse> {
// TODO: Extract tenant_id from JWT token
let tenant_id = "default";
let projects = state.project_service.list_projects(tenant_id).await?;
Ok(Json(projects))
}
/// Get a specific project
///
/// GET /api/v1/projects/:id
pub async fn get_project(
State(state): State<AppState>,
Path(id): Path<String>,
) -> ApiResult<impl IntoResponse> {
// TODO: Extract tenant_id from JWT token
let tenant_id = "default";
let project = state.project_service.get_project(&id, tenant_id).await?;
Ok(Json(project))
}
/// Create a new project
///
/// POST /api/v1/projects
pub async fn create_project(
State(state): State<AppState>,
Json(mut project): Json<Project>,
) -> ApiResult<impl IntoResponse> {
// TODO: Extract tenant_id from JWT token
project.tenant_id = "default".to_string();
let created = state.project_service.create_project(project).await?;
Ok((StatusCode::CREATED, Json(created)))
}
/// Update a project
///
/// PUT /api/v1/projects/:id
pub async fn update_project(
State(state): State<AppState>,
Path(id): Path<String>,
Json(updates): Json<Project>,
) -> ApiResult<impl IntoResponse> {
// TODO: Extract tenant_id from JWT token
let tenant_id = "default";
let updated = state
.project_service
.update_project(&id, tenant_id, updates)
.await?;
Ok(Json(updated))
}
/// Delete a project
///
/// DELETE /api/v1/projects/:id
pub async fn delete_project(
State(state): State<AppState>,
Path(id): Path<String>,
) -> ApiResult<impl IntoResponse> {
// TODO: Extract tenant_id from JWT token
let tenant_id = "default";
state.project_service.delete_project(&id, tenant_id).await?;
Ok(StatusCode::NO_CONTENT)
}
/// Add a feature to a project
///
/// POST /api/v1/projects/:id/features
pub async fn add_feature(
State(state): State<AppState>,
Path(id): Path<String>,
Json(payload): Json<serde_json::Value>,
) -> ApiResult<impl IntoResponse> {
// TODO: Extract tenant_id from JWT token
let tenant_id = "default";
let feature = payload["feature"]
.as_str()
.ok_or_else(|| vapora_shared::VaporaError::InvalidInput("Missing 'feature' field".to_string()))?
.to_string();
let updated = state
.project_service
.add_feature(&id, tenant_id, feature)
.await?;
Ok(Json(updated))
}
/// Remove a feature from a project
///
/// DELETE /api/v1/projects/:id/features/:feature
pub async fn remove_feature(
State(state): State<AppState>,
Path((id, feature)): Path<(String, String)>,
) -> ApiResult<impl IntoResponse> {
// TODO: Extract tenant_id from JWT token
let tenant_id = "default";
let updated = state
.project_service
.remove_feature(&id, tenant_id, &feature)
.await?;
Ok(Json(updated))
}
/// Archive a project
///
/// POST /api/v1/projects/:id/archive
pub async fn archive_project(
State(state): State<AppState>,
Path(id): Path<String>,
) -> ApiResult<impl IntoResponse> {
// TODO: Extract tenant_id from JWT token
let tenant_id = "default";
let updated = state.project_service.archive_project(&id, tenant_id).await?;
Ok(Json(updated))
}

View File

@ -0,0 +1,28 @@
// API state - Shared application state for Axum handlers
use crate::services::{AgentService, ProjectService, TaskService};
use std::sync::Arc;
/// Application state shared across all API handlers
#[derive(Clone)]
pub struct AppState {
pub project_service: Arc<ProjectService>,
pub task_service: Arc<TaskService>,
pub agent_service: Arc<AgentService>,
// TODO: Phase 4 - Add workflow_service when workflow module is ready
}
impl AppState {
/// Create a new AppState instance
pub fn new(
project_service: ProjectService,
task_service: TaskService,
agent_service: AgentService,
) -> Self {
Self {
project_service: Arc::new(project_service),
task_service: Arc::new(task_service),
agent_service: Arc::new(agent_service),
}
}
}

View File

@ -0,0 +1,112 @@
// Swarm API endpoints for task coordination and agent management
// Phase 5.2: SwarmCoordinator integration with REST API
use axum::{
extract::Extension,
http::StatusCode,
response::IntoResponse,
routing::get,
Json, Router,
};
use serde::{Deserialize, Serialize};
use std::sync::Arc;
use tracing::info;
use vapora_swarm::coordinator::SwarmCoordinator;
#[derive(Serialize, Deserialize, Clone)]
pub struct SwarmStatsResponse {
pub total_agents: u32,
pub available_agents: u32,
pub avg_load: f64,
pub active_tasks: u32,
pub active_coalitions: u32,
}
#[derive(Serialize)]
pub struct HealthResponse {
pub status: String,
pub agents_count: u32,
}
pub fn swarm_routes() -> Router {
Router::new()
.route("/swarm/stats", get(swarm_statistics))
.route("/swarm/health", get(swarm_health))
}
/// Get swarm statistics
pub async fn swarm_statistics(
Extension(swarm): Extension<Arc<SwarmCoordinator>>,
) -> impl IntoResponse {
let stats = swarm.get_swarm_stats();
info!(
"Swarm stats: {} total agents, {} available, {:.2}% avg load",
stats.total_agents, stats.available_agents, stats.avg_load * 100.0
);
(
StatusCode::OK,
Json(SwarmStatsResponse {
total_agents: stats.total_agents,
available_agents: stats.available_agents,
avg_load: stats.avg_load,
active_tasks: stats.active_tasks,
active_coalitions: stats.active_coalitions,
}),
)
}
/// Get swarm health status
pub async fn swarm_health(
Extension(swarm): Extension<Arc<SwarmCoordinator>>,
) -> impl IntoResponse {
let stats = swarm.get_swarm_stats();
let status = if stats.total_agents > 0 && stats.available_agents > 0 {
"healthy"
} else if stats.total_agents > 0 {
"degraded"
} else {
"no_agents"
};
(
StatusCode::OK,
Json(HealthResponse {
status: status.to_string(),
agents_count: stats.total_agents,
}),
)
}
#[cfg(test)]
mod tests {
use super::*;
#[tokio::test]
async fn test_swarm_stats_response_structure() {
let response = SwarmStatsResponse {
total_agents: 10,
available_agents: 8,
avg_load: 0.5,
active_tasks: 5,
active_coalitions: 2,
};
assert_eq!(response.total_agents, 10);
assert_eq!(response.available_agents, 8);
assert_eq!(response.active_tasks, 5);
}
#[tokio::test]
async fn test_health_response_structure() {
let response = HealthResponse {
status: "healthy".to_string(),
agents_count: 5,
};
assert_eq!(response.status, "healthy");
assert_eq!(response.agents_count, 5);
}
}

View File

@ -0,0 +1,199 @@
// Tasks API endpoints
use axum::{
extract::{Path, Query, State},
http::StatusCode,
response::IntoResponse,
Json,
};
use serde::Deserialize;
use vapora_shared::models::{Task, TaskStatus, TaskPriority};
use crate::api::ApiResult;
use crate::api::state::AppState;
#[derive(Debug, Deserialize)]
pub struct TaskQueryParams {
pub project_id: String,
pub status: Option<String>,
pub assignee: Option<String>,
}
#[derive(Debug, Deserialize)]
pub struct ReorderTaskPayload {
pub task_order: i32,
pub status: Option<TaskStatus>,
}
#[derive(Debug, Deserialize)]
pub struct AssignTaskPayload {
pub assignee: String,
}
#[derive(Debug, Deserialize)]
pub struct UpdatePriorityPayload {
pub priority: TaskPriority,
}
/// List tasks with optional filters
///
/// GET /api/v1/tasks?project_id=xxx&status=todo&assignee=agent1
pub async fn list_tasks(
State(state): State<AppState>,
Query(params): Query<TaskQueryParams>,
) -> ApiResult<impl IntoResponse> {
// TODO: Extract tenant_id from JWT token
let tenant_id = "default";
let tasks = if let Some(status_str) = params.status {
// Parse status
let status: TaskStatus = serde_json::from_value(serde_json::json!(status_str))?;
state
.task_service
.list_tasks_by_status(&params.project_id, tenant_id, status)
.await?
} else if let Some(assignee) = params.assignee {
state
.task_service
.list_tasks_by_assignee(&params.project_id, tenant_id, &assignee)
.await?
} else {
state
.task_service
.list_tasks(&params.project_id, tenant_id)
.await?
};
Ok(Json(tasks))
}
/// Get a specific task
///
/// GET /api/v1/tasks/:id
pub async fn get_task(
State(state): State<AppState>,
Path(id): Path<String>,
) -> ApiResult<impl IntoResponse> {
// TODO: Extract tenant_id from JWT token
let tenant_id = "default";
let task = state.task_service.get_task(&id, tenant_id).await?;
Ok(Json(task))
}
/// Create a new task
///
/// POST /api/v1/tasks
pub async fn create_task(
State(state): State<AppState>,
Json(mut task): Json<Task>,
) -> ApiResult<impl IntoResponse> {
// TODO: Extract tenant_id from JWT token
task.tenant_id = "default".to_string();
let created = state.task_service.create_task(task).await?;
Ok((StatusCode::CREATED, Json(created)))
}
/// Update a task
///
/// PUT /api/v1/tasks/:id
pub async fn update_task(
State(state): State<AppState>,
Path(id): Path<String>,
Json(updates): Json<Task>,
) -> ApiResult<impl IntoResponse> {
// TODO: Extract tenant_id from JWT token
let tenant_id = "default";
let updated = state.task_service.update_task(&id, tenant_id, updates).await?;
Ok(Json(updated))
}
/// Delete a task
///
/// DELETE /api/v1/tasks/:id
pub async fn delete_task(
State(state): State<AppState>,
Path(id): Path<String>,
) -> ApiResult<impl IntoResponse> {
// TODO: Extract tenant_id from JWT token
let tenant_id = "default";
state.task_service.delete_task(&id, tenant_id).await?;
Ok(StatusCode::NO_CONTENT)
}
/// Reorder a task (for Kanban drag & drop)
///
/// PUT /api/v1/tasks/:id/reorder
pub async fn reorder_task(
State(state): State<AppState>,
Path(id): Path<String>,
Json(payload): Json<ReorderTaskPayload>,
) -> ApiResult<impl IntoResponse> {
// TODO: Extract tenant_id from JWT token
let tenant_id = "default";
let updated = state
.task_service
.reorder_task(&id, tenant_id, payload.task_order, payload.status)
.await?;
Ok(Json(updated))
}
/// Update task status
///
/// PUT /api/v1/tasks/:id/status
pub async fn update_task_status(
State(state): State<AppState>,
Path(id): Path<String>,
Json(payload): Json<serde_json::Value>,
) -> ApiResult<impl IntoResponse> {
// TODO: Extract tenant_id from JWT token
let tenant_id = "default";
let status: TaskStatus = serde_json::from_value(payload["status"].clone())?;
let updated = state
.task_service
.update_task_status(&id, tenant_id, status)
.await?;
Ok(Json(updated))
}
/// Assign a task to an agent/user
///
/// PUT /api/v1/tasks/:id/assign
pub async fn assign_task(
State(state): State<AppState>,
Path(id): Path<String>,
Json(payload): Json<AssignTaskPayload>,
) -> ApiResult<impl IntoResponse> {
// TODO: Extract tenant_id from JWT token
let tenant_id = "default";
let updated = state
.task_service
.assign_task(&id, tenant_id, payload.assignee)
.await?;
Ok(Json(updated))
}
/// Update task priority
///
/// PUT /api/v1/tasks/:id/priority
pub async fn update_priority(
State(state): State<AppState>,
Path(id): Path<String>,
Json(payload): Json<UpdatePriorityPayload>,
) -> ApiResult<impl IntoResponse> {
// TODO: Extract tenant_id from JWT token
let tenant_id = "default";
let updated = state
.task_service
.update_priority(&id, tenant_id, payload.priority)
.await?;
Ok(Json(updated))
}

View File

@ -0,0 +1,127 @@
//! Tracking API endpoints for project change logs and TODOs
//!
//! Integrates vapora-tracking system with the main backend API,
//! providing unified access to project tracking data.
use axum::{
extract::{Path, Query, State},
http::StatusCode,
response::IntoResponse,
routing::get,
Json, Router,
};
use serde::{Deserialize, Serialize};
use serde_json::json;
use std::sync::Arc;
use tracing::info;
use vapora_tracking::storage::TrackingDb;
use crate::api::AppState;
/// Query parameters for filtering tracking entries
#[derive(Debug, Deserialize, Serialize)]
pub struct TrackingFilter {
/// Filter by project path
pub project: Option<String>,
/// Filter by source type
pub source: Option<String>,
/// Limit number of results
pub limit: Option<usize>,
}
/// Initialize tracking routes for the API
///
/// # Arguments
///
/// * `db` - Shared TrackingDb instance
///
/// # Returns
///
/// Router configured with tracking endpoints
pub fn setup_tracking_routes() -> Router {
Router::new()
.route("/tracking/entries", get(list_tracking_entries))
.route("/tracking/summary", get(get_tracking_summary))
.route("/tracking/health", get(tracking_health))
}
/// List all tracking entries with optional filtering
///
/// # Query Parameters
///
/// * `project` - Filter by project path
/// * `source` - Filter by source type
/// * `limit` - Limit results (default: 100)
///
/// # Examples
///
/// `GET /api/v1/tracking/entries?project=/myproject&limit=50`
pub async fn list_tracking_entries(
Query(filter): Query<TrackingFilter>,
) -> Result<Json<serde_json::Value>, StatusCode> {
info!("Getting tracking entries with filter: {:?}", filter);
// TODO: Implement actual query using filter parameters
// For now, return placeholder response
let response = json!({
"items": [],
"count": 0,
"filter": filter
});
Ok(Json(response))
}
/// Get tracking summary statistics
///
/// # Examples
///
/// `GET /api/v1/tracking/summary`
pub async fn get_tracking_summary() -> Result<Json<serde_json::Value>, StatusCode> {
info!("Getting tracking summary");
let summary = json!({
"total_entries": 0,
"change_count": 0,
"todo_count": 0,
"todos_by_status": {
"pending": 0,
"in_progress": 0,
"completed": 0,
"blocked": 0
},
"last_sync": chrono::Utc::now().to_rfc3339()
});
Ok(Json(summary))
}
/// Health check for tracking service
///
/// # Examples
///
/// `GET /api/v1/tracking/health`
pub async fn tracking_health() -> Result<Json<serde_json::Value>, StatusCode> {
info!("Tracking service health check");
let response = json!({
"status": "ok",
"service": "vapora-tracking",
"timestamp": chrono::Utc::now().to_rfc3339()
});
Ok(Json(response))
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_tracking_filter_deserialization() {
let json = r#"{"project": "/test", "limit": 50}"#;
let filter: TrackingFilter = serde_json::from_str(json).unwrap();
assert_eq!(filter.project, Some("/test".to_string()));
assert_eq!(filter.limit, Some(50));
}
}

View File

@ -0,0 +1,156 @@
// vapora-backend: WebSocket handler for real-time workflow updates
// Phase 3: Stream workflow progress to connected clients
use serde::{Deserialize, Serialize};
use std::sync::Arc;
use tokio::sync::broadcast;
use tracing::{debug, error};
#[derive(Clone, Debug, Serialize, Deserialize)]
pub struct WorkflowUpdate {
pub workflow_id: String,
pub status: String,
pub progress: u32,
pub message: String,
pub timestamp: chrono::DateTime<chrono::Utc>,
}
impl WorkflowUpdate {
pub fn new(workflow_id: String, status: String, progress: u32, message: String) -> Self {
Self {
workflow_id,
status,
progress,
message,
timestamp: chrono::Utc::now(),
}
}
}
/// Broadcaster for workflow updates
pub struct WorkflowBroadcaster {
tx: broadcast::Sender<WorkflowUpdate>,
}
impl WorkflowBroadcaster {
pub fn new() -> Self {
let (tx, _) = broadcast::channel(100);
Self { tx }
}
/// Send workflow update to all subscribers
pub fn send_update(&self, update: WorkflowUpdate) {
debug!(
"Broadcasting update for workflow {}: {} ({}%)",
update.workflow_id, update.message, update.progress
);
if let Err(e) = self.tx.send(update) {
error!("Failed to broadcast update: {}", e);
}
}
/// Subscribe to workflow updates
pub fn subscribe(&self) -> broadcast::Receiver<WorkflowUpdate> {
self.tx.subscribe()
}
/// Get subscriber count
pub fn subscriber_count(&self) -> usize {
self.tx.receiver_count()
}
}
impl Default for WorkflowBroadcaster {
fn default() -> Self {
Self::new()
}
}
impl Clone for WorkflowBroadcaster {
fn clone(&self) -> Self {
Self {
tx: self.tx.clone(),
}
}
}
// Note: WebSocket support requires ws feature in axum
// For Phase 4, we focus on the broadcaster infrastructure
// WebSocket handlers would be added when the ws feature is enabled
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_broadcaster_creation() {
let broadcaster = WorkflowBroadcaster::new();
assert_eq!(broadcaster.subscriber_count(), 0);
}
#[test]
fn test_subscribe() {
let broadcaster = WorkflowBroadcaster::new();
let _rx = broadcaster.subscribe();
assert_eq!(broadcaster.subscriber_count(), 1);
}
#[tokio::test]
async fn test_send_update() {
let broadcaster = WorkflowBroadcaster::new();
let mut rx = broadcaster.subscribe();
let update = WorkflowUpdate::new(
"wf-1".to_string(),
"in_progress".to_string(),
50,
"Step 1 completed".to_string(),
);
broadcaster.send_update(update.clone());
let received = rx.recv().await.unwrap();
assert_eq!(received.workflow_id, "wf-1");
assert_eq!(received.progress, 50);
}
#[tokio::test]
async fn test_multiple_subscribers() {
let broadcaster = WorkflowBroadcaster::new();
let mut rx1 = broadcaster.subscribe();
let mut rx2 = broadcaster.subscribe();
let update = WorkflowUpdate::new(
"wf-1".to_string(),
"completed".to_string(),
100,
"All steps completed".to_string(),
);
broadcaster.send_update(update);
let received1 = rx1.recv().await.unwrap();
let received2 = rx2.recv().await.unwrap();
assert_eq!(received1.workflow_id, received2.workflow_id);
assert_eq!(received1.progress, 100);
assert_eq!(received2.progress, 100);
}
#[test]
fn test_update_serialization() {
let update = WorkflowUpdate::new(
"wf-1".to_string(),
"running".to_string(),
75,
"Almost done".to_string(),
);
let json = serde_json::to_string(&update).unwrap();
let deserialized: WorkflowUpdate = serde_json::from_str(&json).unwrap();
assert_eq!(deserialized.workflow_id, "wf-1");
assert_eq!(deserialized.progress, 75);
}
}

View File

@ -0,0 +1,213 @@
// vapora-backend: Workflow API endpoints
// Phase 3: REST API for workflow management
use crate::api::error::ApiError;
use crate::api::state::AppState;
use crate::audit::AuditEntry;
use crate::workflow::{parser::WorkflowParser, Workflow};
use axum::{
extract::{Path, State},
http::StatusCode,
routing::{get, post},
Json, Router,
};
use serde::{Deserialize, Serialize};
use tracing::error;
use vapora_shared::VaporaError;
#[derive(Debug, Serialize, Deserialize)]
pub struct CreateWorkflowRequest {
pub yaml: String,
}
#[derive(Debug, Serialize, Deserialize)]
pub struct WorkflowResponse {
pub workflow: Workflow,
}
#[derive(Debug, Serialize, Deserialize)]
pub struct WorkflowListResponse {
pub workflows: Vec<Workflow>,
}
#[derive(Debug, Serialize, Deserialize)]
pub struct MessageResponse {
pub success: bool,
pub message: String,
}
#[derive(Debug, Serialize, Deserialize)]
pub struct AuditResponse {
pub entries: Vec<AuditEntry>,
}
/// Create workflow router
pub fn workflow_routes() -> Router<AppState> {
Router::new()
.route("/", get(list_workflows).post(create_workflow))
.route("/:id", get(get_workflow))
.route("/:id/execute", post(execute_workflow))
.route("/:id/rollback", post(rollback_workflow))
.route("/:id/audit", get(get_workflow_audit))
}
/// List all workflows
async fn list_workflows(
State(state): State<AppState>,
) -> Result<Json<WorkflowListResponse>, ApiError> {
let workflow_service = state
.workflow_service
.as_ref()
.ok_or_else(|| ApiError(VaporaError::InternalError("Workflow service not available".to_string())))?;
let workflows = workflow_service.list_workflows().await;
Ok(Json(WorkflowListResponse { workflows }))
}
/// Create new workflow from YAML
async fn create_workflow(
State(state): State<AppState>,
Json(req): Json<CreateWorkflowRequest>,
) -> Result<(StatusCode, Json<WorkflowResponse>), ApiError> {
let workflow_service = state
.workflow_service
.as_ref()
.ok_or_else(|| ApiError(VaporaError::InternalError("Workflow service not available".to_string())))?;
// Parse YAML
let workflow = WorkflowParser::parse_string(&req.yaml)
.map_err(|e| ApiError(VaporaError::InvalidInput(format!("Invalid workflow YAML: {}", e))))?;
// Create workflow
let created = workflow_service
.create_workflow(workflow)
.await
.map_err(|e| {
error!("Failed to create workflow: {}", e);
ApiError(VaporaError::InternalError(e.to_string()))
})?;
Ok((
StatusCode::CREATED,
Json(WorkflowResponse { workflow: created }),
))
}
/// Get workflow by ID
async fn get_workflow(
State(state): State<AppState>,
Path(id): Path<String>,
) -> Result<Json<WorkflowResponse>, ApiError> {
let workflow_service = state
.workflow_service
.as_ref()
.ok_or_else(|| ApiError(VaporaError::InternalError("Workflow service not available".to_string())))?;
let workflow = workflow_service.get_workflow(&id).await.map_err(|e| {
error!("Failed to get workflow {}: {}", id, e);
ApiError(VaporaError::NotFound(format!("Workflow {} not found", id)))
})?;
Ok(Json(WorkflowResponse { workflow }))
}
/// Execute workflow
async fn execute_workflow(
State(state): State<AppState>,
Path(id): Path<String>,
) -> Result<Json<WorkflowResponse>, ApiError> {
let workflow_service = state
.workflow_service
.as_ref()
.ok_or_else(|| ApiError(VaporaError::InternalError("Workflow service not available".to_string())))?;
let workflow = workflow_service.execute_workflow(&id).await.map_err(|e| {
error!("Failed to execute workflow {}: {}", id, e);
ApiError(VaporaError::InternalError(e.to_string()))
})?;
Ok(Json(WorkflowResponse { workflow }))
}
/// Rollback failed workflow
async fn rollback_workflow(
State(state): State<AppState>,
Path(id): Path<String>,
) -> Result<Json<MessageResponse>, ApiError> {
let workflow_service = state
.workflow_service
.as_ref()
.ok_or_else(|| ApiError(VaporaError::InternalError("Workflow service not available".to_string())))?;
workflow_service.rollback_workflow(&id).await.map_err(|e| {
error!("Failed to rollback workflow {}: {}", id, e);
ApiError(VaporaError::InternalError(e.to_string()))
})?;
Ok(Json(MessageResponse {
success: true,
message: format!("Workflow {} rolled back successfully", id),
}))
}
/// Get audit trail for workflow
async fn get_workflow_audit(
State(state): State<AppState>,
Path(id): Path<String>,
) -> Result<Json<AuditResponse>, ApiError> {
let workflow_service = state
.workflow_service
.as_ref()
.ok_or_else(|| ApiError(VaporaError::InternalError("Workflow service not available".to_string())))?;
let entries = workflow_service.get_audit_trail(&id).await;
Ok(Json(AuditResponse { entries }))
}
// Note: WebSocket endpoint would be added separately to main router
// pub async fn workflow_websocket_handler(
// ws: WebSocketUpgrade,
// State(state): State<AppState>,
// ) -> axum::response::Response {
// if let Some(workflow_service) = state.workflow_service.as_ref() {
// let broadcaster = workflow_service.broadcaster();
// workflow_ws_handler(ws, broadcaster).await.into_response()
// } else {
// (
// StatusCode::SERVICE_UNAVAILABLE,
// Json(serde_json::json!({"error": "Workflow service not available"})),
// )
// .into_response()
// }
// }
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_create_workflow_request_serialization() {
let req = CreateWorkflowRequest {
yaml: "workflow:\n id: test".to_string(),
};
let json = serde_json::to_string(&req).unwrap();
let deserialized: CreateWorkflowRequest = serde_json::from_str(&json).unwrap();
assert_eq!(deserialized.yaml, req.yaml);
}
#[test]
fn test_message_response() {
let resp = MessageResponse {
success: true,
message: "Test message".to_string(),
};
let json = serde_json::to_string(&resp).unwrap();
assert!(json.contains("success"));
assert!(json.contains("Test message"));
}
}

View File

@ -0,0 +1,234 @@
// vapora-backend: Audit trail system
// Phase 3: Track all workflow events and actions
use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};
use std::sync::Arc;
use tokio::sync::RwLock;
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct AuditEntry {
pub id: String,
pub timestamp: DateTime<Utc>,
pub workflow_id: String,
pub event_type: String,
pub actor: String,
pub details: serde_json::Value,
}
impl AuditEntry {
pub fn new(
workflow_id: String,
event_type: String,
actor: String,
details: serde_json::Value,
) -> Self {
Self {
id: uuid::Uuid::new_v4().to_string(),
timestamp: Utc::now(),
workflow_id,
event_type,
actor,
details,
}
}
}
/// Audit trail maintains history of workflow events
pub struct AuditTrail {
entries: Arc<RwLock<Vec<AuditEntry>>>,
}
impl AuditTrail {
pub fn new() -> Self {
Self {
entries: Arc::new(RwLock::new(Vec::new())),
}
}
/// Log a workflow event
pub async fn log_event(
&self,
workflow_id: String,
event_type: String,
actor: String,
details: serde_json::Value,
) {
let entry = AuditEntry::new(workflow_id, event_type, actor, details);
let mut entries = self.entries.write().await;
entries.push(entry);
}
/// Get audit entries for a workflow
pub async fn get_workflow_audit(&self, workflow_id: &str) -> Vec<AuditEntry> {
let entries = self.entries.read().await;
entries
.iter()
.filter(|e| e.workflow_id == workflow_id)
.cloned()
.collect()
}
/// Get all audit entries
pub async fn get_all_entries(&self) -> Vec<AuditEntry> {
let entries = self.entries.read().await;
entries.clone()
}
/// Get entries by event type
pub async fn get_by_event_type(&self, event_type: &str) -> Vec<AuditEntry> {
let entries = self.entries.read().await;
entries
.iter()
.filter(|e| e.event_type == event_type)
.cloned()
.collect()
}
/// Get entries by actor
pub async fn get_by_actor(&self, actor: &str) -> Vec<AuditEntry> {
let entries = self.entries.read().await;
entries
.iter()
.filter(|e| e.actor == actor)
.cloned()
.collect()
}
/// Clear all entries (for testing)
pub async fn clear(&self) {
let mut entries = self.entries.write().await;
entries.clear();
}
}
impl Default for AuditTrail {
fn default() -> Self {
Self::new()
}
}
/// Event types for audit trail
pub mod events {
pub const WORKFLOW_CREATED: &str = "workflow_created";
pub const WORKFLOW_STARTED: &str = "workflow_started";
pub const WORKFLOW_COMPLETED: &str = "workflow_completed";
pub const WORKFLOW_FAILED: &str = "workflow_failed";
pub const WORKFLOW_ROLLED_BACK: &str = "workflow_rolled_back";
pub const PHASE_STARTED: &str = "phase_started";
pub const PHASE_COMPLETED: &str = "phase_completed";
pub const STEP_STARTED: &str = "step_started";
pub const STEP_COMPLETED: &str = "step_completed";
pub const STEP_FAILED: &str = "step_failed";
}
#[cfg(test)]
mod tests {
use super::*;
#[tokio::test]
async fn test_audit_trail_creation() {
let audit = AuditTrail::new();
assert!(audit.get_all_entries().await.is_empty());
}
#[tokio::test]
async fn test_log_event() {
let audit = AuditTrail::new();
audit
.log_event(
"wf-1".to_string(),
events::WORKFLOW_STARTED.to_string(),
"system".to_string(),
serde_json::json!({"test": "data"}),
)
.await;
let entries = audit.get_all_entries().await;
assert_eq!(entries.len(), 1);
assert_eq!(entries[0].workflow_id, "wf-1");
assert_eq!(entries[0].event_type, events::WORKFLOW_STARTED);
}
#[tokio::test]
async fn test_get_workflow_audit() {
let audit = AuditTrail::new();
audit
.log_event(
"wf-1".to_string(),
events::WORKFLOW_STARTED.to_string(),
"system".to_string(),
serde_json::json!({}),
)
.await;
audit
.log_event(
"wf-2".to_string(),
events::WORKFLOW_STARTED.to_string(),
"system".to_string(),
serde_json::json!({}),
)
.await;
let entries = audit.get_workflow_audit("wf-1").await;
assert_eq!(entries.len(), 1);
assert_eq!(entries[0].workflow_id, "wf-1");
}
#[tokio::test]
async fn test_filter_by_event_type() {
let audit = AuditTrail::new();
audit
.log_event(
"wf-1".to_string(),
events::WORKFLOW_STARTED.to_string(),
"system".to_string(),
serde_json::json!({}),
)
.await;
audit
.log_event(
"wf-1".to_string(),
events::WORKFLOW_COMPLETED.to_string(),
"system".to_string(),
serde_json::json!({}),
)
.await;
let entries = audit.get_by_event_type(events::WORKFLOW_STARTED).await;
assert_eq!(entries.len(), 1);
assert_eq!(entries[0].event_type, events::WORKFLOW_STARTED);
}
#[tokio::test]
async fn test_filter_by_actor() {
let audit = AuditTrail::new();
audit
.log_event(
"wf-1".to_string(),
events::WORKFLOW_STARTED.to_string(),
"user-1".to_string(),
serde_json::json!({}),
)
.await;
audit
.log_event(
"wf-2".to_string(),
events::WORKFLOW_STARTED.to_string(),
"user-2".to_string(),
serde_json::json!({}),
)
.await;
let entries = audit.get_by_actor("user-1").await;
assert_eq!(entries.len(), 1);
assert_eq!(entries[0].actor, "user-1");
}
}

View File

@ -0,0 +1,231 @@
// Configuration module for VAPORA Backend
// Loads config from vapora.toml with environment variable interpolation
use serde::{Deserialize, Serialize};
use std::fs;
use std::path::Path;
use vapora_shared::{Result, VaporaError};
/// Main configuration structure
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Config {
pub server: ServerConfig,
pub database: DatabaseConfig,
pub nats: NatsConfig,
pub auth: AuthConfig,
pub logging: LoggingConfig,
pub metrics: MetricsConfig,
}
/// Server configuration
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ServerConfig {
pub host: String,
pub port: u16,
pub tls: TlsConfig,
}
/// TLS configuration
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct TlsConfig {
pub enabled: bool,
#[serde(default)]
pub cert_path: String,
#[serde(default)]
pub key_path: String,
}
/// Database configuration
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct DatabaseConfig {
pub url: String,
pub max_connections: u32,
}
/// NATS configuration
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct NatsConfig {
pub url: String,
pub stream_name: String,
}
/// Authentication configuration
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct AuthConfig {
pub jwt_secret: String,
pub jwt_expiration_hours: u32,
}
/// Logging configuration
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct LoggingConfig {
pub level: String,
pub json: bool,
}
/// Metrics configuration
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct MetricsConfig {
pub enabled: bool,
pub port: u16,
}
impl Config {
/// Load configuration from a TOML file with environment variable interpolation
pub fn load<P: AsRef<Path>>(path: P) -> Result<Self> {
let path = path.as_ref();
// Read file content
let content = fs::read_to_string(path).map_err(|e| {
VaporaError::ConfigError(format!("Failed to read config file {:?}: {}", path, e))
})?;
// Interpolate environment variables
let interpolated = Self::interpolate_env_vars(&content)?;
// Parse TOML
let config: Config = toml::from_str(&interpolated)?;
// Validate configuration
config.validate()?;
Ok(config)
}
/// Interpolate environment variables in format ${VAR} or ${VAR:-default}
fn interpolate_env_vars(content: &str) -> Result<String> {
let mut result = content.to_string();
let re = regex::Regex::new(r"\$\{([^}:]+)(?::-(.*?))?\}").map_err(|e| {
VaporaError::ConfigError(format!("Invalid regex pattern: {}", e))
})?;
// Process each match
for cap in re.captures_iter(content) {
let full_match = cap.get(0).ok_or_else(|| {
VaporaError::ConfigError("Failed to get regex match".to_string())
})?;
let var_name = cap.get(1).ok_or_else(|| {
VaporaError::ConfigError("Failed to get variable name".to_string())
})?.as_str();
let default_value = cap.get(2).map(|m| m.as_str()).unwrap_or("");
// Get environment variable or use default
let value = std::env::var(var_name).unwrap_or_else(|_| default_value.to_string());
// Replace in result
result = result.replace(full_match.as_str(), &value);
}
Ok(result)
}
/// Validate configuration values
fn validate(&self) -> Result<()> {
// Validate server config
if self.server.host.is_empty() {
return Err(VaporaError::ConfigError("Server host cannot be empty".to_string()));
}
if self.server.port == 0 {
return Err(VaporaError::ConfigError("Server port must be > 0".to_string()));
}
// Validate TLS config if enabled
if self.server.tls.enabled {
if self.server.tls.cert_path.is_empty() {
return Err(VaporaError::ConfigError("TLS cert_path required when TLS is enabled".to_string()));
}
if self.server.tls.key_path.is_empty() {
return Err(VaporaError::ConfigError("TLS key_path required when TLS is enabled".to_string()));
}
}
// Validate database config
if self.database.url.is_empty() {
return Err(VaporaError::ConfigError("Database URL cannot be empty".to_string()));
}
if self.database.max_connections == 0 {
return Err(VaporaError::ConfigError("Database max_connections must be > 0".to_string()));
}
// Validate NATS config
if self.nats.url.is_empty() {
return Err(VaporaError::ConfigError("NATS URL cannot be empty".to_string()));
}
// Validate auth config
if self.auth.jwt_secret.is_empty() {
return Err(VaporaError::ConfigError("JWT secret cannot be empty".to_string()));
}
if self.auth.jwt_expiration_hours == 0 {
return Err(VaporaError::ConfigError("JWT expiration hours must be > 0".to_string()));
}
// Validate logging config
let valid_log_levels = ["trace", "debug", "info", "warn", "error"];
if !valid_log_levels.contains(&self.logging.level.as_str()) {
return Err(VaporaError::ConfigError(
format!("Invalid log level '{}'. Must be one of: {:?}", self.logging.level, valid_log_levels)
));
}
Ok(())
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_env_var_interpolation() {
std::env::set_var("TEST_VAR", "test_value");
let input = "host = \"${TEST_VAR}\"";
let result = Config::interpolate_env_vars(input).unwrap();
assert_eq!(result, "host = \"test_value\"");
}
#[test]
fn test_env_var_with_default() {
let input = "host = \"${NONEXISTENT_VAR:-default_value}\"";
let result = Config::interpolate_env_vars(input).unwrap();
assert_eq!(result, "host = \"default_value\"");
}
#[test]
fn test_validate_empty_host() {
let config = Config {
server: ServerConfig {
host: "".to_string(),
port: 8080,
tls: TlsConfig {
enabled: false,
cert_path: "".to_string(),
key_path: "".to_string(),
},
},
database: DatabaseConfig {
url: "ws://localhost:8000".to_string(),
max_connections: 10,
},
nats: NatsConfig {
url: "nats://localhost:4222".to_string(),
stream_name: "vapora".to_string(),
},
auth: AuthConfig {
jwt_secret: "secret".to_string(),
jwt_expiration_hours: 24,
},
logging: LoggingConfig {
level: "info".to_string(),
json: false,
},
metrics: MetricsConfig {
enabled: true,
port: 9090,
},
};
assert!(config.validate().is_err());
}
}

View File

@ -0,0 +1,8 @@
// Library exports for vapora-backend
// Allows modules to be used in integration tests
pub mod api;
pub mod audit;
pub mod config;
pub mod services;
pub mod workflow;

View File

@ -0,0 +1,156 @@
// vapora-backend: REST API server for VAPORA v1.0
// Phase 1: Complete backend with SurrealDB integration
mod api;
mod config;
mod services;
use anyhow::Result;
use axum::{
routing::{delete, get, post, put},
Extension, Router,
};
use std::sync::Arc;
use std::net::SocketAddr;
use tower_http::cors::{Any, CorsLayer};
use tracing::{info, Level};
use vapora_swarm::{SwarmCoordinator, SwarmMetrics};
use crate::api::AppState;
use crate::config::Config;
use crate::services::{AgentService, ProjectService, TaskService};
#[tokio::main]
async fn main() -> Result<()> {
// Load environment variables from .env file if present
dotenv::dotenv().ok();
// Initialize logging
tracing_subscriber::fmt()
.with_max_level(Level::INFO)
.with_target(false)
.compact()
.init();
info!("VAPORA Backend v{}", env!("CARGO_PKG_VERSION"));
info!("Phase 1: Backend Core + SurrealDB");
// Load configuration
let config = Config::load("config/vapora.toml")?;
info!("Configuration loaded successfully");
// Connect to SurrealDB
info!("Connecting to SurrealDB at {}", config.database.url);
let db = surrealdb::Surreal::new::<surrealdb::engine::remote::ws::Ws>(&config.database.url)
.await?;
// Sign in to database
db.signin(surrealdb::opt::auth::Root {
username: "root",
password: "root",
})
.await?;
// Use namespace and database
db.use_ns("vapora").use_db("main").await?;
info!("Connected to SurrealDB");
// Initialize services
let project_service = ProjectService::new(db.clone());
let task_service = TaskService::new(db.clone());
let agent_service = AgentService::new(db.clone());
// Create application state
let app_state = AppState::new(project_service, task_service, agent_service);
// Create SwarmMetrics for Prometheus monitoring
let metrics = match SwarmMetrics::new() {
Ok(m) => {
info!("SwarmMetrics initialized for Prometheus monitoring");
m
}
Err(e) => {
tracing::warn!("Failed to initialize SwarmMetrics: {:?}, continuing without metrics", e);
// Create new registry and metrics as fallback
SwarmMetrics::new().unwrap()
}
};
// Create SwarmCoordinator for multi-agent coordination
let mut swarm_coordinator = SwarmCoordinator::new();
swarm_coordinator.set_metrics(Arc::clone(&metrics));
let swarm_coordinator = Arc::new(swarm_coordinator);
info!("SwarmCoordinator initialized for Phase 5.2");
// Configure CORS
let cors = CorsLayer::new()
.allow_origin(Any)
.allow_methods(Any)
.allow_headers(Any);
// Build router
let app = Router::new()
// Health endpoint
.route("/health", get(api::health::health))
// Metrics endpoint (Prometheus)
.route("/metrics", get(api::metrics::metrics_handler))
// Project endpoints
.route("/api/v1/projects", get(api::projects::list_projects).post(api::projects::create_project))
.route(
"/api/v1/projects/:id",
get(api::projects::get_project)
.put(api::projects::update_project)
.delete(api::projects::delete_project),
)
.route("/api/v1/projects/:id/features", post(api::projects::add_feature))
.route("/api/v1/projects/:id/features/:feature", delete(api::projects::remove_feature))
.route("/api/v1/projects/:id/archive", post(api::projects::archive_project))
// Task endpoints
.route("/api/v1/tasks", get(api::tasks::list_tasks).post(api::tasks::create_task))
.route(
"/api/v1/tasks/:id",
get(api::tasks::get_task)
.put(api::tasks::update_task)
.delete(api::tasks::delete_task),
)
.route("/api/v1/tasks/:id/reorder", put(api::tasks::reorder_task))
.route("/api/v1/tasks/:id/status", put(api::tasks::update_task_status))
.route("/api/v1/tasks/:id/assign", put(api::tasks::assign_task))
.route("/api/v1/tasks/:id/priority", put(api::tasks::update_priority))
// Agent endpoints (specific routes before parameterized routes)
.route("/api/v1/agents", get(api::agents::list_agents).post(api::agents::register_agent))
.route("/api/v1/agents/available", get(api::agents::get_available_agents))
.route(
"/api/v1/agents/:id",
get(api::agents::get_agent)
.put(api::agents::update_agent)
.delete(api::agents::deregister_agent),
)
.route("/api/v1/agents/:id/health", get(api::agents::check_agent_health))
.route("/api/v1/agents/:id/status", put(api::agents::update_agent_status))
.route("/api/v1/agents/:id/capabilities", post(api::agents::add_capability))
.route("/api/v1/agents/:id/capabilities/:capability", delete(api::agents::remove_capability))
.route("/api/v1/agents/:id/skills", post(api::agents::add_skill))
// Tracking endpoints
.route("/api/v1/tracking/entries", get(api::tracking::list_tracking_entries))
.route("/api/v1/tracking/summary", get(api::tracking::get_tracking_summary))
.route("/api/v1/tracking/health", get(api::tracking::tracking_health))
// Swarm endpoints (Phase 5.2)
.route("/api/v1/swarm/stats", get(api::swarm::swarm_statistics))
.route("/api/v1/swarm/health", get(api::swarm::swarm_health))
// Apply CORS, state, and extensions
.layer(Extension(swarm_coordinator))
.layer(cors)
.with_state(app_state);
// Start server
let addr = SocketAddr::from(([127, 0, 0, 1], config.server.port));
info!("Server listening on {}", addr);
info!("Health check: http://{}/health", addr);
info!("API documentation: http://{}/api/v1", addr);
let listener = tokio::net::TcpListener::bind(addr).await?;
axum::serve(listener, app).await?;
Ok(())
}

View File

@ -0,0 +1,261 @@
// Agent service - Registry and management for the 12 agent roles
use chrono::Utc;
use surrealdb::engine::remote::ws::Client;
use surrealdb::Surreal;
use vapora_shared::models::{Agent, AgentRole, AgentStatus};
use vapora_shared::{Result, VaporaError};
/// Service for managing agents
#[derive(Clone)]
pub struct AgentService {
db: Surreal<Client>,
}
impl AgentService {
/// Create a new AgentService instance
pub fn new(db: Surreal<Client>) -> Self {
Self { db }
}
/// Register a new agent
pub async fn register_agent(&self, mut agent: Agent) -> Result<Agent> {
// Set creation timestamp
agent.created_at = Utc::now();
// Check if agent with this role already exists
let existing = self.get_agent_by_role(&agent.role).await;
if existing.is_ok() {
return Err(VaporaError::InvalidInput(
format!("Agent with role '{:?}' already exists", agent.role)
));
}
// Create agent in database
let created: Option<Agent> = self
.db
.create("agents")
.content(agent)
.await?
.into_iter()
.next();
created.ok_or_else(|| VaporaError::DatabaseError("Failed to register agent".to_string()))
}
/// List all agents
pub async fn list_agents(&self) -> Result<Vec<Agent>> {
let mut response = self
.db
.query("SELECT * FROM agents ORDER BY role ASC")
.await?;
let agents: Vec<Agent> = response.take(0)?;
Ok(agents)
}
/// List agents by status
pub async fn list_agents_by_status(&self, status: AgentStatus) -> Result<Vec<Agent>> {
let status_str = match status {
AgentStatus::Active => "active",
AgentStatus::Inactive => "inactive",
AgentStatus::Updating => "updating",
AgentStatus::Error => "error",
};
let mut response = self
.db
.query("SELECT * FROM agents WHERE status = $status ORDER BY role ASC")
.bind(("status", status_str.to_string()))
.await?;
let agents: Vec<Agent> = response.take(0)?;
Ok(agents)
}
/// Get an agent by ID
pub async fn get_agent(&self, id: &str) -> Result<Agent> {
let agent: Option<Agent> = self.db.select(("agents", id)).await?;
agent.ok_or_else(|| {
VaporaError::NotFound(format!("Agent with id '{}' not found", id))
})
}
/// Get an agent by role
pub async fn get_agent_by_role(&self, role: &AgentRole) -> Result<Agent> {
let role_str = match role {
AgentRole::Architect => "architect",
AgentRole::Developer => "developer",
AgentRole::CodeReviewer => "code_reviewer",
AgentRole::Tester => "tester",
AgentRole::Documenter => "documenter",
AgentRole::Marketer => "marketer",
AgentRole::Presenter => "presenter",
AgentRole::DevOps => "dev_ops",
AgentRole::Monitor => "monitor",
AgentRole::Security => "security",
AgentRole::ProjectManager => "project_manager",
AgentRole::DecisionMaker => "decision_maker",
};
let mut response = self
.db
.query("SELECT * FROM agents WHERE role = $role LIMIT 1")
.bind(("role", role_str.to_string()))
.await?;
let agents: Vec<Agent> = response.take(0)?;
agents.into_iter().next().ok_or_else(|| {
VaporaError::NotFound(format!("Agent with role '{:?}' not found", role))
})
}
/// Update an agent
pub async fn update_agent(&self, id: &str, mut updates: Agent) -> Result<Agent> {
// Verify agent exists
let existing = self.get_agent(id).await?;
// Preserve certain fields
updates.id = existing.id;
updates.created_at = existing.created_at;
// Update in database
let updated: Option<Agent> = self
.db
.update(("agents", id))
.content(updates)
.await?;
updated.ok_or_else(|| VaporaError::DatabaseError("Failed to update agent".to_string()))
}
/// Update agent status
pub async fn update_agent_status(&self, id: &str, status: AgentStatus) -> Result<Agent> {
// Verify agent exists
self.get_agent(id).await?;
let updated: Option<Agent> = self
.db
.update(("agents", id))
.merge(serde_json::json!({
"status": status
}))
.await?;
updated.ok_or_else(|| {
VaporaError::DatabaseError("Failed to update agent status".to_string())
})
}
/// Add capability to an agent
pub async fn add_capability(&self, id: &str, capability: String) -> Result<Agent> {
let mut agent = self.get_agent(id).await?;
// Add capability if not already present
if !agent.capabilities.contains(&capability) {
agent.capabilities.push(capability);
let updated: Option<Agent> = self
.db
.update(("agents", id))
.merge(serde_json::json!({
"capabilities": agent.capabilities
}))
.await?;
return updated.ok_or_else(|| {
VaporaError::DatabaseError("Failed to add capability".to_string())
});
}
Ok(agent)
}
/// Remove capability from an agent
pub async fn remove_capability(&self, id: &str, capability: &str) -> Result<Agent> {
let mut agent = self.get_agent(id).await?;
// Remove capability
agent.capabilities.retain(|c| c != capability);
let updated: Option<Agent> = self
.db
.update(("agents", id))
.merge(serde_json::json!({
"capabilities": agent.capabilities
}))
.await?;
updated.ok_or_else(|| {
VaporaError::DatabaseError("Failed to remove capability".to_string())
})
}
/// Add skill to an agent
pub async fn add_skill(&self, id: &str, skill: String) -> Result<Agent> {
let mut agent = self.get_agent(id).await?;
// Add skill if not already present
if !agent.skills.contains(&skill) {
agent.skills.push(skill);
let updated: Option<Agent> = self
.db
.update(("agents", id))
.merge(serde_json::json!({
"skills": agent.skills
}))
.await?;
return updated.ok_or_else(|| {
VaporaError::DatabaseError("Failed to add skill".to_string())
});
}
Ok(agent)
}
/// Deregister an agent
pub async fn deregister_agent(&self, id: &str) -> Result<()> {
// Verify agent exists
self.get_agent(id).await?;
// Delete from database
let _: Option<Agent> = self.db.delete(("agents", id)).await?;
Ok(())
}
/// Get agent health status (checks if agent is active and responding)
pub async fn check_agent_health(&self, id: &str) -> Result<bool> {
let agent = self.get_agent(id).await?;
Ok(agent.status == AgentStatus::Active)
}
/// Get agents available for task assignment (active agents with capacity)
pub async fn get_available_agents(&self) -> Result<Vec<Agent>> {
let mut response = self
.db
.query("SELECT * FROM agents WHERE status = 'active' ORDER BY role ASC")
.await?;
let agents: Vec<Agent> = response.take(0)?;
Ok(agents)
}
}
#[cfg(test)]
mod tests {
use super::*;
// Note: These are placeholder tests. Real tests require a running SurrealDB instance
// or mocking. For Phase 1, we'll add integration tests that use a test database.
#[test]
fn test_agent_service_creation() {
// This test just verifies the service can be created
// Real database tests will be in integration tests
}
}

View File

@ -0,0 +1,11 @@
// Services module - Business logic layer
pub mod agent_service;
pub mod project_service;
pub mod task_service;
// pub mod workflow_service; // TODO: Phase 4 - Re-enable when workflow module is ready
pub use agent_service::AgentService;
pub use project_service::ProjectService;
pub use task_service::TaskService;
// pub use workflow_service::WorkflowService; // Phase 4

View File

@ -0,0 +1,210 @@
// Project service - CRUD operations for projects
use chrono::Utc;
use surrealdb::engine::remote::ws::Client;
use surrealdb::Surreal;
use vapora_shared::models::{Project, ProjectStatus};
use vapora_shared::{Result, VaporaError};
/// Service for managing projects
#[derive(Clone)]
pub struct ProjectService {
db: Surreal<Client>,
}
impl ProjectService {
/// Create a new ProjectService instance
pub fn new(db: Surreal<Client>) -> Self {
Self { db }
}
/// Create a new project
pub async fn create_project(&self, mut project: Project) -> Result<Project> {
// Set timestamps
let now = Utc::now();
project.created_at = now;
project.updated_at = now;
// Create project in database
let created: Option<Project> = self
.db
.create("projects")
.content(project)
.await?
.into_iter()
.next();
created.ok_or_else(|| VaporaError::DatabaseError("Failed to create project".to_string()))
}
/// List all projects for a tenant
pub async fn list_projects(&self, tenant_id: &str) -> Result<Vec<Project>> {
let mut response = self
.db
.query("SELECT * FROM projects WHERE tenant_id = $tenant_id ORDER BY created_at DESC")
.bind(("tenant_id", tenant_id.to_string()))
.await?;
let projects: Vec<Project> = response.take(0)?;
Ok(projects)
}
/// List projects by status for a tenant
pub async fn list_projects_by_status(
&self,
tenant_id: &str,
status: ProjectStatus,
) -> Result<Vec<Project>> {
let status_str = match status {
ProjectStatus::Active => "active",
ProjectStatus::Archived => "archived",
ProjectStatus::Completed => "completed",
};
let mut response = self
.db
.query("SELECT * FROM projects WHERE tenant_id = $tenant_id AND status = $status ORDER BY created_at DESC")
.bind(("tenant_id", tenant_id.to_string()))
.bind(("status", status_str.to_string()))
.await?;
let projects: Vec<Project> = response.take(0)?;
Ok(projects)
}
/// Get a project by ID
pub async fn get_project(&self, id: &str, tenant_id: &str) -> Result<Project> {
let project: Option<Project> = self
.db
.select(("projects", id))
.await?;
let project = project.ok_or_else(|| {
VaporaError::NotFound(format!("Project with id '{}' not found", id))
})?;
// Verify tenant ownership
if project.tenant_id != tenant_id {
return Err(VaporaError::Unauthorized(
"Project does not belong to this tenant".to_string(),
));
}
Ok(project)
}
/// Update a project
pub async fn update_project(&self, id: &str, tenant_id: &str, mut updates: Project) -> Result<Project> {
// Verify project exists and belongs to tenant
let existing = self.get_project(id, tenant_id).await?;
// Preserve certain fields
updates.id = existing.id;
updates.tenant_id = existing.tenant_id;
updates.created_at = existing.created_at;
updates.updated_at = Utc::now();
// Update in database
let updated: Option<Project> = self
.db
.update(("projects", id))
.content(updates)
.await?;
updated.ok_or_else(|| VaporaError::DatabaseError("Failed to update project".to_string()))
}
/// Delete a project
pub async fn delete_project(&self, id: &str, tenant_id: &str) -> Result<()> {
// Verify project exists and belongs to tenant
self.get_project(id, tenant_id).await?;
// Delete from database
let _: Option<Project> = self.db.delete(("projects", id)).await?;
Ok(())
}
/// Add a feature to a project
pub async fn add_feature(&self, id: &str, tenant_id: &str, feature: String) -> Result<Project> {
let mut project = self.get_project(id, tenant_id).await?;
// Add feature if not already present
if !project.features.contains(&feature) {
project.features.push(feature);
project.updated_at = Utc::now();
let updated: Option<Project> = self
.db
.update(("projects", id))
.merge(serde_json::json!({
"features": project.features,
"updated_at": project.updated_at
}))
.await?;
return updated.ok_or_else(|| {
VaporaError::DatabaseError("Failed to add feature".to_string())
});
}
Ok(project)
}
/// Remove a feature from a project
pub async fn remove_feature(&self, id: &str, tenant_id: &str, feature: &str) -> Result<Project> {
let mut project = self.get_project(id, tenant_id).await?;
// Remove feature
project.features.retain(|f| f != feature);
project.updated_at = Utc::now();
let updated: Option<Project> = self
.db
.update(("projects", id))
.merge(serde_json::json!({
"features": project.features,
"updated_at": project.updated_at
}))
.await?;
updated.ok_or_else(|| {
VaporaError::DatabaseError("Failed to remove feature".to_string())
})
}
/// Archive a project (set status to archived)
pub async fn archive_project(&self, id: &str, tenant_id: &str) -> Result<Project> {
let mut project = self.get_project(id, tenant_id).await?;
project.status = ProjectStatus::Archived;
project.updated_at = Utc::now();
let updated: Option<Project> = self
.db
.update(("projects", id))
.merge(serde_json::json!({
"status": project.status,
"updated_at": project.updated_at
}))
.await?;
updated.ok_or_else(|| {
VaporaError::DatabaseError("Failed to archive project".to_string())
})
}
}
#[cfg(test)]
mod tests {
use super::*;
use vapora_shared::models::ProjectStatus;
// Note: These are placeholder tests. Real tests require a running SurrealDB instance
// or mocking. For Phase 1, we'll add integration tests that use a test database.
#[test]
fn test_project_service_creation() {
// This test just verifies the service can be created
// Real database tests will be in integration tests
}
}

View File

@ -0,0 +1,282 @@
// Task service - CRUD operations and Kanban management for tasks
use chrono::Utc;
use surrealdb::engine::remote::ws::Client;
use surrealdb::Surreal;
use vapora_shared::models::{Task, TaskStatus, TaskPriority};
use vapora_shared::{Result, VaporaError};
/// Service for managing tasks
#[derive(Clone)]
pub struct TaskService {
db: Surreal<Client>,
}
impl TaskService {
/// Create a new TaskService instance
pub fn new(db: Surreal<Client>) -> Self {
Self { db }
}
/// Create a new task
pub async fn create_task(&self, mut task: Task) -> Result<Task> {
// Set timestamps
let now = Utc::now();
task.created_at = now;
task.updated_at = now;
// If task_order is not set, get the max order for this project/status and add 1
if task.task_order == 0 {
let max_order = self.get_max_task_order(&task.project_id, &task.status).await?;
task.task_order = max_order + 1;
}
// Create task in database
let created: Option<Task> = self
.db
.create("tasks")
.content(task)
.await?
.into_iter()
.next();
created.ok_or_else(|| VaporaError::DatabaseError("Failed to create task".to_string()))
}
/// List all tasks for a project
pub async fn list_tasks(&self, project_id: &str, tenant_id: &str) -> Result<Vec<Task>> {
let mut response = self
.db
.query("SELECT * FROM tasks WHERE project_id = $project_id AND tenant_id = $tenant_id ORDER BY task_order ASC")
.bind(("project_id", project_id.to_string()))
.bind(("tenant_id", tenant_id.to_string()))
.await?;
let tasks: Vec<Task> = response.take(0)?;
Ok(tasks)
}
/// List tasks by status (for Kanban columns)
pub async fn list_tasks_by_status(
&self,
project_id: &str,
tenant_id: &str,
status: TaskStatus,
) -> Result<Vec<Task>> {
let status_str = match status {
TaskStatus::Todo => "todo",
TaskStatus::Doing => "doing",
TaskStatus::Review => "review",
TaskStatus::Done => "done",
};
let mut response = self
.db
.query("SELECT * FROM tasks WHERE project_id = $project_id AND tenant_id = $tenant_id AND status = $status ORDER BY task_order ASC")
.bind(("project_id", project_id.to_string()))
.bind(("tenant_id", tenant_id.to_string()))
.bind(("status", status_str.to_string()))
.await?;
let tasks: Vec<Task> = response.take(0)?;
Ok(tasks)
}
/// List tasks by assignee
pub async fn list_tasks_by_assignee(
&self,
project_id: &str,
tenant_id: &str,
assignee: &str,
) -> Result<Vec<Task>> {
let mut response = self
.db
.query("SELECT * FROM tasks WHERE project_id = $project_id AND tenant_id = $tenant_id AND assignee = $assignee ORDER BY priority DESC, task_order ASC")
.bind(("project_id", project_id.to_string()))
.bind(("tenant_id", tenant_id.to_string()))
.bind(("assignee", assignee.to_string()))
.await?;
let tasks: Vec<Task> = response.take(0)?;
Ok(tasks)
}
/// Get a task by ID
pub async fn get_task(&self, id: &str, tenant_id: &str) -> Result<Task> {
let task: Option<Task> = self.db.select(("tasks", id)).await?;
let task = task.ok_or_else(|| {
VaporaError::NotFound(format!("Task with id '{}' not found", id))
})?;
// Verify tenant ownership
if task.tenant_id != tenant_id {
return Err(VaporaError::Unauthorized(
"Task does not belong to this tenant".to_string(),
));
}
Ok(task)
}
/// Update a task
pub async fn update_task(&self, id: &str, tenant_id: &str, mut updates: Task) -> Result<Task> {
// Verify task exists and belongs to tenant
let existing = self.get_task(id, tenant_id).await?;
// Preserve certain fields
updates.id = existing.id;
updates.tenant_id = existing.tenant_id;
updates.created_at = existing.created_at;
updates.updated_at = Utc::now();
// Update in database
let updated: Option<Task> = self
.db
.update(("tasks", id))
.content(updates)
.await?;
updated.ok_or_else(|| VaporaError::DatabaseError("Failed to update task".to_string()))
}
/// Update task status (for Kanban column changes)
pub async fn update_task_status(&self, id: &str, tenant_id: &str, status: TaskStatus) -> Result<Task> {
let task = self.get_task(id, tenant_id).await?;
// Get max order for new status
let max_order = self.get_max_task_order(&task.project_id, &status).await?;
let updated: Option<Task> = self
.db
.update(("tasks", id))
.merge(serde_json::json!({
"status": status,
"task_order": max_order + 1,
"updated_at": Utc::now()
}))
.await?;
updated.ok_or_else(|| {
VaporaError::DatabaseError("Failed to update task status".to_string())
})
}
/// Reorder task (for drag & drop in Kanban)
pub async fn reorder_task(
&self,
id: &str,
tenant_id: &str,
new_order: i32,
new_status: Option<TaskStatus>,
) -> Result<Task> {
let mut task = self.get_task(id, tenant_id).await?;
// Update status if provided
if let Some(status) = new_status {
task.status = status;
}
task.task_order = new_order;
task.updated_at = Utc::now();
let updated: Option<Task> = self
.db
.update(("tasks", id))
.merge(serde_json::json!({
"status": task.status,
"task_order": task.task_order,
"updated_at": task.updated_at
}))
.await?;
updated.ok_or_else(|| {
VaporaError::DatabaseError("Failed to reorder task".to_string())
})
}
/// Assign task to agent/user
pub async fn assign_task(&self, id: &str, tenant_id: &str, assignee: String) -> Result<Task> {
let mut task = self.get_task(id, tenant_id).await?;
task.assignee = assignee;
task.updated_at = Utc::now();
let updated: Option<Task> = self
.db
.update(("tasks", id))
.merge(serde_json::json!({
"assignee": task.assignee,
"updated_at": task.updated_at
}))
.await?;
updated.ok_or_else(|| {
VaporaError::DatabaseError("Failed to assign task".to_string())
})
}
/// Update task priority
pub async fn update_priority(&self, id: &str, tenant_id: &str, priority: TaskPriority) -> Result<Task> {
let mut task = self.get_task(id, tenant_id).await?;
task.priority = priority;
task.updated_at = Utc::now();
let updated: Option<Task> = self
.db
.update(("tasks", id))
.merge(serde_json::json!({
"priority": task.priority,
"updated_at": task.updated_at
}))
.await?;
updated.ok_or_else(|| {
VaporaError::DatabaseError("Failed to update priority".to_string())
})
}
/// Delete a task
pub async fn delete_task(&self, id: &str, tenant_id: &str) -> Result<()> {
// Verify task exists and belongs to tenant
self.get_task(id, tenant_id).await?;
// Delete from database
let _: Option<Task> = self.db.delete(("tasks", id)).await?;
Ok(())
}
/// Get maximum task order for a project/status combination
async fn get_max_task_order(&self, project_id: &str, status: &TaskStatus) -> Result<i32> {
let status_str = match status {
TaskStatus::Todo => "todo",
TaskStatus::Doing => "doing",
TaskStatus::Review => "review",
TaskStatus::Done => "done",
};
let mut response = self
.db
.query("SELECT VALUE task_order FROM tasks WHERE project_id = $project_id AND status = $status ORDER BY task_order DESC LIMIT 1")
.bind(("project_id", project_id.to_string()))
.bind(("status", status_str.to_string()))
.await?;
let orders: Vec<i32> = response.take(0)?;
Ok(orders.first().copied().unwrap_or(0))
}
}
#[cfg(test)]
mod tests {
use super::*;
// Note: These are placeholder tests. Real tests require a running SurrealDB instance
// or mocking. For Phase 1, we'll add integration tests that use a test database.
#[test]
fn test_task_service_creation() {
// This test just verifies the service can be created
// Real database tests will be in integration tests
}
}

View File

@ -0,0 +1,297 @@
// vapora-backend: Workflow service
// Phase 3: Service layer for workflow management
use crate::api::websocket::{WorkflowBroadcaster, WorkflowUpdate};
use crate::audit::{events, AuditEntry, AuditTrail};
use crate::workflow::{EngineError, Workflow, WorkflowEngine};
use std::sync::Arc;
use thiserror::Error;
use tracing::{error, info};
#[derive(Debug, Error)]
pub enum WorkflowServiceError {
#[error("Engine error: {0}")]
EngineError(#[from] EngineError),
#[error("Workflow not found: {0}")]
NotFound(String),
#[error("Invalid operation: {0}")]
InvalidOperation(String),
}
/// Workflow service provides high-level workflow operations
pub struct WorkflowService {
engine: Arc<WorkflowEngine>,
broadcaster: Arc<WorkflowBroadcaster>,
audit: Arc<AuditTrail>,
}
impl WorkflowService {
pub fn new(
engine: Arc<WorkflowEngine>,
broadcaster: Arc<WorkflowBroadcaster>,
audit: Arc<AuditTrail>,
) -> Self {
Self {
engine,
broadcaster,
audit,
}
}
/// Create and register a new workflow
pub async fn create_workflow(&self, workflow: Workflow) -> Result<Workflow, WorkflowServiceError> {
let workflow_id = workflow.id.clone();
let title = workflow.title.clone();
// Register with engine
self.engine.register_workflow(workflow.clone()).await?;
// Audit event
self.audit
.log_event(
workflow_id.clone(),
events::WORKFLOW_CREATED.to_string(),
"system".to_string(),
serde_json::json!({
"title": title,
"phases": workflow.phases.len(),
}),
)
.await;
// Broadcast update
self.broadcaster.send_update(WorkflowUpdate::new(
workflow_id.clone(),
"created".to_string(),
0,
format!("Workflow '{}' created", title),
));
info!("Created workflow: {} ({})", workflow_id, title);
Ok(workflow)
}
/// Execute a workflow
pub async fn execute_workflow(&self, workflow_id: &str) -> Result<Workflow, WorkflowServiceError> {
info!("Executing workflow: {}", workflow_id);
// Broadcast start
self.broadcaster.send_update(WorkflowUpdate::new(
workflow_id.to_string(),
"starting".to_string(),
0,
"Workflow execution started".to_string(),
));
// Audit event
self.audit
.log_event(
workflow_id.to_string(),
events::WORKFLOW_STARTED.to_string(),
"system".to_string(),
serde_json::json!({}),
)
.await;
// Execute workflow
let result = self.engine.execute_workflow(workflow_id).await;
match result {
Ok(workflow) => {
let status = format!("{:?}", workflow.status);
let progress = workflow.progress_percent();
// Broadcast completion
self.broadcaster.send_update(WorkflowUpdate::new(
workflow_id.to_string(),
status.clone(),
progress,
"Workflow execution completed".to_string(),
));
// Audit event
self.audit
.log_event(
workflow_id.to_string(),
events::WORKFLOW_COMPLETED.to_string(),
"system".to_string(),
serde_json::json!({
"status": status,
"progress": progress,
}),
)
.await;
info!("Workflow {} completed with status: {}", workflow_id, status);
Ok(workflow)
}
Err(e) => {
let error_msg = format!("{}", e);
// Broadcast failure
self.broadcaster.send_update(WorkflowUpdate::new(
workflow_id.to_string(),
"failed".to_string(),
0,
format!("Workflow execution failed: {}", error_msg),
));
// Audit event
self.audit
.log_event(
workflow_id.to_string(),
events::WORKFLOW_FAILED.to_string(),
"system".to_string(),
serde_json::json!({
"error": error_msg,
}),
)
.await;
error!("Workflow {} failed: {}", workflow_id, error_msg);
Err(WorkflowServiceError::from(e))
}
}
}
/// Get workflow by ID
pub async fn get_workflow(&self, workflow_id: &str) -> Result<Workflow, WorkflowServiceError> {
self.engine
.get_workflow(workflow_id)
.await
.ok_or_else(|| WorkflowServiceError::NotFound(workflow_id.to_string()))
}
/// List all workflows
pub async fn list_workflows(&self) -> Vec<Workflow> {
self.engine.list_workflows().await
}
/// Rollback a failed workflow
pub async fn rollback_workflow(&self, workflow_id: &str) -> Result<(), WorkflowServiceError> {
info!("Rolling back workflow: {}", workflow_id);
self.engine.rollback_workflow(workflow_id).await?;
// Broadcast rollback
self.broadcaster.send_update(WorkflowUpdate::new(
workflow_id.to_string(),
"rolled_back".to_string(),
0,
"Workflow rolled back".to_string(),
));
// Audit event
self.audit
.log_event(
workflow_id.to_string(),
events::WORKFLOW_ROLLED_BACK.to_string(),
"system".to_string(),
serde_json::json!({}),
)
.await;
Ok(())
}
/// Get audit trail for workflow
pub async fn get_audit_trail(&self, workflow_id: &str) -> Vec<AuditEntry> {
self.audit.get_workflow_audit(workflow_id).await
}
/// Get broadcaster reference
pub fn broadcaster(&self) -> Arc<WorkflowBroadcaster> {
Arc::clone(&self.broadcaster)
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::workflow::{executor::StepExecutor, state::{Phase, StepStatus, WorkflowStep}};
use vapora_agents::{coordinator::AgentCoordinator, registry::AgentRegistry};
fn create_test_workflow() -> Workflow {
Workflow::new(
"test-wf-1".to_string(),
"Test Workflow".to_string(),
vec![Phase {
id: "phase1".to_string(),
name: "Phase 1".to_string(),
status: StepStatus::Pending,
parallel: false,
estimated_hours: 1.0,
steps: vec![WorkflowStep {
id: "step1".to_string(),
name: "Step 1".to_string(),
agent_role: "developer".to_string(),
status: StepStatus::Pending,
depends_on: vec![],
can_parallelize: true,
started_at: None,
completed_at: None,
result: None,
error: None,
}],
}],
)
}
#[tokio::test]
async fn test_service_creation() {
let registry = Arc::new(AgentRegistry::new(5));
let coordinator = Arc::new(AgentCoordinator::new(registry));
let executor = StepExecutor::new(coordinator);
let engine = Arc::new(WorkflowEngine::new(executor));
let broadcaster = Arc::new(WorkflowBroadcaster::new());
let audit = Arc::new(AuditTrail::new());
let service = WorkflowService::new(engine, broadcaster, audit);
assert!(service.list_workflows().await.is_empty());
}
#[tokio::test]
async fn test_create_workflow() {
let registry = Arc::new(AgentRegistry::new(5));
let coordinator = Arc::new(AgentCoordinator::new(registry));
let executor = StepExecutor::new(coordinator);
let engine = Arc::new(WorkflowEngine::new(executor));
let broadcaster = Arc::new(WorkflowBroadcaster::new());
let audit = Arc::new(AuditTrail::new());
let service = WorkflowService::new(engine, broadcaster, audit);
let workflow = create_test_workflow();
let id = workflow.id.clone();
let result = service.create_workflow(workflow).await;
assert!(result.is_ok());
let retrieved = service.get_workflow(&id).await;
assert!(retrieved.is_ok());
assert_eq!(retrieved.unwrap().id, id);
}
#[tokio::test]
async fn test_audit_trail_logging() {
let registry = Arc::new(AgentRegistry::new(5));
let coordinator = Arc::new(AgentCoordinator::new(registry));
let executor = StepExecutor::new(coordinator);
let engine = Arc::new(WorkflowEngine::new(executor));
let broadcaster = Arc::new(WorkflowBroadcaster::new());
let audit = Arc::new(AuditTrail::new());
let service = WorkflowService::new(engine, broadcaster, audit);
let workflow = create_test_workflow();
let id = workflow.id.clone();
service.create_workflow(workflow).await.unwrap();
let audit_entries = service.get_audit_trail(&id).await;
assert!(!audit_entries.is_empty());
assert_eq!(audit_entries[0].event_type, events::WORKFLOW_CREATED);
}
}

View File

@ -0,0 +1,472 @@
// vapora-backend: Workflow engine
// Phase 3: Orchestrate workflow execution with state management
use crate::workflow::executor::{ExecutorError, StepExecutor};
use crate::workflow::scheduler::{Scheduler, SchedulerError};
use crate::workflow::state::{StepStatus, Workflow, WorkflowStatus};
use std::collections::HashMap;
use std::sync::Arc;
use thiserror::Error;
use tokio::sync::RwLock;
use tracing::{debug, error, info, warn};
use vapora_agents::coordinator::AgentCoordinator;
use vapora_agents::config::{AgentConfig, RegistryConfig};
use vapora_agents::registry::AgentRegistry;
#[derive(Debug, Error)]
pub enum EngineError {
#[error("Workflow not found: {0}")]
WorkflowNotFound(String),
#[error("Workflow state error: {0}")]
StateError(String),
#[error("Scheduler error: {0}")]
SchedulerError(#[from] SchedulerError),
#[error("Executor error: {0}")]
ExecutorError(#[from] ExecutorError),
#[error("Workflow already running: {0}")]
AlreadyRunning(String),
#[error("Workflow validation failed: {0}")]
ValidationError(String),
}
/// Workflow engine orchestrates workflow execution
pub struct WorkflowEngine {
workflows: Arc<RwLock<HashMap<String, Workflow>>>,
executor: Arc<StepExecutor>,
}
impl WorkflowEngine {
/// Create new workflow engine
pub fn new(executor: StepExecutor) -> Self {
Self {
workflows: Arc::new(RwLock::new(HashMap::new())),
executor: Arc::new(executor),
}
}
/// Register a workflow
pub async fn register_workflow(&self, workflow: Workflow) -> Result<(), EngineError> {
let mut workflows = self.workflows.write().await;
if workflows.contains_key(&workflow.id) {
return Err(EngineError::AlreadyRunning(workflow.id.clone()));
}
// Validate workflow structure
for phase in &workflow.phases {
Scheduler::validate_workflow(&phase.steps)?;
}
info!("Registered workflow: {} ({})", workflow.id, workflow.title);
workflows.insert(workflow.id.clone(), workflow);
Ok(())
}
/// Execute a workflow
pub async fn execute_workflow(&self, workflow_id: &str) -> Result<Workflow, EngineError> {
info!("Starting workflow execution: {}", workflow_id);
// Get and transition workflow to planning
{
let mut workflows = self.workflows.write().await;
let workflow = workflows
.get_mut(workflow_id)
.ok_or_else(|| EngineError::WorkflowNotFound(workflow_id.to_string()))?;
workflow
.transition(WorkflowStatus::Planning)
.map_err(EngineError::StateError)?;
}
// Plan execution
self.plan_workflow(workflow_id).await?;
// Transition to in progress
{
let mut workflows = self.workflows.write().await;
let workflow = workflows.get_mut(workflow_id).unwrap();
workflow
.transition(WorkflowStatus::InProgress)
.map_err(EngineError::StateError)?;
}
// Execute phases
self.execute_phases(workflow_id).await?;
// Determine final state
let final_workflow = {
let mut workflows = self.workflows.write().await;
let workflow = workflows.get_mut(workflow_id).unwrap();
if workflow.all_steps_completed() {
workflow
.transition(WorkflowStatus::Completed)
.map_err(EngineError::StateError)?;
info!("Workflow {} completed successfully", workflow_id);
} else if workflow.any_step_failed() {
workflow
.transition(WorkflowStatus::Failed)
.map_err(EngineError::StateError)?;
error!("Workflow {} failed", workflow_id);
}
workflow.clone()
};
Ok(final_workflow)
}
/// Plan workflow execution
async fn plan_workflow(&self, workflow_id: &str) -> Result<(), EngineError> {
debug!("Planning workflow: {}", workflow_id);
let workflows = self.workflows.read().await;
let workflow = workflows
.get(workflow_id)
.ok_or_else(|| EngineError::WorkflowNotFound(workflow_id.to_string()))?;
// Validate all phases
for phase in &workflow.phases {
let execution_order = Scheduler::resolve_dependencies(&phase.steps)?;
debug!(
"Phase {} has {} execution levels",
phase.id,
execution_order.len()
);
}
Ok(())
}
/// Execute all phases in workflow
async fn execute_phases(&self, workflow_id: &str) -> Result<(), EngineError> {
let phase_count = {
let workflows = self.workflows.read().await;
let workflow = workflows.get(workflow_id).unwrap();
workflow.phases.len()
};
for phase_idx in 0..phase_count {
self.execute_phase(workflow_id, phase_idx).await?;
// Check if phase failed
let phase_failed = {
let workflows = self.workflows.read().await;
let workflow = workflows.get(workflow_id).unwrap();
workflow.phases[phase_idx]
.steps
.iter()
.any(|s| matches!(s.status, StepStatus::Failed))
};
if phase_failed {
warn!("Phase {} failed, stopping workflow", phase_idx);
break;
}
}
Ok(())
}
/// Execute a single phase
async fn execute_phase(
&self,
workflow_id: &str,
phase_idx: usize,
) -> Result<(), EngineError> {
let (phase_id, is_parallel) = {
let workflows = self.workflows.read().await;
let workflow = workflows.get(workflow_id).unwrap();
let phase = &workflow.phases[phase_idx];
(phase.id.clone(), phase.parallel)
};
info!("Executing phase: {} (parallel: {})", phase_id, is_parallel);
if is_parallel {
self.execute_phase_parallel(workflow_id, phase_idx).await?;
} else {
self.execute_phase_sequential(workflow_id, phase_idx).await?;
}
Ok(())
}
/// Execute phase with parallel steps
async fn execute_phase_parallel(
&self,
workflow_id: &str,
phase_idx: usize,
) -> Result<(), EngineError> {
// Get execution levels
let execution_levels = {
let workflows = self.workflows.read().await;
let workflow = workflows.get(workflow_id).unwrap();
let phase = &workflow.phases[phase_idx];
Scheduler::resolve_dependencies(&phase.steps)?
};
// Execute each level
for level_steps in execution_levels {
debug!("Executing parallel level with {} steps", level_steps.len());
// Execute all steps in this level in parallel
for step_id in level_steps {
// Get step index
let step_idx = {
let workflows = self.workflows.read().await;
let workflow = workflows.get(workflow_id).unwrap();
let phase = &workflow.phases[phase_idx];
phase.steps.iter().position(|s| s.id == step_id)
};
if let Some(idx) = step_idx {
self.execute_step(workflow_id, phase_idx, idx).await?;
}
}
}
Ok(())
}
/// Execute phase with sequential steps
async fn execute_phase_sequential(
&self,
workflow_id: &str,
phase_idx: usize,
) -> Result<(), EngineError> {
// Get execution order
let execution_order = {
let workflows = self.workflows.read().await;
let workflow = workflows.get(workflow_id).unwrap();
let phase = &workflow.phases[phase_idx];
Scheduler::get_sequential_order(&phase.steps)?
};
// Execute steps in order
for step_id in execution_order {
let step_idx = {
let workflows = self.workflows.read().await;
let workflow = workflows.get(workflow_id).unwrap();
let phase = &workflow.phases[phase_idx];
phase.steps.iter().position(|s| s.id == step_id)
};
if let Some(idx) = step_idx {
self.execute_step(workflow_id, phase_idx, idx).await?;
// Check if step failed
let step_failed = {
let workflows = self.workflows.read().await;
let workflow = workflows.get(workflow_id).unwrap();
matches!(
workflow.phases[phase_idx].steps[idx].status,
StepStatus::Failed
)
};
if step_failed {
return Err(EngineError::ExecutorError(
ExecutorError::ExecutionFailed("Step failed".to_string()),
));
}
}
}
Ok(())
}
/// Execute a single step
async fn execute_step(
&self,
workflow_id: &str,
phase_idx: usize,
step_idx: usize,
) -> Result<(), EngineError> {
let mut step = {
let workflows = self.workflows.read().await;
let workflow = workflows.get(workflow_id).unwrap();
workflow.phases[phase_idx].steps[step_idx].clone()
};
// Execute step
self.executor.execute_step(&mut step).await?;
// Update workflow with step results
{
let mut workflows = self.workflows.write().await;
let workflow = workflows.get_mut(workflow_id).unwrap();
workflow.phases[phase_idx].steps[step_idx] = step;
}
Ok(())
}
/// Get workflow by ID
pub async fn get_workflow(&self, workflow_id: &str) -> Option<Workflow> {
let workflows = self.workflows.read().await;
workflows.get(workflow_id).cloned()
}
/// List all workflows
pub async fn list_workflows(&self) -> Vec<Workflow> {
let workflows = self.workflows.read().await;
workflows.values().cloned().collect()
}
/// Rollback a failed workflow
pub async fn rollback_workflow(&self, workflow_id: &str) -> Result<(), EngineError> {
let mut workflows = self.workflows.write().await;
let workflow = workflows
.get_mut(workflow_id)
.ok_or_else(|| EngineError::WorkflowNotFound(workflow_id.to_string()))?;
if !matches!(workflow.status, WorkflowStatus::Failed) {
return Err(EngineError::StateError(
"Can only rollback failed workflows".to_string(),
));
}
workflow
.transition(WorkflowStatus::RolledBack)
.map_err(EngineError::StateError)?;
info!("Workflow {} rolled back", workflow_id);
Ok(())
}
/// Remove workflow from engine
pub async fn remove_workflow(&self, workflow_id: &str) -> Result<(), EngineError> {
let mut workflows = self.workflows.write().await;
workflows
.remove(workflow_id)
.ok_or_else(|| EngineError::WorkflowNotFound(workflow_id.to_string()))?;
info!("Removed workflow: {}", workflow_id);
Ok(())
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::workflow::state::{Phase, WorkflowStep};
use vapora_agents::coordinator::AgentCoordinator;
use vapora_agents::registry::AgentRegistry;
fn create_test_workflow() -> Workflow {
Workflow::new(
"test-wf-1".to_string(),
"Test Workflow".to_string(),
vec![Phase {
id: "phase1".to_string(),
name: "Phase 1".to_string(),
status: StepStatus::Pending,
parallel: false,
estimated_hours: 1.0,
steps: vec![WorkflowStep {
id: "step1".to_string(),
name: "Step 1".to_string(),
agent_role: "developer".to_string(),
status: StepStatus::Pending,
depends_on: vec![],
can_parallelize: true,
started_at: None,
completed_at: None,
result: None,
error: None,
}],
}],
)
}
#[tokio::test]
async fn test_engine_creation() {
let registry = Arc::new(AgentRegistry::new(5));
let config = AgentConfig {
registry: RegistryConfig {
max_agents_per_role: 5,
health_check_interval: 30,
agent_timeout: 300,
},
agents: vec![],
};
let coordinator = Arc::new(AgentCoordinator::new(config, registry).await.unwrap());
let executor = StepExecutor::new(coordinator);
let engine = WorkflowEngine::new(executor);
assert!(engine.list_workflows().await.is_empty());
}
#[tokio::test]
async fn test_register_workflow() {
let registry = Arc::new(AgentRegistry::new(5));
let config = AgentConfig {
registry: RegistryConfig {
max_agents_per_role: 5,
health_check_interval: 30,
agent_timeout: 300,
},
agents: vec![],
};
let coordinator = Arc::new(AgentCoordinator::new(config, registry).await.unwrap());
let executor = StepExecutor::new(coordinator);
let engine = WorkflowEngine::new(executor);
let workflow = create_test_workflow();
let result = engine.register_workflow(workflow).await;
assert!(result.is_ok());
assert_eq!(engine.list_workflows().await.len(), 1);
}
#[tokio::test]
async fn test_workflow_not_found() {
let registry = Arc::new(AgentRegistry::new(5));
let config = AgentConfig {
registry: RegistryConfig {
max_agents_per_role: 5,
health_check_interval: 30,
agent_timeout: 300,
},
agents: vec![],
};
let coordinator = Arc::new(AgentCoordinator::new(config, registry).await.unwrap());
let executor = StepExecutor::new(coordinator);
let engine = WorkflowEngine::new(executor);
let result = engine.get_workflow("nonexistent").await;
assert!(result.is_none());
}
#[tokio::test]
async fn test_rollback_requires_failed_state() {
let registry = Arc::new(AgentRegistry::new(5));
let config = AgentConfig {
registry: RegistryConfig {
max_agents_per_role: 5,
health_check_interval: 30,
agent_timeout: 300,
},
agents: vec![],
};
let coordinator = Arc::new(AgentCoordinator::new(config, registry).await.unwrap());
let executor = StepExecutor::new(coordinator);
let engine = WorkflowEngine::new(executor);
let workflow = create_test_workflow();
let id = workflow.id.clone();
engine.register_workflow(workflow).await.unwrap();
let result = engine.rollback_workflow(&id).await;
assert!(result.is_err());
}
}

View File

@ -0,0 +1,282 @@
// vapora-backend: Workflow step executor
// Phase 3: Execute workflow steps with agent coordination
use crate::workflow::state::{StepStatus, WorkflowStep};
use chrono::Utc;
use std::sync::Arc;
use thiserror::Error;
use tracing::{debug, error, info};
use vapora_agents::coordinator::AgentCoordinator;
use vapora_agents::config::{AgentConfig, RegistryConfig};
#[derive(Debug, Error)]
pub enum ExecutorError {
#[error("Agent coordinator error: {0}")]
CoordinatorError(String),
#[error("Step execution failed: {0}")]
ExecutionFailed(String),
#[error("Step already running: {0}")]
AlreadyRunning(String),
#[error("Invalid step state: expected {expected}, got {actual}")]
InvalidState { expected: String, actual: String },
}
/// Step executor handles execution of individual workflow steps
pub struct StepExecutor {
coordinator: Arc<AgentCoordinator>,
}
impl StepExecutor {
/// Create new step executor
pub fn new(coordinator: Arc<AgentCoordinator>) -> Self {
Self { coordinator }
}
/// Execute a single step
pub async fn execute_step(&self, step: &mut WorkflowStep) -> Result<(), ExecutorError> {
// Validate step is pending
if !matches!(step.status, StepStatus::Pending) {
return Err(ExecutorError::InvalidState {
expected: "Pending".to_string(),
actual: format!("{:?}", step.status),
});
}
info!("Executing step: {} ({})", step.id, step.name);
// Mark step as running
step.status = StepStatus::Running;
step.started_at = Some(Utc::now());
// Assign task to agent
let result = self
.coordinator
.assign_task(
&step.agent_role,
step.name.clone(),
format!("Workflow step: {}", step.id),
"{}".to_string(),
80, // Default priority
)
.await;
match result {
Ok(task_id) => {
step.status = StepStatus::Completed;
step.result = Some(format!("Task {} assigned successfully", task_id));
step.completed_at = Some(Utc::now());
info!("Step {} completed successfully", step.id);
Ok(())
}
Err(e) => {
step.status = StepStatus::Failed;
step.error = Some(e.to_string());
step.completed_at = Some(Utc::now());
error!("Step {} failed: {}", step.id, e);
Err(ExecutorError::ExecutionFailed(e.to_string()))
}
}
}
/// Execute multiple steps in parallel
pub async fn execute_parallel(
&self,
steps: &mut [WorkflowStep],
) -> Result<Vec<Result<(), ExecutorError>>, ExecutorError> {
debug!("Executing {} steps in parallel", steps.len());
let mut handles = Vec::new();
for step in steps.iter_mut() {
// Clone step data for async execution
let mut step_clone = step.clone();
let coordinator = Arc::clone(&self.coordinator);
let handle = tokio::spawn(async move {
let temp_executor = StepExecutor::new(coordinator);
temp_executor.execute_step(&mut step_clone).await
});
handles.push((handle, step));
}
let mut results = Vec::new();
// Wait for all tasks and update original steps
for (handle, _original_step) in handles {
let result = handle
.await
.map_err(|e| ExecutorError::ExecutionFailed(e.to_string()))?;
// Note: In a real implementation, we would need to update the original step
// with the results. For now, we just collect the results.
results.push(result);
}
Ok(results)
}
/// Execute steps sequentially
pub async fn execute_sequential(
&self,
steps: &mut [WorkflowStep],
) -> Result<(), ExecutorError> {
debug!("Executing {} steps sequentially", steps.len());
for step in steps.iter_mut() {
// Execute step
self.execute_step(step).await?;
// If step failed, stop execution
if matches!(step.status, StepStatus::Failed) {
return Err(ExecutorError::ExecutionFailed(format!(
"Step {} failed",
step.id
)));
}
}
Ok(())
}
/// Check if step can be executed (all dependencies met)
pub fn can_execute(&self, step: &WorkflowStep, completed_steps: &[String]) -> bool {
step.depends_on
.iter()
.all(|dep| completed_steps.contains(dep))
}
/// Get coordinator reference
pub fn coordinator(&self) -> Arc<AgentCoordinator> {
Arc::clone(&self.coordinator)
}
}
#[cfg(test)]
mod tests {
use super::*;
use vapora_agents::registry::AgentRegistry;
fn create_test_step(id: &str, role: &str) -> WorkflowStep {
WorkflowStep {
id: id.to_string(),
name: format!("Test step {}", id),
agent_role: role.to_string(),
status: StepStatus::Pending,
depends_on: vec![],
can_parallelize: true,
started_at: None,
completed_at: None,
result: None,
error: None,
}
}
#[tokio::test]
async fn test_executor_creation() {
let registry = Arc::new(AgentRegistry::new(5));
let config = AgentConfig {
registry: RegistryConfig {
max_agents_per_role: 5,
health_check_interval: 30,
agent_timeout: 300,
},
agents: vec![],
};
let coordinator = Arc::new(AgentCoordinator::new(config, registry).await.unwrap());
let executor = StepExecutor::new(coordinator);
// Verify executor is created successfully
assert!(executor.coordinator().registry().get_agent("nonexistent").is_none());
}
#[tokio::test]
async fn test_can_execute_with_dependencies() {
let registry = Arc::new(AgentRegistry::new(5));
let config = AgentConfig {
registry: RegistryConfig {
max_agents_per_role: 5,
health_check_interval: 30,
agent_timeout: 300,
},
agents: vec![],
};
let coordinator = Arc::new(AgentCoordinator::new(config, registry).await.unwrap());
let executor = StepExecutor::new(coordinator);
let mut step = create_test_step("step1", "developer");
step.depends_on = vec!["step0".to_string()];
// Should not be able to execute without dependency
assert!(!executor.can_execute(&step, &[]));
// Should be able to execute with dependency met
assert!(executor.can_execute(&step, &["step0".to_string()]));
}
#[tokio::test]
async fn test_invalid_state_transition() {
let registry = Arc::new(AgentRegistry::new(5));
let config = AgentConfig {
registry: RegistryConfig {
max_agents_per_role: 5,
health_check_interval: 30,
agent_timeout: 300,
},
agents: vec![],
};
let coordinator = Arc::new(AgentCoordinator::new(config, registry).await.unwrap());
let executor = StepExecutor::new(coordinator);
let mut step = create_test_step("step1", "developer");
step.status = StepStatus::Completed; // Already completed
let result = executor.execute_step(&mut step).await;
assert!(result.is_err());
assert!(matches!(result.unwrap_err(), ExecutorError::InvalidState { .. }));
}
#[tokio::test]
async fn test_step_execution_updates_timestamps() {
let registry = Arc::new(AgentRegistry::new(5));
// Register an agent
let agent = vapora_agents::registry::AgentMetadata::new(
"developer".to_string(),
"Test Developer".to_string(),
"claude".to_string(),
"claude-sonnet-4".to_string(),
vec!["coding".to_string()],
);
registry.register_agent(agent).unwrap();
let config = AgentConfig {
registry: RegistryConfig {
max_agents_per_role: 5,
health_check_interval: 30,
agent_timeout: 300,
},
agents: vec![],
};
let coordinator = Arc::new(AgentCoordinator::new(config, registry).await.unwrap());
let executor = StepExecutor::new(coordinator);
let mut step = create_test_step("step1", "developer");
assert!(step.started_at.is_none());
assert!(step.completed_at.is_none());
let result = executor.execute_step(&mut step).await;
// Should succeed since we have a registered agent
assert!(result.is_ok());
assert!(step.started_at.is_some());
assert!(step.completed_at.is_some());
assert_eq!(step.status, StepStatus::Completed);
}
}

View File

@ -0,0 +1,14 @@
// vapora-backend: Workflow orchestration module
// Phase 3: Workflow engine with state machine and parallel execution
pub mod engine;
pub mod executor;
pub mod parser;
pub mod scheduler;
pub mod state;
pub use engine::*;
pub use executor::*;
pub use parser::*;
pub use scheduler::*;
pub use state::*;

View File

@ -0,0 +1,274 @@
// vapora-backend: Workflow YAML parser
// Phase 3: Parse workflow definitions from YAML
use crate::workflow::state::{Phase, StepStatus, Workflow, WorkflowStep};
use serde::{Deserialize, Serialize};
use std::fs;
use thiserror::Error;
#[derive(Debug, Error)]
pub enum ParserError {
#[error("Failed to read file: {0}")]
FileError(#[from] std::io::Error),
#[error("Failed to parse YAML: {0}")]
YamlError(#[from] serde_yaml::Error),
#[error("Invalid workflow definition: {0}")]
ValidationError(String),
}
#[derive(Debug, Deserialize, Serialize)]
pub struct WorkflowYaml {
pub workflow: WorkflowDef,
}
#[derive(Debug, Deserialize, Serialize)]
pub struct WorkflowDef {
pub id: String,
pub title: String,
pub phases: Vec<PhaseDef>,
}
#[derive(Debug, Deserialize, Serialize)]
pub struct PhaseDef {
pub id: String,
pub name: String,
#[serde(default)]
pub parallel: bool,
#[serde(default = "default_estimated_hours")]
pub estimated_hours: f32,
pub steps: Vec<StepDef>,
}
#[derive(Debug, Deserialize, Serialize)]
pub struct StepDef {
pub id: String,
pub name: String,
pub agent: String,
#[serde(default)]
pub depends_on: Vec<String>,
#[serde(default)]
pub parallelizable: bool,
}
fn default_estimated_hours() -> f32 {
1.0
}
pub struct WorkflowParser;
impl WorkflowParser {
/// Parse workflow from YAML file
pub fn parse_file(path: &str) -> Result<Workflow, ParserError> {
let content = fs::read_to_string(path)?;
Self::parse_string(&content)
}
/// Parse workflow from YAML string
pub fn parse_string(yaml: &str) -> Result<Workflow, ParserError> {
let workflow_yaml: WorkflowYaml = serde_yaml::from_str(yaml)?;
Self::validate_and_convert(workflow_yaml)
}
/// Validate and convert YAML definition to runtime Workflow
fn validate_and_convert(yaml: WorkflowYaml) -> Result<Workflow, ParserError> {
let def = yaml.workflow;
// Validate workflow has phases
if def.phases.is_empty() {
return Err(ParserError::ValidationError(
"Workflow must have at least one phase".to_string(),
));
}
// Convert phases
let mut phases = Vec::new();
for phase_def in def.phases {
// Validate phase has steps
if phase_def.steps.is_empty() {
return Err(ParserError::ValidationError(format!(
"Phase '{}' must have at least one step",
phase_def.id
)));
}
// Convert steps
let steps: Vec<WorkflowStep> = phase_def
.steps
.into_iter()
.map(|step_def| WorkflowStep {
id: step_def.id,
name: step_def.name,
agent_role: step_def.agent,
status: StepStatus::Pending,
depends_on: step_def.depends_on,
can_parallelize: step_def.parallelizable,
started_at: None,
completed_at: None,
result: None,
error: None,
})
.collect();
// Validate dependencies exist
Self::validate_dependencies(&steps)?;
phases.push(Phase {
id: phase_def.id,
name: phase_def.name,
status: StepStatus::Pending,
steps,
parallel: phase_def.parallel,
estimated_hours: phase_def.estimated_hours,
});
}
Ok(Workflow::new(def.id, def.title, phases))
}
/// Validate that all step dependencies exist
fn validate_dependencies(steps: &[WorkflowStep]) -> Result<(), ParserError> {
let step_ids: std::collections::HashSet<_> = steps.iter().map(|s| &s.id).collect();
for step in steps {
for dep in &step.depends_on {
if !step_ids.contains(dep) {
return Err(ParserError::ValidationError(format!(
"Step '{}' depends on non-existent step '{}'",
step.id, dep
)));
}
}
}
Ok(())
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_parse_workflow_yaml() {
let yaml = r#"
workflow:
id: feature-auth
title: Implement MFA
phases:
- id: phase_1
name: Design
parallel: false
estimated_hours: 2.0
steps:
- id: step_1_1
name: Architect Design
agent: architect
depends_on: []
parallelizable: false
- id: phase_2
name: Implementation
parallel: true
estimated_hours: 8.0
steps:
- id: step_2_1
name: Backend API
agent: developer
depends_on: []
parallelizable: true
- id: step_2_2
name: Frontend UI
agent: developer
depends_on: []
parallelizable: true
"#;
let result = WorkflowParser::parse_string(yaml);
assert!(result.is_ok());
let workflow = result.unwrap();
assert_eq!(workflow.id, "feature-auth");
assert_eq!(workflow.title, "Implement MFA");
assert_eq!(workflow.phases.len(), 2);
assert_eq!(workflow.phases[0].steps.len(), 1);
assert_eq!(workflow.phases[1].steps.len(), 2);
assert!(workflow.phases[1].parallel);
}
#[test]
fn test_empty_phases_error() {
let yaml = r#"
workflow:
id: test
title: Test
phases: []
"#;
let result = WorkflowParser::parse_string(yaml);
assert!(result.is_err());
}
#[test]
fn test_empty_steps_error() {
let yaml = r#"
workflow:
id: test
title: Test
phases:
- id: phase_1
name: Phase
steps: []
"#;
let result = WorkflowParser::parse_string(yaml);
assert!(result.is_err());
}
#[test]
fn test_invalid_dependency() {
let yaml = r#"
workflow:
id: test
title: Test
phases:
- id: phase_1
name: Phase
steps:
- id: step_1
name: Step 1
agent: developer
depends_on: [nonexistent]
"#;
let result = WorkflowParser::parse_string(yaml);
assert!(result.is_err());
}
#[test]
fn test_valid_dependencies() {
let yaml = r#"
workflow:
id: test
title: Test
phases:
- id: phase_1
name: Phase
steps:
- id: step_1
name: Step 1
agent: developer
depends_on: []
- id: step_2
name: Step 2
agent: developer
depends_on: [step_1]
"#;
let result = WorkflowParser::parse_string(yaml);
assert!(result.is_ok());
let workflow = result.unwrap();
assert_eq!(workflow.phases[0].steps[1].depends_on, vec!["step_1"]);
}
}

View File

@ -0,0 +1,306 @@
// vapora-backend: Workflow dependency scheduler
// Phase 3: Topological sort for dependency resolution and parallel execution
use crate::workflow::state::WorkflowStep;
use std::collections::{HashMap, VecDeque};
use thiserror::Error;
#[derive(Debug, Error)]
pub enum SchedulerError {
#[error("Circular dependency detected in workflow")]
CircularDependency,
#[error("Invalid step reference: {0}")]
InvalidStepReference(String),
}
pub struct Scheduler;
impl Scheduler {
/// Resolve dependencies using topological sort (Kahn's algorithm)
/// Returns levels of steps that can be executed in parallel
pub fn resolve_dependencies(
steps: &[WorkflowStep],
) -> Result<Vec<Vec<String>>, SchedulerError> {
if steps.is_empty() {
return Ok(Vec::new());
}
// Build dependency graph
let mut graph: HashMap<String, Vec<String>> = HashMap::new();
let mut in_degree: HashMap<String, usize> = HashMap::new();
// Initialize graph with all steps
for step in steps {
in_degree.insert(step.id.clone(), step.depends_on.len());
graph.insert(step.id.clone(), Vec::new());
}
// Build adjacency list (reverse dependencies)
for step in steps {
for dep in &step.depends_on {
// Validate dependency exists
if !in_degree.contains_key(dep) {
return Err(SchedulerError::InvalidStepReference(dep.clone()));
}
// Add edge from dependency to dependent
if let Some(dependents) = graph.get_mut(dep) {
dependents.push(step.id.clone());
}
}
}
// Kahn's algorithm for topological sort
let mut queue: VecDeque<String> = VecDeque::new();
// Find all steps with no dependencies
for (step_id, degree) in &in_degree {
if *degree == 0 {
queue.push_back(step_id.clone());
}
}
let mut sorted_levels: Vec<Vec<String>> = Vec::new();
let mut processed_count = 0;
while !queue.is_empty() {
let mut current_level = Vec::new();
let level_size = queue.len();
// Process all steps at current level
for _ in 0..level_size {
if let Some(step_id) = queue.pop_front() {
current_level.push(step_id.clone());
processed_count += 1;
// Reduce in-degree for all dependents
if let Some(dependents) = graph.get(&step_id) {
for dependent in dependents {
if let Some(degree) = in_degree.get_mut(dependent) {
*degree -= 1;
// If in-degree becomes 0, add to queue
if *degree == 0 {
queue.push_back(dependent.clone());
}
}
}
}
}
}
if !current_level.is_empty() {
sorted_levels.push(current_level);
}
}
// Check for circular dependencies
if processed_count != steps.len() {
return Err(SchedulerError::CircularDependency);
}
Ok(sorted_levels)
}
/// Get steps that can be executed in parallel at each level
pub fn get_parallel_groups(
steps: &[WorkflowStep],
) -> Result<Vec<Vec<String>>, SchedulerError> {
let sorted_levels = Self::resolve_dependencies(steps)?;
// Filter to only include parallelizable steps
let parallel_groups = sorted_levels
.into_iter()
.map(|level| {
level
.into_iter()
.filter(|step_id| {
steps
.iter()
.find(|s| &s.id == step_id)
.map(|s| s.can_parallelize)
.unwrap_or(false)
})
.collect()
})
.filter(|level: &Vec<String>| !level.is_empty())
.collect();
Ok(parallel_groups)
}
/// Get execution order for sequential execution
pub fn get_sequential_order(steps: &[WorkflowStep]) -> Result<Vec<String>, SchedulerError> {
let levels = Self::resolve_dependencies(steps)?;
Ok(levels.into_iter().flatten().collect())
}
/// Validate workflow has no circular dependencies
pub fn validate_workflow(steps: &[WorkflowStep]) -> Result<(), SchedulerError> {
Self::resolve_dependencies(steps)?;
Ok(())
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::workflow::state::StepStatus;
fn create_step(id: &str, deps: Vec<&str>, parallel: bool) -> WorkflowStep {
WorkflowStep {
id: id.to_string(),
name: format!("Step {}", id),
agent_role: "developer".to_string(),
status: StepStatus::Pending,
depends_on: deps.iter().map(|s| s.to_string()).collect(),
can_parallelize: parallel,
started_at: None,
completed_at: None,
result: None,
error: None,
}
}
#[test]
fn test_simple_dependency_chain() {
let steps = vec![
create_step("a", vec![], true),
create_step("b", vec!["a"], true),
create_step("c", vec!["b"], true),
];
let result = Scheduler::resolve_dependencies(&steps);
assert!(result.is_ok());
let sorted = result.unwrap();
assert_eq!(sorted.len(), 3);
assert_eq!(sorted[0], vec!["a"]);
assert_eq!(sorted[1], vec!["b"]);
assert_eq!(sorted[2], vec!["c"]);
}
#[test]
fn test_parallel_execution() {
let steps = vec![
create_step("a", vec![], true),
create_step("b", vec!["a"], true),
create_step("c", vec!["a"], true),
create_step("d", vec!["b", "c"], true),
];
let result = Scheduler::resolve_dependencies(&steps);
assert!(result.is_ok());
let sorted = result.unwrap();
assert_eq!(sorted.len(), 3);
assert_eq!(sorted[0], vec!["a"]);
assert_eq!(sorted[1].len(), 2); // b and c can run in parallel
assert!(sorted[1].contains(&"b".to_string()));
assert!(sorted[1].contains(&"c".to_string()));
assert_eq!(sorted[2], vec!["d"]);
}
#[test]
fn test_circular_dependency() {
let steps = vec![
create_step("a", vec!["b"], true),
create_step("b", vec!["c"], true),
create_step("c", vec!["a"], true),
];
let result = Scheduler::resolve_dependencies(&steps);
assert!(result.is_err());
assert!(matches!(result.unwrap_err(), SchedulerError::CircularDependency));
}
#[test]
fn test_invalid_dependency_reference() {
let steps = vec![
create_step("a", vec![], true),
create_step("b", vec!["nonexistent"], true),
];
let result = Scheduler::resolve_dependencies(&steps);
assert!(result.is_err());
assert!(matches!(
result.unwrap_err(),
SchedulerError::InvalidStepReference(_)
));
}
#[test]
fn test_complex_dag() {
// a
// / \
// b c
// | |
// d e
// \ /
// f
let steps = vec![
create_step("a", vec![], true),
create_step("b", vec!["a"], true),
create_step("c", vec!["a"], true),
create_step("d", vec!["b"], true),
create_step("e", vec!["c"], true),
create_step("f", vec!["d", "e"], true),
];
let result = Scheduler::resolve_dependencies(&steps);
assert!(result.is_ok());
let sorted = result.unwrap();
assert_eq!(sorted.len(), 4);
assert_eq!(sorted[0], vec!["a"]);
assert_eq!(sorted[1].len(), 2); // b and c parallel
assert_eq!(sorted[2].len(), 2); // d and e parallel
assert_eq!(sorted[3], vec!["f"]);
}
#[test]
fn test_parallel_groups_filter() {
let steps = vec![
create_step("a", vec![], true),
create_step("b", vec!["a"], false), // Not parallelizable
create_step("c", vec!["a"], true),
];
let result = Scheduler::get_parallel_groups(&steps);
assert!(result.is_ok());
let groups = result.unwrap();
// Only "a" and "c" should be in parallel groups
assert!(groups[0].contains(&"a".to_string()));
assert!(groups[1].contains(&"c".to_string()));
assert!(!groups[1].contains(&"b".to_string()));
}
#[test]
fn test_sequential_order() {
let steps = vec![
create_step("c", vec!["a"], true),
create_step("a", vec![], true),
create_step("b", vec!["a"], true),
];
let result = Scheduler::get_sequential_order(&steps);
assert!(result.is_ok());
let order = result.unwrap();
assert_eq!(order.len(), 3);
assert_eq!(order[0], "a");
// b and c can be in any order after a
assert!(order[1..].contains(&"b".to_string()));
assert!(order[1..].contains(&"c".to_string()));
}
#[test]
fn test_empty_steps() {
let steps: Vec<WorkflowStep> = vec![];
let result = Scheduler::resolve_dependencies(&steps);
assert!(result.is_ok());
assert!(result.unwrap().is_empty());
}
}

View File

@ -0,0 +1,235 @@
// vapora-backend: Workflow state machine
// Phase 3: State management for workflow lifecycle
use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub enum WorkflowStatus {
Created,
Planning,
InProgress,
Blocked,
Completed,
Failed,
RolledBack,
}
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub enum StepStatus {
Pending,
Running,
Completed,
Failed,
Skipped,
Blocked,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Workflow {
pub id: String,
pub title: String,
pub status: WorkflowStatus,
pub phases: Vec<Phase>,
pub created_at: DateTime<Utc>,
pub started_at: Option<DateTime<Utc>>,
pub completed_at: Option<DateTime<Utc>>,
pub estimated_completion: Option<DateTime<Utc>>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Phase {
pub id: String,
pub name: String,
pub status: StepStatus,
pub steps: Vec<WorkflowStep>,
pub parallel: bool,
pub estimated_hours: f32,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct WorkflowStep {
pub id: String,
pub name: String,
pub agent_role: String,
pub status: StepStatus,
pub depends_on: Vec<String>,
pub can_parallelize: bool,
pub started_at: Option<DateTime<Utc>>,
pub completed_at: Option<DateTime<Utc>>,
pub result: Option<String>,
pub error: Option<String>,
}
impl Default for WorkflowStep {
fn default() -> Self {
Self {
id: String::new(),
name: String::new(),
agent_role: String::new(),
status: StepStatus::Pending,
depends_on: Vec::new(),
can_parallelize: false,
started_at: None,
completed_at: None,
result: None,
error: None,
}
}
}
impl Workflow {
/// Create a new workflow
pub fn new(id: String, title: String, phases: Vec<Phase>) -> Self {
Self {
id,
title,
status: WorkflowStatus::Created,
phases,
created_at: Utc::now(),
started_at: None,
completed_at: None,
estimated_completion: None,
}
}
/// Check if transition is allowed
pub fn can_transition(&self, to: &WorkflowStatus) -> bool {
match (&self.status, to) {
(WorkflowStatus::Created, WorkflowStatus::Planning) => true,
(WorkflowStatus::Planning, WorkflowStatus::InProgress) => true,
(WorkflowStatus::InProgress, WorkflowStatus::Completed) => true,
(WorkflowStatus::InProgress, WorkflowStatus::Failed) => true,
(WorkflowStatus::InProgress, WorkflowStatus::Blocked) => true,
(WorkflowStatus::Blocked, WorkflowStatus::InProgress) => true,
(WorkflowStatus::Failed, WorkflowStatus::RolledBack) => true,
_ => false,
}
}
/// Transition to new state
pub fn transition(&mut self, to: WorkflowStatus) -> Result<(), String> {
if !self.can_transition(&to) {
return Err(format!(
"Cannot transition from {:?} to {:?}",
self.status, to
));
}
match &to {
WorkflowStatus::InProgress => {
self.started_at = Some(Utc::now());
}
WorkflowStatus::Completed | WorkflowStatus::Failed | WorkflowStatus::RolledBack => {
self.completed_at = Some(Utc::now());
}
_ => {}
}
self.status = to;
Ok(())
}
/// Check if all steps are completed
pub fn all_steps_completed(&self) -> bool {
self.phases.iter().all(|p| {
p.steps
.iter()
.all(|s| matches!(s.status, StepStatus::Completed | StepStatus::Skipped))
})
}
/// Check if any step has failed
pub fn any_step_failed(&self) -> bool {
self.phases
.iter()
.any(|p| p.steps.iter().any(|s| matches!(s.status, StepStatus::Failed)))
}
/// Get workflow progress percentage
pub fn progress_percent(&self) -> u32 {
let total_steps: usize = self.phases.iter().map(|p| p.steps.len()).sum();
if total_steps == 0 {
return 0;
}
let completed_steps: usize = self
.phases
.iter()
.flat_map(|p| &p.steps)
.filter(|s| matches!(s.status, StepStatus::Completed | StepStatus::Skipped))
.count();
((completed_steps as f64 / total_steps as f64) * 100.0) as u32
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_workflow_creation() {
let workflow = Workflow::new("wf-1".to_string(), "Test Workflow".to_string(), vec![]);
assert_eq!(workflow.id, "wf-1");
assert_eq!(workflow.status, WorkflowStatus::Created);
assert!(workflow.started_at.is_none());
}
#[test]
fn test_valid_transitions() {
let mut workflow = Workflow::new("wf-1".to_string(), "Test".to_string(), vec![]);
assert!(workflow.transition(WorkflowStatus::Planning).is_ok());
assert_eq!(workflow.status, WorkflowStatus::Planning);
assert!(workflow.transition(WorkflowStatus::InProgress).is_ok());
assert_eq!(workflow.status, WorkflowStatus::InProgress);
assert!(workflow.started_at.is_some());
assert!(workflow.transition(WorkflowStatus::Completed).is_ok());
assert_eq!(workflow.status, WorkflowStatus::Completed);
assert!(workflow.completed_at.is_some());
}
#[test]
fn test_invalid_transition() {
let mut workflow = Workflow::new("wf-1".to_string(), "Test".to_string(), vec![]);
let result = workflow.transition(WorkflowStatus::Completed);
assert!(result.is_err());
}
#[test]
fn test_progress_calculation() {
let mut workflow = Workflow::new(
"wf-1".to_string(),
"Test".to_string(),
vec![Phase {
id: "p1".to_string(),
name: "Phase 1".to_string(),
status: StepStatus::Running,
parallel: false,
estimated_hours: 2.0,
steps: vec![
WorkflowStep {
id: "s1".to_string(),
status: StepStatus::Completed,
..Default::default()
},
WorkflowStep {
id: "s2".to_string(),
status: StepStatus::Running,
..Default::default()
},
],
}],
);
assert_eq!(workflow.progress_percent(), 50);
workflow.phases[0].steps[1].status = StepStatus::Completed;
assert_eq!(workflow.progress_percent(), 100);
}
}

View File

@ -0,0 +1,140 @@
// Integration tests for VAPORA backend
// These tests verify the complete API functionality
use axum::http::StatusCode;
use axum_test::TestServer;
use chrono::Utc;
use vapora_shared::models::{Agent, AgentRole, AgentStatus, Project, ProjectStatus, Task, TaskPriority, TaskStatus};
/// Helper function to create a test project
fn create_test_project() -> Project {
Project {
id: None,
tenant_id: "test-tenant".to_string(),
title: "Test Project".to_string(),
description: Some("A test project".to_string()),
status: ProjectStatus::Active,
features: vec!["feature1".to_string()],
created_at: Utc::now(),
updated_at: Utc::now(),
}
}
/// Helper function to create a test task
fn create_test_task(project_id: String) -> Task {
Task {
id: None,
tenant_id: "test-tenant".to_string(),
project_id,
title: "Test Task".to_string(),
description: Some("A test task".to_string()),
status: TaskStatus::Todo,
assignee: "unassigned".to_string(),
priority: TaskPriority::Medium,
task_order: 0,
feature: Some("feature1".to_string()),
created_at: Utc::now(),
updated_at: Utc::now(),
}
}
/// Helper function to create a test agent
fn create_test_agent() -> Agent {
Agent {
id: "test-agent-1".to_string(),
role: AgentRole::Developer,
name: "Test Developer Agent".to_string(),
version: "1.0.0".to_string(),
status: AgentStatus::Active,
capabilities: vec!["rust".to_string(), "async".to_string()],
skills: vec!["backend".to_string()],
llm_provider: "claude".to_string(),
llm_model: "claude-sonnet-4".to_string(),
max_concurrent_tasks: 3,
created_at: Utc::now(),
}
}
#[tokio::test]
async fn test_health_endpoint() {
// Note: This test doesn't require a running server
// It's a placeholder for actual integration tests
// Real tests would use TestServer and require SurrealDB to be running
}
#[tokio::test]
async fn test_project_lifecycle() {
// Note: This test requires a running SurrealDB instance
// For now, it's a placeholder demonstrating the test structure
// Real implementation would:
// 1. Create a TestServer with the app
// 2. POST /api/v1/projects - create project
// 3. GET /api/v1/projects/:id - verify creation
// 4. PUT /api/v1/projects/:id - update project
// 5. DELETE /api/v1/projects/:id - delete project
}
#[tokio::test]
async fn test_task_lifecycle() {
// Note: Placeholder test
// Real implementation would test:
// 1. Create task
// 2. List tasks
// 3. Update task status
// 4. Reorder task
// 5. Delete task
}
#[tokio::test]
async fn test_agent_registration() {
// Note: Placeholder test
// Real implementation would test:
// 1. Register agent
// 2. List agents
// 3. Update agent status
// 4. Check agent health
// 5. Deregister agent
}
#[tokio::test]
async fn test_kanban_operations() {
// Note: Placeholder test
// Real implementation would test:
// 1. Create multiple tasks in different columns
// 2. Move task between columns
// 3. Reorder tasks within a column
// 4. Verify task order is maintained
}
#[tokio::test]
async fn test_error_handling() {
// Note: Placeholder test
// Real implementation would test:
// 1. Not found errors (404)
// 2. Invalid input errors (400)
// 3. Unauthorized errors (401)
// 4. Database errors (500)
}
// Note: To run these tests properly, you would need:
// 1. A test SurrealDB instance running
// 2. Test fixtures and cleanup
// 3. TestServer setup from axum_test
//
// Example of a real test structure:
//
// #[tokio::test]
// async fn test_create_project_real() {
// let app = build_test_app().await;
// let server = TestServer::new(app).unwrap();
//
// let project = create_test_project();
// let response = server
// .post("/api/v1/projects")
// .json(&project)
// .await;
//
// assert_eq!(response.status_code(), StatusCode::CREATED);
// let created: Project = response.json();
// assert_eq!(created.title, project.title);
// }

View File

@ -0,0 +1,37 @@
// Test for Prometheus metrics endpoint
// Verifies that metrics are properly exposed
use vapora_swarm::SwarmMetrics;
#[tokio::test]
async fn test_metrics_endpoint_with_coordinator() {
// Initialize metrics
let metrics = SwarmMetrics::new();
assert!(metrics.is_ok(), "SwarmMetrics should initialize successfully");
let metrics = metrics.unwrap();
// Record some activities
metrics.record_assignment_success(0.042, "simple");
metrics.update_agent_metrics(5, 4, 0.35);
metrics.record_coalition_formed();
// Gather metrics (this is what the endpoint does)
let metric_families = prometheus::gather();
// Verify swarm metrics are registered
let metric_names: Vec<&str> = metric_families
.iter()
.map(|mf| mf.name())
.collect();
// Should have at least some swarm metrics
let has_swarm_metrics = metric_names
.iter()
.any(|name| name.starts_with("vapora_swarm_"));
assert!(
has_swarm_metrics || !metric_names.is_empty(),
"Should be able to gather metrics from Prometheus"
);
}

View File

@ -0,0 +1,288 @@
// Integration tests for Swarm API endpoints
// Tests verify swarm statistics and health monitoring endpoints
use std::sync::Arc;
use vapora_swarm::{SwarmCoordinator, AgentProfile};
/// Helper to create a test agent profile
fn create_test_profile(id: &str, success_rate: f64, load: f64) -> AgentProfile {
AgentProfile {
id: id.to_string(),
roles: vec!["developer".to_string()],
capabilities: vec!["coding".to_string(), "testing".to_string()],
current_load: load,
success_rate,
availability: true,
}
}
#[tokio::test]
async fn test_swarm_coordinator_initialization() {
// Create a SwarmCoordinator
let swarm = Arc::new(SwarmCoordinator::new());
// Register test profiles
let profile1 = create_test_profile("agent-1", 0.95, 0.3);
let profile2 = create_test_profile("agent-2", 0.85, 0.5);
swarm.register_agent(profile1).ok();
swarm.register_agent(profile2).ok();
// Get statistics
let stats = swarm.get_swarm_stats();
// Verify statistics
assert_eq!(stats.total_agents, 2);
assert_eq!(stats.available_agents, 2);
assert!(stats.avg_load > 0.0);
assert!(stats.active_tasks == 0); // No tasks assigned yet
}
#[tokio::test]
async fn test_swarm_health_status_healthy() {
// Create swarm with available agents
let swarm = Arc::new(SwarmCoordinator::new());
let profile1 = create_test_profile("agent-1", 0.95, 0.3);
let profile2 = create_test_profile("agent-2", 0.90, 0.2);
swarm.register_agent(profile1).ok();
swarm.register_agent(profile2).ok();
let stats = swarm.get_swarm_stats();
// Verify health calculation
assert_eq!(stats.total_agents, 2);
assert_eq!(stats.available_agents, 2);
// All agents available = healthy
let is_healthy = stats.total_agents > 0 && stats.available_agents > 0;
assert!(is_healthy);
}
#[tokio::test]
async fn test_swarm_health_status_degraded() {
// Create swarm with some unavailable agents
let swarm = Arc::new(SwarmCoordinator::new());
let available_profile = create_test_profile("agent-1", 0.95, 0.3);
let mut unavailable_profile = create_test_profile("agent-2", 0.85, 0.5);
unavailable_profile.availability = false;
swarm.register_agent(available_profile).ok();
swarm.register_agent(unavailable_profile).ok();
let stats = swarm.get_swarm_stats();
// Verify health calculation
assert_eq!(stats.total_agents, 2);
assert_eq!(stats.available_agents, 1);
// Some unavailable = degraded
let is_degraded = stats.total_agents > 0 && stats.available_agents < stats.total_agents;
assert!(is_degraded);
}
#[tokio::test]
async fn test_swarm_health_status_no_agents() {
// Create empty swarm
let swarm = Arc::new(SwarmCoordinator::new());
let stats = swarm.get_swarm_stats();
// Verify no agents
assert_eq!(stats.total_agents, 0);
assert_eq!(stats.available_agents, 0);
}
#[tokio::test]
async fn test_swarm_statistics_load_calculation() {
// Create swarm with varied load profiles
let swarm = Arc::new(SwarmCoordinator::new());
let light_load = create_test_profile("agent-1", 0.95, 0.1);
let medium_load = create_test_profile("agent-2", 0.85, 0.5);
let high_load = create_test_profile("agent-3", 0.80, 0.9);
swarm.register_agent(light_load).ok();
swarm.register_agent(medium_load).ok();
swarm.register_agent(high_load).ok();
let stats = swarm.get_swarm_stats();
// Verify load calculation (average of 0.1, 0.5, 0.9 = 0.5)
assert_eq!(stats.total_agents, 3);
assert!(stats.avg_load > 0.4 && stats.avg_load < 0.6);
}
#[tokio::test]
async fn test_swarm_statistics_success_rate_variance() {
// Create swarm with different success rates
let swarm = Arc::new(SwarmCoordinator::new());
let high_success = create_test_profile("agent-1", 0.99, 0.2);
let medium_success = create_test_profile("agent-2", 0.50, 0.3);
let low_success = create_test_profile("agent-3", 0.10, 0.1);
swarm.register_agent(high_success).ok();
swarm.register_agent(medium_success).ok();
swarm.register_agent(low_success).ok();
let stats = swarm.get_swarm_stats();
// Verify all agents registered despite variance
assert_eq!(stats.total_agents, 3);
assert_eq!(stats.available_agents, 3);
}
#[tokio::test]
async fn test_swarm_agent_availability_transitions() {
// Create swarm with available agent
let swarm = Arc::new(SwarmCoordinator::new());
let mut profile = create_test_profile("agent-1", 0.95, 0.3);
swarm.register_agent(profile.clone()).ok();
// Verify initial state
let mut stats = swarm.get_swarm_stats();
assert_eq!(stats.available_agents, 1);
// Mark agent unavailable
profile.availability = false;
swarm.register_agent(profile).ok();
// Verify transition
stats = swarm.get_swarm_stats();
assert_eq!(stats.available_agents, 0);
}
#[tokio::test]
async fn test_swarm_unregister_agent() {
// Create swarm with agent
let swarm = Arc::new(SwarmCoordinator::new());
let profile = create_test_profile("agent-1", 0.95, 0.3);
swarm.register_agent(profile).ok();
let mut stats = swarm.get_swarm_stats();
assert_eq!(stats.total_agents, 1);
// Unregister agent
swarm.unregister_agent("agent-1").ok();
// Verify removal
stats = swarm.get_swarm_stats();
assert_eq!(stats.total_agents, 0);
}
#[tokio::test]
async fn test_swarm_task_assignment_selects_best_agent() {
// Create swarm with agents of different quality
let swarm = Arc::new(SwarmCoordinator::new());
let poor_agent = create_test_profile("agent-poor", 0.50, 0.9); // Low success, high load
let good_agent = create_test_profile("agent-good", 0.95, 0.2); // High success, low load
swarm.register_agent(poor_agent).ok();
swarm.register_agent(good_agent).ok();
// Score: success_rate / (1.0 + load)
// agent-poor: 0.50 / (1.0 + 0.9) = 0.50 / 1.9 ≈ 0.26
// agent-good: 0.95 / (1.0 + 0.2) = 0.95 / 1.2 ≈ 0.79
// agent-good should be selected
// Verify agent-good has better score
let poor_score = 0.50 / (1.0 + 0.9);
let good_score = 0.95 / (1.0 + 0.2);
assert!(good_score > poor_score);
}
#[tokio::test]
async fn test_swarm_statistics_consistency() {
// Test that statistics remain consistent with multiple operations
let swarm = Arc::new(SwarmCoordinator::new());
// Initial state
let mut stats = swarm.get_swarm_stats();
assert_eq!(stats.total_agents, 0);
// Add agents
for i in 0..5 {
let profile = create_test_profile(&format!("agent-{}", i), 0.85, 0.3);
swarm.register_agent(profile).ok();
}
stats = swarm.get_swarm_stats();
assert_eq!(stats.total_agents, 5);
assert_eq!(stats.available_agents, 5);
// Update one agent to unavailable
let mut profile = create_test_profile("agent-0", 0.85, 0.3);
profile.availability = false;
swarm.register_agent(profile).ok();
stats = swarm.get_swarm_stats();
assert_eq!(stats.total_agents, 5);
assert_eq!(stats.available_agents, 4);
// Remove one agent
swarm.unregister_agent("agent-1").ok();
stats = swarm.get_swarm_stats();
assert_eq!(stats.total_agents, 4);
assert_eq!(stats.available_agents, 3);
}
#[tokio::test]
async fn test_swarm_large_agent_pool() {
// Test swarm behavior with larger agent pool
let swarm = Arc::new(SwarmCoordinator::new());
// Register 50 agents with varied metrics
for i in 0..50 {
let success_rate = if i % 3 == 0 {
0.95
} else if i % 3 == 1 {
0.75
} else {
0.55
};
let load = (i as f64 % 10.0) / 10.0;
let profile = create_test_profile(&format!("agent-{}", i), success_rate, load);
swarm.register_agent(profile).ok();
}
let stats = swarm.get_swarm_stats();
// Verify all registered
assert_eq!(stats.total_agents, 50);
assert_eq!(stats.available_agents, 50);
// Verify average load is reasonable
assert!(stats.avg_load > 0.0 && stats.avg_load < 1.0);
}
#[tokio::test]
async fn test_swarm_empty_after_unregister_all() {
// Create swarm with agents
let swarm = Arc::new(SwarmCoordinator::new());
for i in 0..3 {
let profile = create_test_profile(&format!("agent-{}", i), 0.85, 0.3);
swarm.register_agent(profile).ok();
}
let mut stats = swarm.get_swarm_stats();
assert_eq!(stats.total_agents, 3);
// Unregister all
for i in 0..3 {
swarm.unregister_agent(&format!("agent-{}", i)).ok();
}
stats = swarm.get_swarm_stats();
assert_eq!(stats.total_agents, 0);
assert_eq!(stats.available_agents, 0);
}

View File

@ -0,0 +1,363 @@
// Integration tests for Phase 3: Workflow orchestration
// Tests the complete workflow system end-to-end
use std::sync::Arc;
use vapora_agents::{coordinator::AgentCoordinator, registry::AgentRegistry};
use vapora_backend::{
api::websocket::WorkflowBroadcaster,
audit::AuditTrail,
services::WorkflowService,
workflow::{
engine::WorkflowEngine,
executor::StepExecutor,
parser::WorkflowParser,
scheduler::Scheduler,
state::{Phase, StepStatus, Workflow, WorkflowStatus, WorkflowStep},
},
};
#[tokio::test]
async fn test_workflow_state_transitions() {
let mut workflow = Workflow::new("wf-1".to_string(), "Test Workflow".to_string(), vec![]);
// Test valid transitions
assert!(workflow.transition(WorkflowStatus::Planning).is_ok());
assert_eq!(workflow.status, WorkflowStatus::Planning);
assert!(workflow.transition(WorkflowStatus::InProgress).is_ok());
assert_eq!(workflow.status, WorkflowStatus::InProgress);
assert!(workflow.started_at.is_some());
assert!(workflow.transition(WorkflowStatus::Completed).is_ok());
assert_eq!(workflow.status, WorkflowStatus::Completed);
assert!(workflow.completed_at.is_some());
}
#[tokio::test]
async fn test_workflow_parser() {
let yaml = r#"
workflow:
id: test-workflow
title: Test Workflow
phases:
- id: phase1
name: Design Phase
parallel: false
estimated_hours: 2.0
steps:
- id: step1
name: Create design
agent: architect
depends_on: []
parallelizable: false
- id: phase2
name: Implementation
parallel: true
estimated_hours: 8.0
steps:
- id: step2
name: Implement backend
agent: developer
depends_on: []
parallelizable: true
- id: step3
name: Implement frontend
agent: developer
depends_on: []
parallelizable: true
"#;
let result = WorkflowParser::parse_string(yaml);
assert!(result.is_ok());
let workflow = result.unwrap();
assert_eq!(workflow.id, "test-workflow");
assert_eq!(workflow.phases.len(), 2);
assert!(workflow.phases[1].parallel);
assert_eq!(workflow.phases[1].steps.len(), 2);
}
#[tokio::test]
async fn test_dependency_resolution() {
let steps = vec![
WorkflowStep {
id: "a".to_string(),
name: "Step A".to_string(),
agent_role: "dev".to_string(),
status: StepStatus::Pending,
depends_on: vec![],
can_parallelize: true,
started_at: None,
completed_at: None,
result: None,
error: None,
},
WorkflowStep {
id: "b".to_string(),
name: "Step B".to_string(),
agent_role: "dev".to_string(),
status: StepStatus::Pending,
depends_on: vec!["a".to_string()],
can_parallelize: true,
started_at: None,
completed_at: None,
result: None,
error: None,
},
WorkflowStep {
id: "c".to_string(),
name: "Step C".to_string(),
agent_role: "dev".to_string(),
status: StepStatus::Pending,
depends_on: vec!["a".to_string()],
can_parallelize: true,
started_at: None,
completed_at: None,
result: None,
error: None,
},
];
let result = Scheduler::resolve_dependencies(&steps);
assert!(result.is_ok());
let levels = result.unwrap();
assert_eq!(levels.len(), 2);
assert_eq!(levels[0], vec!["a"]);
assert_eq!(levels[1].len(), 2); // b and c can execute in parallel
}
#[tokio::test]
async fn test_workflow_engine() {
let registry = Arc::new(AgentRegistry::new(5));
let coordinator = Arc::new(AgentCoordinator::new(registry));
let executor = StepExecutor::new(coordinator);
let engine = WorkflowEngine::new(executor);
let workflow = Workflow::new(
"engine-test".to_string(),
"Engine Test".to_string(),
vec![Phase {
id: "p1".to_string(),
name: "Phase 1".to_string(),
status: StepStatus::Pending,
parallel: false,
estimated_hours: 1.0,
steps: vec![WorkflowStep {
id: "s1".to_string(),
name: "Step 1".to_string(),
agent_role: "developer".to_string(),
status: StepStatus::Pending,
depends_on: vec![],
can_parallelize: true,
started_at: None,
completed_at: None,
result: None,
error: None,
}],
}],
);
let id = workflow.id.clone();
let result = engine.register_workflow(workflow).await;
assert!(result.is_ok());
let retrieved = engine.get_workflow(&id).await;
assert!(retrieved.is_some());
assert_eq!(retrieved.unwrap().id, id);
}
#[tokio::test]
async fn test_workflow_service_integration() {
let registry = Arc::new(AgentRegistry::new(5));
let coordinator = Arc::new(AgentCoordinator::new(registry));
let executor = StepExecutor::new(coordinator);
let engine = Arc::new(WorkflowEngine::new(executor));
let broadcaster = Arc::new(WorkflowBroadcaster::new());
let audit = Arc::new(AuditTrail::new());
let service = WorkflowService::new(engine, broadcaster, audit.clone());
let workflow = Workflow::new(
"service-test".to_string(),
"Service Test".to_string(),
vec![Phase {
id: "p1".to_string(),
name: "Test Phase".to_string(),
status: StepStatus::Pending,
parallel: false,
estimated_hours: 1.0,
steps: vec![],
}],
);
// Need at least one step for valid workflow
let workflow = Workflow::new(
"service-test".to_string(),
"Service Test".to_string(),
vec![Phase {
id: "p1".to_string(),
name: "Test Phase".to_string(),
status: StepStatus::Pending,
parallel: false,
estimated_hours: 1.0,
steps: vec![WorkflowStep {
id: "s1".to_string(),
name: "Test Step".to_string(),
agent_role: "developer".to_string(),
status: StepStatus::Pending,
depends_on: vec![],
can_parallelize: false,
started_at: None,
completed_at: None,
result: None,
error: None,
}],
}],
);
let id = workflow.id.clone();
let result = service.create_workflow(workflow).await;
assert!(result.is_ok());
// Check audit trail
let audit_entries = service.get_audit_trail(&id).await;
assert!(!audit_entries.is_empty());
}
#[tokio::test]
async fn test_websocket_broadcaster() {
let broadcaster = WorkflowBroadcaster::new();
let mut rx = broadcaster.subscribe();
let update = vapora_backend::api::websocket::WorkflowUpdate::new(
"wf-1".to_string(),
"in_progress".to_string(),
50,
"Test update".to_string(),
);
broadcaster.send_update(update.clone());
let received = rx.recv().await.unwrap();
assert_eq!(received.workflow_id, "wf-1");
assert_eq!(received.progress, 50);
}
#[tokio::test]
async fn test_audit_trail() {
let audit = AuditTrail::new();
audit
.log_event(
"wf-1".to_string(),
"workflow_started".to_string(),
"system".to_string(),
serde_json::json!({"test": "data"}),
)
.await;
let entries = audit.get_workflow_audit("wf-1").await;
assert_eq!(entries.len(), 1);
assert_eq!(entries[0].event_type, "workflow_started");
}
#[tokio::test]
async fn test_circular_dependency_detection() {
let steps = vec![
WorkflowStep {
id: "a".to_string(),
name: "A".to_string(),
agent_role: "dev".to_string(),
status: StepStatus::Pending,
depends_on: vec!["c".to_string()],
can_parallelize: false,
started_at: None,
completed_at: None,
result: None,
error: None,
},
WorkflowStep {
id: "b".to_string(),
name: "B".to_string(),
agent_role: "dev".to_string(),
status: StepStatus::Pending,
depends_on: vec!["a".to_string()],
can_parallelize: false,
started_at: None,
completed_at: None,
result: None,
error: None,
},
WorkflowStep {
id: "c".to_string(),
name: "C".to_string(),
agent_role: "dev".to_string(),
status: StepStatus::Pending,
depends_on: vec!["b".to_string()],
can_parallelize: false,
started_at: None,
completed_at: None,
result: None,
error: None,
},
];
let result = Scheduler::resolve_dependencies(&steps);
assert!(result.is_err());
}
#[tokio::test]
async fn test_workflow_progress_calculation() {
let workflow = Workflow::new(
"progress-test".to_string(),
"Progress Test".to_string(),
vec![Phase {
id: "p1".to_string(),
name: "Phase 1".to_string(),
status: StepStatus::Running,
parallel: false,
estimated_hours: 1.0,
steps: vec![
WorkflowStep {
id: "s1".to_string(),
name: "Step 1".to_string(),
agent_role: "dev".to_string(),
status: StepStatus::Completed,
depends_on: vec![],
can_parallelize: false,
started_at: None,
completed_at: None,
result: None,
error: None,
},
WorkflowStep {
id: "s2".to_string(),
name: "Step 2".to_string(),
agent_role: "dev".to_string(),
status: StepStatus::Running,
depends_on: vec![],
can_parallelize: false,
started_at: None,
completed_at: None,
result: None,
error: None,
},
WorkflowStep {
id: "s3".to_string(),
name: "Step 3".to_string(),
agent_role: "dev".to_string(),
status: StepStatus::Pending,
depends_on: vec![],
can_parallelize: false,
started_at: None,
completed_at: None,
result: None,
error: None,
},
],
}],
);
assert_eq!(workflow.progress_percent(), 33); // 1 of 3 completed
}

38
crates/vapora-doc-lifecycle/Cargo.toml Normal file → Executable file
View File

@ -1,23 +1,35 @@
[package]
name = "vapora-doc-lifecycle"
version = "0.1.0"
edition = "2021"
authors = ["Jesus Perez <jesus@jesusperez.pro>"]
license = "MIT"
version.workspace = true
edition.workspace = true
authors.workspace = true
license.workspace = true
repository.workspace = true
rust-version.workspace = true
description = "VAPORA adapter for documentation lifecycle management"
[dependencies]
doc-lifecycle-core = { path = "../../../Tools/doc-lifecycle-manager/crates/doc-lifecycle-core" }
tokio = { version = "1.35", features = ["full"] }
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
tracing = "0.1"
anyhow = "1.0"
thiserror = "1.0"
doc-lifecycle-core = { path = "../doc-lifecycle-core" }
tokio = { workspace = true }
serde = { workspace = true }
serde_json = { workspace = true }
tracing = { workspace = true }
tracing-subscriber = { workspace = true }
anyhow = { workspace = true }
thiserror = { workspace = true }
uuid = { workspace = true }
chrono = { workspace = true }
async-trait = { workspace = true }
# VAPORA dependencies (will be added later)
# NATS for event messaging
async-nats = { workspace = true }
# Future SurrealDB integration
# surrealdb = { version = "1.0", features = ["kv", "storage-mem"] }
# VAPORA dependencies (when ready)
# vapora-shared = { path = "../vapora-shared" }
# vapora-agents = { path = "../vapora-agents" }
[dev-dependencies]
tempfile = "3.8"
tempfile = { workspace = true }

0
crates/vapora-doc-lifecycle/src/config.rs Normal file → Executable file
View File

0
crates/vapora-doc-lifecycle/src/documenter.rs Normal file → Executable file
View File

0
crates/vapora-doc-lifecycle/src/error.rs Normal file → Executable file
View File

0
crates/vapora-doc-lifecycle/src/lib.rs Normal file → Executable file
View File

0
crates/vapora-doc-lifecycle/src/plugin.rs Normal file → Executable file
View File

Some files were not shown because too many files have changed in this diff Show More