VAPORA Examples Guide
Comprehensive guide to understanding and using VAPORA's example collection.
Overview
VAPORA includes 26+ runnable examples demonstrating all major features:
- 6 Basic examples - Hello world for each component
- 9 Intermediate examples - Multi-system integration patterns
- 2 Advanced examples - End-to-end full-stack workflows
- 3 Real-world examples - Production scenarios with ROI analysis
- 4 Interactive notebooks - Marimo-based exploration (requires Python)
Total time to explore all examples: 2-3 hours
Quick Start
Run Your First Example
# Navigate to workspace root
cd /path/to/vapora
# Run basic agent example
cargo run --example 01-simple-agent -p vapora-agents
Expected output:
=== Simple Agent Registration Example ===
Created agent registry with capacity 10
Defined agent: "Developer A" (role: developer)
Capabilities: ["coding", "testing"]
Agent registered successfully
Agent ID: <uuid>
List All Available Examples
# Per-crate examples
cargo build --examples -p vapora-agents
# All examples in workspace
cargo build --examples --workspace
Examples by Category
Phase 1: Basic Examples (Foundation)
Start here to understand individual components.
Agent Registry
File: crates/vapora-agents/examples/01-simple-agent.rs
What it demonstrates:
- Creating an agent registry
- Registering agents with metadata
- Querying registered agents
- Agent status management
Run:
cargo run --example 01-simple-agent -p vapora-agents
Key concepts:
- `AgentRegistry` - thread-safe registry with capacity limits
- `AgentMetadata` - agent name, role, capabilities, LLM provider
- `AgentStatus` - Active, Busy, Offline
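To make the flow concrete before running the example, here is a minimal self-contained sketch; `AgentRegistry` and `AgentMetadata` below are illustrative stand-ins with invented fields, not the crate's actual API:

```rust
use std::collections::HashMap;

// Hypothetical stand-ins for the vapora-agents types; fields are invented.
struct AgentMetadata {
    name: String,
    role: String,
    capabilities: Vec<String>,
}

struct AgentRegistry {
    capacity: usize,
    agents: HashMap<u64, AgentMetadata>,
    next_id: u64,
}

impl AgentRegistry {
    fn new(capacity: usize) -> Self {
        Self { capacity, agents: HashMap::new(), next_id: 0 }
    }

    // Register an agent, enforcing the capacity limit.
    fn register(&mut self, meta: AgentMetadata) -> Result<u64, String> {
        if self.agents.len() >= self.capacity {
            return Err("registry at capacity".into());
        }
        let id = self.next_id;
        self.next_id += 1;
        self.agents.insert(id, meta);
        Ok(id)
    }
}

fn main() {
    let mut registry = AgentRegistry::new(10);
    let meta = AgentMetadata {
        name: "Developer A".into(),
        role: "developer".into(),
        capabilities: vec!["coding".into(), "testing".into()],
    };
    let id = registry.register(meta).expect("registration failed");
    println!("Registered {} (id {id})", registry.agents[&id].name);
}
```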
Time: 5-10 minutes
LLM Provider Selection
File: crates/vapora-llm-router/examples/01-provider-selection.rs
What it demonstrates:
- Available LLM providers (Claude, GPT-4, Gemini, Ollama)
- Provider pricing and use cases
- Routing rules by task type
- Cost comparison
Run:
cargo run --example 01-provider-selection -p vapora-llm-router
Key concepts:
- Provider routing rules
- Cost per 1M tokens
- Fallback strategy
- Task type matching
Time: 5-10 minutes
Swarm Coordination
File: crates/vapora-swarm/examples/01-agent-registration.rs
What it demonstrates:
- Swarm coordinator creation
- Agent registration with capabilities
- Swarm statistics
- Load balancing basics
Run:
cargo run --example 01-agent-registration -p vapora-swarm
Key concepts:
- `SwarmCoordinator` - manages agent pool
- Agent capabilities filtering
- Load distribution calculation
- `success_rate / (1 + current_load)` scoring
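The scoring rule is small enough to sketch directly. A minimal version of the documented formula, assuming the coordinator scores agents exactly as written (the real `SwarmCoordinator` internals may differ):

```rust
// Load-balancing score from the example: higher success and lower load win.
fn swarm_score(success_rate: f64, current_load: f64) -> f64 {
    success_rate / (1.0 + current_load)
}

fn main() {
    // An idle 85%-success agent slightly outranks a busier 92% one.
    println!("{:.2}", swarm_score(0.92, 0.20)); // 0.77
    println!("{:.2}", swarm_score(0.85, 0.10)); // 0.77 (marginally higher)
}
```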
Time: 5-10 minutes
Knowledge Graph
File: crates/vapora-knowledge-graph/examples/01-execution-tracking.rs
What it demonstrates:
- Recording execution history
- Querying executions by agent/task type
- Cost analysis per provider
- Success rate calculations
Run:
cargo run --example 01-execution-tracking -p vapora-knowledge-graph
Key concepts:
- `ExecutionRecord` - timestamp, duration, success, cost
- Temporal queries (last 7/14/30 days)
- Provider cost breakdown
- Success rate trends
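As a rough illustration of how such records roll up into a report, here is a self-contained sketch; the `ExecutionRecord` fields below are invented for this example:

```rust
use std::collections::HashMap;

// Illustrative record shape; the crate's actual ExecutionRecord differs.
struct ExecutionRecord {
    provider: String,
    success: bool,
    cost_cents: f64,
}

fn main() {
    let records = vec![
        ExecutionRecord { provider: "claude".into(), success: true, cost_cents: 3.4 },
        ExecutionRecord { provider: "gpt-4".into(), success: false, cost_cents: 6.5 },
        ExecutionRecord { provider: "claude".into(), success: true, cost_cents: 3.3 },
    ];

    // Cost breakdown per provider, as in the example's report.
    let mut per_provider: HashMap<&str, f64> = HashMap::new();
    for r in &records {
        *per_provider.entry(r.provider.as_str()).or_insert(0.0) += r.cost_cents;
    }
    let successes = records.iter().filter(|r| r.success).count();
    println!("success rate: {:.0}%", 100.0 * successes as f64 / records.len() as f64);
    for (p, c) in &per_provider {
        println!("{p}: {c:.1} cents");
    }
}
```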
Time: 5-10 minutes
Backend Health Check
File: crates/vapora-backend/examples/01-health-check.rs
What it demonstrates:
- Backend service health status
- Dependency verification
- Monitoring endpoints
- Troubleshooting guide
Run:
cargo run --example 01-health-check -p vapora-backend
Prerequisites:
- Backend running: `cd crates/vapora-backend && cargo run`
- SurrealDB running: `docker run -d -p 8000:8000 surrealdb/surrealdb:latest`
Key concepts:
- Health endpoint status
- Dependency checklist
- Prometheus metrics endpoint
- Startup verification
Time: 5-10 minutes
Error Handling
File: crates/vapora-shared/examples/01-error-handling.rs
What it demonstrates:
- Custom error types
- Error propagation with `?`
- Display and Debug implementations
Run:
cargo run --example 01-error-handling -p vapora-shared
Key concepts:
- `Result<T>` pattern
- Error types (InvalidInput, NotFound, Unauthorized)
- Error chaining
- User-friendly messages
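For readers who want the pattern in code, here is a minimal sketch of a custom error type with a `Display` implementation and `?` propagation; the `VaporaError` variants below are illustrative, not the crate's actual enum:

```rust
use std::fmt;

// Illustrative error type; the real vapora-shared error enum may differ.
#[derive(Debug)]
enum VaporaError {
    InvalidInput(String),
    NotFound(String),
}

impl fmt::Display for VaporaError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Self::InvalidInput(msg) => write!(f, "invalid input: {msg}"),
            Self::NotFound(what) => write!(f, "not found: {what}"),
        }
    }
}

fn find_agent(name: &str) -> Result<u64, VaporaError> {
    if name.is_empty() {
        return Err(VaporaError::InvalidInput("empty name".into()));
    }
    Err(VaporaError::NotFound(name.into()))
}

// `?` propagates the error upward without manual matching.
fn run() -> Result<(), VaporaError> {
    let id = find_agent("alice")?;
    println!("found agent {id}");
    Ok(())
}

fn main() {
    if let Err(e) = run() {
        eprintln!("error: {e}"); // user-friendly Display output
    }
}
```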
Time: 5-10 minutes
Phase 2: Intermediate Examples (Integration)
Combine 2-3 systems to solve realistic problems.
Learning Profiles
File: crates/vapora-agents/examples/02-learning-profile.rs
What it demonstrates:
- Building expertise profiles from execution history
- Recency bias weighting (recent 7 days weighted 3× higher)
- Confidence scaling based on sample size
- Task type specialization
Run:
cargo run --example 02-learning-profile -p vapora-agents
Key metrics:
- Success rate: percentage of successful executions
- Confidence: increases with sample size (0-1.0)
- Recent trend: last 7 days weighted heavily
- Task type expertise: separate profiles per task type
Real scenario: Agent Alice has a 93.3% success rate on coding (28/30 executions over 30 days), with confidence 1.0 from ample data.
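A minimal sketch of the recency-bias idea, assuming a simple 3x weight on executions from the last 7 days (the crate's exact weighting may differ):

```rust
// Recency-biased success rate: last-7-day executions get 3x weight.
fn weighted_success_rate(daily: &[(u32, bool)]) -> f64 {
    // daily = (days_ago, success)
    let (mut num, mut den) = (0.0, 0.0);
    for &(days_ago, success) in daily {
        let w = if days_ago < 7 { 3.0 } else { 1.0 };
        num += w * if success { 1.0 } else { 0.0 };
        den += w;
    }
    num / den
}

fn main() {
    // Failures weeks ago, successes this week: the weighted rate (0.78)
    // leans toward the recent results, versus a raw rate of 0.60.
    let history = [(30, false), (20, false), (10, true), (3, true), (1, true)];
    println!("{:.2}", weighted_success_rate(&history));
}
```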
Time: 10-15 minutes
Agent Selection Scoring
File: crates/vapora-agents/examples/03-agent-selection.rs
What it demonstrates:
- Ranking agents for task assignment
- Scoring formula: `(1 - 0.3*load) + 0.5*expertise + 0.2*confidence`
- Load balancing prevents over-allocation
- Why confidence matters
Run:
cargo run --example 03-agent-selection -p vapora-agents
Scoring breakdown:
- Availability: `1 - (0.3 * current_load)` - lower load = higher score
- Expertise: `0.5 * success_rate` - proven capability
- Confidence: `0.2 * confidence` - trust the data
Real scenario: Three agents competing for coding task:
- Alice: 0.92 expertise, 30% load → score 0.71
- Bob: 0.78 expertise, 10% load → score 0.77 (selected despite lower expertise)
- Carol: 0.88 expertise, 50% load → score 0.59
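Here is a sketch that applies the documented formula directly; the walkthrough scores above come from the example's own normalization, so the absolute values differ, but the load-vs-expertise trade-off is the same (agent values below are invented):

```rust
// Sketch of the documented selection score:
//   (1 - 0.3*load) + 0.5*expertise + 0.2*confidence
fn selection_score(load: f64, expertise: f64, confidence: f64) -> f64 {
    (1.0 - 0.3 * load) + 0.5 * expertise + 0.2 * confidence
}

fn main() {
    // (name, current_load, expertise, confidence), illustrative values.
    let agents = [("alice", 0.50, 0.92, 1.0), ("bob", 0.10, 0.88, 1.0)];
    for (name, load, exp, conf) in agents {
        println!("{name}: {:.2}", selection_score(load, exp, conf));
    }
    // alice: 0.85 + 0.46 + 0.20 = 1.51
    // bob:   0.97 + 0.44 + 0.20 = 1.61 (lower load wins despite lower expertise)
}
```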
Time: 10-15 minutes
Budget Enforcement
File: crates/vapora-llm-router/examples/02-budget-enforcement.rs
What it demonstrates:
- Per-role budget limits (monthly/weekly)
- Three-tier enforcement: Normal → Caution → Exceeded
- Automatic fallback to cheaper providers
- Alert thresholds
Run:
cargo run --example 02-budget-enforcement -p vapora-llm-router
Budget tiers:
- 0-50%: Normal - use preferred provider (Claude)
- 50-80%: Caution - monitor spending closely
- 80-100%: Near threshold - use cheaper alternative (GPT-4)
- 100%+: Exceeded - use fallback only (Ollama)
Real scenario: Developer role with $300/month budget:
- Spend $145 (48% used) - in Normal tier
- All tasks use Claude (highest quality)
- If spend reaches $240+ (80%), automatically switch to cheaper providers
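A minimal sketch of this tier policy, with thresholds and provider names taken from the list above (the router's real API and types differ):

```rust
// Sketch of the three-tier budget policy described above.
#[derive(Debug)]
enum BudgetTier {
    Normal,        // 0-50%: preferred provider
    Caution,       // 50-80%: monitor spending
    NearThreshold, // 80-100%: cheaper alternative
    Exceeded,      // 100%+: fallback only
}

fn tier(spent: f64, budget: f64) -> BudgetTier {
    match spent / budget {
        r if r >= 1.0 => BudgetTier::Exceeded,
        r if r >= 0.8 => BudgetTier::NearThreshold,
        r if r >= 0.5 => BudgetTier::Caution,
        _ => BudgetTier::Normal,
    }
}

fn provider_for(t: &BudgetTier) -> &'static str {
    match t {
        BudgetTier::Normal | BudgetTier::Caution => "claude",
        BudgetTier::NearThreshold => "gpt-4",
        BudgetTier::Exceeded => "ollama",
    }
}

fn main() {
    // Developer role, $300/month budget, $145 spent (48%) => Normal, Claude.
    let t = tier(145.0, 300.0);
    println!("{t:?} -> {}", provider_for(&t));
}
```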
Time: 10-15 minutes
Cost Tracking
File: crates/vapora-llm-router/examples/03-cost-tracking.rs
What it demonstrates:
- Token usage recording per provider
- Cost calculation by provider and task type
- Report generation
- Cost per 1M tokens analysis
Run:
cargo run --example 03-cost-tracking -p vapora-llm-router
Report includes:
- Total cost (cents or dollars)
- Cost by provider (Claude, GPT-4, Gemini, Ollama)
- Cost by task type (coding, testing, documentation)
- Average cost per task
- Cost efficiency (tokens per dollar)
Real scenario: 4 tasks processed:
- Claude (2 tasks): 3,500 tokens → $0.067
- GPT-4 (1 task): 4,500 tokens → $0.130
- Gemini (1 task): 4,500 tokens → $0.053
- Total: $0.250
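A simplified cost calculation, assuming a flat per-1M-token rate per provider; real providers price input and output tokens differently, so this will not reproduce the report's exact figures:

```rust
// Cost from token counts and a flat per-1M-token rate (a simplification;
// the rates below are the per-1M figures quoted elsewhere in this guide).
fn cost_usd(tokens: u64, rate_per_1m_usd: f64) -> f64 {
    tokens as f64 / 1_000_000.0 * rate_per_1m_usd
}

fn main() {
    let usage = [("claude", 3_500u64, 15.0), ("gpt-4", 4_500, 10.0)];
    let mut total = 0.0;
    for (provider, tokens, rate) in usage {
        let c = cost_usd(tokens, rate);
        total += c;
        println!("{provider}: {tokens} tokens -> ${c:.3}");
    }
    println!("total: ${total:.3}");
}
```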
Time: 10-15 minutes
Task Assignment
File: crates/vapora-swarm/examples/02-task-assignment.rs
What it demonstrates:
- Submitting tasks to swarm
- Load-balanced agent selection
- Capability filtering
- Swarm statistics
Run:
cargo run --example 02-task-assignment -p vapora-swarm
Assignment algorithm:
- Filter agents by required capabilities
- Score each agent: `success_rate / (1 + current_load)`
- Assign to highest-scoring agent
- Update swarm statistics
Real scenario: Coding task submitted to swarm with 3 agents:
- agent-1: coding ✓, load 20%, success 92% → score 0.77
- agent-2: coding ✓, load 10%, success 85% → score 0.77 (selected, lower load)
- agent-3: code_review only ✗ (filtered out)
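A self-contained sketch of this filter-then-score assignment, using the scenario's numbers; the `Agent` struct is invented for illustration:

```rust
// Filter by required capability, then pick the best
// success_rate / (1 + current_load) score.
struct Agent {
    id: &'static str,
    capabilities: Vec<&'static str>,
    load: f64,
    success_rate: f64,
}

fn assign<'a>(agents: &'a [Agent], required: &str) -> Option<&'a Agent> {
    agents
        .iter()
        .filter(|a| a.capabilities.iter().any(|c| *c == required))
        .max_by(|a, b| {
            let score = |x: &Agent| x.success_rate / (1.0 + x.load);
            score(a).total_cmp(&score(b))
        })
}

fn main() {
    let agents = [
        Agent { id: "agent-1", capabilities: vec!["coding"], load: 0.20, success_rate: 0.92 },
        Agent { id: "agent-2", capabilities: vec!["coding"], load: 0.10, success_rate: 0.85 },
        Agent { id: "agent-3", capabilities: vec!["code_review"], load: 0.00, success_rate: 0.99 },
    ];
    // agent-3 is filtered out; agent-2 wins on score (0.773 vs 0.767).
    println!("{}", assign(&agents, "coding").unwrap().id);
}
```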
Time: 10-15 minutes
Learning Curves
File: crates/vapora-knowledge-graph/examples/02-learning-curves.rs
What it demonstrates:
- Computing learning curves from daily data
- Success rate trends over 30 days
- Recency bias impact
- Performance trend analysis
Run:
cargo run --example 02-learning-curves -p vapora-knowledge-graph
Metrics tracked:
- Daily success rate (0-100%)
- Average execution time (milliseconds)
- Recent 7-day success rate
- Recent 14-day success rate
- Weighted score with recency bias
Trend indicators:
- ✓ IMPROVING: Agent learning over time
- → STABLE: Consistent performance
- ✗ DECLINING: Possible issues or degradation
Real scenario: Agent bob over 30 days:
- Days 1-15: 70% success rate, 300ms/execution
- Days 16-30: 70% success rate, 300ms/execution
- Weighted score: 72% (no improvement detected)
- Trend: STABLE (consistent but not improving)
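A minimal sketch of trend classification, assuming a simple delta threshold between older and recent success rates (the actual rule in the example may differ):

```rust
// Trend label from older-window vs recent-window success rates.
// The 2-point threshold is an illustrative choice.
fn trend(older_rate: f64, recent_rate: f64) -> &'static str {
    let delta = recent_rate - older_rate;
    if delta > 0.02 {
        "IMPROVING"
    } else if delta < -0.02 {
        "DECLINING"
    } else {
        "STABLE"
    }
}

fn main() {
    // bob: 70% in days 1-15 and 70% in days 16-30 => STABLE.
    println!("{}", trend(0.70, 0.70));
}
```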
Time: 10-15 minutes
Similarity Search
File: crates/vapora-knowledge-graph/examples/03-similarity-search.rs
What it demonstrates:
- Semantic similarity matching
- Jaccard similarity scoring
- Recommendation generation
- Pattern recognition
Run:
cargo run --example 03-similarity-search -p vapora-knowledge-graph
Similarity calculation:
- Input: New task description ("Implement API key authentication")
- Compare: Against past execution descriptions
- Score: Jaccard similarity (intersection / union of keywords)
- Rank: Sort by similarity score
Real scenario: New task: "Implement API key authentication for third-party services"
Keywords: ["authentication", "API", "third-party"]
Matches against past tasks:
- "Implement user authentication with JWT" (87% similarity)
- "Implement token refresh mechanism" (81% similarity)
- "Add API rate limiting" (79% similarity)
→ Recommend: "Use OAuth2 + API keys with rotation strategy"
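A minimal Jaccard implementation over whitespace-tokenized, lowercased keywords; real pipelines normalize more carefully, and the similarity percentages above come from the example's own tokenization:

```rust
use std::collections::HashSet;

// Jaccard similarity over keyword sets: |intersection| / |union|.
fn jaccard(a: &str, b: &str) -> f64 {
    let set = |s: &str| -> HashSet<String> {
        s.split_whitespace().map(|w| w.to_lowercase()).collect()
    };
    let (sa, sb) = (set(a), set(b));
    let inter = sa.intersection(&sb).count() as f64;
    let union = sa.union(&sb).count() as f64;
    if union == 0.0 { 0.0 } else { inter / union }
}

fn main() {
    let new_task = "implement api key authentication";
    let past = [
        "implement user authentication with jwt",
        "add api rate limiting",
    ];
    // Rank past tasks by similarity to the new one.
    for p in past {
        println!("{:.2}  {p}", jaccard(new_task, p));
    }
}
```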
Time: 10-15 minutes
Phase 3: Advanced Examples (Full-Stack)
End-to-end integration of all systems.
Agent with LLM Routing
File: examples/full-stack/01-agent-with-routing.rs
What it demonstrates:
- Agent executes task with intelligent provider selection
- Budget checking before execution
- Cost tracking during execution
- Provider fallback strategy
Run:
rustc examples/full-stack/01-agent-with-routing.rs -o /tmp/example && /tmp/example
Workflow:
- Initialize agent (developer-001)
- Set task (implement authentication, 1,500 input + 800 output tokens)
- Check budget ($250 remaining)
- Select provider (Claude for quality)
- Execute task
- Track costs ($0.069 total)
- Update learning profile
Time: 15-20 minutes
Swarm with Learning Profiles
File: examples/full-stack/02-swarm-with-learning.rs
What it demonstrates:
- Swarm coordinates agents with learning profiles
- Task assignment based on expertise
- Load balancing with learned preferences
- Profile updates after execution
Run:
rustc examples/full-stack/02-swarm-with-learning.rs -o /tmp/example && /tmp/example
Workflow:
- Register agents with learning profiles
- alice: 92% coding, 60% testing, 30% load
- bob: 78% coding, 85% testing, 10% load
- carol: 90% documentation, 75% testing, 20% load
- Submit tasks (3 different types)
- Swarm assigns based on expertise + load
- Execute tasks
- Update learning profiles with results
- Verify assignments improved for next round
Time: 15-20 minutes
Phase 5: Real-World Examples
Production scenarios with business value analysis.
Code Review Pipeline
File: examples/real-world/01-code-review-workflow.rs
What it demonstrates:
- Multi-agent code review workflow
- Cost optimization with tiered providers
- Quality vs cost trade-off
- Business metrics (ROI, time savings)
Run:
rustc examples/real-world/01-code-review-workflow.rs -o /tmp/example && /tmp/example
Three-stage pipeline:
Stage 1 (Ollama - FREE):
- Static analysis, linting
- Dead code detection
- Security rule violations
- Cost: $0.00/PR, Time: 5s
Stage 2 (GPT-4 - $10/1M):
- Logic verification
- Test coverage analysis
- Performance implications
- Cost: $0.08/PR, Time: 15s
Stage 3 (Claude - $15/1M, 10% of PRs):
- Architecture validation
- Design pattern verification
- Triggered for risky changes
- Cost: $0.20/PR, Time: 30s
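A sketch of the escalation logic, with an invented risk heuristic standing in for whatever the example uses to flag the riskiest ~10% of PRs:

```rust
// Tiered review routing: every PR gets the free Stage 1 and the Stage 2
// logic review; only risky PRs escalate to the expensive Stage 3.
struct Pr {
    files_changed: u32,
    touches_auth: bool,
}

fn stages_for(pr: &Pr) -> Vec<&'static str> {
    let mut stages = vec!["ollama:static-analysis", "gpt-4:logic-review"];
    // Illustrative risk heuristic for the ~10% escalation rate.
    if pr.touches_auth || pr.files_changed > 20 {
        stages.push("claude:architecture-review");
    }
    stages
}

fn main() {
    let routine = Pr { files_changed: 3, touches_auth: false };
    let risky = Pr { files_changed: 25, touches_auth: true };
    println!("routine: {:?}", stages_for(&routine));
    println!("risky:   {:?}", stages_for(&risky));
}
```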
Business impact:
- Volume: 50 PRs/day
- Cost: $0.60/day ($12/month)
- vs Manual: 40+ hours/month ($500+)
- Savings: $488/month
- Quality: 99%+ accuracy
Time: 15-20 minutes
Documentation Generation
File: examples/real-world/02-documentation-generation.rs
What it demonstrates:
- Automated doc generation from code
- Multi-stage pipeline (analyze → write → check)
- Cost optimization
- Keeping docs in sync with code
Run:
rustc examples/real-world/02-documentation-generation.rs -o /tmp/example && /tmp/example
Pipeline:
Phase 1 (Ollama - FREE):
- Parse source files
- Extract API endpoints, types
- Identify breaking changes
- Cost: $0.00, Time: 2min for 10k LOC
Phase 2 (Claude - $15/1M):
- Generate descriptions
- Create examples
- Document parameters
- Cost: $0.40/endpoint, Time: 30s
Phase 3 (GPT-4 - $10/1M):
- Verify accuracy vs code
- Check completeness
- Ensure clarity
- Cost: $0.15/doc, Time: 15s
Business impact:
- Docs in sync instantly (vs 2-week lag)
- Per-endpoint cost: $0.55
- Monthly cost: ~$11 (vs $1000+ manual)
- Savings: $989/month
- Quality: 99%+ accuracy
Time: 15-20 minutes
Issue Triage
File: examples/real-world/03-issue-triage.rs
What it demonstrates:
- Intelligent issue classification
- Two-stage escalation pipeline
- Cost optimization
- Consistent routing rules
Run:
rustc examples/real-world/03-issue-triage.rs -o /tmp/example && /tmp/example
Two-stage pipeline:
Stage 1 (Ollama - FREE, 85% accuracy):
- Classify issue type (bug, feature, docs, support)
- Extract component, priority
- Route to team
- Cost: $0.00/issue, Time: 2s
Stage 2 (Claude - $15/1M, 15% of issues):
- Detailed analysis for unclear issues
- Extract root cause
- Create investigation
- Cost: $0.08/issue, Time: 10s
Business impact:
- Volume: 200 issues/month
- Stage 1: 170 issues × $0.00 = $0.00
- Stage 2: 30 issues × $0.08 = $2.40
- Manual triage: 20 hours × $50 = $1,000
- Savings: $997.60/month
- Speed: Seconds vs hours
Time: 15-20 minutes
Learning Paths
Path 1: Quick Overview (30 minutes)
- Run `01-simple-agent` (agent basics)
- Run `01-provider-selection` (LLM routing)
- Run `01-error-handling` (error patterns)
Takeaway: Understand basic components
Path 2: System Integration (90 minutes)
- Run all Phase 1 examples (30 min)
- Run `02-learning-profile` + `03-agent-selection` (20 min)
- Run `02-budget-enforcement` + `03-cost-tracking` (20 min)
- Run `02-task-assignment` + `02-learning-curves` (20 min)
Takeaway: Understand component interactions
Path 3: Production Ready (2-3 hours)
- Complete Path 2 (90 min)
- Run Phase 5 real-world examples (45 min)
- Study `docs/tutorials/` (30-45 min)
Takeaway: Ready to implement VAPORA in production
Common Tasks
I want to understand agent learning
Read: docs/tutorials/04-learning-profiles.md
Run examples (in order):
- `02-learning-profile` - See expertise calculation
- `03-agent-selection` - See scoring in action
- `02-learning-curves` - See trends over time
Time: 30-40 minutes
I want to understand cost control
Read: docs/tutorials/05-budget-management.md
Run examples (in order):
- `01-provider-selection` - See provider pricing
- `02-budget-enforcement` - See budget tiers
- `03-cost-tracking` - See detailed reports
Time: 25-35 minutes
I want to understand multi-agent workflows
Read: docs/tutorials/06-swarm-coordination.md
Run examples (in order):
- `01-agent-registration` - See swarm setup
- `02-task-assignment` - See task routing
- `02-swarm-with-learning` - See full workflow
Time: 30-40 minutes
I want to see business value
Run examples (real-world):
- `01-code-review-workflow` - $488/month savings
- `02-documentation-generation` - $989/month savings
- `03-issue-triage` - $997/month savings
Takeaway: VAPORA saves $2,474/month for typical usage
Time: 40-50 minutes
Running Examples with Parameters
Some examples support command-line arguments:
# Budget enforcement with custom budget
cargo run --example 02-budget-enforcement -p vapora-llm-router -- \
--monthly-budget 50000 --verbose
# Learning profile with custom sample size
cargo run --example 02-learning-profile -p vapora-agents -- \
--sample-size 100
Check example documentation for available options:
# View example header
head -20 crates/vapora-agents/examples/02-learning-profile.rs
Troubleshooting
"example not found"
Ensure you're running from workspace root:
cd /path/to/vapora
cargo run --example 01-simple-agent -p vapora-agents
"Cannot find module"
Ensure workspace is synced:
cargo update
cargo build --examples --workspace
Example fails at runtime
Check prerequisites:
Backend examples require:
# Terminal 1: Start SurrealDB
docker run -d -p 8000:8000 surrealdb/surrealdb:latest
# Terminal 2: Start backend
cd crates/vapora-backend && cargo run
# Terminal 3: Run example
cargo run --example 01-health-check -p vapora-backend
Want verbose output
Set logging:
RUST_LOG=debug cargo run --example 02-learning-profile -p vapora-agents
Next Steps
After exploring examples:
- Read tutorials: `docs/tutorials/README.md` - step-by-step guides
- Study code snippets: `docs/examples/` - quick reference
- Explore source: `crates/*/src/` - understand implementations
- Run tests: `cargo test --workspace` - verify functionality
- Build projects: Create your first VAPORA integration
Quick Reference
Build all examples
cargo build --examples --workspace
Run specific example
cargo run --example <name> -p <crate>
Clean build artifacts
cargo clean
cargo build --examples
List examples in crate
ls -la crates/<crate>/examples/
View example documentation
head -30 crates/<crate>/examples/<name>.rs
Run with output
cargo run --example <name> -p <crate> 2>&1 | tee output.log
Resources
- Main docs: see `docs/` directory
- Tutorial path: `docs/tutorials/README.md`
- Code snippets: `docs/examples/`
- API documentation: `cargo doc --open`
- Project examples: `examples/` directory
Total examples: 23 Rust + 4 Marimo notebooks
Estimated learning time: 2-3 hours for complete understanding
Next: Start with Path 1 (Quick Overview) →