# VAPORA Examples Guide

Comprehensive guide to understanding and using VAPORA's example collection.

## Overview

VAPORA includes 26+ runnable examples demonstrating all major features:

- **6 Basic examples** - Hello world for each component
- **9 Intermediate examples** - Multi-system integration patterns
- **2 Advanced examples** - End-to-end full-stack workflows
- **3 Real-world examples** - Production scenarios with ROI analysis
- **4 Interactive notebooks** - Marimo-based exploration (requires Python)

Total time to explore all examples: **2-3 hours**
## Quick Start

### Run Your First Example

```bash
# Navigate to workspace root
cd /path/to/vapora

# Run basic agent example
cargo run --example 01-simple-agent -p vapora-agents
```

Expected output:
```
=== Simple Agent Registration Example ===

Created agent registry with capacity 10
Defined agent: "Developer A" (role: developer)
Capabilities: ["coding", "testing"]

Agent registered successfully
Agent ID: <uuid>
```

### List All Available Examples

```bash
# Per-crate examples
cargo build --examples -p vapora-agents

# All examples in workspace
cargo build --examples --workspace
```
## Examples by Category

### Phase 1: Basic Examples (Foundation)

Start here to understand individual components.
#### Agent Registry
**File**: `crates/vapora-agents/examples/01-simple-agent.rs`

**What it demonstrates**:
- Creating an agent registry
- Registering agents with metadata
- Querying registered agents
- Agent status management

**Run**:
```bash
cargo run --example 01-simple-agent -p vapora-agents
```

**Key concepts**:
- `AgentRegistry` - thread-safe registry with capacity limits
- `AgentMetadata` - agent name, role, capabilities, LLM provider
- `AgentStatus` - Active, Busy, Offline

**Time**: 5-10 minutes

---
#### LLM Provider Selection
**File**: `crates/vapora-llm-router/examples/01-provider-selection.rs`

**What it demonstrates**:
- Available LLM providers (Claude, GPT-4, Gemini, Ollama)
- Provider pricing and use cases
- Routing rules by task type
- Cost comparison

**Run**:
```bash
cargo run --example 01-provider-selection -p vapora-llm-router
```

**Key concepts**:
- Provider routing rules
- Cost per 1M tokens
- Fallback strategy
- Task type matching

**Time**: 5-10 minutes

---
#### Swarm Coordination
**File**: `crates/vapora-swarm/examples/01-agent-registration.rs`

**What it demonstrates**:
- Swarm coordinator creation
- Agent registration with capabilities
- Swarm statistics
- Load balancing basics

**Run**:
```bash
cargo run --example 01-agent-registration -p vapora-swarm
```
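This example's scoring rule, `success_rate / (1 + current_load)`, can be sketched as a pure function (the struct below is an illustrative stand-in, not vapora-swarm's actual type):

```rust
/// Load-balanced assignment score: higher success rate and lower load win.
/// Illustrative sketch of the `success_rate / (1 + current_load)` rule;
/// `AgentStats` is a stand-in, not vapora-swarm's real type.
struct AgentStats {
    success_rate: f64, // 0.0..=1.0
    current_load: f64, // fraction of capacity in use, 0.0..=1.0
}

fn assignment_score(a: &AgentStats) -> f64 {
    a.success_rate / (1.0 + a.current_load)
}

fn main() {
    let busy = AgentStats { success_rate: 0.92, current_load: 0.20 };
    let idle = AgentStats { success_rate: 0.85, current_load: 0.10 };
    // A lightly loaded agent can outrank a busier one with a higher success rate.
    println!("busy: {:.3}", assignment_score(&busy)); // 0.767
    println!("idle: {:.3}", assignment_score(&idle)); // 0.773
}
```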
**Key concepts**:
- `SwarmCoordinator` - manages agent pool
- Agent capabilities filtering
- Load distribution calculation
- `success_rate / (1 + current_load)` scoring

**Time**: 5-10 minutes

---
#### Knowledge Graph
**File**: `crates/vapora-knowledge-graph/examples/01-execution-tracking.rs`

**What it demonstrates**:
- Recording execution history
- Querying executions by agent/task type
- Cost analysis per provider
- Success rate calculations

**Run**:
```bash
cargo run --example 01-execution-tracking -p vapora-knowledge-graph
```

**Key concepts**:
- `ExecutionRecord` - timestamp, duration, success, cost
- Temporal queries (last 7/14/30 days)
- Provider cost breakdown
- Success rate trends

**Time**: 5-10 minutes

---
#### Backend Health Check
**File**: `crates/vapora-backend/examples/01-health-check.rs`

**What it demonstrates**:
- Backend service health status
- Dependency verification
- Monitoring endpoints
- Troubleshooting guide

**Run**:
```bash
cargo run --example 01-health-check -p vapora-backend
```

**Prerequisites**:
- Backend running: `cd crates/vapora-backend && cargo run`
- SurrealDB running: `docker run -d -p 8000:8000 surrealdb/surrealdb:latest start`

**Key concepts**:
- Health endpoint status
- Dependency checklist
- Prometheus metrics endpoint
- Startup verification

**Time**: 5-10 minutes

---
#### Error Handling
**File**: `crates/vapora-shared/examples/01-error-handling.rs`

**What it demonstrates**:
- Custom error types
- Error propagation with `?`
- Error context
- Display and Debug implementations

**Run**:
```bash
cargo run --example 01-error-handling -p vapora-shared
```
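The pattern this example teaches can be sketched independently of the crate. The variants below mirror the error types listed under key concepts, but the actual `vapora-shared` enum may differ:

```rust
use std::fmt;

// Illustrative error type; the variants mirror the ones this example
// covers, but vapora-shared's real enum may differ.
#[derive(Debug)]
enum VaporaError {
    InvalidInput(String),
    NotFound(String),
    Unauthorized,
}

impl fmt::Display for VaporaError {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match self {
            VaporaError::InvalidInput(msg) => write!(f, "invalid input: {}", msg),
            VaporaError::NotFound(what) => write!(f, "not found: {}", what),
            VaporaError::Unauthorized => write!(f, "unauthorized"),
        }
    }
}

impl std::error::Error for VaporaError {}

fn find_agent(id: &str) -> Result<&'static str, VaporaError> {
    match id {
        "agent-1" => Ok("Developer A"),
        "" => Err(VaporaError::InvalidInput("empty id".to_string())),
        other => Err(VaporaError::NotFound(format!("agent {}", other))),
    }
}

fn main() -> Result<(), VaporaError> {
    // `?` propagates any error up to main, which reports it via Debug.
    let name = find_agent("agent-1")?;
    println!("found: {}", name);
    Ok(())
}
```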
**Key concepts**:
- `Result<T>` pattern
- Error types (InvalidInput, NotFound, Unauthorized)
- Error chaining
- User-friendly messages

**Time**: 5-10 minutes

---
### Phase 2: Intermediate Examples (Integration)

Combine 2-3 systems to solve realistic problems.
#### Learning Profiles
**File**: `crates/vapora-agents/examples/02-learning-profile.rs`

**What it demonstrates**:
- Building expertise profiles from execution history
- Recency bias weighting (recent 7 days weighted 3× higher)
- Confidence scaling based on sample size
- Task type specialization

**Run**:
```bash
cargo run --example 02-learning-profile -p vapora-agents
```
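The recency bias this example demonstrates reduces to a weighted average in which recent executions carry a 3× weight (a sketch of the idea only; the crate's exact weighting scheme may differ):

```rust
/// Recency-weighted success rate: executions from the last 7 days count
/// 3x as much as older ones. A sketch of the idea only; the actual
/// weighting in vapora-agents may differ in detail.
fn weighted_success_rate(history: &[(u32, bool)]) -> f64 {
    // history entries: (days_ago, succeeded)
    let mut num = 0.0;
    let mut den = 0.0;
    for &(days_ago, ok) in history {
        let w = if days_ago < 7 { 3.0 } else { 1.0 };
        den += w;
        if ok {
            num += w;
        }
    }
    if den == 0.0 { 0.0 } else { num / den }
}

fn main() {
    // Two old failures, three recent successes: recency pulls the rate up.
    let history = [(20, false), (15, false), (3, true), (2, true), (1, true)];
    println!("weighted: {:.2}", weighted_success_rate(&history)); // 0.82
    // The unweighted rate would be 3/5 = 0.60; weighted it is 9/11 ≈ 0.82.
}
```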
**Key metrics**:
- Success rate: percentage of successful executions
- Confidence: increases with sample size (0-1.0)
- Recent trend: last 7 days weighted heavily
- Task type expertise: separate profiles per task type

**Real scenario**:
Agent Alice has a 93.3% success rate on coding (28/30 executions over 30 days), with confidence 1.0 from ample data.

**Time**: 10-15 minutes

---
#### Agent Selection Scoring
**File**: `crates/vapora-agents/examples/03-agent-selection.rs`

**What it demonstrates**:
- Ranking agents for task assignment
- Scoring formula: `(1 - 0.3*load) + 0.5*expertise + 0.2*confidence`
- Load balancing prevents over-allocation
- Why confidence matters

**Run**:
```bash
cargo run --example 03-agent-selection -p vapora-agents
```
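The formula above is easy to evaluate directly (a sketch with hypothetical agents; the crate may normalize or weight the terms differently than this literal reading, so treat the numbers as illustrative):

```rust
/// Composite selection score using the formula from this guide:
/// (1 - 0.3 * load) + 0.5 * expertise + 0.2 * confidence.
/// Treat the weights as illustrative, not authoritative.
fn selection_score(load: f64, expertise: f64, confidence: f64) -> f64 {
    (1.0 - 0.3 * load) + 0.5 * expertise + 0.2 * confidence
}

fn main() {
    // Hypothetical agents: (name, current_load, success_rate, confidence).
    let agents = [
        ("alice", 0.50, 0.92, 0.9),
        ("bob", 0.10, 0.78, 0.9),
    ];
    for &(name, load, expertise, confidence) in agents.iter() {
        println!("{}: {:.2}", name, selection_score(load, expertise, confidence));
    }
    // A lightly loaded agent can win even against higher expertise.
}
```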
**Scoring breakdown**:
- Availability: `1 - (0.3 * current_load)` - lower load = higher score
- Expertise: `0.5 * success_rate` - proven capability
- Confidence: `0.2 * confidence` - trust the data

**Real scenario**:
Three agents competing for a coding task:
- Alice: 0.92 expertise, 30% load → score 0.71
- Bob: 0.78 expertise, 10% load → score 0.77 (selected despite lower expertise)
- Carol: 0.88 expertise, 50% load → score 0.59

**Time**: 10-15 minutes

---
#### Budget Enforcement
**File**: `crates/vapora-llm-router/examples/02-budget-enforcement.rs`

**What it demonstrates**:
- Per-role budget limits (monthly/weekly)
- Three-tier enforcement: Normal → Caution → Exceeded
- Automatic fallback to cheaper providers
- Alert thresholds

**Run**:
```bash
cargo run --example 02-budget-enforcement -p vapora-llm-router
```
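The tier check this example demonstrates reduces to a threshold match on the percent of budget spent (a sketch — the thresholds follow the ranges documented in this guide, but the names and exact cut-off behavior are assumptions):

```rust
/// Budget tier from percent of budget spent. Thresholds mirror the
/// ranges in this guide; the router's real cut-offs may differ.
#[derive(Debug, PartialEq)]
enum BudgetTier {
    Normal,        // 0-50%: use preferred provider
    Caution,       // 50-80%: monitor spending closely
    NearThreshold, // 80-100%: switch to a cheaper alternative
    Exceeded,      // 100%+: fallback provider only
}

fn tier(spent_cents: u64, budget_cents: u64) -> BudgetTier {
    let pct = spent_cents as f64 / budget_cents as f64 * 100.0;
    if pct < 50.0 {
        BudgetTier::Normal
    } else if pct < 80.0 {
        BudgetTier::Caution
    } else if pct < 100.0 {
        BudgetTier::NearThreshold
    } else {
        BudgetTier::Exceeded
    }
}

fn main() {
    // $145 spent of a $300 monthly budget: 48% used, still Normal.
    println!("{:?}", tier(14_500, 30_000)); // Normal
}
```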
**Budget tiers**:
- **0-50%**: Normal - use preferred provider (Claude)
- **50-80%**: Caution - monitor spending closely
- **80-100%**: Near threshold - use cheaper alternative (GPT-4)
- **100%+**: Exceeded - use fallback only (Ollama)

**Real scenario**:
Developer role with $300/month budget:
- Spend $145 (48% used) - in Normal tier
- All tasks use Claude (highest quality)
- If spending reaches $240+ (80%), automatically switch to cheaper providers

**Time**: 10-15 minutes

---
#### Cost Tracking
**File**: `crates/vapora-llm-router/examples/03-cost-tracking.rs`

**What it demonstrates**:
- Token usage recording per provider
- Cost calculation by provider and task type
- Report generation
- Cost per 1M tokens analysis

**Run**:
```bash
cargo run --example 03-cost-tracking -p vapora-llm-router
```
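Per-call cost is just tokens scaled by the provider's per-1M-token rate, applied separately to input and output (a sketch with made-up rates — real provider pricing varies and changes):

```rust
/// Cost in dollars for one call, from token counts and per-million-token
/// rates. The rates passed in below are illustrative, not real quotes.
fn call_cost(input_tokens: u64, output_tokens: u64,
             input_rate_per_m: f64, output_rate_per_m: f64) -> f64 {
    input_tokens as f64 / 1_000_000.0 * input_rate_per_m
        + output_tokens as f64 / 1_000_000.0 * output_rate_per_m
}

fn main() {
    // Hypothetical rates: $3 per 1M input tokens, $15 per 1M output tokens.
    let cost = call_cost(1_500, 800, 3.0, 15.0);
    println!("cost: ${:.4}", cost); // $0.0165
}
```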
**Report includes**:
- Total cost (cents or dollars)
- Cost by provider (Claude, GPT-4, Gemini, Ollama)
- Cost by task type (coding, testing, documentation)
- Average cost per task
- Cost efficiency (tokens per dollar)

**Real scenario**:
4 tasks processed:
- Claude (2 tasks): 3,500 tokens → $0.067
- GPT-4 (1 task): 4,500 tokens → $0.130
- Gemini (1 task): 4,500 tokens → $0.053
- Total: $0.250

**Time**: 10-15 minutes

---
#### Task Assignment
**File**: `crates/vapora-swarm/examples/02-task-assignment.rs`

**What it demonstrates**:
- Submitting tasks to swarm
- Load-balanced agent selection
- Capability filtering
- Swarm statistics

**Run**:
```bash
cargo run --example 02-task-assignment -p vapora-swarm
```
**Assignment algorithm**:
1. Filter agents by required capabilities
2. Score each agent: `success_rate / (1 + current_load)`
3. Assign to highest-scoring agent
4. Update swarm statistics

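
The filter-score-assign steps fit in one function (a standalone sketch reusing this section's scenario numbers; the real `SwarmCoordinator` types and tie-breaking differ):

```rust
/// Filter by capability, score by success_rate / (1 + current_load),
/// take the best. A sketch only; not vapora-swarm's actual types.
struct Agent<'a> {
    name: &'a str,
    capabilities: &'a [&'a str],
    current_load: f64,
    success_rate: f64,
}

fn assign<'a>(agents: &'a [Agent<'a>], required: &str) -> Option<&'a str> {
    agents
        .iter()
        .filter(|a| a.capabilities.iter().any(|c| *c == required))
        .max_by(|a, b| {
            let sa = a.success_rate / (1.0 + a.current_load);
            let sb = b.success_rate / (1.0 + b.current_load);
            sa.partial_cmp(&sb).unwrap()
        })
        .map(|a| a.name)
}

fn main() {
    // The scenario from this section: agent-3 lacks the capability, and
    // agent-2's lighter load beats agent-1's higher success rate.
    let agents = [
        Agent { name: "agent-1", capabilities: &["coding"], current_load: 0.20, success_rate: 0.92 },
        Agent { name: "agent-2", capabilities: &["coding"], current_load: 0.10, success_rate: 0.85 },
        Agent { name: "agent-3", capabilities: &["code_review"], current_load: 0.05, success_rate: 0.99 },
    ];
    println!("assigned: {:?}", assign(&agents, "coding")); // Some("agent-2")
}
```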
**Real scenario**:
A coding task submitted to a swarm with 3 agents:
- agent-1: coding ✓, load 20%, success 92% → score 0.77
- agent-2: coding ✓, load 10%, success 85% → score 0.77 (selected, lower load)
- agent-3: code_review only ✗ (filtered out)

**Time**: 10-15 minutes

---
#### Learning Curves
**File**: `crates/vapora-knowledge-graph/examples/02-learning-curves.rs`

**What it demonstrates**:
- Computing learning curves from daily data
- Success rate trends over 30 days
- Recency bias impact
- Performance trend analysis

**Run**:
```bash
cargo run --example 02-learning-curves -p vapora-knowledge-graph
```
**Metrics tracked**:
- Daily success rate (0-100%)
- Average execution time (milliseconds)
- Recent 7-day success rate
- Recent 14-day success rate
- Weighted score with recency bias

**Trend indicators**:
- ✓ IMPROVING: Agent learning over time
- → STABLE: Consistent performance
- ✗ DECLINING: Possible issues or degradation

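
One minimal way to derive those indicators is to compare the recent-window rate with the longer-run rate (a sketch; the ±5-point band is an assumed threshold, not the crate's documented one):

```rust
/// Classify a learning trend by comparing recent vs overall success rate.
/// The 0.05 band is an assumption for illustration.
fn trend(recent_rate: f64, overall_rate: f64) -> &'static str {
    let delta = recent_rate - overall_rate;
    if delta > 0.05 {
        "IMPROVING"
    } else if delta < -0.05 {
        "DECLINING"
    } else {
        "STABLE"
    }
}

fn main() {
    println!("{}", trend(0.82, 0.70)); // IMPROVING: recent window well above baseline
    println!("{}", trend(0.70, 0.70)); // STABLE: flat
    println!("{}", trend(0.55, 0.70)); // DECLINING: degrading
}
```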
**Real scenario**:
Agent bob over 30 days:
- Days 1-15: 70% success rate, 300ms/execution
- Days 16-30: 70% success rate, 300ms/execution
- Weighted score: 72% (no improvement detected)
- Trend: STABLE (consistent but not improving)

**Time**: 10-15 minutes

---
#### Similarity Search
**File**: `crates/vapora-knowledge-graph/examples/03-similarity-search.rs`

**What it demonstrates**:
- Semantic similarity matching
- Jaccard similarity scoring
- Recommendation generation
- Pattern recognition

**Run**:
```bash
cargo run --example 03-similarity-search -p vapora-knowledge-graph
```
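The intersection-over-union score this example uses is a few lines over hash sets (a standalone sketch; keyword extraction and normalization are not shown):

```rust
use std::collections::HashSet;

/// Jaccard similarity between two keyword sets:
/// |intersection| / |union|, ranging over 0.0..=1.0.
fn jaccard(a: &[&str], b: &[&str]) -> f64 {
    let sa: HashSet<&str> = a.iter().cloned().collect();
    let sb: HashSet<&str> = b.iter().cloned().collect();
    let inter = sa.intersection(&sb).count() as f64;
    let union = sa.union(&sb).count() as f64;
    if union == 0.0 { 0.0 } else { inter / union }
}

fn main() {
    // Toy keywords; real extraction would lowercase and stem terms.
    let new_task = ["authentication", "api", "third-party"];
    let past_task = ["authentication", "jwt", "api"];
    // Intersection {authentication, api} = 2, union has 4 terms → 0.50.
    println!("similarity: {:.2}", jaccard(&new_task, &past_task)); // 0.50
}
```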
**Similarity calculation**:
- Input: New task description ("Implement API key authentication")
- Compare: Against past execution descriptions
- Score: Jaccard similarity (intersection / union of keywords)
- Rank: Sort by similarity score

**Real scenario**:
New task: "Implement API key authentication for third-party services"
Keywords: ["authentication", "API", "third-party"]

Matches against past tasks:
1. "Implement user authentication with JWT" (87% similarity)
2. "Implement token refresh mechanism" (81% similarity)
3. "Add API rate limiting" (79% similarity)

→ Recommend: "Use OAuth2 + API keys with rotation strategy"

**Time**: 10-15 minutes

---
### Phase 3: Advanced Examples (Full-Stack)

End-to-end integration of all systems.
#### Agent with LLM Routing
**File**: `examples/full-stack/01-agent-with-routing.rs`

**What it demonstrates**:
- Agent executes task with intelligent provider selection
- Budget checking before execution
- Cost tracking during execution
- Provider fallback strategy

**Run**:
```bash
rustc examples/full-stack/01-agent-with-routing.rs -o /tmp/example && /tmp/example
```

**Workflow**:
1. Initialize agent (developer-001)
2. Set task (implement authentication, 1,500 input + 800 output tokens)
3. Check budget ($250 remaining)
4. Select provider (Claude for quality)
5. Execute task
6. Track costs ($0.069 total)
7. Update learning profile

**Time**: 15-20 minutes

---
#### Swarm with Learning Profiles
**File**: `examples/full-stack/02-swarm-with-learning.rs`

**What it demonstrates**:
- Swarm coordinates agents with learning profiles
- Task assignment based on expertise
- Load balancing with learned preferences
- Profile updates after execution

**Run**:
```bash
rustc examples/full-stack/02-swarm-with-learning.rs -o /tmp/example && /tmp/example
```

**Workflow**:
1. Register agents with learning profiles
   - alice: 92% coding, 60% testing, 30% load
   - bob: 78% coding, 85% testing, 10% load
   - carol: 90% documentation, 75% testing, 20% load
2. Submit tasks (3 different types)
3. Swarm assigns based on expertise + load
4. Execute tasks
5. Update learning profiles with results
6. Verify assignments improved for next round

**Time**: 15-20 minutes

---
### Phase 5: Real-World Examples

Production scenarios with business value analysis.
#### Code Review Pipeline
**File**: `examples/real-world/01-code-review-workflow.rs`

**What it demonstrates**:
- Multi-agent code review workflow
- Cost optimization with tiered providers
- Quality vs cost trade-off
- Business metrics (ROI, time savings)

**Run**:
```bash
rustc examples/real-world/01-code-review-workflow.rs -o /tmp/example && /tmp/example
```

**Three-stage pipeline**:

**Stage 1** (Ollama - FREE):
- Static analysis, linting
- Dead code detection
- Security rule violations
- Cost: $0.00/PR, Time: 5s

**Stage 2** (GPT-4 - $10/1M):
- Logic verification
- Test coverage analysis
- Performance implications
- Cost: $0.08/PR, Time: 15s

**Stage 3** (Claude - $15/1M, 10% of PRs):
- Architecture validation
- Design pattern verification
- Triggered for risky changes
- Cost: $0.20/PR, Time: 30s

**Business impact**:
- Volume: 50 PRs/day
- Cost: $0.60/day ($12/month)
- vs Manual: 40+ hours/month ($500+)
- **Savings: $488/month**
- Quality: 99%+ accuracy

**Time**: 15-20 minutes

---
#### Documentation Generation
**File**: `examples/real-world/02-documentation-generation.rs`

**What it demonstrates**:
- Automated doc generation from code
- Multi-stage pipeline (analyze → write → check)
- Cost optimization
- Keeping docs in sync with code

**Run**:
```bash
rustc examples/real-world/02-documentation-generation.rs -o /tmp/example && /tmp/example
```

**Pipeline**:

**Phase 1** (Ollama - FREE):
- Parse source files
- Extract API endpoints, types
- Identify breaking changes
- Cost: $0.00, Time: 2min for 10k LOC

**Phase 2** (Claude - $15/1M):
- Generate descriptions
- Create examples
- Document parameters
- Cost: $0.40/endpoint, Time: 30s

**Phase 3** (GPT-4 - $10/1M):
- Verify accuracy vs code
- Check completeness
- Ensure clarity
- Cost: $0.15/doc, Time: 15s

**Business impact**:
- Docs in sync instantly (vs 2-week lag)
- Per-endpoint cost: $0.55
- Monthly cost: ~$11 (vs $1000+ manual)
- **Savings: $989/month**
- Quality: 99%+ accuracy

**Time**: 15-20 minutes

---
#### Issue Triage
**File**: `examples/real-world/03-issue-triage.rs`

**What it demonstrates**:
- Intelligent issue classification
- Two-stage escalation pipeline
- Cost optimization
- Consistent routing rules

**Run**:
```bash
rustc examples/real-world/03-issue-triage.rs -o /tmp/example && /tmp/example
```
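The escalation rule that keeps most issues on the free tier can be sketched as a confidence gate (the 0.8 threshold and the types here are assumptions for illustration, not the example's actual values):

```rust
/// Two-stage triage: a free local classifier handles clear-cut issues and
/// escalates low-confidence ones to a stronger paid model. The 0.8
/// cut-off is an assumed threshold, not the example's actual value.
#[derive(Debug, PartialEq)]
enum Route {
    LocalResult(&'static str), // stage 1 (free) verdict stands
    Escalate,                  // stage 2 (paid) takes a closer look
}

fn triage(stage1_label: &'static str, stage1_confidence: f64) -> Route {
    if stage1_confidence >= 0.8 {
        Route::LocalResult(stage1_label)
    } else {
        Route::Escalate
    }
}

fn main() {
    println!("{:?}", triage("bug", 0.93));     // clear-cut: stays on the free tier
    println!("{:?}", triage("feature", 0.41)); // unclear: escalated
}
```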
**Two-stage pipeline**:

**Stage 1** (Ollama - FREE, 85% accuracy):
- Classify issue type (bug, feature, docs, support)
- Extract component, priority
- Route to team
- Cost: $0.00/issue, Time: 2s

**Stage 2** (Claude - $15/1M, 15% of issues):
- Detailed analysis for unclear issues
- Extract root cause
- Create investigation
- Cost: $0.08/issue, Time: 10s

**Business impact**:
- Volume: 200 issues/month
- Stage 1: 170 issues × $0.00 = $0.00
- Stage 2: 30 issues × $0.08 = $2.40
- Manual triage: 20 hours × $50 = $1,000
- **Savings: $997.60/month**
- Speed: Seconds vs hours

**Time**: 15-20 minutes

---
## Learning Paths

### Path 1: Quick Overview (30 minutes)
1. Run `01-simple-agent` (agent basics)
2. Run `01-provider-selection` (LLM routing)
3. Run `01-error-handling` (error patterns)

**Takeaway**: Understand basic components

---

### Path 2: System Integration (90 minutes)
1. Run all Phase 1 examples (30 min)
2. Run `02-learning-profile` + `03-agent-selection` (20 min)
3. Run `02-budget-enforcement` + `03-cost-tracking` (20 min)
4. Run `02-task-assignment` + `02-learning-curves` (20 min)

**Takeaway**: Understand component interactions

---

### Path 3: Production Ready (2-3 hours)
1. Complete Path 2 (90 min)
2. Run Phase 5 real-world examples (45 min)
3. Study `docs/tutorials/` (30-45 min)

**Takeaway**: Ready to implement VAPORA in production

---
## Common Tasks

### I want to understand agent learning

**Read**: `docs/tutorials/04-learning-profiles.md`

**Run examples** (in order):
1. `02-learning-profile` - See expertise calculation
2. `03-agent-selection` - See scoring in action
3. `02-learning-curves` - See trends over time

**Time**: 30-40 minutes

---

### I want to understand cost control

**Read**: `docs/tutorials/05-budget-management.md`

**Run examples** (in order):
1. `01-provider-selection` - See provider pricing
2. `02-budget-enforcement` - See budget tiers
3. `03-cost-tracking` - See detailed reports

**Time**: 25-35 minutes

---

### I want to understand multi-agent workflows

**Read**: `docs/tutorials/06-swarm-coordination.md`

**Run examples** (in order):
1. `01-agent-registration` - See swarm setup
2. `02-task-assignment` - See task routing
3. `02-swarm-with-learning` - See full workflow

**Time**: 30-40 minutes

---

### I want to see business value

**Run examples** (real-world):
1. `01-code-review-workflow` - $488/month savings
2. `02-documentation-generation` - $989/month savings
3. `03-issue-triage` - $997/month savings

**Takeaway**: VAPORA saves ~$2,474/month for typical usage

**Time**: 40-50 minutes

---
## Running Examples with Parameters

Some examples support command-line arguments:

```bash
# Budget enforcement with custom budget
cargo run --example 02-budget-enforcement -p vapora-llm-router -- \
  --monthly-budget 50000 --verbose

# Learning profile with custom sample size
cargo run --example 02-learning-profile -p vapora-agents -- \
  --sample-size 100
```

Check example documentation for available options:

```bash
# View example header
head -20 crates/vapora-agents/examples/02-learning-profile.rs
```

---
## Troubleshooting

### "example not found"

Ensure you're running from the workspace root:

```bash
cd /path/to/vapora
cargo run --example 01-simple-agent -p vapora-agents
```

---

### "Cannot find module"

Ensure the workspace is synced:

```bash
cargo update
cargo build --examples --workspace
```

---

### Example fails at runtime

Check prerequisites:

**Backend examples** require:
```bash
# Terminal 1: Start SurrealDB
docker run -d -p 8000:8000 surrealdb/surrealdb:latest start

# Terminal 2: Start backend
cd crates/vapora-backend && cargo run

# Terminal 3: Run example
cargo run --example 01-health-check -p vapora-backend
```

---

### Want verbose output

Set logging:

```bash
RUST_LOG=debug cargo run --example 02-learning-profile -p vapora-agents
```

---
## Next Steps

After exploring examples:

1. **Read tutorials**: `docs/tutorials/README.md` - step-by-step guides
2. **Study code snippets**: `docs/examples/` - quick reference
3. **Explore source**: `crates/*/src/` - understand implementations
4. **Run tests**: `cargo test --workspace` - verify functionality
5. **Build projects**: Create your first VAPORA integration

---
## Quick Reference

### Build all examples

```bash
cargo build --examples --workspace
```

### Run specific example

```bash
cargo run --example <name> -p <crate>
```

### Clean build artifacts

```bash
cargo clean
cargo build --examples
```

### List examples in crate

```bash
ls -la crates/<crate>/examples/
```

### View example documentation

```bash
head -30 crates/<crate>/examples/<name>.rs
```

### Run with output

```bash
cargo run --example <name> -p <crate> 2>&1 | tee output.log
```

---
## Resources

- **Main docs**: See `docs/` directory
- **Tutorial path**: `docs/tutorials/README.md`
- **Code snippets**: `docs/examples/`
- **API documentation**: `cargo doc --open`
- **Project examples**: `examples/` directory

---

**Total examples**: 23 Rust + 4 Marimo notebooks

**Estimated learning time**: 2-3 hours for complete understanding

**Next**: Start with Path 1 (Quick Overview) →