Agent Execution Examples
This directory contains practical examples of using the TypeDialog Agent (typedialog-ag) CLI for various real-world tasks.
Overview
TypeDialog Agent lets you execute AI agents defined as markdown files (.agent.mdx), with support for:
- Template variable substitution
- File imports with glob patterns
- Shell command integration
- Output validation
- Multi-provider LLM support (Claude, OpenAI, Gemini, Ollama)
Examples Included
1. Basic Greeting (basic-greeting.agent.mdx)
Purpose: Simple introduction to agent execution
LLM: Claude Haiku (fast, cost-effective)
Features: Basic template variables, optional inputs
cargo run --package typedialog-ag -- examples/12-agent-execution/basic-greeting.agent.mdx
What it demonstrates:
- Simple agent configuration
- Required and optional inputs
- Conditional logic in templates
- Fast response with Haiku model
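For orientation, a minimal agent in this style might look like the sketch below. This is illustrative only, not the actual contents of basic-greeting.agent.mdx; the claude-haiku identifier and all field values are assumptions based on the agent file format described later in this README.

```
---
@agent {
  role: friendly greeter,
  llm: claude-haiku,        # assumed identifier; use the model string your setup expects
  max_tokens: 300,
  temperature: 0.7
}
@input name: String
@input language?: String
---
Greet {{ name }} warmly in one short paragraph.
{{#if language}}Write the greeting in {{ language }}.{{/if}}
```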
2. Code Review (code-review.agent.mdx)
Purpose: Automated code review with security and performance analysis
LLM: Claude Sonnet (balanced quality/cost)
Features: File imports, git integration, output validation
cargo run --package typedialog-ag -- examples/12-agent-execution/code-review.agent.mdx
What it demonstrates:
- File pattern imports (@import)
- Shell command execution (@shell)
- Output validation rules
- Security-focused analysis
- Lower temperature for consistent output
3. Documentation Generator (documentation.agent.mdx)
Purpose: Generate comprehensive documentation from source code
LLM: GPT-4o (excellent code understanding)
Features: Multi-file imports, cargo integration
cargo run --package typedialog-ag -- examples/12-agent-execution/documentation.agent.mdx
What it demonstrates:
- Complex file glob patterns
- Integration with cargo doc
- Long-form content generation (8K tokens)
- Structured output requirements
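The multi-file imports and cargo integration reduce to a handful of directives; a sketch (the exact globs and command used by the shipped example are assumptions):

```
@import "src/**/*.rs" as source_files
@import "Cargo.toml" as manifest
@shell "cargo doc --no-deps" as doc_build_output
```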
4. Task Planner (task-planner.agent.mdx)
Purpose: Break down projects into actionable tasks
LLM: Claude Sonnet
Features: Git status integration, complexity assessment
cargo run --package typedialog-ag -- examples/12-agent-execution/task-planner.agent.mdx
What it demonstrates:
- Project planning without time estimates
- Complexity-based task breakdown
- Risk identification
- Dependency management
- Implementation sequencing
5. Local Privacy Analysis (local-privacy.agent.mdx)
Purpose: Privacy-first data analysis using local models
LLM: Ollama (llama2) - runs entirely locally
Features: Zero cloud API calls, complete privacy
# Requires Ollama server running
ollama serve
# Then run the agent
cargo run --package typedialog-ag -- examples/12-agent-execution/local-privacy.agent.mdx
What it demonstrates:
- Local LLM execution with Ollama
- Privacy-preserving workflows
- Sensitive data handling
- Offline operation
- No API costs
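Switching to a local model is mostly a matter of the llm field; a sketch (the llama2 identifier matches the model pulled in the Requirements section below, but the exact model string format and other field values are assumptions):

```
@agent {
  role: privacy-conscious data analyst,
  llm: llama2,
  max_tokens: 1500,
  temperature: 0.3
}
```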
6. Creative Writer (creative-writer.agent.mdx)
Purpose: Creative content generation
LLM: Gemini 2.0 Flash (creative, multi-modal)
Features: High temperature, creative output
cargo run --package typedialog-ag -- examples/12-agent-execution/creative-writer.agent.mdx
What it demonstrates:
- High temperature for creativity (0.9)
- Gemini's creative capabilities
- Long-form content generation
- Style customization
7. Refactoring Assistant (refactoring-assistant.agent.mdx)
Purpose: Automated code refactoring with best practices
LLM: GPT-4o-mini (fast, code-focused)
Features: Clippy integration, Rust-specific patterns
cargo run --package typedialog-ag -- examples/12-agent-execution/refactoring-assistant.agent.mdx
What it demonstrates:
- Low temperature for consistent code (0.2)
- Integration with clippy warnings
- Before/after code comparisons
- Rust idioms and best practices
- Migration planning
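Feeding clippy output into the prompt uses the same @shell mechanism as the other examples, with the template then referring to {{ clippy_warnings }}; a sketch (the exact flags are an assumption):

```
@shell "cargo clippy --all-targets 2>&1" as clippy_warnings
```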
8. Software Architect (architect.agent.mdx)
Purpose: High-level architecture design and ADRs
LLM: Claude Opus (most capable)
Features: Tool calling, diagram generation
cargo run --package typedialog-ag -- examples/12-agent-execution/architect.agent.mdx
What it demonstrates:
- Most capable model for complex design
- Tool calling capabilities
- Architecture decision records
- Mermaid diagram generation
- Comprehensive validation
Model Selection Guide
| Use Case | Recommended Model | Why |
|---|---|---|
| Quick tasks, simple queries | Claude Haiku | Fast, cheap, good for straightforward tasks |
| Code review, planning | Claude Sonnet | Balanced performance and cost |
| Complex architecture, critical design | Claude Opus | Most capable, highest quality |
| Code generation, refactoring | GPT-4o, GPT-4o-mini | Excellent code understanding |
| Creative content | Gemini 2.0 Flash | Creative, multi-modal capabilities |
| Privacy-sensitive data | Ollama (local) | No cloud APIs, complete privacy |
Running Examples
Basic Execution
# From project root
cargo run --package typedialog-ag -- examples/12-agent-execution/<example>.agent.mdx
With Input Variables
# Interactive prompts
cargo run --package typedialog-ag -- examples/12-agent-execution/basic-greeting.agent.mdx
# You'll be prompted:
# name (String): Alice
# language (optional, String): Spanish
Skip Prompts (Use Defaults)
cargo run --package typedialog-ag -- examples/12-agent-execution/basic-greeting.agent.mdx --yes
Requirements
API Keys
Set environment variables for cloud providers:
# Claude
export ANTHROPIC_API_KEY=sk-ant-...
# OpenAI
export OPENAI_API_KEY=sk-...
# Gemini
export GEMINI_API_KEY=...
Local Models (Ollama)
# Install Ollama
# Download from: https://ollama.ai
# Start server
ollama serve
# Pull model
ollama pull llama2
Related Resources
Tests and Demos
Core Library Resources
- Rust Examples - Programmatic usage examples
  - provider_comparison.rs - Compare all 4 LLM providers
  - llm_execution.rs - Basic LLM execution
- Test Fixtures - Integration tests
  - integration_test.rs - Full pipeline tests
  - simple_integration_test.rs - Basic validation
- Documentation - Complete LLM integration guide
Agent File Format
All examples follow this structure:
---
@agent {
role: <agent role>,
llm: <model-name>,
max_tokens: <number>,
temperature: <0.0-1.0>
}
@input <variable>: <Type>
@input <optional_var>?: <Type>
@import "<pattern>" as <variable>
@shell "<command>" as <variable>
@validate output {
must_contain: ["string1", "string2"],
format: <markdown|json|yaml>,
min_length: <number>
}
---
Template content with {{variables}}
Features Showcased
Template System
- Variable substitution: {{ name }}
- Conditionals: {{#if condition}}...{{/if}}
- Iteration: {{#each items}}...{{/each}}
- Filters: {{ name | upcase }}
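Put together, a template body combining these features might read like this sketch (the variable names and the {{ this }} item reference inside the loop are assumptions about the iteration syntax):

```
Hello {{ name | upcase }}!
{{#if projects}}
Your projects:
{{#each projects}}
- {{ this }}
{{/each}}
{{/if}}
```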
Context Injection
- File imports: @import "./src/**/*.rs" as code
- Shell commands: @shell "git status" as status
- Glob patterns: Match multiple files
- Line ranges: Extract specific lines (future feature)
Validation
- Required content: must_contain: ["keywords"]
- Format checking: format: markdown
- Length requirements: min_length: 500
- Custom validation rules (future feature)
Performance Tips
Model Selection
- Use Haiku for simple, fast tasks
- Use Sonnet for balanced quality/speed
- Use Opus only for complex, critical tasks
- Use GPT-4o-mini for cost-effective code tasks
- Use Ollama for privacy and offline operation
Token Optimization
@agent {
max_tokens: 500 # Limit response length for simple tasks
}
Temperature Control
@agent {
temperature: 0.2 # Low for consistent, factual output (code, analysis)
temperature: 0.7 # Moderate for balanced creativity
temperature: 0.9 # High for creative content
}
Troubleshooting
API Key Not Set
Error: ANTHROPIC_API_KEY environment variable not set
Solution: Set the required API key for your chosen model
File Not Found
Error: Failed to import "./path/to/file.md"
Solution: Check that file paths are relative to the project root, or use absolute paths
Validation Failed
Error: Validation failed: missing required content
Solution: Adjust validation rules or improve agent prompt
Ollama Connection Failed
Error: Failed to call Ollama API - is Ollama running?
Solution: Start Ollama server with ollama serve
Next Steps
- Try the examples - Run each example to understand capabilities
- Create your own - Use examples as templates for your use cases
- Explore demos - See demos/agent for provider comparisons
- Read the docs - Check LLM_INTEGRATION.md
- Run tests - Validate with tests/agent
Contributing
To add a new example:
- Create a .agent.mdx file following the format above
- Test that it works with cargo run --package typedialog-ag -- <path>
- Document it in this README
- Add any required dependencies or setup steps
See Also:
- Project README - Main project documentation
- Quickstart - Getting started guide