# Getting Started with TypeDialog Agent

A complete guide to installing and using TypeDialog Agent for the first time.
## Installation

### Prerequisites

- Rust 1.75 or later
- Cargo (comes with Rust)
- At least one LLM provider:
  - an API key for Claude, OpenAI, or Gemini
  - OR Ollama installed for local models
### Install Rust

If you don't have Rust installed:

```bash
# Install Rust via rustup
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# Verify the installation
rustc --version
cargo --version
```
### Build TypeDialog Agent

```bash
# Clone the repository (if not already)
cd /path/to/typedialog

# Build the CLI
cargo build --release --package typedialog-ag

# The binary will be at:
# target/release/typedialog-ag
```
### Add to PATH (Optional)

```bash
# Option 1: Copy the binary to a directory in PATH
sudo cp target/release/typedialog-ag /usr/local/bin/

# Option 2: Create an alias
echo 'alias typedialog-ag="/path/to/typedialog/target/release/typedialog-ag"' >> ~/.bashrc
source ~/.bashrc

# Verify
typedialog-ag --help
```
## Set Up an LLM Provider

Choose at least one provider to get started.

### Option 1: Claude (Anthropic) - Recommended

```bash
# Sign up at: https://console.anthropic.com
# Get your API key
export ANTHROPIC_API_KEY=sk-ant-...

# Add to your shell profile for persistence
echo 'export ANTHROPIC_API_KEY=sk-ant-...' >> ~/.bashrc
```

Why Claude?

- Excellent code understanding
- Strong reasoning capabilities
- Reliable and consistent
- Good balance of speed, quality, and cost
### Option 2: OpenAI (GPT)

```bash
# Sign up at: https://platform.openai.com
# Get your API key
export OPENAI_API_KEY=sk-...

# Add to your shell profile
echo 'export OPENAI_API_KEY=sk-...' >> ~/.bashrc
```

Why OpenAI?

- Excellent code generation
- Broad general knowledge
- Well-documented and reliable
### Option 3: Google Gemini

```bash
# Get an API key from: https://aistudio.google.com/app/apikey
export GEMINI_API_KEY=...

# Or use GOOGLE_API_KEY
export GOOGLE_API_KEY=...
```

Why Gemini?

- Creative content generation
- Multi-modal capabilities
- Free tier available
### Option 4: Ollama (Local) - Privacy First

```bash
# Install Ollama
# Download from: https://ollama.ai

# Start the Ollama server
ollama serve

# Pull a model (in another terminal)
ollama pull llama2

# Verify
curl http://localhost:11434/api/tags
```

Why Ollama?

- ✅ Complete privacy - runs locally
- ✅ No API costs
- ✅ Offline operation
- ✅ No data sent to the cloud
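With one or more keys exported, you can sanity-check your shell before running any agents. A minimal sketch (the env var names match the options above; `check_providers` is a hypothetical helper, not part of TypeDialog):

```shell
# Report which provider env vars are set, without printing their values.
check_providers() {
  local var
  for var in ANTHROPIC_API_KEY OPENAI_API_KEY GEMINI_API_KEY GOOGLE_API_KEY; do
    if [ -n "${!var:-}" ]; then
      echo "$var: set"
    else
      echo "$var: not set"
    fi
  done
}

check_providers
```

The `${!var}` indirection is bash-specific; adapt for other shells.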
## Your First Agent

### Step 1: Create an Agent File

Create `hello.agent.mdx`:

```
---
@agent {
  role: friendly assistant,
  llm: claude-3-5-haiku-20241022
}

@input name: String
---

Say hello to {{name}} in a warm and friendly way!
Include a fun fact about their name if you know one.
```
### Step 2: Run the Agent

```bash
typedialog-ag hello.agent.mdx
```

You'll be prompted:

```
name (String): Alice
```

Expected output:

```
Hello Alice! It's wonderful to meet you! 🌟

Fun fact: The name Alice means "noble" and "truth" in Old German...
```
### Step 3: Understand What Happened

1. **Agent loaded** - Parsed the `.agent.mdx` file
2. **Provider selected** - Chose Claude Haiku based on `llm: claude-3-5-haiku-20241022`
3. **Input collected** - Prompted for the `name` variable
4. **Template rendered** - Replaced `{{name}}` with "Alice"
5. **LLM called** - Sent the rendered prompt to the Claude API
6. **Response streamed** - Displayed token-by-token
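Conceptually, the rendering step is plain string substitution. A toy shell illustration of that one step, not the real template engine:

```shell
# Substitute the collected input into the {{name}} placeholder.
template='Say hello to {{name}} in a warm and friendly way!'
name='Alice'
rendered=${template//'{{name}}'/$name}
echo "$rendered"
# → Say hello to Alice in a warm and friendly way!
```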
## Common Examples

### Code Review Agent

Create `review.agent.mdx`:

````
---
@agent {
  role: senior code reviewer,
  llm: claude-3-5-sonnet-20241022,
  temperature: 0.3
}

@input file_path: String

@import "{{file_path}}" as code_content

@validate output {
  must_contain: ["## Security", "## Performance"],
  format: markdown,
  min_length: 300
}
---

# Code Review

## Code to Review

```text
{{code_content}}
```

## Task

Perform a thorough code review focusing on:

- Security vulnerabilities
- Performance issues
- Best practices
- Potential bugs

Provide specific, actionable recommendations.
````

Run it:

```bash
typedialog-ag review.agent.mdx
# Prompts: file_path (String): ./src/main.rs
```
### Documentation Generator

Create `docs.agent.mdx`:

```
---
@agent {
  role: technical writer,
  llm: gpt-4o,
  max_tokens: 4096
}

@input module_path: String

@import "{{module_path}}/**/*.rs" as source_code

@validate output {
  format: markdown,
  min_length: 800
}
---

# Generate API Documentation

## Source Code

{{source_code}}

## Task

Create comprehensive API documentation including:

1. Overview of the module
2. Public API reference
3. Usage examples with code snippets
4. Common use cases
```

Run it:

```bash
typedialog-ag docs.agent.mdx
# Prompts: module_path (String): ./src/parser
```
### Local Privacy Analysis

Create `privacy.agent.mdx`:

```
---
@agent {
  role: privacy consultant,
  llm: llama2
}

@input data_type: String
@input regulation: String

@import ".env" as env_config
---

# Privacy Analysis

You're analyzing: {{data_type}}
Regulation: {{regulation}}
Environment: {{env_config}}

Provide:

1. Privacy requirements for {{regulation}}
2. Recommended security measures
3. Data handling guidelines
4. Compliance checklist
```

**Note**: This runs 100% locally via Ollama - your data never leaves your machine!

Run it:

```bash
# Requires Ollama running
ollama serve

typedialog-ag privacy.agent.mdx
# Prompts: data_type (String): user emails
# Prompts: regulation (String): GDPR
```
## CLI Usage Patterns

### Interactive Mode (Default)

```bash
# Prompts for all inputs
typedialog-ag agent.agent.mdx
```

### Skip Prompts with --yes

```bash
# Uses default values (empty strings for missing inputs)
typedialog-ag agent.agent.mdx --yes
```

### Validate Without Execution

```bash
# Check syntax and type-check
typedialog-ag validate agent.agent.mdx
```

### Transpile to Nickel

```bash
# See the Nickel intermediate representation
typedialog-ag transpile agent.agent.mdx
```

### Show Help

```bash
typedialog-ag --help
typedialog-ag validate --help
```
## Next Steps

### Learn More

- **AGENTS.md** - Complete agent file format
- **llm_providers.md** - Provider comparison and setup
- **TEMPLATES.md** - Template system guide
- **CLI_REFERENCE.md** - All CLI commands

### Try Examples

```bash
# Basic examples
typedialog-ag examples/12-agent-execution/basic-greeting.agent.mdx

# Advanced examples
typedialog-ag examples/12-agent-execution/code-review.agent.mdx
typedialog-ag examples/12-agent-execution/task-planner.agent.mdx

# See all examples
ls examples/12-agent-execution/
cat examples/12-agent-execution/README.md
```

### Run Demos

```bash
# Compare all providers
./demos/agent/run_demo.sh

# Or run the provider comparison example
cargo run --example provider_comparison
```
## Troubleshooting

### API Key Not Set

**Error:**

```
Error: ANTHROPIC_API_KEY environment variable not set
```

**Solution:**

```bash
export ANTHROPIC_API_KEY=sk-ant-...
# Or add to ~/.bashrc for persistence
```
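If you re-run the persistence command, identical export lines pile up in your profile. A small hedged helper that appends only once (`append_once` is illustrative, not a TypeDialog command):

```shell
# append_once LINE FILE: append LINE to FILE unless an identical line exists.
append_once() {
  local line="$1" file="$2"
  grep -qxF -- "$line" "$file" 2>/dev/null || printf '%s\n' "$line" >> "$file"
}

append_once 'export ANTHROPIC_API_KEY=sk-ant-...' "$HOME/.bashrc"
```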
### File Not Found

**Error:**

```
Error: Failed to read agent file: hello.agent.mdx
```

**Solution:**

```bash
# Check that the file exists
ls hello.agent.mdx

# Or use an absolute path
typedialog-ag /full/path/to/hello.agent.mdx
```
### Import Pattern Not Found

**Error:**

```
Error: No files matched pattern: ./src/**/*.rs
```

**Solution:**

```bash
# Check that matching files exist
ls ./src/**/*.rs

# Use the correct relative path:
# paths are relative to the project root, not the agent file
```
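Note that `**` globbing in `ls` depends on your shell (bash needs `shopt -s globstar`); `find` is a portable way to confirm the pattern would match something. `count_rs_files` is a hypothetical helper for illustration:

```shell
# count_rs_files DIR: count .rs files anywhere under DIR (like DIR/**/*.rs).
count_rs_files() {
  find "$1" -type f -name '*.rs' 2>/dev/null | wc -l
}

echo "Rust files under ./src: $(count_rs_files ./src)"
```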
### Ollama Not Running

**Error:**

```
Error: Failed to call Ollama API - is Ollama running?
```

**Solution:**

```bash
# Start the Ollama server
ollama serve

# In another terminal, verify
curl http://localhost:11434/api/tags
```
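When scripting, you may want to block until the server actually answers before invoking an agent. A sketch polling the `/api/tags` endpoint shown above (`wait_for_ollama` is a hypothetical helper; the retry count is arbitrary):

```shell
# wait_for_ollama [URL] [RETRIES]: poll until the Ollama API responds.
wait_for_ollama() {
  local url="${1:-http://localhost:11434/api/tags}"
  local retries="${2:-10}" i=0
  while [ "$i" -lt "$retries" ]; do
    if curl -sf --max-time 2 "$url" > /dev/null 2>&1; then
      echo "ollama: up"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "ollama: not responding at $url" >&2
  return 1
}
```

For example: `wait_for_ollama && typedialog-ag privacy.agent.mdx`.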
### Quota Exceeded

**Error:**

```
Error: 429 Too Many Requests - quota exceeded
```

**Solutions:**

- Wait for the quota to reset
- Upgrade your API plan
- Use a different provider
- Use Ollama for local, free execution
### Validation Failed

**Error:**

```
Error: Validation failed: missing required content
```

**Solution:** adjust the validation rules in your agent:

```
@validate output {
  must_contain: ["Summary"],  # Make less strict
  min_length: 100             # Lower the requirement
}
```
### Template Error

**Error:**

```
Error: Failed to render template
```

**Solution:**

- Check that variable names match: `@input name` and `{{name}}`
- Ensure proper Liquid syntax: `{% if var %}...{% endif %}`
- Verify nested templates are closed properly
## Performance Tips

### Choose the Right Model

- **Quick tasks** → Haiku (fast, cheap)
- **Complex reasoning** → Sonnet (balanced)
- **Critical design** → Opus (best quality)
- **Local/private** → Ollama (free, private)

### Limit Tokens

```
@agent {
  max_tokens: 500  # For short responses
}
```

### Adjust Temperature

```
@agent {
  temperature: 0.2  # Low for consistent, factual output (code)
  temperature: 0.7  # Medium for balanced output
  temperature: 0.9  # High for creative content
}
```

### Use Validation

```
@validate output {
  min_length: 300,
  max_length: 1000  # Prevent overly long responses
}
```
## Best Practices

### 1. Start Simple

Begin with basic agents and add complexity gradually.

### 2. Use Appropriate Models

Don't use Opus for simple tasks - it's slower and more expensive.

### 3. Validate Critical Outputs

Always validate when accuracy matters:

```
@validate output {
  must_contain: ["PASS", "FAIL"],
  format: json
}
```

### 4. Use Ollama for Iteration

Develop and test with Ollama (free), then switch to cloud providers for production.

### 5. Keep Prompts Focused

Clear, specific prompts get better results than vague ones.

### 6. Use Context Wisely

Import only relevant context - too much can confuse the model.
## What's Next?

Ready to dive deeper?

1. Read **AGENTS.md** to master the agent file format
2. Compare providers in **llm_providers.md**
3. Learn advanced templates in **TEMPLATES.md**
4. Explore **examples/12-agent-execution/**

Want to contribute? See the Developer Documentation.

Questions? Check the troubleshooting section above or read the FAQ.