# TypeDialog Agent Documentation

User guide for TypeDialog Agent - Execute AI agents defined as markdown files with multi-provider LLM support.

## What is TypeDialog Agent?

TypeDialog Agent (typedialog-ag) is a CLI tool that allows you to:

- Define AI agents as markdown files (.agent.mdx)
- Execute them with multiple LLM providers (Claude, OpenAI, Gemini, Ollama)
- Inject context from files, shell commands, and templates
- Validate outputs automatically
- Stream responses in real-time
## Quick Links

### Getting Started

- getting_started.md - Installation and first agent
- AGENTS.md - How to write agent files
- CLI_REFERENCE.md - Command reference

### LLM Providers

- llm_providers.md - Complete provider guide
- Claude (Anthropic)
- OpenAI (GPT)
- Google Gemini
- Ollama (local models)

### Advanced Topics

- TEMPLATES.md - Template system and variables
- VALIDATION.md - Output validation rules
- BEST_PRACTICES.md - Tips and patterns
## Quick Start

### 1. Install

```bash
# Build from source
cargo build --release --package typedialog-ag

# The binary will be at: target/release/typedialog-ag
```

### 2. Set API Key

```bash
# For Claude
export ANTHROPIC_API_KEY=sk-ant-...

# For OpenAI
export OPENAI_API_KEY=sk-...

# For Gemini
export GEMINI_API_KEY=...

# For Ollama (local - no API key needed)
ollama serve
```
### 3. Create Your First Agent

Create hello.agent.mdx:

```mdx
---
@agent {
  role: friendly assistant,
  llm: claude-3-5-haiku-20241022
}
@input name: String
---
Say hello to {{name}} in a warm and friendly way!
```

### 4. Run It

```bash
typedialog-ag hello.agent.mdx

# Prompts: name (String): Alice
# Output: Hello Alice! It's wonderful to meet you...
```
## Documentation Structure

```text
docs/agent/
├── README.md            ← Overview (you are here)
├── getting_started.md   ← Installation & first steps
├── AGENTS.md            ← Writing agent files
├── llm_providers.md     ← Provider setup & comparison
├── CLI_REFERENCE.md     ← Command-line reference
├── TEMPLATES.md         ← Template system guide
├── VALIDATION.md        ← Output validation
└── BEST_PRACTICES.md    ← Tips & patterns
```
## Features

### Multi-Provider Support

Choose the best LLM for your task (a sketch of how the model is selected follows this list):

- Claude - Best for code, analysis, reasoning
- OpenAI - Excellent code understanding, broad capabilities
- Gemini - Creative tasks, multi-modal
- Ollama - Privacy-first, local execution, no costs
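The model named in the agent's front matter determines which provider handles the request. A minimal sketch, following the pattern of the hello example above (the role, input, and model identifier here are placeholders; pick whichever model your provider offers):

```mdx
---
@agent {
  role: code reviewer,
  llm: claude-3-5-haiku-20241022
}
@input code: String
---
Review the following code and point out potential bugs:

{{code}}
```

Changing only the llm: value should be enough to switch providers; see llm_providers.md for valid model names.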
### Context Injection

Import context from multiple sources:

```mdx
@import "./src/**/*.rs" as code      # File imports
@shell "git status" as git_status    # Shell commands
@import ".env" as environment        # Configuration
```
### Template System

Powerful templating with LiquidJS:

```mdx
Hello {{name}}!

{{#if premium}}
Welcome to premium features!
{{else}}
Upgrade to unlock more!
{{/if}}
```
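Template variables typically come from @input declarations. A sketch assuming a Bool input type is available for the premium flag (the type name is an assumption; only the conditional syntax shown above is used):

```mdx
---
@agent {
  role: onboarding assistant,
  llm: claude-3-5-haiku-20241022
}
@input name: String
@input premium: Bool
---
Hello {{name}}!

{{#if premium}}
Welcome to premium features!
{{else}}
Upgrade to unlock more!
{{/if}}
```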
### Output Validation

Ensure quality automatically:

```mdx
@validate output {
  must_contain: ["## Summary", "## Recommendations"],
  format: markdown,
  min_length: 500
}
```
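In a full agent file, the validation block sits in the front matter alongside @agent and @input. A sketch combining the directives shown so far (placement and field values are illustrative, not a definitive layout):

```mdx
---
@agent {
  role: technical analyst,
  llm: claude-3-5-haiku-20241022
}
@input topic: String
@validate output {
  must_contain: ["## Summary", "## Recommendations"],
  format: markdown,
  min_length: 500
}
---
Write a markdown analysis of {{topic}} with a "## Summary" section and a "## Recommendations" section.
```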
### Real-time Streaming

See responses as they're generated with token-by-token streaming.
## Examples

### Code Review

```bash
typedialog-ag examples/12-agent-execution/code-review.agent.mdx
```

### Documentation Generation

```bash
typedialog-ag examples/12-agent-execution/documentation.agent.mdx
```

### Task Planning

```bash
typedialog-ag examples/12-agent-execution/task-planner.agent.mdx
```

### Local Privacy Analysis

```bash
# Runs entirely locally with Ollama
typedialog-ag examples/12-agent-execution/local-privacy.agent.mdx
```

See examples/12-agent-execution/ for all examples.
## Use Cases

### Development

- Code reviews
- Refactoring assistance
- Documentation generation
- Bug analysis
- Architecture design

### Content Creation

- Blog posts
- Documentation
- Technical writing
- Creative content

### Business

- Task planning
- Process documentation
- Decision support
- Data analysis

### Privacy-Sensitive

- Local data analysis with Ollama
- Compliance review
- Sensitive document processing
## Learning Path

1. getting_started.md - Install and run your first agent
2. AGENTS.md - Learn the agent file format
3. llm_providers.md - Choose your provider
4. TEMPLATES.md - Master the template system
5. Examples & Demos - See real-world usage
6. BEST_PRACTICES.md - Optimize your agents
## Resources

### Examples & Demos

- examples/12-agent-execution/ - Runnable agent examples

### Technical Documentation

- LLM Integration - Complete technical guide
- Core Examples - Rust API usage
- Developer Docs - Architecture and APIs

### Main Project

- TypeDialog README - Project overview
- Examples - All project examples
## Common Questions

### Which LLM provider should I use?

See llm_providers.md for a detailed comparison. Quick guide:

- Quick tasks → Claude Haiku
- Code work → Claude Sonnet or GPT-4o
- Creative → Gemini 2.0
- Privacy → Ollama (local)

### How much does it cost?

- Claude - Pay per token (see Anthropic pricing)
- OpenAI - Pay per token (see OpenAI pricing)
- Gemini - Free tier, then pay per token
- Ollama - Free (runs locally)

See llm_providers.md for details.
### Can I use it offline?

Yes. Use Ollama for completely offline operation:

```bash
ollama serve
ollama pull llama2

typedialog-ag my-agent.agent.mdx  # Uses llama2 locally
```
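For the run to stay local, the agent itself has to name the pulled Ollama model. A sketch assuming the local model name goes straight into the llm: field, as in the earlier examples (the role and input are placeholders):

```mdx
---
@agent {
  role: private data analyst,
  llm: llama2
}
@input document: String
---
Summarize the following document and list any sensitive information it contains:

{{document}}
```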
### Is my data private?

- Cloud providers (Claude, OpenAI, Gemini) - Data is sent to their APIs
- Ollama - Completely private, runs on your machine

See BEST_PRACTICES.md for guidance.
## Getting Help

### Documentation

- Read getting_started.md
- Check CLI_REFERENCE.md
- Browse examples

### Troubleshooting

See getting_started.md#troubleshooting

### Contributing

## What's Next?

- Ready to get started? → getting_started.md
- Want to understand agent files? → AGENTS.md
- Need to choose a provider? → llm_providers.md

Version: 0.1.0
Last Updated: December 2024