# TypeDialog Agent Documentation

User guide for **TypeDialog Agent** - Execute AI agents defined as markdown files with multi-provider LLM support.
## What is TypeDialog Agent?

TypeDialog Agent (`typedialog-ag`) is a CLI tool that allows you to:
- Define AI agents as markdown files (`.agent.mdx`)
- Execute them with multiple LLM providers (Claude, OpenAI, Gemini, Ollama)
- Inject context from files, shell commands, and templates
- Validate outputs automatically
- Stream responses in real-time
## Quick Links

### Getting Started

- **[getting_started.md](getting_started.md)** - Installation and first agent
- **[AGENTS.md](AGENTS.md)** - How to write agent files
- **[CLI_REFERENCE.md](CLI_REFERENCE.md)** - Command reference
### LLM Providers

- **[llm_providers.md](llm_providers.md)** - Complete provider guide
- Claude (Anthropic)
- OpenAI (GPT)
- Google Gemini
- Ollama (local models)
### Advanced Topics

- **[TEMPLATES.md](TEMPLATES.md)** - Template system and variables
- **[VALIDATION.md](VALIDATION.md)** - Output validation rules
- **[BEST_PRACTICES.md](BEST_PRACTICES.md)** - Tips and patterns
## Quick Start

### 1. Install

```bash
# Build from source
cargo build --release --package typedialog-ag

# The binary will be at: target/release/typedialog-ag
```
### 2. Set API Key

```bash
# For Claude
export ANTHROPIC_API_KEY=sk-ant-...

# For OpenAI
export OPENAI_API_KEY=sk-...

# For Gemini
export GEMINI_API_KEY=...

# For Ollama (local - no API key needed)
ollama serve
```
### 3. Create Your First Agent

Create `hello.agent.mdx`:

```markdown
---
@agent {
  role: friendly assistant,
  llm: claude-3-5-haiku-20241022
}

@input name: String
---

Say hello to {{name}} in a warm and friendly way!
```
### 4. Run It

```bash
typedialog-ag hello.agent.mdx
# Prompts: name (String): Alice
# Output: Hello Alice! It's wonderful to meet you...
```
## Documentation Structure

```text
docs/agent/
├── README.md           ← Overview (you are here)
├── getting_started.md  ← Installation & first steps
├── AGENTS.md           ← Writing agent files
├── llm_providers.md    ← Provider setup & comparison
├── CLI_REFERENCE.md    ← Command-line reference
├── TEMPLATES.md        ← Template system guide
├── VALIDATION.md       ← Output validation
└── BEST_PRACTICES.md   ← Tips & patterns
```
## Features

### Multi-Provider Support

Choose the best LLM for your task:

- **Claude** - Best for code, analysis, reasoning
- **OpenAI** - Excellent code understanding, broad capabilities
- **Gemini** - Creative tasks, multi-modal
- **Ollama** - Privacy-first, local execution, no costs
### Context Injection

Import context from multiple sources:

```yaml
@import "./src/**/*.rs" as code      # File imports
@shell "git status" as git_status    # Shell commands
@import ".env" as environment        # Configuration
```
### Template System

Powerful templating with LiquidJS:

```markdown
Hello {{name}}!

{% if premium %}
Welcome to premium features!
{% else %}
Upgrade to unlock more!
{% endif %}
```
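Standard Liquid supports iteration with the same tag syntax. A minimal sketch (the `files` variable is hypothetical, standing in for any list exposed to the template):

```markdown
{% for file in files %}
- {{file}}
{% endfor %}
```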
### Output Validation

Ensure quality automatically:

```yaml
@validate output {
  must_contain: ["## Summary", "## Recommendations"],
  format: markdown,
  min_length: 500
}
```
### Real-time Streaming

See responses as they're generated with token-by-token streaming.
## Examples

### Code Review

```bash
typedialog-ag examples/12-agent-execution/code-review.agent.mdx
```
### Documentation Generation

```bash
typedialog-ag examples/12-agent-execution/documentation.agent.mdx
```
### Task Planning

```bash
typedialog-ag examples/12-agent-execution/task-planner.agent.mdx
```
### Local Privacy Analysis

```bash
# Runs entirely locally with Ollama
typedialog-ag examples/12-agent-execution/local-privacy.agent.mdx
```

See [examples/12-agent-execution/](../../examples/12-agent-execution/) for all examples.
## Use Cases

### Development

- Code reviews
- Refactoring assistance
- Documentation generation
- Bug analysis
- Architecture design
### Content Creation

- Blog posts
- Documentation
- Technical writing
- Creative content
### Business

- Task planning
- Process documentation
- Decision support
- Data analysis
### Privacy-Sensitive

- Local data analysis with Ollama
- Compliance review
- Sensitive document processing
## Learning Path

```text
1. getting_started.md
   ↓ Install and run first agent

2. AGENTS.md
   ↓ Learn agent file format

3. llm_providers.md
   ↓ Choose your provider

4. TEMPLATES.md
   ↓ Master template system

5. Examples & Demos
   ↓ See real-world usage

6. BEST_PRACTICES.md
   ↓ Optimize your agents
```
## Resources

### Examples & Demos

- [Examples](../../examples/12-agent-execution/README.md) - 8 practical use cases
- [Demos](../../demos/agent/README.md) - Provider comparison demos
- [Tests](../../tests/agent/README.md) - Validation tests
### Technical Documentation

- [LLM Integration](../../crates/typedialog-agent/typedialog-ag-core/LLM_INTEGRATION.md) - Complete technical guide
- [Core Examples](../../crates/typedialog-agent/typedialog-ag-core/examples/) - Rust API usage
- [Developer Docs](../../crates/typedialog-agent/README.md) - Architecture and APIs
### Main Project

- [TypeDialog README](../../README.md) - Project overview
- [Examples](../../examples/README.md) - All project examples
## Common Questions

### Which LLM provider should I use?

See [llm_providers.md](llm_providers.md#choosing-a-provider) for a detailed comparison.

**Quick guide:**

- **Quick tasks** → Claude Haiku
- **Code work** → Claude Sonnet or GPT-4o
- **Creative** → Gemini 2.0
- **Privacy** → Ollama (local)
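In practice, switching providers means editing the `llm` field in the agent header (same format as in the Quick Start above). The model identifier here is illustrative; swap in the model you want, e.g. a Claude, Gemini, or Ollama model name, and check each provider's current model list:

```markdown
@agent {
  role: assistant,
  llm: gpt-4o
}
```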
### How much does it cost?

- **Claude** - Pay per token (see Anthropic pricing)
- **OpenAI** - Pay per token (see OpenAI pricing)
- **Gemini** - Free tier + pay per token
- **Ollama** - Free (runs locally)

See [llm_providers.md](llm_providers.md#cost-comparison) for details.
### Can I use it offline?

Yes. Use Ollama for completely offline operation:

```bash
ollama serve
ollama pull llama2
typedialog-ag my-agent.agent.mdx  # Uses llama2 locally
```
### Is my data private?

- **Cloud providers** (Claude, OpenAI, Gemini) - Data sent to their APIs
- **Ollama** - Completely private, runs on your machine

See [BEST_PRACTICES.md](BEST_PRACTICES.md#privacy-considerations) for guidance.
## Getting Help

### Documentation

1. Read [getting_started.md](getting_started.md)
2. Check [CLI_REFERENCE.md](CLI_REFERENCE.md)
3. Browse [examples](../../examples/12-agent-execution/)
### Troubleshooting

See [getting_started.md#troubleshooting](getting_started.md#troubleshooting)
### Contributing

See [Developer Documentation](../../crates/typedialog-agent/README.md)
## What's Next?

Ready to get started? → [getting_started.md](getting_started.md)

Want to understand agent files? → [AGENTS.md](AGENTS.md)

Need to choose a provider? → [llm_providers.md](llm_providers.md)
---

**Version**: 0.1.0

**Last Updated**: December 2024