# TypeDialog Agent Demos

This directory contains demonstration agent files showcasing the multi-provider LLM capabilities of TypeDialog Agent.

## Available Demos

### Provider-Specific Demos

Each demo runs the same haiku generation task against a different LLM provider:

- **`demo-claude.agent.mdx`** - Claude (Anthropic) with claude-3-5-haiku-20241022
- **`demo-openai.agent.mdx`** - OpenAI GPT with gpt-4o-mini
- **`demo-gemini.agent.mdx`** - Google Gemini with gemini-2.0-flash-exp
- **`demo-ollama.agent.mdx`** - Ollama local models with llama2

## Running Demos

### Run All Providers

From the project root directory:

```bash
./demos/agent/run_demo.sh
```

This will:

1. Check for available API keys and a running Ollama server (sketched below)
2. Build the CLI if needed
3. Prompt for a haiku theme
4. Run demos for all configured providers
5. Display performance metrics and results
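
Step 1 boils down to probing each provider's configuration. A minimal sketch of that kind of check, assuming the environment variables from the Requirements section below and Ollama's default port (the real script may differ):

```bash
# Hedged sketch: which providers look usable in this shell?
[ -n "$ANTHROPIC_API_KEY" ] && echo "Claude: key found"
[ -n "$OPENAI_API_KEY" ] && echo "OpenAI: key found"
[ -n "$GEMINI_API_KEY$GOOGLE_API_KEY" ] && echo "Gemini: key found"
# Ollama exposes GET /api/tags; a 200 response means the server is up
curl -sf http://localhost:11434/api/tags > /dev/null && echo "Ollama: server reachable"
```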

### Run Individual Demos

From the project root directory:

```bash
# Claude
cargo run --package typedialog-ag -- demos/agent/demo-claude.agent.mdx

# OpenAI
cargo run --package typedialog-ag -- demos/agent/demo-openai.agent.mdx

# Gemini
cargo run --package typedialog-ag -- demos/agent/demo-gemini.agent.mdx

# Ollama (local)
cargo run --package typedialog-ag -- demos/agent/demo-ollama.agent.mdx
```

## Requirements

### Cloud Providers

Set the appropriate API keys as environment variables:

```bash
# For Claude
export ANTHROPIC_API_KEY=sk-ant-...

# For OpenAI
export OPENAI_API_KEY=sk-...

# For Gemini
export GEMINI_API_KEY=...
# OR
export GOOGLE_API_KEY=...
```
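
To see at a glance which of these keys are set in your current shell, a quick bash check (not part of the project) is:

```bash
# Report which provider keys the current shell exports (values not printed)
for var in ANTHROPIC_API_KEY OPENAI_API_KEY GEMINI_API_KEY GOOGLE_API_KEY; do
  [ -n "${!var}" ] && echo "$var: set" || echo "$var: missing"
done
```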

### Local Models (Ollama)

1. Install Ollama from https://ollama.ai
2. Start the Ollama server:

   ```bash
   ollama serve
   ```

3. Pull a model:

   ```bash
   ollama pull llama2
   ```

Optionally, override the default Ollama URL:

```bash
export OLLAMA_BASE_URL=http://localhost:11434
```
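
Whether or not you override the URL, you can confirm the server is reachable and see which models are available via Ollama's model-list endpoint:

```bash
# Lists locally pulled models; fails fast if the server is not running
curl ${OLLAMA_BASE_URL:-http://localhost:11434}/api/tags
```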

## Provider Comparison

| Provider | Streaming Format | Token Usage     | Hosting | API Key Required |
|----------|------------------|-----------------|---------|------------------|
| Claude   | SSE              | ✅ In stream    | Cloud   | ✅ Required      |
| OpenAI   | SSE              | ❌ Not reported | Cloud   | ✅ Required      |
| Gemini   | JSON             | ✅ In stream    | Cloud   | ✅ Required      |
| Ollama   | JSON             | ✅ In stream    | Local   | ❌ Not required  |
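
In the Streaming Format column, "JSON" means newline-delimited JSON rather than Server-Sent Events. You can see the difference by calling a provider directly; for example, Ollama's generate endpoint (assuming `llama2` is pulled) emits one JSON object per line:

```bash
# Each line is a standalone JSON chunk; the final one has "done": true
curl -s http://localhost:11434/api/generate \
  -d '{"model": "llama2", "prompt": "Write a haiku about autumn"}'
```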

## Demo Features

All demos showcase the following (a rough sketch of an agent file appears after this list):

- Template variable substitution (`{{ theme }}`)
- System prompts defining the agent's role
- Model configuration (max_tokens, temperature)
- Real-time streaming output
- Token usage tracking
- Performance metrics
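
For orientation, here is a rough, hypothetical sketch of how an agent file might combine these features. The actual frontmatter schema is defined by TypeDialog Agent (see the `demo-*.agent.mdx` files for the real format); field names below are illustrative only:

```mdx
---
# Hypothetical frontmatter; consult the real demo files for the actual schema
provider: claude
model: claude-3-5-haiku-20241022
max_tokens: 200
temperature: 0.7
system: You are a poet who writes vivid, seasonal haiku.
---

Write a haiku about {{ theme }}.
```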

## Rust Example

For a programmatic comparison of all providers, see:

```bash
cargo run --example provider_comparison
```

This example demonstrates both blocking and streaming modes for all four providers.