TypeDialog Agent Demos
This directory contains demonstration agent files showcasing the multi-provider LLM capabilities of TypeDialog Agent.
Available Demos
Provider-Specific Demos
Each demo showcases a different LLM provider with the same haiku generation task:
- demo-claude.agent.mdx - Claude (Anthropic) with claude-3-5-haiku-20241022
- demo-openai.agent.mdx - OpenAI GPT with gpt-4o-mini
- demo-gemini.agent.mdx - Google Gemini with gemini-2.0-flash-exp
- demo-ollama.agent.mdx - Ollama local models with llama2
Running Demos
Run All Providers
From the project root directory:
./demos/agent/run_demo.sh
This will:
- Check for available API keys and Ollama server
- Build the CLI if needed
- Prompt for a haiku theme
- Run demos for all configured providers
- Display performance metrics and results
Run Individual Demos
From the project root directory:
# Claude
cargo run --package typedialog-ag -- demos/agent/demo-claude.agent.mdx
# OpenAI
cargo run --package typedialog-ag -- demos/agent/demo-openai.agent.mdx
# Gemini
cargo run --package typedialog-ag -- demos/agent/demo-gemini.agent.mdx
# Ollama (local)
cargo run --package typedialog-ag -- demos/agent/demo-ollama.agent.mdx
Requirements
Cloud Providers
Set the appropriate API keys as environment variables:
# For Claude
export ANTHROPIC_API_KEY=sk-ant-...
# For OpenAI
export OPENAI_API_KEY=sk-...
# For Gemini
export GEMINI_API_KEY=...
# OR
export GOOGLE_API_KEY=...
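Before running the demos, you can check which of these keys are visible in your current shell. A small POSIX-sh snippet (variable names exactly as above):

```shell
# Print which of the demo's expected API keys are set in this shell.
for var in ANTHROPIC_API_KEY OPENAI_API_KEY GEMINI_API_KEY GOOGLE_API_KEY; do
    eval "val=\${$var:-}"   # POSIX-safe indirect variable lookup
    if [ -n "$val" ]; then
        echo "$var is set"
    else
        echo "$var is NOT set"
    fi
done
```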
Local Models (Ollama)
- Install Ollama from https://ollama.ai
- Start the Ollama server:
ollama serve
- Pull a model:
ollama pull llama2
Optional: Override default Ollama URL:
export OLLAMA_BASE_URL=http://localhost:11434
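To confirm the Ollama server is reachable and llama2 has been pulled, you can query Ollama's /api/tags endpoint, which lists locally available models:

```shell
# Check that the Ollama server is up and that llama2 is available.
base="${OLLAMA_BASE_URL:-http://localhost:11434}"
if curl -sf "$base/api/tags" | grep -q '"name":"llama2'; then
    echo "Ollama ready with llama2"
else
    echo "Ollama not ready: start 'ollama serve' and run 'ollama pull llama2'"
fi
```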
Provider Comparison
| Provider | Streaming | Token Usage | Privacy | API Key Required |
|---|---|---|---|---|
| Claude | SSE | ✅ In stream | Cloud | ✅ Required |
| OpenAI | SSE | ❌ Not in stream | Cloud | ✅ Required |
| Gemini | JSON | ✅ In stream | Cloud | ✅ Required |
| Ollama | JSON | ✅ In stream | Local | ❌ Not required |
Demo Features
All demos showcase:
- Template variable substitution ({{ theme }})
- System prompts defining agent role
- Model configuration (max_tokens, temperature)
- Real-time streaming output
- Token usage tracking
- Performance metrics
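The {{ theme }} substitution step can be illustrated with a one-line shell equivalent. This is a stand-in sketch, not the agent's actual templating engine (and sed would need escaping if the theme contained a `/`):

```shell
# Hypothetical stand-in for the agent's template step: replace the
# {{ theme }} placeholder in a prompt with the user's chosen theme.
prompt='Write a haiku about {{ theme }}.'
theme='autumn rain'
rendered=$(printf '%s' "$prompt" | sed "s/{{ theme }}/$theme/")
echo "$rendered"   # Write a haiku about autumn rain.
```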
Rust Example
For a programmatic comparison of all providers, see:
cargo run --example provider_comparison
This example demonstrates both blocking and streaming modes for all four providers.