# TypeDialog Agent Demos

This directory contains demonstration agent files showcasing the multi-provider LLM capabilities of TypeDialog Agent.

## Available Demos

### Provider-Specific Demos

Each demo showcases a different LLM provider with the same haiku generation task:

  • demo-claude.agent.mdx - Claude (Anthropic) with claude-3-5-haiku-20241022
  • demo-openai.agent.mdx - OpenAI GPT with gpt-4o-mini
  • demo-gemini.agent.mdx - Google Gemini with gemini-2.0-flash-exp
  • demo-ollama.agent.mdx - Ollama local models with llama2

## Running Demos

### Run All Providers

From the project root directory:

```bash
./demos/agent/run_demo.sh
```

This will:
1. Check for available API keys and Ollama server
2. Build the CLI if needed
3. Prompt for a haiku theme
4. Run demos for all configured providers
5. Display performance metrics and results
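Under the hood, those availability checks amount to inspecting environment variables and probing the local Ollama port. A minimal sketch of that detection logic, using the variable names from the Requirements section below (the actual `run_demo.sh` may differ):

```bash
# Sketch only: mirrors the availability checks described above.
available=()
[ -n "$ANTHROPIC_API_KEY" ] && available+=(claude)
[ -n "$OPENAI_API_KEY" ] && available+=(openai)
{ [ -n "$GEMINI_API_KEY" ] || [ -n "$GOOGLE_API_KEY" ]; } && available+=(gemini)
curl -fsS "${OLLAMA_BASE_URL:-http://localhost:11434}/api/tags" >/dev/null 2>&1 \
  && available+=(ollama)
echo "Configured providers: ${available[*]:-none}"
```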

### Run Individual Demos

From the project root directory:

```bash
# Claude
cargo run --package typedialog-ag -- demos/agent/demo-claude.agent.mdx

# OpenAI
cargo run --package typedialog-ag -- demos/agent/demo-openai.agent.mdx

# Gemini
cargo run --package typedialog-ag -- demos/agent/demo-gemini.agent.mdx

# Ollama (local)
cargo run --package typedialog-ag -- demos/agent/demo-ollama.agent.mdx
```
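If you run the demos repeatedly, building once and invoking the binary directly avoids the `cargo run` overhead. This assumes the `typedialog-ag` package produces a binary of the same name; check `target/debug/` after building:

```bash
# Build once, then run a demo without recompiling each time.
cargo build --package typedialog-ag
./target/debug/typedialog-ag demos/agent/demo-claude.agent.mdx
```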

## Requirements

### Cloud Providers

Set the appropriate API keys as environment variables:

```bash
# For Claude
export ANTHROPIC_API_KEY=sk-ant-...

# For OpenAI
export OPENAI_API_KEY=sk-...

# For Gemini
export GEMINI_API_KEY=...
# OR
export GOOGLE_API_KEY=...
```
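To fail fast with a clear error when the key for your chosen provider is missing, the shell's `${VAR:?}` expansion is a handy one-liner (a convenience check, not part of the demos themselves):

```bash
# Abort with a message if the required key is unset or empty.
: "${ANTHROPIC_API_KEY:?set ANTHROPIC_API_KEY before running the Claude demo}"
```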

### Local Models (Ollama)

1. Install Ollama from https://ollama.ai
2. Start the Ollama server:
   ```bash
   ollama serve
   ```
3. Pull a model:
   ```bash
   ollama pull llama2
   ```

Optional: override the default Ollama URL:

```bash
export OLLAMA_BASE_URL=http://localhost:11434
```
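Before running the Ollama demo, you can confirm the server is reachable and the model is present; Ollama's `/api/tags` endpoint lists locally pulled models:

```bash
# Check that the server responds and llama2 has been pulled.
curl -fsS "${OLLAMA_BASE_URL:-http://localhost:11434}/api/tags" | grep -q '"llama2' \
  && echo "Ollama ready" \
  || echo "Ollama not running or llama2 not pulled"
```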

## Provider Comparison

| Provider | Streaming | Token Usage | Privacy | API Key Required |
|----------|-----------|-------------|---------|------------------|
| Claude   | SSE       | ✅ In stream | Cloud   | ✅ Required      |
| OpenAI   | SSE       | ❌ Not in stream | Cloud | ✅ Required      |
| Gemini   | JSON      | ✅ In stream | Cloud   | ✅ Required      |
| Ollama   | JSON      | ✅ In stream | Local   | ❌ Not required  |

## Demo Features

All demos showcase:
- Template variable substitution (`{{ theme }}`)
- System prompts defining agent role
- Model configuration (max_tokens, temperature)
- Real-time streaming output
- Token usage tracking
- Performance metrics
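The `.agent.mdx` schema itself is specific to TypeDialog Agent and not documented here. Purely as an illustration of how the features above fit together, a hypothetical agent file might pair frontmatter-style model configuration with a templated prompt body; the field names below are assumptions, not the real schema (see the bundled `demos/agent/*.agent.mdx` files for the actual format):

```mdx
---
# Hypothetical fields, for illustration only.
provider: claude
model: claude-3-5-haiku-20241022
max_tokens: 200
temperature: 0.7
system: You are a poet who writes traditional haiku.
---

Write a haiku about {{ theme }}.
```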

## Rust Example

For a programmatic comparison of all providers, see:

```bash
cargo run --example provider_comparison
```

This example demonstrates both blocking and streaming modes for all four providers.