# Getting Started with TypeDialog Agent

Complete guide to installing and using TypeDialog Agent for the first time.

## Installation

### Prerequisites

- **Rust** 1.75 or later
- **Cargo** (comes with Rust)
- At least one LLM provider:
  - API key for Claude, OpenAI, or Gemini
  - OR Ollama installed for local models

### Install Rust

If you don't have Rust installed:

```bash
# Install Rust via rustup
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# Verify installation
rustc --version
cargo --version
```

### Build TypeDialog Agent

```bash
# Clone the repository (if not already)
cd /path/to/typedialog

# Build the CLI
cargo build --release --package typedialog-ag

# The binary will be at
# target/release/typedialog-ag
```

### Add to PATH (Optional)

```bash
# Option 1: Copy to a directory in PATH
sudo cp target/release/typedialog-ag /usr/local/bin/

# Option 2: Create an alias
echo 'alias typedialog-ag="/path/to/typedialog/target/release/typedialog-ag"' >> ~/.bashrc
source ~/.bashrc

# Verify
typedialog-ag --help
```

## Set Up an LLM Provider

Choose at least one provider to get started:

### Option 1: Claude (Anthropic) - Recommended

```bash
# Sign up at: https://console.anthropic.com
# Get your API key
export ANTHROPIC_API_KEY=sk-ant-...

# Add to shell profile for persistence
echo 'export ANTHROPIC_API_KEY=sk-ant-...' >> ~/.bashrc
```

**Why Claude?**

- Excellent code understanding
- Strong reasoning capabilities
- Reliable and consistent
- Good balance of speed/quality/cost

### Option 2: OpenAI (GPT)

```bash
# Sign up at: https://platform.openai.com
# Get your API key
export OPENAI_API_KEY=sk-...

# Add to shell profile
echo 'export OPENAI_API_KEY=sk-...' >> ~/.bashrc
```

**Why OpenAI?**

- Excellent code generation
- Broad general knowledge
- Well-documented and reliable

### Option 3: Google Gemini

```bash
# Get API key from: https://aistudio.google.com/app/apikey
export GEMINI_API_KEY=...

# Or use GOOGLE_API_KEY
export GOOGLE_API_KEY=...
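# Optional sanity check (plain shell, not a TypeDialog feature):
# prints a confirmation once either variable above is exported
[ -n "${GEMINI_API_KEY}${GOOGLE_API_KEY}" ] && echo "Gemini key set: yes"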
```

**Why Gemini?**

- Creative content generation
- Multi-modal capabilities
- Free tier available

### Option 4: Ollama (Local) - Privacy First

```bash
# Install Ollama
# Download from: https://ollama.ai

# Start Ollama server
ollama serve

# Pull a model (in another terminal)
ollama pull llama2

# Verify
curl http://localhost:11434/api/tags
```

**Why Ollama?**

- ✅ Complete privacy - runs locally
- ✅ No API costs
- ✅ Offline operation
- ✅ No data sent to cloud

## Your First Agent

### Step 1: Create an Agent File

Create `hello.agent.mdx`:

```markdown
---
@agent {
  role: friendly assistant,
  llm: claude-3-5-haiku-20241022
}
@input name: String
---

Say hello to {{name}} in a warm and friendly way!
Include a fun fact about their name if you know one.
```

### Step 2: Run the Agent

```bash
typedialog-ag hello.agent.mdx
```

You'll be prompted:

```text
name (String): Alice
```

Expected output:

```text
Hello Alice! It's wonderful to meet you! 🌟

Fun fact: The name Alice means "noble" and "truth" in Old German...
```

### Step 3: Understand What Happened

1. **Agent loaded** - Parsed the `.agent.mdx` file
2. **Provider selected** - Chose Claude Haiku based on `llm: claude-3-5-haiku-20241022`
3. **Input collected** - Prompted for the `name` variable
4. **Template rendered** - Replaced `{{name}}` with "Alice"
5. **LLM called** - Sent to Claude API
6. **Response streamed** - Displayed token-by-token

## Common Examples

### Code Review Agent

Create `review.agent.mdx`:

````markdown
---
@agent {
  role: senior code reviewer,
  llm: claude-3-5-sonnet-20241022,
  temperature: 0.3
}
@input file_path: String
@import "{{file_path}}" as code_content
@validate output {
  must_contain: ["## Security", "## Performance"],
  format: markdown,
  min_length: 300
}
---

# Code Review

## Code to Review

```
{{code_content}}
```

## Task

Perform a thorough code review focusing on:

- Security vulnerabilities
- Performance issues
- Best practices
- Potential bugs

Provide specific, actionable recommendations.
````

Run it:

```bash
typedialog-ag review.agent.mdx

# Prompts:
#   file_path (String): ./src/main.rs
```

### Documentation Generator

Create `docs.agent.mdx`:

```markdown
---
@agent {
  role: technical writer,
  llm: gpt-4o,
  max_tokens: 4096
}
@input module_path: String
@import "{{module_path}}/**/*.rs" as source_code
@validate output {
  format: markdown,
  min_length: 800
}
---

# Generate API Documentation

## Source Code

{{source_code}}

## Task

Create comprehensive API documentation including:

1. Overview of the module
2. Public API reference
3. Usage examples with code snippets
4. Common use cases
```

Run it:

```bash
typedialog-ag docs.agent.mdx

# Prompts:
#   module_path (String): ./src/parser
```

### Local Privacy Analysis

Create `privacy.agent.mdx`:

```markdown
---
@agent {
  role: privacy consultant,
  llm: llama2
}
@input data_type: String
@input regulation: String
@import ".env" as env_config
---

# Privacy Analysis

You're analyzing: {{data_type}}
Regulation: {{regulation}}
Environment: {{env_config}}

Provide:

1. Privacy requirements for {{regulation}}
2. Recommended security measures
3. Data handling guidelines
4. Compliance checklist

**Note**: This runs 100% locally via Ollama - your data never leaves your machine!
```

Run it:

```bash
# Requires Ollama running
ollama serve

typedialog-ag privacy.agent.mdx

# Prompts:
#   data_type (String): user emails
#   regulation (String): GDPR
```

## CLI Usage Patterns

### Interactive Mode (Default)

```bash
# Prompts for all inputs
typedialog-ag agent.agent.mdx
```

### Skip Prompts with --yes

```bash
# Uses default values (empty strings for missing inputs)
typedialog-ag agent.agent.mdx --yes
```

### Validate Without Execution

```bash
# Check syntax and type-check
typedialog-ag validate agent.agent.mdx
```

### Transpile to Nickel

```bash
# See the Nickel intermediate representation
typedialog-ag transpile agent.agent.mdx
```

### Show Help

```bash
typedialog-ag --help
typedialog-ag validate --help
```

## Next Steps

### Learn More

- **[AGENTS.md](AGENTS.md)** - Complete agent file format
- **[llm-providers.md](llm-providers.md)** - Provider comparison and setup
- **[TEMPLATES.md](TEMPLATES.md)** - Template system guide
- **[CLI_REFERENCE.md](CLI_REFERENCE.md)** - All CLI commands

### Try Examples

```bash
# Basic examples
typedialog-ag examples/12-agent-execution/basic-greeting.agent.mdx

# Advanced examples
typedialog-ag examples/12-agent-execution/code-review.agent.mdx
typedialog-ag examples/12-agent-execution/task-planner.agent.mdx

# See all examples
ls examples/12-agent-execution/
cat examples/12-agent-execution/README.md
```

### Run Demos

```bash
# Compare all providers
./demos/agent/run_demo.sh

# Or run the provider comparison example
cargo run --example provider_comparison
```

## Troubleshooting

### API Key Not Set

**Error:**

```text
Error: ANTHROPIC_API_KEY environment variable not set
```

**Solution:**

```bash
export ANTHROPIC_API_KEY=sk-ant-...
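# Optional fail-fast check (plain shell parameter expansion,
# not a TypeDialog feature): errors out if the key is still missing
: "${ANTHROPIC_API_KEY:?is not set - re-run the export above}"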
# Or add to ~/.bashrc for persistence
```

### File Not Found

**Error:**

```text
Error: Failed to read agent file: hello.agent.mdx
```

**Solution:**

```bash
# Check the file exists
ls hello.agent.mdx

# Use an absolute path
typedialog-ag /full/path/to/hello.agent.mdx
```

### Import Pattern Not Found

**Error:**

```text
Error: No files matched pattern: ./src/**/*.rs
```

**Solution:**

```bash
# Check the files exist
ls ./src/**/*.rs

# Use the correct relative path
# Paths are relative to the project root, not the agent file
```

### Ollama Not Running

**Error:**

```text
Error: Failed to call Ollama API - is Ollama running?
```

**Solution:**

```bash
# Start the Ollama server
ollama serve

# In another terminal, verify
curl http://localhost:11434/api/tags
```

### Quota Exceeded

**Error:**

```text
Error: 429 Too Many Requests - quota exceeded
```

**Solution:**

- Wait for the quota to reset
- Upgrade your API plan
- Use a different provider
- Use Ollama for local, free execution

### Validation Failed

**Error:**

```text
Error: Validation failed: missing required content
```

**Solution:**

```yaml
# Adjust validation rules in your agent
@validate output {
  must_contain: ["Summary"],  # Make less strict
  min_length: 100             # Lower the requirement
}
```

### Template Error

**Error:**

```text
Error: Failed to render template
```

**Solution:**

- Check that variable names match: `@input name` and `{{name}}`
- Ensure proper liquid syntax: `{{#if var}}...{{/if}}`
- Verify nested templates are closed properly

## Performance Tips

### Choose the Right Model

- **Quick tasks** → Use Haiku (fast, cheap)
- **Complex reasoning** → Use Sonnet (balanced)
- **Critical design** → Use Opus (best quality)
- **Local/private** → Use Ollama (free, private)

### Limit Tokens

```yaml
@agent {
  max_tokens: 500  # For short responses
}
```

### Adjust Temperature

```yaml
@agent {
  temperature: 0.2  # Low for consistent, factual output (code)
  temperature: 0.7  # Medium for balanced output
  temperature: 0.9  # High for creative content
}
```

### Use Validation

```yaml
@validate output {
  min_length: 300,
  max_length: 1000  # Prevent overly long responses
}
```

## Best Practices

### 1. Start Simple

Begin with basic agents and add complexity gradually.

### 2. Use Appropriate Models

Don't use Opus for simple tasks - it's slower and more expensive.

### 3. Validate Critical Outputs

Always validate when accuracy matters:

```yaml
@validate output {
  must_contain: ["PASS", "FAIL"],
  format: json
}
```

### 4. Use Ollama for Iteration

Develop and test with Ollama (free), then switch to cloud providers for production.

### 5. Keep Prompts Focused

Clear, specific prompts get better results than vague ones.

### 6. Use Context Wisely

Import only relevant context - too much can confuse the model.

## What's Next

**Ready to dive deeper?**

1. Read [AGENTS.md](AGENTS.md) to master the agent file format
2. Compare providers in [llm-providers.md](llm-providers.md)
3. Learn advanced templates in [TEMPLATES.md](TEMPLATES.md)
4. Explore [examples/12-agent-execution/](../../examples/12-agent-execution/)

**Want to contribute?** See [Developer Documentation](../../crates/typedialog-agent/README.md)

---

**Questions?** Check [troubleshooting](#troubleshooting) or read the [FAQ](README.md#common-questions).
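As a one-line recap, the `validate` subcommand and `--yes` flag shown in this guide can be chained into a check-then-run workflow (a sketch; `hello.agent.mdx` is the example file from Step 1):

```bash
# Validate first; run non-interactively only if validation passes
typedialog-ag validate hello.agent.mdx && typedialog-ag hello.agent.mdx --yes
```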