# Configuration Files

Pre-configured settings for each typedialog backend and environment.

## Overview

Configuration files are organized by backend (CLI, TUI, Web, AI, Agent, Prov-gen) and environment (default, dev, production).

```text
config/
├── cli/
│   ├── default.toml       # Standard CLI settings
│   ├── dev.toml           # Development (debugging enabled)
│   └── production.toml    # Production (optimized, hardened)
├── tui/
│   ├── default.toml       # Standard TUI settings
│   ├── dev.toml           # Development features enabled
│   └── production.toml    # Optimized for deployment
├── web/
│   ├── default.toml       # Standard web server settings
│   ├── dev.toml           # Development (hot reload)
│   └── production.toml    # Hardened for production
├── ai/
│   ├── default.toml       # Standard AI/RAG settings
│   ├── dev.toml           # Development RAG pipeline
│   └── production.toml    # Optimized AI inference
├── ag/
│   ├── default.toml       # Standard Agent settings
│   ├── dev.toml           # Development (local Ollama)
│   └── production.toml    # Production (Claude Sonnet)
└── prov-gen/
    ├── default.toml       # Standard provisioning settings
    ├── dev.toml           # Development (Hetzner/LXD)
    └── production.toml    # Production (AWS/GCP)
```

## Backend Configurations

### CLI Backend

**Command-line interface** - Simple text-based forms for scripts and automation.

| Config | Purpose | Debug | Colors | Timeout |
|--------|---------|-------|--------|---------|
| default | Standard | No | Yes | 300s |
| dev | Development | Yes | Yes | 300s |
| production | Production | No | Yes | 3600s |

**Usage:**
```bash
typedialog --config config/cli/production.toml form.toml
```
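
Because the CLI backend targets scripts and automation, a run can be wrapped in a small pipeline. A minimal sketch, assuming the completed form's answers are emitted to stdout in the configured output format (JSON here) and that a field named `username` exists in the form:

```bash
# Capture the answers of a completed form run; this assumes answers are
# printed to stdout as JSON per the [output] settings shown later.
typedialog --config config/cli/production.toml form.toml > answers.json

# Post-process with jq; the "username" field is purely illustrative.
jq -r '.username' answers.json
```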

**Features:**
- Inline validation
- Optional mouse support
- Placeholder text display
- Help text display

### TUI Backend

**Terminal user interface** - Interactive multi-panel forms with navigation.

| Config | Purpose | Borders | Animations | Render |
|--------|---------|---------|-----------|--------|
| default | Standard | Rounded | Enabled | Auto |
| dev | Development | Double | Enabled | Debug |
| production | Production | Rounded | Disabled | Optimized |

**Usage:**
```bash
typedialog-tui --config config/tui/production.toml form.toml
```
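
For quick experiments, the documented environment overrides can be combined with the default config instead of editing a file (the `TYPEDIALOG_TUI_BORDER` variable appears under Override Specific Settings below):

```bash
# One-off border-style override without touching config/tui/default.toml.
TYPEDIALOG_TUI_BORDER=double typedialog-tui --config config/tui/default.toml form.toml
```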

**Features:**
- 3-panel layout (fields, input, buttons)
- Mouse support
- Keyboard navigation
- Real-time field updates

### Web Backend

**HTTP server** - Browser-based forms with REST API.

| Config | Purpose | CORS | HTTPS | Rate Limit |
|--------|---------|------|-------|-----------|
| default | Standard | Localhost | No | Unlimited |
| dev | Development | Enabled | No | Unlimited |
| production | Production | Restricted | Required | 100/min |

**Usage:**
```bash
typedialog-web --config config/web/production.toml
# Server starts on http://localhost:8080
```
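
Once the server is up it can be smoke-tested from the shell. Only the root address from the comment above is assumed here; specific REST routes are not documented in this file:

```bash
# Print the HTTP status code returned by the root path.
curl -sS -o /dev/null -w '%{http_code}\n' http://localhost:8080/
```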

**Features:**
- HTML/CSS rendering
- CSRF protection
- Response caching
- Gzip compression

### AI Backend

**RAG and embeddings** - AI-powered form assistance and knowledge retrieval.

| Config | Purpose | Provider | Embeddings | Cache |
|--------|---------|----------|-----------|-------|
| default | Standard | OpenAI | text-embedding-3-small | Memory |
| dev | Development | Ollama | nomic-embed-text | Memory |
| production | Production | OpenAI | text-embedding-3-large | Redis |

**Usage:**
```bash
typedialog-ai --config config/ai/production.toml --query "user question"
```
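
The production config selects the OpenAI provider, which presumably needs credentials; the conventional `OPENAI_API_KEY` environment variable is assumed below, since this document does not state how keys are supplied:

```bash
# Assumption: the OpenAI provider reads the conventional OPENAI_API_KEY
# variable. Adjust if typedialog-ai expects keys in its config file instead.
export OPENAI_API_KEY="<your-api-key>"
typedialog-ai --config config/ai/production.toml --query "How do I reset my password?"
```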

**Features:**
- RAG pipeline for context retrieval
- Vector embeddings
- Knowledge graph generation
- Multi-provider support (OpenAI, Ollama)

### Agent Backend (typedialog-ag)

**LLM agent execution** - Run AI agents from markdown files.

| Config | Purpose | Provider | Model | Streaming |
|--------|---------|----------|-------|-----------|
| default | Standard | Claude | haiku | Yes |
| dev | Development | Ollama | llama2 | Yes |
| production | Production | Claude | sonnet | Yes |

**Usage:**
```bash
typedialog-ag --config config/ag/production.toml task.agent.mdx
```
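
The `task.agent.mdx` passed above is a markdown file rendered through Tera before execution (see Features below). Its exact schema is not shown in this document, so the following is only a hypothetical sketch of a template body with Tera-style placeholders; how variables are supplied (flags, environment, front matter) is likewise not covered here:

```bash
# Hypothetical agent file; the real .agent.mdx layout may differ.
cat > task.agent.mdx <<'EOF'
# Summarize release notes

Summarize the changes in version {{ version }} for end users.
Keep the summary under {{ max_words }} words.
EOF

# Run it against the development config (local Ollama).
typedialog-ag --config config/ag/dev.toml task.agent.mdx
```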

**Features:**
- Multi-provider (Claude, OpenAI, Gemini, Ollama)
- Tera template engine (Jinja2-compatible)
- Streaming responses
- Variable substitution

### Provisioning Generator (typedialog-prov-gen)

**Infrastructure as Code** - Generate provisioning configurations.

| Config | Purpose | Providers | AI | Validation |
|--------|---------|-----------|----|-----------|
| default | Standard | AWS, Hetzner | Disabled | Enabled |
| dev | Development | Hetzner, LXD | Ollama | Strict |
| production | Production | AWS, GCP | Claude | Strict + Security |

**Usage:**
```bash
typedialog-prov-gen --config config/prov-gen/production.toml --name myproject
```
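
A typical round trip with the development config; the `./provisioning` output directory is taken from the default settings documented below and may differ per environment file:

```bash
# Generate with the dev config (Hetzner/LXD targets), then inspect the output.
typedialog-prov-gen --config config/prov-gen/dev.toml --name myproject
ls -R ./provisioning
```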

**Features:**
- Multi-cloud support (AWS, GCP, Hetzner, UpCloud, LXD)
- Nickel-based validation
- AI-assisted generation
- Template fragments

## Configuration by Environment

### Development Configuration

Features enabled for development and debugging:

```toml
# CLI Dev
debug_output = true
log_level = "debug"
show_field_types = true

# TUI Dev
show_field_indices = true
trace_rendering = false

# Web Dev
hot_reload = true
debug = true
logs = "/tmp/typedialog-web.log"

# AI Dev
provider = "ollama"
embedding_model = "nomic-embed-text"
log_level = "debug"

# Agent Dev
default_provider = "ollama"
log_level = "debug"
temperature = 0.7

# Prov-gen Dev
default_providers = ["hetzner", "lxd"]
verbose = true
log_level = "debug"
```

**Usage:**
```bash
typedialog --config config/cli/dev.toml form.toml
typedialog-tui --config config/tui/dev.toml form.toml
typedialog-web --config config/web/dev.toml
typedialog-ai --config config/ai/dev.toml --query "question"
typedialog-ag --config config/ag/dev.toml task.agent.mdx
typedialog-prov-gen --config config/prov-gen/dev.toml --name project
```

### Production Configuration

Hardened settings optimized for deployment:

```toml
# CLI Production
debug_output = false
log_level = "error"
show_placeholders = false
timeout = 3600

# TUI Production
enable_animations = false
render_throttle = 16         # milliseconds
max_fps = 60

# Web Production
require_https = true
csrf_enabled = true
rate_limit = 100
cache_ttl = 3600

# AI Production
provider = "openai"
embedding_model = "text-embedding-3-large"
cache_type = "redis"
log_level = "warn"

# Agent Production
default_provider = "claude"
model = "claude-3-5-sonnet-20241022"
temperature = 0.3
rate_limit_enabled = true

# Prov-gen Production
default_providers = ["aws", "gcp"]
strict_validation = true
require_encryption = true
require_tests = true
```

**Usage:**
```bash
typedialog --config config/cli/production.toml form.toml
typedialog-tui --config config/tui/production.toml form.toml
typedialog-web --config config/web/production.toml
typedialog-ai --config config/ai/production.toml --query "question"
typedialog-ag --config config/ag/production.toml task.agent.mdx
typedialog-prov-gen --config config/prov-gen/production.toml --name project
```

## Common Settings

### Form Configuration

```toml
[form]
title = "Form Title"
description = "Optional description"

[form.validation]
validate_on_change = true
show_errors_inline = true
strict_validation = true
```

### Output Configuration

```toml
[output]
format = "json"          # json, yaml, toml, text
pretty_print = true
debug_output = false
```

### Logging

```toml
[logging]
level = "info"           # debug, info, warn, error
file = "/var/log/typedialog/app.log"
```

## Custom Configuration

### Creating Custom Configurations

Create a new TOML file based on an environment template:

```bash
# Copy production config as base
cp config/cli/production.toml config/cli/custom.toml

# Edit for your needs
nano config/cli/custom.toml

# Use it
typedialog --config config/cli/custom.toml form.toml
```
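
Alternatively, write a small file from scratch using the keys shown under Common Settings; a sketch with illustrative values, assuming unspecified sections fall back to built-in defaults:

```bash
# Minimal custom config built only from keys documented in this file.
cat > config/cli/custom.toml <<'EOF'
[form.validation]
validate_on_change = true
show_errors_inline = true

[output]
format = "yaml"              # json, yaml, toml, text
pretty_print = true

[logging]
level = "info"
file = "/var/log/typedialog/app.log"
EOF

typedialog --config config/cli/custom.toml form.toml
```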

### Override Specific Settings

Use environment variables to override config:

```bash
# CLI Backend
export TYPEDIALOG_DEBUG=1
export TYPEDIALOG_LOG_LEVEL=debug
typedialog --config config/cli/default.toml form.toml

# TUI Backend
export TYPEDIALOG_TUI_BORDER=double
typedialog-tui --config config/tui/default.toml form.toml

# Web Backend
export TYPEDIALOG_WEB_PORT=3000
export TYPEDIALOG_WEB_CORS_ORIGINS="localhost,example.com"
typedialog-web --config config/web/default.toml
```

## CLI Backend Configuration Details

```toml
[terminal]
use_raw_mode = true          # Enable raw terminal mode
enable_mouse = false         # Mouse support
use_color = true             # Colored output

[appearance]
theme = "default"            # Color theme
show_help = true             # Display field help
show_placeholders = true     # Show placeholder text

[validation]
validate_on_change = true    # Real-time validation
show_errors_inline = true    # Inline error messages

[timeout]
max_duration = 3600          # Max form time (seconds)
input_timeout = 300          # Field input timeout
```

## TUI Backend Configuration Details

```toml
[terminal]
use_raw_mode = true
enable_mouse = true
enable_scrolling = true
height = -1                  # -1 = auto
width = -1                   # -1 = auto

[ui]
show_borders = true
show_focus = true
highlight_on_hover = true
enable_animations = true

[appearance]
theme = "default"
border_style = "rounded"     # rounded, double, simple
color_scheme = "default"

[keyboard]
vi_mode = false
emacs_mode = false

[performance]
render_throttle = 16         # milliseconds
max_fps = 60                 # frames per second
```

## Web Backend Configuration Details

```toml
[server]
host = "0.0.0.0"
port = 8080
workers = 4

[html]
css_framework = "none"       # bootstrap, tailwind, none
inline_styles = false
responsive = true
dark_mode = true

[submission]
method = "post"
webhook_url = "https://api.example.com/forms"
redirect_on_success = true
redirect_url = "https://example.com/thank-you"

[security]
csrf_enabled = true
rate_limit = 100             # requests per minute
require_https = true
add_security_headers = true

[performance]
cache_static = true
cache_ttl = 3600
enable_compression = true
compression_threshold = 1024
```

## AI Backend Configuration Details

```toml
[ai]
provider = "openai"           # openai, ollama
embedding_model = "text-embedding-3-small"

[ai.rag]
enabled = true
chunk_size = 512
chunk_overlap = 50
top_k = 5                     # Top results to return

[ai.vector_store]
type = "memory"               # memory, redis
dimensions = 1536

[ai.cache]
enabled = true
ttl = 3600                    # seconds

[ai.knowledge_graph]
enabled = false
max_depth = 3
```

## Agent Backend Configuration Details

```toml
[agent]
default_provider = "claude"   # claude, openai, gemini, ollama

[agent.models]
claude = "claude-3-5-haiku-20241022"
openai = "gpt-4o-mini"
gemini = "gemini-2.0-flash-exp"
ollama = "llama2"

[agent.defaults]
max_tokens = 4096
temperature = 0.7
streaming = true

[agent.template]
engine = "tera"               # Jinja2-compatible
strict_variables = true       # Error on undefined vars

[agent.validation]
enabled = true
strict = true

[agent.output]
format = "markdown"
color = true
timestamp = true

[agent.logging]
level = "info"                # debug, info, warn, error
file = true
```

## Provisioning Generator Configuration Details

```toml
[provisioning]
output_dir = "./provisioning"
default_providers = ["aws", "hetzner"]

[provisioning.generation]
overwrite = false             # Require explicit --force
dry_run = false
verbose = false

[provisioning.templates]
base_path = "templates"
# custom_path is unset by default (omit the key; TOML has no null)

[provisioning.infrastructure]
environment = "development"   # development, staging, production
region = "us-east-1"

[provisioning.nickel]
validate_schemas = true
generate_defaults = true
use_constraints = true

[provisioning.ai]
enabled = false               # AI-assisted generation
provider = "claude"
model = "claude-3-5-sonnet-20241022"

[provisioning.logging]
level = "info"
file = false

[provisioning.validation]
strict = false
require_tests = false

[provisioning.security]
require_encryption = false
scan_templates = false
```

## Distribution Configurations

When creating release distributions, all configurations are included:

```bash
# Build and package
just distro::build-release
just distro::create-package

# Result includes all configs
distribution/typedialog-0.1.0/
├── config/
│   ├── cli/
│   ├── tui/
│   ├── web/
│   ├── ai/
│   ├── ag/
│   └── prov-gen/
└── ...
```

Users can then choose configs during installation:

```bash
# Extract distribution
tar -xzf typedialog-0.1.0.tar.gz
cd typedialog-0.1.0

# Install
bash installers/install.sh

# Use specific config
typedialog --config ~/.config/typedialog/cli/production.toml form.toml
```

## Best Practices

### Development
- Use `dev` configurations for local development
- Enable debugging and verbose logging
- Use shorter timeouts for faster iteration

### Testing
- Use `default` configurations for testing
- Run integration tests with all environments

### Production
- Always use `production` configurations
- Verify HTTPS is enabled for the web backend (a quick check is sketched below)
- Set appropriate rate limits
- Configure logging to a persistent location
- Test thoroughly in staging first
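
A minimal pre-deployment sanity check for the web backend, assuming the production file uses the key names documented in this file:

```bash
# Confirm the hardened settings are present before deploying.
grep -E 'require_https|csrf_enabled|rate_limit' config/web/production.toml
```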

## Related Documentation

- [BUILD_AND_DISTRIBUTION.md](../BUILD_AND_DISTRIBUTION.md) - Build guide
- [DISTRIBUTION_WORKFLOW.md](../DISTRIBUTION_WORKFLOW.md) - Release workflow
- [README.md](../README.md) - Main documentation