# TypeDialog AI Backend

AI-powered backend with RAG (Retrieval-Augmented Generation), knowledge graphs, and semantic search.

## Overview

The AI backend (typedialog-ai) provides AI/ML capabilities for TypeDialog forms, including semantic search, document retrieval, knowledge graph integration, and embeddings-based similarity.

## Features

  • RAG System: Retrieval-Augmented Generation for context-aware responses
  • Knowledge Graphs: Structured knowledge representation
  • Vector Embeddings: Semantic similarity and search
  • Batch Operations: Efficient processing of large document sets
  • Multi-Model Support: Pluggable embedding models
  • REST API: HTTP endpoints for AI operations
  • WebSocket: Real-time streaming responses

## Quick Start

### Installation

```bash
cargo build --release -p typedialog-ai --features ai_backend
sudo cp target/release/typedialog-ai /usr/local/bin/

# Or use just
just build::ai
```

### Basic Usage

```bash
# Start AI backend server
typedialog-ai --config config/ai/default.toml

# With custom port
typedialog-ai --port 8000

# In library mode (no server)
cargo run --example ai_rag
```

## Architecture

### Components

```text
┌─────────────────────────────────────────┐
│         TypeDialog AI Backend           │
├─────────────────────────────────────────┤
│ REST API + WebSocket                    │
├─────────────────────────────────────────┤
│ RAG Engine                              │
│  ├─ Document Store                      │
│  ├─ Vector Database                     │
│  └─ Retrieval System                    │
├─────────────────────────────────────────┤
│ Knowledge Graph                         │
│  ├─ Entity Extraction                   │
│  ├─ Relationship Mapping                │
│  └─ Graph Queries                       │
├─────────────────────────────────────────┤
│ Embedding Engine                        │
│  ├─ Model Loading                       │
│  ├─ Batch Processing                    │
│  └─ Similarity Search                   │
└─────────────────────────────────────────┘
```

### RAG Pipeline

1. **Ingestion**: Add documents to vector store
2. **Embedding**: Generate embeddings with chosen model
3. **Indexing**: Store in vector database
4. **Retrieval**: Semantic search + keyword matching (see the sketch after this list)
5. **Generation**: Augment prompts with retrieved context
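
A minimal sketch of step 4's blended scoring, assuming a simple weighted combination of semantic similarity and keyword overlap; the weights and the overlap measure here are illustrative assumptions, not the engine's actual formula:

```rust
/// Illustrative hybrid retrieval score (step 4): blend a semantic cosine
/// score with a crude keyword-overlap score. The 0.7/0.3 weights and the
/// overlap measure are assumptions for this sketch.
fn hybrid_score(cosine: f32, query_terms: &[&str], doc: &str) -> f32 {
    let doc_lower = doc.to_lowercase();
    let matched = query_terms
        .iter()
        .filter(|t| doc_lower.contains(&t.to_lowercase()))
        .count();
    let keyword = matched as f32 / query_terms.len().max(1) as f32;
    0.7 * cosine + 0.3 * keyword
}
```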

## Configuration

### AI Backend Config

```toml
[ai]
port = 8000
host = "0.0.0.0"

[ai.rag]
embedding_model = "sentence-transformers/all-MiniLM-L6-v2"
chunk_size = 512
overlap = 50
top_k = 5  # Number of results to retrieve

[ai.vector_store]
backend = "in_memory"  # or "faiss", "milvus", "qdrant"
dimension = 384  # Match embedding model dimension

[ai.knowledge_graph]
enabled = true
backend = "neo4j"  # or "in_memory"
uri = "bolt://localhost:7687"
```
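
To make `chunk_size` and `overlap` concrete: each chunk shares its last `overlap` units with the start of the next one. A minimal sketch, assuming character-based windows (the actual engine may count tokens instead):

```rust
/// Split `text` into overlapping chunks. With chunk_size = 512 and
/// overlap = 50, consecutive chunks share 50 characters.
fn chunk(text: &str, chunk_size: usize, overlap: usize) -> Vec<String> {
    assert!(overlap < chunk_size);
    let chars: Vec<char> = text.chars().collect();
    let step = chunk_size - overlap;
    let mut chunks = Vec::new();
    let mut start = 0;
    while start < chars.len() {
        let end = (start + chunk_size).min(chars.len());
        chunks.push(chars[start..end].iter().collect());
        if end == chars.len() {
            break;
        }
        start += step;
    }
    chunks
}
```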

### Embedding Models

Supported embedding models:

| Model | Dimension | Performance | Use Case |
| ----- | --------- | ----------- | -------- |
| `all-MiniLM-L6-v2` | 384 | Fast, lightweight | General purpose |
| `all-mpnet-base-v2` | 768 | Balanced | High quality |
| `paraphrase-multilingual-MiniLM-L12-v2` | 384 | Fast | Multilingual |
| `text-embedding-ada-002` | 1536 | API-based | OpenAI API |
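
Note that `[ai.vector_store] dimension` must match the chosen model. A hypothetical helper (values taken from the table above) keeps the pairing explicit:

```rust
/// Hypothetical helper: look up the embedding dimension for a model name,
/// so the vector store config can be kept in sync. Values from the table.
fn embedding_dimension(model: &str) -> Option<usize> {
    match model {
        "all-MiniLM-L6-v2" => Some(384),
        "all-mpnet-base-v2" => Some(768),
        "paraphrase-multilingual-MiniLM-L12-v2" => Some(384),
        "text-embedding-ada-002" => Some(1536),
        _ => None,
    }
}
```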

## API Endpoints

### RAG Endpoints

```bash
# Add document
POST /api/rag/documents
{
  "content": "Document text...",
  "metadata": {"source": "manual"}
}

# Search documents
POST /api/rag/search
{
  "query": "How to install?",
  "top_k": 5
}

# Generate with RAG
POST /api/rag/generate
{
  "query": "Explain installation",
  "context_sources": ["manual", "faq"]
}
```
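
Outside the shell, the same calls work from any HTTP client. A minimal Rust sketch of the search request, assuming `reqwest` (with the `blocking` and `json` features) and `serde_json` as dependencies; the response is printed as raw JSON since its exact shape is server-defined:

```rust
use serde_json::json;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // POST /api/rag/search with the request body shown above.
    let client = reqwest::blocking::Client::new();
    let results: serde_json::Value = client
        .post("http://localhost:8000/api/rag/search")
        .json(&json!({ "query": "How to install?", "top_k": 5 }))
        .send()?
        .error_for_status()?
        .json()?;
    println!("{results:#}");
    Ok(())
}
```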

### Knowledge Graph Endpoints

```bash
# Add entity
POST /api/kg/entities
{
  "name": "TypeDialog",
  "type": "Software",
  "properties": {"version": "1.0.0"}
}

# Query relationships
GET /api/kg/query?entity=TypeDialog&relation=depends_on
```

### Embedding Endpoints

```bash
# Generate embeddings
POST /api/embeddings
{
  "texts": ["text1", "text2"],
  "model": "all-MiniLM-L6-v2"
}

# Similarity search
POST /api/similarity
{
  "query": "installation guide",
  "candidates": ["doc1", "doc2", "doc3"]
}
```

## Usage Examples

### RAG System

```rust
use std::collections::HashMap;

use typedialog_ai::rag::{RagEngine, Document};

// Initialize RAG engine
let mut rag = RagEngine::new(config)?;

// Add documents
rag.add_document(Document {
    content: "TypeDialog is a form library...".to_string(),
    metadata: HashMap::from([("source".to_string(), "docs".to_string())]),
})?;

// Semantic search
let results = rag.search("What is TypeDialog?", 5)?;

// Generate with context
let response = rag.generate("Explain TypeDialog", &results)?;
```

### Knowledge Graph

```rust
use std::collections::HashMap;

use typedialog_ai::kg::{KnowledgeGraph, Entity, Relationship};

// Initialize knowledge graph
let mut kg = KnowledgeGraph::new(config)?;

// Add entities
kg.add_entity(Entity {
    name: "TypeDialog".to_string(),
    entity_type: "Software".to_string(),
    properties: HashMap::new(),
})?;

// Add relationships
kg.add_relationship(Relationship {
    from: "TypeDialog".to_string(),
    to: "Rust".to_string(),
    relation_type: "written_in".to_string(),
})?;

// Query
let results = kg.query("TypeDialog", "depends_on")?;
```

### Embeddings

```rust
use typedialog_ai::embeddings::EmbeddingEngine;

// Initialize
let engine = EmbeddingEngine::new("all-MiniLM-L6-v2")?;

// Generate embeddings
let texts = vec!["Hello world", "Goodbye world"];
let embeddings = engine.encode(&texts)?;

// Similarity
let similarity = embeddings[0].cosine_similarity(&embeddings[1]);
```
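
For reference, cosine similarity is the dot product of the two vectors divided by the product of their norms. A self-contained sketch over plain `f32` slices (the crate's embedding type is assumed to wrap something similar):

```rust
/// Cosine similarity of two equal-length vectors:
/// dot(a, b) / (|a| * |b|). Returns a value in [-1.0, 1.0].
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    assert_eq!(a.len(), b.len());
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (norm_a * norm_b)
}
```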

## Use Cases

### 1. Semantic Document Search

Search documentation by meaning, not just keywords:

```bash
curl -X POST http://localhost:8000/api/rag/search \
  -H "Content-Type: application/json" \
  -d '{"query": "How to configure backends?", "top_k": 3}'
```

### 2. Context-Aware Help

Provide relevant help based on user context:

```bash
curl -X POST http://localhost:8000/api/rag/generate \
  -H "Content-Type: application/json" \
  -d '{"query": "I need help with forms", "user_context": {"backend": "web"}}'
```

### 3. Knowledge Graph Queries

Discover relationships between concepts:

```bash
curl "http://localhost:8000/api/kg/query?entity=TypeDialog&relation=depends_on"
```

### 4. Batch Document Processing

Process large document sets efficiently:

```rust
let documents = load_documents("./docs")?;
rag.add_documents_batch(&documents, 100)?; // Batch size: 100
```
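
The underlying pattern is simple: embed a slice of documents per model call rather than one document at a time. A sketch of that pattern, independent of the engine's internals (`embed_batch` is a stand-in for the real model call):

```rust
/// Embed texts in fixed-size batches; one model call per batch amortizes
/// per-call overhead compared to embedding texts one by one.
fn embed_in_batches(
    texts: &[String],
    batch_size: usize,
    embed_batch: impl Fn(&[String]) -> Vec<Vec<f32>>,
) -> Vec<Vec<f32>> {
    let mut out = Vec::with_capacity(texts.len());
    for batch in texts.chunks(batch_size) {
        out.extend(embed_batch(batch));
    }
    out
}
```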

## Performance

### Optimization Tips

1. **Batch Processing**: Use batch operations for multiple documents
2. **Model Selection**: Choose smaller models for faster inference
3. **Caching**: Enable embedding caching for repeated queries (see the sketch after this list)
4. **Vector Store**: Use persistent stores (FAISS, Milvus) for large datasets
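
A minimal sketch of tip 3: memoize embeddings by input text so repeated queries skip the model. This is a hypothetical standalone cache, not the engine's built-in one:

```rust
use std::collections::HashMap;

/// Minimal embedding cache: reuse previously computed vectors for
/// repeated inputs. `embed` stands in for the real model call.
struct EmbeddingCache {
    cache: HashMap<String, Vec<f32>>,
}

impl EmbeddingCache {
    fn new() -> Self {
        Self { cache: HashMap::new() }
    }

    fn get_or_compute(
        &mut self,
        text: &str,
        embed: impl FnOnce(&str) -> Vec<f32>,
    ) -> &Vec<f32> {
        self.cache
            .entry(text.to_string())
            .or_insert_with(|| embed(text))
    }
}
```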

### Benchmarks

| Operation | Throughput | Latency |
| --------- | ---------- | ------- |
| Embedding (batch=100) | ~1000 texts/s | ~100 ms |
| Semantic search (10K docs) | ~50 queries/s | ~20 ms |
| Knowledge graph query | ~1000 queries/s | ~1 ms |

## Integration

### With Forms

Use AI backend to enhance forms:

```toml
[[fields]]
name = "documentation"
field_type = "Text"
ai_enabled = true
ai_endpoint = "http://localhost:8000/api/rag/search"
```

### With Web Backend

Integrate semantic search in web forms:

```javascript
async function searchDocs(query) {
  const response = await fetch('http://localhost:8000/api/rag/search', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query, top_k: 5 })
  });
  return await response.json();
}
```

## Examples

See [examples/10-ai-backend/](../../examples/10-ai-backend/) for:

- RAG system setup
- Knowledge graph integration
- Batch document processing
- Semantic search
- Performance optimization

## Related Documentation

- [Configuration](../configuration.md) - AI backend configuration
- [Examples](../../examples/10-ai-backend/) - Working examples
- [API Reference](../../crates/typedialog-ai/README.md) - Complete API docs

## Troubleshooting

### "Model not found"

Download embedding model:

```bash
python -c "from sentence_transformers import SentenceTransformer; SentenceTransformer('all-MiniLM-L6-v2')"
```

### "Out of memory"

Use smaller batch size or lighter model:

```toml
[ai.rag]
embedding_model = "all-MiniLM-L6-v2"  # 384-dim instead of 768
chunk_size = 256  # Smaller chunks
```

### "Slow search performance"

Enable vector index:

```toml
[ai.vector_store]
backend = "faiss"  # Faster than in_memory
index_type = "IVF"  # Approximate search
```

### "Connection refused"

Check AI backend is running:

```bash
curl http://localhost:8000/health
```

---

**Ready to start?** See [examples/10-ai-backend/](../../examples/10-ai-backend/)