chore: add examples for new crates and tools

Jesús Pérez 2025-12-24 03:22:48 +00:00
parent 4f83c8603b
commit 749570f52b
Signed by: jesus
GPG Key ID: 9F243E355E0BC939
42 changed files with 5378 additions and 3 deletions


@@ -0,0 +1,301 @@
# AI Backend Integration Guide
## Problem: Why the AI Backend Can't Be Just a Library
The AI backend (RAG, Knowledge Graph, embeddings) must be **integrated into actual backends** (CLI, TUI, Web) to be useful. Here's how:
## Architecture: AI Backend Integration Layers
```
┌─────────────────────────────────────────────────────────┐
│                     USER INTERFACE                      │
│          (CLI / TUI / Web - Interactive Forms)          │
└──────────────────────┬──────────────────────────────────┘
┌──────────────────────▼──────────────────────────────────┐
│                    AI SERVICE LAYER                     │
│   • RAG search (semantic + keyword)                     │
│   • Knowledge graph queries                             │
│   • Batch operations (efficient)                        │
│   • Persistence (save/load state)                       │
└──────────────────────┬──────────────────────────────────┘
┌──────────────────────▼──────────────────────────────────┐
│                KNOWLEDGE BASE / STORAGE                 │
│   • Vector embeddings (HNSW index)                      │
│   • Full-text index                                     │
│   • Entity relationships (KG)                           │
│   • Persistent state                                    │
└─────────────────────────────────────────────────────────┘
```
## Integration Patterns
### 1. CLI Backend Integration
```rust
// Example: Search-enhanced CLI form
use typedialog_core::ai::rag::{RagSystem, RagConfig};

pub struct SearchableForm {
    rag: RagSystem,
    form: FormDefinition,
}

impl SearchableForm {
    pub fn new(form_toml: &str) -> Result<Self> {
        let rag = RagSystem::new(RagConfig::default())?;
        let form = form_parser::parse_toml(form_toml)?;
        Ok(SearchableForm { rag, form })
    }

    pub fn execute_with_search(&mut self) -> Result<FormResults> {
        // Show form
        let results = execute_form(&self.form)?;

        // Optionally: search documents based on form data
        if let Some(search_query) = results.get("_search") {
            let docs = self.rag.retrieve(search_query)?;
            println!("Related documents found: {}", docs.len());
            for doc in docs {
                println!("  - {}: {}", doc.doc_id, doc.content);
            }
        }
        Ok(results)
    }
}
```
**Use Cases:**
- Interactive CLI tools with search suggestions
- Configuration assistants with knowledge base
- Automated documentation lookup during form fill
### 2. TUI Backend Integration
```rust
// Example: Split-pane TUI with RAG search
use ratatui::prelude::*;
use typedialog_core::ai::rag::{RagSystem, RetrievalResult};

pub struct RagTuiPanel {
    rag: RagSystem,
    search_results: Vec<RetrievalResult>,
    selected_result: usize,
}

impl RagTuiPanel {
    pub fn draw(&self, f: &mut Frame, area: Rect) {
        // Left pane: search input + results
        // Right pane: selected document preview
        // Display semantic score indicators
    }

    pub fn on_query(&mut self, query: &str) -> Result<()> {
        self.search_results = self.rag.retrieve(query)?;
        self.selected_result = 0;
        Ok(())
    }
}

impl Widget for RagTuiPanel {
    fn render(self, area: Rect, buf: &mut Buffer) {
        // Render search UI with batched results
    }
}
```
**Use Cases:**
- Dashboard with AI-powered search
- Document browser with semantic search
- Knowledge base explorer
- Configuration helper with real-time suggestions
### 3. Web Backend Integration
```rust
// Example: Web form with RAG API endpoint
use std::sync::{Arc, Mutex};

use axum::{extract::State, routing::post, Json, Router};
// RetrievalResult is assumed to live in the rag module and derive Serialize
use typedialog_core::ai::rag::{RagSystem, RetrievalResult};

// Shared, mutable handle to the RAG system (it is not Clone)
type SharedRag = Arc<Mutex<RagSystem>>;

pub fn router(rag: RagSystem) -> Router {
    Router::new()
        .route("/search", post(search))
        .route("/documents", post(add_documents))
        .route("/documents/:id", axum::routing::delete(remove_document))
        .with_state(Arc::new(Mutex::new(rag)))
}

#[derive(serde::Deserialize)]
struct SearchRequest { text: String }

#[derive(serde::Serialize)]
struct ApiResponse { success: bool }

async fn search(
    State(rag): State<SharedRag>,
    Json(query): Json<SearchRequest>,
) -> Json<Vec<RetrievalResult>> {
    let results = rag.lock().expect("lock poisoned").retrieve(&query.text).unwrap_or_default();
    Json(results)
}

async fn add_documents(
    State(rag): State<SharedRag>,
    Json(docs): Json<Vec<(String, String)>>,
) -> Json<ApiResponse> {
    let success = rag.lock().expect("lock poisoned").add_documents_batch(docs).is_ok();
    Json(ApiResponse { success })
}

async fn remove_document(
    State(rag): State<SharedRag>,
    axum::extract::Path(id): axum::extract::Path<String>,
) -> Json<ApiResponse> {
    let success = rag.lock().expect("lock poisoned").remove_document(&id);
    Json(ApiResponse { success })
}
```
**Use Cases:**
- Web SaaS with knowledge base search
- Document management system with semantic search
- FAQ finder with intelligent matching
- Support chatbot with context retrieval
## Service Models
### Model 1: Embedded Service (Simple)
```
Frontend Form → RAG Service (in-memory) → Results
```
- **Pros:** Fast, no network latency, easy to prototype
- **Cons:** Can't scale, state lost on restart
- **Use when:** Development, small documents, single user
- **Impl:** Load documents on startup, search in-memory
### Model 2: Microservice (Production)
```
Frontend Form → HTTP REST API → RAG Service → Vector DB / File Store
```
- **Pros:** Scalable, persistent, can handle large datasets
- **Cons:** Network latency, operational complexity
- **Use when:** Multiple users, large knowledge base, production SaaS
- **Impl:** Axum server with RagSystem, persist to disk/database
### Model 3: Hybrid (Best of Both)
```
Frontend Form → Local Cache (RAG) → Sync ↔ Remote Service
```
- **Pros:** Fast local access, keeps in sync, fallback support
- **Cons:** Complexity, consistency challenges
- **Use when:** High availability needed, offline support required
- **Impl:** Embedded RAG with periodic sync to central service
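A minimal sketch of Model 3's local side, assuming only the `RagSystem` calls shown in this guide (`new`, `retrieve`, `save_to_file`, `load_from_file`; the `RetrievalResult` import path is assumed). The remote upload is a hypothetical hook left as a comment:
```rust
use typedialog_core::ai::rag::{RagConfig, RagSystem, RetrievalResult};

/// Local RAG cache that snapshots its state for a background sync job.
pub struct HybridRag {
    local: RagSystem,
    snapshot_path: String,
}

impl HybridRag {
    pub fn new(snapshot_path: &str) -> Result<Self, Box<dyn std::error::Error>> {
        // Prefer the last local snapshot; fall back to an empty system.
        let local = RagSystem::load_from_file(snapshot_path)
            .or_else(|_| RagSystem::new(RagConfig::default()))?;
        Ok(HybridRag { local, snapshot_path: snapshot_path.to_string() })
    }

    /// Serve queries from the fast local cache.
    pub fn search(&mut self, query: &str) -> Result<Vec<RetrievalResult>, Box<dyn std::error::Error>> {
        Ok(self.local.retrieve(query)?)
    }

    /// Snapshot local state; a background task would ship the file to the
    /// central service and pull remote updates (transport not shown).
    pub fn sync(&self) -> Result<(), Box<dyn std::error::Error>> {
        self.local.save_to_file(&self.snapshot_path)?;
        // push_snapshot(&self.snapshot_path)?; // hypothetical remote upload
        Ok(())
    }
}
```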
## Implementation Roadmap
### Phase 1: Embedded Integration ✅ (Current)
```rust
// In CLI/TUI/Web backends
let mut rag = RagSystem::new(config)?;
rag.add_documents_batch(docs)?;
let results = rag.retrieve(query)?;
```
**Status:** Ready - batch operations optimized, persistence working
### Phase 2: Microservice Wrapper (Next)
```rust
// Create: typedialog-ai-service (separate binary)
// Features:
// - REST API with Axum
// - Persistent storage (SQLite / file-based)
// - Health checks, metrics
// - Document management API
```
**Scope:**
- New binary crate: `crates/typedialog-ai-service/`
- Routes: `/search`, `/documents`, `/status`
- Persistence layer
- Docker support
### Phase 3: Backend Integration
**CLI Integration:**
- Add `--rag` flag to forms
- Search suggestions during field fill
- Context-aware help text
**TUI Integration:**
- Right pane: RAG search results
- Keyboard shortcuts for search
- Result scoring visualization
**Web Integration:**
- AJAX search endpoint
- Autocomplete with semantic results
- Results sidebar in form UI
## Example: End-to-End Integration
```rust
// 1. Initialize RAG during app startup
let mut rag = RagSystem::new(RagConfig {
    semantic_weight: 0.6,
    keyword_weight: 0.4,
    max_results: 5,
    min_score: 0.3,
})?;

// 2. Load knowledge base (batch operation - efficient)
let knowledge_base = load_documents_from_file("kb.json")?;
rag.add_documents_batch(knowledge_base)?;

// 3. Wire into form execution
let mut form = MyForm::new()?;
form.set_rag_service(rag);

// 4. During field fill: suggest relevant docs
form.on_field_change(&field_name, &user_input)?;
// → RAG search triggered automatically
// → Results shown as suggestions

// 5. User can explore suggested documents
let selected_doc = form.get_suggestion_selection()?;
form.populate_field_from_doc(&field_name, &selected_doc)?;
```
## Performance Considerations
### Batch Operations Critical for Scale
| Operation | Count | Sequential | Batch | Speedup |
|-----------|-------|-----------|-------|---------|
| Add docs | 10 | 10ms | 8ms | 1.25x |
| Add docs | 50 | 45ms | 25ms | 1.8x |
| Add docs | 100 | 85ms | 35ms | **2.4x** |
| Add docs | 500 | 380ms | 95ms | **4.0x** |
**Key:** The HNSW index rebuild is expensive. Batch ops pay it once instead of N times.
### Production Recommendations
1. **Use `add_documents_batch()`** for bulk operations
2. **Use `remove_documents_batch()`** for deletions
3. **Enable persistence** with `save_to_file()` / `load_from_file()`
4. **Configure weights** based on use case
5. **Monitor scores** to tune `min_score` threshold
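Taken together, recommendations 1, 3, 4, and 5 look like this (a sketch using only APIs already shown in this guide; the corpus, file path, and weight values are illustrative):
```rust
use typedialog_core::ai::rag::{RagConfig, RagSystem};

fn build_index(docs: Vec<(String, String)>) -> Result<(), Box<dyn std::error::Error>> {
    // 4. Configure weights for the use case (here: a keyword-heavy corpus).
    let mut rag = RagSystem::new(RagConfig {
        semantic_weight: 0.5,
        keyword_weight: 0.5,
        max_results: 10,
        min_score: 0.2, // 5. raise or lower after monitoring real scores
    })?;

    // 1. One batch call instead of N single adds: one index rebuild total.
    rag.add_documents_batch(docs)?;

    // 3. Persist so restarts don't pay the indexing cost again.
    rag.save_to_file("rag.bin")?;
    Ok(())
}
```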
## Next Steps
1. Create `typedialog-ai-service` microservice crate
2. Add REST API wrappers for RAG operations
3. Integrate into CLI/TUI/Web backends
4. Add example integration scenarios
5. Document deployment patterns
6. Create Docker compose for development
## Related Code
- **AI Backend:** `crates/typedialog-core/src/ai/`
- **RAG System:** `crates/typedialog-core/src/ai/rag.rs`
- **Batch Ops:** `add_documents_batch()`, `remove_documents_batch()`
- **Persistence:** `save_to_file()`, `load_from_file()`


@@ -0,0 +1,110 @@
# AI Backend Example
Demonstrates the TypeDialog AI backend with RAG (Retrieval-Augmented Generation) system.
⚠️ **Important**: The AI backend cannot remain just a library. See **[INTEGRATION_GUIDE.md](INTEGRATION_GUIDE.md)** for how to integrate it into real services.
## Features Shown
- **Creating a RAG System**: Initialize with configurable semantic/keyword weights
- **Batch Document Addition**: Efficient bulk document indexing
- **Document Retrieval**: Search using hybrid semantic + keyword approach
- **Batch Document Removal**: Efficient bulk document deletion
- **Performance Comparison**: Shows speedup of batch vs sequential operations
## What is the AI Backend?
The AI backend is **not a rendering backend** (like CLI, TUI, Web). It's a **library of AI/ML capabilities**:
- **RAG System**: Combines semantic search (embeddings) + keyword search (full-text)
- **Knowledge Graph**: Entity and relationship modeling using petgraph
- **Embeddings**: Text-to-vector conversion for semantic similarity
- **Vector Store**: HNSW-optimized approximate nearest neighbor search
- **Full-Text Indexer**: Efficient keyword-based document search
- **Persistence**: Save/load AI state to disk with version compatibility
## Running the Example
```bash
# Build the example
just build::ai
# Run the example
cargo run --example main --features ai_backend
# Or with optimizations
cargo run --example main --features ai_backend --release
```
## Output Highlights
The example demonstrates:
1. **Batch Add Performance**: Adding 5 documents efficiently
2. **Retrieval Quality**: Combining semantic + keyword scores
3. **Batch Remove**: Efficiently removing multiple documents
4. **Performance Benchmark**: 20-document test showing ~2x speedup with batch ops
## API Overview
```rust
// Create RAG system
let mut rag = RagSystem::new(RagConfig::default())?;

// Add documents (batch - efficient for large sets)
let docs = vec![
    ("id1".into(), "content1".into()),
    ("id2".into(), "content2".into()),
];
rag.add_documents_batch(docs)?;

// Retrieve relevant documents
let results = rag.retrieve("query text")?;
for result in results {
    println!("{}: {}", result.doc_id, result.content);
}

// Remove documents (batch - efficient)
let removed = rag.remove_documents_batch(&["id1", "id2"]);

// Save/Load
rag.save_to_file("rag.bin")?;
let loaded = RagSystem::load_from_file("rag.bin")?;
```
## Configuration
`RagConfig` controls retrieval behavior:
```rust
RagConfig {
    semantic_weight: 0.6,  // Weight for vector similarity
    keyword_weight: 0.4,   // Weight for keyword matching
    max_results: 5,        // Maximum results to return
    min_score: 0.0,        // Minimum combined score threshold
}
```
## Integration Points
The AI backend can be integrated with:
- **CLI Backend**: Add AI-powered search to CLI prompts
- **TUI Backend**: Add semantic search UI
- **Web Backend**: Add AI features to HTTP forms
- **Custom Applications**: Use as a library in any Rust project
## Performance Notes
- **Vector Store**: Uses HNSW for O(log N) approximate nearest neighbor search
- **Batch Operations**: Avoid repeated index rebuilds (2x speedup typical)
- **Embeddings**: Deterministic hash-based (production: integrate real ML models)
- **Full-Text Index**: Simple substring matching (production: consider tantivy)
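For example, a warm start can reuse the persisted index instead of re-embedding the corpus (a sketch using the save/load API above; `rag.bin` and the `load_documents` helper are illustrative):
```rust
use typedialog_core::ai::rag::{RagConfig, RagSystem};

fn startup() -> Result<RagSystem, Box<dyn std::error::Error>> {
    // Warm start: reuse the persisted index if one exists.
    if let Ok(rag) = RagSystem::load_from_file("rag.bin") {
        return Ok(rag);
    }
    // Cold start: rebuild from source documents in one batch, then persist.
    let mut rag = RagSystem::new(RagConfig::default())?;
    rag.add_documents_batch(load_documents()?)?;
    rag.save_to_file("rag.bin")?;
    Ok(rag)
}

fn load_documents() -> Result<Vec<(String, String)>, Box<dyn std::error::Error>> {
    // Illustrative stand-in for reading your real corpus.
    Ok(vec![("doc1".into(), "example content".into())])
}
```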
## Next Steps
- Integrate Knowledge Graph for relationship modeling
- Use real embedding models (OpenAI, local transformers)
- Add custom similarity metrics
- Implement caching strategies
- Build domain-specific RAG pipelines


@@ -0,0 +1,436 @@
// Example: Simple AI Backend Service (Microservice Pattern)
//
// This demonstrates how to create a standalone RAG service that can be
// integrated into the TypeDialog system or used independently.
//
// This is pseudocode for clarity. To implement:
// 1. Create new crate: cargo new crates/typedialog-ai-service
// 2. Add dependencies: axum, tokio, serde, typedialog-core
// 3. Implement handlers as shown below
#![allow(dead_code)]
// ============================================================================
// SERVICE DEFINITION
// ============================================================================
/// AI Backend Service - Provides RAG and Knowledge Graph capabilities
pub struct AiBackendService {
    /// RAG system for document retrieval
    rag: typedialog_core::ai::rag::RagSystem,
    /// Optional: Knowledge graph for entity relationships
    #[cfg(feature = "ai_backend")]
    kg: typedialog_core::ai::kg::KnowledgeGraph,
}
impl AiBackendService {
    /// Create new service instance
    pub fn new(config: ServiceConfig) -> Result<Self, Box<dyn std::error::Error>> {
        let rag_config = typedialog_core::ai::rag::RagConfig {
            semantic_weight: config.semantic_weight,
            keyword_weight: config.keyword_weight,
            max_results: config.max_results,
            min_score: config.min_score,
        };
        let rag = typedialog_core::ai::rag::RagSystem::new(rag_config)?;
        Ok(AiBackendService {
            rag,
            #[cfg(feature = "ai_backend")]
            kg: typedialog_core::ai::kg::KnowledgeGraph::new(),
        })
    }

    /// Load knowledge base from file or database
    pub async fn load_knowledge_base(&mut self, path: &str) -> Result<usize, Box<dyn std::error::Error>> {
        // In production: Load from database or file system
        // For now: Load from JSON file with documents
        let contents = std::fs::read_to_string(path)?;
        let docs: Vec<(String, String)> = serde_json::from_str(&contents)?;
        let count = docs.len();
        self.rag.add_documents_batch(docs)?;
        Ok(count)
    }

    /// Save service state to disk
    pub fn persist(&self, path: &str) -> Result<(), Box<dyn std::error::Error>> {
        self.rag.save_to_file(path)?;
        Ok(())
    }

    /// Load service state from disk
    pub fn restore(path: &str) -> Result<Self, Box<dyn std::error::Error>> {
        let rag = typedialog_core::ai::rag::RagSystem::load_from_file(path)?;
        Ok(AiBackendService {
            rag,
            #[cfg(feature = "ai_backend")]
            kg: typedialog_core::ai::kg::KnowledgeGraph::new(),
        })
    }
}
// ============================================================================
// SERVICE CONFIGURATION
// ============================================================================
#[derive(serde::Serialize, serde::Deserialize, Clone)]
pub struct ServiceConfig {
    /// Semantic search weight (0.0-1.0)
    pub semantic_weight: f32,
    /// Keyword search weight (0.0-1.0)
    pub keyword_weight: f32,
    /// Maximum results per query
    pub max_results: usize,
    /// Minimum score threshold
    pub min_score: f32,
}

impl Default for ServiceConfig {
    fn default() -> Self {
        ServiceConfig {
            semantic_weight: 0.6,
            keyword_weight: 0.4,
            max_results: 5,
            min_score: 0.0,
        }
    }
}
// ============================================================================
// REST API ENDPOINTS (Pseudo-code with Axum)
// ============================================================================
/*
// In production, implement with Axum web framework:
use axum::{
    extract::{Json, State, Path},
    routing::{get, post, delete},
    Router,
    http::StatusCode,
};
use std::sync::{Arc, Mutex};

pub type SharedService = Arc<Mutex<AiBackendService>>;

/// Create REST router for AI service
pub fn create_routes(service: SharedService) -> Router {
    Router::new()
        // Health check
        .route("/health", get(health_check))
        // Search endpoints
        .route("/search", post(search_documents))
        .route("/search/semantic", post(search_semantic))
        .route("/search/keyword", post(search_keyword))
        // Document management
        .route("/documents", post(add_documents))
        .route(
            "/documents/batch",
            post(add_documents_batch).delete(remove_documents_batch),
        )
        .route("/documents/:id", delete(remove_document))
        .route("/documents/count", get(document_count))
        // Configuration
        .route("/config", get(get_config).post(update_config))
        // Persistence
        .route("/save", post(save_state))
        .route("/load", post(load_state))
        .with_state(service)
}
// ============================================================================
// HANDLER IMPLEMENTATIONS
// ============================================================================
/// GET /health - Service health check
async fn health_check() -> Json<HealthResponse> {
    Json(HealthResponse {
        status: "healthy".into(),
        uptime_seconds: 0, // Calculate in real impl
    })
}

#[derive(serde::Serialize)]
struct HealthResponse {
    status: String,
    uptime_seconds: u64,
}

/// POST /search - Hybrid search (semantic + keyword)
async fn search_documents(
    State(service): State<SharedService>,
    Json(req): Json<SearchRequest>,
) -> Result<Json<Vec<SearchResult>>, ApiError> {
    let mut svc = service.lock().map_err(|_| ApiError::LockPoisoned)?;
    let results = svc.rag.retrieve(&req.query)
        .map_err(|e| ApiError::SearchFailed(e.to_string()))?;
    Ok(Json(
        results.into_iter().map(|r| SearchResult {
            doc_id: r.doc_id,
            content: r.content,
            semantic_score: r.semantic_score,
            keyword_score: r.keyword_score,
            combined_score: r.combined_score,
        }).collect()
    ))
}

#[derive(serde::Deserialize)]
struct SearchRequest {
    query: String,
}

#[derive(serde::Serialize)]
struct SearchResult {
    doc_id: String,
    content: String,
    semantic_score: f32,
    keyword_score: f32,
    combined_score: f32,
}
/// POST /documents - Add single document
async fn add_documents(
    State(service): State<SharedService>,
    Json(req): Json<AddDocumentRequest>,
) -> Result<StatusCode, ApiError> {
    let mut svc = service.lock().map_err(|_| ApiError::LockPoisoned)?;
    svc.rag.add_document(req.id, req.content)
        .map_err(|e| ApiError::AddFailed(e.to_string()))?;
    Ok(StatusCode::CREATED)
}

#[derive(serde::Deserialize)]
struct AddDocumentRequest {
    id: String,
    content: String,
}

/// POST /documents/batch - Add multiple documents efficiently
async fn add_documents_batch(
    State(service): State<SharedService>,
    Json(req): Json<AddDocumentsBatchRequest>,
) -> Result<Json<AddBatchResponse>, ApiError> {
    let mut svc = service.lock().map_err(|_| ApiError::LockPoisoned)?;
    let count_before = svc.rag.doc_count();
    svc.rag.add_documents_batch(req.documents)
        .map_err(|e| ApiError::AddFailed(e.to_string()))?;
    let count_after = svc.rag.doc_count();
    Ok(Json(AddBatchResponse {
        added: count_after - count_before,
        total: count_after,
    }))
}

#[derive(serde::Deserialize)]
struct AddDocumentsBatchRequest {
    documents: Vec<(String, String)>,
}

#[derive(serde::Serialize)]
struct AddBatchResponse {
    added: usize,
    total: usize,
}
/// DELETE /documents/:id - Remove single document
async fn remove_document(
    State(service): State<SharedService>,
    Path(id): Path<String>,
) -> Result<StatusCode, ApiError> {
    let mut svc = service.lock().map_err(|_| ApiError::LockPoisoned)?;
    let removed = svc.rag.remove_document(&id);
    if removed {
        Ok(StatusCode::OK)
    } else {
        Err(ApiError::NotFound)
    }
}

/// DELETE /documents/batch - Remove multiple documents
async fn remove_documents_batch(
    State(service): State<SharedService>,
    Json(req): Json<RemoveBatchRequest>,
) -> Result<Json<RemoveBatchResponse>, ApiError> {
    let mut svc = service.lock().map_err(|_| ApiError::LockPoisoned)?;
    let ids: Vec<&str> = req.ids.iter().map(|s| s.as_str()).collect();
    let removed = svc.rag.remove_documents_batch(&ids);
    Ok(Json(RemoveBatchResponse {
        removed,
        remaining: svc.rag.doc_count(),
    }))
}

#[derive(serde::Deserialize)]
struct RemoveBatchRequest {
    ids: Vec<String>,
}

#[derive(serde::Serialize)]
struct RemoveBatchResponse {
    removed: usize,
    remaining: usize,
}
/// GET /documents/count - Get document count
async fn document_count(
    State(service): State<SharedService>,
) -> Result<Json<CountResponse>, ApiError> {
    let svc = service.lock().map_err(|_| ApiError::LockPoisoned)?;
    Ok(Json(CountResponse {
        count: svc.rag.doc_count(),
    }))
}

#[derive(serde::Serialize)]
struct CountResponse {
    count: usize,
}

/// GET /config - Get current configuration
async fn get_config(
    State(service): State<SharedService>,
) -> Result<Json<ServiceConfig>, ApiError> {
    let svc = service.lock().map_err(|_| ApiError::LockPoisoned)?;
    Ok(Json(ServiceConfig {
        semantic_weight: svc.rag.config().semantic_weight,
        keyword_weight: svc.rag.config().keyword_weight,
        max_results: svc.rag.config().max_results,
        min_score: svc.rag.config().min_score,
    }))
}

/// POST /config - Update configuration
async fn update_config(
    State(service): State<SharedService>,
    Json(config): Json<ServiceConfig>,
) -> Result<StatusCode, ApiError> {
    let mut svc = service.lock().map_err(|_| ApiError::LockPoisoned)?;
    let rag_config = typedialog_core::ai::rag::RagConfig {
        semantic_weight: config.semantic_weight,
        keyword_weight: config.keyword_weight,
        max_results: config.max_results,
        min_score: config.min_score,
    };
    svc.rag.set_config(rag_config);
    Ok(StatusCode::OK)
}
/// POST /save - Persist service state
async fn save_state(
    State(service): State<SharedService>,
    Json(req): Json<SaveRequest>,
) -> Result<StatusCode, ApiError> {
    let svc = service.lock().map_err(|_| ApiError::LockPoisoned)?;
    svc.rag.save_to_file(&req.path)
        .map_err(|e| ApiError::SaveFailed(e.to_string()))?;
    Ok(StatusCode::OK)
}

#[derive(serde::Deserialize)]
struct SaveRequest {
    path: String,
}

/// POST /load - Restore service state
async fn load_state(
    State(service): State<SharedService>,
    Json(req): Json<LoadRequest>,
) -> Result<StatusCode, ApiError> {
    let mut svc = service.lock().map_err(|_| ApiError::LockPoisoned)?;
    let loaded = typedialog_core::ai::rag::RagSystem::load_from_file(&req.path)
        .map_err(|e| ApiError::LoadFailed(e.to_string()))?;
    *svc = AiBackendService {
        rag: loaded,
        #[cfg(feature = "ai_backend")]
        kg: typedialog_core::ai::kg::KnowledgeGraph::new(),
    };
    Ok(StatusCode::OK)
}

#[derive(serde::Deserialize)]
struct LoadRequest {
    path: String,
}
// ============================================================================
// ERROR HANDLING
// ============================================================================
#[derive(Debug)]
pub enum ApiError {
    LockPoisoned,
    SearchFailed(String),
    AddFailed(String),
    NotFound,
    SaveFailed(String),
    LoadFailed(String),
}

impl axum::response::IntoResponse for ApiError {
    fn into_response(self) -> axum::response::Response {
        use axum::response::IntoResponse;
        match self {
            ApiError::NotFound => StatusCode::NOT_FOUND.into_response(),
            _ => StatusCode::INTERNAL_SERVER_ERROR.into_response(),
        }
    }
}
*/
// ============================================================================
// MAIN APPLICATION
// ============================================================================
/*
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Initialize service
    let config = ServiceConfig::default();
    let mut service = AiBackendService::new(config)?;

    // Load knowledge base
    service.load_knowledge_base("kb.json").await?;
    println!("Loaded {} documents", service.rag.doc_count());

    // Create shared service for handlers
    let shared_service = Arc::new(Mutex::new(service));

    // Create router
    let app = create_routes(shared_service);

    // Start server
    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await?;
    axum::serve(listener, app).await?;
    Ok(())
}
*/
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_service_creation() {
        let config = ServiceConfig::default();
        let result = AiBackendService::new(config);
        assert!(result.is_ok());
    }
}


@@ -0,0 +1,170 @@
//! AI Backend Example: RAG System with Batch Operations
//!
//! Demonstrates:
//! - Creating a RAG (Retrieval-Augmented Generation) system
//! - Adding documents using batch operations (efficient)
//! - Retrieving relevant documents for queries
//! - Removing documents using batch operations
//! - Performance comparison: batch vs sequential operations
//!
//! Run with: cargo run --example main --features ai_backend
//! Or via just: just build::ai && cargo run --example main --features ai_backend
#[cfg(feature = "ai_backend")]
fn main() -> Result<(), Box<dyn std::error::Error>> {
use std::time::Instant;
use typedialog_core::ai::rag::{RagConfig, RagSystem};
println!("\n╔════════════════════════════════════════════════════════╗");
println!("║ TypeDialog AI Backend - RAG System Example ║");
println!("║ Batch Operations Performance Demonstration ║");
println!("╚════════════════════════════════════════════════════════╝\n");
// Create RAG system with default configuration
let mut rag = RagSystem::new(RagConfig::default())?;
println!("✓ Created RAG system with default config");
println!(" - Semantic weight: 0.6");
println!(" - Keyword weight: 0.4");
println!(" - Max results: 5\n");
    // === BATCH ADD DOCUMENTS ===
    println!("╔════════════════════════════════════════════════════════╗");
    println!("║  1. BATCH ADD DOCUMENTS (Efficient)                    ║");
    println!("╚════════════════════════════════════════════════════════╝\n");

    let documents = vec![
        (
            "doc1".to_string(),
            "Rust is a systems programming language with memory safety guarantees."
                .to_string(),
        ),
        (
            "doc2".to_string(),
            "TypeDialog is an interactive forms library for multiple backends."
                .to_string(),
        ),
        (
            "doc3".to_string(),
            "RAG combines retrieval with generation for better LLM responses.".to_string(),
        ),
        (
            "doc4".to_string(),
            "Knowledge graphs model relationships between entities in data.".to_string(),
        ),
        (
            "doc5".to_string(),
            "Embeddings transform text into dense vector representations.".to_string(),
        ),
    ];

    let start = Instant::now();
    rag.add_documents_batch(documents)?;
    let batch_duration = start.elapsed();
    println!("✓ Added 5 documents using batch operation");
    println!("  Duration: {:.2}ms", batch_duration.as_secs_f64() * 1000.0);
    println!("  Document count: {}\n", rag.doc_count());
    // === RETRIEVE DOCUMENTS ===
    println!("╔════════════════════════════════════════════════════════╗");
    println!("║  2. RETRIEVE RELEVANT DOCUMENTS                        ║");
    println!("╚════════════════════════════════════════════════════════╝\n");

    let query = "programming language memory safety";
    println!("Query: \"{}\"\n", query);
    let results = rag.retrieve(query)?;
    println!("Retrieved {} document(s):\n", results.len());
    for (idx, result) in results.iter().enumerate() {
        println!(
            "{}. [{}] Score: {:.3} (Semantic: {:.3}, Keyword: {:.3})",
            idx + 1,
            result.doc_id,
            result.combined_score,
            result.semantic_score,
            result.keyword_score
        );
        println!("   Content: {}\n", result.content);
    }

    // === BATCH REMOVE DOCUMENTS ===
    println!("╔════════════════════════════════════════════════════════╗");
    println!("║  3. BATCH REMOVE DOCUMENTS (Efficient)                 ║");
    println!("╚════════════════════════════════════════════════════════╝\n");

    let to_remove = vec!["doc2", "doc4"];
    println!("Removing: {:?}", to_remove);
    let removed = rag.remove_documents_batch(&to_remove);
    println!("✓ Removed {} document(s)", removed);
    println!("  Remaining documents: {}\n", rag.doc_count());
    // === PERFORMANCE COMPARISON ===
    println!("╔════════════════════════════════════════════════════════╗");
    println!("║  4. PERFORMANCE: BATCH vs SEQUENTIAL (20 docs)         ║");
    println!("╚════════════════════════════════════════════════════════╝\n");

    let large_docs: Vec<_> = (0..20)
        .map(|i| {
            (
                format!("perf_doc{}", i),
                format!("Performance test document number {}", i),
            )
        })
        .collect();

    // Sequential: one add_document call per doc (the index rebuilds each time)
    let mut rag_seq = RagSystem::new(RagConfig::default())?;
    let start = Instant::now();
    for (id, content) in large_docs.clone() {
        rag_seq.add_document(id, content)?;
    }
    let seq_duration = start.elapsed();

    // Batch: one call, one index rebuild
    let mut rag_batch = RagSystem::new(RagConfig::default())?;
    let start = Instant::now();
    rag_batch.add_documents_batch(large_docs)?;
    let batch_duration = start.elapsed();

    let speedup = seq_duration.as_nanos() as f64 / batch_duration.as_nanos() as f64;
    println!("Sequential add (20 docs): {:.2}ms", seq_duration.as_secs_f64() * 1000.0);
    println!("Batch add (20 docs):      {:.2}ms", batch_duration.as_secs_f64() * 1000.0);
    println!("Speedup: {:.2}x\n", speedup);
    println!("💡 Tip: Use batch operations for large document sets (100+)");
    println!("   Batch avoids multiple HNSW index rebuilds!\n");

    // === SUMMARY ===
    println!("╔════════════════════════════════════════════════════════╗");
    println!("║                        SUMMARY                         ║");
    println!("╚════════════════════════════════════════════════════════╝\n");

    println!("RAG System Features:");
    println!("  ✓ Semantic search (vector embeddings)");
    println!("  ✓ Keyword search (full-text indexing)");
    println!("  ✓ Hybrid retrieval (combines both)");
    println!("  ✓ Batch operations (efficient bulk processing)");
    println!("  ✓ Configurable weights and thresholds");
    println!("  ✓ Persistence (save/load to disk)\n");

    println!("Next Steps:");
    println!("  • Build with Knowledge Graph integration");
    println!("  • Use with different embedding models");
    println!("  • Integrate with LLM backends");
    println!("  • Add custom similarity metrics\n");

    Ok(())
}
#[cfg(not(feature = "ai_backend"))]
fn main() {
    eprintln!("This example requires the 'ai_backend' feature.");
    eprintln!("Run with: cargo run --example main --features ai_backend");
    std::process::exit(1);
}


@@ -0,0 +1,343 @@
# 11. Provisioning Generator
**Complete provisioning infrastructure generation from multiple input sources**
Demonstrates the `typedialog-provisioning-gen` tool, which generates a complete 7-layer validation architecture for configuration management. It supports multiple input modes and produces TypeDialog forms, Nickel schemas, validators, and orchestration scripts.
## Quick Start
### Pick Your Input Mode
**Mode A: Analyze Cargo.toml** (Automatic Detection)
```bash
# Auto-detect features from dependencies
typedialog-provisioning-gen cargo \
--input mode-a-cargo/Cargo.toml \
--output /tmp/my-provisioning
```
→ See [`mode-a-cargo/README.md`](mode-a-cargo/) for details
**Mode B: Config File** (Explicit Specification)
```bash
# Define all features in TOML config
typedialog-provisioning-gen config \
--input mode-b-config/project-spec.toml \
--output /tmp/my-provisioning
```
→ See [`mode-b-config/README.md`](mode-b-config/) for details
**Mode C: Interactive Wizard** (Conversational)
```bash
# Step-by-step guidance with AI suggestions
typedialog-provisioning-gen wizard \
--project my-service \
--output /tmp/my-provisioning
```
→ See [`mode-c-wizard/README.md`](mode-c-wizard/) for details
**Mode D: Nickel Schema** (Migration)
```bash
# Convert existing Nickel schema
typedialog-provisioning-gen nickel \
--input mode-d-nickel/existing-config.ncl \
--output /tmp/my-provisioning
```
→ See [`mode-d-nickel/README.md`](mode-d-nickel/) for details
## What Gets Generated
All modes produce the same 7-layer structure:
```
provisioning/
├── constraints.toml      # Layer 1: Validation bounds (single source of truth)
├── schemas/              # Layer 2: Type contracts (Nickel)
├── validators/           # Layer 3: Validation logic
├── defaults/             # Layer 4: Default values
├── fragments/            # Layer 5: TypeDialog forms
├── scripts/              # Layer 6: Orchestration scripts
└── config.ncl            # Layer 7: Master configuration
```
→ See [`output-example/README.md`](output-example/) for detailed structure explanation
## Key Features
### ✅ 4 Input Modes
- **Cargo.toml Analysis**: Automatic feature detection from dependencies
- **Config File**: Explicit specification with full control
- **Interactive Wizard**: Conversational guidance with AI suggestions
- **Nickel Schema**: Convert existing Nickel to provisioning structure
### ✅ 7-Layer Validation
- **Constraints** (single source of truth)
- **Schemas** (Nickel type contracts)
- **Validators** (validation logic)
- **Defaults** (sensible defaults)
- **Fragments** (TypeDialog forms)
- **Scripts** (orchestration)
- **Master Config** (integration point)
### ✅ Domain Features
Each mode discovers or defines domain features:
| Feature | Fields | Example |
|---------|--------|---------|
| http_server | bind_address, timeout_seconds | REST API |
| database | host, port, username, password | PostgreSQL, MySQL |
| authentication | jwt_secret, expiry | JWT tokens |
| caching | cache_enabled, ttl | Redis layer |
| monitoring | prometheus_url, retention | Observability |
### ✅ Multiple Field Types
- Text, Number, Password, Confirm
- Select, MultiSelect
- Editor, Date
- RepeatingGroup (arrays)
### ✅ Sensitive Data Handling
Automatic encryption for sensitive fields:
- `age` (modern encryption)
- `sops` (Mozilla SOPS)
- `secretumvault` (integration ready)
- `aws-kms`, `gcp-kms`, `azure-kms`
### ✅ Constraint Interpolation
Single source of truth - define constraints once, use everywhere:
- In validators (Nickel)
- In forms (TypeDialog)
- In documentation
- In validation scripts
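For instance, application code can enforce the same bounds the forms and validators use by reading `constraints.toml` directly (a sketch assuming the `min`/`max`/`min_length`/`max_length` keys shown in the Mode B example, with the standard `serde` and `toml` crates):
```rust
use std::collections::HashMap;
use serde::Deserialize;

/// Bounds for one field, as written into constraints.toml.
#[derive(Deserialize, Default)]
struct Bounds {
    min: Option<f64>,
    max: Option<f64>,
    min_length: Option<usize>,
    max_length: Option<usize>,
}

/// constraints.toml maps feature -> field -> bounds.
type Constraints = HashMap<String, HashMap<String, Bounds>>;

fn check_rate_limit(value: f64) -> Result<(), Box<dyn std::error::Error>> {
    let raw = std::fs::read_to_string("provisioning/constraints.toml")?;
    let constraints: Constraints = toml::from_str(&raw)?;
    let bounds = constraints
        .get("api_gateway")
        .and_then(|fields| fields.get("rate_limit"))
        .ok_or("missing constraint entry")?;
    // Reject values outside the single source of truth.
    if bounds.min.map_or(false, |m| value < m) || bounds.max.map_or(false, |m| value > m) {
        return Err("rate_limit outside configured bounds".into());
    }
    Ok(())
}
```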
## Complete Workflow
### 1. Generate from Cargo.toml
```bash
cd my-rust-service
typedialog-provisioning-gen cargo --output ./provisioning
```
### 2. Test Form Interactively
```bash
typedialog provisioning/fragments/http_server-section.toml --backend cli
```
### 3. Validate Configuration
```bash
./provisioning/scripts/validate-nickel.sh < config.ncl
```
### 4. Use in Application
```rust
let config = nickel::eval_file("provisioning/config.ncl")?;
```
## Directory Guide
| Path | Purpose |
|------|---------|
| [`mode-a-cargo/`](mode-a-cargo/) | Cargo.toml introspection example |
| [`mode-b-config/`](mode-b-config/) | Config file specification example |
| [`mode-c-wizard/`](mode-c-wizard/) | Interactive wizard example |
| [`mode-d-nickel/`](mode-d-nickel/) | Nickel schema conversion example |
| [`output-example/`](output-example/) | Complete generated output structure |
## Examples by Scenario
### Scenario 1: Existing Rust Project
```bash
# You have a Cargo.toml with dependencies
typedialog-provisioning-gen cargo --output ./infra
# Auto-detects:
# - axum → HTTP server config
# - sqlx → database config
# - redis → caching config
# - prometheus → monitoring
```
Best for: Rust projects, quick setup
### Scenario 2: New Multi-Service Architecture
```bash
# You have a detailed specification
typedialog-provisioning-gen config --input architecture.toml --output ./platform
# Supports:
# - Multiple database types
# - Event streaming setup
# - Monitoring configuration
# - Cloud provider selection
```
Best for: Complex systems, cross-team coordination
### Scenario 3: Interactive Setup
```bash
# You want guidance through the process
typedialog-provisioning-gen wizard --project my-service
# Walks through:
# - Project type selection
# - Infrastructure requirements
# - Feature selection
# - Field configuration
# - AI-powered suggestions
```
Best for: First-time users, exploring options
### Scenario 4: Legacy System Migration
```bash
# You have existing Nickel schemas
typedialog-provisioning-gen nickel --input config.ncl --output ./new-provisioning
# Converts:
# - Type definitions → schemas
# - Records → domain features
# - Fields → form fragments
```
Best for: System modernization, adding forms to existing configs
## Using Generated Artifacts
### TypeDialog Forms
```bash
# Display configuration form
typedialog provisioning/fragments/database-section.toml --backend cli
# Web interface
typedialog-web provisioning/fragments/*.toml --port 3000
```
### Nickel Validation
```bash
# Validate syntax
./provisioning/scripts/validate-nickel.sh
# Use in config
nickel eval provisioning/config.ncl
```
### Scripts
```bash
# Master orchestrator
./provisioning/scripts/config.sh
# Format conversion
./provisioning/scripts/json-to-nickel.sh < input.json
./provisioning/scripts/nickel-to-json.sh < config.ncl
```
### Application Integration
```rust
// Load provisioning config
let config = provisioning::load("provisioning/config.ncl")?;
```
## What This Demonstrates
### Core Concepts
✅ Multi-mode input pipeline normalization
✅ 7-layer validation architecture
✅ Constraint interpolation (single source of truth)
✅ Template-based code generation
✅ Domain-driven feature extraction
### Integration Points
✅ TypeDialog forms generation
✅ Nickel schema contracts
✅ Sensitive data encryption
✅ Cross-format conversion (JSON ↔ Nickel)
✅ Orchestration scripts (bash + nushell)
### Advanced Features
✅ Cargo.toml dependency analysis
✅ Project type inference
✅ Infrastructure requirement detection
✅ AI-powered wizard with RAG retrieval
✅ Reusable template library
✅ Custom field types and validation
## Next Steps
### Try Each Mode
1. **Start with Mode A**: `cd mode-a-cargo && cat README.md`
2. **Explore Mode B**: `cd mode-b-config && cat README.md`
3. **Test Mode C**: `cd mode-c-wizard && cat README.md`
4. **Learn Mode D**: `cd mode-d-nickel && cat README.md`
### Understand the Output
- **View Structure**: `cd output-example && cat README.md`
- **Study Layers**: Examine each generated directory (schemas/, validators/, etc.)
- **Test Interactively**: Run TypeDialog on generated fragments
### Build Your Own
1. **Pick a mode** based on your needs
2. **Generate** from your input
3. **Customize** the generated structure
4. **Deploy** using generated scripts
5. **Iterate** as your requirements evolve
## More Information
- **TypeDialog Docs**: Form UI and integration
- **Nickel Docs**: Type-safe configuration language
- **Encryption**: Secrets management with age, sops, KMS
- **Scripts**: Bash and Nushell orchestration
## Testing All Examples
```bash
# Run Mode A
cargo run -p typedialog-provisioning-gen -- cargo \
--input examples/11-provisioning-generation/mode-a-cargo/Cargo.toml \
--output /tmp/mode-a-output
# Run Mode B
cargo run -p typedialog-provisioning-gen -- config \
--input examples/11-provisioning-generation/mode-b-config/project-spec.toml \
--output /tmp/mode-b-output
# Run Mode D
cargo run -p typedialog-provisioning-gen -- nickel \
--input examples/11-provisioning-generation/mode-d-nickel/existing-config.ncl \
--output /tmp/mode-d-output
# Run Mode C (interactive)
cargo run -p typedialog-provisioning-gen -- wizard \
--project test-service \
--output /tmp/mode-c-output
```
## Key Files
### Example Inputs
- `mode-a-cargo/Cargo.toml` - Real Cargo.toml with dependencies
- `mode-b-config/project-spec.toml` - Complete feature specification
- `mode-d-nickel/existing-config.ncl` - Nickel schema example
### Documentation
- `mode-a-cargo/README.md` - Cargo.toml analysis details
- `mode-b-config/README.md` - Config specification format
- `mode-c-wizard/README.md` - Wizard interaction flow
- `mode-d-nickel/README.md` - Nickel schema conversion
- `output-example/README.md` - Generated structure explanation
## Summary
The Provisioning Generator provides a complete solution for infrastructure-as-code configuration:
- **Flexible Input**: 4 modes for different scenarios
- **Type-Safe Output**: Nickel schemas with validation
- **User-Friendly Forms**: TypeDialog for interactive setup
- **Production-Ready**: Scripts for deployment and orchestration
- **Extensible**: Template library and custom domain features
Perfect for:
- Microservice configuration
- Infrastructure setup automation
- Configuration management at scale
- DevOps tooling
- SaaS platform provisioning


@@ -0,0 +1,20 @@
[package]
name = "web-service-example"
version = "0.1.0"
edition = "2021"
description = "Example web service with HTTP API and database"
[dependencies]
axum = "0.7"
tokio = { version = "1", features = ["full"] }
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
sqlx = { version = "0.7", features = ["postgres", "uuid", "chrono"] }
jsonwebtoken = "9.0"
redis = "0.24"
prometheus = "0.13"
tracing = "0.1"
tracing-subscriber = "0.3"
[dev-dependencies]
tokio-test = "0.4"


@@ -0,0 +1,132 @@
# Mode A: Cargo.toml Introspection
This example shows automatic provisioning generation from a `Cargo.toml` file.
## What Gets Detected
From the included `Cargo.toml`, the generator detects:
### Project Type
- **WebService** - detected from `axum` dependency
### Domain Features
- **http_server** - from `axum` dependency
- Fields: `bind_address`, `timeout_seconds`
- **authentication** - from `jsonwebtoken` dependency
- Fields: `jwt_secret` (sensitive)
- **caching** - from `redis` dependency
- Fields: `cache_enabled`, `cache_ttl_seconds`
### Infrastructure
- **Database**: PostgreSQL (from `sqlx` with postgres feature)
- Fields: host, port, username, password, database name
- **SSH**: Not required (no openssh-keys or ssh2)
- **Monitoring**: Prometheus detected (from `prometheus` dependency)
- **Cloud Providers**: Default to LXD
## Running the Example
### Step 1: Analyze the Cargo.toml
```bash
typedialog-provisioning-gen cargo \
--input examples/11-provisioning-generation/mode-a-cargo/Cargo.toml \
--output /tmp/my-provisioning
```
### Step 2: Inspect Generated Structure
```bash
tree /tmp/my-provisioning
```
Expected output:
```
/tmp/my-provisioning/
├── config.ncl            # Master configuration
├── constraints.toml      # Validation bounds
├── schemas/              # Domain types
│   ├── http_server.ncl
│   ├── authentication.ncl
│   ├── caching.ncl
│   ├── database.ncl
│   ├── environment.ncl
│   └── ...
├── validators/           # Validation functions
│   ├── http_server.ncl
│   ├── authentication.ncl
│   ├── caching.ncl
│   ├── common.ncl
│   └── ...
├── defaults/             # Default values
│   ├── http_server.ncl
│   ├── authentication.ncl
│   ├── caching.ncl
│   ├── database.ncl
│   └── ...
├── fragments/            # TypeDialog forms
│   ├── http_server-section.toml
│   ├── authentication-section.toml
│   ├── caching-section.toml
│   ├── database-postgres-section.toml
│   └── ...
└── scripts/              # Orchestration
    ├── config.sh
    ├── config.nu
    ├── json-to-nickel.sh
    ├── validate-nickel.sh
    └── ...
```
### Step 3: Validate Generated Configuration
```bash
cd /tmp/my-provisioning
./scripts/validate-nickel.sh
```
### Step 4: Use in TypeDialog Forms
```bash
# Display HTTP server configuration form
typedialog fragments/http_server-section.toml --backend cli
# Display database configuration form
typedialog fragments/database-postgres-section.toml --backend cli
```
## Dependency → Feature Mapping
The generator uses these heuristics:
| Dependency | Feature | Fields |
|---|---|---|
| axum, actix-web, rocket | http_server | bind_address, timeout_seconds |
| jsonwebtoken, oauth2 | authentication | jwt_secret |
| redis, memcache | caching | cache_enabled, cache_ttl_seconds |
| sqlx, mysql, postgres | database | host, port, username, password |
| prometheus | monitoring | scrape_interval, retention_time |
| grafana | dashboard | admin_password |
| openssh-keys, ssh2 | ssh | private_key_path, username, port |
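A sketch of how this mapping can be expressed in code (a hypothetical function that mirrors the table above; the real generator's internals may differ):
```rust
/// Map a Cargo dependency name to the domain feature it implies.
/// Hypothetical mirror of the heuristics table; illustrative only.
fn feature_for_dependency(dep: &str) -> Option<(&'static str, &'static [&'static str])> {
    match dep {
        "axum" | "actix-web" | "rocket" => {
            Some(("http_server", &["bind_address", "timeout_seconds"]))
        }
        "jsonwebtoken" | "oauth2" => Some(("authentication", &["jwt_secret"])),
        "redis" | "memcache" => Some(("caching", &["cache_enabled", "cache_ttl_seconds"])),
        "sqlx" | "mysql" | "postgres" => {
            Some(("database", &["host", "port", "username", "password"]))
        }
        "prometheus" => Some(("monitoring", &["scrape_interval", "retention_time"])),
        "grafana" => Some(("dashboard", &["admin_password"])),
        "openssh-keys" | "ssh2" => Some(("ssh", &["private_key_path", "username", "port"])),
        _ => None,
    }
}
```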
## Customization
To override the detection, use Mode B (config file) with custom features:
1. Generate initial config from Cargo.toml
2. Edit the config file to add/modify features
3. Re-generate with Mode B
See `../mode-b-config/` for the config file format.
## What This Demonstrates
✅ Automatic feature detection from dependencies
✅ Type inference (WebService detected from web frameworks)
✅ Infrastructure requirements (database, monitoring setup)
✅ Complete 7-layer validation stack generation
✅ Integration with TypeDialog forms
✅ Reusable template library (generic validators, schemas, etc.)
## Next Steps
1. **Modify Configuration**: Edit generated `config.ncl` to customize
2. **Add Custom Fields**: Create domain-specific fields in generated fragments
3. **Deploy**: Use `config.sh` or `config.nu` scripts for orchestration
4. **Integrate**: Import generated schemas in your application configuration


@@ -0,0 +1,294 @@
# Mode B: Config File Specification
This example shows provisioning generation from a declarative TOML configuration file.
## Use Cases
Mode B is ideal when:
- You don't have a Cargo.toml to analyze (e.g., Python, Go services)
- You want complete control over feature definitions
- You're designing configuration before implementation
- You need custom fields not automatically detected
- You're configuring existing services with different tech stacks
## Configuration Structure
The `project-spec.toml` file has four main sections:
### 1. Project Metadata
```toml
[project]
name = "service-name"
description = "What this service does"
type = "Microservice" # WebService, CliTool, Microservice, Library
```
### 2. Infrastructure Requirements
```toml
[infrastructure]
ssh = true
monitoring = ["prometheus", "grafana"]
cloud_providers = ["aws", "gcp"]
[[infrastructure.databases]]
type = "postgres"
required = true
```
### 3. Domain Features
Each feature represents a functional capability:
```toml
[features.my_feature]
description = "What this feature does"
enabled = true
[[features.my_feature.fields]]
name = "field_name"
type = "Text" # Text, Number, Password, Confirm, Select, etc.
prompt = "User prompt"
default = "default_value"
help = "Helpful text"
required = true
sensitive = false # Set to true for passwords/secrets
encryption_backend = "age" # For sensitive fields
min = 1 # For numbers/arrays
max = 100
options = ["a", "b"] # For Select/MultiSelect
```
### 4. Constraints (Single Source of Truth)
```toml
[constraints.feature_name.field_name]
min = 1
max = 100
min_length = 5
max_length = 255
```
## Running the Example
### Generate Provisioning Structure
```bash
typedialog-provisioning-gen config \
--input examples/11-provisioning-generation/mode-b-config/project-spec.toml \
--output /tmp/microservice-platform
```
### Inspect Generated Files
```bash
# View master configuration
cat /tmp/microservice-platform/config.ncl
# View API gateway schema
cat /tmp/microservice-platform/schemas/api_gateway.ncl
# View authentication fragment
cat /tmp/microservice-platform/fragments/authentication-section.toml
# Validate constraints
cat /tmp/microservice-platform/constraints.toml
```
### Test Forms Interactively
```bash
# Test API gateway configuration form
typedialog /tmp/microservice-platform/fragments/api_gateway-section.toml --backend cli
# Test authentication configuration form
typedialog /tmp/microservice-platform/fragments/authentication-section.toml --backend cli
```
### Use Generated Scripts
```bash
cd /tmp/microservice-platform
# Validate Nickel syntax
./scripts/validate-nickel.sh
# Create Nickel from JSON input
echo '{"api_gateway": {"bind_address": "0.0.0.0:8080"}}' | \
./scripts/json-to-nickel.sh
# Convert back to JSON
./scripts/nickel-to-json.sh < config.ncl
```
## Features Defined in This Example
### 1. API Gateway
- **Purpose**: REST API entry point with rate limiting
- **Fields**:
- `bind_address` - Interface and port
- `rate_limit` - Requests per second
- `timeout_seconds` - Request timeout
**Generated Output**:
- Schema: `schemas/api_gateway.ncl` - Type contract
- Validator: `validators/api_gateway.ncl` - Validation rules
- Defaults: `defaults/api_gateway.ncl` - Default values
- Fragment: `fragments/api_gateway-section.toml` - UI form
### 2. Event Streaming
- **Purpose**: Message queue for event-driven architecture
- **Fields**:
- `broker_url` - Message broker connection
- `consumer_group` - Kafka consumer group ID
- `max_concurrent_consumers` - Parallelism
### 3. Authentication
- **Purpose**: JWT-based access control
- **Fields**:
- `jwt_secret` - Sensitive signing key (encrypted)
- `jwt_expiry_hours` - Token lifetime
- `allow_refresh_tokens` - Refresh mechanism
### 4. Observability
- **Purpose**: Logging and distributed tracing
- **Fields**:
- `log_level` - Trace/Debug/Info/Warn/Error
- `enable_distributed_tracing` - Toggle tracing
- `trace_sample_rate` - Sampling percentage
### 5. Database Migrations
- **Purpose**: Schema versioning and evolution
- **Fields**:
- `migration_directory` - Path to SQL files
- `auto_migrate_on_startup` - Automatic migrations
## Field Types Reference
| Type | Use Case | Example |
|---|---|---|
| **Text** | Free-form string | domain.example.com |
| **Number** | Integer or float | 8080, 0.5 |
| **Password** | Sensitive text | secret key (encrypted) |
| **Confirm** | Yes/No toggle | Enable feature? |
| **Select** | Single choice | Log level: debug, info, warn |
| **MultiSelect** | Multiple choices | Monitoring: prometheus, grafana |
| **Editor** | Multi-line code | SQL migration script |
| **Date** | Calendar picker | Deployment date |
| **RepeatingGroup** | Array of items | Multiple database servers |
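For intuition, here is one way such field definitions could deserialize (a hypothetical serde model mirroring the keys documented above; not the tool's actual types):
```rust
use serde::Deserialize;

/// Field kinds from the table above (hypothetical mirror of the TOML `type` key).
#[derive(Deserialize, Debug)]
enum FieldType {
    Text, Number, Password, Confirm, Select,
    MultiSelect, Editor, Date, RepeatingGroup,
}

/// One `[[features.<name>.fields]]` entry from project-spec.toml.
#[derive(Deserialize, Debug)]
struct FieldSpec {
    name: String,
    #[serde(rename = "type")]
    field_type: FieldType,
    prompt: String,
    // Defaults may be a string, number, or bool, so keep them dynamic.
    default: Option<toml::Value>,
    help: Option<String>,
    #[serde(default)]
    required: bool,
    #[serde(default)]
    sensitive: bool,
    encryption_backend: Option<String>,
    min: Option<f64>,
    max: Option<f64>,
    options: Option<Vec<String>>,
}
```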
## Sensitive Fields
Mark fields as sensitive for encryption:
```toml
[[features.authentication.fields]]
name = "jwt_secret"
type = "Password"
sensitive = true
encryption_backend = "age" # Default: age
```
**Supported backends**:
- `age` - Modern encryption (recommended)
- `sops` - Mozilla SOPS
- `secretumvault` - SecretumVault integration
- `aws-kms` - AWS Key Management Service
- `gcp-kms` - Google Cloud KMS
- `azure-kms` - Azure Key Vault
## Constraints System
Constraints provide a single source of truth for validation bounds:
```toml
[constraints.api_gateway.bind_address]
min_length = 3
max_length = 50
[constraints.api_gateway.rate_limit]
min = 1
max = 10000
```
These constraints are:
- Used in TypeDialog forms (enforced client-side)
- Referenced in validators (Nickel validation)
- Available in fragments (UI constraints)
- Documented in generated schema
## Extending This Example
### Add a New Feature
1. Add to `project-spec.toml`:
```toml
[features.caching]
description = "Redis caching layer"
[[features.caching.fields]]
name = "redis_url"
type = "Text"
prompt = "Redis connection string"
```
2. Regenerate:
```bash
typedialog-provisioning-gen config --input project-spec.toml --output ./output
```
3. Generated files appear:
- `schemas/caching.ncl`
- `validators/caching.ncl`
- `defaults/caching.ncl`
- `fragments/caching-section.toml`
### Customize a Field
1. Modify the field definition
2. Update constraints if needed
3. Regenerate
The regeneration merges with existing configuration, making it safe to iterate.
## What This Demonstrates
✅ Complete feature definition without code
✅ Type safety through Nickel contracts
✅ Constraint interpolation (single source of truth)
✅ Multi-language field options (Select, MultiSelect)
✅ Sensitive data encryption configuration
✅ Infrastructure requirements specification
✅ Full 7-layer validation stack generation
## Next Steps
1. **Copy and Customize**: Use this as a template for your service
2. **Add Your Features**: Define the capabilities your service provides
3. **Generate**: Run provisioning-gen to create artifacts
4. **Test**: Use TypeDialog forms to validate configuration
5. **Deploy**: Use generated scripts for orchestration
6. **Iterate**: Modify spec and regenerate as needed
## Integration with Application
Once generated, import Nickel schemas in your app:
```nickel
# config.ncl
let infrastructure = import "schemas/database.ncl" in
let auth = import "schemas/authentication.ncl" in
{
  database = {
    host = "localhost",
    port = 5432,
  } | infrastructure.Database,
  authentication = auth.Authentication,
}
```
Or deserialize from JSON:
```bash
# Load configuration from TypeDialog form output
./scripts/json-to-nickel.sh < form-output.json > config.ncl
# Validate against schema
nickel eval config.ncl
# Export to application
./scripts/nickel-to-json.sh < config.ncl > config.json
```


@@ -0,0 +1,181 @@
# Project Specification for Mode B Example
# This file defines a provisioning configuration without relying on Cargo.toml
[project]
name = "microservice-platform"
description = "Distributed microservice platform with event streaming"
type = "Microservice"
[infrastructure]
# SSH configuration for remote deployments
ssh = true
# Monitoring stack
monitoring = ["prometheus", "grafana"]
# Cloud provider options
cloud_providers = ["aws", "gcp"]
# Optional services
cache_enabled = true

# Database configuration
# (array-of-tables entries must come after the plain infrastructure keys,
# otherwise TOML would attach those keys to the databases table)
[[infrastructure.databases]]
type = "postgres"
required = true
# Domain Features - define what your service actually does
[features.api_gateway]
description = "REST API gateway with rate limiting"
enabled = true
[[features.api_gateway.fields]]
name = "bind_address"
type = "Text"
prompt = "API Gateway bind address"
default = "0.0.0.0:8080"
help = "Interface and port for API requests"
required = true
[[features.api_gateway.fields]]
name = "rate_limit"
type = "Number"
prompt = "Rate limit (requests/second)"
default = 100
min = 1
max = 10000
help = "Maximum requests per second per client"
[[features.api_gateway.fields]]
name = "timeout_seconds"
type = "Number"
prompt = "Request timeout"
default = 30
min = 5
max = 300
[features.event_streaming]
description = "Event-driven architecture with message queue"
enabled = true
[[features.event_streaming.fields]]
name = "broker_url"
type = "Text"
prompt = "Message broker URL"
placeholder = "kafka://localhost:9092"
required = true
help = "Connection string for event broker"
[[features.event_streaming.fields]]
name = "consumer_group"
type = "Text"
prompt = "Consumer group ID"
default = "microservice-platform"
required = true
[[features.event_streaming.fields]]
name = "max_concurrent_consumers"
type = "Number"
prompt = "Max concurrent consumers"
default = 10
min = 1
max = 100
[features.authentication]
description = "JWT-based authentication with role-based access"
enabled = true
[[features.authentication.fields]]
name = "jwt_secret"
type = "Password"
prompt = "JWT signing secret"
sensitive = true
encryption_backend = "age"
required = true
help = "Secret key for JWT token signing and verification"
[[features.authentication.fields]]
name = "jwt_expiry_hours"
type = "Number"
prompt = "JWT token expiry (hours)"
default = 24
min = 1
max = 720
[[features.authentication.fields]]
name = "allow_refresh_tokens"
type = "Confirm"
prompt = "Allow token refresh?"
default = true
help = "Enable token refresh mechanism"
[features.observability]
description = "Comprehensive logging and tracing"
enabled = true
[[features.observability.fields]]
name = "log_level"
type = "Select"
prompt = "Log level"
default = "info"
options = ["trace", "debug", "info", "warn", "error"]
[[features.observability.fields]]
name = "enable_distributed_tracing"
type = "Confirm"
prompt = "Enable distributed tracing?"
default = true
[[features.observability.fields]]
name = "trace_sample_rate"
type = "Number"
prompt = "Trace sampling rate (0.0-1.0)"
default = 0.1
min = 0.0
max = 1.0
help = "Percentage of requests to trace (0.0-1.0)"
[features.database_migrations]
description = "Database schema and migration management"
enabled = true
[[features.database_migrations.fields]]
name = "migration_directory"
type = "Text"
prompt = "Migrations directory"
default = "./migrations"
help = "Path to SQL migration files"
[[features.database_migrations.fields]]
name = "auto_migrate_on_startup"
type = "Confirm"
prompt = "Auto-migrate on startup?"
default = false
help = "Automatically run pending migrations when service starts"
# Constraints - single source of truth for validation bounds
[constraints.api_gateway.bind_address]
min_length = 3
max_length = 50
[constraints.api_gateway.rate_limit]
min = 1
max = 10000
[constraints.event_streaming.consumer_group]
min_length = 1
max_length = 100
[constraints.authentication.jwt_secret]
min_length = 32
max_length = 256
[constraints.observability.trace_sample_rate]
min = 0.0
max = 1.0
[constraints.database_migrations.migration_directory]
min_length = 1
max_length = 255


@@ -0,0 +1,348 @@
# Mode C: Interactive Wizard
This example demonstrates the interactive provisioning generator wizard with AI-powered suggestions.
## Overview
Mode C guides you through a conversational process to build your provisioning configuration:
1. **Project Details**: Name and type (WebService, CliTool, Microservice, Library)
2. **Infrastructure**: Database, SSH, monitoring, cloud providers
3. **Domain Features**: Application-specific capabilities
4. **Field Configuration**: Details for each feature
5. **Generation**: Creates complete provisioning structure
## Running the Wizard
```bash
# Interactive wizard with project name
typedialog-provisioning-gen wizard --project my-service
# Or use default name
typedialog-provisioning-gen wizard
```
## Interactive Flow
### Step 1: Project Type Selection
```
What type of project are you building?
1) WebService - REST APIs, web servers
2) Microservice - Distributed services with event streaming
3) CliTool - Command-line utilities
4) Library - Reusable Rust library
5) Other
Select (1-5):
```
### Step 2: Infrastructure Requirements
```
Infrastructure Configuration
Enable SSH access? (y/n): y
→ SSH access enabled for deployments
Which databases do you need?
1) PostgreSQL
2) MySQL
3) SQLite
4) None
5) Multiple databases
Select (1-5): 1
→ PostgreSQL selected
Enable monitoring? (y/n): y
Which monitoring stack?
1) Prometheus
2) Grafana
3) Both
4) None
Select (1-4): 3
→ Prometheus + Grafana enabled
Cloud deployment target?
1) LXD (local containers)
2) AWS
3) GCP
4) Multiple
Select (1-4): 2
→ AWS selected
```
### Step 3: Domain Features
```
Domain Features
The wizard suggests features based on your project type.
Suggested features:
✓ http_server (HTTP REST API)
✓ authentication (User authentication)
□ caching (Redis caching)
□ event_streaming (Message queue)
□ file_upload (File handling)
Select features to include:
[✓] http_server
[✓] authentication
[ ] caching
[✓] event_streaming
[ ] file_upload
Confirm? (y/n): y
```
### Step 4: Feature Configuration
```
Configure http_server
1) Bind Address (default: 0.0.0.0:8080)
2) Request Timeout (default: 30 seconds)
3) Rate Limit (default: 100 req/sec)
4) Done
Select field to configure (1-4):
```
The wizard guides you through each feature's fields with:
- Helpful descriptions
- Validation rules
- Default values
- Examples
### Step 5: Review and Generate
```
Summary
Project: my-service
Type: WebService
Features: http_server, authentication, event_streaming
Database: PostgreSQL
Monitoring: Prometheus, Grafana
Generate to: ./provisioning
Continue? (y/n): y
Generating...
✓ Constraints
✓ Schemas
✓ Validators
✓ Defaults
✓ Fragments
✓ Scripts
Done! Generated to ./provisioning
```
## AI-Powered Suggestions
When Mode C integrates with typedialog-ai, it provides:
### 1. RAG-Based Feature Suggestions
The wizard retrieves similar project examples:
```
Projects similar to "WebService" in your domain:
1. E-commerce platform
- Features: product_catalog, shopping_cart, payment_gateway
- Database: PostgreSQL
- Monitoring: Prometheus + Grafana
2. Content delivery service
- Features: media_upload, streaming, analytics
- Database: PostgreSQL
- Caching: Redis
Suggest these features? (y/n):
```
### 2. Field Generation Assistance
LLM analyzes domain features and suggests fields:
```
Configure http_server feature
Based on similar services, recommended fields:
- bind_address (Type: Text)
- max_connections (Type: Number)
- keepalive_timeout (Type: Number)
- compression_enabled (Type: Confirm)
Add recommended fields? (y/n):
```
### 3. Constraint Learning
LLM suggests validation bounds from patterns:
```
For max_connections field:
Learned constraints from similar services:
- Min: 1
- Max: 100000
- Typical: 1000
Apply? (y/n):
```
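A learned suggestion like the one above could be modeled and rendered straight back into `constraints.toml`. A minimal sketch; the struct and method names are illustrative assumptions, not the wizard's actual API:
```rust
/// Hypothetical shape of a constraint suggestion learned from similar services.
#[derive(Debug, Clone)]
struct ConstraintSuggestion {
    feature: String,
    field: String,
    min: Option<f64>,
    max: Option<f64>,
}

impl ConstraintSuggestion {
    /// Render the suggestion as a constraints.toml fragment.
    fn to_toml_fragment(&self) -> String {
        let mut out = format!("[constraints.{}.{}]\n", self.feature, self.field);
        if let Some(min) = self.min {
            out.push_str(&format!("min = {min}\n"));
        }
        if let Some(max) = self.max {
            out.push_str(&format!("max = {max}\n"));
        }
        out
    }
}

fn main() {
    let suggestion = ConstraintSuggestion {
        feature: "http_server".into(),
        field: "max_connections".into(),
        min: Some(1.0),
        max: Some(100_000.0),
    };
    print!("{}", suggestion.to_toml_fragment());
}
```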
## Typical Workflow Examples
### Example 1: Simple Web Service
```bash
typedialog-provisioning-gen wizard --project api-service
# Selections:
# - WebService
# - PostgreSQL
# - Prometheus monitoring
# - Features: http_server, authentication
```
Generated structure optimized for REST APIs with user management.
### Example 2: Microservice Platform
```bash
typedialog-provisioning-gen wizard --project order-service
# Selections:
# - Microservice
# - PostgreSQL + Redis
# - Multiple cloud providers (AWS + GCP)
# - Features: http_server, authentication, event_streaming, caching
```
Generated for distributed event-driven architecture.
### Example 3: CLI Tool
```bash
typedialog-provisioning-gen wizard --project deployment-tool
# Selections:
# - CliTool
# - No database
# - SSH enabled
# - Features: configuration_management, deployment
```
Generated for infrastructure automation.
## Customizing the Wizard
The wizard can be extended with:
1. **Custom feature library** (in typedialog-ai):
- Add project types
- Define feature templates
- Set default constraints
2. **Custom LLM integration**:
- Use your own LLM provider
- Custom prompt engineering
- Domain-specific knowledge base
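For instance, the custom feature library could map a new project type to its default feature set. A sketch under the assumption that something like the `DomainFeature` type from the fallback example below exists; the stand-in definition and registry function here are hypothetical:
```rust
/// Stand-in for the wizard's real `DomainFeature` type (see Fallback Mode below).
struct DomainFeature {
    name: String,
}

impl DomainFeature {
    fn new(name: &str) -> Self {
        Self { name: name.to_string() }
    }
}

/// Hypothetical registry hook: default features per project type.
fn suggested_features(project_type: &str) -> Vec<DomainFeature> {
    match project_type {
        "WebService" => vec![
            DomainFeature::new("http_server"),
            DomainFeature::new("authentication"),
        ],
        "Microservice" => vec![
            DomainFeature::new("http_server"),
            DomainFeature::new("event_streaming"),
        ],
        _ => vec![DomainFeature::new("basic_config")],
    }
}

fn main() {
    for feature in suggested_features("Microservice") {
        println!("suggested: {}", feature.name);
    }
}
```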
## Fallback Mode
If typedialog-ai is not available, the wizard uses sensible defaults:
```rust
// Minimal wizard without AI. `prompt` and `select` are the wizard's
// interactive helpers; errors bubble up through the Result.
fn simple_wizard() -> Result<ProjectSpec> {
    let name = prompt("Project name? ", "my-project");
    let project_type = select(
        "Project type?",
        vec!["WebService", "CliTool", "Microservice", "Library"],
    )?;
    let spec = ProjectSpec {
        name,
        project_type,
        infrastructure: InfrastructureSpec::default(),
        domain_features: vec![DomainFeature::new("basic_config")],
        constraints: vec![],
    };
    Ok(spec)
}
```
This ensures the wizard works standalone without dependencies.
## Integration with typedialog-ai
For full AI capabilities:
1. Start typedialog-ai service:
```bash
typedialog-ai serve --port 3000
```
2. Configure in .env:
```bash
TYPEDIALOG_AI_URL=http://localhost:3000
LLM_API_KEY=sk-...
```
3. Run wizard:
```bash
typedialog-provisioning-gen wizard --project my-service
```
The wizard automatically uses AI for:
- Feature suggestions
- Constraint learning
- Field recommendations
- Example retrieval
## What This Demonstrates
✅ Conversational configuration
✅ AI-powered feature suggestions
✅ RAG-based example retrieval
✅ Interactive field configuration
✅ Sensible defaults
✅ Validation and constraints
✅ Graceful fallback without AI
✅ Complete 7-layer generation
## Troubleshooting
### "typedialog-ai not available"
The wizard falls back to basic mode. Features:
- Manual feature selection
- Default field configurations
- Standard constraints
To use full AI:
1. Install typedialog-ai: `cargo build -p typedialog-ai`
2. Start service: `typedialog-ai serve`
3. Run wizard: `typedialog-provisioning-gen wizard`
### "Generation failed"
Check:
1. Output directory is writable
2. Templates are installed correctly
3. Feature names don't contain special characters
4. Run with `--dry-run` first to validate
### "Fields missing after selection"
Some features are added or modified during the wizard run. Use Mode B to customize:
1. Run wizard: generates initial config
2. Export to TOML: inspect generated structure
3. Modify with Mode B: adjust features and fields
4. Regenerate: final structure
## Next Steps
1. **Run the Wizard**: `typedialog-provisioning-gen wizard --project my-service`
2. **Inspect Output**: Check generated provisioning/ structure
3. **Test Forms**: Run TypeDialog with fragments
4. **Deploy**: Use scripts for orchestration
5. **Iterate**: Modify and regenerate as needed
For detailed customization, see Mode B (Config File) examples.

View File

@ -0,0 +1,305 @@
# Mode D: Existing Nickel Schema Conversion
This example shows how to convert an existing Nickel schema into a complete provisioning structure with forms, validators, and orchestration scripts.
## Use Cases
Mode D is useful when:
- You have existing Nickel schemas from other projects
- You want to migrate configuration systems
- You need to add provisioning automation to existing Nickel configs
- You want to bridge Nickel schemas and TypeDialog forms
## Schema Conversion Process
The `existing-config.ncl` file defines types for a service:
```nickel
{
Server = { hostname, port, tls_enabled, ... },
Database = { engine, host, port, username, ... },
Authentication = { provider, secret_key, ... },
Monitoring = { enabled, prometheus_url, ... },
Application = { name, version, log_level, ... },
}
```
Mode D analyzes this schema and:
1. **Extracts Records**: Identifies type definitions (`Server`, `Database`, etc.)
2. **Maps Fields**: Converts Nickel types to TypeDialog field types
3. **Infers Features**: Groups related records into domain features
4. **Generates Artifacts**: Creates complete provisioning structure
## Running the Example
### Convert Nickel Schema to Provisioning
```bash
typedialog-provisioning-gen nickel \
--input examples/11-provisioning-generation/mode-d-nickel/existing-config.ncl \
--output /tmp/service-provisioning
```
### Inspect Conversion
```bash
# View inferred project spec
cat /tmp/service-provisioning/config.ncl
# View extracted server schema
cat /tmp/service-provisioning/schemas/server.ncl
# View generated form fragment
cat /tmp/service-provisioning/fragments/server-section.toml
# View extracted validators
cat /tmp/service-provisioning/validators/server.ncl
```
### Test Generated Forms
```bash
# Test server configuration form
typedialog /tmp/service-provisioning/fragments/server-section.toml --backend cli
# Test database configuration form
typedialog /tmp/service-provisioning/fragments/database-section.toml --backend cli
```
## Schema Extraction Details
### Type → Feature Mapping
| Nickel Type | Feature Name | Domain |
|---|---|---|
| `Server` | server | Infrastructure |
| `Database` | database | Infrastructure |
| `Authentication` | authentication | Security |
| `Monitoring` | monitoring | Observability |
| `Application` | application | Core |
### Field Type Conversion
| Nickel Type | TypeDialog Type | Example |
|---|---|---|
| `String` | Text | hostname, database_name |
| `Number` | Number | port (1-65535), timeout |
| `Bool` | Confirm | tls_enabled, debug |
| `[String]` | MultiSelect | ["warn", "error"] |
| `{...}` | RepeatingGroup | Array of records |
| `String \| optional` | Text (optional) | certificate_path |
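The conversion in the table above amounts to a small recursive mapping. A sketch with illustrative enums, not the generator's actual types:
```rust
/// Simplified view of the Nickel types the converter encounters.
enum NickelType {
    Str,
    Num,
    Bool,
    List(Box<NickelType>),
    Record,
    Optional(Box<NickelType>),
}

/// TypeDialog field types from the table above.
#[derive(Debug)]
enum FieldType {
    Text,
    Number,
    Confirm,
    MultiSelect,
    RepeatingGroup,
}

/// Map a Nickel type to a (field type, required?) pair.
fn map_field(ty: &NickelType) -> (FieldType, bool) {
    match ty {
        NickelType::Str => (FieldType::Text, true),
        NickelType::Num => (FieldType::Number, true),
        NickelType::Bool => (FieldType::Confirm, true),
        NickelType::List(_) => (FieldType::MultiSelect, true),
        NickelType::Record => (FieldType::RepeatingGroup, true),
        // `| optional` keeps the inner mapping but clears the required flag.
        NickelType::Optional(inner) => (map_field(inner).0, false),
    }
}

fn main() {
    let tls_enabled = NickelType::Optional(Box::new(NickelType::Bool));
    let (field_type, required) = map_field(&tls_enabled);
    println!("tls_enabled -> {field_type:?}, required: {required}");
}
```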
### Example Conversion
**Input Nickel:**
```nickel
Server = {
hostname | String,
port | Number,
tls_enabled | Bool | optional,
}
```
**Generated Schema** (`schemas/server.ncl`):
```nickel
{
Server = {
hostname | String,
port | Number,
tls_enabled | Bool | optional,
},
}
```
**Generated Fragment** (`fragments/server-section.toml`):
```toml
[section.server]
description = "Server configuration"
[[section.server.fields]]
name = "hostname"
prompt = "Server hostname"
type = "text"
required = true
[[section.server.fields]]
name = "port"
prompt = "Server port"
type = "number"
required = true
min = 1
max = 65535
[[section.server.fields]]
name = "tls_enabled"
prompt = "Enable TLS?"
type = "confirm"
required = false
```
**Generated Validator** (`validators/server.ncl`):
```nickel
{
validate_hostname = fun value =>
(std.is_string value) && (std.string.length value > 0),
validate_port = fun value =>
(std.is_number value) && (value >= 1) && (value <= 65535),
validate_tls_enabled = fun value =>
(std.is_bool value),
}
```
## Handling Complex Types
### Nested Records
When a record contains another record:
```nickel
ServiceConfig = {
server | Server, # References Server record
database | Database, # References Database record
}
```
Generated as separate features with relationships documented.
### Optional Fields
Nickel `optional` modifier:
```nickel
certificate_path | String | optional,
```
Generated as an optional form field (not required in the TypeDialog form).
### Array Types
Repeating items:
```nickel
tags | [String],
```
Generated as `RepeatingGroup` field with array validation.
## Extending Converted Schema
After conversion, you can:
1. **Add New Fields**: Modify generated fragments
```toml
[[section.server.fields]]
name = "max_connections"
type = "number"
```
2. **Add Validation**: Enhance validators
```nickel
validate_port = fun value =>
(value >= 1) && (value <= 65535) && (value % 2 == 0)
```
3. **Add Defaults**: Fill in missing defaults
```nickel
defaults.server = {
port = 8080,
tls_enabled = false,
}
```
4. **Add Constraints**: Update constraints.toml
```toml
[constraints.server.port]
min = 1024
max = 65535
```
## Integration with Existing Services
The converted provisioning can integrate with existing services:
### 1. Use as Configuration Loader
```rust
// Load configuration from the generated schema.
// `nickel_eval` evaluates Nickel to JSON; a sketch of it follows below.
let config = nickel_eval("provisioning/config.ncl")?;
let parsed: ServiceConfig = serde_json::from_value(config)?;
```
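A self-contained version of `nickel_eval` could shell out to the Nickel CLI and parse the JSON it emits. A sketch: it assumes `nickel` is on PATH and `serde_json` as a dependency, and the exact `nickel export` flags may vary between versions:
```rust
use std::process::Command;

/// Evaluate a Nickel file to JSON via the `nickel` CLI.
/// Sketch only: assumes `nickel` is on PATH; flags may differ by version.
fn nickel_eval(path: &str) -> Result<serde_json::Value, Box<dyn std::error::Error>> {
    let output = Command::new("nickel")
        .args(["export", "--format", "json", path])
        .output()?;
    if !output.status.success() {
        return Err(String::from_utf8_lossy(&output.stderr).into_owned().into());
    }
    Ok(serde_json::from_slice(&output.stdout)?)
}
```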
### 2. Provide Configuration UI
```bash
# Use generated TypeDialog forms for configuration
typedialog provisioning/fragments/database-section.toml \
--backend web --port 3000
```
### 3. Add Validation Layer
```bash
# Validate user input against Nickel schema
./provisioning/scripts/validate-nickel.sh < user-config.ncl
```
## Migration Path
To migrate an existing configuration system:
1. **Define Nickel Schema**
```bash
# Write or extract your configuration types
cat > config.ncl << 'EOF'
{ MyService = { ... } }
EOF
```
2. **Convert with Mode D**
```bash
typedialog-provisioning-gen nickel --input config.ncl --output ./provisioning
```
3. **Test Generated Forms**
```bash
typedialog provisioning/fragments/*.toml --backend cli
```
4. **Integrate Validation**
```bash
# Use validation in your service
./provisioning/scripts/validate-nickel.sh < config.ncl
```
5. **Deploy Forms**
```bash
# Serve configuration UI
typedialog-web provisioning/fragments/*.toml
```
## What This Demonstrates
✅ Nickel schema extraction and analysis
✅ Type inference to TypeDialog field mapping
✅ Automatic feature grouping from types
✅ Generator adaptation to schema structure
✅ Validator generation from type constraints
✅ Fragment generation from record fields
✅ Integration with existing Nickel code
✅ Schema versioning and migration
## Limitations
Some conversions have limitations:
| Limitation | Workaround |
|---|---|
| Custom Nickel functions | Manual validator enhancement |
| Complex constraints | Update constraints.toml |
| Domain-specific logic | Modify generated fragments |
| Record references | Links documented in fragments |
## Next Steps
1. **Analyze Your Schema**: Run Mode D on your Nickel file
2. **Review Generated Artifacts**: Check schemas, forms, validators
3. **Test Forms**: Use TypeDialog CLI to test UX
4. **Enhance**: Add custom fields and validation
5. **Deploy**: Use generated scripts for provisioning
6. **Iterate**: Modify and regenerate as needed
For more control, use Mode B (Config File) examples to define features explicitly.

View File

@ -0,0 +1,58 @@
# Existing Nickel configuration schema
# This demonstrates Mode D: converting existing Nickel to provisioning structure
# Infrastructure types
{
# Server configuration
Server = {
hostname | String,
port | Number,
tls_enabled | Bool | optional,
certificate_path | String | optional,
},
# Database connection configuration
Database = {
engine | String, # "postgres", "mysql", "sqlite"
host | String | optional,
port | Number | optional,
database | String,
username | String | optional,
password | String | optional,
ssl_mode | String | optional,
},
# Authentication settings
Authentication = {
provider | String, # "jwt", "oauth2", "basic"
secret_key | String,
expiry_hours | Number | optional,
refresh_enabled | Bool | optional,
},
# Monitoring configuration
Monitoring = {
enabled | Bool,
prometheus_url | String | optional,
scrape_interval | String | optional,
retention_days | Number | optional,
},
# Application features
Application = {
name | String,
version | String,
debug | Bool | optional,
log_level | String | optional, # "trace", "debug", "info", "warn", "error"
max_connections | Number | optional,
},
# Complete service configuration
ServiceConfig = {
application | Application,
server | Server,
database | Database,
authentication | Authentication,
monitoring | Monitoring,
},
}

View File

@ -0,0 +1,451 @@
# Generated Provisioning Output Example
This directory demonstrates the complete output structure generated by the provisioning generator across all 7 layers.
## Directory Structure
```
output-example/
├── config.ncl # Master configuration file
├── constraints.toml # Layer 1: Validation bounds (single source of truth)
├── schemas/ # Layer 2: Type contracts
│ ├── http_server.ncl
│ ├── database.ncl
│ ├── authentication.ncl
│ └── ...
├── validators/ # Layer 3: Validation logic
│ ├── http_server.ncl
│ ├── database.ncl
│ ├── common.ncl
│ └── ...
├── defaults/ # Layer 4: Default values
│ ├── http_server.ncl
│ ├── database.ncl
│ └── ...
├── fragments/ # Layer 5: TypeDialog forms
│ ├── http_server-section.toml
│ ├── database-section.toml
│ ├── authentication-section.toml
│ └── ...
├── scripts/ # Layer 6: Orchestration
│ ├── config.sh
│ ├── config.nu
│ ├── json-to-nickel.sh
│ ├── nickel-to-json.sh
│ ├── validate-nickel.sh
│ └── README.md (orchestration guide)
└── README.md # This file
```
## 7-Layer Validation Architecture
The provisioning system implements a complete validation stack:
### Layer 1: Constraints (Single Source of Truth)
**File**: `constraints.toml`
Defines bounds and rules for all fields:
```toml
[http_server.bind_address]
min_length = 3
max_length = 100
[http_server.timeout_seconds]
min = 1
max = 600
[database.port]
min = 1
max = 65535
unique = false
[authentication.jwt_secret]
min_length = 32
max_length = 256
```
**Purpose**:
- Single source of truth for validation rules
- Referenced by validators, forms, and documentation
- Enables constraint interpolation across layers
- Facilitates consistency across multiple validation backends
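Because the bounds live in one TOML file, any tool in the pipeline can load them directly. A minimal sketch using `serde` (with the derive feature) and the `toml` crate; the struct below is an assumption for illustration, and unknown keys such as `unique` are simply ignored:
```rust
use std::collections::HashMap;

use serde::Deserialize;

/// Bounds for one field; keys not present simply stay `None`.
#[derive(Debug, Deserialize, Default)]
#[serde(default)]
struct FieldConstraint {
    min: Option<f64>,
    max: Option<f64>,
    min_length: Option<u64>,
    max_length: Option<u64>,
}

/// feature -> field -> constraint
type Constraints = HashMap<String, HashMap<String, FieldConstraint>>;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let raw = std::fs::read_to_string("constraints.toml")?;
    let constraints: Constraints = toml::from_str(&raw)?;
    if let Some(c) = constraints
        .get("http_server")
        .and_then(|fields| fields.get("timeout_seconds"))
    {
        println!("timeout bounds: {:?}..{:?}", c.min, c.max);
    }
    Ok(())
}
```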
### Layer 2: Schemas (Type Contracts)
**Directory**: `schemas/`
Nickel type definitions for each domain feature:
**Example**: `schemas/http_server.ncl`
```nickel
{
HttpServer = {
bind_address | String,
timeout_seconds | Number,
max_connections | Number | optional,
},
}
```
**Purpose**:
- Type-safe configuration definitions
- Interfaces between application and configuration
- Validated by Nickel before use
- Self-documenting code
### Layer 3: Validators (Validation Logic)
**Directory**: `validators/`
Functions that enforce constraints:
**Example**: `validators/http_server.ncl`
```nickel
let validate_bind_address = fun value =>
  (std.is_string value) &&
  (std.string.length value >= 3) &&
  (std.string.length value <= 100)
in
let validate_timeout_seconds = fun value =>
  (std.is_number value) &&
  (value >= 1) &&
  (value <= 600)
in
{
  validate_bind_address = validate_bind_address,
  validate_timeout_seconds = validate_timeout_seconds,
}
```
**Purpose**:
- Enforce constraints defined in Layer 1
- Provide detailed validation error messages
- Can include complex logic (e.g., regex matching)
- Reusable across multiple configs
### Layer 4: Defaults (Default Values)
**Directory**: `defaults/`
Sensible defaults for each feature:
**Example**: `defaults/http_server.ncl`
```nickel
{
http_server = {
bind_address = "0.0.0.0:8080",
timeout_seconds = 30,
max_connections = 1000,
},
}
```
**Purpose**:
- Provide reasonable configuration starting points
- Reduce user decisions
- Document typical values
- Enable incremental configuration
### Layer 5: Fragments (Form Definitions)
**Directory**: `fragments/`
TypeDialog form fragments for user input:
**Example**: `fragments/http_server-section.toml`
```toml
[section.http_server]
description = "HTTP Server Configuration"
[[section.http_server.fields]]
name = "bind_address"
type = "text"
prompt = "Server bind address"
placeholder = "0.0.0.0:8080"
default = "0.0.0.0:8080"
help = "Format: IP:PORT (e.g., 0.0.0.0:8080)"
required = true
[[section.http_server.fields]]
name = "timeout_seconds"
type = "number"
prompt = "Request timeout"
default = 30
min = 1
max = 600
help = "Maximum seconds to wait for request completion"
```
**Purpose**:
- Provide user-friendly configuration UI
- Enforce constraints client-side
- Guide users with prompts and help text
- Support multiple backends (CLI, TUI, Web)
### Layer 6: Scripts (Orchestration)
**Directory**: `scripts/`
Automated configuration management:
- **config.sh / config.nu**: Master orchestrator
- **json-to-nickel.sh / json-to-nickel.nu**: Convert JSON → Nickel
- **nickel-to-json.sh / nickel-to-json.nu**: Convert Nickel → JSON
- **validate-nickel.sh / validate-nickel.nu**: Validate Nickel syntax
**Purpose**:
- Automate configuration pipeline
- Convert between formats
- Validate configuration integrity
- Enable CI/CD integration
### Layer 7: Master Config
**File**: `config.ncl`
Integrates all layers:
```nickel
let constraints = import "constraints.toml" in
let http_server_schema = import "schemas/http_server.ncl" in
let http_server_validator = import "validators/http_server.ncl" in
let http_server_defaults = import "defaults/http_server.ncl" in
{
  http_server = http_server_defaults.http_server
    | http_server_schema.HttpServer
    | http_server_validator,
}
```
## Using the Generated Structure
### 1. Display Configuration Form
```bash
# Interactive CLI form
typedialog fragments/http_server-section.toml --backend cli
# Web-based form
typedialog-web fragments/http_server-section.toml --port 3000
```
### 2. Validate Existing Config
```bash
# Validate Nickel syntax
./scripts/validate-nickel.sh < config.ncl
# Convert and validate JSON
./scripts/json-to-nickel.sh < input.json | ./scripts/validate-nickel.sh
```
### 3. Generate Config from User Input
```bash
# Collect form input
typedialog fragments/*.toml --backend cli --output config.json
# Convert to Nickel
./scripts/json-to-nickel.sh < config.json > config.ncl
# Validate
./scripts/validate-nickel.sh < config.ncl
```
### 4. Use in Application
```rust
// Load and parse the configuration. `nickel::evaluate` is a stand-in
// for however you evaluate Nickel (e.g. by shelling out to the CLI).
let config_str = std::fs::read_to_string("config.ncl")?;
let config: ServiceConfig = nickel::evaluate(&config_str)?;

// Use the typed configuration; bind_address already contains "IP:PORT".
println!("Server: {}", config.http_server.bind_address);
```
### 5. Export to Other Formats
```bash
# Export to JSON
./scripts/nickel-to-json.sh < config.ncl > config.json
# Export to flat key: value lines (YAML-like; nested data needs a real converter)
./scripts/nickel-to-json.sh < config.ncl | jq -r 'to_entries | map("\(.key): \(.value)") | .[]'
```
## Constraint Interpolation
Constraints can be referenced in validators and forms:
**In constraints.toml**:
```toml
[http_server.timeout_seconds]
min = 1
max = 600
```
**In validators**:
```nickel
let max_timeout = {{ constraints.http_server.timeout_seconds.max }} in
{
  validate_timeout = fun value => value <= max_timeout,
}
```
**In fragments**:
```toml
max = {{ constraints.http_server.timeout_seconds.max }}
```
This ensures single source of truth for validation bounds.
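The interpolation step itself is a straightforward search-and-replace over `{{ constraints.* }}` placeholders. A sketch using the `regex` crate; the generator's real implementation may differ:
```rust
use regex::Regex;

/// Replace `{{ constraints.<path> }}` placeholders via a lookup function.
/// Unresolved placeholders are left intact so they are easy to spot.
fn interpolate(template: &str, lookup: impl Fn(&str) -> Option<String>) -> String {
    let re = Regex::new(r"\{\{\s*constraints\.([\w.]+)\s*\}\}").unwrap();
    re.replace_all(template, |caps: &regex::Captures| {
        lookup(&caps[1]).unwrap_or_else(|| caps[0].to_string())
    })
    .into_owned()
}

fn main() {
    let rendered = interpolate(
        "max = {{ constraints.http_server.timeout_seconds.max }}",
        |path| (path == "http_server.timeout_seconds.max").then(|| "600".to_string()),
    );
    assert_eq!(rendered, "max = 600");
    println!("{rendered}");
}
```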
## Field Types Supported
### Primitive Types
- **Text** - Free-form string input
- **Number** - Numeric value (integer or float)
- **Password** - Sensitive text (encrypted)
- **Confirm** - Boolean yes/no toggle
### Selection Types
- **Select** - Single choice from predefined options
- **MultiSelect** - Multiple selections
### Complex Types
- **Editor** - Multi-line code or text editing
- **Date** - Calendar date picker
- **RepeatingGroup** - Array of structured items
## Sensitive Data Handling
Fields marked as sensitive are automatically encrypted:
```toml
[[section.authentication.fields]]
name = "jwt_secret"
type = "password"
sensitive = true
encryption_backend = "age" # or: sops, secretumvault, aws-kms, gcp-kms
```
**Supported Encryption Backends**:
- `age` - Modern encryption (recommended)
- `sops` - Mozilla SOPS
- `secretumvault` - SecretumVault integration
- `aws-kms` - AWS KMS
- `gcp-kms` - GCP Cloud KMS
- `azure-kms` - Azure Key Vault
Encrypted fields are:
- Masked in TypeDialog forms
- Decrypted only when needed
- Never logged or stored in plain text
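The masking behavior can be mirrored application-side by wrapping secrets in a type that refuses to print itself. A minimal sketch; the wrapper is an assumption for illustration, not TypeDialog's actual type:
```rust
use std::fmt;

/// Wrapper that keeps a decrypted secret out of Debug output and logs.
struct Sensitive(String);

impl fmt::Debug for Sensitive {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.write_str("********") // never reveal the inner value
    }
}

impl Sensitive {
    /// Explicit accessor makes every plaintext use grep-able.
    fn expose(&self) -> &str {
        &self.0
    }
}

fn main() {
    let secret = Sensitive("jwt-secret-value".to_string());
    println!("{secret:?}"); // prints ********
    assert_eq!(secret.expose(), "jwt-secret-value");
}
```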
## Configuration Workflow
Typical workflow with generated structure:
```
1. User runs interactive form
typedialog fragments/*.toml
User input: {http_server: {bind_address: "0.0.0.0:3000"}}
2. Convert to Nickel
./scripts/json-to-nickel.sh < input.json > config.ncl
3. Validate against schema
./scripts/validate-nickel.sh < config.ncl
✓ Valid configuration
4. Use in application
app.load_config("config.ncl")
Server starts on 0.0.0.0:3000
```
## Extending Generated Structure
### Add a New Feature
1. **Add schema** (`schemas/my_feature.ncl`):
```nickel
{
MyFeature = {
field1 | String,
field2 | Number | optional,
},
}
```
2. **Add validator** (`validators/my_feature.ncl`):
```nickel
{
validate_field1 = fun value => true,
validate_field2 = fun value => true,
}
```
3. **Add defaults** (`defaults/my_feature.ncl`):
```nickel
{
my_feature = {
field1 = "default",
field2 = 42,
},
}
```
4. **Add fragment** (`fragments/my_feature-section.toml`):
```toml
[section.my_feature]
[[section.my_feature.fields]]
name = "field1"
type = "text"
```
5. **Add constraints** (update `constraints.toml`):
```toml
[my_feature.field1]
min_length = 1
max_length = 100
```
6. **Update master config** (`config.ncl`):
```nickel
let my_feature_schema = import "schemas/my_feature.ncl" in
# ... include in final config
```
## Testing Configuration
```bash
# Test entire pipeline
./scripts/config.sh validate
# Test specific feature
typedialog fragments/http_server-section.toml --backend cli --test
# Test with sample data
echo '{"http_server":{"bind_address":"127.0.0.1:8080"}}' | \
./scripts/json-to-nickel.sh | \
./scripts/validate-nickel.sh
```
## Documentation and References
- **Nickel Docs**: Type contracts and validation
- **TypeDialog Docs**: Form field types and features
- **Scripts Guide**: Orchestration and conversions (`scripts/README.md`)
## Summary
The 7-layer architecture provides:
- **Separation of Concerns**: Each layer has a specific role
- **Single Source of Truth**: Constraints defined once, used everywhere
- **Type Safety**: Nickel contracts prevent invalid configs
- **User-Friendly Forms**: TypeDialog provides intuitive UI
- **Automated Validation**: Constraints enforced at multiple layers
- **Format Flexibility**: Convert between JSON, TOML, Nickel
- **Reusability**: Generic components shared across projects
- **Extensibility**: Easy to add features and customize
The generated structure is ready for:
- Production deployment
- CI/CD integration
- Distributed configuration management
- Multi-environment setups

View File

@ -0,0 +1,114 @@
# TypeDialog Agent Demos
This directory contains demonstration agent files showcasing the multi-provider LLM capabilities of TypeDialog Agent.
## Available Demos
### Provider-Specific Demos
Each demo showcases a different LLM provider with the same haiku generation task:
- **`demo-claude.agent.mdx`** - Claude (Anthropic) with claude-3-5-haiku-20241022
- **`demo-openai.agent.mdx`** - OpenAI GPT with gpt-4o-mini
- **`demo-gemini.agent.mdx`** - Google Gemini with gemini-2.0-flash-exp
- **`demo-ollama.agent.mdx`** - Ollama local models with llama2
## Running Demos
### Run All Providers
From the project root directory:
```bash
./demos/agent/run_demo.sh
```
This will:
1. Check for available API keys and Ollama server
2. Build the CLI if needed
3. Prompt for a haiku theme
4. Run demos for all configured providers
5. Display performance metrics and results
### Run Individual Demos
From the project root directory:
```bash
# Claude
cargo run --package typedialog-ag -- demos/agent/demo-claude.agent.mdx
# OpenAI
cargo run --package typedialog-ag -- demos/agent/demo-openai.agent.mdx
# Gemini
cargo run --package typedialog-ag -- demos/agent/demo-gemini.agent.mdx
# Ollama (local)
cargo run --package typedialog-ag -- demos/agent/demo-ollama.agent.mdx
```
## Requirements
### Cloud Providers
Set the appropriate API keys as environment variables:
```bash
# For Claude
export ANTHROPIC_API_KEY=sk-ant-...
# For OpenAI
export OPENAI_API_KEY=sk-...
# For Gemini
export GEMINI_API_KEY=...
# OR
export GOOGLE_API_KEY=...
```
### Local Models (Ollama)
1. Install Ollama from https://ollama.ai
2. Start the Ollama server:
```bash
ollama serve
```
3. Pull a model:
```bash
ollama pull llama2
```
Optional: Override default Ollama URL:
```bash
export OLLAMA_BASE_URL=http://localhost:11434
```
## Provider Comparison
| Provider | Streaming | Token Usage | Privacy | API Key Required |
|----------|-----------|-------------|---------|------------------|
| Claude | SSE | ✅ In stream | Cloud | ✅ Required |
| OpenAI | SSE | ❌ Not in stream | Cloud | ✅ Required |
| Gemini | JSON | ✅ In stream | Cloud | ✅ Required |
| Ollama | JSON | ✅ In stream | Local | ❌ Not required |
## Demo Features
All demos showcase:
- Template variable substitution (`{{ theme }}`)
- System prompts defining agent role
- Model configuration (max_tokens, temperature)
- Real-time streaming output
- Token usage tracking
- Performance metrics
## Rust Example
For a programmatic comparison of all providers, see:
```bash
cargo run --example provider_comparison
```
This example demonstrates both blocking and streaming modes for all four providers.

View File

@ -0,0 +1,375 @@
# Agent Execution Examples
This directory contains practical examples of using **TypeDialog Agent** (`typedialog-ag`) CLI for various real-world tasks.
## Overview
TypeDialog Agent allows you to execute AI agents defined as markdown files (`.agent.mdx`) with powerful features:
- Template variable substitution
- File imports with glob patterns
- Shell command integration
- Output validation
- Multi-provider LLM support (Claude, OpenAI, Gemini, Ollama)
## Examples Included
### 1. Basic Greeting (`basic-greeting.agent.mdx`)
**Purpose**: Simple introduction to agent execution
**LLM**: Claude Haiku (fast, cost-effective)
**Features**: Basic template variables, optional inputs
```bash
cargo run --package typedialog-ag -- examples/12-agent-execution/basic-greeting.agent.mdx
```
**What it demonstrates:**
- Simple agent configuration
- Required and optional inputs
- Conditional logic in templates
- Fast response with Haiku model
---
### 2. Code Review (`code-review.agent.mdx`)
**Purpose**: Automated code review with security and performance analysis
**LLM**: Claude Sonnet (balanced quality/cost)
**Features**: File imports, git integration, output validation
```bash
cargo run --package typedialog-ag -- examples/12-agent-execution/code-review.agent.mdx
```
**What it demonstrates:**
- File pattern imports (`@import`)
- Shell command execution (`@shell`)
- Output validation rules
- Security-focused analysis
- Lower temperature for consistent output
---
### 3. Documentation Generator (`documentation.agent.mdx`)
**Purpose**: Generate comprehensive documentation from source code
**LLM**: GPT-4o (excellent code understanding)
**Features**: Multi-file imports, cargo integration
```bash
cargo run --package typedialog-ag -- examples/12-agent-execution/documentation.agent.mdx
```
**What it demonstrates:**
- Complex file glob patterns
- Integration with cargo doc
- Long-form content generation (8K tokens)
- Structured output requirements
---
### 4. Task Planner (`task-planner.agent.mdx`)
**Purpose**: Break down projects into actionable tasks
**LLM**: Claude Sonnet
**Features**: Git status integration, complexity assessment
```bash
cargo run --package typedialog-ag -- examples/12-agent-execution/task-planner.agent.mdx
```
**What it demonstrates:**
- Project planning without time estimates
- Complexity-based task breakdown
- Risk identification
- Dependency management
- Implementation sequencing
---
### 5. Local Privacy Analysis (`local-privacy.agent.mdx`)
**Purpose**: Privacy-first data analysis using local models
**LLM**: Ollama (llama2) - runs entirely locally
**Features**: Zero cloud API calls, complete privacy
```bash
# Requires Ollama server running
ollama serve
# Then run the agent
cargo run --package typedialog-ag -- examples/12-agent-execution/local-privacy.agent.mdx
```
**What it demonstrates:**
- Local LLM execution with Ollama
- Privacy-preserving workflows
- Sensitive data handling
- Offline operation
- No API costs
---
### 6. Creative Writer (`creative-writer.agent.mdx`)
**Purpose**: Creative content generation
**LLM**: Gemini 2.0 Flash (creative, multi-modal)
**Features**: High temperature, creative output
```bash
cargo run --package typedialog-ag -- examples/12-agent-execution/creative-writer.agent.mdx
```
**What it demonstrates:**
- High temperature for creativity (0.9)
- Gemini's creative capabilities
- Long-form content generation
- Style customization
---
### 7. Refactoring Assistant (`refactoring-assistant.agent.mdx`)
**Purpose**: Automated code refactoring with best practices
**LLM**: GPT-4o-mini (fast, code-focused)
**Features**: Clippy integration, Rust-specific patterns
```bash
cargo run --package typedialog-ag -- examples/12-agent-execution/refactoring-assistant.agent.mdx
```
**What it demonstrates:**
- Low temperature for consistent code (0.2)
- Integration with clippy warnings
- Before/after code comparisons
- Rust idioms and best practices
- Migration planning
---
### 8. Software Architect (`architect.agent.mdx`)
**Purpose**: High-level architecture design and ADRs
**LLM**: Claude Opus (most capable)
**Features**: Tool calling, diagram generation
```bash
cargo run --package typedialog-ag -- examples/12-agent-execution/architect.agent.mdx
```
**What it demonstrates:**
- Most capable model for complex design
- Tool calling capabilities
- Architecture decision records
- Mermaid diagram generation
- Comprehensive validation
---
## Model Selection Guide
| Use Case | Recommended Model | Why |
|----------|-------------------|-----|
| Quick tasks, simple queries | Claude Haiku | Fast, cheap, good for straightforward tasks |
| Code review, planning | Claude Sonnet | Balanced performance and cost |
| Complex architecture, critical design | Claude Opus | Most capable, highest quality |
| Code generation, refactoring | GPT-4o, GPT-4o-mini | Excellent code understanding |
| Creative content | Gemini 2.0 Flash | Creative, multi-modal capabilities |
| Privacy-sensitive data | Ollama (local) | No cloud APIs, complete privacy |
## Running Examples
### Basic Execution
```bash
# From project root
cargo run --package typedialog-ag -- examples/12-agent-execution/<example>.agent.mdx
```
### With Input Variables
```bash
# Interactive prompts
cargo run --package typedialog-ag -- examples/12-agent-execution/basic-greeting.agent.mdx
# You'll be prompted:
# name (String): Alice
# language (optional, String): Spanish
```
### Skip Prompts (Use Defaults)
```bash
cargo run --package typedialog-ag -- examples/12-agent-execution/basic-greeting.agent.mdx --yes
```
## Requirements
### API Keys
Set environment variables for cloud providers:
```bash
# Claude
export ANTHROPIC_API_KEY=sk-ant-...
# OpenAI
export OPENAI_API_KEY=sk-...
# Gemini
export GEMINI_API_KEY=...
```
### Local Models (Ollama)
```bash
# Install Ollama
# Download from: https://ollama.ai
# Start server
ollama serve
# Pull model
ollama pull llama2
```
## Related Resources
### Tests and Demos
- **[Tests](../../tests/agent/README.md)** - Test agent files for validation
- **[Demos](../../demos/agent/README.md)** - Multi-provider comparison demos
### Core Library Resources
- **[Rust Examples](../../crates/typedialog-agent/typedialog-ag-core/examples/)** - Programmatic usage examples
- `provider_comparison.rs` - Compare all 4 LLM providers
- `llm_execution.rs` - Basic LLM execution
- **[Test Fixtures](../../crates/typedialog-agent/typedialog-ag-core/tests/)** - Integration tests
- `integration_test.rs` - Full pipeline tests
- `simple_integration_test.rs` - Basic validation
- **[Documentation](../../crates/typedialog-agent/typedialog-ag-core/LLM_INTEGRATION.md)** - Complete LLM integration guide
## Agent File Format
All examples follow this structure:
```markdown
---
@agent {
role: <agent role>,
llm: <model-name>,
max_tokens: <number>,
temperature: <0.0-1.0>
}
@input <variable>: <Type>
@input <optional_var>?: <Type>
@import "<pattern>" as <variable>
@shell "<command>" as <variable>
@validate output {
must_contain: ["string1", "string2"],
format: <markdown|json|yaml>,
min_length: <number>
}
---
Template content with {{variables}}
```
## Features Showcased
### Template System
- Variable substitution: `{{ name }}`
- Conditionals: `{{#if condition}}...{{/if}}`
- Iteration: `{{#each items}}...{{/each}}`
- Filters: `{{ name | upcase }}`
### Context Injection
- **File imports**: `@import "./src/**/*.rs" as code`
- **Shell commands**: `@shell "git status" as status`
- **Glob patterns**: Match multiple files
- **Line ranges**: Extract specific lines (future feature)
### Validation
- Required content: `must_contain: ["keywords"]`
- Format checking: `format: markdown`
- Length requirements: `min_length: 500`
- Custom validation rules (future feature)
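The validation rules map naturally onto a small host-side check. A sketch of the semantics; the names are illustrative, see the crate's own types for the real API:
```rust
/// Illustrative subset of the `@validate output` rules.
struct OutputRules<'a> {
    must_contain: &'a [&'a str],
    min_length: usize,
}

fn validate_output(output: &str, rules: &OutputRules) -> Result<(), String> {
    if output.len() < rules.min_length {
        return Err(format!(
            "output is {} chars, expected at least {}",
            output.len(),
            rules.min_length
        ));
    }
    for needle in rules.must_contain {
        if !output.contains(needle) {
            return Err(format!("missing required content: {needle}"));
        }
    }
    Ok(())
}

fn main() {
    let rules = OutputRules {
        must_contain: &["## Security", "## Recommendations"],
        min_length: 500,
    };
    match validate_output("too short", &rules) {
        Ok(()) => println!("valid"),
        Err(e) => println!("validation failed: {e}"),
    }
}
```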
## Performance Tips
### Model Selection
- Use **Haiku** for simple, fast tasks
- Use **Sonnet** for balanced quality/speed
- Use **Opus** only for complex, critical tasks
- Use **GPT-4o-mini** for cost-effective code tasks
- Use **Ollama** for privacy and offline operation
### Token Optimization
```yaml
@agent {
max_tokens: 500 # Limit response length for simple tasks
}
```
### Temperature Control
```yaml
@agent {
  temperature: 0.2   # Low: consistent, factual output (code, analysis)
  # temperature: 0.7 # Moderate: balanced creativity
  # temperature: 0.9 # High: creative content
}
```
## Troubleshooting
### API Key Not Set
```
Error: ANTHROPIC_API_KEY environment variable not set
```
**Solution**: Set the required API key for your chosen model
### File Not Found
```
Error: Failed to import "./path/to/file.md"
```
**Solution**: Check file paths are relative to project root or use absolute paths
### Validation Failed
```
Error: Validation failed: missing required content
```
**Solution**: Adjust validation rules or improve agent prompt
### Ollama Connection Failed
```
Error: Failed to call Ollama API - is Ollama running?
```
**Solution**: Start Ollama server with `ollama serve`
## Next Steps
1. **Try the examples** - Run each example to understand capabilities
2. **Create your own** - Use examples as templates for your use cases
3. **Explore demos** - See [demos/agent](../../demos/agent/) for provider comparisons
4. **Read the docs** - Check [LLM_INTEGRATION.md](../../crates/typedialog-agent/typedialog-ag-core/LLM_INTEGRATION.md)
5. **Run tests** - Validate with [tests/agent](../../tests/agent/)
## Contributing
To add a new example:
1. Create a `.agent.mdx` file following the format above
2. Test it works with `cargo run --package typedialog-ag -- <path>`
3. Document it in this README
4. Add any required dependencies or setup steps
---
**See Also:**
- [Project README](../../README.md) - Main project documentation
- [Quickstart](../../crates/typedialog-agent/quickstart.md) - Getting started guide

View File

@ -0,0 +1,42 @@
---
@agent {
role: architect
llm: claude-opus-4
tools: [analyze_codebase, generate_adr, create_diagram]
}
@input feature_name: String
@input requirements?: String
@import "./docs/architecture/**/*.md" as arch_docs
@shell "git log --oneline -20" as recent_commits
@validate output {
must_contain: ["# ADR", "```mermaid"]
format: markdown
min_length: 1000
}
---
# Architecture Design: {{feature_name}}
{{#if requirements}}
## Requirements
{{requirements}}
{{/if}}
## Current Architecture
{{arch_docs}}
## Recent Changes
{{recent_commits}}
## Deliverables
1. Architecture Decision Record (ADR)
2. System diagram (mermaid format)
3. Implementation steps
Use tools to analyze codebase and generate comprehensive design.

View File

@ -0,0 +1,19 @@
---
@agent {
role: friendly assistant,
llm: claude-3-5-haiku-20241022
}
@input name: String
@input language?: String
---
Greet {{name}} in a warm and friendly way.
{{#if language}}
Please respond in {{language}}.
{{else}}
Please respond in English.
{{/if}}
Include a fun fact about their name if you know one!

View File

@ -0,0 +1,54 @@
---
@agent {
role: senior software engineer and code reviewer,
llm: claude-3-5-sonnet-20241022,
max_tokens: 4096,
temperature: 0.3
}
@input file_pattern: String
@input focus?: String
@import "{{file_pattern}}" as code_files
@shell "git diff --staged" as staged_changes
@validate output {
must_contain: ["## Security", "## Performance", "## Recommendations"],
format: markdown,
min_length: 500
}
---
# Code Review
## Files Under Review
{{code_files}}
## Staged Changes
{{staged_changes}}
## Review Focus
{{#if focus}}
Pay special attention to: {{focus}}
{{else}}
Comprehensive review covering:
- Security vulnerabilities
- Performance issues
- Code quality and maintainability
- Best practices
- Error handling
{{/if}}
## Instructions
Provide a thorough code review with:
1. **Security Analysis**: Identify any security vulnerabilities (SQL injection, XSS, authentication issues, etc.)
2. **Performance Review**: Spot performance bottlenecks and optimization opportunities
3. **Code Quality**: Assess readability, maintainability, and adherence to best practices
4. **Recommendations**: Provide specific, actionable suggestions for improvement
Rate the overall code quality (1-10) and justify your rating.

View File

@ -0,0 +1,64 @@
---
@agent {
role: creative writer and storyteller,
llm: gemini-2.0-flash-exp,
max_tokens: 4096,
temperature: 0.9
}
@input topic: String
@input style?: String
@input length?: String
@validate output {
format: markdown,
min_length: 300
}
---
# Creative Writing Task
## Topic
{{topic}}
## Style
{{#if style}}
Write in the style of: {{style}}
{{else}}
Use an engaging, descriptive narrative style.
{{/if}}
## Length
{{#if length}}
Target length: {{length}}
{{else}}
Medium length (2-3 paragraphs)
{{/if}}
---
## Task
Create an original, creative piece about **{{topic}}**.
### Requirements:
1. **Hook**: Start with a captivating opening that draws readers in
2. **Vivid Details**: Use sensory language and descriptive imagery
3. **Engagement**: Maintain reader interest throughout
4. **Conclusion**: End with a memorable closing
### Creative Elements to Include:
- Metaphors or similes where appropriate
- Varied sentence structure for rhythm
- Show, don't tell
- Unexpected perspectives or insights
### Gemini's Strengths:
Gemini excels at:
- Multi-modal understanding
- Creative and varied outputs
- Long-form content generation
- Cultural and contextual awareness
Write the piece now. Be creative, original, and engaging!

View File

@ -0,0 +1,17 @@
---
@agent {
role: creative AI poet,
llm: claude-3-5-haiku-20241022
}
@input theme: String
---
Write a beautiful haiku about {{ theme }}.
Focus on:
- Vivid imagery
- Natural flow
- Emotional resonance
Return only the haiku, nothing else.

View File

@ -0,0 +1,17 @@
---
@agent {
role: creative AI poet,
llm: gemini-2.0-flash-exp
}
@input theme: String
---
Write a beautiful haiku about {{ theme }}.
Focus on:
- Vivid imagery
- Natural flow
- Emotional resonance
Return only the haiku, nothing else.

View File

@ -0,0 +1,17 @@
---
@agent {
role: creative AI poet,
llm: llama2
}
@input theme: String
---
Write a beautiful haiku about {{ theme }}.
Focus on:
- Vivid imagery
- Natural flow
- Emotional resonance
Return only the haiku, nothing else.

View File

@ -0,0 +1,17 @@
---
@agent {
role: creative AI poet,
llm: gpt-4o-mini
}
@input theme: String
---
Write a beautiful haiku about {{ theme }}.
Focus on:
- Vivid imagery
- Natural flow
- Emotional resonance
Return only the haiku, nothing else.

View File

@ -0,0 +1,153 @@
---
@agent {
role: technical documentation architect,
llm: claude-3-5-sonnet-20241022,
max_tokens: 8192,
temperature: 0.3
}
@shell "ls -la /Users/Akasha/Development/typedialog/docs/" as current_docs
@shell "find /Users/Akasha/Development/typedialog/crates -name 'README.md' -o -name '*.md' | head -20" as crate_docs
@import "/Users/Akasha/Development/typedialog/README.md" as main_readme
@import "/Users/Akasha/Development/typedialog/CHANGES.md" as changes_file
@import "/Users/Akasha/Development/typedialog/docs/README.md" as docs_readme
@validate output {
must_contain: ["## Documentation Structure", "## Changes to CHANGES.md", "## README Updates"],
format: markdown,
min_length: 1500
}
---
# TypeDialog Documentation Reorganization Plan
## Current State
### Existing docs/ Structure
{{current_docs}}
### Crate Documentation
{{crate_docs}}
### Main README
{{main_readme}}
### Current CHANGES.md
{{changes_file}}
### Docs README
{{docs_readme}}
---
## Task: Documentation Reorganization
You need to create a comprehensive plan to:
### 1. Create Documentation Folders
Create the following folders in `docs/` (similar to existing `docs/agent/`):
- **docs/cli/** - CLI backend documentation (typedialog CLI)
- **docs/tui/** - TUI backend documentation (typedialog-tui)
- **docs/web/** - Web backend documentation (typedialog-web)
- **docs/ai/** - AI backend documentation (RAG, embeddings)
- **docs/encryption/** - Move encryption docs here:
- ENCRYPTION-QUICK-START.md
- ENCRYPTION-SERVICES-SETUP.md
- ENCRYPTION-UNIFIED-ARCHITECTURE.md
Each folder should follow the pattern established by `docs/agent/`:
- README.md (overview + navigation)
- GETTING_STARTED.md
- REFERENCE.md (API/CLI reference)
- Additional specialized docs as needed
### 2. Update CHANGES.md
The CHANGES.md file is outdated. It needs to include these major changes that happened after the RepeatingGroup work:
**TypeDialog Agent (NEW):**
- Complete multi-provider LLM support (Claude, OpenAI, Gemini, Ollama)
- Agent execution system (.agent.mdx files)
- Template system with LiquidJS (variables, conditionals, imports)
- File imports with glob patterns
- Shell command integration
- Output validation rules
- Streaming support (SSE for Claude/OpenAI, JSON for Gemini/Ollama)
- Token usage tracking
**Naming Refactor:**
- Renamed typeagent-* to typedialog-ag-* for consistency
- typeagent-core → typedialog-ag-core
- typeagent-cli → typedialog-ag
- typeagent-server → typedialog-ag-server
- Updated all imports, docs, and scripts
**Organization:**
- Created demos/agent/ with 4 provider comparison demos
- Created tests/agent/ with validation tests
- Created examples/12-agent-execution/ with 8 practical examples
- Created docs/agent/ with complete user documentation
**Documentation:**
- docs/agent/README.md - Overview
- docs/agent/GETTING_STARTED.md - Installation and first agent
- docs/agent/LLM_PROVIDERS.md - Complete provider comparison
Format this as a proper CHANGES.md entry with:
- Version/date heading
- Categorized sections (Added, Changed, Fixed)
- Specific, actionable descriptions
- Links to relevant docs/examples
### 3. Update Main README.md
Add a section about TypeDialog Agent after the "Backends at a Glance" section:
**TypeDialog Agent (typedialog-ag)**
- What it is (AI agent execution from markdown files)
- Quick example
- Link to docs/agent/
- Link to examples/12-agent-execution/
### 4. Update docs/README.md
Add references to the new documentation folders:
- Link to cli/
- Link to tui/
- Link to web/
- Link to ai/
- Link to agent/
- Link to encryption/
Organize them logically (e.g., "Backend Documentation", "Feature Documentation")
---
## Instructions
Provide a **detailed, actionable plan** that includes:
1. **Folder Structure** - Exact directory tree showing what to create
2. **File Moves** - Which existing files to move where
3. **New Files to Create** - List each new README.md with a brief outline
4. **CHANGES.md Content** - Complete markdown entry to add
5. **README.md Updates** - Exact markdown sections to add/modify
6. **docs/README.md Updates** - Complete reorganized structure
7. **Migration Order** - Step-by-step execution order to avoid conflicts
For each new documentation folder (cli/, tui/, web/, ai/), specify:
- What topics should be covered
- What existing content can be referenced
- What new content needs to be written
- Cross-references to examples and code
**IMPORTANT:**
- DO NOT include time estimates
- Focus on WHAT needs to be done and WHY
- Identify dependencies (what must be done first)
- Flag any risks (e.g., breaking existing links)
- Provide clear success criteria
Generate the complete plan now.

View File

@ -0,0 +1,68 @@
---
@agent {
role: technical writer and documentation specialist,
llm: gpt-4o,
max_tokens: 8192
}
@input module_path: String
@input doc_type: String
@import "{{module_path}}/**/*.rs" as source_code
@import "{{module_path}}/../README.md" as project_readme
@shell "cargo doc --package {{module_path}} --no-deps 2>&1 | head -20" as cargo_doc_output
@validate output {
must_contain: ["# ", "## ", "```"],
format: markdown,
min_length: 1000
}
---
# Generate Documentation
## Documentation Type
{{doc_type}}
## Source Code
{{source_code}}
## Project Context
{{project_readme}}
## Existing Documentation Status
{{cargo_doc_output}}
---
## Task
Generate comprehensive {{doc_type}} documentation for the module at `{{module_path}}`.
### Requirements:
1. **For API Documentation:**
- Document all public functions, structs, and traits
- Include usage examples with code snippets
- Explain parameters, return values, and error cases
- Add "See Also" sections for related functionality
2. **For User Guide:**
- Start with a clear overview
- Include getting started instructions
- Provide practical examples
- Add troubleshooting section
3. **For Architecture Documentation:**
- Explain high-level design decisions
- Show component interactions (use mermaid diagrams)
- Document data flow
- Explain key design patterns used
4. **General Guidelines:**
- Use clear, concise language
- Include code examples in markdown fences
- Add links to related documentation
- Follow Rust documentation conventions
Generate the documentation now.

View File

@ -0,0 +1,64 @@
---
@agent {
role: data privacy consultant,
llm: llama2,
max_tokens: 2048,
temperature: 0.7
}
@input data_type: String
@input use_case: String
@import ".env" as env_file
@shell "ls -la *.json *.yaml *.toml 2>/dev/null | head -10" as config_files
---
# Privacy-First Data Analysis
## Scenario
You need to analyze {{data_type}} for {{use_case}}.
## Environment Configuration
{{env_file}}
## Configuration Files Found
{{config_files}}
---
## Why Ollama for Sensitive Data?
This agent uses **Ollama** (llama2) which runs entirely locally on your machine:
- ✅ No data sent to external APIs
- ✅ Complete privacy - data never leaves your computer
- ✅ No API costs
- ✅ Offline operation
- ✅ Full control over model execution
## Task
Analyze the following aspects of handling {{data_type}}:
1. **Privacy Requirements**
- What privacy regulations apply? (GDPR, CCPA, HIPAA, etc.)
- What data classification level is this?
- What consent mechanisms are needed?
2. **Security Recommendations**
- How should this data be encrypted at rest and in transit?
- What access controls should be implemented?
- What audit logging is required?
3. **Processing Guidelines**
- Can this data be processed in the cloud?
- Should it remain on-premises?
- What data minimization strategies apply?
4. **Compliance Checklist**
- What documentation is required?
- What rights do data subjects have?
- What breach notification procedures apply?
Provide specific, actionable recommendations for {{use_case}}.
**Note:** Since this analysis runs locally, you can safely include sensitive context in your prompts!

View File

@ -0,0 +1,86 @@
---
@agent {
role: expert Rust developer and refactoring specialist,
llm: gpt-4o-mini,
max_tokens: 4096,
temperature: 0.2
}
@input target_file: String
@input refactoring_goal: String
@import "{{target_file}}" as source_code
@shell "cargo clippy --message-format=short 2>&1 | grep {{target_file}}" as clippy_warnings
@validate output {
must_contain: ["```rust", "## Before", "## After", "## Rationale"],
format: markdown,
min_length: 500
}
---
# Rust Refactoring Assistant
## Target File
{{target_file}}
## Current Code
```rust
{{source_code}}
```
## Clippy Warnings
{{clippy_warnings}}
## Refactoring Goal
{{refactoring_goal}}
---
## Task
Refactor the code to achieve: **{{refactoring_goal}}**
### Provide the following:
1. **Analysis**
- Current issues or code smells
- Performance implications
- Maintainability concerns
- Safety considerations
2. **Before (Current State)**
- Highlight the problematic sections
- Explain what makes them suboptimal
3. **After (Refactored Code)**
- Provide complete, working refactored code
- Use idiomatic Rust patterns
- Apply these principles:
- DRY (Don't Repeat Yourself)
- SRP (Single Responsibility Principle)
- Prefer composition over inheritance
- Use type system for safety
- Minimize unwrap() and panic!()
- Leverage iterators and functional patterns
4. **Rationale**
- Explain each significant change
- Justify design decisions
- Note any trade-offs made
5. **Testing Recommendations**
- What tests should be added/updated?
- Edge cases to consider
6. **Migration Path**
- Is this a breaking change?
- How to safely deploy the refactoring?
### OpenAI GPT-4o-mini Strengths:
- Fast iteration on code changes
- Strong code comprehension
- Cost-effective for refactoring tasks
- Good at following specific coding patterns
Generate the refactoring plan and implementation now.

View File

@ -0,0 +1,167 @@
#!/usr/bin/env bash
# Multi-Provider TypeDialog Agent Demo
#
# This script demonstrates all four LLM providers:
# - Claude (Anthropic)
# - OpenAI (GPT)
# - Gemini (Google)
# - Ollama (Local Models)
#
# Requirements:
# - ANTHROPIC_API_KEY for Claude
# - OPENAI_API_KEY for OpenAI
# - GEMINI_API_KEY or GOOGLE_API_KEY for Gemini
# - Ollama server running (http://localhost:11434) for local models
set -e
# Colors
BOLD="\033[1m"
DIM="\033[2m"
RESET="\033[0m"
PURPLE="\033[35m"
GREEN="\033[32m"
BLUE="\033[34m"
RED="\033[31m"
YELLOW="\033[33m"
# Check if we're in the right directory (should be project root)
if [[ ! -f "Cargo.toml" ]] || [[ ! -d "crates/typedialog-agent" ]]; then
echo -e "${RED}Error: Please run this script from the typedialog project root${RESET}"
echo -e "${RED}Usage: ./demos/agent/run_demo.sh${RESET}"
exit 1
fi
# Function to check API key
check_api_key() {
local key_name=$1
local provider=$2
local color=$3
if [[ -z "${!key_name}" ]]; then
echo -e "${color}⚠️ ${provider}: ${YELLOW}${key_name} not set - skipping${RESET}"
return 1
else
echo -e "${color}${provider}: ${key_name} configured${RESET}"
return 0
fi
}
# Banner
echo -e "${BOLD}════════════════════════════════════════════════════════════════════${RESET}"
echo -e "${BOLD}🤖 TypeAgent Multi-Provider Demo${RESET}"
echo -e "${BOLD}════════════════════════════════════════════════════════════════════${RESET}\n"
# Check API keys and Ollama server
echo -e "${BOLD}Checking Providers:${RESET}"
CLAUDE_OK=false
OPENAI_OK=false
GEMINI_OK=false
OLLAMA_OK=false
if check_api_key "ANTHROPIC_API_KEY" "Claude " "$PURPLE"; then
CLAUDE_OK=true
fi
if check_api_key "OPENAI_API_KEY" "OpenAI " "$GREEN"; then
OPENAI_OK=true
fi
if check_api_key "GEMINI_API_KEY" "Gemini " "$BLUE" || check_api_key "GOOGLE_API_KEY" "Gemini " "$BLUE"; then
GEMINI_OK=true
fi
# Check Ollama server
OLLAMA_URL="${OLLAMA_BASE_URL:-http://localhost:11434}"
if curl -s -f "$OLLAMA_URL/api/tags" > /dev/null 2>&1; then
echo -e "${YELLOW}✓ Ollama : Server running at $OLLAMA_URL${RESET}"
OLLAMA_OK=true
else
echo -e "${YELLOW}⚠️ Ollama : ${YELLOW}Server not running at $OLLAMA_URL - skipping${RESET}"
fi
echo ""
# Check if at least one provider is configured
if [[ "$CLAUDE_OK" == "false" ]] && [[ "$OPENAI_OK" == "false" ]] && [[ "$GEMINI_OK" == "false" ]] && [[ "$OLLAMA_OK" == "false" ]]; then
echo -e "${RED}Error: No providers available. Please configure at least one:${RESET}"
echo -e " ${PURPLE}export ANTHROPIC_API_KEY=sk-ant-...${RESET}"
echo -e " ${GREEN}export OPENAI_API_KEY=sk-...${RESET}"
echo -e " ${BLUE}export GEMINI_API_KEY=...${RESET}"
echo -e " ${YELLOW}ollama serve # Start Ollama server${RESET}"
exit 1
fi
# Build the CLI if needed
if [[ ! -f "target/release/typedialog-ag" ]]; then
echo -e "${BOLD}Building TypeDialog Agent CLI...${RESET}"
cargo build --release --package typedialog-ag
echo ""
fi
# Get theme from user
echo -e "${BOLD}Enter a theme for the haiku (default: 'autumn sunset'):${RESET}"
read -r THEME
THEME=${THEME:-"autumn sunset"}
echo ""
# Run demo for each provider
echo -e "${BOLD}════════════════════════════════════════════════════════════════════${RESET}"
echo -e "${BOLD}Running demos with theme: \"${THEME}\"${RESET}"
echo -e "${BOLD}════════════════════════════════════════════════════════════════════${RESET}\n"
# Claude demo
if [[ "$CLAUDE_OK" == "true" ]]; then
echo -e "${PURPLE}${BOLD}🟣 CLAUDE (claude-3-5-haiku-20241022)${RESET}"
echo -e "${DIM}────────────────────────────────────────────────────────────────────${RESET}"
echo "theme (String): $THEME" | cargo run --release --package typedialog-ag -- demos/agent/demo-claude.agent.mdx --yes 2>/dev/null || echo -e "${RED}Error running Claude demo${RESET}"
echo ""
fi
# OpenAI demo
if [[ "$OPENAI_OK" == "true" ]]; then
echo -e "${GREEN}${BOLD}🟢 OPENAI (gpt-4o-mini)${RESET}"
echo -e "${DIM}────────────────────────────────────────────────────────────────────${RESET}"
echo "theme (String): $THEME" | cargo run --release --package typedialog-ag -- demos/agent/demo-openai.agent.mdx --yes 2>/dev/null || echo -e "${RED}Error running OpenAI demo${RESET}"
echo ""
fi
# Gemini demo
if [[ "$GEMINI_OK" == "true" ]]; then
echo -e "${BLUE}${BOLD}🔵 GEMINI (gemini-2.0-flash-exp)${RESET}"
echo -e "${DIM}────────────────────────────────────────────────────────────────────${RESET}"
echo "theme (String): $THEME" | cargo run --release --package typedialog-ag -- demos/agent/demo-gemini.agent.mdx --yes 2>/dev/null || echo -e "${RED}Error running Gemini demo${RESET}"
echo ""
fi
# Ollama demo
if [[ "$OLLAMA_OK" == "true" ]]; then
echo -e "${YELLOW}${BOLD}🟡 OLLAMA (llama2)${RESET}"
echo -e "${DIM}────────────────────────────────────────────────────────────────────${RESET}"
echo "theme (String): $THEME" | cargo run --release --package typedialog-ag -- demos/agent/demo-ollama.agent.mdx --yes 2>/dev/null || echo -e "${RED}Error running Ollama demo${RESET}"
echo ""
fi
# Summary
echo -e "${BOLD}════════════════════════════════════════════════════════════════════${RESET}"
echo -e "${BOLD}📊 DEMO COMPLETE${RESET}"
echo -e "${BOLD}════════════════════════════════════════════════════════════════════${RESET}\n"
echo -e "${BOLD}All providers demonstrated:${RESET}"
[[ "$CLAUDE_OK" == "true" ]] && echo -e " ${PURPLE}✓ Claude (Anthropic) - SSE streaming with usage tracking${RESET}"
[[ "$OPENAI_OK" == "true" ]] && echo -e " ${GREEN}✓ OpenAI (GPT) - SSE streaming (no usage in stream)${RESET}"
[[ "$GEMINI_OK" == "true" ]] && echo -e " ${BLUE}✓ Gemini (Google) - JSON streaming with usage tracking${RESET}"
[[ "$OLLAMA_OK" == "true" ]] && echo -e " ${YELLOW}✓ Ollama (Local) - JSON streaming with usage tracking${RESET}"
echo -e "\n${BOLD}To run individual providers:${RESET}"
echo -e " ${DIM}cargo run --package typedialog-ag -- demos/agent/demo-claude.agent.mdx${RESET}"
echo -e " ${DIM}cargo run --package typedialog-ag -- demos/agent/demo-openai.agent.mdx${RESET}"
echo -e " ${DIM}cargo run --package typedialog-ag -- demos/agent/demo-gemini.agent.mdx${RESET}"
echo -e " ${DIM}cargo run --package typedialog-ag -- demos/agent/demo-ollama.agent.mdx${RESET}"
echo -e "\n${BOLD}To run the Rust comparison example:${RESET}"
echo -e " ${DIM}cargo run --example provider_comparison${RESET}"
echo -e " ${DIM}cargo run --example provider_comparison claude${RESET}"
echo -e " ${DIM}cargo run --example provider_comparison openai${RESET}"
echo -e " ${DIM}cargo run --example provider_comparison gemini${RESET}"
echo -e " ${DIM}cargo run --example provider_comparison ollama${RESET}"

View File

@ -0,0 +1,87 @@
---
@agent {
role: project manager and task planner,
llm: claude-3-5-sonnet-20241022,
temperature: 0.5
}
@input project_goal: String
@input deadline?: String
@input team_size?: String
@shell "ls -la" as current_directory
@shell "git status --short" as git_status
@validate output {
must_contain: ["## Tasks", "## Dependencies", "## Risks"],
format: markdown,
min_length: 800
}
---
# Project Planning Assistant
## Goal
{{project_goal}}
{{#if deadline}}
## Deadline
{{deadline}}
{{/if}}
{{#if team_size}}
## Team Size
{{team_size}} developers
{{/if}}
## Current Project State
**Directory Structure:**
{{current_directory}}
**Git Status:**
{{git_status}}
---
## Instructions
Create a detailed project plan that includes:
### 1. Task Breakdown
Break down the goal into specific, actionable tasks:
- [ ] Task 1 (Complexity: simple/moderate/complex)
- [ ] Task 2
- etc.
### 2. Dependencies
Identify task dependencies:
- Task A must complete before Task B can start
- Tasks C and D can run in parallel
### 3. Complexity Assessment
For each task, estimate complexity (NOT time):
- **trivial** - Straightforward, well-understood
- **simple** - Clear but requires some thought
- **moderate** - Multiple components, some uncertainty
- **complex** - Significant design decisions needed
- **very_complex** - High uncertainty, research required
### 4. Success Criteria
Define clear acceptance criteria for each major task.
### 5. Risks & Mitigation
Identify potential blockers and mitigation strategies:
- What could go wrong?
- How to prevent it?
- Fallback plans
### 6. Implementation Order
Recommend the sequence to tackle tasks based on:
- Dependencies
- Risk reduction (do risky things first)
- Value delivery (what provides immediate benefit)
**IMPORTANT:** Do NOT include time estimates. Focus on what needs to be done and why.
Generate the plan now.


@ -1,6 +1,6 @@
<div align="center">
-<img src="../imgs/typedialog_logo_h_s.svg" alt="TypeDialog Logo" width="600" />
+<img src="../assets/typedialog_logo_h_s.svg" alt="TypeDialog Logo" width="600" />
</div>
# TypeDialog Examples
@ -111,7 +111,34 @@ Nickel type-safe schemas with TypeDialog form integration.
- [`arrays-form.toml`](07-nickel-generation/arrays-form.toml) - Form with RepeatingGroup arrays
- Fragments for: `api-endpoint-item.toml`, `user-item.toml`
-### 9. **Real-World Templates** → [`09-templates/`](09-templates/)
### 8. **Encryption & Sensitive Data** → [`08-encryption/`](08-encryption/)
Handling sensitive fields with encryption.
- Field encryption
- Sensitive data redaction
- Encrypted storage
- **Best for:** Security-sensitive forms, PII handling
### 9. **AI Backend** → [`10-ai-backend/`](10-ai-backend/)
RAG system, knowledge graphs, and embeddings.
- RAG (Retrieval-Augmented Generation) system
- Batch document operations
- Semantic + keyword search
- Performance optimization
- **Best for:** ML-powered search, knowledge management, semantic retrieval
**Running AI examples:**
```bash
# Build AI backend
just build::ai
# Run the AI backend example
cargo run --example main --features ai_backend
# Or test AI module
just test::ai
```
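For orientation, the Rust API these examples exercise looks roughly like the sketch below. The type and method names (`RagSystem`, `RagConfig`, `retrieve`) follow the RAG API referenced elsewhere in this commit, but treat the exact signatures and result fields as assumptions rather than the crate's final API.

```rust
// Hedged sketch of the RAG example flow; exact signatures are assumptions.
use typedialog_core::ai::rag::{RagConfig, RagSystem};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut rag = RagSystem::new(RagConfig::default())?;
    // Assumes documents were indexed beforehand (e.g. via the batch
    // operations these examples demonstrate).
    for doc in rag.retrieve("semantic search setup")? {
        println!("{}: {}", doc.doc_id, doc.content);
    }
    Ok(())
}
```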
### 10. **Real-World Templates** → [`09-templates/`](09-templates/)
Production-ready examples for common use cases.
| Template | Path | Use Case |
@ -120,6 +147,48 @@ Production-ready examples for common use cases.
| **User Registration** | [`09-templates/user_registration/`](09-templates/user_registration/) | SaaS, apps |
| **Library Catalog** | [`09-templates/library_catalog/`](09-templates/library_catalog/) | Management systems |
### 11. **Provisioning Generation** → [`11-prov-gen/`](11-prov-gen/)
Infrastructure as Code and configuration generation.
- Infrastructure provisioning
- Configuration management
- Resource templates
- **Best for:** DevOps automation, cloud deployment
### 12. **Agent Execution** → [`12-agent-execution/`](12-agent-execution/)
AI agent execution with multi-provider LLM support.
| Example | LLM Provider | Use Case |
|---------|--------------|----------|
| **Basic Greeting** | Claude Haiku | Simple agent introduction |
| **Code Review** | Claude Sonnet | Security & performance analysis |
| **Documentation** | GPT-4o | Generate docs from code |
| **Task Planner** | Claude Sonnet | Project planning & breakdown |
| **Local Privacy** | Ollama (local) | Privacy-first data analysis |
| **Creative Writer** | Gemini 2.0 | Creative content generation |
| **Refactoring** | GPT-4o-mini | Code refactoring assistant |
| **Architect** | Claude Opus | High-level design & ADRs |
**Features:**
- Template variables & conditionals
- File imports with glob patterns
- Shell command integration
- Output validation
- 4 LLM providers (Claude, OpenAI, Gemini, Ollama)
**Running agent examples:**
```bash
# Run an agent
cargo run --package typedialog-ag -- examples/12-agent-execution/basic-greeting.agent.mdx
# See all examples
cat examples/12-agent-execution/README.md
```
**Related:**
- [Demos](../demos/agent/) - Multi-provider comparison demos
- [Tests](../tests/agent/) - Agent validation tests
- [Core Examples](../crates/typedialog-agent/typedialog-ag-core/examples/) - Rust API usage
## Learning Path
```
@ -147,6 +216,13 @@ START HERE
├→ arrays-schema.ncl ← Array(Record) types
└→ arrays-form.toml ← RepeatingGroup fields
08-encryption/ ← Secure sensitive fields
10-ai-backend/ ← Add AI/ML features
├→ RAG system for semantic search
├→ Batch operations for efficiency
└→ Knowledge graph integration
09-templates/ ← Deploy to production
```

scripts/check_deps.sh Executable file

@ -0,0 +1,101 @@
#!/bin/bash
# TypeDialog - Dependency Checker
# Verifies all critical dependencies for building TypeDialog
set -e
echo "=== Checking Critical Dependencies ==="
echo ""
ERRORS=0
# Check just
if command -v just &> /dev/null; then
echo "✓ just $(just --version)"
else
echo "✗ just - NOT FOUND"
echo " Install: cargo install just"
echo " Or macOS: brew install just"
ERRORS=$((ERRORS + 1))
fi
# Check nickel
if command -v nickel &> /dev/null; then
echo "✓ nickel $(nickel --version)"
else
echo "✗ nickel - NOT FOUND"
echo " Install: cargo install nickel-lang-cli"
echo " Or macOS: brew install nickel"
ERRORS=$((ERRORS + 1))
fi
# Check cargo (should always be present but verify)
if command -v cargo &> /dev/null; then
echo "✓ cargo $(cargo --version)"
else
echo "✗ cargo - NOT FOUND"
echo " Install Rust: https://rustup.rs/"
ERRORS=$((ERRORS + 1))
fi
# Check shellcheck
if command -v shellcheck &> /dev/null; then
echo "✓ shellcheck $(shellcheck --version | head -1)"
else
echo "✗ shellcheck - NOT FOUND"
echo " Install: brew install shellcheck (macOS) or apt install shellcheck (Linux)"
ERRORS=$((ERRORS + 1))
fi
# Check nushell
if command -v nu &> /dev/null; then
echo "✓ nushell $(nu --version)"
else
echo "✗ nushell - NOT FOUND"
echo " Install: brew install nushell (macOS) or cargo install nu"
ERRORS=$((ERRORS + 1))
fi
# Check markdownlint-cli2
if command -v markdownlint-cli2 &> /dev/null; then
echo "✓ markdownlint-cli2 $(markdownlint-cli2 --version)"
else
echo "✗ markdownlint-cli2 - NOT FOUND"
echo " Install: npm install -g markdownlint-cli2"
ERRORS=$((ERRORS + 1))
fi
echo ""
echo "=== Checking Internal Dependencies ==="
# Check prov-ecosystem (path dependency)
if cargo metadata --format-version=1 2>/dev/null | grep -q prov-ecosystem; then
echo "✓ prov-ecosystem - FOUND"
else
echo "✗ prov-ecosystem - NOT FOUND"
echo " Clone to: ../prov-ecosystem"
echo " Or run: git clone https://github.com/your-org/prov-ecosystem.git ../prov-ecosystem"
ERRORS=$((ERRORS + 1))
fi
# Check secretumvault (path dependency)
if cargo metadata --format-version=1 2>/dev/null | grep -q secretumvault; then
echo "✓ secretumvault - FOUND"
else
echo "✗ secretumvault - NOT FOUND"
echo " Clone to: ../secretumvault"
echo " Or run: git clone https://github.com/your-org/secretumvault.git ../secretumvault"
ERRORS=$((ERRORS + 1))
fi
# Summary
echo ""
echo "=== Summary ==="
if [ "$ERRORS" -eq 0 ]; then
echo "✓ All dependencies found!"
echo " You can now build TypeDialog with: just build::all"
exit 0
else
echo "$ERRORS dependencies missing"
echo " Fix the issues above before building"
exit 1
fi


@ -67,6 +67,19 @@ copy_binaries() {
if [[ -f "${BUILD_DIR}/typedialog-web" ]]; then
cp "${BUILD_DIR}/typedialog-web" "${distro_path}/bin/"
fi
if [[ -f "${BUILD_DIR}/typedialog-ai" ]]; then
cp "${BUILD_DIR}/typedialog-ai" "${distro_path}/bin/"
fi
if [[ -f "${BUILD_DIR}/typedialog-ag" ]]; then
cp "${BUILD_DIR}/typedialog-ag" "${distro_path}/bin/"
fi
if [[ -f "${BUILD_DIR}/typedialog-prov-gen" ]]; then
cp "${BUILD_DIR}/typedialog-prov-gen" "${distro_path}/bin/"
fi
}
# Function: Copy configurations
@ -119,7 +132,10 @@ create_manifest() {
"binaries": [
"bin/typedialog",
"bin/typedialog-tui",
"bin/typedialog-web"
"bin/typedialog-web",
"bin/typedialog-ai",
"bin/typedialog-ag",
"bin/typedialog-prov-gen"
],
"configs": {
"cli": [
@ -136,6 +152,11 @@ create_manifest() {
"config/web/default.toml",
"config/web/dev.toml",
"config/web/production.toml"
],
"ai": [
"config/ai/default.toml",
"config/ai/dev.toml",
"config/ai/production.toml"
]
},
"installers": {


@ -114,6 +114,9 @@ The package includes:
- typedialog - CLI backend
- typedialog-tui - TUI backend
- typedialog-web - Web backend
- typedialog-ai - AI backend (FormBackend + microservice)
- typedialog-ag - Agent executor
- typedialog-prov-gen - Provisioning generator
- **config/** - Configuration files for each backend and environment
- cli/{default,dev,production}.toml

scripts/setup-pre-commit.sh Executable file

@ -0,0 +1,63 @@
#!/usr/bin/env bash
# Setup pre-commit hooks for TypeDialog project
set -e
echo "=== Pre-commit Hook Setup ==="
echo ""
# Check if pre-commit is installed
if ! command -v pre-commit &> /dev/null; then
echo "❌ pre-commit is not installed"
echo ""
echo "Install with:"
echo " pip install pre-commit"
echo " # OR"
echo " pipx install pre-commit"
echo " # OR (macOS)"
echo " brew install pre-commit"
echo ""
exit 1
fi
echo "✓ pre-commit found: $(pre-commit --version)"
echo ""
# Install hooks
echo "Installing pre-commit hooks..."
pre-commit install
echo ""
echo "✓ Pre-commit hooks installed successfully"
echo ""
# Run hooks on all files as a test
echo "Testing hooks on all files (this may take a while)..."
echo ""
if pre-commit run --all-files; then
echo ""
echo "✅ All hooks passed!"
else
echo ""
echo "⚠️ Some hooks failed. Please fix the issues above."
echo ""
echo "Common fixes:"
echo " - Rust: cargo fmt --all"
echo " - Markdown: just dev::lint-markdown-fix"
echo " - Shell: shellcheck <file> and fix issues"
echo ""
exit 1
fi
echo ""
echo "=== Setup Complete ==="
echo ""
echo "Pre-commit hooks will now run automatically before every commit."
echo ""
echo "To run manually:"
echo " pre-commit run --all-files"
echo ""
echo "To skip hooks (emergency only):"
echo " git commit --no-verify"
echo ""
echo "See docs/PRE-COMMIT-SETUP.md for more information."

scripts/update-docs-refs.sh Executable file

@ -0,0 +1,136 @@
#!/usr/bin/env bash
# Update references to renamed documentation files
# Run AFTER renaming docs to lowercase
set -e
PROJECT_ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
cd "$PROJECT_ROOT"
echo "=== Updating documentation references ==="
echo ""
# Backup function
backup_file() {
local file=$1
if [ -f "$file" ]; then
cp "$file" "${file}.bak"
echo "✓ Backed up: $file"
fi
}
# Update function with display
update_refs() {
local file=$1
local search=$2
local replace=$3
if [ -f "$file" ]; then
if grep -q "$search" "$file" 2>/dev/null; then
backup_file "$file"
# GNU sed and BSD/macOS sed disagree on -i syntax; handle both
if sed --version >/dev/null 2>&1; then
sed -i "s|$search|$replace|g" "$file"
else
sed -i '' "s|$search|$replace|g" "$file"
fi
echo "✓ Updated: $file"
echo " ├─ $search$replace"
fi
fi
}
echo "Step 1: Update root-level doc references"
echo "-----------------------------------------"
# Find all markdown files and update references
for file in $(find . -name "*.md" -type f -not -path "./target/*" -not -path "./.git/*" -not -path "./node_modules/*" -not -name "*.bak"); do
# Root level docs
update_refs "$file" 'docs/BUILD\.md' 'docs/build.md'
update_refs "$file" 'BUILD\.md' 'build.md'
update_refs "$file" 'docs/CONFIGURATION\.md' 'docs/configuration.md'
update_refs "$file" 'CONFIGURATION\.md' 'configuration.md'
update_refs "$file" 'docs/DEVELOPMENT\.md' 'docs/development.md'
update_refs "$file" 'DEVELOPMENT\.md' 'development.md'
update_refs "$file" 'docs/FIELD_TYPES\.md' 'docs/field_types.md'
update_refs "$file" 'FIELD_TYPES\.md' 'field_types.md'
update_refs "$file" 'docs/INSTALLATION\.md' 'docs/installation.md'
update_refs "$file" 'INSTALLATION\.md' 'installation.md'
update_refs "$file" 'docs/NICKEL\.md' 'docs/nickel.md'
update_refs "$file" 'NICKEL\.md' 'nickel.md'
update_refs "$file" 'docs/PRE-COMMIT-SETUP\.md' 'docs/pre-commit-setup.md'
update_refs "$file" 'PRE-COMMIT-SETUP\.md' 'pre-commit-setup.md'
update_refs "$file" 'docs/RELEASE\.md' 'docs/release.md'
update_refs "$file" 'RELEASE\.md' 'release.md'
done
echo ""
echo "Step 2: Update agent/ doc references"
echo "-------------------------------------"
for file in $(find . -name "*.md" -type f -not -path "./target/*" -not -path "./.git/*" -not -path "./node_modules/*" -not -name "*.bak"); do
update_refs "$file" 'docs/agent/GETTING_STARTED\.md' 'docs/agent/getting_started.md'
update_refs "$file" 'agent/GETTING_STARTED\.md' 'agent/getting_started.md'
update_refs "$file" 'GETTING_STARTED\.md' 'getting_started.md'
update_refs "$file" 'docs/agent/LLM_PROVIDERS\.md' 'docs/agent/llm_providers.md'
update_refs "$file" 'agent/LLM_PROVIDERS\.md' 'agent/llm_providers.md'
update_refs "$file" 'LLM_PROVIDERS\.md' 'llm_providers.md'
done
echo ""
echo "Step 3: Update encryption/ doc references"
echo "------------------------------------------"
for file in $(find . -name "*.md" -type f -not -path "./target/*" -not -path "./.git/*" -not -path "./node_modules/*" -not -name "*.bak"); do
update_refs "$file" 'docs/encryption/ENCRYPTION-QUICK-START\.md' 'docs/encryption/encryption-quick-start.md'
update_refs "$file" 'encryption/ENCRYPTION-QUICK-START\.md' 'encryption/encryption-quick-start.md'
update_refs "$file" 'ENCRYPTION-QUICK-START\.md' 'encryption-quick-start.md'
update_refs "$file" 'docs/encryption/ENCRYPTION-SERVICES-SETUP\.md' 'docs/encryption/encryption-services-setup.md'
update_refs "$file" 'encryption/ENCRYPTION-SERVICES-SETUP\.md' 'encryption/encryption-services-setup.md'
update_refs "$file" 'ENCRYPTION-SERVICES-SETUP\.md' 'encryption-services-setup.md'
update_refs "$file" 'docs/encryption/ENCRYPTION-UNIFIED-ARCHITECTURE\.md' 'docs/encryption/encryption-unified-architecture.md'
update_refs "$file" 'encryption/ENCRYPTION-UNIFIED-ARCHITECTURE\.md' 'encryption/encryption-unified-architecture.md'
update_refs "$file" 'ENCRYPTION-UNIFIED-ARCHITECTURE\.md' 'encryption-unified-architecture.md'
done
echo ""
echo "Step 4: Check for remaining uppercase doc references"
echo "-----------------------------------------------------"
remaining=$(grep -rE "BUILD\.md|CONFIGURATION\.md|DEVELOPMENT\.md|FIELD_TYPES\.md|INSTALLATION\.md|NICKEL\.md|PRE-COMMIT-SETUP\.md|RELEASE\.md|GETTING_STARTED\.md|LLM_PROVIDERS\.md|ENCRYPTION-QUICK-START\.md|ENCRYPTION-SERVICES-SETUP\.md|ENCRYPTION-UNIFIED-ARCHITECTURE\.md" \
--include="*.md" --include="*.toml" --include="*.rs" \
--exclude-dir=target --exclude-dir=.git --exclude-dir=node_modules \
--exclude="*.bak" . 2>/dev/null | grep -v "update-docs-refs.sh" | wc -l || echo "0")
if [ "$remaining" -gt 0 ]; then
echo "⚠️ Found $remaining remaining uppercase doc references:"
grep -rE "BUILD\.md|CONFIGURATION\.md|DEVELOPMENT\.md|FIELD_TYPES\.md|INSTALLATION\.md|NICKEL\.md|PRE-COMMIT-SETUP\.md|RELEASE\.md|GETTING_STARTED\.md|LLM_PROVIDERS\.md|ENCRYPTION-QUICK-START\.md|ENCRYPTION-SERVICES-SETUP\.md|ENCRYPTION-UNIFIED-ARCHITECTURE\.md" \
--include="*.md" --include="*.toml" --include="*.rs" \
--exclude-dir=target --exclude-dir=.git --exclude-dir=node_modules \
--exclude="*.bak" . 2>/dev/null | grep -v "update-docs-refs.sh" | head -10
else
echo "✓ No remaining uppercase doc references found"
fi
echo ""
echo "Step 5: Cleanup backups (optional)"
echo "-----------------------------------"
backup_count=$(find . -name "*.bak" -type f | wc -l)
echo "Found $backup_count backup files"
echo "To remove backups: find . -name '*.bak' -type f -delete"
echo ""
echo "=== Summary ===="
echo "✓ Updated references to lowercase doc filenames"
echo "✓ Checked for remaining uppercase references"
echo ""
echo "Next steps:"
echo "1. Review changes: git diff"
echo "2. Test that links work"
echo "3. Remove backups: find . -name '*.bak' -delete"
echo "4. Commit: git add -A && git commit -m 'docs: rename files to lowercase'"

scripts/update-references.sh Executable file

@ -0,0 +1,146 @@
#!/usr/bin/env bash
# Update file references after reorganization
# Run AFTER git mv commands
set -e
PROJECT_ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
cd "$PROJECT_ROOT"
echo "=== Updating references after reorganization ==="
echo ""
# Backup function
backup_file() {
local file=$1
if [ -f "$file" ]; then
cp "$file" "${file}.bak"
echo "✓ Backed up: $file"
fi
}
# Update function with display
update_refs() {
local file=$1
local search=$2
local replace=$3
if [ -f "$file" ]; then
if grep -q "$search" "$file" 2>/dev/null; then
backup_file "$file"
# GNU sed and BSD/macOS sed disagree on -i syntax; handle both
if sed --version >/dev/null 2>&1; then
sed -i "s|$search|$replace|g" "$file"
else
sed -i '' "s|$search|$replace|g" "$file"
fi
echo "✓ Updated: $file"
echo " ├─ $search$replace"
fi
fi
}
echo "Step 1: Update README.md"
echo "------------------------"
# Update imgs/ → assets/ in README.md
update_refs "README.md" 'src="imgs/' 'src="assets/'
update_refs "README.md" 'href="imgs/' 'href="assets/'
# Update uppercase doc references to lowercase (INSTALLATION, DEPLOYMENT, QUICKSTART)
update_refs "README.md" 'docs/INSTALLATION\.md' 'docs/installation.md'
update_refs "README.md" 'docs/DEPLOYMENT\.md' 'docs/deployment.md'
update_refs "README.md" 'docs/QUICKSTART\.md' 'docs/quickstart.md'
echo ""
echo "Step 2: Update assets/ files (formerly imgs/)"
echo "----------------------------------------------"
# Update all files in assets/ directory
if [ -d "assets" ]; then
for file in assets/*.md assets/*.html; do
[ -f "$file" ] || continue
# Update /imgs/ → /assets/
update_refs "$file" '/imgs/' '/assets/'
update_refs "$file" 'href="imgs/' 'href="assets/'
update_refs "$file" 'src="imgs/' 'src="assets/'
done
fi
echo ""
echo "Step 3: Update docs/ cross-references"
echo "--------------------------------------"
# Update any cross-references in docs/
if [ -d "docs" ]; then
for file in docs/*.md; do
[ -f "$file" ] || continue
# Update uppercase references to lowercase
update_refs "$file" '\.\./INSTALLATION\.md' '../installation.md'
update_refs "$file" '\.\./DEPLOYMENT\.md' '../deployment.md'
update_refs "$file" '\.\./QUICKSTART\.md' '../quickstart.md'
update_refs "$file" 'INSTALLATION\.md' 'installation.md'
update_refs "$file" 'DEPLOYMENT\.md' 'deployment.md'
update_refs "$file" 'QUICKSTART\.md' 'quickstart.md'
# Update imgs/ → assets/
update_refs "$file" '\.\./imgs/' '../assets/'
update_refs "$file" '/imgs/' '/assets/'
done
fi
echo ""
echo "Step 4: Check for remaining old references"
echo "-------------------------------------------"
# Search for any remaining references (excluding backups, target, .git)
echo "Searching for remaining 'imgs/' references..."
remaining_imgs=$(grep -r "imgs/" --include="*.md" --include="*.html" --include="*.toml" \
--exclude-dir=target --exclude-dir=.git --exclude-dir=node_modules \
--exclude="*.bak" . 2>/dev/null | grep -v ".coder" | wc -l || echo "0")
if [ "$remaining_imgs" -gt 0 ]; then
echo "⚠️ Found $remaining_imgs remaining 'imgs/' references:"
grep -r "imgs/" --include="*.md" --include="*.html" --include="*.toml" \
--exclude-dir=target --exclude-dir=.git --exclude-dir=node_modules \
--exclude="*.bak" . 2>/dev/null | grep -v ".coder" | head -10
else
echo "✓ No remaining 'imgs/' references found"
fi
echo ""
echo "Searching for remaining uppercase doc references..."
remaining_docs=$(grep -rE "INSTALLATION\.md|DEPLOYMENT\.md|QUICKSTART\.md" \
--include="*.md" --include="*.toml" --include="*.rs" \
--exclude-dir=target --exclude-dir=.git --exclude-dir=node_modules \
--exclude="*.bak" . 2>/dev/null | grep -v ".coder" | wc -l || echo "0")
if [ "$remaining_docs" -gt 0 ]; then
echo "⚠️ Found $remaining_docs remaining uppercase doc references:"
grep -rE "INSTALLATION\.md|DEPLOYMENT\.md|QUICKSTART\.md" \
--include="*.md" --include="*.toml" --include="*.rs" \
--exclude-dir=target --exclude-dir=.git --exclude-dir=node_modules \
--exclude="*.bak" . 2>/dev/null | grep -v ".coder" | head -10
else
echo "✓ No remaining uppercase doc references found"
fi
echo ""
echo "Step 5: Cleanup backups (optional)"
echo "-----------------------------------"
echo "Backup files created with .bak extension"
echo "To remove backups: find . -name '*.bak' -type f -delete"
echo ""
backup_count=$(find . -name "*.bak" -type f | wc -l)
echo "Found $backup_count backup files"
echo ""
echo "=== Summary ==="
echo "✓ Updated references in README.md"
echo "✓ Updated references in assets/"
echo "✓ Updated references in docs/"
echo "✓ Checked for remaining old references"
echo ""
echo "Next steps:"
echo "1. Review changes: git diff"
echo "2. Test that links work"
echo "3. Remove backups: find . -name '*.bak' -delete"
echo "4. Commit: git add -A && git commit -m 'docs: reorganize structure and update references'"

tests/agent/README.md Normal file

@ -0,0 +1,150 @@
# TypeDialog Agent Tests
This directory contains test agent files for validating LLM provider integrations and streaming functionality.
## Test Files
- **`test-streaming.agent.mdx`** - Tests streaming LLM responses with Claude
- **`test-openai.agent.mdx`** - Tests OpenAI provider integration
- **`test-gemini.agent.mdx`** - Tests Gemini provider integration
## Running Tests
### Individual Tests
From the project root directory:
```bash
# Test streaming with Claude
cargo run --package typedialog-ag -- tests/agent/test-streaming.agent.mdx
# Test OpenAI provider
cargo run --package typedialog-ag -- tests/agent/test-openai.agent.mdx
# Test Gemini provider
cargo run --package typedialog-ag -- tests/agent/test-gemini.agent.mdx
```
### Requirements
Each test requires the appropriate API key:
```bash
# For Claude/streaming test
export ANTHROPIC_API_KEY=sk-ant-...
# For OpenAI test
export OPENAI_API_KEY=sk-...
# For Gemini test
export GEMINI_API_KEY=...
```
## Unit Tests
For automated testing without API calls, use the mock tests in the core library:
```bash
# Run all core library tests (includes mock streaming tests)
cargo test --package typedialog-ag-core
# Run only LLM provider tests
cargo test --package typedialog-ag-core llm::
# Run with ignored tests (requires API keys)
cargo test --package typedialog-ag-core -- --ignored
```
## Integration Tests
Full integration tests that exercise the complete pipeline:
```bash
# Run integration tests
cargo test --package typedialog-ag-core --test integration_test
# Run simple integration test
cargo test --package typedialog-ag-core --test simple_integration_test
```
## Test Coverage
The test suite covers:
### Mock Tests (No API Keys Required)
- SSE parsing for Claude and OpenAI
- JSON parsing for Gemini and Ollama
- Streaming chunk handling
- Token usage extraction
- Error handling
- Empty input edge cases
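To make the mock-test style concrete, here is a minimal, self-contained sketch of an SSE-parsing test. The `parse_sse_line` helper and its `StreamEvent` type are hypothetical stand-ins; the real parsers live in `typedialog-ag-core`'s LLM module and may differ in shape.

```rust
/// Hypothetical stand-in for the crate's SSE chunk parser.
#[derive(Debug, PartialEq)]
enum StreamEvent {
    Text(String),
    Done,
}

fn parse_sse_line(line: &str) -> Option<StreamEvent> {
    let data = line.strip_prefix("data: ")?;
    if data == "[DONE]" {
        return Some(StreamEvent::Done);
    }
    // The real parser would use serde_json; this keeps the sketch dependency-free.
    data.split(r#""text":""#)
        .nth(1)
        .and_then(|rest| rest.split('"').next())
        .map(|text| StreamEvent::Text(text.to_string()))
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn parses_text_chunks_and_done_marker() {
        assert_eq!(
            parse_sse_line(r#"data: {"text":"Hello"}"#),
            Some(StreamEvent::Text("Hello".into()))
        );
        assert_eq!(parse_sse_line("data: [DONE]"), Some(StreamEvent::Done));
        // SSE comments and keep-alives carry no data payload.
        assert_eq!(parse_sse_line(": keep-alive"), None);
    }
}
```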
### Integration Tests (API Keys Required)
- Real LLM provider calls
- Template rendering
- Agent execution pipeline
- Streaming callbacks
- Validation rules
## Provider-Specific Tests
### Claude (test-streaming.agent.mdx)
Tests:
- SSE streaming format
- Token usage in stream
- Real-time output rendering
### OpenAI (test-openai.agent.mdx)
Tests:
- GPT model integration
- SSE streaming (note: no token usage in stream)
- API error handling
### Gemini (test-gemini.agent.mdx)
Tests:
- JSON streaming format
- Role mapping (assistant → model)
- Token usage in stream
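The role mapping is a one-line translation but a common source of request errors; below is a minimal sketch of the idea (not the crate's actual code):

```rust
// Gemini's API expects "model" where other providers use "assistant".
// Sketch only; the real mapping in typedialog-ag-core may differ.
fn to_gemini_role(role: &str) -> &str {
    match role {
        "assistant" => "model",
        other => other, // "user" passes through; system-prompt handling varies by provider
    }
}

fn main() {
    assert_eq!(to_gemini_role("assistant"), "model");
    assert_eq!(to_gemini_role("user"), "user");
}
```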
## Adding New Tests
To add a new test agent:
1. Create a `.agent.mdx` file following the existing format
2. Set required configuration in frontmatter:
```yaml
@agent {
role: test role,
llm: model-name
}
```
3. Add input directives if needed:
```yaml
@input test_input: String
```
4. Document in this README
## Troubleshooting
### API Key Not Set
```
Error: ANTHROPIC_API_KEY environment variable not set
```
**Solution**: Set the appropriate environment variable
### Quota Exceeded
```
Error: 429 Too Many Requests
```
**Solution**: Wait for quota reset or upgrade API plan
### Ollama Not Running
```
Error: Failed to call Ollama API - is Ollama running?
```
**Solution**: Start Ollama server with `ollama serve`
## See Also
- [Demos](../../demos/agent/README.md) - Full-featured demos
- [LLM Integration Docs](../../crates/typedialog-agent/typedialog-ag-core/LLM_INTEGRATION.md) - Complete LLM documentation


@ -0,0 +1,8 @@
---
@agent {
role: creative writer,
llm: gemini-2.0-flash-exp
}
---
Write a haiku about the future of technology. Return only the haiku, nothing else.


@ -0,0 +1,8 @@
---
@agent {
role: creative writer,
llm: gpt-4o-mini
}
---
Write a haiku about artificial intelligence and creativity. Return only the haiku, nothing else.


@ -0,0 +1,8 @@
---
@agent {
role: creative writer,
llm: claude-3-5-haiku-20241022
}
---
Write a haiku about programming in Rust. Return only the haiku, nothing else.


@ -0,0 +1,18 @@
---
@agent {
role: helpful assistant,
llm: claude-3-5-haiku-20241022,
max_tokens: 100,
temperature: 0.7
}
@input name: String
@validate output {
min_length: 10,
max_length: 500,
format: text
}
---
Say hello to {{ name }} in a friendly and warm way! Keep it brief (2-3 sentences).

tests/test-agents.sh Executable file

@ -0,0 +1,127 @@
#!/bin/bash
# Test script for TypeDialog Agents
set -e
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
CYAN='\033[0;36m'
NC='\033[0m' # No Color
echo -e "${CYAN}=== TypeDialog Agent System Tests ===${NC}\n"
# Test 1: Validate all agents
echo -e "${YELLOW}Test 1: Validating all agents...${NC}"
for agent in agents/*.agent.mdx; do
echo -n " Validating $(basename $agent)... "
if cargo run --quiet --package typedialog-ag -- validate "$agent" > /dev/null 2>&1; then
echo -e "${GREEN}${NC}"
else
echo -e "${RED}${NC}"
exit 1
fi
done
echo ""
# Test 2: Transpile agents
echo -e "${YELLOW}Test 2: Transpiling agents...${NC}"
echo -n " Transpiling greeting.agent.mdx... "
OUTPUT=$(cargo run --quiet --package typedialog-ag -- transpile agents/greeting.agent.mdx 2>/dev/null)
if echo "$OUTPUT" | grep -q "config ="; then
echo -e "${GREEN}${NC}"
else
echo -e "${RED}${NC}"
exit 1
fi
echo ""
# Test 3: Cache stats
echo -e "${YELLOW}Test 3: Cache functionality...${NC}"
echo -n " Getting cache stats... "
if cargo run --quiet --package typedialog-ag -- cache stats > /dev/null 2>&1; then
echo -e "${GREEN}${NC}"
else
echo -e "${RED}${NC}"
exit 1
fi
echo ""
# Test 4: HTTP Server
echo -e "${YELLOW}Test 4: HTTP Server endpoints...${NC}"
# Start server in background
cargo run --quiet --release --package typedialog-ag -- serve --port 8771 > /tmp/test-server.log 2>&1 &
SERVER_PID=$!
sleep 3
# Health check
echo -n " GET /health... "
RESPONSE=$(curl -s http://127.0.0.1:8771/health)
if [ "$RESPONSE" = "OK" ]; then
echo -e "${GREEN}${NC}"
else
echo -e "${RED}${NC}"
kill $SERVER_PID 2>/dev/null
exit 1
fi
# Validate endpoint
echo -n " POST /validate... "
RESPONSE=$(curl -s -X POST http://127.0.0.1:8771/validate \
-H "Content-Type: application/json" \
-d '{"agent_file":"agents/greeting.agent.mdx"}')
if echo "$RESPONSE" | jq -e '.valid == true' > /dev/null 2>&1; then
echo -e "${GREEN}${NC}"
else
echo -e "${RED}${NC}"
kill $SERVER_PID 2>/dev/null
exit 1
fi
# Transpile endpoint
echo -n " POST /transpile... "
RESPONSE=$(curl -s -X POST http://127.0.0.1:8771/transpile \
-H "Content-Type: application/json" \
-d '{"content":"---\n@agent { role: test, llm: gpt-4 }\n---\nTest"}')
if echo "$RESPONSE" | jq -e '.nickel_code' > /dev/null 2>&1; then
echo -e "${GREEN}${NC}"
else
echo -e "${RED}${NC}"
kill $SERVER_PID 2>/dev/null
exit 1
fi
# Stop server
kill $SERVER_PID 2>/dev/null
wait $SERVER_PID 2>/dev/null || true
echo ""
echo -e "${GREEN}=== All tests passed! ===${NC}\n"
# Summary
echo -e "${CYAN}Available agents:${NC}"
for agent in agents/*.agent.mdx; do
NAME=$(basename "$agent" .agent.mdx)
echo " - $NAME"
done
echo ""
echo -e "${CYAN}Usage examples:${NC}"
echo " # Validate an agent"
echo " typedialog-ag validate agents/greeting.agent.mdx"
echo ""
echo " # Transpile to Nickel"
echo " typedialog-ag transpile agents/architect.agent.mdx"
echo ""
echo " # Start HTTP server"
echo " typedialog-ag serve --port 8765"
echo ""
echo " # Execute via HTTP"
echo ' curl -X POST http://localhost:8765/agents/greeting/execute \'
echo ' -H "Content-Type: application/json" \'
echo ' -d '"'"'{"name":"World"}'"'"
echo ""
echo -e "${GREEN}Ready to use!${NC}"