# AI Integration - Intelligent Infrastructure Provisioning
The provisioning platform integrates AI capabilities to provide intelligent assistance for infrastructure configuration, deployment, and troubleshooting. This section documents the AI system architecture, features, and usage patterns.
## Overview
The AI integration consists of multiple components working together to provide intelligent infrastructure provisioning:
- typdialog-ai: AI-assisted form filling and configuration
- typdialog-ag: Autonomous AI agents for complex workflows
- typdialog-prov-gen: Natural language to Nickel configuration generation
- ai-service: Core AI service backend with multi-provider support
- mcp-server: Model Context Protocol server for LLM integration
- rag: Retrieval-Augmented Generation for contextual knowledge
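As a rough mental model, these components form a pipeline: the RAG layer retrieves relevant knowledge, which is fed alongside the user's prompt to the LLM backend to produce a Nickel configuration. The sketch below is only illustrative; the function names are hypothetical, and the real components communicate as separate services rather than in-process calls.

```python
# Illustrative pipeline: prompt -> rag retrieval -> generation -> Nickel text.
# retrieve_context and generate_nickel are hypothetical stand-ins for the
# rag and typdialog-prov-gen components.

def retrieve_context(prompt: str, knowledge: dict[str, str]) -> list[str]:
    """rag stand-in: return knowledge snippets whose key appears in the prompt."""
    return [text for key, text in knowledge.items() if key in prompt.lower()]

def generate_nickel(prompt: str, context: list[str]) -> str:
    """Generation stand-in for the LLM call made via ai-service."""
    header = "# generated from: " + prompt
    notes = "".join(f"\n# context: {c}" for c in context)
    return header + notes + '\n{ database = { engine = "postgresql" } }'

knowledge = {"postgresql": "use encrypted volumes for production databases"}
prompt = "Production PostgreSQL with encryption"
print(generate_nickel(prompt, retrieve_context(prompt, knowledge)))
```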
## Key Features

### Natural Language Configuration
Generate infrastructure configurations from plain English descriptions:
```bash
provisioning ai generate "Create a production PostgreSQL cluster with encryption and daily backups"
```
### AI-Assisted Forms
Real-time suggestions and explanations as you fill out configuration forms via typdialog web UI.
### Intelligent Troubleshooting
AI analyzes deployment failures and suggests fixes:
```bash
provisioning ai troubleshoot deployment-12345
```
### Configuration Optimization

AI reviews configurations and suggests performance and security improvements:
```bash
provisioning ai optimize workspaces/prod/config.ncl
```
### Autonomous Agents
AI agents execute multi-step workflows with minimal human intervention:
```bash
provisioning ai agent --goal "Set up complete dev environment for Python app"
```
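The command above can be pictured as a plan-and-execute loop with a human approval gate, matching the "Human Approval" control described under Security and Privacy. The sketch below is illustrative only: `plan()`, the step names, and the `critical` flag are hypothetical stand-ins for whatever typdialog-ag actually does.

```python
# Minimal sketch of an autonomous-agent loop: plan steps toward a goal,
# pause for human approval on critical ones. All names are hypothetical.

def plan(goal: str) -> list[dict]:
    # In the real system this planning step would be delegated to the LLM.
    return [
        {"step": "create workspace", "critical": False},
        {"step": "provision database", "critical": True},
        {"step": "run smoke tests", "critical": False},
    ]

def run_agent(goal: str, approve) -> list[str]:
    done = []
    for task in plan(goal):
        # Critical operations require an explicit human yes/no.
        if task["critical"] and not approve(task["step"]):
            done.append(f"skipped: {task['step']}")
            continue
        done.append(f"done: {task['step']}")
    return done

print(run_agent("dev environment for Python app", approve=lambda step: True))
```

With `approve=lambda step: False` the database step is skipped rather than executed, which is the "minimal human intervention" trade-off: the agent runs unattended except at the gates.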
## Documentation Structure
- Architecture - AI system architecture and components
- Natural Language Config - NL to Nickel generation
- AI-Assisted Forms - typdialog-ai integration
- AI Agents - typdialog-ag autonomous agents
- Config Generation - typdialog-prov-gen details
- RAG System - Retrieval-Augmented Generation
- MCP Integration - Model Context Protocol
- Security Policies - Cedar policies for AI
- Troubleshooting with AI - AI debugging workflows
- API Reference - AI service API documentation
- Configuration - AI system configuration guide
- Cost Management - Managing LLM API costs
## Quick Start

### Enable AI Features
```bash
# Edit provisioning config
vim provisioning/config/ai.toml
```

```toml
# Set provider and enable features
[ai]
enabled = true
provider = "anthropic"  # or "openai" or "local"
model = "claude-sonnet-4"

[ai.features]
form_assistance = true
config_generation = true
troubleshooting = true
```
### Generate Configuration from Natural Language
```bash
# Simple generation
provisioning ai generate "PostgreSQL database with encryption"

# With specific schema
provisioning ai generate \
  --schema database \
  --output workspaces/dev/db.ncl \
  "Production PostgreSQL with 100GB storage and daily backups"
```
### Use AI-Assisted Forms
```bash
# Open typdialog web UI with AI assistance
provisioning workspace init --interactive --ai-assist

# AI provides real-time suggestions as you type
# AI explains validation errors in plain English
# AI fills multiple fields from natural language description
```
### Troubleshoot with AI
```bash
# Analyze failed deployment
provisioning ai troubleshoot deployment-12345

# AI analyzes logs and suggests fixes
# AI generates corrected configuration
# AI explains root cause in plain language
```
## Security and Privacy
The AI system implements strict security controls:
- ✅ Cedar Policies: AI access controlled by Cedar authorization
- ✅ Secret Isolation: AI cannot access secrets directly
- ✅ Human Approval: Critical operations require human approval
- ✅ Audit Trail: All AI operations logged
- ✅ Data Sanitization: Secrets/PII sanitized before sending to LLM
- ✅ Local Models: Support for air-gapped deployments
See Security Policies for complete details.
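The "Data Sanitization" control can be sketched as a redaction pass over outbound prompts. The patterns below are purely illustrative; a real sanitizer would draw on the platform's secret inventory rather than regexes alone.

```python
# Sketch: redact likely secrets and PII from text before it is sent to an
# external LLM. Patterns here are illustrative, not exhaustive.
import re

PATTERNS = [
    # Email addresses (crude PII example)
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),
    # key=value pairs whose key looks secret-bearing
    (re.compile(r"(?i)(password|token|secret)\s*=\s*\S+"), r"\1=<REDACTED>"),
]

def sanitize(text: str) -> str:
    for pattern, replacement in PATTERNS:
        text = pattern.sub(replacement, text)
    return text

print(sanitize("password = hunter2 sent from admin@example.com"))
```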
## Supported LLM Providers
| Provider | Models | Best For |
|---|---|---|
| Anthropic | Claude Sonnet 4, Claude Opus 4 | Complex configs, long context |
| OpenAI | GPT-4 Turbo, GPT-4 | Fast suggestions, tool calling |
| Local | Llama 3, Mistral | Air-gapped, privacy-critical |
## Cost Considerations
AI features incur LLM API costs. The system implements cost controls:
- Caching: Reduces API calls by 50-80%
- Rate Limiting: Prevents runaway costs
- Budget Limits: Daily/monthly cost caps
- Local Models: Zero marginal cost for air-gapped deployments
See Cost Management for optimization strategies.
## Architecture Decision Record

The design decisions behind the AI integration are captured in the project's architecture decision records.
## Next Steps
- Read Architecture to understand AI system design
- Configure AI features in Configuration
- Try Natural Language Config for your first AI-generated config
- Explore AI Agents for automation workflows
- Review Security Policies to understand access controls
Version: 1.0 | Last Updated: 2025-01-08 | Status: Active