# AI Integration - Intelligent Infrastructure Provisioning

The provisioning platform integrates AI capabilities to provide intelligent assistance for infrastructure configuration, deployment, and troubleshooting. This section documents the AI system architecture, features, and usage patterns.

## Overview

The AI integration consists of multiple components working together to provide intelligent infrastructure provisioning:

- **typdialog-ai**: AI-assisted form filling and configuration
- **typdialog-ag**: Autonomous AI agents for complex workflows
- **typdialog-prov-gen**: Natural language to Nickel configuration generation
- **ai-service**: Core AI service backend with multi-provider support
- **mcp-server**: Model Context Protocol server for LLM integration
- **rag**: Retrieval-Augmented Generation for contextual knowledge

## Key Features

### Natural Language Configuration

Generate infrastructure configurations from plain-English descriptions:

```bash
provisioning ai generate "Create a production PostgreSQL cluster with encryption and daily backups"
```

### AI-Assisted Forms

Real-time suggestions and explanations as you fill out configuration forms in the typdialog web UI.
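To make the natural-language path above concrete, here is a minimal, hypothetical sketch of the kind of structured request a command like `provisioning ai generate` could assemble from a plain-English description before handing it to the ai-service backend. All field names (`task`, `prompt`, `schema`) are illustrative assumptions, not the platform's documented API:

```python
import json

def build_generation_request(description, schema=None, model="claude-sonnet-4"):
    """Assemble a JSON request asking the AI service for a Nickel config.

    Hypothetical sketch: the real ai-service request format may differ.
    """
    payload = {
        "task": "config_generation",  # illustrative task identifier
        "model": model,
        "prompt": description,
    }
    if schema is not None:
        # Constrain generation to a known schema (e.g. "database"),
        # mirroring the CLI's --schema flag
        payload["schema"] = schema
    return json.dumps(payload)

request = build_generation_request(
    "Production PostgreSQL with 100GB storage and daily backups",
    schema="database",
)
print(request)
```

The point of the sketch is that the CLI's flags map onto a structured payload: the free-text description becomes the prompt, while `--schema` narrows the output space so the generated Nickel validates against a known contract.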
### Intelligent Troubleshooting

AI analyzes deployment failures and suggests fixes:

```bash
provisioning ai troubleshoot deployment-12345
```

### Configuration Optimization

AI reviews configurations and suggests performance and security improvements:

```bash
provisioning ai optimize workspaces/prod/config.ncl
```

### Autonomous Agents

AI agents execute multi-step workflows with minimal human intervention:

```bash
provisioning ai agent --goal "Set up complete dev environment for Python app"
```

## Documentation Structure

- [Architecture](architecture.md) - AI system architecture and components
- [Natural Language Config](natural-language-config.md) - NL-to-Nickel generation
- [AI-Assisted Forms](ai-assisted-forms.md) - typdialog-ai integration
- [AI Agents](ai-agents.md) - typdialog-ag autonomous agents
- [Config Generation](config-generation.md) - typdialog-prov-gen details
- [RAG System](rag-system.md) - Retrieval-Augmented Generation
- [MCP Integration](mcp-integration.md) - Model Context Protocol
- [Security Policies](security-policies.md) - Cedar policies for AI
- [Troubleshooting with AI](troubleshooting-with-ai.md) - AI debugging workflows
- [API Reference](api-reference.md) - AI service API documentation
- [Configuration](configuration.md) - AI system configuration guide
- [Cost Management](cost-management.md) - Managing LLM API costs

## Quick Start

### Enable AI Features

Edit the provisioning config:

```bash
vim provisioning/config/ai.toml
```

Set the provider and enable features:

```toml
[ai]
enabled = true
provider = "anthropic"  # or "openai" or "local"
model = "claude-sonnet-4"

[ai.features]
form_assistance = true
config_generation = true
troubleshooting = true
```

### Generate Configuration from Natural Language

```bash
# Simple generation
provisioning ai generate "PostgreSQL database with encryption"

# With a specific schema
provisioning ai generate \
  --schema database \
  --output workspaces/dev/db.ncl \
  "Production PostgreSQL with 100GB storage and daily backups"
```

### Use AI-Assisted Forms

```bash
# Open typdialog web UI with AI assistance
provisioning workspace init --interactive --ai-assist

# AI provides real-time suggestions as you type
# AI explains validation errors in plain English
# AI fills multiple fields from a natural language description
```

### Troubleshoot with AI

```bash
# Analyze failed deployment
provisioning ai troubleshoot deployment-12345

# AI analyzes logs and suggests fixes
# AI generates corrected configuration
# AI explains root cause in plain language
```

## Security and Privacy

The AI system implements strict security controls:

- ✅ **Cedar Policies**: AI access controlled by Cedar authorization
- ✅ **Secret Isolation**: AI cannot access secrets directly
- ✅ **Human Approval**: Critical operations require human approval
- ✅ **Audit Trail**: All AI operations are logged
- ✅ **Data Sanitization**: Secrets and PII are sanitized before being sent to the LLM
- ✅ **Local Models**: Support for air-gapped deployments

See [Security Policies](security-policies.md) for complete details.

## Supported LLM Providers

| Provider | Models | Best For |
| ---------- | -------- | ---------- |
| **Anthropic** | Claude Sonnet 4, Claude Opus 4 | Complex configs, long context |
| **OpenAI** | GPT-4 Turbo, GPT-4 | Fast suggestions, tool calling |
| **Local** | Llama 3, Mistral | Air-gapped, privacy-critical |

## Cost Considerations

AI features incur LLM API costs. The system implements cost controls:

- **Caching**: Reduces API calls by 50-80%
- **Rate Limiting**: Prevents runaway costs
- **Budget Limits**: Daily and monthly cost caps
- **Local Models**: Zero marginal cost for air-gapped deployments

See [Cost Management](cost-management.md) for optimization strategies.

## Architecture Decision Record

The AI integration is documented in:

- [ADR-015: AI Integration Architecture](../architecture/adr/adr-015-ai-integration-architecture.md)

## Next Steps

1. Read [Architecture](architecture.md) to understand the AI system design
2. Configure AI features in [Configuration](configuration.md)
3. Try [Natural Language Config](natural-language-config.md) for your first AI-generated config
4. Explore [AI Agents](ai-agents.md) for automation workflows
5. Review [Security Policies](security-policies.md) to understand access controls

---

**Version**: 1.0
**Last Updated**: 2025-01-08
**Status**: Active