# Provisioning KCL Package

A comprehensive KCL (KusionStack Configuration Language) package for the [Provisioning project](https://repo.jesusperez.pro/jesus/provisioning), providing type-safe schemas for server and cluster provisioning, batch workflows, and Kubernetes deployments.

## Overview

This package contains production-ready KCL schemas for configuration-driven, provider-agnostic infrastructure automation.

### Package Structure

```
provisioning/kcl/
├── main.k            # Main entry point - import this
├── settings.k        # Core system settings
├── lib.k             # Common schemas and utilities
├── server.k          # Server configuration schemas
├── cluster.k         # Cluster management schemas
├── workflows.k       # Batch workflow schemas
├── batch.k           # Advanced batch operation utilities
├── dependencies.k    # Taskserv dependency management
├── version.k         # Version management schemas
├── k8s_deploy.k      # Kubernetes deployment schemas
├── defaults.k        # Default configurations
├── examples_batch.k  # Comprehensive examples
└── docs/             # Documentation
```

## Quick Start

### Import the Package

```kcl
# Import the main entry point for access to all schemas
import provisioning.main

# Or import from a relative path if working within the same project
import .main
```

### Basic Server Configuration

```kcl
import .main

# Define a simple server
web_server: main.Server = main.Server {
    hostname: "web-01"
    title: "Production Web Server"
    labels: "env: prod, tier: web"
    user: "admin"

    # Optional: Add taskservs to install
    taskservs: [
        main.TaskServDef {
            name: "nginx"
            install_mode: "library"
            profile: "production"
        }
    ]
}
```

### Batch Workflow Example

```kcl
import .main

# Define a multi-provider infrastructure deployment
deployment_workflow: main.BatchWorkflow = main.BatchWorkflow {
    workflow_id: "prod_deploy_001"
    name: "Production Infrastructure Deployment"
    description: "Deploy web tier across UpCloud and AWS"

    operations: [
        # Create UpCloud servers
        main.BatchOperation {
            operation_id: "create_web_servers"
            name: "Create Web Servers"
            operation_type: "server"
            provider: "upcloud"
            action: "create"
            parameters: {
                "server_count": "3"
                "server_type": "web"
                "zone": "fi-hel2"
                "plan": "2xCPU-4GB"
            }
            priority: 10
        }

        # Install Kubernetes after servers are ready
        main.BatchOperation {
            operation_id: "install_k8s"
            name: "Install Kubernetes Cluster"
            operation_type: "taskserv"
            action: "create"
            parameters: {
                "taskserv": "kubernetes"
                "version": "v1.31.0"
                "cluster_name": "prod-cluster"
            }
            dependencies: [
                main.DependencyDef {
                    target_operation_id: "create_web_servers"
                    dependency_type: "sequential"
                    timeout: 600
                }
            ]
            priority: 8
        }
    ]

    # Global workflow settings
    max_parallel_operations: 3
    fail_fast: False

    # Use SurrealDB for state persistence
    storage: main.StorageConfig {
        backend: "surrealdb"
        connection_config: {
            "url": "ws://localhost:8000"
            "namespace": "provisioning"
            "database": "workflows"
        }
        enable_persistence: True
        retention_hours: 720  # 30 days
    }
}
```

### Kubernetes Deployment

```kcl
import .main

# Define a complete Kubernetes deployment
nginx_deployment: main.K8sDeploy = main.K8sDeploy {
    name: "nginx-web"
    namespace: "production"
    create_ns: True

    spec: main.K8sDeploySpec {
        replicas: 3
        containers: [
            main.K8sContainers {
                name: "nginx"
                image: "nginx:1.21"
                ports: [
                    main.K8sPort {
                        name: "http"
                        container: 80
                        target: 8080
                    }
                ]
                resources_requests: main.K8sResources {
                    memory: "128Mi"
                    cpu: "100m"
                }
                resources_limits: main.K8sResources {
                    memory: "256Mi"
                    cpu: "200m"
                }
            }
        ]
    }

    # Expose via service
    service: main.K8sService {
        name: "nginx-service"
        typ: "LoadBalancer"
        ports: [
            main.K8sPort {
                name: "http"
                target: 80
                nodePort: 30080
            }
        ]
    }
}
```

## Core Schemas

### Server Management

- **`Server`**: Complete server configuration with defaults inheritance (see the sketch below)
- **`ServerDefaults`**: Default settings for server provisioning
- **`Storage`**, **`StorageVol`**: Storage configuration and partitioning
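The Quick Start constructs a `Server` directly, so the defaults inheritance mentioned above is not demonstrated. Below is a minimal sketch of layering per-server values over shared `ServerDefaults`; the fields shown (`user`, `labels`) are taken from the Quick Start example, and whether `ServerDefaults` exposes exactly these fields is an assumption here (the real schemas live in `server.k` and `defaults.k`).

```kcl
import .main

# Shared baseline for every server in this environment
# (assumed fields - the actual ServerDefaults schema is defined in defaults.k)
prod_defaults: main.ServerDefaults = main.ServerDefaults {
    user: "admin"
    labels: "env: prod"
}

# Per-server values layered over the shared defaults
db_server: main.Server = main.Server {
    hostname: "db-01"
    title: "Production Database Server"
    user: prod_defaults.user      # inherited from the shared defaults
    labels: prod_defaults.labels
}
```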
### Workflow & Batch Operations

- **`BatchWorkflow`**: Multi-operation workflow with dependencies
- **`BatchOperation`**: Individual operation within workflows
- **`DependencyDef`**: Define sequential or conditional dependencies
- **`RetryPolicy`**: Configure retry behavior and backoff (see the sketch below)
- **`RollbackStrategy`**: Automatic rollback on failures
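None of the examples in this README exercise `RetryPolicy` or `RollbackStrategy`, so here is a hedged sketch of attaching both to an operation. The field names (`retry_policy`, `max_retries`, `backoff_seconds`, `rollback`, `on_failure`) are illustrative assumptions, not the package's confirmed API; `workflows.k` defines the actual fields.

```kcl
import .main

# Sketch only: the retry/rollback field names below are assumptions,
# not the package's confirmed API - see workflows.k for the real schema.
resilient_op: main.BatchOperation = main.BatchOperation {
    operation_id: "install_postgres"
    name: "Install PostgreSQL Taskserv"
    operation_type: "taskserv"
    action: "create"
    parameters: {"taskserv": "postgres"}

    # Retry transient failures with a fixed backoff (assumed fields)
    retry_policy: main.RetryPolicy {
        max_retries: 3
        backoff_seconds: 30
    }

    # Undo partial changes if the operation ultimately fails (assumed fields)
    rollback: main.RollbackStrategy {
        enabled: True
        on_failure: "destroy_created_resources"
    }
}
```

Pairing a bounded retry with an explicit rollback keeps a failed operation from leaving half-created resources behind, which is what Best Practices 6 and 7 below recommend.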
### Taskserv Management

- **`TaskServDef`**: Infrastructure service definitions
- **`TaskservDependencies`**: Dependency management for taskservs
- **`HealthCheck`**: Health monitoring configuration

### Kubernetes Deployments

- **`K8sDeploy`**: Complete Kubernetes deployment specification
- **`K8sService`**: Service definitions with load balancing
- **`K8sVolume`**: Persistent storage configuration
- **`K8sResources`**: Resource limits and requests

### Configuration & Settings

- **`Settings`**: System-wide configuration
- **`SecretProvider`**: SOPS/KMS secret management
- **`AIProvider`**: AI integration configuration

## Advanced Features

### Mixed Provider Support

Deploy across multiple cloud providers in a single workflow:

```kcl
mixed_deployment: main.BatchWorkflow = main.BatchWorkflow {
    workflow_id: "multi_cloud_001"
    name: "Multi-Cloud Deployment"

    operations: [
        # UpCloud servers for web tier
        main.BatchOperation {
            operation_id: "upcloud_web"
            provider: "upcloud"
            parameters: {"zone": "fi-hel2", "count": "3"}
        }

        # AWS RDS for database
        main.BatchOperation {
            operation_id: "aws_database"
            provider: "aws"
            parameters: {"region": "eu-west-1", "engine": "postgresql"}
            dependencies: [
                main.DependencyDef {
                    target_operation_id: "upcloud_web"
                    dependency_type: "sequential"
                }
            ]
        }
    ]
}
```

### Resource Constraints & Autoscaling

Configure intelligent resource management:

```kcl
batch_executor: main.BatchExecutor = main.BatchExecutor {
    executor_id: "production_executor"
    name: "Production Batch Executor"

    # Resource limits
    resource_constraints: [
        main.ResourceConstraint {
            resource_type: "cpu"
            resource_name: "total_cores"
            max_units: 16
            units_per_operation: 2
            hard_constraint: True
        }
    ]

    # Auto-scaling configuration
    autoscaling: main.BatchAutoscaling {
        enabled: True
        min_parallel: 2
        max_parallel: 10
        scale_up_threshold: 0.8
        target_utilization: 0.65
    }
}
```

### Monitoring & Observability

```kcl
monitoring_config: main.MonitoringConfig = main.MonitoringConfig {
    enabled: True
    backend: "prometheus"
    enable_tracing: True
    enable_notifications: True
    notification_channels: [
        "webhook:slack://ops-alerts",
        "webhook:pagerduty://incidents"
    ]
    log_level: "info"
}
```

## Validation & Testing

### Schema Validation

```bash
# Validate individual files
kcl run server_config.k

# Validate entire workflow
kcl run workflow_definition.k

# Output as JSON for integration
kcl run workflow_definition.k --format json
```

### Built-in Constraints

All schemas include comprehensive validation:

```kcl
# Server hostnames must be non-empty
server: main.Server = main.Server {
    hostname: "web-01"  # ✅ Valid
    # hostname: ""      # ❌ Validation error
}

# Resource constraints are enforced
resources: main.K8sResources = main.K8sResources {
    memory: "128Mi"     # ✅ Valid K8s format
    # memory: "invalid" # ❌ Validation error
}

# Dependency cycles are prevented
operation: main.BatchOperation = main.BatchOperation {
    operation_id: "op1"
    dependencies: [
        main.DependencyDef {
            target_operation_id: "op2"   # ✅ Valid dependency
            # target_operation_id: "op1" # ❌ Self-reference prevented
        }
    ]
}
```

## Integration Examples

### With Nushell Scripts

```nushell
# Generate workflow from KCL
let workflow = (kcl run deployment.k --format json | from json)

# Submit to batch executor
$workflow | to json | http post http://localhost:8080/workflows/batch/submit

# Monitor progress
while true {
    let status = (http get $"http://localhost:8080/workflows/batch/($workflow.workflow_id)")
    if $status.status == "completed" { break }
    sleep 5sec
}
```

### With Rust Orchestrator

```rust
// Deserialize KCL output into Rust structs
let workflow: BatchWorkflow = serde_json::from_str(&kcl_output)?;

// Execute via orchestrator
let executor = BatchExecutor::new(workflow);
executor.execute().await?;
```

## Package Metadata

- **Version**: 0.1.0
- **API Version**: v1
- **KCL Compatibility**: 0.11.0 - 0.12.0
- **Build Date**: 2025-09-28

### Features

- ✅ Server Management
- ✅ Cluster Orchestration
- ✅ Provider Abstraction
- ✅ Workflow Automation
- ✅ Batch Operations

## Best Practices

1. **Always import via `main.k`** for stability
2. **Use descriptive `operation_id` values** for dependency tracking
3. **Set appropriate timeouts** based on operation complexity
4. **Enable monitoring** for production workflows
5. **Test workflows** with small counts before production
6. **Use retry policies** for transient failures
7. **Configure rollback strategies** for critical operations

## Contributing

When adding new schemas:

1. Follow existing naming conventions
2. Add comprehensive validation rules
3. Include documentation strings
4. Export from `main.k`
5. Add examples to `examples_batch.k`
6. Update this README

## License

This package is part of the Provisioning project and follows the same license terms.