chore: fix docs after fences fix

Jesús Pérez 2026-01-14 04:53:58 +00:00
parent a4b3c02371
commit 17ef93ed23
Signed by: jesus
GPG Key ID: 9F243E355E0BC939
177 changed files with 4127 additions and 4010 deletions
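The change repeated throughout the hunks below is one line per code block: the generic ```text opening fence is replaced with a language-tagged fence (```toml, ```bash, ```nickel, ```json, and so on) so the documentation code blocks render with proper syntax highlighting. As an illustrative sketch only — the rendered hunks below omit the leading +/- markers — a typical hunk reads:

@@ -86,7 +86,7 @@ Declarative Infrastructure as Code (IaC) platform providing:
 **Solution**: Unified abstraction layer with provider-agnostic interfaces. Write configuration once, deploy anywhere.
-```text
+```toml
 # Same configuration works on UpCloud, AWS, or local infrastructure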

File diff suppressed because one or more lines are too long

View File

@ -86,7 +86,7 @@ Declarative Infrastructure as Code (IaC) platform providing:
**Solution**: Unified abstraction layer with provider-agnostic interfaces. Write configuration once, deploy anywhere.
```text
```toml
# Same configuration works on UpCloud, AWS, or local infrastructure
server: Server {
name = "web-01"
@ -101,7 +101,7 @@ server: Server {
**Solution**: Automatic dependency resolution with topological sorting and health checks.
```text
```bash
# Provisioning resolves: containerd → etcd → kubernetes → cilium
taskservs = ["cilium"] # Automatically installs all dependencies
```
@ -112,7 +112,7 @@ taskservs = ["cilium"] # Automatically installs all dependencies
**Solution**: Hierarchical configuration system with 476+ config accessors replacing 200+ ENV variables.
```text
```toml
Defaults → User → Project → Infrastructure → Environment → Runtime
```
@ -197,7 +197,7 @@ Clusters handle:
Isolated environments for different projects or deployment stages.
```text
```bash
workspace_librecloud/ # Production workspace
├── infra/ # Infrastructure definitions
├── config/ # Workspace configuration
@ -211,7 +211,7 @@ workspace_dev/ # Development workspace
Switch between workspaces with single command:
```text
```bash
provisioning workspace switch librecloud
```
@ -240,7 +240,7 @@ Coordinated sequences of operations with dependency management.
### System Components
```text
```bash
┌─────────────────────────────────────────────────────────────────┐
│ User Interface Layer │
│ • CLI (provisioning command) │
@ -282,7 +282,7 @@ Coordinated sequences of operations with dependency management.
### Directory Structure
```text
```bash
project-provisioning/
├── provisioning/ # Core provisioning system
│ ├── core/ # Core engine and libraries
@ -514,7 +514,7 @@ Comprehensive version tracking and updates.
### Data Flow
```text
```bash
1. User defines infrastructure in Nickel
2. CLI loads configuration (hierarchical)
@ -540,7 +540,7 @@ Comprehensive version tracking and updates.
**Step 1**: Define infrastructure in Nickel
```text
```nickel
# infra/my-cluster.ncl
let config = {
infra = {
@ -561,13 +561,13 @@ config
**Step 2**: Submit to Provisioning
```text
```bash
provisioning server create --infra my-cluster
```
**Step 3**: Provisioning executes workflow
```text
```bash
1. Create workflow: "deploy-my-cluster"
2. Resolve dependencies:
- containerd (required by kubernetes)
@ -592,7 +592,7 @@ provisioning server create --infra my-cluster
**Step 4**: Verify deployment
```text
```bash
provisioning cluster status my-cluster
```
@ -600,7 +600,7 @@ provisioning cluster status my-cluster
Configuration values are resolved through a hierarchy:
```text
```toml
1. System Defaults (provisioning/config/config.defaults.toml)
↓ (overridden by)
2. User Preferences (~/.config/provisioning/user_config.yaml)
@ -616,7 +616,7 @@ Configuration values are resolved through a hierarchy:
**Example**:
```text
```bash
# System default
[servers]
default_plan = "small"
@ -641,7 +641,7 @@ provisioning server create --plan xlarge # Overrides everything
Deploy Kubernetes clusters across different cloud providers with identical configuration.
```text
```yaml
# UpCloud cluster
provisioning cluster create k8s-prod --provider upcloud
@ -653,7 +653,7 @@ provisioning cluster create k8s-prod --provider aws
Manage multiple environments with workspace switching.
```text
```bash
# Development
provisioning workspace switch dev
provisioning cluster create app-stack
@ -671,7 +671,7 @@ provisioning cluster create app-stack
Test infrastructure changes before deploying to production.
```text
```bash
# Test Kubernetes upgrade locally
provisioning test topology load kubernetes_3node |
test env cluster kubernetes --version 1.29.0
@ -687,7 +687,7 @@ provisioning test env cleanup <env-id>
Deploy to multiple regions in parallel.
```text
```bash
# workflows/multi-region.ncl
let batch_workflow = {
operations = [
@ -715,7 +715,7 @@ let batch_workflow = {
batch_workflow
```
```text
```bash
provisioning batch submit workflows/multi-region.ncl
provisioning batch monitor <workflow-id>
```
@ -724,7 +724,7 @@ provisioning batch monitor <workflow-id>
Recreate infrastructure from configuration.
```text
```toml
# Infrastructure destroyed
provisioning workspace switch prod
@ -738,7 +738,7 @@ provisioning cluster create --infra backup-restore --wait
Automated testing and deployment pipelines.
```text
```bash
# .gitlab-ci.yml
test-infrastructure:
script:

View File

@ -117,7 +117,7 @@ Nickel, and Rust.
## Documentation Structure
```text
```bash
provisioning/docs/src/
├── README.md (this file) # Documentation hub
├── getting-started/ # Getting started guides

View File

@ -20,7 +20,7 @@ The AI integration consists of multiple components working together to provide i
### Natural Language Configuration
Generate infrastructure configurations from plain English descriptions:
```text
```toml
provisioning ai generate "Create a production PostgreSQL cluster with encryption and daily backups"
```
@ -31,7 +31,7 @@ Real-time suggestions and explanations as you fill out configuration forms via t
### Intelligent Troubleshooting
AI analyzes deployment failures and suggests fixes:
```text
```bash
provisioning ai troubleshoot deployment-12345
```
@ -39,13 +39,13 @@ provisioning ai troubleshoot deployment-12345
Configuration Optimization
AI reviews configurations and suggests performance and security improvements:
```text
```toml
provisioning ai optimize workspaces/prod/config.ncl
```
### Autonomous Agents
AI agents execute multi-step workflows with minimal human intervention:
```text
```bash
provisioning ai agent --goal "Set up complete dev environment for Python app"
```
@ -68,7 +68,7 @@ provisioning ai agent --goal "Set up complete dev environment for Python app"
### Enable AI Features
```text
```bash
# Edit provisioning config
vim provisioning/config/ai.toml
@ -86,7 +86,7 @@ troubleshooting = true
### Generate Configuration from Natural Language
```text
```toml
# Simple generation
provisioning ai generate "PostgreSQL database with encryption"
@ -99,7 +99,7 @@ provisioning ai generate
### Use AI-Assisted Forms
```text
```bash
# Open typdialog web UI with AI assistance
provisioning workspace init --interactive --ai-assist
@ -110,7 +110,7 @@ provisioning workspace init --interactive --ai-assist
### Troubleshoot with AI
```text
```bash
# Analyze failed deployment
provisioning ai troubleshoot deployment-12345

View File

@ -13,7 +13,7 @@ security and requiring human approval for critical operations.
Enable AI agents to manage complex provisioning workflows:
```text
```bash
User Goal:
"Set up a complete development environment with:
- PostgreSQL database
@ -39,7 +39,7 @@ AI Agent executes:
Agents coordinate complex, multi-component deployments:
```text
```bash
Goal: "Deploy production Kubernetes cluster with managed databases"
Agent Plan:
@ -75,7 +75,7 @@ Agent Plan:
Agents adapt to conditions and make intelligent decisions:
```text
```bash
Scenario: Database provisioning fails due to resource quota
Standard approach (human):
@ -102,7 +102,7 @@ Agent approach:
Agents understand resource dependencies:
```text
```bash
Knowledge graph of dependencies:
VPC ──→ Subnets ──→ EC2 Instances
@ -125,7 +125,7 @@ Agent ensures:
### Agent Design Pattern
```text
```bash
┌────────────────────────────────────────────────────────┐
│ Agent Supervisor (Orchestrator) │
│ - Accepts user goal │
@ -151,7 +151,7 @@ Agent ensures:
### Agent Workflow
```text
```bash
Start: User Goal
┌─────────────────────────────────────────┐
@ -214,7 +214,7 @@ Success: Deployment Complete
### 1. Database Specialist Agent
```text
```bash
Responsibilities:
- Create and configure databases
- Set up replication and backups
@ -231,7 +231,7 @@ Examples:
### 2. Kubernetes Specialist Agent
```text
```yaml
Responsibilities:
- Create and configure Kubernetes clusters
- Configure networking and ingress
@ -248,7 +248,7 @@ Examples:
### 3. Infrastructure Agent
```text
```bash
Responsibilities:
- Create networking infrastructure
- Configure security and firewalls
@ -265,7 +265,7 @@ Examples:
### 4. Monitoring Agent
```text
```bash
Responsibilities:
- Deploy monitoring stack
- Configure alerting
@ -282,7 +282,7 @@ Examples:
### 5. Compliance Agent
```text
```bash
Responsibilities:
- Check security policies
- Verify compliance requirements
@ -301,7 +301,7 @@ Examples:
### Example 1: Development Environment Setup
```text
```bash
$ provisioning ai agent --goal "Set up dev environment for Python web app"
Agent Plan Generated:
@ -357,7 +357,7 @@ Grafana dashboards: [http://grafana.internal:3000](http://grafana.internal:3000)
### Example 2: Production Kubernetes Deployment
```text
```yaml
$ provisioning ai agent --interactive
--goal "Deploy production Kubernetes cluster with managed databases"
@ -414,7 +414,7 @@ User: Review configs
Agents stop and ask humans for approval at critical points:
```text
```bash
Automatic Approval (Agent decides):
- Create configuration
- Validate configuration
@ -434,7 +434,7 @@ Human Approval Required:
All decisions logged for audit trail:
```text
```bash
Agent Decision Log:
| 2025-01-13 10:00:00 | Generate database config |
| 2025-01-13 10:00:05 | Config validation: PASS |
@ -451,7 +451,7 @@ Agent Decision Log:
Agents can rollback on failure:
```text
```bash
Scenario: Database creation succeeds, but Kubernetes creation fails
Agent behavior:
@ -469,7 +469,7 @@ Full rollback capability if entire workflow fails before human approval.
### Agent Settings
```text
```toml
# In provisioning/config/ai.toml
[ai.agents]
enabled = true

View File

@ -11,7 +11,7 @@ typdialog web UI. This enables users to configure infrastructure through interac
Enhance configuration forms with AI-powered assistance:
```text
```toml
User typing in form field: "storage"
AI analyzes context:
@ -38,7 +38,7 @@ Suggestions appear:
### User Interface Integration
```text
```bash
┌────────────────────────────────────────┐
│ Typdialog Web UI (React/TypeScript) │
│ │
@ -65,7 +65,7 @@ Suggestions appear:
### Suggestion Pipeline
```text
```bash
User Event (typing, focusing field, validation error)
┌─────────────────────────────────────┐
@ -107,7 +107,7 @@ User Event (typing, focusing field, validation error)
Intelligent suggestions based on context:
```text
```bash
Scenario: User filling database configuration form
1. Engine selection
@ -135,7 +135,7 @@ Scenario: User filling database configuration form
Human-readable error messages with fixes:
```text
```bash
User enters: "storage = -100"
Current behavior:
@ -157,7 +157,7 @@ Planned AI behavior:
Suggestions change based on other fields:
```text
```bash
Scenario: Multi-step configuration form
Step 1: Select environment
@ -186,7 +186,7 @@ Step 4: Encryption
Quick access to relevant docs:
```text
```bash
Field: "Backup Retention Days"
Suggestion popup:
@ -207,7 +207,7 @@ Suggestion popup:
Suggest multiple related fields together:
```text
```bash
User selects: environment = "production"
AI suggests completing:
@ -231,7 +231,7 @@ AI suggests completing:
### Frontend (typdialog-ai JavaScript/TypeScript)
```text
```bash
// React component for field with AI assistance
interface AIFieldProps {
fieldName: string;
@ -286,7 +286,7 @@ function AIAssistedField({fieldName, formContext, schema}: AIFieldProps) {
### Backend Service Integration
```text
```bash
// In AI Service: field suggestion endpoint
async fn suggest_field_value(
req: SuggestFieldRequest,
@ -316,7 +316,7 @@ async fn suggest_field_value(
### Form Assistant Settings
```text
```toml
# In provisioning/config/ai.toml
[ai.forms]
enabled = true
@ -352,7 +352,7 @@ track_rejected_suggestions = true
### Scenario: New User Configuring PostgreSQL
```text
```toml
1. User opens typdialog form
- Form title: "Create Database"
- First field: "Database Engine"
@ -395,7 +395,7 @@ track_rejected_suggestions = true
NLC and form assistance share the same backend:
```text
```bash
Natural Language Generation AI-Assisted Forms
↓ ↓
"Create a PostgreSQL db" Select field values

View File

@ -36,7 +36,7 @@ The RAG system enables AI to access and reason over platform documentation:
- Semantic caching for repeated queries
**Capabilities**:
```text
```bash
provisioning ai query "How do I set up Kubernetes?"
provisioning ai template "Describe my infrastructure"
```
@ -56,14 +56,14 @@ Provides Model Context Protocol integration:
**Status**: ✅ Production-Ready
Interactive commands:
```text
```bash
provisioning ai template --prompt "Describe infrastructure"
provisioning ai query --prompt "Configuration question"
provisioning ai chat # Interactive mode
```
**Configuration**:
```text
```toml
[ai]
enabled = true
provider = "anthropic" # or "openai" or "local"
@ -108,7 +108,7 @@ Real-time AI suggestions in configuration forms:
## Architecture Diagram
```text
```bash
┌─────────────────────────────────────────────────┐
│ User Interface │
│ ├── CLI (provisioning ai ...) │

View File

@ -14,7 +14,7 @@ The Configuration Generator (typdialog-prov-gen) will provide template-based Nic
- Preview before generation
### Customization via Natural Language
```text
```bash
provisioning ai config-gen
--template "kubernetes-cluster"
--customize "Add Prometheus monitoring, increase replicas to 5, use us-east-1"
@ -32,7 +32,7 @@ provisioning ai config-gen
## Architecture
```text
```bash
Template Library
Template Selection (AI + User)

View File

@ -9,7 +9,7 @@ controls, and security settings.
### Minimal Configuration
```text
```toml
# provisioning/config/ai.toml
[ai]
enabled = true
@ -27,7 +27,7 @@ temperature = 0.7
### Initialize Configuration
```text
```toml
# Generate default configuration
provisioning config init ai
@ -45,7 +45,7 @@ provisioning config show ai
### Anthropic Claude
```text
```toml
[ai]
enabled = true
provider = "anthropic"
@ -68,7 +68,7 @@ top_k = 40
### OpenAI GPT-4
```text
```toml
[ai]
enabled = true
provider = "openai"
@ -89,7 +89,7 @@ top_p = 0.95
### Local Models
```text
```toml
[ai]
enabled = true
provider = "local"
@ -112,7 +112,7 @@ max_batch_size = 4
### Enable Specific Features
```text
```toml
[ai.features]
# Core features (production-ready)
rag_search = true # Retrieve-Augmented Generation
@ -137,7 +137,7 @@ knowledge_base = false # Custom knowledge base per workspace
### Cache Strategy
```text
```toml
[ai.cache]
enabled = true
cache_type = "memory" # or "redis", "disk"
@ -169,7 +169,7 @@ cache_embeddings = true # Cache embedding vectors
### Cache Metrics
```text
```bash
# Monitor cache performance
provisioning admin cache stats ai
@ -184,7 +184,7 @@ provisioning admin cache analyze ai --hours 24
### Rate Limits
```text
```toml
[ai.limits]
# Tokens per request
max_tokens = 4096
@ -207,7 +207,7 @@ track_cost_per_request = true
### Cost Budgeting
```text
```toml
[ai.budget]
enabled = true
monthly_limit_usd = 1000
@ -226,7 +226,7 @@ local_limit = 0 # Free (run locally)
### Track Costs
```text
```bash
# View cost metrics
provisioning admin costs show ai --period month
@ -244,7 +244,7 @@ provisioning admin costs export ai --format csv --output costs.csv
### Authentication
```text
```toml
[ai.auth]
# API key from environment variable
api_key = "${PROVISIONING_AI_API_KEY}"
@ -263,7 +263,7 @@ signing_method = "hmac-sha256"
### Authorization (Cedar)
```text
```toml
[ai.authorization]
enabled = true
policy_file = "provisioning/policies/ai-policies.cedar"
@ -276,7 +276,7 @@ policy_file = "provisioning/policies/ai-policies.cedar"
### Data Protection
```text
```toml
[ai.security]
# Sanitize data before sending to external LLM
sanitize_pii = true
@ -300,7 +300,7 @@ local_only = false # Set true for air-gapped deployments
### Vector Store Setup
```text
```toml
[ai.rag]
enabled = true
@ -337,7 +337,7 @@ code_overlap = 128
### Index Management
```text
```bash
# Create indexes
provisioning ai index create rag
@ -355,7 +355,7 @@ provisioning ai index cleanup rag --older-than 30days
### MCP Server Setup
```text
```toml
[ai.mcp]
enabled = true
port = 3000
@ -380,7 +380,7 @@ timeout_seconds = 30
### MCP Client Configuration
```text
```toml
~/.claude/claude_desktop_config.json:
{
"mcpServers": {
@ -400,7 +400,7 @@ timeout_seconds = 30
### Logging Configuration
```text
```toml
[ai.logging]
level = "info" # or "debug", "warn", "error"
format = "json" # or "text"
@ -423,7 +423,7 @@ log_costs = true
### Metrics and Monitoring
```text
```bash
# View AI service metrics
provisioning admin metrics show ai
@ -443,7 +443,7 @@ curl [http://localhost:8083/metrics](http://localhost:8083/metrics)
### Configuration Validation
```text
```toml
# Validate configuration syntax
provisioning config validate ai
@ -464,7 +464,7 @@ provisioning ai health-check
### Common Settings
```text
```toml
# Provider configuration
export PROVISIONING_AI_PROVIDER="anthropic"
export PROVISIONING_AI_MODEL="claude-sonnet-4"
@ -492,7 +492,7 @@ export RUST_LOG="provisioning::ai=info"
### Common Issues
**Issue**: API key not recognized
```text
```bash
# Check environment variable is set
echo $PROVISIONING_AI_API_KEY
@ -504,7 +504,7 @@ provisioning ai test provider anthropic
```
**Issue**: Cache not working
```text
```bash
# Check cache status
provisioning admin cache stats ai
@ -517,7 +517,7 @@ RUST_LOG=provisioning::cache=debug provisioning-ai-service
```
**Issue**: RAG search not finding results
```text
```bash
# Rebuild RAG indexes
provisioning ai index rebuild rag
@ -534,7 +534,7 @@ provisioning ai index status rag
New AI versions automatically migrate old configurations:
```text
```toml
# Check configuration version
provisioning config version ai
@ -549,7 +549,7 @@ provisioning config backup ai
### Recommended Production Settings
```text
```toml
[ai]
enabled = true
provider = "anthropic"

View File

@ -21,7 +21,7 @@ includes built-in cost controls to prevent runaway spending while maximizing val
### Cost Examples
```text
```bash
Scenario 1: Generate simple database configuration
- Input: 500 tokens (description + schema)
- Output: 200 tokens (generated config)
@ -49,7 +49,7 @@ Scenario 3: Monthly usage (typical organization)
Caching is the primary cost reduction strategy, cutting costs by 50-80%:
```text
```bash
Without Caching:
User 1: "Generate PostgreSQL config" → API call → $0.005
User 2: "Generate PostgreSQL config" → API call → $0.005
@ -69,7 +69,7 @@ With Semantic Cache:
### Cache Configuration
```text
```toml
[ai.cache]
enabled = true
cache_type = "redis" # Distributed cache across instances
@ -96,7 +96,7 @@ alert_on_low_hit_rate = true
Prevent usage spikes from unexpected costs:
```text
```toml
[ai.limits]
# Per-request limits
max_tokens = 4096
@ -119,7 +119,7 @@ stop_at_percent = 95 # Stop when at 95% of budget
### Workspace-Level Budgets
```text
```toml
[ai.workspace_budgets]
# Per-workspace cost limits
dev.daily_limit_usd = 10
@ -135,7 +135,7 @@ teams.team-b.monthly_limit = 300
### Track Spending
```text
```bash
# View current month spending
provisioning admin costs show ai
@ -154,7 +154,7 @@ provisioning admin costs export ai --format csv --output costs.csv
### Cost Breakdown
```text
```bash
Month: January 2025
Total Spending: $285.42
@ -192,7 +192,7 @@ Cache Performance:
### Strategy 1: Increase Cache Hit Rate
```text
```bash
# Longer TTL = more cache hits
[ai.cache]
ttl_seconds = 7200 # 2 hours instead of 1 hour
@ -208,7 +208,7 @@ similarity_threshold = 0.90 # Lower threshold = more hits
### Strategy 2: Use Local Models
```text
```toml
[ai]
provider = "local"
model = "mistral-7b" # Free, runs on GPU
@ -222,7 +222,7 @@ model = "mistral-7b" # Free, runs on GPU
### Strategy 3: Use Haiku for Simple Tasks
```text
```bash
Task Complexity vs Model:
Simple (form assist): Claude Haiku 4 ($0.80/$4)
@ -241,7 +241,7 @@ Example optimization:
### Strategy 4: Batch Operations
```text
```bash
# Instead of individual requests, batch similar operations:
# Before: 100 configs, 100 separate API calls
@ -257,7 +257,7 @@ provisioning ai batch --input configs-list.yaml
### Strategy 5: Smart Feature Enablement
```text
```toml
[ai.features]
# Enable high-ROI features
config_generation = true # High value, moderate cost
@ -273,7 +273,7 @@ agents = false # Complex, requires multiple calls
### 1. Set Budget
```text
```bash
# Set monthly budget
provisioning config set ai.budget.monthly_limit_usd 500
@ -287,7 +287,7 @@ provisioning config set ai.workspace_budgets.dev.monthly_limit 100
### 2. Monitor Spending
```text
```bash
# Daily check
provisioning admin costs show ai
@ -300,7 +300,7 @@ provisioning admin costs analyze ai --period month
### 3. Adjust If Needed
```text
```bash
# If overspending:
# - Increase cache TTL
# - Enable local models for simple tasks
@ -315,7 +315,7 @@ provisioning admin costs analyze ai --period month
### 4. Forecast and Plan
```text
```bash
# Current monthly run rate
provisioning admin costs forecast ai
@ -334,7 +334,7 @@ provisioning admin costs forecast ai
### Chargeback Models
**Per-Workspace Model**:
```text
```bash
Development workspace: $50/month
Staging workspace: $100/month
Production workspace: $300/month
@ -343,14 +343,14 @@ Total: $450/month
```
**Per-User Model**:
```text
```bash
Each user charged based on their usage
Encourages efficiency
Difficult to track/allocate
```
**Shared Pool Model**:
```text
```bash
All teams share $1000/month budget
Budget splits by consumption rate
Encourages optimization
@ -361,7 +361,7 @@ Most flexible
### Generate Reports
```text
```bash
# Monthly cost report
provisioning admin costs report ai
--format pdf
@ -384,7 +384,7 @@ provisioning admin costs report ai
### ROI Examples
```text
```bash
Scenario 1: Developer Time Savings
Problem: Manual config creation takes 2 hours
Solution: AI config generation, 10 minutes (12x faster)
@ -422,7 +422,7 @@ Scenario 3: Reduction in Failed Deployments
### Hybrid Strategy (Recommended)
```text
```bash
✓ Local models for:
- Form assistance (high volume, low complexity)
- Simple validation checks
@ -445,7 +445,7 @@ Result:
### Cost Anomaly Detection
```text
```bash
# Enable anomaly detection
provisioning config set ai.monitoring.anomaly_detection true
@ -462,7 +462,7 @@ provisioning config set ai.monitoring.cost_spike_percent 150
### Alert Configuration
```text
```toml
[ai.monitoring.alerts]
enabled = true
spike_threshold_percent = 150

View File

@ -9,7 +9,7 @@ platform capabilities as tools. This enables complex multi-step workflows, tool
The MCP integration follows the Model Context Protocol specification:
```text
```bash
┌──────────────────────────────────────────────────────────────┐
│ External LLM (Claude, GPT-4, etc.) │
└────────────────────┬─────────────────────────────────────────┘
@ -44,7 +44,7 @@ The MCP integration follows the Model Context Protocol specification:
The MCP server is started as a stdio-based service:
```text
```bash
# Start MCP server (stdio transport)
provisioning-mcp-server --config /etc/provisioning/ai.toml
@ -74,7 +74,7 @@ RUST_LOG=debug provisioning-mcp-server --config /etc/provisioning/ai.toml
Generate infrastructure configuration from natural language description.
```text
```json
{
"name": "generate_config",
"description": "Generate a Nickel infrastructure configuration from a natural language description",
@ -102,7 +102,7 @@ Generate infrastructure configuration from natural language description.
**Example Usage**:
```text
```bash
# Via MCP client
mcp-client provisioning generate_config
--description "Production PostgreSQL cluster with encryption and daily backups"
@ -114,7 +114,7 @@ mcp-client provisioning generate_config
**Response**:
```text
```json
{
database = {
engine = "postgresql",
@ -155,7 +155,7 @@ mcp-client provisioning generate_config
Validate a Nickel configuration against schemas and policies.
```text
```json
{
"name": "validate_config",
"description": "Validate a Nickel configuration file",
@ -182,7 +182,7 @@ Validate a Nickel configuration against schemas and policies.
**Example Usage**:
```text
```bash
# Validate configuration
mcp-client provisioning validate_config
--config "$(cat workspaces/prod/database.ncl)"
@ -195,7 +195,7 @@ mcp-client provisioning validate_config
**Response**:
```text
```json
{
"valid": true,
"errors": [],
@ -216,7 +216,7 @@ mcp-client provisioning validate_config
Search infrastructure documentation using RAG system.
```text
```json
{
"name": "search_docs",
"description": "Search provisioning documentation for information",
@ -244,7 +244,7 @@ Search infrastructure documentation using RAG system.
**Example Usage**:
```text
```bash
# Search documentation
mcp-client provisioning search_docs
--query "How do I configure PostgreSQL with replication?"
@ -258,7 +258,7 @@ mcp-client provisioning search_docs
**Response**:
```text
```json
{
"results": [
{
@ -283,7 +283,7 @@ mcp-client provisioning search_docs
Analyze deployment failures and suggest fixes.
```text
```json
{
"name": "troubleshoot_deployment",
"description": "Analyze deployment logs and suggest fixes",
@ -310,7 +310,7 @@ Analyze deployment failures and suggest fixes.
**Example Usage**:
```text
```bash
# Troubleshoot recent deployment
mcp-client provisioning troubleshoot_deployment
--deployment_id "deploy-2025-01-13-001"
@ -322,7 +322,7 @@ mcp-client provisioning troubleshoot_deployment
**Response**:
```text
```json
{
"status": "failure",
"root_cause": "Database connection timeout during migration phase",
@ -349,7 +349,7 @@ mcp-client provisioning troubleshoot_deployment
Retrieve schema definition with examples.
```text
```json
{
"name": "get_schema",
"description": "Get a provisioning schema definition",
@ -373,7 +373,7 @@ Retrieve schema definition with examples.
**Example Usage**:
```text
```bash
# Get schema definition
mcp-client provisioning get_schema --schema_name database
@ -389,7 +389,7 @@ mcp-client provisioning get_schema
Verify configuration against compliance policies (Cedar).
```text
```json
{
"name": "check_compliance",
"description": "Check configuration against compliance policies",
@ -412,7 +412,7 @@ Verify configuration against compliance policies (Cedar).
**Example Usage**:
```text
```bash
# Check against PCI-DSS
mcp-client provisioning check_compliance
--config "$(cat workspaces/prod/database.ncl)"
@ -423,7 +423,7 @@ mcp-client provisioning check_compliance
### Claude Desktop (Most Common)
```text
```bash
~/.claude/claude_desktop_config.json:
{
"mcpServers": {
@ -441,7 +441,7 @@ mcp-client provisioning check_compliance
**Usage in Claude**:
```text
```bash
User: I need a production Kubernetes cluster in AWS with automatic scaling
Claude can now use provisioning tools:
@ -454,7 +454,7 @@ I'll help you create a production Kubernetes cluster. Let me:
### OpenAI Function Calling
```text
```bash
import openai
tools = [
@ -486,7 +486,7 @@ response = openai.ChatCompletion.create(
### Local LLM Integration (Ollama)
```text
```bash
# Start Ollama with provisioning MCP
OLLAMA_MCP_SERVERS=provisioning://localhost:3000
ollama serve
@ -504,7 +504,7 @@ curl [http://localhost:11434/api/generate](http://localhost:11434/api/generate)
Tools return consistent error responses:
```text
```json
{
"error": {
"code": "VALIDATION_ERROR",
@ -567,7 +567,7 @@ See [Configuration Guide](configuration.md) for MCP-specific settings:
## Monitoring and Debugging
```text
```bash
# Monitor MCP server
provisioning admin mcp status

View File

@ -12,7 +12,7 @@ validation.
Transform infrastructure descriptions into production-ready Nickel configurations:
```text
```nickel
User Input:
"Create a production PostgreSQL cluster with 100GB storage,
daily backups, encryption enabled, and cross-region replication
@ -34,7 +34,7 @@ System Output:
### Generation Pipeline
```text
```bash
Input Description (Natural Language)
┌─────────────────────────────────────┐
@ -84,7 +84,7 @@ Input Description (Natural Language)
Extract structured intent from natural language:
```text
```bash
Input: "Create a production PostgreSQL cluster with encryption and backups"
Extracted Intent:
@ -104,7 +104,7 @@ Extracted Intent:
Map natural language entities to schema fields:
```text
```bash
Description Terms → Schema Fields:
"100GB storage" → database.instance.allocated_storage_gb = 100
"daily backups" → backup.enabled = true, backup.frequency = "daily"
@ -117,7 +117,7 @@ Description Terms → Schema Fields:
Sophisticated prompting for schema-aware generation:
```text
```bash
System Prompt:
You are generating Nickel infrastructure configurations.
Generate ONLY valid Nickel syntax.
@ -144,7 +144,7 @@ Start with: let { database = {
Handle generation errors through iteration:
```text
```bash
Attempt 1: Generate initial config
↓ Validate
✗ Error: field `version` type mismatch (string vs number)
@ -158,7 +158,7 @@ Attempt 2: Fix with context from error
### CLI Usage
```text
```bash
# Simple generation
provisioning ai generate "PostgreSQL database for production"
@ -188,7 +188,7 @@ provisioning ai generate --batch descriptions.yaml
### Interactive Refinement
```text
```bash
$ provisioning ai generate --interactive
> Describe infrastructure: Create production PostgreSQL cluster
@ -209,12 +209,12 @@ Configuration saved to: workspaces/prod/database.ncl
### Example 1: Simple Database
**Input**:
```text
```bash
"PostgreSQL database with 50GB storage and encryption"
```
**Output**:
```text
```javascript
let {
database = {
engine = "postgresql",
@ -249,13 +249,13 @@ let {
### Example 2: Complex Kubernetes Setup
**Input**:
```text
```yaml
"Production Kubernetes cluster in AWS with 3 availability zones,
auto-scaling from 3 to 10 nodes, managed PostgreSQL, and monitoring"
```
**Output**:
```text
```javascript
let {
kubernetes = {
version = "1.28.0",
@ -314,7 +314,7 @@ let {
### Configurable Generation Parameters
```text
```toml
# In provisioning/config/ai.toml
[ai.generation]
# Which schema to use by default
@ -360,7 +360,7 @@ require_compliance_check = true
### Typical Usage Session
```text
```bash
# 1. Describe infrastructure need
$ provisioning ai generate "I need a database for my web app"
@ -386,7 +386,7 @@ $ provisioning workspace logs database
NLC uses RAG to find similar configurations:
```text
```toml
User: "Create Kubernetes cluster"
RAG searches for:
@ -407,7 +407,7 @@ NLC and form assistance share components:
### CLI Integration
```text
```bash
# Generate then preview
| provisioning ai generate "PostgreSQL prod" | \ |
provisioning config preview

View File

@ -22,7 +22,7 @@ The RAG system consists of:
The system uses embedding models to convert documents into vector representations:
```text
```bash
┌─────────────────────┐
│ Document Source │
│ (Markdown, Code) │
@ -55,7 +55,7 @@ The system uses embedding models to convert documents into vector representation
SurrealDB serves as the vector database and knowledge store:
```text
```bash
# Configuration in provisioning/schemas/ai.ncl
let {
rag = {
@ -108,7 +108,7 @@ Intelligent chunking preserves context while managing token limits:
#### Markdown Chunking Strategy
```text
```bash
Input Document: provisioning/docs/src/guides/from-scratch.md
Chunks:
@ -126,7 +126,7 @@ Each chunk includes:
#### Code Chunking Strategy
```text
```bash
Input Document: provisioning/schemas/main.ncl
Chunks:
@ -148,7 +148,7 @@ The system implements dual search strategy for optimal results:
### Vector Similarity Search
```text
```bash
// Find semantically similar documents
async fn vector_search(query: &str, top_k: usize) -> Vec<Document> {
let embedding = embed(query).await?;
@ -173,7 +173,7 @@ async fn vector_search(query: &str, top_k: usize) -> Vec<Document> {
### BM25 Keyword Search
```text
```bash
// Find documents with matching keywords
async fn keyword_search(query: &str, top_k: usize) -> Vec<Document> {
// BM25 full-text search in SurrealDB
@ -196,7 +196,7 @@ async fn keyword_search(query: &str, top_k: usize) -> Vec<Document> {
### Hybrid Results
```text
```javascript
async fn hybrid_search(
query: &str,
vector_weight: f32,
@ -231,7 +231,7 @@ async fn hybrid_search(
Reduces API calls by caching embeddings of repeated queries:
```text
```rust
struct SemanticCache {
queries: Arc<DashMap<Vec<f32>, CachedResult>>,
similarity_threshold: f32,
@ -268,7 +268,7 @@ impl SemanticCache {
### Document Indexing
```text
```bash
# Index all documentation
provisioning ai index-docs provisioning/docs/src
@ -284,7 +284,7 @@ provisioning ai watch docs provisioning/docs/src
### Programmatic Indexing
```text
```bash
// In ai-service on startup
async fn initialize_rag() -> Result<()> {
let rag = RAGSystem::new(&config.rag).await?;
@ -309,7 +309,7 @@ async fn initialize_rag() -> Result<()> {
### Query the RAG System
```text
```bash
# Search for context-aware information
provisioning ai query "How do I configure PostgreSQL with encryption?"
@ -323,7 +323,7 @@ provisioning ai chat
### AI Service Integration
```text
```bash
// AI service uses RAG to enhance generation
async fn generate_config(user_request: &str) -> Result<String> {
// Retrieve relevant context
@ -344,7 +344,7 @@ async fn generate_config(user_request: &str) -> Result<String> {
### Form Assistance Integration
```text
```bash
// In typdialog-ai (JavaScript/TypeScript)
async function suggestFieldValue(fieldName, currentInput) {
// Query RAG for similar configurations
@ -415,7 +415,7 @@ See [Configuration Guide](configuration.md) for detailed RAG setup:
### Query Metrics
```text
```bash
# View RAG search metrics
provisioning ai metrics show rag
@ -425,7 +425,7 @@ provisioning ai eval-rag --sample-queries 100
### Debug Mode
```text
```bash
# In provisioning/config/ai.toml
[ai.rag.debug]
enabled = true

View File

@ -9,7 +9,7 @@ controlled through Cedar policies and include strict secret isolation.
### Defense in Depth
```text
```bash
┌─────────────────────────────────────────┐
│ User Request to AI │
└──────────────┬──────────────────────────┘
@ -60,7 +60,7 @@ controlled through Cedar policies and include strict secret isolation.
### Policy Engine Setup
```text
```bash
// File: provisioning/policies/ai-policies.cedar
// Core principle: Least privilege
@ -164,7 +164,7 @@ when {
Before sending data to external LLMs, the system removes:
```text
```bash
Patterns Removed:
├─ Passwords: password="...", pwd=..., etc.
├─ API Keys: api_key=..., api-key=..., etc.
@ -178,7 +178,7 @@ Patterns Removed:
### Configuration
```text
```toml
[ai.security]
sanitize_pii = true
sanitize_secrets = true
@ -207,7 +207,7 @@ preserve_patterns = [
### Example Sanitization
**Before**:
```text
```bash
Error configuring database:
connection_string: postgresql://dbadmin:MySecurePassword123@prod-db.us-east-1.rds.amazonaws.com:5432/app
api_key: sk-ant-abc123def456
@ -215,7 +215,7 @@ vault_token: hvs.CAESIyg7...
```
**After Sanitization**:
```text
```bash
Error configuring database:
connection_string: postgresql://dbadmin:[REDACTED]@prod-db.us-east-1.rds.amazonaws.com:5432/app
api_key: [REDACTED]
@ -228,7 +228,7 @@ vault_token: [REDACTED]
AI cannot directly access secrets. Instead:
```text
```bash
User wants: "Configure PostgreSQL with encrypted backups"
AI generates: Configuration schema with placeholders
@ -255,7 +255,7 @@ Deployment: Uses secrets from secure store (Vault, AWS Secrets Manager)
For environments requiring zero external API calls:
```text
```bash
# Deploy local Ollama with provisioning support
docker run -d
--name provisioning-ai
@ -301,7 +301,7 @@ api_base = "[http://localhost:11434"](http://localhost:11434")
For highly sensitive environments:
```text
```toml
[ai.security.hsm]
enabled = true
provider = "aws-cloudhsm" # or "thales", "yubihsm"
@ -317,7 +317,7 @@ server_key = "/etc/provisioning/certs/server.key"
### Data at Rest
```text
```toml
[ai.security.encryption]
enabled = true
algorithm = "aes-256-gcm"
@ -335,7 +335,7 @@ log_encryption = true
### Data in Transit
```text
```bash
All external LLM API calls:
├─ TLS 1.3 (minimum)
├─ Certificate pinning (optional)
@ -347,7 +347,7 @@ All external LLM API calls:
### What Gets Logged
```text
```json
{
"timestamp": "2025-01-13T10:30:45Z",
"event_type": "ai_action",
@ -380,7 +380,7 @@ All external LLM API calls:
### Audit Trail Access
```text
```bash
# View recent AI actions
provisioning audit log ai --tail 100
@ -404,7 +404,7 @@ provisioning audit search ai "error in database configuration"
### Built-in Compliance Checks
```text
```toml
[ai.compliance]
frameworks = ["pci-dss", "hipaa", "sox", "gdpr"]
@ -423,7 +423,7 @@ enabled = true
### Compliance Reports
```text
```bash
# Generate compliance report
provisioning audit compliance-report
--framework pci-dss
@ -467,7 +467,7 @@ provisioning audit verify-compliance
### Compromised API Key
```text
```bash
# 1. Immediately revoke key
provisioning admin revoke-key ai-api-key-123
@ -486,7 +486,7 @@ provisioning audit log ai
### Unauthorized Access
```text
```bash
# Review Cedar policy logs
provisioning audit log ai
--decision deny

View File

@ -11,7 +11,7 @@ root causes, suggests fixes, and generates corrected configurations based on fai
Transform deployment failures into actionable insights:
```text
```bash
Deployment Fails with Error
AI analyzes logs:
@ -37,7 +37,7 @@ Developer reviews and accepts:
### Automatic Detection and Analysis
```text
```bash
┌──────────────────────────────────────────┐
│ Deployment Monitoring │
│ - Watches deployment for failures │
@ -91,14 +91,14 @@ Developer reviews and accepts:
### Example 1: Database Connection Timeout
**Failure**:
```text
```bash
Deployment: deploy-2025-01-13-001
Status: FAILED at phase database_migration
Error: connection timeout after 30s connecting to postgres://...
```
**Run Troubleshooting**:
```text
```bash
$ provisioning ai troubleshoot deploy-2025-01-13-001
Analyzing deployment failure...
@ -175,14 +175,14 @@ Ready to redeploy with corrected configuration? [yes/no]: yes
### Example 2: Kubernetes Deployment Error
**Failure**:
```text
```yaml
Deployment: deploy-2025-01-13-002
Status: FAILED at phase kubernetes_workload
Error: failed to create deployment app: Pod exceeded capacity
```
**Troubleshooting**:
```text
```bash
$ provisioning ai troubleshoot deploy-2025-01-13-002 --detailed
╔════════════════════════════════════════════════════════════════╗
@ -239,7 +239,7 @@ $ provisioning ai troubleshoot deploy-2025-01-13-002 --detailed
### Basic Troubleshooting
```text
```bash
# Troubleshoot recent deployment
provisioning ai troubleshoot deploy-2025-01-13-001
@ -255,7 +255,7 @@ provisioning ai troubleshoot deploy-2025-01-13-001 --alternatives
### Working with Logs
```text
```bash
# Troubleshoot from custom logs
provisioning ai troubleshoot
| --logs "$(journalctl -u provisioning --no-pager | tail -100)" |
@ -271,7 +271,7 @@ provisioning ai troubleshoot
### Generate Reports
```text
```bash
# Generate detailed troubleshooting report
provisioning ai troubleshoot deploy-123
--report
@ -294,7 +294,7 @@ provisioning ai troubleshoot deploy-123
### Shallow Analysis (Fast)
```text
```bash
provisioning ai troubleshoot deploy-123 --depth shallow
Analyzes:
@ -306,7 +306,7 @@ Analyzes:
### Deep Analysis (Thorough)
```text
```bash
provisioning ai troubleshoot deploy-123 --depth deep
Analyzes:
@ -322,7 +322,7 @@ Analyzes:
### Automatic Troubleshooting
```text
```bash
# Enable auto-troubleshoot on failures
provisioning config set ai.troubleshooting.auto_analyze true
@ -333,7 +333,7 @@ provisioning config set ai.troubleshooting.auto_analyze true
### WebUI Integration
```text
```bash
Deployment Dashboard
├─ deployment-123 [FAILED]
│ └─ AI Analysis
@ -349,7 +349,7 @@ Deployment Dashboard
The system learns common failure patterns:
```text
```bash
Collected Patterns:
├─ Database Timeouts (25% of failures)
│ └─ Usually: Security group, connection pool, slow startup
@ -363,7 +363,7 @@ Collected Patterns:
### Improvement Tracking
```text
```bash
# See patterns in your deployments
provisioning ai analytics failures --period month
@ -386,7 +386,7 @@ Month Summary:
### Troubleshooting Settings
```text
```toml
[ai.troubleshooting]
enabled = true
@ -416,7 +416,7 @@ estimate_alternative_costs = true
### Failure Detection
```text
```toml
[ai.troubleshooting.detection]
# Monitor logs for these patterns
watch_patterns = [

View File

@ -12,7 +12,7 @@ API reference for programmatic access to the Provisioning Platform.
## Quick Start
```text
```bash
# Check API health
curl http://localhost:9090/health

View File

@ -16,7 +16,7 @@ All extensions follow a standardized structure and API for seamless integration.
### Standard Directory Layout
```text
```bash
extension-name/
├── manifest.toml # Extension metadata
├── schemas/ # Nickel configuration files
@ -71,7 +71,7 @@ All providers must implement the following interface:
Create `schemas/settings.ncl`:
```text
```nickel
# Provider settings schema
{
ProviderSettings = {
@ -146,7 +146,7 @@ schema ServerConfig {
Create `nulib/mod.nu`:
```text
```nushell
use std log
# Provider name and version
@ -231,7 +231,7 @@ export def "test-connection" [config: record] -> record {
Create `nulib/create.nu`:
```text
```nushell
use std log
use utils.nu *
@ -368,7 +368,7 @@ def wait-for-server-ready [server_id: string] -> string {
Add provider metadata in `metadata.toml`:
```text
```toml
[extension]
name = "my-provider"
type = "provider"
@ -429,7 +429,7 @@ Task services must implement:
Create `schemas/version.ncl`:
```text
```nickel
# Task service version configuration
{
taskserv_version = {
@ -483,7 +483,7 @@ Create `schemas/version.ncl`:
Create `nulib/mod.nu`:
```text
```nushell
use std log
use ../../../lib_provisioning *
@ -697,7 +697,7 @@ Clusters orchestrate multiple components:
Create `schemas/cluster.ncl`:
```text
```nickel
# Cluster configuration schema
{
ClusterConfig = {
@ -812,7 +812,7 @@ Create `schemas/cluster.ncl`:
Create `nulib/mod.nu`:
```text
```nushell
use std log
use ../../../lib_provisioning *
@ -1065,7 +1065,7 @@ Extensions should include comprehensive tests:
Create `tests/unit_tests.nu`:
```text
```nushell
use std testing
export def test_provider_config_validation [] {
@ -1096,7 +1096,7 @@ export def test_server_creation_check_mode [] {
Create `tests/integration_tests.nu`:
```text
```nushell
use std testing
export def test_full_server_lifecycle [] {
@ -1127,7 +1127,7 @@ export def test_full_server_lifecycle [] {
### Running Tests
```text
```bash
# Run unit tests
nu tests/unit_tests.nu
@ -1151,7 +1151,7 @@ Each extension must include:
### API Documentation Template
```text
```bash
# Extension Name API
## Overview

View File

@ -18,7 +18,7 @@ Provisioning offers multiple integration points:
#### Full-Featured Python Client
```text
```bash
import asyncio
import json
import logging
@ -416,7 +416,7 @@ if __name__ == "__main__":
#### Complete JavaScript/TypeScript Client
```text
```bash
import axios, { AxiosInstance, AxiosResponse } from 'axios';
import WebSocket from 'ws';
import { EventEmitter } from 'events';
@ -925,7 +925,7 @@ export { ProvisioningClient, Task, BatchConfig };
### Comprehensive Error Handling
```text
```python
class ProvisioningErrorHandler:
"""Centralized error handling for provisioning operations"""
@ -1028,7 +1028,7 @@ async def robust_workflow_execution():
### Circuit Breaker Pattern
```text
```javascript
class CircuitBreaker {
private failures = 0;
private nextAttempt = Date.now();
@ -1104,7 +1104,7 @@ class ResilientProvisioningClient {
### Connection Pooling and Caching
```text
```bash
import asyncio
import aiohttp
from cachetools import TTLCache
@ -1222,7 +1222,7 @@ async def high_performance_workflow():
### WebSocket Connection Pooling
```text
```javascript
class WebSocketPool {
constructor(maxConnections = 5) {
this.maxConnections = maxConnections;
@ -1290,13 +1290,13 @@ The Python SDK provides a comprehensive interface for provisioning:
#### Installation
```text
```bash
pip install provisioning-client
```
#### Quick Start
```text
```bash
from provisioning_client import ProvisioningClient
# Initialize client
@ -1319,7 +1319,7 @@ print(f"Workflow completed: {task.status}")
#### Advanced Usage
```text
```bash
# Use with async context manager
async with ProvisioningClient() as client:
# Batch operations
@ -1340,13 +1340,13 @@ async with ProvisioningClient() as client:
#### Installation
```text
```bash
npm install @provisioning/client
```
#### Usage
```text
```bash
import { ProvisioningClient } from '@provisioning/client';
const client = new ProvisioningClient({
@ -1373,7 +1373,7 @@ await client.connectWebSocket();
### Workflow Orchestration Pipeline
```text
```python
class WorkflowPipeline:
"""Orchestrate complex multi-step workflows"""
@ -1462,7 +1462,7 @@ async def complex_deployment():
### Event-Driven Architecture
```text
```javascript
class EventDrivenWorkflowManager {
constructor(client) {
this.client = client;

View File

@ -69,7 +69,7 @@ The provisioning platform provides a comprehensive Nushell library with reusable
## Usage Example
```text
```nushell
# Load provisioning library
use provisioning/core/nulib/lib_provisioning *

View File

@ -17,7 +17,7 @@ The path resolution system provides a hierarchical and configurable mechanism fo
The system follows a specific hierarchy for loading configuration files:
```text
```toml
1. System defaults (config.defaults.toml)
2. User configuration (config.user.toml)
3. Project configuration (config.project.toml)
@ -30,7 +30,7 @@ The system follows a specific hierarchy for loading configuration files:
The system searches for configuration files in these locations:
```text
```toml
# Default search paths (in order)
/usr/local/provisioning/config.defaults.toml
$HOME/.config/provisioning/config.user.toml
@ -59,7 +59,7 @@ Resolves configuration file paths using the search hierarchy.
**Example:**
```text
```bash
use path-resolution.nu *
let config_path = (resolve-config-path "config.user.toml" [])
# Returns: "/home/user/.config/provisioning/config.user.toml"
@ -76,7 +76,7 @@ Discovers extension paths (providers, taskservs, clusters).
**Returns:**
```text
```json
{
base_path: "/usr/local/provisioning/providers/upcloud",
schemas_path: "/usr/local/provisioning/providers/upcloud/schemas",
@ -92,7 +92,7 @@ Gets current workspace path configuration.
**Returns:**
```text
```json
{
base: "/usr/local/provisioning",
current_infra: "/workspace/infra/production",
@ -130,7 +130,7 @@ Interpolates variables in path templates.
**Example:**
```text
```javascript
let template = "{{paths.base}}/infra/{{env.USER}}/{{git.branch}}"
let result = (interpolate-path $template {
paths: { base: "/usr/local/provisioning" },
@ -150,7 +150,7 @@ Discovers all available providers.
**Returns:**
```text
```bash
[
{
name: "upcloud",
@ -185,7 +185,7 @@ Gets provider-specific configuration and paths.
**Returns:**
```text
```json
{
name: "upcloud",
base_path: "/usr/local/provisioning/providers/upcloud",
@ -214,7 +214,7 @@ Discovers all available task services.
**Returns:**
```text
```bash
[
{
name: "kubernetes",
@ -245,7 +245,7 @@ Gets task service configuration and version information.
**Returns:**
```text
```json
{
name: "kubernetes",
path: "/usr/local/provisioning/taskservs/kubernetes",
@ -272,7 +272,7 @@ Discovers all available cluster configurations.
**Returns:**
```text
```bash
[
{
name: "buildkit",
@ -312,7 +312,7 @@ Gets environment-specific configuration.
**Returns:**
```text
```json
{
name: "production",
paths: {
@ -359,7 +359,7 @@ Discovers available workspaces and infrastructure directories.
**Returns:**
```text
```bash
[
{
name: "production",
@ -405,7 +405,7 @@ Analyzes project structure and identifies components.
**Returns:**
```text
```json
{
root: "/workspace/project",
type: "provisioning_workspace",
@ -458,7 +458,7 @@ Gets path resolution cache statistics.
**Returns:**
```text
```json
{
enabled: true,
size: 150,
@ -485,7 +485,7 @@ Normalizes paths for cross-platform compatibility.
**Example:**
```text
```bash
# On Windows
normalize-path "path/to/file" # Returns: "path\to\file"
@ -519,7 +519,7 @@ Validates all paths in configuration.
**Returns:**
```text
```json
{
valid: true,
errors: [],
@ -541,7 +541,7 @@ Validates extension directory structure.
**Returns:**
```text
```json
{
valid: true,
required_files: [
@ -561,7 +561,7 @@ Validates extension directory structure.
The path resolution API is exposed via Nushell commands:
```text
```nushell
# Show current path configuration
provisioning show paths
@ -584,7 +584,7 @@ provisioning workspace set /path/to/infra
### Python Integration
```text
```bash
import subprocess
import json
@ -612,7 +612,7 @@ providers = resolver.discover_providers()
### JavaScript/Node.js Integration
```text
```javascript
const { exec } = require('child_process');
const util = require('util');
const execAsync = util.promisify(exec);
@ -697,7 +697,7 @@ The system provides graceful fallbacks:
Monitor path resolution performance:
```text
```bash
# Get resolution statistics
provisioning debug path-stats

View File

@ -18,7 +18,7 @@ All providers must implement the following interface:
### Required Functions
```text
```bash
# Provider initialization
export def init [] -> record { ... }
@ -37,7 +37,7 @@ export def get-pricing [plan: string] -> record { ... }
Each provider requires configuration in Nickel format:
```text
```nickel
# Example: UpCloud provider configuration
{
provider = {
@ -57,7 +57,7 @@ Each provider requires configuration in Nickel format:
### 1. Directory Structure
```text
```bash
provisioning/extensions/providers/my-provider/
├── nulib/
│ └── my_provider.nu # Provider implementation
@ -69,7 +69,7 @@ provisioning/extensions/providers/my-provider/
### 2. Implementation Template
```text
```bash
# my_provider.nu
export def init [] {
{
@ -94,7 +94,7 @@ export def list-servers [] {
### 3. Nickel Schema
```text
```nickel
# main.ncl
{
MyProvider = {
@ -118,7 +118,7 @@ Providers are automatically discovered from:
- `provisioning/extensions/providers/*/nu/*.nu`
- User workspace: `workspace/extensions/providers/*/nu/*.nu`
```text
```nushell
# Discover available providers
provisioning module discover providers
@ -130,7 +130,7 @@ provisioning module load providers workspace my-provider
### Create Servers
```text
```bash
use my_provider.nu *
let plan = {
@ -144,13 +144,13 @@ create-servers $plan
### List Servers
```text
```bash
list-servers | where status == "running" | select hostname ip_address
```
### Get Pricing
```text
```bash
get-pricing "small" | to yaml
```
@ -158,7 +158,7 @@ get-pricing "small" | to yaml
Use the test environment system to test providers:
```text
```bash
# Test provider without real resources
provisioning test env single my-provider --check
```

View File

@ -20,13 +20,13 @@ Provisioning exposes two main REST APIs:
All API endpoints (except health checks) require JWT authentication via the Authorization header:
```text
```bash
Authorization: Bearer <jwt_token>
```
### Getting Access Token
```text
```bash
POST /auth/login
Content-Type: application/json
@ -47,7 +47,7 @@ Check orchestrator health status.
**Response:**
```text
```json
{
"success": true,
"data": "Orchestrator is healthy"
@ -68,7 +68,7 @@ List all workflow tasks.
**Response:**
```text
```json
{
"success": true,
"data": [
@ -99,7 +99,7 @@ Get specific task status and details.
**Response:**
```text
```json
{
"success": true,
"data": {
@ -126,7 +126,7 @@ Submit server creation workflow.
**Request Body:**
```text
```json
{
"infra": "production",
"settings": "config.ncl",
@ -137,7 +137,7 @@ Submit server creation workflow.
**Response:**
```text
```json
{
"success": true,
"data": "uuid-task-id"
@ -150,7 +150,7 @@ Submit task service workflow.
**Request Body:**
```text
```json
{
"operation": "create",
"taskserv": "kubernetes",
@ -163,7 +163,7 @@ Submit task service workflow.
**Response:**
```text
```json
{
"success": true,
"data": "uuid-task-id"
@ -176,7 +176,7 @@ Submit cluster workflow.
**Request Body:**
```text
```json
{
"operation": "create",
"cluster_type": "buildkit",
@ -189,7 +189,7 @@ Submit cluster workflow.
**Response:**
```text
```json
{
"success": true,
"data": "uuid-task-id"
@ -204,7 +204,7 @@ Execute batch workflow operation.
**Request Body:**
```text
```json
{
"name": "multi_cloud_deployment",
"version": "1.0.0",
@ -235,7 +235,7 @@ Execute batch workflow operation.
**Response:**
```text
```json
{
"success": true,
"data": {
@ -263,7 +263,7 @@ List all batch operations.
**Response:**
```text
```json
{
"success": true,
"data": [
@ -288,7 +288,7 @@ Get batch operation status.
**Response:**
```text
```json
{
"success": true,
"data": {
@ -317,7 +317,7 @@ Cancel running batch operation.
**Response:**
```text
```json
{
"success": true,
"data": "Operation cancelled"
@ -336,7 +336,7 @@ Get real-time workflow progress.
**Response:**
```text
```json
{
"success": true,
"data": {
@ -360,7 +360,7 @@ Get workflow state snapshots.
**Response:**
```text
```json
{
"success": true,
"data": [
@ -380,7 +380,7 @@ Get system-wide metrics.
**Response:**
```text
```json
{
"success": true,
"data": {
@ -403,7 +403,7 @@ Get system health status.
**Response:**
```text
```json
{
"success": true,
"data": {
@ -424,7 +424,7 @@ Get state manager statistics.
**Response:**
```text
```json
{
"success": true,
"data": {
@ -444,7 +444,7 @@ Create new checkpoint.
**Request Body:**
```text
```json
{
"name": "before_major_update",
"description": "Checkpoint before deploying v2.0.0"
@ -453,7 +453,7 @@ Create new checkpoint.
**Response:**
```text
```json
{
"success": true,
"data": "checkpoint-uuid"
@ -466,7 +466,7 @@ List all checkpoints.
**Response:**
```text
```json
{
"success": true,
"data": [
@ -491,7 +491,7 @@ Get specific checkpoint details.
**Response:**
```text
```json
{
"success": true,
"data": {
@ -511,7 +511,7 @@ Execute rollback operation.
**Request Body:**
```text
```json
{
"checkpoint_id": "checkpoint-uuid"
}
@ -519,7 +519,7 @@ Execute rollback operation.
Or for partial rollback:
```text
```json
{
"operation_ids": ["op-1", "op-2", "op-3"]
}
@ -527,7 +527,7 @@ Or for partial rollback:
**Response:**
```text
```json
{
"success": true,
"data": {
@ -550,7 +550,7 @@ Restore system state from checkpoint.
**Response:**
```text
```json
{
"success": true,
"data": "State restored from checkpoint checkpoint-uuid"
@ -563,7 +563,7 @@ Get rollback system statistics.
**Response:**
```text
```json
{
"success": true,
"data": {
@ -585,7 +585,7 @@ Authenticate user and get JWT token.
**Request Body:**
```text
```json
{
"username": "admin",
"password": "secure_password",
@ -595,7 +595,7 @@ Authenticate user and get JWT token.
**Response:**
```text
```json
{
"success": true,
"data": {
@ -617,7 +617,7 @@ Refresh JWT token.
**Request Body:**
```text
```json
{
"token": "current-jwt-token"
}
@ -625,7 +625,7 @@ Refresh JWT token.
**Response:**
```text
```json
{
"success": true,
"data": {
@ -641,7 +641,7 @@ Logout and invalidate token.
**Response:**
```text
```json
{
"success": true,
"data": "Successfully logged out"
@ -661,7 +661,7 @@ List all users.
**Response:**
```text
```json
{
"success": true,
"data": [
@ -684,7 +684,7 @@ Create new user.
**Request Body:**
```text
```json
{
"username": "newuser",
"email": "newuser@example.com",
@ -696,7 +696,7 @@ Create new user.
**Response:**
```text
```json
{
"success": true,
"data": {
@ -719,7 +719,7 @@ Update existing user.
**Request Body:**
```text
```json
{
"email": "updated@example.com",
"roles": ["admin", "operator"],
@ -729,7 +729,7 @@ Update existing user.
**Response:**
```text
```json
{
"success": true,
"data": "User updated successfully"
@ -746,7 +746,7 @@ Delete user.
**Response:**
```text
```json
{
"success": true,
"data": "User deleted successfully"
@ -761,7 +761,7 @@ List all policies.
**Response:**
```text
```json
{
"success": true,
"data": [
@ -783,7 +783,7 @@ Create new policy.
**Request Body:**
```text
```json
{
"name": "new_policy",
"version": "1.0.0",
@ -800,7 +800,7 @@ Create new policy.
**Response:**
```text
```json
{
"success": true,
"data": {
@ -821,7 +821,7 @@ Update policy.
**Request Body:**
```text
```json
{
"name": "updated_policy",
"rules": [...]
@ -830,7 +830,7 @@ Update policy.
**Response:**
```text
```json
{
"success": true,
"data": "Policy updated successfully"
@ -855,7 +855,7 @@ Get audit logs.
**Response:**
```text
```json
{
"success": true,
"data": [
@ -876,7 +876,7 @@ Get audit logs.
All endpoints may return error responses in this format:
```text
```json
{
"success": false,
"error": "Detailed error message"
@ -904,7 +904,7 @@ API endpoints are rate-limited:
Rate limit headers are included in responses:
```text
```bash
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 95
X-RateLimit-Reset: 1632150000
@ -918,7 +918,7 @@ Prometheus-compatible metrics endpoint.
**Response:**
```text
```bash
# HELP orchestrator_tasks_total Total number of tasks
# TYPE orchestrator_tasks_total counter
orchestrator_tasks_total{status="completed"} 150
@ -937,7 +937,7 @@ Real-time event streaming via WebSocket connection.
**Connection:**
```text
```javascript
const ws = new WebSocket('ws://localhost:9090/ws?token=jwt-token');
ws.onmessage = function(event) {
@ -948,7 +948,7 @@ ws.onmessage = function(event) {
**Event Format:**
```text
```json
{
"event_type": "TaskStatusChanged",
"timestamp": "2025-09-26T10:00:00Z",
@ -967,7 +967,7 @@ ws.onmessage = function(event) {
### Python SDK Example
```text
```python
import requests
class ProvisioningClient:
@ -1007,7 +1007,7 @@ print(f"Task ID: {result['data']}")
### JavaScript/Node.js SDK Example
```text
```javascript
const axios = require('axios');
class ProvisioningClient {
@ -1051,7 +1051,7 @@ The system supports webhooks for external integrations:
Configure webhooks in the system configuration:
```text
```toml
[webhooks]
enabled = true
endpoints = [
@ -1065,7 +1065,7 @@ endpoints = [
### Webhook Payload
```text
```json
{
"event": "task.completed",
"timestamp": "2025-09-26T10:00:00Z",
@ -1087,7 +1087,7 @@ For endpoints that return lists, use pagination parameters:
Pagination metadata is included in response headers:
```text
```bash
X-Total-Count: 1500
X-Limit: 50
X-Offset: 100
@ -1098,7 +1098,7 @@ Link: </api/endpoint?offset=150&limit=50>; rel="next"
The API uses header-based versioning:
```text
```bash
Accept: application/vnd.provisioning.v1+json
```
@ -1108,7 +1108,7 @@ Current version: v1
Use the included test suite to validate API functionality:
```text
```bash
# Run API integration tests
cd src/orchestrator
cargo test --test api_tests

View File

@ -23,7 +23,7 @@ Provisioning provides SDKs in multiple languages to facilitate integration:
### Installation
```text
```bash
# Install from PyPI
pip install provisioning-client
@ -33,7 +33,7 @@ pip install git+https://github.com/provisioning-systems/python-client.git
### Quick Start
```text
```python
from provisioning_client import ProvisioningClient
import asyncio
@ -79,7 +79,7 @@ if __name__ == "__main__":
#### WebSocket Integration
```text
```python
async def monitor_workflows():
client = ProvisioningClient()
await client.authenticate()
@ -103,7 +103,7 @@ async def monitor_workflows():
#### Batch Operations
```text
```python
async def execute_batch_deployment():
client = ProvisioningClient()
await client.authenticate()
@ -158,7 +158,7 @@ async def execute_batch_deployment():
#### Error Handling with Retries
```text
```python
from provisioning_client.exceptions import (
ProvisioningAPIError,
AuthenticationError,
@ -209,7 +209,7 @@ async def robust_workflow():
#### ProvisioningClient Class
```text
```python
class ProvisioningClient:
def __init__(self,
base_url: str = "http://localhost:9090",
@ -258,7 +258,7 @@ class ProvisioningClient:
### Installation
```text
```bash
# npm
npm install @provisioning/client
@ -271,7 +271,7 @@ pnpm add @provisioning/client
### Quick Start
```text
```javascript
import { ProvisioningClient } from '@provisioning/client';
async function main() {
@ -308,7 +308,7 @@ main();
### React Integration
```text
```javascript
import React, { useState, useEffect } from 'react';
import { ProvisioningClient } from '@provisioning/client';
@ -434,7 +434,7 @@ export default WorkflowDashboard;
### Node.js CLI Tool
```text
```javascript
#!/usr/bin/env node
import { Command } from 'commander';
@ -591,7 +591,7 @@ program.parse();
### API Reference
```text
```typescript
interface ProvisioningClientOptions {
baseUrl?: string;
authUrl?: string;
@ -645,13 +645,13 @@ class ProvisioningClient extends EventEmitter {
### Installation
```text
```bash
go get github.com/provisioning-systems/go-client
```
### Quick Start
```text
```go
package main
import (
@ -717,7 +717,7 @@ func main() {
### WebSocket Integration
```text
```go
package main
import (
@ -785,7 +785,7 @@ func main() {
### HTTP Client with Retry Logic
```text
```go
package main
import (
@ -877,7 +877,7 @@ func main() {
Add to your `Cargo.toml`:
```text
```toml
[dependencies]
provisioning-rs = "2.0.0"
tokio = { version = "1.0", features = ["full"] }
@ -885,7 +885,7 @@ tokio = { version = "1.0", features = ["full"] }
### Quick Start
```text
```rust
use provisioning_rs::{ProvisioningClient, Config, CreateServerRequest};
use tokio;
@ -941,7 +941,7 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
### WebSocket Integration
```text
```rust
use provisioning_rs::{ProvisioningClient, Config, WebSocketEvent};
use futures_util::StreamExt;
use tokio;
@ -997,7 +997,7 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
### Batch Operations
```text
```rust
use provisioning_rs::{BatchOperationRequest, BatchOperation};
#[tokio::main]

View File

@ -30,7 +30,7 @@ The main WebSocket endpoint for real-time events and monitoring.
**Example Connection:**
```text
```javascript
const ws = new WebSocket('ws://localhost:9090/ws?token=jwt-token&events=task,batch,system');
```
@ -64,7 +64,7 @@ Live log streaming endpoint.
All WebSocket connections require authentication via JWT token:
```text
```javascript
// Include token in connection URL
const ws = new WebSocket('ws://localhost:9090/ws?token=' + jwtToken);
@ -93,7 +93,7 @@ ws.onopen = function() {
Fired when a workflow task status changes.
```text
```json
{
"event_type": "TaskStatusChanged",
"timestamp": "2025-09-26T10:00:00Z",
@ -116,7 +116,7 @@ Fired when a workflow task status changes.
Fired when batch operation status changes.
```text
```json
{
"event_type": "BatchOperationUpdate",
"timestamp": "2025-09-26T10:00:00Z",
@ -150,7 +150,7 @@ Fired when batch operation status changes.
Fired when system health status changes.
```text
```json
{
"event_type": "SystemHealthUpdate",
"timestamp": "2025-09-26T10:00:00Z",
@ -185,7 +185,7 @@ Fired when system health status changes.
Fired when workflow progress changes.
```text
```json
{
"event_type": "WorkflowProgressUpdate",
"timestamp": "2025-09-26T10:00:00Z",
@ -215,7 +215,7 @@ Fired when workflow progress changes.
Real-time log streaming.
```text
```json
{
"event_type": "LogEntry",
"timestamp": "2025-09-26T10:00:00Z",
@ -241,7 +241,7 @@ Real-time log streaming.
Real-time metrics streaming.
```text
```json
{
"event_type": "MetricUpdate",
"timestamp": "2025-09-26T10:00:00Z",
@ -266,7 +266,7 @@ Real-time metrics streaming.
Applications can define custom event types:
```text
```json
{
"event_type": "CustomApplicationEvent",
"timestamp": "2025-09-26T10:00:00Z",
@ -283,7 +283,7 @@ Applications can define custom event types:
### Connection Management
```text
```javascript
class ProvisioningWebSocket {
constructor(baseUrl, token, options = {}) {
this.baseUrl = baseUrl;
@ -430,7 +430,7 @@ ws.subscribe(['TaskStatusChanged', 'WorkflowProgressUpdate']);
### Real-Time Dashboard Example
```text
```javascript
class ProvisioningDashboard {
constructor(wsUrl, token) {
this.ws = new ProvisioningWebSocket(wsUrl, token);
@ -542,7 +542,7 @@ const dashboard = new ProvisioningDashboard('ws://localhost:9090', jwtToken);
The orchestrator implements WebSocket support using Axum and Tokio:
```text
```rust
use axum::{
extract::{ws::WebSocket, ws::WebSocketUpgrade, Query, State},
response::Response,
@ -702,7 +702,7 @@ fn has_event_permission(claims: &Claims, event_type: &str) -> bool {
### Client-Side Filtering
```text
```javascript
// Subscribe to specific event types
ws.subscribe(['TaskStatusChanged', 'WorkflowProgressUpdate']);
@ -741,7 +741,7 @@ Events can be filtered on the server side based on:
### Connection Errors
```text
```javascript
ws.on('error', (error) => {
console.error('WebSocket error:', error);
@ -780,7 +780,7 @@ ws.on('disconnected', (event) => {
### Heartbeat and Keep-Alive
```text
```javascript
class ProvisioningWebSocket {
constructor(baseUrl, token, options = {}) {
// ... existing code ...
@ -835,7 +835,7 @@ class ProvisioningWebSocket {
To improve performance, the server can batch multiple events into single WebSocket messages:
```text
```json
{
"type": "batch",
"timestamp": "2025-09-26T10:00:00Z",
@ -856,7 +856,7 @@ To improve performance, the server can batch multiple events into single WebSock
Enable message compression for large events:
```text
```javascript
const ws = new WebSocket('ws://localhost:9090/ws?token=jwt&compression=true');
```

View File

@ -28,7 +28,7 @@ The system needed a clear, maintainable structure that supports:
Adopt a **domain-driven hybrid structure** organized around functional boundaries:
```text
```bash
src/
├── core/ # Core system and CLI entry point
├── platform/ # High-performance coordination layer (Rust orchestrator)

View File

@ -49,7 +49,7 @@ Implement a **layered distribution strategy** with clear separation between deve
### Distribution Structure
```text
```bash
# User Distribution
/usr/local/bin/
├── provisioning # Main CLI entry point
@ -153,7 +153,7 @@ Use environment variables to control what gets installed.
### Configuration Hierarchy
```text
```toml
System Defaults (lowest precedence)
└── User Configuration
└── Project Configuration

View File

@ -33,7 +33,7 @@ Implement **isolated user workspaces** with clear boundaries and hierarchical co
### Workspace Structure
```text
```bash
~/workspace/provisioning/ # User workspace root
├── config/
│ ├── user.toml # User preferences and overrides
@ -141,7 +141,7 @@ Store all user configuration in database.
### Workspace Initialization
```text
```bash
# Automatic workspace creation on first run
provisioning workspace init
@ -163,7 +163,7 @@ provisioning workspace validate
### Backup and Migration
```text
```bash
# Backup entire workspace
provisioning workspace backup --output ~/backup/provisioning-workspace.tar.gz

View File

@ -54,7 +54,7 @@ Implement a **Hybrid Rust/Nushell Architecture** with clear separation of concer
#### Rust → Nushell Communication
```text
```rust
// Rust orchestrator invokes Nushell scripts via process execution
let result = Command::new("nu")
.arg("-c")
@ -64,7 +64,7 @@ let result = Command::new("nu")
#### Nushell → Rust Communication
```text
```nushell
# Nushell submits workflows to Rust orchestrator via HTTP API
http post "http://localhost:9090/workflows/servers/create" {
name: "server-name",

View File

@ -45,7 +45,7 @@ Implement a **registry-based extension framework** with structured discovery and
### Extension Structure
```text
```bash
extensions/
├── providers/ # Provider extensions
│ └── custom-cloud/
@ -75,7 +75,7 @@ extensions/
### Extension Manifest (extension.toml)
```text
```toml
[extension]
name = "custom-provider"
version = "1.0.0"
@ -186,7 +186,7 @@ Traditional plugin architecture with dynamic loading.
### Extension Loading Lifecycle
```text
```bash
# Extension discovery and validation
provisioning extension discover
provisioning extension validate --extension custom-provider
@ -208,7 +208,7 @@ provisioning extension update custom-provider
Extensions integrate with hierarchical configuration system:
```text
```toml
# System configuration includes extension settings
[custom_provider]
api_endpoint = "https://api.custom-cloud.com"
@ -238,7 +238,7 @@ timeout = 30
### Provider Extension Pattern
```text
```nushell
# extensions/providers/custom-cloud/nulib/provider.nu
export def list-servers [] -> table {
http get $"($config.custom_provider.api_endpoint)/servers"
@ -260,7 +260,7 @@ export def create-server [name: string, config: record] -> record {
### Task Service Extension Pattern
```text
```nushell
# extensions/taskservs/custom-service/nulib/service.nu
export def install [server: string] -> nothing {
let manifest_data = open ./manifests/deployment.yaml

View File

@ -40,7 +40,7 @@ monolithic structure created multiple critical problems:
We refactored the monolithic CLI into a **modular, domain-driven architecture** with the following structure:
```text
```bash
provisioning/core/nulib/
├── provisioning (211 lines) ⬅️ 84% reduction
├── main_provisioning/
@ -63,7 +63,7 @@ provisioning/core/nulib/
Single source of truth for all flag parsing and argument building:
```text
```nushell
export def parse_common_flags [flags: record]: nothing -> record
export def build_module_args [flags: record, extra: string = ""]: nothing -> string
export def set_debug_env [flags: record]
@ -81,7 +81,7 @@ export def get_debug_flag [flags: record]: nothing -> string
Central routing with 80+ command mappings:
```text
```nushell
export def get_command_registry []: nothing -> record # 80+ shortcuts
export def dispatch_command [args: list, flags: record] # Main router
```
@ -148,7 +148,7 @@ Eliminated repetition:
All handlers depend on abstractions (flag records, not concrete flags):
```text
```nushell
# Handler signature
export def handle_infrastructure_command [
command: string
@ -182,7 +182,7 @@ export def handle_infrastructure_command [
Users can now access help in multiple ways:
```text
```bash
# All these work equivalently:
provisioning help workspace
provisioning workspace help # ⬅️ NEW: Bi-directional
@ -192,7 +192,7 @@ provisioning help ws # ⬅️ NEW: Shortcut in help
**Implementation:**
```text
```nushell
# Intercept "command help" → "help command"
let first_op = if ($ops_list | length) > 0 { ($ops_list | get 0) } else { "" }
if $first_op in ["help" "h"] {
@ -242,7 +242,7 @@ Comprehensive test suite created (`tests/test_provisioning_refactor.nu`):
### Test Results
```text
```bash
📋 Testing main help... ✅
📋 Testing category help... ✅
🔄 Testing bi-directional help... ✅
@ -319,7 +319,7 @@ Comprehensive test suite created (`tests/test_provisioning_refactor.nu`):
### Before: Repetitive Flag Handling
```text
```nushell
"server" => {
let use_check = if $check { "--check "} else { "" }
let use_yes = if $yes { "--yes" } else { "" }
@ -335,7 +335,7 @@ Comprehensive test suite created (`tests/test_provisioning_refactor.nu`):
### After: Clean, Reusable
```text
```nushell
def handle_server [ops: string, flags: record] {
let args = build_module_args $flags $ops
run_module $args "server" --exec

View File

@ -128,7 +128,7 @@ Remove support for:
### For Development
```text
```bash
# 1. Install Age
brew install age # or apt install age
@ -142,7 +142,7 @@ age-keygen -y ~/.config/provisioning/age/private_key.txt > ~/.config/provisionin
### For Production
```text
```bash
# 1. Set up Cosmian KMS (cloud or self-hosted)
# 2. Create master key in Cosmian
# 3. Migrate secrets from Vault/AWS to Cosmian

View File

@ -117,7 +117,7 @@ Use Casbin authorization library.
#### Architecture
```text
```bash
┌─────────────────────────────────────────────────────────┐
│ Orchestrator │
├─────────────────────────────────────────────────────────┤
@ -143,7 +143,7 @@ Use Casbin authorization library.
#### Policy Organization
```text
```bash
provisioning/config/cedar-policies/
├── schema.cedar # Entity and action definitions
├── production.cedar # Production environment policies
@ -154,7 +154,7 @@ provisioning/config/cedar-policies/
#### Rust Implementation
```text
```bash
provisioning/platform/orchestrator/src/security/
├── cedar.rs # Cedar engine integration (450 lines)
├── policy_loader.rs # Policy loading with hot reload (320 lines)
@ -190,7 +190,7 @@ provisioning/platform/orchestrator/src/security/
#### Context Variables
```text
```rust
AuthorizationContext {
mfa_verified: bool, // MFA verification status
ip_address: String, // Client IP address
@ -204,7 +204,7 @@ AuthorizationContext {
#### Example Policy
```text
```bash
// Production deployments require MFA verification
@id("prod-deploy-mfa")
@description("All production deployments must have MFA verification")

View File

@ -249,7 +249,7 @@ Implement a complete security architecture using 12 specialized components organ
### End-to-End Request Flow
```text
```bash
1. User Request
2. Rate Limiting (100 req/min per IP)
@ -271,7 +271,7 @@ Implement a complete security architecture using 12 specialized components organ
### Emergency Access Flow
```text
```bash
1. Emergency Request (reason + justification)
2. Multi-Party Approval (2+ approvers, different teams)
@ -382,7 +382,7 @@ Implement a complete security architecture using 12 specialized components organ
### Development
```text
```bash
# Start all services
cd provisioning/platform/kms-service && cargo run &
cd provisioning/platform/orchestrator && cargo run &
@ -391,7 +391,7 @@ cd provisioning/platform/control-center && cargo run &
### Production
```text
```bash
# Kubernetes deployment
kubectl apply -f k8s/security-stack.yaml
@ -410,7 +410,7 @@ systemctl start provisioning-control-center
### Environment Variables
```text
```bash
# JWT
export JWT_ISSUER="control-center"
export JWT_AUDIENCE="orchestrator,cli"
@ -433,7 +433,7 @@ export MFA_WEBAUTHN_RP_ID="provisioning.example.com"
### Config Files
```text
```toml
# provisioning/config/security.toml
[jwt]
issuer = "control-center"
@ -470,7 +470,7 @@ pii_anonymization = true
### Run All Tests
```text
```bash
# Control Center (JWT, MFA)
cd provisioning/platform/control-center
cargo test
@ -489,7 +489,7 @@ nu provisioning/core/nulib/lib_provisioning/config/encryption_tests.nu
### Integration Tests
```text
```bash
# Full security flow
cd provisioning/platform/orchestrator
cargo test --test security_integration_tests

View File

@ -65,7 +65,7 @@ Define and document the three-format approach through:
**Move template files to proper directory structure and correct extensions**:
```text
```bash
Previous (KCL):
provisioning/kcl/templates/*.k (had Nushell/Jinja2 code, not KCL)
@ -326,7 +326,7 @@ Current (Nickel):
Currently, 15/16 files in `provisioning/kcl/templates/` have `.k` extension but contain Nushell/Jinja2 code, not KCL:
```text
```bash
provisioning/kcl/templates/
├── server.ncl # Actually Nushell/Jinja2 template
├── taskserv.ncl # Actually Nushell/Jinja2 template
@ -343,7 +343,7 @@ This causes:
Reorganize into type-specific directories:
```text
```bash
provisioning/templates/
├── nushell/ # Nushell code generation (*.nu.j2)
│ ├── server.nu.j2

View File

@ -112,7 +112,7 @@ The provisioning system required:
**Example - UpCloud Provider**:
```text
```nickel
# upcloud/nickel/main.ncl (migrated from upcloud/kcl/)
let contracts = import "./contracts.ncl" in
let defaults = import "./defaults.ncl" in
@ -171,7 +171,7 @@ let defaults = import "./defaults.ncl" in
**File 1: Contracts** (`batch_contracts.ncl`):
```text
```nickel
{
BatchScheduler = {
strategy | String,
@ -184,7 +184,7 @@ let defaults = import "./defaults.ncl" in
**File 2: Defaults** (`batch_defaults.ncl`):
```text
```nickel
{
scheduler = {
strategy = "dependency_first",
@ -197,7 +197,7 @@ let defaults = import "./defaults.ncl" in
**File 3: Main** (`batch.ncl`):
```text
```nickel
let contracts = import "./batch_contracts.ncl" in
let defaults = import "./batch_defaults.ncl" in
@ -218,7 +218,7 @@ let defaults = import "./batch_defaults.ncl" in
### Domain-Organized Architecture
```text
```bash
provisioning/schemas/
├── lib/ # Storage, TaskServDef, ClusterDef
├── config/ # Settings, defaults, workspace_config
@ -233,7 +233,7 @@ provisioning/schemas/
**Import pattern**:
```text
```nickel
let provisioning = import "./main.ncl" in
provisioning.lib # For Storage, TaskServDef
provisioning.config.settings # For Settings, Defaults
@ -254,7 +254,7 @@ provisioning.operations.workflows
- No snapshot overhead
- Usage: Local development, testing, experimentation
```text
```nickel
# workspace_librecloud/nickel/main.ncl
import "../../provisioning/schemas/main.ncl"
import "../../provisioning/extensions/taskservs/kubernetes/nickel/main.ncl"
@ -264,13 +264,13 @@ import "../../provisioning/extensions/taskservs/kubernetes/nickel/main.ncl"
Create immutable snapshots for reproducible deployments:
```text
```bash
provisioning workspace freeze --version "2025-12-15-prod-v1" --env production
```
**Frozen structure** (`.frozen/{version}/`):
```text
```bash
├── provisioning/schemas/ # Snapshot of central schemas
├── extensions/ # Snapshot of all extensions
└── workspace/ # Snapshot of workspace configs
@ -285,7 +285,7 @@ provisioning workspace freeze --version "2025-12-15-prod-v1" --env production
**Deploy from frozen snapshot**:
```text
```bash
provisioning deploy --frozen "2025-12-15-prod-v1" --infra wuji
```
@ -308,7 +308,7 @@ provisioning deploy --frozen "2025-12-15-prod-v1" --infra wuji
**Key Feature**: Nickel schemas → Type-safe UIs → Nickel output
```text
```bash
# Nickel schema → Interactive form
typedialog form --schema server.ncl --output json

View File

@ -19,7 +19,7 @@ The architectural decision was whether the plugin should:
Nickel configurations in provisioning use the **module system**:
```text
```nickel
# config/database.ncl
import "lib/defaults" as defaults
import "lib/validation" as valid
@ -47,7 +47,7 @@ Implement the `nu_plugin_nickel` plugin as a **CLI wrapper** that invokes the ex
### Architecture Diagram
```text
```bash
┌─────────────────────────────┐
│ Nushell Script │
│ │
@ -288,7 +288,7 @@ This makes direct usage risky. The CLI is the documented, proven interface.
The plugin uses the **correct Nickel command syntax**:
```text
```rust
// Correct:
cmd.arg("export").arg(file).arg("--format").arg(format);
// Results in: "nickel export /file --format json"
@ -323,7 +323,7 @@ Plugin correctly processes JSON output:
This enables Nushell cell path access:
```text
```nushell
nickel-export json /config.ncl | .database.host # ✅ Works
```
@ -343,7 +343,7 @@ nickel-export json /config.ncl | .database.host # ✅ Works
**Manual Verification**:
```text
```nushell
# Test module imports
nickel-export json /workspace/config.ncl

View File

@ -78,7 +78,7 @@ integration with the provisioning orchestrator.
### Architecture Diagram
```text
```bash
┌─────────────────────────────────────────┐
│ Nushell Script │
│ │
@ -167,7 +167,7 @@ integration with the provisioning orchestrator.
Nushell's `input` command is limited:
```text
```nushell
# Current: No validation, no security
let password = input "Password: " # ❌ Shows in terminal
let region = input "AWS Region: " # ❌ No autocomplete/validation
@ -184,7 +184,7 @@ let region = input "AWS Region: " # ❌ No autocomplete/validation
Nickel is declarative and cannot prompt users:
```text
```nickel
# Nickel defines what the config looks like, NOT how to get it
{
database = {
@ -243,7 +243,7 @@ Nickel is declarative and cannot prompt users:
### Mitigation Strategies
**Non-Interactive Mode**:
```text
```rust
// Support both interactive and non-interactive
if terminal::is_interactive() {
// Show TUI dialog
@ -255,7 +255,7 @@ if terminal::is_interactive() {
```
**Testing**:
```text
```rust
// Unit tests: Test form validation logic (no TUI)
#[test]
fn test_validate_workspace_name() {
@ -267,7 +267,7 @@ fn test_validate_workspace_name() {
```
**Scriptability**:
```text
```bash
# Batch mode: Provide config via file
provisioning workspace init --config workspace.toml
@ -316,7 +316,7 @@ provisioning workspace init --interactive
### Form Definition Pattern
```text
```rust
use typdialog::Form;
pub fn workspace_initialization_form() -> Result<WorkspaceConfig> {
@ -353,7 +353,7 @@ pub fn workspace_initialization_form() -> Result<WorkspaceConfig> {
### Integration with Nickel
```text
```rust
// 1. Get validated input from TUI dialog
let config = workspace_initialization_form()?;
@ -370,7 +370,7 @@ fs::write("workspace/config.toml", config_toml)?;
### CLI Command Structure
```text
```rust
// provisioning/core/cli/src/commands/workspace.rs
#[derive(Parser)]
@ -404,7 +404,7 @@ pub fn handle_workspace_init(args: InitArgs) -> Result<()> {
### Validation Rules
```text
```rust
pub fn validate_workspace_name(name: &str) -> Result<(), String> {
// Alphanumeric, hyphens, 3-32 chars
let re = Regex::new(r"^[a-z0-9-]{3,32}$").unwrap();
@ -425,7 +425,7 @@ pub fn validate_region(region: &str) -> Result<(), String> {
### Security: Password Handling
```text
```rust
use zeroize::Zeroizing;
pub fn get_secure_password() -> Result<Zeroizing<String>> {
@ -447,7 +447,7 @@ pub fn get_secure_password() -> Result<Zeroizing<String>> {
## Testing Strategy
**Unit Tests**:
```text
```rust
#[test]
fn test_workspace_name_validation() {
assert!(validate_workspace_name("my-workspace").is_ok());
@ -457,7 +457,7 @@ fn test_workspace_name_validation() {
```
**Integration Tests**:
```text
```rust
// Use non-interactive mode with config files
#[test]
fn test_workspace_init_non_interactive() {
@ -481,7 +481,7 @@ fn test_workspace_init_non_interactive() {
```
**Manual Testing**:
```text
```bash
# Test interactive flow
cargo build --release
./target/release/provisioning workspace init --interactive
@ -495,7 +495,7 @@ cargo build --release
## Configuration Integration
**CLI Flag**:
```text
```toml
# provisioning/config/config.defaults.toml
[ui]
interactive_mode = "auto" # "auto" | "always" | "never"
@ -503,7 +503,7 @@ dialog_theme = "default" # "default" | "minimal" | "colorful"
```
**Environment Override**:
```text
```bash
# Force non-interactive mode (for CI/CD)
export PROVISIONING_INTERACTIVE=false
@ -523,7 +523,7 @@ export PROVISIONING_INTERACTIVE=true
- Validation rule patterns
**Configuration Schema**:
```text
```nickel
# provisioning/schemas/workspace.ncl
{
WorkspaceConfig = {

View File

@ -93,7 +93,7 @@ Integrate **SecretumVault** as the centralized secrets management system for the
### Architecture Diagram
```text
```bash
┌─────────────────────────────────────────────────────────────┐
│ Provisioning CLI / Orchestrator / Services │
│ │
@ -273,7 +273,7 @@ SOPS is excellent for **static secrets in git**, but inadequate for:
### Mitigation Strategies
**High Availability**:
```text
```bash
# Deploy SecretumVault cluster (3 nodes)
provisioning deploy secretum-vault --ha --replicas 3
@ -282,7 +282,7 @@ provisioning deploy secretum-vault --ha --replicas 3
```
**Migration from SOPS**:
```text
```bash
# Phase 1: Import existing SOPS secrets into SecretumVault
provisioning secrets migrate --from-sops config/secrets.yaml
@ -291,7 +291,7 @@ provisioning secrets migrate --from-sops config/secrets.yaml
```
**Fallback Strategy**:
```text
```rust
// Graceful degradation if vault unavailable
let secret = match vault_client.get_secret("database/password").await {
Ok(s) => s,
@ -305,7 +305,7 @@ let secret = match vault_client.get_secret("database/password").await {
```
**Operational Monitoring**:
```text
```bash
# prometheus metrics
secretum_vault_request_duration_seconds
secretum_vault_secret_lease_expiry
@ -351,7 +351,7 @@ secretum_vault_raft_leader_changes
### SecretumVault Deployment
```text
```bash
# Deploy via provisioning system
provisioning deploy secretum-vault
--ha
@ -367,7 +367,7 @@ provisioning vault unseal --key-shares 5 --key-threshold 3
### Rust Client Library
```text
```rust
// provisioning/core/libs/secretum-client/src/lib.rs
use secretum_vault::{Client, SecretEngine, Auth};
@ -402,7 +402,7 @@ impl VaultClient {
### Nushell Integration
```text
```nushell
# Nushell commands via Rust CLI wrapper
provisioning secrets get database/prod/password
provisioning secrets set api/keys/stripe --value "sk_live_xyz"
@ -413,7 +413,7 @@ provisioning secrets list database/
### Nickel Configuration Integration
```text
```nickel
# provisioning/schemas/database.ncl
{
database = {
@ -429,7 +429,7 @@ provisioning secrets list database/
### Cedar Policy for Secret Access
```text
```bash
// policy: developers can read dev secrets, not prod
permit(
principal in Group::"developers",
@ -455,7 +455,7 @@ permit(
### Dynamic Database Credentials
```text
```rust
// Application requests temporary DB credentials
let creds = vault_client
.database()
@ -472,7 +472,7 @@ println!("TTL: {}", creds.lease_duration); // 1h
### Secret Rotation Automation
```text
```toml
# secretum-vault config
[[rotation_policies]]
path = "database/prod/password"
@ -487,7 +487,7 @@ max_age = "90d"
### Audit Log Format
```text
```json
{
"timestamp": "2025-01-08T12:34:56Z",
"type": "request",
@ -515,7 +515,7 @@ max_age = "90d"
## Testing Strategy
**Unit Tests**:
```text
```rust
#[tokio::test]
async fn test_get_secret() {
let vault = mock_vault_client();
@ -533,7 +533,7 @@ async fn test_dynamic_credentials_generation() {
```
**Integration Tests**:
```text
```bash
# Test vault deployment
provisioning deploy secretum-vault --test-mode
provisioning vault init
@ -551,7 +551,7 @@ provisioning secrets rotate test/secret
```
**Security Tests**:
```text
```rust
#[tokio::test]
async fn test_unauthorized_access_denied() {
let vault = vault_client_with_limited_token();
@ -563,7 +563,7 @@ async fn test_unauthorized_access_denied() {
## Configuration Integration
**Provisioning Config**:
```text
```toml
# provisioning/config/config.defaults.toml
[secrets]
provider = "secretum-vault" # "secretum-vault" | "sops" | "env"
@ -583,7 +583,7 @@ max_size = "100MB"
```
**Environment Variables**:
```text
```bash
export VAULT_ADDR="https://vault.example.com:8200"
export VAULT_TOKEN="s.abc123def456..."
export VAULT_NAMESPACE="provisioning"

View File

@ -100,7 +100,7 @@ All AI components are **schema-aware**, **security-enforced**, and **human-super
### Architecture Diagram
```text
```bash
┌─────────────────────────────────────────────────────────────────┐
│ User Interfaces │
│ │
@ -268,7 +268,7 @@ All AI components are **schema-aware**, **security-enforced**, and **human-super
Traditional AI code generation fails for infrastructure because:
```text
```bash
Generic AI (like GitHub Copilot):
❌ Generates syntactically correct but semantically wrong configs
❌ Doesn't understand cloud provider constraints
@ -278,7 +278,7 @@ Generic AI (like GitHub Copilot):
```
**Schema-aware AI** (our approach):
```text
```nickel
# Nickel schema provides ground truth
{
Database = {
@ -303,7 +303,7 @@ Generic AI (like GitHub Copilot):
LLMs alone have limitations:
```text
```bash
Pure LLM:
❌ Knowledge cutoff (no recent updates)
❌ Hallucinations (invents plausible-sounding configs)
@ -312,7 +312,7 @@ Pure LLM:
```
**RAG-enhanced LLM**:
```text
```bash
Query: "How to configure Postgres with encryption?"
RAG retrieves:
@ -332,7 +332,7 @@ LLM generates answer WITH retrieved context:
AI-generated infrastructure configs require human approval:
```text
```rust
// All AI operations require approval
pub async fn ai_generate_config(request: GenerateRequest) -> Result<Config> {
let ai_generated = ai_service.generate(request).await?;
@ -414,7 +414,7 @@ No single LLM provider is best for all tasks:
### Mitigation Strategies
**Cost Control**:
```text
```toml
[ai.rate_limiting]
requests_per_minute = 60
tokens_per_day = 1000000
@ -427,7 +427,7 @@ ttl = "1h"
```
**Latency Optimization**:
```text
```rust
// Streaming responses for real-time feedback
pub async fn ai_generate_stream(request: GenerateRequest) -> impl Stream<Item = String> {
ai_service
@ -438,7 +438,7 @@ pub async fn ai_generate_stream(request: GenerateRequest) -> impl Stream<Item =
```
**Privacy (Local Models)**:
```text
```toml
[ai]
provider = "local"
model_path = "/opt/provisioning/models/llama-3-70b"
@ -447,7 +447,7 @@ model_path = "/opt/provisioning/models/llama-3-70b"
```
**Validation (Defense in Depth)**:
```text
```bash
AI generates config
Nickel schema validation (syntax, types, contracts)
@ -460,7 +460,7 @@ Deployment
```
**Observability**:
```text
```toml
[ai.observability]
trace_all_requests = true
store_conversations = true
@ -510,7 +510,7 @@ conversation_retention = "30d"
### AI Service API
```text
```rust
// platform/crates/ai-service/src/lib.rs
#[async_trait]
@ -609,7 +609,7 @@ impl AIService for AIServiceImpl {
### MCP Server Integration
```text
```rust
// platform/crates/mcp-server/src/lib.rs
pub struct MCPClient {
@ -675,7 +675,7 @@ impl ToolRegistry {
### RAG System Implementation
```text
```rust
// platform/crates/rag/src/lib.rs
pub struct RAGService {
@ -750,7 +750,7 @@ pub struct QdrantStore {
### typdialog-ai Integration
```text
```rust
// typdialog-ai/src/form_assistant.rs
pub struct FormAssistant {
@ -821,7 +821,7 @@ Generate field values:",
### typdialog-ag Agents
```text
```rust
// typdialog-ag/src/agent.rs
pub struct ProvisioningAgent {
@ -899,7 +899,7 @@ impl ProvisioningAgent {
### Cedar Policies for AI
```text
```bash
// AI cannot access secrets without explicit permission
forbid(
principal == Service::"ai-service",
@ -939,7 +939,7 @@ forbid(
## Testing Strategy
**Unit Tests**:
```text
```rust
#[tokio::test]
async fn test_ai_config_generation_validates() {
let ai_service = mock_ai_service();
@ -968,7 +968,7 @@ async fn test_ai_cannot_access_secrets() {
```
**Integration Tests**:
```text
```rust
#[tokio::test]
async fn test_end_to_end_ai_config_generation() {
// User provides natural language
@ -999,7 +999,7 @@ async fn test_end_to_end_ai_config_generation() {
```
**RAG Quality Tests**:
```text
```rust
#[tokio::test]
async fn test_rag_retrieval_accuracy() {
let rag = rag_service();
@ -1026,7 +1026,7 @@ async fn test_rag_retrieval_accuracy() {
## Security Considerations
**AI Access Control**:
```text
```bash
AI Service Permissions (enforced by Cedar):
✅ CAN: Read Nickel schemas
✅ CAN: Generate configurations
@ -1039,7 +1039,7 @@ AI Service Permissions (enforced by Cedar):
```
**Data Privacy**:
```text
```toml
[ai.privacy]
# Sanitize before sending to LLM
sanitize_secrets = true
@ -1056,7 +1056,7 @@ sanitize_credentials = true
```
**Audit Trail**:
```text
```rust
// Every AI operation logged
pub struct AIAuditLog {
timestamp: DateTime<Utc>,
@ -1074,7 +1074,7 @@ pub struct AIAuditLog {
**Estimated Costs** (per month, based on typical usage):
```text
```bash
Assumptions:
- 100 active users
- 10 AI config generations per user per day

View File

@ -10,7 +10,7 @@
The `lib_provisioning/config/accessor.nu` file contains 1567 lines across 187 accessor functions. Analysis reveals that 95% of these functions follow
an identical mechanical pattern:
```text
```nushell
export def get-{field-name} [--config: record] {
config-get "{path.to.field}" {default_value} --config $config
}
@ -42,7 +42,7 @@ Implement **Schema-Driven Accessor Generation**: automatically generate accessor
### Architecture
```text
```bash
Nickel Schema (contracts.ncl)
[Parse & Extract Schema Structure]

View File

@ -16,7 +16,7 @@ The provisioning system integrates with four critical plugins, each with its own
Analysis reveals ~90% code duplication across these wrappers:
```text
```nushell
# Pattern repeated 4 times with minor variations:
export def plugin-available? [] {
# Check if plugin is installed
@ -53,7 +53,7 @@ Implement **Plugin Wrapper Abstraction Framework**: replace manual plugin wrappe
### Architecture
```text
```bash
Plugin Definition (YAML)
├─ plugin: auth
├─ methods:
@ -89,7 +89,7 @@ Generated Wrappers
**Nushell 0.109 Compliant** (do-complete pattern, no try-catch):
```text
```nushell
def call-plugin-with-fallback [method: string args: record] {
let plugin_result = (
do {
@ -175,7 +175,7 @@ def call-plugin-with-fallback [method: string args: record] {
### auth.yaml Example
```text
```yaml
plugin: auth
http_endpoint: http://localhost:8001
methods:
@ -196,7 +196,7 @@ methods:
**Feature Flag Approach**:
```text
```bash
# Use original manual wrappers
export PROVISIONING_USE_GENERATED_PLUGINS=false

View File

@ -10,7 +10,7 @@
The current help system in `main_provisioning/help_system.nu` (1303 lines) consists almost entirely of hardcoded string concatenation with embedded
ANSI formatting codes:
```text
```nushell
def help-infrastructure [] {
print "╔════════════════════════════════════════════════════╗"
print "║ SERVER & INFRASTRUCTURE ║"
@ -45,7 +45,7 @@ Implement **Data-Driven Help with Mozilla Fluent Integration**:
### Architecture
```text
```bash
Help Content (Fluent Files)
├─ en-US/help.ftl (65 strings - English base)
└─ es-ES/help.ftl (65 strings - Spanish translations)
@ -72,7 +72,7 @@ User Interface
**en-US/help.ftl**:
```text
```bash
help-main-title = PROVISIONING SYSTEM
help-main-subtitle = Layered Infrastructure Automation
help-main-categories = COMMAND CATEGORIES
@ -99,7 +99,7 @@ help-orch-batch = Multi-Provider Batch Operations
**es-ES/help.ftl** (Spanish translations):
```text
```bash
help-main-title = SISTEMA DE PROVISIÓN
help-main-subtitle = Automatización de Infraestructura por Capas
help-main-categories = CATEGORÍAS DE COMANDOS
@ -126,7 +126,7 @@ help-orch-batch = Operaciones por Lotes Multi-Proveedor
### 2. Fluent Loading in Nushell
```text
```nushell
def load-fluent-file [category: string] {
let lang = ($env.LANG? | default "en_US" | str replace "_" "-")
let fluent_path = $"provisioning/locales/($lang)/help.ftl"
@ -138,7 +138,7 @@ def load-fluent-file [category: string] {
### 3. Help System Wrapper
```text
```nushell
export def help-infrastructure [] {
let strings = (load-fluent-file "infrastructure")
@ -191,7 +191,7 @@ export def help-infrastructure [] {
## Language Resolution Flow
```text
```bash
1. Check LANG environment variable
LANG=es_ES.UTF-8 → extract "es_ES" or "es-ES"
@ -213,7 +213,7 @@ export def help-infrastructure [] {
### Unit Tests
```text
```bash
# Test language detection
LANG=en_US provisioning help infrastructure
# Expected: English output
@ -227,7 +227,7 @@ LANG=fr_FR provisioning help infrastructure
## File Structure
```text
```bash
provisioning/
├── locales/
│ ├── i18n-config.toml # Locale metadata & fallback chains
@ -243,7 +243,7 @@ provisioning/
**i18n-config.toml** defines:
```text
```toml
[locales]
default = "en-US"
fallback = "en-US"

View File

@ -9,7 +9,7 @@
The `lib_provisioning/config/loader.nu` file (2199 lines) is a monolithic implementation mixing multiple unrelated concerns:
```text
```bash
Current Structure (2199 lines):
├─ Cache lookup/storage (300 lines)
├─ Nickel evaluation (400 lines)
@ -43,7 +43,7 @@ Implement **Layered Loader Architecture**: decompose monolithic loader into spec
### Target Architecture
```text
```bash
lib_provisioning/config/
├── loader.nu # ORCHESTRATOR (< 300 lines)
│ └─ Coordinates loading pipeline
@ -165,7 +165,7 @@ Create each loader as independent module:
Extract Nickel evaluation logic:
```text
```nushell
export def evaluate-nickel [file: string] {
let result = (
do {
@ -185,7 +185,7 @@ export def evaluate-nickel [file: string] {
Implement thin loader.nu:
```text
```nushell
export def load-provisioning-config [] {
let env_config = (env-loader load-environment)
let toml_config = (toml-loader load-toml "config.toml")
@ -207,7 +207,7 @@ export def load-provisioning-config [] {
Create test for each module:
```text
```bash
tests/config/
├── loaders/
│ ├── test_nickel_loader.nu
@ -235,7 +235,7 @@ tests/config/
## Backward Compatibility
**Public API Unchanged**:
```text
```nushell
# Current usage (unchanged)
let config = (load-provisioning-config)
```

View File

@ -38,7 +38,7 @@ Implement **Domain-Based Command Modules**: split monolithic handlers into focus
### Target Architecture
```text
```bash
main_provisioning/commands/
├── dispatcher.nu # Routes commands to domain handlers
├── utilities/ # Split by domain
@ -168,7 +168,7 @@ Create `integrations/` directory with 3 modules:
Implement `dispatcher.nu`:
```text
```nushell
export def provision-ssh [args] {
use ./utilities/ssh.nu *
handle-ssh-command $args
@ -189,7 +189,7 @@ export def provision-cache [args] {
Keep public exports in original files for compatibility:
```text
```nushell
# commands/utilities.nu (compatibility layer)
use ./utilities/ssh.nu *
use ./utilities/sops.nu *
@ -204,7 +204,7 @@ export use ./utilities/sops.nu
Create test structure:
```text
```bash
tests/commands/
├── utilities/
│ ├── test_ssh.nu
@ -225,7 +225,7 @@ tests/commands/
**utilities/ssh.nu**:
```text
```nushell
# Connect to remote host
export def ssh-connect [host: string --port: int = 22] {
# Implementation
@ -244,7 +244,7 @@ export def ssh-close [host: string] {
## File Structure
```text
```bash
main_provisioning/commands/
├── dispatcher.nu # Route to domain handlers
├── utilities/
@ -269,7 +269,7 @@ main_provisioning/commands/
Users see no change in CLI:
```text
```bash
provisioning ssh host.example.com
provisioning sops edit config.yaml
provisioning cache clear
@ -281,7 +281,7 @@ provisioning guide from-scratch
**Import Path Options**:
```text
```nushell
# Option 1: Import from domain module (new way)
use ./utilities/ssh.nu *
connect $host

View File

@ -43,7 +43,7 @@ The Provisioning Platform is a modern, cloud-native infrastructure automation sy
### Architecture at a Glance
```text
```bash
┌─────────────────────────────────────────────────────────────────────┐
│ Provisioning Platform │
├─────────────────────────────────────────────────────────────────────┤
@ -93,7 +93,7 @@ The Provisioning Platform is a modern, cloud-native infrastructure automation sy
### High-Level Architecture
```text
```bash
┌────────────────────────────────────────────────────────────────────────────┐
│ PRESENTATION LAYER │
├────────────────────────────────────────────────────────────────────────────┤
@ -191,7 +191,7 @@ The system is organized into three separate repositories:
#### **provisioning-core**
```text
```bash
Core system functionality
├── CLI interface (Nushell entry point)
├── Core libraries (lib_provisioning)
@ -205,7 +205,7 @@ Core system functionality
#### **provisioning-extensions**
```text
```bash
All provider, taskserv, cluster extensions
├── providers/
│ ├── aws/
@ -229,7 +229,7 @@ All provider, taskserv, cluster extensions
#### **provisioning-platform**
```text
```bash
Platform services
├── orchestrator/ (Rust)
├── control-center/ (Rust/Yew)
@ -255,7 +255,7 @@ Platform services
**Architecture**:
```text
```bash
Main CLI (211 lines)
Command Dispatcher (264 lines)
@ -281,7 +281,7 @@ Domain Handlers (7 modules)
**Hierarchical Loading**:
```text
```bash
1. System defaults (config.defaults.toml)
2. User config (~/.provisioning/config.user.toml)
3. Workspace config (workspace/config/provisioning.yaml)
@ -303,7 +303,7 @@ Domain Handlers (7 modules)
**Architecture**:
```text
```bash
src/
├── main.rs // Entry point
├── api/
@ -342,7 +342,7 @@ src/
**Workflow Types**:
```text
```bash
workflows/
├── server_create.nu // Server provisioning
├── taskserv.nu // Task service management
@ -371,7 +371,7 @@ workflows/
**Extension Structure**:
```text
```bash
extension-name/
├── schemas/
│ ├── main.ncl // Main schema
@ -401,7 +401,7 @@ Each extension packaged as OCI artifact:
**Module System**:
```text
```bash
# Discover available extensions
provisioning module discover taskservs
@ -414,7 +414,7 @@ provisioning module list taskserv my-workspace
**Layer System** (Configuration Inheritance):
```text
```toml
Layer 1: Core (provisioning/extensions/{type}/{name})
Layer 2: Workspace (workspace/extensions/{type}/{name})
@ -438,7 +438,7 @@ Layer 3: Infrastructure (workspace/infra/{infra}/extensions/{type}/{name})
**Example**:
```text
```nickel
let { TaskservDependencies } = import "provisioning/dependencies.ncl" in
{
kubernetes = TaskservDependencies {
@ -467,7 +467,7 @@ let { TaskservDependencies } = import "provisioning/dependencies.ncl" in
**Lifecycle Management**:
```text
```bash
# Start all auto-start services
provisioning platform start
@ -485,7 +485,7 @@ provisioning platform logs orchestrator --follow
**Architecture**:
```text
```bash
User Command (CLI)
Test Orchestrator (Rust)
@ -520,7 +520,7 @@ The platform supports four operational modes that adapt the system from individu
### Mode Comparison
```text
```bash
┌───────────────────────────────────────────────────────────────────────┐
│ MODE ARCHITECTURE │
├───────────────┬───────────────┬───────────────┬───────────────────────┤
@ -562,7 +562,7 @@ The platform supports four operational modes that adapt the system from individu
**Switching Modes**:
```text
```bash
# Check current mode
provisioning mode current
@ -577,7 +577,7 @@ provisioning mode validate enterprise
#### Solo Mode
```text
```bash
# 1. Default mode, no setup needed
provisioning workspace init
@ -590,7 +590,7 @@ provisioning server create
#### Multi-User Mode
```text
```bash
# 1. Switch mode and authenticate
provisioning mode switch multi-user
provisioning auth login
@ -609,7 +609,7 @@ provisioning workspace unlock my-infra
#### CI/CD Mode
```text
```yaml
# GitLab CI
deploy:
stage: deploy
@ -626,7 +626,7 @@ deploy:
#### Enterprise Mode
```text
```bash
# 1. Switch to enterprise, verify K8s
provisioning mode switch enterprise
kubectl get pods -n provisioning-system
@ -654,7 +654,7 @@ provisioning workspace unlock prod-deployment
### Service Communication
```text
```bash
┌──────────────────────────────────────────────────────────────────────┐
│ NETWORK LAYER │
├──────────────────────────────────────────────────────────────────────┤
@ -732,7 +732,7 @@ provisioning workspace unlock prod-deployment
### Data Storage
```text
```bash
┌────────────────────────────────────────────────────────────────┐
│ DATA LAYER │
├────────────────────────────────────────────────────────────────┤
@ -813,7 +813,7 @@ provisioning workspace unlock prod-deployment
**Configuration Loading**:
```text
```toml
1. Load system defaults (config.defaults.toml)
2. Merge user config (~/.provisioning/config.user.toml)
3. Load workspace config (workspace/config/provisioning.yaml)
@ -824,7 +824,7 @@ provisioning workspace unlock prod-deployment
**State Persistence**:
```text
```bash
Workflow execution
Create checkpoint (JSON)
@ -836,7 +836,7 @@ On failure, load checkpoint and resume
**OCI Artifact Flow**:
```text
```bash
1. Package extension (oci-package.nu)
2. Push to OCI registry (provisioning oci push)
3. Extension stored as OCI artifact
@ -850,7 +850,7 @@ On failure, load checkpoint and resume
### Security Layers
```text
```bash
┌─────────────────────────────────────────────────────────────────┐
│ SECURITY ARCHITECTURE │
├─────────────────────────────────────────────────────────────────┤
@ -921,7 +921,7 @@ On failure, load checkpoint and resume
**SOPS Integration**:
```text
```bash
# Edit encrypted file
provisioning sops workspace/secrets/keys.yaml.enc
@ -931,7 +931,7 @@ provisioning sops workspace/secrets/keys.yaml.enc
**KMS Integration** (Enterprise):
```text
```yaml
# workspace/config/provisioning.yaml
secrets:
provider: "kms"
@ -945,7 +945,7 @@ secrets:
**CI/CD Mode** (Required):
```text
```bash
# Sign OCI artifact
cosign sign oci://registry/kubernetes:1.28.0
@ -955,7 +955,7 @@ cosign verify oci://registry/kubernetes:1.28.0
**Enterprise Mode** (Mandatory):
```text
```bash
# Pull with verification
provisioning extension pull kubernetes --verify-signature
@ -970,7 +970,7 @@ provisioning extension pull kubernetes --verify-signature
#### 1. **Binary Deployment** (Solo, Multi-user)
```text
```bash
User Machine
├── ~/.provisioning/bin/
│ ├── provisioning-orchestrator
@ -986,7 +986,7 @@ User Machine
#### 2. **Docker Deployment** (Multi-user, CI/CD)
```text
```bash
Docker Daemon
├── Container: provisioning-orchestrator
├── Container: provisioning-control-center
@ -1001,7 +1001,7 @@ Docker Daemon
#### 3. **Docker Compose Deployment** (Multi-user)
```text
```yaml
# provisioning/platform/docker-compose.yaml
services:
orchestrator:
@ -1039,7 +1039,7 @@ services:
#### 4. **Kubernetes Deployment** (CI/CD, Enterprise)
```text
```yaml
# Namespace: provisioning-system
apiVersion: apps/v1
kind: Deployment
@ -1085,7 +1085,7 @@ spec:
#### 5. **Remote Deployment** (All modes)
```text
```yaml
# Connect to remotely-running services
services:
orchestrator:
@ -1108,7 +1108,7 @@ services:
#### 1. **Hybrid Language Integration** (Rust ↔ Nushell)
```text
```bash
Rust Orchestrator
↓ (HTTP API)
Nushell CLI
@ -1124,7 +1124,7 @@ File-based Task Queue
#### 2. **Provider Abstraction**
```text
```bash
Unified Provider Interface
├── create_server(config) -> Server
├── delete_server(id) -> bool
@ -1139,7 +1139,7 @@ Provider Implementations:
#### 3. **OCI Registry Integration**
```text
```bash
Extension Development
Package (oci-package.nu)
@ -1157,7 +1157,7 @@ Load into Workspace
#### 4. **Gitea Integration** (Multi-user, Enterprise)
```text
```bash
Workspace Operations
Check Lock Status (Gitea API)
@ -1179,7 +1179,7 @@ Release Lock (Delete lock file)
#### 5. **CoreDNS Integration**
```text
```bash
Service Registration
Update CoreDNS Corefile

View File

@ -86,7 +86,7 @@ Original comprehensive loader that handles:
## Module Dependency Graph
```text
```bash
Help/Status Commands
loader-lazy.nu
@ -110,7 +110,7 @@ loader.nu (full configuration)
### Fast Path (Help Commands)
```text
```bash
# Uses minimal loader - 23ms
./provisioning help infrastructure
./provisioning workspace list
@ -119,7 +119,7 @@ loader.nu (full configuration)
### Medium Path (Status Operations)
```text
```bash
# Uses minimal loader with some full config - ~50ms
./provisioning status
./provisioning workspace active
@ -128,7 +128,7 @@ loader.nu (full configuration)
### Full Path (Infrastructure Operations)
```text
```bash
# Uses full loader - ~150ms
./provisioning server create --infra myinfra
./provisioning taskserv create kubernetes
@ -139,7 +139,7 @@ loader.nu (full configuration)
### Lazy Loading Decision Logic
```text
```nushell
# In loader-lazy.nu
let is_fast_command = (
$command == "help" or
@ -160,7 +160,7 @@ if $is_fast_command {
The minimal loader returns a lightweight config record:
```text
```nushell
{
workspace: {
name: "librecloud"
@ -247,7 +247,7 @@ Only add if:
### Performance Testing
```text
```bash
# Benchmark minimal loader
time nu -n -c "use loader-minimal.nu *; get-active-workspace"

View File

@ -13,7 +13,7 @@ Control-Center uses **SurrealDB with kv-mem backend**, an embedded in-memory dat
### Database Configuration
```text
```toml
[database]
url = "memory" # In-memory backend
namespace = "control_center"
@ -24,7 +24,7 @@ database = "main"
**Production Alternative**: Switch to remote WebSocket connection for persistent storage:
```text
```toml
[database]
url = "ws://localhost:8000"
namespace = "control_center"
@ -79,7 +79,7 @@ Control-Center also supports (via Cargo.toml dependencies):
Orchestrator uses simple file-based storage by default:
```text
```toml
[orchestrator.storage]
type = "filesystem" # Default
backend_path = "{{orchestrator.paths.data_dir}}/queue.rkvs"
@ -87,7 +87,7 @@ backend_path = "{{orchestrator.paths.data_dir}}/queue.rkvs"
**Resolved Path**:
```text
```bash
{{workspace.path}}/.orchestrator/data/queue.rkvs
```
@ -95,7 +95,7 @@ backend_path = "{{orchestrator.paths.data_dir}}/queue.rkvs"
For production deployments, switch to SurrealDB:
```text
```toml
[orchestrator.storage]
type = "surrealdb-server" # or surrealdb-embedded
@ -115,7 +115,7 @@ password = "secret"
All services load configuration in this order (priority: low → high):
```text
```toml
1. System Defaults provisioning/config/config.defaults.toml
2. Service Defaults provisioning/platform/{service}/config.defaults.toml
3. Workspace Config workspace/{name}/config/provisioning.yaml
@ -128,7 +128,7 @@ All services load configuration in this order (priority: low → high):
Configs support dynamic variable interpolation:
```text
```toml
[paths]
base = "/Users/Akasha/project-provisioning/provisioning"
data_dir = "{{paths.base}}/data" # Resolves to: /Users/.../data
@ -175,7 +175,7 @@ All services use workspace-aware paths:
**Orchestrator**:
```text
```toml
[orchestrator.paths]
base = "{{workspace.path}}/.orchestrator"
data_dir = "{{orchestrator.paths.base}}/data"
@ -185,7 +185,7 @@ queue_dir = "{{orchestrator.paths.data_dir}}/queue"
**Control-Center**:
```text
```toml
[paths]
base = "{{workspace.path}}/.control-center"
data_dir = "{{paths.base}}/data"
@ -194,7 +194,7 @@ logs_dir = "{{paths.base}}/logs"
**Result** (workspace: `workspace-librecloud`):
```text
```bash
workspace-librecloud/
├── .orchestrator/
│ ├── data/
@ -214,7 +214,7 @@ Any config value can be overridden via environment variables:
### Control-Center
```text
```bash
# Override server port
export CONTROL_CENTER_SERVER_PORT=8081
@ -227,7 +227,7 @@ export CONTROL_CENTER_JWT_ISSUER="my-issuer"
### Orchestrator
```text
```bash
# Override orchestrator port
export ORCHESTRATOR_SERVER_PORT=8080
@ -241,7 +241,7 @@ export ORCHESTRATOR_QUEUE_MAX_CONCURRENT_TASKS=10
### Naming Convention
```text
```bash
{SERVICE}_{SECTION}_{KEY} = value
```
@ -259,7 +259,7 @@ export ORCHESTRATOR_QUEUE_MAX_CONCURRENT_TASKS=10
**Container paths** (resolved inside container):
```text
```toml
[paths]
base = "/app/provisioning"
data_dir = "/data" # Mounted volume
@ -268,7 +268,7 @@ logs_dir = "/var/log/orchestrator" # Mounted volume
**Docker Compose volumes**:
```text
```yaml
services:
orchestrator:
volumes:
@ -289,7 +289,7 @@ volumes:
**Host paths** (macOS/Linux):
```text
```toml
[paths]
base = "/Users/Akasha/project-provisioning/provisioning"
data_dir = "{{workspace.path}}/.orchestrator/data"
@ -302,7 +302,7 @@ logs_dir = "{{workspace.path}}/.orchestrator/logs"
Check current configuration:
```text
```bash
# Show effective configuration
provisioning env
@ -322,7 +322,7 @@ PROVISIONING_DEBUG=true ./orchestrator --show-config
**Cosmian KMS** uses its own database (when deployed):
```text
```bash
# KMS database location (Docker)
/data/kms.db # SQLite database inside KMS container
@ -332,7 +332,7 @@ PROVISIONING_DEBUG=true ./orchestrator --show-config
KMS also integrates with Control-Center's KMS hybrid backend (local + remote):
```text
```toml
[kms]
mode = "hybrid" # local, remote, or hybrid

View File

@ -32,7 +32,7 @@ without code changes. Hardcoded values defeat the purpose of IaC and create main
**Example**:
```text
```toml
# ✅ PAP Compliant - Configuration-driven
[providers.aws]
regions = ["us-west-2", "us-east-1"]
@ -62,7 +62,7 @@ configuration management and domain-specific operations.
**Language Responsibility Matrix**:
```text
```bash
Rust Layer:
├── Workflow orchestration and coordination
├── REST API servers and HTTP endpoints
@ -111,7 +111,7 @@ flexibility while maintaining predictability.
**Domain Organization**:
```text
```bash
├── core/ # Core system and library functions
├── platform/ # High-performance coordination layer
├── provisioning/ # Main business logic with providers and services
@ -160,7 +160,7 @@ evolution.
**Recovery Strategies**:
```text
```bash
Operation Level:
├── Atomic operations with rollback
├── Retry logic with exponential backoff
@ -203,7 +203,7 @@ gains.
**Security Implementation**:
```text
```bash
Authentication & Authorization:
├── API authentication for external access
├── Role-based access control for operations
@ -234,7 +234,7 @@ the system.
**Testing Strategy**:
```text
```bash
Unit Testing:
├── Configuration validation tests
├── Individual component tests
@ -272,7 +272,7 @@ System Testing:
**Error Categories**:
```text
```bash
Configuration Errors:
├── Invalid configuration syntax
├── Missing required configuration
@ -300,7 +300,7 @@ System Errors:
**Observability Implementation**:
```text
```bash
Logging:
├── Structured JSON logging
├── Configurable log levels
@ -358,7 +358,7 @@ Monitoring:
**Debt Management Strategy**:
```text
```bash
Assessment:
├── Regular code quality reviews
├── Performance profiling and optimization
@ -382,7 +382,7 @@ Improvement:
**Trade-off Categories**:
```text
```bash
Performance vs. Maintainability:
├── Rust coordination layer for performance
├── Nushell business logic for maintainability

View File

@ -19,7 +19,7 @@ This document describes the **hybrid selective integration** of prov-ecosystem a
### Three-Layer Integration
```text
```bash
┌─────────────────────────────────────────────┐
│ Provisioning CLI (provisioning/core/cli/) │
│ ✅ 80+ command shortcuts │
@ -70,7 +70,7 @@ This document describes the **hybrid selective integration** of prov-ecosystem a
**Key Types**:
```text
```rust
pub enum ContainerRuntime {
Docker,
Podman,
@ -85,7 +85,7 @@ pub struct ComposeAdapter { ... }
**Nushell Functions**:
```text
```nushell
runtime-detect # Auto-detect available runtime
runtime-exec # Execute command in detected runtime
runtime-compose # Adapt docker-compose for runtime
@ -112,7 +112,7 @@ runtime-list # List all available runtimes
**Key Types**:
```text
```rust
pub struct SshConfig { ... }
pub struct SshPool { ... }
pub enum DeploymentStrategy {
@ -124,7 +124,7 @@ pub enum DeploymentStrategy {
**Nushell Functions**:
```text
```nushell
ssh-pool-connect # Create SSH pool connection
ssh-pool-exec # Execute on SSH pool
ssh-pool-status # Check pool status
@ -153,7 +153,7 @@ ssh-circuit-breaker-status # Check circuit breaker
**Key Types**:
```text
```rust
pub enum BackupBackend {
Restic,
Borg,
@ -169,7 +169,7 @@ pub struct BackupManager { ... }
**Nushell Functions**:
```text
```nushell
backup-create # Create backup job
backup-restore # Restore from snapshot
backup-list # List snapshots
@ -199,7 +199,7 @@ backup-status # Check backup status
**Key Types**:
```text
```rust
pub enum GitProvider {
GitHub,
GitLab,
@ -212,7 +212,7 @@ pub struct GitOpsOrchestrator { ... }
**Nushell Functions**:
```text
```nushell
gitops-rules # Load rules from config
gitops-watch # Watch for Git events
gitops-trigger # Manually trigger deployment
@ -243,7 +243,7 @@ gitops-status # Get GitOps status
**Nushell Functions**:
```text
```nushell
service-install # Install service
service-start # Start service
service-stop # Stop service
@ -300,7 +300,7 @@ All implementations follow project standards:
## File Structure
```text
```bash
provisioning/
├── platform/integrations/
│ └── provisioning-bridge/ # Rust bridge crate
@ -338,7 +338,7 @@ provisioning/
### Runtime Abstraction
```text
```bash
# Auto-detect available runtime
let runtime = (runtime-detect)
@ -351,7 +351,7 @@ let compose_cmd = (runtime-compose "./docker-compose.yml")
### SSH Advanced
```text
```nushell
# Connect to SSH pool
let pool = (ssh-pool-connect "server01.example.com" "root" --port 22)
@ -364,7 +364,7 @@ ssh-circuit-breaker-status
### Backup System
```text
```nushell
# Schedule regular backups
backup-schedule "daily-app-backup" "0 2 * * *"
--paths ["/opt/app" "/var/lib/app"]
@ -381,7 +381,7 @@ backup-restore "snapshot-001" --restore_path "."
### GitOps Events
```text
```nushell
# Load GitOps rules
let rules = (gitops-rules "./gitops-rules.yaml")
@ -394,7 +394,7 @@ gitops-trigger "deploy-app" --environment "prod"
### Service Management
```text
```nushell
# Install service
service-install "my-app" "/usr/local/bin/my-app"
--user "appuser"
@ -418,7 +418,7 @@ service-restart-policy "my-app" --policy "on-failure" --delay-secs 5
Existing `provisioning` CLI will gain new command tree:
```text
```bash
provisioning runtime detect|exec|compose|info|list
provisioning ssh pool connect|exec|status|strategies
provisioning backup create|restore|list|schedule|retention|status
@ -430,7 +430,7 @@ provisioning service install|start|stop|restart|status|list|policy|detect-init
All integrations use Nickel schemas from `provisioning/schemas/integrations/`:
```text
```javascript
let { IntegrationConfig } = import "provisioning/integrations.ncl" in
{
runtime = { ... },
@ -445,7 +445,7 @@ let { IntegrationConfig } = import "provisioning/integrations.ncl" in
Nushell plugins can be created for performance-critical operations:
```text
```nushell
provisioning plugin list
# [installed]
# nu_plugin_runtime
@ -460,7 +460,7 @@ provisioning plugin list
### Rust Tests
```text
```bash
cd provisioning/platform/integrations/provisioning-bridge
cargo test --all
cargo test -p provisioning-bridge --lib
@ -469,7 +469,7 @@ cargo test -p provisioning-bridge --doc
### Nushell Tests
```text
```nushell
nu provisioning/core/nulib/integrations/runtime.nu
nu provisioning/core/nulib/integrations/ssh_advanced.nu
```

View File

@ -15,7 +15,7 @@ workflows, and enable extensible functionality. This document outlines the key i
**Implementation**:
```text
```bash
use tokio::process::Command;
use serde_json;
@ -35,7 +35,7 @@ pub async fn execute_nushell_workflow(
**Data Exchange Format**:
```text
```json
{
"status": "success" | "error" | "partial",
"result": {
@ -54,7 +54,7 @@ pub async fn execute_nushell_workflow(
**Implementation**:
```text
```nushell
def submit-workflow [workflow: record] -> record {
let payload = $workflow | to json
@ -68,7 +68,7 @@ def submit-workflow [workflow: record] -> record {
**API Contract**:
```text
```json
{
"workflow_id": "wf-456",
"name": "multi_cloud_deployment",
@ -86,7 +86,7 @@ def submit-workflow [workflow: record] -> record {
**Interface Definition**:
```text
```nushell
# Standard provider interface that all providers must implement
export def list-servers [] -> table {
# Provider-specific implementation
@ -107,7 +107,7 @@ export def get-server [id: string] -> record {
**Configuration Integration**:
```text
```toml
[providers.aws]
region = "us-west-2"
credentials_profile = "default"
@ -125,7 +125,7 @@ network_mode = "bridge"
#### Provider Discovery and Loading
```text
```nushell
def load-providers [] -> table {
let provider_dirs = glob "providers/*/nulib"
@ -150,7 +150,7 @@ def load-providers [] -> table {
**Implementation**:
```text
```nushell
def resolve-configuration [context: record] -> record {
let base_config = open config.defaults.toml
let user_config = if ("config.user.toml" | path exists) {
@ -173,7 +173,7 @@ def resolve-configuration [context: record] -> record {
#### Variable Interpolation Pattern
```text
```nushell
def interpolate-variables [config: record] -> record {
let interpolations = {
"{{paths.base}}": ($env.PWD),
@ -200,7 +200,7 @@ def interpolate-variables [config: record] -> record {
**Implementation (Rust)**:
```text
```rust
use petgraph::{Graph, Direction};
use std::collections::HashMap;
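// Illustrative sketch only (not the actual DependencyResolver): build a directed
// graph of taskserv dependencies and order installation with petgraph's toposort.
use petgraph::algo::toposort;

fn resolution_order(edges: &[(&str, &str)]) -> Result<Vec<String>, String> {
    // Each pair is (service, requires): the `requires` side must install first
    let mut graph = Graph::<String, ()>::new();
    let mut nodes = HashMap::new();
    for &(service, requires) in edges {
        let s = *nodes.entry(service.to_string())
            .or_insert_with(|| graph.add_node(service.to_string()));
        let r = *nodes.entry(requires.to_string())
            .or_insert_with(|| graph.add_node(requires.to_string()));
        graph.add_edge(r, s, ());
    }
    toposort(&graph, None)
        .map(|order| order.into_iter().map(|i| graph[i].clone()).collect())
        .map_err(|_| "dependency cycle detected".to_string())
}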
@ -229,7 +229,7 @@ impl DependencyResolver {
#### Parallel Execution Pattern
```text
```rust
use tokio::task::JoinSet;
use futures::stream::{FuturesUnordered, StreamExt};
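// Minimal sketch of the idea (task body and error type are placeholders):
// spawn one task per server and collect results as they complete.
async fn provision_all(servers: Vec<String>) -> Vec<Result<String, String>> {
    let mut set = JoinSet::new();
    for server in servers {
        set.spawn(async move {
            // The real code would call the provider/taskserv logic here
            Ok::<String, String>(format!("{server}: provisioned"))
        });
    }
    let mut results = Vec::new();
    while let Some(joined) = set.join_next().await {
        // join_next yields Result<T, JoinError>; fold panics into plain errors
        results.push(joined.unwrap_or_else(|e| Err(e.to_string())));
    }
    results
}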
@ -265,7 +265,7 @@ pub async fn execute_parallel_batch(
**Implementation**:
```text
```rust
#[derive(Serialize, Deserialize)]
pub struct WorkflowCheckpoint {
pub workflow_id: String,
@ -309,7 +309,7 @@ impl CheckpointManager {
#### Rollback Pattern
```text
```rust
pub struct RollbackManager {
rollback_stack: Vec<RollbackAction>,
}
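// Sketch of the pattern only; `RollbackAction` above is the real type, the
// boxed closure below is just a stand-in to show the reverse-order unwind.
type UndoFn = Box<dyn Fn() -> Result<(), String> + Send>;

struct SimpleRollback {
    stack: Vec<UndoFn>,
}

impl SimpleRollback {
    fn record(&mut self, undo: UndoFn) {
        self.stack.push(undo);
    }

    fn rollback_all(&mut self) -> Result<(), String> {
        // Undo the most recently completed action first
        while let Some(undo) = self.stack.pop() {
            undo()?;
        }
        Ok(())
    }
}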
@ -349,7 +349,7 @@ impl RollbackManager {
**Event Definition**:
```text
```rust
#[derive(Serialize, Deserialize, Clone, Debug)]
pub enum SystemEvent {
WorkflowStarted { workflow_id: String, name: String },
@ -363,7 +363,7 @@ pub enum SystemEvent {
**Event Bus Implementation**:
```text
```rust
use tokio::sync::broadcast;
pub struct EventBus {
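// Illustrative only: a tokio broadcast channel fans events out to any number
// of subscribers; `SystemEvent` is the enum shown above (Clone + Debug).
fn example_bus() -> (broadcast::Sender<SystemEvent>, broadcast::Receiver<SystemEvent>) {
    // Capacity bounds how far a slow subscriber may lag before it misses events
    broadcast::channel(256)
}

async fn log_events(mut rx: broadcast::Receiver<SystemEvent>) {
    while let Ok(event) = rx.recv().await {
        println!("event: {event:?}");
    }
}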
@ -392,7 +392,7 @@ impl EventBus {
#### Extension Discovery and Loading
```text
```nushell
def discover-extensions [] -> table {
let extension_dirs = glob "extensions/*/extension.toml"
@ -417,7 +417,7 @@ def discover-extensions [] -> table {
#### Extension Interface Pattern
```text
```nushell
# Standard extension interface
export def extension-info [] -> record {
{
@ -452,7 +452,7 @@ export def extension-deactivate [] -> nothing {
**Base API Structure**:
```text
```rust
use axum::{
extract::{Path, State},
response::Json,
@ -473,7 +473,7 @@ pub fn create_api_router(state: AppState) -> Router {
**Standard Response Format**:
```text
```json
{
"status": "success" | "error" | "pending",
"data": { ... },
@ -494,7 +494,7 @@ pub fn create_api_router(state: AppState) -> Router {
### Structured Error Pattern
```text
```rust
#[derive(thiserror::Error, Debug)]
pub enum ProvisioningError {
#[error("Configuration error: {message}")]
@ -513,7 +513,7 @@ pub enum ProvisioningError {
### Error Recovery Pattern
```text
```nushell
def with-retry [operation: closure, max_attempts: int = 3] {
mut attempts = 0
mut last_error = null
@ -540,7 +540,7 @@ def with-retry [operation: closure, max_attempts: int = 3] {
### Caching Strategy Pattern
```text
```rust
use std::sync::Arc;
use tokio::sync::RwLock;
use std::collections::HashMap;
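// Minimal TTL sketch using the same Arc<RwLock<HashMap>> approach; field and
// method names here are assumptions, not the real Cache<T> API.
use std::time::{Duration, Instant};

#[derive(Clone)]
struct TtlCache<T: Clone> {
    inner: Arc<RwLock<HashMap<String, (T, Instant)>>>,
    ttl: Duration,
}

impl<T: Clone> TtlCache<T> {
    async fn get(&self, key: &str) -> Option<T> {
        let map = self.inner.read().await;
        map.get(key)
            .filter(|(_, stored)| stored.elapsed() < self.ttl)
            .map(|(value, _)| value.clone())
    }

    async fn put(&self, key: String, value: T) {
        self.inner.write().await.insert(key, (value, Instant::now()));
    }
}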
@ -583,7 +583,7 @@ impl<T: Clone> Cache<T> {
### Streaming Pattern for Large Data
```text
```nushell
def process-large-dataset [source: string] -> nothing {
# Stream processing instead of loading entire dataset
open $source
@ -600,7 +600,7 @@ def process-large-dataset [source: string] -> nothing {
### Integration Test Pattern
```text
```rust
#[cfg(test)]
mod integration_tests {
use super::*;

View File

@ -24,7 +24,7 @@ distributed extension management through OCI registry integration.
**Purpose**: Core system functionality - CLI, libraries, base schemas
```text
```bash
provisioning-core/
├── core/
│ ├── cli/ # Command-line interface
@ -82,7 +82,7 @@ provisioning-core/
**Purpose**: All provider, taskserv, and cluster extensions
```text
```bash
provisioning-extensions/
├── providers/
│ ├── aws/
@ -143,7 +143,7 @@ Each extension published separately as OCI artifact:
**Extension Manifest** (`manifest.yaml`):
```text
```yaml
name: kubernetes
type: taskserv
version: 1.28.0
@ -183,7 +183,7 @@ min_provisioning_version: "3.0.0"
**Purpose**: Platform services (orchestrator, control-center, MCP server, API gateway)
```text
```bash
provisioning-platform/
├── orchestrator/ # Rust orchestrator service
│ ├── src/
@ -238,7 +238,7 @@ Standard Docker images in OCI registry:
### Registry Structure
```text
```bash
OCI Registry (localhost:5000 or harbor.company.com)
├── provisioning-core/
│ ├── v3.5.0 # Core system artifact
@ -263,7 +263,7 @@ OCI Registry (localhost:5000 or harbor.company.com)
Each extension packaged as OCI artifact:
```text
```bash
kubernetes-1.28.0.tar.gz
├── schemas/ # Nickel schemas
│ ├── kubernetes.ncl
@ -291,7 +291,7 @@ kubernetes-1.28.0.tar.gz
**File**: `workspace/config/provisioning.yaml`
```text
```yaml
# Core system dependency
dependencies:
core:
@ -363,7 +363,7 @@ The system resolves dependencies in this order:
### Dependency Resolution Commands
```text
```bash
# Resolve and install all dependencies
provisioning dep resolve
@ -386,7 +386,7 @@ provisioning dep tree kubernetes
### CLI Commands
```text
```bash
# Pull extension from OCI registry
provisioning oci pull kubernetes:1.28.0
@ -419,7 +419,7 @@ provisioning oci copy
### OCI Configuration
```text
```bash
# Show OCI configuration
provisioning oci config
@ -442,7 +442,7 @@ provisioning oci config
### 1. Develop Extension
```text
```bash
# Create new extension from template
provisioning generate extension taskserv redis
@ -466,7 +466,7 @@ provisioning generate extension taskserv redis
### 2. Test Extension Locally
```text
```bash
# Load extension from local path
provisioning module load taskserv workspace_dev redis --source local
@ -479,7 +479,7 @@ provisioning test extension redis
### 3. Package Extension
```text
```bash
# Validate extension structure
provisioning oci package validate ./extensions/taskservs/redis
@ -491,7 +491,7 @@ provisioning oci package ./extensions/taskservs/redis
### 4. Publish Extension
```text
```bash
# Login to registry (one-time)
provisioning oci login localhost:5000
@ -511,7 +511,7 @@ provisioning oci tags redis
### 5. Use Published Extension
```text
```bash
# Add to workspace configuration
# workspace/config/provisioning.yaml:
# dependencies:
@ -534,7 +534,7 @@ provisioning dep resolve
**Using Zot (lightweight OCI registry)**:
```text
```bash
# Start local OCI registry
provisioning oci-registry start
@ -555,7 +555,7 @@ provisioning oci-registry status
**Using Harbor**:
```text
```bash
# workspace/config/provisioning.yaml
dependencies:
registry:
@ -591,7 +591,7 @@ dependencies:
### Phase 2: Gradual Migration
```text
```bash
# Migrate extensions one by one
for ext in (ls provisioning/extensions/taskservs); do
provisioning oci publish $ext.name

View File

@ -79,7 +79,7 @@ dependency model.
**Contents:**
```text
```bash
provisioning-core/
├── nulib/ # Nushell libraries
│ ├── lib_provisioning/ # Core library functions
@ -120,7 +120,7 @@ provisioning-core/
**Installation Path:**
```text
```bash
/usr/local/
├── bin/provisioning
├── lib/provisioning/
@ -135,7 +135,7 @@ provisioning-core/
**Contents:**
```text
```bash
provisioning-platform/
├── orchestrator/ # Rust orchestrator
│ ├── src/
@ -180,7 +180,7 @@ provisioning-platform/
**Installation Path:**
```text
```bash
/usr/local/
├── bin/
│ ├── provisioning-orchestrator
@ -203,7 +203,7 @@ provisioning-platform/
**Contents:**
```text
```bash
provisioning-extensions/
├── registry/ # Extension registry
│ ├── index.json # Searchable index
@ -252,7 +252,7 @@ provisioning-extensions/
**Installation:**
```text
```bash
# Install extension via core CLI
provisioning extension install mongodb
provisioning extension install azure-provider
@ -261,7 +261,7 @@ provisioning extension install azure-provider
**Extension Structure:**
Each extension is self-contained:
```text
```bash
mongodb/
├── manifest.toml # Extension metadata
├── taskserv.nu # Implementation
@ -279,7 +279,7 @@ mongodb/
**Contents:**
```text
```bash
provisioning-workspace/
├── templates/ # Workspace templates
│ ├── minimal/ # Minimal starter
@ -315,7 +315,7 @@ provisioning-workspace/
**Usage:**
```text
```bash
# Create workspace from template
provisioning workspace init my-project --template kubernetes
@ -333,7 +333,7 @@ provisioning workspace init
**Contents:**
```text
```bash
provisioning-distribution/
├── release-automation/ # Automated release workflows
│ ├── build-all.nu # Build all packages
@ -385,7 +385,7 @@ provisioning-distribution/
### Package-Based Dependencies (Not Submodules)
```text
```bash
┌─────────────────────────────────────────────────────────────┐
│ provisioning-distribution │
│ (Release orchestration & registry) │
@ -416,7 +416,7 @@ provisioning-distribution/
**Method:** Loose coupling via CLI + REST API
```text
```nushell
# Platform calls Core CLI (subprocess)
def create-server [name: string] {
# Orchestrator executes Core CLI
@ -431,7 +431,7 @@ def submit-workflow [workflow: record] {
**Version Compatibility:**
```text
```toml
# platform/Cargo.toml
[package.metadata.provisioning]
core-version = "^3.0" # Compatible with core 3.x
@ -441,7 +441,7 @@ core-version = "^3.0" # Compatible with core 3.x
**Method:** Plugin/module system
```text
```toml
# Extension manifest
# extensions/mongodb/manifest.toml
[extension]
@ -465,7 +465,7 @@ provisioning extension install mongodb
**Method:** Git templates or package templates
```text
```bash
# Option 1: GitHub template repository
gh repo create my-infra --template provisioning-workspace
cd my-infra
@ -486,7 +486,7 @@ provisioning workspace create my-infra --template kubernetes
Each repository maintains independent semantic versioning:
```text
```bash
provisioning-core: 3.2.1
provisioning-platform: 2.5.3
provisioning-extensions: (per-extension versioning)
@ -497,7 +497,7 @@ provisioning-workspace: 1.4.0
**`provisioning-distribution/version-management/versions.toml`:**
```text
```toml
# Version compatibility matrix
[compatibility]
@ -536,7 +536,7 @@ workspace = "1.3.0"
**Coordinated releases** for major versions:
```text
```bash
# Major release: All repos release together
provisioning-core: 3.0.0
provisioning-platform: 2.0.0
@ -553,7 +553,7 @@ provisioning-platform: 2.1.0 (improves orchestrator, core stays 3.1.x)
### Working on Single Repository
```text
```bash
# Developer working on core only
git clone https://github.com/yourorg/provisioning-core
cd provisioning-core
@ -574,7 +574,7 @@ just install-dev
### Working Across Repositories
```text
```bash
# Scenario: Adding new feature requiring core + platform changes
# 1. Clone both repositories
@ -615,7 +615,7 @@ cargo test
### Testing Cross-Repo Integration
```text
```bash
# Integration tests in provisioning-distribution
cd provisioning-distribution
@ -636,7 +636,7 @@ just test-bundle stable-3.3
Each repository releases independently:
```text
```bash
# Core release
cd provisioning-core
git tag v3.2.1
@ -656,7 +656,7 @@ git push --tags
Distribution repository creates tested bundles:
```text
```bash
cd provisioning-distribution
# Create bundle
@ -679,7 +679,7 @@ just publish-bundle stable-3.2
#### Option 1: Bundle Installation (Recommended for Users)
```text
```bash
# Install stable bundle (easiest)
curl -fsSL https://get.provisioning.io | sh
@ -691,7 +691,7 @@ curl -fsSL https://get.provisioning.io | sh
#### Option 2: Individual Component Installation
```text
```bash
# Install only core (minimal)
curl -fsSL https://get.provisioning.io/core | sh
@ -704,7 +704,7 @@ provisioning extension install mongodb
#### Option 3: Custom Combination
```text
```bash
# Install specific versions
provisioning install core@3.1.0
provisioning install platform@2.4.0
@ -760,7 +760,7 @@ provisioning install platform@2.4.0
**Core CI (`provisioning-core/.github/workflows/ci.yml`):**
```text
```yaml
name: Core CI
on: [push, pull_request]
@ -792,7 +792,7 @@ jobs:
**Platform CI (`provisioning-platform/.github/workflows/ci.yml`):**
```text
```yaml
name: Platform CI
on: [push, pull_request]
@ -829,7 +829,7 @@ jobs:
**Distribution CI (`provisioning-distribution/.github/workflows/integration.yml`):**
```text
```yaml
name: Integration Tests
on:
@ -862,7 +862,7 @@ jobs:
### Monorepo Structure
```text
```bash
provisioning/ (One repo, ~500 MB)
├── core/ (Nushell)
├── platform/ (Rust)
@ -873,7 +873,7 @@ provisioning/ (One repo, ~500 MB)
### Multi-Repo Structure
```text
```bash
provisioning-core/ (Repo 1, ~50 MB)
├── nulib/
├── cli/

View File

@ -10,7 +10,7 @@
### Prerequisites
```text
```bash
# Install Nickel
brew install nickel
# or from source: https://nickel-lang.org/getting-started/
@ -21,7 +21,7 @@ nickel --version # Should be 1.0+
### Directory Structure for Examples
```text
```bash
mkdir -p ~/nickel-examples/{simple,complex,production}
cd ~/nickel-examples
```
@ -32,7 +32,7 @@ cd ~/nickel-examples
### Step 1: Create Contract File
```text
```nickel
cat > simple/server_contracts.ncl << 'EOF'
{
ServerConfig = {
@ -47,7 +47,7 @@ EOF
### Step 2: Create Defaults File
```text
```nickel
cat > simple/server_defaults.ncl << 'EOF'
{
web_server = {
@ -76,7 +76,7 @@ EOF
### Step 3: Create Main Module with Hybrid Interface
```text
```nickel
cat > simple/server.ncl << 'EOF'
let contracts = import "./server_contracts.ncl" in
let defaults = import "./server_defaults.ncl" in
@ -110,7 +110,7 @@ EOF
### Test: Export and Validate JSON
```text
```bash
cd simple/
# Export to JSON
@ -133,7 +133,7 @@ nickel export server.ncl --format json | jq '.production_web_server.cpu_cores'
### Usage in Consumer Module
```text
```nickel
cat > simple/consumer.ncl << 'EOF'
let server = import "./server.ncl" in
@ -162,14 +162,14 @@ nickel export consumer.ncl --format json | jq '.staging_web'
### Create Provider Structure
```text
```nickel
mkdir -p complex/upcloud/{contracts,defaults,main}
cd complex/upcloud
```
### Provider Contracts
```text
```nickel
cat > upcloud_contracts.ncl << 'EOF'
{
StorageBackup = {
@ -196,7 +196,7 @@ EOF
### Provider Defaults
```text
```nickel
cat > upcloud_defaults.ncl << 'EOF'
{
backup = {
@ -223,7 +223,7 @@ EOF
### Provider Main Module
```text
```nickel
cat > upcloud_main.ncl << 'EOF'
let contracts = import "./upcloud_contracts.ncl" in
let defaults = import "./upcloud_defaults.ncl" in
@ -281,7 +281,7 @@ EOF
### Test Provider Configuration
```text
```bash
# Export provider config
nickel export upcloud_main.ncl --format json | jq '.production_high_availability'
@ -296,7 +296,7 @@ nickel export upcloud_main.ncl --format json | jq '.production_high_availability
### Consumer Using Provider
```text
```nickel
cat > upcloud_consumer.ncl << 'EOF'
let upcloud = import "./upcloud_main.ncl" in
@ -332,7 +332,7 @@ nickel export upcloud_consumer.ncl --format json | jq '.ha_stack | keys'
### Taskserv Contracts (from wuji)
```text
```nickel
cat > production/taskserv_contracts.ncl << 'EOF'
{
Dependency = {
@ -352,7 +352,7 @@ EOF
### Taskserv Defaults
```text
```nickel
cat > production/taskserv_defaults.ncl << 'EOF'
{
kubernetes = {
@ -407,7 +407,7 @@ EOF
### Taskserv Main
```text
```nickel
cat > production/taskserv.ncl << 'EOF'
let contracts = import "./taskserv_contracts.ncl" in
let defaults = import "./taskserv_defaults.ncl" in
@ -453,7 +453,7 @@ EOF
### Test Taskserv Setup
```text
```bash
# Export stack
nickel export taskserv.ncl --format json | jq '.wuji_k8s_stack | keys'
# Output: ["kubernetes", "cilium", "containerd", "etcd"]
@ -477,7 +477,7 @@ nickel export taskserv.ncl --format json | jq '.staging_stack | length'
### Base Infrastructure
```text
```nickel
cat > production/infrastructure.ncl << 'EOF'
let servers = import "./server.ncl" in
let taskservs = import "./taskserv.ncl" in
@ -520,7 +520,7 @@ nickel export infrastructure.ncl --format json | jq '.production.taskservs | key
### Extending Infrastructure (Nickel Advantage!)
```text
```nickel
cat > production/infrastructure_extended.ncl << 'EOF'
let infra = import "./infrastructure.ncl" in
@ -557,7 +557,7 @@ nickel export infrastructure_extended.ncl --format json |
### Validation Functions
```text
```nickel
cat > production/validation.ncl << 'EOF'
let validate_server = fun server =>
if server.cpu_cores <= 0 then
@ -586,7 +586,7 @@ EOF
### Using Validations
```text
```nickel
cat > production/validated_config.ncl << 'EOF'
let server = import "./server.ncl" in
let taskserv = import "./taskserv.ncl" in
@ -632,7 +632,7 @@ nickel export validated_config.ncl --format json
### Run All Examples
```text
```bash
#!/bin/bash
# test_all_examples.sh
@ -679,7 +679,7 @@ echo "=== All Tests Passed ✓ ==="
### Common Nickel Operations
```text
```bash
# Validate Nickel syntax
nickel export config.ncl
@ -711,7 +711,7 @@ nickel typecheck config.ncl
### Problem: "unexpected token" with multiple let
```text
```nickel
# ❌ WRONG
let A = {x = 1}
let B = {y = 2}
@ -725,7 +725,7 @@ let B = {y = 2} in
### Problem: Function serialization fails
```text
```nickel
# ❌ WRONG - function will fail to serialize
{
get_value = fun x => x + 1,
@ -741,7 +741,7 @@ let B = {y = 2} in
### Problem: Null values cause export issues
```text
```nickel
# ❌ WRONG
{ optional_field = null }

View File

@ -8,7 +8,7 @@
## Quick Decision Tree
```text
```nickel
Need to define infrastructure/schemas?
├─ New platform schemas → Use Nickel ✅
├─ New provider extensions → Use Nickel ✅
@ -26,7 +26,7 @@ Need to define infrastructure/schemas?
#### KCL Approach
```text
```nickel
schema ServerDefaults:
name: str
cpu_cores: int = 2
@ -51,7 +51,7 @@ server_defaults: ServerDefaults = {
**server_contracts.ncl**:
```text
```json
{
ServerDefaults = {
name | String,
@ -64,7 +64,7 @@ server_defaults: ServerDefaults = {
**server_defaults.ncl**:
```text
```json
{
server = {
name = "web-server",
@ -77,7 +77,7 @@ server_defaults: ServerDefaults = {
**server.ncl**:
```text
```javascript
let contracts = import "./server_contracts.ncl" in
let defaults = import "./server_defaults.ncl" in
@ -93,7 +93,7 @@ let defaults = import "./server_defaults.ncl" in
**Usage**:
```text
```javascript
let server = import "./server.ncl" in
# Simple override
@ -117,7 +117,7 @@ my_custom = server.defaults.server & {
#### KCL (from `provisioning/extensions/providers/upcloud/nickel/` - legacy approach)
```text
```nickel
schema StorageBackup:
backup_id: str
frequency: str
@ -145,7 +145,7 @@ provision_upcloud: ProvisionUpcloud = {
**upcloud_contracts.ncl**:
```text
```json
{
StorageBackup = {
backup_id | String,
@ -170,7 +170,7 @@ provision_upcloud: ProvisionUpcloud = {
**upcloud_defaults.ncl**:
```text
```json
{
storage_backup = {
backup_id = "",
@ -195,7 +195,7 @@ provision_upcloud: ProvisionUpcloud = {
**upcloud_main.ncl** (from actual codebase):
```text
```javascript
let contracts = import "./upcloud_contracts.ncl" in
let defaults = import "./upcloud_defaults.ncl" in
@ -219,7 +219,7 @@ let defaults = import "./upcloud_defaults.ncl" in
**Usage Comparison**:
```text
```nickel
# KCL way (KCL does not handle this well)
# Cannot easily extend without schema modification
@ -288,7 +288,7 @@ production_stack = upcloud.make_provision_upcloud {
**KCL (Legacy)**:
```text
```nickel
schema ServerConfig:
name: str
zone: str = "us-nyc1"
@ -300,7 +300,7 @@ web_server: ServerConfig = {
**Nickel (Recommended)**:
```text
```javascript
let defaults = import "./server_defaults.ncl" in
web_server = defaults.make_server { name = "web-01" }
```
@ -313,7 +313,7 @@ web_server = defaults.make_server { name = "web-01" }
**KCL** (from wuji infrastructure):
```text
```nickel
schema TaskServDependency:
name: str
wait_for_health: bool = false
@ -343,7 +343,7 @@ taskserv_cilium: TaskServ = {
**Nickel** (from wuji/main.ncl):
```text
```javascript
let ts_kubernetes = import "./taskservs/kubernetes.ncl" in
let ts_cilium = import "./taskservs/cilium.ncl" in
let ts_containerd = import "./taskservs/containerd.ncl" in
@ -367,7 +367,7 @@ let ts_containerd = import "./taskservs/containerd.ncl" in
**KCL**:
```text
```nickel
schema ServerConfig:
name: str
# Would need to modify schema!
@ -379,7 +379,7 @@ schema ServerConfig:
**Nickel**:
```text
```javascript
let server = import "./server.ncl" in
# Add custom fields without modifying schema!
@ -402,7 +402,7 @@ my_server = server.defaults.server & {
**KCL Approach (Legacy)**:
```text
```nickel
schema ServerDefaults:
cpu: int = 2
memory: int = 4
@ -423,7 +423,7 @@ server: Server = {
**Nickel Approach**:
```text
```nickel
# defaults.ncl
server_defaults = {
cpu = 2,
@ -449,7 +449,7 @@ server = make_server {
**KCL Validation (Legacy)** (compile-time, inline):
```text
```nickel
schema Config:
timeout: int = 5
@ -465,7 +465,7 @@ schema Config:
**Nickel Validation** (runtime, contract-based):
```text
```nickel
# contracts.ncl - Pure type definitions
Config = {
timeout | Number,
@ -495,7 +495,7 @@ my_config = validate_config { timeout = 10 }
**Before (KCL - Legacy)**:
```text
```nickel
schema Scheduler:
strategy: str = "fifo"
workers: int = 4
@ -513,7 +513,7 @@ scheduler_config: Scheduler = {
`scheduler_contracts.ncl`:
```text
```json
{
Scheduler = {
strategy | String,
@ -524,7 +524,7 @@ scheduler_config: Scheduler = {
`scheduler_defaults.ncl`:
```text
```json
{
scheduler = {
strategy = "fifo",
@ -535,7 +535,7 @@ scheduler_config: Scheduler = {
`scheduler.ncl`:
```text
```javascript
let contracts = import "./scheduler_contracts.ncl" in
let defaults = import "./scheduler_defaults.ncl" in
@ -557,7 +557,7 @@ let defaults = import "./scheduler_defaults.ncl" in
**Before (KCL - Legacy)**:
```text
```nickel
schema Mode:
deployment_type: str = "solo" # "solo" | "multiuser" | "cicd" | "enterprise"
@ -568,7 +568,7 @@ schema Mode:
**After (Nickel - Current)**:
```text
```nickel
# contracts.ncl
{
Mode = {
@ -592,7 +592,7 @@ schema Mode:
**Before (KCL - Legacy)**:
```text
```nickel
schema ServerDefaults:
cpu: int = 2
memory: int = 4
@ -609,7 +609,7 @@ web_server: Server = {
**After (Nickel - Current)**:
```text
```nickel
# defaults.ncl
{
server_defaults = {
@ -643,7 +643,7 @@ let make_server = fun config =>
**Workflow**:
```text
```bash
# Edit workspace config
cd workspace_librecloud/nickel
vim wuji/main.ncl
@ -658,7 +658,7 @@ nickel export wuji/main.ncl # Uses updated schemas
**Imports** (relative, central):
```text
```nickel
import "../../provisioning/schemas/main.ncl"
import "../../provisioning/extensions/taskservs/kubernetes/nickel/main.ncl"
```
@ -671,7 +671,7 @@ import "../../provisioning/extensions/taskservs/kubernetes/nickel/main.ncl"
**Workflow**:
```text
```bash
# 1. Create immutable snapshot
provisioning workspace freeze
--version "2025-12-15-prod-v1"
@ -696,7 +696,7 @@ provisioning deploy
**Frozen Imports** (rewritten to local):
```text
```nickel
# Original in workspace
import "../../provisioning/schemas/main.ncl"
@ -720,7 +720,7 @@ import "./provisioning/schemas/main.ncl"
**Problem**:
```text
```nickel
# ❌ WRONG
let A = { x = 1 }
let B = { y = 2 }
@ -731,7 +731,7 @@ Error: `unexpected token`
**Solution**: Use `let...in` chaining:
```text
```nickel
# ✅ CORRECT
let A = { x = 1 } in
let B = { y = 2 } in
@ -744,7 +744,7 @@ let B = { y = 2 } in
**Problem**:
```text
```nickel
# ❌ WRONG
let StorageVol = {
mount_path : String | null = null,
@ -757,7 +757,7 @@ Error: `this can't be used as a contract`
**Solution**: Use untyped assignment:
```text
```nickel
# ✅ CORRECT
let StorageVol = {
mount_path = null,
@ -770,7 +770,7 @@ let StorageVol = {
**Problem**:
```text
```nickel
# ❌ WRONG
{
get_value = fun x => x + 1,
@ -782,7 +782,7 @@ Error: Functions can't be serialized
**Solution**: Mark helper functions `not_exported`:
```text
```nickel
# ✅ CORRECT
{
get_value | not_exported = fun x => x + 1,
@ -796,7 +796,7 @@ Error: Functions can't be serialized
**Problem**:
```text
```javascript
let defaults = import "./defaults.ncl" in
defaults.scheduler_config # But file has "scheduler"
```
@ -805,7 +805,7 @@ Error: `field not found`
**Solution**: Use exact field names:
```text
```javascript
let defaults = import "./defaults.ncl" in
defaults.scheduler # Correct name from defaults.ncl
```
@ -818,7 +818,7 @@ defaults.scheduler # Correct name from defaults.ncl
**Solution**: Check for circular references or missing `not_exported`:
```text
```nickel
# ❌ Slow - functions being serialized
{
validate_config = fun x => x,
@ -917,7 +917,7 @@ Type-safe prompts, forms, and schemas that **bidirectionally integrate with Nick
### Workflow: Nickel Schemas → Interactive UIs → Nickel Output
```text
```bash
# 1. Define schema in Nickel
cat > server.ncl << 'EOF'
let contracts = import "./contracts.ncl" in
@ -952,7 +952,7 @@ typedialog form --input form.toml --output nickel
### Example: Infrastructure Wizard
```text
```bash
# User runs
provisioning init --wizard
@ -1014,7 +1014,7 @@ provisioning/schemas/config/workspace_config/main.ncl
**File**: `provisioning/schemas/main.ncl` (174 lines)
```text
```nickel
# Domain-organized architecture
{
lib | doc "Core library types"
@ -1054,7 +1054,7 @@ provisioning/schemas/config/workspace_config/main.ncl
**Usage**:
```text
```javascript
let provisioning = import "./main.ncl" in
provisioning.lib.Storage
@ -1069,7 +1069,7 @@ provisioning.operations.workflows
**File**: `provisioning/extensions/providers/upcloud/nickel/main.ncl` (38 lines)
```text
```javascript
let contracts_lib = import "./contracts.ncl" in
let defaults_lib = import "./defaults.ncl" in
@ -1109,7 +1109,7 @@ let defaults_lib = import "./defaults.ncl" in
**File**: `workspace_librecloud/nickel/wuji/main.ncl` (53 lines)
```text
```javascript
let settings_config = import "./settings.ncl" in
let ts_cilium = import "./taskservs/cilium.ncl" in
let ts_containerd = import "./taskservs/containerd.ncl" in

View File

@ -15,7 +15,7 @@ verification, Cedar authorization, rate limiting, and audit logging) into a cohe
The middleware chain is applied in this specific order to ensure proper security:
```text
```bash
┌─────────────────────────────────────────────────────────────────┐
│ Incoming HTTP Request │
└────────────────────────┬────────────────────────────────────────┘
@ -90,7 +90,7 @@ The middleware chain is applied in this specific order to ensure proper security
**Example**:
```text
```rust
pub struct SecurityContext {
pub user_id: String,
pub token: ValidatedToken,
@ -164,7 +164,7 @@ impl SecurityContext {
**Example**:
```text
```rust
fn requires_mfa(method: &str, path: &str) -> bool {
if path.contains("/production/") { return true; }
if method == "DELETE" { return true; }
@ -190,7 +190,7 @@ fn requires_mfa(method: &str, path: &str) -> bool {
**Resource Mapping**:
```text
```bash
/api/v1/servers/srv-123 → Resource::Server("srv-123")
/api/v1/taskserv/kubernetes → Resource::TaskService("kubernetes")
/api/v1/cluster/prod → Resource::Cluster("prod")
@ -199,7 +199,7 @@ fn requires_mfa(method: &str, path: &str) -> bool {
**Action Mapping**:
```text
```bash
GET → Action::Read
POST → Action::Create
PUT → Action::Update
@ -223,7 +223,7 @@ DELETE → Action::Delete
**Configuration**:
```text
```rust
pub struct RateLimitConfig {
pub max_requests: u32, // for example, 100
pub window_duration: Duration, // for example, 60 seconds
@ -236,7 +236,7 @@ pub struct RateLimitConfig {
**Statistics**:
```text
```rust
pub struct RateLimitStats {
pub total_ips: usize, // Number of tracked IPs
pub total_requests: u32, // Total requests made
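// Illustrative fixed-window sketch only; the real middleware keys on client IP
// and uses the RateLimitConfig fields shown above.
use std::collections::HashMap;
use std::net::IpAddr;
use std::time::{Duration, Instant};

pub struct FixedWindowLimiter {
    max_requests: u32,
    window_duration: Duration,
    windows: HashMap<IpAddr, (Instant, u32)>,
}

impl FixedWindowLimiter {
    pub fn allow(&mut self, ip: IpAddr) -> bool {
        let now = Instant::now();
        let entry = self.windows.entry(ip).or_insert((now, 0));
        if now.duration_since(entry.0) >= self.window_duration {
            *entry = (now, 0); // window expired, start a new one
        }
        entry.1 += 1;
        entry.1 <= self.max_requests
    }
}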
@ -261,7 +261,7 @@ pub struct RateLimitStats {
**Usage Example**:
```text
```bash
use provisioning_orchestrator::security_integration::{
SecurityComponents, SecurityConfig
};
@ -292,7 +292,7 @@ let secured_app = apply_security_middleware(app, &security);
### Updated AppState Structure
```text
```rust
pub struct AppState {
// Existing fields
pub task_storage: Arc<dyn TaskStorage>,
@ -317,7 +317,7 @@ pub struct AppState {
### Initialization in main.rs
```text
```rust
#[tokio::main]
async fn main() -> Result<()> {
let args = Args::parse();
@ -398,7 +398,7 @@ async fn main() -> Result<()> {
### Step-by-Step Flow
```text
```bash
1. CLIENT REQUEST
├─ Headers:
│ ├─ Authorization: Bearer <jwt_token>
@ -485,7 +485,7 @@ async fn main() -> Result<()> {
### Environment Variables
```text
```bash
# JWT Configuration
JWT_ISSUER=control-center
JWT_AUDIENCE=orchestrator
@ -513,7 +513,7 @@ AUDIT_RETENTION_DAYS=365
For development/testing, all security can be disabled:
```text
```rust
// In main.rs
let security = if env::var("DEVELOPMENT_MODE").unwrap_or("false".to_string()) == "true" {
SecurityComponents::disabled(audit_logger.clone())
@ -544,7 +544,7 @@ Location: `provisioning/platform/orchestrator/tests/security_integration_tests.r
**Run Tests**:
```text
```bash
cd provisioning/platform/orchestrator
cargo test security_integration_tests
```

View File

@ -54,7 +54,7 @@ http post <http://localhost:9090/workflows/servers/create> {
1. Orchestrator receives and queues:
```text
```rust
// Orchestrator receives HTTP request
async fn create_server_workflow(request) {
let task = Task::new(TaskType::ServerCreate, request);
@ -65,7 +65,7 @@ async fn create_server_workflow(request) {
2. Orchestrator executes via Nushell subprocess:
```text
```rust
// Orchestrator spawns Nushell to run business logic
async fn execute_task(task: Task) {
let output = Command::new("nu")
@ -80,7 +80,7 @@ async fn execute_task(task: Task) {
3. Nushell executes the actual work:
```text
```nushell
# servers/create.nu
export def create-server [name: string] {

View File

@ -18,7 +18,7 @@ functionality.
**Original Issue:**
```text
```bash
Deep call stack in Nushell (template.nu:71)
→ "Type not supported" errors
→ Cannot handle complex nested workflows
@ -35,7 +35,7 @@ Deep call stack in Nushell (template.nu:71)
### How It Works Today (Monorepo)
```text
```bash
┌─────────────────────────────────────────────────────────────┐
│ User │
└───────────────────────────┬─────────────────────────────────┘
@ -80,7 +80,7 @@ Deep call stack in Nushell (template.nu:71)
#### Mode 1: Direct Mode (Simple Operations)
```text
```bash
# No orchestrator needed
provisioning server list
provisioning env
@ -92,7 +92,7 @@ provisioning (CLI) → Nushell scripts → Result
#### Mode 2: Orchestrated Mode (Complex Operations)
```text
```bash
# Uses orchestrator for coordination
provisioning server create --orchestrated
@ -104,7 +104,7 @@ provisioning CLI → Orchestrator API → Task Queue → Nushell executor
#### Mode 3: Workflow Mode (Batch Operations)
```text
```bash
# Complex workflows with dependencies
provisioning workflow submit server-cluster.ncl
@ -128,7 +128,7 @@ provisioning CLI → Orchestrator Workflow Engine → Dependency Graph
**Nushell CLI (`core/nulib/workflows/server_create.nu`):**
```text
```nushell
# Submit server creation workflow to orchestrator
export def server_create_workflow [
infra_name: string
@ -153,7 +153,7 @@ export def server_create_workflow [
**Rust Orchestrator (`platform/orchestrator/src/api/workflows.rs`):**
```text
```rust
// Receive workflow submission from Nushell CLI
#[axum::debug_handler]
async fn create_server_workflow(
@ -183,7 +183,7 @@ async fn create_server_workflow(
**Flow:**
```text
```bash
User → provisioning server create --orchestrated
Nushell CLI prepares task
@ -201,7 +201,7 @@ User can monitor: provisioning workflow monitor <id>
**Orchestrator Task Executor (`platform/orchestrator/src/executor.rs`):**
```text
```rust
// Orchestrator spawns Nushell to execute business logic
pub async fn execute_task(task: Task) -> Result<TaskResult> {
match task.task_type {
@ -233,7 +233,7 @@ pub async fn execute_task(task: Task) -> Result<TaskResult> {
**Flow:**
```text
```bash
Orchestrator task queue has pending task
Executor picks up task
@ -253,7 +253,7 @@ User monitors via: provisioning workflow status <id>
**Nushell Calls Orchestrator API:**
```text
```nushell
# Nushell script checks orchestrator status during execution
export def check-orchestrator-health [] {
let response = (http get http://localhost:9090/health)
@ -276,7 +276,7 @@ export def report-progress [task_id: string, progress: int] {
**Orchestrator Monitors Nushell Execution:**
```text
```rust
// Orchestrator tracks Nushell subprocess
pub async fn execute_with_monitoring(task: Task) -> Result<TaskResult> {
let mut child = Command::new("nu")
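// Sketch of the monitoring idea (the timeout budget and error type are
// assumptions): wait for the spawned Nushell process, kill it if it overruns.
async fn wait_with_timeout(
    mut child: tokio::process::Child,
    budget: std::time::Duration,
) -> Result<std::process::ExitStatus, String> {
    match tokio::time::timeout(budget, child.wait()).await {
        Ok(status) => status.map_err(|e| e.to_string()),
        Err(_) => {
            // Budget exceeded: terminate the subprocess and report a timeout
            child.kill().await.map_err(|e| e.to_string())?;
            Err("nushell task timed out".to_string())
        }
    }
}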
@ -332,7 +332,7 @@ pub async fn execute_with_monitoring(task: Task) -> Result<TaskResult> {
**Runtime Integration (Same as Monorepo):**
```text
```bash
User installs both packages:
provisioning-core-3.2.1 → /usr/local/lib/provisioning/
provisioning-platform-2.5.3 → /usr/local/bin/provisioning-orchestrator
@ -347,7 +347,7 @@ No code dependencies, just runtime coordination!
**Core Package (`provisioning-core`) config:**
```text
```toml
# /usr/local/share/provisioning/config/config.defaults.toml
[orchestrator]
@ -363,7 +363,7 @@ fallback_to_direct = true # Fall back if orchestrator down
**Platform Package (`provisioning-platform`) config:**
```text
```toml
# /usr/local/share/provisioning/platform/config.toml
[orchestrator]
@ -382,7 +382,7 @@ task_timeout_seconds = 3600
**Compatibility Matrix (`provisioning-distribution/versions.toml`):**
```text
```toml
[compatibility.platform."2.5.3"]
core = "^3.2" # Platform 2.5.3 compatible with core 3.2.x
min-core = "3.2.0"
@ -402,7 +402,7 @@ orchestrator-api = "v1"
**No Orchestrator Needed:**
```text
```bash
provisioning server list
# Flow:
@ -414,7 +414,7 @@ CLI → servers/list.nu → Query state → Return results
**Using Orchestrator:**
```text
```bash
provisioning server create --orchestrated --infra wuji
# Detailed Flow:
@ -466,7 +466,7 @@ provisioning server create --orchestrated --infra wuji
**Complex Workflow:**
```text
```bash
provisioning batch submit multi-cloud-deployment.ncl
# Workflow contains:
@ -548,8 +548,7 @@ provisioning batch submit multi-cloud-deployment.ncl
1. **Reliable State Management**
```text
Orchestrator maintains:
``` Orchestrator maintains:
- Task queue (survives crashes)
- Workflow checkpoints (resume on failure)
- Progress tracking (real-time monitoring)
@ -558,8 +557,7 @@ provisioning batch submit multi-cloud-deployment.ncl
1. **Clean Separation**
```text
Orchestrator (Rust): Performance, concurrency, state
``` Orchestrator (Rust): Performance, concurrency, state
Business Logic (Nushell): Providers, taskservs, workflows
Each does what it's best at!
@ -594,7 +592,7 @@ provisioning batch submit multi-cloud-deployment.ncl
**User installs bundle:**
```text
```bash
curl -fsSL https://get.provisioning.io | sh
# Installs:
@ -614,7 +612,7 @@ curl -fsSL https://get.provisioning.io | sh
**Core package expects orchestrator:**
```text
```nushell
# core/nulib/lib_provisioning/orchestrator/client.nu
# Check if orchestrator is running
@ -644,7 +642,7 @@ export def ensure-orchestrator [] {
**Platform package executes core scripts:**
```text
```rust
// platform/orchestrator/src/executor/nushell.rs
pub struct NushellExecutor {
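// Minimal sketch of the executor idea; flags and the JSON output contract are
// assumptions for illustration, not the exact implementation.
use tokio::process::Command;

async fn run_nu_script(script: &str) -> Result<serde_json::Value, String> {
    let output = Command::new("nu")
        .arg("-c")
        .arg(script)
        .output()
        .await
        .map_err(|e| format!("failed to spawn nu: {e}"))?;
    if !output.status.success() {
        return Err(String::from_utf8_lossy(&output.stderr).into_owned());
    }
    // The Core CLI is expected to print a JSON result on stdout
    serde_json::from_slice(&output.stdout).map_err(|e| e.to_string())
}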
@ -689,7 +687,7 @@ impl NushellExecutor {
**`/usr/local/share/provisioning/config/config.defaults.toml`:**
```text
```toml
[orchestrator]
enabled = true
endpoint = "http://localhost:9090"
@ -722,7 +720,7 @@ force_direct = [
**`/usr/local/share/provisioning/platform/config.toml`:**
```text
```toml
[server]
host = "127.0.0.1"
port = 8080
@ -780,7 +778,7 @@ env_vars = { NU_LIB_DIRS = "/usr/local/lib/provisioning" }
The confusing example in the multi-repo doc was **oversimplified**. The real architecture is:
```text
```bash
✅ Orchestrator IS USED and IS ESSENTIAL
✅ Platform (Rust) coordinates Core (Nushell) execution
✅ Loose coupling via CLI + REST API (not code dependencies)

View File

@ -41,7 +41,7 @@ Contains fundamental schemas for provisioning:
#### Discovery Commands
```text
```bash
# Discover available modules
module-loader discover taskservs # List all taskservs
module-loader discover providers --format yaml # List providers as YAML
@ -58,7 +58,7 @@ module-loader discover clusters redis # Search for redis clusters
#### Loading Commands
```text
```bash
# Load modules into workspace
module-loader load taskservs . [kubernetes, cilium, containerd]
module-loader load providers . [upcloud]
@ -81,7 +81,7 @@ module-loader init workspace/infra/production
### New Workspace Layout
```text
```bash
workspace/infra/my-project/
├── kcl.mod # Package dependencies
├── servers.ncl # Main server configuration
@ -110,7 +110,7 @@ workspace/infra/my-project/
#### Before (Old System)
```text
```nickel
# Hardcoded relative paths
import ../../../kcl/server as server
import ../../../extensions/taskservs/kubernetes/kcl/kubernetes as k8s
@ -118,7 +118,7 @@ import ../../../extensions/taskservs/kubernetes/kcl/kubernetes as k8s
#### After (New System)
```text
```nickel
# Package-based imports
import provisioning.server as server
@ -130,7 +130,7 @@ import .taskservs.nclubernetes.kubernetes as k8s
### Building Core Package
```text
```bash
# Build distributable package
./provisioning/tools/kcl-packager.nu build --version 1.0.0
@ -145,21 +145,21 @@ import .taskservs.nclubernetes.kubernetes as k8s
#### Method 1: Local Installation (Recommended for development)
```text
```toml
[dependencies]
provisioning = { path = "~/.kcl/packages/provisioning", version = "0.0.1" }
```
#### Method 2: Git Repository (For distributed teams)
```text
```toml
[dependencies]
provisioning = { git = "https://github.com/your-org/provisioning-kcl", version = "v0.0.1" }
```
#### Method 3: KCL Registry (When available)
```text
```toml
[dependencies]
provisioning = { version = "0.0.1" }
```
@ -168,7 +168,7 @@ provisioning = { version = "0.0.1" }
### 1. New Project Setup
```text
```bash
# Create workspace from template
cp -r provisioning/templates/workspaces/kubernetes ./my-k8s-cluster
cd my-k8s-cluster
@ -187,7 +187,7 @@ provisioning server create --infra . --check
### 2. Extension Development
```text
```bash
# Create new taskserv
mkdir -p extensions/taskservs/my-service/kcl
cd extensions/taskservs/my-service/kcl
@ -202,7 +202,7 @@ module-loader discover taskservs # Should find your service
### 3. Workspace Migration
```text
```bash
# Analyze existing workspace
workspace-migrate.nu workspace/infra/old-project dry-run
@ -215,7 +215,7 @@ module-loader validate workspace/infra/old-project
### 4. Multi-Environment Management
```text
```bash
# Development environment
cd workspace/infra/dev
module-loader load taskservs . [redis, postgres]
@ -231,7 +231,7 @@ module-loader load providers . [upcloud, aws] # Multi-cloud
### Listing and Validation
```text
```bash
# List loaded modules
module-loader list taskservs .
module-loader list providers .
@ -246,7 +246,7 @@ workspace-init.nu . info
### Unloading Modules
```text
```bash
# Remove specific modules
module-loader unload taskservs . redis
module-loader unload providers . aws
@ -256,7 +256,7 @@ module-loader unload providers . aws
### Module Information
```text
```bash
# Get detailed module info
module-loader info taskservs kubernetes
module-loader info providers upcloud
@ -267,7 +267,7 @@ module-loader info clusters buildkit
### Pipeline Example
```text
```nushell
#!/usr/bin/env nu
# deploy-pipeline.nu
@ -292,13 +292,13 @@ provisioning server create --infra $env.WORKSPACE_PATH
#### Module Import Errors
```text
```bash
Error: module not found
```
**Solution**: Verify modules are loaded and regenerate imports
```text
```bash
module-loader list taskservs .
module-loader load taskservs . [kubernetes, cilium, containerd]
```
@ -311,14 +311,14 @@ module-loader load taskservs . [kubernetes, cilium, containerd]
**Solution**: Verify core package installation and kcl.mod configuration
```text
```bash
kcl-packager.nu install --version latest
kcl run --dry-run servers.ncl
```
### Debug Commands
```text
```bash
# Show workspace structure
tree -a workspace/infra/my-project
@ -364,25 +364,25 @@ For existing workspaces, follow these steps:
### 1. Backup Current Workspace
```text
```bash
cp -r workspace/infra/existing workspace/infra/existing-backup
```
### 2. Analyze Migration Requirements
```text
```bash
workspace-migrate.nu workspace/infra/existing dry-run
```
### 3. Perform Migration
```text
```bash
workspace-migrate.nu workspace/infra/existing
```
### 4. Load Required Modules
```text
```bash
cd workspace/infra/existing
module-loader load taskservs . [kubernetes, cilium]
module-loader load providers . [upcloud]
@ -390,14 +390,14 @@ module-loader load providers . [upcloud]
### 5. Test and Validate
```text
```bash
kcl run servers.ncl
module-loader validate .
```
### 6. Deploy
```text
```bash
provisioning server create --infra . --check
```

View File

@ -70,7 +70,7 @@ workflow, and user-friendly distribution.
### 1. Monorepo Structure
```text
```bash
project-provisioning/
├── provisioning/ # CORE SYSTEM (distribution source)
@ -246,7 +246,7 @@ project-provisioning/
**Installation:**
```text
```bash
/usr/local/
├── bin/
│ └── provisioning
@ -275,7 +275,7 @@ project-provisioning/
**Installation:**
```text
```bash
/usr/local/
├── bin/
│ ├── provisioning-orchestrator
@ -297,7 +297,7 @@ project-provisioning/
**Installation:**
```text
```bash
/usr/local/lib/provisioning/extensions/
├── taskservs/
├── clusters/
@ -317,7 +317,7 @@ project-provisioning/
**Installation:**
```text
```bash
~/.config/nushell/plugins/
```
@ -325,7 +325,7 @@ project-provisioning/
#### System Installation (Root)
```text
```bash
/usr/local/
├── bin/
│ ├── provisioning # Main CLI
@ -351,7 +351,7 @@ project-provisioning/
#### User Configuration
```text
```bash
~/.provisioning/
├── config/
│ └── config.user.toml # User overrides
@ -365,7 +365,7 @@ project-provisioning/
#### Project Workspace
```text
```bash
./workspace/
├── infra/ # Infrastructure definitions
│ ├── my-cluster/
@ -384,7 +384,7 @@ project-provisioning/
### Configuration Hierarchy
```text
```bash
Priority (highest to lowest):
1. CLI flags --debug, --infra=my-cluster
2. Runtime overrides PROVISIONING_DEBUG=true
@ -401,7 +401,7 @@ Priority (highest to lowest):
**`provisioning/tools/build/`:**
```text
```bash
build/
├── build-system.nu # Main build orchestrator
├── package-core.nu # Core packaging
@ -417,7 +417,7 @@ build/
**`provisioning/tools/build/build-system.nu`:**
```text
```nushell
#!/usr/bin/env nu
# Build system for provisioning project
@ -597,7 +597,7 @@ Total packages: (($packages | length))"
**`Justfile`:**
```text
```bash
# Provisioning Build System
# Use 'just --list' to see all available commands
@ -729,7 +729,7 @@ audit:
**`distribution/installers/install.nu`:**
```text
```nushell
#!/usr/bin/env nu
# Provisioning installation script
@ -986,7 +986,7 @@ export def "main upgrade" [
**`distribution/installers/install.sh`:**
```text
```bash
#!/usr/bin/env bash
# Provisioning installation script (Bash version)
# This script installs Nushell first, then runs the Nushell installer
@ -1113,7 +1113,7 @@ main "$@"
**Commands:**
```text
```bash
# Backup current state
cp -r /Users/Akasha/project-provisioning /Users/Akasha/project-provisioning.backup
@ -1138,7 +1138,7 @@ fd workspace -t d > workspace-dirs.txt
**Commands:**
```text
```bash
# Create distribution directory
mkdir -p distribution/{packages,installers,registry}
@ -1412,7 +1412,7 @@ rm -rf NO/ wrks/ presentations/
#### Option 1: Clean Migration
```text
```bash
# Backup current workspace
cp -r workspace workspace.backup
@ -1425,7 +1425,7 @@ provisioning workspace migrate --from workspace.backup --to workspace/
#### Option 2: In-Place Migration
```text
```bash
# Run migration script
provisioning migrate --check # Dry run
provisioning migrate # Execute migration
@ -1433,7 +1433,7 @@ provisioning migrate # Execute migration
### For Developers
```text
```bash
# Pull latest changes
git pull origin main

View File

@ -11,7 +11,7 @@ The system solves fundamental technical challenges through architectural innovat
### System Diagram
```text
```bash
┌─────────────────────────────────────────────────────────────────┐
│ User Interface Layer │
├─────────────────┬─────────────────┬─────────────────────────────┤
@ -149,7 +149,7 @@ The system solves fundamental technical challenges through architectural innovat
**Nickel Workflow Definitions**:
```text
```json
{
batch_workflow = {
name = "multi_cloud_deployment",
@ -247,14 +247,14 @@ The system solves fundamental technical challenges through architectural innovat
### Configuration Resolution Flow
```text
```bash
1. Workspace Discovery → 2. Configuration Loading → 3. Hierarchy Merge →
4. Variable Interpolation → 5. Schema Validation → 6. Runtime Application
```
### Workflow Execution Flow
```text
```bash
1. Workflow Submission → 2. Dependency Analysis → 3. Task Scheduling →
4. Parallel Execution → 5. State Tracking → 6. Result Aggregation →
7. Error Handling → 8. Cleanup/Rollback
@ -262,7 +262,7 @@ The system solves fundamental technical challenges through architectural innovat
### Provider Integration Flow
```text
```bash
1. Provider Discovery → 2. Configuration Validation → 3. Authentication →
4. Resource Planning → 5. Operation Execution → 6. State Persistence →
7. Result Reporting

View File

@ -11,7 +11,7 @@
TypeDialog generates **type-safe interactive forms** from configuration schemas with **bidirectional Nickel integration**.
```text
```bash
Nickel Schema
TypeDialog Form (Auto-generated)
@ -27,7 +27,7 @@ Nickel output config (Type-safe)
### Three Layers
```text
```bash
CLI/TUI/Web Layer
TypeDialog Form Engine
@ -39,7 +39,7 @@ Schema Contracts
### Data Flow
```text
```bash
Input (Nickel)
Form Definition (TOML)
@ -59,7 +59,7 @@ Output (JSON/YAML/TOML/Nickel)
### Installation
```text
```bash
# Clone TypeDialog
git clone https://github.com/jesusperezlorenzo/typedialog.git
cd typedialog
@ -73,7 +73,7 @@ cargo install --path ./crates/typedialog
### Verify Installation
```text
```bash
typedialog --version
typedialog --help
```
@ -84,7 +84,7 @@ typedialog --help
### Step 1: Define Nickel Schema
```text
```nickel
# server_config.ncl
let contracts = import "./contracts.ncl" in
let defaults = import "./defaults.ncl" in
@ -101,7 +101,7 @@ let defaults = import "./defaults.ncl" in
### Step 2: Define TypeDialog Form (TOML)
```text
```toml
# server_form.toml
[form]
title = "Server Configuration"
@ -155,13 +155,13 @@ help = "Select applicable tags"
### Step 3: Render Form (CLI)
```text
```bash
typedialog form --config server_form.toml --backend cli
```
**Output**:
```text
```nickel
Server Configuration
Create a new server configuration
@ -179,14 +179,14 @@ Create a new server configuration
### Step 4: Validate Against Nickel Schema
```text
```nickel
# Validation happens automatically
# If input matches Nickel contract, proceeds to output
```
### Step 5: Output to Nickel
```text
```bash
typedialog form
--config server_form.toml
--output nickel
@ -195,7 +195,7 @@ typedialog form
**Output file** (`server_config_output.ncl`):
```text
```json
{
server_name = "web-01",
cpu_cores = 4,
@ -216,7 +216,7 @@ You want an interactive CLI wizard for infrastructure provisioning.
### Step 1: Define Nickel Schema for Infrastructure
```text
```nickel
# infrastructure_schema.ncl
{
InfrastructureConfig = {
@ -245,7 +245,7 @@ You want an interactive CLI wizard for infrastructure provisioning.
### Step 2: Create Comprehensive Form
```text
```toml
# infrastructure_wizard.toml
[form]
title = "Infrastructure Provisioning Wizard"
@ -334,7 +334,7 @@ placeholder = "admin@company.com"
### Step 3: Run Interactive Wizard
```text
```bash
typedialog form
--config infrastructure_wizard.toml
--backend tui
@ -343,7 +343,7 @@ typedialog form
**Output** (`infrastructure_config.ncl`):
```text
```json
{
workspace_name = "production-eu",
deployment_mode = 'enterprise,
@ -358,7 +358,7 @@ typedialog form
### Step 4: Use Output in Infrastructure
```text
```nickel
# main_infrastructure.ncl
let config = import "./infrastructure_config.ncl" in
let schemas = import "../../provisioning/schemas/main.ncl" in
@ -398,7 +398,7 @@ let schemas = import "../../provisioning/schemas/main.ncl" in
### Form Definition (Advanced)
```text
```toml
# server_advanced_form.toml
[form]
title = "Server Configuration"
@ -532,7 +532,7 @@ options = ["production", "staging", "testing", "development"]
### Output Structure
```text
```json
{
# Basic
server_name = "web-prod-01",
@ -562,7 +562,7 @@ options = ["production", "staging", "testing", "development"]
### TypeDialog REST Endpoints
```text
```bash
# Start TypeDialog server
typedialog server --port 8080
@ -574,7 +574,7 @@ curl -X POST http://localhost:8080/forms
### Response Format
```text
```json
{
"form_id": "srv_abc123",
"status": "rendered",
@ -592,7 +592,7 @@ curl -X POST http://localhost:8080/forms
### Submit Form
```text
```bash
curl -X POST http://localhost:8080/forms/srv_abc123/submit
-H "Content-Type: application/json"
-d '{
@ -607,7 +607,7 @@ curl -X POST http://localhost:8080/forms/srv_abc123/submit
### Response
```text
```json
{
"status": "success",
"validation": "passed",
@ -631,7 +631,7 @@ curl -X POST http://localhost:8080/forms/srv_abc123/submit
TypeDialog validates user input against Nickel contracts:
```text
```nickel
# Nickel contract
ServerConfig = {
cpu_cores | Number, # Must be number
@ -645,7 +645,7 @@ ServerConfig = {
### Validation Rules in Form
```text
```toml
[[fields]]
name = "cpu_cores"
type = "number"
@ -661,7 +661,7 @@ help = "Must be 1-32 cores"
### Use Case: Infrastructure Initialization
```text
```bash
# 1. User runs initialization
provisioning init --wizard
@ -679,7 +679,7 @@ provisioning init --wizard
### Implementation in Nushell
```text
```nushell
# provisioning/core/nulib/provisioning_init.nu
def provisioning_init_wizard [] {
@ -714,7 +714,7 @@ def provisioning_init_wizard [] {
Show/hide fields based on user selections:
```text
```toml
[[fields]]
name = "backup_retention"
label = "Backup Retention (days)"
@ -726,7 +726,7 @@ visible_if = "enable_backup == true" # Only shown if backup enabled
Set defaults based on other fields:
```text
```toml
[[fields]]
name = "deployment_mode"
type = "select"
@ -741,7 +741,7 @@ default_from = "deployment_mode" # Can reference other fields
### Custom Validation
```text
```toml
[[fields]]
name = "memory_gb"
type = "number"
@ -755,7 +755,7 @@ help = "Memory must be at least 2 GB per CPU core"
TypeDialog can output to multiple formats:
```text
```bash
# Output to Nickel (recommended for IaC)
typedialog form --config form.toml --output nickel
@ -777,7 +777,7 @@ TypeDialog supports three rendering backends:
### 1. CLI (Command-line prompts)
```text
```bash
typedialog form --config form.toml --backend cli
```
@ -786,7 +786,7 @@ typedialog form --config form.toml --backend cli
### 2. TUI (Terminal User Interface - Ratatui)
```text
```bash
typedialog form --config form.toml --backend tui
```
@ -795,7 +795,7 @@ typedialog form --config form.toml --backend tui
### 3. Web (HTTP Server - Axum)
```text
```bash
typedialog form --config form.toml --backend web --port 3000
# Opens http://localhost:3000
```
@ -813,7 +813,7 @@ typedialog form --config form.toml --backend web --port 3000
**Solution**: Verify field definitions match Nickel schema:
```text
```toml
# Form field
[[fields]]
name = "cpu_cores" # Must match Nickel field name
@ -826,7 +826,7 @@ type = "number" # Must match Nickel type
**Solution**: Add help text and validation rules:
```text
```toml
[[fields]]
name = "cpu_cores"
validation_pattern = "^[1-9][0-9]*$"
@ -839,7 +839,7 @@ help = "Must be positive integer"
**Solution**: Ensure all required fields in form:
```text
```toml
[[fields]]
name = "required_field"
required = true # User must provide value
@ -851,7 +851,7 @@ required = true # User must provide value
### Step 1: Define Nickel Schema
```text
```nickel
# workspace_schema.ncl
{
workspace = {
@ -866,7 +866,7 @@ required = true # User must provide value
### Step 2: Define Form
```text
```toml
# workspace_form.toml
[[fields]]
name = "name"
@ -895,14 +895,14 @@ required = true
### Step 3: User Interaction
```text
```bash
$ typedialog form --config workspace_form.toml --backend tui
# User fills form interactively
```
### Step 4: Output
```text
```json
{
workspace = {
name = "production",
@ -916,7 +916,7 @@ $ typedialog form --config workspace_form.toml --backend tui
### Step 5: Use in Provisioning
```text
```nickel
# main.ncl
let config = import "./workspace.ncl" in
let schemas = import "provisioning/schemas/main.ncl" in

View File

@ -10,7 +10,7 @@ The new configuration system includes comprehensive schema validation to catch e
Ensures all required fields are present:
```text
```toml
# Schema definition
[required]
fields = ["name", "version", "enabled"]
@ -30,7 +30,7 @@ version = "1.0.0"
Validates field types:
```text
```toml
# Schema
[fields.port]
type = "int"
@ -54,7 +54,7 @@ port = "8080" # Error: Expected int, got string
Restricts values to predefined set:
```text
```toml
# Schema
[fields.environment]
type = "string"
@ -71,7 +71,7 @@ environment = "production" # Error: Must be one of: dev, staging, prod
Validates numeric ranges:
```text
```toml
# Schema
[fields.port]
type = "int"
@ -92,7 +92,7 @@ port = 70000 # Error: Must be <= 65535
Validates string patterns using regex:
```text
```toml
# Schema
[fields.email]
type = "string"
@ -109,7 +109,7 @@ email = "not-an-email" # Error: Does not match pattern
Warns about deprecated configuration:
```text
```toml
# Schema
[deprecated]
fields = ["old_field"]
@ -125,7 +125,7 @@ old_field = "value" # Warning: old_field is deprecated. Use new_field instead.
### Command Line
```text
```bash
# Validate workspace config
provisioning workspace config validate
@ -141,7 +141,7 @@ provisioning workspace config validate --verbose
### Programmatic Usage
```text
```nushell
use provisioning/core/nulib/lib_provisioning/config/schema_validator.nu *
# Load config
@ -171,7 +171,7 @@ if ($result.warnings | length) > 0 {
### Pretty Print Results
```text
```nushell
# Validate and print formatted results
let result = (validate-workspace-config $config)
print-validation-results $result
@ -183,7 +183,7 @@ print-validation-results $result
File: `/Users/Akasha/project-provisioning/provisioning/config/workspace.schema.toml`
```text
```toml
[required]
fields = ["workspace", "paths"]
@ -222,7 +222,7 @@ enum = ["debug", "info", "warn", "error"]
File: `/Users/Akasha/project-provisioning/provisioning/extensions/providers/aws/config.schema.toml`
```text
```toml
[required]
fields = ["provider", "credentials"]
@ -279,7 +279,7 @@ old_region_field = "provider.region"
File: `/Users/Akasha/project-provisioning/provisioning/platform/orchestrator/config.schema.toml`
```text
```toml
[required]
fields = ["service", "server"]
@ -325,7 +325,7 @@ type = "string"
File: `/Users/Akasha/project-provisioning/provisioning/core/services/kms/config.schema.toml`
```text
```toml
[required]
fields = ["kms", "encryption"]
@ -372,7 +372,7 @@ old_kms_type = "kms.provider"
### 1. Development
```text
```bash
# Create new config
vim ~/workspaces/dev/config/provisioning.yaml
@ -386,7 +386,7 @@ provisioning workspace config validate
### 2. CI/CD Pipeline
```text
```yaml
# GitLab CI
validate-config:
stage: validate
@ -402,7 +402,7 @@ validate-config:
### 3. Pre-Deployment
```text
```bash
# Validate all configurations before deployment
provisioning workspace config validate --verbose
provisioning provider validate --all
@ -418,7 +418,7 @@ fi
### Clear Error Format
```text
```bash
❌ Validation failed
Errors:
@ -445,7 +445,7 @@ Each error includes:
### Pattern 1: Hostname Validation
```text
```toml
[fields.hostname]
type = "string"
pattern = "^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$"
@ -453,7 +453,7 @@ pattern = "^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$"
### Pattern 2: Email Validation
```text
```toml
[fields.email]
type = "string"
pattern = "^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\\.[a-zA-Z]{2,}$"
@ -461,7 +461,7 @@ pattern = "^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\\.[a-zA-Z]{2,}$"
### Pattern 3: Semantic Version
```text
```toml
[fields.version]
type = "string"
pattern = "^\\d+\\.\\d+\\.\\d+(-[a-zA-Z0-9]+)?$"
@ -469,7 +469,7 @@ pattern = "^\\d+\\.\\d+\\.\\d+(-[a-zA-Z0-9]+)?$"
### Pattern 4: URL Validation
```text
```toml
[fields.url]
type = "string"
pattern = "^https?://[a-zA-Z0-9.-]+(:[0-9]+)?(/.*)?$"
@ -477,7 +477,7 @@ pattern = "^https?://[a-zA-Z0-9.-]+(:[0-9]+)?(/.*)?$"
### Pattern 5: IPv4 Address
```text
```toml
[fields.ip_address]
type = "string"
pattern = "^(?:[0-9]{1,3}\\.){3}[0-9]{1,3}$"
@ -485,7 +485,7 @@ pattern = "^(?:[0-9]{1,3}\\.){3}[0-9]{1,3}$"
### Pattern 6: AWS Resource ID
```text
```toml
[fields.instance_id]
type = "string"
pattern = "^i-[a-f0-9]{8,17}$"
@ -503,14 +503,14 @@ pattern = "^vpc-[a-f0-9]{8,17}$"
### Unit Tests
```text
```bash
# Run validation test suite
nu provisioning/tests/config_validation_tests.nu
```
### Integration Tests
```text
```bash
# Test with real configs
provisioning test validate --workspace dev
provisioning test validate --workspace staging
@ -519,7 +519,7 @@ provisioning test validate --workspace prod
### Custom Validation
```text
```nushell
# Create custom validation function
def validate-custom-config [config: record] {
let result = (validate-workspace-config $config)
@ -543,7 +543,7 @@ def validate-custom-config [config: record] {
### 1. Validate Early
```text
```bash
# Validate during development
provisioning workspace config validate
@ -552,7 +552,7 @@ provisioning workspace config validate
### 2. Use Strict Schemas
```text
```toml
# Be explicit about types and constraints
[fields.port]
type = "int"
@ -564,7 +564,7 @@ max = 65535
### 3. Document Patterns
```text
```toml
# Include examples in schema
[fields.email]
type = "string"
@ -574,7 +574,7 @@ pattern = "^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\\.[a-zA-Z]{2,}$"
### 4. Handle Deprecation
```text
```toml
# Always provide replacement guidance
[deprecated_replacements]
old_field = "new_field" # Clear migration path
@ -582,7 +582,7 @@ old_field = "new_field" # Clear migration path
### 5. Test Schemas
```text
```toml
# Include test cases in comments
# Valid: "admin@example.com"
# Invalid: "not-an-email"
@ -592,7 +592,7 @@ old_field = "new_field" # Clear migration path
### Schema File Not Found
```text
```bash
# Error: Schema file not found: /path/to/schema.toml
# Solution: Ensure schema exists
@ -601,7 +601,7 @@ ls -la /Users/Akasha/project-provisioning/provisioning/config/*.schema.toml
### Pattern Not Matching
```text
```bash
# Error: Field hostname does not match pattern
# Debug: Test pattern separately
@ -610,7 +610,7 @@ echo "my-hostname" | grep -E "^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$"
### Type Mismatch
```text
```bash
# Error: Expected int, got string
# Check config

View File

@ -28,7 +28,7 @@ This guide describes the metadata-driven authentication system implemented over
### System Components
```text
```bash
┌─────────────────────────────────────────────────────────────┐
│ User Command │
└────────────────────────────────┬──────────────────────────────┘
@ -89,7 +89,7 @@ This guide describes the metadata-driven authentication system implemented over
### Installation Steps
```text
```bash
# 1. Clone or update repository
git clone https://github.com/your-org/project-provisioning.git
cd project-provisioning
@ -113,7 +113,7 @@ nu tests/test-metadata-cache-benchmark.nu
### Basic Commands
```text
```bash
# Initialize authentication
provisioning login
@ -135,7 +135,7 @@ provisioning server create --name test --check
### Authentication Flow
```text
```bash
# 1. Login (required for production operations)
$ provisioning login
Username: alice@example.com
@ -160,7 +160,7 @@ Auth check: Check auth for destructive operation
### Check Mode (Bypass Auth for Testing)
```text
```bash
# Dry-run without auth checks
provisioning server create --name test --check
@ -172,7 +172,7 @@ Dry-run mode - no changes will be made
### Non-Interactive CI/CD Mode
```text
```bash
# Automated mode - skip confirmations
provisioning server create --name web-01 --yes
@ -189,7 +189,7 @@ PROVISIONING_NON_INTERACTIVE=1 provisioning server create --name web-02 --yes
**Old Pattern** (Before Fase 5):
```text
```nushell
# Hardcoded auth check
let response = (input "Delete server? (yes/no): ")
if $response != "yes" { exit 1 }
@ -203,7 +203,7 @@ export def delete-server [name: string, --yes] {
**New Pattern** (After Fase 5):
```text
```nushell
# Metadata header
# [command]
# name = "server delete"
@ -226,7 +226,7 @@ export def delete-server [name: string, --yes] {
1. Add metadata header after shebang:
```text
```nushell
#!/usr/bin/env nu
# [command]
# name = "server create"
@ -241,7 +241,7 @@ export def create-server [name: string] {
1. Register in `provisioning/schemas/main.ncl`:
```text
```nickel
let server_create = {
name = "server create",
domain = "infrastructure",
@ -259,7 +259,7 @@ server_create
1. Handler integration (happens in dispatcher):
```text
```bash
# Dispatcher automatically:
# 1. Loads metadata for "server create"
# 2. Validates auth based on requirements
@ -269,7 +269,7 @@ server_create
### Phase 3: Validating Migration
```text
```bash
# Validate metadata headers
nu utils/validate-metadata-headers.nu
@ -292,7 +292,7 @@ nu utils/search-scripts.nu list
**Step 1: Create metadata in main.ncl**
```text
```nickel
let new_feature_command = {
name = "feature command",
domain = "infrastructure",
@ -310,7 +310,7 @@ new_feature_command
**Step 2: Add metadata header to script**
```text
```nushell
#!/usr/bin/env nu
# [command]
# name = "feature command"
@ -325,7 +325,7 @@ export def feature-command [param: string] {
**Step 3: Implement handler function**
```text
```nushell
# Handler registered in dispatcher
export def handle-feature-command [
action: string
@ -342,7 +342,7 @@ export def handle-feature-command [
**Step 4: Test with check mode**
```text
```bash
# Dry-run without auth
provisioning feature command --check
@ -389,7 +389,7 @@ provisioning feature command --yes
**Pattern 1: For Long Operations**
```text
```nushell
# Use orchestrator for operations >2 seconds
if (get-operation-duration "my-operation") > 2000 {
submit-to-orchestrator $operation
@ -399,7 +399,7 @@ if (get-operation-duration "my-operation") > 2000 {
**Pattern 2: For Batch Operations**
```text
```bash
# Use batch workflows for multiple operations
nu -c "
use core/nulib/workflows/batch.nu *
@ -409,7 +409,7 @@ batch submit workflows/batch-deploy.ncl --parallel-limit 5
**Pattern 3: For Metadata Overhead**
```text
```bash
# Cache hit rate optimization
# Current: 40-100x faster with warm cache
# Target: >95% cache hit rate
@ -420,7 +420,7 @@ batch submit workflows/batch-deploy.ncl --parallel-limit 5
### Running Tests
```text
```bash
# End-to-End Integration Tests
nu tests/test-fase5-e2e.nu
@ -456,7 +456,7 @@ for test in tests/test-*.nu { nu $test }
**Solution**: Ensure metadata is registered in `main.ncl`
```text
```bash
# Check if command is in metadata
grep "command_name" provisioning/schemas/main.ncl
```
@ -465,7 +465,7 @@ grep "command_name" provisioning/schemas/main.ncl
**Solution**: Verify user has required permission level
```text
```bash
# Check current user permissions
provisioning auth whoami
@ -480,7 +480,7 @@ get-command-metadata 'server create'
**Solution**: Check cache status
```text
```bash
# Force cache reload
rm ~/.cache/provisioning/command_metadata.json
@ -492,7 +492,7 @@ nu tests/test-metadata-cache-benchmark.nu
**Solution**: Run compliance check
```text
```bash
# Validate Nushell compliance
nu --ide-check 100 <file.nu>
@ -514,7 +514,7 @@ grep "let mut" <file.nu> # Should be empty
### Real-World Impact
```text
```bash
Scenario: 20 sequential commands
Without cache: 20 × 200 ms = 4 seconds
With cache: 1 × 200 ms + 19 × 5 ms = 295 ms

View File

@ -30,7 +30,7 @@ The build system is a comprehensive, Makefile-based solution that orchestrates:
## Quick Start
```text
```bash
# Navigate to build system
cd src/tools
@ -61,7 +61,7 @@ make status
**Variables**:
```text
```makefile
# Project metadata
PROJECT_NAME := provisioning
VERSION := $(git describe --tags --always --dirty)
@ -95,7 +95,7 @@ PARALLEL := true
**`make build-platform`** - Build platform binaries for all targets
```text
```bash
make build-platform
# Equivalent to:
nu tools/build/compile-platform.nu
@ -107,7 +107,7 @@ nu tools/build/compile-platform.nu
**`make build-core`** - Bundle core Nushell libraries
```text
```bash
make build-core
# Equivalent to:
nu tools/build/bundle-core.nu
@ -119,7 +119,7 @@ nu tools/build/bundle-core.nu
**`make validate-nickel`** - Validate and compile Nickel schemas
```text
```bash
make validate-nickel
# Equivalent to:
nu tools/build/validate-nickel.nu
@ -142,7 +142,7 @@ nu tools/build/validate-nickel.nu
**`make dist-generate`** - Generate complete distributions
```text
```bash
make dist-generate
# Advanced usage:
make dist-generate PLATFORMS=linux-amd64,macos-amd64 VARIANTS=complete
@ -176,7 +176,7 @@ make dist-generate PLATFORMS=linux-amd64,macos-amd64 VARIANTS=complete
**`make release`** - Create a complete release (requires VERSION)
```text
```bash
make release VERSION=2.1.0
```
@ -217,7 +217,7 @@ Features:
**`make dev-build`** - Quick development build
```text
```bash
make dev-build
# Fast build with minimal validation
```
@ -250,7 +250,7 @@ make dev-build
**`make docs`** - Generate documentation
```text
```bash
make docs
# Generates API docs, user guides, and examples
```
@ -265,7 +265,7 @@ make docs
**`make clean`** - Clean all build artifacts
```text
```bash
make clean
# Removes all build, distribution, and package directories
```
@ -290,7 +290,7 @@ make clean
**`make status`** - Show build system status
```text
```bash
make status
# Output:
# Build System Status
@ -345,21 +345,21 @@ make status
**`make linux`** - Build for Linux only
```text
```bash
make linux
# Sets PLATFORMS=linux-amd64
```
**`make macos`** - Build for macOS only
```text
```bash
make macos
# Sets PLATFORMS=macos-amd64
```
**`make windows`** - Build for Windows only
```text
```bash
make windows
# Sets PLATFORMS=windows-amd64
```
@ -368,7 +368,7 @@ make windows
**`make debug`** - Build with debug information
```text
```bash
make debug
# Sets BUILD_MODE=debug VERBOSE=true
```
@ -398,7 +398,7 @@ All build tools are implemented as Nushell scripts with comprehensive parameter
**Usage**:
```text
```nushell
nu compile-platform.nu [options]
Options:
@ -412,7 +412,7 @@ Options:
**Example**:
```text
```nushell
nu compile-platform.nu
--target x86_64-apple-darwin
--release
@ -435,7 +435,7 @@ nu compile-platform.nu
**Usage**:
```text
```nushell
nu bundle-core.nu [options]
Options:
@ -468,7 +468,7 @@ Options:
**Usage**:
```text
```nushell
nu validate-nickel.nu [options]
Options:
@ -490,7 +490,7 @@ Options:
**Usage**:
```text
```nushell
nu test-distribution.nu [options]
Options:
@ -514,7 +514,7 @@ Options:
**Usage**:
```text
```nushell
nu clean-build.nu [options]
Options:
@ -544,7 +544,7 @@ Options:
**Usage**:
```text
```nushell
nu generate-distribution.nu [command] [options]
Commands:
@ -566,7 +566,7 @@ Options:
**Advanced Examples**:
```text
```bash
# Complete multi-platform release
nu generate-distribution.nu
--version 2.1.0
@ -599,7 +599,7 @@ nu generate-distribution.nu status
**Usage**:
```text
```nushell
nu create-installer.nu DISTRIBUTION_DIR [options]
Options:
@ -660,7 +660,7 @@ Options:
**Usage**:
```text
```nushell
nu create-release.nu [options]
Options:
@ -694,7 +694,7 @@ Options:
**Install Rust Targets**:
```text
```bash
# Install additional targets
rustup target add x86_64-apple-darwin
rustup target add x86_64-pc-windows-gnu
@ -706,7 +706,7 @@ rustup target add aarch64-apple-darwin
**macOS Cross-Compilation**:
```text
```bash
# Install osxcross toolchain
brew install FiloSottile/musl-cross/musl-cross
brew install mingw-w64
@ -714,7 +714,7 @@ brew install mingw-w64
**Windows Cross-Compilation**:
```text
```bash
# Install Windows dependencies
brew install mingw-w64
# or on Linux:
@ -725,7 +725,7 @@ sudo apt-get install gcc-mingw-w64
**Single Platform**:
```text
```bash
# Build for macOS from Linux
make build-platform RUST_TARGET=x86_64-apple-darwin
@ -735,7 +735,7 @@ make build-platform RUST_TARGET=x86_64-pc-windows-gnu
**Multiple Platforms**:
```text
```bash
# Build for all configured platforms
make build-cross
@ -745,7 +745,7 @@ make build-cross PLATFORMS=linux-amd64,macos-amd64,windows-amd64
**Platform-Specific Targets**:
```text
```bash
# Quick platform builds
make linux # Linux AMD64
make macos # macOS AMD64
@ -775,7 +775,7 @@ make windows # Windows AMD64
**Check Dependencies**:
```text
```bash
make info
# Shows versions of all required tools
@ -789,7 +789,7 @@ make info
**Install Missing Dependencies**:
```text
```bash
# Install Nushell
cargo install nu
@ -810,7 +810,7 @@ cargo install cross
**Build Cache Management**:
```text
```bash
# Clean Cargo cache
cargo clean
@ -829,7 +829,7 @@ make clean SCOPE=cache
**Error**: `linker 'cc' not found`
```text
```bash
# Solution: Install build essentials
sudo apt-get install build-essential # Linux
xcode-select --install # macOS
@ -837,14 +837,14 @@ xcode-select --install # macOS
**Error**: `target not found`
```text
```bash
# Solution: Install target
rustup target add x86_64-unknown-linux-gnu
```
**Error**: Cross-compilation linking errors
```text
```bash
# Solution: Use cross instead of cargo
cargo install cross
make build-platform CROSS=true
@ -854,7 +854,7 @@ make build-platform CROSS=true
**Error**: `command not found`
```text
```bash
# Solution: Ensure Nushell is in PATH
which nu
export PATH="$HOME/.cargo/bin:$PATH"
@ -862,14 +862,14 @@ export PATH="$HOME/.cargo/bin:$PATH"
**Error**: Permission denied
```text
```bash
# Solution: Make scripts executable
chmod +x src/tools/build/*.nu
```
**Error**: Module not found
```text
```bash
# Solution: Check working directory
cd src/tools
nu build/compile-platform.nu --help
@ -879,7 +879,7 @@ nu build/compile-platform.nu --help
**Error**: `nickel command not found`
```text
```bash
# Solution: Install Nickel
cargo install nickel
# or
@ -888,7 +888,7 @@ brew install nickel
**Error**: Schema validation failed
```text
```bash
# Solution: Check Nickel syntax
nickel fmt schemas/
nickel check schemas/
@ -900,7 +900,7 @@ nickel check schemas/
**Optimizations**:
```text
```bash
# Enable parallel builds
make build-all PARALLEL=true
@ -913,7 +913,7 @@ export CARGO_BUILD_JOBS=8
**Cargo Configuration** (`~/.cargo/config.toml`):
```text
```toml
[build]
jobs = 8
@ -925,7 +925,7 @@ linker = "lld"
**Solutions**:
```text
```bash
# Reduce parallel jobs
export CARGO_BUILD_JOBS=2
@ -942,7 +942,7 @@ make clean-dist
**Validation**:
```text
```bash
# Test distribution
make test-dist
@ -954,7 +954,7 @@ nu src/tools/package/validate-package.nu dist/
**Optimizations**:
```text
```bash
# Strip binaries
make package-binaries STRIP=true
@ -969,7 +969,7 @@ make dist-generate VARIANTS=minimal
**Enable Debug Logging**:
```text
```bash
# Set environment
export PROVISIONING_DEBUG=true
export RUST_LOG=debug
@ -983,7 +983,7 @@ make build-all VERBOSE=true
**Debug Information**:
```text
```bash
# Show debug information
make debug-info
@ -1000,7 +1000,7 @@ make info
**Example Workflow** (`.github/workflows/build.yml`):
```text
```yaml
name: Build and Test
on: [push, pull_request]
@ -1034,7 +1034,7 @@ jobs:
**Release Workflow**:
```text
```yaml
name: Release
on:
push:
@ -1061,7 +1061,7 @@ jobs:
**Test CI Pipeline Locally**:
```text
```bash
# Run CI build pipeline
make ci-build

View File

@ -19,7 +19,7 @@ work with this architecture.
### Architecture Components
```text
```bash
provisioning/core/nulib/
├── provisioning (211 lines) - Main entry point
├── main_provisioning/
@ -58,7 +58,7 @@ Commands are organized by domain. Choose the appropriate handler:
Edit `provisioning/core/nulib/main_provisioning/commands/infrastructure.nu`:
```text
```nushell
# Add to the handle_infrastructure_command match statement
export def handle_infrastructure_command [
command: string
@ -102,7 +102,7 @@ If you want shortcuts like `provisioning s status`:
Edit `provisioning/core/nulib/main_provisioning/dispatcher.nu`:
```text
```nushell
export def get_command_registry []: nothing -> record {
{
# Infrastructure commands
@ -127,7 +127,7 @@ Let's say you want to add better error handling to the taskserv command:
**Before:**
```text
```nushell
def handle_taskserv [ops: string, flags: record] {
let args = build_module_args $flags $ops
run_module $args "taskserv" --exec
@ -136,7 +136,7 @@ def handle_taskserv [ops: string, flags: record] {
**After:**
```text
```nushell
def handle_taskserv [ops: string, flags: record] {
# Validate taskserv name if provided
let first_arg = ($ops | split row " " | get -o 0)
@ -163,7 +163,7 @@ def handle_taskserv [ops: string, flags: record] {
The `flags.nu` module provides centralized flag handling:
```text
```nushell
# Parse all flags into normalized record
let parsed_flags = (parse_common_flags {
version: $version, v: $v, info: $info,
@ -210,7 +210,7 @@ If you need to add a new flag:
**Example: Adding `--timeout` flag**
```text
```nushell
# 1. In provisioning main file (parameter list)
def main [
# ... existing parameters
@ -253,7 +253,7 @@ export def build_module_args [flags: record, extra: string = ""]: nothing -> str
Edit `provisioning/core/nulib/main_provisioning/dispatcher.nu`:
```text
```nushell
export def get_command_registry []: nothing -> record {
{
# ... existing shortcuts
@ -273,7 +273,7 @@ export def get_command_registry []: nothing -> record {
### Running the Test Suite
```text
```bash
# Run comprehensive test suite
nu tests/test_provisioning_refactor.nu
```
@ -293,7 +293,7 @@ The test suite validates:
Edit `tests/test_provisioning_refactor.nu`:
```text
```nushell
# Add your test function
export def test_my_new_feature [] {
print "
@ -318,7 +318,7 @@ export def main [] {
### Manual Testing
```text
```bash
# Test command execution
provisioning/core/cli/provisioning my-command test --check
@ -336,7 +336,7 @@ provisioning/core/cli/provisioning help my-command # Bi-directional
**Use Case**: Command just needs to execute a module with standard flags
```text
```nushell
def handle_simple_command [ops: string, flags: record] {
let args = build_module_args $flags $ops
run_module $args "module_name" --exec
@ -347,7 +347,7 @@ def handle_simple_command [ops: string, flags: record] {
**Use Case**: Need to validate input before execution
```text
```nushell
def handle_validated_command [ops: string, flags: record] {
# Validate
let first_arg = ($ops | split row " " | get -o 0)
@ -367,7 +367,7 @@ def handle_validated_command [ops: string, flags: record] {
**Use Case**: Command has multiple subcommands (like `server create`, `server delete`)
```text
```nushell
def handle_complex_command [ops: string, flags: record] {
let subcommand = ($ops | split row " " | get -o 0)
let rest_ops = ($ops | split row " " | skip 1 | str join " ")
@ -389,7 +389,7 @@ def handle_complex_command [ops: string, flags: record] {
**Use Case**: Command behavior changes based on flags
```text
```nushell
def handle_flag_routed_command [ops: string, flags: record] {
if $flags.check_mode {
# Dry-run mode
@ -415,7 +415,7 @@ Each handler should do **one thing well**:
### 2. Use Descriptive Error Messages
```text
```nushell
# ❌ Bad
print "Error"
@ -434,7 +434,7 @@ print "Use 'provisioning taskserv list' to see all available taskservs"
Don't repeat code - use centralized functions:
```text
```nushell
# ❌ Bad: Repeating flag handling
def handle_bad [ops: string, flags: record] {
let use_check = if $flags.check_mode { "--check " } else { "" }
@ -479,7 +479,7 @@ Before committing:
**Fix**: Use relative imports with `.nu` extension:
```text
```nushell
# ✅ Correct
use ../flags.nu *
use ../../lib_provisioning *
@ -495,7 +495,7 @@ use lib_provisioning *
**Fix**: Use proper Nushell 0.107 type signature:
```text
```nushell
# ✅ Correct
export def my_function [param: string]: nothing -> string {
"result"
@ -513,7 +513,7 @@ export def my_function [param: string] -> string {
**Fix**: Add to `dispatcher.nu:get_command_registry`:
```text
```nushell
"myshortcut" => "domain command"
```
@ -523,7 +523,7 @@ export def my_function [param: string] -> string {
**Fix**: Use centralized flag builder:
```text
```nushell
let args = build_module_args $flags $ops
run_module $args "module" --exec
```
@ -532,7 +532,7 @@ run_module $args "module" --exec
### File Locations
```text
```bash
provisioning/core/nulib/
├── provisioning - Main entry, flag definitions
├── main_provisioning/
@ -551,7 +551,7 @@ docs/
### Key Functions
```text
```nushell
# In flags.nu
parse_common_flags [flags: record]: nothing -> record
build_module_args [flags: record, extra: string = ""]: nothing -> string
@ -575,7 +575,7 @@ handle_*_command [command: string, ops: string, flags: record]
### Testing Commands
```text
```bash
# Run full test suite
nu tests/test_provisioning_refactor.nu

View File

@ -19,7 +19,7 @@ This guide includes:
### Essential Commands
```text
```bash
# System status
provisioning status
provisioning health

View File

@ -44,7 +44,7 @@ to signal cancellation and let each layer of the call stack handle it gracefully
### 1. Helper Functions (ssh.nu:11-32)
```text
```nushell
def check_sudo_cached []: nothing -> bool {
let result = (do --ignore-errors { ^sudo -n true } | complete)
$result.exit_code == 0
@ -71,7 +71,7 @@ def run_sudo_with_interrupt_check [
### 2. Pre-emptive Warning (ssh.nu:155-160)
```text
```nushell
if $server.fix_local_hosts and not (check_sudo_cached) {
print "
⚠ Sudo access required for --fix-local-hosts"
@ -87,7 +87,7 @@ if $server.fix_local_hosts and not (check_sudo_cached) {
All sudo commands wrapped with detection:
```text
```nushell
let result = (do --ignore-errors { ^sudo <command> } | complete)
if $result.exit_code == 1 and ($result.stderr | str contains "password is required") {
print "
@ -102,7 +102,7 @@ if $result.exit_code == 1 and ($result.stderr | str contains "password is requir
Using Nushell's `reduce` instead of mutable variables:
```text
```nushell
let all_succeeded = ($settings.data.servers | reduce -f true { |server, acc|
if $text_match == null or $server.hostname == $text_match {
let result = (on_server_ssh $settings $server $ip_type $request_from $run)
@ -117,7 +117,7 @@ let all_succeeded = ($settings.data.servers | reduce -f true { |server, acc|
### 5. Caller Handling (create.nu:262-266, generate.nu:269-273)
```text
```nushell
let ssh_result = (on_server_ssh $settings $server "pub" "create" false)
if not $ssh_result {
_print "
@ -130,7 +130,7 @@ if not $ssh_result {
## Error Flow Diagram
```text
```bash
User presses CTRL-C during password prompt
sudo exits with code 1, stderr: "password is required"
@ -162,7 +162,7 @@ Clean exit, no cryptic errors
Captures stdout, stderr, and the exit code without throwing:
```text
```nushell
let result = (do --ignore-errors { ^sudo command } | complete)
# result = { stdout: "...", stderr: "...", exit_code: 1 }
```
@ -171,7 +171,7 @@ let result = (do --ignore-errors { ^sudo command } | complete)
Instead of mutable variables in loops:
```text
```nushell
# ❌ BAD - mutable capture in closure
mut all_succeeded = true
$servers | each { |s|
@ -186,7 +186,7 @@ let all_succeeded = ($servers | reduce -f true { |s, acc|
### 3. Early Returns for Error Handling
```text
```nushell
if not $condition {
print "Error message"
return false
@ -198,7 +198,7 @@ if not $condition {
### Scenario 1: CTRL-C During First Sudo Command
```text
```bash
provisioning -c server create
# Password: [CTRL-C]
@ -210,7 +210,7 @@ provisioning -c server create
### Scenario 2: Pre-cached Credentials
```text
```bash
sudo -v
provisioning -c server create
@ -219,7 +219,7 @@ provisioning -c server create
### Scenario 3: Wrong Password 3 Times
```text
```bash
provisioning -c server create
# Password: [wrong]
# Password: [wrong]
@ -230,7 +230,7 @@ provisioning -c server create
### Scenario 4: Multiple Servers, Cancel on Second
```text
```bash
# If creating multiple servers and CTRL-C on second:
# - First server completes successfully
# - Second server shows cancellation message
@ -250,7 +250,7 @@ When adding new sudo commands to the codebase:
Example template:
```text
```nushell
let result = (do --ignore-errors { ^sudo new-command } | complete)
if $result.exit_code == 1 and ($result.stderr | str contains "password is required") {
print "

View File

@ -42,7 +42,7 @@ hierarchical TOML configuration system with comprehensive validation and interpo
The configuration system implements a clear precedence hierarchy (lowest to highest precedence):
```text
```bash
Configuration Hierarchy (Low → High Precedence)
┌─────────────────────────────────────────────────┐
│ 1. config.defaults.toml │ ← System defaults
@ -69,7 +69,7 @@ Configuration Hierarchy (Low → High Precedence)
**Configuration Accessor Functions**:
```text
```nushell
# Core configuration access
use core/nulib/lib_provisioning/config/accessor.nu
@ -93,7 +93,7 @@ let data_path = (get-config-interpolated "paths.data") # Resolves {{paths.base}
**Before (ENV-based)**:
```text
```bash
export PROVISIONING_UPCLOUD_API_KEY="your-key"
export PROVISIONING_UPCLOUD_API_URL="https://api.upcloud.com"
export PROVISIONING_LOG_LEVEL="debug"
@ -102,7 +102,7 @@ export PROVISIONING_BASE_PATH="/usr/local/provisioning"
**After (Config-based)**:
```text
```toml
# config.user.toml
[providers.upcloud]
api_key = "your-key"
@ -123,7 +123,7 @@ base = "/usr/local/provisioning"
**Location**: Root of the repository
**Modification**: Should only be modified by system maintainers
```text
```toml
# System-wide defaults - DO NOT MODIFY in production
# Copy values to config.user.toml for customization
@ -203,7 +203,7 @@ sample_rate = 0.1
**Location**: User's configuration directory
**Modification**: Users should customize this file for their needs
```text
```toml
# User configuration - customizations and personal preferences
# This file overrides system defaults
@ -249,7 +249,7 @@ commit_prefix = "[{{env.USER}}]"
**Location**: Project root directory
**Version Control**: Should be committed to version control
```text
```toml
# Project-specific configuration
# Shared settings for this project/repository
@ -296,7 +296,7 @@ developers = ["dev-team@company.com"]
**Location**: Infrastructure directory
**Usage**: Overrides for specific infrastructure deployments
```text
```toml
# Infrastructure-specific configuration
# Overrides for this specific infrastructure deployment
@ -345,7 +345,7 @@ retention_days = 30
**Purpose**: Development-optimized settings
**Features**: Enhanced debugging, local providers, relaxed validation
```text
```toml
# Development environment configuration
# Optimized for local development and testing
@ -404,7 +404,7 @@ mock_external_apis = true
**Purpose**: Testing-specific configuration
**Features**: Mock services, isolated environments, comprehensive logging
```text
```toml
# Testing environment configuration
# Optimized for automated testing and CI/CD
@ -453,7 +453,7 @@ fail_fast = true
**Purpose**: Production-optimized settings
**Features**: Performance optimization, security hardening, comprehensive monitoring
```text
```toml
# Production environment configuration
# Optimized for performance, reliability, and security
@ -513,7 +513,7 @@ connection_pooling = true
**Creating User Configuration**:
```text
```bash
# Create user config directory
mkdir -p ~/.config/provisioning
@ -526,7 +526,7 @@ $EDITOR ~/.config/provisioning/config.toml
**Common User Customizations**:
```text
```toml
# Personal configuration customizations
[paths]
@ -561,7 +561,7 @@ slack_webhook = "{{env.SLACK_WEBHOOK_URL}}"
**Workspace Integration**:
```text
```toml
# Workspace-aware configuration
# workspace/config/developer.toml
@ -590,7 +590,7 @@ auto_create = true
**Built-in Validation**:
```text
```bash
# Validate current configuration
provisioning validate config
@ -606,7 +606,7 @@ provisioning config debug
**Validation Rules**:
```text
```nushell
# Configuration validation in Nushell
def validate_configuration [config: record] -> record {
let errors = []
@ -645,7 +645,7 @@ def validate_configuration [config: record] -> record {
**Configuration-Driven Error Handling**:
```text
```nushell
# Never patch with hardcoded fallbacks - use configuration
def get_api_endpoint [provider: string] -> string {
# Good: Configuration-driven with clear error
@ -675,7 +675,7 @@ def get_api_endpoint_bad [provider: string] -> string {
**Comprehensive Error Context**:
```text
```nushell
def load_provider_config [provider: string] -> record {
let config_section = $"providers.($provider)"
@ -704,7 +704,7 @@ def load_provider_config [provider: string] -> record {
**Supported Interpolation Variables**:
```text
```toml
# Environment variables
base_path = "{{env.HOME}}/provisioning"
user_name = "{{env.USER}}"
@ -732,7 +732,7 @@ architecture = "{{system.arch}}"
**Dynamic Path Resolution**:
```text
```toml
[paths]
base = "{{env.HOME}}/.local/share/provisioning"
config = "{{paths.base}}/config"
@ -747,7 +747,7 @@ log_file = "{{paths.logs}}/upcloud-{{now.date}}.log"
**Environment-Aware Configuration**:
```text
```toml
[core]
name = "provisioning-{{system.hostname}}-{{env.USER}}"
version = "{{release.version}}+{{git.commit}}.{{now.timestamp}}"
@ -770,7 +770,7 @@ tags = {
**Custom Interpolation Logic**:
```text
```nushell
# Interpolation resolver
def resolve_interpolation [template: string, context: record] -> string {
let interpolations = ($template | parse --regex '\{\{([^}]+)\}\}')
@ -816,7 +816,7 @@ def resolve_interpolation_key [key_path: string, context: record] -> string {
**Backward Compatibility**:
```text
```nushell
# Configuration accessor with ENV fallback
def get-config-with-env-fallback [
config_key: string,
@ -855,7 +855,7 @@ def get-config-with-env-fallback [
**Available Migration Scripts**:
```text
```bash
# Migrate existing ENV-based setup to configuration
nu src/tools/migration/env-to-config.nu --scan-environment --create-config
@ -874,7 +874,7 @@ nu src/tools/migration/generate-config.nu --output-file config.migrated.toml
**Error**: `Configuration file not found`
```text
```bash
# Solution: Check configuration file paths
provisioning config paths
@ -889,7 +889,7 @@ provisioning config debug
**Error**: `Invalid TOML syntax in configuration file`
```text
```bash
# Solution: Validate TOML syntax
nu -c "open config.user.toml | from toml"
@ -904,7 +904,7 @@ provisioning config check --verbose
**Error**: `Failed to resolve interpolation: {{env.MISSING_VAR}}`
```text
```bash
# Solution: Check available interpolation variables
provisioning config interpolation --list-variables
@ -919,7 +919,7 @@ provisioning config debug --show-interpolation
**Error**: `Provider 'upcloud' configuration invalid`
```text
```bash
# Solution: Validate provider configuration
provisioning validate config --section providers.upcloud
@ -934,7 +934,7 @@ provisioning providers upcloud test --dry-run
**Configuration Debugging**:
```text
```bash
# Show complete resolved configuration
provisioning config show --resolved
@ -955,7 +955,7 @@ provisioning config interpolation --debug "{{paths.data}}/{{env.USER}}"
**Configuration Caching**:
```text
```bash
# Enable configuration caching
export PROVISIONING_CONFIG_CACHE=true
@ -968,7 +968,7 @@ provisioning config cache --stats
**Startup Optimization**:
```text
```toml
# Optimize configuration loading
[performance]
lazy_loading = true

View File

@ -34,7 +34,7 @@ The workspace system provides isolated development environments for the provisio
### Directory Structure
```text
```bash
workspace/
├── config/ # Development configuration
│ ├── dev-defaults.toml # Development environment defaults
@ -97,7 +97,7 @@ workspace/
### Quick Start
```text
```bash
# Navigate to workspace
cd workspace/tools
@ -110,7 +110,7 @@ nu workspace.nu init --user-name developer --infra-name my-dev-infra
### Complete Initialization
```text
```bash
# Full initialization with all options
nu workspace.nu init
--user-name developer
@ -134,7 +134,7 @@ nu workspace.nu init
**Verify Installation**:
```text
```bash
# Check workspace health
nu workspace.nu health --detailed
@ -147,7 +147,7 @@ nu workspace.nu list
**Configure Development Environment**:
```text
```bash
# Create user-specific configuration
cp workspace/config/local-overrides.toml.example workspace/config/$USER.toml
@ -170,7 +170,7 @@ The workspace implements a sophisticated path resolution system that prioritizes
### Using Path Resolution
```text
```nushell
# Import path resolver
use workspace/lib/path-resolver.nu
@ -188,7 +188,7 @@ let new_path = (path-resolver resolve_path "infra" "my-infra" --create-missing)
**Hierarchical Configuration Loading**:
```text
```nushell
# Resolve configuration with full hierarchy
let config = (path-resolver resolve_config "user" --workspace-user "developer")
@ -203,7 +203,7 @@ let merged = (path-resolver resolve_config "merged" --workspace-user "developer"
**Automatic Extension Discovery**:
```text
```nushell
# Find custom provider extension
let provider = (path-resolver resolve_extension "providers" "my-aws-provider")
@ -218,7 +218,7 @@ let cluster = (path-resolver resolve_extension "clusters" "development-cluster")
**Workspace Health Validation**:
```text
```nushell
# Check workspace health with automatic fixes
let health = (path-resolver check_workspace_health --workspace-user "developer" --fix-issues)
@ -244,7 +244,7 @@ let runtime_status = (path-resolver check_runtime_health --workspace-user "devel
**Development Environment** (`workspace/config/dev-defaults.toml`):
```text
```toml
[core]
name = "provisioning-dev"
version = "dev-${git.branch}"
@ -273,7 +273,7 @@ max_size = "10 MB"
**Testing Environment** (`workspace/config/test-defaults.toml`):
```text
```toml
[core]
name = "provisioning-test"
version = "test-${build.timestamp}"
@ -302,7 +302,7 @@ test_output = true
**User-Specific Configuration** (`workspace/config/{user}.toml`):
```text
```toml
[core]
name = "provisioning-${workspace.user}"
version = "1.0.0-dev"
@ -339,7 +339,7 @@ email = "developer@company.com"
**Workspace Configuration Management**:
```text
```toml
# Show current configuration
nu workspace.nu config show
@ -370,7 +370,7 @@ The workspace provides templates and tools for developing three types of extensi
**Create New Provider**:
```text
```bash
# Copy template
cp -r workspace/extensions/providers/template workspace/extensions/providers/my-provider
@ -381,7 +381,7 @@ nu init.nu --provider-name my-provider --author developer
**Provider Structure**:
```text
```bash
workspace/extensions/providers/my-provider/
├── kcl/
│ ├── provider.ncl # Provider configuration schema
@ -402,7 +402,7 @@ workspace/extensions/providers/my-provider/
**Test Provider**:
```text
```bash
# Run provider tests
nu workspace/extensions/providers/my-provider/nulib/provider.nu test
@ -417,7 +417,7 @@ nu workspace/extensions/providers/my-provider/tests/integration/basic-test.nu
**Create New Task Service**:
```text
```bash
# Copy template
cp -r workspace/extensions/taskservs/template workspace/extensions/taskservs/my-service
@ -428,7 +428,7 @@ nu init.nu --service-name my-service --service-type database
**Task Service Structure**:
```text
```bash
workspace/extensions/taskservs/my-service/
├── kcl/
│ ├── taskserv.ncl # Service configuration schema
@ -452,7 +452,7 @@ workspace/extensions/taskservs/my-service/
**Create New Cluster**:
```text
```bash
# Copy template
cp -r workspace/extensions/clusters/template workspace/extensions/clusters/my-cluster
@ -463,7 +463,7 @@ nu init.nu --cluster-name my-cluster --cluster-type web-stack
**Testing Extensions**:
```text
```bash
# Test extension syntax
nu workspace.nu tools validate-extension providers/my-provider
@ -480,7 +480,7 @@ nu workspace.nu tools deploy-test clusters/my-cluster --infra test-env
**Per-User Isolation**:
```text
```bash
runtime/
├── workspaces/
│ ├── developer/ # Developer's workspace data
@ -516,7 +516,7 @@ runtime/
**Initialize Runtime Environment**:
```text
```bash
# Initialize for current user
nu workspace/tools/runtime-manager.nu init
@ -526,7 +526,7 @@ nu workspace/tools/runtime-manager.nu init --user-name developer
**Runtime Cleanup**:
```text
```bash
# Clean cache older than 30 days
nu workspace/tools/runtime-manager.nu cleanup --type cache --age 30d
@ -539,7 +539,7 @@ nu workspace/tools/runtime-manager.nu cleanup --type temp --force
**Log Management**:
```text
```bash
# View recent logs
nu workspace/tools/runtime-manager.nu logs --action tail --lines 100
@ -555,7 +555,7 @@ nu workspace/tools/runtime-manager.nu logs --action archive --older-than 7d
**Cache Management**:
```text
```bash
# Show cache statistics
nu workspace/tools/runtime-manager.nu cache --action stats
@ -571,7 +571,7 @@ nu workspace/tools/runtime-manager.nu cache --action refresh --selective
**Monitoring**:
```text
```bash
# Monitor runtime usage
nu workspace/tools/runtime-manager.nu monitor --duration 5m --interval 30s
@ -601,7 +601,7 @@ The workspace provides comprehensive health monitoring with automatic repair cap
**Basic Health Check**:
```text
```bash
# Quick health check
nu workspace.nu health
@ -617,7 +617,7 @@ nu workspace.nu health --report-format json > health-report.json
**Component-Specific Health Checks**:
```text
```bash
# Check directory structure
nu workspace/tools/workspace-health.nu check-directories --workspace-user developer
@ -635,7 +635,7 @@ nu workspace/tools/workspace-health.nu check-extensions --workspace-user develop
**Example Health Report**:
```text
```json
{
"workspace_health": {
"user": "developer",
@ -704,7 +704,7 @@ nu workspace/tools/workspace-health.nu check-extensions --workspace-user develop
**Create Backup**:
```text
```bash
# Basic backup
nu workspace.nu backup
@ -732,7 +732,7 @@ nu workspace.nu backup --components config,extensions --name my-backup
**List Available Backups**:
```text
```bash
# List all backups
nu workspace.nu restore --list-backups
@ -745,7 +745,7 @@ nu workspace.nu restore --show-contents --backup-name workspace-developer-202509
**Restore Operations**:
```text
```bash
# Restore latest backup
nu workspace.nu restore --latest
@ -771,7 +771,7 @@ nu workspace.nu restore --backup-name my-backup --restore-to different-user
**Workspace Reset**:
```text
```bash
# Reset with backup
nu workspace.nu reset --backup-first
@ -784,7 +784,7 @@ nu workspace.nu reset --force --no-backup
**Cleanup Operations**:
```text
```bash
# Clean old data with dry-run
nu workspace.nu cleanup --type old --age 14d --dry-run
@ -803,7 +803,7 @@ nu workspace.nu cleanup --user-name old-user --type all
**Error**: `Workspace for user 'developer' not found`
```text
```bash
# Solution: Initialize workspace
nu workspace.nu init --user-name developer
```
@ -812,7 +812,7 @@ nu workspace.nu init --user-name developer
**Error**: `Path resolution failed for config/user`
```text
```bash
# Solution: Fix with health check
nu workspace.nu health --fix-issues
@ -824,7 +824,7 @@ nu workspace/lib/path-resolver.nu resolve_path "config" "user" --create-missing
**Error**: `Invalid configuration syntax in user.toml`
```text
```bash
# Solution: Validate and fix configuration
nu workspace.nu config validate --user-name developer
@ -836,7 +836,7 @@ cp workspace/config/local-overrides.toml.example workspace/config/developer.toml
**Error**: `Runtime directory permissions error`
```text
```bash
# Solution: Reinitialize runtime
nu workspace/tools/runtime-manager.nu init --user-name developer --force
@ -848,7 +848,7 @@ chmod -R 755 workspace/runtime/workspaces/developer
**Error**: `Extension 'my-provider' not found or invalid`
```text
```bash
# Solution: Validate extension
nu workspace.nu tools validate-extension providers/my-provider
@ -860,7 +860,7 @@ cp -r workspace/extensions/providers/template workspace/extensions/providers/my-
**Enable Debug Logging**:
```text
```bash
# Set debug environment
export PROVISIONING_DEBUG=true
export PROVISIONING_LOG_LEVEL=debug
@ -874,7 +874,7 @@ nu workspace.nu health --detailed
**Slow Operations**:
```text
```bash
# Check disk space
df -h workspace/
@ -890,7 +890,7 @@ nu workspace/tools/runtime-manager.nu cache --action optimize
**Corrupted Workspace**:
```text
```bash
# 1. Backup current state
nu workspace.nu backup --name corrupted-backup --force

View File

@ -37,7 +37,7 @@ automated release management.
### Distribution Components
```text
```bash
Distribution Ecosystem
├── Core Components
│ ├── Platform Binaries # Rust-compiled binaries
@ -59,7 +59,7 @@ Distribution Ecosystem
### Build Pipeline
```text
```bash
Build Pipeline Flow
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ Source Code │ -> │ Build Stage │ -> │ Package Stage │
@ -116,7 +116,7 @@ Build Pipeline Flow
**Pre-Release Checklist**:
```text
```bash
# Update dependencies and security
cargo update
cargo audit
@ -133,7 +133,7 @@ make validate-all
**Version Planning**:
```text
```bash
# Check current version
git describe --tags --always
@ -148,7 +148,7 @@ nu src/tools/release/create-release.nu --dry-run --version 2.1.0
**Complete Build**:
```text
```bash
# Clean build environment
make clean
@ -161,7 +161,7 @@ make test-dist
**Build with Specific Parameters**:
```text
```bash
# Build for specific platforms
make all PLATFORMS=linux-amd64,macos-amd64 VARIANTS=complete
@ -176,7 +176,7 @@ make all PARALLEL=true
**Create Distribution Packages**:
```text
```bash
# Generate complete distributions
make dist-generate
@ -192,7 +192,7 @@ make create-installers
**Package Validation**:
```text
```bash
# Validate packages
make test-dist
@ -208,7 +208,7 @@ make uninstall
**Automated Release**:
```text
```bash
# Create complete release
make release VERSION=2.1.0
@ -235,7 +235,7 @@ nu src/tools/release/create-release.nu
**Upload Artifacts**:
```text
```bash
# Upload to GitHub Releases
make upload-artifacts
@ -248,7 +248,7 @@ make notify-release
**Registry Updates**:
```text
```bash
# Update Homebrew formula
nu src/tools/release/update-registry.nu
--registries homebrew
@ -266,7 +266,7 @@ nu src/tools/release/update-registry.nu
**Complete Automated Release**:
```text
```bash
# Full release pipeline
make cd-deploy VERSION=2.1.0
@ -294,7 +294,7 @@ make notify-release
**Create Binary Packages**:
```text
```bash
# Standard binary packages
make package-binaries
@ -320,7 +320,7 @@ nu src/tools/package/package-binaries.nu
**Container Build Process**:
```text
```bash
# Build container images
make package-containers
@ -363,7 +363,7 @@ nu src/tools/package/build-containers.nu
**Create Installers**:
```text
```bash
# Generate all installer types
make create-installers
@ -411,7 +411,7 @@ nu src/tools/distribution/create-installer.nu
**Cross-Compilation Setup**:
```text
```bash
# Install cross-compilation targets
rustup target add aarch64-unknown-linux-gnu
rustup target add x86_64-apple-darwin
@ -424,7 +424,7 @@ cargo install cross
**Platform-Specific Builds**:
```text
```bash
# Build for specific platform
make build-platform RUST_TARGET=aarch64-apple-darwin
@ -441,7 +441,7 @@ make windows
**Generated Distributions**:
```text
```bash
Distribution Matrix:
provisioning-{version}-{platform}-{variant}.{format}
@ -466,7 +466,7 @@ Examples:
**Validation Pipeline**:
```text
```bash
# Complete validation
make test-dist
@ -497,7 +497,7 @@ nu src/tools/build/test-distribution.nu
**Test Execution**:
```text
```bash
# Run all tests
make ci-test
@ -511,7 +511,7 @@ nu src/tools/build/test-distribution.nu --test-types complete
**Package Integrity**:
```text
```bash
# Validate package structure
nu src/tools/package/validate-package.nu dist/
@ -524,7 +524,7 @@ gpg --verify packages/provisioning-2.1.0.tar.gz.sig
**Installation Testing**:
```text
```bash
# Test installation process
./packages/installers/install-provisioning-2.1.0.sh --dry-run
@ -541,7 +541,7 @@ docker run --rm provisioning:2.1.0 provisioning --version
**GitHub Release Integration**:
```text
```bash
# Create GitHub release
nu src/tools/release/create-release.nu
--version 2.1.0
@ -568,7 +568,7 @@ nu src/tools/release/create-release.nu
**Version Detection**:
```text
```bash
# Auto-detect next version
nu src/tools/release/create-release.nu --release-type minor
@ -591,7 +591,7 @@ nu src/tools/release/create-release.nu --version 2.1.0-rc.1 --pre-release
**Upload and Distribution**:
```text
```bash
# Upload to GitHub Releases
make upload-artifacts
@ -618,7 +618,7 @@ make update-registry
**Automated Rollback**:
```text
```bash
# Rollback latest release
nu src/tools/release/rollback-release.nu --version 2.1.0
@ -632,7 +632,7 @@ nu src/tools/release/rollback-release.nu
**Manual Rollback Steps**:
```text
```bash
# 1. Identify target version
git tag -l | grep -v 2.1.0 | tail -5
@ -665,7 +665,7 @@ nu src/tools/release/notify-users.nu
**Rollback Testing**:
```text
```bash
# Test rollback in staging
nu src/tools/release/rollback-release.nu
--version 2.1.0
@ -681,7 +681,7 @@ make test-dist DIST_VERSION=2.0.5
**Critical Security Rollback**:
```text
```bash
# Emergency rollback (bypasses normal procedures)
nu src/tools/release/rollback-release.nu
--version 2.1.0
@ -692,7 +692,7 @@ nu src/tools/release/rollback-release.nu
**Infrastructure Failure Recovery**:
```text
```bash
# Failover to backup infrastructure
nu src/tools/release/rollback-release.nu
--infrastructure-failover
@ -706,7 +706,7 @@ nu src/tools/release/rollback-release.nu
**Build Workflow** (`.github/workflows/build.yml`):
```text
```yaml
name: Build and Distribute
on:
push:
@ -745,7 +745,7 @@ jobs:
**Release Workflow** (`.github/workflows/release.yml`):
```text
```yaml
name: Release
on:
push:
@ -777,7 +777,7 @@ jobs:
**GitLab CI Configuration** (`.gitlab-ci.yml`):
```text
```yaml
stages:
- build
- package
@ -817,7 +817,7 @@ release:
**Jenkinsfile**:
```text
```groovy
pipeline {
agent any
@ -860,7 +860,7 @@ pipeline {
**Rust Compilation Errors**:
```text
```bash
# Solution: Clean and rebuild
make clean
cargo clean
@ -873,7 +873,7 @@ rustup update
**Cross-Compilation Issues**:
```text
```bash
# Solution: Install missing targets
rustup target list --installed
rustup target add x86_64-apple-darwin
@ -887,7 +887,7 @@ make build-platform CROSS=true
**Missing Dependencies**:
```text
```bash
# Solution: Install build tools
sudo apt-get install build-essential
brew install gnu-tar
@ -898,7 +898,7 @@ make info
**Permission Errors**:
```text
```bash
# Solution: Fix permissions
chmod +x src/tools/build/*.nu
chmod +x src/tools/distribution/*.nu
@ -909,7 +909,7 @@ chmod +x src/tools/package/*.nu
**Package Integrity Issues**:
```text
```bash
# Solution: Regenerate packages
make clean-dist
make package-all
@ -920,7 +920,7 @@ sha256sum packages/*.tar.gz
**Installation Test Failures**:
```text
```bash
# Solution: Test in clean environment
docker run --rm -v $(pwd):/work ubuntu:latest /work/packages/installers/install.sh
@ -934,7 +934,7 @@ docker run --rm -v $(pwd):/work ubuntu:latest /work/packages/installers/install.
**Network Issues**:
```text
```bash
# Solution: Retry with backoff
nu src/tools/release/upload-artifacts.nu
--retry-count 5
@ -946,7 +946,7 @@ gh release upload v2.1.0 packages/*.tar.gz
**Authentication Failures**:
```text
```bash
# Solution: Refresh tokens
gh auth refresh
docker login ghcr.io
@ -960,7 +960,7 @@ docker system info
**Homebrew Formula Issues**:
```text
```bash
# Solution: Manual PR creation
git clone https://github.com/Homebrew/homebrew-core
cd homebrew-core
@ -973,7 +973,7 @@ git commit -m "provisioning 2.1.0"
**Debug Mode**:
```text
```bash
# Enable debug logging
export PROVISIONING_DEBUG=true
export RUST_LOG=debug
@ -989,7 +989,7 @@ nu src/tools/distribution/generate-distribution.nu
**Monitoring Build Progress**:
```text
```bash
# Monitor build logs
tail -f src/tools/build.log

View File

@ -137,7 +137,7 @@ orchestrator).
**Commands**:
```text
```bash
provisioning batch submit workflow.ncl
provisioning batch list
provisioning batch status <id>
@ -161,7 +161,7 @@ provisioning batch status <id>
**Commands**:
```text
```bash
provisioning break-glass request "reason"
provisioning break-glass approve <id>
```
@ -220,7 +220,7 @@ provisioning break-glass approve <id>
**Examples**:
```text
```bash
provisioning server create
provisioning taskserv install kubernetes
provisioning workspace switch prod
@ -249,7 +249,7 @@ provisioning workspace switch prod
**Commands**:
```text
```bash
provisioning cluster create <name>
provisioning cluster list
provisioning cluster delete <name>
@ -383,7 +383,7 @@ provisioning cluster delete <name>
**Commands**:
```text
```bash
provisioning status
provisioning diagnostics run
```
@ -427,7 +427,7 @@ provisioning diagnostics run
**Usage**:
```text
```bash
PROVISIONING_ENV=prod provisioning server list
```
@ -492,7 +492,7 @@ PROVISIONING_ENV=prod provisioning server list
**Commands**:
```text
```bash
provisioning compliance gdpr export <user>
provisioning compliance gdpr delete <user>
```
@ -529,7 +529,7 @@ provisioning compliance gdpr delete <user>
**Commands**:
```text
```bash
provisioning guide from-scratch
provisioning guide update
provisioning guide customize
@ -555,7 +555,7 @@ provisioning guide customize
**Example**:
```text
```nickel
health_check = {
endpoint = "http://localhost:6443/healthz"
timeout = 30
@ -602,7 +602,7 @@ health_check = {
**Commands**:
```text
```bash
provisioning infra list
provisioning generate infra --new <name>
```
@ -719,7 +719,7 @@ provisioning generate infra --new <name>
**Commands**:
```text
```bash
provisioning taskserv create kubernetes
provisioning test quick kubernetes
```
@ -778,7 +778,7 @@ provisioning test quick kubernetes
**Commands**:
```text
```bash
provisioning mfa totp enroll
provisioning mfa webauthn enroll
provisioning mfa verify <code>
@ -818,7 +818,7 @@ provisioning mfa verify <code>
**Commands**:
```text
```bash
provisioning module discover provider
provisioning module load provider <ws> <name>
provisioning module list taskserv
@ -896,7 +896,7 @@ provisioning module list taskserv
**Commands**:
```text
```bash
cd provisioning/platform/orchestrator
./scripts/start-orchestrator.nu --background
```
@ -953,7 +953,7 @@ cd provisioning/platform/orchestrator
**Commands**:
```text
```bash
provisioning plugin list
provisioning plugin install
```
@ -980,7 +980,7 @@ provisioning plugin install
**Commands**:
```text
```bash
provisioning module discover provider
provisioning providers list
```
@ -1005,7 +1005,7 @@ provisioning providers list
**Commands**:
```text
```bash
provisioning sc # Fastest
provisioning guide quickstart
```
@ -1080,7 +1080,7 @@ provisioning guide quickstart
**Commands**:
```text
```bash
provisioning batch rollback <workflow-id>
```
@ -1118,7 +1118,7 @@ provisioning batch rollback <workflow-id>
**Example**:
```text
```nickel
let ServerConfig = {
hostname | string,
cores | number,
@ -1177,7 +1177,7 @@ ServerConfig
**Commands**:
```text
```bash
provisioning server create
provisioning server list
provisioning server ssh <hostname>
@ -1241,7 +1241,7 @@ provisioning server ssh <hostname>
**Commands**:
```text
```bash
provisioning sops edit <file>
```
@ -1261,7 +1261,7 @@ provisioning sops edit <file>
**Commands**:
```text
```bash
provisioning server ssh <hostname>
provisioning ssh connect <server>
```
@ -1316,7 +1316,7 @@ provisioning ssh connect <server>
**Commands**:
```text
```bash
provisioning taskserv create <name>
provisioning taskserv list
provisioning test quick <taskserv>
@ -1356,7 +1356,7 @@ provisioning test quick <taskserv>
**Commands**:
```text
```bash
provisioning test quick <taskserv>
provisioning test env single <taskserv>
provisioning test env cluster <cluster>
@ -1396,7 +1396,7 @@ provisioning test env cluster <cluster>
**Commands**:
```text
```bash
provisioning mfa totp enroll
provisioning mfa totp verify <code>
```
@ -1449,7 +1449,7 @@ provisioning mfa totp verify <code>
**Commands**:
```text
```bash
provisioning version check
provisioning version apply
```
@ -1474,7 +1474,7 @@ provisioning version apply
**Commands**:
```text
```bash
provisioning validate config
provisioning validate infrastructure
```
@ -1497,7 +1497,7 @@ provisioning validate infrastructure
**Commands**:
```text
```bash
provisioning version
provisioning version check
provisioning taskserv check-updates
@ -1521,7 +1521,7 @@ provisioning taskserv check-updates
**Commands**:
```text
```bash
provisioning mfa webauthn enroll
provisioning mfa webauthn verify
```
@ -1542,7 +1542,7 @@ provisioning mfa webauthn verify
**Commands**:
```text
```bash
provisioning workflow list
provisioning workflow status <id>
provisioning workflow monitor <id>
@ -1568,7 +1568,7 @@ provisioning workflow monitor <id>
**Commands**:
```text
```bash
provisioning workspace list
provisioning workspace switch <name>
provisioning workspace create <name>

View File

@ -43,7 +43,7 @@ specific commands, validation steps, and rollback procedures.
#### Step 1.1: Create Complete Backup
```text
```bash
# Create timestamped backup
BACKUP_DIR="/Users/Akasha/project-provisioning-backup-$(date +%Y%m%d)"
cp -r /Users/Akasha/project-provisioning "$BACKUP_DIR"
@ -59,7 +59,7 @@ echo "✅ Backup created: $BACKUP_DIR"
#### Step 1.2: Analyze Current State
```text
```bash
cd /Users/Akasha/project-provisioning
# Count workspace directories
@ -96,7 +96,7 @@ echo "✅ Analysis complete: docs/development/current-state-analysis.txt"
#### Step 1.3: Identify Dependencies
```text
```bash
# Find all hardcoded paths
echo "=== Hardcoded Paths in Nushell Scripts ==="
rg -t nu "workspace/|_workspace/|backup-workspace/" provisioning/core/nulib/ | tee hardcoded-paths.txt
@ -114,7 +114,7 @@ echo "✅ Dependencies mapped"
#### Step 1.4: Create Implementation Branch
```text
```bash
# Create and switch to implementation branch
git checkout -b feat/repo-restructure
@ -138,7 +138,7 @@ echo "✅ Implementation branch created: feat/repo-restructure"
#### Step 2.1: Create New Directory Structure
```text
```bash
cd /Users/Akasha/project-provisioning
# Create distribution directory structure
@ -156,7 +156,7 @@ tree -L 2 distribution/ workspace/
#### Step 2.2: Move Build Artifacts
```text
```bash
# Move Rust build artifacts
if [ -d "target" ]; then
mv target distribution/target
@ -178,7 +178,7 @@ done
#### Step 2.3: Consolidate Workspaces
```text
```bash
# Identify active workspace
echo "=== Current Workspace Status ==="
ls -la workspace/ _workspace/ backup-workspace/ 2>/dev/null
@ -221,7 +221,7 @@ echo "✅ Workspaces consolidated"
#### Step 2.4: Remove Obsolete Directories
```text
```bash
# Remove build artifacts (already moved)
rm -rf wrks/
echo "✅ Removed wrks/"
@ -248,7 +248,7 @@ echo "✅ Cleanup complete"
#### Step 2.5: Update .gitignore
```text
```bash
# Backup existing .gitignore
cp .gitignore .gitignore.backup
@ -318,7 +318,7 @@ echo "✅ Updated .gitignore"
#### Step 2.6: Commit Restructuring
```text
```bash
# Stage changes
git add -A
@ -355,7 +355,7 @@ echo "✅ Restructuring committed"
#### Step 3.1: Create Path Update Script
```text
```bash
# Create migration script
cat > provisioning/tools/migration/update-paths.nu << 'EOF'
#!/usr/bin/env nu
@ -407,7 +407,7 @@ chmod +x provisioning/tools/migration/update-paths.nu
#### Step 3.2: Run Path Updates
```text
```bash
# Create backup before updates
git stash
git checkout -b feat/path-updates
@ -424,7 +424,7 @@ nu -c "use provisioning/core/nulib/servers/create.nu; print 'OK'"
#### Step 3.3: Update CLAUDE.md
```text
```bash
# Update CLAUDE.md with new paths
cat > CLAUDE.md.new << 'EOF'
# CLAUDE.md
@ -461,7 +461,7 @@ mv CLAUDE.md.new CLAUDE.md
#### Step 3.4: Update Documentation
```text
```bash
# Find all documentation files
fd -e md . docs/
@ -478,7 +478,7 @@ echo "Files listed in: docs-to-update.txt"
#### Step 3.5: Commit Path Updates
```text
```bash
git add -A
git commit -m "refactor: update all path references for new structure
@ -505,7 +505,7 @@ echo "✅ Path updates committed"
#### Step 4.1: Automated Validation
```text
```bash
# Create validation script
cat > provisioning/tools/validation/validate-structure.nu << 'EOF'
#!/usr/bin/env nu
@ -592,7 +592,7 @@ nu provisioning/tools/validation/validate-structure.nu
#### Step 4.2: Functional Testing
```text
```bash
# Test core commands
echo "=== Testing Core Commands ==="
@ -621,7 +621,7 @@ echo "✅ Functional tests passed"
#### Step 4.3: Integration Testing
```text
```bash
# Test workflow system
echo "=== Testing Workflow System ==="
@ -641,7 +641,7 @@ echo "✅ Integration tests passed"
#### Step 4.4: Create Test Report
```text
```json
{
echo "# Repository Restructuring - Validation Report"
echo "Date: $(date)"
@ -669,7 +669,7 @@ echo "✅ Test report created: docs/development/phase1-validation-report.md"
#### Step 4.5: Update README
```text
```bash
# Update main README with new structure
# This is manual - review and update README.md
@ -681,7 +681,7 @@ echo " - Update quick start guide"
#### Step 4.6: Finalize Phase 1
```text
```bash
# Commit validation and reports
git add -A
git commit -m "test: add validation for repository restructuring
@ -718,7 +718,7 @@ echo "✅ Phase 1 complete and merged"
#### Step 5.1: Create Build Tools Directory
```text
```bash
mkdir -p provisioning/tools/build
cd provisioning/tools/build
@ -730,7 +730,7 @@ echo "✅ Build tools directory created"
#### Step 5.2: Implement Core Build System
```text
```bash
# Create main build orchestrator
# See full implementation in repo-dist-analysis.md
# Copy build-system.nu from the analysis document
@ -741,7 +741,7 @@ nu build-system.nu status
#### Step 5.3: Implement Core Packaging
```text
```bash
# Create package-core.nu
# This packages Nushell libraries, KCL schemas, templates
@ -751,7 +751,7 @@ nu build-system.nu build-core --version dev
#### Step 5.4: Create Justfile
```text
```bash
# Create Justfile in project root
# See full Justfile in repo-dist-analysis.md
@ -779,7 +779,7 @@ just status
#### Step 9.1: Create install.nu
```text
```nushell
mkdir -p distribution/installers
# Create install.nu
@ -788,7 +788,7 @@ mkdir -p distribution/installers
#### Step 9.2: Test Installation
```text
```bash
# Test installation to /tmp
nu distribution/installers/install.nu --prefix /tmp/provisioning-test
@ -812,7 +812,7 @@ nu distribution/installers/install.nu uninstall --prefix /tmp/provisioning-test
### If Phase 1 Fails
```text
```bash
# Restore from backup
rm -rf /Users/Akasha/project-provisioning
cp -r "$BACKUP_DIR" /Users/Akasha/project-provisioning
@ -825,7 +825,7 @@ git branch -D feat/repo-restructure
### If Build System Fails
```text
```bash
# Revert build system commits
git checkout feat/repo-restructure
git revert <commit-hash>
@ -833,7 +833,7 @@ git revert <commit-hash>
### If Installation Fails
```text
```bash
# Clean up test installation
rm -rf /tmp/provisioning-test
sudo rm -rf /usr/local/lib/provisioning


@ -31,7 +31,7 @@ Before creating custom extensions, assess your infrastructure requirements:
#### 1. Application Inventory
```text
```bash
# Document existing applications
cat > infrastructure-assessment.yaml << EOF
applications:
@ -75,7 +75,7 @@ EOF
#### 2. Gap Analysis
```text
```bash
# Analyze what standard modules don't cover
./provisioning/core/cli/module-loader discover taskservs > available-modules.txt
@ -107,7 +107,7 @@ EOF
#### Business Requirements Template
```text
```bash
"""
Business Requirements Schema for Custom Extensions
Use this template to document requirements before development
@ -179,7 +179,7 @@ schema Integration:
#### Example: Legacy ERP System Integration
```text
```bash
# Create company-specific taskserv
mkdir -p extensions/taskservs/company-specific/legacy-erp/nickel
cd extensions/taskservs/company-specific/legacy-erp/nickel
@ -187,7 +187,7 @@ cd extensions/taskservs/company-specific/legacy-erp/nickel
Create `legacy-erp.ncl`:
```text
```nickel
"""
Legacy ERP System Taskserv
Handles deployment and management of company's legacy ERP system
@ -437,7 +437,7 @@ legacy_erp_default: LegacyERPTaskserv = {
Create `compliance-monitor.ncl`:
```text
```nickel
"""
Compliance Monitoring Taskserv
Automated compliance checking and reporting for regulated environments
@ -607,7 +607,7 @@ compliance_monitor_default: ComplianceMonitorTaskserv = {
When working with specialized or private cloud providers:
```text
```bash
# Create custom provider extension
mkdir -p extensions/providers/company-private-cloud/nickel
cd extensions/providers/company-private-cloud/nickel
@ -615,7 +615,7 @@ cd extensions/providers/company-private-cloud/nickel
Create `provision_company-private-cloud.ncl`:
```text
```nickel
"""
Company Private Cloud Provider
Integration with company's private cloud infrastructure
@ -762,7 +762,7 @@ company_private_cloud_defaults: defaults.ServerDefaults = {
Create environment-specific extensions that handle different deployment patterns:
```text
```bash
# Create environment management extension
mkdir -p extensions/clusters/company-environments/nickel
cd extensions/clusters/company-environments/nickel
@ -770,7 +770,7 @@ cd extensions/clusters/company-environments/nickel
Create `company-environments.ncl`:
```text
```nickel
"""
Company Environment Management
Standardized environment configurations for different deployment stages
@ -950,7 +950,7 @@ environment_templates = {
Create integration patterns for common legacy system scenarios:
```text
```bash
# Create integration patterns
mkdir -p extensions/taskservs/integrations/legacy-bridge/nickel
cd extensions/taskservs/integrations/legacy-bridge/nickel
@ -958,7 +958,7 @@ cd extensions/taskservs/integrations/legacy-bridge/nickel
Create `legacy-bridge.ncl`:
```text
```nickel
"""
Legacy System Integration Bridge
Provides standardized integration patterns for legacy systems
@ -1161,21 +1161,21 @@ legacy_bridge_dependencies: deps.TaskservDependencies = {
### Example 1: Financial Services Company
```text
```bash
# Financial services specific extensions
mkdir -p extensions/taskservs/financial-services/{trading-system,risk-engine,compliance-reporter}/nickel
```
### Example 2: Healthcare Organization
```text
```bash
# Healthcare specific extensions
mkdir -p extensions/taskservs/healthcare/{hl7-processor,dicom-storage,hipaa-audit}/nickel
```
### Example 3: Manufacturing Company
```text
```bash
# Manufacturing specific extensions
mkdir -p extensions/taskservs/manufacturing/{iot-gateway,scada-bridge,quality-system}/nickel
```
@ -1184,7 +1184,7 @@ mkdir -p extensions/taskservs/manufacturing/{iot-gateway,scada-bridge,quality-sy
#### Loading Infrastructure-Specific Extensions
```text
```bash
# Load company-specific extensions
cd workspace/infra/production
module-loader load taskservs . [legacy-erp, compliance-monitor, legacy-bridge]
@ -1198,7 +1198,7 @@ module-loader validate .
#### Using in Server Configuration
```text
```toml
# Import loaded extensions
import .taskservs.legacy-erp.legacy-erp as erp
import .taskservs.compliance-monitor.compliance-monitor as compliance


@ -30,7 +30,7 @@ existing production systems while providing clear migration pathways.
**Integration Architecture**:
```text
```bash
Integration Ecosystem
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ Legacy Core │ ←→ │ Bridge Layer │ ←→ │ New Systems │
@ -48,7 +48,7 @@ Integration Ecosystem
**Seamless CLI Compatibility**:
```text
```bash
# All existing commands continue to work unchanged
./core/nulib/provisioning server create web-01 2xCPU-4 GB
./core/nulib/provisioning taskserv install kubernetes
@ -61,7 +61,7 @@ nu workspace/tools/workspace.nu health --detailed
**Path Resolution Integration**:
```text
```bash
# Automatic path resolution between systems
use workspace/lib/path-resolver.nu
@ -76,7 +76,7 @@ let provider_path = (path-resolver resolve_extension "providers" "upcloud")
**Dual Configuration Support**:
```text
```toml
# Configuration bridge supports both ENV and TOML
def get-config-value-bridge [key: string, default: string = ""] -> string {
# Try new TOML configuration first
@ -113,7 +113,7 @@ def get-config-value-bridge [key: string, default: string = ""] -> string {
**Shared Data Access**:
```text
```bash
# Unified data access across old and new systems
def get-server-info [server_name: string] -> record {
# Try new orchestrator data store first
@ -142,7 +142,7 @@ def get-server-info [server_name: string] -> record {
**Hybrid Process Management**:
```text
```bash
# Orchestrator-aware process management
def create-server-integrated [
name: string,
@ -179,7 +179,7 @@ def check-orchestrator-available [] -> bool {
**Version Header Support**:
```text
```bash
# API calls with version specification
curl -H "API-Version: v1" http://localhost:9090/servers
curl -H "API-Version: v2" http://localhost:9090/workflows/servers/create
@ -190,7 +190,7 @@ curl -H "API-Version: v3" http://localhost:9090/workflows/batch/submit
**Backward Compatible Endpoints**:
```text
```bash
// Rust API compatibility layer
#[derive(Debug, Serialize, Deserialize)]
struct ApiRequest {
@ -233,7 +233,7 @@ async fn handle_v1_request(payload: serde_json::Value) -> Result<ApiResponse, Ap
**Backward Compatible Schema Changes**:
```text
```bash
# API schema with version support
let ServerCreateRequest = {
# V1 fields (always supported)
@ -267,7 +267,7 @@ WorkflowOptions
**Multi-Version Client Support**:
```text
```bash
# Nushell client with version support
def "client create-server" [
name: string,
@ -309,7 +309,7 @@ def "client create-server" [
**Migration Strategy**:
```text
```bash
Database Evolution Path
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ File-based │ → │ SQLite │ → │ SurrealDB │
@ -325,7 +325,7 @@ Database Evolution Path
**Automated Database Migration**:
```text
```bash
# Database migration orchestration
def migrate-database [
--from: string = "filesystem",
@ -366,7 +366,7 @@ def migrate-database [
**File System to SurrealDB Migration**:
```text
```bash
def migrate_filesystem_to_surrealdb [] -> record {
# Initialize SurrealDB connection
let db = (connect-surrealdb)
@ -420,7 +420,7 @@ def migrate_filesystem_to_surrealdb [] -> record {
**Migration Verification**:
```text
```python
def verify-migration [from: string, to: string] -> record {
print "Verifying data integrity..."
@ -466,7 +466,7 @@ def verify-migration [from: string, to: string] -> record {
**Hybrid Deployment Model**:
```text
```bash
Deployment Architecture
┌─────────────────────────────────────────────────────────────────┐
│ Load Balancer / Reverse Proxy │
@ -488,7 +488,7 @@ Deployment Architecture
**Blue-Green Deployment**:
```text
```bash
# Blue-Green deployment with integration bridge
# Phase 1: Deploy new system alongside existing (Green environment)
cd src/tools
@ -519,7 +519,7 @@ nginx-traffic-split --new-backend 100%
**Rolling Update**:
```text
```bash
def rolling-deployment [
--target-version: string,
--batch-size: int = 3,
@ -576,7 +576,7 @@ def rolling-deployment [
**Environment-Specific Deployment**:
```text
```bash
# Development deployment
PROVISIONING_ENV=dev ./deploy.sh
--config-source config.dev.toml
@ -602,7 +602,7 @@ PROVISIONING_ENV=prod ./deploy.sh
**Docker Deployment with Bridge**:
```text
```bash
# Multi-stage Docker build supporting both systems
FROM rust:1.70 as builder
WORKDIR /app
@ -630,7 +630,7 @@ CMD ["/app/bin/bridge-start.sh"]
**Kubernetes Integration**:
```text
```yaml
# Kubernetes deployment with bridge sidecar
apiVersion: apps/v1
kind: Deployment
@ -678,7 +678,7 @@ spec:
**Monitoring Stack Integration**:
```text
```bash
Observability Architecture
┌─────────────────────────────────────────────────────────────────┐
│ Monitoring Dashboard │
@ -714,7 +714,7 @@ Observability Architecture
**Unified Metrics Collection**:
```text
```bash
# Metrics bridge for legacy and new systems
def collect-system-metrics [] -> record {
let legacy_metrics = collect-legacy-metrics
@ -770,7 +770,7 @@ def collect-new-metrics [] -> record {
**Unified Logging Strategy**:
```text
```bash
# Structured logging bridge
def log-integrated [
level: string,
@ -805,7 +805,7 @@ def log-integrated [
**Comprehensive Health Monitoring**:
```text
```bash
def health-check-integrated [] -> record {
let health_checks = [
{name: "legacy-system", check: (check-legacy-health)},
@ -844,7 +844,7 @@ def health-check-integrated [] -> record {
**Bridge Component Design**:
```text
```bash
# Legacy system bridge module
export module bridge {
# Bridge state management
@ -905,7 +905,7 @@ export module bridge {
**Compatibility Mode**:
```text
```bash
# Full compatibility with legacy system
def run-compatibility-mode [] {
print "Starting bridge in compatibility mode..."
@ -931,7 +931,7 @@ def run-compatibility-mode [] {
**Migration Mode**:
```text
```bash
# Gradual migration with traffic splitting
def run-migration-mode [
--new-system-percentage: int = 50
@ -986,7 +986,7 @@ def run-migration-mode [
**Automated Migration Orchestration**:
```text
```bash
def execute-migration-plan [
migration_plan: string,
--dry-run: bool = false,
@ -1041,7 +1041,7 @@ def execute-migration-plan [
**Migration Validation**:
```text
```bash
def validate-migration-readiness [] -> record {
let checks = [
{name: "backup-available", check: (check-backup-exists)},
@ -1079,7 +1079,7 @@ def validate-migration-readiness [] -> record {
**Problem**: Version mismatch between client and server
```text
```bash
# Diagnosis
curl -H "API-Version: v1" http://localhost:9090/health
curl -H "API-Version: v2" http://localhost:9090/health
@ -1095,7 +1095,7 @@ export PROVISIONING_API_VERSION=v2
**Problem**: Configuration not found in either system
```text
```toml
# Diagnosis
def diagnose-config-issue [key: string] -> record {
let toml_result = try {
@ -1131,7 +1131,7 @@ def migrate-single-config [key: string] {
**Problem**: Data inconsistency between systems
```text
```bash
# Diagnosis and repair
def repair-data-consistency [] -> record {
let legacy_data = (read-legacy-data)
@ -1166,7 +1166,7 @@ def repair-data-consistency [] -> record {
**Integration Debug Mode**:
```text
```bash
# Enable comprehensive debugging
export PROVISIONING_DEBUG=true
export PROVISIONING_LOG_LEVEL=debug
@ -1179,7 +1179,7 @@ provisioning server create test-server 2xCPU-4 GB --debug-integration
**Health Check Debugging**:
```text
```bash
def debug-integration-health [] -> record {
print "=== Integration Health Debug ==="


@ -66,7 +66,7 @@ If you were using **Vault** or **AWS KMS** for development:
#### Step 1: Install Age
```text
```bash
# macOS
brew install age
@ -79,7 +79,7 @@ go install filippo.io/age/cmd/...@latest
#### Step 2: Generate Age Keys
```text
```bash
mkdir -p ~/.config/provisioning/age
age-keygen -o ~/.config/provisioning/age/private_key.txt
age-keygen -y ~/.config/provisioning/age/private_key.txt > ~/.config/provisioning/age/public_key.txt
@ -91,7 +91,7 @@ Replace your old Vault/AWS config:
**Old (Vault)**:
```text
```toml
[kms]
type = "vault"
address = "http://localhost:8200"
@ -101,7 +101,7 @@ mount_point = "transit"
**New (Age)**:
```text
```toml
[kms]
environment = "dev"
@ -112,7 +112,7 @@ private_key_path = "~/.config/provisioning/age/private_key.txt"
#### Step 4: Re-encrypt Development Secrets
```text
```bash
# Export old secrets (if using Vault)
vault kv get -format=json secret/dev > dev-secrets.json
@ -133,7 +133,7 @@ Choose one of these options:
**Option A: Cosmian Cloud (Managed)**
```text
```bash
# Sign up at https://cosmian.com
# Get API credentials
export COSMIAN_KMS_URL=https://kms.cosmian.cloud
@ -142,7 +142,7 @@ export COSMIAN_API_KEY=your-api-key
**Option B: Self-Hosted Cosmian KMS**
```text
```bash
# Deploy Cosmian KMS server
# See: https://docs.cosmian.com/kms/deployment/
@ -153,7 +153,7 @@ export COSMIAN_API_KEY=your-api-key
#### Step 2: Create Master Key in Cosmian
```text
```bash
# Using Cosmian CLI
cosmian-kms create-key
--algorithm AES
@ -175,7 +175,7 @@ curl -X POST $COSMIAN_KMS_URL/api/v1/keys
**From Vault to Cosmian**:
```text
```bash
# Export secrets from Vault
vault kv get -format=json secret/prod > prod-secrets.json
@ -197,7 +197,7 @@ cat prod-secrets.enc |
**From AWS KMS to Cosmian**:
```text
```bash
# Decrypt with AWS KMS
aws kms decrypt
--ciphertext-blob fileb://encrypted-data
@ -216,7 +216,7 @@ curl -X POST $COSMIAN_KMS_URL/api/v1/encrypt
**Old (AWS KMS)**:
```text
```toml
[kms]
type = "aws-kms"
region = "us-east-1"
@ -225,7 +225,7 @@ key_id = "arn:aws:kms:us-east-1:123456789012:key/..."
**New (Cosmian)**:
```text
```toml
[kms]
environment = "prod"
@ -239,7 +239,7 @@ use_confidential_computing = false # Enable if using SGX/SEV
#### Step 5: Test Production Setup
```text
```bash
# Set environment
export PROVISIONING_ENV=prod
export COSMIAN_KMS_URL=https://kms.example.com
@ -263,7 +263,7 @@ curl -X POST http://localhost:8082/api/v1/kms/decrypt
### Before (4 Backends)
```text
```bash
# Development could use any backend
[kms]
type = "vault" # or "aws-kms"
@ -279,7 +279,7 @@ key_id = "arn:aws:kms:..."
### After (2 Backends)
```text
```bash
# Clear environment-based selection
[kms]
dev_backend = "age"
@ -314,14 +314,14 @@ tls_verify = true
**Before**:
```text
```bash
KmsError::VaultError(String)
KmsError::AwsKmsError(String)
```
**After**:
```text
```bash
KmsError::AgeError(String)
KmsError::CosmianError(String)
```
@ -330,7 +330,7 @@ KmsError::CosmianError(String)
**Before**:
```text
```bash
enum KmsBackendConfig {
Vault { address, token, mount_point, ... },
AwsKms { region, key_id, assume_role },
@ -339,7 +339,7 @@ enum KmsBackendConfig {
**After**:
```text
```bash
enum KmsBackendConfig {
Age { public_key_path, private_key_path },
Cosmian { server_url, api_key, default_key_id, tls_verify },
@ -352,7 +352,7 @@ enum KmsBackendConfig {
**Before (AWS KMS)**:
```text
```bash
use kms_service::{KmsService, KmsBackendConfig};
let config = KmsBackendConfig::AwsKms {
@ -366,7 +366,7 @@ let kms = KmsService::new(config).await?;
**After (Cosmian)**:
```text
```bash
use kms_service::{KmsService, KmsBackendConfig};
let config = KmsBackendConfig::Cosmian {
@ -383,7 +383,7 @@ let kms = KmsService::new(config).await?;
**Before (Vault)**:
```text
```bash
# Set Vault environment
$env.VAULT_ADDR = "http://localhost:8200"
$env.VAULT_TOKEN = "root"
@ -394,7 +394,7 @@ kms encrypt "secret-data"
**After (Age for dev)**:
```text
```bash
# Set environment
$env.PROVISIONING_ENV = "dev"
@ -406,7 +406,7 @@ kms encrypt "secret-data"
If you need to rollback to Vault/AWS KMS:
```text
```bash
# Checkout previous version
git checkout tags/v0.1.0
@ -423,7 +423,7 @@ cp provisioning/config/kms.toml.backup provisioning/config/kms.toml
### Development Testing
```text
```bash
# 1. Generate Age keys
age-keygen -o /tmp/test_private.txt
age-keygen -y /tmp/test_private.txt > /tmp/test_public.txt
@ -442,7 +442,7 @@ cargo run --bin kms-service
### Production Testing
```text
```bash
# 1. Set up test Cosmian instance
export COSMIAN_KMS_URL=https://kms-staging.example.com
export COSMIAN_API_KEY=test-api-key
@ -464,7 +464,7 @@ cargo run --bin kms-service
### Age Keys Not Found
```text
```bash
# Check keys exist
ls -la ~/.config/provisioning/age/
@ -475,7 +475,7 @@ age-keygen -y ~/.config/provisioning/age/private_key.txt > ~/.config/provisionin
### Cosmian Connection Failed
```text
```bash
# Check network connectivity
curl -v $COSMIAN_KMS_URL/api/v1/health
@ -489,7 +489,7 @@ openssl s_client -connect kms.example.com:443
### Compilation Errors
```text
```bash
# Clean and rebuild
cd provisioning/platform/kms-service
cargo clean


@ -11,7 +11,7 @@ Replaces the Python implementation with significant performance improvements whi
## Performance Results
```text
```bash
🚀 Rust MCP Server Performance Analysis
==================================================
@ -35,7 +35,7 @@ Replaces the Python implementation with significant performance improvements whi
## Architecture
```text
```bash
src/
├── simple_main.rs # Lightweight MCP server entry point
├── main.rs # Full MCP server (with SDK integration)
@ -67,7 +67,7 @@ src/
## Usage
```text
```bash
# Build and run
cargo run --bin provisioning-mcp-server --release
@ -85,7 +85,7 @@ cargo run --bin provisioning-mcp-server --release
Set via environment variables:
```text
```javascript
export PROVISIONING_PATH=/path/to/provisioning
export PROVISIONING_AI_PROVIDER=openai
export OPENAI_API_KEY=your-key


@ -27,7 +27,7 @@ This reorganization enables efficient development workflows while maintaining fu
### New Development Structure (`/src/`)
```text
```bash
src/
├── config/ # System configuration
├── control-center/ # Control center application
@ -47,7 +47,7 @@ src/
### Legacy Structure (Preserved)
```text
```bash
repo-cnz/
├── cluster/ # Cluster configurations (preserved)
├── core/ # Core system (preserved)
@ -62,7 +62,7 @@ repo-cnz/
### Development Workspace (`/workspace/`)
```text
```bash
workspace/
├── config/ # Development configuration
├── extensions/ # Extension development
@ -92,7 +92,7 @@ workspace/
**Key Components**:
```text
```bash
tools/
├── build/ # Build tools
│ ├── compile-platform.nu # Platform-specific compilation
@ -163,20 +163,20 @@ The workspace provides a sophisticated development environment:
**Initialization**:
```text
```bash
cd workspace/tools
nu workspace.nu init --user-name developer --infra-name my-infra
```
**Health Monitoring**:
```text
```nushell
nu workspace.nu health --detailed --fix-issues
```
**Path Resolution**:
```text
```bash
use lib/path-resolver.nu
let config = (path-resolver resolve_config "user" --workspace-user "john")
```
@ -232,7 +232,7 @@ The workspace implements a sophisticated configuration cascade:
**Core System Entry Points**:
```text
```bash
# Main CLI (development version)
/src/core/nulib/provisioning
@ -245,7 +245,7 @@ The workspace implements a sophisticated configuration cascade:
**Build System**:
```text
```bash
# Main build system
cd /src/tools && make help
@ -258,7 +258,7 @@ make all
**Configuration Files**:
```text
```toml
# System defaults
/config.defaults.toml
@ -271,7 +271,7 @@ make all
**Extension Development**:
```text
```bash
# Provider template
/workspace/extensions/providers/template/
@ -286,7 +286,7 @@ make all
**1. Development Setup**:
```text
```bash
# Initialize workspace
cd workspace/tools
nu workspace.nu init --user-name $USER
@ -297,7 +297,7 @@ nu workspace.nu health --detailed
**2. Building Distribution**:
```text
```bash
# Complete build
cd src/tools
make all
@ -310,7 +310,7 @@ make windows
**3. Extension Development**:
```text
```bash
# Create new provider
cp -r workspace/extensions/providers/template workspace/extensions/providers/my-provider
@ -322,7 +322,7 @@ nu workspace/extensions/providers/my-provider/nulib/provider.nu test
**Existing Commands Still Work**:
```text
```bash
# All existing commands preserved
./core/nulib/provisioning server create
./core/nulib/provisioning taskserv install kubernetes


@ -15,7 +15,7 @@ backup)
Defines the contract that all providers must implement:
```text
```bash
# Standard interface functions
- query_servers
- server_info
@ -38,7 +38,7 @@ Defines the contract that all providers must implement:
Manages provider discovery and registration:
```text
```bash
# Initialize registry
init-provider-registry
@ -60,7 +60,7 @@ is-provider-available "aws"
Handles dynamic provider loading and validation:
```text
```bash
# Load provider dynamically
load-provider "aws"
@ -82,7 +82,7 @@ call-provider-function "aws" "query_servers" $find $cols
Each provider implements a standard adapter:
```text
```bash
provisioning/extensions/providers/
├── aws/provider.nu # AWS adapter
├── upcloud/provider.nu # UpCloud adapter
@ -92,7 +92,7 @@ provisioning/extensions/providers/
**Adapter Structure:**
```text
```bash
# AWS Provider Adapter
export def query_servers [find?: string, cols?: string] {
aws_query_servers $find $cols
@ -107,7 +107,7 @@ export def create_server [settings: record, server: record, check: bool, wait: b
The new middleware that uses dynamic dispatch:
```text
```bash
# No hardcoded imports!
export def mw_query_servers [settings: record, find?: string, cols?: string] {
$settings.data.servers | each { |server|
@ -121,7 +121,7 @@ export def mw_query_servers [settings: record, find?: string, cols?: string] {
### Example: Mixed Provider Infrastructure
```text
```javascript
let servers = [
{
hostname = "compute-01",
@ -144,7 +144,7 @@ servers
### Multi-Provider Deployment
```text
```bash
# Deploy across multiple providers automatically
mw_deploy_multi_provider_infra $settings $deployment_plan
@ -160,7 +160,7 @@ mw_suggest_deployment_strategy {
Providers declare their capabilities:
```text
```bash
capabilities: {
server_management: true
network_management: true
@ -177,7 +177,7 @@ capabilities: {
**Before (hardcoded):**
```text
```bash
# middleware.nu
use ../aws/nulib/aws/servers.nu *
use ../upcloud/nulib/upcloud/servers.nu *
@ -190,7 +190,7 @@ match $server.provider {
**After (provider-agnostic):**
```text
```bash
# middleware_provider_agnostic.nu
# No hardcoded imports!
@ -224,7 +224,7 @@ dispatch_provider_function $server.provider "query_servers" $find $cols
Create `provisioning/extensions/providers/{name}/provider.nu`:
```text
```nushell
# Digital Ocean Provider Example
export def get-provider-metadata [] {
{
@ -255,7 +255,7 @@ The registry will automatically discover the new provider on next initialization
### 3. Test New Provider
```text
```bash
# Check if discovered
is-provider-available "digitalocean"
@ -283,7 +283,7 @@ check-provider-health "digitalocean"
### Profile-Based Security
```text
```bash
# Environment profiles can restrict providers
PROVISIONING_PROFILE=production # Only allows certified providers
PROVISIONING_PROFILE=development # Allows all providers including local
@ -310,7 +310,7 @@ PROVISIONING_PROFILE=development # Allows all providers including local
### Debug Commands
```text
```bash
# Registry diagnostics
get-provider-stats
list-providers --verbose
@ -341,7 +341,7 @@ get-loader-stats
See the interface specification for complete function documentation:
```text
```bash
get-provider-interface-docs | table
```


@ -374,7 +374,7 @@ Outbound data transfer (per GB):
Use this matrix to quickly select a provider:
```text
```bash
If you need: Then use:
─────────────────────────────────────────────────────────────
Lowest cost compute Hetzner


@ -19,7 +19,7 @@ A cloud provider is **production-ready** when it completes all 4 tasks:
### Execution Sequence
```text
```bash
Tarea 4 (5 min) ──────┐
Tarea 1 (main) ───┐ ├──> Tarea 2 (tests)
Tarea 3 (parallel)┘ │
@ -33,19 +33,19 @@ Tarea 3 (parallel)┘ │
These rules are **mandatory** for all provider Nushell code:
### Rule 1: Module System & Imports
```text
```nushell
use mod.nu
use api.nu
use servers.nu
```
### Rule 2: Function Signatures
```text
```python
def function_name [param: type, optional: type = default] { }
```
### Rule 3: Return Early, Fail Fast
```text
```python
def operation [resource: record] {
if ($resource | get -o id | is-empty) {
error make {msg: "Resource ID required"}
@ -56,7 +56,7 @@ def operation [resource: record] {
### Rule 4: Modern Error Handling (CRITICAL)
**❌ FORBIDDEN** - Deprecated try-catch:
```text
```bash
try {
^external_command
} catch {|err|
@ -65,7 +65,7 @@ try {
```
**✅ REQUIRED** - Modern do/complete pattern:
```text
```javascript
let result = (do { ^external_command } | complete)
if $result.exit_code != 0 {
@ -79,7 +79,7 @@ $result.stdout
All operations must fully succeed or fully fail. No partial state changes.
### Rule 12: Structured Error Returns
```text
```bash
error make {
msg: "Human-readable message",
label: {text: "Error context", span: (metadata error).span}
@ -103,7 +103,7 @@ All Nickel schemas follow this pattern:
### contracts.ncl: Type Definitions
```text
```json
{
Server = {
id | String,
@ -123,7 +123,7 @@ All Nickel schemas follow this pattern:
### defaults.ncl: Default Values
```text
```json
{
Server = {
instance_type = "t3.micro",
@ -139,7 +139,7 @@ All Nickel schemas follow this pattern:
### main.ncl: Public API
```text
```javascript
let contracts = import "contracts.ncl" in
let defaults = import "defaults.ncl" in
@ -151,7 +151,7 @@ let defaults = import "defaults.ncl" in
### version.ncl: Version Tracking
```text
```json
{
provider_version = "1.0.0",
cli_tools = {
@ -162,7 +162,7 @@ let defaults = import "defaults.ncl" in
```
**Validation**:
```text
```bash
nickel typecheck nickel/contracts.ncl
nickel typecheck nickel/defaults.ncl
nickel typecheck nickel/main.ncl
@ -176,7 +176,7 @@ nickel export nickel/main.ncl
### Identify Violations
```text
```bash
cd provisioning/extensions/providers/{PROVIDER}
grep -r "try {" nulib/ --include="*.nu" | wc -l
@ -188,7 +188,7 @@ All three commands should return `0`.
### Fix Mutable Loops: Accumulation Pattern
```text
```bash
def retry_with_backoff [
closure: closure,
max_attempts: int
@ -226,7 +226,7 @@ def retry_with_backoff [
### Fix Mutable Loops: Recursive Pattern
```text
```bash
def _wait_for_state [
resource_id: string,
target_state: string,
@ -252,7 +252,7 @@ def _wait_for_state [
### Fix Error Handling
```text
```python
def create_server [config: record] {
if ($config | get -o name | is-empty) {
error make {msg: "Server name required"}
@ -280,7 +280,7 @@ def create_server [config: record] {
### Validation
```text
```bash
cd provisioning/extensions/providers/{PROVIDER}
for file in nulib/*/*.nu; do
@ -298,7 +298,7 @@ echo "✅ Nushell compliance complete"
### Directory Structure
```text
```bash
tests/
├── mocks/
│ └── mock_api_responses.json
@ -313,7 +313,7 @@ tests/
### Mock API Responses
```text
```json
{
"list_servers": {
"servers": [
@ -335,7 +335,7 @@ tests/
### Unit Tests: 14 Tests
```text
```python
def test-result [name: string, result: bool] {
if $result {
print $"✓ ($name)"
@ -529,7 +529,7 @@ main
### Test Orchestrator
```text
```bash
def main [] {
print "=== Provider Test Suite ==="
@ -567,7 +567,7 @@ exit (if $result.success {0} else {1})
### Validation
```text
```bash
cd provisioning/extensions/providers/{PROVIDER}
nu tests/run_{provider}_tests.nu
```
@ -580,7 +580,7 @@ Expected: 51 tests passing, exit code 0
### Directory Structure
```text
```bash
templates/
├── {provider}_servers.j2
├── {provider}_networks.j2
@ -589,7 +589,8 @@ templates/
### Template Example
```jinja2
```bash
jinja2
#!/bin/bash
# {{ provider_name }} Server Provisioning
set -e
@ -627,7 +628,7 @@ echo "Server provisioning complete"
### Validation
```text
```bash
cd provisioning/extensions/providers/{PROVIDER}
for template in templates/*.j2; do
@ -641,7 +642,7 @@ echo "✅ Templates valid"
## Tarea 4: Nickel Schema Validation
```text
```nickel
cd provisioning/extensions/providers/{PROVIDER}
nickel typecheck nickel/contracts.ncl || exit 1
@ -658,7 +659,7 @@ echo "✅ Nickel schemas validated"
## Complete Validation Script
```text
```bash
#!/bin/bash
set -e
@ -705,7 +706,7 @@ Use these as templates for new providers.
## Quick Start
```text
```bash
cd provisioning/extensions/providers/{PROVIDER}
# Validate completeness


@ -38,7 +38,7 @@ Fast, local development with direct access to provider source code.
### How It Works
```text
```bash
# Install provider for infrastructure (creates symlinks)
provisioning providers install upcloud wuji
@ -67,7 +67,7 @@ provisioning providers install upcloud wuji
### Example Workflow
```text
```bash
# 1. List available providers
provisioning providers list
@ -90,7 +90,7 @@ provisioning providers remove upcloud wuji
### File Structure
```text
```bash
extensions/providers/upcloud/
├── nickel/
│ ├── manifest.toml
@ -117,7 +117,7 @@ Create versioned, distributable artifacts for production deployments and team co
### How It Works
```text
```bash
# Package providers into distributable artifacts
export PROVISIONING=/Users/Akasha/project-provisioning/provisioning
./provisioning/core/cli/pack providers
@ -148,7 +148,7 @@ export PROVISIONING=/Users/Akasha/project-provisioning/provisioning
### Example Workflow
```text
```bash
# Set environment variable
export PROVISIONING=/Users/Akasha/project-provisioning/provisioning
@ -176,7 +176,7 @@ export PROVISIONING=/Users/Akasha/project-provisioning/provisioning
### File Structure
```text
```bash
provisioning/
├── distribution/
│ ├── packages/
@ -194,7 +194,7 @@ provisioning/
### Package Metadata Example
```text
```json
{
"name": "upcloud_prov",
"version": "0.0.1",
@ -232,7 +232,7 @@ provisioning/
### Development Phase
```text
```bash
# 1. Start with module-loader for development
provisioning providers list
provisioning providers install upcloud wuji
@ -248,7 +248,7 @@ nickel export workspace/infra/wuji/main.ncl
### Release Phase
```text
```bash
# 4. Create release packages
export PROVISIONING=/Users/Akasha/project-provisioning/provisioning
./provisioning/core/cli/pack providers
@ -266,7 +266,7 @@ rsync distribution/packages/*.tar user@repo.jesusperez.pro:/registry/v0.0.2/
### Production Deployment
```text
```bash
# 8. Download specific version from registry
wget https://repo.jesusperez.pro/registry/v0.0.2/upcloud_prov_0.0.2.tar
@ -283,7 +283,7 @@ tar -xf upcloud_prov_0.0.2.tar -C infrastructure/providers/
### Module-Loader Commands
```text
```bash
# List all available providers
provisioning providers list [--kcl] [--format table|json|yaml]
@ -308,7 +308,7 @@ provisioning providers validate <infra>
### Provider Pack Commands
```text
```bash
# Set environment variable (required)
export PROVISIONING=/path/to/provisioning
@ -338,7 +338,7 @@ export PROVISIONING=/path/to/provisioning
**Recommendation**: Module-Loader only
```text
```bash
# Simple and fast
providers install upcloud homelab
providers install aws cloud-backup
@ -355,7 +355,7 @@ providers install aws cloud-backup
**Recommendation**: Module-Loader + Git
```text
```bash
# Each developer
git clone repo
providers install upcloud project-x
@ -377,7 +377,7 @@ git pull
**Recommendation**: Hybrid (Module-Loader dev + Provider Packs releases)
```text
```bash
# Development (team member)
providers install upcloud staging-env
# Make changes...
@ -402,7 +402,7 @@ git tag v0.2.0
**Recommendation**: Provider Packs only
```text
```bash
# CI/CD Pipeline
pack providers # Build artifacts
# Run tests on packages
@ -426,7 +426,7 @@ pack providers # Build artifacts
**Recommendation**: Provider Packs + Registry
```text
```bash
# Maintainer
pack providers
# Create release on GitHub
@ -521,7 +521,7 @@ wget https://github.com/project/releases/v1.0.0/upcloud_prov_1.0.0.tar
When you're ready to move to production:
```text
```bash
# 1. Clean up development setup
providers remove upcloud wuji
@ -544,7 +544,7 @@ nickel eval defs/servers.ncl
When you need to debug or develop:
```text
```bash
# 1. Remove vendored version
rm -rf workspace/infra/wuji/vendor/upcloud_prov
@ -564,7 +564,7 @@ nickel eval defs/servers.ncl
### Environment Variables
```text
```bash
# Required for pack commands
export PROVISIONING=/path/to/provisioning
@ -576,7 +576,7 @@ export PROVISIONING_CONFIG=/path/to/provisioning
Distribution settings in `provisioning/config/config.defaults.toml`:
```text
```toml
[distribution]
pack_path = "{{paths.base}}/distribution/packages"
registry_path = "{{paths.base}}/distribution/registry"
@ -605,7 +605,7 @@ modules_dir = ".kcl-modules"
**Problem**: Provider not found after install
```text
```bash
# Check provider exists
providers list | grep upcloud
@ -618,7 +618,7 @@ ls -la workspace/infra/wuji/.kcl-modules/
**Problem**: Changes not reflected
```text
```bash
# Verify symlink is correct
readlink workspace/infra/wuji/.kcl-modules/upcloud_prov
@ -629,7 +629,7 @@ readlink workspace/infra/wuji/.kcl-modules/upcloud_prov
**Problem**: No .tar file created
```text
```bash
# Check KCL version (need 0.11.3+)
kcl version
@ -639,7 +639,7 @@ ls extensions/providers/upcloud/kcl/kcl.mod
**Problem**: PROVISIONING environment variable not set
```text
```bash
# Set it
export PROVISIONING=/Users/Akasha/project-provisioning/provisioning


@ -12,14 +12,14 @@ This guide shows how to quickly add a new provider to the provider-agnostic infr
### Step 1: Create Provider Directory
```text
```bash
mkdir -p provisioning/extensions/providers/{provider_name}
mkdir -p provisioning/extensions/providers/{provider_name}/nulib/{provider_name}
```
### Step 2: Copy Template and Customize
```text
```bash
# Copy the local provider as a template
cp provisioning/extensions/providers/local/provider.nu
provisioning/extensions/providers/{provider_name}/provider.nu
@ -29,7 +29,7 @@ cp provisioning/extensions/providers/local/provider.nu
Edit `provisioning/extensions/providers/{provider_name}/provider.nu`:
```text
```javascript
export def get-provider-metadata []: nothing -> record {
{
name: "your_provider_name"
@ -51,7 +51,7 @@ export def get-provider-metadata []: nothing -> record {
The provider interface requires these essential functions:
```text
```bash
# Required: Server operations
export def query_servers [find?: string, cols?: string]: nothing -> list {
# Call your provider's server listing API
@ -87,7 +87,7 @@ export def server_state [server: record, new_state: string, error_exit: bool, wa
Create `provisioning/extensions/providers/{provider_name}/nulib/{provider_name}/servers.nu`:
```text
```nushell
# Example: DigitalOcean provider functions
export def digitalocean_query_servers [find?: string, cols?: string]: nothing -> list {
# Use DigitalOcean API to list droplets
@ -122,7 +122,7 @@ export def digitalocean_create_server [settings: record, server: record, check:
### Step 6: Test Your Provider
```text
```bash
# Test provider discovery
nu -c "use provisioning/core/nulib/lib_provisioning/providers/registry.nu *; init-provider-registry; list-providers"
@ -137,7 +137,7 @@ nu -c "use provisioning/extensions/providers/your_provider_name/provider.nu *; q
Add to your Nickel configuration:
```text
```nickel
# workspace/infra/example/servers.ncl
let servers = [
{
@ -156,7 +156,7 @@ servers
For cloud providers (AWS, GCP, Azure, etc.):
```text
```bash
# Use HTTP calls to cloud APIs
export def cloud_query_servers [find?: string, cols?: string]: nothing -> list {
let auth_header = { Authorization: $"Bearer ($env.PROVIDER_TOKEN)" }
@ -170,7 +170,7 @@ export def cloud_query_servers [find?: string, cols?: string]: nothing -> list {
For container platforms (Docker, Podman, etc.):
```text
```bash
# Use CLI commands for container platforms
export def container_query_servers [find?: string, cols?: string]: nothing -> list {
let containers = (docker ps --format json | from json)
@ -183,7 +183,7 @@ export def container_query_servers [find?: string, cols?: string]: nothing -> li
For bare metal or existing servers:
```text
```bash
# Use SSH or local commands
export def baremetal_query_servers [find?: string, cols?: string]: nothing -> list {
# Read from inventory file or ping servers
@ -197,7 +197,7 @@ export def baremetal_query_servers [find?: string, cols?: string]: nothing -> li
### 1. Error Handling
```text
```javascript
export def provider_operation []: nothing -> any {
try {
# Your provider operation
@ -212,7 +212,7 @@ export def provider_operation []: nothing -> any {
### 2. Authentication
```text
```bash
# Check for required environment variables
def check_auth []: nothing -> bool {
if ($env | get -o PROVIDER_TOKEN) == null {
@ -225,7 +225,7 @@ def check_auth []: nothing -> bool {
### 3. Rate Limiting
```text
```bash
# Add delays for API rate limits
def api_call_with_retry [url: string]: nothing -> any {
mut attempts = 0
@ -248,7 +248,7 @@ def api_call_with_retry [url: string]: nothing -> any {
Set capabilities accurately:
```text
```bash
capabilities: {
server_management: true # Can create/delete servers
network_management: true # Can manage networks/VPCs
@ -281,7 +281,7 @@ capabilities: {
### Provider Not Found
```text
```bash
# Check provider directory structure
ls -la provisioning/extensions/providers/your_provider_name/
@ -291,14 +291,14 @@ grep "get-provider-metadata" provisioning/extensions/providers/your_provider_nam
### Interface Validation Failed
```text
```bash
# Check which functions are missing
nu -c "use provisioning/core/nulib/lib_provisioning/providers/interface.nu *; validate-provider-interface 'your_provider_name'"
```
### Authentication Errors
```text
```bash
# Check environment variables
env | grep PROVIDER


@ -4,13 +4,13 @@
### Create a New Taskserv (Interactive)
```text
```nushell
nu provisioning/tools/create-taskserv-helper.nu interactive
```
### Create a New Taskserv (Direct)
```text
```nushell
nu provisioning/tools/create-taskserv-helper.nu create my-api
--category development
--port 8080
@ -27,7 +27,7 @@ nu provisioning/tools/create-taskserv-helper.nu create my-api
### 2. Basic Structure
```text
```bash
my-service/
├── nickel/
│ ├── manifest.toml # Package definition
@ -43,7 +43,7 @@ my-service/
**manifest.toml** (package definition):
```text
```toml
[package]
name = "my-service"
version = "1.0.0"
@ -55,7 +55,7 @@ k8s = { oci = "oci://ghcr.io/kcl-lang/k8s", tag = "1.30" }
**my-service.ncl** (main schema):
```text
```javascript
let MyService = {
name | String,
version | String,
@ -75,7 +75,7 @@ let MyService = {
### 4. Test Your Taskserv
```text
```bash
# Discover your taskserv
nu -c "use provisioning/core/nulib/taskservs/discover.nu *; get-taskserv-info my-service"
@ -90,7 +90,7 @@ provisioning/core/cli/provisioning taskserv create my-service --infra wuji --che
### Web Service
```text
```javascript
let WebService = {
name | String,
version | String | default = "latest",
@ -111,7 +111,7 @@ WebService
### Database Service
```text
```javascript
let DatabaseService = {
name | String,
version | String | default = "latest",
@ -132,7 +132,7 @@ DatabaseService
### Background Worker
```text
```javascript
let BackgroundWorker = {
name | String,
version | String | default = "latest",
@ -154,7 +154,7 @@ BackgroundWorker
### Discovery
```text
```bash
# List all taskservs
nu -c "use provisioning/core/nulib/taskservs/discover.nu *; discover-taskservs | select name group"
@ -167,7 +167,7 @@ nu -c "use provisioning/workspace/tools/layer-utils.nu *; show_layer_stats"
### Development
```text
```bash
# Check Nickel syntax
nickel typecheck provisioning/extensions/taskservs/{category}/{name}/schemas/{name}.ncl
@ -181,7 +181,7 @@ provisioning/core/cli/provisioning taskserv check-updates
### Testing
```text
```bash
# Dry run deployment
provisioning/core/cli/provisioning taskserv create {name} --infra {infra} --check
@ -205,7 +205,7 @@ nu -c "use provisioning/workspace/tools/layer-utils.nu *; test_layer_resolution
### Taskserv Not Found
```text
```bash
# Check if discovered
nu -c "use provisioning/core/nulib/taskservs/discover.nu *; discover-taskservs | where name == my-service"
@ -215,7 +215,7 @@ ls provisioning/extensions/taskservs/{category}/my-service/kcl/kcl.mod
### Layer Resolution Issues
```text
```bash
# Debug resolution
nu -c "use provisioning/workspace/tools/layer-utils.nu *; test_layer_resolution my-service wuji upcloud"
@ -225,7 +225,7 @@ ls provisioning/workspace/templates/taskservs/{category}/my-service.ncl
### Nickel Syntax Errors
```text
```nickel
# Check syntax
nickel typecheck provisioning/extensions/taskservs/{category}/my-service/schemas/my-service.ncl


@ -27,7 +27,7 @@ files, you answer questions in an interactive form, and TypeDialog generates val
### 1. Configure a Platform Service (5 minutes)
```text
```toml
# Launch interactive form for orchestrator
provisioning config platform orchestrator
@ -51,14 +51,14 @@ This opens an interactive form with sections for:
After completing the form, TypeDialog generates `config.ncl`:
```text
```nickel
# View what was generated
cat workspace_librecloud/config/config.ncl
```
### 3. Validate Configuration
```text
```toml
# Check Nickel syntax is valid
nickel typecheck workspace_librecloud/config/config.ncl
@ -70,7 +70,7 @@ provisioning config export
Platform services automatically load the exported TOML:
```text
```toml
# Orchestrator reads config/generated/platform/orchestrator.toml
provisioning start orchestrator
@ -108,7 +108,7 @@ cat workspace_librecloud/config/generated/platform/orchestrator.toml
All configuration lives in one Nickel file with three sections:
```text
```nickel
# workspace_librecloud/config/config.ncl
{
# SECTION 1: Workspace metadata
@ -186,7 +186,7 @@ All configuration lives in one Nickel file with three sections:
**Example**:
```text
```bash
platform = {
orchestrator = {
enabled = true,
@ -223,7 +223,7 @@ platform = {
**Example**:
```text
```bash
platform = {
kms = {
enabled = true,
@ -246,7 +246,7 @@ platform = {
**Example**:
```text
```bash
platform = {
control_center = {
enabled = true,
@ -271,7 +271,7 @@ All platform services support four deployment modes, each with different resourc
**Mode-based Configuration Loading**:
```text
```toml
# Load a specific mode's configuration
export VAULT_MODE=enterprise
export REGISTRY_MODE=multiuser
@ -308,7 +308,7 @@ export RAG_MODE=cicd
**Environment Variable Overrides**:
```text
```bash
VAULT_CONFIG=/path/to/vault.toml # Explicit config path
VAULT_MODE=enterprise # Mode-specific config
VAULT_SERVER_URL=http://localhost:8200 # Server URL
@ -319,7 +319,7 @@ VAULT_TLS_VERIFY=true # TLS verification
**Example Configuration**:
```text
```toml
platform = {
vault_service = {
enabled = true,
@ -366,7 +366,7 @@ platform = {
**Environment Variable Overrides**:
```text
```bash
REGISTRY_CONFIG=/path/to/registry.toml # Explicit config path
REGISTRY_MODE=multiuser # Mode-specific config
REGISTRY_SERVER_HOST=0.0.0.0 # Server host
@ -380,7 +380,7 @@ REGISTRY_OCI_NAMESPACE=provisioning # OCI namespace
**Example Configuration**:
```text
```toml
platform = {
extension_registry = {
enabled = true,
@ -428,7 +428,7 @@ platform = {
**Environment Variable Overrides**:
```text
```bash
RAG_CONFIG=/path/to/rag.toml # Explicit config path
RAG_MODE=multiuser # Mode-specific config
RAG_ENABLED=true # Enable/disable RAG
@ -442,7 +442,7 @@ RAG_VECTOR_DB_TYPE=surrealdb # Vector DB type
**Example Configuration**:
```text
```toml
platform = {
rag = {
enabled = true,
@ -489,7 +489,7 @@ platform = {
**Environment Variable Overrides**:
```text
```bash
AI_SERVICE_CONFIG=/path/to/ai.toml # Explicit config path
AI_SERVICE_MODE=enterprise # Mode-specific config
AI_SERVICE_SERVER_PORT=8082 # Server port
@ -501,7 +501,7 @@ AI_SERVICE_DAG_MAX_CONCURRENT_TASKS=50 # Max concurrent tasks
**Example Configuration**:
```text
```toml
platform = {
ai_service = {
enabled = true,
@ -550,7 +550,7 @@ platform = {
**Environment Variable Overrides**:
```text
```bash
DAEMON_CONFIG=/path/to/daemon.toml # Explicit config path
DAEMON_MODE=enterprise # Mode-specific config
DAEMON_POLL_INTERVAL=30 # Polling interval (seconds)
@ -562,7 +562,7 @@ DAEMON_AUTO_UPDATE=true # Enable auto updates
**Example Configuration**:
```text
```toml
platform = {
provisioning_daemon = {
enabled = true,
@ -607,21 +607,21 @@ platform = {
**Environment Variables**:
```text
```bash
api_user = "{{env.UPCLOUD_USER}}"
api_password = "{{env.UPCLOUD_PASSWORD}}"
```
**Workspace Paths**:
```text
```bash
data_dir = "{{workspace.path}}/.orchestrator/data"
logs_dir = "{{workspace.path}}/.orchestrator/logs"
```
**KMS Decryption**:
```text
```bash
api_password = "{{kms.decrypt('upcloud_pass')}}"
```
@ -629,7 +629,7 @@ api_password = "{{kms.decrypt('upcloud_pass')}}"
### Validating Configuration
```text
```toml
# Check Nickel syntax
nickel typecheck workspace_librecloud/config/config.ncl
@ -642,7 +642,7 @@ provisioning config export
### Exporting to Service Formats
```text
```bash
# One-time export
provisioning config export
@ -693,7 +693,7 @@ provisioning/schemas/platform/
All 5 new services come with pre-built TOML configs for each deployment mode:
```text
```toml
# View available schemas for vault service
ls -la provisioning/schemas/platform/schemas/vault-service.ncl
ls -la provisioning/schemas/platform/defaults/vault-service-defaults.ncl
@ -725,7 +725,7 @@ export DAEMON_MODE=multiuser
If you prefer interactive updating:
```text
```bash
# Re-run TypeDialog form (overwrites config.ncl)
provisioning config platform orchestrator
@ -741,7 +741,7 @@ typedialog form .typedialog/provisioning/platform/orchestrator/form.toml
**Solution**: Check form.toml syntax and verify required fields are present (name, description, locales_path, templates_path)
```text
```toml
head -10 .typedialog/provisioning/platform/orchestrator/form.toml
```
@ -751,7 +751,7 @@ head -10 .typedialog/provisioning/platform/orchestrator/form.toml
**Solution**: Check for syntax errors and correct field names
```text
```bash
nickel typecheck workspace_librecloud/config/config.ncl 2>&1 | less
```
@ -763,7 +763,7 @@ Common issues: Missing closing braces, incorrect field names, wrong data types
**Solution**: Verify config.ncl exports to JSON and check all required sections exist
```text
```nickel
nickel export --format json workspace_librecloud/config/config.ncl | head -20
```
@ -781,7 +781,7 @@ nickel export --format json workspace_librecloud/config/config.ncl | head -20
### Development Setup
```text
```json
{
workspace = {
name = "dev",
@ -815,7 +815,7 @@ nickel export --format json workspace_librecloud/config/config.ncl | head -20
### Production Setup
```text
```json
{
workspace = {
name = "prod",
@ -859,7 +859,7 @@ nickel export --format json workspace_librecloud/config/config.ncl | head -20
### Multi-Provider Setup
```text
```json
{
workspace = {
name = "multi",
@ -904,7 +904,7 @@ nickel export --format json workspace_librecloud/config/config.ncl | head -20
Start with TypeDialog forms for the best experience:
```text
```bash
provisioning config platform orchestrator
```
@ -920,7 +920,7 @@ Only edit the source `.ncl` file, not the generated TOML files.
Always validate before deploying changes:
```text
```bash
nickel typecheck workspace_librecloud/config/config.ncl
provisioning config export
```
@ -973,14 +973,14 @@ Add comments explaining custom settings in the Nickel file.
Get detailed error messages and check available fields:
```text
```bash
nickel typecheck workspace_librecloud/config/config.ncl 2>&1 | less
grep "prompt =" .typedialog/provisioning/platform/orchestrator/form.toml
```
### Configuration Questions
```text
```toml
# Show all available config commands
provisioning config --help
@ -994,7 +994,7 @@ provisioning config services list
### Test Configuration
```text
```toml
# Validate without deploying
nickel typecheck workspace_librecloud/config/config.ncl


@ -42,7 +42,7 @@ quality, and efficiency.
**1. Clone and Navigate**:
```text
```bash
# Clone repository
git clone https://github.com/company/provisioning-system.git
cd provisioning-system
@ -53,7 +53,7 @@ cd workspace/tools
**2. Initialize Workspace**:
```text
```bash
# Initialize development workspace
nu workspace.nu init --user-name $USER --infra-name dev-env
@ -63,7 +63,7 @@ nu workspace.nu health --detailed --fix-issues
**3. Configure Development Environment**:
```text
```toml
# Create user configuration
cp workspace/config/local-overrides.toml.example workspace/config/$USER.toml
@ -73,7 +73,7 @@ $EDITOR workspace/config/$USER.toml
**4. Set Up Build System**:
```text
```bash
# Navigate to build tools
cd src/tools
@ -88,7 +88,7 @@ make dev-build
**Required Tools**:
```text
```bash
# Install Nushell
cargo install nu
@ -103,7 +103,7 @@ cargo install cargo-watch # File watching
**Optional Development Tools**:
```text
```bash
# Install development enhancers
cargo install nu_plugin_tera # Template plugin
cargo install sops # Secrets management
@ -114,7 +114,7 @@ brew install k9s # Kubernetes management
**VS Code Setup** (`.vscode/settings.json`):
```text
```json
{
"files.associations": {
"*.nu": "shellscript",
@ -143,7 +143,7 @@ brew install k9s # Kubernetes management
**1. Sync and Update**:
```text
```bash
# Sync with upstream
git pull origin main
@ -157,7 +157,7 @@ nu workspace.nu status --detailed
**2. Review Current State**:
```text
```bash
# Check current infrastructure
provisioning show servers
provisioning show settings
@ -170,7 +170,7 @@ nu workspace.nu status
**1. Feature Development**:
```text
```bash
# Create feature branch
git checkout -b feature/new-provider-support
@ -184,7 +184,7 @@ $EDITOR workspace/extensions/providers/new-provider/nulib/provider.nu
**2. Incremental Testing**:
```text
```bash
# Test syntax during development
nu --check workspace/extensions/providers/new-provider/nulib/provider.nu
@ -197,7 +197,7 @@ nu workspace.nu tools test-extension providers/new-provider
**3. Build and Validate**:
```text
```bash
# Quick development build
cd src/tools
make dev-build
@ -213,7 +213,7 @@ make test-dist
**Unit Testing**:
```text
```bash
# Add test examples to functions
def create-server [name: string] -> record {
# @test: "test-server" -> {name: "test-server", status: "created"}
@ -223,7 +223,7 @@ def create-server [name: string] -> record {
**Integration Testing**:
```text
```bash
# Test with real infrastructure
nu workspace/extensions/providers/new-provider/nulib/provider.nu
create-server test-server --dry-run
@ -236,7 +236,7 @@ PROVISIONING_WORKSPACE_USER=$USER provisioning server create test-server --check
**1. Commit Progress**:
```text
```bash
# Stage changes
git add .
@ -254,7 +254,7 @@ git push origin feature/new-provider-support
**2. Workspace Maintenance**:
```text
```bash
# Clean up development data
nu workspace.nu cleanup --type cache --age 1d
@ -271,7 +271,7 @@ nu workspace.nu health
**File Organization**:
```text
```bash
Extension Structure:
├── nulib/
│ ├── main.nu # Main entry point
@ -293,7 +293,7 @@ Extension Structure:
**Function Naming Conventions**:
```text
```bash
# Use kebab-case for commands
def create-server [name: string] -> record { ... }
def validate-config [config: record] -> bool { ... }
@ -310,7 +310,7 @@ def list-available-zones [] -> list<string> { ... }
**Error Handling Pattern**:
```text
```bash
def create-server [
name: string
--dry-run: bool = false
@ -347,7 +347,7 @@ def create-server [
**Project Organization**:
```text
```bash
src/
├── lib.rs # Library root
├── main.rs # Binary entry point
@ -367,7 +367,7 @@ src/
**Error Handling**:
```text
```bash
use anyhow::{Context, Result};
use thiserror::Error;
@ -404,7 +404,7 @@ pub fn create_server(name: &str) -> Result<ServerInfo> {
**Schema Structure**:
```text
```bash
# Base schema definitions
let ServerConfig = {
name | string,
@ -446,7 +446,7 @@ InfrastructureConfig
**Unit Test Pattern**:
```text
```bash
# Function with embedded test
def validate-server-name [name: string] -> bool {
# @test: "valid-name" -> true
@ -482,7 +482,7 @@ def test_validate_server_name [] {
**Integration Test Pattern**:
```text
```bash
# tests/integration/server-lifecycle-test.nu
def test_complete_server_lifecycle [] {
# Setup
@ -509,7 +509,7 @@ def test_complete_server_lifecycle [] {
**Unit Testing**:
```text
```bash
#[cfg(test)]
mod tests {
use super::*;
@ -540,7 +540,7 @@ mod tests {
**Integration Testing**:
```text
```bash
#[cfg(test)]
mod integration_tests {
use super::*;
@ -570,7 +570,7 @@ mod integration_tests {
**Schema Validation Testing**:
```text
```bash
# Test Nickel schemas
nickel check schemas/
@ -585,7 +585,7 @@ nickel eval schemas/server.ncl
**Continuous Testing**:
```text
```bash
# Watch for changes and run tests
cargo watch -x test -x check
@ -602,7 +602,7 @@ nu workspace.nu tools test-all --watch
**Enable Debug Mode**:
```text
```bash
# Environment variables
export PROVISIONING_DEBUG=true
export PROVISIONING_LOG_LEVEL=debug
@ -617,7 +617,7 @@ export PROVISIONING_WORKSPACE_USER=$USER
**Debug Techniques**:
```text
```bash
# Debug prints
def debug-server-creation [name: string] {
print $"🐛 Creating server: ($name)"
@ -658,7 +658,7 @@ def debug-interactive [] {
**Error Investigation**:
```text
```bash
# Comprehensive error handling
def safe-server-creation [name: string] {
try {
@ -691,7 +691,7 @@ def safe-server-creation [name: string] {
**Debug Logging**:
```text
```bash
use tracing::{debug, info, warn, error, instrument};
#[instrument]
@ -720,7 +720,7 @@ pub async fn create_server(name: &str) -> Result<ServerInfo> {
**Interactive Debugging**:
```text
```bash
// Use debugger breakpoints
#[cfg(debug_assertions)]
{
@ -734,7 +734,7 @@ pub async fn create_server(name: &str) -> Result<ServerInfo> {
**Log Monitoring**:
```text
```bash
# Follow all logs
tail -f workspace/runtime/logs/$USER/*.log
@ -750,7 +750,7 @@ jq '.level == "ERROR"' workspace/runtime/logs/$USER/structured.jsonl
**Debug Log Levels**:
```text
```bash
# Different verbosity levels
PROVISIONING_LOG_LEVEL=trace provisioning server create test
PROVISIONING_LOG_LEVEL=debug provisioning server create test
@ -763,7 +763,7 @@ PROVISIONING_LOG_LEVEL=info provisioning server create test
**Working with Legacy Components**:
```text
```bash
# Test integration with existing system
provisioning --version # Legacy system
src/core/nulib/provisioning --version # New system
@ -780,7 +780,7 @@ nu workspace.nu config validate
**REST API Testing**:
```text
```bash
# Test orchestrator API
curl -X GET http://localhost:9090/health
curl -X GET http://localhost:9090/tasks
@ -798,7 +798,7 @@ curl -X GET http://localhost:9090/workflows/batch/status/workflow-id
**SurrealDB Integration**:
```text
```nushell
# Test database connectivity
use core/nulib/lib_provisioning/database/surreal.nu
let db = (connect-database)
@ -814,7 +814,7 @@ assert ($status.status == "pending")
**Container Integration**:
```text
```bash
# Test with Docker
docker run --rm -v $(pwd):/work provisioning:dev provisioning --version
@ -841,7 +841,7 @@ make test-dist PLATFORM=kubernetes
**Workflow**:
```text
```bash
# Start new feature
git checkout main
git pull origin main
@ -869,7 +869,7 @@ gh pr create --title "Add new provider support" --body "..."
**Review Commands**:
```text
```bash
# Test PR locally
gh pr checkout 123
cd src/tools && make ci-test
@ -886,7 +886,7 @@ nu --check $(find . -name "*.nu")
**Code Documentation**:
```text
```nushell
# Function documentation
def create-server [
name: string # Server name (must be unique)
@ -925,7 +925,7 @@ def create-server [
**Automated Quality Gates**:
```text
```bash
# Pre-commit hooks
pre-commit install
@ -949,7 +949,7 @@ cargo audit
**Performance Testing**:
```text
```bash
# Benchmark builds
make benchmark
@ -962,7 +962,7 @@ ab -n 1000 -c 10 http://localhost:9090/health
**Resource Monitoring**:
```text
```bash
# Monitor during development
nu workspace/tools/runtime-manager.nu monitor --duration 5m
@ -977,7 +977,7 @@ df -h
**Never Hardcode**:
```text
```nushell
# Bad
def get-api-url [] { "https://api.upcloud.com" }
@ -991,7 +991,7 @@ def get-api-url [] {
**Comprehensive Error Context**:
```text
```nushell
def create-server [name: string] {
try {
validate-server-name $name
@ -1017,7 +1017,7 @@ def create-server [name: string] {
**Clean Up Resources**:
```text
```nushell
def with-temporary-server [name: string, action: closure] {
let server = (create-server $name)
@ -1038,7 +1038,7 @@ def with-temporary-server [name: string, action: closure] {
**Test Isolation**:
```text
```nushell
def test-with-isolation [test_name: string, test_action: closure] {
let test_workspace = $"test-($test_name)-(date now | format date '%Y%m%d%H%M%S')"

View File

@ -76,7 +76,7 @@ Before proceeding, verify your system has the core dependencies installed:
### Nushell
```text
```bash
# Check Nushell version
nu --version
@ -85,7 +85,7 @@ nu --version
### Nickel
```text
```bash
# Check Nickel version
nickel --version
@ -94,7 +94,7 @@ nickel --version
### Docker
```text
```bash
# Check Docker version
docker --version
@ -106,7 +106,7 @@ docker ps
### SOPS
```text
```bash
# Check SOPS version
sops --version
@ -115,7 +115,7 @@ sops --version
### Age
```text
```bash
# Check Age version
age --version
@ -126,7 +126,7 @@ age --version
### macOS (using Homebrew)
```text
```bash
# Install Homebrew if not already installed
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
@ -151,7 +151,7 @@ brew install k9s glow bat
### Ubuntu/Debian
```text
```bash
# Update package list
sudo apt update
@ -184,7 +184,7 @@ sudo apt install -y age
### Fedora/RHEL
```text
```bash
# Install Nushell
sudo dnf install -y nushell

View File

@ -15,7 +15,7 @@ Estimated time: 15-20 minutes
## Step 1: Clone the Repository
```text
```bash
# Clone the repository
git clone https://github.com/provisioning/provisioning-platform.git
cd provisioning-platform
@ -30,7 +30,7 @@ The platform uses multiple Nushell plugins for enhanced functionality.
### Install nu_plugin_tera (Template Rendering)
```text
```bash
# Install from crates.io
cargo install nu_plugin_tera
@ -40,7 +40,7 @@ nu -c "plugin add ~/.cargo/bin/nu_plugin_tera; plugin use tera"
### Verify Plugin Installation
```text
```bash
# Start Nushell
nu
@ -55,7 +55,7 @@ plugin list
Make the `provisioning` command available globally:
```text
```bash
# Option 1: Symlink to /usr/local/bin (recommended)
sudo ln -s "$(pwd)/provisioning/core/cli/provisioning" /usr/local/bin/provisioning
@ -71,7 +71,7 @@ provisioning --version
Generate keys for encrypting sensitive configuration:
```text
```bash
# Create Age key directory
mkdir -p ~/.config/provisioning/age
@ -90,7 +90,7 @@ chmod 644 ~/.config/provisioning/age/public_key.txt
Set up basic environment variables:
```text
```bash
# Create environment file
cat > ~/.provisioning/env << 'ENVEOF'
# Provisioning Environment Configuration
@ -110,7 +110,7 @@ echo 'source ~/.provisioning/env' >> ~/.bashrc # or ~/.zshrc
Create your first workspace:
```text
```bash
# Initialize a new workspace
provisioning workspace init my-first-workspace
@ -127,7 +127,7 @@ provisioning workspace list
Run the installation verification:
```text
```bash
# Check system configuration
provisioning validate config
@ -149,7 +149,7 @@ Expected output should show:
If you plan to use platform services (orchestrator, control center, etc.):
```text
```bash
# Build platform services
cd provisioning/platform
@ -176,7 +176,7 @@ ls */target/release/
Use the interactive installer for a guided setup:
```text
```bash
# Build the installer
cd provisioning/platform/installer
cargo build --release
@ -194,7 +194,7 @@ cargo build --release
If plugins aren't recognized:
```text
```bash
# Rebuild plugin registry
nu -c "plugin list; plugin use tera"
```
@ -203,7 +203,7 @@ nu -c "plugin list; plugin use tera"
If you encounter permission errors:
```text
```bash
# Ensure proper ownership
sudo chown -R $USER:$USER ~/.config/provisioning
@ -215,7 +215,7 @@ echo $PATH | grep provisioning
If encryption fails:
```text
```bash
# Verify keys exist
ls -la ~/.config/provisioning/age/

View File

@ -17,7 +17,7 @@ Estimated time: 10-15 minutes
Create a basic infrastructure configuration:
```text
```bash
# Generate infrastructure template
provisioning generate infra --new my-infra
@ -30,14 +30,14 @@ provisioning generate infra --new my-infra
Edit the generated configuration:
```text
```bash
# Edit with your preferred editor
$EDITOR workspace/infra/my-infra/settings.ncl
```
Example configuration:
```text
```nickel
import provisioning.settings as cfg
# Infrastructure settings
@ -62,7 +62,7 @@ servers = [
First, run in check mode to see what would happen:
```text
```bash
# Check mode - no actual changes
provisioning server create --infra my-infra --check
@ -78,7 +78,7 @@ provisioning server create --infra my-infra --check
If check mode looks good, create the server:
```text
```bash
# Create server
provisioning server create --infra my-infra
@ -93,7 +93,7 @@ provisioning server create --infra my-infra
Check server status:
```text
```bash
# List all servers
provisioning server list
@ -108,7 +108,7 @@ provisioning server ssh dev-server-01
Install a task service on the server:
```text
```bash
# Check mode first
provisioning taskserv create kubernetes --infra my-infra --check
@ -126,7 +126,7 @@ provisioning taskserv create kubernetes --infra my-infra --check
Proceed with installation:
```text
```bash
# Install Kubernetes
provisioning taskserv create kubernetes --infra my-infra --wait
@ -145,7 +145,7 @@ provisioning workflow monitor <task-id>
Check that Kubernetes is running:
```text
```bash
# List installed task services
provisioning taskserv list --infra my-infra
@ -164,7 +164,7 @@ provisioning server exec dev-server-01 -- kubectl get nodes
Create multiple servers at once:
```text
```nickel
servers = [
{hostname = "web-01", cores = 2, memory = 4096},
{hostname = "web-02", cores = 2, memory = 4096},
@ -172,7 +172,7 @@ servers = [
]
```
```text
```bash
provisioning server create --infra my-infra --servers web-01,web-02,db-01
```
@ -180,7 +180,7 @@ provisioning server create --infra my-infra --servers web-01,web-02,db-01
Install multiple services on one server:
```text
```bash
provisioning taskserv create kubernetes,cilium,postgres --infra my-infra --servers web-01
```
@ -188,7 +188,7 @@ provisioning taskserv create kubernetes,cilium,postgres --infra my-infra --serve
Deploy a complete cluster configuration:
```text
```bash
provisioning cluster create buildkit --infra my-infra
```
@ -196,7 +196,7 @@ provisioning cluster create buildkit --infra my-infra
The typical deployment workflow:
```text
```bash
# 1. Initialize workspace
provisioning workspace init production
@ -230,7 +230,7 @@ provisioning taskserv list
### Server Creation Fails
```text
```bash
# Check logs
provisioning server logs dev-server-01
@ -240,7 +240,7 @@ provisioning --debug server create --infra my-infra
### Task Service Installation Fails
```text
```bash
# Check task service logs
provisioning taskserv logs kubernetes
@ -250,7 +250,7 @@ provisioning taskserv create kubernetes --infra my-infra --force
### SSH Connection Issues
```text
```bash
# Verify SSH key
ls -la ~/.ssh/

View File

@ -15,7 +15,7 @@ After completing your first deployment, verify:
Check that all configuration is valid:
```text
```bash
# Validate all configuration
provisioning validate config
@ -25,7 +25,7 @@ provisioning validate config
# ✓ All required fields present
```
```text
```bash
# Check environment variables
provisioning env
@ -37,7 +37,7 @@ provisioning allenv
Check that servers are accessible and healthy:
```text
```bash
# List all servers
provisioning server list
@ -49,7 +49,7 @@ provisioning server list
# └───────────────┴──────────┴───────┴────────┴──────────────┴──────────┘
```
```text
```bash
# Check server details
provisioning server info dev-server-01
@ -61,7 +61,7 @@ provisioning server ssh dev-server-01 -- echo "SSH working"
Check installed task services:
```text
```bash
# List task services
provisioning taskserv list
@ -75,7 +75,7 @@ provisioning taskserv list
# └────────────┴─────────┴────────────────┴──────────┘
```
```text
```bash
# Check specific task service
provisioning taskserv status kubernetes
@ -87,7 +87,7 @@ provisioning taskserv logs kubernetes --tail 50
If you installed Kubernetes, verify it's working:
```text
```bash
# Check Kubernetes nodes
provisioning server ssh dev-server-01 -- kubectl get nodes
@ -96,7 +96,7 @@ provisioning server ssh dev-server-01 -- kubectl get nodes
# dev-server-01 Ready control-plane 10m v1.28.0
```
```text
```bash
# Check Kubernetes pods
provisioning server ssh dev-server-01 -- kubectl get pods -A
@ -109,7 +109,7 @@ If you installed platform services:
### Orchestrator
```text
```bash
# Check orchestrator health
curl http://localhost:8080/health
@ -117,14 +117,14 @@ curl http://localhost:8080/health
# {"status":"healthy","version":"0.1.0"}
```
```text
```bash
# List tasks
curl http://localhost:8080/tasks
```
### Control Center
```text
```bash
# Check control center health
curl http://localhost:9090/health
@ -136,7 +136,7 @@ curl -X POST http://localhost:9090/policies/evaluate
### KMS Service
```text
```bash
# Check KMS health
curl http://localhost:8082/api/v1/kms/health
@ -148,7 +148,7 @@ echo "test" | provisioning kms encrypt
Run comprehensive health checks:
```text
```bash
# Check all components
provisioning health check
@ -165,7 +165,7 @@ provisioning health check
If you used workflows:
```text
```bash
# List all workflows
provisioning workflow list
@ -180,7 +180,7 @@ provisioning workflow stats
### DNS Resolution (If CoreDNS Installed)
```text
```bash
# Test DNS resolution
dig @localhost test.provisioning.local
@ -190,7 +190,7 @@ provisioning server ssh dev-server-01 -- systemctl status coredns
### Network Connectivity
```text
```bash
# Test server-to-server connectivity
provisioning server ssh dev-server-01 -- ping -c 3 dev-server-02
@ -200,7 +200,7 @@ provisioning server ssh dev-server-01 -- sudo iptables -L
### Storage and Resources
```text
```bash
# Check disk usage
provisioning server ssh dev-server-01 -- df -h
@ -215,7 +215,7 @@ provisioning server ssh dev-server-01 -- top -bn1 | head -20
### Configuration Validation Failed
```text
```bash
# View detailed error
provisioning validate config --verbose
@ -225,7 +225,7 @@ provisioning validate config --infra my-infra
### Server Unreachable
```text
```bash
# Check server logs
provisioning server logs dev-server-01
@ -235,7 +235,7 @@ provisioning --debug server ssh dev-server-01
### Task Service Not Running
```text
```bash
# Check service logs
provisioning taskserv logs kubernetes
@ -245,7 +245,7 @@ provisioning taskserv restart kubernetes --infra my-infra
### Platform Service Down
```text
```bash
# Check service status
provisioning platform status orchestrator
@ -260,7 +260,7 @@ provisioning platform restart orchestrator
### Response Time Tests
```text
```bash
# Measure server response time
time provisioning server info dev-server-01
@ -273,7 +273,7 @@ time provisioning workflow submit test-workflow.ncl
### Resource Usage
```text
```bash
# Check platform resource usage
docker stats # If using Docker
@ -285,7 +285,7 @@ provisioning system resources
### Encryption
```text
```bash
# Verify encryption keys
ls -la ~/.config/provisioning/age/
@ -295,7 +295,7 @@ echo "test" | provisioning kms encrypt | provisioning kms decrypt
### Authentication (If Enabled)
```text
```bash
# Test login
provisioning login --username admin

View File

@ -51,7 +51,7 @@ Choose a deployment mode based on your needs:
The configuration system is managed by a standalone script that doesn't require the main installer:
```text
```bash
# Navigate to the provisioning directory
cd /path/to/project-provisioning
@ -70,7 +70,7 @@ TypeDialog provides an interactive form-based configuration interface available
#### Quick Interactive Setup (All Services at Once)
```text
```bash
# Run interactive setup - prompts for choices
./provisioning/scripts/setup-platform-config.sh
@ -83,7 +83,7 @@ TypeDialog provides an interactive form-based configuration interface available
#### Configure Specific Service with TypeDialog
```text
```bash
# Configure orchestrator in solo mode with web UI
./provisioning/scripts/setup-platform-config.sh
--service orchestrator
@ -103,7 +103,7 @@ TypeDialog provides an interactive form-based configuration interface available
Quick mode automatically creates all service configurations from defaults overlaid with mode-specific tuning.
```text
```bash
# Quick setup for solo development mode
./provisioning/scripts/setup-platform-config.sh --quick-mode --mode solo
@ -123,7 +123,7 @@ Quick mode automatically creates all service configurations from defaults overla
For advanced users who prefer editing configuration files directly:
```text
```bash
# View schema definition
cat provisioning/schemas/platform/schemas/orchestrator.ncl
@ -153,7 +153,7 @@ nickel typecheck provisioning/config/runtime/orchestrator.solo.ncl
The configuration system uses layered composition:
```text
```toml
1. Schema (Type contract)
↓ Defines valid fields and constraints
@ -179,7 +179,7 @@ All layers are automatically composed and validated.
After running the setup script, verify the configuration was created:
```text
```bash
# List generated runtime configurations
ls -la provisioning/config/runtime/
@ -198,7 +198,7 @@ After successful configuration, services can be started:
### Running a Single Service
```text
```bash
# Set deployment mode
export ORCHESTRATOR_MODE=solo
@ -209,7 +209,7 @@ cargo run -p orchestrator
### Running Multiple Services
```text
```bash
# Terminal 1: Vault Service (secrets management)
export VAULT_MODE=solo
cargo run -p vault-service
@ -227,7 +227,7 @@ cargo run -p control-center
### Docker-Based Deployment
```text
```bash
# Start all services in Docker (requires docker-compose.yml)
cd provisioning/platform/infrastructure/docker
docker-compose -f docker-compose.solo.yml up
@ -238,7 +238,7 @@ docker-compose -f docker-compose.enterprise.yml up
## Step 6: Verify Services Are Running
```text
```bash
# Check orchestrator status
curl http://localhost:9000/health
@ -256,7 +256,7 @@ cargo run -p orchestrator -- --log-level debug
If you need to switch from solo to multiuser mode:
```text
```bash
# Option 1: Re-run setup with new mode
./provisioning/scripts/setup-platform-config.sh --quick-mode --mode multiuser
@ -271,7 +271,7 @@ If you need to switch from solo to multiuser mode:
If you need fine-grained control:
```text
```bash
# 1. Edit the Nickel configuration directly
vim provisioning/config/runtime/orchestrator.solo.ncl
@ -296,7 +296,7 @@ cargo run -p orchestrator
For workspace-specific customization:
```text
```bash
# Create workspace override file
mkdir -p workspace_myworkspace/config
cat > workspace_myworkspace/config/platform-overrides.ncl <<'EOF'
@ -321,7 +321,7 @@ EOF
## Available Configuration Commands
```text
```bash
# List all available modes
./provisioning/scripts/setup-platform-config.sh --list-modes
# Output: solo, multiuser, cicd, enterprise
@ -344,7 +344,7 @@ EOF
### Public Definitions (Part of repository)
```text
```bash
provisioning/schemas/platform/
├── schemas/ # Type contracts (Nickel)
├── defaults/ # Base configuration values
@ -356,7 +356,7 @@ provisioning/schemas/platform/
### Private Runtime Configs (Gitignored)
```text
```bash
provisioning/config/runtime/ # User-specific deployments
├── orchestrator.solo.ncl # Editable config
├── orchestrator.multiuser.ncl
@ -367,7 +367,7 @@ provisioning/config/runtime/ # User-specific deployments
### Examples (Reference)
```text
```bash
provisioning/config/examples/
├── orchestrator.solo.example.ncl # Solo mode reference
└── orchestrator.enterprise.example.ncl # Enterprise mode reference
@ -377,7 +377,7 @@ provisioning/config/examples/
### Issue: Script Fails with "Nickel not found"
```text
```bash
# Install Nickel
# macOS
brew install nickel
@ -392,7 +392,7 @@ nickel --version
### Issue: Configuration Won't Generate TOML
```text
```bash
# Check Nickel syntax
nickel typecheck provisioning/config/runtime/orchestrator.solo.ncl
@ -405,7 +405,7 @@ nickel export --format toml provisioning/config/runtime/orchestrator.solo.ncl
### Issue: Service Can't Read Configuration
```text
```bash
# Verify TOML file exists
ls -la provisioning/config/runtime/generated/orchestrator.solo.toml
@ -422,7 +422,7 @@ cargo run -p orchestrator --verbose
### Issue: Services Won't Start After Config Change
```text
```bash
# If you edited .ncl file manually, TOML must be regenerated
./provisioning/scripts/setup-platform-config.sh --generate-toml
@ -454,7 +454,7 @@ Files in `provisioning/schemas/platform/` are **version-controlled** because:
The setup script is safe to run multiple times:
```text
```bash
# Safe: Updates only what's needed
./provisioning/scripts/setup-platform-config.sh --quick-mode --mode enterprise

View File

@ -26,7 +26,7 @@ Before starting this guide, ensure you have:
Provisioning uses **declarative configuration** to manage infrastructure. Instead of manually creating resources, you define what you want in
configuration files, and the system makes it happen.
```text
```toml
You describe → System creates → Infrastructure exists
```
@ -51,7 +51,7 @@ You describe → System creates → Infrastructure exists
Create your personal configuration:
```text
```bash
# Initialize user configuration
provisioning init config
@ -60,7 +60,7 @@ provisioning init config
### Step 2: Verify Your Environment
```text
```bash
# Check your environment setup
provisioning env
@ -70,7 +70,7 @@ provisioning allenv
You should see output like:
```text
```bash
✅ Configuration loaded successfully
✅ All required tools available
📁 Base path: /usr/local/provisioning
@ -79,7 +79,7 @@ You should see output like:
### Step 3: Explore Available Resources
```text
```bash
# List available providers
provisioning list providers
@ -96,7 +96,7 @@ Let's create a simple local infrastructure to learn the basics.
### Step 1: Create a Workspace
```text
```bash
# Create a new workspace directory
mkdir ~/my-first-infrastructure
cd ~/my-first-infrastructure
@ -107,7 +107,7 @@ provisioning generate infra --new local-demo
This creates:
```text
```bash
local-demo/
├── config/
│ └── config.ncl # Master Nickel configuration
@ -120,14 +120,14 @@ local-demo/
### Step 2: Examine the Configuration
```text
```bash
# View the generated configuration
provisioning show settings --infra local-demo
```
### Step 3: Validate the Configuration
```text
```bash
# Validate syntax and structure
provisioning validate config --infra local-demo
@ -136,7 +136,7 @@ provisioning validate config --infra local-demo
### Step 4: Deploy Infrastructure (Check Mode)
```text
```bash
# Dry run - see what would be created
provisioning server create --infra local-demo --check
@ -145,7 +145,7 @@ provisioning server create --infra local-demo --check
### Step 5: Create Your Infrastructure
```text
```bash
# Create the actual infrastructure
provisioning server create --infra local-demo
@ -159,7 +159,7 @@ provisioning server list --infra local-demo
Let's install a containerized service:
```text
```bash
# Install Docker/containerd
provisioning taskserv create containerd --infra local-demo
@ -171,7 +171,7 @@ provisioning taskserv list --infra local-demo
For container orchestration:
```text
```bash
# Install Kubernetes
provisioning taskserv create kubernetes --infra local-demo
@ -180,7 +180,7 @@ provisioning taskserv create kubernetes --infra local-demo
### Checking Service Status
```text
```bash
# Show all services on your infrastructure
provisioning show servers --infra local-demo
@ -194,7 +194,7 @@ provisioning show servers web-01 taskserv kubernetes --infra local-demo
All commands follow this pattern:
```text
```bash
provisioning [global-options] <command> [command-options] [arguments]
```
@ -229,7 +229,7 @@ The system supports multiple environments:
### Switching Environments
```text
```bash
# Set environment for this session
export PROVISIONING_ENV=dev
provisioning env
@ -242,7 +242,7 @@ provisioning --environment dev server create
Create environment configs:
```text
```bash
# Development environment
provisioning init config dev
@ -254,7 +254,7 @@ provisioning init config prod
### Workflow 1: Development Environment
```text
```bash
# 1. Create development workspace
mkdir ~/dev-environment
cd ~/dev-environment
@ -276,7 +276,7 @@ provisioning taskserv create containerd --infra dev-setup
### Workflow 2: Service Updates
```text
```bash
# Check for service updates
provisioning taskserv check-updates
@ -289,7 +289,7 @@ provisioning taskserv versions kubernetes
### Workflow 3: Infrastructure Scaling
```text
```bash
# Add servers to existing infrastructure
# Edit settings.ncl to add more servers
@ -304,14 +304,14 @@ provisioning taskserv create containerd --infra dev-setup
### Starting Interactive Shell
```text
```bash
# Start Nushell with provisioning loaded
provisioning nu
```
In the interactive shell, you have access to all provisioning functions:
```text
```nushell
# Inside Nushell session
use lib_provisioning *
@ -324,7 +324,7 @@ help commands | where name =~ "provision"
### Useful Interactive Commands
```text
```nushell
# Show detailed server information
find_servers "web-*" | table
@ -346,7 +346,7 @@ taskservs_list | where status == "running"
### Configuration Hierarchy
```text
```toml
Infrastructure settings.ncl
↓ (overrides)
Environment config.{env}.toml
@ -358,7 +358,7 @@ System config.defaults.toml
### Customizing Your Configuration
```text
```bash
# Edit user configuration
provisioning sops ~/.provisioning/config.user.toml
@ -368,7 +368,7 @@ nano ~/.provisioning/config.user.toml
Example customizations:
```text
```toml
[debug]
enabled = true # Enable debug mode by default
log_level = "debug" # Verbose logging
@ -384,7 +384,7 @@ format = "json" # Prefer JSON output
### Checking System Status
```text
```bash
# Overall system health
provisioning env
@ -397,7 +397,7 @@ provisioning taskserv list --infra dev-setup
### Logging and Debugging
```text
```bash
# Enable debug mode for troubleshooting
provisioning --debug server create --infra dev-setup --check
@ -407,7 +407,7 @@ provisioning show logs --infra dev-setup
### Cost Monitoring
```text
```bash
# Show cost estimates
provisioning show cost --infra dev-setup
@ -440,7 +440,7 @@ provisioning server price --infra dev-setup
### 4. Development Workflow
```text
```bash
# 1. Always validate before applying
provisioning validate config --infra my-infra
@ -458,7 +458,7 @@ provisioning show servers --infra my-infra
### Built-in Help System
```text
```bash
# General help
provisioning help
@ -485,7 +485,7 @@ Let's walk through a complete example of setting up a web application infrastruc
### Step 1: Plan Your Infrastructure
```text
```bash
# Create project workspace
mkdir ~/webapp-infrastructure
cd ~/webapp-infrastructure
@ -504,7 +504,7 @@ Edit `webapp/settings.ncl` to define:
### Step 3: Deploy Base Infrastructure
```text
```bash
# Validate configuration
provisioning validate config --infra webapp
@ -517,7 +517,7 @@ provisioning server create --infra webapp
### Step 4: Install Services
```text
```bash
# Install container runtime on all servers
provisioning taskserv create containerd --infra webapp
@ -530,7 +530,7 @@ provisioning taskserv create postgresql --infra webapp
### Step 5: Deploy Application
```text
```bash
# Create application cluster
provisioning cluster create webapp --infra webapp

View File

@ -42,7 +42,7 @@ Before installation, ensure you have:
### Pre-installation Checklist
```text
```bash
# Check your system
uname -a # View system information
df -h # Check available disk space
@ -57,7 +57,7 @@ This is the easiest method for most users.
#### Step 1: Download the Package
```text
```bash
# Download the latest release package
wget https://releases.example.com/provisioning-latest.tar.gz
@ -67,7 +67,7 @@ curl -LO https://releases.example.com/provisioning-latest.tar.gz
#### Step 2: Extract and Install
```text
```bash
# Extract the package
tar xzf provisioning-latest.tar.gz
@ -91,7 +91,7 @@ For containerized environments or testing.
#### Using Docker
```text
```bash
# Pull the provisioning container
docker pull provisioning:latest
@ -108,7 +108,7 @@ sudo ln -sf /usr/local/provisioning/bin/provisioning /usr/local/bin/provisioning
#### Using Podman
```text
```bash
# Similar to Docker but with Podman
podman pull provisioning:latest
podman run -it --name provisioning-setup
@ -127,7 +127,7 @@ For developers or custom installations.
#### Installation Steps
```text
```bash
# Clone the repository
git clone https://github.com/your-org/provisioning.git
cd provisioning
@ -143,7 +143,7 @@ cd provisioning
For advanced users who want complete control.
```text
```bash
# Create installation directory
sudo mkdir -p /usr/local/provisioning
@ -165,7 +165,7 @@ The installation process sets up:
#### 1. Core System Files
```text
```bash
/usr/local/provisioning/
├── core/ # Core provisioning logic
├── providers/ # Cloud provider integrations
@ -200,7 +200,7 @@ The installation process sets up:
### Basic Verification
```text
```bash
# Check if provisioning command is available
provisioning --version
@ -213,7 +213,7 @@ provisioning allenv
Expected output should show:
```text
```bash
✅ Provisioning v1.0.0 installed
✅ All dependencies available
✅ Configuration loaded successfully
@ -221,7 +221,7 @@ Expected output should show:
### Tool Verification
```text
```bash
# Check individual tools
nu --version # Should show Nushell 0.109.0+
nickel version # Should show Nickel 1.5+
@ -232,7 +232,7 @@ k9s version # Should show K9s 0.50.6
### Plugin Verification
```text
```bash
# Start Nushell and check plugins
nu -c "version | get installed_plugins"
@ -242,7 +242,7 @@ nu -c "version | get installed_plugins"
### Configuration Verification
```text
```bash
# Validate configuration
provisioning validate config
@ -256,7 +256,7 @@ provisioning validate config
Add to your shell profile (`~/.bashrc`, `~/.zshrc`, or `~/.profile`):
```text
```bash
# Add provisioning to PATH
export PATH="/usr/local/bin:$PATH"
@ -266,7 +266,7 @@ export PROVISIONING="/usr/local/provisioning"
### Configuration Initialization
```text
```bash
# Initialize user configuration
provisioning init config
@ -275,7 +275,7 @@ provisioning init config
### First-Time Setup
```text
```bash
# Set up your first workspace
mkdir -p ~/provisioning-workspace
cd ~/provisioning-workspace
@ -291,7 +291,7 @@ provisioning env
### Linux (Ubuntu/Debian)
```text
```bash
# Install system dependencies
sudo apt update
sudo apt install -y curl wget tar
@ -305,7 +305,7 @@ sudo ./install-provisioning
### Linux (RHEL/CentOS/Fedora)
```text
```bash
# Install system dependencies
sudo dnf install -y curl wget tar
# or for older versions: sudo yum install -y curl wget tar
@ -315,7 +315,7 @@ sudo dnf install -y curl wget tar
### macOS
```text
```bash
# Using Homebrew (if available)
brew install curl wget
@ -328,7 +328,7 @@ sudo ./install-provisioning
### Windows (WSL2)
```text
```bash
# In WSL2 terminal
sudo apt update
sudo apt install -y curl wget tar
@ -344,7 +344,7 @@ wget https://releases.example.com/provisioning-latest.tar.gz
Create `~/.provisioning/config.user.toml`:
```text
```toml
[core]
name = "my-provisioning"
@ -367,7 +367,7 @@ format = "yaml"
For developers, use enhanced debugging:
```text
```toml
[debug]
enabled = true
log_level = "debug"
@ -381,7 +381,7 @@ enabled = false # Disable caching during development
### Upgrading from Previous Version
```text
```bash
# Backup current installation
sudo cp -r /usr/local/provisioning /usr/local/provisioning.backup
@ -399,7 +399,7 @@ provisioning --version
### Migrating Configuration
```text
```bash
# Backup your configuration
cp -r ~/.provisioning ~/.provisioning.backup
@ -415,7 +415,7 @@ provisioning init config
#### Permission Denied Errors
```text
```bash
# Problem: Cannot write to /usr/local
# Solution: Use sudo
sudo ./install-provisioning
@ -427,7 +427,7 @@ export PATH="$HOME/provisioning/bin:$PATH"
#### Missing Dependencies
```text
```bash
# Problem: curl/wget not found
# Ubuntu/Debian solution:
sudo apt install -y curl wget tar
@ -438,7 +438,7 @@ sudo dnf install -y curl wget tar
#### Download Failures
```text
```bash
# Problem: Cannot download package
# Solution: Check internet connection and try alternative
ping google.com
@ -452,7 +452,7 @@ wget --tries=3 https://releases.example.com/provisioning-latest.tar.gz
#### Extraction Failures
```text
```bash
# Problem: Archive corrupted
# Solution: Verify and re-download
sha256sum provisioning-latest.tar.gz # Check against published hash
@ -464,7 +464,7 @@ wget https://releases.example.com/provisioning-latest.tar.gz
#### Tool Installation Failures
```text
```bash
# Problem: Nushell installation fails
# Solution: Check architecture and OS compatibility
uname -m # Should show x86_64 or arm64
@ -478,7 +478,7 @@ uname -s # Should show Linux, Darwin, etc.
#### Command Not Found
```text
```bash
# Problem: 'provisioning' command not found
# Check installation path
ls -la /usr/local/bin/provisioning
@ -493,7 +493,7 @@ echo 'export PATH="/usr/local/bin:$PATH"' >> ~/.bashrc
#### Plugin Errors
```text
```bash
# Problem: Plugin command not found
# Solution: Ensure plugin is properly registered
@ -506,7 +506,7 @@ exec nu
#### Configuration Errors
```text
```bash
# Problem: Configuration validation fails
# Solution: Initialize with template
provisioning init config

View File

@ -16,7 +16,7 @@ Before running the bootstrap script, verify that your system has all required de
Run these commands to verify your system meets minimum requirements:
```text
```bash
# Check OS
uname -s
# Expected: Darwin (macOS), Linux, or WSL2
@ -48,7 +48,7 @@ df -h | grep -E '^/dev|^Filesystem'
Nushell is required for bootstrap and CLI operations:
```text
```bash
command -v nu
# Expected output: /path/to/nu
@ -58,7 +58,7 @@ nu --version
**If Nushell is not installed:**
```text
```bash
# macOS (using Homebrew)
brew install nushell
@ -75,7 +75,7 @@ sudo yum install nushell
Nickel is required for configuration validation:
```text
```bash
command -v nickel
# Expected output: /path/to/nickel
@ -85,7 +85,7 @@ nickel --version
**If Nickel is not installed:**
```text
```bash
# Install via Cargo (requires Rust)
cargo install nickel-lang-cli
@ -96,7 +96,7 @@ cargo install nickel-lang-cli
Docker is required for running containerized services:
```text
```bash
command -v docker
# Expected output: /path/to/docker
@ -112,7 +112,7 @@ Visit [Docker installation guide](https://docs.docker.com/get-docker/) and insta
Verify the provisioning CLI binary exists:
```text
```bash
ls -la /Users/Akasha/project-provisioning/provisioning/core/cli/provisioning
# Expected: -rwxr-xr-x (executable)
@ -122,13 +122,13 @@ file /Users/Akasha/project-provisioning/provisioning/core/cli/provisioning
**If binary is not executable:**
```text
```bash
chmod +x /Users/Akasha/project-provisioning/provisioning/core/cli/provisioning
```
### Prerequisites Checklist
```text
```bash
[ ] OS is macOS, Linux, or WSL2
[ ] CPU: 2+ cores available
[ ] RAM: 2 GB minimum installed
@ -147,13 +147,13 @@ The bootstrap script automates 7 stages of installation and initialization. Run
### Step 2.1: Navigate to Project Root
```text
```bash
cd /Users/Akasha/project-provisioning
```
### Step 2.2: Run Bootstrap Script
```text
```bash
./provisioning/bootstrap/install.sh
```
@ -161,7 +161,7 @@ cd /Users/Akasha/project-provisioning
You should see output similar to this:
```text
```bash
╔════════════════════════════════════════════════════════════════╗
║ PROVISIONING BOOTSTRAP (Bash) ║
╚════════════════════════════════════════════════════════════════╝
@ -241,7 +241,7 @@ After bootstrap completes, verify that all components are working correctly.
Bootstrap should have created workspace directories. Verify they exist:
```text
```bash
cd /Users/Akasha/project-provisioning
# Check all required directories
@ -253,7 +253,7 @@ ls -la workspaces/workspace_librecloud/.clusters/
```
**Expected Output**:
```text
```bash
total 0
drwxr-xr-x 2 user group 64 Jan 7 10:30 .
@ -264,7 +264,7 @@ drwxr-xr-x 2 user group 64 Jan 7 10:30 .
Bootstrap should have exported Nickel configuration to TOML format:
```text
```bash
# Check generated files exist
ls -la workspaces/workspace_librecloud/config/generated/
@ -279,7 +279,7 @@ cat workspaces/workspace_librecloud/config/generated/platform/orchestrator.toml
```
**Expected Output**:
```text
```bash
config/
├── generated/
│ ├── workspace.toml
@ -293,7 +293,7 @@ config/
Verify Nickel configuration files have valid syntax:
```text
```bash
cd /Users/Akasha/project-provisioning/workspaces/workspace_librecloud
# Type-check main workspace config
@ -313,7 +313,7 @@ nu workspace.nu typecheck
```
**Expected Output**:
```text
```bash
✓ All files validated successfully
✓ infra/wuji/main.ncl
✓ infra/sgoyol/main.ncl
@ -323,7 +323,7 @@ nu workspace.nu typecheck
The orchestrator service manages workflows and deployments:
```text
```bash
# Check if orchestrator is running (health check)
curl http://localhost:9090/health
# Expected: {"status": "healthy"} or similar response
@ -337,7 +337,7 @@ ps aux | grep orchestrator
```
**Expected Output**:
```text
```json
{
"status": "healthy",
"uptime": "0:05:23"
@ -348,7 +348,7 @@ ps aux | grep orchestrator
Check logs and restart manually:
```text
```bash
cd /Users/Akasha/project-provisioning/provisioning/platform/orchestrator
# Check log file
@ -365,7 +365,7 @@ curl http://localhost:9090/health
You can install the provisioning CLI globally for easier access:
```text
```bash
# Option A: System-wide installation (requires sudo)
cd /Users/Akasha/project-provisioning
sudo ./scripts/install-provisioning.sh
@ -382,7 +382,7 @@ provisioning --version
```
**Expected Output**:
```text
```bash
provisioning version 1.0.0
Usage: provisioning [OPTIONS] COMMAND
@ -396,7 +396,7 @@ Commands:
### Installation Validation Checklist
```text
```bash
[ ] Workspace directories created (.orchestrator, .kms, .providers, .taskservs, .clusters)
[ ] Generated TOML files exist in config/generated/
[ ] Nickel type-checking passes (no errors)
@ -415,7 +415,7 @@ This section covers common issues and solutions.
### Issue: "Nushell not found"
**Symptoms**:
```text
```bash
./provisioning/bootstrap/install.sh: line X: nu: command not found
```
@ -427,7 +427,7 @@ This section covers common issues and solutions.
### Issue: "Nickel configuration validation failed"
**Symptoms**:
```text
```bash
⚙️ Stage 4: Validating Configuration
Error: Nickel configuration validation failed
```
@ -441,7 +441,7 @@ Error: Nickel configuration validation failed
### Issue: "Docker not installed"
**Symptoms**:
```text
```bash
❌ Docker is required but not installed
```
@ -453,7 +453,7 @@ Error: Nickel configuration validation failed
### Issue: "Configuration export failed"
**Symptoms**:
```text
```bash
⚠️ Configuration export encountered issues (may continue)
```
@ -472,7 +472,7 @@ Error: Nickel configuration validation failed
### Issue: "Orchestrator didn't start"
**Symptoms**:
```text
```bash
🚀 Stage 6: Initializing Orchestrator Service
⚠️ Orchestrator may not have started (check logs)
@ -492,7 +492,7 @@ curl http://localhost:9090/health
### Issue: "Sudo password prompt during bootstrap"
**Symptoms**:
```text
```bash
Stage 3: Creating Directory Structure
[sudo] password for user:
```
@ -505,12 +505,12 @@ Stage 3: Creating Directory Structure
### Issue: "Permission denied" on binary
**Symptoms**:
```text
```bash
bash: ./provisioning/bootstrap/install.sh: Permission denied
```
**Solution**:
```text
```bash
# Make script executable
chmod +x /Users/Akasha/project-provisioning/provisioning/bootstrap/install.sh
@ -528,7 +528,7 @@ After successful installation validation, you can:
To deploy infrastructure to UpCloud:
```text
```bash
# Read workspace deployment guide
cat workspaces/workspace_librecloud/docs/deployment-guide.md
@ -541,7 +541,7 @@ cat docs/deployment-guide.md
To create a new workspace for different infrastructure:
```text
```bash
provisioning workspace init my_workspace --template minimal
```
@ -549,7 +549,7 @@ provisioning workspace init my_workspace --template minimal
Discover what's available to deploy:
```text
```bash
# List available task services
provisioning mod discover taskservs
@ -566,7 +566,7 @@ provisioning mod discover clusters
After completing all steps, verify with this final checklist:
```text
```bash
Prerequisites Verified:
[ ] OS is macOS, Linux, or WSL2
[ ] CPU: 2+ cores

View File

@ -26,7 +26,7 @@ Native Nushell plugins for high-performance operations. **10-50x faster than HTT
### Authentication Plugin (nu_plugin_auth)
```text
```bash
# Login (password prompted securely)
auth login admin
@ -54,7 +54,7 @@ auth mfa verify --code ABCD-EFGH-IJKL # Backup code
**Installation:**
```text
```bash
cd provisioning/core/plugins/nushell-plugins
cargo build --release -p nu_plugin_auth
plugin add target/release/nu_plugin_auth
@ -64,7 +64,7 @@ plugin add target/release/nu_plugin_auth
**Performance**: 10x faster encryption (~5 ms vs ~50 ms HTTP)
```text
```bash
# Encrypt with auto-detected backend
kms encrypt "secret data"
# vault:v1:abc123...
@ -102,7 +102,7 @@ kms status
**Installation:**
```text
```bash
cargo build --release -p nu_plugin_kms
plugin add target/release/nu_plugin_kms
@ -115,7 +115,7 @@ export RUSTYVAULT_TOKEN="hvs.xxxxx"
**Performance**: 30-50x faster queries (~1 ms vs ~30-50 ms HTTP)
```text
```bash
# Get orchestrator status (direct file access, ~1 ms)
orch status
# { active_tasks: 5, completed_tasks: 120, health: "healthy" }
@ -132,7 +132,7 @@ orch tasks --status failed --limit 10
**Installation:**
```text
```bash
cargo build --release -p nu_plugin_orchestrator
plugin add target/release/nu_plugin_orchestrator
```
@ -154,7 +154,7 @@ plugin add target/release/nu_plugin_orchestrator
### Infrastructure Shortcuts
```text
```bash
# Server shortcuts
provisioning s # server (same as 'provisioning server')
provisioning s create # Create servers
@ -186,7 +186,7 @@ provisioning i validate
### Orchestration Shortcuts
```text
```bash
# Workflow shortcuts
provisioning wf # workflow (same as 'provisioning workflow')
provisioning flow # workflow (alias)
@ -217,7 +217,7 @@ provisioning orch logs
### Development Shortcuts
```text
```bash
# Module shortcuts
provisioning mod # module (same as 'provisioning module')
provisioning mod discover taskserv
@ -251,7 +251,7 @@ provisioning pack clean
### Workspace Shortcuts
```text
```bash
# Workspace shortcuts
provisioning ws # workspace (same as 'provisioning workspace')
provisioning ws init
@ -275,7 +275,7 @@ provisioning tpl validate <name>
### Configuration Shortcuts
```text
```bash
# Environment shortcuts
provisioning e # env (same as 'provisioning env')
provisioning val # validate (same as 'provisioning validate')
@ -296,7 +296,7 @@ provisioning allenv # Show all config and environment
### Utility Shortcuts
```text
```bash
# List shortcuts
provisioning l # list (same as 'provisioning list')
provisioning ls # list (alias)
@ -334,7 +334,7 @@ provisioning plugin test nu_plugin_kms
### Generation Shortcuts
```text
```bash
# Generate shortcuts
provisioning g # generate (same as 'provisioning generate')
provisioning gen # generate (alias)
@ -347,7 +347,7 @@ provisioning g new <type> <name>
### Action Shortcuts
```text
```bash
# Common actions
provisioning c # create (same as 'provisioning create')
provisioning d # delete (same as 'provisioning delete')
@ -369,7 +369,7 @@ provisioning csts # create-server-task (alias)
### Server Management
```text
```bash
# Create servers
provisioning server create
provisioning server create --check # Dry-run mode
@ -396,7 +396,7 @@ provisioning server price --provider upcloud
### Taskserv Management
```text
```bash
# Create taskserv
provisioning taskserv create kubernetes
provisioning taskserv create kubernetes --check
@ -421,7 +421,7 @@ provisioning taskserv check-updates --taskserv kubernetes
### Cluster Management
```text
```bash
# Create cluster
provisioning cluster create buildkit
provisioning cluster create buildkit --check
@ -442,7 +442,7 @@ provisioning cluster list --infra wuji
### Workflow Management
```text
```bash
# Submit server creation workflow
nu -c "use core/nulib/workflows/server_create.nu *; server_create_workflow 'wuji' '' [] --check"
@ -475,7 +475,7 @@ nu -c "use core/nulib/workflows/management.nu *; workflow status <task_id>"
### Batch Operations
```text
```bash
# Submit batch workflow from Nickel
provisioning batch submit workflows/example_batch.ncl
nu -c "use core/nulib/workflows/batch.nu *; batch submit workflows/example_batch.ncl"
@ -507,7 +507,7 @@ nu -c "use core/nulib/workflows/batch.nu *; batch stats"
### Orchestrator Management
```text
```bash
# Start orchestrator in background
cd provisioning/platform/orchestrator
./scripts/start-orchestrator.nu --background
@ -531,7 +531,7 @@ provisioning orchestrator logs
### Environment and Validation
```text
```bash
# Show environment variables
provisioning env
@ -548,7 +548,7 @@ provisioning setup
### Configuration Files
```text
```bash
# System defaults
less provisioning/config/config.defaults.toml
@ -566,7 +566,7 @@ vim workspace/infra/<name>/config.toml
### HTTP Configuration
```text
```toml
# Configure HTTP client behavior
# In workspace/config/local-overrides.toml:
[http]
@ -579,7 +579,7 @@ use_curl = true # Use curl instead of ureq
### Workspace Management
```text
```bash
# List all workspaces
provisioning workspace list
@ -617,7 +617,7 @@ provisioning workspace migrate
### User Preferences
```text
```bash
# View user preferences
provisioning workspace preferences
@ -642,7 +642,7 @@ provisioning workspace get-preference editor
### Authentication (via CLI)
```text
```bash
# Login
provisioning login admin
@ -658,7 +658,7 @@ provisioning auth sessions
### Multi-Factor Authentication (MFA)
```text
```bash
# Enroll in TOTP (Google Authenticator, Authy)
provisioning mfa totp enroll
@ -675,7 +675,7 @@ provisioning mfa devices
### Secrets Management
```text
```bash
# Generate AWS STS credentials (15 min-12h TTL)
provisioning secrets generate aws --ttl 1hr
@ -694,7 +694,7 @@ provisioning secrets cleanup
### SSH Temporal Keys
```text
```bash
# Connect to server with temporal key
provisioning ssh connect server01 --ttl 1hr
@ -710,7 +710,7 @@ provisioning ssh revoke <key_id>
### KMS Operations (via CLI)
```text
```bash
# Encrypt configuration file
provisioning kms encrypt secure.yaml
@ -726,7 +726,7 @@ provisioning config decrypt workspace/infra/production/
### Break-Glass Emergency Access
```text
```bash
# Request emergency access
provisioning break-glass request "Production database outage"
@ -742,7 +742,7 @@ provisioning break-glass revoke <session_id>
### Compliance and Audit
```text
```bash
# Generate compliance report
provisioning compliance report
provisioning compliance report --standard gdpr
@ -770,7 +770,7 @@ provisioning audit export --format json --output audit-logs.json
### Complete Deployment from Scratch
```text
```bash
# 1. Initialize workspace
provisioning workspace init --name production
@ -804,7 +804,7 @@ provisioning server ssh k8s-master-01
### Multi-Environment Deployment
```text
```bash
# Deploy to dev
provisioning server create --infra dev --check
provisioning server create --infra dev
@ -823,7 +823,7 @@ provisioning taskserv create kubernetes --infra production
### Update Infrastructure
```text
```bash
# 1. Check for updates
provisioning taskserv check-updates
@ -839,7 +839,7 @@ provisioning taskserv list --infra production | where name == kubernetes
### Encrypted Secrets Deployment
```text
```bash
# 1. Authenticate
auth login admin
auth mfa verify --code 123456
@ -862,7 +862,7 @@ orch tasks --status completed
Enable verbose logging with `--debug` or `-x` flag:
```text
```bash
# Server creation with debug output
provisioning server create --debug
provisioning server create -x
@ -878,7 +878,7 @@ provisioning --debug taskserv create kubernetes
Preview changes without applying them with `--check` or `-c` flag:
```text
```bash
# Check what servers would be created
provisioning server create --check
provisioning server create -c
@ -897,7 +897,7 @@ provisioning server create --check --debug
Skip confirmation prompts with `--yes` or `-y` flag:
```text
```bash
# Auto-confirm server creation
provisioning server create --yes
provisioning server create -y
@ -910,7 +910,7 @@ provisioning server delete --yes
Wait for operations to complete with `--wait` or `-w` flag:
```text
```bash
# Wait for server creation to complete
provisioning server create --wait
@ -922,7 +922,7 @@ provisioning taskserv create kubernetes --wait
Specify target infrastructure with `--infra` or `-i` flag:
```text
```bash
# Create servers in specific infrastructure
provisioning server create --infra production
provisioning server create -i production
@ -937,7 +937,7 @@ provisioning server list --infra production
### JSON Output
```text
```bash
# Output as JSON
provisioning server list --out json
provisioning taskserv list --out json
@ -948,7 +948,7 @@ provisioning server list --out json | jq '.[] | select(.status == "running")'
### YAML Output
```text
```bash
# Output as YAML
provisioning server list --out yaml
provisioning taskserv list --out yaml
@ -959,7 +959,7 @@ provisioning server list --out yaml | yq '.[] | select(.status == "running")'
### Table Output (Default)
```text
```bash
# Output as table (default)
provisioning server list
provisioning server list --out table
@ -970,7 +970,7 @@ provisioning server list | table
### Text Output
```text
```bash
# Output as plain text
provisioning server list --out text
```
@ -981,7 +981,7 @@ provisioning server list --out text
### Use Plugins for Frequent Operations
```text
```nushell
# ❌ Slow: HTTP API (50 ms per call)
for i in 1..100 { http post http://localhost:9998/encrypt { data: "secret" } }
@ -991,14 +991,14 @@ for i in 1..100 { kms encrypt "secret" }
### Batch Operations
```text
```bash
# Use batch workflows for multiple operations
provisioning batch submit workflows/multi-cloud-deploy.ncl
```
### Check Mode for Testing
```text
```bash
# Always test with --check first
provisioning server create --check
provisioning server create # Only after verification
@ -1010,7 +1010,7 @@ provisioning server create # Only after verification
### Command-Specific Help
```text
```bash
# Show help for specific command
provisioning help server
provisioning help taskserv
@ -1028,7 +1028,7 @@ provisioning help config
### Bi-Directional Help
```text
```bash
# All these work identically:
provisioning help workspace
provisioning workspace help
@ -1038,7 +1038,7 @@ provisioning help ws
### General Help
```text
```bash
# Show all commands
provisioning help
provisioning --help
@ -1065,7 +1065,7 @@ provisioning --version
## Plugin Installation Quick Reference
```text
```bash
# Build all plugins (one-time setup)
cd provisioning/core/plugins/nushell-plugins
cargo build --release --all

View File

@ -13,7 +13,7 @@ Please see the complete quick start guide here:
## Quick Commands
```text
```bash
# Check system status
provisioning status

View File

@ -70,12 +70,12 @@ This guide provides detailed information about each setup profile and when to us
#### Step 1: Run Setup
```text
```bash
provisioning setup profile --profile developer
```
Output:
```text
```bash
╔═══════════════════════════════════════════════════════╗
║ PROVISIONING SYSTEM SETUP - DEVELOPER PROFILE ║
╚═══════════════════════════════════════════════════════╝
@ -108,7 +108,7 @@ System automatically detects:
Creates three Nickel configs:
**system.ncl** - System info (read-only):
```text
```nickel
{
version = "1.0.0",
config_base_path = "/Users/user/Library/Application Support/provisioning",
@ -124,7 +124,7 @@ Creates three Nickel configs:
```
**platform/deployment.ncl** - Deployment config (can edit):
```text
```nickel
{
deployment = {
mode = 'docker_compose,
@ -149,7 +149,7 @@ Creates three Nickel configs:
```
**user_preferences.ncl** - User settings (can edit):
```text
```nickel
{
output_format = 'yaml,
use_colors = true,
@ -163,7 +163,7 @@ Creates three Nickel configs:
#### Step 4: Validation
Each config is validated:
```text
```bash
✓ Validating system.ncl
✓ Validating platform/deployment.ncl
✓ Validating user_preferences.ncl
@ -173,7 +173,7 @@ Each config is validated:
#### Step 5: Service Startup
Docker Compose starts:
```text
```bash
✓ Starting Docker Compose services...
✓ Starting orchestrator... [port 9090]
✓ Starting control-center... [port 3000]
@ -183,7 +183,7 @@ Docker Compose starts:
#### Step 6: Verification
Health checks verify services:
```text
```bash
✓ Orchestrator health: HEALTHY
✓ Control Center health: HEALTHY
✓ KMS health: HEALTHY
@ -194,32 +194,32 @@ Setup complete in 3 minutes 47 seconds!
### After Setup: Common Tasks
**Verify everything works**:
```text
```bash
curl http://localhost:9090/health
curl http://localhost:3000/health
curl http://localhost:3001/health
```
**View your configuration**:
```text
```bash
cat ~/Library/Application\ Support/provisioning/system.ncl
cat ~/Library/Application\ Support/provisioning/platform/deployment.ncl
```
**Create a workspace**:
```text
```bash
provisioning workspace create myapp
```
**View logs**:
```text
```bash
docker-compose logs orchestrator
docker-compose logs control-center
docker-compose logs kms
```
**Stop services**:
```text
```bash
docker-compose down
```
@ -277,7 +277,7 @@ docker-compose down
#### Step 1: Run Setup
```text
```bash
provisioning setup profile --profile production --interactive
```
@ -289,7 +289,7 @@ Same as Developer profile - auto-detects OS, CPU, memory, etc.
The wizard asks 10-15 questions:
```text
```bash
1. Deployment Mode?
a) Kubernetes (recommended for HA)
b) SSH (manual server management)
@ -366,7 +366,7 @@ The wizard asks 10-15 questions:
Creates extensive Nickel configs:
**platform/deployment.ncl**:
```text
```nickel
{
deployment = {
mode = 'kubernetes,
@ -393,7 +393,7 @@ Creates extensive Nickel configs:
```
**providers/upcloud.ncl**:
```text
```nickel
{
provider = 'upcloud,
api_key_ref = "rustyvault://secrets/upcloud/api-key",
@ -405,7 +405,7 @@ Creates extensive Nickel configs:
```
**cedar-policies/default.cedar**:
```text
```cedar
permit(
principal == User::"john@company.com",
action == Action::"Deploy",
@ -429,7 +429,7 @@ forbid(
#### Step 5: Validation
All configs validated:
```text
```bash
✓ Validating system.ncl
✓ Validating platform/deployment.ncl
✓ Validating providers/upcloud.ncl
@ -439,7 +439,7 @@ All configs validated:
#### Step 6: Summary & Confirmation
```text
```bash
Setup Summary
─────────────────────────────────────────
Profile: Production
@ -457,7 +457,7 @@ Do you want to proceed? (y/n): y
#### Step 7: Infrastructure Creation (Optional)
```text
```bash
Creating UpCloud infrastructure...
Creating 3 master nodes... [networking configured]
Creating 5 worker nodes... [networking configured]
@ -478,28 +478,28 @@ Deploy services:
### After Setup: Common Tasks
**View Kubernetes cluster**:
```text
```bash
kubectl get nodes
kubectl get pods --all-namespaces
```
**Check Cedar authorization**:
```text
```bash
cat ~/.config/provisioning/cedar-policies/default.cedar
```
**View infrastructure definition**:
```text
```bash
cat workspace-production-infrastructure/infrastructure.ncl
```
**Deploy an application**:
```text
```bash
provisioning app deploy myapp --workspace production-infrastructure
```
**Monitor cluster**:
```text
```bash
# Access Grafana
open http://localhost:3000
@ -547,7 +547,7 @@ open http://localhost:9090
#### Example: GitHub Actions
```text
```yaml
name: Integration Tests
on: [push, pull_request]
@ -598,27 +598,27 @@ jobs:
#### What Happens
**Step 1: Minimal Detection**
```text
```bash
✓ Detected: CI environment
✓ Profile: CICD
```
**Step 2: Ephemeral Config Creation**
```text
```bash
✓ Created: /tmp/provisioning-ci-abc123def456/
✓ Created: /tmp/provisioning-ci-abc123def456/system.ncl
✓ Created: /tmp/provisioning-ci-abc123def456/platform/deployment.ncl
```
**Step 3: Validation**
```text
```bash
✓ Validating system.ncl
✓ Validating platform/deployment.ncl
✓ All configurations validated: PASSED
```
**Step 4: Services Start**
```text
```bash
✓ Starting Docker Compose services
✓ Orchestrator running [port 9090]
✓ Control Center running [port 3000]
@ -627,7 +627,7 @@ jobs:
```
**Step 5: Tests Execute**
```text
```bash
$ curl http://localhost:9090/health
{"status": "healthy", "uptime": "2s"}
@ -639,7 +639,7 @@ All tests passed!
```
**Step 6: Automatic Cleanup**
```text
```bash
✓ Cleanup triggered (job exit)
✓ Stopping Docker Compose
✓ Removing temporary directory: /tmp/provisioning-ci-abc123def456/
@ -650,7 +650,7 @@ All tests passed!
Use environment variables to customize:
```text
```bash
# Provider (local or cloud)
export PROVISIONING_PROVIDER=local|upcloud|aws|hetzner
@ -670,7 +670,7 @@ export PROVISIONING_CONFIG=/tmp/custom-config.ncl
### CI/CD Best Practices
**1. Use matrix builds for testing**:
```text
```yaml
strategy:
matrix:
profile: [developer, production]
@ -678,7 +678,7 @@ strategy:
```
**2. Cache Nickel compilation**:
```text
```yaml
- uses: actions/cache@v3
with:
path: ~/.cache/nickel
@ -686,7 +686,7 @@ strategy:
```
**3. Separate test stages**:
```text
```yaml
- name: Setup (CI/CD Profile)
- name: Test Unit
- name: Test Integration
@ -694,7 +694,7 @@ strategy:
```
**4. Publish test results**:
```text
```yaml
- name: Publish Test Results
if: always()
uses: actions/upload-artifact@v3
@ -730,7 +730,7 @@ strategy:
### Migration Path
```text
```bash
Developer → Production
(ready for team)
@ -747,7 +747,7 @@ You can run Developer locally and CI/CD in your pipeline simultaneously.
If you started with Developer and want to move to Production:
```text
```bash
# Backup your current setup
tar czf provisioning-backup.tar.gz ~/.config/provisioning/
@ -763,7 +763,7 @@ tar xzf provisioning-backup.tar.gz
All profiles' Nickel configs can be edited after setup:
```text
```bash
# Edit deployment config
vim ~/.config/provisioning/platform/deployment.ncl
@ -781,7 +781,7 @@ docker-compose restart # or kubectl apply -f
### Developer Profile
**Problem**: Docker not running
```text
```bash
# Solution: Start Docker
docker daemon &
# or
@ -789,7 +789,7 @@ sudo systemctl start docker
```
**Problem**: Ports 9090/3000/3001 already in use
```text
```bash
# Solution: Kill conflicting process
lsof -i :9090 | grep LISTEN | awk '{print $2}' | xargs kill -9
```
@ -797,14 +797,14 @@ lsof -i :9090 | grep LISTEN | awk '{print $2}' | xargs kill -9
### Production Profile
**Problem**: Kubernetes not installed
```text
```bash
# Solution: Install kubectl
brew install kubectl # macOS
sudo apt-get install kubectl # Linux
```
**Problem**: Cloud credentials rejected
```text
```bash
# Solution: Verify credentials
upcloud auth status # or aws sts get-caller-identity
# Re-run setup with correct credentials
@ -813,13 +813,13 @@ upcloud auth status # or aws sts get-caller-identity
### CI/CD Profile
**Problem**: Services not accessible from test
```text
```bash
# Solution: Use service DNS
curl http://orchestrator:9090/health # instead of localhost
```
**Problem**: Cleanup not working
```text
```bash
# Solution: Manual cleanup
docker system prune -f
rm -rf /tmp/provisioning-ci-*/

View File

@ -4,7 +4,7 @@
## Step 1: Check Prerequisites (30 seconds)
```text
```bash
# Check Nushell
nu --version # Should be 0.109.0+
@ -17,7 +17,7 @@ systemctl --version
## Step 2: Install Provisioning (1 minute)
```text
```bash
# Option A: Using installer script
curl -sSL https://install.provisioning.dev | bash
@ -29,7 +29,7 @@ cd provisioning
## Step 3: Initialize System (2 minutes)
```text
```bash
# Run interactive setup
provisioning setup system --interactive
@ -41,7 +41,7 @@ provisioning setup system --interactive
## Step 4: Create Your First Workspace (1 minute)
```text
```bash
# Create workspace
provisioning setup workspace myapp
@ -51,7 +51,7 @@ provisioning workspace list
## Step 5: Deploy Your First Server (1 minute)
```text
```bash
# Activate workspace
provisioning workspace activate myapp
@ -67,7 +67,7 @@ provisioning server create --yes
## Verify Everything Works
```text
```bash
# Check health
provisioning platform health
@ -80,7 +80,7 @@ provisioning server ssh <server-name>
## Common Commands Cheat Sheet
```text
```bash
# Workspace management
provisioning workspace list # List all workspaces
provisioning workspace activate prod # Switch workspace
@ -106,7 +106,7 @@ provisioning platform health # Check platform health
**Setup wizard won't start**
```text
```bash
# Check Nushell
nu --version
@ -116,7 +116,7 @@ chmod +x $(which provisioning)
**Configuration error**
```text
```bash
# Validate configuration
provisioning setup validate --verbose
@ -126,7 +126,7 @@ provisioning info paths
**Deployment fails**
```text
```bash
# Dry-run to see what would happen
provisioning server create --check
@ -146,7 +146,7 @@ After basic setup:
## Need Help
```text
```bash
# Get help
provisioning help

View File

@ -15,7 +15,7 @@
### 30-Second Setup
```text
```bash
# Install provisioning
curl -sSL https://install.provisioning.dev | bash
@ -37,7 +37,7 @@ provisioning server create
## Directory Structure
```text
```bash
provisioning/
├── system.toml # System info (immutable)
├── user_preferences.toml # User settings (editable)
@ -54,7 +54,7 @@ provisioning/
Run the interactive setup wizard:
```text
```bash
provisioning setup system --interactive
```
@ -93,7 +93,7 @@ The wizard guides you through:
Create and manage multiple isolated environments:
```text
```bash
# Create workspace
provisioning setup workspace dev
provisioning setup workspace prod
@ -109,7 +109,7 @@ provisioning workspace activate prod
Update any setting:
```text
```bash
# Update platform configuration
provisioning setup platform --config new-config.toml
@ -122,7 +122,7 @@ provisioning setup validate
## Backup & Restore
```text
```bash
# Backup current configuration
provisioning setup backup --path ./backup.tar.gz
@ -137,25 +137,25 @@ provisioning setup migrate --from-existing
### "Command not found: provisioning"
```text
```bash
export PATH="/usr/local/bin:$PATH"
```
### "Nushell not found"
```text
```bash
curl -sSL https://raw.githubusercontent.com/nushell/nushell/main/install.sh | bash
```
### "Cannot write to directory"
```text
```bash
chmod 755 ~/Library/Application\ Support/provisioning/
```
### Check required tools
```text
```bash
provisioning setup validate --check-tools
```
@ -181,7 +181,7 @@ A: Yes, via GitOps - configurations in Git, secrets in secure storage.
## Getting Help
```text
```bash
# General help
provisioning help

View File

@ -25,7 +25,7 @@ All profiles use **Nickel-first architecture**: configuration source of truth is
### Developer Profile (Recommended for First Time)
```text
```bash
# Run unified setup
provisioning setup profile --profile developer
@ -45,7 +45,7 @@ curl http://localhost:3001/health
```
Expected output:
```text
```bash
╔═════════════════════════════════════════════════════╗
║ PROVISIONING SETUP - DEVELOPER PROFILE ║
╚═════════════════════════════════════════════════════╝
@ -62,7 +62,7 @@ Setup complete in ~4 minutes!
### Production Profile (HA, Security, Team Ready)
```text
```bash
# Interactive setup for production
provisioning setup profile --profile production --interactive
@ -84,7 +84,7 @@ nickel typecheck ~/.config/provisioning/platform/deployment.ncl
```
Expected config structure:
```text
```bash
~/.config/provisioning/
├── system.ncl # System detection + capabilities
├── user_preferences.ncl # User settings (MFA, audit, etc.)
@ -102,7 +102,7 @@ Expected config structure:
### CI/CD Profile (Automated, Ephemeral)
```text
```bash
# Fully automated setup for pipelines
export PROVISIONING_PROVIDER=local
export PROVISIONING_WORKSPACE=ci-test-${CI_JOB_ID}
@ -126,7 +126,7 @@ provisioning setup profile --profile cicd
### Linux (XDG Base Directory)
```text
```bash
# Primary location
~/.config/provisioning/
@ -145,7 +145,7 @@ $XDG_CONFIG_HOME/provisioning/
### macOS (Application Support)
```text
```bash
# Platform-specific location
~/Library/Application Support/provisioning/
@ -177,7 +177,7 @@ Provisioning detects:
- **Memory**: Total system RAM in GB
- **Disk Space**: Total available disk
```text
```bash
# View detected system
provisioning setup detect --verbose
```
@ -193,7 +193,7 @@ You choose between:
Setup creates Nickel configs using composition:
```text
```nickel
# Example: system.ncl is composed from:
let helpers = import "../../schemas/platform/common/helpers.ncl"
let defaults = import "../../schemas/platform/defaults/system-defaults.ncl"
@ -213,7 +213,7 @@ Result: **Type-safe config**, guaranteed valid structure and values.
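The composed result is effectively a merge of schema defaults with detected system facts. A rough standalone sketch of the idea (the field names here are illustrative assumptions, not the actual schema):

```nickel
# Sketch only: defaults merged with detected values (names are assumptions)
let defaults = { log_level | default = "info", telemetry | default = false } in
let detected = { os = "linux", cores = 8, memory_gb = 16 } in
# Defaults yield to any explicitly set value when layers are merged
defaults & detected
```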
All configs are validated:
```text
```bash
# Done automatically during setup
nickel typecheck ~/.config/provisioning/system.ncl
nickel typecheck ~/.config/provisioning/platform/deployment.ncl
@ -225,19 +225,19 @@ nickel typecheck ~/.config/provisioning/**/*.ncl
### Step 5: Service Bootstrap (Profile-Dependent)
**Developer**: Starts Docker Compose services locally
```text
```bash
docker-compose up -d orchestrator control-center kms
```
**Production**: Outputs Kubernetes manifests (doesn't auto-start; you review them first)
```text
```bash
cat ~/.config/provisioning/platform/deployment.ncl
# Review, then deploy to your cluster
kubectl apply -f generated-from-deployment.ncl
```
**CI/CD**: Starts ephemeral Docker Compose in `/tmp`
```text
```bash
# Automatic cleanup on job exit
docker-compose -f /tmp/provisioning-ci-${JOB_ID}/compose.yml up
# Tests run, cleanup automatic on script exit
@ -266,7 +266,7 @@ docker-compose -f /tmp/provisioning-ci-${JOB_ID}/compose.yml up
**Time**: 3-4 minutes
**Example**:
```text
```bash
provisioning setup profile --profile developer
# Output:
@ -303,7 +303,7 @@ provisioning setup profile --profile developer
**Time**: 10-15 minutes (interactive, many questions)
**Example**:
```text
```bash
provisioning setup profile --profile production --interactive
# Prompts:
@ -347,7 +347,7 @@ provisioning setup profile --profile production --interactive
**Time**: Less than 2 minutes
**Example**:
```text
```yaml
# In GitHub Actions:
- name: Setup Provisioning
run: |
@ -369,7 +369,7 @@ provisioning setup profile --profile production --interactive
### After Setup, Verify Everything Works
**Developer Profile**:
```text
```bash
# Check configs exist
ls -la ~/.config/provisioning/
ls -la ~/.config/provisioning/platform/
@ -387,7 +387,7 @@ curl http://localhost:3001/health
```
**Production Profile**:
```text
```bash
# Check Nickel configs
nickel typecheck ~/.config/provisioning/system.ncl
nickel typecheck ~/.config/provisioning/platform/deployment.ncl
@ -404,7 +404,7 @@ cat ~/.config/provisioning/cedar-policies/default.cedar
```
**CI/CD Profile**:
```text
```bash
# Check temp configs exist
ls -la /tmp/provisioning-ci-*/
@ -424,7 +424,7 @@ docker ps | grep provisioning
**Cause**: Nickel binary not installed
**Solution**:
```text
```bash
# macOS
brew install nickel
@ -444,7 +444,7 @@ nickel --version # Should be 1.5.0+
**Cause**: Nickel typecheck error in generated config
**Solution**:
```text
```bash
# See detailed error
nickel typecheck ~/.config/provisioning/platform/deployment.ncl --color always
@ -463,7 +463,7 @@ provisioning setup profile --profile developer --verbose
**Cause**: Docker not installed or not running
**Solution**:
```text
```bash
# Check Docker
docker --version
docker ps
@ -487,7 +487,7 @@ provisioning setup profile --profile developer
**Cause**: Port already in use, Docker not running, or resource constraints
**Solution**:
```text
```bash
# Check what's using ports 9090, 3000, 3001
lsof -i :9090
lsof -i :3000
@ -509,7 +509,7 @@ docker system prune # Free up space if needed
**Cause**: Directory created with wrong permissions
**Solution**:
```text
```bash
# Fix permissions (macOS)
chmod 700 ~/Library/Application\ Support/provisioning/
@ -528,7 +528,7 @@ provisioning setup profile --profile developer
**Cause**: Services reading from old location or wrong environment variable
**Solution**:
```text
```bash
# Verify service sees new location
echo $PROVISIONING_CONFIG
# Should be: ~/.config/provisioning/platform/deployment.ncl
@ -547,7 +547,7 @@ provisioning service status --verbose
After initial setup, you can customize configs per workspace:
```text
```bash
# Create workspace-specific override
mkdir -p workspace-myproject/config
cat > workspace-myproject/config/platform-overrides.ncl <<'EOF'
@ -631,7 +631,7 @@ Result: Minimal, validated, reproducible config.
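A hypothetical `platform-overrides.ncl` might look like the sketch below; the field names are illustrative assumptions, not the platform schema:

```nickel
# Hypothetical workspace-level overrides (field names are illustrative)
{
  # `force` wins over normal-priority values when this record is merged
  orchestrator = { port | force = 9191 },
  control_center = { log_level | force = "debug" },
}
```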
## Getting Help
```text
```bash
# Help for setup
provisioning setup --help

View File

@ -20,7 +20,7 @@ This guide covers:
The provisioning system uses a **3-layer architecture** for configuration inheritance:
```text
```bash
┌─────────────────────────────────────┐
│ Infrastructure Layer (Priority 300)│ ← Highest priority
│ workspace/infra/{name}/ │
@ -52,14 +52,14 @@ Higher numbers override lower numbers.
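To make the override semantics concrete, here is a minimal standalone Nickel sketch (field names and values are illustrative, not taken from the actual layers); in practice the layer engine performs this resolution for you across files:

```nickel
# Core layer: defaults (lowest priority)
let core = { kubernetes = { version | default = "1.30.0", worker_count | default = 3 } } in

# Infrastructure layer: project-specific override (highest priority)
let infra = { kubernetes = { worker_count = 10 } } in

# Defaults yield to explicit values: version stays "1.30.0", worker_count becomes 10
core & infra
```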
### View Layer Resolution
```text
```bash
# Explain layer concept
provisioning lyr explain
```
**Expected Output:**
```text
```bash
📚 LAYER SYSTEM EXPLAINED
The layer system provides configuration inheritance across 3 levels:
@ -89,14 +89,14 @@ Resolution: Infrastructure → Workspace → Core
Higher priority layers override lower ones.
```
```text
```bash
# Show layer resolution for your project
provisioning lyr show my-production
```
**Expected Output:**
```text
```bash
📊 Layer Resolution for my-production:
LAYER PRIORITY SOURCE FILES
@ -121,14 +121,14 @@ Status: ✅ All layers resolved successfully
### Test Layer Resolution
```text
```bash
# Test how a specific module resolves
provisioning lyr test kubernetes my-production
```
**Expected Output:**
```text
```bash
🔍 Layer Resolution Test: kubernetes → my-production
Resolving kubernetes configuration...
@ -171,14 +171,14 @@ Resolution: ✅ Success
### List Available Templates
```text
```bash
# List all templates
provisioning tpl list
```
**Expected Output:**
```text
```bash
📋 Available Templates:
TASKSERVS:
@ -203,7 +203,7 @@ CLUSTERS:
Total: 13 templates
```
```text
```bash
# List templates by type
provisioning tpl list --type taskservs
provisioning tpl list --type providers
@ -212,14 +212,14 @@ provisioning tpl list --type clusters
### View Template Details
```text
```bash
# Show template details
provisioning tpl show production-kubernetes
```
**Expected Output:**
```text
```bash
📄 Template: production-kubernetes
Description: Production-ready Kubernetes configuration with
@ -250,14 +250,14 @@ Example Usage:
### Apply Template
```text
```bash
# Apply template to your infrastructure
provisioning tpl apply production-kubernetes my-production
```
**Expected Output:**
```text
```bash
🚀 Applying template: production-kubernetes → my-production
Checking compatibility... ⏳
@ -282,14 +282,14 @@ Next steps:
### Validate Template Usage
```text
```bash
# Validate template was applied correctly
provisioning tpl validate my-production
```
**Expected Output:**
```text
```bash
✅ Template Validation: my-production
Templates Applied:
@ -314,7 +314,7 @@ Status: ✅ Valid
### Step 1: Create Template Structure
```text
```bash
# Create custom template directory
mkdir -p provisioning/workspace/templates/my-custom-template
```
@ -323,7 +323,7 @@ mkdir -p provisioning/workspace/templates/my-custom-template
**File: `provisioning/workspace/templates/my-custom-template/main.ncl`**
```text
```nickel
# Custom Kubernetes template with specific settings
let kubernetes_config = {
# Version
@ -389,7 +389,7 @@ kubernetes_config
**File: `provisioning/workspace/templates/my-custom-template/metadata.toml`**
```text
```toml
[template]
name = "my-custom-template"
version = "1.0.0"
@ -409,7 +409,7 @@ features = ["security", "monitoring", "high-availability"]
### Step 4: Test Custom Template
```text
```bash
# List templates (should include your custom template)
provisioning tpl list
@ -426,7 +426,7 @@ provisioning tpl apply my-custom-template my-test
**Core Layer** (`provisioning/extensions/taskservs/postgres/main.ncl`):
```text
```nickel
let postgres_config = {
version = "15.5",
port = 5432,
@ -437,7 +437,7 @@ postgres_config
**Infrastructure Layer** (`workspace/infra/my-production/taskservs/postgres.ncl`):
```text
```nickel
let postgres_config = {
max_connections = 500, # Override only max_connections
} in
@ -446,7 +446,7 @@ postgres_config
**Result** (after layer resolution):
```text
```nickel
let postgres_config = {
version = "15.5", # From Core
port = 5432, # From Core
@ -459,7 +459,7 @@ postgres_config
**Workspace Layer** (`provisioning/workspace/templates/production-postgres.ncl`):
```text
```nickel
let postgres_config = {
replication = {
enabled = true,
@ -472,7 +472,7 @@ postgres_config
**Infrastructure Layer** (`workspace/infra/my-production/taskservs/postgres.ncl`):
```text
```nickel
let postgres_config = {
replication = {
sync_mode = "sync", # Override sync mode
@ -484,7 +484,7 @@ postgres_config
**Result**:
```text
```nickel
let postgres_config = {
version = "15.5", # From Core
port = 5432, # From Core
@ -503,7 +503,7 @@ postgres_config
**Workspace Layer** (`provisioning/workspace/templates/base-kubernetes.ncl`):
```text
```nickel
let kubernetes_config = {
version = "1.30.0",
control_plane_count = 3,
@ -518,7 +518,7 @@ kubernetes_config
**Development Infrastructure** (`workspace/infra/my-dev/taskservs/kubernetes.ncl`):
```text
```nickel
let kubernetes_config = {
control_plane_count = 1, # Smaller for dev
worker_count = 2,
@ -532,7 +532,7 @@ kubernetes_config
**Production Infrastructure** (`workspace/infra/my-prod/taskservs/kubernetes.ncl`):
```text
```nickel
let kubernetes_config = {
control_plane_count = 5, # Larger for prod
worker_count = 10,
@ -550,7 +550,7 @@ kubernetes_config
Create different configurations for each environment:
```text
```bash
# Create environments
provisioning ws init my-app-dev
provisioning ws init my-app-staging
@ -573,7 +573,7 @@ Create reusable configuration fragments:
**File: `provisioning/workspace/templates/shared/security-policies.ncl`**
```text
```nickel
let security_policies = {
pod_security = {
enforce = "restricted",
@ -603,7 +603,7 @@ security_policies
Import in your infrastructure:
```text
```nickel
let security_policies = (import "../../../provisioning/workspace/templates/shared/security-policies.ncl") in
let kubernetes_config = {
@ -618,7 +618,7 @@ kubernetes_config
Use Nickel features for dynamic configuration:
```text
```nickel
# Calculate resources based on server count
let server_count = 5 in
let replicas_per_server = 2 in
@ -634,7 +634,7 @@ postgres_config
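A complete, self-contained variant of this idea (the numbers and field names are illustrative):

```nickel
# Derive configuration values from other values (illustrative sizing)
let server_count = 5 in
let replicas_per_server = 2 in
{
  max_replicas = server_count * replicas_per_server,  # 10
  min_replicas = server_count,                        # one replica per server
}
```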
### Pattern 4: Conditional Configuration
```text
```nickel
let environment = "production" in # or "development"
let kubernetes_config = {
@ -651,14 +651,14 @@ kubernetes_config
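A complete sketch of the conditional pattern (field names and sizing are illustrative assumptions):

```nickel
# Pick sizing based on an environment flag (illustrative)
let environment = "production" in
{
  control_plane_count = if environment == "production" then 5 else 1,
  worker_count = if environment == "production" then 10 else 2,
  monitoring_enabled = environment == "production",
}
```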
## Layer Statistics
```text
```bash
# Show layer system statistics
provisioning lyr stats
```
**Expected Output:**
```text
```bash
📊 Layer System Statistics:
Infrastructure Layer:
@ -686,7 +686,7 @@ Resolution Performance:
### Complete Customization Example
```text
```bash
# 1. Create new infrastructure
provisioning ws init my-custom-app
@ -728,7 +728,7 @@ provisioning t create kubernetes --infra my-custom-app
### 2. Template Organization
```text
```bash
provisioning/workspace/templates/
├── shared/ # Shared configuration fragments
│ ├── security-policies.ncl
@ -749,7 +749,7 @@ Document your customizations:
**File: `workspace/infra/my-production/README.md`**
```text
```markdown
# My Production Infrastructure
## Customizations
@ -769,7 +769,7 @@ Document your customizations:
Keep templates and configurations in version control:
```text
```bash
cd provisioning/workspace/templates/
git add .
git commit -m "Add production Kubernetes template with enhanced security"
@ -783,7 +783,7 @@ git commit -m "Configure production environment for my-production"
### Issue: Configuration not applied
```text
```bash
# Check layer resolution
provisioning lyr show my-production
@ -796,7 +796,7 @@ provisioning lyr test kubernetes my-production
### Issue: Conflicting configurations
```text
```bash
# Validate configuration
provisioning val config --infra my-production
@ -806,7 +806,7 @@ provisioning show config kubernetes --infra my-production
### Issue: Template not found
```text
```bash
# List available templates
provisioning tpl list
@ -826,7 +826,7 @@ provisioning tpl refresh
## Quick Reference
```text
```bash
# Layer system
provisioning lyr explain # Explain layers
provisioning lyr show <project> # Show layer resolution
