This architecture scales better for your project’s growth, supports a community extension ecosystem, and provides professional-grade separation of concerns while maintaining integration through a well-designed package system.
+
-
+
@@ -804,7 +720,7 @@ cargo test security_integration_tests
-
+
diff --git a/docs/book/architecture/orchestrator-integration-model.html b/docs/book/architecture/orchestrator-integration-model.html
index c3725ba..e5cf498 100644
--- a/docs/book/architecture/orchestrator-integration-model.html
+++ b/docs/book/architecture/orchestrator-integration-model.html
@@ -186,20 +186,17 @@
→ "Type not supported" errors
→ Cannot handle complex nested workflows
→ Performance bottlenecks with recursive calls
-```plaintext
-
-**Solution:** Rust orchestrator provides:
-
-1. **Task queue management** (file-based, reliable)
-2. **Priority scheduling** (intelligent task ordering)
-3. **Deep call stack elimination** (Rust handles recursion)
-4. **Performance optimization** (async/await, parallel execution)
-5. **State management** (workflow checkpointing)
-
-### How It Works Today (Monorepo)
-
-```plaintext
-┌─────────────────────────────────────────────────────────────┐
+
+Solution: Rust orchestrator provides:
+
+Task queue management (file-based, reliable)
+Priority scheduling (intelligent task ordering)
+Deep call stack elimination (Rust handles recursion)
+Performance optimization (async/await, parallel execution)
+State management (workflow checkpointing)
+
+
+┌─────────────────────────────────────────────────────────────┐
│ User │
└───────────────────────────┬─────────────────────────────────┘
│ calls
@@ -237,39 +234,29 @@
│ • taskservs.nu │
│ • clusters.nu │
└────────────────┘
-```plaintext
-
-### Three Execution Modes
-
-#### Mode 1: Direct Mode (Simple Operations)
-
-```bash
-# No orchestrator needed
+
+
+
+# No orchestrator needed
provisioning server list
provisioning env
provisioning help
# Direct Nushell execution
provisioning (CLI) → Nushell scripts → Result
-```plaintext
-
-#### Mode 2: Orchestrated Mode (Complex Operations)
-
-```bash
-# Uses orchestrator for coordination
+
+
+# Uses orchestrator for coordination
provisioning server create --orchestrated
# Flow:
provisioning CLI → Orchestrator API → Task Queue → Nushell executor
↓
Result back to user
-```plaintext
-
-#### Mode 3: Workflow Mode (Batch Operations)
-
-```bash
-# Complex workflows with dependencies
-provisioning workflow submit server-cluster.k
+
+
+# Complex workflows with dependencies
+provisioning workflow submit server-cluster.ncl
# Flow:
provisioning CLI → Orchestrator Workflow Engine → Dependency Graph
@@ -279,20 +266,13 @@ provisioning CLI → Orchestrator Workflow Engine → Dependency Graph
Nushell scripts for each task
↓
Checkpoint state
-```plaintext
-
----
-
-## Integration Patterns
-
-### Pattern 1: CLI Submits Tasks to Orchestrator
-
-**Current Implementation:**
-
-**Nushell CLI (`core/nulib/workflows/server_create.nu`):**
-
-```nushell
-# Submit server creation workflow to orchestrator
+
+
+
+
+Current Implementation:
+Nushell CLI (core/nulib/workflows/server_create.nu):
+# Submit server creation workflow to orchestrator
export def server_create_workflow [
infra_name: string
--orchestrated
@@ -312,12 +292,9 @@ export def server_create_workflow [
do-server-create $infra_name
}
}
-```plaintext
-
-**Rust Orchestrator (`platform/orchestrator/src/api/workflows.rs`):**
-
-```rust
-// Receive workflow submission from Nushell CLI
+
+Rust Orchestrator (platform/orchestrator/src/api/workflows.rs):
+// Receive workflow submission from Nushell CLI
#[axum::debug_handler]
async fn create_server_workflow(
State(state): State<Arc<AppState>>,
@@ -341,13 +318,9 @@ async fn create_server_workflow(
workflow_id: task.id,
status: "queued",
}))
-}
-```plaintext
-
-**Flow:**
-
-```plaintext
-User → provisioning server create --orchestrated
+}
+Flow:
+User → provisioning server create --orchestrated
↓
Nushell CLI prepares task
↓
@@ -358,14 +331,10 @@ Orchestrator queues task
Returns workflow ID immediately
↓
User can monitor: provisioning workflow monitor <id>
-```plaintext
-
-### Pattern 2: Orchestrator Executes Nushell Scripts
-
-**Orchestrator Task Executor (`platform/orchestrator/src/executor.rs`):**
-
-```rust
-// Orchestrator spawns Nushell to execute business logic
+
+
+Orchestrator Task Executor (platform/orchestrator/src/executor.rs):
+// Orchestrator spawns Nushell to execute business logic
pub async fn execute_task(task: Task) -> Result<TaskResult> {
match task.task_type {
TaskType::ServerCreate => {
@@ -391,13 +360,9 @@ pub async fn execute_task(task: Task) -> Result<TaskResult> {
}
// Other task types...
}
-}
-```plaintext
-
-**Flow:**
-
-```plaintext
-Orchestrator task queue has pending task
+}
+Flow:
+Orchestrator task queue has pending task
↓
Executor picks up task
↓
@@ -410,14 +375,10 @@ Returns result to orchestrator
Orchestrator updates task status
↓
User monitors via: provisioning workflow status <id>
-```plaintext
-
-### Pattern 3: Bidirectional Communication
-
-**Nushell Calls Orchestrator API:**
-
-```nushell
-# Nushell script checks orchestrator status during execution
+
+
+Nushell Calls Orchestrator API:
+# Nushell script checks orchestrator status during execution
export def check-orchestrator-health [] {
let response = (http get http://localhost:9090/health)
@@ -435,12 +396,9 @@ export def report-progress [task_id: string, progress: int] {
status: "in_progress"
}
}
-```plaintext
-
-**Orchestrator Monitors Nushell Execution:**
-
-```rust
-// Orchestrator tracks Nushell subprocess
+
+Orchestrator Monitors Nushell Execution:
+// Orchestrator tracks Nushell subprocess
pub async fn execute_with_monitoring(task: Task) -> Result<TaskResult> {
let mut child = Command::new("nu")
.arg("-c")
@@ -470,33 +428,25 @@ pub async fn execute_with_monitoring(task: Task) -> Result<TaskResult>
).await??;
Ok(TaskResult::from_exit_status(result))
-}
-```plaintext
-
----
-
-## Multi-Repo Architecture Impact
-
-### Repository Split Doesn't Change Integration Model
-
-**In Multi-Repo Setup:**
-
-**Repository: `provisioning-core`**
-
-- Contains: Nushell business logic
-- Installs to: `/usr/local/lib/provisioning/`
-- Package: `provisioning-core-3.2.1.tar.gz`
-
-**Repository: `provisioning-platform`**
-
-- Contains: Rust orchestrator
-- Installs to: `/usr/local/bin/provisioning-orchestrator`
-- Package: `provisioning-platform-2.5.3.tar.gz`
-
-**Runtime Integration (Same as Monorepo):**
-
-```plaintext
-User installs both packages:
+}
+
+
+
+In Multi-Repo Setup:
+Repository: provisioning-core
+
+Contains: Nushell business logic
+Installs to: /usr/local/lib/provisioning/
+Package: provisioning-core-3.2.1.tar.gz
+
+Repository: provisioning-platform
+
+Contains: Rust orchestrator
+Installs to: /usr/local/bin/provisioning-orchestrator
+Package: provisioning-platform-2.5.3.tar.gz
+
+Runtime Integration (Same as Monorepo):
+User installs both packages:
provisioning-core-3.2.1 → /usr/local/lib/provisioning/
provisioning-platform-2.5.3 → /usr/local/bin/provisioning-orchestrator
@@ -504,14 +454,10 @@ Orchestrator expects core at: /usr/local/lib/provisioning/
Core expects orchestrator at: http://localhost:9090/
No code dependencies, just runtime coordination!
-```plaintext
-
-### Configuration-Based Integration
-
-**Core Package (`provisioning-core`) config:**
-
-```toml
-# /usr/local/share/provisioning/config/config.defaults.toml
+
+
+Core Package (provisioning-core) config:
+# /usr/local/share/provisioning/config/config.defaults.toml
[orchestrator]
enabled = true
@@ -522,12 +468,9 @@ auto_start = true # Start orchestrator if not running
[execution]
default_mode = "orchestrated" # Use orchestrator by default
fallback_to_direct = true # Fall back if orchestrator down
-```plaintext
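A sketch of how the `[execution]` settings above could drive mode selection, assuming a `force_direct` command list like the one shown later in the config. The `resolve_mode` and `ExecMode` names are illustrative assumptions, not the actual core code.

```rust
#[derive(Debug, PartialEq)]
enum ExecMode { Direct, Orchestrated }

fn resolve_mode(
    command: &str,
    default_mode: &str,
    fallback_to_direct: bool,
    force_direct: &[&str],
    orchestrator_up: bool,
) -> ExecMode {
    // Simple commands (list, help, version) never need the orchestrator.
    if force_direct.contains(&command) {
        return ExecMode::Direct;
    }
    match (default_mode, orchestrator_up, fallback_to_direct) {
        ("orchestrated", true, _) => ExecMode::Orchestrated,
        ("orchestrated", false, true) => ExecMode::Direct, // graceful fallback
        ("orchestrated", false, false) => panic!("orchestrator unreachable and fallback disabled"),
        _ => ExecMode::Direct,
    }
}

fn main() {
    let force = ["list", "env", "help", "version"];
    assert_eq!(resolve_mode("help", "orchestrated", true, &force, true), ExecMode::Direct);
    assert_eq!(resolve_mode("server-create", "orchestrated", true, &force, true), ExecMode::Orchestrated);
    // Orchestrator down: fall back to direct Nushell execution.
    assert_eq!(resolve_mode("server-create", "orchestrated", true, &force, false), ExecMode::Direct);
}
```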
-
-**Platform Package (`provisioning-platform`) config:**
-
-```toml
-# /usr/local/share/provisioning/platform/config.toml
+
+Platform Package (provisioning-platform) config:
+# /usr/local/share/provisioning/platform/config.toml
[orchestrator]
host = "127.0.0.1"
@@ -539,14 +482,10 @@ nushell_binary = "nu" # Expects nu in PATH
provisioning_lib = "/usr/local/lib/provisioning"
max_concurrent_tasks = 10
task_timeout_seconds = 3600
-```plaintext
-
-### Version Compatibility
-
-**Compatibility Matrix (`provisioning-distribution/versions.toml`):**
-
-```toml
-[compatibility.platform."2.5.3"]
+
+
+Compatibility Matrix (provisioning-distribution/versions.toml):
+[compatibility.platform."2.5.3"]
core = "^3.2" # Platform 2.5.3 compatible with core 3.2.x
min-core = "3.2.0"
api-version = "v1"
@@ -555,30 +494,20 @@ api-version = "v1"
platform = "^2.5" # Core 3.2.1 compatible with platform 2.5.x
min-platform = "2.5.0"
orchestrator-api = "v1"
-```plaintext
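The caret ranges in the matrix follow the usual Cargo-style semantics ("^3.2" means same major, minor at or above 3.2). A minimal check, ignoring patch-level floors such as `min-core`, might look like this; it is a sketch, not the distribution tooling's actual resolver.

```rust
/// Parse "major.minor[.patch]" (with an optional leading '^') into (major, minor).
fn parse(v: &str) -> (u64, u64) {
    let mut it = v.trim_start_matches('^').split('.');
    let major = it.next().unwrap_or("0").parse().unwrap_or(0);
    let minor = it.next().unwrap_or("0").parse().unwrap_or(0);
    (major, minor)
}

/// Does `version` satisfy a caret requirement like "^3.2"?
fn caret_ok(requirement: &str, version: &str) -> bool {
    let (req_major, req_minor) = parse(requirement);
    let (major, minor) = parse(version);
    major == req_major && minor >= req_minor
}

fn main() {
    assert!(caret_ok("^3.2", "3.2.1"));   // core 3.2.1 satisfies platform's "^3.2"
    assert!(caret_ok("^2.5", "2.5.3"));   // platform 2.5.3 satisfies core's "^2.5"
    assert!(!caret_ok("^3.2", "4.0.0"));  // major bump breaks compatibility
    assert!(!caret_ok("^3.2", "3.1.9"));
}
```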
-
----
-
-## Execution Flow Examples
-
-### Example 1: Simple Server Creation (Direct Mode)
-
-**No Orchestrator Needed:**
-
-```bash
-provisioning server list
+
+
+
+
+No Orchestrator Needed:
+provisioning server list
# Flow:
CLI → servers/list.nu → Query state → Return results
(Orchestrator not involved)
-```plaintext
-
-### Example 2: Server Creation with Orchestrator
-
-**Using Orchestrator:**
-
-```bash
-provisioning server create --orchestrated --infra wuji
+
+
+Using Orchestrator:
+provisioning server create --orchestrated --infra wuji
# Detailed Flow:
1. User executes command
@@ -612,7 +541,7 @@ provisioning server create --orchestrated --infra wuji
nu -c "use /usr/local/lib/provisioning/servers/create.nu; create-server 'wuji'"
↓
13. Nushell executes business logic:
- - Reads KCL config
+ - Reads Nickel config
- Calls provider API (UpCloud/AWS)
- Creates server
- Returns result
@@ -623,14 +552,10 @@ provisioning server create --orchestrated --infra wuji
↓
16. User monitors: provisioning workflow status abc-123
→ Shows: "Server wuji created successfully"
-```plaintext
-
-### Example 3: Batch Workflow with Dependencies
-
-**Complex Workflow:**
-
-```bash
-provisioning batch submit multi-cloud-deployment.k
+
+
+Complex Workflow:
+provisioning batch submit multi-cloud-deployment.ncl
# Workflow contains:
- Create 5 servers (parallel)
@@ -638,7 +563,7 @@ provisioning batch submit multi-cloud-deployment.k
- Deploy applications (depends on Kubernetes)
# Detailed Flow:
-1. CLI submits KCL workflow to orchestrator
+1. CLI submits Nickel workflow to orchestrator
↓
2. Orchestrator parses workflow
↓
@@ -674,24 +599,26 @@ provisioning batch submit multi-cloud-deployment.k
8. If failure occurs, can retry from checkpoint
↓
9. User monitors real-time: provisioning batch monitor <id>
-```plaintext
+
+
+
+
+
+
+Eliminates Deep Call Stack Issues
+
+Without Orchestrator:
+template.nu → calls → cluster.nu → calls → taskserv.nu → calls → provider.nu
+(Deep nesting causes "Type not supported" errors)
----
-
-## Why This Architecture?
-
-### Orchestrator Benefits
-
-1. **Eliminates Deep Call Stack Issues**
+With Orchestrator:
+Orchestrator → spawns → Nushell subprocess (flat execution)
+(No deep nesting, fresh Nushell context for each task)
-Without Orchestrator:
-template.nu → calls → cluster.nu → calls → taskserv.nu → calls → provider.nu
-(Deep nesting causes “Type not supported” errors)
-With Orchestrator:
-Orchestrator → spawns → Nushell subprocess (flat execution)
-(No deep nesting, fresh Nushell context for each task)
-
+
+
+
2. **Performance Optimization**
```rust
@@ -721,7 +648,7 @@ Orchestrator → spawns → Nushell subprocess (flat execution)
Each does what it's best at!
-
+
Question: Why not implement everything in Rust?
Answer:
@@ -767,14 +694,10 @@ Orchestrator → spawns → Nushell subprocess (flat execution)
→ /usr/local/share/provisioning/platform/ (platform configs)
3. Sets up systemd/launchd service for orchestrator
-```plaintext
-
-### Runtime Coordination
-
-**Core package expects orchestrator:**
-
-```nushell
-# core/nulib/lib_provisioning/orchestrator/client.nu
+
+
+Core package expects orchestrator:
+# core/nulib/lib_provisioning/orchestrator/client.nu
# Check if orchestrator is running
export def orchestrator-available [] {
@@ -799,12 +722,9 @@ export def ensure-orchestrator [] {
}
}
}
-```plaintext
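The ensure-orchestrator pattern (probe health, optionally start the service, retry, then fall back) reduces to a small loop. In this sketch `health_check` is a stand-in for the HTTP GET to `/health` shown earlier; the retry budget and start hook are assumptions.

```rust
/// Probe until healthy or the retry budget runs out.
/// Returns false so the caller can fall back to direct mode.
fn ensure_orchestrator<F>(mut health_check: F, max_retries: u32) -> bool
where
    F: FnMut() -> bool,
{
    for _attempt in 0..max_retries {
        if health_check() {
            return true; // orchestrator is up
        }
        // Real code would start the service here (systemd/launchd)
        // and sleep briefly between probes.
    }
    false
}

fn main() {
    // Simulated service that becomes healthy on the third probe.
    let mut probes = 0;
    let up = ensure_orchestrator(|| { probes += 1; probes >= 3 }, 5);
    assert!(up);
    assert_eq!(probes, 3);

    // Never healthy: caller falls back to direct execution.
    assert!(!ensure_orchestrator(|| false, 2));
}
```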
-
-**Platform package executes core scripts:**
-
-```rust
-// platform/orchestrator/src/executor/nushell.rs
+
+Platform package executes core scripts:
+// platform/orchestrator/src/executor/nushell.rs
pub struct NushellExecutor {
provisioning_lib: PathBuf, // /usr/local/lib/provisioning
@@ -837,19 +757,12 @@ impl NushellExecutor {
self.execute_script(&script).await
}
-}
-```plaintext
-
----
-
-## Configuration Examples
-
-### Core Package Config
-
-**`/usr/local/share/provisioning/config/config.defaults.toml`:**
-
-```toml
-[orchestrator]
+}
+
+
+
+/usr/local/share/provisioning/config/config.defaults.toml:
+[orchestrator]
enabled = true
endpoint = "http://localhost:9090"
timeout_seconds = 60
@@ -875,14 +788,10 @@ force_direct = [
"help",
"version"
]
-```plaintext
-
-### Platform Package Config
-
-**`/usr/local/share/provisioning/platform/config.toml`:**
-
-```toml
-[server]
+
+
+/usr/local/share/provisioning/platform/config.toml:
+[server]
host = "127.0.0.1"
port = 8080
@@ -899,70 +808,61 @@ checkpoint_interval_seconds = 30
binary = "nu" # Expects nu in PATH
provisioning_lib = "/usr/local/lib/provisioning"
env_vars = { NU_LIB_DIRS = "/usr/local/lib/provisioning" }
-```plaintext
-
----
-
-## Key Takeaways
-
-### 1. **Orchestrator is Essential**
-
-- Solves deep call stack problems
-- Provides performance optimization
-- Enables complex workflows
-- NOT optional for production use
-
-### 2. **Integration is Loose but Coordinated**
-
-- No code dependencies between repos
-- Runtime integration via CLI + REST API
-- Configuration-driven coordination
-- Works in both monorepo and multi-repo
-
-### 3. **Best of Both Worlds**
-
-- Rust: High-performance coordination
-- Nushell: Flexible business logic
-- Clean separation of concerns
-- Each technology does what it's best at
-
-### 4. **Multi-Repo Doesn't Change Integration**
-
-- Same runtime model as monorepo
-- Package installation sets up paths
-- Configuration enables discovery
-- Versioning ensures compatibility
-
----
-
-## Conclusion
-
-The confusing example in the multi-repo doc was **oversimplified**. The real architecture is:
-
-```plaintext
-✅ Orchestrator IS USED and IS ESSENTIAL
+
+
+
+
+
+Solves deep call stack problems
+Provides performance optimization
+Enables complex workflows
+NOT optional for production use
+
+
+
+No code dependencies between repos
+Runtime integration via CLI + REST API
+Configuration-driven coordination
+Works in both monorepo and multi-repo
+
+
+
+Rust: High-performance coordination
+Nushell: Flexible business logic
+Clean separation of concerns
+Each technology does what it's best at
+
+
+
+Same runtime model as monorepo
+Package installation sets up paths
+Configuration enables discovery
+Versioning ensures compatibility
+
+
+
+The confusing example in the multi-repo doc was oversimplified. The real architecture is:
+✅ Orchestrator IS USED and IS ESSENTIAL
✅ Platform (Rust) coordinates Core (Nushell) execution
✅ Loose coupling via CLI + REST API (not code dependencies)
✅ Works identically in monorepo and multi-repo
✅ Configuration-based integration (no hardcoded paths)
-```plaintext
-
-The orchestrator provides:
-
-- Performance layer (async, parallel execution)
-- Workflow engine (complex dependencies)
-- State management (checkpoints, recovery)
-- Task queue (reliable execution)
-
-While Nushell provides:
-
-- Business logic (providers, taskservs, clusters)
-- Template rendering (Jinja2 via nu_plugin_tera)
-- Configuration management (KCL integration)
-- User-facing scripting
-
-**Multi-repo just splits WHERE the code lives, not HOW it works together.**
+The orchestrator provides:
+
+Performance layer (async, parallel execution)
+Workflow engine (complex dependencies)
+State management (checkpoints, recovery)
+Task queue (reliable execution)
+
+While Nushell provides:
+
+Business logic (providers, taskservs, clusters)
+Template rendering (Jinja2 via nu_plugin_tera)
+Configuration management (Nickel integration)
+User-facing scripting
+
+Multi-repo just splits WHERE the code lives, not HOW it works together.
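The executor pattern both repos rely on, spawning a fresh subprocess per task so execution stays flat, reduces to a few lines. This sketch shells out to `sh` so it runs anywhere; production would invoke `nu -c` with the provisioning library on `NU_LIB_DIRS`, and the `Executor` struct here is a simplified stand-in, not the real `NushellExecutor`.

```rust
use std::process::Command;

struct Executor {
    shell: String, // "nu" in production; "sh" here for portability
}

impl Executor {
    /// Run one task in a fresh subprocess: no deep call stack,
    /// no shared interpreter state between tasks.
    fn execute(&self, script: &str) -> Result<String, String> {
        let out = Command::new(&self.shell)
            .arg("-c")
            .arg(script)
            .output()
            .map_err(|e| e.to_string())?;
        if out.status.success() {
            Ok(String::from_utf8_lossy(&out.stdout).trim().to_string())
        } else {
            Err(String::from_utf8_lossy(&out.stderr).to_string())
        }
    }
}

fn main() {
    let exec = Executor { shell: "sh".to_string() };
    let result = exec.execute("echo server-created").unwrap();
    assert_eq!(result, "server-created");
    // A failing script surfaces as Err, which the orchestrator records as task failure.
    assert!(exec.execute("exit 1").is_err());
}
```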
diff --git a/docs/book/development/build-system.html b/docs/book/development/build-system.html
index ee031a6..3d428be 100644
--- a/docs/book/development/build-system.html
+++ b/docs/book/development/build-system.html
@@ -190,7 +190,7 @@
Rust compilation : Platform binaries (orchestrator, control-center, etc.)
Nushell bundling : Core libraries and CLI tools
-KCL validation : Configuration schema validation
+Nickel validation : Configuration schema validation
Distribution generation : Multi-platform packages
Release management : Automated release pipelines
Documentation generation : API and user documentation
@@ -249,7 +249,7 @@ PARALLEL := true
make build-all - Build all components
-Runs: build-platform build-core validate-kcl
+Runs: build-platform build-core validate-nickel
Use for: Complete system compilation
make build-platform - Build platform binaries for all targets
@@ -270,11 +270,11 @@ nu tools/build/bundle-core.nu \
--validate \
--exclude-dev
-make validate-kcl - Validate and compile KCL schemas
-make validate-kcl
+make validate-nickel - Validate and compile Nickel schemas
+make validate-nickel
# Equivalent to:
-nu tools/build/validate-kcl.nu \
- --output-dir dist/kcl \
+nu tools/build/validate-nickel.nu \
+ --output-dir dist/schemas \
--format-code \
--check-dependencies
@@ -374,7 +374,7 @@ make dist-generate PLATFORMS=linux-amd64,macos-amd64 VARIANTS=complete
make validate-all - Validate all components
-KCL schema validation
+Nickel schema validation
Package validation
Configuration validation
@@ -550,22 +550,22 @@ Options:
Function signature validation
Test execution (if tests present)
-
-Purpose : Validates and compiles KCL schemas
+
+Purpose : Validates and compiles Nickel schemas
Validation Process :
-Syntax validation of all .k files
+Syntax validation of all .ncl files
Schema dependency checking
Type constraint validation
Example validation against schemas
Documentation generation
Usage :
-nu validate-kcl.nu [options]
+nu validate-nickel.nu [options]
Options:
- --output-dir STRING Output directory (default: dist/kcl)
- --format-code Format KCL code during validation
+ --output-dir STRING Output directory (default: dist/schemas)
+ --format-code Format Nickel code during validation
--check-dependencies Validate schema dependencies
--verbose Enable verbose logging
@@ -613,7 +613,7 @@ Options:
Platform binary compilation
Core library bundling
-KCL schema validation and packaging
+Nickel schema validation and packaging
Configuration system preparation
Documentation generation
Archive creation and compression
@@ -818,8 +818,8 @@ make windows # Windows AMD64
# Install Nushell
cargo install nu
-# Install KCL
-cargo install kcl-cli
+# Install Nickel
+cargo install nickel-lang-cli
# Install Cross (for cross-compilation)
cargo install cross
@@ -873,17 +873,17 @@ chmod +x src/tools/build/*.nu
cd src/tools
nu build/compile-platform.nu --help
-
-Error : kcl command not found
-# Solution: Install KCL
-cargo install kcl-cli
+
+Error : nickel command not found
+# Solution: Install Nickel
+cargo install nickel-lang-cli
# or
-brew install kcl
+brew install nickel
Error : Schema validation failed
-# Solution: Check KCL syntax
-kcl fmt kcl/
-kcl check kcl/
+# Solution: Check Nickel syntax
+nickel format schemas/
+nickel typecheck schemas/
diff --git a/docs/book/development/distribution-process.html b/docs/book/development/distribution-process.html
index 83dd7bb..798bef1 100644
--- a/docs/book/development/distribution-process.html
+++ b/docs/book/development/distribution-process.html
@@ -219,19 +219,16 @@
├── Checksums # SHA256/MD5 verification
├── Signatures # Digital signatures
└── Metadata # Release information
-```plaintext
-
-### Build Pipeline
-
-```plaintext
-Build Pipeline Flow
+
+
+Build Pipeline Flow
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ Source Code │ -> │ Build Stage │ -> │ Package Stage │
│ │ │ │ │ │
│ - Rust code │ │ - compile- │ │ - create- │
│ - Nushell libs │ │ platform │ │ archives │
-│ - KCL schemas │ │ - bundle-core │ │ - build- │
-│ - Config files │ │ - validate-kcl │ │ containers │
+│ - Nickel schemas│ │ - bundle-core │ │ - build- │
+│ - Config files │ │ - validate-nickel│ │ containers │
└─────────────────┘ └─────────────────┘ └─────────────────┘
|
v
@@ -243,45 +240,37 @@ Build Pipeline Flow
│ - upload- │ │ package │ │ - create- │
│ artifacts │ │ - integration │ │ installers │
└─────────────────┘ └─────────────────┘ └─────────────────┘
-```plaintext
-
-### Distribution Variants
-
-**Complete Distribution**:
-
-- All Rust binaries (orchestrator, control-center, MCP server)
-- Full Nushell library suite
-- All providers, taskservs, and clusters
-- Complete documentation and examples
-- Development tools and templates
-
-**Minimal Distribution**:
-
-- Essential binaries only
-- Core Nushell libraries
-- Basic provider support
-- Essential task services
-- Minimal documentation
-
-## Release Process
-
-### Release Types
-
-**Release Classifications**:
-
-- **Major Release** (x.0.0): Breaking changes, new major features
-- **Minor Release** (x.y.0): New features, backward compatible
-- **Patch Release** (x.y.z): Bug fixes, security updates
-- **Pre-Release** (x.y.z-alpha/beta/rc): Development/testing releases
-
-### Step-by-Step Release Process
-
-#### 1. Preparation Phase
-
-**Pre-Release Checklist**:
-
-```bash
-# Update dependencies and security
+
+
+Complete Distribution :
+
+All Rust binaries (orchestrator, control-center, MCP server)
+Full Nushell library suite
+All providers, taskservs, and clusters
+Complete documentation and examples
+Development tools and templates
+
+Minimal Distribution :
+
+Essential binaries only
+Core Nushell libraries
+Basic provider support
+Essential task services
+Minimal documentation
+
+
+
+Release Classifications :
+
+Major Release (x.0.0): Breaking changes, new major features
+Minor Release (x.y.0): New features, backward compatible
+Patch Release (x.y.z): Bug fixes, security updates
+Pre-Release (x.y.z-alpha/beta/rc): Development/testing releases
+
+
+
+Pre-Release Checklist :
+# Update dependencies and security
cargo update
cargo audit
@@ -293,12 +282,9 @@ make docs
# Validate all configurations
make validate-all
-```plaintext
-
-**Version Planning**:
-
-```bash
-# Check current version
+
+Version Planning :
+# Check current version
git describe --tags --always
# Plan next version
@@ -306,14 +292,10 @@ make status | grep Version
# Validate version bump
nu src/tools/release/create-release.nu --dry-run --version 2.1.0
-```plaintext
-
-#### 2. Build Phase
-
-**Complete Build**:
-
-```bash
-# Clean build environment
+
+
+Complete Build :
+# Clean build environment
make clean
# Build all platforms and variants
@@ -321,12 +303,9 @@ make all
# Validate build output
make test-dist
-```plaintext
-
-**Build with Specific Parameters**:
-
-```bash
-# Build for specific platforms
+
+Build with Specific Parameters :
+# Build for specific platforms
make all PLATFORMS=linux-amd64,macos-amd64 VARIANTS=complete
# Build with custom version
@@ -334,14 +313,10 @@ make all VERSION=2.1.0-rc1
# Parallel build for speed
make all PARALLEL=true
-```plaintext
-
-#### 3. Package Generation
-
-**Create Distribution Packages**:
-
-```bash
-# Generate complete distributions
+
+
+Create Distribution Packages :
+# Generate complete distributions
make dist-generate
# Create binary packages
@@ -352,12 +327,9 @@ make package-containers
# Create installers
make create-installers
-```plaintext
-
-**Package Validation**:
-
-```bash
-# Validate packages
+
+Package Validation :
+# Validate packages
make test-dist
# Check package contents
@@ -366,14 +338,10 @@ nu src/tools/package/validate-package.nu packages/
# Test installation
make install
make uninstall
-```plaintext
-
-#### 4. Release Creation
-
-**Automated Release**:
-
-```bash
-# Create complete release
+
+
+Automated Release :
+# Create complete release
make release VERSION=2.1.0
# Create draft release for review
@@ -385,22 +353,18 @@ nu src/tools/release/create-release.nu \
--generate-changelog \
--push-tag \
--auto-upload
-```plaintext
-
-**Release Options**:
-
-- `--pre-release`: Mark as pre-release
-- `--draft`: Create draft release
-- `--generate-changelog`: Auto-generate changelog from commits
-- `--push-tag`: Push git tag to remote
-- `--auto-upload`: Upload assets automatically
-
-#### 5. Distribution and Notification
-
-**Upload Artifacts**:
-
-```bash
-# Upload to GitHub Releases
+
+Release Options :
+
+--pre-release: Mark as pre-release
+--draft: Create draft release
+--generate-changelog: Auto-generate changelog from commits
+--push-tag: Push git tag to remote
+--auto-upload: Upload assets automatically
+
+
+Upload Artifacts :
+# Upload to GitHub Releases
make upload-artifacts
# Update package registries
@@ -408,12 +372,9 @@ make update-registry
# Send notifications
make notify-release
-```plaintext
-
-**Registry Updates**:
-
-```bash
-# Update Homebrew formula
+
+Registry Updates :
+# Update Homebrew formula
nu src/tools/release/update-registry.nu \
--registries homebrew \
--version 2.1.0 \
@@ -424,14 +385,10 @@ nu src/tools/release/update-registry.nu \
--registries custom \
--registry-url https://packages.company.com \
--credentials-file ~/.registry-creds
-```plaintext
-
-### Release Automation
-
-**Complete Automated Release**:
-
-```bash
-# Full release pipeline
+
+
+Complete Automated Release :
+# Full release pipeline
make cd-deploy VERSION=2.1.0
# Equivalent manual steps:
@@ -443,23 +400,18 @@ make release VERSION=2.1.0
make upload-artifacts
make update-registry
make notify-release
-```plaintext
-
-## Package Generation
-
-### Binary Packages
-
-**Package Types**:
-
-- **Standalone Archives**: TAR.GZ and ZIP with all dependencies
-- **Platform Packages**: DEB, RPM, MSI, PKG with system integration
-- **Portable Packages**: Single-directory distributions
-- **Source Packages**: Source code with build instructions
-
-**Create Binary Packages**:
-
-```bash
-# Standard binary packages
+
+
+
+Package Types :
+
+Standalone Archives : TAR.GZ and ZIP with all dependencies
+Platform Packages : DEB, RPM, MSI, PKG with system integration
+Portable Packages : Single-directory distributions
+Source Packages : Source code with build instructions
+
+Create Binary Packages :
+# Standard binary packages
make package-binaries
# Custom package creation
@@ -471,21 +423,17 @@ nu src/tools/package/package-binaries.nu \
--compress \
--strip \
--checksum
-```plaintext
-
-**Package Features**:
-
-- **Binary Stripping**: Removes debug symbols for smaller size
-- **Compression**: GZIP, LZMA, and Brotli compression
-- **Checksums**: SHA256 and MD5 verification
-- **Signatures**: GPG and code signing support
-
-### Container Images
-
-**Container Build Process**:
-
-```bash
-# Build container images
+
+Package Features :
+
+Binary Stripping : Removes debug symbols for smaller size
+Compression : GZIP, LZMA, and Brotli compression
+Checksums : SHA256 and MD5 verification
+Signatures : GPG and code signing support
+
+
+Container Build Process :
+# Build container images
make package-containers
# Advanced container build
@@ -497,38 +445,34 @@ nu src/tools/package/build-containers.nu \
--optimize-size \
--security-scan \
--multi-stage
-```plaintext
-
-**Container Features**:
-
-- **Multi-Stage Builds**: Minimal runtime images
-- **Security Scanning**: Vulnerability detection
-- **Multi-Platform**: AMD64, ARM64 support
-- **Layer Optimization**: Efficient layer caching
-- **Runtime Configuration**: Environment-based configuration
-
-**Container Registry Support**:
-
-- Docker Hub
-- GitHub Container Registry
-- Amazon ECR
-- Google Container Registry
-- Azure Container Registry
-- Private registries
-
-### Installers
-
-**Installer Types**:
-
-- **Shell Script Installer**: Universal Unix/Linux installer
-- **Package Installers**: DEB, RPM, MSI, PKG
-- **Container Installer**: Docker/Podman setup
-- **Source Installer**: Build-from-source installer
-
-**Create Installers**:
-
-```bash
-# Generate all installer types
+
+Container Features :
+
+Multi-Stage Builds : Minimal runtime images
+Security Scanning : Vulnerability detection
+Multi-Platform : AMD64, ARM64 support
+Layer Optimization : Efficient layer caching
+Runtime Configuration : Environment-based configuration
+
+Container Registry Support :
+
+Docker Hub
+GitHub Container Registry
+Amazon ECR
+Google Container Registry
+Azure Container Registry
+Private registries
+
+
+Installer Types :
+
+Shell Script Installer : Universal Unix/Linux installer
+Package Installers : DEB, RPM, MSI, PKG
+Container Installer : Docker/Podman setup
+Source Installer : Build-from-source installer
+
+Create Installers :
+# Generate all installer types
make create-installers
# Custom installer creation
@@ -540,43 +484,37 @@ nu src/tools/distribution/create-installer.nu \
--include-services \
--create-uninstaller \
--validate-installer
-```plaintext
-
-**Installer Features**:
-
-- **System Integration**: Systemd/Launchd service files
-- **Path Configuration**: Automatic PATH updates
-- **User/System Install**: Support for both user and system-wide installation
-- **Uninstaller**: Clean removal capability
-- **Dependency Management**: Automatic dependency resolution
-- **Configuration Setup**: Initial configuration creation
-
-## Multi-Platform Distribution
-
-### Supported Platforms
-
-**Primary Platforms**:
-
-- **Linux AMD64** (x86_64-unknown-linux-gnu)
-- **Linux ARM64** (aarch64-unknown-linux-gnu)
-- **macOS AMD64** (x86_64-apple-darwin)
-- **macOS ARM64** (aarch64-apple-darwin)
-- **Windows AMD64** (x86_64-pc-windows-gnu)
-- **FreeBSD AMD64** (x86_64-unknown-freebsd)
-
-**Platform-Specific Features**:
-
-- **Linux**: SystemD integration, package manager support
-- **macOS**: LaunchAgent services, Homebrew packages
-- **Windows**: Windows Service support, MSI installers
-- **FreeBSD**: RC scripts, pkg packages
-
-### Cross-Platform Build
-
-**Cross-Compilation Setup**:
-
-```bash
-# Install cross-compilation targets
+
+Installer Features :
+
+System Integration : Systemd/Launchd service files
+Path Configuration : Automatic PATH updates
+User/System Install : Support for both user and system-wide installation
+Uninstaller : Clean removal capability
+Dependency Management : Automatic dependency resolution
+Configuration Setup : Initial configuration creation
+
+
+
+Primary Platforms :
+
+Linux AMD64 (x86_64-unknown-linux-gnu)
+Linux ARM64 (aarch64-unknown-linux-gnu)
+macOS AMD64 (x86_64-apple-darwin)
+macOS ARM64 (aarch64-apple-darwin)
+Windows AMD64 (x86_64-pc-windows-gnu)
+FreeBSD AMD64 (x86_64-unknown-freebsd)
+
+Platform-Specific Features :
+
+Linux : SystemD integration, package manager support
+macOS : LaunchAgent services, Homebrew packages
+Windows : Windows Service support, MSI installers
+FreeBSD : RC scripts, pkg packages
+
+
+Cross-Compilation Setup :
+# Install cross-compilation targets
rustup target add aarch64-unknown-linux-gnu
rustup target add x86_64-apple-darwin
rustup target add aarch64-apple-darwin
@@ -584,12 +522,9 @@ rustup target add x86_64-pc-windows-gnu
# Install cross-compilation tools
cargo install cross
-```plaintext
-
-**Platform-Specific Builds**:
-
-```bash
-# Build for specific platform
+
+Platform-Specific Builds :
+# Build for specific platform
make build-platform RUST_TARGET=aarch64-apple-darwin
# Build for multiple platforms
@@ -599,14 +534,10 @@ make build-cross PLATFORMS=linux-amd64,macos-arm64,windows-amd64
make linux
make macos
make windows
-```plaintext
-
-### Distribution Matrix
-
-**Generated Distributions**:
-
-```plaintext
-Distribution Matrix:
+
+
+Generated Distributions :
+Distribution Matrix:
provisioning-{version}-{platform}-{variant}.{format}
Examples:
@@ -614,24 +545,19 @@ Examples:
- provisioning-2.1.0-macos-arm64-minimal.tar.gz
- provisioning-2.1.0-windows-amd64-complete.zip
- provisioning-2.1.0-freebsd-amd64-minimal.tar.xz
-```plaintext
-
-**Platform Considerations**:
-
-- **File Permissions**: Executable permissions on Unix systems
-- **Path Separators**: Platform-specific path handling
-- **Service Integration**: Platform-specific service management
-- **Package Formats**: TAR.GZ for Unix, ZIP for Windows
-- **Line Endings**: CRLF for Windows, LF for Unix
-
-## Validation and Testing
-
-### Distribution Validation
-
-**Validation Pipeline**:
-
-```bash
-# Complete validation
+
+Platform Considerations :
+
+File Permissions : Executable permissions on Unix systems
+Path Separators : Platform-specific path handling
+Service Integration : Platform-specific service management
+Package Formats : TAR.GZ for Unix, ZIP for Windows
+Line Endings : CRLF for Windows, LF for Unix
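The packaging conventions above can be sketched in plain shell. This is an illustrative sketch only, not the project's actual packaging script; the staging layout and file names are hypothetical, and it assumes GNU `sed` and `tar`:

```bash
# Illustrative sketch: per-platform packaging conventions (names are hypothetical).
set -euo pipefail
stage=$(mktemp -d)
printf 'provisioning docs\n' > "$stage/README.txt"

# Unix targets ship tar.gz with LF line endings and preserved permissions
tar -czf "$stage/provisioning-linux-amd64.tar.gz" -C "$stage" README.txt

# Windows targets ship zip with CRLF line endings; convert before packaging
sed 's/$/\r/' "$stage/README.txt" > "$stage/README-crlf.txt"

ls "$stage"
```

The same staging tree feeds both archive formats; only the line-ending normalization and archive tool differ per target.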
+
+
+
+Validation Pipeline :
+# Complete validation
make test-dist
# Custom validation
@@ -641,42 +567,34 @@ nu src/tools/build/test-distribution.nu \
--platform linux \
--cleanup \
--verbose
-```plaintext
-
-**Validation Types**:
-
-- **Basic**: Installation test, CLI help, version check
-- **Integration**: Server creation, configuration validation
-- **Complete**: Full workflow testing including cluster operations
-
-### Testing Framework
-
-**Test Categories**:
-
-- **Unit Tests**: Component-specific testing
-- **Integration Tests**: Cross-component testing
-- **End-to-End Tests**: Complete workflow testing
-- **Performance Tests**: Load and performance validation
-- **Security Tests**: Security scanning and validation
-
-**Test Execution**:
-
-```bash
-# Run all tests
+
+Validation Types :
+
+Basic : Installation test, CLI help, version check
+Integration : Server creation, configuration validation
+Complete : Full workflow testing including cluster operations
+
+
+Test Categories :
+
+Unit Tests : Component-specific testing
+Integration Tests : Cross-component testing
+End-to-End Tests : Complete workflow testing
+Performance Tests : Load and performance validation
+Security Tests : Security scanning and validation
+
+Test Execution :
+# Run all tests
make ci-test
# Specific test types
nu src/tools/build/test-distribution.nu --test-types basic
nu src/tools/build/test-distribution.nu --test-types integration
nu src/tools/build/test-distribution.nu --test-types complete
-```plaintext
-
-### Package Validation
-
-**Package Integrity**:
-
-```bash
-# Validate package structure
+
+
+Package Integrity :
+# Validate package structure
nu src/tools/package/validate-package.nu dist/
# Check checksums
@@ -684,12 +602,9 @@ sha256sum -c packages/checksums.sha256
# Verify signatures
gpg --verify packages/provisioning-2.1.0.tar.gz.sig
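The checksum manifest consumed by `sha256sum -c` above is a plain text file, one `<digest>  <filename>` line per artifact. A self-contained sketch of producing and verifying one (the package file here is a stand-in, not a real artifact):

```bash
# Sketch: generate and verify a checksums.sha256 manifest.
set -euo pipefail
pkgdir=$(mktemp -d)
printf 'fake package contents\n' > "$pkgdir/provisioning-2.1.0.tar.gz"

# Generate the manifest (one line per artifact, relative paths)
( cd "$pkgdir" && sha256sum provisioning-2.1.0.tar.gz > checksums.sha256 )

# Verify: exits non-zero if any listed file was altered
( cd "$pkgdir" && sha256sum -c checksums.sha256 )
```

Because the manifest stores relative paths, both generation and verification run from inside the package directory.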
-```plaintext
-
-**Installation Testing**:
-
-```bash
-# Test installation process
+
+Installation Testing :
+# Test installation process
./packages/installers/install-provisioning-2.1.0.sh --dry-run
# Test uninstallation
@@ -697,43 +612,34 @@ gpg --verify packages/provisioning-2.1.0.tar.gz.sig
# Container testing
docker run --rm provisioning:2.1.0 provisioning --version
-```plaintext
-
-## Release Management
-
-### Release Workflow
-
-**GitHub Release Integration**:
-
-```bash
-# Create GitHub release
+
+
+
+GitHub Release Integration :
+# Create GitHub release
nu src/tools/release/create-release.nu \
--version 2.1.0 \
--asset-dir packages \
--generate-changelog \
--push-tag \
--auto-upload
-```plaintext
-
-**Release Features**:
-
-- **Automated Changelog**: Generated from git commit history
-- **Asset Management**: Automatic upload of all distribution artifacts
-- **Tag Management**: Semantic version tagging
-- **Release Notes**: Formatted release notes with change summaries
-
-### Versioning Strategy
-
-**Semantic Versioning**:
-
-- **MAJOR.MINOR.PATCH** format (e.g., 2.1.0)
-- **Pre-release** suffixes (e.g., 2.1.0-alpha.1, 2.1.0-rc.2)
-- **Build metadata** (e.g., 2.1.0+20250925.abcdef)
-
-**Version Detection**:
-
-```bash
-# Auto-detect next version
+
+Release Features :
+
+Automated Changelog : Generated from git commit history
+Asset Management : Automatic upload of all distribution artifacts
+Tag Management : Semantic version tagging
+Release Notes : Formatted release notes with change summaries
+
+
+Semantic Versioning :
+
+MAJOR.MINOR.PATCH format (for example, 2.1.0)
+Pre-release suffixes (for example, 2.1.0-alpha.1, 2.1.0-rc.2)
+Build metadata (for example, 2.1.0+20250925.abcdef)
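The three version forms above decompose with a single regex. A bash sketch (the variable names are illustrative, not part of the tooling):

```bash
# Sketch: split MAJOR.MINOR.PATCH, pre-release suffix, and build metadata.
version="2.1.0-rc.2+20250925.abcdef"
if [[ $version =~ ^([0-9]+)\.([0-9]+)\.([0-9]+)(-([0-9A-Za-z.-]+))?([+]([0-9A-Za-z.-]+))?$ ]]; then
  major=${BASH_REMATCH[1]}
  minor=${BASH_REMATCH[2]}
  patch=${BASH_REMATCH[3]}
  prerelease=${BASH_REMATCH[5]}   # empty when absent
  build=${BASH_REMATCH[7]}        # empty when absent
fi
echo "base=$major.$minor.$patch prerelease=$prerelease build=$build"
```

Note that per SemVer, build metadata (after `+`) is ignored when ordering versions, while the pre-release suffix (after `-`) sorts below the corresponding release.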
+
+Version Detection :
+# Auto-detect next version
nu src/tools/release/create-release.nu --release-type minor
# Manual version specification
@@ -741,22 +647,18 @@ nu src/tools/release/create-release.nu --version 2.1.0
# Pre-release versioning
nu src/tools/release/create-release.nu --version 2.1.0-rc.1 --pre-release
-```plaintext
-
-### Artifact Management
-
-**Artifact Types**:
-
-- **Source Archives**: Complete source code distributions
-- **Binary Archives**: Compiled binary distributions
-- **Container Images**: OCI-compliant container images
-- **Installers**: Platform-specific installation packages
-- **Documentation**: Generated documentation packages
-
-**Upload and Distribution**:
-
-```bash
-# Upload to GitHub Releases
+
+
+Artifact Types :
+
+Source Archives : Complete source code distributions
+Binary Archives : Compiled binary distributions
+Container Images : OCI-compliant container images
+Installers : Platform-specific installation packages
+Documentation : Generated documentation packages
+
+Upload and Distribution :
+# Upload to GitHub Releases
make upload-artifacts
# Upload to container registries
@@ -764,26 +666,20 @@ docker push provisioning:2.1.0
# Update package repositories
make update-registry
-```plaintext
-
-## Rollback Procedures
-
-### Rollback Scenarios
-
-**Common Rollback Triggers**:
-
-- Critical bugs discovered post-release
-- Security vulnerabilities identified
-- Performance regression
-- Compatibility issues
-- Infrastructure failures
-
-### Rollback Process
-
-**Automated Rollback**:
-
-```bash
-# Rollback latest release
+
+
+
+Common Rollback Triggers :
+
+Critical bugs discovered post-release
+Security vulnerabilities identified
+Performance regression
+Compatibility issues
+Infrastructure failures
+
+
+Automated Rollback :
+# Rollback latest release
nu src/tools/release/rollback-release.nu --version 2.1.0
# Rollback with specific target
@@ -792,12 +688,9 @@ nu src/tools/release/rollback-release.nu \
--to-version 2.0.5 \
--update-registries \
--notify-users
-```plaintext
-
-**Manual Rollback Steps**:
-
-```bash
-# 1. Identify target version
+
+Manual Rollback Steps :
+# 1. Identify target version
git tag -l | grep -v 2.1.0 | tail -5
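When picking the rollback target from the surviving tags, plain lexical sorting misorders versions (`2.0.10` before `2.0.5`); GNU `sort -V` orders them numerically. A small sketch with made-up tags:

```bash
# Sketch: choose the newest remaining tag once the bad release is excluded.
bad="v2.1.0"
tags='v2.0.3
v2.0.5
v2.1.0
v2.0.10'
target=$(printf '%s\n' "$tags" | grep -vx "$bad" | sort -V | tail -n 1)
echo "$target"   # v2.0.10 — sort -V ranks 2.0.10 above 2.0.5
```

In the real workflow the tag list would come from `git tag -l` rather than a literal.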
# 2. Create rollback release
@@ -816,21 +709,17 @@ nu src/tools/release/notify-users.nu \
--channels slack,discord,email \
--message-type rollback \
--urgent
-```plaintext
-
-### Rollback Safety
-
-**Pre-Rollback Validation**:
-
-- Validate target version integrity
-- Check compatibility matrix
-- Verify rollback procedure testing
-- Confirm communication plan
-
-**Rollback Testing**:
-
-```bash
-# Test rollback in staging
+
+
+Pre-Rollback Validation :
+
+Validate target version integrity
+Check compatibility matrix
+Verify rollback procedure testing
+Confirm communication plan
+
+Rollback Testing :
+# Test rollback in staging
nu src/tools/release/rollback-release.nu \
--version 2.1.0 \
--target-version 2.0.5 \
@@ -839,39 +728,27 @@ nu src/tools/release/rollback-release.nu \
# Validate rollback success
make test-dist DIST_VERSION=2.0.5
-```plaintext
-
-### Emergency Procedures
-
-**Critical Security Rollback**:
-
-```bash
-# Emergency rollback (bypasses normal procedures)
+
+
+Critical Security Rollback :
+# Emergency rollback (bypasses normal procedures)
nu src/tools/release/rollback-release.nu \
--version 2.1.0 \
--emergency \
--security-issue \
--immediate-notify
-```plaintext
-
-**Infrastructure Failure Recovery**:
-
-```bash
-# Failover to backup infrastructure
+
+Infrastructure Failure Recovery :
+# Failover to backup infrastructure
nu src/tools/release/rollback-release.nu \
--infrastructure-failover \
--backup-registry \
--mirror-sync
-```plaintext
-
-## CI/CD Integration
-
-### GitHub Actions Integration
-
-**Build Workflow** (`.github/workflows/build.yml`):
-
-```yaml
-name: Build and Distribute
+
+
+
+Build Workflow (.github/workflows/build.yml):
+name: Build and Distribute
on:
push:
branches: [main]
@@ -905,12 +782,9 @@ jobs:
with:
name: build-${{ matrix.platform }}
path: src/dist/
-```plaintext
-
-**Release Workflow** (`.github/workflows/release.yml`):
-
-```yaml
-name: Release
+
+Release Workflow (.github/workflows/release.yml):
+name: Release
on:
push:
tags: ['v*']
@@ -935,14 +809,10 @@ jobs:
run: |
cd src/tools
make update-registry VERSION=${{ github.ref_name }}
-```plaintext
-
-### GitLab CI Integration
-
-**GitLab CI Configuration** (`.gitlab-ci.yml`):
-
-```yaml
-stages:
+
+
+GitLab CI Configuration (.gitlab-ci.yml):
+stages:
- build
- package
- test
@@ -975,14 +845,10 @@ release:
- make cd-deploy VERSION=${CI_COMMIT_TAG}
only:
- tags
-```plaintext
-
-### Jenkins Integration
-
-**Jenkinsfile**:
-
-```groovy
-pipeline {
+
+
+Jenkinsfile :
+pipeline {
agent any
stages {
@@ -1014,18 +880,12 @@ pipeline {
}
}
}
-```plaintext
-
-## Troubleshooting
-
-### Common Issues
-
-#### Build Failures
-
-**Rust Compilation Errors**:
-
-```bash
-# Solution: Clean and rebuild
+
+
+
+
+Rust Compilation Errors :
+# Solution: Clean and rebuild
make clean
cargo clean
make build-platform
@@ -1033,112 +893,79 @@ make build-platform
# Check Rust toolchain
rustup show
rustup update
-```plaintext
-
-**Cross-Compilation Issues**:
-
-```bash
-# Solution: Install missing targets
+
+Cross-Compilation Issues :
+# Solution: Install missing targets
rustup target list --installed
rustup target add x86_64-apple-darwin
# Use cross for problematic targets
cargo install cross
make build-platform CROSS=true
-```plaintext
-
-#### Package Generation Issues
-
-**Missing Dependencies**:
-
-```bash
-# Solution: Install build tools
+
+
+Missing Dependencies :
+# Solution: Install build tools
sudo apt-get install build-essential
brew install gnu-tar
# Check tool availability
make info
-```plaintext
-
-**Permission Errors**:
-
-```bash
-# Solution: Fix permissions
+
+Permission Errors :
+# Solution: Fix permissions
chmod +x src/tools/build/*.nu
chmod +x src/tools/distribution/*.nu
chmod +x src/tools/package/*.nu
-```plaintext
-
-#### Distribution Validation Failures
-
-**Package Integrity Issues**:
-
-```bash
-# Solution: Regenerate packages
+
+
+Package Integrity Issues :
+# Solution: Regenerate packages
make clean-dist
make package-all
# Verify manually
sha256sum packages/*.tar.gz
-```plaintext
-
-**Installation Test Failures**:
-
-```bash
-# Solution: Test in clean environment
+
+Installation Test Failures :
+# Solution: Test in clean environment
docker run --rm -v $(pwd):/work ubuntu:latest /work/packages/installers/install.sh
# Debug installation
./packages/installers/install.sh --dry-run --verbose
-```plaintext
-
-### Release Issues
-
-#### Upload Failures
-
-**Network Issues**:
-
-```bash
-# Solution: Retry with backoff
+
+
+
+Network Issues :
+# Solution: Retry with backoff
nu src/tools/release/upload-artifacts.nu \
--retry-count 5 \
--backoff-delay 30
# Manual upload
gh release upload v2.1.0 packages/*.tar.gz
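The `--retry-count`/`--backoff-delay` behaviour described above amounts to retry with exponential backoff. A hedged shell sketch of that pattern (the `retry` helper and demo command are illustrative, not part of the upload tool):

```bash
# Sketch: retry with exponential backoff, as the upload flags describe.
retry() {
  local attempts=$1 delay=$2; shift 2
  local i
  for ((i = 1; i <= attempts; i++)); do
    "$@" && return 0
    echo "attempt $i failed; retrying in ${delay}s" >&2
    sleep "$delay"
    delay=$((delay * 2))   # double the delay each round
  done
  return 1
}

# Demo command that fails twice before succeeding
count_file=$(mktemp)
echo 0 > "$count_file"
flaky() {
  local n=$(( $(cat "$count_file") + 1 ))
  echo "$n" > "$count_file"
  [ "$n" -ge 3 ]
}
retry 5 0 flaky && echo "upload succeeded on attempt $(cat "$count_file")"
```

Doubling the delay between attempts avoids hammering a registry that is already struggling, which is why backoff is preferred over a fixed retry interval.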
-```plaintext
-
-**Authentication Failures**:
-
-```bash
-# Solution: Refresh tokens
+
+Authentication Failures :
+# Solution: Refresh tokens
gh auth refresh
docker login ghcr.io
# Check credentials
gh auth status
docker system info
-```plaintext
-
-#### Registry Update Issues
-
-**Homebrew Formula Issues**:
-
-```bash
-# Solution: Manual PR creation
+
+
+Homebrew Formula Issues :
+# Solution: Manual PR creation
git clone https://github.com/Homebrew/homebrew-core
cd homebrew-core
# Edit formula
git add Formula/provisioning.rb
git commit -m "provisioning 2.1.0"
-```plaintext
-
-### Debug and Monitoring
-
-**Debug Mode**:
-
-```bash
-# Enable debug logging
+
+
+Debug Mode :
+# Enable debug logging
export PROVISIONING_DEBUG=true
export RUST_LOG=debug
@@ -1149,12 +976,9 @@ make all VERBOSE=true
nu src/tools/distribution/generate-distribution.nu \
--verbose \
--dry-run
-```plaintext
-
-**Monitoring Build Progress**:
-
-```bash
-# Monitor build logs
+
+Monitoring Build Progress :
+# Monitor build logs
tail -f src/tools/build.log
# Check build status
@@ -1163,10 +987,8 @@ make status
# Resource monitoring
top
df -h
-```plaintext
-
-This distribution process provides a robust, automated pipeline for creating, validating, and distributing provisioning across multiple platforms while maintaining high quality and reliability standards.
+This distribution process provides a robust, automated pipeline for creating, validating, and distributing provisioning across multiple platforms while maintaining high quality and reliability standards.
diff --git a/docs/book/development/extensions.html b/docs/book/development/extensions.html
index 8cf4735..3d94873 100644
--- a/docs/book/development/extensions.html
+++ b/docs/book/development/extensions.html
@@ -222,21 +222,17 @@
├── CI/CD Pipeline # Continuous integration/deployment
├── Data Platform # Data processing and analytics
└── Custom Clusters # User-defined clusters
-```plaintext
-
-### Extension Discovery
-
-**Discovery Order**:
-
-1. `workspace/extensions/{type}/{user}/{name}` - User-specific extensions
-2. `workspace/extensions/{type}/{name}` - Workspace shared extensions
-3. `workspace/extensions/{type}/template` - Templates
-4. Core system paths (fallback)
-
-**Path Resolution**:
-
-```nushell
-# Automatic extension discovery
+
+
+Discovery Order :
+
+workspace/extensions/{type}/{user}/{name} - User-specific extensions
+workspace/extensions/{type}/{name} - Workspace shared extensions
+workspace/extensions/{type}/template - Templates
+Core system paths (fallback)
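The discovery order above is a first-match walk over candidate paths. A self-contained shell sketch of that precedence (the directory layout here is simulated in a temp dir; `path-resolver.nu` is the real implementation):

```bash
# Sketch of the first-match extension discovery walk (paths are illustrative).
set -euo pipefail
root=$(mktemp -d)
user="alice"; type="providers"; name="my-cloud"

# Simulate a workspace-shared copy only (no user-specific override)
mkdir -p "$root/workspace/extensions/$type/$name"

resolved=""
for candidate in \
  "$root/workspace/extensions/$type/$user/$name" \
  "$root/workspace/extensions/$type/$name" \
  "$root/workspace/extensions/$type/template"
do
  if [ -d "$candidate" ]; then resolved=$candidate; break; fi
done
echo "${resolved#"$root"/}"
```

Because the user-specific path is checked first, creating a copy under `{user}/{name}` shadows the workspace-shared extension without modifying it.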
+
+Path Resolution :
+# Automatic extension discovery
use workspace/lib/path-resolver.nu
# Find provider extension
@@ -247,55 +243,42 @@ let taskservs = (path-resolver list_extensions "taskservs" --include-core)
# Resolve cluster definition
let cluster_path = (path-resolver resolve_extension "clusters" "web-stack")
-```plaintext
-
-## Provider Development
-
-### Provider Architecture
-
-Providers implement cloud resource management through a standardized interface that supports multiple cloud platforms while maintaining consistent APIs.
-
-**Core Responsibilities**:
-
-- **Authentication**: Secure API authentication and credential management
-- **Resource Management**: Server creation, deletion, and lifecycle management
-- **Configuration**: Provider-specific settings and validation
-- **Error Handling**: Comprehensive error handling and recovery
-- **Rate Limiting**: API rate limiting and retry logic
-
-### Creating a New Provider
-
-**1. Initialize from Template**:
-
-```bash
-# Copy provider template
+
+
+
+Providers implement cloud resource management through a standardized interface that supports multiple cloud platforms while maintaining consistent APIs.
+Core Responsibilities :
+
+Authentication : Secure API authentication and credential management
+Resource Management : Server creation, deletion, and lifecycle management
+Configuration : Provider-specific settings and validation
+Error Handling : Comprehensive error handling and recovery
+Rate Limiting : API rate limiting and retry logic
+
+
+1. Initialize from Template :
+# Copy provider template
cp -r workspace/extensions/providers/template workspace/extensions/providers/my-cloud
# Navigate to new provider
cd workspace/extensions/providers/my-cloud
-```plaintext
-
-**2. Update Configuration**:
-
-```bash
-# Initialize provider metadata
+
+2. Update Configuration :
+# Initialize provider metadata
nu init-provider.nu \
--name "my-cloud" \
--display-name "MyCloud Provider" \
--author "$USER" \
--description "MyCloud platform integration"
-```plaintext
-
-### Provider Structure
-
-```plaintext
-my-cloud/
+
+
+my-cloud/
├── README.md # Provider documentation
-├── kcl/ # KCL configuration schemas
-│ ├── settings.k # Provider settings schema
-│ ├── servers.k # Server configuration schema
-│ ├── networks.k # Network configuration schema
-│ └── kcl.mod # KCL module dependencies
+├── schemas/ # Nickel configuration schemas
+│ ├── settings.ncl # Provider settings schema
+│ ├── servers.ncl # Server configuration schema
+│ ├── networks.ncl # Network configuration schema
+│ └── manifest.toml # Nickel module dependencies
├── nulib/ # Nushell implementation
│ ├── provider.nu # Main provider interface
│ ├── servers/ # Server management
@@ -330,14 +313,10 @@ my-cloud/
└── mock/ # Mock data and services
├── api-responses.json # Mock API responses
└── test-configs.toml # Test configurations
-```plaintext
-
-### Provider Implementation
-
-**Main Provider Interface** (`nulib/provider.nu`):
-
-```nushell
-#!/usr/bin/env nu
+
+
+Main Provider Interface (nulib/provider.nu):
+#!/usr/bin/env nu
# MyCloud Provider Implementation
# Provider metadata
@@ -494,12 +473,9 @@ export def "provider test" [
_ => (error make {msg: $"Unknown test type: ($test_type)"})
}
}
-```plaintext
-
-**Authentication Module** (`nulib/auth/client.nu`):
-
-```nushell
-# API client setup and authentication
+
+Authentication Module (nulib/auth/client.nu):
+# API client setup and authentication
export def setup_api_client [config: record] -> record {
# Validate credentials
@@ -541,83 +517,64 @@ def test_auth_api [client: record] -> bool {
$response.status == "success"
}
-```plaintext
+
+Nickel Configuration Schema (schemas/settings.ncl):
+# MyCloud Provider Configuration Schema
-**KCL Configuration Schema** (`kcl/settings.k`):
-
-```kcl
-# MyCloud Provider Configuration Schema
-
-schema MyCloudConfig:
- """MyCloud provider configuration"""
-
- api_url?: str = "https://api.my-cloud.com"
- api_key: str
- api_secret: str
- timeout?: int = 30
- retries?: int = 3
+let MyCloudConfig = {
+ # MyCloud provider configuration
+ api_url | String | default = "https://api.my-cloud.com",
+ api_key | String,
+ api_secret | String,
+ timeout | Number | default = 30,
+ retries | Number | default = 3,
# Rate limiting
- rate_limit?: {
- requests_per_minute?: int = 60
- burst_size?: int = 10
- } = {}
+ rate_limit | {
+ requests_per_minute | Number | default = 60,
+ burst_size | Number | default = 10,
+ } | default = {},
# Default settings
- defaults?: {
- zone?: str = "us-east-1"
- template?: str = "ubuntu-22.04"
- network?: str = "default"
- } = {}
+ defaults | {
+ zone | String | default = "us-east-1",
+ template | String | default = "ubuntu-22.04",
+ network | String | default = "default",
+ } | default = {},
+} in
+MyCloudConfig
- check:
- len(api_key) > 0, "API key cannot be empty"
- len(api_secret) > 0, "API secret cannot be empty"
- timeout > 0, "Timeout must be positive"
- retries >= 0, "Retries must be non-negative"
-
-schema MyCloudServerConfig:
- """MyCloud server configuration"""
-
- name: str
- plan: str
- zone?: str
- template?: str = "ubuntu-22.04"
- storage?: int = 25
- tags?: {str: str} = {}
+let MyCloudServerConfig = {
+ # MyCloud server configuration
+ name | String,
+ plan | String,
+ zone | String | optional,
+ template | String | default = "ubuntu-22.04",
+ storage | Number | default = 25,
+ tags | { _ : String } | default = {},
# Network configuration
- network?: {
- vpc_id?: str
- subnet_id?: str
- public_ip?: bool = true
- firewall_rules?: [FirewallRule] = []
- }
+ network | {
+ vpc_id | String | optional,
+ subnet_id | String | optional,
+ public_ip | Bool | default = true,
+ firewall_rules | Array | default = [],
+ } | optional,
+} in
+MyCloudServerConfig
- check:
- len(name) > 0, "Server name cannot be empty"
- plan in ["small", "medium", "large", "xlarge"], "Invalid plan"
- storage >= 10, "Minimum storage is 10GB"
- storage <= 2048, "Maximum storage is 2TB"
-
-schema FirewallRule:
- """Firewall rule configuration"""
-
- port: int | str
- protocol: str = "tcp"
- source: str = "0.0.0.0/0"
- description?: str
-
- check:
- protocol in ["tcp", "udp", "icmp"], "Invalid protocol"
-```plaintext
-
-### Provider Testing
-
-**Unit Testing** (`tests/unit/test-servers.nu`):
-
-```nushell
-# Unit tests for server management
+let FirewallRule = {
+ # Firewall rule configuration
+ port | Dyn, # number (single port) or string (range); Nickel has no anonymous union contract
+ protocol | String | default = "tcp",
+ source | String | default = "0.0.0.0/0",
+ description | String | optional,
+} in
+FirewallRule
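The original KCL `check` blocks enforced constraints (non-empty name, a fixed plan set, 10GB–2TB storage) that the Nickel port no longer states explicitly. A plain-shell restatement of the same rules, purely to illustrate the validation logic (`validate_server` is hypothetical):

```bash
# Sketch of the server-config validation rules from the KCL check blocks.
validate_server() {
  local name=$1 plan=$2 storage=$3
  [ -n "$name" ] || { echo "Server name cannot be empty" >&2; return 1; }
  case $plan in
    small|medium|large|xlarge) ;;
    *) echo "Invalid plan: $plan" >&2; return 1 ;;
  esac
  [ "$storage" -ge 10 ]   || { echo "Minimum storage is 10GB" >&2; return 1; }
  [ "$storage" -le 2048 ] || { echo "Maximum storage is 2TB" >&2; return 1; }
}

validate_server web-01 medium 25 && echo "web-01 accepted"
validate_server web-02 huge 25 || echo "web-02 rejected"
```

In Nickel these rules would live in custom contracts attached to the fields, so invalid configurations fail at evaluation rather than at deploy time.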
+
+
+Unit Testing (tests/unit/test-servers.nu):
+# Unit tests for server management
use ../../../nulib/provider.nu
@@ -664,12 +621,9 @@ def main [] {
test_invalid_plan
print "✅ All server management tests passed"
}
-```plaintext
-
-**Integration Testing** (`tests/integration/test-lifecycle.nu`):
-
-```nushell
-# Integration tests for complete server lifecycle
+
+Integration Testing (tests/integration/test-lifecycle.nu):
+# Integration tests for complete server lifecycle
use ../../../nulib/provider.nu
@@ -702,54 +656,41 @@ def main [] {
test_complete_lifecycle
print "✅ All integration tests passed"
}
-```plaintext
-
-## Task Service Development
-
-### Task Service Architecture
-
-Task services are infrastructure components that can be deployed and managed across different environments. They provide standardized interfaces for installation, configuration, and lifecycle management.
-
-**Core Responsibilities**:
-
-- **Installation**: Service deployment and setup
-- **Configuration**: Dynamic configuration management
-- **Health Checking**: Service status monitoring
-- **Version Management**: Automatic version updates from GitHub
-- **Integration**: Integration with other services and clusters
-
-### Creating a New Task Service
-
-**1. Initialize from Template**:
-
-```bash
-# Copy task service template
+
+
+
+Task services are infrastructure components that can be deployed and managed across different environments. They provide standardized interfaces for installation, configuration, and lifecycle management.
+Core Responsibilities :
+
+Installation : Service deployment and setup
+Configuration : Dynamic configuration management
+Health Checking : Service status monitoring
+Version Management : Automatic version updates from GitHub
+Integration : Coordination with other services and clusters
+
+
+1. Initialize from Template :
+# Copy task service template
cp -r workspace/extensions/taskservs/template workspace/extensions/taskservs/my-service
# Navigate to new service
cd workspace/extensions/taskservs/my-service
-```plaintext
-
-**2. Initialize Service**:
-
-```bash
-# Initialize service metadata
+
+2. Initialize Service :
+# Initialize service metadata
nu init-service.nu \
--name "my-service" \
--display-name "My Custom Service" \
--type "database" \
--github-repo "myorg/my-service"
-```plaintext
-
-### Task Service Structure
-
-```plaintext
-my-service/
+
+
+my-service/
├── README.md # Service documentation
-├── kcl/ # KCL schemas
-│ ├── version.k # Version and GitHub integration
-│ ├── config.k # Service configuration schema
-│ └── kcl.mod # Module dependencies
+├── schemas/ # Nickel schemas
+│ ├── version.ncl # Version and GitHub integration
+│ ├── config.ncl # Service configuration schema
+│ └── manifest.toml # Module dependencies
├── nushell/ # Nushell implementation
│ ├── taskserv.nu # Main service interface
│ ├── install.nu # Installation logic
@@ -776,14 +717,10 @@ my-service/
├── unit/ # Unit tests
├── integration/ # Integration tests
└── fixtures/ # Test fixtures and data
-```plaintext
-
-### Task Service Implementation
-
-**Main Service Interface** (`nushell/taskserv.nu`):
-
-```nushell
-#!/usr/bin/env nu
+
+
+Main Service Interface (nushell/taskserv.nu):
+#!/usr/bin/env nu
# My Custom Service Task Service Implementation
export const SERVICE_NAME = "my-service"
@@ -986,136 +923,120 @@ export def "taskserv test" [
_ => (error make {msg: $"Unknown test type: ($test_type)"})
}
}
-```plaintext
+
+Version Configuration (schemas/version.ncl):
+# Version management with GitHub integration
-**Version Configuration** (`kcl/version.k`):
-
-```kcl
-# Version management with GitHub integration
-
-version_config: VersionConfig = {
- service_name = "my-service"
+let version_config = {
+ service_name = "my-service",
# GitHub repository for version checking
github = {
- owner = "myorg"
- repo = "my-service"
+ owner = "myorg",
+ repo = "my-service",
# Release configuration
release = {
- tag_prefix = "v"
- prerelease = false
- draft = false
- }
+ tag_prefix = "v",
+ prerelease = false,
+ draft = false,
+ },
# Asset patterns for different platforms
assets = {
- linux_amd64 = "my-service-{version}-linux-amd64.tar.gz"
- darwin_amd64 = "my-service-{version}-darwin-amd64.tar.gz"
- windows_amd64 = "my-service-{version}-windows-amd64.zip"
- }
- }
+ linux_amd64 = "my-service-{version}-linux-amd64.tar.gz",
+ darwin_amd64 = "my-service-{version}-darwin-amd64.tar.gz",
+ windows_amd64 = "my-service-{version}-windows-amd64.zip",
+ },
+ },
# Version constraints and compatibility
compatibility = {
- min_kubernetes_version = "1.20.0"
- max_kubernetes_version = "1.28.*"
+ min_kubernetes_version = "1.20.0",
+ max_kubernetes_version = "1.28.*",
# Dependencies
requires = {
- "cert-manager": ">=1.8.0"
- "ingress-nginx": ">=1.0.0"
- }
+ "cert-manager" = ">=1.8.0",
+ "ingress-nginx" = ">=1.0.0",
+ },
# Conflicts
conflicts = {
- "old-my-service": "*"
- }
- }
+ "old-my-service" = "*",
+ },
+ },
# Installation configuration
installation = {
- default_namespace = "my-service"
- create_namespace = true
+ default_namespace = "my-service",
+ create_namespace = true,
# Resource requirements
resources = {
requests = {
- cpu = "100m"
- memory = "128Mi"
- }
+ cpu = "100m",
+ memory = "128Mi",
+ },
limits = {
- cpu = "500m"
- memory = "512Mi"
- }
- }
+ cpu = "500m",
+ memory = "512Mi",
+ },
+ },
# Persistence
persistence = {
- enabled = true
- storage_class = "default"
- size = "10Gi"
- }
- }
+ enabled = true,
+ storage_class = "default",
+ size = "10Gi",
+ },
+ },
# Health check configuration
health_check = {
- initial_delay_seconds = 30
- period_seconds = 10
- timeout_seconds = 5
- failure_threshold = 3
+ initial_delay_seconds = 30,
+ period_seconds = 10,
+ timeout_seconds = 5,
+ failure_threshold = 3,
# Health endpoints
endpoints = {
- liveness = "/health/live"
- readiness = "/health/ready"
- }
- }
-}
-```plaintext
-
-## Cluster Development
-
-### Cluster Architecture
-
-Clusters represent complete deployment solutions that combine multiple task services, providers, and configurations to create functional environments.
-
-**Core Responsibilities**:
-
-- **Service Orchestration**: Coordinate multiple task service deployments
-- **Dependency Management**: Handle service dependencies and startup order
-- **Configuration Management**: Manage cross-service configuration
-- **Health Monitoring**: Monitor overall cluster health
-- **Scaling**: Handle cluster scaling operations
-
-### Creating a New Cluster
-
-**1. Initialize from Template**:
-
-```bash
-# Copy cluster template
+ liveness = "/health/live",
+ readiness = "/health/ready",
+ },
+ },
+} in
+version_config
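The `{version}` placeholder in the asset patterns above is presumably expanded when a release asset is resolved. A one-line shell sketch of that substitution:

```bash
# Sketch: expanding the {version} placeholder in an asset pattern.
pattern="my-service-{version}-linux-amd64.tar.gz"
version="2.1.0"
asset=${pattern//\{version\}/$version}
echo "$asset"   # my-service-2.1.0-linux-amd64.tar.gz
```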
+
+
+
+Clusters represent complete deployment solutions that combine multiple task services, providers, and configurations to create functional environments.
+Core Responsibilities :
+
+Service Orchestration : Coordinate multiple task service deployments
+Dependency Management : Handle service dependencies and startup order
+Configuration Management : Manage cross-service configuration
+Health Monitoring : Monitor overall cluster health
+Scaling : Handle cluster scaling operations
+
+
+1. Initialize from Template :
+# Copy cluster template
cp -r workspace/extensions/clusters/template workspace/extensions/clusters/my-stack
# Navigate to new cluster
cd workspace/extensions/clusters/my-stack
-```plaintext
-
-**2. Initialize Cluster**:
-
-```bash
-# Initialize cluster metadata
+
+2. Initialize Cluster :
+# Initialize cluster metadata
nu init-cluster.nu \
--name "my-stack" \
--display-name "My Application Stack" \
--type "web-application"
-```plaintext
-
-### Cluster Implementation
-
-**Main Cluster Interface** (`nushell/cluster.nu`):
-
-```nushell
-#!/usr/bin/env nu
+
+
+Main Cluster Interface (nushell/cluster.nu):
+#!/usr/bin/env nu
# My Application Stack Cluster Implementation
export const CLUSTER_NAME = "my-stack"
@@ -1223,26 +1144,20 @@ export def "cluster delete" [
deleted_at: (date now)
}
}
-```plaintext
-
-## Testing and Validation
-
-### Testing Framework
-
-**Test Types**:
-
-- **Unit Tests**: Individual function and module testing
-- **Integration Tests**: Cross-component interaction testing
-- **End-to-End Tests**: Complete workflow testing
-- **Performance Tests**: Load and performance validation
-- **Security Tests**: Security and vulnerability testing
-
-### Extension Testing Commands
-
-**Workspace Testing Tools**:
-
-```bash
-# Validate extension syntax and structure
+
+
+
+Test Types :
+
+Unit Tests : Individual function and module testing
+Integration Tests : Cross-component interaction testing
+End-to-End Tests : Complete workflow testing
+Performance Tests : Load and performance validation
+Security Tests : Security and vulnerability testing
+
+
+Workspace Testing Tools :
+# Validate extension syntax and structure
nu workspace.nu tools validate-extension providers/my-cloud
# Run extension unit tests
@@ -1253,14 +1168,10 @@ nu workspace.nu tools test-extension clusters/my-stack --test-type integration -
# Performance testing
nu workspace.nu tools test-extension providers/my-cloud --test-type performance --duration 5m
-```plaintext
-
-### Automated Testing
-
-**Test Runner** (`tests/run-tests.nu`):
-
-```nushell
-#!/usr/bin/env nu
+
+
+Test Runner (tests/run-tests.nu):
+#!/usr/bin/env nu
# Automated test runner for extensions
def main [
@@ -1320,24 +1231,19 @@ def main [
completed_at: (date now)
}
}
-```plaintext
-
-## Publishing and Distribution
-
-### Extension Publishing
-
-**Publishing Process**:
-
-1. **Validation**: Comprehensive testing and validation
-2. **Documentation**: Complete documentation and examples
-3. **Packaging**: Create distribution packages
-4. **Registry**: Publish to extension registry
-5. **Versioning**: Semantic version tagging
-
-### Publishing Commands
-
-```bash
-# Validate extension for publishing
+
+
+
+Publishing Process :
+
+Validation : Comprehensive testing and validation
+Documentation : Complete documentation and examples
+Packaging : Create distribution packages
+Registry : Publish to extension registry
+Versioning : Semantic version tagging
+
+
+# Validate extension for publishing
nu workspace.nu tools validate-for-publish providers/my-cloud
# Create distribution package
@@ -1348,14 +1254,10 @@ nu workspace.nu tools publish-extension providers/my-cloud --registry official
# Tag version
nu workspace.nu tools tag-extension providers/my-cloud --version 1.0.0 --push
-```plaintext
-
-### Extension Registry
-
-**Registry Structure**:
-
-```plaintext
-Extension Registry
+
+
+Registry Structure :
+Extension Registry
├── providers/
│ ├── aws/ # Official AWS provider
│ ├── upcloud/ # Official UpCloud provider
@@ -1368,16 +1270,11 @@ Extension Registry
├── web-stacks/ # Web application stacks
├── data-platforms/ # Data processing platforms
└── ci-cd/ # CI/CD pipelines
-```plaintext
-
-## Best Practices
-
-### Code Quality
-
-**Function Design**:
-
-```nushell
-# Good: Single responsibility, clear parameters, comprehensive error handling
+
+
+
+Function Design :
+# Good: Single responsibility, clear parameters, comprehensive error handling
export def "provider create-server" [
name: string # Server name (must be unique in region)
plan: string # Server plan (see list-plans for options)
@@ -1401,12 +1298,9 @@ def create [n, p] {
# Missing validation and error handling
api_call $n $p
}
-```plaintext
-
-**Configuration Management**:
-
-```nushell
-# Good: Configuration-driven with validation
+
+Configuration Management :
+# Good: Configuration-driven with validation
def get_api_endpoint [provider: string] -> string {
let config = get-config-value $"providers.($provider).api_url"
@@ -1424,14 +1318,10 @@ def get_api_endpoint [provider: string] -> string {
def get_api_endpoint [] {
"https://api.provider.com" # Never hardcode!
}
-```plaintext
-
-### Error Handling
-
-**Comprehensive Error Context**:
-
-```nushell
-def create_server_with_context [name: string, config: record] -> record {
+
+
+Comprehensive Error Context :
+def create_server_with_context [name: string, config: record] -> record {
try {
# Validate configuration
validate_server_config $config
@@ -1470,14 +1360,10 @@ def create_server_with_context [name: string, config: record] -> record {
}
}
}
-```plaintext
-
-### Testing Practices
-
-**Test Organization**:
-
-```nushell
-# Organize tests by functionality
+
+
+Test Organization :
+# Organize tests by functionality
# tests/unit/server-creation-test.nu
def test_valid_server_creation [] {
@@ -1513,14 +1399,10 @@ def test_invalid_inputs [] {
}
}
}
-```plaintext
-
-### Documentation Standards
-
-**Function Documentation**:
-
-```nushell
-# Comprehensive function documentation
+
+
+Function Documentation :
+# Comprehensive function documentation
def "provider create-server" [
name: string # Server name - must be unique within the provider
plan: string # Server size plan (run 'provider list-plans' for options)
@@ -1557,74 +1439,52 @@ def "provider create-server" [
# Implementation...
}
-```plaintext
-
-## Troubleshooting
-
-### Common Development Issues
-
-#### Extension Not Found
-
-**Error**: `Extension 'my-provider' not found`
-
-```bash
-# Solution: Check extension location and structure
+
+
+
+
+Error : Extension 'my-provider' not found
+# Solution: Check extension location and structure
ls -la workspace/extensions/providers/my-provider
nu workspace/lib/path-resolver.nu resolve_extension "providers" "my-provider"
# Validate extension structure
nu workspace.nu tools validate-extension providers/my-provider
-```plaintext
+
+
+Error : Invalid Nickel configuration
+# Solution: Validate Nickel syntax
+nickel typecheck workspace/extensions/providers/my-provider/schemas/settings.ncl
-#### Configuration Errors
-
-**Error**: `Invalid KCL configuration`
-
-```bash
-# Solution: Validate KCL syntax
-kcl check workspace/extensions/providers/my-provider/kcl/
-
-# Format KCL files
-kcl fmt workspace/extensions/providers/my-provider/kcl/
+# Format Nickel files
+nickel format workspace/extensions/providers/my-provider/schemas/settings.ncl
# Test with example data
-kcl run workspace/extensions/providers/my-provider/kcl/settings.k -D api_key="test"
-```plaintext
-
-#### API Integration Issues
-
-**Error**: `Authentication failed`
-
-```bash
-# Solution: Test credentials and connectivity
+nickel eval workspace/extensions/providers/my-provider/schemas/settings.ncl
+
+
+Error : Authentication failed
+# Solution: Test credentials and connectivity
curl -H "Authorization: Bearer $API_KEY" https://api.provider.com/auth/test
# Debug API calls
export PROVISIONING_DEBUG=true
export PROVISIONING_LOG_LEVEL=debug
nu workspace/extensions/providers/my-provider/nulib/provider.nu test --test-type basic
-```plaintext
-
-### Debug Mode
-
-**Enable Extension Debugging**:
-
-```bash
-# Set debug environment
+
+
+Enable Extension Debugging :
+# Set debug environment
export PROVISIONING_DEBUG=true
export PROVISIONING_LOG_LEVEL=debug
export PROVISIONING_WORKSPACE_USER=$USER
# Run extension with debug
nu workspace/extensions/providers/my-provider/nulib/provider.nu create-server test-server small --dry-run
-```plaintext
-
-### Performance Optimization
-
-**Extension Performance**:
-
-```bash
-# Profile extension performance
+
+
+Extension Performance :
+# Profile extension performance
time nu workspace/extensions/providers/my-provider/nulib/provider.nu list-servers
# Monitor resource usage
@@ -1633,10 +1493,8 @@ nu workspace/tools/runtime-manager.nu monitor --duration 1m --interval 5s
# Optimize API calls (use caching)
export PROVISIONING_CACHE_ENABLED=true
export PROVISIONING_CACHE_TTL=300 # 5 minutes
-```plaintext
-
-This extension development guide provides a comprehensive framework for creating high-quality, maintainable extensions that integrate seamlessly with provisioning's architecture and workflows.
+This extension development guide provides a comprehensive framework for creating high-quality, maintainable extensions that integrate seamlessly with provisioning’s architecture and workflows.
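The publishing flow above ends with semantic version tagging (`tag-extension --version 1.0.0 --push`). As an illustrative sketch only — the function names below are hypothetical and not part of the workspace.nu tooling — this is the comparison a registry might apply when deciding whether a newly tagged extension supersedes the published one:

```python
# Hypothetical sketch: semantic-version comparison for extension tags.
# parse_semver/supersedes are invented names for illustration.

def parse_semver(tag: str) -> tuple[int, int, int]:
    """Parse a MAJOR.MINOR.PATCH tag like '1.0.0' into a comparable tuple."""
    major, minor, patch = tag.split(".")
    return (int(major), int(minor), int(patch))

def supersedes(candidate: str, published: str) -> bool:
    """True if the candidate tag is strictly newer than the published tag."""
    return parse_semver(candidate) > parse_semver(published)

print(supersedes("1.1.0", "1.0.0"))  # True
print(supersedes("1.0.0", "1.0.0"))  # False
```

Tuple comparison makes `1.10.0` correctly outrank `1.9.9`, which a plain string comparison would get wrong.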
diff --git a/docs/book/development/implementation-guide.html b/docs/book/development/implementation-guide.html
index ef62ed2..d9633dc 100644
--- a/docs/book/development/implementation-guide.html
+++ b/docs/book/development/implementation-guide.html
@@ -633,7 +633,7 @@ export def main [] {
"provisioning/core"
"provisioning/extensions"
"provisioning/platform"
- "provisioning/kcl"
+ "provisioning/schemas"
"workspace"
"workspace/templates"
"distribution"
diff --git a/docs/book/development/integration.html b/docs/book/development/integration.html
index 867159f..ff0eb73 100644
--- a/docs/book/development/integration.html
+++ b/docs/book/development/integration.html
@@ -206,29 +206,21 @@
│ - File-based │ │ - Monitoring │ │ - Workflows │
│ - Simple logging│ │ - Validation │ │ - REST APIs │
└─────────────────┘ └─────────────────┘ └─────────────────┘
-```plaintext
-
-## Existing System Integration
-
-### Command-Line Interface Integration
-
-**Seamless CLI Compatibility**:
-
-```bash
-# All existing commands continue to work unchanged
-./core/nulib/provisioning server create web-01 2xCPU-4GB
+
+
+
+Seamless CLI Compatibility :
+# All existing commands continue to work unchanged
+./core/nulib/provisioning server create web-01 2xCPU-4GB
./core/nulib/provisioning taskserv install kubernetes
./core/nulib/provisioning cluster create buildkit
# New commands available alongside existing ones
-./src/core/nulib/provisioning server create web-01 2xCPU-4GB --orchestrated
+./src/core/nulib/provisioning server create web-01 2xCPU-4GB --orchestrated
nu workspace/tools/workspace.nu health --detailed
-```plaintext
-
-**Path Resolution Integration**:
-
-```nushell
-# Automatic path resolution between systems
+
+Path Resolution Integration :
+# Automatic path resolution between systems
use workspace/lib/path-resolver.nu
# Resolves to workspace path if available, falls back to core
@@ -236,14 +228,10 @@ let config_path = (path-resolver resolve_path "config" "user" --fallback-to-core
# Seamless extension discovery
let provider_path = (path-resolver resolve_extension "providers" "upcloud")
-```plaintext
-
-### Configuration System Bridge
-
-**Dual Configuration Support**:
-
-```nushell
-# Configuration bridge supports both ENV and TOML
+
+
+Dual Configuration Support :
+# Configuration bridge supports both ENV and TOML
def get-config-value-bridge [key: string, default: string = ""] -> string {
# Try new TOML configuration first
let toml_value = try {
@@ -273,14 +261,10 @@ def get-config-value-bridge [key: string, default: string = ""] -> string {
help: $"Migrate from ($env_key) environment variable to ($key) in config file"
}
}
-```plaintext
-
-### Data Integration
-
-**Shared Data Access**:
-
-```nushell
-# Unified data access across old and new systems
+
+
+Shared Data Access :
+# Unified data access across old and new systems
def get-server-info [server_name: string] -> record {
# Try new orchestrator data store first
let orchestrator_data = try {
@@ -302,14 +286,10 @@ def get-server-info [server_name: string] -> record {
error make {msg: $"Server not found: ($server_name)"}
}
-```plaintext
-
-### Process Integration
-
-**Hybrid Process Management**:
-
-```nushell
-# Orchestrator-aware process management
+
+
+Hybrid Process Management :
+# Orchestrator-aware process management
def create-server-integrated [
name: string,
plan: string,
@@ -331,33 +311,24 @@ def check-orchestrator-available [] -> bool {
false
}
}
-```plaintext
-
-## API Compatibility and Versioning
-
-### REST API Versioning
-
-**API Version Strategy**:
-
-- **v1**: Legacy compatibility API (existing functionality)
-- **v2**: Enhanced API with orchestrator features
-- **v3**: Full workflow and batch operation support
-
-**Version Header Support**:
-
-```bash
-# API calls with version specification
+
+
+
+API Version Strategy :
+
+v1 : Legacy compatibility API (existing functionality)
+v2 : Enhanced API with orchestrator features
+v3 : Full workflow and batch operation support
+
+Version Header Support :
+# API calls with version specification
curl -H "API-Version: v1" http://localhost:9090/servers
curl -H "API-Version: v2" http://localhost:9090/workflows/servers/create
curl -H "API-Version: v3" http://localhost:9090/workflows/batch/submit
-```plaintext
-
-### API Compatibility Layer
-
-**Backward Compatible Endpoints**:
-
-```rust
-// Rust API compatibility layer
+
+
+Backward Compatible Endpoints :
+// Rust API compatibility layer
#[derive(Debug, Serialize, Deserialize)]
struct ApiRequest {
version: Option<String>,
@@ -392,54 +363,40 @@ async fn handle_v1_request(payload: serde_json::Value) -> Result<ApiRespon
// Transform response to v1 format
Ok(transform_to_v1_response(result))
-}
-```plaintext
-
-### Schema Evolution
-
-**Backward Compatible Schema Changes**:
-
-```kcl
-# API schema with version support
-schema ServerCreateRequest {
+}
+
+Backward Compatible Schema Changes :
+# API schema with version support
+let ServerCreateRequest = {
# V1 fields (always supported)
- name: str
- plan: str
- zone?: str = "auto"
+ name | String,
+ plan | String,
+ zone | String | default = "auto",
# V2 additions (optional for backward compatibility)
- orchestrated?: bool = false
- workflow_options?: WorkflowOptions
+ orchestrated | Bool | default = false,
+ workflow_options | { .. } | optional,
# V3 additions
- batch_options?: BatchOptions
- dependencies?: [str] = []
+ batch_options | { .. } | optional,
+ dependencies | Array String | default = [],
# Version constraints
- api_version?: str = "v1"
-
- check:
- len(name) > 0, "Name cannot be empty"
- plan in ["1xCPU-2GB", "2xCPU-4GB", "4xCPU-8GB", "8xCPU-16GB"], "Invalid plan"
-}
+ api_version | String | default = "v1",
+} in
+ServerCreateRequest
# Conditional validation based on API version
-schema WorkflowOptions:
- wait_for_completion?: bool = true
- timeout_seconds?: int = 300
- retry_count?: int = 3
-
- check:
- timeout_seconds > 0, "Timeout must be positive"
- retry_count >= 0, "Retry count must be non-negative"
-```plaintext
-
-### Client SDK Compatibility
-
-**Multi-Version Client Support**:
-
-```nushell
-# Nushell client with version support
+let WorkflowOptions = {
+ wait_for_completion | Bool | default = true,
+ timeout_seconds | Number | default = 300,
+ retry_count | Number | default = 3,
+} in
+WorkflowOptions
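The legacy KCL schema enforced two checks that the converted snippet no longer states explicitly: a non-empty name and a plan drawn from a fixed list. A hedged sketch of that same validation, with the plan list copied from the removed KCL `check` block and everything else invented for illustration:

```python
# Sketch of the validation the original KCL `check` block expressed.
# VALID_PLANS mirrors the legacy schema; validate_server_request is hypothetical.

VALID_PLANS = ["1xCPU-2GB", "2xCPU-4GB", "4xCPU-8GB", "8xCPU-16GB"]

def validate_server_request(req: dict) -> list[str]:
    """Return validation errors; an empty list means the request is valid."""
    errors = []
    if not req.get("name"):
        errors.append("Name cannot be empty")
    if req.get("plan") not in VALID_PLANS:
        errors.append(f"Invalid plan: {req.get('plan')}")
    return errors

print(validate_server_request({"name": "web-01", "plan": "2xCPU-4GB"}))  # []
```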
+
+
+Multi-Version Client Support :
+# Nushell client with version support
def "client create-server" [
name: string,
plan: string,
@@ -472,16 +429,11 @@ def "client create-server" [
"API-Version": $api_version
}
}
-```plaintext
-
-## Database Migration Strategies
-
-### Database Architecture Evolution
-
-**Migration Strategy**:
-
-```plaintext
-Database Evolution Path
+
+
+
+Migration Strategy :
+Database Evolution Path
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ File-based │ → │ SQLite │ → │ SurrealDB │
│ Storage │ │ Migration │ │ Full Schema │
@@ -490,14 +442,10 @@ Database Evolution Path
│ - Text logs │ │ - Transactions │ │ - Real-time │
│ - Simple state │ │ - Backup/restore│ │ - Clustering │
└─────────────────┘ └─────────────────┘ └─────────────────┘
-```plaintext
-
-### Migration Scripts
-
-**Automated Database Migration**:
-
-```nushell
-# Database migration orchestration
+
+
+Automated Database Migration :
+# Database migration orchestration
def migrate-database [
--from: string = "filesystem",
--to: string = "surrealdb",
@@ -533,12 +481,9 @@ def migrate-database [
print $"Migration from ($from) to ($to) completed successfully"
{from: $from, to: $to, status: "completed", migrated_at: (date now)}
}
-```plaintext
-
-**File System to SurrealDB Migration**:
-
-```nushell
-def migrate_filesystem_to_surrealdb [] -> record {
+
+File System to SurrealDB Migration :
+def migrate_filesystem_to_surrealdb [] -> record {
# Initialize SurrealDB connection
let db = (connect-surrealdb)
@@ -585,14 +530,10 @@ def migrate_filesystem_to_surrealdb [] -> record {
status: "completed"
}
}
-```plaintext
-
-### Data Integrity Verification
-
-**Migration Verification**:
-
-```nushell
-def verify-migration [from: string, to: string] -> record {
+
+
+Migration Verification :
+def verify-migration [from: string, to: string] -> record {
print "Verifying data integrity..."
let source_data = (read-source-data $from)
@@ -629,16 +570,11 @@ def verify-migration [from: string, to: string] -> record {
verified_at: (date now)
}
}
-```plaintext
-
-## Deployment Considerations
-
-### Deployment Architecture
-
-**Hybrid Deployment Model**:
-
-```plaintext
-Deployment Architecture
+
+
+
+Hybrid Deployment Model :
+Deployment Architecture
┌─────────────────────────────────────────────────────────────────┐
│ Load Balancer / Reverse Proxy │
└─────────────────────┬───────────────────────────────────────────┘
@@ -653,14 +589,10 @@ Deployment Architecture
│- Files │ │- Compat │ │- DB │
│- Logs │ │- Monitor │ │- Queue │
└────────┘ └────────────┘ └────────┘
-```plaintext
-
-### Deployment Strategies
-
-**Blue-Green Deployment**:
-
-```bash
-# Blue-Green deployment with integration bridge
+
+
+Blue-Green Deployment :
+# Blue-Green deployment with integration bridge
# Phase 1: Deploy new system alongside existing (Green environment)
cd src/tools
make all
@@ -686,12 +618,9 @@ nginx-traffic-split --new-backend 90%
# Phase 4: Complete cutover
nginx-traffic-split --new-backend 100%
/opt/provisioning-v1/bin/orchestrator stop
-```plaintext
-
-**Rolling Update**:
-
-```nushell
-def rolling-deployment [
+
+Rolling Update :
+def rolling-deployment [
--target-version: string,
--batch-size: int = 3,
--health-check-interval: duration = 30sec
@@ -741,14 +670,10 @@ def rolling-deployment [
completed_at: (date now)
}
}
-```plaintext
-
-### Configuration Deployment
-
-**Environment-Specific Deployment**:
-
-```bash
-# Development deployment
+
+
+Environment-Specific Deployment :
+# Development deployment
PROVISIONING_ENV=dev ./deploy.sh \
--config-source config.dev.toml \
--enable-debug \
@@ -767,14 +692,10 @@ PROVISIONING_ENV=prod ./deploy.sh \
--enable-all-monitoring \
--backup-before-deploy \
--health-check-timeout 5m
-```plaintext
-
-### Container Integration
-
-**Docker Deployment with Bridge**:
-
-```dockerfile
-# Multi-stage Docker build supporting both systems
+
+
+Docker Deployment with Bridge :
+# Multi-stage Docker build supporting both systems
FROM rust:1.70 as builder
WORKDIR /app
COPY . .
@@ -797,12 +718,9 @@ ENV PROVISIONING_NEW_PATH=/app/bin
EXPOSE 8080
CMD ["/app/bin/bridge-start.sh"]
-```plaintext
-
-**Kubernetes Integration**:
-
-```yaml
-# Kubernetes deployment with bridge sidecar
+
+Kubernetes Integration :
+# Kubernetes deployment with bridge sidecar
apiVersion: apps/v1
kind: Deployment
metadata:
@@ -841,16 +759,11 @@ spec:
- name: legacy-data
persistentVolumeClaim:
claimName: provisioning-data
-```plaintext
-
-## Monitoring and Observability
-
-### Integrated Monitoring Architecture
-
-**Monitoring Stack Integration**:
-
-```plaintext
-Observability Architecture
+
+
+
+Monitoring Stack Integration :
+Observability Architecture
┌─────────────────────────────────────────────────────────────────┐
│ Monitoring Dashboard │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
@@ -879,14 +792,10 @@ Observability Architecture
│ - Compatibility │
│ - Migration │
└───────────────────┘
-```plaintext
-
-### Metrics Integration
-
-**Unified Metrics Collection**:
-
-```nushell
-# Metrics bridge for legacy and new systems
+
+
+Unified Metrics Collection :
+# Metrics bridge for legacy and new systems
def collect-system-metrics [] -> record {
let legacy_metrics = collect-legacy-metrics
let new_metrics = collect-new-metrics
@@ -935,14 +844,10 @@ def collect-new-metrics [] -> record {
database_stats: (get-database-metrics)
}
}
-```plaintext
-
-### Logging Integration
-
-**Unified Logging Strategy**:
-
-```nushell
-# Structured logging bridge
+
+
+Unified Logging Strategy :
+# Structured logging bridge
def log-integrated [
level: string,
message: string,
@@ -970,14 +875,10 @@ def log-integrated [
# Send to monitoring system
send-to-monitoring $log_entry
}
-```plaintext
-
-### Health Check Integration
-
-**Comprehensive Health Monitoring**:
-
-```nushell
-def health-check-integrated [] -> record {
+
+
+Comprehensive Health Monitoring :
+def health-check-integrated [] -> record {
let health_checks = [
{name: "legacy-system", check: (check-legacy-health)},
{name: "orchestrator", check: (check-orchestrator-health)},
@@ -1007,16 +908,11 @@ def health-check-integrated [] -> record {
checked_at: (date now)
}
}
-```plaintext
-
-## Legacy System Bridge
-
-### Bridge Architecture
-
-**Bridge Component Design**:
-
-```nushell
-# Legacy system bridge module
+
+
+
+Bridge Component Design :
+# Legacy system bridge module
export module bridge {
# Bridge state management
export def init-bridge [] -> record {
@@ -1070,14 +966,10 @@ export module bridge {
}
}
}
-```plaintext
-
-### Bridge Operation Modes
-
-**Compatibility Mode**:
-
-```nushell
-# Full compatibility with legacy system
+
+
+Compatibility Mode :
+# Full compatibility with legacy system
def run-compatibility-mode [] {
print "Starting bridge in compatibility mode..."
@@ -1098,12 +990,9 @@ def run-compatibility-mode [] {
}
}
}
-```plaintext
-
-**Migration Mode**:
-
-```nushell
-# Gradual migration with traffic splitting
+
+Migration Mode :
+# Gradual migration with traffic splitting
def run-migration-mode [
--new-system-percentage: int = 50
] {
@@ -1126,39 +1015,33 @@ def run-migration-mode [
}
}
}
-```plaintext
-
-## Migration Pathways
-
-### Migration Phases
-
-**Phase 1: Parallel Deployment**
-
-- Deploy new system alongside existing
-- Enable bridge for compatibility
-- Begin data synchronization
-- Monitor integration health
-
-**Phase 2: Gradual Migration**
-
-- Route increasing traffic to new system
-- Migrate data in background
-- Validate consistency
-- Address integration issues
-
-**Phase 3: Full Migration**
-
-- Complete traffic cutover
-- Decommission legacy system
-- Clean up bridge components
-- Finalize data migration
-
-### Migration Automation
-
-**Automated Migration Orchestration**:
-
-```nushell
-def execute-migration-plan [
+
+
+
+Phase 1: Parallel Deployment
+
+Deploy new system alongside existing
+Enable bridge for compatibility
+Begin data synchronization
+Monitor integration health
+
+Phase 2: Gradual Migration
+
+Route increasing traffic to new system
+Migrate data in background
+Validate consistency
+Address integration issues
+
+Phase 3: Full Migration
+
+Complete traffic cutover
+Decommission legacy system
+Clean up bridge components
+Finalize data migration
+
+
+Automated Migration Orchestration :
+def execute-migration-plan [
migration_plan: string,
--dry-run: bool = false,
--skip-backup: bool = false
@@ -1208,12 +1091,9 @@ def execute-migration-plan [
results: $migration_results
}
}
-```plaintext
-
-**Migration Validation**:
-
-```nushell
-def validate-migration-readiness [] -> record {
+
+Migration Validation :
+def validate-migration-readiness [] -> record {
let checks = [
{name: "backup-available", check: (check-backup-exists)},
{name: "new-system-healthy", check: (check-new-system-health)},
@@ -1240,18 +1120,12 @@ def validate-migration-readiness [] -> record {
validated_at: (date now)
}
}
-```plaintext
-
-## Troubleshooting Integration Issues
-
-### Common Integration Problems
-
-#### API Compatibility Issues
-
-**Problem**: Version mismatch between client and server
-
-```bash
-# Diagnosis
+
+
+
+
+Problem : Version mismatch between client and server
+# Diagnosis
curl -H "API-Version: v1" http://localhost:9090/health
curl -H "API-Version: v2" http://localhost:9090/health
@@ -1260,14 +1134,10 @@ curl http://localhost:9090/api/versions
# Update client API version
export PROVISIONING_API_VERSION=v2
-```plaintext
-
-#### Configuration Bridge Issues
-
-**Problem**: Configuration not found in either system
-
-```nushell
-# Diagnosis
+
+
+Problem : Configuration not found in either system
+# Diagnosis
def diagnose-config-issue [key: string] -> record {
let toml_result = try {
get-config-value $key
@@ -1296,14 +1166,10 @@ def migrate-single-config [key: string] {
print $"Migrated ($key) from environment variable"
}
}
-```plaintext
-
-#### Database Integration Issues
-
-**Problem**: Data inconsistency between systems
-
-```nushell
-# Diagnosis and repair
+
+
+Problem : Data inconsistency between systems
+# Diagnosis and repair
def repair-data-consistency [] -> record {
let legacy_data = (read-legacy-data)
let new_data = (read-new-data)
@@ -1331,27 +1197,20 @@ def repair-data-consistency [] -> record {
repaired_at: (date now)
}
}
-```plaintext
-
-### Debug Tools
-
-**Integration Debug Mode**:
-
-```bash
-# Enable comprehensive debugging
+
+
+Integration Debug Mode :
+# Enable comprehensive debugging
export PROVISIONING_DEBUG=true
export PROVISIONING_LOG_LEVEL=debug
export PROVISIONING_BRIDGE_DEBUG=true
export PROVISIONING_INTEGRATION_TRACE=true
# Run with integration debugging
-provisioning server create test-server 2xCPU-4GB --debug-integration
-```plaintext
-
-**Health Check Debugging**:
-
-```nushell
-def debug-integration-health [] -> record {
+provisioning server create test-server 2xCPU-4GB --debug-integration
+
+Health Check Debugging :
+def debug-integration-health [] -> record {
print "=== Integration Health Debug ==="
# Check all integration points
@@ -1384,10 +1243,8 @@ def debug-integration-health [] -> record {
debug_timestamp: (date now)
}
}
-```plaintext
-
-This integration guide provides a comprehensive framework for seamlessly integrating new development components with existing production systems while maintaining reliability, compatibility, and clear migration pathways.
+This integration guide provides a comprehensive framework for seamlessly integrating new development components with existing production systems while maintaining reliability, compatibility, and clear migration pathways.
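The dual-configuration bridge described in this guide tries the structured TOML configuration first and falls back to a legacy environment variable. A minimal Python sketch of that lookup order — the real helper is the Nushell `get-config-value-bridge`, and the dict stand-in for the TOML layer is an assumption:

```python
# Hedged sketch of the TOML-first, ENV-fallback configuration bridge.
import os

def get_config_value_bridge(config: dict, key: str, default: str = "") -> str:
    """Structured config wins; legacy PROVISIONING_* env var is the fallback."""
    if key in config and config[key] != "":
        return config[key]
    # Legacy convention: dots become underscores, upper-cased, with prefix.
    env_key = "PROVISIONING_" + key.replace(".", "_").upper()
    env_value = os.environ.get(env_key)
    if env_value is not None:
        return env_value  # legacy path: candidate for migration to the config file
    return default

os.environ["PROVISIONING_PROVIDERS_UPCLOUD_API_URL"] = "https://legacy.example"
print(get_config_value_bridge({}, "providers.upcloud.api_url"))  # falls back to ENV
```

Keeping the fallback one-directional (config shadows ENV, never the reverse) is what lets existing deployments keep working while users migrate keys into the config file.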
diff --git a/docs/book/development/project-structure.html b/docs/book/development/project-structure.html
index 6a6839f..b2e39aa 100644
--- a/docs/book/development/project-structure.html
+++ b/docs/book/development/project-structure.html
@@ -202,68 +202,53 @@
├── docs/ # Documentation (new)
├── extensions/ # Extension framework
├── generators/ # Code generation tools
-├── kcl/ # KCL configuration language files
+├── schemas/ # Nickel configuration schemas (migrated from kcl/)
├── orchestrator/ # Hybrid Rust/Nushell orchestrator
├── platform/ # Platform-specific code
├── provisioning/ # Main provisioning
├── templates/ # Template files
├── tools/ # Build and development tools
└── utils/ # Utility scripts
-```plaintext
-
-### Legacy Structure (Preserved)
-
-```plaintext
-repo-cnz/
+
+
+repo-cnz/
├── cluster/ # Cluster configurations (preserved)
├── core/ # Core system (preserved)
├── generate/ # Generation scripts (preserved)
-├── kcl/ # KCL files (preserved)
+├── schemas/ # Nickel schemas (migrated from kcl/)
├── klab/ # Development lab (preserved)
├── nushell-plugins/ # Plugin development (preserved)
├── providers/ # Cloud providers (preserved)
├── taskservs/ # Task services (preserved)
└── templates/ # Template files (preserved)
-```plaintext
-
-### Development Workspace (`/workspace/`)
-
-```plaintext
-workspace/
+
+
+workspace/
├── config/ # Development configuration
├── extensions/ # Extension development
├── infra/ # Development infrastructure
├── lib/ # Workspace libraries
├── runtime/ # Runtime data
└── tools/ # Workspace management tools
-```plaintext
-
-## Core Directories
-
-### `/src/core/` - Core Development Libraries
-
-**Purpose**: Development-focused core libraries and entry points
-
-**Key Files**:
-
-- `nulib/provisioning` - Main CLI entry point (symlinks to legacy location)
-- `nulib/lib_provisioning/` - Core provisioning libraries
-- `nulib/workflows/` - Workflow management (orchestrator integration)
-
-**Relationship to Legacy**: Preserves original `core/` functionality while adding development enhancements
-
-### `/src/tools/` - Build and Development Tools
-
-**Purpose**: Complete build system for the provisioning project
-
-**Key Components**:
-
-```plaintext
-tools/
+
+
+
+Purpose : Development-focused core libraries and entry points
+Key Files :
+
+nulib/provisioning - Main CLI entry point (symlinks to legacy location)
+nulib/lib_provisioning/ - Core provisioning libraries
+nulib/workflows/ - Workflow management (orchestrator integration)
+
+Relationship to Legacy : Preserves original core/ functionality while adding development enhancements
+
+Purpose : Complete build system for the provisioning project
+Key Components :
+tools/
├── build/ # Build tools
│ ├── compile-platform.nu # Platform-specific compilation
│ ├── bundle-core.nu # Core library bundling
-│ ├── validate-kcl.nu # KCL validation
+│ ├── validate-nickel.nu # Nickel schema validation
│ ├── clean-build.nu # Build cleanup
│ └── test-distribution.nu # Distribution testing
├── distribution/ # Distribution tools
@@ -284,122 +269,94 @@ tools/
│ ├── notify-users.nu # Release notifications
│ └── update-registry.nu # Package registry updates
└── Makefile # Main build system (40+ targets)
-```plaintext
-
-### `/src/orchestrator/` - Hybrid Orchestrator
-
-**Purpose**: Rust/Nushell hybrid orchestrator for solving deep call stack limitations
-
-**Key Components**:
-
-- `src/` - Rust orchestrator implementation
-- `scripts/` - Orchestrator management scripts
-- `data/` - File-based task queue and persistence
-
-**Integration**: Provides REST API and workflow management while preserving all Nushell business logic
-
-### `/src/provisioning/` - Enhanced Provisioning
-
-**Purpose**: Enhanced version of the main provisioning with additional features
-
-**Key Features**:
-
-- Batch workflow system (v3.1.0)
-- Provider-agnostic design
-- Configuration-driven architecture (v2.0.0)
-
-### `/workspace/` - Development Workspace
-
-**Purpose**: Complete development environment with tools and runtime management
-
-**Key Components**:
-
-- `tools/workspace.nu` - Unified workspace management interface
-- `lib/path-resolver.nu` - Smart path resolution system
-- `config/` - Environment-specific development configurations
-- `extensions/` - Extension development templates and examples
-- `infra/` - Development infrastructure examples
-- `runtime/` - Isolated runtime data per user
-
-## Development Workspace
-
-### Workspace Management
-
-The workspace provides a sophisticated development environment:
-
-**Initialization**:
-
-```bash
-cd workspace/tools
+
+
+Purpose : Rust/Nushell hybrid orchestrator for solving deep call stack limitations
+Key Components :
+
+src/ - Rust orchestrator implementation
+scripts/ - Orchestrator management scripts
+data/ - File-based task queue and persistence
+
+Integration : Provides REST API and workflow management while preserving all Nushell business logic
+
+Purpose : Enhanced version of the main provisioning with additional features
+Key Features :
+
+Batch workflow system (v3.1.0)
+Provider-agnostic design
+Configuration-driven architecture (v2.0.0)
+
+
+Purpose : Complete development environment with tools and runtime management
+Key Components :
+
+tools/workspace.nu - Unified workspace management interface
+lib/path-resolver.nu - Smart path resolution system
+config/ - Environment-specific development configurations
+extensions/ - Extension development templates and examples
+infra/ - Development infrastructure examples
+runtime/ - Isolated runtime data per user
+
+
+
+The workspace provides a sophisticated development environment:
+Initialization :
+cd workspace/tools
nu workspace.nu init --user-name developer --infra-name my-infra
-```plaintext
-
-**Health Monitoring**:
-
-```bash
-nu workspace.nu health --detailed --fix-issues
-```plaintext
-
-**Path Resolution**:
-
-```nushell
-use lib/path-resolver.nu
+
+Health Monitoring :
+nu workspace.nu health --detailed --fix-issues
+
+Path Resolution :
+use lib/path-resolver.nu
let config = (path-resolver resolve_config "user" --workspace-user "john")
-```plaintext
-
-### Extension Development
-
-The workspace provides templates for developing:
-
-- **Providers**: Custom cloud provider implementations
-- **Task Services**: Infrastructure service components
-- **Clusters**: Complete deployment solutions
-
-Templates are available in `workspace/extensions/{type}/template/`
-
-### Configuration Hierarchy
-
-The workspace implements a sophisticated configuration cascade:
-
-1. Workspace user configuration (`workspace/config/{user}.toml`)
-2. Environment-specific defaults (`workspace/config/{env}-defaults.toml`)
-3. Workspace defaults (`workspace/config/dev-defaults.toml`)
-4. Core system defaults (`config.defaults.toml`)
-
-## File Naming Conventions
-
-### Nushell Files (`.nu`)
-
-- **Commands**: `kebab-case` - `create-server.nu`, `validate-config.nu`
-- **Modules**: `snake_case` - `lib_provisioning`, `path_resolver`
-- **Scripts**: `kebab-case` - `workspace-health.nu`, `runtime-manager.nu`
-
-### Configuration Files
-
-- **TOML**: `kebab-case.toml` - `config-defaults.toml`, `user-settings.toml`
-- **Environment**: `{env}-defaults.toml` - `dev-defaults.toml`, `prod-defaults.toml`
-- **Examples**: `*.toml.example` - `local-overrides.toml.example`
-
-### KCL Files (`.k`)
-
-- **Schemas**: `PascalCase` types - `ServerConfig`, `WorkflowDefinition`
-- **Files**: `kebab-case.k` - `server-config.k`, `workflow-schema.k`
-- **Modules**: `kcl.mod` - Module definition files
-
-### Build and Distribution
-
-- **Scripts**: `kebab-case.nu` - `compile-platform.nu`, `generate-distribution.nu`
-- **Makefiles**: `Makefile` - Standard naming
-- **Archives**: `{project}-{version}-{platform}-{variant}.{ext}`
-
-## Navigation Guide
-
-### Finding Components
-
-**Core System Entry Points**:
-
-```bash
-# Main CLI (development version)
+
+
+The workspace provides templates for developing:
+
+Providers : Custom cloud provider implementations
+Task Services : Infrastructure service components
+Clusters : Complete deployment solutions
+
+Templates are available in workspace/extensions/{type}/template/
+
+The workspace implements a sophisticated configuration cascade:
+
+Workspace user configuration (workspace/config/{user}.toml)
+Environment-specific defaults (workspace/config/{env}-defaults.toml)
+Workspace defaults (workspace/config/dev-defaults.toml)
+Core system defaults (config.defaults.toml)
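The four-level cascade above resolves each key from the highest-priority layer that defines it. A minimal sketch, with invented layer contents standing in for the real TOML files:

```python
# Sketch of first-match-wins resolution across the configuration cascade.
# Layer contents are illustrative examples, not the actual default files.

def resolve_config(key: str, layers: list[dict]):
    """Return the value from the first (highest-priority) layer defining key."""
    for layer in layers:
        if key in layer:
            return layer[key]
    return None

user_cfg = {"provider": "upcloud"}                     # workspace/config/{user}.toml
env_defaults = {"log_level": "debug"}                  # workspace/config/{env}-defaults.toml
workspace_defaults = {"log_level": "info"}             # workspace/config/dev-defaults.toml
core_defaults = {"provider": "local", "zone": "auto"}  # config.defaults.toml

layers = [user_cfg, env_defaults, workspace_defaults, core_defaults]
print(resolve_config("provider", layers))   # user layer shadows core default
print(resolve_config("log_level", layers))  # env defaults shadow workspace defaults
```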
+
+
+
+
+Commands : kebab-case - create-server.nu, validate-config.nu
+Modules : snake_case - lib_provisioning, path_resolver
+Scripts : kebab-case - workspace-health.nu, runtime-manager.nu
+
+
+
+TOML : kebab-case.toml - config-defaults.toml, user-settings.toml
+Environment : {env}-defaults.toml - dev-defaults.toml, prod-defaults.toml
+Examples : *.toml.example - local-overrides.toml.example
+
+
+
+Schemas : kebab-case.ncl - server-config.ncl, workflow-schema.ncl
+Configuration : manifest.toml - Package metadata
+Structure : Organized in schemas/ directories per extension
+
+
+
+Scripts : kebab-case.nu - compile-platform.nu, generate-distribution.nu
+Makefiles : Makefile - Standard naming
+Archives : {project}-{version}-{platform}-{variant}.{ext}
+
+
+
+Core System Entry Points :
+# Main CLI (development version)
/src/core/nulib/provisioning
# Legacy CLI (production version)
@@ -407,12 +364,9 @@ The workspace implements a sophisticated configuration cascade:
# Workspace management
/workspace/tools/workspace.nu
-```plaintext
-
-**Build System**:
-
-```bash
-# Main build system
+
+Build System :
+# Main build system
cd /src/tools && make help
# Quick development build
@@ -420,12 +374,9 @@ make dev-build
# Complete distribution
make all
-```plaintext
-
-**Configuration Files**:
-
-```bash
-# System defaults
+
+Configuration Files :
+# System defaults
/config.defaults.toml
# User configuration (workspace)
@@ -433,12 +384,9 @@ make all
# Environment-specific
/workspace/config/{env}-defaults.toml
-```plaintext
-
-**Extension Development**:
-
-```bash
-# Provider template
+
+Extension Development :
+# Provider template
/workspace/extensions/providers/template/
# Task service template
@@ -446,25 +394,18 @@ make all
# Cluster template
/workspace/extensions/clusters/template/
-```plaintext
-
-### Common Workflows
-
-**1. Development Setup**:
-
-```bash
-# Initialize workspace
+
+
+1. Development Setup :
+# Initialize workspace
cd workspace/tools
nu workspace.nu init --user-name $USER
# Check health
nu workspace.nu health --detailed
-```plaintext
-
-**2. Building Distribution**:
-
-```bash
-# Complete build
+
+2. Building Distribution :
+# Complete build
cd src/tools
make all
@@ -472,109 +413,95 @@ make all
make linux
make macos
make windows
-```plaintext
-
-**3. Extension Development**:
-
-```bash
-# Create new provider
+
+3. Extension Development :
+# Create new provider
cp -r workspace/extensions/providers/template workspace/extensions/providers/my-provider
# Test extension
nu workspace/extensions/providers/my-provider/nulib/provider.nu test
-```plaintext
-
-### Legacy Compatibility
-
-**Existing Commands Still Work**:
-
-```bash
-# All existing commands preserved
+
+
+Existing Commands Still Work :
+# All existing commands preserved
./core/nulib/provisioning server create
./core/nulib/provisioning taskserv install kubernetes
./core/nulib/provisioning cluster create buildkit
-```plaintext
-
-**Configuration Migration**:
-
-- ENV variables still supported as fallbacks
-- New configuration system provides better defaults
-- Migration tools available in `src/tools/migration/`
-
-## Migration Path
-
-### For Users
-
-**No Changes Required**:
-
-- All existing commands continue to work
-- Configuration files remain compatible
-- Existing infrastructure deployments unaffected
-
-**Optional Enhancements**:
-
-- Migrate to new configuration system for better defaults
-- Use workspace for development environments
-- Leverage new build system for custom distributions
-
-### For Developers
-
-**Development Environment**:
-
-1. Initialize development workspace: `nu workspace/tools/workspace.nu init`
-2. Use new build system: `cd src/tools && make dev-build`
-3. Leverage extension templates for custom development
-
-**Build System**:
-
-1. Use new Makefile for comprehensive build management
-2. Leverage distribution tools for packaging
-3. Use release management for version control
-
-**Orchestrator Integration**:
-
-1. Start orchestrator for workflow management: `cd src/orchestrator && ./scripts/start-orchestrator.nu`
-2. Use workflow APIs for complex operations
-3. Leverage batch operations for efficiency
-
-### Migration Tools
-
-**Available Migration Scripts**:
-
-- `src/tools/migration/config-migration.nu` - Configuration migration
-- `src/tools/migration/workspace-setup.nu` - Workspace initialization
-- `src/tools/migration/path-resolver.nu` - Path resolution migration
-
-**Validation Tools**:
-
-- `src/tools/validation/system-health.nu` - System health validation
-- `src/tools/validation/compatibility-check.nu` - Compatibility verification
-- `src/tools/validation/migration-status.nu` - Migration status tracking
-
-## Architecture Benefits
-
-### Development Efficiency
-
-- **Build System**: Comprehensive 40+ target Makefile system
-- **Workspace Isolation**: Per-user development environments
-- **Extension Framework**: Template-based extension development
-
-### Production Reliability
-
-- **Backward Compatibility**: All existing functionality preserved
-- **Configuration Migration**: Gradual migration from ENV to config-driven
-- **Orchestrator Architecture**: Hybrid Rust/Nushell for performance and flexibility
-- **Workflow Management**: Batch operations with rollback capabilities
-
-### Maintenance Benefits
-
-- **Clean Separation**: Development tools separate from production code
-- **Organized Structure**: Logical grouping of related functionality
-- **Documentation**: Comprehensive documentation and examples
-- **Testing Framework**: Built-in testing and validation tools
-
-This structure represents a significant evolution in the project's organization while maintaining complete backward compatibility and providing powerful new development capabilities.
+Configuration Migration :
+
+ENV variables still supported as fallbacks
+New configuration system provides better defaults
+Migration tools available in src/tools/migration/
+
+
+
+No Changes Required :
+
+All existing commands continue to work
+Configuration files remain compatible
+Existing infrastructure deployments unaffected
+
+Optional Enhancements :
+
+Migrate to new configuration system for better defaults
+Use workspace for development environments
+Leverage new build system for custom distributions
+
+
+Development Environment :
+
+Initialize development workspace: nu workspace/tools/workspace.nu init
+Use new build system: cd src/tools && make dev-build
+Leverage extension templates for custom development
+
+Build System :
+
+Use new Makefile for comprehensive build management
+Leverage distribution tools for packaging
+Use release management for version control
+
+Orchestrator Integration :
+
+Start orchestrator for workflow management: cd src/orchestrator && ./scripts/start-orchestrator.nu
+Use workflow APIs for complex operations
+Leverage batch operations for efficiency
+
+
+Available Migration Scripts :
+
+src/tools/migration/config-migration.nu - Configuration migration
+src/tools/migration/workspace-setup.nu - Workspace initialization
+src/tools/migration/path-resolver.nu - Path resolution migration
+
+Validation Tools :
+
+src/tools/validation/system-health.nu - System health validation
+src/tools/validation/compatibility-check.nu - Compatibility verification
+src/tools/validation/migration-status.nu - Migration status tracking
+
+
+
+
+Build System : Comprehensive 40+ target Makefile system
+Workspace Isolation : Per-user development environments
+Extension Framework : Template-based extension development
+
+
+
+Backward Compatibility : All existing functionality preserved
+Configuration Migration : Gradual migration from ENV to config-driven
+Orchestrator Architecture : Hybrid Rust/Nushell for performance and flexibility
+Workflow Management : Batch operations with rollback capabilities
+
+
+
+Clean Separation : Development tools separate from production code
+Organized Structure : Logical grouping of related functionality
+Documentation : Comprehensive documentation and examples
+Testing Framework : Built-in testing and validation tools
+
+This structure represents a significant evolution in the project’s organization while maintaining complete backward compatibility and providing powerful new development capabilities.
diff --git a/docs/book/development/workflow.html b/docs/book/development/workflow.html
index c6d7ac0..30302dc 100644
--- a/docs/book/development/workflow.html
+++ b/docs/book/development/workflow.html
@@ -213,32 +213,23 @@ cd provisioning-system
# Navigate to workspace
cd workspace/tools
-```plaintext
-
-**2. Initialize Workspace**:
-
-```bash
-# Initialize development workspace
+
+2. Initialize Workspace :
+# Initialize development workspace
nu workspace.nu init --user-name $USER --infra-name dev-env
# Check workspace health
nu workspace.nu health --detailed --fix-issues
-```plaintext
-
-**3. Configure Development Environment**:
-
-```bash
-# Create user configuration
+
+3. Configure Development Environment :
+# Create user configuration
cp workspace/config/local-overrides.toml.example workspace/config/$USER.toml
# Edit configuration for development
$EDITOR workspace/config/$USER.toml
-```plaintext
-
-**4. Set Up Build System**:
-
-```bash
-# Navigate to build tools
+
+4. Set Up Build System :
+# Navigate to build tools
cd src/tools
# Check build prerequisites
@@ -246,43 +237,32 @@ make info
# Perform initial build
make dev-build
-```plaintext
-
-### Tool Installation
-
-**Required Tools**:
-
-```bash
-# Install Nushell
+
+
+Required Tools :
+# Install Nushell
cargo install nu
-# Install KCL
-cargo install kcl-cli
+# Install Nickel
+cargo install nickel-lang-cli
# Install additional tools
cargo install cross # Cross-compilation
cargo install cargo-audit # Security auditing
cargo install cargo-watch # File watching
-```plaintext
-
-**Optional Development Tools**:
-
-```bash
-# Install development enhancers
+
+Optional Development Tools :
+# Install development enhancers
cargo install nu_plugin_tera # Template plugin
brew install sops # Secrets management
brew install k9s # Kubernetes management
-```plaintext
-
-### IDE Configuration
-
-**VS Code Setup** (`.vscode/settings.json`):
-
-```json
-{
+
+
+VS Code Setup (.vscode/settings.json):
+{
"files.associations": {
"*.nu": "shellscript",
- "*.k": "kcl",
+ "*.ncl": "nickel",
"*.toml": "toml"
},
"nushell.shellPath": "/usr/local/bin/nu",
@@ -291,24 +271,19 @@ brew install k9s # Kubernetes management
"editor.rulers": [100],
"files.trimTrailingWhitespace": true
}
-```plaintext
-
-**Recommended Extensions**:
-
-- Nushell Language Support
-- Rust Analyzer
-- KCL Language Support
-- TOML Language Support
-- Better TOML
-
-## Daily Development Workflow
-
-### Morning Routine
-
-**1. Sync and Update**:
-
-```bash
-# Sync with upstream
+
+Recommended Extensions :
+
+Nushell Language Support
+Rust Analyzer
+Nickel Language Support
+TOML Language Support
+Better TOML
+
+
+
+1. Sync and Update :
+# Sync with upstream
git pull origin main
# Update workspace
@@ -317,25 +292,18 @@ nu workspace.nu health --fix-issues
# Check for updates
nu workspace.nu status --detailed
-```plaintext
-
-**2. Review Current State**:
-
-```bash
-# Check current infrastructure
+
+2. Review Current State :
+# Check current infrastructure
provisioning show servers
provisioning show settings
# Review workspace status
nu workspace.nu status
-```plaintext
-
-### Development Cycle
-
-**1. Feature Development**:
-
-```bash
-# Create feature branch
+
+
+1. Feature Development :
+# Create feature branch
git checkout -b feature/new-provider-support
# Start development environment
@@ -344,12 +312,9 @@ nu workspace.nu init --workspace-type development
# Begin development
$EDITOR workspace/extensions/providers/new-provider/nulib/provider.nu
-```plaintext
-
-**2. Incremental Testing**:
-
-```bash
-# Test syntax during development
+
+2. Incremental Testing :
+# Test syntax during development
nu --check workspace/extensions/providers/new-provider/nulib/provider.nu
# Run unit tests
@@ -357,12 +322,9 @@ nu workspace/extensions/providers/new-provider/tests/unit/basic-test.nu
# Integration testing
nu workspace.nu tools test-extension providers/new-provider
-```plaintext
-
-**3. Build and Validate**:
-
-```bash
-# Quick development build
+
+3. Build and Validate :
+# Quick development build
cd src/tools
make dev-build
@@ -371,37 +333,26 @@ make validate-all
# Test distribution
make test-dist
-```plaintext
-
-### Testing During Development
-
-**Unit Testing**:
-
-```nushell
-# Add test examples to functions
+
+
+Unit Testing :
+# Add test examples to functions
def create-server [name: string] -> record {
# @test: "test-server" -> {name: "test-server", status: "created"}
# Implementation here
}
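
A small runner could drive these `@test` annotations by extracting them from source files. A hypothetical sketch in Python — the runner itself is an assumption; only the `# @test: input -> expected` format comes from the example above:

```python
import re

# Matches "# @test: <input> -> <expected>" comment annotations.
ANNOTATION = re.compile(r"#\s*@test:\s*(.+?)\s*->\s*(.+)")

def extract_tests(source: str) -> list[tuple[str, str]]:
    """Return (input, expected) pairs from embedded @test annotations."""
    return [(m.group(1), m.group(2)) for m in ANNOTATION.finditer(source)]

source = '''
def validate-server-name [name: string] -> bool {
    # @test: "valid-name" -> true
    # @test: "" -> false
}
'''
for inp, expected in extract_tests(source):
    print(f"case {inp} expects {expected}")
```
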
-```plaintext
-
-**Integration Testing**:
-
-```bash
-# Test with real infrastructure
+
+Integration Testing :
+# Test with real infrastructure
nu workspace/extensions/providers/new-provider/nulib/provider.nu \
create-server test-server --dry-run
# Test with workspace isolation
PROVISIONING_WORKSPACE_USER=$USER provisioning server create test-server --check
-```plaintext
-
-### End-of-Day Routine
-
-**1. Commit Progress**:
-
-```bash
-# Stage changes
+
+
+1. Commit Progress :
+# Stage changes
git add .
# Commit with descriptive message
@@ -414,12 +365,9 @@ git commit -m "feat(provider): add new cloud provider support
# Push to feature branch
git push origin feature/new-provider-support
-```plaintext
-
-**2. Workspace Maintenance**:
-
-```bash
-# Clean up development data
+
+2. Workspace Maintenance :
+# Clean up development data
nu workspace.nu cleanup --type cache --age 1d
# Backup current state
@@ -427,16 +375,11 @@ nu workspace.nu backup --auto-name --components config,extensions
# Check workspace health
nu workspace.nu health
-```plaintext
-
-## Code Organization
-
-### Nushell Code Structure
-
-**File Organization**:
-
-```plaintext
-Extension Structure:
+
+
+
+File Organization :
+Extension Structure:
├── nulib/
│ ├── main.nu # Main entry point
│ ├── core/ # Core functionality
@@ -453,12 +396,9 @@ Extension Structure:
└── templates/ # Template files
├── config.j2 # Configuration templates
└── manifest.j2 # Manifest templates
-```plaintext
-
-**Function Naming Conventions**:
-
-```nushell
-# Use kebab-case for commands
+
+Function Naming Conventions :
+# Use kebab-case for commands
def create-server [name: string] -> record { ... }
def validate-config [config: record] -> bool { ... }
@@ -470,12 +410,9 @@ def parse_config_file [path: string] -> record { ... }
def check-server-status [server: string] -> string { ... }
def get-server-info [server: string] -> record { ... }
def list-available-zones [] -> list<string> { ... }
-```plaintext
-
-**Error Handling Pattern**:
-
-```nushell
-def create-server [
+
+Error Handling Pattern :
+def create-server [
name: string
--dry-run: bool = false
] -> record {
@@ -505,14 +442,10 @@ def create-server [
# 4. Return result
{server: $name, status: "created", id: (generate-id)}
}
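
The four-step shape of this pattern (validate, honor dry-run, execute, return a result) is language-independent; a hypothetical Python rendering with a stand-in execute step:

```python
def create_server(name: str, dry_run: bool = False) -> dict:
    # 1. Validate input
    if not name or not name.replace("-", "").isalnum():
        raise ValueError(f"Invalid server name: {name!r}")
    # 2. Honor --dry-run: report intent without side effects
    if dry_run:
        return {"server": name, "status": "would-create"}
    # 3. Execute (stand-in for the real provider call)
    # 4. Return result
    return {"server": name, "status": "created"}

print(create_server("web-01", dry_run=True))  # {'server': 'web-01', 'status': 'would-create'}
```
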
-```plaintext
-
-### Rust Code Structure
-
-**Project Organization**:
-
-```plaintext
-src/
+
+
+Project Organization :
+src/
├── lib.rs # Library root
├── main.rs # Binary entry point
├── config/ # Configuration handling
@@ -527,12 +460,9 @@ src/
├── mod.rs
├── workflow.rs # Workflow management
└── task_queue.rs # Task queue management
-```plaintext
-
-**Error Handling**:
-
-```rust
-use anyhow::{Context, Result};
+
+Error Handling :
+use anyhow::{Context, Result};
use thiserror::Error;
#[derive(Error, Debug)]
@@ -561,62 +491,46 @@ pub fn create_server(name: &str) -> Result<ServerInfo> {
.context("Failed to provision server")?;
Ok(server)
-}
-```plaintext
-
-### KCL Schema Organization
-
-**Schema Structure**:
-
-```kcl
-# Base schema definitions
-schema ServerConfig:
- name: str
- plan: str
- zone: str
- tags?: {str: str} = {}
-
- check:
- len(name) > 0, "Server name cannot be empty"
- plan in ["1xCPU-2GB", "2xCPU-4GB", "4xCPU-8GB"], "Invalid plan"
+}
+
+Schema Structure :
+# Base schema definitions
+let ServerConfig = {
+  name | String,
+  plan | String,
+  zone | String,
+  tags | { _ : String } | default = {},
+} in
+ServerConfig
# Provider-specific extensions
-schema UpCloudServerConfig(ServerConfig):
- template?: str = "Ubuntu Server 22.04 LTS (Jammy Jellyfish)"
- storage?: int = 25
-
- check:
- storage >= 10, "Minimum storage is 10GB"
- storage <= 2048, "Maximum storage is 2TB"
+let UpCloudServerConfig = {
+  template | String | default = "Ubuntu Server 22.04 LTS (Jammy Jellyfish)",
+  storage | Number | default = 25,
+} in
+UpCloudServerConfig
# Composition schemas
-schema InfrastructureConfig:
- servers: [ServerConfig]
- networks?: [NetworkConfig] = []
- load_balancers?: [LoadBalancerConfig] = []
-
- check:
- len(servers) > 0, "At least one server required"
-```plaintext
-
-## Testing Strategies
-
-### Test-Driven Development
-
-**TDD Workflow**:
-
-1. **Write Test First**: Define expected behavior
-2. **Run Test (Fail)**: Confirm test fails as expected
-3. **Write Code**: Implement minimal code to pass
-4. **Run Test (Pass)**: Confirm test now passes
-5. **Refactor**: Improve code while keeping tests green
-
-### Nushell Testing
-
-**Unit Test Pattern**:
-
-```nushell
-# Function with embedded test
+let InfrastructureConfig = {
+  servers | Array Dyn,
+  networks | Array Dyn | default = [],
+  load_balancers | Array Dyn | default = [],
+} in
+InfrastructureConfig
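
The original KCL schemas enforced `check` constraints (a plan whitelist, storage bounds) that the Nickel snippets above do not yet express. Equivalent validation logic, sketched in Python for illustration only:

```python
# Validation equivalent to the check blocks from the original KCL schemas;
# the plan list and bounds come from those schemas, the function is illustrative.
VALID_PLANS = ["1xCPU-2GB", "2xCPU-4GB", "4xCPU-8GB"]

def validate_server(cfg: dict) -> list[str]:
    errors = []
    if not cfg.get("name"):
        errors.append("Server name cannot be empty")
    if cfg.get("plan") not in VALID_PLANS:
        errors.append("Invalid plan")
    storage = cfg.get("storage", 25)
    if not 10 <= storage <= 2048:
        errors.append("Storage must be between 10GB and 2TB")
    return errors

print(validate_server({"name": "web-01", "plan": "2xCPU-4GB"}))  # []
print(validate_server({"name": "", "plan": "tiny", "storage": 5}))
```
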
+
+
+
+TDD Workflow :
+
+Write Test First : Define expected behavior
+Run Test (Fail) : Confirm test fails as expected
+Write Code : Implement minimal code to pass
+Run Test (Pass) : Confirm test now passes
+Refactor : Improve code while keeping tests green
+
+
+Unit Test Pattern :
+# Function with embedded test
def validate-server-name [name: string] -> bool {
# @test: "valid-name" -> true
# @test: "" -> false
@@ -647,12 +561,9 @@ def test_validate_server_name [] {
print "✅ validate-server-name tests passed"
}
-```plaintext
-
-**Integration Test Pattern**:
-
-```nushell
-# tests/integration/server-lifecycle-test.nu
+
+Integration Test Pattern :
+# tests/integration/server-lifecycle-test.nu
def test_complete_server_lifecycle [] {
# Setup
let test_server = "test-server-" + (date now | format date "%Y%m%d%H%M%S")
@@ -672,14 +583,10 @@ def test_complete_server_lifecycle [] {
exit 1
}
}
-```plaintext
-
-### Rust Testing
-
-**Unit Testing**:
-
-```rust
-#[cfg(test)]
+
+
+Unit Testing :
+#[cfg(test)]
mod tests {
use super::*;
use tokio_test;
@@ -704,13 +611,9 @@ mod tests {
assert_eq!(server.name, "test-server");
assert_eq!(server.status, "created");
}
-}
-```plaintext
-
-**Integration Testing**:
-
-```rust
-#[cfg(test)]
+}
+Integration Testing :
+#[cfg(test)]
mod integration_tests {
use super::*;
use testcontainers::*;
@@ -732,30 +635,21 @@ mod integration_tests {
assert_eq!(result.status, WorkflowStatus::Completed);
}
-}
-```plaintext
-
-### KCL Testing
-
-**Schema Validation Testing**:
-
-```bash
-# Test KCL schemas
-kcl test kcl/
+}
+
+Schema Validation Testing :
+# Test Nickel schemas
+for f in schemas/*.ncl; do nickel typecheck "$f"; done
# Validate specific schemas
-kcl check kcl/server.k --data test-data.yaml
+nickel typecheck schemas/server.ncl
# Test with examples
-kcl run kcl/server.k -D name="test-server" -D plan="2xCPU-4GB"
-```plaintext
-
-### Test Automation
-
-**Continuous Testing**:
-
-```bash
-# Watch for changes and run tests
+nickel eval schemas/server.ncl
+
+
+Continuous Testing :
+# Watch for changes and run tests
cargo watch -x test -x check
# Watch Nushell files
@@ -763,16 +657,11 @@ find . -name "*.nu" | entr -r nu tests/run-all-tests.nu
# Automated testing in workspace
nu workspace.nu tools test-all --watch
-```plaintext
-
-## Debugging Techniques
-
-### Debug Configuration
-
-**Enable Debug Mode**:
-
-```bash
-# Environment variables
+
+
+
+Enable Debug Mode :
+# Environment variables
export PROVISIONING_DEBUG=true
export PROVISIONING_LOG_LEVEL=debug
export RUST_LOG=debug
@@ -780,14 +669,10 @@ export RUST_BACKTRACE=1
# Workspace debug
export PROVISIONING_WORKSPACE_USER=$USER
-```plaintext
-
-### Nushell Debugging
-
-**Debug Techniques**:
-
-```nushell
-# Debug prints
+
+
+Debug Techniques :
+# Debug prints
def debug-server-creation [name: string] {
print $"🐛 Creating server: ($name)"
@@ -823,12 +708,9 @@ def debug-interactive [] {
# Drop into interactive shell
nu --interactive
}
-```plaintext
-
-**Error Investigation**:
-
-```nushell
-# Comprehensive error handling
+
+Error Investigation :
+# Comprehensive error handling
def safe-server-creation [name: string] {
try {
create-server $name
@@ -854,14 +736,10 @@ def safe-server-creation [name: string] {
}
}
}
-```plaintext
-
-### Rust Debugging
-
-**Debug Logging**:
-
-```rust
-use tracing::{debug, info, warn, error, instrument};
+
+
+Debug Logging :
+use tracing::{debug, info, warn, error, instrument};
#[instrument]
pub async fn create_server(name: &str) -> Result<ServerInfo> {
@@ -884,27 +762,18 @@ pub async fn create_server(name: &str) -> Result<ServerInfo> {
info!("Server {} created successfully", name);
Ok(server)
-}
-```plaintext
-
-**Interactive Debugging**:
-
-```rust
-// Use debugger breakpoints
+}
+Interactive Debugging :
+// Use debugger breakpoints
#[cfg(debug_assertions)]
{
println!("Debug: server creation starting");
dbg!(&config);
// Add breakpoint here in IDE
-}
-```plaintext
-
-### Log Analysis
-
-**Log Monitoring**:
-
-```bash
-# Follow all logs
+}
+
+Log Monitoring :
+# Follow all logs
tail -f workspace/runtime/logs/$USER/*.log
# Filter for errors
@@ -915,25 +784,17 @@ tail -f workspace/runtime/logs/$USER/orchestrator.log | grep -i workflow
# Structured log analysis
jq '.level == "ERROR"' workspace/runtime/logs/$USER/structured.jsonl
-```plaintext
-
-**Debug Log Levels**:
-
-```bash
-# Different verbosity levels
+
+Debug Log Levels :
+# Different verbosity levels
PROVISIONING_LOG_LEVEL=trace provisioning server create test
PROVISIONING_LOG_LEVEL=debug provisioning server create test
PROVISIONING_LOG_LEVEL=info provisioning server create test
-```plaintext
-
-## Integration Workflows
-
-### Existing System Integration
-
-**Working with Legacy Components**:
-
-```bash
-# Test integration with existing system
+
+
+
+Working with Legacy Components :
+# Test integration with existing system
provisioning --version # Legacy system
src/core/nulib/provisioning --version # New system
@@ -943,32 +804,24 @@ PROVISIONING_WORKSPACE_USER=$USER provisioning server list
# Validate configuration compatibility
provisioning validate config
nu workspace.nu config validate
-```plaintext
-
-### API Integration Testing
-
-**REST API Testing**:
-
-```bash
-# Test orchestrator API
+
+
+REST API Testing :
+# Test orchestrator API
curl -X GET http://localhost:9090/health
curl -X GET http://localhost:9090/tasks
# Test workflow creation
curl -X POST http://localhost:9090/workflows/servers/create \
-H "Content-Type: application/json" \
- -d '{"name": "test-server", "plan": "2xCPU-4GB"}'
+ -d '{"name": "test-server", "plan": "2xCPU-4GB"}'
# Monitor workflow
curl -X GET http://localhost:9090/workflows/batch/status/workflow-id
-```plaintext
-
-### Database Integration
-
-**SurrealDB Integration**:
-
-```nushell
-# Test database connectivity
+
+
+SurrealDB Integration :
+# Test database connectivity
use core/nulib/lib_provisioning/database/surreal.nu
let db = (connect-database)
(test-connection $db)
@@ -977,14 +830,10 @@ let db = (connect-database)
let workflow_id = (create-workflow-record "test-workflow")
let status = (get-workflow-status $workflow_id)
assert ($status.status == "pending")
-```plaintext
-
-### External Tool Integration
-
-**Container Integration**:
-
-```bash
-# Test with Docker
+
+
+Container Integration :
+# Test with Docker
docker run --rm -v $(pwd):/work provisioning:dev provisioning --version
# Test with Kubernetes
@@ -994,24 +843,19 @@ kubectl logs test-pod
# Validate in different environments
make test-dist PLATFORM=docker
make test-dist PLATFORM=kubernetes
-```plaintext
-
-## Collaboration Guidelines
-
-### Branch Strategy
-
-**Branch Naming**:
-
-- `feature/description` - New features
-- `fix/description` - Bug fixes
-- `docs/description` - Documentation updates
-- `refactor/description` - Code refactoring
-- `test/description` - Test improvements
-
-**Workflow**:
-
-```bash
-# Start new feature
+
+
+
+Branch Naming :
+
+feature/description - New features
+fix/description - Bug fixes
+docs/description - Documentation updates
+refactor/description - Code refactoring
+test/description - Test improvements
+
+Workflow :
+# Start new feature
git checkout main
git pull origin main
git checkout -b feature/new-provider-support
@@ -1023,23 +867,25 @@ git commit -m "feat(provider): implement server creation API"
# Push and create PR
git push origin feature/new-provider-support
gh pr create --title "Add new provider support" --body "..."
-```plaintext
-
-### Code Review Process
-
-**Review Checklist**:
-
-- [ ] Code follows project conventions
-- [ ] Tests are included and passing
-- [ ] Documentation is updated
-- [ ] No hardcoded values
-- [ ] Error handling is comprehensive
-- [ ] Performance considerations addressed
-
-**Review Commands**:
-
-```bash
-# Test PR locally
+
+
+Review Checklist :
+
+Code follows project conventions
+Tests are included and passing
+Documentation is updated
+No hardcoded values
+Error handling is comprehensive
+Performance considerations addressed
+
+Review Commands :
+# Test PR locally
gh pr checkout 123
cd src/tools && make ci-test
@@ -1049,53 +895,43 @@ nu workspace/extensions/providers/new-provider/tests/run-all.nu
# Check code quality
cargo clippy -- -D warnings
nu --check $(find . -name "*.nu")
-```plaintext
-
-### Documentation Requirements
-
-**Code Documentation**:
-
-```nushell
-# Function documentation
+
+
+Code Documentation :
+# Function documentation
def create-server [
name: string # Server name (must be unique)
- plan: string # Server plan (e.g., "2xCPU-4GB")
+    plan: string # Server plan (for example, "2xCPU-4GB")
--dry-run: bool # Show what would be created without doing it
] -> record { # Returns server creation result
# Creates a new server with the specified configuration
#
# Examples:
- # create-server "web-01" "2xCPU-4GB"
- # create-server "test" "1xCPU-2GB" --dry-run
+    #   create-server "web-01" "2xCPU-4GB"
+    #   create-server "test" "1xCPU-2GB" --dry-run
# Implementation
}
-```plaintext
-
-### Communication
-
-**Progress Updates**:
-
-- Daily standup participation
-- Weekly architecture reviews
-- PR descriptions with context
-- Issue tracking with details
-
-**Knowledge Sharing**:
-
-- Technical blog posts
-- Architecture decision records
-- Code review discussions
-- Team documentation updates
-
-## Quality Assurance
-
-### Code Quality Checks
-
-**Automated Quality Gates**:
-
-```bash
-# Pre-commit hooks
+
+
+Progress Updates :
+
+Daily standup participation
+Weekly architecture reviews
+PR descriptions with context
+Issue tracking with details
+
+Knowledge Sharing :
+
+Technical blog posts
+Architecture decision records
+Code review discussions
+Team documentation updates
+
+
+
+Automated Quality Gates :
+# Pre-commit hooks
pre-commit install
# Manual quality check
@@ -1104,22 +940,18 @@ make validate-all
# Security audit
cargo audit
-```plaintext
-
-**Quality Metrics**:
-
-- Code coverage > 80%
-- No critical security vulnerabilities
-- All tests passing
-- Documentation coverage complete
-- Performance benchmarks met
-
-### Performance Monitoring
-
-**Performance Testing**:
-
-```bash
-# Benchmark builds
+
+Quality Metrics :
+
+Code coverage > 80%
+No critical security vulnerabilities
+All tests passing
+Documentation coverage complete
+Performance benchmarks met
+
+
+Performance Testing :
+# Benchmark builds
make benchmark
# Performance profiling
@@ -1127,41 +959,29 @@ cargo flamegraph --bin provisioning-orchestrator
# Load testing
ab -n 1000 -c 10 http://localhost:9090/health
-```plaintext
-
-**Resource Monitoring**:
-
-```bash
-# Monitor during development
+
+Resource Monitoring :
+# Monitor during development
nu workspace/tools/runtime-manager.nu monitor --duration 5m
# Check resource usage
du -sh workspace/runtime/
df -h
-```plaintext
-
-## Best Practices
-
-### Configuration Management
-
-**Never Hardcode**:
-
-```nushell
-# Bad
+
+
+
+Never Hardcode :
+# Bad
def get-api-url [] { "https://api.upcloud.com" }
# Good
def get-api-url [] {
get-config-value "providers.upcloud.api_url" "https://api.upcloud.com"
}
-```plaintext
-
-### Error Handling
-
-**Comprehensive Error Context**:
-
-```nushell
-def create-server [name: string] {
+
+
+Comprehensive Error Context :
+def create-server [name: string] {
try {
validate-server-name $name
} catch { |e|
@@ -1180,14 +1000,10 @@ def create-server [name: string] {
}
}
}
-```plaintext
-
-### Resource Management
-
-**Clean Up Resources**:
-
-```nushell
-def with-temporary-server [name: string, action: closure] {
+
+
+Clean Up Resources :
+def with-temporary-server [name: string, action: closure] {
let server = (create-server $name)
try {
@@ -1201,14 +1017,10 @@ def with-temporary-server [name: string, action: closure] {
# Clean up on success
delete-server $name
}
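
The create/act/clean-up shape above maps naturally onto a context manager; a sketch in Python with stand-in create/delete calls (the names and the `events` recorder are illustrative):

```python
from contextlib import contextmanager

events = []  # records calls so the cleanup guarantee is observable

# Stand-ins for the real provisioning calls; illustrative only.
def create_server(name):
    events.append(("create", name))
    return {"name": name}

def delete_server(name):
    events.append(("delete", name))

@contextmanager
def temporary_server(name: str):
    server = create_server(name)
    try:
        yield server            # run the caller's action against the server
    finally:
        delete_server(name)     # clean up on success *and* on failure

with temporary_server("test-01") as srv:
    events.append(("use", srv["name"]))
```
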
-```plaintext
-
-### Testing Best Practices
-
-**Test Isolation**:
-
-```nushell
-def test-with-isolation [test_name: string, test_action: closure] {
+
+
+Test Isolation :
+def test-with-isolation [test_name: string, test_action: closure] {
let test_workspace = $"test-($test_name)-(date now | format date '%Y%m%d%H%M%S')"
try {
@@ -1228,10 +1040,8 @@ def test-with-isolation [test_name: string, test_action: closure] {
nu workspace.nu cleanup --user-name $test_workspace --type all --force
}
}
-```plaintext
-
-This development workflow provides a comprehensive framework for efficient, quality-focused development while maintaining the project's architectural principles and ensuring smooth collaboration across the team.
+This development workflow provides a comprehensive framework for efficient, quality-focused development while maintaining the project’s architectural principles and ensuring smooth collaboration across the team.
diff --git a/docs/book/fonts/SOURCE-CODE-PRO-LICENSE.txt b/docs/book/fonts/SOURCE-CODE-PRO-LICENSE.txt
index 366206f..efc001f 100644
--- a/docs/book/fonts/SOURCE-CODE-PRO-LICENSE.txt
+++ b/docs/book/fonts/SOURCE-CODE-PRO-LICENSE.txt
@@ -18,7 +18,7 @@ with others.
The OFL allows the licensed fonts to be used, studied, modified and
redistributed freely as long as they are not sold by themselves. The
-fonts, including any derivative works, can be bundled, embedded,
+fonts, including any derivative works, can be bundled, embedded,
redistributed and/or sold with any software provided that any reserved
names are not used by derivative works. The fonts and derivatives,
however, cannot be released under any other type of license. The
diff --git a/docs/book/guides/customize-infrastructure.html b/docs/book/guides/customize-infrastructure.html
index e108e7e..b4838ff 100644
--- a/docs/book/guides/customize-infrastructure.html
+++ b/docs/book/guides/customize-infrastructure.html
@@ -211,23 +211,15 @@
│ • Provider implementations │
│ • Default taskserv configs │
└─────────────────────────────────────┘
-```plaintext
-
-**Resolution Order**: Infrastructure (300) → Workspace (200) → Core (100)
-
-Higher numbers override lower numbers.
-
-### View Layer Resolution
-
-```bash
-# Explain layer concept
+
+Resolution Order : Infrastructure (300) → Workspace (200) → Core (100)
+Higher numbers override lower numbers.
+
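
The override behavior the `lyr` commands report can be sketched as a priority-ordered merge (keys and values below are illustrative):

```python
# Priority-ordered layer merge: higher-numbered layers overwrite lower ones.
layers = {
    300: {"version": "1.30.0"},                                   # Infrastructure
    200: {"security_policies": "strict"},                         # Workspace
    100: {"version": "1.29.0", "default_runtime": "containerd"},  # Core
}

def merge(layers: dict[int, dict]) -> dict:
    result: dict = {}
    for priority in sorted(layers):        # low priority first...
        result.update(layers[priority])    # ...so higher priorities overwrite
    return result

final = merge(layers)
print(final["version"])  # 1.30.0 — Infrastructure (300) wins over Core (100)
```
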
+# Explain layer concept
provisioning lyr explain
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-📚 LAYER SYSTEM EXPLAINED
+
+Expected Output:
+📚 LAYER SYSTEM EXPLAINED
The layer system provides configuration inheritance across 3 levels:
@@ -254,28 +246,23 @@ The layer system provides configuration inheritance across 3 levels:
Resolution: Infrastructure → Workspace → Core
Higher priority layers override lower ones.
-```plaintext
-
-```bash
-# Show layer resolution for your project
+
+# Show layer resolution for your project
provisioning lyr show my-production
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-📊 Layer Resolution for my-production:
+
+Expected Output:
+📊 Layer Resolution for my-production:
LAYER PRIORITY SOURCE FILES
Infrastructure 300 workspace/infra/my-production/ 4 files
- • servers.k (overrides)
- • taskservs.k (overrides)
- • clusters.k (custom)
- • providers.k (overrides)
+ • servers.ncl (overrides)
+ • taskservs.ncl (overrides)
+ • clusters.ncl (custom)
+ • providers.ncl (overrides)
Workspace 200 provisioning/workspace/templates/ 2 files
- • production.k (used)
- • kubernetes.k (used)
+ • production.ncl (used)
+ • kubernetes.ncl (used)
Core 100 provisioning/extensions/ 15 files
• taskservs/* (base configs)
@@ -284,38 +271,32 @@ Core 100 provisioning/extensions/ 15 files
Resolution Order: Infrastructure → Workspace → Core
Status: ✅ All layers resolved successfully
-```plaintext
-
-### Test Layer Resolution
-
-```bash
-# Test how a specific module resolves
+
+
+# Test how a specific module resolves
provisioning lyr test kubernetes my-production
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-🔍 Layer Resolution Test: kubernetes → my-production
+
+Expected Output:
+🔍 Layer Resolution Test: kubernetes → my-production
Resolving kubernetes configuration...
🔴 Infrastructure Layer (300):
- ✅ Found: workspace/infra/my-production/taskservs/kubernetes.k
+ ✅ Found: workspace/infra/my-production/taskservs/kubernetes.ncl
Provides:
• version = "1.30.0" (overrides)
• control_plane_servers = ["web-01"] (overrides)
• worker_servers = ["web-02"] (overrides)
🟢 Workspace Layer (200):
- ✅ Found: provisioning/workspace/templates/production-kubernetes.k
+ ✅ Found: provisioning/workspace/templates/production-kubernetes.ncl
Provides:
• security_policies (inherited)
• network_policies (inherited)
• resource_quotas (inherited)
🔵 Core Layer (100):
- ✅ Found: provisioning/extensions/taskservs/kubernetes/config.k
+ ✅ Found: provisioning/extensions/taskservs/kubernetes/main.ncl
Provides:
• default_version = "1.29.0" (base)
• default_features (base)
@@ -332,21 +313,14 @@ Final Configuration (after merging all layers):
default_plugins: {...} (from Core)
Resolution: ✅ Success
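The layered override shown above can be sketched with Nickel's merge operator: lower layers mark their fields with `default` priority, and higher layers win on conflicting fields (a minimal illustration, not the platform's actual resolver):

```nickel
# Core layer: values marked as defaults so higher layers may override them.
let core = { version | default = "1.29.0", port | default = 6443 } in
# Infrastructure layer: overrides version, inherits port.
let infra = { version = "1.30.0" } in
core & infra  # evaluates to { version = "1.30.0", port = 6443 }
```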
-```plaintext
-
-## Using Templates
-
-### List Available Templates
-
-```bash
-# List all templates
+
+
+
+# List all templates
provisioning tpl list
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-📋 Available Templates:
+
+Expected Output:
+📋 Available Templates:
TASKSERVS:
• production-kubernetes - Production-ready Kubernetes setup
@@ -368,26 +342,18 @@ CLUSTERS:
• security-stack - Security monitoring tools
Total: 13 templates
-```plaintext
-
-```bash
-# List templates by type
+
+# List templates by type
provisioning tpl list --type taskservs
provisioning tpl list --type providers
provisioning tpl list --type clusters
-```plaintext
-
-### View Template Details
-
-```bash
-# Show template details
+
+
+# Show template details
provisioning tpl show production-kubernetes
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-📄 Template: production-kubernetes
+
+Expected Output:
+📄 Template: production-kubernetes
Description: Production-ready Kubernetes configuration with
security hardening, network policies, and monitoring
@@ -406,26 +372,20 @@ Configuration Provided:
Requirements:
• Minimum 2 servers
- • 4GB RAM per server
+ • 4 GB RAM per server
• Network plugin (Cilium recommended)
-Location: provisioning/workspace/templates/production-kubernetes.k
+Location: provisioning/workspace/templates/production-kubernetes.ncl
Example Usage:
provisioning tpl apply production-kubernetes my-production
-```plaintext
-
-### Apply Template
-
-```bash
-# Apply template to your infrastructure
+
+
+# Apply template to your infrastructure
provisioning tpl apply production-kubernetes my-production
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-🚀 Applying template: production-kubernetes → my-production
+
+Expected Output:
+🚀 Applying template: production-kubernetes → my-production
Checking compatibility... ⏳
✅ Infrastructure compatible with template
@@ -434,10 +394,10 @@ Merging configuration... ⏳
✅ Configuration merged
Files created/updated:
- • workspace/infra/my-production/taskservs/kubernetes.k (updated)
- • workspace/infra/my-production/policies/security.k (created)
- • workspace/infra/my-production/policies/network.k (created)
- • workspace/infra/my-production/monitoring/prometheus.k (created)
+ • workspace/infra/my-production/taskservs/kubernetes.ncl (updated)
+ • workspace/infra/my-production/policies/security.ncl (created)
+ • workspace/infra/my-production/policies/network.ncl (created)
+ • workspace/infra/my-production/monitoring/prometheus.ncl (created)
🎉 Template applied successfully!
@@ -445,19 +405,13 @@ Next steps:
1. Review generated configuration
2. Adjust as needed
3. Deploy: provisioning t create kubernetes --infra my-production
-```plaintext
-
-### Validate Template Usage
-
-```bash
-# Validate template was applied correctly
+
+
+# Validate template was applied correctly
provisioning tpl validate my-production
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-✅ Template Validation: my-production
+
+Expected Output:
+✅ Template Validation: my-production
Templates Applied:
✅ production-kubernetes (v1.0.0)
@@ -475,89 +429,77 @@ Compliance:
✅ Monitoring enabled
Status: ✅ Valid
-```plaintext
-
-## Creating Custom Templates
-
-### Step 1: Create Template Structure
-
-```bash
-# Create custom template directory
+
+
+
+# Create custom template directory
mkdir -p provisioning/workspace/templates/my-custom-template
-```plaintext
+
+
+File: provisioning/workspace/templates/my-custom-template/main.ncl
+# Custom Kubernetes template with specific settings
+let kubernetes_config = {
+ # Version
+ version = "1.30.0",
-### Step 2: Write Template Configuration
+ # Custom feature gates
+ feature_gates = {
+ "GracefulNodeShutdown" = true,
+ "SeccompDefault" = true,
+ "StatefulSetAutoDeletePVC" = true,
+ },
-**File: `provisioning/workspace/templates/my-custom-template/config.k`**
+ # Custom kubelet configuration
+ kubelet_config = {
+ max_pods = 110,
+ pod_pids_limit = 4096,
+ container_log_max_size = "10Mi",
+ container_log_max_files = 5,
+ },
-```kcl
-# Custom Kubernetes template with specific settings
+ # Custom API server flags
+ apiserver_extra_args = {
+ "enable-admission-plugins" = "NodeRestriction,PodSecurity,LimitRanger",
+ "audit-log-maxage" = "30",
+ "audit-log-maxbackup" = "10",
+ },
-kubernetes_config = {
- # Version
- version = "1.30.0"
+ # Custom scheduler configuration
+ scheduler_config = {
+ profiles = [
+ {
+ name = "high-availability",
+ plugins = {
+ score = {
+ enabled = [
+ {name = "NodeResourcesBalancedAllocation", weight = 2},
+ {name = "NodeResourcesLeastAllocated", weight = 1},
+ ],
+ },
+ },
+ },
+ ],
+ },
- # Custom feature gates
- feature_gates = {
- "GracefulNodeShutdown" = True
- "SeccompDefault" = True
- "StatefulSetAutoDeletePVC" = True
- }
+ # Network configuration
+ network = {
+ service_cidr = "10.96.0.0/12",
+ pod_cidr = "10.244.0.0/16",
+ dns_domain = "cluster.local",
+ },
- # Custom kubelet configuration
- kubelet_config = {
- max_pods = 110
- pod_pids_limit = 4096
- container_log_max_size = "10Mi"
- container_log_max_files = 5
- }
-
- # Custom API server flags
- apiserver_extra_args = {
- "enable-admission-plugins" = "NodeRestriction,PodSecurity,LimitRanger"
- "audit-log-maxage" = "30"
- "audit-log-maxbackup" = "10"
- }
-
- # Custom scheduler configuration
- scheduler_config = {
- profiles = [
- {
- name = "high-availability"
- plugins = {
- score = {
- enabled = [
- {name = "NodeResourcesBalancedAllocation", weight = 2}
- {name = "NodeResourcesLeastAllocated", weight = 1}
- ]
- }
- }
- }
- ]
- }
-
- # Network configuration
- network = {
- service_cidr = "10.96.0.0/12"
- pod_cidr = "10.244.0.0/16"
- dns_domain = "cluster.local"
- }
-
- # Security configuration
- security = {
- pod_security_standard = "restricted"
- encrypt_etcd = True
- rotate_certificates = True
- }
-}
-```plaintext
-
-### Step 3: Create Template Metadata
-
-**File: `provisioning/workspace/templates/my-custom-template/metadata.toml`**
-
-```toml
-[template]
+ # Security configuration
+ security = {
+ pod_security_standard = "restricted",
+ encrypt_etcd = true,
+ rotate_certificates = true,
+ },
+} in
+kubernetes_config
+
+
+File: provisioning/workspace/templates/my-custom-template/metadata.toml
+[template]
name = "my-custom-template"
version = "1.0.0"
description = "Custom Kubernetes template with enhanced security"
@@ -572,12 +514,9 @@ required_taskservs = ["containerd", "cilium"]
[tags]
environment = ["production", "staging"]
features = ["security", "monitoring", "high-availability"]
-```plaintext
-
-### Step 4: Test Custom Template
-
-```bash
-# List templates (should include your custom template)
+
+
+# List templates (should include your custom template)
provisioning tpl list
# Show your template
@@ -585,131 +524,104 @@ provisioning tpl show my-custom-template
# Apply to test infrastructure
provisioning tpl apply my-custom-template my-test
-```plaintext
-
-## Configuration Inheritance Examples
-
-### Example 1: Override Single Value
-
-**Core Layer** (`provisioning/extensions/taskservs/postgres/config.k`):
-
-```kcl
-postgres_config = {
- version = "15.5"
- port = 5432
- max_connections = 100
-}
-```plaintext
-
-**Infrastructure Layer** (`workspace/infra/my-production/taskservs/postgres.k`):
-
-```kcl
-postgres_config = {
- max_connections = 500 # Override only max_connections
-}
-```plaintext
-
-**Result** (after layer resolution):
-
-```kcl
-postgres_config = {
- version = "15.5" # From Core
- port = 5432 # From Core
- max_connections = 500 # From Infrastructure (overridden)
-}
-```plaintext
-
-### Example 2: Add Custom Configuration
-
-**Workspace Layer** (`provisioning/workspace/templates/production-postgres.k`):
-
-```kcl
-postgres_config = {
- replication = {
- enabled = True
- replicas = 2
- sync_mode = "async"
- }
-}
-```plaintext
-
-**Infrastructure Layer** (`workspace/infra/my-production/taskservs/postgres.k`):
-
-```kcl
-postgres_config = {
- replication = {
- sync_mode = "sync" # Override sync mode
- }
- custom_extensions = ["pgvector", "timescaledb"] # Add custom config
-}
-```plaintext
-
-**Result**:
-
-```kcl
-postgres_config = {
- version = "15.5" # From Core
- port = 5432 # From Core
- max_connections = 100 # From Core
- replication = {
- enabled = True # From Workspace
- replicas = 2 # From Workspace
- sync_mode = "sync" # From Infrastructure (overridden)
- }
- custom_extensions = ["pgvector", "timescaledb"] # From Infrastructure (added)
-}
-```plaintext
-
-### Example 3: Environment-Specific Configuration
-
-**Workspace Layer** (`provisioning/workspace/templates/base-kubernetes.k`):
-
-```kcl
-kubernetes_config = {
- version = "1.30.0"
- control_plane_count = 3
- worker_count = 5
- resources = {
- control_plane = {cpu = "4", memory = "8Gi"}
- worker = {cpu = "8", memory = "16Gi"}
- }
-}
-```plaintext
-
-**Development Infrastructure** (`workspace/infra/my-dev/taskservs/kubernetes.k`):
-
-```kcl
-kubernetes_config = {
- control_plane_count = 1 # Smaller for dev
- worker_count = 2
- resources = {
- control_plane = {cpu = "2", memory = "4Gi"}
- worker = {cpu = "2", memory = "4Gi"}
- }
-}
-```plaintext
-
-**Production Infrastructure** (`workspace/infra/my-prod/taskservs/kubernetes.k`):
-
-```kcl
-kubernetes_config = {
- control_plane_count = 5 # Larger for prod
- worker_count = 10
- resources = {
- control_plane = {cpu = "8", memory = "16Gi"}
- worker = {cpu = "16", memory = "32Gi"}
- }
-}
-```plaintext
-
-## Advanced Customization Patterns
-
-### Pattern 1: Multi-Environment Setup
-
-Create different configurations for each environment:
-
-```bash
-# Create environments
+
+
+
+Core Layer (provisioning/extensions/taskservs/postgres/main.ncl):
+let postgres_config = {
+ version = "15.5",
+ port = 5432,
+ max_connections = 100,
+} in
+postgres_config
+
+Infrastructure Layer (workspace/infra/my-production/taskservs/postgres.ncl):
+let postgres_config = {
+ max_connections = 500, # Override only max_connections
+} in
+postgres_config
+
+Result (after layer resolution):
+let postgres_config = {
+ version = "15.5", # From Core
+ port = 5432, # From Core
+ max_connections = 500, # From Infrastructure (overridden)
+} in
+postgres_config
+
+
+Workspace Layer (provisioning/workspace/templates/production-postgres.ncl):
+let postgres_config = {
+ replication = {
+ enabled = true,
+ replicas = 2,
+ sync_mode = "async",
+ },
+} in
+postgres_config
+
+Infrastructure Layer (workspace/infra/my-production/taskservs/postgres.ncl):
+let postgres_config = {
+ replication = {
+ sync_mode = "sync", # Override sync mode
+ },
+ custom_extensions = ["pgvector", "timescaledb"], # Add custom config
+} in
+postgres_config
+
+Result:
+let postgres_config = {
+ version = "15.5", # From Core
+ port = 5432, # From Core
+ max_connections = 100, # From Core
+ replication = {
+ enabled = true, # From Workspace
+ replicas = 2, # From Workspace
+ sync_mode = "sync", # From Infrastructure (overridden)
+ },
+ custom_extensions = ["pgvector", "timescaledb"], # From Infrastructure (added)
+} in
+postgres_config
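+The deep merge in this result matches Nickel's recursive record merging; a minimal sketch using the fields from the example (illustrative only):
+
+```nickel
+let workspace = { replication = { enabled = true, sync_mode | default = "async" } } in
+let infra = { replication = { sync_mode = "sync" }, custom_extensions = ["pgvector"] } in
+# Nested records merge field by field: enabled is kept, sync_mode is
+# overridden, and custom_extensions is added alongside replication.
+workspace & infra
+```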
+
+
+Workspace Layer (provisioning/workspace/templates/base-kubernetes.ncl):
+let kubernetes_config = {
+ version = "1.30.0",
+ control_plane_count = 3,
+ worker_count = 5,
+ resources = {
+ control_plane = {cpu = "4", memory = "8Gi"},
+ worker = {cpu = "8", memory = "16Gi"},
+ },
+} in
+kubernetes_config
+
+Development Infrastructure (workspace/infra/my-dev/taskservs/kubernetes.ncl):
+let kubernetes_config = {
+ control_plane_count = 1, # Smaller for dev
+ worker_count = 2,
+ resources = {
+ control_plane = {cpu = "2", memory = "4Gi"},
+ worker = {cpu = "2", memory = "4Gi"},
+ },
+} in
+kubernetes_config
+
+Production Infrastructure (workspace/infra/my-prod/taskservs/kubernetes.ncl):
+let kubernetes_config = {
+ control_plane_count = 5, # Larger for prod
+ worker_count = 10,
+ resources = {
+ control_plane = {cpu = "8", memory = "16Gi"},
+ worker = {cpu = "16", memory = "32Gi"},
+ },
+} in
+kubernetes_config
+
+
+
+Create different configurations for each environment:
+# Create environments
provisioning ws init my-app-dev
provisioning ws init my-app-staging
provisioning ws init my-app-prod
@@ -723,97 +635,80 @@ provisioning tpl apply production-kubernetes my-app-prod
# Edit: workspace/infra/my-app-dev/...
# Edit: workspace/infra/my-app-staging/...
# Edit: workspace/infra/my-app-prod/...
-```plaintext
-
-### Pattern 2: Shared Configuration Library
-
-Create reusable configuration fragments:
-
-**File: `provisioning/workspace/templates/shared/security-policies.k`**
-
-```kcl
-security_policies = {
- pod_security = {
- enforce = "restricted"
- audit = "restricted"
- warn = "restricted"
- }
- network_policies = [
+
+
+Create reusable configuration fragments:
+File: provisioning/workspace/templates/shared/security-policies.ncl
+let security_policies = {
+ pod_security = {
+ enforce = "restricted",
+ audit = "restricted",
+ warn = "restricted",
+ },
+ network_policies = [
+ {
+ name = "deny-all",
+ pod_selector = {},
+ policy_types = ["Ingress", "Egress"],
+ },
+ {
+ name = "allow-dns",
+ pod_selector = {},
+ egress = [
{
- name = "deny-all"
- pod_selector = {}
- policy_types = ["Ingress", "Egress"]
+ to = [{namespace_selector = {name = "kube-system"}}],
+ ports = [{protocol = "UDP", port = 53}],
},
- {
- name = "allow-dns"
- pod_selector = {}
- egress = [
- {
- to = [{namespace_selector = {name = "kube-system"}}]
- ports = [{protocol = "UDP", port = 53}]
- }
- ]
- }
- ]
-}
-```plaintext
+ ],
+ },
+ ],
+} in
+security_policies
+
+Import in your infrastructure:
+let security_policies = (import "../../../provisioning/workspace/templates/shared/security-policies.ncl") in
-Import in your infrastructure:
+let kubernetes_config = {
+ version = "1.30.0",
+ image_repo = "k8s.gcr.io",
+ security = security_policies, # Import shared policies
+} in
+kubernetes_config
+
+
+Use Nickel features for dynamic configuration:
+# Calculate resources based on server count
+let server_count = 5 in
+let replicas_per_server = 2 in
+let total_replicas = server_count * replicas_per_server in
-```kcl
-import "../../../provisioning/workspace/templates/shared/security-policies.k"
+let postgres_config = {
+ version = "16.1",
+ max_connections = total_replicas * 50, # Dynamic calculation
+  shared_buffers = "%{std.to_string (total_replicas * 128)}MB", # computed: 1280 MB
+} in
+postgres_config
+
+
+let environment = "production" in # or "development"
-kubernetes_config = {
- version = "1.30.0"
- # ... other config
- security = security_policies # Import shared policies
-}
-```plaintext
-
-### Pattern 3: Dynamic Configuration
-
-Use KCL features for dynamic configuration:
-
-```kcl
-# Calculate resources based on server count
-server_count = 5
-replicas_per_server = 2
-total_replicas = server_count * replicas_per_server
-
-postgres_config = {
- version = "16.1"
- max_connections = total_replicas * 50 # Dynamic calculation
- shared_buffers = "${total_replicas * 128}MB"
-}
-```plaintext
-
-### Pattern 4: Conditional Configuration
-
-```kcl
-environment = "production" # or "development"
-
-kubernetes_config = {
- version = "1.30.0"
- control_plane_count = if environment == "production" { 3 } else { 1 }
- worker_count = if environment == "production" { 5 } else { 2 }
- monitoring = {
- enabled = environment == "production"
- retention = if environment == "production" { "30d" } else { "7d" }
- }
-}
-```plaintext
-
-## Layer Statistics
-
-```bash
-# Show layer system statistics
+let kubernetes_config = {
+ version = "1.30.0",
+ control_plane_count = if environment == "production" then 3 else 1,
+ worker_count = if environment == "production" then 5 else 2,
+ monitoring = {
+ enabled = environment == "production",
+ retention = if environment == "production" then "30d" else "7d",
+ },
+} in
+kubernetes_config
+
+
+# Show layer system statistics
provisioning lyr stats
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-📊 Layer System Statistics:
+
+Expected Output:
+📊 Layer System Statistics:
Infrastructure Layer:
• Projects: 3
@@ -831,17 +726,13 @@ Core Layer:
• Clusters: 3
Resolution Performance:
- • Average resolution time: 45ms
+ • Average resolution time: 45 ms
• Cache hit rate: 87%
• Total resolutions: 1,250
-```plaintext
-
-## Customization Workflow
-
-### Complete Customization Example
-
-```bash
-# 1. Create new infrastructure
+
+
+
+# 1. Create new infrastructure
provisioning ws init my-custom-app
# 2. Understand layer system
@@ -857,7 +748,7 @@ provisioning tpl apply production-kubernetes my-custom-app
provisioning lyr show my-custom-app
# 6. Customize (edit files)
-provisioning sops workspace/infra/my-custom-app/taskservs/kubernetes.k
+provisioning sops workspace/infra/my-custom-app/taskservs/kubernetes.ncl
# 7. Test layer resolution
provisioning lyr test kubernetes my-custom-app
@@ -870,41 +761,32 @@ provisioning val config --infra my-custom-app
provisioning s create --infra my-custom-app --check
provisioning s create --infra my-custom-app
provisioning t create kubernetes --infra my-custom-app
-```plaintext
-
-## Best Practices
-
-### 1. Use Layers Correctly
-
-- **Core Layer**: Only modify for system-wide changes
-- **Workspace Layer**: Use for organization-wide templates
-- **Infrastructure Layer**: Use for project-specific customizations
-
-### 2. Template Organization
-
-```plaintext
-provisioning/workspace/templates/
+
+
+
+
+Core Layer: Only modify for system-wide changes
+Workspace Layer: Use for organization-wide templates
+Infrastructure Layer: Use for project-specific customizations
+
+
+provisioning/workspace/templates/
├── shared/ # Shared configuration fragments
-│ ├── security-policies.k
-│ ├── network-policies.k
-│ └── monitoring.k
+│ ├── security-policies.ncl
+│ ├── network-policies.ncl
+│ └── monitoring.ncl
├── production/ # Production templates
-│ ├── kubernetes.k
-│ ├── postgres.k
-│ └── redis.k
+│ ├── kubernetes.ncl
+│ ├── postgres.ncl
+│ └── redis.ncl
└── development/ # Development templates
- ├── kubernetes.k
- └── postgres.k
-```plaintext
-
-### 3. Documentation
-
-Document your customizations:
-
-**File: `workspace/infra/my-production/README.md`**
-
-```markdown
-# My Production Infrastructure
+ ├── kubernetes.ncl
+ └── postgres.ncl
+
+
+Document your customizations:
+File: workspace/infra/my-production/README.md
+# My Production Infrastructure
## Customizations
@@ -914,31 +796,23 @@ Document your customizations:
## Layer Overrides
-- `taskservs/kubernetes.k`: Control plane count (3 → 5)
-- `taskservs/postgres.k`: Replication mode (async → sync)
-- `network/cilium.k`: Routing mode (tunnel → native)
-```plaintext
-
-### 4. Version Control
-
-Keep templates and configurations in version control:
-
-```bash
-cd provisioning/workspace/templates/
+- `taskservs/kubernetes.ncl`: Control plane count (3 → 5)
+- `taskservs/postgres.ncl`: Replication mode (async → sync)
+- `network/cilium.ncl`: Routing mode (tunnel → native)
+
+
+Keep templates and configurations in version control:
+cd provisioning/workspace/templates/
git add .
git commit -m "Add production Kubernetes template with enhanced security"
cd workspace/infra/my-production/
git add .
git commit -m "Configure production environment for my-production"
-```plaintext
-
-## Troubleshooting Customizations
-
-### Issue: Configuration not applied
-
-```bash
-# Check layer resolution
+
+
+
+# Check layer resolution
provisioning lyr show my-production
# Verify file exists
@@ -946,22 +820,16 @@ ls -la workspace/infra/my-production/taskservs/
# Test specific resolution
provisioning lyr test kubernetes my-production
-```plaintext
-
-### Issue: Conflicting configurations
-
-```bash
-# Validate configuration
+
+
+# Validate configuration
provisioning val config --infra my-production
# Show configuration merge result
provisioning show config kubernetes --infra my-production
-```plaintext
-
-### Issue: Template not found
-
-```bash
-# List available templates
+
+
+# List available templates
provisioning tpl list
# Check template path
@@ -969,19 +837,16 @@ ls -la provisioning/workspace/templates/
# Refresh template cache
provisioning tpl refresh
-```plaintext
-
-## Next Steps
-
-- **[From Scratch Guide](from-scratch.md)** - Deploy new infrastructure
-- **[Update Guide](update-infrastructure.md)** - Update existing infrastructure
-- **[Workflow Guide](../development/workflow.md)** - Automate with workflows
-- **[KCL Guide](../development/KCL_MODULE_GUIDE.md)** - Learn KCL configuration language
-
-## Quick Reference
-
-```bash
-# Layer system
+
+
+
+
+# Layer system
provisioning lyr explain # Explain layers
provisioning lyr show <project> # Show layer resolution
provisioning lyr test <module> <project> # Test resolution
@@ -993,12 +858,9 @@ provisioning tpl list --type <type> # Filter by type
provisioning tpl show <template> # Show template details
provisioning tpl apply <template> <project> # Apply template
provisioning tpl validate <project> # Validate template usage
-```plaintext
-
----
-
-*This guide is part of the provisioning project documentation. Last updated: 2025-09-30*
+
+This guide is part of the provisioning project documentation. Last updated: 2025-09-30
@@ -1008,7 +870,7 @@ provisioning tpl validate <project> # Validate template usage
-
+
@@ -1022,7 +884,7 @@ provisioning tpl validate <project> # Validate template usage
-
+
diff --git a/docs/book/guides/from-scratch.html b/docs/book/guides/from-scratch.html
index 6a8ce88..3511454 100644
--- a/docs/book/guides/from-scratch.html
+++ b/docs/book/guides/from-scratch.html
@@ -204,34 +204,30 @@
✅ Operating System: macOS, Linux, or Windows (WSL2 recommended)
✅ Administrator Access: Ability to install software and configure system
✅ Internet Connection: For downloading dependencies and accessing cloud providers
-✅ Cloud Provider Credentials : UpCloud, AWS, or local development environment
+✅ Cloud Provider Credentials: UpCloud, Hetzner, AWS, or local development environment
✅ Basic Terminal Knowledge: Comfortable running shell commands
-✅ Text Editor : vim, nano, VSCode, or your preferred editor
+✅ Text Editor: vim, nano, Zed, VSCode, or your preferred editor
CPU: 2+ cores
-RAM : 8GB minimum, 16GB recommended
-Disk : 20GB free space minimum
+RAM: 8 GB minimum, 16 GB recommended
+Disk: 20 GB free space minimum
-Nushell 0.107.1+ is the primary shell and scripting language for the provisioning platform.
+Nushell 0.109.1+ is the primary shell and scripting language for the provisioning platform.
# Install Nushell
brew install nushell
# Verify installation
nu --version
-# Expected: 0.107.1 or higher
-```plaintext
-
-### Linux (via Package Manager)
-
-**Ubuntu/Debian:**
-
-```bash
-# Add Nushell repository
+# Expected: 0.109.1 or higher
+
+
+Ubuntu/Debian:
+# Add Nushell repository
curl -fsSL https://apt.fury.io/nushell/gpg.key | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/fury-nushell.gpg
echo "deb https://apt.fury.io/nushell/ /" | sudo tee /etc/apt/sources.list.d/fury.list
# Install Nushell
@@ -240,26 +236,17 @@ sudo apt install nushell
# Verify installation
nu --version
-```plaintext
-
-**Fedora:**
-
-```bash
-sudo dnf install nushell
+
+Fedora:
+sudo dnf install nushell
nu --version
-```plaintext
-
-**Arch Linux:**
-
-```bash
-sudo pacman -S nushell
+
+Arch Linux:
+sudo pacman -S nushell
nu --version
-```plaintext
-
-### Linux/macOS (via Cargo)
-
-```bash
-# Install Rust (if not already installed)
+
+
+# Install Rust (if not already installed)
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source $HOME/.cargo/env
@@ -268,53 +255,40 @@ cargo install nu --locked
# Verify installation
nu --version
-```plaintext
-
-### Windows (via Winget)
-
-```powershell
-# Install Nushell
+
+
+# Install Nushell
winget install nushell
# Verify installation
nu --version
-```plaintext
-
-### Configure Nushell
-
-```bash
-# Start Nushell
+
+
+# Start Nushell
nu
# Configure (creates default config if not exists)
config nu
-```plaintext
-
----
-
-## Step 2: Install Nushell Plugins (Recommended)
-
-Native plugins provide **10-50x performance improvement** for authentication, KMS, and orchestrator operations.
-
-### Why Install Plugins?
-
-**Performance Gains:**
-
-- 🚀 **KMS operations**: ~5ms vs ~50ms (10x faster)
-- 🚀 **Orchestrator queries**: ~1ms vs ~30ms (30x faster)
-- 🚀 **Batch encryption**: 100 files in 0.5s vs 5s (10x faster)
-
-**Benefits:**
-
-- ✅ Native Nushell integration (pipelines, data structures)
-- ✅ OS keyring for secure token storage
-- ✅ Offline capability (Age encryption, local orchestrator)
-- ✅ Graceful fallback to HTTP if not installed
-
-### Prerequisites for Building Plugins
-
-```bash
-# Install Rust toolchain (if not already installed)
+
+
+
+Native plugins provide 10-50x performance improvement for authentication, KMS, and orchestrator operations.
+
+Performance Gains:
+
+🚀 KMS operations: ~5 ms vs ~50 ms (10x faster)
+🚀 Orchestrator queries: ~1 ms vs ~30 ms (30x faster)
+🚀 Batch encryption: 100 files in 0.5s vs 5s (10x faster)
+
+Benefits:
+
+✅ Native Nushell integration (pipelines, data structures)
+✅ OS keyring for secure token storage
+✅ Offline capability (Age encryption, local orchestrator)
+✅ Graceful fallback to HTTP if not installed
+
+
+# Install Rust toolchain (if not already installed)
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source $HOME/.cargo/env
rustc --version
@@ -327,12 +301,9 @@ sudo dnf install openssl-devel # Fedora
# Linux only: Install keyring service (required for auth plugin)
sudo apt install gnome-keyring # Ubuntu/Debian (GNOME)
sudo apt install kwalletmanager # Ubuntu/Debian (KDE)
-```plaintext
-
-### Build Plugins
-
-```bash
-# Navigate to plugins directory
+
+
+# Navigate to plugins directory
cd provisioning/core/plugins/nushell-plugins
# Build all three plugins in release mode (optimized)
@@ -343,14 +314,10 @@ cargo build --release --all
# Compiling nu_plugin_kms v0.1.0
# Compiling nu_plugin_orchestrator v0.1.0
# Finished release [optimized] target(s) in 2m 15s
-```plaintext
-
-**Build time**: ~2-5 minutes depending on hardware
-
-### Register Plugins with Nushell
-
-```bash
-# Register all three plugins (full paths recommended)
+
+Build time: ~2-5 minutes depending on hardware
+
+# Register all three plugins (full paths recommended)
plugin add $PWD/target/release/nu_plugin_auth
plugin add $PWD/target/release/nu_plugin_kms
plugin add $PWD/target/release/nu_plugin_orchestrator
@@ -359,12 +326,9 @@ plugin add $PWD/target/release/nu_plugin_orchestrator
plugin add target/release/nu_plugin_auth
plugin add target/release/nu_plugin_kms
plugin add target/release/nu_plugin_orchestrator
-```plaintext
-
-### Verify Plugin Installation
-
-```bash
-# List registered plugins
+
+
+# List registered plugins
plugin list | where name =~ "auth|kms|orch"
# Expected output:
@@ -380,12 +344,9 @@ plugin list | where name =~ "auth|kms|orch"
auth --help # Should show auth commands
kms --help # Should show kms commands
orch --help # Should show orch commands
-```plaintext
-
-### Configure Plugin Environments
-
-```bash
-# Add to ~/.config/nushell/env.nu
+
+
+# Add to ~/.config/nushell/env.nu
$env.CONTROL_CENTER_URL = "http://localhost:3000"
$env.RUSTYVAULT_ADDR = "http://localhost:8200"
$env.RUSTYVAULT_TOKEN = "your-vault-token-here"
@@ -394,12 +355,9 @@ $env.ORCHESTRATOR_DATA_DIR = "provisioning/platform/orchestrator/data"
# For Age encryption (local development)
$env.AGE_IDENTITY = $"($env.HOME)/.age/key.txt"
$env.AGE_RECIPIENT = "age1xxxxxxxxx" # Replace with your public key
-```plaintext
-
-### Test Plugins (Quick Smoke Test)
-
-```bash
-# Test KMS plugin (requires backend configured)
+
+
+# Test KMS plugin (requires backend configured)
kms status
# Expected: { backend: "rustyvault", status: "healthy", ... }
# Or: Error if backend not configured (OK for now)
@@ -413,50 +371,25 @@ orch status
auth verify
# Expected: { active: false }
# Or: Error if control center not running (OK for now)
-```plaintext
-
-**Note**: It's OK if plugins show errors at this stage. We'll configure backends and services later.
-
-### Skip Plugins? (Not Recommended)
-
-If you want to skip plugin installation for now:
-
-- ✅ All features work via HTTP API (slower but functional)
-- ⚠️ You'll miss 10-50x performance improvements
-- ⚠️ No offline capability for KMS/orchestrator
-- ℹ️ You can install plugins later anytime
-
-To use HTTP fallback:
-
-```bash
-# System automatically uses HTTP if plugins not available
+
+Note: It’s OK if plugins show errors at this stage. We’ll configure backends and services later.
+
+If you want to skip plugin installation for now:
+
+✅ All features work via HTTP API (slower but functional)
+⚠️ You’ll miss 10-50x performance improvements
+⚠️ No offline capability for KMS/orchestrator
+ℹ️ You can install plugins later anytime
+
+To use HTTP fallback:
+# System automatically uses HTTP if plugins not available
# No configuration changes needed
-```plaintext
-
----
-
-## Step 3: Install Required Tools
-
-### Essential Tools
-
-**KCL (Configuration Language)**
-
-```bash
-# macOS
-brew install kcl
-
-# Linux
-curl -fsSL https://kcl-lang.io/script/install.sh | /bin/bash
-
-# Verify
-kcl version
-# Expected: 0.11.2 or higher
-```plaintext
-
-**SOPS (Secrets Management)**
-
-```bash
-# macOS
+
+
+
+
+SOPS (Secrets Management)
+# macOS
brew install sops
# Linux
@@ -467,12 +400,9 @@ sudo chmod +x /usr/local/bin/sops
# Verify
sops --version
# Expected: 3.10.2 or higher
-```plaintext
-
-**Age (Encryption Tool)**
-
-```bash
-# macOS
+
+Age (Encryption Tool)
+# macOS
brew install age
# Linux
@@ -490,14 +420,10 @@ age --version
age-keygen -o ~/.age/key.txt
cat ~/.age/key.txt
# Save the public key (age1...) for later
-```plaintext
-
-### Optional but Recommended Tools
-
-**K9s (Kubernetes Management)**
-
-```bash
-# macOS
+
+
+K9s (Kubernetes Management)
+# macOS
brew install k9s
# Linux
@@ -506,12 +432,9 @@ curl -sS https://webinstall.dev/k9s | bash
# Verify
k9s version
# Expected: 0.50.6 or higher
-```plaintext
-
-**glow (Markdown Renderer)**
-
-```bash
-# macOS
+
+glow (Markdown Renderer)
+# macOS
brew install glow
# Linux
@@ -520,27 +443,19 @@ sudo dnf install glow # Fedora
# Verify
glow --version
-```plaintext
-
----
-
-## Step 4: Clone and Setup Project
-
-### Clone Repository
-
-```bash
-# Clone project
+
+
+
+
+# Clone project
git clone https://github.com/your-org/project-provisioning.git
cd project-provisioning
# Or if already cloned, update to latest
git pull origin main
-```plaintext
-
-### Add CLI to PATH (Optional)
-
-```bash
-# Add to ~/.bashrc or ~/.zshrc
+
+
+# Add to ~/.bashrc or ~/.zshrc
export PATH="$PATH:/Users/Akasha/project-provisioning/provisioning/core/cli"
# Or create symlink
@@ -549,18 +464,12 @@ sudo ln -s /Users/Akasha/project-provisioning/provisioning/core/cli/provisioning
# Verify
provisioning version
# Expected: 3.5.0
-```plaintext
-
----
-
-## Step 5: Initialize Workspace
-
-A workspace is a self-contained environment for managing infrastructure.
-
-### Create New Workspace
-
-```bash
-# Initialize new workspace
+
+
+
+A workspace is a self-contained environment for managing infrastructure.
+
+# Initialize new workspace
provisioning workspace init --name production
# Or use interactive mode
@@ -568,68 +477,53 @@ provisioning workspace init
# Name: production
# Description: Production infrastructure
# Provider: upcloud
-```plaintext
-
-**What this creates:**
-
-The new workspace initialization now generates **KCL (Kusion Configuration Language) configuration files** for type-safe, schema-validated infrastructure definitions:
-
-```plaintext
-workspace/
+
+What this creates:
+The new workspace initialization now generates Nickel configuration files for type-safe, schema-validated infrastructure definitions:
+workspace/
├── config/
-│ ├── provisioning.k # Main KCL configuration (schema-validated)
+│ ├── config.ncl # Master Nickel configuration (type-safe)
│ ├── providers/
│ │ └── upcloud.toml # Provider-specific settings
│ ├── platform/ # Platform service configs
│ └── kms.toml # Key management settings
-├── infra/ # Infrastructure definitions
-├── extensions/ # Custom modules
-└── runtime/ # Runtime data and state
-```plaintext
+├── infra/
+│ └── default/
+│ ├── main.ncl # Infrastructure entry point
+│ └── servers.ncl # Server definitions
+├── docs/ # Auto-generated guides
+└── workspace.nu # Workspace utility scripts
+
+
+The workspace configuration uses Nickel (type-safe, validated). This provides:
+
+✅ Type Safety: Schema validation catches errors at load time
+✅ Lazy Evaluation: Only computes what’s needed
+✅ Validation: Record merging, required fields, constraints
+✅ Documentation: Self-documenting with records
+
+Example Nickel config (config.ncl):
+{
+ workspace = {
+ name = "production",
+ version = "1.0.0",
+ created = "2025-12-03T14:30:00Z",
+ },
-### Workspace Configuration Format
+ paths = {
+ base = "/opt/workspaces/production",
+ infra = "/opt/workspaces/production/infra",
+ cache = "/opt/workspaces/production/.cache",
+ },
-The workspace configuration now uses **KCL (type-safe)** instead of YAML. This provides:
-
-- ✅ **Type Safety**: Schema validation catches errors at load time
-- ✅ **Immutability**: Enforces configuration immutability by default
-- ✅ **Validation**: Semantic versioning, required fields, value constraints
-- ✅ **Documentation**: Self-documenting with schema descriptions
-
-**Example KCL config** (`provisioning.k`):
-
-```kcl
-import provisioning.workspace_config as ws
-
-workspace_config = ws.WorkspaceConfig {
- workspace: {
- name: "production"
- version: "1.0.0"
- created: "2025-12-03T14:30:00Z"
- }
-
- paths: {
- base: "/opt/workspaces/production"
- infra: "/opt/workspaces/production/infra"
- cache: "/opt/workspaces/production/.cache"
- # ... other paths
- }
-
- providers: {
- active: ["upcloud"]
- default: "upcloud"
- }
-
- # ... other sections
+ providers = {
+ active = ["upcloud"],
+ default = "upcloud",
+ },
}
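The type safety described above comes from Nickel contracts. As a rough sketch (the contract below is illustrative, not the workspace's actual schema; field names mirror the example config), a contract attached to the `workspace` record makes a missing or wrongly typed field fail at evaluation time:

```nickel
# Hypothetical contract sketch: bad values fail when the config is evaluated.
let WorkspaceC = {
  name | String,
  version | String,
  created | String,
} in
{
  workspace | WorkspaceC = {
    name = "production",
    version = "1.0.0",
    created = "2025-12-03T14:30:00Z",
  },
}
```

Evaluating this with `nickel export` would reject, for example, `version = 1` (a number where the contract requires a `String`).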
-```plaintext
-
-**Backward Compatibility**: If you have existing YAML workspace configs (`provisioning.yaml`), they continue to work. The config loader checks for KCL files first, then falls back to YAML.
-
-### Verify Workspace
-
-```bash
-# Show workspace info
+
+
+# Show workspace info
provisioning workspace info
# List all workspaces
@@ -638,14 +532,10 @@ provisioning workspace list
# Show active workspace
provisioning workspace active
# Expected: production
-```plaintext
-
-### View and Validate Workspace Configuration
-
-Now you can inspect and validate your KCL workspace configuration:
-
-```bash
-# View complete workspace configuration
+
+
+Now you can inspect and validate your Nickel workspace configuration:
+# View complete workspace configuration
provisioning workspace config show
# Show specific workspace
@@ -654,7 +544,7 @@ provisioning workspace config show production
# View configuration in different formats
provisioning workspace config show --format=json
provisioning workspace config show --format=yaml
-provisioning workspace config show --format=kcl # Raw KCL file
+provisioning workspace config show --format=nickel # Raw Nickel file
# Validate workspace configuration
provisioning workspace config validate
@@ -662,48 +552,35 @@ provisioning workspace config validate
# Show configuration hierarchy (priority order)
provisioning workspace config hierarchy
-```plaintext
-
-**Configuration Validation**: The KCL schema automatically validates:
-
-- ✅ Semantic versioning format (e.g., "1.0.0")
-- ✅ Required sections present (workspace, paths, provisioning, etc.)
-- ✅ Valid file paths and types
-- ✅ Provider configuration exists for active providers
-- ✅ KMS and SOPS settings properly configured
-
----
-
-## Step 6: Configure Environment
-
-### Set Provider Credentials
-
-**UpCloud Provider:**
-
-```bash
-# Create provider config
+
+Configuration Validation: The Nickel schema automatically validates:
+
+✅ Semantic versioning format (for example, “1.0.0”)
+✅ Required sections present (workspace, paths, provisioning, etc.)
+✅ Valid file paths and types
+✅ Provider configuration exists for active providers
+✅ KMS and SOPS settings properly configured
+
+
+
+
+UpCloud Provider:
+# Create provider config
vim workspace/config/providers/upcloud.toml
-```plaintext
-
-```toml
-[upcloud]
+
+[upcloud]
username = "your-upcloud-username"
password = "your-upcloud-password" # Will be encrypted
# Default settings
default_zone = "de-fra1"
-default_plan = "2xCPU-4GB"
-```plaintext
-
-**AWS Provider:**
-
-```bash
-# Create AWS config
+default_plan = "2xCPU-4GB"
+
+AWS Provider:
+# Create AWS config
vim workspace/config/providers/aws.toml
-```plaintext
-
-```toml
-[aws]
+
+[aws]
region = "us-east-1"
access_key_id = "AKIAXXXXX"
secret_access_key = "xxxxx" # Will be encrypted
@@ -711,12 +588,9 @@ secret_access_key = "xxxxx" # Will be encrypted
# Default settings
default_instance_type = "t3.medium"
default_region = "us-east-1"
-```plaintext
-
-### Encrypt Sensitive Data
-
-```bash
-# Generate Age key if not done already
+
+
+# Generate Age key if not done already
age-keygen -o ~/.age/key.txt
# Encrypt provider configs
@@ -729,17 +603,12 @@ sops --encrypt --age $(cat ~/.age/key.txt | grep "public key:" | cut -d: -f2) \
# Remove plaintext
rm workspace/config/providers/upcloud.toml
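The encrypt step above needs the age recipient (the `age1...` string) from the keyfile. A small helper sketch that pulls it out regardless of the surrounding comment text; `age_pubkey` is a hypothetical name, not part of the CLI:

```shell
# Hypothetical helper: extract the age recipient from an age-keygen keyfile.
# age-keygen writes a comment line of the form "# public key: age1...".
age_pubkey() {
  grep -o 'age1[0-9a-z]*' "$1" | head -n1
}
```

Usage would look like `sops --encrypt --age "$(age_pubkey ~/.age/key.txt)" workspace/config/providers/upcloud.toml`.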
-```plaintext
-
-### Configure Local Overrides
-
-```bash
-# Edit user-specific settings
+
+
+# Edit user-specific settings
vim workspace/config/local-overrides.toml
-```plaintext
-
-```toml
-[user]
+
+[user]
name = "admin"
email = "admin@example.com"
@@ -754,16 +623,11 @@ use_curl = true # Use curl instead of ureq
[paths]
ssh_key = "~/.ssh/id_ed25519"
-```plaintext
-
----
-
-## Step 7: Discover and Load Modules
-
-### Discover Available Modules
-
-```bash
-# Discover task services
+
+
+
+
+# Discover task services
provisioning module discover taskserv
# Shows: kubernetes, containerd, etcd, cilium, helm, etc.
@@ -774,12 +638,9 @@ provisioning module discover provider
# Discover clusters
provisioning module discover cluster
# Shows: buildkit, registry, monitoring, etc.
-```plaintext
-
-### Load Modules into Workspace
-
-```bash
-# Load Kubernetes taskserv
+
+
+# Load Kubernetes taskserv
provisioning module load taskserv production kubernetes
# Load multiple modules
@@ -791,16 +652,11 @@ provisioning module load cluster production buildkit
# Verify loaded modules
provisioning module list taskserv production
provisioning module list cluster production
-```plaintext
-
----
-
-## Step 8: Validate Configuration
-
-Before deploying, validate all configuration:
-
-```bash
-# Validate workspace configuration
+
+
+
+Before deploying, validate all configuration:
+# Validate workspace configuration
provisioning workspace validate
# Validate infrastructure configuration
@@ -814,47 +670,35 @@ provisioning env
# Show all configuration and environment
provisioning allenv
-```plaintext
-
-**Expected output:**
-
-```plaintext
-✓ Configuration valid
+
+Expected output:
+✓ Configuration valid
✓ Provider credentials configured
✓ Workspace initialized
✓ Modules loaded: 3 taskservs, 1 cluster
✓ SSH key configured
✓ Age encryption key available
-```plaintext
-
-**Fix any errors** before proceeding to deployment.
-
----
-
-## Step 9: Deploy Servers
-
-### Preview Server Creation (Dry Run)
-
-```bash
-# Check what would be created (no actual changes)
+
+Fix any errors before proceeding to deployment.
+
+
+
+# Check what would be created (no actual changes)
provisioning server create --infra production --check
# With debug output for details
provisioning server create --infra production --check --debug
-```plaintext
-
-**Review the output:**
-
-- Server names and configurations
-- Zones and regions
-- CPU, memory, disk specifications
-- Estimated costs
-- Network settings
-
-### Create Servers
-
-```bash
-# Create servers (with confirmation prompt)
+
+Review the output:
+
+Server names and configurations
+Zones and regions
+CPU, memory, disk specifications
+Estimated costs
+Network settings
+
+
+# Create servers (with confirmation prompt)
provisioning server create --infra production
# Or auto-confirm (skip prompt)
@@ -862,16 +706,13 @@ provisioning server create --infra production --yes
# Wait for completion
provisioning server create --infra production --wait
-```plaintext
+
+Expected output:
+Creating servers for infrastructure: production
-**Expected output:**
-
-```plaintext
-Creating servers for infrastructure: production
-
- ● Creating server: k8s-master-01 (de-fra1, 4xCPU-8GB)
- ● Creating server: k8s-worker-01 (de-fra1, 4xCPU-8GB)
- ● Creating server: k8s-worker-02 (de-fra1, 4xCPU-8GB)
+ ● Creating server: k8s-master-01 (de-fra1, 4xCPU-8GB)
+ ● Creating server: k8s-worker-01 (de-fra1, 4xCPU-8GB)
+ ● Creating server: k8s-worker-02 (de-fra1, 4xCPU-8GB)
✓ Created 3 servers in 120 seconds
@@ -879,12 +720,9 @@ Servers:
• k8s-master-01: 192.168.1.10 (Running)
• k8s-worker-01: 192.168.1.11 (Running)
• k8s-worker-02: 192.168.1.12 (Running)
-```plaintext
-
-### Verify Server Creation
-
-```bash
-# List all servers
+
+
+# List all servers
provisioning server list --infra production
# Show detailed server info
@@ -893,18 +731,12 @@ provisioning server list --infra production --out yaml
# SSH to server (test connectivity)
provisioning server ssh k8s-master-01
# Type 'exit' to return
-```plaintext
-
----
-
-## Step 10: Install Task Services
-
-Task services are infrastructure components like Kubernetes, databases, monitoring, etc.
-
-### Install Kubernetes (Check Mode First)
-
-```bash
-# Preview Kubernetes installation
+
+
+
+Task services are infrastructure components like Kubernetes, databases, monitoring, etc.
+
+# Preview Kubernetes installation
provisioning taskserv create kubernetes --infra production --check
# Shows:
@@ -912,12 +744,9 @@ provisioning taskserv create kubernetes --infra production --check
# - Configuration to be applied
# - Resources needed
# - Estimated installation time
-```plaintext
-
-### Install Kubernetes
-
-```bash
-# Install Kubernetes (with dependencies)
+
+
+# Install Kubernetes (with dependencies)
provisioning taskserv create kubernetes --infra production
# Or install dependencies first
@@ -927,12 +756,9 @@ provisioning taskserv create kubernetes --infra production
# Monitor progress
provisioning workflow monitor <task_id>
-```plaintext
-
-**Expected output:**
-
-```plaintext
-Installing taskserv: kubernetes
+
+Expected output:
+Installing taskserv: kubernetes
● Installing containerd on k8s-master-01
● Installing containerd on k8s-worker-01
@@ -955,12 +781,9 @@ Cluster Info:
• Version: 1.28.0
• Nodes: 3 (1 control-plane, 2 workers)
• API Server: https://192.168.1.10:6443
-```plaintext
-
-### Install Additional Services
-
-```bash
-# Install Cilium (CNI)
+
+
+# Install Cilium (CNI)
provisioning taskserv create cilium --infra production
# Install Helm
@@ -968,18 +791,12 @@ provisioning taskserv create helm --infra production
# Verify all taskservs
provisioning taskserv list --infra production
-```plaintext
-
----
-
-## Step 11: Create Clusters
-
-Clusters are complete application stacks (e.g., BuildKit, OCI Registry, Monitoring).
-
-### Create BuildKit Cluster (Check Mode)
-
-```bash
-# Preview cluster creation
+
+
+
+Clusters are complete application stacks (for example, BuildKit, OCI Registry, Monitoring).
+
+# Preview cluster creation
provisioning cluster create buildkit --infra production --check
# Shows:
@@ -987,12 +804,9 @@ provisioning cluster create buildkit --infra production --check
# - Dependencies required
# - Configuration values
# - Resource requirements
-```plaintext
-
-### Create BuildKit Cluster
-
-```bash
-# Create BuildKit cluster
+
+
+# Create BuildKit cluster
provisioning cluster create buildkit --infra production
# Monitor deployment
@@ -1000,12 +814,9 @@ provisioning workflow monitor <task_id>
# Or use plugin for faster monitoring
orch tasks --status running
-```plaintext
-
-**Expected output:**
-
-```plaintext
-Creating cluster: buildkit
+
+Expected output:
+Creating cluster: buildkit
● Deploying BuildKit daemon
● Deploying BuildKit worker
@@ -1017,14 +828,11 @@ Creating cluster: buildkit
Cluster Info:
• BuildKit version: 0.12.0
• Workers: 2
- • Cache: 50GB
+ • Cache: 50 GB
• Registry: registry.production.local
-```plaintext
-
-### Verify Cluster
-
-```bash
-# List all clusters
+
+
+# List all clusters
provisioning cluster list --infra production
# Show cluster details
@@ -1032,16 +840,11 @@ provisioning cluster list --infra production --out yaml
# Check cluster health
kubectl get pods -n buildkit
-```plaintext
-
----
-
-## Step 12: Verify Deployment
-
-### Comprehensive Health Check
-
-```bash
-# Check orchestrator status
+
+
+
+
+# Check orchestrator status
orch status
# or
provisioning orchestrator status
@@ -1058,12 +861,9 @@ provisioning cluster list --infra production
# Verify Kubernetes cluster
kubectl get nodes
kubectl get pods --all-namespaces
-```plaintext
-
-### Run Validation Tests
-
-```bash
-# Validate infrastructure
+
+
+# Validate infrastructure
provisioning infra validate --infra production
# Test connectivity
@@ -1071,26 +871,20 @@ provisioning server ssh k8s-master-01 "kubectl get nodes"
# Test BuildKit
kubectl exec -it -n buildkit buildkit-0 -- buildctl --version
-```plaintext
-
-### Expected Results
-
-All checks should show:
-
-- ✅ Servers: Running
-- ✅ Taskservs: Installed and healthy
-- ✅ Clusters: Deployed and operational
-- ✅ Kubernetes: 3/3 nodes ready
-- ✅ BuildKit: 2/2 workers ready
-
----
-
-## Step 13: Post-Deployment
-
-### Configure kubectl Access
-
-```bash
-# Get kubeconfig from master node
+
+
+All checks should show:
+
+✅ Servers: Running
+✅ Taskservs: Installed and healthy
+✅ Clusters: Deployed and operational
+✅ Kubernetes: 3/3 nodes ready
+✅ BuildKit: 2/2 workers ready
+
+
+
+
+# Get kubeconfig from master node
provisioning server ssh k8s-master-01 "cat ~/.kube/config" > ~/.kube/config-production
# Set KUBECONFIG
@@ -1099,34 +893,25 @@ export KUBECONFIG=~/.kube/config-production
# Verify access
kubectl get nodes
kubectl get pods --all-namespaces
-```plaintext
-
-### Set Up Monitoring (Optional)
-
-```bash
-# Deploy monitoring stack
+
+
+# Deploy monitoring stack
provisioning cluster create monitoring --infra production
# Access Grafana
kubectl port-forward -n monitoring svc/grafana 3000:80
# Open: http://localhost:3000
-```plaintext
-
-### Configure CI/CD Integration (Optional)
-
-```bash
-# Generate CI/CD credentials
+
+
+# Generate CI/CD credentials
provisioning secrets generate aws --ttl 12h
# Create CI/CD kubeconfig
kubectl create serviceaccount ci-cd -n default
kubectl create clusterrolebinding ci-cd --clusterrole=admin --serviceaccount=default:ci-cd
-```plaintext
-
-### Backup Configuration
-
-```bash
-# Backup workspace configuration
+
+
+# Backup workspace configuration
tar -czf workspace-production-backup.tar.gz workspace/
# Encrypt backup
@@ -1134,18 +919,12 @@ kms encrypt (open workspace-production-backup.tar.gz | encode base64) --backend
| save workspace-production-backup.tar.gz.enc
# Store securely (S3, Vault, etc.)
-```plaintext
-
----
-
-## Troubleshooting
-
-### Server Creation Fails
-
-**Problem**: Server creation times out or fails
-
-```bash
-# Check provider credentials
+
+
+
+
+Problem: Server creation times out or fails
+# Check provider credentials
provisioning validate config
# Check provider API status
@@ -1153,14 +932,10 @@ curl -u username:password https://api.upcloud.com/1.3/account
# Try with debug mode
provisioning server create --infra production --check --debug
-```plaintext
-
-### Taskserv Installation Fails
-
-**Problem**: Kubernetes installation fails
-
-```bash
-# Check server connectivity
+
+
+Problem: Kubernetes installation fails
+# Check server connectivity
provisioning server ssh k8s-master-01
# Check logs
@@ -1172,14 +947,10 @@ provisioning taskserv list --infra production | where status == "failed"
# Retry installation
provisioning taskserv delete kubernetes --infra production
provisioning taskserv create kubernetes --infra production
-```plaintext
-
-### Plugin Commands Don't Work
-
-**Problem**: `auth`, `kms`, or `orch` commands not found
-
-```bash
-# Check plugin registration
+
+
+Problem: auth, kms, or orch commands not found
+# Check plugin registration
plugin list | where name =~ "auth|kms|orch"
# Re-register if missing
@@ -1191,14 +962,10 @@ plugin add target/release/nu_plugin_orchestrator
# Restart Nushell
exit
nu
-```plaintext
-
-### KMS Encryption Fails
-
-**Problem**: `kms encrypt` returns error
-
-```bash
-# Check backend status
+
+
+Problem: kms encrypt returns error
+# Check backend status
kms status
# Check RustyVault running
@@ -1209,14 +976,10 @@ kms encrypt "data" --backend age --key age1xxxxxxxxx
# Check Age key
cat ~/.age/key.txt
-```plaintext
-
-### Orchestrator Not Running
-
-**Problem**: `orch status` returns error
-
-```bash
-# Check orchestrator status
+
+
+Problem: orch status returns error
+# Check orchestrator status
ps aux | grep orchestrator
# Start orchestrator
@@ -1225,14 +988,10 @@ cd provisioning/platform/orchestrator
# Check logs
tail -f provisioning/platform/orchestrator/data/orchestrator.log
-```plaintext
-
-### Configuration Validation Errors
-
-**Problem**: `provisioning validate config` shows errors
-
-```bash
-# Show detailed errors
+
+
+Problem: provisioning validate config shows errors
+# Show detailed errors
provisioning validate config --debug
# Check configuration files
@@ -1240,27 +999,23 @@ provisioning allenv
# Fix missing settings
vim workspace/config/local-overrides.toml
-```plaintext
-
----
-
-## Next Steps
-
-### Explore Advanced Features
-
-1. **Multi-Environment Deployment**
-
- ```bash
- # Create dev and staging workspaces
- provisioning workspace create dev
- provisioning workspace create staging
- provisioning workspace switch dev
+
+
+
+Multi-Environment Deployment
+# Create dev and staging workspaces
+provisioning workspace create dev
+provisioning workspace create staging
+provisioning workspace switch dev
+
+
+
Batch Operations
# Deploy to multiple clouds
-provisioning batch submit workflows/multi-cloud-deploy.k
+provisioning batch submit workflows/multi-cloud-deploy.ncl
@@ -1285,7 +1040,7 @@ provisioning compliance report --standard soc2
Update Guide: docs/guides/update-infrastructure.md
Customize Guide: docs/guides/customize-infrastructure.md
Plugin Guide: docs/user/PLUGIN_INTEGRATION_GUIDE.md
-Security System : docs/architecture/ADR-009-security-system-complete.md
+Security System: docs/architecture/adr-009-security-system-complete.md
# Show help for any command
@@ -1298,15 +1053,11 @@ provisioning version
# Start Nushell session with provisioning library
provisioning nu
-```plaintext
-
----
-
-## Summary
-
-You've successfully:
-
-✅ Installed Nushell and essential tools
+
+
+
+You’ve successfully:
+✅ Installed Nushell and essential tools
✅ Built and registered native plugins (10-50x faster operations)
✅ Cloned and configured the project
✅ Initialized a production workspace
@@ -1314,19 +1065,14 @@ You've successfully:
✅ Deployed servers
✅ Installed Kubernetes and task services
✅ Created application clusters
-✅ Verified complete deployment
-
-**Your infrastructure is now ready for production use!**
-
----
-
-**Estimated Total Time**: 30-60 minutes
-**Next Guide**: [Update Infrastructure](update-infrastructure.md)
-**Questions?**: Open an issue or contact <platform-team@example.com>
-
-**Last Updated**: 2025-10-09
-**Version**: 3.5.0
-
+✅ Verified complete deployment
+Your infrastructure is now ready for production use!
+
+Estimated Total Time: 30-60 minutes
+Next Guide: Update Infrastructure
+Questions?: Open an issue or contact platform-team@example.com
+Last Updated: 2025-10-09
+Version: 3.5.0
diff --git a/docs/book/guides/update-infrastructure.html b/docs/book/guides/update-infrastructure.html
index caf0964..ce03e4c 100644
--- a/docs/book/guides/update-infrastructure.html
+++ b/docs/book/guides/update-infrastructure.html
@@ -191,42 +191,27 @@
Best for: Non-critical environments, development, staging
# Direct update without downtime consideration
provisioning t create <taskserv> --infra <project>
-```plaintext
-
-### Strategy 2: Rolling Updates (Recommended)
-
-**Best for**: Production environments, high availability
-
-```bash
-# Update servers one by one
+
+
+Best for: Production environments, high availability
+# Update servers one by one
provisioning s update --infra <project> --rolling
-```plaintext
-
-### Strategy 3: Blue-Green Deployment (Safest)
-
-**Best for**: Critical production, zero-downtime requirements
-
-```bash
-# Create new infrastructure, switch traffic, remove old
+
+
+Best for: Critical production, zero-downtime requirements
+# Create new infrastructure, switch traffic, remove old
provisioning ws init <project>-green
# ... configure and deploy
# ... switch traffic
provisioning ws delete <project>-blue
-```plaintext
-
-## Step 1: Check for Updates
-
-### 1.1 Check All Task Services
-
-```bash
-# Check all taskservs for updates
+
+
+
+# Check all taskservs for updates
provisioning t check-updates
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-📦 Task Service Update Check:
+
+Expected Output:
+📦 Task Service Update Check:
NAME CURRENT LATEST STATUS
kubernetes 1.29.0 1.30.0 ⬆️ update available
@@ -236,19 +221,13 @@ postgres 15.5 16.1 ⬆️ update available
redis 7.2.3 7.2.3 ✅ up-to-date
Updates available: 3
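The CURRENT/LATEST comparison in the table above can be reproduced locally with GNU sort's version ordering. A hedged sketch; `update_available` is a hypothetical helper, not a command the CLI provides:

```shell
# Hypothetical helper: report whether `latest` is newer than `current`,
# using GNU sort's version ordering (sort -V).
update_available() {
  current="$1"; latest="$2"
  [ "$current" != "$latest" ] &&
    [ "$(printf '%s\n%s\n' "$current" "$latest" | sort -V | tail -n1)" = "$latest" ]
}
```

For example, `update_available 1.29.0 1.30.0` succeeds (update available), while `update_available 7.2.3 7.2.3` fails (up to date).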
-```plaintext
-
-### 1.2 Check Specific Task Service
-
-```bash
-# Check specific taskserv
+
+
+# Check specific taskserv
provisioning t check-updates kubernetes
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-📦 Kubernetes Update Check:
+
+Expected Output:
+📦 Kubernetes Update Check:
Current: 1.29.0
Latest: 1.30.0
@@ -264,19 +243,13 @@ Breaking Changes:
• None
Recommended: ✅ Safe to update
-```plaintext
-
-### 1.3 Check Version Status
-
-```bash
-# Show detailed version information
+
+
+# Show detailed version information
provisioning version show
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-📋 Component Versions:
+
+Expected Output:
+📋 Component Versions:
COMPONENT CURRENT LATEST DAYS OLD STATUS
kubernetes 1.29.0 1.30.0 45 ⬆️ update
@@ -284,51 +257,32 @@ containerd 1.7.13 1.7.13 0 ✅ current
cilium 1.14.5 1.15.0 30 ⬆️ update
postgres 15.5 16.1 60 ⬆️ update (major)
redis 7.2.3 7.2.3 0 ✅ current
-```plaintext
-
-### 1.4 Check for Security Updates
-
-```bash
-# Check for security-related updates
+
+
+# Check for security-related updates
provisioning version updates --security-only
-```plaintext
-
-## Step 2: Plan Your Update
-
-### 2.1 Review Current Configuration
-
-```bash
-# Show current infrastructure
+
+
+
+# Show current infrastructure
provisioning show settings --infra my-production
-```plaintext
-
-### 2.2 Backup Configuration
-
-```bash
-# Create configuration backup
+
+
+# Create configuration backup
cp -r workspace/infra/my-production workspace/infra/my-production.backup-$(date +%Y%m%d)
# Or use built-in backup
provisioning ws backup my-production
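Before trusting a backup in a later rollback, it is worth confirming the archive lists cleanly. A minimal sketch; `backup_ok` is a hypothetical helper name, assuming the backup is a `.tar.gz` as shown in the expected output:

```shell
# Hypothetical helper: confirm a .tar.gz backup lists cleanly before
# relying on it for a rollback.
backup_ok() {
  tar -tzf "$1" >/dev/null 2>&1
}
```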
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-✅ Backup created: workspace/backups/my-production-20250930.tar.gz
-```plaintext
-
-### 2.3 Create Update Plan
-
-```bash
-# Generate update plan
+
+Expected Output:
+✅ Backup created: workspace/backups/my-production-20250930.tar.gz
+
+
+# Generate update plan
provisioning plan update --infra my-production
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-📝 Update Plan for my-production:
+
+Expected Output:
+📝 Update Plan for my-production:
Phase 1: Minor Updates (Low Risk)
• containerd: No update needed
@@ -348,23 +302,15 @@ Recommended Order:
Total Estimated Time: 30 minutes
Recommended: Test in staging environment first
-```plaintext
-
-## Step 3: Update Task Services
-
-### 3.1 Update Non-Critical Service (Cilium Example)
-
-#### Dry-Run Update
-
-```bash
-# Test update without applying
+
+
+
+
+# Test update without applying
provisioning t create cilium --infra my-production --check
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-🔍 CHECK MODE: Simulating Cilium update
+
+Expected Output:
+🔍 CHECK MODE: Simulating Cilium update
Current: 1.14.5
Target: 1.15.0
@@ -377,33 +323,21 @@ Would perform:
Estimated downtime: <1 minute per node
No errors detected. Ready to update.
-```plaintext
-
-#### Generate Updated Configuration
-
-```bash
-# Generate new configuration
+
+
+# Generate new configuration
provisioning t generate cilium --infra my-production
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-✅ Generated Cilium configuration (version 1.15.0)
- Saved to: workspace/infra/my-production/taskservs/cilium.k
-```plaintext
-
-#### Apply Update
-
-```bash
-# Apply update
+
+Expected Output:
+✅ Generated Cilium configuration (version 1.15.0)
+ Saved to: workspace/infra/my-production/taskservs/cilium.ncl
+
+
+# Apply update
provisioning t create cilium --infra my-production
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-🚀 Updating Cilium on my-production...
+
+Expected Output:
+🚀 Updating Cilium on my-production...
Downloading Cilium 1.15.0... ⏳
✅ Downloaded
@@ -423,19 +357,13 @@ Verifying connectivity... ⏳
🎉 Cilium update complete!
Version: 1.14.5 → 1.15.0
Downtime: 0 minutes
-```plaintext
-
-#### Verify Update
-
-```bash
-# Verify updated version
+
+
+# Verify updated version
provisioning version taskserv cilium
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-📦 Cilium Version Info:
+
+Expected Output:
+📦 Cilium Version Info:
Installed: 1.15.0
Latest: 1.15.0
@@ -444,49 +372,33 @@ Status: ✅ Up-to-date
Nodes:
✅ web-01: 1.15.0 (running)
✅ web-02: 1.15.0 (running)
-```plaintext
-
-### 3.2 Update Critical Service (Kubernetes Example)
-
-#### Test in Staging First
-
-```bash
-# If you have staging environment
+
+
+
+# If you have staging environment
provisioning t create kubernetes --infra my-staging --check
provisioning t create kubernetes --infra my-staging
# Run integration tests
provisioning test kubernetes --infra my-staging
-```plaintext
-
-#### Backup Current State
-
-```bash
-# Backup Kubernetes state
+
+
+# Backup Kubernetes state
kubectl get all -A -o yaml > k8s-backup-$(date +%Y%m%d).yaml
# Backup etcd (if using external etcd)
provisioning t backup kubernetes --infra my-production
-```plaintext
-
-#### Schedule Maintenance Window
-
-```bash
-# Set maintenance mode (optional, if supported)
+
+
+# Set maintenance mode (optional, if supported)
provisioning maintenance enable --infra my-production --duration 30m
-```plaintext
-
-#### Update Kubernetes
-
-```bash
-# Update control plane first
+
+
+# Update control plane first
provisioning t create kubernetes --infra my-production --control-plane-only
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-🚀 Updating Kubernetes control plane on my-production...
+
+Expected Output:
+🚀 Updating Kubernetes control plane on my-production...
Draining control plane: web-01... ⏳
✅ web-01 drained
@@ -501,17 +413,12 @@ Verifying control plane... ⏳
✅ Control plane healthy
🎉 Control plane update complete!
-```plaintext
-
-```bash
-# Update worker nodes one by one
+
+# Update worker nodes one by one
provisioning t create kubernetes --infra my-production --workers-only --rolling
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-🚀 Updating Kubernetes workers on my-production...
+
+Expected Output:
+🚀 Updating Kubernetes workers on my-production...
Rolling update: web-02...
Draining... ⏳
@@ -529,44 +436,28 @@ Rolling update: web-02...
🎉 Worker update complete!
Updated: web-02
Version: 1.30.0
-```plaintext
-
-#### Verify Update
-
-```bash
-# Verify Kubernetes cluster
+
+
+# Verify Kubernetes cluster
kubectl get nodes
provisioning version taskserv kubernetes
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-NAME STATUS ROLES AGE VERSION
+
+Expected Output:
+NAME STATUS ROLES AGE VERSION
web-01 Ready control-plane 30d v1.30.0
web-02 Ready <none> 30d v1.30.0
-```plaintext
-
-```bash
-# Run smoke tests
+
+# Run smoke tests
provisioning test kubernetes --infra my-production
-```plaintext
-
-### 3.3 Update Database (PostgreSQL Example)
-
-⚠️ **WARNING**: Database updates may require data migration. Always backup first!
-
-#### Backup Database
-
-```bash
-# Backup PostgreSQL database
+
+
+⚠️ WARNING: Database updates may require data migration. Always back up first!
+
+# Backup PostgreSQL database
provisioning t backup postgres --infra my-production
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-🗄️ Backing up PostgreSQL...
+
+Expected Output:
+🗄️ Backing up PostgreSQL...
Creating dump: my-production-postgres-20250930.sql... ⏳
✅ Dump created (2.3 GB)
@@ -575,19 +466,13 @@ Compressing... ⏳
✅ Compressed (450 MB)
Saved to: workspace/backups/postgres/my-production-20250930.sql.gz
-```plaintext
-
-#### Check Compatibility
-
-```bash
-# Check if data migration is needed
+
+
+# Check if data migration is needed
provisioning t check-migration postgres --from 15.5 --to 16.1
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-🔍 PostgreSQL Migration Check:
+
+Expected Output:
+🔍 PostgreSQL Migration Check:
From: 15.5
To: 16.1
@@ -605,19 +490,13 @@ Estimated Time: 15-30 minutes (depending on data size)
Estimated Downtime: 15-30 minutes
Recommended: Use streaming replication for zero-downtime upgrade
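The distinction the migration check draws — 15.5 → 16.1 needs migration, a patch bump would not — hinges on whether the major version changes. A sketch of that decision; `is_major_upgrade` is a hypothetical helper, not part of the CLI:

```shell
# Hypothetical helper: a 15.5 -> 16.1 jump crosses a major version
# boundary and needs migration; 7.2.3 -> 7.2.4 does not.
is_major_upgrade() {
  [ "${1%%.*}" != "${2%%.*}" ]
}
```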
-```plaintext
-
-#### Perform Update
-
-```bash
-# Update PostgreSQL (with automatic migration)
+
+
+# Update PostgreSQL (with automatic migration)
provisioning t create postgres --infra my-production --migrate
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-🚀 Updating PostgreSQL on my-production...
+
+Expected Output:
+🚀 Updating PostgreSQL on my-production...
⚠️ Major version upgrade detected (15.5 → 16.1)
Automatic migration will be performed
@@ -646,29 +525,19 @@ Verifying data integrity... ⏳
🎉 PostgreSQL update complete!
Version: 15.5 → 16.1
Downtime: 18 minutes
-```plaintext
-
-#### Verify Update
-
-```bash
-# Verify PostgreSQL
+
+
+# Verify PostgreSQL
provisioning version taskserv postgres
ssh db-01 "psql --version"
-```plaintext
-
-## Step 4: Update Multiple Services
-
-### 4.1 Batch Update (Sequentially)
-
-```bash
-# Update multiple taskservs one by one
+
+
+
+# Update multiple taskservs one by one
provisioning t update --infra my-production --taskservs cilium,containerd,redis
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-🚀 Updating 3 taskservs on my-production...
+
+Expected Output:
+🚀 Updating 3 taskservs on my-production...
[1/3] Updating cilium... ⏳
✅ cilium updated (1.15.0)
@@ -682,19 +551,13 @@ provisioning t update --infra my-production --taskservs cilium,containerd,redis
🎉 All updates complete!
Updated: 3 taskservs
Total time: 8 minutes
-```plaintext
-
-### 4.2 Parallel Update (Non-Dependent Services)
-
-```bash
-# Update taskservs in parallel (if they don't depend on each other)
+
+
+# Update taskservs in parallel (if they don't depend on each other)
provisioning t update --infra my-production --taskservs redis,postgres --parallel
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-🚀 Updating 2 taskservs in parallel on my-production...
+
+Expected Output:
+🚀 Updating 2 taskservs in parallel on my-production...
redis: Updating... ⏳
postgres: Updating... ⏳
@@ -705,50 +568,35 @@ postgres: ✅ Updated (16.1)
🎉 All updates complete!
Updated: 2 taskservs
Total time: 3 minutes (parallel)
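The `--parallel` behavior shown above can be sketched with plain background jobs: launch the independent updates concurrently, then fail if any one of them fails. `run_parallel` is a hypothetical illustration of the pattern, not the orchestrator's actual implementation:

```shell
# Hypothetical sketch of the --parallel pattern: run independent update
# commands in the background, then report failure if any of them failed.
run_parallel() {
  pids=""
  for cmd in "$@"; do
    sh -c "$cmd" &
    pids="$pids $!"
  done
  rc=0
  for p in $pids; do
    wait "$p" || rc=1
  done
  return $rc
}
```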
-```plaintext
-
-## Step 5: Update Server Configuration
-
-### 5.1 Update Server Resources
-
-```bash
-# Edit server configuration
-provisioning sops workspace/infra/my-production/servers.k
-```plaintext
-
-**Example: Upgrade server plan**
-
-```kcl
-# Before
+
+
+
+# Edit server configuration
+provisioning sops workspace/infra/my-production/servers.ncl
+
+Example: Upgrade server plan
+# Before
{
name = "web-01"
- plan = "1xCPU-2GB" # Old plan
+ plan = "1xCPU-2 GB" # Old plan
}
# After
{
name = "web-01"
- plan = "2xCPU-4GB" # New plan
+ plan = "2xCPU-4GB" # New plan
}
-```plaintext
-
-```bash
-# Apply server update
+
+# Apply server update
provisioning s update --infra my-production --check
provisioning s update --infra my-production
-```plaintext
-
-### 5.2 Update Server OS
-
-```bash
-# Update operating system packages
+
+5.2 Update Server OS
+# Update operating system packages
provisioning s update --infra my-production --os-update
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-🚀 Updating OS packages on my-production servers...
+
+Expected Output:
+🚀 Updating OS packages on my-production servers...
web-01: Updating packages... ⏳
✅ web-01: 24 packages updated
@@ -760,23 +608,15 @@ db-01: Updating packages... ⏳
✅ db-01: 24 packages updated
🎉 OS updates complete!
-```plaintext
-
-## Step 6: Rollback Procedures
-
-### 6.1 Rollback Task Service
-
-If update fails or causes issues:
-
-```bash
-# Rollback to previous version
+
+Step 6: Rollback Procedures
+
+6.1 Rollback Task Service
+If update fails or causes issues:
+# Rollback to previous version
provisioning t rollback cilium --infra my-production
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-🔄 Rolling back Cilium on my-production...
+
+Expected Output:
+🔄 Rolling back Cilium on my-production...
Current: 1.15.0
Target: 1.14.5 (previous version)
@@ -792,35 +632,22 @@ Verifying connectivity... ⏳
🎉 Rollback complete!
Version: 1.15.0 → 1.14.5
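The rollback decision can be automated as a guard that runs right after an update. The sketch below uses a stand-in for the health command; the `"healthy"` marker and the commented-out rollback invocation are assumptions, not documented output:

```shell
#!/usr/bin/env bash
# Guard: roll back automatically when the post-update health check fails
# (sketch; the "healthy" marker and commands below are assumptions).
set -uo pipefail

health_status() {
  # Stand-in for: provisioning health --infra my-production
  echo "unhealthy"
}

if [ "$(health_status)" = "healthy" ]; then
  echo "Update verified - keeping new version"
else
  echo "Health check failed - rolling back"
  # provisioning t rollback cilium --infra my-production
fi
```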
-```plaintext
-
-### 6.2 Rollback from Backup
-
-```bash
-# Restore configuration from backup
+
+6.2 Rollback from Backup
+# Restore configuration from backup
provisioning ws restore my-production --from workspace/backups/my-production-20250930.tar.gz
-```plaintext
-
-### 6.3 Emergency Rollback
-
-```bash
-# Complete infrastructure rollback
+
+6.3 Emergency Rollback
+# Complete infrastructure rollback
provisioning rollback --infra my-production --to-snapshot <snapshot-id>
-```plaintext
-
-## Step 7: Post-Update Verification
-
-### 7.1 Verify All Components
-
-```bash
-# Check overall health
+
+Step 7: Post-Update Verification
+
+7.1 Verify All Components
+# Check overall health
provisioning health --infra my-production
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-🏥 Health Check: my-production
+
+Expected Output:
+🏥 Health Check: my-production
Servers:
✅ web-01: Healthy
@@ -837,26 +664,17 @@ Clusters:
✅ buildkit: 2/2 replicas (healthy)
Overall Status: ✅ All systems healthy
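Right after an update, the first health check may run before all pods have settled, so it is common to poll with a bounded retry budget. A minimal sketch, with a stand-in for the real CLI call:

```shell
#!/usr/bin/env bash
# Poll the health check until it passes, with a bounded number of attempts
# (sketch; swap check_health for the real CLI invocation).
set -uo pipefail

check_health() {
  # Stand-in for: provisioning health --infra my-production
  echo "healthy"
}

attempts=0
max_attempts=10
until [ "$(check_health)" = "healthy" ] || [ "$attempts" -ge "$max_attempts" ]; do
  attempts=$((attempts + 1))
  sleep 1   # back off between checks
done

if [ "$attempts" -lt "$max_attempts" ]; then
  echo "cluster healthy after $attempts retries"
else
  echo "gave up after $max_attempts attempts" >&2
fi
```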
-```plaintext
-
-### 7.2 Verify Version Updates
-
-```bash
-# Verify all versions are updated
+
+7.2 Verify Version Updates
+# Verify all versions are updated
provisioning version show
-```plaintext
-
-### 7.3 Run Integration Tests
-
-```bash
-# Run comprehensive tests
+
+7.3 Run Integration Tests
+# Run comprehensive tests
provisioning test all --infra my-production
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-🧪 Running Integration Tests...
+
+Expected Output:
+🧪 Running Integration Tests...
[1/5] Server connectivity... ⏳
✅ All servers reachable
@@ -874,70 +692,66 @@ provisioning test all --infra my-production
✅ All applications healthy
🎉 All tests passed!
-```plaintext
-
-### 7.4 Monitor for Issues
-
-```bash
-# Monitor logs for errors
+
+7.4 Monitor for Issues
+# Monitor logs for errors
provisioning logs --infra my-production --follow --level error
-```plaintext
-
-## Update Checklist
-
-Use this checklist for production updates:
-
-- [ ] Check for available updates
-- [ ] Review changelog and breaking changes
-- [ ] Create configuration backup
-- [ ] Test update in staging environment
-- [ ] Schedule maintenance window
-- [ ] Notify team/users of maintenance
-- [ ] Update non-critical services first
-- [ ] Verify each update before proceeding
-- [ ] Update critical services with rolling updates
-- [ ] Backup database before major updates
-- [ ] Verify all components after update
-- [ ] Run integration tests
-- [ ] Monitor for issues (30 minutes minimum)
-- [ ] Document any issues encountered
-- [ ] Close maintenance window
-
-## Common Update Scenarios
-
-### Scenario 1: Minor Security Patch
-
-```bash
-# Quick security update
+
+Update Checklist
+Use this checklist for production updates:
+
+Check for available updates
+Review changelog and breaking changes
+Create configuration backup
+Test update in staging environment
+Schedule maintenance window
+Notify team/users of maintenance
+Update non-critical services first
+Verify each update before proceeding
+Update critical services with rolling updates
+Backup database before major updates
+Verify all components after update
+Run integration tests
+Monitor for issues (30 minutes minimum)
+Document any issues encountered
+Close maintenance window
+
+Common Update Scenarios
+
+Scenario 1: Minor Security Patch
+# Quick security update
provisioning t check-updates --security-only
provisioning t update --infra my-production --security-patches --yes
-```plaintext
-
-### Scenario 2: Major Version Upgrade
-
-```bash
-# Careful major version update
+
+Scenario 2: Major Version Upgrade
+# Careful major version update
provisioning ws backup my-production
provisioning t check-migration <service> --from X.Y --to X+1.Y
provisioning t create <service> --infra my-production --migrate
provisioning test all --infra my-production
-```plaintext
-
-### Scenario 3: Emergency Hotfix
-
-```bash
-# Apply critical hotfix immediately
+
+Scenario 3: Emergency Hotfix
+# Apply critical hotfix immediately
provisioning t create <service> --infra my-production --hotfix --yes
-```plaintext
-
-## Troubleshooting Updates
-
-### Issue: Update fails mid-process
-
-**Solution:**
-
-```bash
-# Check update status
+
+Troubleshooting Updates
+
+Issue: Update fails mid-process
+Solution:
+# Check update status
provisioning t status <taskserv> --infra my-production
# Resume failed update
@@ -945,14 +759,10 @@ provisioning t update <taskserv> --infra my-production --resume
# Or rollback
provisioning t rollback <taskserv> --infra my-production
-```plaintext
-
-### Issue: Service not starting after update
-
-**Solution:**
-
-```bash
-# Check logs
+
+Issue: Service not starting after update
+Solution:
+# Check logs
provisioning logs <taskserv> --infra my-production
# Verify configuration
@@ -960,41 +770,34 @@ provisioning t validate <taskserv> --infra my-production
# Rollback if necessary
provisioning t rollback <taskserv> --infra my-production
-```plaintext
-
-### Issue: Data migration fails
-
-**Solution:**
-
-```bash
-# Check migration logs
+
+Issue: Data migration fails
+Solution:
+# Check migration logs
provisioning t migration-logs <taskserv> --infra my-production
# Restore from backup
provisioning t restore <taskserv> --infra my-production --from <backup-file>
-```plaintext
-
-## Best Practices
-
-1. **Always Test First**: Test updates in staging before production
-2. **Backup Everything**: Create backups before any update
-3. **Update Gradually**: Update one service at a time
-4. **Monitor Closely**: Watch for errors after each update
-5. **Have Rollback Plan**: Always have a rollback strategy
-6. **Document Changes**: Keep update logs for reference
-7. **Schedule Wisely**: Update during low-traffic periods
-8. **Verify Thoroughly**: Run tests after each update
-
-## Next Steps
-
-- **[Customize Guide](customize-infrastructure.md)** - Customize your infrastructure
-- **[From Scratch Guide](from-scratch.md)** - Deploy new infrastructure
-- **[Workflow Guide](../development/workflow.md)** - Automate with workflows
-
-## Quick Reference
-
-```bash
-# Update workflow
+
+Best Practices
+
+Always Test First : Test updates in staging before production
+Backup Everything : Create backups before any update
+Update Gradually : Update one service at a time
+Monitor Closely : Watch for errors after each update
+Have Rollback Plan : Always have a rollback strategy
+Document Changes : Keep update logs for reference
+Schedule Wisely : Update during low-traffic periods
+Verify Thoroughly : Run tests after each update
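"Backup Everything" can be enforced rather than remembered: a pre-update gate that refuses to proceed without a recent backup on disk. The `workspace/backups` path convention below matches the restore example earlier in this guide, but treat it as an assumption:

```shell
#!/usr/bin/env bash
# "Backup Everything" as an enforced precondition (sketch; the
# workspace/backups path convention is an assumption).
set -uo pipefail

BACKUP_DIR="workspace/backups"
INFRA="my-production"

latest="$(ls -t "$BACKUP_DIR/$INFRA"-*.tar.gz 2>/dev/null | head -n 1 || true)"
if [ -z "$latest" ]; then
  echo "No backup found - run: provisioning ws backup $INFRA" >&2
else
  echo "Proceeding with backup on hand: $latest"
fi
```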
+
+Next Steps
+
+Customize Guide - Customize your infrastructure
+From Scratch Guide - Deploy new infrastructure
+Workflow Guide - Automate with workflows
+
+Quick Reference
+# Update workflow
provisioning t check-updates
provisioning ws backup my-production
provisioning t create <taskserv> --infra my-production --check
@@ -1002,12 +805,9 @@ provisioning t create <taskserv> --infra my-production
provisioning version taskserv <taskserv>
provisioning health --infra my-production
provisioning test all --infra my-production
-```plaintext
-
----
-
-*This guide is part of the provisioning project documentation. Last updated: 2025-09-30*
+
+This guide is part of the provisioning project documentation. Last updated: 2025-09-30
diff --git a/docs/book/index.html b/docs/book/index.html
index 2cf7252..cb3a775 100644
--- a/docs/book/index.html
+++ b/docs/book/index.html
@@ -314,229 +314,202 @@
├── configuration/ # Configuration docs
├── troubleshooting/ # Troubleshooting guides
└── quick-reference/ # Quick references
-```plaintext
-
----
-
-## Key Concepts
-
-### Infrastructure as Code (IaC)
-
-The provisioning platform uses **declarative configuration** to manage infrastructure. Instead of manually creating resources, you define what you want in Nickel configuration files, and the system makes it happen.
-
-### Mode-Based Architecture
-
-The system supports four operational modes:
-
-- **Solo**: Single developer local development
-- **Multi-user**: Team collaboration with shared services
-- **CI/CD**: Automated pipeline execution
-- **Enterprise**: Production deployment with strict compliance
-
-### Extension System
-
-Extensibility through:
-
-- **Providers**: Cloud platform integrations (AWS, UpCloud, Local)
-- **Task Services**: Infrastructure components (Kubernetes, databases, etc.)
-- **Clusters**: Complete deployment configurations
-
-### OCI-Native Distribution
-
-Extensions and packages distributed as OCI artifacts, enabling:
-
-- Industry-standard packaging
-- Efficient caching and bandwidth
-- Version pinning and rollback
-- Air-gapped deployments
-
----
-
-## Documentation by Role
-
-### For New Users
-
-1. Start with **[Installation Guide](getting-started/installation-guide.md)**
-2. Read **[Getting Started](getting-started/getting-started.md)**
-3. Follow **[From Scratch Guide](guides/from-scratch.md)**
-4. Reference **[Quickstart Cheatsheet](guides/quickstart-cheatsheet.md)**
-
-### For Developers
-
-1. Review **[System Overview](architecture/system-overview.md)**
-2. Study **[Design Principles](architecture/design-principles.md)**
-3. Read relevant **[ADRs](architecture/)**
-4. Follow **[Development Guide](development/README.md)**
-5. Reference **KCL Quick Reference**
-
-### For Operators
-
-1. Understand **[Mode System](infrastructure/mode-system)**
-2. Learn **[Service Management](operations/service-management-guide.md)**
-3. Review **[Infrastructure Management](infrastructure/infrastructure-management.md)**
-4. Study **[OCI Registry](integration/oci-registry-guide.md)**
-
-### For Architects
-
-1. Read **[System Overview](architecture/system-overview.md)**
-2. Study all **[ADRs](architecture/)**
-3. Review **[Integration Patterns](architecture/integration-patterns.md)**
-4. Understand **[Multi-Repo Architecture](architecture/multi-repo-architecture.md)**
-
----
-
-## System Capabilities
-
-### ✅ Infrastructure Automation
-
-- Multi-cloud support (AWS, UpCloud, Local)
-- Declarative configuration with KCL
-- Automated dependency resolution
-- Batch operations with rollback
-
-### ✅ Workflow Orchestration
-
-- Hybrid Rust/Nushell orchestration
-- Checkpoint-based recovery
-- Parallel execution with limits
-- Real-time monitoring
-
-### ✅ Test Environments
-
-- Containerized testing
-- Multi-node cluster simulation
-- Topology templates
-- Automated cleanup
-
-### ✅ Mode-Based Operation
-
-- Solo: Local development
-- Multi-user: Team collaboration
-- CI/CD: Automated pipelines
-- Enterprise: Production deployment
-
-### ✅ Extension Management
-
-- OCI-native distribution
-- Automatic dependency resolution
-- Version management
-- Local and remote sources
-
----
-
-## Key Achievements
-
-### 🚀 Batch Workflow System (v3.1.0)
-
-- Provider-agnostic batch operations
-- Mixed provider support (UpCloud + AWS + local)
-- Dependency resolution with soft/hard dependencies
-- Real-time monitoring and rollback
-
-### 🏗️ Hybrid Orchestrator (v3.0.0)
-
-- Solves Nushell deep call stack limitations
-- Preserves all business logic
-- REST API for external integration
-- Checkpoint-based state management
-
-### ⚙️ Configuration System (v2.0.0)
-
-- Migrated from ENV to config-driven
-- Hierarchical configuration loading
-- Variable interpolation
-- True IaC without hardcoded fallbacks
-
-### 🎯 Modular CLI (v3.2.0)
-
-- 84% reduction in main file size
-- Domain-driven handlers
-- 80+ shortcuts
-- Bi-directional help system
-
-### 🧪 Test Environment Service (v3.4.0)
-
-- Automated containerized testing
-- Multi-node cluster topologies
-- CI/CD integration ready
-- Template-based configurations
-
-### 🔄 Workspace Switching (v2.0.5)
-
-- Centralized workspace management
-- Single-command workspace switching
-- Active workspace tracking
-- User preference system
-
----
-
-## Technology Stack
-
-| Component | Technology | Purpose |
-|-----------|------------|---------|
-| **Core CLI** | Nushell 0.107.1 | Shell and scripting |
-| **Configuration** | KCL 0.11.2 | Type-safe IaC |
-| **Orchestrator** | Rust | High-performance coordination |
-| **Templates** | Jinja2 (nu_plugin_tera) | Code generation |
-| **Secrets** | SOPS 3.10.2 + Age 1.2.1 | Encryption |
-| **Distribution** | OCI (skopeo/crane/oras) | Artifact management |
-
----
-
-## Support
-
-### Getting Help
-
-- **Documentation**: You're reading it!
-- **Quick Reference**: Run `provisioning sc` or `provisioning guide quickstart`
-- **Help System**: Run `provisioning help` or `provisioning <command> help`
-- **Interactive Shell**: Run `provisioning nu` for Nushell REPL
-
-### Reporting Issues
-
-- Check **[Troubleshooting Guide](infrastructure/troubleshooting-guide.md)**
-- Review **[FAQ](troubleshooting/troubleshooting-guide.md)**
-- Enable debug mode: `provisioning --debug <command>`
-- Check logs: `provisioning platform logs <service>`
-
----
-
-## Contributing
-
-This project welcomes contributions! See **[Development Guide](development/README.md)** for:
-
-- Development setup
-- Code style guidelines
-- Testing requirements
-- Pull request process
-
----
-
-## License
-
-[Add license information]
-
----
-
-## Version History
-
-| Version | Date | Major Changes |
-|---------|------|---------------|
-| **3.5.0** | 2025-10-06 | Mode system, OCI registry, comprehensive documentation |
-| **3.4.0** | 2025-10-06 | Test environment service |
-| **3.3.0** | 2025-09-30 | Interactive guides system |
-| **3.2.0** | 2025-09-30 | Modular CLI refactoring |
-| **3.1.0** | 2025-09-25 | Batch workflow system |
-| **3.0.0** | 2025-09-25 | Hybrid orchestrator architecture |
-| **2.0.5** | 2025-10-02 | Workspace switching system |
-| **2.0.0** | 2025-09-23 | Configuration system migration |
-
----
-
-**Maintained By**: Provisioning Team
-**Last Review**: 2025-10-06
-**Next Review**: 2026-01-06
+
+Key Concepts
+
+Infrastructure as Code (IaC)
+The provisioning platform uses declarative configuration to manage infrastructure. Instead of manually creating resources, you define what you want in Nickel configuration files, and the system makes it happen.
+
+Mode-Based Architecture
+
+The system supports four operational modes:
+
+Solo : Single developer local development
+Multi-user : Team collaboration with shared services
+CI/CD : Automated pipeline execution
+Enterprise : Production deployment with strict compliance
+
+Extension System
+
+Extensibility through:
+
+Providers : Cloud platform integrations (AWS, UpCloud, Local)
+Task Services : Infrastructure components (Kubernetes, databases, etc.)
+Clusters : Complete deployment configurations
+
+OCI-Native Distribution
+
+Extensions and packages distributed as OCI artifacts, enabling:
+
+Industry-standard packaging
+Efficient caching and bandwidth
+Version pinning and rollback
+Air-gapped deployments
+
+Documentation by Role
+
+For New Users
+
+Start with Installation Guide
+Read Getting Started
+Follow From Scratch Guide
+Reference Quickstart Cheatsheet
+
+For Developers
+
+Review System Overview
+Study Design Principles
+Read relevant ADRs
+Follow Development Guide
+Reference Nickel Quick Reference
+
+For Operators
+
+Understand Mode System
+Learn Service Management
+Review Infrastructure Management
+Study OCI Registry
+
+For Architects
+
+Read System Overview
+Study all ADRs
+Review Integration Patterns
+Understand Multi-Repo Architecture
+
+System Capabilities
+
+✅ Infrastructure Automation
+
+Multi-cloud support (AWS, UpCloud, Local)
+Declarative configuration with Nickel
+Automated dependency resolution
+Batch operations with rollback
+
+✅ Workflow Orchestration
+
+Hybrid Rust/Nushell orchestration
+Checkpoint-based recovery
+Parallel execution with limits
+Real-time monitoring
+
+✅ Test Environments
+
+Containerized testing
+Multi-node cluster simulation
+Topology templates
+Automated cleanup
+
+✅ Mode-Based Operation
+
+Solo: Local development
+Multi-user: Team collaboration
+CI/CD: Automated pipelines
+Enterprise: Production deployment
+
+✅ Extension Management
+
+OCI-native distribution
+Automatic dependency resolution
+Version management
+Local and remote sources
+
+Key Achievements
+
+🚀 Batch Workflow System (v3.1.0)
+
+Provider-agnostic batch operations
+Mixed provider support (UpCloud + AWS + local)
+Dependency resolution with soft/hard dependencies
+Real-time monitoring and rollback
+
+🏗️ Hybrid Orchestrator (v3.0.0)
+
+Solves Nushell deep call stack limitations
+Preserves all business logic
+REST API for external integration
+Checkpoint-based state management
+
+⚙️ Configuration System (v2.0.0)
+
+Migrated from ENV to config-driven
+Hierarchical configuration loading
+Variable interpolation
+True IaC without hardcoded fallbacks
+
+🎯 Modular CLI (v3.2.0)
+
+84% reduction in main file size
+Domain-driven handlers
+80+ shortcuts
+Bi-directional help system
+
+🧪 Test Environment Service (v3.4.0)
+
+Automated containerized testing
+Multi-node cluster topologies
+CI/CD integration ready
+Template-based configurations
+
+🔄 Workspace Switching (v2.0.5)
+
+Centralized workspace management
+Single-command workspace switching
+Active workspace tracking
+User preference system
+
+Technology Stack
+
+Component Technology Purpose
+Core CLI Nushell 0.107.1 Shell and scripting
+Configuration Nickel 1.15.0+ Type-safe IaC
+Orchestrator Rust High-performance coordination
+Templates Jinja2 (nu_plugin_tera) Code generation
+Secrets SOPS 3.10.2 + Age 1.2.1 Encryption
+Distribution OCI (skopeo/crane/oras) Artifact management
+
+Support
+
+Getting Help
+
+Documentation : You’re reading it!
+Quick Reference : Run provisioning sc or provisioning guide quickstart
+Help System : Run provisioning help or provisioning <command> help
+Interactive Shell : Run provisioning nu for Nushell REPL
+
+Reporting Issues
+
+Check Troubleshooting Guide
+Review FAQ
+Enable debug mode: provisioning --debug <command>
+Check logs: provisioning platform logs <service>
+
+Contributing
+
+This project welcomes contributions! See Development Guide for:
+
+Development setup
+Code style guidelines
+Testing requirements
+Pull request process
+
+License
+
+[Add license information]
+
+Version History
+
+Version Date Major Changes
+3.5.0 2025-10-06 Mode system, OCI registry, comprehensive documentation
+3.4.0 2025-10-06 Test environment service
+3.3.0 2025-09-30 Interactive guides system
+3.2.0 2025-09-30 Modular CLI refactoring
+3.1.0 2025-09-25 Batch workflow system
+3.0.0 2025-09-25 Hybrid orchestrator architecture
+2.0.5 2025-10-02 Workspace switching system
+2.0.0 2025-09-23 Configuration system migration
+
+
+
+Maintained By : Provisioning Team
+Last Review : 2025-10-06
+Next Review : 2026-01-06
diff --git a/docs/book/print.html b/docs/book/print.html
index b90f43e..4108539 100644
--- a/docs/book/print.html
+++ b/docs/book/print.html
@@ -312,229 +312,202 @@
├── configuration/ # Configuration docs
├── troubleshooting/ # Troubleshooting guides
└── quick-reference/ # Quick references
-```plaintext
-
----
-
-## Key Concepts
-
-### Infrastructure as Code (IaC)
-
-The provisioning platform uses **declarative configuration** to manage infrastructure. Instead of manually creating resources, you define what you want in Nickel configuration files, and the system makes it happen.
-
-### Mode-Based Architecture
-
-The system supports four operational modes:
-
-- **Solo**: Single developer local development
-- **Multi-user**: Team collaboration with shared services
-- **CI/CD**: Automated pipeline execution
-- **Enterprise**: Production deployment with strict compliance
-
-### Extension System
-
-Extensibility through:
-
-- **Providers**: Cloud platform integrations (AWS, UpCloud, Local)
-- **Task Services**: Infrastructure components (Kubernetes, databases, etc.)
-- **Clusters**: Complete deployment configurations
-
-### OCI-Native Distribution
-
-Extensions and packages distributed as OCI artifacts, enabling:
-
-- Industry-standard packaging
-- Efficient caching and bandwidth
-- Version pinning and rollback
-- Air-gapped deployments
-
----
-
-## Documentation by Role
-
-### For New Users
-
-1. Start with **[Installation Guide](getting-started/installation-guide.md)**
-2. Read **[Getting Started](getting-started/getting-started.md)**
-3. Follow **[From Scratch Guide](guides/from-scratch.md)**
-4. Reference **[Quickstart Cheatsheet](guides/quickstart-cheatsheet.md)**
-
-### For Developers
-
-1. Review **[System Overview](architecture/system-overview.md)**
-2. Study **[Design Principles](architecture/design-principles.md)**
-3. Read relevant **[ADRs](architecture/)**
-4. Follow **[Development Guide](development/README.md)**
-5. Reference **KCL Quick Reference**
-
-### For Operators
-
-1. Understand **[Mode System](infrastructure/mode-system)**
-2. Learn **[Service Management](operations/service-management-guide.md)**
-3. Review **[Infrastructure Management](infrastructure/infrastructure-management.md)**
-4. Study **[OCI Registry](integration/oci-registry-guide.md)**
-
-### For Architects
-
-1. Read **[System Overview](architecture/system-overview.md)**
-2. Study all **[ADRs](architecture/)**
-3. Review **[Integration Patterns](architecture/integration-patterns.md)**
-4. Understand **[Multi-Repo Architecture](architecture/multi-repo-architecture.md)**
-
----
-
-## System Capabilities
-
-### ✅ Infrastructure Automation
-
-- Multi-cloud support (AWS, UpCloud, Local)
-- Declarative configuration with KCL
-- Automated dependency resolution
-- Batch operations with rollback
-
-### ✅ Workflow Orchestration
-
-- Hybrid Rust/Nushell orchestration
-- Checkpoint-based recovery
-- Parallel execution with limits
-- Real-time monitoring
-
-### ✅ Test Environments
-
-- Containerized testing
-- Multi-node cluster simulation
-- Topology templates
-- Automated cleanup
-
-### ✅ Mode-Based Operation
-
-- Solo: Local development
-- Multi-user: Team collaboration
-- CI/CD: Automated pipelines
-- Enterprise: Production deployment
-
-### ✅ Extension Management
-
-- OCI-native distribution
-- Automatic dependency resolution
-- Version management
-- Local and remote sources
-
----
-
-## Key Achievements
-
-### 🚀 Batch Workflow System (v3.1.0)
-
-- Provider-agnostic batch operations
-- Mixed provider support (UpCloud + AWS + local)
-- Dependency resolution with soft/hard dependencies
-- Real-time monitoring and rollback
-
-### 🏗️ Hybrid Orchestrator (v3.0.0)
-
-- Solves Nushell deep call stack limitations
-- Preserves all business logic
-- REST API for external integration
-- Checkpoint-based state management
-
-### ⚙️ Configuration System (v2.0.0)
-
-- Migrated from ENV to config-driven
-- Hierarchical configuration loading
-- Variable interpolation
-- True IaC without hardcoded fallbacks
-
-### 🎯 Modular CLI (v3.2.0)
-
-- 84% reduction in main file size
-- Domain-driven handlers
-- 80+ shortcuts
-- Bi-directional help system
-
-### 🧪 Test Environment Service (v3.4.0)
-
-- Automated containerized testing
-- Multi-node cluster topologies
-- CI/CD integration ready
-- Template-based configurations
-
-### 🔄 Workspace Switching (v2.0.5)
-
-- Centralized workspace management
-- Single-command workspace switching
-- Active workspace tracking
-- User preference system
-
----
-
-## Technology Stack
-
-| Component | Technology | Purpose |
-|-----------|------------|---------|
-| **Core CLI** | Nushell 0.107.1 | Shell and scripting |
-| **Configuration** | KCL 0.11.2 | Type-safe IaC |
-| **Orchestrator** | Rust | High-performance coordination |
-| **Templates** | Jinja2 (nu_plugin_tera) | Code generation |
-| **Secrets** | SOPS 3.10.2 + Age 1.2.1 | Encryption |
-| **Distribution** | OCI (skopeo/crane/oras) | Artifact management |
-
----
-
-## Support
-
-### Getting Help
-
-- **Documentation**: You're reading it!
-- **Quick Reference**: Run `provisioning sc` or `provisioning guide quickstart`
-- **Help System**: Run `provisioning help` or `provisioning <command> help`
-- **Interactive Shell**: Run `provisioning nu` for Nushell REPL
-
-### Reporting Issues
-
-- Check **[Troubleshooting Guide](infrastructure/troubleshooting-guide.md)**
-- Review **[FAQ](troubleshooting/troubleshooting-guide.md)**
-- Enable debug mode: `provisioning --debug <command>`
-- Check logs: `provisioning platform logs <service>`
-
----
-
-## Contributing
-
-This project welcomes contributions! See **[Development Guide](development/README.md)** for:
-
-- Development setup
-- Code style guidelines
-- Testing requirements
-- Pull request process
-
----
-
-## License
-
-[Add license information]
-
----
-
-## Version History
-
-| Version | Date | Major Changes |
-|---------|------|---------------|
-| **3.5.0** | 2025-10-06 | Mode system, OCI registry, comprehensive documentation |
-| **3.4.0** | 2025-10-06 | Test environment service |
-| **3.3.0** | 2025-09-30 | Interactive guides system |
-| **3.2.0** | 2025-09-30 | Modular CLI refactoring |
-| **3.1.0** | 2025-09-25 | Batch workflow system |
-| **3.0.0** | 2025-09-25 | Hybrid orchestrator architecture |
-| **2.0.5** | 2025-10-02 | Workspace switching system |
-| **2.0.0** | 2025-09-23 | Configuration system migration |
-
----
-
-**Maintained By**: Provisioning Team
-**Last Review**: 2025-10-06
-**Next Review**: 2026-01-06
+
+Key Concepts
+
+Infrastructure as Code (IaC)
+The provisioning platform uses declarative configuration to manage infrastructure. Instead of manually creating resources, you define what you want in Nickel configuration files, and the system makes it happen.
+
+Mode-Based Architecture
+
+The system supports four operational modes:
+
+Solo : Single developer local development
+Multi-user : Team collaboration with shared services
+CI/CD : Automated pipeline execution
+Enterprise : Production deployment with strict compliance
+
+Extension System
+
+Extensibility through:
+
+Providers : Cloud platform integrations (AWS, UpCloud, Local)
+Task Services : Infrastructure components (Kubernetes, databases, etc.)
+Clusters : Complete deployment configurations
+
+OCI-Native Distribution
+
+Extensions and packages distributed as OCI artifacts, enabling:
+
+Industry-standard packaging
+Efficient caching and bandwidth
+Version pinning and rollback
+Air-gapped deployments
+
+Documentation by Role
+
+For New Users
+
+Start with Installation Guide
+Read Getting Started
+Follow From Scratch Guide
+Reference Quickstart Cheatsheet
+
+For Developers
+
+Review System Overview
+Study Design Principles
+Read relevant ADRs
+Follow Development Guide
+Reference Nickel Quick Reference
+
+For Operators
+
+Understand Mode System
+Learn Service Management
+Review Infrastructure Management
+Study OCI Registry
+
+For Architects
+
+Read System Overview
+Study all ADRs
+Review Integration Patterns
+Understand Multi-Repo Architecture
+
+System Capabilities
+
+✅ Infrastructure Automation
+
+Multi-cloud support (AWS, UpCloud, Local)
+Declarative configuration with Nickel
+Automated dependency resolution
+Batch operations with rollback
+
+✅ Workflow Orchestration
+
+Hybrid Rust/Nushell orchestration
+Checkpoint-based recovery
+Parallel execution with limits
+Real-time monitoring
+
+✅ Test Environments
+
+Containerized testing
+Multi-node cluster simulation
+Topology templates
+Automated cleanup
+
+✅ Mode-Based Operation
+
+Solo: Local development
+Multi-user: Team collaboration
+CI/CD: Automated pipelines
+Enterprise: Production deployment
+
+✅ Extension Management
+
+OCI-native distribution
+Automatic dependency resolution
+Version management
+Local and remote sources
+
+Key Achievements
+
+🚀 Batch Workflow System (v3.1.0)
+
+Provider-agnostic batch operations
+Mixed provider support (UpCloud + AWS + local)
+Dependency resolution with soft/hard dependencies
+Real-time monitoring and rollback
+
+🏗️ Hybrid Orchestrator (v3.0.0)
+
+Solves Nushell deep call stack limitations
+Preserves all business logic
+REST API for external integration
+Checkpoint-based state management
+
+⚙️ Configuration System (v2.0.0)
+
+Migrated from ENV to config-driven
+Hierarchical configuration loading
+Variable interpolation
+True IaC without hardcoded fallbacks
+
+🎯 Modular CLI (v3.2.0)
+
+84% reduction in main file size
+Domain-driven handlers
+80+ shortcuts
+Bi-directional help system
+
+🧪 Test Environment Service (v3.4.0)
+
+Automated containerized testing
+Multi-node cluster topologies
+CI/CD integration ready
+Template-based configurations
+
+🔄 Workspace Switching (v2.0.5)
+
+Centralized workspace management
+Single-command workspace switching
+Active workspace tracking
+User preference system
+
+Technology Stack
+
+Component Technology Purpose
+Core CLI Nushell 0.107.1 Shell and scripting
+Configuration Nickel 1.15.0+ Type-safe IaC
+Orchestrator Rust High-performance coordination
+Templates Jinja2 (nu_plugin_tera) Code generation
+Secrets SOPS 3.10.2 + Age 1.2.1 Encryption
+Distribution OCI (skopeo/crane/oras) Artifact management
+
+Support
+
+Getting Help
+
+Documentation : You’re reading it!
+Quick Reference : Run provisioning sc or provisioning guide quickstart
+Help System : Run provisioning help or provisioning <command> help
+Interactive Shell : Run provisioning nu for Nushell REPL
+
+Reporting Issues
+
+Check Troubleshooting Guide
+Review FAQ
+Enable debug mode: provisioning --debug <command>
+Check logs: provisioning platform logs <service>
+
+Contributing
+
+This project welcomes contributions! See Development Guide for:
+
+Development setup
+Code style guidelines
+Testing requirements
+Pull request process
+
+License
+
+[Add license information]
+
+Version History
+
+Version Date Major Changes
+3.5.0 2025-10-06 Mode system, OCI registry, comprehensive documentation
+3.4.0 2025-10-06 Test environment service
+3.3.0 2025-09-30 Interactive guides system
+3.2.0 2025-09-30 Modular CLI refactoring
+3.1.0 2025-09-25 Batch workflow system
+3.0.0 2025-09-25 Hybrid orchestrator architecture
+2.0.5 2025-10-02 Workspace switching system
+2.0.0 2025-09-23 Configuration system migration
+
+
+
+Maintained By : Provisioning Team
+Last Review : 2025-10-06
+Next Review : 2026-01-06
This guide will help you install Infrastructure Automation on your machine and get it ready for use.
@@ -577,28 +550,19 @@ This project welcomes contributions! See **[Development Guide](development/READM
uname -a # View system information
df -h # Check available disk space
curl --version # Verify internet connectivity
-```plaintext
-
-## Installation Methods
-
-### Method 1: Package Installation (Recommended)
-
-This is the easiest method for most users.
-
-#### Step 1: Download the Package
-
-```bash
-# Download the latest release package
+
+Installation Methods
+
+Method 1: Package Installation (Recommended)
+This is the easiest method for most users.
+
+Step 1: Download the Package
+# Download the latest release package
wget https://releases.example.com/provisioning-latest.tar.gz
# Or using curl
curl -LO https://releases.example.com/provisioning-latest.tar.gz
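Before extracting, it is worth verifying the archive's integrity. A published `.sha256` file alongside the release is an assumption (check the actual release page); the pattern is demonstrated on a locally generated file so the snippet is self-contained:

```shell
# Checksum verification pattern, shown on a locally generated file.
printf 'demo payload' > provisioning-latest.tar.gz
sha256sum provisioning-latest.tar.gz > provisioning-latest.tar.gz.sha256

# For a real download you would fetch the published checksum instead, e.g.:
#   curl -LO https://releases.example.com/provisioning-latest.tar.gz.sha256
sha256sum -c provisioning-latest.tar.gz.sha256
```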
-```plaintext
-
-#### Step 2: Extract and Install
-
-```bash
-# Extract the package
+
+Step 2: Extract and Install
+# Extract the package
tar xzf provisioning-latest.tar.gz
# Navigate to extracted directory
@@ -606,23 +570,18 @@ cd provisioning-*
# Run the installation script
sudo ./install-provisioning
-```plaintext
-
-The installer will:
-
-- Install to `/usr/local/provisioning`
-- Create a global command at `/usr/local/bin/provisioning`
-- Install all required dependencies
-- Set up configuration templates
-
-### Method 2: Container Installation
-
-For containerized environments or testing.
-
-#### Using Docker
-
-```bash
-# Pull the provisioning container
+
+The installer will:
+
+Install to /usr/local/provisioning
+Create a global command at /usr/local/bin/provisioning
+Install all required dependencies
+Set up configuration templates
+
+Method 2: Container Installation
+For containerized environments or testing.
+
+Using Docker
+# Pull the provisioning container
docker pull provisioning:latest
# Create a container with persistent storage
@@ -634,31 +593,23 @@ docker run -it --name provisioning-setup \
docker cp provisioning-setup:/usr/local/provisioning ./
sudo cp -r ./provisioning /usr/local/
sudo ln -sf /usr/local/provisioning/bin/provisioning /usr/local/bin/provisioning
-```plaintext
-
-#### Using Podman
-
-```bash
-# Similar to Docker but with Podman
+
+Using Podman
+# Similar to Docker but with Podman
podman pull provisioning:latest
podman run -it --name provisioning-setup \
-v ~/provisioning-data:/data \
provisioning:latest
-```plaintext
-
-### Method 3: Source Installation
-
-For developers or custom installations.
-
-#### Prerequisites for Source Installation
-
-- **Git** - For cloning the repository
-- **Build tools** - Compiler toolchain for your platform
-
-#### Installation Steps
-
-```bash
-# Clone the repository
+
+Method 3: Source Installation
+For developers or custom installations.
+
+Prerequisites for Source Installation
+Git - For cloning the repository
+Build tools - Compiler toolchain for your platform
+
+Installation Steps
+# Clone the repository
git clone https://github.com/your-org/provisioning.git
cd provisioning
@@ -667,14 +618,10 @@ cd provisioning
# Or if you have development environment
./distro/pack-install.sh
-```plaintext
-
-### Method 4: Manual Installation
-
-For advanced users who want complete control.
-
-```bash
-# Create installation directory
+
+Method 4: Manual Installation
+For advanced users who want complete control.
+# Create installation directory
sudo mkdir -p /usr/local/provisioning
# Copy files (assumes you have the source)
@@ -685,54 +632,42 @@ sudo ln -sf /usr/local/provisioning/core/nulib/provisioning /usr/local/bin/provi
# Install dependencies manually
./install-dependencies.sh
-```plaintext
-
-## Installation Process Details
-
-### What Gets Installed
-
-The installation process sets up:
-
-#### 1. Core System Files
-
-```plaintext
-/usr/local/provisioning/
+
+
+
+The installation process sets up:
+
+/usr/local/provisioning/
├── core/ # Core provisioning logic
├── providers/ # Cloud provider integrations
├── taskservs/ # Infrastructure services
├── cluster/ # Cluster configurations
-├── kcl/ # Configuration schemas
+├── schemas/ # Configuration schemas (Nickel)
├── templates/ # Template files
└── resources/ # Project resources
-```plaintext
-
-#### 2. Required Tools
-
-| Tool | Version | Purpose |
-|------|---------|---------|
-| Nushell | 0.107.1 | Primary shell and scripting |
-| KCL | 0.11.2 | Configuration language |
-| SOPS | 3.10.2 | Secret management |
-| Age | 1.2.1 | Encryption |
-| K9s | 0.50.6 | Kubernetes management |
-
-#### 3. Nushell Plugins
-
-- **nu_plugin_tera** - Template rendering
-- **nu_plugin_kcl** - KCL integration (requires KCL CLI)
-
-#### 4. Configuration Files
-
-- User configuration templates
-- Environment-specific configs
-- Default settings and schemas
-
-## Post-Installation Verification
-
-### Basic Verification
-
-```bash
-# Check if provisioning command is available
+
+
+| Tool | Version | Purpose |
+|------|---------|---------|
+| Nushell | 0.107.1 | Primary shell and scripting |
+| Nickel | 1.15.0+ | Configuration language |
+| SOPS | 3.10.2 | Secret management |
+| Age | 1.2.1 | Encryption |
+| K9s | 0.50.6 | Kubernetes management |
+
+
+
+
+- nu_plugin_tera - Template rendering
+
+
+
+- User configuration templates
+- Environment-specific configs
+- Default settings and schemas
+
+
+
+# Check if provisioning command is available
provisioning --version
# Verify installation
@@ -740,75 +675,52 @@ provisioning env
# Show comprehensive environment info
provisioning allenv
-```plaintext
-
-Expected output should show:
-
-```plaintext
-✅ Provisioning v1.0.0 installed
+
+Expected output should show:
+✅ Provisioning v1.0.0 installed
✅ All dependencies available
✅ Configuration loaded successfully
-```plaintext
-
-### Tool Verification
-
-```bash
-# Check individual tools
+
+
+# Check individual tools
nu --version # Should show Nushell 0.107.1
nickel --version # Should show Nickel 1.15.0+
sops --version # Should show SOPS 3.10.2
age --version # Should show Age 1.2.1
k9s version # Should show K9s 0.50.6
-```plaintext
-
-### Plugin Verification
-
-```bash
-# Start Nushell and check plugins
+
+
+# Start Nushell and check plugins
nu -c "version | get installed_plugins"
# Should include:
# - nu_plugin_tera
# - nu_plugin_kcl (if KCL CLI is installed)
-```plaintext
-
-### Configuration Verification
-
-```bash
-# Validate configuration
+
+
+# Validate configuration
provisioning validate config
# Should show:
# ✅ Configuration validation passed!
-```plaintext
-
-## Environment Setup
-
-### Shell Configuration
-
-Add to your shell profile (`~/.bashrc`, `~/.zshrc`, or `~/.profile`):
-
-```bash
-# Add provisioning to PATH
+
+
+
+Add to your shell profile (~/.bashrc, ~/.zshrc, or ~/.profile):
+# Add provisioning to PATH
export PATH="/usr/local/bin:$PATH"
# Optional: Set default provisioning directory
export PROVISIONING="/usr/local/provisioning"
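The `export PATH` line above is re-applied on every shell start. A guarded variant (a sketch, not part of the installer) keeps `PATH` from accumulating duplicates when the profile is sourced repeatedly:

```shell
# Add a directory to PATH only if it is not already present.
path_add() {
  case ":$PATH:" in
    *":$1:"*) ;;              # already on PATH: nothing to do
    *) PATH="$1:$PATH" ;;
  esac
}

path_add /usr/local/bin
path_add /usr/local/bin       # second call is a no-op
```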
-```plaintext
-
-### Configuration Initialization
-
-```bash
-# Initialize user configuration
+
+
+# Initialize user configuration
provisioning init config
# This creates ~/.provisioning/config.user.toml
-```plaintext
-
-### First-Time Setup
-
-```bash
-# Set up your first workspace
+
+
+# Set up your first workspace
mkdir -p ~/provisioning-workspace
cd ~/provisioning-workspace
@@ -817,14 +729,10 @@ provisioning init config dev
# Verify setup
provisioning env
-```plaintext
-
-## Platform-Specific Instructions
-
-### Linux (Ubuntu/Debian)
-
-```bash
-# Install system dependencies
+
+
+
+# Install system dependencies
sudo apt update
sudo apt install -y curl wget tar
@@ -833,22 +741,16 @@ wget https://releases.example.com/provisioning-latest.tar.gz
tar xzf provisioning-latest.tar.gz
cd provisioning-*
sudo ./install-provisioning
-```plaintext
-
-### Linux (RHEL/CentOS/Fedora)
-
-```bash
-# Install system dependencies
+
+
+# Install system dependencies
sudo dnf install -y curl wget tar
# or for older versions: sudo yum install -y curl wget tar
# Proceed with standard installation
-```plaintext
-
-### macOS
-
-```bash
-# Using Homebrew (if available)
+
+
+# Using Homebrew (if available)
brew install curl wget
# Or download directly
@@ -856,28 +758,20 @@ curl -LO https://releases.example.com/provisioning-latest.tar.gz
tar xzf provisioning-latest.tar.gz
cd provisioning-*
sudo ./install-provisioning
-```plaintext
-
-### Windows (WSL2)
-
-```bash
-# In WSL2 terminal
+
+
+# In WSL2 terminal
sudo apt update
sudo apt install -y curl wget tar
# Proceed with Linux installation steps
wget https://releases.example.com/provisioning-latest.tar.gz
# ... continue as Linux
-```plaintext
-
-## Configuration Examples
-
-### Basic Configuration
-
-Create `~/.provisioning/config.user.toml`:
-
-```toml
-[core]
+
+
+
+Create ~/.provisioning/config.user.toml:
+[core]
name = "my-provisioning"
[paths]
@@ -893,28 +787,20 @@ default = "local"
[output]
format = "yaml"
-```plaintext
-
-### Development Configuration
-
-For developers, use enhanced debugging:
-
-```toml
-[debug]
+
+
+For developers, use enhanced debugging:
+[debug]
enabled = true
log_level = "debug"
check = true
[cache]
enabled = false # Disable caching during development
-```plaintext
-
-## Upgrade and Migration
-
-### Upgrading from Previous Version
-
-```bash
-# Backup current installation
+
+
+
+# Backup current installation
sudo cp -r /usr/local/provisioning /usr/local/provisioning.backup
# Download new version
@@ -927,51 +813,37 @@ sudo ./install-provisioning
# Verify upgrade
provisioning --version
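The backup step above overwrites any previous `.backup` copy. A timestamped variant keeps every pre-upgrade snapshot; `backup_dir` is a hypothetical helper, not part of the installer:

```shell
# Copy a directory to NAME.backup.YYYYMMDD-HHMMSS and print the new path.
backup_dir() {
  dest="$1.backup.$(date +%Y%m%d-%H%M%S)"
  cp -r "$1" "$dest" && printf '%s\n' "$dest"
}

# Usage (run with sufficient privileges for /usr/local):
#   backup_dir /usr/local/provisioning
```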
-```plaintext
-
-### Migrating Configuration
-
-```bash
-# Backup your configuration
+
+
+# Backup your configuration
cp -r ~/.provisioning ~/.provisioning.backup
# Initialize new configuration
provisioning init config
# Manually merge important settings from backup
-```plaintext
-
-## Troubleshooting Installation Issues
-
-### Common Installation Problems
-
-#### Permission Denied Errors
-
-```bash
-# Problem: Cannot write to /usr/local
+
+
+
+
+# Problem: Cannot write to /usr/local
# Solution: Use sudo
sudo ./install-provisioning
# Or install to user directory
./install-provisioning --prefix=$HOME/provisioning
export PATH="$HOME/provisioning/bin:$PATH"
-```plaintext
-
-#### Missing Dependencies
-
-```bash
-# Problem: curl/wget not found
+
+
+# Problem: curl/wget not found
# Ubuntu/Debian solution:
sudo apt install -y curl wget tar
# RHEL/CentOS solution:
sudo dnf install -y curl wget tar
-```plaintext
-
-#### Download Failures
-
-```bash
-# Problem: Cannot download package
+
+
+# Problem: Cannot download package
# Solution: Check internet connection and try alternative
ping google.com
@@ -980,38 +852,28 @@ curl -LO --retry 3 https://releases.example.com/provisioning-latest.tar.gz
# Or use wget with retries
wget --tries=3 https://releases.example.com/provisioning-latest.tar.gz
-```plaintext
-
-#### Extraction Failures
-
-```bash
-# Problem: Archive corrupted
+
+
+# Problem: Archive corrupted
# Solution: Verify and re-download
sha256sum provisioning-latest.tar.gz # Check against published hash
# Re-download if hash doesn't match
rm provisioning-latest.tar.gz
wget https://releases.example.com/provisioning-latest.tar.gz
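Comparing hashes by eye is error-prone. A small helper makes the check scriptable; this is a sketch, and the expected hash must come from the release page:

```shell
# verify_sha256 FILE EXPECTED_HASH -- succeed only when the hashes match.
verify_sha256() {
  actual=$(sha256sum "$1" | awk '{print $1}') || return 1
  [ "$actual" = "$2" ]
}

# Usage with the published hash (placeholder shown):
#   verify_sha256 provisioning-latest.tar.gz "<hash from release page>" \
#     || { rm provisioning-latest.tar.gz; echo "hash mismatch, re-download" >&2; }
```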
-```plaintext
-
-#### Tool Installation Failures
-
-```bash
-# Problem: Nushell installation fails
+
+
+# Problem: Nushell installation fails
# Solution: Check architecture and OS compatibility
uname -m # Should show x86_64 or arm64
uname -s # Should show Linux, Darwin, etc.
# Try manual tool installation
./install-dependencies.sh --verbose
-```plaintext
-
-### Verification Failures
-
-#### Command Not Found
-
-```bash
-# Problem: 'provisioning' command not found
+
+
+
+# Problem: 'provisioning' command not found
# Check installation path
ls -la /usr/local/bin/provisioning
@@ -1021,53 +883,531 @@ sudo ln -sf /usr/local/provisioning/core/nulib/provisioning /usr/local/bin/provi
# Add to PATH if needed
export PATH="/usr/local/bin:$PATH"
echo 'export PATH="/usr/local/bin:$PATH"' >> ~/.bashrc
-```plaintext
-
-#### Plugin Errors
-
-```bash
-# Problem: nu_plugin_kcl not working
+
+
+# Problem: nu_plugin_kcl not working
# Solution: Ensure KCL CLI is installed
kcl version
# If missing, install KCL CLI first
# Then re-run plugin installation
nu -c "plugin add /usr/local/provisioning/plugins/nu_plugin_kcl"
-```plaintext
-
-#### Configuration Errors
-
-```bash
-# Problem: Configuration validation fails
+
+
+# Problem: Configuration validation fails
# Solution: Initialize with template
provisioning init config
# Or validate and show errors
provisioning validate config --detailed
-```plaintext
-
-### Getting Help
-
-If you encounter issues not covered here:
-
-1. **Check logs**: `provisioning --debug env`
-2. **Validate configuration**: `provisioning validate config`
-3. **Check system compatibility**: `provisioning version --verbose`
-4. **Consult troubleshooting guide**: `docs/user/troubleshooting-guide.md`
-
-## Next Steps
-
-After successful installation:
-
-1. **Complete the Getting Started Guide**: `docs/user/getting-started.md`
-2. **Set up your first workspace**: `docs/user/workspace-setup.md`
-3. **Learn about configuration**: `docs/user/configuration.md`
-4. **Try example tutorials**: `docs/user/examples/`
-
-Your provisioning is now ready to manage cloud infrastructure!
+
+If you encounter issues not covered here:
+
+1. Check logs: provisioning --debug env
+2. Validate configuration: provisioning validate config
+3. Check system compatibility: provisioning version --verbose
+4. Consult the troubleshooting guide: docs/user/troubleshooting-guide.md
+
+
+After successful installation:
+
+1. Complete the Getting Started Guide: docs/user/getting-started.md
+2. Set up your first workspace: docs/user/workspace-setup.md
+3. Learn about configuration: docs/user/configuration.md
+4. Try the example tutorials: docs/user/examples/
+
+Your provisioning is now ready to manage cloud infrastructure!
+
+Objective: Validate your provisioning installation, run bootstrap to initialize the workspace, and verify all components are working correctly.
+Expected Duration: 30-45 minutes
+Prerequisites: Fresh clone of the provisioning repository at /Users/Akasha/project-provisioning
+
+
+## Part 1: Verify Prerequisites
+
+Before running the bootstrap script, verify that your system has all required dependencies.
+
+### Step 1.1: Check System Requirements
+
+Run these commands to verify your system meets minimum requirements:
+# Check OS
+uname -s
+# Expected: Darwin (macOS), Linux, or WSL2
+
+# Check CPU cores
+sysctl -n hw.physicalcpu # macOS
+# OR
+nproc # Linux
+# Expected: 2 or more cores
+
+# Check RAM
+sysctl -n hw.memsize | awk '{print $1 / 1024 / 1024 / 1024 " GB"}' # macOS
+# OR
+grep MemTotal /proc/meminfo | awk '{print int($2 / 1024 / 1024) " GB"}' # Linux
+# Expected: 2 GB or more (4 GB+ recommended)
+
+# Check free disk space
+df -h | grep -E '^/dev|^Filesystem'
+# Expected: At least 2 GB free (10 GB+ recommended)
+
+Success Criteria:
+
+- OS is macOS, Linux, or WSL2
+- CPU: 2+ cores available
+- RAM: 2 GB minimum, 4+ GB recommended
+- Disk: 2 GB free minimum
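The criteria above can be checked in one pass. This sketch reuses the same `sysctl`/`nproc` probes shown earlier in this step:

```shell
# meets_minimum VALUE MINIMUM -- integer comparison helper.
meets_minimum() { [ "$1" -ge "$2" ]; }

case "$(uname -s)" in
  Darwin)
    cores=$(sysctl -n hw.physicalcpu)
    mem_gb=$(( $(sysctl -n hw.memsize) / 1024 / 1024 / 1024 )) ;;
  *)
    cores=$(nproc)
    mem_gb=$(( $(awk '/MemTotal/ {print $2}' /proc/meminfo) / 1024 / 1024 )) ;;
esac

meets_minimum "$cores" 2  && echo "CPU ok ($cores cores)"  || echo "CPU below 2-core minimum"
meets_minimum "$mem_gb" 2 && echo "RAM ok (${mem_gb} GB)"  || echo "RAM below 2 GB minimum"
```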
+
+
+### Step 1.2: Verify Nushell
+
+Nushell is required for bootstrap and CLI operations:
+command -v nu
+# Expected output: /path/to/nu
+
+nu --version
+# Expected output: 0.109.0 or higher
+
+If Nushell is not installed:
+# macOS (using Homebrew)
+brew install nushell
+
+# Linux (Debian/Ubuntu)
+sudo apt-get update && sudo apt-get install nushell
+
+# Linux (RHEL/CentOS)
+sudo yum install nushell
+
+# Or install from source: https://nushell.sh/book/installation.html
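Checking `nu --version` against "0.109.0 or higher" by eye can be automated. This sketch relies on GNU `sort -V` for version ordering:

```shell
# version_ge A B -- succeed when version A >= version B (GNU sort -V ordering).
version_ge() {
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n 1)" = "$2" ]
}

required="0.109.0"
installed="0.109.1"            # in practice: installed=$(nu --version)
if version_ge "$installed" "$required"; then
  echo "Nushell $installed meets the $required minimum"
else
  echo "Nushell $installed is older than $required" >&2
fi
```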
+
+
+### Step 1.3: Verify Nickel
+
+Nickel is required for configuration validation:
+command -v nickel
+# Expected output: /path/to/nickel
+
+nickel --version
+# Expected output: nickel 1.x.x or higher
+
+If Nickel is not installed:
+# Install via Cargo (requires Rust)
+cargo install nickel-lang-cli
+
+# Or: https://nickel-lang.org/
+
+
+### Step 1.4: Verify Docker
+
+Docker is required for running containerized services:
+command -v docker
+# Expected output: /path/to/docker
+
+docker --version
+# Expected output: Docker version 20.10 or higher
+
+If Docker is not installed:
+Visit Docker installation guide and install for your OS.
+
+### Step 1.5: Verify the Provisioning Binary
+
+Verify the provisioning CLI binary exists:
+ls -la /Users/Akasha/project-provisioning/provisioning/core/cli/provisioning
+# Expected: -rwxr-xr-x (executable)
+
+file /Users/Akasha/project-provisioning/provisioning/core/cli/provisioning
+# Expected: ELF 64-bit or similar binary format
+
+If binary is not executable:
+chmod +x /Users/Akasha/project-provisioning/provisioning/core/cli/provisioning
+
+
+### Prerequisites Checklist
+
+- [ ] OS is macOS, Linux, or WSL2
+- [ ] CPU: 2+ cores available
+- [ ] RAM: 2 GB minimum installed
+- [ ] Disk: 2+ GB free space
+- [ ] Nushell 0.109.0+ installed
+- [ ] Nickel 1.x.x installed
+- [ ] Docker 20.10+ installed
+- [ ] Provisioning binary exists and is executable
+
+
+
+## Part 2: Run the Bootstrap Script
+
+The bootstrap script automates 7 stages of installation and initialization. Run it from the project root directory.
+
+cd /Users/Akasha/project-provisioning
+
+
+./provisioning/bootstrap/install.sh
+
+
+You should see output similar to this:
+╔════════════════════════════════════════════════════════════════╗
+║ PROVISIONING BOOTSTRAP (Bash) ║
+╚════════════════════════════════════════════════════════════════╝
+
+📊 Stage 1: System Detection
+─────────────────────────────────────────────────────────────────
+ OS: Darwin
+ Architecture: arm64 (or x86_64)
+ CPU Cores: 8
+ Memory: 16 GB
+ ✅ System requirements met
+
+📦 Stage 2: Checking Dependencies
+─────────────────────────────────────────────────────────────────
+ Versions:
+ Docker: Docker version 28.5.2
+ Rust: rustc 1.75.0
+ Nushell: 0.109.1
+ ✅ All dependencies found
+
+📁 Stage 3: Creating Directory Structure
+─────────────────────────────────────────────────────────────────
+ ✅ Directory structure created
+
+⚙️ Stage 4: Validating Configuration
+─────────────────────────────────────────────────────────────────
+ ✅ Configuration syntax valid
+
+📤 Stage 5: Exporting Configuration to TOML
+─────────────────────────────────────────────────────────────────
+ ✅ Configuration exported
+
+🚀 Stage 6: Initializing Orchestrator Service
+─────────────────────────────────────────────────────────────────
+ ✅ Orchestrator started
+
+✅ Stage 7: Verification
+─────────────────────────────────────────────────────────────────
+ ✅ All configuration files generated
+ ✅ All required directories created
+
+╔════════════════════════════════════════════════════════════════╗
+║ BOOTSTRAP COMPLETE ✅ ║
+╚════════════════════════════════════════════════════════════════╝
+
+📍 Next Steps:
+
+1. Verify configuration:
+ cat /Users/Akasha/project-provisioning/workspaces/workspace_librecloud/config/config.ncl
+
+2. Check orchestrator is running:
+ curl http://localhost:9090/health
+
+3. Start provisioning:
+ provisioning server create --infra sgoyol --name web-01
+
+
+The bootstrap script automatically:
+
+1. Detects your system (OS, CPU, RAM, architecture)
+2. Verifies dependencies (Docker, Rust, Nushell)
+3. Creates workspace directories (config, state, cache)
+4. Validates Nickel configuration (syntax checking)
+5. Exports configuration (Nickel → TOML files)
+6. Initializes orchestrator (starts service in background)
+7. Verifies installation (checks all files created)
+
+
+
+## Part 3: Validate the Installation
+
+After bootstrap completes, verify that all components are working correctly.
+
+Bootstrap should have created workspace directories. Verify they exist:
+cd /Users/Akasha/project-provisioning
+
+# Check all required directories
+ls -la workspaces/workspace_librecloud/.orchestrator/data/queue/
+ls -la workspaces/workspace_librecloud/.kms/
+ls -la workspaces/workspace_librecloud/.providers/
+ls -la workspaces/workspace_librecloud/.taskservs/
+ls -la workspaces/workspace_librecloud/.clusters/
+
+Expected Output:
+total 0
+drwxr-xr-x 2 user group 64 Jan 7 10:30 .
+
+(directories exist and are accessible)
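Listing each directory by hand stops at the first failure. A loop reports them all; the workspace path below is the one used throughout this guide:

```shell
# check_dirs DIR... -- print a status line per directory; non-zero exit if any is missing.
check_dirs() {
  missing=0
  for d in "$@"; do
    if [ -d "$d" ]; then echo "ok      $d"; else echo "MISSING $d"; missing=1; fi
  done
  return "$missing"
}

ws="workspaces/workspace_librecloud"
check_dirs "$ws/.orchestrator/data/queue" "$ws/.kms" "$ws/.providers" \
           "$ws/.taskservs" "$ws/.clusters" \
  || echo "some directories are missing; re-run bootstrap"
```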
+
+
+Bootstrap should have exported Nickel configuration to TOML format:
+# Check generated files exist
+ls -la workspaces/workspace_librecloud/config/generated/
+
+# View workspace configuration
+cat workspaces/workspace_librecloud/config/generated/workspace.toml
+
+# View provider configuration
+cat workspaces/workspace_librecloud/config/generated/providers/upcloud.toml
+
+# View orchestrator configuration
+cat workspaces/workspace_librecloud/config/generated/platform/orchestrator.toml
+
+Expected Output:
+config/
+├── generated/
+│ ├── workspace.toml
+│ ├── providers/
+│ │ └── upcloud.toml
+│ └── platform/
+│ └── orchestrator.toml
+
+
+Verify Nickel configuration files have valid syntax:
+cd /Users/Akasha/project-provisioning/workspaces/workspace_librecloud
+
+# Type-check main workspace config
+nickel typecheck config/config.ncl
+# Expected: No output (success) or clear error messages
+
+# Type-check infrastructure configs
+nickel typecheck infra/wuji/main.ncl
+nickel typecheck infra/sgoyol/main.ncl
+
+# Use workspace utility for comprehensive validation
+nu workspace.nu validate
+# Expected: ✓ All files validated successfully
+
+# Type-check all Nickel files
+nu workspace.nu typecheck
+
+Expected Output:
+✓ All files validated successfully
+✓ infra/wuji/main.ncl
+✓ infra/sgoyol/main.ncl
+
+
+The orchestrator service manages workflows and deployments:
+# Check if orchestrator is running (health check)
+curl http://localhost:9090/health
+# Expected: {"status": "healthy"} or similar response
+
+# If health check fails, check orchestrator logs
+tail -f /Users/Akasha/project-provisioning/provisioning/platform/orchestrator/data/orchestrator.log
+
+# Alternative: Check if orchestrator process is running
+ps aux | grep orchestrator
+# Expected: Running orchestrator process visible
+
+Expected Output:
+{
+ "status": "healthy",
+ "uptime": "0:05:23"
+}
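Right after bootstrap the orchestrator may still be starting, so a single `curl` can race it. A retry loop is more robust; this is a sketch, with port 9090 as the default this guide assumes:

```shell
# wait_for_health CMD... -- retry CMD until it succeeds, up to MAX_TRIES attempts.
wait_for_health() {
  tries=0
  until "$@" > /dev/null 2>&1; do
    tries=$((tries + 1))
    [ "$tries" -ge "${MAX_TRIES:-10}" ] && return 1
    sleep "${RETRY_DELAY:-2}"
  done
}

# Usage:
#   wait_for_health curl -fsS http://localhost:9090/health \
#     || echo "orchestrator not responding; check the log" >&2
```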
+
+If Orchestrator Failed to Start:
+Check logs and restart manually:
+cd /Users/Akasha/project-provisioning/provisioning/platform/orchestrator
+
+# Check log file
+cat data/orchestrator.log
+
+# Or start orchestrator manually
+./scripts/start-orchestrator.nu --background
+
+# Verify it's running
+curl http://localhost:9090/health
+
+
+You can install the provisioning CLI globally for easier access:
+# Option A: System-wide installation (requires sudo)
+cd /Users/Akasha/project-provisioning
+sudo ./scripts/install-provisioning.sh
+
+# Verify installation
+provisioning --version
+provisioning help
+
+# Option B: Add to PATH temporarily (current session only)
+export PATH="$PATH:/Users/Akasha/project-provisioning/provisioning/core/cli"
+
+# Verify
+provisioning --version
+
+Expected Output:
+provisioning version 1.0.0
+
+Usage: provisioning [OPTIONS] COMMAND
+
+Commands:
+ server - Server management
+ workspace - Workspace management
+ config - Configuration management
+ help - Show help information
+
+
+- [ ] Workspace directories created (.orchestrator, .kms, .providers, .taskservs, .clusters)
+- [ ] Generated TOML files exist in config/generated/
+- [ ] Nickel type-checking passes (no errors)
+- [ ] Workspace utility validation passes
+- [ ] Orchestrator responding to health check
+- [ ] Orchestrator process running
+- [ ] Provisioning CLI accessible and working
+
+
+
+## Troubleshooting
+
+This section covers common issues and solutions.
+
+Symptoms:
+./provisioning/bootstrap/install.sh: line X: nu: command not found
+
+Solution:
+
+1. Install Nushell (see Step 1.2)
+2. Verify installation: nu --version
+3. Retry the bootstrap script
+
+
+Symptoms:
+⚙️ Stage 4: Validating Configuration
+Error: Nickel configuration validation failed
+
+Solution:
+
+1. Check Nickel syntax: nickel typecheck config/config.ncl
+2. Review the error message for the specific issue
+3. Edit the config file: vim config/config.ncl
+4. Run bootstrap again
+
+
+Symptoms:
+❌ Docker is required but not installed
+
+Solution:
+
+1. Install Docker: Docker installation guide
+2. Verify: docker --version
+3. Retry the bootstrap script
+
+
+Symptoms:
+⚠️ Configuration export encountered issues (may continue)
+
+Solution:
+
+1. Check Nushell library paths: nu -c "use provisioning/core/nulib/lib_provisioning/config/export.nu *"
+2. Verify the export library exists: ls provisioning/core/nulib/lib_provisioning/config/export.nu
+3. Re-export manually:
+cd /Users/Akasha/project-provisioning
+nu -c "
+ use provisioning/core/nulib/lib_provisioning/config/export.nu *
+ export-all-configs 'workspaces/workspace_librecloud'
+"
+
+
+
+
+Symptoms:
+🚀 Stage 6: Initializing Orchestrator Service
+⚠️ Orchestrator may not have started (check logs)
+
+curl http://localhost:9090/health
+# Connection refused
+
+Solution:
+
+1. Check for port conflicts: lsof -i :9090
+2. If port 9090 is in use, either stop the conflicting service or change the orchestrator port in configuration
+3. Check logs: tail -f provisioning/platform/orchestrator/data/orchestrator.log
+4. Start manually: cd provisioning/platform/orchestrator && ./scripts/start-orchestrator.nu --background
+5. Verify: curl http://localhost:9090/health
+
+
+Symptoms:
+Stage 3: Creating Directory Structure
+[sudo] password for user:
+
+Solution:
+
+1. This is normal when creating directories in system locations
+2. Enter your sudo password when prompted
+3. Or run bootstrap from your home directory instead
+
+
+Symptoms:
+bash: ./provisioning/bootstrap/install.sh: Permission denied
+
+Solution:
+# Make script executable
+chmod +x /Users/Akasha/project-provisioning/provisioning/bootstrap/install.sh
+
+# Retry
+./provisioning/bootstrap/install.sh
+
+
+
+After successful installation validation, you can:
+
+To deploy infrastructure to UpCloud:
+# Read workspace deployment guide
+cat workspaces/workspace_librecloud/docs/deployment-guide.md
+
+# Or: From workspace directory
+cd workspaces/workspace_librecloud
+cat docs/deployment-guide.md
+
+
+To create a new workspace for different infrastructure:
+provisioning workspace init my_workspace --template minimal
+
+
+Discover what’s available to deploy:
+# List available task services
+provisioning mod discover taskservs
+
+# List available providers
+provisioning mod discover providers
+
+# List available clusters
+provisioning mod discover clusters
+
+
+
+After completing all steps, verify with this final checklist:
+Prerequisites Verified:
+ [ ] OS is macOS, Linux, or WSL2
+ [ ] CPU: 2+ cores
+ [ ] RAM: 2+ GB available
+ [ ] Disk: 2+ GB free
+ [ ] Nushell 0.109.0+ installed
+ [ ] Nickel 1.x.x installed
+ [ ] Docker 20.10+ installed
+ [ ] Provisioning binary executable
+
+Bootstrap Completed:
+ [ ] All 7 stages completed successfully
+ [ ] No error messages in output
+ [ ] Installation log shows success
+
+Installation Validated:
+ [ ] Workspace directories exist
+ [ ] Generated TOML files exist
+ [ ] Nickel type-checking passes
+ [ ] Workspace validation passes
+ [ ] Orchestrator health check passes
+ [ ] Provisioning CLI works (if installed)
+
+Ready to Deploy:
+ [ ] No errors in validation steps
+ [ ] All services responding correctly
+ [ ] Configuration properly exported
+
+
+
+If you encounter issues not covered here:
+
+1. Check logs: tail -f provisioning/platform/orchestrator/data/orchestrator.log
+2. Enable debug mode: provisioning --debug <command>
+3. Review the bootstrap output: scroll up to see detailed error messages
+4. Check the documentation: provisioning help or provisioning guide <topic>
+5. Workspace guide: cat workspaces/workspace_librecloud/docs/deployment-guide.md
+
+
+
+This guide covers:
+
+✅ Prerequisites verification (Nushell, Nickel, Docker)
+✅ Bootstrap installation (7-stage automated process)
+✅ Installation validation (directories, configs, services)
+✅ Troubleshooting common issues
+✅ Next steps for deployment
+
+You now have a fully installed and validated provisioning system ready for workspace deployment.
-Welcome to Infrastructure Automation! This guide will walk you through your first steps with infrastructure automation, from basic setup to deploying your first infrastructure.
+Welcome to Infrastructure Automation. This guide will walk you through your first steps with infrastructure automation, from basic setup to deploying your first infrastructure.
Essential concepts and terminology
@@ -1084,62 +1424,47 @@ Your provisioning is now ready to manage cloud infrastructure!
✅ Basic familiarity with command-line interfaces
-
+
Provisioning uses declarative configuration to manage infrastructure. Instead of manually creating resources, you define what you want in configuration files, and the system makes it happen.
You describe → System creates → Infrastructure exists
-```plaintext
-
-### Key Components
-
-| Component | Purpose | Example |
-|-----------|---------|---------|
-| **Providers** | Cloud platforms | AWS, UpCloud, Local |
-| **Servers** | Virtual machines | Web servers, databases |
-| **Task Services** | Infrastructure software | Kubernetes, Docker, databases |
-| **Clusters** | Grouped services | Web cluster, database cluster |
-
-### Configuration Languages
-
-- **KCL**: Main configuration language for infrastructure definitions
-- **TOML**: User preferences and system settings
-- **YAML**: Kubernetes manifests and service definitions
-
-## First-Time Setup
-
-### Step 1: Initialize Your Configuration
-
-Create your personal configuration:
-
-```bash
-# Initialize user configuration
+
+
+| Component | Purpose | Example |
+|-----------|---------|---------|
+| Providers | Cloud platforms | AWS, UpCloud, Local |
+| Servers | Virtual machines | Web servers, databases |
+| Task Services | Infrastructure software | Kubernetes, Docker, databases |
+| Clusters | Grouped services | Web cluster, database cluster |
+
+
+
+
+- Nickel: Primary configuration language for infrastructure definitions (type-safe, validated)
+- TOML: User preferences and system settings
+- YAML: Kubernetes manifests and service definitions
+
+
+
+Create your personal configuration:
+# Initialize user configuration
provisioning init config
# This creates ~/.provisioning/config.user.toml
-```plaintext
-
-### Step 2: Verify Your Environment
-
-```bash
-# Check your environment setup
+
+
+# Check your environment setup
provisioning env
# View comprehensive configuration
provisioning allenv
-```plaintext
-
-You should see output like:
-
-```plaintext
-✅ Configuration loaded successfully
+
+You should see output like:
+✅ Configuration loaded successfully
✅ All required tools available
📁 Base path: /usr/local/provisioning
🏠 User config: ~/.provisioning/config.user.toml
-```plaintext
-
-### Step 3: Explore Available Resources
-
-```bash
-# List available providers
+
+
+# List available providers
provisioning list providers
# List available task services
@@ -1147,170 +1472,122 @@ provisioning list taskservs
# List available clusters
provisioning list clusters
-```plaintext
-
-## Your First Infrastructure
-
-Let's create a simple local infrastructure to learn the basics.
-
-### Step 1: Create a Workspace
-
-```bash
-# Create a new workspace directory
+
+
+Let’s create a simple local infrastructure to learn the basics.
+
+# Create a new workspace directory
mkdir ~/my-first-infrastructure
cd ~/my-first-infrastructure
# Initialize workspace
provisioning generate infra --new local-demo
-```plaintext
-
-This creates:
-
-```plaintext
-local-demo/
-├── settings.k # Main infrastructure definition
-├── kcl.mod # KCL module configuration
-└── keys.yaml # Key management (if needed)
-```plaintext
-
-### Step 2: Examine the Configuration
-
-```bash
-# View the generated configuration
+
+This creates:
+local-demo/
+├── config/
+│ └── config.ncl # Master Nickel configuration
+├── infra/
+│ └── default/
+│ ├── main.ncl # Infrastructure definition
+│ └── servers.ncl # Server configurations
+└── docs/ # Auto-generated guides
+
+
+# View the generated configuration
provisioning show settings --infra local-demo
-```plaintext
-
-### Step 3: Validate the Configuration
-
-```bash
-# Validate syntax and structure
+
+
+# Validate syntax and structure
provisioning validate config --infra local-demo
# Should show: ✅ Configuration validation passed!
-```plaintext
-
-### Step 4: Deploy Infrastructure (Check Mode)
-
-```bash
-# Dry run - see what would be created
+
+
+# Dry run - see what would be created
provisioning server create --infra local-demo --check
# This shows planned changes without making them
-```plaintext
-
-### Step 5: Create Your Infrastructure
-
-```bash
-# Create the actual infrastructure
+
+
+# Create the actual infrastructure
provisioning server create --infra local-demo
# Wait for completion
provisioning server list --infra local-demo
-```plaintext
-
-## Working with Services
-
-### Installing Your First Service
-
-Let's install a containerized service:
-
-```bash
-# Install Docker/containerd
+
+
+
+Let’s install a containerized service:
+# Install Docker/containerd
provisioning taskserv create containerd --infra local-demo
# Verify installation
provisioning taskserv list --infra local-demo
-```plaintext
-
-### Installing Kubernetes
-
-For container orchestration:
-
-```bash
-# Install Kubernetes
+
+
+For container orchestration:
+# Install Kubernetes
provisioning taskserv create kubernetes --infra local-demo
# This may take several minutes...
-```plaintext
-
-### Checking Service Status
-
-```bash
-# Show all services on your infrastructure
+
+
+# Show all services on your infrastructure
provisioning show servers --infra local-demo
# Show specific service details
provisioning show servers web-01 taskserv kubernetes --infra local-demo
-```plaintext
-
-## Understanding Commands
-
-### Command Structure
-
-All commands follow this pattern:
-
-```bash
-provisioning [global-options] <command> [command-options] [arguments]
-```plaintext
-
-### Global Options
-
-| Option | Short | Description |
-|--------|-------|-------------|
-| `--infra` | `-i` | Specify infrastructure |
-| `--check` | `-c` | Dry run mode |
-| `--debug` | `-x` | Enable debug output |
-| `--yes` | `-y` | Auto-confirm actions |
-
-### Essential Commands
-
-| Command | Purpose | Example |
-|---------|---------|---------|
-| `help` | Show help | `provisioning help` |
-| `env` | Show environment | `provisioning env` |
-| `list` | List resources | `provisioning list servers` |
-| `show` | Show details | `provisioning show settings` |
-| `validate` | Validate config | `provisioning validate config` |
-
-## Working with Multiple Environments
-
-### Environment Concepts
-
-The system supports multiple environments:
-
-- **dev** - Development and testing
-- **test** - Integration testing
-- **prod** - Production deployment
-
-### Switching Environments
-
-```bash
-# Set environment for this session
+
+
+
+All commands follow this pattern:
+provisioning [global-options] <command> [command-options] [arguments]
+
+
+| Option | Short | Description |
+|--------|-------|-------------|
+| --infra | -i | Specify infrastructure |
+| --check | -c | Dry run mode |
+| --debug | -x | Enable debug output |
+| --yes | -y | Auto-confirm actions |
+
+
+
+| Command | Purpose | Example |
+|---------|---------|---------|
+| help | Show help | provisioning help |
+| env | Show environment | provisioning env |
+| list | List resources | provisioning list servers |
+| show | Show details | provisioning show settings |
+| validate | Validate config | provisioning validate config |
+
+
+
+
+The system supports multiple environments:
+
+- dev - Development and testing
+- test - Integration testing
+- prod - Production deployment
+
+
+# Set environment for this session
export PROVISIONING_ENV=dev
provisioning env
# Or specify per command
provisioning --environment dev server create
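The same precedence (explicit value first, then `PROVISIONING_ENV`, then a default) can be mirrored in wrapper scripts with standard parameter expansion; treating `dev` as the default is an assumption:

```shell
# Resolve the active environment: explicit argument, else PROVISIONING_ENV, else "dev".
resolve_env() {
  printf '%s\n' "${1:-${PROVISIONING_ENV:-dev}}"
}

resolve_env            # falls back to PROVISIONING_ENV, then "dev"
resolve_env prod       # explicit argument wins
```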
-```plaintext
-
-### Environment-Specific Configuration
-
-Create environment configs:
-
-```bash
-# Development environment
+
+
+Create environment configs:
+# Development environment
provisioning init config dev
# Production environment
provisioning init config prod
-```plaintext
-
-## Common Workflows
-
-### Workflow 1: Development Environment
-
-```bash
-# 1. Create development workspace
+
+
+
+# 1. Create development workspace
mkdir ~/dev-environment
cd ~/dev-environment
@@ -1318,7 +1595,7 @@ cd ~/dev-environment
provisioning generate infra --new dev-setup
# 3. Customize for development
-# Edit settings.k to add development tools
+# Edit settings.ncl to add development tools
# 4. Deploy
provisioning server create --infra dev-setup --check
@@ -1327,12 +1604,9 @@ provisioning server create --infra dev-setup
# 5. Install development services
provisioning taskserv create kubernetes --infra dev-setup
provisioning taskserv create containerd --infra dev-setup
-```plaintext
-
-### Workflow 2: Service Updates
-
-```bash
-# Check for service updates
+
+Workflow 2: Service Updates
+# Check for service updates
provisioning taskserv check-updates
# Update specific service
@@ -1340,34 +1614,24 @@ provisioning taskserv update kubernetes --infra dev-setup
# Verify update
provisioning taskserv versions kubernetes
-```plaintext
-
-### Workflow 3: Infrastructure Scaling
-
-```bash
-# Add servers to existing infrastructure
-# Edit settings.k to add more servers
+
+Workflow 3: Infrastructure Scaling
+# Add servers to existing infrastructure
+# Edit settings.ncl to add more servers
# Apply changes
provisioning server create --infra dev-setup
# Install services on new servers
provisioning taskserv create containerd --infra dev-setup
-```plaintext
-
-## Interactive Mode
-
-### Starting Interactive Shell
-
-```bash
-# Start Nushell with provisioning loaded
+
+Interactive Mode
+Starting Interactive Shell
+# Start Nushell with provisioning loaded
provisioning nu
-```plaintext
-
-In the interactive shell, you have access to all provisioning functions:
-
-```nushell
-# Inside Nushell session
+
+In the interactive shell, you have access to all provisioning functions:
+# Inside Nushell session
use lib_provisioning *
# Check environment
@@ -1375,12 +1639,9 @@ show_env
# List available functions
help commands | where name =~ "provision"
-```plaintext
-
-### Useful Interactive Commands
-
-```nushell
-# Show detailed server information
+
+Useful Interactive Commands
+# Show detailed server information
find_servers "web-*" | table
# Get cost estimates
@@ -1388,43 +1649,33 @@ servers_walk_by_costs $settings "" false false "stdout"
# Check task service status
taskservs_list | where status == "running"
-```plaintext
-
-## Configuration Management
-
-### Understanding Configuration Files
-
-1. **System Defaults**: `config.defaults.toml` - System-wide defaults
-2. **User Config**: `~/.provisioning/config.user.toml` - Your preferences
-3. **Environment Config**: `config.{env}.toml` - Environment-specific settings
-4. **Infrastructure Config**: `settings.k` - Infrastructure definitions
-
-### Configuration Hierarchy
-
-```plaintext
-Infrastructure settings.k
+
+
+Configuration Management
+Understanding Configuration Files
+System Defaults : config.defaults.toml - System-wide defaults
+User Config : ~/.provisioning/config.user.toml - Your preferences
+Environment Config : config.{env}.toml - Environment-specific settings
+Infrastructure Config : settings.ncl - Infrastructure definitions
+
+Configuration Hierarchy
+Infrastructure settings.ncl
↓ (overrides)
Environment config.{env}.toml
↓ (overrides)
User config.user.toml
↓ (overrides)
System config.defaults.toml
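The override order above can be pictured as plain last-write-wins assignment. This sketch uses a hypothetical `provider` key and made-up values, each line standing in for one layer, applied from lowest to highest priority:

```shell
# Each assignment stands in for one configuration layer; the last
# (infrastructure) value wins. Key name and values are hypothetical.
provider="local"     # System config.defaults.toml
provider="aws"       # User config.user.toml
provider="upcloud"   # Infrastructure settings.ncl
echo "$provider"     # upcloud
```

Reading the hierarchy bottom-up like this makes it easy to predict which file an effective value came from.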
-```plaintext
-
-### Customizing Your Configuration
-
-```bash
-# Edit user configuration
+
+Customizing Your Configuration
+# Edit user configuration
provisioning sops ~/.provisioning/config.user.toml
# Or using your preferred editor
nano ~/.provisioning/config.user.toml
-```plaintext
-
-Example customizations:
-
-```toml
-[debug]
+
+Example customizations:
+[debug]
enabled = true # Enable debug mode by default
log_level = "debug" # Verbose logging
@@ -1433,14 +1684,10 @@ default = "aws" # Use AWS as default provider
[output]
format = "json" # Prefer JSON output
-```plaintext
-
-## Monitoring and Observability
-
-### Checking System Status
-
-```bash
-# Overall system health
+
+Monitoring and Observability
+Checking System Status
+# Overall system health
provisioning env
# Infrastructure status
@@ -1448,55 +1695,45 @@ provisioning show servers --infra dev-setup
# Service status
provisioning taskserv list --infra dev-setup
-```plaintext
-
-### Logging and Debugging
-
-```bash
-# Enable debug mode for troubleshooting
+
+Logging and Debugging
+# Enable debug mode for troubleshooting
provisioning --debug server create --infra dev-setup --check
# View logs for specific operations
provisioning show logs --infra dev-setup
-```plaintext
-
-### Cost Monitoring
-
-```bash
-# Show cost estimates
+
+Cost Monitoring
+# Show cost estimates
provisioning show cost --infra dev-setup
# Detailed cost breakdown
provisioning server price --infra dev-setup
-```plaintext
-
-## Best Practices
-
-### 1. Configuration Management
-
-- ✅ Use version control for infrastructure definitions
-- ✅ Test changes in development before production
-- ✅ Use `--check` mode to preview changes
-- ✅ Keep user configuration separate from infrastructure
-
-### 2. Security
-
-- ✅ Use SOPS for encrypting sensitive data
-- ✅ Regular key rotation for cloud providers
-- ✅ Principle of least privilege for access
-- ✅ Audit infrastructure changes
-
-### 3. Operational Excellence
-
-- ✅ Monitor infrastructure costs regularly
-- ✅ Keep services updated
-- ✅ Document custom configurations
-- ✅ Plan for disaster recovery
-
-### 4. Development Workflow
-
-```bash
-# 1. Always validate before applying
+
+
+Best Practices
+1. Configuration Management
+✅ Use version control for infrastructure definitions
+✅ Test changes in development before production
+✅ Use --check mode to preview changes
+✅ Keep user configuration separate from infrastructure
+
+
+2. Security
+✅ Use SOPS for encrypting sensitive data
+✅ Regular key rotation for cloud providers
+✅ Principle of least privilege for access
+✅ Audit infrastructure changes
+
+
+3. Operational Excellence
+✅ Monitor infrastructure costs regularly
+✅ Keep services updated
+✅ Document custom configurations
+✅ Plan for disaster recovery
+
+4. Development Workflow
+# 1. Always validate before applying
provisioning validate config --infra my-infra
# 2. Use check mode first
@@ -1507,14 +1744,10 @@ provisioning server create --infra my-infra
# 4. Verify results
provisioning show servers --infra my-infra
-```plaintext
-
-## Getting Help
-
-### Built-in Help System
-
-```bash
-# General help
+
+Getting Help
+Built-in Help System
+# General help
provisioning help
# Command-specific help
@@ -1524,43 +1757,30 @@ provisioning cluster help
# Show available options
provisioning generate help
-```plaintext
-
-### Command Reference
-
-For complete command documentation, see: [CLI Reference](cli-reference.md)
-
-### Troubleshooting
-
-If you encounter issues, see: [Troubleshooting Guide](troubleshooting-guide.md)
-
-## Real-World Example
-
-Let's walk through a complete example of setting up a web application infrastructure:
-
-### Step 1: Plan Your Infrastructure
-
-```bash
-# Create project workspace
+
+Command Reference
+For complete command documentation, see: CLI Reference
+Troubleshooting
+If you encounter issues, see: Troubleshooting Guide
+Real-World Example
+Let’s walk through a complete example of setting up a web application infrastructure:
+Step 1: Plan Your Infrastructure
+# Create project workspace
mkdir ~/webapp-infrastructure
cd ~/webapp-infrastructure
# Generate base infrastructure
provisioning generate infra --new webapp
-```plaintext
-
-### Step 2: Customize Configuration
-
-Edit `webapp/settings.k` to define:
-
-- 2 web servers for load balancing
-- 1 database server
-- Load balancer configuration
-
-### Step 3: Deploy Base Infrastructure
-
-```bash
-# Validate configuration
+
+Step 2: Customize Configuration
+Edit webapp/settings.ncl to define:
+
+2 web servers for load balancing
+1 database server
+Load balancer configuration
+
+Step 3: Deploy Base Infrastructure
+# Validate configuration
provisioning validate config --infra webapp
# Preview deployment
@@ -1568,12 +1788,9 @@ provisioning server create --infra webapp --check
# Deploy servers
provisioning server create --infra webapp
-```plaintext
-
-### Step 4: Install Services
-
-```bash
-# Install container runtime on all servers
+
+Step 4: Install Services
+# Install container runtime on all servers
provisioning taskserv create containerd --infra webapp
# Install load balancer on web servers
@@ -1581,30 +1798,24 @@ provisioning taskserv create haproxy --infra webapp
# Install database on database server
provisioning taskserv create postgresql --infra webapp
-```plaintext
-
-### Step 5: Deploy Application
-
-```bash
-# Create application cluster
+
+Step 5: Deploy Application
+# Create application cluster
provisioning cluster create webapp --infra webapp
# Verify deployment
provisioning show servers --infra webapp
provisioning cluster list --infra webapp
-```plaintext
-
-## Next Steps
-
-Now that you understand the basics:
-
-1. **Set up your workspace**: [Workspace Setup Guide](workspace-setup.md)
-2. **Learn about infrastructure management**: [Infrastructure Management Guide](infrastructure-management.md)
-3. **Understand configuration**: [Configuration Guide](configuration.md)
-4. **Explore examples**: [Examples and Tutorials](examples/)
-
-You're ready to start building and managing cloud infrastructure with confidence!
+Next Steps
+Now that you understand the basics:
+
+Set up your workspace : Workspace Setup Guide
+Learn about infrastructure management : Infrastructure Management Guide
+Understand configuration : Configuration Guide
+Explore examples : Examples and Tutorials
+
+You’re ready to start building and managing cloud infrastructure with confidence!
Version : 3.5.0
Last Updated : 2025-10-09
@@ -1656,7 +1867,7 @@ cargo build --release -p nu_plugin_auth
plugin add target/release/nu_plugin_auth
-Performance : 10x faster encryption (~5ms vs ~50ms HTTP)
+Performance : 10x faster encryption (~5 ms vs ~50 ms HTTP)
# Encrypt with auto-detected backend
kms encrypt "secret data"
# vault:v1:abc123...
@@ -1685,11 +1896,11 @@ kms status
Supported Backends:
-rustyvault : High-performance (~5ms) - Production
-age : Local encryption (~3ms) - Development
-cosmian : Cloud KMS (~30ms)
-aws : AWS KMS (~50ms)
-vault : HashiCorp Vault (~40ms)
+rustyvault : High-performance (~5 ms) - Production
+age : Local encryption (~3 ms) - Development
+cosmian : Cloud KMS (~30 ms)
+aws : AWS KMS (~50 ms)
+vault : HashiCorp Vault (~40 ms)
Installation:
cargo build --release -p nu_plugin_kms
@@ -1700,16 +1911,16 @@ export RUSTYVAULT_ADDR="http://localhost:8200"
export RUSTYVAULT_TOKEN="hvs.xxxxx"
-Performance : 30-50x faster queries (~1ms vs ~30-50ms HTTP)
-# Get orchestrator status (direct file access, ~1ms)
+Performance : 30-50x faster queries (~1 ms vs ~30-50 ms HTTP)
+# Get orchestrator status (direct file access, ~1 ms)
orch status
# { active_tasks: 5, completed_tasks: 120, health: "healthy" }
-# Validate workflow KCL file (~10ms vs ~100ms HTTP)
-orch validate workflows/deploy.k
-orch validate workflows/deploy.k --strict
+# Validate workflow KCL file (~10 ms vs ~100 ms HTTP)
+orch validate workflows/deploy.ncl
+orch validate workflows/deploy.ncl --strict
-# List tasks (direct file read, ~5ms)
+# List tasks (direct file read, ~5 ms)
orch tasks
orch tasks --status running
orch tasks --status failed --limit 10
@@ -1720,12 +1931,12 @@ plugin add target/release/nu_plugin_orchestrator
Operation HTTP API Plugin Speedup
-KMS Encrypt ~50ms ~5ms 10x
-KMS Decrypt ~50ms ~5ms 10x
-Orch Status ~30ms ~1ms 30x
-Orch Validate ~100ms ~10ms 10x
-Orch Tasks ~50ms ~5ms 10x
-Auth Verify ~50ms ~10ms 5x
+KMS Encrypt ~50 ms ~5 ms 10x
+KMS Decrypt ~50 ms ~5 ms 10x
+Orch Status ~30 ms ~1 ms 30x
+Orch Validate ~100 ms ~10 ms 10x
+Orch Tasks ~50 ms ~5 ms 10x
+Auth Verify ~50 ms ~10 ms 5x
@@ -1771,7 +1982,7 @@ provisioning wf cleanup
# Batch shortcuts
provisioning bat # batch (same as 'provisioning batch')
-provisioning bat submit workflows/example.k
+provisioning batch submit workflows/example.ncl
provisioning bat list
provisioning bat status <workflow_id>
provisioning bat monitor <workflow_id>
@@ -2013,8 +2224,8 @@ nu -c "use core/nulib/workflows/management.nu *; workflow status <task_id>
# Submit batch workflow from KCL
-provisioning batch submit workflows/example_batch.k
-nu -c "use core/nulib/workflows/batch.nu *; batch submit workflows/example_batch.k"
+provisioning batch submit workflows/example_batch.ncl
+nu -c "use core/nulib/workflows/batch.nu *; batch submit workflows/example_batch.ncl"
# Monitor batch workflow progress
provisioning batch monitor <workflow_id>
@@ -2179,7 +2390,7 @@ provisioning mfa webauthn verify
provisioning mfa devices
-# Generate AWS STS credentials (15min-12h TTL)
+# Generate AWS STS credentials (15 min to 12 h TTL)
provisioning secrets generate aws --ttl 1hr
# Generate SSH key pair (Ed25519)
@@ -2255,7 +2466,7 @@ provisioning audit query --user alice --action deploy --from 24h
provisioning audit export --format json --output audit-logs.json
-
+
# 1. Initialize workspace
provisioning workspace init --name production
@@ -2418,15 +2629,15 @@ provisioning server list --out text
-# ❌ Slow: HTTP API (50ms per call)
+# ❌ Slow: HTTP API (50 ms per call)
for i in 1..100 { http post http://localhost:9998/encrypt { data: "secret" } }
-# ✅ Fast: Plugin (5ms per call, 10x faster)
+# ✅ Fast: Plugin (5 ms per call, 10x faster)
for i in 1..100 { kms encrypt "secret" }
# Use batch workflows for multiple operations
-provisioning batch submit workflows/multi-cloud-deploy.k
+provisioning batch submit workflows/multi-cloud-deploy.ncl
# Always test with --check first
@@ -2617,7 +2828,7 @@ provisioning server create --check
# Check platform status
provisioning platform status
-
+
After basic setup:
Configure Provider : Add cloud provider credentials
@@ -2626,7 +2837,7 @@ provisioning platform status
Set Up Monitoring : Health checks, logging
Automate Deployments : CI/CD integration
-
+
# Get help
provisioning help
@@ -2675,18 +2886,13 @@ provisioning setup workspace myproject
# Start deploying
provisioning server create
-```plaintext
-
-## Configuration Paths
-
-**macOS**: `~/Library/Application Support/provisioning/`
-**Linux**: `~/.config/provisioning/`
-**Windows**: `%APPDATA%/provisioning/`
-
-## Directory Structure
-
-```plaintext
-provisioning/
+
+Configuration Paths
+macOS : ~/Library/Application Support/provisioning/
+Linux : ~/.config/provisioning/
+Windows : %APPDATA%/provisioning/
+Directory Structure
+provisioning/
├── system.toml # System info (immutable)
├── user_preferences.toml # User settings (editable)
├── platform/ # Platform services
@@ -2696,53 +2902,44 @@ provisioning/
├── config/
├── infra/
└── auth.token
-```plaintext
-
-## Setup Wizard
-
-Run the interactive setup wizard:
-
-```bash
-provisioning setup system --interactive
-```plaintext
-
-The wizard guides you through:
-
-1. Welcome & Prerequisites Check
-2. Operating System Detection
-3. Configuration Path Selection
-4. Platform Services Setup
-5. Provider Selection
-6. Security Configuration
-7. Review & Confirmation
-
-## Configuration Management
-
-### Hierarchy (highest to lowest priority)
-
-1. Runtime Arguments (`--flag value`)
-2. Environment Variables (`PROVISIONING_*`)
-3. Workspace Configuration
-4. Workspace Authentication Token
-5. User Preferences (`user_preferences.toml`)
-6. Platform Configurations (`platform/*.toml`)
-7. Provider Configurations (`providers/*.toml`)
-8. System Configuration (`system.toml`)
-9. Built-in Defaults
-
-### Configuration Files
-
-- `system.toml` - System information (OS, architecture, paths)
-- `user_preferences.toml` - User preferences (editor, format, etc.)
-- `platform/*.toml` - Service endpoints and configuration
-- `providers/*.toml` - Cloud provider settings
-
-## Multiple Workspaces
-
-Create and manage multiple isolated environments:
-
-```bash
-# Create workspace
+
+Setup Wizard
+Run the interactive setup wizard:
+provisioning setup system --interactive
+
+The wizard guides you through:
+
+Welcome & Prerequisites Check
+Operating System Detection
+Configuration Path Selection
+Platform Services Setup
+Provider Selection
+Security Configuration
+Review & Confirmation
+
+
+Configuration Management
+Hierarchy (highest to lowest priority)
+Runtime Arguments (--flag value)
+Environment Variables (PROVISIONING_*)
+Workspace Configuration
+Workspace Authentication Token
+User Preferences (user_preferences.toml)
+Platform Configurations (platform/*.toml)
+Provider Configurations (providers/*.toml)
+System Configuration (system.toml)
+Built-in Defaults
+
+
+Configuration Files
+system.toml - System information (OS, architecture, paths)
+user_preferences.toml - User preferences (editor, format, etc.)
+platform/*.toml - Service endpoints and configuration
+providers/*.toml - Cloud provider settings
+
+Multiple Workspaces
+Create and manage multiple isolated environments:
+# Create workspace
provisioning setup workspace dev
provisioning setup workspace prod
@@ -2751,14 +2948,10 @@ provisioning workspace list
# Activate workspace
provisioning workspace activate prod
-```plaintext
-
-## Configuration Updates
-
-Update any setting:
-
-```bash
-# Update platform configuration
+
+Configuration Updates
+Update any setting:
+# Update platform configuration
provisioning setup platform --config new-config.toml
# Update provider settings
@@ -2766,12 +2959,9 @@ provisioning setup provider upcloud --config upcloud-config.toml
# Validate changes
provisioning setup validate
-```plaintext
-
-## Backup & Restore
-
-```bash
-# Backup current configuration
+
+Backup & Restore
+# Backup current configuration
provisioning setup backup --path ./backup.tar.gz
# Restore from backup
@@ -2779,58 +2969,35 @@ provisioning setup restore --path ./backup.tar.gz
# Migrate from old setup
provisioning setup migrate --from-existing
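Under the hood, a configuration backup of this kind amounts to archiving the config tree. A minimal sketch with a throwaway directory (the paths and file contents are hypothetical; the real command wraps this with validation):

```shell
# Create a stand-in config tree, archive it, then list the archive contents.
mkdir -p /tmp/prov-demo/config
echo 'format = "json"' > /tmp/prov-demo/config/user.toml
tar -czf /tmp/prov-demo/backup.tar.gz -C /tmp/prov-demo config
tar -tzf /tmp/prov-demo/backup.tar.gz
```

The `-C` flag keeps archive paths relative, so the backup restores cleanly into any target directory.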
-```plaintext
-
-## Troubleshooting
-
-### "Command not found: provisioning"
-
-```bash
-export PATH="/usr/local/bin:$PATH"
-```plaintext
-
-### "Nushell not found"
-
-```bash
-curl -sSL https://raw.githubusercontent.com/nushell/nushell/main/install.sh | bash
-```plaintext
-
-### "Cannot write to directory"
-
-```bash
-chmod 755 ~/Library/Application\ Support/provisioning/
-```plaintext
-
-### Check required tools
-
-```bash
-provisioning setup validate --check-tools
-```plaintext
-
-## FAQ
-
-**Q: Do I need all optional tools?**
-A: No. You need at least one deployment tool (Docker, Kubernetes, SSH, or systemd).
-
-**Q: Can I use provisioning without Docker?**
-A: Yes. Provisioning supports Docker, Kubernetes, SSH, systemd, or combinations.
-
-**Q: How do I update configuration?**
-A: `provisioning setup update <category>`
-
-**Q: Can I have multiple workspaces?**
-A: Yes, unlimited workspaces.
-
-**Q: Is my configuration secure?**
-A: Yes. Credentials stored securely, never in config files.
-
-**Q: Can I share workspaces with my team?**
-A: Yes, via GitOps - configurations in Git, secrets in secure storage.
-
-## Getting Help
-
-```bash
-# General help
+
+Troubleshooting
+"Command not found: provisioning"
+export PATH="/usr/local/bin:$PATH"
+
+"Nushell not found"
+cargo install nu   # or: brew install nushell
+
+"Cannot write to directory"
+chmod 755 ~/Library/Application\ Support/provisioning/
+
+Check required tools
+provisioning setup validate --check-tools
+
+FAQ
+Q: Do I need all optional tools?
+A: No. You need at least one deployment tool (Docker, Kubernetes, SSH, or systemd).
+
+Q: Can I use provisioning without Docker?
+A: Yes. Provisioning supports Docker, Kubernetes, SSH, systemd, or combinations.
+
+Q: How do I update configuration?
+A: provisioning setup update <category>
+
+Q: Can I have multiple workspaces?
+A: Yes, unlimited workspaces.
+
+Q: Is my configuration secure?
+A: Yes. Credentials are stored securely, never in config files.
+
+Q: Can I share workspaces with my team?
+A: Yes, via GitOps: configurations in Git, secrets in secure storage.
+Getting Help
+# General help
provisioning help
# Setup help
@@ -2838,21 +3005,18 @@ provisioning help setup
# Specific command help
provisioning setup system --help
-```plaintext
-
-## Next Steps
-
-1. [Installation Guide](installation-guide.md)
-2. [Workspace Setup](workspace-setup.md)
-3. [Provider Configuration](provider-setup.md)
-4. [From Scratch Guide](../guides/from-scratch.md)
-
----
-
-**Status**: Production Ready ✅
-**Version**: 1.0.0
-**Last Updated**: 2025-12-09
+
+Next Steps
+Installation Guide
+Workspace Setup
+Provider Configuration
+From Scratch Guide
+
+
+Status : Production Ready ✅
+Version : 1.0.0
+Last Updated : 2025-12-09
This guide has moved to a multi-chapter format for better readability.
@@ -2881,22 +3045,22 @@ provisioning guide from-scratch
CPU : 2 cores
-RAM : 4GB
-Disk : 20GB available space
+RAM : 4 GB
+Disk : 20 GB available space
Network : Internet connection for downloading dependencies
CPU : 4 cores
-RAM : 8GB
-Disk : 50GB available space
+RAM : 8 GB
+Disk : 50 GB available space
Network : Reliable internet connection
CPU : 16 cores
-RAM : 32GB
-Disk : 500GB available space (SSD recommended)
+RAM : 32 GB
+Disk : 500 GB available space (SSD recommended)
Network : High-bandwidth connection with static IP
@@ -2927,7 +3091,7 @@ provisioning guide from-scratch
Software Version Purpose
Nushell 0.107.1+ Shell and scripting language
-KCL 0.11.2+ Configuration language
+Nickel 1.15.0+ Configuration language
Docker 20.10+ Container runtime (for platform services)
SOPS 3.10.2+ Secrets management
Age 1.2.1+ Encryption tool
@@ -2950,11 +3114,11 @@ nu --version
# Expected output: 0.107.1 or higher
-
-# Check KCL version
-kcl --version
+
+# Check Nickel version
+nickel --version
-# Expected output: 0.11.2 or higher
+# Expected output: 1.15.0 or higher
# Check Docker version
@@ -2985,8 +3149,8 @@ age --version
# Install Nushell
brew install nushell
-# Install KCL
-brew install kcl
+# Install Nickel
+brew install nickel
# Install Docker Desktop
brew install --cask docker
@@ -3012,10 +3176,10 @@ curl -LO https://github.com/nushell/nushell/releases/download/0.107.1/nu-0.107.1
tar xzf nu-0.107.1-x86_64-linux-musl.tar.gz
sudo mv nu /usr/local/bin/
-# Install KCL
-curl -LO https://github.com/kcl-lang/cli/releases/download/v0.11.2/kcl-v0.11.2-linux-amd64.tar.gz
-tar xzf kcl-v0.11.2-linux-amd64.tar.gz
-sudo mv kcl /usr/local/bin/
+# Install Nickel (using Rust cargo)
+curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
+source $HOME/.cargo/env
+cargo install nickel-lang-cli
# Install Docker
sudo apt install -y docker.io
@@ -3034,10 +3198,10 @@ sudo apt install -y age
# Install Nushell
sudo dnf install -y nushell
-# Install KCL (from releases)
-curl -LO https://github.com/kcl-lang/cli/releases/download/v0.11.2/kcl-v0.11.2-linux-amd64.tar.gz
-tar xzf kcl-v0.11.2-linux-amd64.tar.gz
-sudo mv kcl /usr/local/bin/
+# Install Nickel (using Rust cargo)
+curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
+source $HOME/.cargo/env
+cargo install nickel-lang-cli
# Install Docker
sudo dnf install -y docker
@@ -3084,7 +3248,7 @@ sudo dnf install -y age
UpCloud password
Configured via environment variables or config files
-
+
Once all prerequisites are met, proceed to:
→ Installation
@@ -3107,7 +3271,7 @@ cd provisioning-platform
git checkout tags/v3.5.0
-The platform uses several Nushell plugins for enhanced functionality.
+The platform uses multiple Nushell plugins for enhanced functionality.
# Install from crates.io
cargo install nu_plugin_tera
@@ -3115,13 +3279,6 @@ cargo install nu_plugin_tera
# Register with Nushell
nu -c "plugin add ~/.cargo/bin/nu_plugin_tera; plugin use tera"
-
-# Install from custom repository
-cargo install --git https://repo.jesusperez.pro/jesus/nushell-plugins nu_plugin_kcl
-
-# Register with Nushell
-nu -c "plugin add ~/.cargo/bin/nu_plugin_kcl; plugin use kcl"
-
# Start Nushell
nu
@@ -3131,7 +3288,6 @@ plugin list
# Expected output should include:
# - tera
-# - kcl (if installed)
Make the provisioning command available globally:
@@ -3242,7 +3398,7 @@ cargo build --release
# Or headless installation
./target/release/provisioning-installer --headless --mode solo --yes
-
+
If plugins aren’t recognized:
# Rebuild plugin registry
@@ -3264,7 +3420,7 @@ ls -la ~/.config/provisioning/age/
# Regenerate if needed
age-keygen -o ~/.config/provisioning/age/private_key.txt
-
+
Once installation is complete, proceed to:
→ First Deployment
@@ -3291,12 +3447,12 @@ provisioning generate infra --new my-infra
# This creates: workspace/infra/my-infra/
# - config.toml (infrastructure settings)
-# - settings.k (KCL configuration)
+# - settings.ncl (Nickel configuration)
Edit the generated configuration:
# Edit with your preferred editor
-$EDITOR workspace/infra/my-infra/settings.k
+$EDITOR workspace/infra/my-infra/settings.ncl
Example configuration:
let cfg = import "settings.ncl" in
@@ -3328,7 +3484,7 @@ provisioning server create --infra my-infra --check
# ⚠ Check mode: No changes will be made
#
# Would create:
-# - Server: dev-server-01 (2 cores, 4GB RAM, 50GB disk)
+# - Server: dev-server-01 (2 cores, 4 GB RAM, 50 GB disk)
If check mode looks good, create the server:
@@ -3421,8 +3577,8 @@ provisioning workspace init production
# 2. Generate infrastructure
provisioning generate infra --new prod-infra
-# 3. Configure (edit settings.k)
-$EDITOR workspace/infra/prod-infra/settings.k
+# 3. Configure (edit settings.ncl)
+$EDITOR workspace/infra/prod-infra/settings.ncl
# 4. Validate configuration
provisioning validate config --infra prod-infra
@@ -3443,7 +3599,7 @@ provisioning cluster create my-cluster --infra prod-infra
provisioning server list
provisioning taskserv list
-
+
# Check logs
provisioning server logs dev-server-01
@@ -3468,7 +3624,7 @@ ssh -v user@<server-ip>
# Use provisioning SSH helper
provisioning server ssh dev-server-01 --debug
-
+
Now that you’ve completed your first deployment:
→ Verification - Verify your deployment is working correctly
@@ -3673,7 +3829,7 @@ time provisioning server info dev-server-01
time provisioning taskserv list
# Measure workflow submission time
-time provisioning workflow submit test-workflow.k
+time provisioning workflow submit test-workflow.ncl
# Check platform resource usage
@@ -3726,7 +3882,7 @@ No errors in logs
Resource usage is within expected limits
-
+
Once verification is complete:
The configuration system uses layered composition:
-1. Schema (Type contract)
+1. Schema (Type contract)
↓ Defines valid fields and constraints
2. Service Defaults (Base values)
@@ -3961,7 +4117,7 @@ cargo run -p orchestrator -- --log-level debug
# 1. Edit the Nickel configuration directly
vim provisioning/config/runtime/orchestrator.solo.ncl
-# 2. Make your changes (e.g., change port, add environment variables)
+# 2. Make your changes (for example, change port, add environment variables)
# 3. Validate syntax
nickel typecheck provisioning/config/runtime/orchestrator.solo.ncl
@@ -4020,7 +4176,7 @@ EOF
-provisioning/schemas/platform/
+provisioning/schemas/platform/
├── schemas/ # Type contracts (Nickel)
├── defaults/ # Base configuration values
│ └── deployment/ # Mode-specific: solo, multiuser, cicd, enterprise
@@ -4029,7 +4185,7 @@ EOF
└── constraints/ # Validation limits
-provisioning/config/runtime/ # User-specific deployments
+provisioning/config/runtime/ # User-specific deployments
├── orchestrator.solo.ncl # Editable config
├── orchestrator.multiuser.ncl
└── generated/ # Auto-generated, don't edit
@@ -4037,7 +4193,7 @@ EOF
└── orchestrator.multiuser.toml
-provisioning/config/examples/
+provisioning/config/examples/
├── orchestrator.solo.example.ncl # Solo mode reference
└── orchestrator.enterprise.example.ncl # Enterprise mode reference
@@ -4123,11 +4279,11 @@ ls -lah provisioning/config/runtime/generated/orchestrator.solo.toml
✅ Manual workflow works perfectly without installer
✅ CI/CD integration available now
-
+
After completing platform configuration:
Run Services : Start your platform services with configured settings
-Access Web UI : Open Control Center at http://localhost:8080 (default)
+Access Web UI : Open Control Center at http://localhost:8080 (default)
Create First Infrastructure : Deploy your first servers and clusters
Set Up Extensions : Configure providers and task services for your needs
Backup Configuration : Back up runtime configs to private repository
@@ -4179,7 +4335,7 @@ ls -lah provisioning/config/runtime/generated/orchestrator.solo.toml
┌─────────────────────────────────────────────────────────────────┐
│ Configuration Layer │
├─────────────────┬─────────────────┬─────────────────────────────┤
-│ KCL Schemas │ TOML Config │ Templates │
+│ Nickel Schemas│ TOML Config │ Templates │
│ • Type Safety │ • Hierarchy │ • Infrastructure │
│ • Validation │ • Environment │ • Service Configs │
│ • Extensible │ • User Prefs │ • Code Generation │
@@ -4193,303 +4349,260 @@ ls -lah provisioning/config/runtime/generated/orchestrator.solo.toml
│ • UpCloud │ • Services │ • Containers │
│ • Others │ • Storage │ • Host Services │
└─────────────────┴─────────────────┴─────────────────────────────┘
-```plaintext
-
-## Core Components
-
-### 1. Hybrid Architecture Foundation
-
-#### Coordination Layer (Rust)
-
-**Purpose**: High-performance workflow orchestration and system coordination
-
-**Components**:
-
-- **Orchestrator Engine**: Task scheduling and execution coordination
-- **REST API Server**: HTTP endpoints for external integration
-- **State Management**: Persistent state tracking with checkpoint recovery
-- **Batch Processor**: Parallel execution of complex multi-provider workflows
-- **File-based Queue**: Lightweight, reliable task persistence
-- **Error Recovery**: Sophisticated rollback and cleanup capabilities
-
-**Key Features**:
-
-- Solves Nushell deep call stack limitations
-- Handles 1000+ concurrent operations
-- Checkpoint-based recovery from any failure point
-- Real-time workflow monitoring and status tracking
-
-#### Business Logic Layer (Nushell)
-
-**Purpose**: Domain-specific operations and configuration management
-
-**Components**:
-
-- **Provider Implementations**: Cloud-specific operations (AWS, UpCloud, local)
-- **Task Service Management**: Infrastructure component lifecycle
-- **Configuration Processing**: KCL-based configuration validation and templating
-- **CLI Interface**: User-facing command-line tools
-- **Workflow Definitions**: Business process implementations
-
-**Key Features**:
-
-- 65+ domain-specific modules preserved and enhanced
-- Configuration-driven operations with zero hardcoded values
-- Type-safe KCL integration for Infrastructure as Code
-- Extensible provider and service architecture
-
-### 2. Configuration System (v2.0.0)
-
-#### Hierarchical Configuration Management
-
-**Migration Achievement**: 65+ files migrated, 200+ ENV variables → 476 config accessors
-
-**Configuration Hierarchy** (precedence order):
-
-1. **Runtime Parameters** (command line, environment variables)
-2. **Environment Configuration** (dev/test/prod specific)
-3. **Infrastructure Configuration** (project-specific settings)
-4. **User Configuration** (personal preferences)
-5. **System Defaults** (system-wide defaults)
-
-**Configuration Files**:
-
-- `config.defaults.toml` - System-wide defaults
-- `config.user.toml` - User-specific preferences
-- `config.{dev,test,prod}.toml` - Environment-specific configurations
-- Infrastructure-specific configuration files
-
-**Features**:
-
-- **Variable Interpolation**: `{{paths.base}}`, `{{env.HOME}}`, `{{now.date}}`, `{{git.branch}}`
-- **Environment Switching**: `PROVISIONING_ENV=prod` for environment-specific configs
-- **Validation Framework**: Comprehensive configuration validation and error reporting
-- **Migration Tools**: Automated migration from ENV-based to config-driven architecture
-
-### 3. Workflow System (v3.1.0)
-
-#### Batch Workflow Engine
-
-**Batch Capabilities**:
-
-- **Provider-Agnostic Workflows**: Mix UpCloud, AWS, and local providers in single workflow
-- **Dependency Resolution**: Topological sorting with soft/hard dependency support
-- **Parallel Execution**: Configurable parallelism limits with resource management
-- **State Recovery**: Checkpoint-based recovery with rollback capabilities
-- **Real-time Monitoring**: Live progress tracking and health monitoring
-
-**Workflow Types**:
-
-- **Server Workflows**: Multi-provider server provisioning and management
-- **Task Service Workflows**: Infrastructure component installation and configuration
-- **Cluster Workflows**: Complete Kubernetes cluster deployment and management
-- **Batch Workflows**: Complex multi-step operations with dependency management
-
-**KCL Workflow Definitions**:
-
-```kcl
-batch_workflow: BatchWorkflow = {
- name = "multi_cloud_deployment"
- version = "1.0.0"
- parallel_limit = 5
- rollback_enabled = True
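The dependency resolution described above (topological sorting of workflow tasks) can be sketched with coreutils `tsort`. The task names and edges here are hypothetical; each input line means "left must run before right":

```shell
# tsort prints a valid execution order for the declared dependencies.
# create_server has no prerequisites, so it always comes first;
# independent tasks may appear in either order.
printf '%s\n' \
  'create_server install_containerd' \
  'install_containerd install_kubernetes' \
  'create_server configure_network' | tsort
```

The batch engine layers parallelism limits and soft/hard dependency handling on top of this basic ordering idea.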
+
+Core Components
+1. Hybrid Architecture Foundation
+Coordination Layer (Rust)
+Purpose: High-performance workflow orchestration and system coordination
+Components:
+
+Orchestrator Engine: Task scheduling and execution coordination
+REST API Server: HTTP endpoints for external integration
+State Management: Persistent state tracking with checkpoint recovery
+Batch Processor: Parallel execution of complex multi-provider workflows
+File-based Queue: Lightweight, reliable task persistence
+Error Recovery: Sophisticated rollback and cleanup capabilities
+
+Key Features:
+
+Solves Nushell's deep call stack limitations
+Handles 1000+ concurrent operations
+Checkpoint-based recovery from any failure point
+Real-time workflow monitoring and status tracking
+
+
+Purpose: Domain-specific operations and configuration management
+Components:
+
+Provider Implementations: Cloud-specific operations (AWS, UpCloud, local)
+Task Service Management: Infrastructure component lifecycle
+Configuration Processing: Nickel-based configuration validation and templating
+CLI Interface: User-facing command-line tools
+Workflow Definitions: Business process implementations
+
+Key Features:
+
+65+ domain-specific modules preserved and enhanced
+Configuration-driven operations with zero hardcoded values
+Type-safe Nickel integration for Infrastructure as Code
+Extensible provider and service architecture
+
+
+
+Migration Achievement: 65+ files migrated, 200+ ENV variables → 476 config accessors
+Configuration Hierarchy (precedence order):
+
+Runtime Parameters (command line, environment variables)
+Environment Configuration (dev/test/prod specific)
+Infrastructure Configuration (project-specific settings)
+User Configuration (personal preferences)
+System Defaults (system-wide defaults)
+
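The precedence chain above can be sketched as a left-to-right dictionary merge, where later (higher-precedence) layers override earlier ones key by key. This is an illustrative Python sketch only, not the platform's actual loader; all keys and values here are invented:

```python
def merge_config(*layers):
    """Merge dicts left-to-right; the rightmost (highest precedence) wins."""
    merged = {}
    for layer in layers:
        merged.update(layer)
    return merged

# Layers listed from lowest to highest precedence, per the hierarchy above.
system_defaults = {"provider": "local", "parallel_limit": 2, "log_level": "info"}
user_config     = {"provider": "upcloud"}
infra_config    = {"parallel_limit": 5}
env_config      = {"log_level": "debug"}   # e.g. selected via PROVISIONING_ENV
runtime_params  = {"parallel_limit": 10}   # CLI flag overrides everything

config = merge_config(system_defaults, user_config, infra_config,
                      env_config, runtime_params)
```

After the merge, the runtime `parallel_limit` and the environment's `log_level` take effect while the user's `provider` choice survives from the lower layer.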
+Configuration Files:
+
+config.defaults.toml - System-wide defaults
+config.user.toml - User-specific preferences
+config.{dev,test,prod}.toml - Environment-specific configurations
+Infrastructure-specific configuration files
+
+Features:
+
+Variable Interpolation: {{paths.base}}, {{env.HOME}}, {{now.date}}, {{git.branch}}
+Environment Switching: PROVISIONING_ENV=prod for environment-specific configs
+Validation Framework: Comprehensive configuration validation and error reporting
+Migration Tools: Automated migration from ENV-based to config-driven architecture
+
+
+
+Batch Capabilities:
+
+Provider-Agnostic Workflows: Mix UpCloud, AWS, and local providers in a single workflow
+Dependency Resolution: Topological sorting with soft/hard dependency support
+Parallel Execution: Configurable parallelism limits with resource management
+State Recovery: Checkpoint-based recovery with rollback capabilities
+Real-time Monitoring: Live progress tracking and health monitoring
+
+Workflow Types:
+
+Server Workflows: Multi-provider server provisioning and management
+Task Service Workflows: Infrastructure component installation and configuration
+Cluster Workflows: Complete Kubernetes cluster deployment and management
+Batch Workflows: Complex multi-step operations with dependency management
+
+Nickel Workflow Definitions:
+{
+ batch_workflow = {
+ name = "multi_cloud_deployment",
+ version = "1.0.0",
+ parallel_limit = 5,
+ rollback_enabled = true,
operations = [
- {
- id = "servers"
- type = "server_batch"
- provider = "upcloud"
- dependencies = []
- },
- {
- id = "services"
- type = "taskserv_batch"
- provider = "aws"
- dependencies = ["servers"]
- }
+ {
+ id = "servers",
+ type = "server_batch",
+ provider = "upcloud",
+ dependencies = [],
+ },
+ {
+ id = "services",
+ type = "taskserv_batch",
+ provider = "aws",
+ dependencies = ["servers"],
+ }
]
+ }
}
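The dependency handling this definition relies on can be sketched as wave-based scheduling: operations whose dependencies are all satisfied run in the same wave. This is an illustrative Python sketch (operation ids mirror the example above); in the real engine each wave would additionally be chunked by `parallel_limit`:

```python
# Operations and dependencies, mirroring the Nickel definition above.
operations = [
    {"id": "servers",  "dependencies": []},
    {"id": "services", "dependencies": ["servers"]},
]

def execution_waves(ops):
    """Group operations into waves; each wave only depends on earlier waves."""
    done, waves = set(), []
    remaining = {op["id"]: set(op["dependencies"]) for op in ops}
    while remaining:
        ready = sorted(op for op, deps in remaining.items() if deps <= done)
        if not ready:
            raise ValueError("circular dependency detected")
        waves.append(ready)
        done.update(ready)
        for op in ready:
            del remaining[op]
    return waves
```

For the example above this yields two waves: `servers` first, then `services`.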
-```plaintext
-
-### 4. Provider Ecosystem
-
-#### Multi-Provider Architecture
-
-**Supported Providers**:
-
-- **AWS**: Amazon Web Services integration
-- **UpCloud**: UpCloud provider with full feature support
-- **Local**: Local development and testing provider
-
-**Provider Features**:
-
-- **Standardized Interfaces**: Consistent API across all providers
-- **Configuration Templates**: Provider-specific configuration generation
-- **Resource Management**: Complete lifecycle management for cloud resources
-- **Cost Optimization**: Pricing information and cost optimization recommendations
-- **Regional Support**: Multi-region deployment capabilities
-
-#### Task Services Ecosystem
-
-**Infrastructure Components** (40+ services):
-
-- **Container Orchestration**: Kubernetes, container runtimes (containerd, cri-o, crun, runc, youki)
-- **Networking**: Cilium, CoreDNS, HAProxy, service mesh integration
-- **Storage**: Rook-Ceph, external-NFS, Mayastor, persistent volumes
-- **Security**: Policy engines, secrets management, RBAC
-- **Observability**: Monitoring, logging, tracing, metrics collection
-- **Development Tools**: Gitea, databases, build systems
-
-**Service Features**:
-
-- **Version Management**: Real-time version checking against GitHub releases
-- **Configuration Generation**: Automated service configuration from templates
-- **Dependency Management**: Automatic dependency resolution and installation order
-- **Health Monitoring**: Service health checks and status reporting
-
-## Key Architectural Decisions
-
-### 1. Hybrid Language Architecture (ADR-004)
-
-**Decision**: Use Rust for coordination, Nushell for business logic
-**Rationale**: Solves Nushell's deep call stack limitations while preserving domain expertise
-**Impact**: Eliminates technical limitations while maintaining productivity and configuration advantages
-
-### 2. Configuration-Driven Architecture (ADR-002)
-
-**Decision**: Complete migration from ENV variables to hierarchical configuration
-**Rationale**: True Infrastructure as Code requires configuration flexibility without hardcoded fallbacks
-**Impact**: 476 configuration accessors provide complete customization without code changes
-
-### 3. Domain-Driven Structure (ADR-001)
-
-**Decision**: Organize by functional domains (core, platform, provisioning)
-**Rationale**: Clear boundaries enable scalable development and maintenance
-**Impact**: Enables specialized development while maintaining system coherence
-
-### 4. Workspace Isolation (ADR-003)
-
-**Decision**: Isolated user workspaces with hierarchical configuration
-**Rationale**: Multi-user support and customization without system impact
-**Impact**: Complete user independence with easy backup and migration
-
-### 5. Registry-Based Extensions (ADR-005)
-
-**Decision**: Manifest-driven extension framework with structured discovery
-**Rationale**: Enable community contributions while maintaining system stability
-**Impact**: Extensible system supporting custom providers, services, and workflows
-
-## Data Flow Architecture
-
-### Configuration Resolution Flow
-
-```plaintext
-1. Workspace Discovery → 2. Configuration Loading → 3. Hierarchy Merge →
+
+
+
+Supported Providers:
+
+AWS: Amazon Web Services integration
+UpCloud: UpCloud provider with full feature support
+Local: Local development and testing provider
+
+Provider Features:
+
+Standardized Interfaces: Consistent API across all providers
+Configuration Templates: Provider-specific configuration generation
+Resource Management: Complete lifecycle management for cloud resources
+Cost Optimization: Pricing information and cost optimization recommendations
+Regional Support: Multi-region deployment capabilities
+
+
+Infrastructure Components (40+ services):
+
+Container Orchestration: Kubernetes, container runtimes (containerd, cri-o, crun, runc, youki)
+Networking: Cilium, CoreDNS, HAProxy, service mesh integration
+Storage: Rook-Ceph, external-NFS, Mayastor, persistent volumes
+Security: Policy engines, secrets management, RBAC
+Observability: Monitoring, logging, tracing, metrics collection
+Development Tools: Gitea, databases, build systems
+
+Service Features:
+
+Version Management: Real-time version checking against GitHub releases
+Configuration Generation: Automated service configuration from templates
+Dependency Management: Automatic dependency resolution and installation order
+Health Monitoring: Service health checks and status reporting
+
+
+
+Decision: Use Rust for coordination, Nushell for business logic
+Rationale: Solves Nushell's deep call stack limitations while preserving domain expertise
+Impact: Eliminates technical limitations while maintaining productivity and configuration advantages
+
+Decision: Complete migration from ENV variables to hierarchical configuration
+Rationale: True Infrastructure as Code requires configuration flexibility without hardcoded fallbacks
+Impact: 476 configuration accessors provide complete customization without code changes
+
+Decision: Organize by functional domains (core, platform, provisioning)
+Rationale: Clear boundaries enable scalable development and maintenance
+Impact: Enables specialized development while maintaining system coherence
+
+Decision: Isolated user workspaces with hierarchical configuration
+Rationale: Multi-user support and customization without system impact
+Impact: Complete user independence with easy backup and migration
+
+Decision: Manifest-driven extension framework with structured discovery
+Rationale: Enable community contributions while maintaining system stability
+Impact: Extensible system supporting custom providers, services, and workflows
+
+
+1. Workspace Discovery → 2. Configuration Loading → 3. Hierarchy Merge →
4. Variable Interpolation → 5. Schema Validation → 6. Runtime Application
-```plaintext
-
-### Workflow Execution Flow
-
-```plaintext
-1. Workflow Submission → 2. Dependency Analysis → 3. Task Scheduling →
+
+
+1. Workflow Submission → 2. Dependency Analysis → 3. Task Scheduling →
4. Parallel Execution → 5. State Tracking → 6. Result Aggregation →
7. Error Handling → 8. Cleanup/Rollback
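Checkpoint-based recovery in steps 5 and 8 can be sketched as: record each completed step, and skip recorded steps on a rerun so execution resumes after the last checkpoint. This is illustrative Python only, with step names invented:

```python
import json
import os
import tempfile

def run_workflow(steps, checkpoint_path):
    """Run steps in order, persisting a checkpoint after each one."""
    completed = []
    if os.path.exists(checkpoint_path):
        with open(checkpoint_path) as f:
            completed = json.load(f)
    for step in steps:
        if step in completed:
            continue  # recovered from checkpoint: do not re-run this step
        # ... execute the step here ...
        completed.append(step)
        with open(checkpoint_path, "w") as f:
            json.dump(completed, f)
    return completed

ckpt = os.path.join(tempfile.mkdtemp(), "workflow.json")
first  = run_workflow(["submit", "schedule", "execute"], ckpt)
second = run_workflow(["submit", "schedule", "execute"], ckpt)  # rerun is a no-op
```

A rerun after a crash would re-read the checkpoint file and continue from the first unrecorded step.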
-```plaintext
-
-### Provider Integration Flow
-
-```plaintext
-1. Provider Discovery → 2. Configuration Validation → 3. Authentication →
+
+
+1. Provider Discovery → 2. Configuration Validation → 3. Authentication →
4. Resource Planning → 5. Operation Execution → 6. State Persistence →
7. Result Reporting
-```plaintext
-
-## Technology Stack
-
-### Core Technologies
-
-- **Nushell 0.107.1**: Primary shell and scripting language
-- **Rust**: High-performance coordination and orchestration
-- **KCL 0.11.2**: Configuration language for Infrastructure as Code
-- **TOML**: Configuration file format with human readability
-- **JSON**: Data exchange format between components
-
-### Infrastructure Technologies
-
-- **Kubernetes**: Container orchestration platform
-- **Docker/Containerd**: Container runtime environments
-- **SOPS 3.10.2**: Secrets management and encryption
-- **Age 1.2.1**: Encryption tool for secrets
-- **HTTP/REST**: API communication protocols
-
-### Development Technologies
-
-- **nu_plugin_tera**: Native Nushell template rendering
-- **nu_plugin_kcl**: KCL integration for Nushell
-- **K9s 0.50.6**: Kubernetes management interface
-- **Git**: Version control and configuration management
-
-## Scalability and Performance
-
-### Performance Characteristics
-
-- **Batch Processing**: 1000+ concurrent operations with configurable parallelism
-- **Provider Operations**: Sub-second response for most cloud API operations
-- **Configuration Loading**: Millisecond-level configuration resolution
-- **State Persistence**: File-based persistence with minimal overhead
-- **Memory Usage**: Efficient memory management with streaming operations
-
-### Scalability Features
-
-- **Horizontal Scaling**: Multiple orchestrator instances for high availability
-- **Resource Management**: Configurable resource limits and quotas
-- **Caching Strategy**: Multi-level caching for performance optimization
-- **Streaming Operations**: Large dataset processing without memory limits
-- **Async Processing**: Non-blocking operations for improved throughput
-
-## Security Architecture
-
-### Security Layers
-
-- **Workspace Isolation**: User data isolated from system installation
-- **Configuration Security**: Encrypted secrets with SOPS/Age integration
-- **Extension Sandboxing**: Extensions run in controlled environments
-- **API Authentication**: Secure REST API endpoints with authentication
-- **Audit Logging**: Comprehensive audit trails for all operations
-
-### Security Features
-
-- **Secrets Management**: Encrypted configuration files with rotation support
-- **Permission Model**: Role-based access control for operations
-- **Code Signing**: Digital signature verification for extensions
-- **Network Security**: Secure communication with cloud providers
-- **Input Validation**: Comprehensive input validation and sanitization
-
-## Quality Attributes
-
-### Reliability
-
-- **Error Recovery**: Sophisticated error handling and rollback capabilities
-- **State Consistency**: Transactional operations with rollback support
-- **Health Monitoring**: Comprehensive system health checks and monitoring
-- **Fault Tolerance**: Graceful degradation and recovery from failures
-
-### Maintainability
-
-- **Clear Architecture**: Well-defined boundaries and responsibilities
-- **Documentation**: Comprehensive architecture and development documentation
-- **Testing Strategy**: Multi-layer testing with integration validation
-- **Code Quality**: Consistent patterns and quality standards
-
-### Extensibility
-
-- **Plugin Framework**: Registry-based extension system
-- **Provider API**: Standardized interfaces for new providers
-- **Configuration Schema**: Extensible configuration with validation
-- **Workflow Engine**: Custom workflow definitions and execution
-
-This system architecture represents a mature, production-ready platform for Infrastructure as Code with unique architectural innovations and proven scalability.
+
+
+
+Nushell 0.107.1: Primary shell and scripting language
+Rust: High-performance coordination and orchestration
+Nickel 1.15.0+: Configuration language for Infrastructure as Code
+TOML: Configuration file format with human readability
+JSON: Data exchange format between components
+
+
+
+Kubernetes: Container orchestration platform
+Docker/Containerd: Container runtime environments
+SOPS 3.10.2: Secrets management and encryption
+Age 1.2.1: Encryption tool for secrets
+HTTP/REST: API communication protocols
+
+
+
+nu_plugin_tera: Native Nushell template rendering
+K9s 0.50.6: Kubernetes management interface
+Git: Version control and configuration management
+
+
+
+
+Batch Processing: 1000+ concurrent operations with configurable parallelism
+Provider Operations: Sub-second response for most cloud API operations
+Configuration Loading: Millisecond-level configuration resolution
+State Persistence: File-based persistence with minimal overhead
+Memory Usage: Efficient memory management with streaming operations
+
+
+
+Horizontal Scaling: Multiple orchestrator instances for high availability
+Resource Management: Configurable resource limits and quotas
+Caching Strategy: Multi-level caching for performance optimization
+Streaming Operations: Large dataset processing without memory limits
+Async Processing: Non-blocking operations for improved throughput
+
+
+
+
+Workspace Isolation: User data isolated from system installation
+Configuration Security: Encrypted secrets with SOPS/Age integration
+Extension Sandboxing: Extensions run in controlled environments
+API Authentication: Secure REST API endpoints with authentication
+Audit Logging: Comprehensive audit trails for all operations
+
+
+
+Secrets Management: Encrypted configuration files with rotation support
+Permission Model: Role-based access control for operations
+Code Signing: Digital signature verification for extensions
+Network Security: Secure communication with cloud providers
+Input Validation: Comprehensive input validation and sanitization
+
+
+
+
+Error Recovery: Sophisticated error handling and rollback capabilities
+State Consistency: Transactional operations with rollback support
+Health Monitoring: Comprehensive system health checks and monitoring
+Fault Tolerance: Graceful degradation and recovery from failures
+
+
+
+Clear Architecture: Well-defined boundaries and responsibilities
+Documentation: Comprehensive architecture and development documentation
+Testing Strategy: Multi-layer testing with integration validation
+Code Quality: Consistent patterns and quality standards
+
+
+
+Plugin Framework: Registry-based extension system
+Provider API: Standardized interfaces for new providers
+Configuration Schema: Extensible configuration with validation
+Workflow Engine: Custom workflow definitions and execution
+
+This system architecture represents a mature, production-ready platform for Infrastructure as Code with unique architectural innovations and proven scalability.
Version: 3.5.0
Date: 2025-10-06
@@ -4512,16 +4625,16 @@ This system architecture represents a mature, production-ready platform for Infr
-
+
The Provisioning Platform is a modern, cloud-native infrastructure automation system that combines:
-the simplicity of declarative configuration (KCL)
+the simplicity of declarative configuration (Nickel)
the power of shell scripting (Nushell)
high-performance coordination (Rust).
-Hybrid Architecture : Rust for coordination, Nushell for business logic, KCL for configuration
+Hybrid Architecture : Rust for coordination, Nushell for business logic, Nickel for configuration
Mode-Based : Adapts from solo development to enterprise production
OCI-Native : Extends leveraging industry-standard OCI distribution
Provider-Agnostic : Supports multiple cloud providers (AWS, UpCloud) and local infrastructure
@@ -4557,28 +4670,22 @@ This system architecture represents a mature, production-ready platform for Infr
│ └───────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────┘
-```plaintext
-
-### Key Metrics
-
-| Metric | Value | Description |
-|--------|-------|-------------|
-| **Codebase Size** | ~50,000 LOC | Nushell (60%), Rust (30%), KCL (10%) |
-| **Extensions** | 100+ | Providers, taskservs, clusters |
-| **Supported Providers** | 3 | AWS, UpCloud, Local |
-| **Task Services** | 50+ | Kubernetes, databases, monitoring, etc. |
-| **Deployment Modes** | 5 | Binary, Docker, Docker Compose, K8s, Remote |
-| **Operational Modes** | 4 | Solo, Multi-user, CI/CD, Enterprise |
-| **API Endpoints** | 80+ | REST, WebSocket, GraphQL (planned) |
-
----
-
-## System Architecture
-
-### High-Level Architecture
-
-```plaintext
-┌────────────────────────────────────────────────────────────────────────────┐
+
+
+| Metric | Value | Description |
+|--------|-------|-------------|
+| Codebase Size | ~50,000 LOC | Nushell (60%), Rust (30%), Nickel (10%) |
+| Extensions | 100+ | Providers, taskservs, clusters |
+| Supported Providers | 3 | AWS, UpCloud, Local |
+| Task Services | 50+ | Kubernetes, databases, monitoring, etc. |
+| Deployment Modes | 5 | Binary, Docker, Docker Compose, K8s, Remote |
+| Operational Modes | 4 | Solo, Multi-user, CI/CD, Enterprise |
+| API Endpoints | 80+ | REST, WebSocket, GraphQL (planned) |
+
+
+
+
+
+┌────────────────────────────────────────────────────────────────────────────┐
│ PRESENTATION LAYER │
├────────────────────────────────────────────────────────────────────────────┤
│ │
@@ -4595,7 +4702,7 @@ This system architecture represents a mature, production-ready platform for Infr
│ │
│ ┌─────────────────────────────────────────────────────────────────┐ │
│ │ Configuration Management │ │
-│ │ (KCL Schemas | TOML Config | Hierarchical Loading) │ │
+│ │ (Nickel Schemas | TOML Config | Hierarchical Loading) │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ │
│ ┌──────────────────┐ ┌──────────────────┐ ┌──────────────────┐ │
@@ -4667,30 +4774,21 @@ This system architecture represents a mature, production-ready platform for Infr
│ └────────────────┘ └──────────────────┘ └───────────────────┘ │
│ │
└────────────────────────────────────────────────────────────────────────────┘
-```plaintext
-
-### Multi-Repository Architecture
-
-The system is organized into three separate repositories:
-
-#### **provisioning-core**
-
-```plaintext
-Core system functionality
+
+
+The system is organized into three separate repositories:
+
+Core system functionality
├── CLI interface (Nushell entry point)
├── Core libraries (lib_provisioning)
-├── Base KCL schemas
+├── Base Nickel schemas
├── Configuration system
├── Workflow engine
└── Build/distribution tools
-```plaintext
-
-**Distribution**: `oci://registry/provisioning-core:v3.5.0`
-
-#### **provisioning-extensions**
-
-```plaintext
-All provider, taskserv, cluster extensions
+
+Distribution: oci://registry/provisioning-core:v3.5.0
+
+All provider, taskserv, cluster extensions
├── providers/
│ ├── aws/
│ ├── upcloud/
@@ -4704,43 +4802,31 @@ All provider, taskserv, cluster extensions
├── buildkit/
├── web/
└── (10+ more)
-```plaintext
-
-**Distribution**: Each extension as separate OCI artifact
-
-- `oci://registry/provisioning-extensions/kubernetes:1.28.0`
-- `oci://registry/provisioning-extensions/aws:2.0.0`
-
-#### **provisioning-platform**
-
-```plaintext
-Platform services
+
+Distribution: Each extension as separate OCI artifact
+
+oci://registry/provisioning-extensions/kubernetes:1.28.0
+oci://registry/provisioning-extensions/aws:2.0.0
+
+
+Platform services
├── orchestrator/ (Rust)
├── control-center/ (Rust/Yew)
├── mcp-server/ (Rust)
└── api-gateway/ (Rust)
-```plaintext
-
-**Distribution**: Docker images in OCI registry
-
-- `oci://registry/provisioning-platform/orchestrator:v1.2.0`
-
----
-
-## Component Architecture
-
-### Core Components
-
-#### 1. **CLI Interface** (Nushell)
-
-**Location**: `provisioning/core/cli/provisioning`
-
-**Purpose**: Primary user interface for all provisioning operations
-
-**Architecture**:
-
-```plaintext
-Main CLI (211 lines)
+
+Distribution: Docker images in OCI registry
+
+oci://registry/provisioning-platform/orchestrator:v1.2.0
+
+
+
+
+
+Location: provisioning/core/cli/provisioning
+Purpose: Primary user interface for all provisioning operations
+Architecture:
+Main CLI (211 lines)
↓
Command Dispatcher (264 lines)
↓
@@ -4752,43 +4838,34 @@ Domain Handlers (7 modules)
├── generation.nu (78 lines)
├── utilities.nu (157 lines)
└── configuration.nu (316 lines)
-```plaintext
-
-**Key Features**:
-
-- 80+ command shortcuts
-- Bi-directional help system
-- Centralized flag handling
-- Domain-driven design
-
-#### 2. **Configuration System** (KCL + TOML)
-
-**Hierarchical Loading**:
-
-```plaintext
-1. System defaults (config.defaults.toml)
+
+Key Features:
+
+80+ command shortcuts
+Bi-directional help system
+Centralized flag handling
+Domain-driven design
+
+
+Hierarchical Loading:
+1. System defaults (config.defaults.toml)
2. User config (~/.provisioning/config.user.toml)
3. Workspace config (workspace/config/provisioning.yaml)
4. Environment config (workspace/config/{env}-defaults.toml)
5. Infrastructure config (workspace/infra/{name}/config.toml)
6. Runtime overrides (CLI flags, ENV variables)
-```plaintext
-
-**Variable Interpolation**:
-
-- `{{paths.base}}` - Path references
-- `{{env.HOME}}` - Environment variables
-- `{{now.date}}` - Dynamic values
-- `{{git.branch}}` - Git context
-
-#### 3. **Orchestrator** (Rust)
-
-**Location**: `provisioning/platform/orchestrator/`
-
-**Architecture**:
-
-```rust
-src/
+
+Variable Interpolation:
+
+{{paths.base}} - Path references
+{{env.HOME}} - Environment variables
+{{now.date}} - Dynamic values
+{{git.branch}} - Git context
+
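The interpolation forms above can be sketched with a small resolver that walks a nested context by dotted path. This is a hypothetical Python illustration only (the platform's actual rendering goes through nu_plugin_tera), and the context values are invented:

```python
import re

def interpolate(text, context):
    """Replace {{dotted.path}} placeholders with values from a nested dict."""
    def resolve(match):
        value = context
        for part in match.group(1).strip().split("."):
            value = value[part]  # descend one level per dotted segment
        return str(value)
    return re.sub(r"\{\{([^}]+)\}\}", resolve, text)

context = {
    "paths": {"base": "/opt/provisioning"},
    "env": {"HOME": "/home/alice"},
    "git": {"branch": "main"},
}
result = interpolate("{{paths.base}}/cache on {{git.branch}}", context)
```

Dynamic values such as `{{now.date}}` would be computed at resolution time rather than looked up in a static context.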
+
+Location: provisioning/platform/orchestrator/
+Architecture:
+src/
├── main.rs // Entry point
├── api/
│ ├── routes.rs // HTTP routes
@@ -4809,59 +4886,48 @@ src/
└── test_environment/ // Test env management
├── container_manager.rs
├── test_orchestrator.rs
- └── topologies.rs
-```plaintext
-
-**Key Features**:
-
-- File-based task queue (reliable, simple)
-- Checkpoint-based recovery
-- Priority scheduling
-- REST API (HTTP/WebSocket)
-- Nushell script execution bridge
-
-#### 4. **Workflow Engine** (Nushell)
-
-**Location**: `provisioning/core/nulib/workflows/`
-
-**Workflow Types**:
-
-```plaintext
-workflows/
+ └── topologies.rs
+Key Features:
+
+File-based task queue (reliable, simple)
+Checkpoint-based recovery
+Priority scheduling
+REST API (HTTP/WebSocket)
+Nushell script execution bridge
+
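The combination of a task queue with priority scheduling can be sketched with an in-process stand-in for the file-based queue (illustrative Python only; task names and priority values are invented, and the real queue persists tasks to files):

```python
import heapq
import itertools

class TaskQueue:
    """Priority queue: lower number = higher priority, FIFO within a priority."""
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker preserves FIFO order

    def push(self, task, priority=10):
        heapq.heappush(self._heap, (priority, next(self._counter), task))

    def pop(self):
        return heapq.heappop(self._heap)[2]

q = TaskQueue()
q.push("cleanup", priority=10)
q.push("server_create", priority=1)
q.push("health_check", priority=5)
order = [q.pop() for _ in range(3)]  # highest-priority task first
```

Here `server_create` is dispatched first despite being enqueued after `cleanup`, which is the point of priority scheduling over plain FIFO.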
+
+Location: provisioning/core/nulib/workflows/
+Workflow Types:
+workflows/
├── server_create.nu // Server provisioning
├── taskserv.nu // Task service management
├── cluster.nu // Cluster deployment
├── batch.nu // Batch operations
└── management.nu // Workflow monitoring
-```plaintext
-
-**Batch Workflow Features**:
-
-- Provider-agnostic (mix AWS, UpCloud, local)
-- Dependency resolution (hard/soft dependencies)
-- Parallel execution (configurable limits)
-- Rollback support
-- Real-time monitoring
-
-#### 5. **Extension System**
-
-**Extension Types**:
-
-| Type | Count | Purpose | Example |
-|------|-------|---------|---------|
-| **Providers** | 3 | Cloud platform integration | AWS, UpCloud, Local |
-| **Task Services** | 50+ | Infrastructure components | Kubernetes, Postgres |
-| **Clusters** | 10+ | Complete configurations | Buildkit, Web cluster |
-
-**Extension Structure**:
-
-```plaintext
-extension-name/
-├── kcl/
-│ ├── kcl.mod // KCL dependencies
-│ ├── {name}.k // Main schema
-│ ├── version.k // Version management
-│ └── dependencies.k // Dependencies
+
+Batch Workflow Features:
+
+Provider-agnostic (mix AWS, UpCloud, local)
+Dependency resolution (hard/soft dependencies)
+Parallel execution (configurable limits)
+Rollback support
+Real-time monitoring
+
+
+Extension Types:
+
+| Type | Count | Purpose | Example |
+|------|-------|---------|---------|
+| Providers | 3 | Cloud platform integration | AWS, UpCloud, Local |
+| Task Services | 50+ | Infrastructure components | Kubernetes, Postgres |
+| Clusters | 10+ | Complete configurations | Buildkit, Web cluster |
+
+
+Extension Structure:
+extension-name/
+├── schemas/
+│ ├── main.ncl // Main schema
+│ ├── contracts.ncl // Contract definitions
+│ ├── defaults.ncl // Default values
+│ └── version.ncl // Version management
├── scripts/
│ ├── install.nu // Installation logic
│ ├── check.nu // Health check
@@ -4870,23 +4936,19 @@ extension-name/
├── docs/ // Documentation
├── tests/ // Extension tests
└── manifest.yaml // Extension metadata
-```plaintext
-
-**OCI Distribution**:
-Each extension packaged as OCI artifact:
-
-- KCL schemas
-- Nushell scripts
-- Templates
-- Documentation
-- Manifest
-
-#### 6. **Module and Layer System**
-
-**Module System**:
-
-```bash
-# Discover available extensions
+
+OCI Distribution:
+Each extension packaged as OCI artifact:
+
+Nickel schemas
+Nushell scripts
+Templates
+Documentation
+Manifest
+
+
+Module System:
+# Discover available extensions
provisioning module discover taskservs
# Load into workspace
@@ -4894,64 +4956,51 @@ provisioning module load taskserv my-workspace kubernetes containerd
# List loaded modules
provisioning module list taskserv my-workspace
-```plaintext
-
-**Layer System** (Configuration Inheritance):
-
-```plaintext
-Layer 1: Core (provisioning/extensions/{type}/{name})
+
+Layer System (Configuration Inheritance):
+Layer 1: Core (provisioning/extensions/{type}/{name})
↓
Layer 2: Workspace (workspace/extensions/{type}/{name})
↓
Layer 3: Infrastructure (workspace/infra/{infra}/extensions/{type}/{name})
-```plaintext
-
-**Resolution Priority**: Infrastructure → Workspace → Core
-
-#### 7. **Dependency Resolution**
-
-**Algorithm**: Topological sort with cycle detection
-
-**Features**:
-
-- Hard dependencies (must exist)
-- Soft dependencies (optional enhancement)
-- Conflict detection
-- Circular dependency prevention
-- Version compatibility checking
-
-**Example**:
-
-```kcl
-import provisioning.dependencies as schema
-
-_dependencies = schema.TaskservDependencies {
- name = "kubernetes"
- version = "1.28.0"
- requires = ["containerd", "etcd", "os"]
- optional = ["cilium", "helm"]
- conflicts = ["docker", "podman"]
+
+Resolution Priority: Infrastructure → Workspace → Core
+
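The priority order can be sketched as a first-match lookup across layers, most specific first (illustrative Python; the extension names and layer contents are examples):

```python
LAYERS = ["infrastructure", "workspace", "core"]  # highest priority first

def resolve_extension(name, available):
    """available maps layer name -> set of extensions present in that layer."""
    for layer in LAYERS:
        if name in available.get(layer, set()):
            return layer
    raise LookupError(f"extension {name!r} not found in any layer")

available = {
    "core": {"kubernetes", "containerd"},
    "workspace": {"kubernetes"},  # workspace copy overrides the core copy
}
layer = resolve_extension("kubernetes", available)
```

A workspace-level `kubernetes` shadows the core one, while `containerd` still resolves from core.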
+Algorithm: Topological sort with cycle detection
+Features:
+
+Hard dependencies (must exist)
+Soft dependencies (optional enhancement)
+Conflict detection
+Circular dependency prevention
+Version compatibility checking
+
+Example:
+let { TaskservDependencies } = import "provisioning/dependencies.ncl" in
+{
+ kubernetes = TaskservDependencies {
+ name = "kubernetes",
+ version = "1.28.0",
+ requires = ["containerd", "etcd", "os"],
+ optional = ["cilium", "helm"],
+ conflicts = ["docker", "podman"],
+ }
}
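The conflict detection listed among the features can be sketched as a pairwise check over the selected set, mirroring the `requires`/`conflicts` fields of the example (illustrative Python only; the dependency data is a hand-made stand-in):

```python
# Dependency metadata mirroring the Nickel example above.
deps = {
    "kubernetes": {"requires": ["containerd", "etcd"], "conflicts": ["docker", "podman"]},
    "containerd": {"requires": [], "conflicts": []},
    "etcd":       {"requires": [], "conflicts": []},
    "docker":     {"requires": [], "conflicts": ["containerd"]},
}

def check_conflicts(selected, deps):
    """Return human-readable errors for every conflict inside the selection."""
    errors = []
    for name in selected:
        for other in deps[name]["conflicts"]:
            if other in selected:
                errors.append(f"{name} conflicts with {other}")
    return errors

ok  = check_conflicts({"kubernetes", "containerd", "etcd"}, deps)  # no conflicts
bad = check_conflicts({"kubernetes", "docker"}, deps)              # one conflict
```

The real resolver would run this alongside topological sorting and version checks, rejecting a plan before any installation starts.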
-```plaintext
-
-#### 8. **Service Management**
-
-**Supported Services**:
-
-| Service | Type | Category | Purpose |
-|---------|------|----------|---------|
-| orchestrator | Platform | Orchestration | Workflow coordination |
-| control-center | Platform | UI | Web management interface |
-| coredns | Infrastructure | DNS | Local DNS resolution |
-| gitea | Infrastructure | Git | Self-hosted Git service |
-| oci-registry | Infrastructure | Registry | OCI artifact storage |
-| mcp-server | Platform | API | Model Context Protocol |
-| api-gateway | Platform | API | Unified API access |
-
-**Lifecycle Management**:
-
-```bash
-# Start all auto-start services
+
+
+Supported Services:
+
+| Service | Type | Category | Purpose |
+|---------|------|----------|---------|
+| orchestrator | Platform | Orchestration | Workflow coordination |
+| control-center | Platform | UI | Web management interface |
+| coredns | Infrastructure | DNS | Local DNS resolution |
+| gitea | Infrastructure | Git | Self-hosted Git service |
+| oci-registry | Infrastructure | Registry | OCI artifact storage |
+| mcp-server | Platform | API | Model Context Protocol |
+| api-gateway | Platform | API | Unified API access |
+
+
+Lifecycle Management:
+# Start all auto-start services
provisioning platform start
# Start specific service (with dependencies)
@@ -4962,14 +5011,10 @@ provisioning platform health
# View logs
provisioning platform logs orchestrator --follow
-```plaintext
-
-#### 9. **Test Environment Service**
-
-**Architecture**:
-
-```plaintext
-User Command (CLI)
+
+
+Architecture:
+User Command (CLI)
↓
Test Orchestrator (Rust)
↓
@@ -4978,33 +5023,26 @@ Container Manager (bollard)
Docker API
↓
Isolated Test Containers
-```plaintext
-
-**Test Types**:
-
-- Single taskserv testing
-- Server simulation (multiple taskservs)
-- Multi-node cluster topologies
-
-**Topology Templates**:
-
-- `kubernetes_3node` - 3-node HA cluster
-- `kubernetes_single` - All-in-one K8s
-- `etcd_cluster` - 3-node etcd
-- `postgres_redis` - Database stack
-
----
-
-## Mode Architecture
-
-### Mode-Based System Overview
-
-The platform supports four operational modes that adapt the system from individual development to enterprise production.
-
-### Mode Comparison
-
-```plaintext
-┌───────────────────────────────────────────────────────────────────────┐
+
+Test Types:
+
+Single taskserv testing
+Server simulation (multiple taskservs)
+Multi-node cluster topologies
+
+Topology Templates:
+
+kubernetes_3node - 3-node HA cluster
+kubernetes_single - All-in-one K8s
+etcd_cluster - 3-node etcd
+postgres_redis - Database stack
+
+
+
+
+The platform supports four operational modes that adapt the system from individual development to enterprise production.
+
+┌───────────────────────────────────────────────────────────────────────┐
│ MODE ARCHITECTURE │
├───────────────┬───────────────┬───────────────┬───────────────────────┤
│ SOLO │ MULTI-USER │ CI/CD │ ENTERPRISE │
@@ -5032,21 +5070,15 @@ The platform supports four operational modes that adapt the system from individu
│ └─────────┘ │ └──────────┘ │ └─────────-─┘ │ └──────────────────┘ │
│ │ │ │ │
│ Unlimited │ 10 srv, 32 │ 5 srv, 16 │ 20 srv, 64 cores │
-│ │ cores, 128GB │ cores, 64GB │ 256GB per user │
+│ │ cores, 128 GB │ cores, 64 GB │ 256 GB per user │
│ │ │ │ │
└───────────────┴───────────────┴───────────────┴───────────────────────┘
-```plaintext
-
-### Mode Configuration
-
-**Mode Templates**: `workspace/config/modes/{mode}.yaml`
-
-**Active Mode**: `~/.provisioning/config/active-mode.yaml`
-
-**Switching Modes**:
-
-```bash
-# Check current mode
+
+
+Mode Templates: workspace/config/modes/{mode}.yaml
+Active Mode: ~/.provisioning/config/active-mode.yaml
+Switching Modes:
+# Check current mode
provisioning mode current
# Switch to another mode
@@ -5054,14 +5086,10 @@ provisioning mode switch multi-user
# Validate mode requirements
provisioning mode validate enterprise
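The per-mode limits shown in the mode comparison table (unlimited for solo; 10, 5, and 20 servers for multi-user, CI/CD, and enterprise) could be enforced with a pre-flight check. A hypothetical sketch, with illustrative function names; the real CLI applies these limits internally:

```bash
# Hedged sketch: per-mode server quotas from the mode table.
max_servers_for_mode() {
  case "$1" in
    solo)       echo unlimited ;;
    multi-user) echo 10 ;;
    ci-cd)      echo 5 ;;
    enterprise) echo 20 ;;
    *)          echo unknown ;;
  esac
}

check_server_quota() {  # check_server_quota <mode> <requested-count>
  limit="$(max_servers_for_mode "$1")"
  if [ "$limit" = unlimited ] || [ "$2" -le "$limit" ]; then
    echo "allowed"
  else
    echo "denied: $1 mode allows at most $limit servers"
  fi
}

check_server_quota multi-user 8   # → allowed
check_server_quota ci-cd 6        # → denied: ci-cd mode allows at most 5 servers
```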
-```plaintext
-
-### Mode-Specific Workflows
-
-#### Solo Mode
-
-```bash
-# 1. Default mode, no setup needed
+
+
+
+# 1. Default mode, no setup needed
provisioning workspace init
# 2. Start local orchestrator
@@ -5069,12 +5097,9 @@ provisioning platform start orchestrator
# 3. Create infrastructure
provisioning server create
-```plaintext
-
-#### Multi-User Mode
-
-```bash
-# 1. Switch mode and authenticate
+
+
+# 1. Switch mode and authenticate
provisioning mode switch multi-user
provisioning auth login
@@ -5088,12 +5113,9 @@ provisioning extension pull upcloud kubernetes
# 5. Unlock workspace
provisioning workspace unlock my-infra
-```plaintext
-
-#### CI/CD Mode
-
-```yaml
-# GitLab CI
+
+
+# GitLab CI
deploy:
stage: deploy
script:
@@ -5105,12 +5127,9 @@ deploy:
- provisioning server create
after_script:
- provisioning workspace cleanup
-```plaintext
-
-#### Enterprise Mode
-
-```bash
-# 1. Switch to enterprise, verify K8s
+
+
+# 1. Switch to enterprise, verify K8s
provisioning mode switch enterprise
kubectl get pods -n provisioning-system
@@ -5129,16 +5148,11 @@ provisioning infra create
# 6. Release
provisioning workspace unlock prod-deployment
-```plaintext
-
----
-
-## Network Architecture
-
-### Service Communication
-
-```plaintext
-┌──────────────────────────────────────────────────────────────────────┐
+
+
+
+
+┌──────────────────────────────────────────────────────────────────────┐
│ NETWORK LAYER │
├──────────────────────────────────────────────────────────────────────┤
│ │
@@ -5167,56 +5181,49 @@ provisioning workspace unlock prod-deployment
│ └────────────────────────────────────────────────────────────┘ │
│ │
└──────────────────────────────────────────────────────────────────────┘
-```plaintext
-
-### Port Allocation
-
-| Service | Port | Protocol | Purpose |
-|---------|------|----------|---------|
-| Orchestrator | 8080 | HTTP/WS | REST API, WebSocket |
-| Control Center | 3000 | HTTP | Web UI |
-| CoreDNS | 5353 | UDP/TCP | DNS resolution |
-| Gitea | 3001 | HTTP | Git operations |
-| OCI Registry (Zot) | 5000 | HTTP | OCI artifacts |
-| OCI Registry (Harbor) | 443 | HTTPS | OCI artifacts (prod) |
-| MCP Server | 8081 | HTTP | MCP protocol |
-| API Gateway | 8082 | HTTP | Unified API |
-
-### Network Security
-
-**Solo Mode**:
-
-- Localhost-only bindings
-- No authentication
-- No encryption
-
-**Multi-User Mode**:
-
-- Token-based authentication (JWT)
-- TLS for external access
-- Firewall rules
-
-**CI/CD Mode**:
-
-- Token authentication (short-lived)
-- Full TLS encryption
-- Network isolation
-
-**Enterprise Mode**:
-
-- mTLS for all connections
-- Network policies (Kubernetes)
-- Zero-trust networking
-- Audit logging
-
----
-
-## Data Architecture
-
-### Data Storage
-
-```plaintext
-┌────────────────────────────────────────────────────────────────┐
+
+
+| Service | Port | Protocol | Purpose |
+|---------|------|----------|---------|
+| Orchestrator | 8080 | HTTP/WS | REST API, WebSocket |
+| Control Center | 3000 | HTTP | Web UI |
+| CoreDNS | 5353 | UDP/TCP | DNS resolution |
+| Gitea | 3001 | HTTP | Git operations |
+| OCI Registry (Zot) | 5000 | HTTP | OCI artifacts |
+| OCI Registry (Harbor) | 443 | HTTPS | OCI artifacts (prod) |
+| MCP Server | 8081 | HTTP | MCP protocol |
+| API Gateway | 8082 | HTTP | Unified API |
+
+
+
+Solo Mode:
+
+Localhost-only bindings
+No authentication
+No encryption
+
+Multi-User Mode:
+
+Token-based authentication (JWT)
+TLS for external access
+Firewall rules
+
+CI/CD Mode:
+
+Token authentication (short-lived)
+Full TLS encryption
+Network isolation
+
+Enterprise Mode:
+
+mTLS for all connections
+Network policies (Kubernetes)
+Zero-trust networking
+Audit logging
+
+
+
+
+┌────────────────────────────────────────────────────────────────┐
│ DATA LAYER │
├────────────────────────────────────────────────────────────────┤
│ │
@@ -5234,7 +5241,7 @@ provisioning workspace unlock prod-deployment
│ │ │ ├── provisioning.yaml (Workspace config) │ │
│ │ │ └── modes/*.yaml (Mode templates) │ │
│ │ └── infra/{name}/ │ │
-│ │ ├── settings.k (Infrastructure KCL) │ │
+│ │ ├── main.ncl (Infrastructure Nickel) │ │
│ │ └── config.toml (Infra-specific) │ │
│ └─────────────────────────────────────────────────────────┘ │
│ │
@@ -5257,7 +5264,7 @@ provisioning workspace unlock prod-deployment
│ │ │ │
│ │ ~/.provisioning/cache/ │ │
│ │ ├── oci/ (OCI artifacts) │ │
-│ │ ├── kcl/ (Compiled KCL) │ │
+│ │ ├── schemas/ (Nickel compiled) │ │
│ │ └── modules/ (Module cache) │ │
│ └─────────────────────────────────────────────────────────┘ │
│ │
@@ -5290,51 +5297,36 @@ provisioning workspace unlock prod-deployment
│ └─────────────────────────────────────────────────────────┘ │
│ │
└────────────────────────────────────────────────────────────────┘
-```plaintext
-
-### Data Flow
-
-**Configuration Loading**:
-
-```plaintext
-1. Load system defaults (config.defaults.toml)
+
+
+Configuration Loading:
+1. Load system defaults (config.defaults.toml)
2. Merge user config (~/.provisioning/config.user.toml)
3. Load workspace config (workspace/config/provisioning.yaml)
4. Load environment config (workspace/config/{env}-defaults.toml)
5. Load infrastructure config (workspace/infra/{name}/config.toml)
6. Apply runtime overrides (ENV variables, CLI flags)
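The six-step load order is a last-writer-wins merge: each later source overrides earlier keys. A minimal sketch under that assumption, using flat key=value files in place of the real TOML/YAML sources (file names are illustrative):

```bash
# Hedged sketch: hierarchical config merge, later sources win per key.
load_config() {
  # Arguments are listed lowest to highest precedence.
  for f in "$@"; do
    [ -f "$f" ] && cat "$f"
  done | awk -F= '{v[$1]=$2} END {for (k in v) print k "=" v[k]}' | sort
}

printf 'provider=local\nregion=eu-1\n' > /tmp/defaults.conf
printf 'provider=aws\n' > /tmp/user.conf

load_config /tmp/defaults.conf /tmp/user.conf
# provider=aws
# region=eu-1
```

The user config overrides `provider` while the untouched `region` key survives from the defaults, mirroring how workspace, environment, and runtime layers refine the system defaults.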
-```plaintext
-
-**State Persistence**:
-
-```plaintext
-Workflow execution
+
+State Persistence:
+Workflow execution
↓
Create checkpoint (JSON)
↓
Save to ~/.provisioning/orchestrator/data/checkpoints/
↓
On failure, load checkpoint and resume
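The checkpoint cycle can be sketched with one JSON file per workflow; paths and field names here are illustrative, not the orchestrator's actual schema:

```bash
# Hedged sketch: file-based checkpointing with resume-from-step logic.
CKPT_DIR="${CKPT_DIR:-/tmp/provisioning-checkpoints}"
mkdir -p "$CKPT_DIR"

save_checkpoint() {  # save_checkpoint <workflow-id> <completed-step>
  printf '{"workflow":"%s","completed_step":%s}\n' "$1" "$2" \
    > "$CKPT_DIR/$1.json"
}

resume_step() {      # print the step to resume from (0 if no checkpoint)
  f="$CKPT_DIR/$1.json"
  if [ -f "$f" ]; then
    sed -n 's/.*"completed_step":\([0-9]*\).*/\1/p' "$f"
  else
    echo 0
  fi
}

save_checkpoint deploy-42 3
resume_step deploy-42   # → 3
```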
-```plaintext
-
-**OCI Artifact Flow**:
-
-```plaintext
-1. Package extension (oci-package.nu)
+
+OCI Artifact Flow:
+1. Package extension (oci-package.nu)
2. Push to OCI registry (provisioning oci push)
3. Extension stored as OCI artifact
4. Pull when needed (provisioning oci pull)
5. Cache locally (~/.provisioning/cache/oci/)
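Steps 4-5 amount to a cache-first lookup. A sketch with a placeholder in place of the real registry pull (the actual client would use the provisioning OCI commands):

```bash
# Hedged sketch: pull an OCI artifact only when it is not cached locally.
CACHE_DIR="${CACHE_DIR:-/tmp/oci-cache}"
mkdir -p "$CACHE_DIR"

pull_artifact() {  # pull_artifact <name:tag>
  key="$(printf '%s' "$1" | tr ':/' '__')"
  if [ -f "$CACHE_DIR/$key" ]; then
    echo "cache hit: $1"
  else
    echo "pulling $1 from registry"   # placeholder for the real pull
    : > "$CACHE_DIR/$key"
  fi
}

pull_artifact kubernetes:1.28.0   # first call pulls from the registry
pull_artifact kubernetes:1.28.0   # second call is served from the cache
```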
-```plaintext
-
----
-
-## Security Architecture
-
-### Security Layers
-
-```plaintext
-┌─────────────────────────────────────────────────────────────────┐
+
+
+
+
+┌─────────────────────────────────────────────────────────────────┐
│ SECURITY ARCHITECTURE │
├─────────────────────────────────────────────────────────────────┤
│ │
@@ -5398,63 +5390,43 @@ On failure, load checkpoint and resume
│ └────────────────────────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────┘
-```plaintext
-
-### Secret Management
-
-**SOPS Integration**:
-
-```bash
-# Edit encrypted file
+
+
+SOPS Integration:
+# Edit encrypted file
provisioning sops workspace/secrets/keys.yaml.enc
# Encryption happens automatically on save
# Decryption happens automatically on load
-```plaintext
-
-**KMS Integration** (Enterprise):
-
-```yaml
-# workspace/config/provisioning.yaml
+
+KMS Integration (Enterprise):
+# workspace/config/provisioning.yaml
secrets:
provider: "kms"
kms:
type: "aws" # or "vault"
region: "us-east-1"
key_id: "arn:aws:kms:..."
-```plaintext
-
-### Image Signing and Verification
-
-**CI/CD Mode** (Required):
-
-```bash
-# Sign OCI artifact
+
+
+CI/CD Mode (Required):
+# Sign OCI artifact
cosign sign oci://registry/kubernetes:1.28.0
# Verify signature
cosign verify oci://registry/kubernetes:1.28.0
-```plaintext
-
-**Enterprise Mode** (Mandatory):
-
-```bash
-# Pull with verification
+
+Enterprise Mode (Mandatory):
+# Pull with verification
provisioning extension pull kubernetes --verify-signature
# System blocks unsigned artifacts
-```plaintext
-
----
-
-## Deployment Architecture
-
-### Deployment Modes
-
-#### 1. **Binary Deployment** (Solo, Multi-user)
-
-```plaintext
-User Machine
+
+
+
+
+
+User Machine
├── ~/.provisioning/bin/
│ ├── provisioning-orchestrator
│ ├── provisioning-control-center
@@ -5462,30 +5434,22 @@ User Machine
├── ~/.provisioning/orchestrator/data/
├── ~/.provisioning/services/
└── Process Management (PID files, logs)
-```plaintext
-
-**Pros**: Simple, fast startup, no Docker dependency
-**Cons**: Platform-specific binaries, manual updates
-
-#### 2. **Docker Deployment** (Multi-user, CI/CD)
-
-```plaintext
-Docker Daemon
+
+Pros: Simple, fast startup, no Docker dependency
+Cons: Platform-specific binaries, manual updates
+
+Docker Daemon
├── Container: provisioning-orchestrator
├── Container: provisioning-control-center
├── Container: provisioning-coredns
├── Container: provisioning-gitea
├── Container: provisioning-oci-registry
└── Volumes: ~/.provisioning/data/
-```plaintext
-
-**Pros**: Consistent environment, easy updates
-**Cons**: Requires Docker, resource overhead
-
-#### 3. **Docker Compose Deployment** (Multi-user)
-
-```yaml
-# provisioning/platform/docker-compose.yaml
+
+Pros: Consistent environment, easy updates
+Cons: Requires Docker, resource overhead
+
+# provisioning/platform/docker-compose.yaml
services:
orchestrator:
image: provisioning-platform/orchestrator:v1.2.0
@@ -5515,15 +5479,11 @@ services:
image: ghcr.io/project-zot/zot:latest
ports:
- "5000:5000"
-```plaintext
-
-**Pros**: Easy multi-service orchestration, declarative
-**Cons**: Local only, no HA
-
-#### 4. **Kubernetes Deployment** (CI/CD, Enterprise)
-
-```yaml
-# Namespace: provisioning-system
+
+Pros: Easy multi-service orchestration, declarative
+Cons: Local only, no HA
+
+# Namespace: provisioning-system
apiVersion: apps/v1
kind: Deployment
metadata:
@@ -5561,15 +5521,11 @@ spec:
- name: data
persistentVolumeClaim:
claimName: orchestrator-data
-```plaintext
-
-**Pros**: HA, scalability, production-ready
-**Cons**: Complex setup, Kubernetes required
-
-#### 5. **Remote Deployment** (All modes)
-
-```yaml
-# Connect to remotely-running services
+
+Pros: HA, scalability, production-ready
+Cons: Complex setup, Kubernetes required
+
+# Connect to remotely-running services
services:
orchestrator:
deployment:
@@ -5578,21 +5534,14 @@ services:
endpoint: "https://orchestrator.company.com"
tls_enabled: true
auth_token_path: "~/.provisioning/tokens/orchestrator.token"
-```plaintext
-
-**Pros**: No local resources, centralized
-**Cons**: Network dependency, latency
-
----
-
-## Integration Architecture
-
-### Integration Patterns
-
-#### 1. **Hybrid Language Integration** (Rust ↔ Nushell)
-
-```plaintext
-Rust Orchestrator
+
+Pros: No local resources, centralized
+Cons: Network dependency, latency
+
+
+
+
+Rust Orchestrator
↓ (HTTP API)
Nushell CLI
↓ (exec via bridge)
@@ -5601,14 +5550,10 @@ Nushell Business Logic
Rust Orchestrator
↓ (updates state)
File-based Task Queue
-```plaintext
-
-**Communication**: HTTP API + stdin/stdout JSON
-
-#### 2. **Provider Abstraction**
-
-```plaintext
-Unified Provider Interface
+
+Communication: HTTP API + stdin/stdout JSON
+
+Unified Provider Interface
├── create_server(config) -> Server
├── delete_server(id) -> bool
├── list_servers() -> [Server]
@@ -5618,12 +5563,9 @@ Provider Implementations:
├── AWS Provider (aws-sdk-rust, aws cli)
├── UpCloud Provider (upcloud API)
└── Local Provider (Docker, libvirt)
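The unified interface above can be pictured as a dispatch table keyed by provider name. The function below is a hedged stand-in, not the actual Nushell/Rust implementation; provider names come from the list, bodies are placeholders:

```bash
# Hedged sketch: one entry point, per-provider implementations behind it.
create_server() {  # create_server <provider> <name>
  case "$1" in
    aws)     echo "aws: creating $2 via aws-sdk" ;;
    upcloud) echo "upcloud: creating $2 via UpCloud API" ;;
    local)   echo "local: creating $2 via Docker/libvirt" ;;
    *)       echo "error: unknown provider $1" >&2; return 1 ;;
  esac
}

create_server upcloud web-1
```

Callers depend only on the interface, so adding a provider means adding a case arm (or, in the real system, an extension) without touching call sites.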
-```plaintext
-
-#### 3. **OCI Registry Integration**
-
-```plaintext
-Extension Development
+
+
+Extension Development
↓
Package (oci-package.nu)
↓
@@ -5636,12 +5578,9 @@ Pull (provisioning oci pull)
Cache (~/.provisioning/cache/oci/)
↓
Load into Workspace
-```plaintext
-
-#### 4. **Gitea Integration** (Multi-user, Enterprise)
-
-```plaintext
-Workspace Operations
+
+
+Workspace Operations
↓
Check Lock Status (Gitea API)
↓
@@ -5652,18 +5591,15 @@ Perform Changes
Commit + Push
↓
Release Lock (Delete lock file)
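The lock cycle above reduces to acquire-if-absent plus delete-to-release. A local sketch using a plain lock file; the real system stores the lock in Gitea via its API and tracks changes through Git history:

```bash
# Hedged sketch: workspace locking with a lock file per workspace.
LOCKS="${LOCKS:-/tmp/ws-locks}"
mkdir -p "$LOCKS"

lock_workspace() {   # lock_workspace <workspace>
  if [ -e "$LOCKS/$1.lock" ]; then
    echo "already locked by $(cat "$LOCKS/$1.lock")"; return 1
  fi
  echo "${USER:-unknown}" > "$LOCKS/$1.lock"
  echo "acquired $1"
}

unlock_workspace() { rm -f "$LOCKS/$1.lock"; echo "released $1"; }

lock_workspace my-infra
lock_workspace my-infra || true   # second attempt is rejected
unlock_workspace my-infra
```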
-```plaintext
-
-**Benefits**:
-
-- Distributed locking
-- Change tracking via Git history
-- Collaboration features
-
-#### 5. **CoreDNS Integration**
-
-```plaintext
-Service Registration
+
+Benefits:
+
+Distributed locking
+Change tracking via Git history
+Collaboration features
+
+
+Service Registration
↓
Update CoreDNS Corefile
↓
@@ -5675,163 +5611,149 @@ Zones:
├── *.prov.local (Internal services)
├── *.infra.local (Infrastructure nodes)
└── *.test.local (Test environments)
-```plaintext
-
----
-
-## Performance and Scalability
-
-### Performance Characteristics
-
-| Metric | Value | Notes |
-|--------|-------|-------|
-| **CLI Startup Time** | < 100ms | Nushell cold start |
-| **CLI Response Time** | < 50ms | Most commands |
-| **Workflow Submission** | < 200ms | To orchestrator |
-| **Task Processing** | 10-50/sec | Orchestrator throughput |
-| **Batch Operations** | Up to 100 servers | Parallel execution |
-| **OCI Pull Time** | 1-5s | Cached: <100ms |
-| **Configuration Load** | < 500ms | Full hierarchy |
-| **Health Check Interval** | 10s | Configurable |
-
-### Scalability Limits
-
-**Solo Mode**:
-
-- Unlimited local resources
-- Limited by machine capacity
-
-**Multi-User Mode**:
-
-- 10 servers per user
-- 32 cores, 128GB RAM per user
-- 5-20 concurrent users
-
-**CI/CD Mode**:
-
-- 5 servers per pipeline
-- 16 cores, 64GB RAM per pipeline
-- 100+ concurrent pipelines
-
-**Enterprise Mode**:
-
-- 20 servers per user
-- 64 cores, 256GB RAM per user
-- 1000+ concurrent users
-- Horizontal scaling via Kubernetes
-
-### Optimization Strategies
-
-**Caching**:
-
-- OCI artifacts cached locally
-- KCL compilation cached
-- Module resolution cached
-
-**Parallel Execution**:
-
-- Batch operations with configurable limits
-- Dependency-aware parallel starts
-- Workflow DAG execution
-
-**Incremental Operations**:
-
-- Only update changed resources
-- Checkpoint-based recovery
-- Delta synchronization
-
----
-
-## Evolution and Roadmap
-
-### Version History
-
-| Version | Date | Major Features |
-|---------|------|----------------|
-| **v3.5.0** | 2025-10-06 | Mode system, OCI distribution, comprehensive docs |
-| **v3.4.0** | 2025-10-06 | Test environment service |
-| **v3.3.0** | 2025-09-30 | Interactive guides |
-| **v3.2.0** | 2025-09-30 | Modular CLI refactoring |
-| **v3.1.0** | 2025-09-25 | Batch workflow system |
-| **v3.0.0** | 2025-09-25 | Hybrid orchestrator |
-| **v2.0.5** | 2025-10-02 | Workspace switching |
-| **v2.0.0** | 2025-09-23 | Configuration migration |
-
-### Roadmap (Future Versions)
-
-**v3.6.0** (Q1 2026):
-
-- GraphQL API
-- Advanced RBAC
-- Multi-tenancy
-- Observability enhancements (OpenTelemetry)
-
-**v4.0.0** (Q2 2026):
-
-- Multi-repository split complete
-- Extension marketplace
-- Advanced workflow features (conditional execution, loops)
-- Cost optimization engine
-
-**v4.1.0** (Q3 2026):
-
-- AI-assisted infrastructure generation
-- Policy-as-code (OPA integration)
-- Advanced compliance features
-
-**Long-term Vision**:
-
-- Serverless workflow execution
-- Edge computing support
-- Multi-cloud failover
-- Self-healing infrastructure
-
----
-
-## Related Documentation
-
-### Architecture
-
-- **[Multi-Repo Architecture](MULTI_REPO_ARCHITECTURE.md)** - Repository organization
-- **[Design Principles](design-principles.md)** - Architectural philosophy
-- **[Integration Patterns](integration-patterns.md)** - Integration details
-- **[Orchestrator Model](orchestrator-integration-model.md)** - Hybrid orchestration
-
-### ADRs
-
-- **[ADR-001](ADR-001-project-structure.md)** - Project structure
-- **[ADR-002](ADR-002-distribution-strategy.md)** - Distribution strategy
-- **[ADR-003](ADR-003-workspace-isolation.md)** - Workspace isolation
-- **[ADR-004](ADR-004-hybrid-architecture.md)** - Hybrid architecture
-- **[ADR-005](ADR-005-extension-framework.md)** - Extension framework
-- **[ADR-006](ADR-006-provisioning-cli-refactoring.md)** - CLI refactoring
-
-### User Guides
-
-- **[Getting Started](../user/getting-started.md)** - First steps
-- **[Mode System](../user/MODE_SYSTEM_QUICK_REFERENCE.md)** - Modes overview
-- **[Service Management](../user/SERVICE_MANAGEMENT_GUIDE.md)** - Services
-- **[OCI Registry](../user/OCI_REGISTRY_GUIDE.md)** - OCI operations
-
----
-
-**Maintained By**: Architecture Team
-**Review Cycle**: Quarterly
-**Next Review**: 2026-01-06
+
+
+
+| Metric | Value | Notes |
+|--------|-------|-------|
+| CLI Startup Time | < 100 ms | Nushell cold start |
+| CLI Response Time | < 50 ms | Most commands |
+| Workflow Submission | < 200 ms | To orchestrator |
+| Task Processing | 10-50/sec | Orchestrator throughput |
+| Batch Operations | Up to 100 servers | Parallel execution |
+| OCI Pull Time | 1-5 s | Cached: < 100 ms |
+| Configuration Load | < 500 ms | Full hierarchy |
+| Health Check Interval | 10 s | Configurable |
+
+
+
+Solo Mode:
+
+Unlimited local resources
+Limited by machine capacity
+
+Multi-User Mode:
+
+10 servers per user
+32 cores, 128 GB RAM per user
+5-20 concurrent users
+
+CI/CD Mode:
+
+5 servers per pipeline
+16 cores, 64 GB RAM per pipeline
+100+ concurrent pipelines
+
+Enterprise Mode:
+
+20 servers per user
+64 cores, 256 GB RAM per user
+1000+ concurrent users
+Horizontal scaling via Kubernetes
+
+
+Caching:
+
+OCI artifacts cached locally
+Nickel compilation cached
+Module resolution cached
+
+Parallel Execution:
+
+Batch operations with configurable limits
+Dependency-aware parallel starts
+Workflow DAG execution
+
+Incremental Operations:
+
+Only update changed resources
+Checkpoint-based recovery
+Delta synchronization
+
+
+
+
+| Version | Date | Major Features |
+|---------|------|----------------|
+| v3.5.0 | 2025-10-06 | Mode system, OCI distribution, comprehensive docs |
+| v3.4.0 | 2025-10-06 | Test environment service |
+| v3.3.0 | 2025-09-30 | Interactive guides |
+| v3.2.0 | 2025-09-30 | Modular CLI refactoring |
+| v3.1.0 | 2025-09-25 | Batch workflow system |
+| v3.0.0 | 2025-09-25 | Hybrid orchestrator |
+| v2.0.5 | 2025-10-02 | Workspace switching |
+| v2.0.0 | 2025-09-23 | Configuration migration |
+
+
+
+v3.6.0 (Q1 2026):
+
+GraphQL API
+Advanced RBAC
+Multi-tenancy
+Observability enhancements (OpenTelemetry)
+
+v4.0.0 (Q2 2026):
+
+Multi-repository split complete
+Extension marketplace
+Advanced workflow features (conditional execution, loops)
+Cost optimization engine
+
+v4.1.0 (Q3 2026):
+
+AI-assisted infrastructure generation
+Policy-as-code (OPA integration)
+Advanced compliance features
+
+Long-term Vision:
+
+Serverless workflow execution
+Edge computing support
+Multi-cloud failover
+Self-healing infrastructure
+
+
+
+
+
+
+
+
+
+
+Maintained By: Architecture Team
+Review Cycle: Quarterly
+Next Review: 2026-01-06
Provisioning is built on a foundation of architectural principles that guide design decisions, ensure system quality, and maintain consistency across the codebase. These principles have evolved from real-world experience and represent lessons learned from complex infrastructure automation challenges.
-Principle : Completely agnostic and configuration-driven, not hardcoded. Use abstraction layers dynamically loaded from configurations.
+Principle: Fully agnostic and configuration-driven, not hardcoded. Use abstraction layers dynamically loaded from configurations.
Rationale: Infrastructure as Code (IaC) systems must be flexible enough to adapt to any environment without code changes. Hardcoded values defeat the purpose of IaC and create maintenance burdens.
Implementation Guidelines:
Never patch the system with hardcoded fallbacks when configuration parsing fails
All behavior must be configurable through the hierarchical configuration system
Use abstraction layers that are dynamically loaded from configuration
-Validate configuration completely before execution, fail fast on invalid config
+Validate configuration fully before execution, fail fast on invalid config
Anti-Patterns (Anti-PAP):
@@ -5851,26 +5773,20 @@ api_endpoint = "https://ec2.amazonaws.com"
if config.providers.aws.regions.is_empty() {
regions = vec!["us-west-2"]; // Hardcoded fallback
}
-```plaintext
-
-### 2. Hybrid Architecture Optimization
-
-**Principle**: Use each language for what it does best - Rust for coordination, Nushell for business logic.
-
-**Rationale**: Different languages have different strengths. Rust excels at performance-critical coordination tasks, while Nushell excels at configuration management and domain-specific operations.
-
-**Implementation Guidelines**:
-
-- Rust handles orchestration, state management, and performance-critical paths
-- Nushell handles provider operations, configuration processing, and CLI interfaces
-- Clear boundaries between language responsibilities
-- Structured data exchange (JSON) between languages
-- Preserve existing domain expertise in Nushell
-
-**Language Responsibility Matrix**:
-
-```plaintext
-Rust Layer:
+
+
+Principle: Use each language for what it does best - Rust for coordination, Nushell for business logic.
+Rationale: Different languages have different strengths. Rust excels at performance-critical coordination tasks, while Nushell excels at configuration management and domain-specific operations.
+Implementation Guidelines:
+
+Rust handles orchestration, state management, and performance-critical paths
+Nushell handles provider operations, configuration processing, and CLI interfaces
+Clear boundaries between language responsibilities
+Structured data exchange (JSON) between languages
+Preserve existing domain expertise in Nushell
+
+Language Responsibility Matrix:
+Rust Layer:
├── Workflow orchestration and coordination
├── REST API servers and HTTP endpoints
├── State persistence and checkpoint management
@@ -5881,92 +5797,73 @@ Rust Layer:
Nushell Layer:
├── Provider implementations (AWS, UpCloud, local)
├── Task service management and configuration
-├── KCL configuration processing and validation
+├── Nickel configuration processing and validation
├── Template generation and Infrastructure as Code
├── CLI user interfaces and interactive tools
└── Domain-specific business logic
-```plaintext
-
-### 3. Configuration-First Architecture
-
-**Principle**: All system behavior is determined by configuration, with clear hierarchical precedence and validation.
-
-**Rationale**: True Infrastructure as Code requires that all behavior be configurable without code changes. Configuration hierarchy provides flexibility while maintaining predictability.
-
-**Configuration Hierarchy** (precedence order):
-
-1. Runtime Parameters (highest precedence)
-2. Environment Configuration
-3. Infrastructure Configuration
-4. User Configuration
-5. System Defaults (lowest precedence)
-
-**Implementation Guidelines**:
-
-- Complete configuration validation before execution
-- Variable interpolation for dynamic values
-- Schema-based validation using KCL
-- Configuration immutability during execution
-- Comprehensive error reporting for configuration issues
-
-### 4. Domain-Driven Structure
-
-**Principle**: Organize code by business domains and functional boundaries, not by technical concerns.
-
-**Rationale**: Domain-driven organization scales better, reduces coupling, and enables focused development by domain experts.
-
-**Domain Organization**:
-
-```plaintext
-├── core/ # Core system and library functions
+
+
+Principle: All system behavior is determined by configuration, with clear hierarchical precedence and validation.
+Rationale: True Infrastructure as Code requires that all behavior be configurable without code changes. Configuration hierarchy provides flexibility while maintaining predictability.
+Configuration Hierarchy (precedence order):
+
+Runtime Parameters (highest precedence)
+Environment Configuration
+Infrastructure Configuration
+User Configuration
+System Defaults (lowest precedence)
+
+Implementation Guidelines:
+
+Complete configuration validation before execution
+Variable interpolation for dynamic values
+Schema-based validation using Nickel
+Configuration immutability during execution
+Comprehensive error reporting for configuration issues
+
+
+Principle: Organize code by business domains and functional boundaries, not by technical concerns.
+Rationale: Domain-driven organization scales better, reduces coupling, and enables focused development by domain experts.
+Domain Organization:
+├── core/ # Core system and library functions
├── platform/ # High-performance coordination layer
├── provisioning/ # Main business logic with providers and services
├── control-center/ # Web-based management interface
├── tools/ # Development and utility tools
└── extensions/ # Plugin and extension framework
-```plaintext
-
-**Domain Responsibilities**:
-
-- Each domain has clear ownership and boundaries
-- Cross-domain communication through well-defined interfaces
-- Domain-specific testing and validation strategies
-- Independent evolution and versioning within architectural guidelines
-
-### 5. Isolation and Modularity
-
-**Principle**: Components are isolated, modular, and independently deployable with clear interface contracts.
-
-**Rationale**: Isolation enables independent development, testing, and deployment. Clear interfaces prevent tight coupling and enable system evolution.
-
-**Implementation Guidelines**:
-
-- User workspace isolation from system installation
-- Extension sandboxing and security boundaries
-- Provider abstraction with standardized interfaces
-- Service modularity with dependency management
-- Clear API contracts between components
-
-## Quality Attribute Principles
-
-### 6. Reliability Through Recovery
-
-**Principle**: Build comprehensive error recovery and rollback capabilities into every operation.
-
-**Rationale**: Infrastructure operations can fail at any point. Systems must be able to recover gracefully and maintain consistent state.
-
-**Implementation Guidelines**:
-
-- Checkpoint-based recovery for long-running workflows
-- Comprehensive rollback capabilities for all operations
-- Transactional semantics where possible
-- State validation and consistency checks
-- Detailed audit trails for debugging and recovery
-
-**Recovery Strategies**:
-
-```plaintext
-Operation Level:
+
+Domain Responsibilities:
+
+Each domain has clear ownership and boundaries
+Cross-domain communication through well-defined interfaces
+Domain-specific testing and validation strategies
+Independent evolution and versioning within architectural guidelines
+
+
+Principle: Components are isolated, modular, and independently deployable with clear interface contracts.
+Rationale: Isolation enables independent development, testing, and deployment. Clear interfaces prevent tight coupling and enable system evolution.
+Implementation Guidelines:
+
+User workspace isolation from system installation
+Extension sandboxing and security boundaries
+Provider abstraction with standardized interfaces
+Service modularity with dependency management
+Clear API contracts between components
+
+
+
+Principle: Build comprehensive error recovery and rollback capabilities into every operation.
+Rationale: Infrastructure operations can fail at any point. Systems must be able to recover gracefully and maintain consistent state.
+Implementation Guidelines:
+
+Checkpoint-based recovery for long-running workflows
+Comprehensive rollback capabilities for all operations
+Transactional semantics where possible
+State validation and consistency checks
+Detailed audit trails for debugging and recovery
+
+Recovery Strategies:
+Operation Level:
├── Atomic operations with rollback
├── Retry logic with exponential backoff
├── Circuit breakers for external dependencies
@@ -5983,32 +5880,23 @@ System Level:
├── Automatic recovery procedures
├── Data backup and restoration
└── Disaster recovery capabilities
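The "retry logic with exponential backoff" entry above can be sketched as follows. The sleep is stubbed to 0 s so the example runs instantly; real code would wait 1, 2, 4, ... seconds between attempts:

```bash
# Hedged sketch: retry a command with exponential backoff.
retry() {  # retry <max-attempts> <cmd...>
  max="$1"; shift
  attempt=1; delay=1
  while :; do
    "$@" && return 0
    [ "$attempt" -ge "$max" ] && return 1
    echo "attempt $attempt failed; retrying in ${delay}s" >&2
    sleep 0   # real code: sleep "$delay"
    delay=$((delay * 2)); attempt=$((attempt + 1))
  done
}

# A stand-in command that fails twice, then succeeds:
N=/tmp/retry-count; echo 0 > "$N"
flaky() { c=$(($(cat "$N") + 1)); echo "$c" > "$N"; [ "$c" -ge 3 ]; }

retry 5 flaky && echo "succeeded after $(cat "$N") attempts"
# → succeeded after 3 attempts
```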
-```plaintext
-
-### 7. Performance Through Parallelism
-
-**Principle**: Design for parallel execution and efficient resource utilization while maintaining correctness.
-
-**Rationale**: Infrastructure operations often involve multiple independent resources that can be processed in parallel for significant performance gains.
-
-**Implementation Guidelines**:
-
-- Configurable parallelism limits to prevent resource exhaustion
-- Dependency-aware parallel execution
-- Resource pooling and connection management
-- Efficient data structures and algorithms
-- Memory-conscious processing for large datasets
-
-### 8. Security Through Isolation
-
-**Principle**: Implement security through isolation boundaries, least privilege, and comprehensive validation.
-
-**Rationale**: Infrastructure systems handle sensitive data and powerful operations. Security must be built in at the architectural level.
-
-**Security Implementation**:
-
-```plaintext
-Authentication & Authorization:
+
+
+Principle: Design for parallel execution and efficient resource utilization while maintaining correctness.
+Rationale: Infrastructure operations often involve multiple independent resources that can be processed in parallel for significant performance gains.
+Implementation Guidelines:
+
+Configurable parallelism limits to prevent resource exhaustion
+Dependency-aware parallel execution
+Resource pooling and connection management
+Efficient data structures and algorithms
+Memory-conscious processing for large datasets
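"Configurable parallelism limits" can be approximated in shell with `xargs -P`, which caps the number of concurrent jobs; the server names below are placeholders:

```bash
# Hedged sketch: process a server list with at most MAX_PARALLEL
# concurrent jobs, mirroring configurable parallelism limits.
MAX_PARALLEL="${MAX_PARALLEL:-4}"

printf 'srv-%s\n' 1 2 3 4 5 6 7 8 \
  | xargs -P "$MAX_PARALLEL" -I{} sh -c 'echo "provisioning {}"'
```

All eight jobs run, but never more than `MAX_PARALLEL` at once, preventing resource exhaustion on the control host.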
+
+
+Principle: Implement security through isolation boundaries, least privilege, and comprehensive validation.
+Rationale: Infrastructure systems handle sensitive data and powerful operations. Security must be built in at the architectural level.
+Security Implementation:
+Authentication & Authorization:
├── API authentication for external access
├── Role-based access control for operations
├── Permission validation before execution
@@ -6025,20 +5913,13 @@ Isolation Boundaries:
├── Extension sandboxing
├── Provider credential isolation
└── Process and network isolation
-```plaintext
-
-## Development Methodology Principles
-
-### 9. Configuration-Driven Testing
-
-**Principle**: Tests should be configuration-driven and validate both happy path and error conditions.
-
-**Rationale**: Infrastructure systems must work across diverse environments and configurations. Tests must validate the configuration-driven nature of the system.
-
-**Testing Strategy**:
-
-```plaintext
-Unit Testing:
+
+
+
+Principle: Tests should be configuration-driven and validate both happy path and error conditions.
+Rationale: Infrastructure systems must work across diverse environments and configurations. Tests must validate the configuration-driven nature of the system.
+Testing Strategy:
+Unit Testing:
├── Configuration validation tests
├── Individual component tests
├── Error condition tests
@@ -6055,28 +5936,21 @@ System Testing:
├── Upgrade and migration tests
├── Performance and scalability tests
└── Security and isolation tests
-```plaintext
-
-## Error Handling Principles
-
-### 11. Fail Fast, Recover Gracefully
-
-**Principle**: Validate early and fail fast on errors, but provide comprehensive recovery mechanisms.
-
-**Rationale**: Early validation prevents complex error states, while graceful recovery maintains system reliability.
-
-**Implementation Guidelines**:
-
-- Complete configuration validation before execution
-- Input validation at system boundaries
-- Clear error messages without internal stack traces (except in DEBUG mode)
-- Comprehensive error categorization and handling
-- Recovery procedures for all error categories
-
-**Error Categories**:
-
-```plaintext
-Configuration Errors:
+
+
+
+Principle: Validate early and fail fast on errors, but provide comprehensive recovery mechanisms.
+Rationale: Early validation prevents complex error states, while graceful recovery maintains system reliability.
+Implementation Guidelines:
+
+Complete configuration validation before execution
+Input validation at system boundaries
+Clear error messages without internal stack traces (except in DEBUG mode)
+Comprehensive error categorization and handling
+Recovery procedures for all error categories
+
+Error Categories:
+Configuration Errors:
├── Invalid configuration syntax
├── Missing required configuration
├── Configuration conflicts
@@ -6093,18 +5967,12 @@ System Errors:
├── Memory and resource exhaustion
├── Process communication failures
└── External dependency failures
-```plaintext
-
-### 12. Observable Operations
-
-**Principle**: All operations must be observable through comprehensive logging, metrics, and monitoring.
-
-**Rationale**: Infrastructure operations must be debuggable and monitorable in production environments.
-
-**Observability Implementation**:
-
-```plaintext
-Logging:
+
+
+Principle: All operations must be observable through comprehensive logging, metrics, and monitoring.
+Rationale: Infrastructure operations must be debuggable and monitorable in production environments.
+Observability Implementation:
+Logging:
├── Structured JSON logging
├── Configurable log levels
├── Context-aware log messages
@@ -6121,48 +5989,35 @@ Monitoring:
├── Real-time status reporting
├── Workflow progress tracking
└── Alert integration capabilities
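The Logging branch above (structured JSON, levels, context-aware messages) can be illustrated with a minimal sketch. This hand-assembles one JSON object per event and assumes no particular logging crate; a real implementation would use a structured-logging library with proper string escaping.

```rust
use std::time::{SystemTime, UNIX_EPOCH};

// Minimal structured-log sketch: one JSON object per event, with a level
// and key/value context fields, as the Logging tree above describes.
fn log_event(level: &str, msg: &str, context: &[(&str, &str)]) -> String {
    let ts = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .map(|d| d.as_secs())
        .unwrap_or(0);
    let mut fields = vec![
        format!("\"ts\":{ts}"),
        format!("\"level\":\"{level}\""),
        format!("\"msg\":\"{msg}\""),
    ];
    for (k, v) in context {
        fields.push(format!("\"{k}\":\"{v}\""));
    }
    format!("{{{}}}", fields.join(","))
}

fn main() {
    let line = log_event("info", "server created", &[("infra", "wuji")]);
    assert!(line.contains("\"level\":\"info\""));
    assert!(line.contains("\"infra\":\"wuji\""));
    println!("{line}");
}
```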
-```plaintext
-
-## Evolution and Maintenance Principles
-
-### 13. Backward Compatibility
-
-**Principle**: Maintain backward compatibility for configuration, APIs, and user interfaces.
-
-**Rationale**: Infrastructure systems are long-lived and must support existing configurations and workflows during evolution.
-
-**Compatibility Guidelines**:
-
-- Semantic versioning for all interfaces
-- Configuration migration tools and procedures
-- Deprecation warnings and migration guides
-- API versioning for external interfaces
-- Comprehensive upgrade testing
-
-### 14. Documentation-Driven Development
-
-**Principle**: Architecture decisions, APIs, and operational procedures must be thoroughly documented.
-
-**Rationale**: Infrastructure systems are complex and require clear documentation for operation, maintenance, and evolution.
-
-**Documentation Requirements**:
-
-- Architecture Decision Records (ADRs) for major decisions
-- API documentation with examples
-- Operational runbooks and procedures
-- Configuration guides and examples
-- Troubleshooting guides and common issues
-
-### 15. Technical Debt Management
-
-**Principle**: Actively manage technical debt through regular assessment and systematic improvement.
-
-**Rationale**: Infrastructure systems accumulate complexity over time. Proactive debt management prevents system degradation.
-
-**Debt Management Strategy**:
-
-```plaintext
-Assessment:
+
+Evolution and Maintenance Principles
+13. Backward Compatibility
+Principle: Maintain backward compatibility for configuration, APIs, and user interfaces.
+Rationale: Infrastructure systems are long-lived and must support existing configurations and workflows during evolution.
+Compatibility Guidelines:
+
+Semantic versioning for all interfaces
+Configuration migration tools and procedures
+Deprecation warnings and migration guides
+API versioning for external interfaces
+Comprehensive upgrade testing
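Semantic-versioning checks of the kind used later in the compatibility matrix (for example `core = "^3.2"`) might be implemented roughly as below. This is a simplified caret rule; real resolvers such as Cargo's semver implementation additionally handle pre-releases and 0.x versions specially.

```rust
// Parse "major.minor[.patch]" into a comparable tuple; missing patch = 0.
fn parse(v: &str) -> Option<(u64, u64, u64)> {
    let mut parts = v.split('.').map(|p| p.parse::<u64>().ok());
    Some((parts.next()??, parts.next()??, parts.next().flatten().unwrap_or(0)))
}

// Simplified caret rule: "^3.2" accepts any 3.x version at or above 3.2.0.
fn caret_compatible(version: &str, range: &str) -> bool {
    let req = range.strip_prefix('^').unwrap_or(range);
    match (parse(version), parse(req)) {
        (Some((maj, min, pat)), Some((rmaj, rmin, rpat))) => {
            maj == rmaj && (min, pat) >= (rmin, rpat)
        }
        _ => false,
    }
}

fn main() {
    assert!(caret_compatible("3.2.5", "^3.2"));
    assert!(!caret_compatible("4.0.0", "^3.2")); // major bump breaks compat
    assert!(!caret_compatible("3.1.9", "^3.2")); // below minimum minor
    println!("compatibility checks passed");
}
```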
+
+
+14. Documentation-Driven Development
+Principle: Architecture decisions, APIs, and operational procedures must be thoroughly documented.
+Rationale: Infrastructure systems are complex and require clear documentation for operation, maintenance, and evolution.
+Documentation Requirements:
+
+Architecture Decision Records (ADRs) for major decisions
+API documentation with examples
+Operational runbooks and procedures
+Configuration guides and examples
+Troubleshooting guides and common issues
+
+
+15. Technical Debt Management
+Principle: Actively manage technical debt through regular assessment and systematic improvement.
+Rationale: Infrastructure systems accumulate complexity over time. Proactive debt management prevents system degradation.
+Debt Management Strategy:
+Assessment:
+Assessment:
├── Regular code quality reviews
├── Performance profiling and optimization
├── Security audit and updates
@@ -6173,20 +6028,13 @@ Improvement:
├── Performance optimization based on metrics
├── Security enhancement and hardening
└── Test coverage improvement and validation
-```plaintext
-
-## Trade-off Management
-
-### 16. Explicit Trade-off Documentation
-
-**Principle**: All architectural trade-offs must be explicitly documented with rationale and alternatives considered.
-
-**Rationale**: Understanding trade-offs enables informed decision making and future evolution of the system.
-
-**Trade-off Categories**:
-
-```plaintext
-Performance vs. Maintainability:
+
+Trade-off Management
+16. Explicit Trade-off Documentation
+Principle: All architectural trade-offs must be explicitly documented with rationale and alternatives considered.
+Rationale: Understanding trade-offs enables informed decision making and future evolution of the system.
+Trade-off Categories:
+Performance vs. Maintainability:
+Performance vs. Maintainability:
├── Rust coordination layer for performance
├── Nushell business logic for maintainability
├── Caching strategies for speed vs. consistency
@@ -6203,25 +6051,20 @@ Security vs. Usability:
├── Extension sandboxing vs. functionality
├── Authentication requirements vs. ease of use
└── Audit logging vs. performance overhead
-```plaintext
-
-## Conclusion
-
-These design principles form the foundation of provisioning's architecture. They guide decision making, ensure quality, and provide a framework for system evolution. Adherence to these principles has enabled the development of a sophisticated, reliable, and maintainable infrastructure automation platform.
-
-The principles are living guidelines that evolve with the system while maintaining core architectural integrity. They serve as both implementation guidance and evaluation criteria for new features and modifications.
-
-Success in applying these principles is measured by:
-
-- System reliability and error recovery capabilities
-- Development efficiency and maintainability
-- Configuration flexibility and user experience
-- Performance and scalability characteristics
-- Security and isolation effectiveness
-
-These principles represent the distilled wisdom from building and operating complex infrastructure automation systems at scale.
-
+
+Conclusion
+These design principles form the foundation of provisioning’s architecture. They guide decision making, ensure quality, and provide a framework for system evolution. Adherence to these principles has enabled the development of a sophisticated, reliable, and maintainable infrastructure automation platform.
+The principles are living guidelines that evolve with the system while maintaining core architectural integrity. They serve as both implementation guidance and evaluation criteria for new features and modifications.
+Success in applying these principles is measured by:
+
+System reliability and error recovery capabilities
+Development efficiency and maintainability
+Configuration flexibility and user experience
+Performance and scalability characteristics
+Security and isolation effectiveness
+
+These principles represent the distilled wisdom from building and operating complex infrastructure automation systems at scale.
+
Provisioning implements sophisticated integration patterns to coordinate between its hybrid Rust/Nushell architecture, manage multi-provider workflows, and enable extensible functionality. This document outlines the key integration patterns, their implementations, and best practices.
@@ -6744,20 +6587,17 @@ mod integration_tests {
→ "Type not supported" errors
→ Cannot handle complex nested workflows
→ Performance bottlenecks with recursive calls
-```plaintext
-
-**Solution:** Rust orchestrator provides:
-
-1. **Task queue management** (file-based, reliable)
-2. **Priority scheduling** (intelligent task ordering)
-3. **Deep call stack elimination** (Rust handles recursion)
-4. **Performance optimization** (async/await, parallel execution)
-5. **State management** (workflow checkpointing)
-
-### How It Works Today (Monorepo)
-
-```plaintext
-┌─────────────────────────────────────────────────────────────┐
+
+Solution: Rust orchestrator provides:
+
+Task queue management (file-based, reliable)
+Priority scheduling (intelligent task ordering)
+Deep call stack elimination (Rust handles recursion)
+Performance optimization (async/await, parallel execution)
+State management (workflow checkpointing)
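The task-queue and priority-scheduling points can be sketched with a max-heap. The real orchestrator's queue is file-backed and async; `Task` here is a hypothetical in-memory stand-in that only models the scheduling order.

```rust
use std::cmp::Ordering;
use std::collections::BinaryHeap;

// Hypothetical task record: higher priority values run first.
#[derive(Debug, Eq, PartialEq)]
struct Task {
    priority: u8,
    id: String,
}

impl Ord for Task {
    fn cmp(&self, other: &Self) -> Ordering {
        // Order by priority; break ties by id for a total order.
        self.priority
            .cmp(&other.priority)
            .then_with(|| self.id.cmp(&other.id))
    }
}
impl PartialOrd for Task {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}

fn main() {
    let mut queue = BinaryHeap::new();
    queue.push(Task { priority: 1, id: "cleanup".into() });
    queue.push(Task { priority: 9, id: "server-create".into() });
    queue.push(Task { priority: 5, id: "dns-update".into() });
    // BinaryHeap is a max-heap: the highest-priority task pops first.
    assert_eq!(queue.pop().unwrap().id, "server-create");
    assert_eq!(queue.pop().unwrap().id, "dns-update");
    assert_eq!(queue.pop().unwrap().id, "cleanup");
    println!("scheduling order verified");
}
```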
+
+How It Works Today (Monorepo)
+┌─────────────────────────────────────────────────────────────┐
│ User │
└───────────────────────────┬─────────────────────────────────┘
│ calls
@@ -6795,39 +6635,29 @@ mod integration_tests {
│ • taskservs.nu │
│ • clusters.nu │
└────────────────┘
-```plaintext
-
-### Three Execution Modes
-
-#### Mode 1: Direct Mode (Simple Operations)
-
-```bash
-# No orchestrator needed
+
+Three Execution Modes
+Mode 1: Direct Mode (Simple Operations)
+# No orchestrator needed
provisioning server list
provisioning env
provisioning help
# Direct Nushell execution
provisioning (CLI) → Nushell scripts → Result
-```plaintext
-
-#### Mode 2: Orchestrated Mode (Complex Operations)
-
-```bash
-# Uses orchestrator for coordination
+
+Mode 2: Orchestrated Mode (Complex Operations)
+# Uses orchestrator for coordination
provisioning server create --orchestrated
# Flow:
provisioning CLI → Orchestrator API → Task Queue → Nushell executor
↓
Result back to user
-```plaintext
-
-#### Mode 3: Workflow Mode (Batch Operations)
-
-```bash
-# Complex workflows with dependencies
-provisioning workflow submit server-cluster.k
+
+Mode 3: Workflow Mode (Batch Operations)
+# Complex workflows with dependencies
+provisioning workflow submit server-cluster.ncl
# Flow:
provisioning CLI → Orchestrator Workflow Engine → Dependency Graph
@@ -6837,20 +6667,13 @@ provisioning CLI → Orchestrator Workflow Engine → Dependency Graph
Nushell scripts for each task
↓
Checkpoint state
-```plaintext
-
----
-
-## Integration Patterns
-
-### Pattern 1: CLI Submits Tasks to Orchestrator
-
-**Current Implementation:**
-
-**Nushell CLI (`core/nulib/workflows/server_create.nu`):**
-
-```nushell
-# Submit server creation workflow to orchestrator
+
+Integration Patterns
+Pattern 1: CLI Submits Tasks to Orchestrator
+Current Implementation:
+Nushell CLI (core/nulib/workflows/server_create.nu):
+# Submit server creation workflow to orchestrator
export def server_create_workflow [
infra_name: string
--orchestrated
@@ -6870,12 +6693,9 @@ export def server_create_workflow [
do-server-create $infra_name
}
}
-```plaintext
-
-**Rust Orchestrator (`platform/orchestrator/src/api/workflows.rs`):**
-
-```rust
-// Receive workflow submission from Nushell CLI
+
+Rust Orchestrator (platform/orchestrator/src/api/workflows.rs):
+// Receive workflow submission from Nushell CLI
#[axum::debug_handler]
async fn create_server_workflow(
State(state): State<Arc<AppState>>,
@@ -6899,13 +6719,9 @@ async fn create_server_workflow(
workflow_id: task.id,
status: "queued",
}))
-}
-```plaintext
-
-**Flow:**
-
-```plaintext
-User → provisioning server create --orchestrated
+}
+Flow:
+User → provisioning server create --orchestrated
↓
Nushell CLI prepares task
↓
@@ -6916,14 +6732,10 @@ Orchestrator queues task
Returns workflow ID immediately
↓
User can monitor: provisioning workflow monitor <id>
-```plaintext
-
-### Pattern 2: Orchestrator Executes Nushell Scripts
-
-**Orchestrator Task Executor (`platform/orchestrator/src/executor.rs`):**
-
-```rust
-// Orchestrator spawns Nushell to execute business logic
+
+Pattern 2: Orchestrator Executes Nushell Scripts
+Orchestrator Task Executor (platform/orchestrator/src/executor.rs):
+// Orchestrator spawns Nushell to execute business logic
pub async fn execute_task(task: Task) -> Result<TaskResult> {
match task.task_type {
TaskType::ServerCreate => {
@@ -6949,13 +6761,9 @@ pub async fn execute_task(task: Task) -> Result<TaskResult> {
}
// Other task types...
}
-}
-```plaintext
-
-**Flow:**
-
-```plaintext
-Orchestrator task queue has pending task
+}
+Flow:
+Orchestrator task queue has pending task
↓
Executor picks up task
↓
@@ -6968,14 +6776,10 @@ Returns result to orchestrator
Orchestrator updates task status
↓
User monitors via: provisioning workflow status <id>
-```plaintext
-
-### Pattern 3: Bidirectional Communication
-
-**Nushell Calls Orchestrator API:**
-
-```nushell
-# Nushell script checks orchestrator status during execution
+
+Pattern 3: Bidirectional Communication
+Nushell Calls Orchestrator API:
+# Nushell script checks orchestrator status during execution
export def check-orchestrator-health [] {
let response = (http get http://localhost:9090/health)
@@ -6993,12 +6797,9 @@ export def report-progress [task_id: string, progress: int] {
status: "in_progress"
}
}
-```plaintext
-
-**Orchestrator Monitors Nushell Execution:**
-
-```rust
-// Orchestrator tracks Nushell subprocess
+
+Orchestrator Monitors Nushell Execution:
+// Orchestrator tracks Nushell subprocess
pub async fn execute_with_monitoring(task: Task) -> Result<TaskResult> {
let mut child = Command::new("nu")
.arg("-c")
@@ -7028,33 +6829,25 @@ pub async fn execute_with_monitoring(task: Task) -> Result<TaskResult>
).await??;
Ok(TaskResult::from_exit_status(result))
-}
-```plaintext
-
----
-
-## Multi-Repo Architecture Impact
-
-### Repository Split Doesn't Change Integration Model
-
-**In Multi-Repo Setup:**
-
-**Repository: `provisioning-core`**
-
-- Contains: Nushell business logic
-- Installs to: `/usr/local/lib/provisioning/`
-- Package: `provisioning-core-3.2.1.tar.gz`
-
-**Repository: `provisioning-platform`**
-
-- Contains: Rust orchestrator
-- Installs to: `/usr/local/bin/provisioning-orchestrator`
-- Package: `provisioning-platform-2.5.3.tar.gz`
-
-**Runtime Integration (Same as Monorepo):**
-
-```plaintext
-User installs both packages:
+}
+
+Multi-Repo Architecture Impact
+Repository Split Doesn't Change Integration Model
+In Multi-Repo Setup:
+Repository: provisioning-core
+
+Contains: Nushell business logic
+Installs to: /usr/local/lib/provisioning/
+Package: provisioning-core-3.2.1.tar.gz
+
+Repository: provisioning-platform
+
+Contains: Rust orchestrator
+Installs to: /usr/local/bin/provisioning-orchestrator
+Package: provisioning-platform-2.5.3.tar.gz
+
+Runtime Integration (Same as Monorepo):
+User installs both packages:
provisioning-core-3.2.1 → /usr/local/lib/provisioning/
provisioning-platform-2.5.3 → /usr/local/bin/provisioning-orchestrator
@@ -7062,14 +6855,10 @@ Orchestrator expects core at: /usr/local/lib/provisioning/
Core expects orchestrator at: http://localhost:9090/
No code dependencies, just runtime coordination!
-```plaintext
-
-### Configuration-Based Integration
-
-**Core Package (`provisioning-core`) config:**
-
-```toml
-# /usr/local/share/provisioning/config/config.defaults.toml
+
+Configuration-Based Integration
+Core Package (provisioning-core) config:
+# /usr/local/share/provisioning/config/config.defaults.toml
[orchestrator]
enabled = true
@@ -7080,12 +6869,9 @@ auto_start = true # Start orchestrator if not running
[execution]
default_mode = "orchestrated" # Use orchestrator by default
fallback_to_direct = true # Fall back if orchestrator down
-```plaintext
-
-**Platform Package (`provisioning-platform`) config:**
-
-```toml
-# /usr/local/share/provisioning/platform/config.toml
+
+Platform Package (provisioning-platform) config:
+# /usr/local/share/provisioning/platform/config.toml
[orchestrator]
host = "127.0.0.1"
@@ -7097,14 +6883,10 @@ nushell_binary = "nu" # Expects nu in PATH
provisioning_lib = "/usr/local/lib/provisioning"
max_concurrent_tasks = 10
task_timeout_seconds = 3600
-```plaintext
-
-### Version Compatibility
-
-**Compatibility Matrix (`provisioning-distribution/versions.toml`):**
-
-```toml
-[compatibility.platform."2.5.3"]
+
+Version Compatibility
+Compatibility Matrix (provisioning-distribution/versions.toml):
+[compatibility.platform."2.5.3"]
core = "^3.2" # Platform 2.5.3 compatible with core 3.2.x
min-core = "3.2.0"
api-version = "v1"
@@ -7113,30 +6895,20 @@ api-version = "v1"
platform = "^2.5" # Core 3.2.1 compatible with platform 2.5.x
min-platform = "2.5.0"
orchestrator-api = "v1"
-```plaintext
-
----
-
-## Execution Flow Examples
-
-### Example 1: Simple Server Creation (Direct Mode)
-
-**No Orchestrator Needed:**
-
-```bash
-provisioning server list
+
+Execution Flow Examples
+Example 1: Simple Server Creation (Direct Mode)
+No Orchestrator Needed:
+provisioning server list
# Flow:
CLI → servers/list.nu → Query state → Return results
(Orchestrator not involved)
-```plaintext
-
-### Example 2: Server Creation with Orchestrator
-
-**Using Orchestrator:**
-
-```bash
-provisioning server create --orchestrated --infra wuji
+
+Example 2: Server Creation with Orchestrator
+Using Orchestrator:
+provisioning server create --orchestrated --infra wuji
# Detailed Flow:
1. User executes command
@@ -7170,7 +6942,7 @@ provisioning server create --orchestrated --infra wuji
nu -c "use /usr/local/lib/provisioning/servers/create.nu; create-server 'wuji'"
↓
13. Nushell executes business logic:
- - Reads KCL config
+ - Reads Nickel config
- Calls provider API (UpCloud/AWS)
- Creates server
- Returns result
@@ -7181,14 +6953,10 @@ provisioning server create --orchestrated --infra wuji
↓
16. User monitors: provisioning workflow status abc-123
→ Shows: "Server wuji created successfully"
-```plaintext
-
-### Example 3: Batch Workflow with Dependencies
-
-**Complex Workflow:**
-
-```bash
-provisioning batch submit multi-cloud-deployment.k
+
+Example 3: Batch Workflow with Dependencies
+Complex Workflow:
+provisioning batch submit multi-cloud-deployment.ncl
# Workflow contains:
- Create 5 servers (parallel)
@@ -7196,7 +6964,7 @@ provisioning batch submit multi-cloud-deployment.k
- Deploy applications (depends on Kubernetes)
# Detailed Flow:
-1. CLI submits KCL workflow to orchestrator
+1. CLI submits Nickel workflow to orchestrator
↓
2. Orchestrator parses workflow
↓
@@ -7232,24 +7000,26 @@ provisioning batch submit multi-cloud-deployment.k
8. If failure occurs, can retry from checkpoint
↓
9. User monitors real-time: provisioning batch monitor <id>
-```plaintext
+
+Why This Architecture?
+Orchestrator Benefits
+1. Eliminates Deep Call Stack Issues
+
+Without Orchestrator:
+template.nu → calls → cluster.nu → calls → taskserv.nu → calls → provider.nu
+(Deep nesting causes "Type not supported" errors)
----
-
-## Why This Architecture?
-
-### Orchestrator Benefits
-
-1. **Eliminates Deep Call Stack Issues**
+With Orchestrator:
+Orchestrator → spawns → Nushell subprocess (flat execution)
+(No deep nesting, fresh Nushell context for each task)
-Without Orchestrator:
-template.nu → calls → cluster.nu → calls → taskserv.nu → calls → provider.nu
-(Deep nesting causes “Type not supported” errors)
-With Orchestrator:
-Orchestrator → spawns → Nushell subprocess (flat execution)
-(No deep nesting, fresh Nushell context for each task)
-
+
+
+
2. **Performance Optimization**
```rust
@@ -7279,7 +7049,7 @@ Orchestrator → spawns → Nushell subprocess (flat execution)
Each does what it's best at!
-
+
Question: Why not implement everything in Rust?
Answer:
@@ -7325,14 +7095,10 @@ Orchestrator → spawns → Nushell subprocess (flat execution)
→ /usr/local/share/provisioning/platform/ (platform configs)
3. Sets up systemd/launchd service for orchestrator
-```plaintext
-
-### Runtime Coordination
-
-**Core package expects orchestrator:**
-
-```nushell
-# core/nulib/lib_provisioning/orchestrator/client.nu
+
+Runtime Coordination
+Core package expects orchestrator:
+# core/nulib/lib_provisioning/orchestrator/client.nu
# Check if orchestrator is running
export def orchestrator-available [] {
@@ -7357,12 +7123,9 @@ export def ensure-orchestrator [] {
}
}
}
-```plaintext
-
-**Platform package executes core scripts:**
-
-```rust
-// platform/orchestrator/src/executor/nushell.rs
+
+Platform package executes core scripts:
+// platform/orchestrator/src/executor/nushell.rs
pub struct NushellExecutor {
provisioning_lib: PathBuf, // /usr/local/lib/provisioning
@@ -7395,19 +7158,12 @@ impl NushellExecutor {
self.execute_script(&script).await
}
-}
-```plaintext
-
----
-
-## Configuration Examples
-
-### Core Package Config
-
-**`/usr/local/share/provisioning/config/config.defaults.toml`:**
-
-```toml
-[orchestrator]
+}
+
+Configuration Examples
+Core Package Config
+/usr/local/share/provisioning/config/config.defaults.toml:
+[orchestrator]
enabled = true
endpoint = "http://localhost:9090"
timeout_seconds = 60
@@ -7433,14 +7189,10 @@ force_direct = [
"help",
"version"
]
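The dispatch behavior implied by `default_mode`, `fallback_to_direct`, and `force_direct` might look like the following. This is a hypothetical sketch, not the shipped CLI logic; function and variant names are assumptions.

```rust
#[derive(Debug, PartialEq)]
enum Mode {
    Direct,
    Orchestrated,
}

// Pick an execution mode for a command, honoring the config semantics:
// force_direct commands always run directly; otherwise prefer the
// orchestrator, falling back to direct only if fallback is enabled.
fn select_mode(
    command: &str,
    force_direct: &[&str],
    orchestrator_up: bool,
    fallback_to_direct: bool,
) -> Option<Mode> {
    if force_direct.contains(&command) {
        Some(Mode::Direct)
    } else if orchestrator_up {
        Some(Mode::Orchestrated)
    } else if fallback_to_direct {
        Some(Mode::Direct)
    } else {
        None // orchestrator required but unavailable
    }
}

fn main() {
    let force = ["help", "version"];
    assert_eq!(select_mode("help", &force, true, true), Some(Mode::Direct));
    assert_eq!(
        select_mode("server create", &force, true, true),
        Some(Mode::Orchestrated)
    );
    assert_eq!(
        select_mode("server create", &force, false, true),
        Some(Mode::Direct)
    );
    assert_eq!(select_mode("server create", &force, false, false), None);
    println!("mode selection verified");
}
```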
-```plaintext
-
-### Platform Package Config
-
-**`/usr/local/share/provisioning/platform/config.toml`:**
-
-```toml
-[server]
+
+Platform Package Config
+/usr/local/share/provisioning/platform/config.toml:
+[server]
host = "127.0.0.1"
port = 8080
@@ -7457,70 +7209,61 @@ checkpoint_interval_seconds = 30
binary = "nu" # Expects nu in PATH
provisioning_lib = "/usr/local/lib/provisioning"
env_vars = { NU_LIB_DIRS = "/usr/local/lib/provisioning" }
-```plaintext
-
----
-
-## Key Takeaways
-
-### 1. **Orchestrator is Essential**
-
-- Solves deep call stack problems
-- Provides performance optimization
-- Enables complex workflows
-- NOT optional for production use
-
-### 2. **Integration is Loose but Coordinated**
-
-- No code dependencies between repos
-- Runtime integration via CLI + REST API
-- Configuration-driven coordination
-- Works in both monorepo and multi-repo
-
-### 3. **Best of Both Worlds**
-
-- Rust: High-performance coordination
-- Nushell: Flexible business logic
-- Clean separation of concerns
-- Each technology does what it's best at
-
-### 4. **Multi-Repo Doesn't Change Integration**
-
-- Same runtime model as monorepo
-- Package installation sets up paths
-- Configuration enables discovery
-- Versioning ensures compatibility
-
----
-
-## Conclusion
-
-The confusing example in the multi-repo doc was **oversimplified**. The real architecture is:
-
-```plaintext
-✅ Orchestrator IS USED and IS ESSENTIAL
+
+Key Takeaways
+1. Orchestrator is Essential
+Solves deep call stack problems
+Provides performance optimization
+Enables complex workflows
+NOT optional for production use
+
+
+2. Integration is Loose but Coordinated
+No code dependencies between repos
+Runtime integration via CLI + REST API
+Configuration-driven coordination
+Works in both monorepo and multi-repo
+
+
+3. Best of Both Worlds
+Rust: High-performance coordination
+Nushell: Flexible business logic
+Clean separation of concerns
+Each technology does what it’s best at
+
+
+4. Multi-Repo Doesn't Change Integration
+Same runtime model as monorepo
+Package installation sets up paths
+Configuration enables discovery
+Versioning ensures compatibility
+
+
+Conclusion
+The confusing example in the multi-repo doc was oversimplified. The real architecture is:
+✅ Orchestrator IS USED and IS ESSENTIAL
✅ Platform (Rust) coordinates Core (Nushell) execution
✅ Loose coupling via CLI + REST API (not code dependencies)
✅ Works identically in monorepo and multi-repo
✅ Configuration-based integration (no hardcoded paths)
-```plaintext
-
-The orchestrator provides:
-
-- Performance layer (async, parallel execution)
-- Workflow engine (complex dependencies)
-- State management (checkpoints, recovery)
-- Task queue (reliable execution)
-
-While Nushell provides:
-
-- Business logic (providers, taskservs, clusters)
-- Template rendering (Jinja2 via nu_plugin_tera)
-- Configuration management (KCL integration)
-- User-facing scripting
-
-**Multi-repo just splits WHERE the code lives, not HOW it works together.**
+The orchestrator provides:
+
+Performance layer (async, parallel execution)
+Workflow engine (complex dependencies)
+State management (checkpoints, recovery)
+Task queue (reliable execution)
+
+While Nushell provides:
+
+Business logic (providers, taskservs, clusters)
+Template rendering (Jinja2 via nu_plugin_tera)
+Configuration management (Nickel integration)
+User-facing scripting
+
+Multi-repo just splits WHERE the code lives, not HOW it works together.
Version: 1.0.0
Date: 2025-10-06
@@ -7555,14 +7298,14 @@ While Nushell provides:
│ │ └── workflows/ # Core workflow system
│ ├── plugins/ # System plugins
│ └── scripts/ # Utility scripts
-├── kcl/ # Base KCL schemas
-│ ├── main.k # Main schema entry
-│ ├── lib.k # Core library types
-│ ├── settings.k # Settings schema
-│ ├── dependencies.k # Dependency schemas (with OCI support)
-│ ├── server.k # Server schemas
-│ ├── cluster.k # Cluster schemas
-│ └── workflows.k # Workflow schemas
+├── schemas/ # Base Nickel schemas
+│ ├── main.ncl # Main schema entry
+│ ├── lib.ncl # Core library types
+│ ├── settings.ncl # Settings schema
+│ ├── dependencies.ncl # Dependency schemas (with OCI support)
+│ ├── server.ncl # Server schemas
+│ ├── cluster.ncl # Cluster schemas
+│ └── workflows.ncl # Workflow schemas
├── config/ # Core configuration templates
├── templates/ # Core templates
├── tools/ # Build and distribution tools
@@ -7575,36 +7318,31 @@ While Nushell provides:
├── architecture/ # Architecture docs
└── development/ # Development guides
-```plaintext
-
-**Distribution**:
-
-- Published as OCI artifact: `oci://registry/provisioning-core:v3.5.0`
-- Contains all core functionality needed to run the provisioning system
-- Version format: `v{major}.{minor}.{patch}` (e.g., v3.5.0)
-
-**CI/CD**:
-
-- Build on commit to main
-- Publish OCI artifact on git tag (v*)
-- Run integration tests before publishing
-- Update changelog automatically
-
----
-
-### Repository 2: `provisioning-extensions`
-
-**Purpose**: All provider, taskserv, and cluster extensions
-
-```plaintext
-provisioning-extensions/
+
+Distribution:
+
+Published as OCI artifact: oci://registry/provisioning-core:v3.5.0
+Contains all core functionality needed to run the provisioning system
+Version format: v{major}.{minor}.{patch} (for example, v3.5.0)
+
+CI/CD:
+
+Build on commit to main
+Publish OCI artifact on git tag (v*)
+Run integration tests before publishing
+Update changelog automatically
+
+
+Repository 2: provisioning-extensions
+Purpose: All provider, taskserv, and cluster extensions
+provisioning-extensions/
├── providers/
│ ├── aws/
-│ │ ├── kcl/ # KCL schemas
-│ │ │ ├── kcl.mod # KCL dependencies
-│ │ │ ├── aws.k # Main provider schema
-│ │ │ ├── defaults_aws.k # AWS defaults
-│ │ │ └── server_aws.k # AWS server schema
+│ │ ├── schemas/ # Nickel schemas
+│ │ │ ├── manifest.toml # Nickel dependencies
+│ │ │ ├── aws.ncl # Main provider schema
+│ │ │ ├── defaults_aws.ncl # AWS defaults
+│ │ │ └── server_aws.ncl # AWS server schema
│ │ ├── scripts/ # Nushell scripts
│ │ │ └── install.nu # Installation script
│ │ ├── templates/ # Provider templates
@@ -7616,11 +7354,11 @@ provisioning-extensions/
│ └── (same structure)
├── taskservs/
│ ├── kubernetes/
-│ │ ├── kcl/
-│ │ │ ├── kcl.mod
-│ │ │ ├── kubernetes.k # Main taskserv schema
-│ │ │ ├── version.k # Version management
-│ │ │ └── dependencies.k # Taskserv dependencies
+│ │ ├── schemas/
+│ │ │ ├── manifest.toml
+│ │ │ ├── kubernetes.ncl # Main taskserv schema
+│ │ │ ├── version.ncl # Version management
+│ │ │ └── dependencies.ncl # Taskserv dependencies
│ │ ├── scripts/
│ │ │ ├── install.nu # Installation script
│ │ │ ├── check.nu # Health check script
@@ -7646,19 +7384,16 @@ provisioning-extensions/
├── extension-guide.md # Extension development guide
└── publishing.md # Publishing guide
-```plaintext
-
-**Distribution**:
-Each extension published separately as OCI artifact:
-
-- `oci://registry/provisioning-extensions/kubernetes:1.28.0`
-- `oci://registry/provisioning-extensions/aws:2.0.0`
-- `oci://registry/provisioning-extensions/buildkit:0.12.0`
-
-**Extension Manifest** (`manifest.yaml`):
-
-```yaml
-name: kubernetes
+
+Distribution:
+Each extension published separately as OCI artifact:
+
+oci://registry/provisioning-extensions/kubernetes:1.28.0
+oci://registry/provisioning-extensions/aws:2.0.0
+oci://registry/provisioning-extensions/buildkit:0.12.0
+
+Extension Manifest (manifest.yaml):
+name: kubernetes
type: taskserv
version: 1.28.0
description: Kubernetes container orchestration platform
@@ -7681,24 +7416,22 @@ platforms:
- linux/arm64
min_provisioning_version: "3.0.0"
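From the manifest fields above, both the OCI reference and the CI/CD git tag are derivable strings. A sketch (the `Manifest` struct and helper names are assumptions for illustration; registry and namespace values mirror the examples in this document):

```rust
// Hypothetical subset of the extension manifest.
struct Manifest {
    name: String,
    ext_type: String, // "taskserv", "provider", or "cluster"
    version: String,
}

// OCI reference: oci://{registry}/{namespace}/{name}:{version}
fn oci_reference(registry: &str, namespace: &str, m: &Manifest) -> String {
    format!("oci://{registry}/{namespace}/{}:{}", m.name, m.version)
}

// Git tag format from the CI/CD section:
// {extension-type}/{extension-name}/v{version}, with the type pluralized.
fn git_tag(m: &Manifest) -> String {
    format!("{}s/{}/v{}", m.ext_type, m.name, m.version)
}

fn main() {
    let m = Manifest {
        name: "kubernetes".into(),
        ext_type: "taskserv".into(),
        version: "1.28.0".into(),
    };
    let reference = oci_reference("registry", "provisioning-extensions", &m);
    assert_eq!(
        reference,
        "oci://registry/provisioning-extensions/kubernetes:1.28.0"
    );
    assert_eq!(git_tag(&m), "taskservs/kubernetes/v1.28.0");
    println!("{reference}");
}
```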
-```plaintext
-
-**CI/CD**:
-
-- Build and publish each extension independently
-- Git tag format: `{extension-type}/{extension-name}/v{version}`
- - Example: `taskservs/kubernetes/v1.28.0`
-- Automated publishing to OCI registry on tag
-- Run extension-specific tests before publishing
-
----
-
-### Repository 3: `provisioning-platform`
-
-**Purpose**: Platform services (orchestrator, control-center, MCP server, API gateway)
-
-```plaintext
-provisioning-platform/
+
+CI/CD:
+
+Build and publish each extension independently
+Git tag format: {extension-type}/{extension-name}/v{version}
+   - Example: taskservs/kubernetes/v1.28.0
+Automated publishing to OCI registry on tag
+Run extension-specific tests before publishing
+
+
+Repository 3: provisioning-platform
+Purpose: Platform services (orchestrator, control-center, MCP server, API gateway)
+provisioning-platform/
├── orchestrator/ # Rust orchestrator service
│ ├── src/
│ ├── Cargo.toml
@@ -7729,31 +7462,26 @@ provisioning-platform/
├── deployment.md
└── api-reference.md
-```plaintext
-
-**Distribution**:
-Standard Docker images in OCI registry:
-
-- `oci://registry/provisioning-platform/orchestrator:v1.2.0`
-- `oci://registry/provisioning-platform/control-center:v1.2.0`
-- `oci://registry/provisioning-platform/mcp-server:v1.0.0`
-- `oci://registry/provisioning-platform/api-gateway:v1.0.0`
-
-**CI/CD**:
-
-- Build Docker images on commit to main
-- Publish images on git tag (v*)
-- Multi-architecture builds (amd64, arm64)
-- Security scanning before publishing
-
----
-
-## OCI Registry Integration
-
-### Registry Structure
-
-```plaintext
-OCI Registry (localhost:5000 or harbor.company.com)
+
+Distribution:
+Standard Docker images in OCI registry:
+
+oci://registry/provisioning-platform/orchestrator:v1.2.0
+oci://registry/provisioning-platform/control-center:v1.2.0
+oci://registry/provisioning-platform/mcp-server:v1.0.0
+oci://registry/provisioning-platform/api-gateway:v1.0.0
+
+CI/CD:
+
+Build Docker images on commit to main
+Publish images on git tag (v*)
+Multi-architecture builds (amd64, arm64)
+Security scanning before publishing
+
+
+OCI Registry Integration
+Registry Structure
+OCI Registry (localhost:5000 or harbor.company.com)
├── provisioning-core/
│ ├── v3.5.0 # Core system artifact
│ ├── v3.4.0
@@ -7771,18 +7499,14 @@ OCI Registry (localhost:5000 or harbor.company.com)
├── mcp-server:v1.0.0
└── api-gateway:v1.0.0
-```plaintext
-
-### OCI Artifact Structure
-
-Each extension packaged as OCI artifact:
-
-```plaintext
-kubernetes-1.28.0.tar.gz
-├── kcl/ # KCL schemas
-│ ├── kubernetes.k
-│ ├── version.k
-│ └── dependencies.k
+
+OCI Artifact Structure
+Each extension packaged as OCI artifact:
+kubernetes-1.28.0.tar.gz
+├── schemas/ # Nickel schemas
+│ ├── kubernetes.ncl
+│ ├── version.ncl
+│ └── dependencies.ncl
├── scripts/ # Nushell scripts
│ ├── install.nu
│ ├── check.nu
@@ -7795,18 +7519,12 @@ kubernetes-1.28.0.tar.gz
├── manifest.yaml # Extension manifest
└── oci-manifest.json # OCI manifest metadata
-```plaintext
-
----
-
-## Dependency Management
-
-### Workspace Configuration
-
-**File**: `workspace/config/provisioning.yaml`
-
-```yaml
-# Core system dependency
+
+Dependency Management
+Workspace Configuration
+File: workspace/config/provisioning.yaml
+# Core system dependency
dependencies:
core:
source: "oci://harbor.company.com/provisioning-core:v3.5.0"
@@ -7857,28 +7575,27 @@ dependencies:
endpoint: "localhost:5000"
namespaces:
extensions: "provisioning-extensions"
- kcl: "provisioning-kcl"
+ nickel: "provisioning-nickel"
platform: "provisioning-platform"
test: "provisioning-test"
-```plaintext
-
-### Dependency Resolution
-
-The system resolves dependencies in this order:
-
-1. **Parse Configuration**: Read `provisioning.yaml` and extract dependencies
-2. **Resolve Core**: Ensure core system version is compatible
-3. **Resolve Extensions**: For each extension:
- - Check if already installed and version matches
- - Pull from OCI registry if needed
- - Recursively resolve extension dependencies
-4. **Validate Graph**: Check for dependency cycles and conflicts
-5. **Install**: Install extensions in topological order
-
-### Dependency Resolution Commands
-
-```bash
-# Resolve and install all dependencies
+
+Dependency Resolution
+The system resolves dependencies in this order:
+
+1. Parse Configuration: Read provisioning.yaml and extract dependencies
+2. Resolve Core: Ensure core system version is compatible
+3. Resolve Extensions: For each extension:
+   - Check if already installed and version matches
+   - Pull from OCI registry if needed
+   - Recursively resolve extension dependencies
+4. Validate Graph: Check for dependency cycles and conflicts
+5. Install: Install extensions in topological order
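Steps 4 and 5 amount to cycle detection plus a topological sort. A compact depth-first sketch (extension names are illustrative; the real resolver also checks versions against the registry):

```rust
use std::collections::{HashMap, HashSet};

// Returns a valid install order (dependencies before dependents),
// or None if the dependency graph contains a cycle.
fn install_order(deps: &HashMap<&str, Vec<&str>>) -> Option<Vec<String>> {
    fn visit<'a>(
        node: &'a str,
        deps: &HashMap<&'a str, Vec<&'a str>>,
        done: &mut HashSet<&'a str>,
        in_progress: &mut HashSet<&'a str>,
        order: &mut Vec<String>,
    ) -> bool {
        if done.contains(node) {
            return true;
        }
        if !in_progress.insert(node) {
            return false; // revisited while still in progress: cycle
        }
        for &dep in deps.get(node).into_iter().flatten() {
            if !visit(dep, deps, done, in_progress, order) {
                return false;
            }
        }
        in_progress.remove(node);
        done.insert(node);
        order.push(node.to_string());
        true
    }

    let mut order = Vec::new();
    let mut done = HashSet::new();
    let mut in_progress = HashSet::new();
    for &node in deps.keys() {
        if !visit(node, deps, &mut done, &mut in_progress, &mut order) {
            return None;
        }
    }
    Some(order)
}

fn main() {
    let mut deps = HashMap::new();
    deps.insert("kubernetes", vec!["containerd", "etcd"]);
    deps.insert("containerd", vec![]);
    deps.insert("etcd", vec![]);
    let order = install_order(&deps).expect("no cycles");
    let pos = |n: &str| order.iter().position(|x| x == n).unwrap();
    // Dependencies install before dependents.
    assert!(pos("containerd") < pos("kubernetes"));
    assert!(pos("etcd") < pos("kubernetes"));
    println!("install order: {order:?}");
}
```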
+
+Dependency Resolution Commands
+# Resolve and install all dependencies
provisioning dep resolve
# Check for dependency updates
@@ -7892,16 +7609,11 @@ provisioning dep validate
# Show dependency tree
provisioning dep tree kubernetes
-```plaintext
-
----
-
-## OCI Client Operations
-
-### CLI Commands
-
-```bash
-# Pull extension from OCI registry
+
+
+OCI Client Operations
+CLI Commands
+# Pull extension from OCI registry
provisioning oci pull kubernetes:1.28.0
# Push extension to OCI registry
@@ -7929,12 +7641,9 @@ provisioning oci delete kubernetes:1.28.0
provisioning oci copy \
localhost:5000/provisioning-extensions/kubernetes:1.28.0 \
harbor.company.com/provisioning-extensions/kubernetes:1.28.0
-```plaintext
-
-### OCI Configuration
-
-```bash
-# Show OCI configuration
+
+OCI Configuration
+# Show OCI configuration
provisioning oci config
# Output:
@@ -7948,25 +7657,20 @@ provisioning oci config
cache_dir: "~/.provisioning/oci-cache"
tls_enabled: false
}
-```plaintext
-
----
-
-## Extension Development Workflow
-
-### 1. Develop Extension
-
-```bash
-# Create new extension from template
+
+Extension Development Workflow
+1. Develop Extension
+# Create new extension from template
provisioning generate extension taskserv redis
# Directory structure created:
# extensions/taskservs/redis/
-# ├── kcl/
-# │ ├── kcl.mod
-# │ ├── redis.k
-# │ ├── version.k
-# │ └── dependencies.k
+# ├── schemas/
+# │ ├── manifest.toml
+# │ ├── redis.ncl
+# │ ├── version.ncl
+# │ └── dependencies.ncl
# ├── scripts/
# │ ├── install.nu
# │ ├── check.nu
@@ -7976,12 +7680,9 @@ provisioning generate extension taskserv redis
# │ └── README.md
# ├── tests/
# └── manifest.yaml
-```plaintext
-
-### 2. Test Extension Locally
-
-```bash
-# Load extension from local path
+
+
+# Load extension from local path
provisioning module load taskserv workspace_dev redis --source local
# Test installation
@@ -7989,24 +7690,18 @@ provisioning taskserv create redis --infra test-env --check
# Run extension tests
provisioning test extension redis
-```plaintext
-
-### 3. Package Extension
-
-```bash
-# Validate extension structure
+
+
+# Validate extension structure
provisioning oci package validate ./extensions/taskservs/redis
# Package as OCI artifact
provisioning oci package ./extensions/taskservs/redis
# Output: redis-1.0.0.tar.gz
-```plaintext
-
-### 4. Publish Extension
-
-```bash
-# Login to registry (one-time)
+
+
+# Login to registry (one-time)
provisioning oci login localhost:5000
# Publish extension
@@ -8021,12 +7716,9 @@ provisioning oci tags redis
# ├───────────┼─────────┼───────────────────────────────────────────────────┤
# │ redis │ 1.0.0 │ localhost:5000/provisioning-extensions/redis:1.0.0│
# └───────────┴─────────┴───────────────────────────────────────────────────┘
-```plaintext
-
-### 5. Use Published Extension
-
-```bash
-# Add to workspace configuration
+
+
+# Add to workspace configuration
# workspace/config/provisioning.yaml:
# dependencies:
# extensions:
@@ -8038,18 +7730,12 @@ provisioning oci tags redis
provisioning dep resolve
# Extension automatically downloaded and installed
-```plaintext
-
----
-
-## Registry Deployment Options
-
-### Local Registry (Solo Development)
-
-**Using Zot (lightweight OCI registry)**:
-
-```bash
-# Start local OCI registry
+
+
+
+
+Using Zot (lightweight OCI registry):
+# Start local OCI registry
provisioning oci-registry start
# Configuration:
@@ -8063,14 +7749,10 @@ provisioning oci-registry stop
# Check status
provisioning oci-registry status
-```plaintext
-
-### Remote Registry (Multi-User/Enterprise)
-
-**Using Harbor**:
-
-```yaml
-# workspace/config/provisioning.yaml
+
+
+Using Harbor:
+# workspace/config/provisioning.yaml
dependencies:
registry:
type: "oci"
@@ -8081,148 +7763,136 @@ dependencies:
platform: "provisioning/platform"
tls_enabled: true
auth_token_path: "~/.provisioning/tokens/harbor"
-```plaintext
-
-**Features**:
-
-- Multi-user authentication
-- Role-based access control (RBAC)
-- Vulnerability scanning
-- Replication across registries
-- Webhook notifications
-- Image signing (cosign/notation)
-
----
-
-## Migration from Monorepo
-
-### Phase 1: Parallel Structure (Current)
-
-- Monorepo still exists and works
-- OCI distribution layer added on top
-- Extensions can be loaded from local or OCI
-- No breaking changes
-
-### Phase 2: Gradual Migration
-
-```bash
-# Migrate extensions one by one
+
+Features:
+
+Multi-user authentication
+Role-based access control (RBAC)
+Vulnerability scanning
+Replication across registries
+Webhook notifications
+Image signing (cosign/notation)
+
+
+
+
+
+Monorepo still exists and works
+OCI distribution layer added on top
+Extensions can be loaded from local or OCI
+No breaking changes
+
+
+# Migrate extensions one by one
for ext in (ls provisioning/extensions/taskservs) {
  provisioning oci publish $ext.name
}
# Update workspace configurations to use OCI
provisioning workspace migrate-to-oci workspace_prod
-```plaintext
-
-### Phase 3: Repository Split
-
-1. Create `provisioning-core` repository
- - Extract core/ and kcl/ directories
- - Set up CI/CD for core publishing
- - Publish initial OCI artifact
-
-2. Create `provisioning-extensions` repository
- - Extract extensions/ directory
- - Set up CI/CD for extension publishing
- - Publish all extensions to OCI registry
-
-3. Create `provisioning-platform` repository
- - Extract platform/ directory
- - Set up Docker image builds
- - Publish platform services
-
-4. Update workspaces
- - Reconfigure to use OCI dependencies
- - Test multi-repo setup
- - Verify all functionality works
-
-### Phase 4: Deprecate Monorepo
-
-- Archive monorepo
-- Redirect to new repositories
-- Update documentation
-- Announce migration complete
-
----
-
-## Benefits Summary
-
-### Modularity
-
-✅ Independent repositories for core, extensions, and platform
+
+
+
+
+Create provisioning-core repository
+
+Extract core/ and schemas/ directories
+Set up CI/CD for core publishing
+Publish initial OCI artifact
+
+
+
+Create provisioning-extensions repository
+
+Extract extensions/ directory
+Set up CI/CD for extension publishing
+Publish all extensions to OCI registry
+
+
+
+Create provisioning-platform repository
+
+Extract platform/ directory
+Set up Docker image builds
+Publish platform services
+
+
+
+Update workspaces
+
+Reconfigure to use OCI dependencies
+Test multi-repo setup
+Verify all functionality works
+
+
+
+
+
+Archive monorepo
+Redirect to new repositories
+Update documentation
+Announce migration complete
+
+
+
+
+✅ Independent repositories for core, extensions, and platform
✅ Extensions can be developed and versioned separately
-✅ Clear ownership and responsibility boundaries
-
-### Distribution
-
-✅ OCI-native distribution (industry standard)
+✅ Clear ownership and responsibility boundaries
+
+✅ OCI-native distribution (industry standard)
✅ Built-in versioning with OCI tags
✅ Efficient caching with OCI layers
-✅ Works with standard tools (skopeo, crane, oras)
-
-### Security
-
-✅ TLS support for registries
+✅ Works with standard tools (skopeo, crane, oras)
+
+✅ TLS support for registries
✅ Authentication and authorization
✅ Vulnerability scanning (Harbor)
✅ Image signing (cosign, notation)
-✅ RBAC for access control
-
-### Developer Experience
-
-✅ Simple CLI commands for extension management
+✅ RBAC for access control
+
+✅ Simple CLI commands for extension management
✅ Automatic dependency resolution
✅ Local testing before publishing
-✅ Easy extension discovery and installation
-
-### Operations
-
-✅ Air-gapped deployments (mirror OCI registry)
-✅ Bandwidth efficient (only download what's needed)
+✅ Easy extension discovery and installation
+
+✅ Air-gapped deployments (mirror OCI registry)
+✅ Bandwidth efficient (only download what’s needed)
✅ Version pinning for reproducibility
-✅ Rollback support (use previous versions)
-
-### Ecosystem
-
-✅ Compatible with existing OCI tooling
+✅ Rollback support (use previous versions)
+
+✅ Compatible with existing OCI tooling
✅ Can use public registries (DockerHub, GitHub, etc.)
✅ Mirror to multiple registries
-✅ Replication for high availability
-
----
-
-## Implementation Status
-
-| Component | Status | Notes |
-|-----------|--------|-------|
-| **KCL Schemas** | ✅ Complete | OCI schemas in `dependencies.k` |
-| **OCI Client** | ✅ Complete | `oci/client.nu` with skopeo/crane/oras |
-| **OCI Commands** | ✅ Complete | `oci/commands.nu` CLI interface |
-| **Dependency Resolver** | ✅ Complete | `dependencies/resolver.nu` |
-| **OCI Packaging** | ✅ Complete | `tools/oci-package.nu` |
-| **Repository Design** | ✅ Complete | This document |
-| **Migration Plan** | ✅ Complete | Phased approach defined |
-| **Documentation** | ✅ Complete | User guides and API docs |
-| **CI/CD Setup** | ⏳ Pending | Automated publishing pipelines |
-| **Registry Deployment** | ⏳ Pending | Zot/Harbor setup |
-
----
-
-## Related Documentation
-
-- OCI Packaging Tool - Extension packaging
-- OCI Client Library - OCI operations
-- Dependency Resolver - Dependency management
-- KCL Schemas - Type definitions
-- [Extension Development Guide](../user/extension-development.md) - How to create extensions
-
----
-
-**Maintained By**: Architecture Team
-**Review Cycle**: Quarterly
-**Next Review**: 2026-01-06
-
+✅ Replication for high availability
+
+
+| Component | Status | Notes |
+|-----------|--------|-------|
+| Nickel Schemas | ✅ Complete | OCI schemas in dependencies.ncl |
+| OCI Client | ✅ Complete | oci/client.nu with skopeo/crane/oras |
+| OCI Commands | ✅ Complete | oci/commands.nu CLI interface |
+| Dependency Resolver | ✅ Complete | dependencies/resolver.nu |
+| OCI Packaging | ✅ Complete | tools/oci-package.nu |
+| Repository Design | ✅ Complete | This document |
+| Migration Plan | ✅ Complete | Phased approach defined |
+| Documentation | ✅ Complete | User guides and API docs |
+| CI/CD Setup | ⏳ Pending | Automated publishing pipelines |
+| Registry Deployment | ⏳ Pending | Zot/Harbor setup |
+
+
+
+
+
+OCI Packaging Tool - Extension packaging
+OCI Client Library - OCI operations
+Dependency Resolver - Dependency management
+Nickel Schemas - Type definitions
+Extension Development Guide - How to create extensions
+
+
+Maintained By: Architecture Team
+Review Cycle: Quarterly
+Next Review: 2026-01-06
Date: 2025-10-01
Status: Strategic Analysis
@@ -8267,7 +7937,7 @@ provisioning workspace migrate-to-oci workspace_prod
Independent repositories with package-based integration:
-provisioning-core - Nushell libraries and KCL schemas
+provisioning-core - Nushell libraries and Nickel schemas
provisioning-platform - Rust services (orchestrator, control-center, MCP)
provisioning-extensions - Extension marketplace/catalog
provisioning-workspace - Project templates and examples
@@ -8296,12 +7966,12 @@ provisioning workspace migrate-to-oci workspace_prod
│ └── workflows/ # Workflow orchestration
├── cli/ # CLI entry point
│ └── provisioning # Pure Nushell CLI
-├── kcl/ # KCL schemas
-│ ├── main.k
-│ ├── settings.k
-│ ├── server.k
-│ ├── cluster.k
-│ └── workflows.k
+├── schemas/ # Nickel schemas
+│ ├── main.ncl
+│ ├── settings.ncl
+│ ├── server.ncl
+│ ├── cluster.ncl
+│ └── workflows.ncl
├── config/ # Default configurations
│ └── config.defaults.toml
├── templates/ # Core templates
@@ -8312,38 +7982,28 @@ provisioning workspace migrate-to-oci workspace_prod
├── README.md
├── CHANGELOG.md
└── version.toml # Core version file
-```plaintext
-
-**Technology:** Nushell, KCL
-**Primary Language:** Nushell
-**Release Frequency:** Monthly (stable)
-**Ownership:** Core team
-**Dependencies:** None (foundation)
-
-**Package Output:**
-
-- `provisioning-core-{version}.tar.gz` - Installable package
-- Published to package registry
-
-**Installation Path:**
-
-```plaintext
-/usr/local/
+
+Technology: Nushell, Nickel
+Primary Language: Nushell
+Release Frequency: Monthly (stable)
+Ownership: Core team
+Dependencies: None (foundation)
+Package Output:
+
+provisioning-core-{version}.tar.gz - Installable package
+Published to package registry
+
+Installation Path:
+ /usr/local/
├── bin/provisioning
├── lib/provisioning/
└── share/provisioning/
-```plaintext
-
----
-
-### Repository 2: `provisioning-platform`
-
-**Purpose:** High-performance Rust platform services
-
-**Contents:**
-
-```plaintext
-provisioning-platform/
+
+
+
+Purpose: High-performance Rust platform services
+Contents:
+provisioning-platform/
├── orchestrator/ # Rust orchestrator
│ ├── src/
│ ├── tests/
@@ -8370,48 +8030,39 @@ provisioning-platform/
├── LICENSE
├── README.md
└── CHANGELOG.md
-```plaintext
-
-**Technology:** Rust, WebAssembly
-**Primary Language:** Rust
-**Release Frequency:** Bi-weekly (fast iteration)
-**Ownership:** Platform team
-**Dependencies:**
-
-- `provisioning-core` (runtime integration, loose coupling)
-
-**Package Output:**
-
-- `provisioning-platform-{version}.tar.gz` - Binaries
-- Binaries for: Linux (x86_64, arm64), macOS (x86_64, arm64)
-
-**Installation Path:**
-
-```plaintext
-/usr/local/
+
+Technology: Rust, WebAssembly
+Primary Language: Rust
+Release Frequency: Bi-weekly (fast iteration)
+Ownership: Platform team
+Dependencies:
+
+provisioning-core (runtime integration, loose coupling)
+
+Package Output:
+
+provisioning-platform-{version}.tar.gz - Binaries
+Binaries for: Linux (x86_64, arm64), macOS (x86_64, arm64)
+
+Installation Path:
+/usr/local/
├── bin/
│ ├── provisioning-orchestrator
│ └── provisioning-control-center
└── share/provisioning/platform/
-```plaintext
-
-**Integration with Core:**
-
-- Platform services call `provisioning` CLI via subprocess
-- No direct code dependencies
-- Communication via REST API and file-based queues
-- Core and Platform can be deployed independently
-
----
-
-### Repository 3: `provisioning-extensions`
-
-**Purpose:** Extension marketplace and community modules
-
-**Contents:**
-
-```plaintext
-provisioning-extensions/
+
+Integration with Core:
+
+Platform services call provisioning CLI via subprocess
+No direct code dependencies
+Communication via REST API and file-based queues
+Core and Platform can be deployed independently
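The loose coupling above means the platform shells out to the core CLI rather than linking against it. A sketch of the pattern in Python (the real orchestrator is Rust); the `provisioning` invocation is shown with a stand-in command so the snippet runs anywhere:

```python
import subprocess

def call_cli(binary: str, *args: str) -> str:
    """Invoke an external CLI and capture stdout -- a process boundary
    instead of a code dependency, as in the Core <-> Platform model."""
    result = subprocess.run(
        [binary, *args], capture_output=True, text=True, check=True
    )
    return result.stdout.strip()

# The platform side would be something like:
#   call_cli("provisioning", "server", "create", name, "--infra", "production")
# Stand-in demo using echo so the snippet is runnable:
print(call_cli("echo", "server", "created"))
# → server created
```

Because only stdout and exit codes cross the boundary, core and platform can be upgraded independently as long as the CLI surface stays compatible.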
+
+
+
+Purpose: Extension marketplace and community modules
+Contents:
+provisioning-extensions/
├── registry/ # Extension registry
│ ├── index.json # Searchable index
│ └── catalog/ # Extension metadata
@@ -8442,52 +8093,40 @@ provisioning-extensions/
├── docs/ # Extension development guide
├── LICENSE
└── README.md
-```plaintext
-
-**Technology:** Nushell, KCL
-**Primary Language:** Nushell
-**Release Frequency:** Continuous (per-extension)
-**Ownership:** Community + Core team
-**Dependencies:**
-
-- `provisioning-core` (extends core functionality)
-
-**Package Output:**
-
-- Individual extension packages: `provisioning-ext-{name}-{version}.tar.gz`
-- Registry index for discovery
-
-**Installation:**
-
-```bash
-# Install extension via core CLI
+
+Technology: Nushell, Nickel
+Primary Language: Nushell
+Release Frequency: Continuous (per-extension)
+Ownership: Community + Core team
+Dependencies:
+
+provisioning-core (extends core functionality)
+
+Package Output:
+
+Individual extension packages: provisioning-ext-{name}-{version}.tar.gz
+Registry index for discovery
+
+Installation:
+# Install extension via core CLI
provisioning extension install mongodb
provisioning extension install azure-provider
-```plaintext
-
-**Extension Structure:**
-Each extension is self-contained:
-
-```plaintext
-mongodb/
+
+Extension Structure:
+Each extension is self-contained:
+mongodb/
├── manifest.toml # Extension metadata
├── taskserv.nu # Implementation
├── templates/ # Templates
-├── kcl/ # KCL schemas
+├── schemas/ # Nickel schemas
├── tests/ # Tests
└── README.md
-```plaintext
-
----
-
-### Repository 4: `provisioning-workspace`
-
-**Purpose:** Project templates and starter kits
-
-**Contents:**
-
-```plaintext
-provisioning-workspace/
+
+
+
+Purpose: Project templates and starter kits
+Contents:
+provisioning-workspace/
├── templates/ # Workspace templates
│ ├── minimal/ # Minimal starter
│ ├── kubernetes/ # Full K8s cluster
@@ -8505,43 +8144,34 @@ provisioning-workspace/
│ └── create-workspace.nu
├── LICENSE
└── README.md
-```plaintext
-
-**Technology:** Configuration files, KCL
-**Primary Language:** TOML, KCL, YAML
-**Release Frequency:** Quarterly (stable templates)
-**Ownership:** Community + Documentation team
-**Dependencies:**
-
-- `provisioning-core` (templates use core)
-- `provisioning-extensions` (may reference extensions)
-
-**Package Output:**
-
-- `provisioning-templates-{version}.tar.gz`
-
-**Usage:**
-
-```bash
-# Create workspace from template
+
+Technology: Configuration files, Nickel
+Primary Language: TOML, Nickel, YAML
+Release Frequency: Quarterly (stable templates)
+Ownership: Community + Documentation team
+Dependencies:
+
+provisioning-core (templates use core)
+provisioning-extensions (may reference extensions)
+
+Package Output:
+
+provisioning-templates-{version}.tar.gz
+
+Usage:
+# Create workspace from template
provisioning workspace init my-project --template kubernetes
# Or use separate tool
gh repo create my-project --template provisioning-workspace
cd my-project
provisioning workspace init
-```plaintext
-
----
-
-### Repository 5: `provisioning-distribution`
-
-**Purpose:** Release automation, packaging, and distribution infrastructure
-
-**Contents:**
-
-```plaintext
-provisioning-distribution/
+
+
+
+Purpose: Release automation, packaging, and distribution infrastructure
+Contents:
+provisioning-distribution/
├── release-automation/ # Automated release workflows
│ ├── build-all.nu # Build all packages
│ ├── publish.nu # Publish to registries
@@ -8569,31 +8199,25 @@ provisioning-distribution/
│ └── packaging-guide.md
├── LICENSE
└── README.md
-```plaintext
-
-**Technology:** Nushell, Bash, CI/CD
-**Primary Language:** Nushell, YAML
-**Release Frequency:** As needed
-**Ownership:** Release engineering team
-**Dependencies:** All repositories (orchestrates releases)
-
-**Responsibilities:**
-
-- Build packages from all repositories
-- Coordinate multi-repo releases
-- Publish to package registries
-- Manage version compatibility
-- Generate release notes
-- Host package registry
-
----
-
-## Dependency and Integration Model
-
-### Package-Based Dependencies (Not Submodules)
-
-```plaintext
-┌─────────────────────────────────────────────────────────────┐
+
+Technology: Nushell, Bash, CI/CD
+Primary Language: Nushell, YAML
+Release Frequency: As needed
+Ownership: Release engineering team
+Dependencies: All repositories (orchestrates releases)
+Responsibilities:
+
+Build packages from all repositories
+Coordinate multi-repo releases
+Publish to package registries
+Manage version compatibility
+Generate release notes
+Host package registry
+
+
+
+
+┌─────────────────────────────────────────────────────────────┐
│ provisioning-distribution │
│ (Release orchestration & registry) │
└──────────────────────────┬──────────────────────────────────┘
@@ -8615,16 +8239,11 @@ provisioning-distribution/
│ ↓ │
└───────────────────────────────────→┘
runtime integration
-```plaintext
-
-### Integration Mechanisms
-
-#### 1. **Core ↔ Platform Integration**
-
-**Method:** Loose coupling via CLI + REST API
-
-```nushell
-# Platform calls Core CLI (subprocess)
+
+
+
+Method: Loose coupling via CLI + REST API
+# Platform calls Core CLI (subprocess)
def create-server [name: string] {
# Orchestrator executes Core CLI
^provisioning server create $name --infra production
@@ -8634,22 +8253,15 @@ def create-server [name: string] {
def submit-workflow [workflow: record] {
http post http://localhost:9090/workflows/submit $workflow
}
-```plaintext
-
-**Version Compatibility:**
-
-```toml
-# platform/Cargo.toml
+
+Version Compatibility:
+# platform/Cargo.toml
[package.metadata.provisioning]
core-version = "^3.0" # Compatible with core 3.x
-```plaintext
-
-#### 2. **Core ↔ Extensions Integration**
-
-**Method:** Plugin/module system
-
-```nushell
-# Extension manifest
+
+
+Method: Plugin/module system
+# Extension manifest
# extensions/mongodb/manifest.toml
[extension]
name = "mongodb"
@@ -8666,14 +8278,10 @@ provisioning extension install mongodb
# → Downloads from registry
# → Validates compatibility
# → Installs to ~/.provisioning/extensions/mongodb
-```plaintext
-
-#### 3. **Workspace Templates**
-
-**Method:** Git templates or package templates
-
-```bash
-# Option 1: GitHub template repository
+
+
+Method: Git templates or package templates
+# Option 1: GitHub template repository
gh repo create my-infra --template provisioning-workspace
cd my-infra
provisioning workspace init
@@ -8683,29 +8291,19 @@ provisioning workspace create my-infra --template kubernetes
# → Downloads template package
# → Scaffolds workspace
# → Initializes configuration
-```plaintext
-
----
-
-## Version Management Strategy
-
-### Semantic Versioning Per Repository
-
-Each repository maintains independent semantic versioning:
-
-```plaintext
-provisioning-core: 3.2.1
+
+
+
+
+Each repository maintains independent semantic versioning:
+provisioning-core: 3.2.1
provisioning-platform: 2.5.3
provisioning-extensions: (per-extension versioning)
provisioning-workspace: 1.4.0
-```plaintext
-
-### Compatibility Matrix
-
-**`provisioning-distribution/version-management/versions.toml`:**
-
-```toml
-# Version compatibility matrix
+
+
+provisioning-distribution/version-management/versions.toml:
+# Version compatibility matrix
[compatibility]
# Core versions and compatible platform versions
@@ -8737,14 +8335,10 @@ lts-until = "2026-09-01"
core = "3.1.5"
platform = "2.4.8"
workspace = "1.3.0"
-```plaintext
-
-### Release Coordination
-
-**Coordinated releases** for major versions:
-
-```bash
-# Major release: All repos release together
+
+
+Coordinated releases for major versions:
+# Major release: All repos release together
provisioning-core: 3.0.0
provisioning-platform: 2.0.0
provisioning-workspace: 1.0.0
@@ -8752,16 +8346,11 @@ provisioning-workspace: 1.0.0
# Minor/patch releases: Independent
provisioning-core: 3.1.0 (adds features, platform stays 2.0.x)
provisioning-platform: 2.1.0 (improves orchestrator, core stays 3.1.x)
-```plaintext
-
----
-
-## Development Workflow
-
-### Working on Single Repository
-
-```bash
-# Developer working on core only
+
+
+
+
+# Developer working on core only
git clone https://github.com/yourorg/provisioning-core
cd provisioning-core
@@ -8777,12 +8366,9 @@ just build
# Test installation locally
just install-dev
-```plaintext
-
-### Working Across Repositories
-
-```bash
-# Scenario: Adding new feature requiring core + platform changes
+
+
+# Scenario: Adding new feature requiring core + platform changes
# 1. Clone both repositories
git clone https://github.com/yourorg/provisioning-core
@@ -8818,12 +8404,9 @@ cargo test
# Merge core PR first, cut release 3.3.0
# Update platform dependency to core 3.3.0
# Merge platform PR, cut release 2.6.0
-```plaintext
-
-### Testing Cross-Repo Integration
-
-```bash
-# Integration tests in provisioning-distribution
+
+
+# Integration tests in provisioning-distribution
cd provisioning-distribution
# Test specific version combination
@@ -8833,18 +8416,12 @@ just test-integration \
# Test bundle
just test-bundle stable-3.3
-```plaintext
-
----
-
-## Distribution Strategy
-
-### Individual Repository Releases
-
-Each repository releases independently:
-
-```bash
-# Core release
+
+
+
+
+Each repository releases independently:
+# Core release
cd provisioning-core
git tag v3.2.1
git push --tags
@@ -8857,14 +8434,10 @@ git tag v2.5.3
git push --tags
# → GitHub Actions builds binaries
# → Publishes to package registry
-```plaintext
-
-### Bundle Releases (Coordinated)
-
-Distribution repository creates tested bundles:
-
-```bash
-cd provisioning-distribution
+
+
+Distribution repository creates tested bundles:
+cd provisioning-distribution
# Create bundle
just create-bundle stable-3.2 \
@@ -8880,26 +8453,19 @@ just publish-bundle stable-3.2
# → Creates meta-package with all components
# → Publishes bundle to registry
# → Updates documentation
-```plaintext
-
-### User Installation Options
-
-#### Option 1: Bundle Installation (Recommended for Users)
-
-```bash
-# Install stable bundle (easiest)
+
+
+
+# Install stable bundle (easiest)
curl -fsSL https://get.provisioning.io | sh
# Installs:
# - provisioning-core 3.2.1
# - provisioning-platform 2.5.3
# - provisioning-workspace 1.4.0
-```plaintext
-
-#### Option 2: Individual Component Installation
-
-```bash
-# Install only core (minimal)
+
+
+# Install only core (minimal)
curl -fsSL https://get.provisioning.io/core | sh
# Add platform later
@@ -8907,68 +8473,55 @@ provisioning install platform
# Add extensions
provisioning extension install mongodb
-```plaintext
-
-#### Option 3: Custom Combination
-
-```bash
-# Install specific versions
+
+
+# Install specific versions
provisioning install core@3.1.0
provisioning install platform@2.4.0
-```plaintext
-
----
-
-## Repository Ownership and Contribution Model
-
-### Core Team Ownership
-
-| Repository | Primary Owner | Contribution Model |
-|------------|---------------|-------------------|
-| `provisioning-core` | Core Team | Strict review, stable API |
-| `provisioning-platform` | Platform Team | Fast iteration, performance focus |
-| `provisioning-extensions` | Community + Core | Open contributions, moderated |
-| `provisioning-workspace` | Docs Team | Template contributions welcome |
-| `provisioning-distribution` | Release Engineering | Core team only |
-
-### Contribution Workflow
-
-**For Core:**
-
-1. Create issue in `provisioning-core`
-2. Discuss design
-3. Submit PR with tests
-4. Strict code review
-5. Merge to `main`
-6. Release when ready
-
-**For Extensions:**
-
-1. Create extension in `provisioning-extensions`
-2. Follow extension guidelines
-3. Submit PR
-4. Community review
-5. Merge and publish to registry
-6. Independent versioning
-
-**For Platform:**
-
-1. Create issue in `provisioning-platform`
-2. Implement with benchmarks
-3. Submit PR
-4. Performance review
-5. Merge and release
-
----
-
-## CI/CD Strategy
-
-### Per-Repository CI/CD
-
-**Core CI (`provisioning-core/.github/workflows/ci.yml`):**
-
-```yaml
-name: Core CI
+
+
+
+
+| Repository | Primary Owner | Contribution Model |
+|------------|---------------|--------------------|
+| provisioning-core | Core Team | Strict review, stable API |
+| provisioning-platform | Platform Team | Fast iteration, performance focus |
+| provisioning-extensions | Community + Core | Open contributions, moderated |
+| provisioning-workspace | Docs Team | Template contributions welcome |
+| provisioning-distribution | Release Engineering | Core team only |
+
+
+
+For Core:
+
+Create issue in provisioning-core
+Discuss design
+Submit PR with tests
+Strict code review
+Merge to main
+Release when ready
+
+For Extensions:
+
+Create extension in provisioning-extensions
+Follow extension guidelines
+Submit PR
+Community review
+Merge and publish to registry
+Independent versioning
+
+For Platform:
+
+Create issue in provisioning-platform
+Implement with benchmarks
+Submit PR
+Performance review
+Merge and release
+
+
+
+
+Core CI (provisioning-core/.github/workflows/ci.yml):
+name: Core CI
on: [push, pull_request]
@@ -8981,8 +8534,8 @@ jobs:
run: cargo install nu
- name: Run tests
run: just test
- - name: Validate KCL schemas
- run: just validate-kcl
+ - name: Validate Nickel schemas
+ run: just validate-nickel
package:
runs-on: ubuntu-latest
@@ -8995,12 +8548,9 @@ jobs:
run: just publish
env:
REGISTRY_TOKEN: ${{ secrets.REGISTRY_TOKEN }}
-```plaintext
-
-**Platform CI (`provisioning-platform/.github/workflows/ci.yml`):**
-
-```yaml
-name: Platform CI
+
+Platform CI (provisioning-platform/.github/workflows/ci.yml):
+name: Platform CI
on: [push, pull_request]
@@ -9030,14 +8580,10 @@ jobs:
run: cargo build --release --target aarch64-unknown-linux-gnu
- name: Publish binaries
run: just publish-binaries
-```plaintext
-
-### Integration Testing (Distribution Repo)
-
-**Distribution CI (`provisioning-distribution/.github/workflows/integration.yml`):**
-
-```yaml
-name: Integration Tests
+
+
+Distribution CI (provisioning-distribution/.github/workflows/integration.yml):
+name: Integration Tests
on:
schedule:
@@ -9061,175 +8607,172 @@ jobs:
- name: Test upgrade path
run: |
nu tests/integration/test-upgrade.nu 3.1.0 3.2.1
-```plaintext
-
----
-
-## File and Directory Structure Comparison
-
-### Monorepo Structure
-
-```plaintext
-provisioning/ (One repo, ~500MB)
+
+
+
+
+provisioning/ (One repo, ~500 MB)
├── core/ (Nushell)
├── platform/ (Rust)
├── extensions/ (Community)
├── workspace/ (Templates)
└── distribution/ (Build)
-```plaintext
-
-### Multi-Repo Structure
-
-```plaintext
-provisioning-core/ (Repo 1, ~50MB)
+
+
+provisioning-core/ (Repo 1, ~50 MB)
├── nulib/
├── cli/
-├── kcl/
+├── schemas/
└── tools/
-provisioning-platform/ (Repo 2, ~150MB with target/)
+provisioning-platform/ (Repo 2, ~150 MB with target/)
├── orchestrator/
├── control-center/
├── mcp-server/
└── Cargo.toml
-provisioning-extensions/ (Repo 3, ~100MB)
+provisioning-extensions/ (Repo 3, ~100 MB)
├── registry/
├── providers/
├── taskservs/
└── clusters/
-provisioning-workspace/ (Repo 4, ~20MB)
+provisioning-workspace/ (Repo 4, ~20 MB)
├── templates/
├── examples/
└── blueprints/
-provisioning-distribution/ (Repo 5, ~30MB)
+provisioning-distribution/ (Repo 5, ~30 MB)
├── release-automation/
├── installers/
├── packaging/
└── registry/
-```plaintext
-
----
-
-## Decision Matrix
-
-| Criterion | Monorepo | Multi-Repo |
-|-----------|----------|------------|
-| **Development Complexity** | Simple | Moderate |
-| **Clone Size** | Large (~500MB) | Small (50-150MB each) |
-| **Cross-Component Changes** | Easy (atomic) | Moderate (coordinated) |
-| **Independent Releases** | Difficult | Easy |
-| **Language-Specific Tooling** | Mixed | Clean |
-| **Community Contributions** | Harder (big repo) | Easier (focused repos) |
-| **Version Management** | Simple (one version) | Complex (matrix) |
-| **CI/CD Complexity** | Simple (one pipeline) | Moderate (multiple) |
-| **Ownership Clarity** | Unclear | Clear |
-| **Extension Ecosystem** | Monolithic | Modular |
-| **Build Time** | Long (build all) | Short (build one) |
-| **Testing Isolation** | Difficult | Easy |
-
----
-
-## Recommended Approach: Multi-Repo
-
-### Why Multi-Repo Wins for This Project
-
-1. **Clear Separation of Concerns**
- - Nushell core vs Rust platform are different domains
- - Different teams can own different repos
- - Different release cadences make sense
-
-2. **Language-Specific Tooling**
- - `provisioning-core`: Nushell-focused, simple testing
- - `provisioning-platform`: Rust workspace, Cargo tooling
- - No mixed tooling confusion
-
-3. **Community Contributions**
- - Extensions repo is easier to contribute to
- - Don't need to clone entire monorepo
- - Clearer contribution guidelines per repo
-
-4. **Independent Versioning**
- - Core can stay stable (3.x for months)
- - Platform can iterate fast (2.x weekly)
- - Extensions have own lifecycles
-
-5. **Build Performance**
- - Only build what changed
- - Faster CI/CD per repo
- - Parallel builds across repos
-
-6. **Extension Ecosystem**
- - Extensions repo becomes marketplace
- - Third-party extensions can live separately
- - Registry becomes discovery mechanism
-
-### Implementation Strategy
-
-**Phase 1: Split Repositories (Week 1-2)**
-
-1. Create 5 new repositories
-2. Extract code from monorepo
-3. Set up CI/CD for each
-4. Create initial packages
-
-**Phase 2: Package Integration (Week 3)**
-
-1. Implement package registry
-2. Create installers
-3. Set up version compatibility matrix
-4. Test cross-repo integration
-
-**Phase 3: Distribution System (Week 4)**
-
-1. Implement bundle system
-2. Create release automation
-3. Set up package hosting
-4. Document release process
-
-**Phase 4: Migration (Week 5)**
-
-1. Migrate existing users
-2. Update documentation
-3. Archive monorepo
-4. Announce new structure
-
----
-
-## Conclusion
-
-**Recommendation: Multi-Repository Architecture with Package-Based Integration**
-
-The multi-repo approach provides:
-
-- ✅ Clear separation between Nushell core and Rust platform
-- ✅ Independent release cycles for different components
-- ✅ Better community contribution experience
-- ✅ Language-specific tooling and workflows
-- ✅ Modular extension ecosystem
-- ✅ Faster builds and CI/CD
-- ✅ Clear ownership boundaries
-
-**Avoid:** Submodules (complexity nightmare)
-
-**Use:** Package-based dependencies with version compatibility matrix
-
-This architecture scales better for your project's growth, supports a community extension ecosystem, and provides professional-grade separation of concerns while maintaining integration through a well-designed package system.
-
----
-
-## Next Steps
-
-1. **Approve multi-repo strategy**
-2. **Create repository split plan**
-3. **Set up GitHub organizations/teams**
-4. **Implement package registry**
-5. **Begin repository extraction**
-
-Would you like me to create a detailed **repository split implementation plan** next?
+
+
+| Criterion | Monorepo | Multi-Repo |
+|-----------|----------|------------|
+| Development Complexity | Simple | Moderate |
+| Clone Size | Large (~500 MB) | Small (50-150 MB each) |
+| Cross-Component Changes | Easy (atomic) | Moderate (coordinated) |
+| Independent Releases | Difficult | Easy |
+| Language-Specific Tooling | Mixed | Clean |
+| Community Contributions | Harder (big repo) | Easier (focused repos) |
+| Version Management | Simple (one version) | Complex (matrix) |
+| CI/CD Complexity | Simple (one pipeline) | Moderate (multiple) |
+| Ownership Clarity | Unclear | Clear |
+| Extension Ecosystem | Monolithic | Modular |
+| Build Time | Long (build all) | Short (build one) |
+| Testing Isolation | Difficult | Easy |
+
+
+
+
+
+
+
+**Clear Separation of Concerns**
+
+- Nushell core vs Rust platform are different domains
+- Different teams can own different repos
+- Different release cadences make sense
+
+**Language-Specific Tooling**
+
+- provisioning-core: Nushell-focused, simple testing
+- provisioning-platform: Rust workspace, Cargo tooling
+- No mixed tooling confusion
+
+**Community Contributions**
+
+- Extensions repo is easier to contribute to
+- No need to clone the entire monorepo
+- Clearer contribution guidelines per repo
+
+**Independent Versioning**
+
+- Core can stay stable (3.x for months)
+- Platform can iterate fast (2.x weekly)
+- Extensions have their own lifecycles
+
+**Build Performance**
+
+- Only build what changed
+- Faster CI/CD per repo
+- Parallel builds across repos
+
+**Extension Ecosystem**
+
+- Extensions repo becomes a marketplace
+- Third-party extensions can live separately
+- Registry becomes the discovery mechanism
+
+
+
+
+**Phase 1: Split Repositories (Weeks 1-2)**
+
+1. Create 5 new repositories
+2. Extract code from monorepo
+3. Set up CI/CD for each
+4. Create initial packages
+
+**Phase 2: Package Integration (Week 3)**
+
+1. Implement package registry
+2. Create installers
+3. Set up version compatibility matrix
+4. Test cross-repo integration
+
+**Phase 3: Distribution System (Week 4)**
+
+1. Implement bundle system
+2. Create release automation
+3. Set up package hosting
+4. Document release process
+
+**Phase 4: Migration (Week 5)**
+
+1. Migrate existing users
+2. Update documentation
+3. Archive monorepo
+4. Announce new structure
+
+
+
+## Conclusion
+
+**Recommendation: Multi-Repository Architecture with Package-Based Integration**
+
+The multi-repo approach provides:
+
+- ✅ Clear separation between Nushell core and Rust platform
+- ✅ Independent release cycles for different components
+- ✅ Better community contribution experience
+- ✅ Language-specific tooling and workflows
+- ✅ Modular extension ecosystem
+- ✅ Faster builds and CI/CD
+- ✅ Clear ownership boundaries
+
+**Avoid:** Submodules (a complexity nightmare)
+
+**Use:** Package-based dependencies with a version compatibility matrix
+
+This architecture scales better for your project's growth, supports a community extension ecosystem, and provides professional-grade separation of concerns while maintaining integration through a well-designed package system.
+
+## Next Steps
+
+1. **Approve multi-repo strategy**
+2. **Create repository split plan**
+3. **Set up GitHub organizations/teams**
+4. **Implement package registry**
+5. **Begin repository extraction**
+
+Would you like me to create a detailed **repository split implementation plan** next?
Date : 2025-10-07
Status : ACTIVE DOCUMENTATION
@@ -9242,52 +8785,44 @@ Would you like me to create a detailed **repository split implementation plan**
url = "memory" # In-memory backend
namespace = "control_center"
database = "main"
-```plaintext
-
-**Storage**: In-memory (data persists during process lifetime)
-
-**Production Alternative**: Switch to remote WebSocket connection for persistent storage:
-
-```toml
-[database]
+
+**Storage**: In-memory (data persists during the process lifetime)
+
+**Production Alternative**: Switch to a remote WebSocket connection for persistent storage:
+[database]
url = "ws://localhost:8000"
namespace = "control_center"
database = "main"
username = "root"
password = "secret"
-```plaintext
-
-### Why SurrealDB kv-mem?
-
-| Feature | SurrealDB kv-mem | RocksDB | PostgreSQL |
-|---------|------------------|---------|------------|
-| **Deployment** | Embedded (no server) | Embedded | Server only |
-| **Build Deps** | None | libclang, bzip2 | Many |
-| **Docker** | Simple | Complex | External service |
-| **Performance** | Very fast (memory) | Very fast (disk) | Network latency |
-| **Use Case** | Dev/test, graphs | Production K/V | Relational data |
-| **GraphQL** | Built-in | None | External |
-
-**Control-Center choice**: SurrealDB kv-mem for **zero-dependency embedded storage**, perfect for:
-
-- Policy engine state
-- Session management
-- Configuration cache
-- Audit logs
-- User credentials
-- Graph-based policy relationships
-
-### Additional Database Support
-
-Control-Center also supports (via Cargo.toml dependencies):
-
-1. **SurrealDB (WebSocket)** - For production persistent storage
-
- ```toml
- surrealdb = { version = "2.3", features = ["kv-mem", "protocol-ws", "protocol-http"] }
+
+### Why SurrealDB kv-mem?
+
+| Feature | SurrealDB kv-mem | RocksDB | PostgreSQL |
+|---------|------------------|---------|------------|
+| **Deployment** | Embedded (no server) | Embedded | Server only |
+| **Build Deps** | None | libclang, bzip2 | Many |
+| **Docker** | Simple | Complex | External service |
+| **Performance** | Very fast (memory) | Very fast (disk) | Network latency |
+| **Use Case** | Dev/test, graphs | Production K/V | Relational data |
+| **GraphQL** | Built-in | None | External |
+
+
+**Control-Center choice**: SurrealDB kv-mem for **zero-dependency embedded storage**, perfect for:
+
+Policy engine state
+Session management
+Configuration cache
+Audit logs
+User credentials
+Graph-based policy relationships
+
+
+Control-Center also supports (via Cargo.toml dependencies):
+SurrealDB (WebSocket) - For production persistent storage
+surrealdb = { version = "2.3", features = ["kv-mem", "protocol-ws", "protocol-http"] }
+
+
+
SQLx - For SQL database backends (optional)
sqlx = { workspace = true }
@@ -9301,20 +8836,13 @@ Control-Center also supports (via Cargo.toml dependencies):
[orchestrator.storage]
type = "filesystem" # Default
backend_path = "{{orchestrator.paths.data_dir}}/queue.rkvs"
-```plaintext
-
-**Resolved Path**:
-
-```plaintext
-{{workspace.path}}/.orchestrator/data/queue.rkvs
-```plaintext
-
-### Optional: SurrealDB Backend
-
-For production deployments, switch to SurrealDB:
-
-```toml
-[orchestrator.storage]
+
+**Resolved Path**:
+{{workspace.path}}/.orchestrator/data/queue.rkvs
+
+
+For production deployments, switch to SurrealDB:
+[orchestrator.storage]
type = "surrealdb-server" # or surrealdb-embedded
[orchestrator.storage.surrealdb]
@@ -9323,97 +8851,72 @@ namespace = "orchestrator"
database = "tasks"
username = "root"
password = "secret"
-```plaintext
-
----
-
-## Configuration Loading Architecture
-
-### Hierarchical Configuration System
-
-All services load configuration in this order (priority: low → high):
-
-```plaintext
-1. System Defaults provisioning/config/config.defaults.toml
+
+
+
+
+All services load configuration in this order (priority: low → high):
+1. System Defaults provisioning/config/config.defaults.toml
2. Service Defaults provisioning/platform/{service}/config.defaults.toml
3. Workspace Config workspace/{name}/config/provisioning.yaml
4. User Config ~/Library/Application Support/provisioning/user_config.yaml
5. Environment Variables PROVISIONING_*, CONTROL_CENTER_*, ORCHESTRATOR_*
6. Runtime Overrides --config flag or API updates
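The precedence chain above reduces to "higher layer wins". A minimal shell sketch of the rule for a single value, where an environment variable (layer 5) overrides a file-level default (layers 1-4) — the variable name and default are illustrative, not the actual loader implementation:

```shell
# Resolve one config value: prefer a set environment variable over
# the default merged from the config files.
get_config() {
  # $1 = env var name, $2 = default from config files
  env_value=$(printenv "$1" || true)
  if [ -n "$env_value" ]; then
    echo "$env_value"
  else
    echo "$2"
  fi
}

get_config CONTROL_CENTER_SERVER_PORT 8080
```

With `CONTROL_CENTER_SERVER_PORT` unset this prints the file default; once exported, the environment value wins, matching the priority order listed above.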
-```plaintext
-
-### Variable Interpolation
-
-Configs support dynamic variable interpolation:
-
-```toml
-[paths]
+
+
+Configs support dynamic variable interpolation:
+[paths]
base = "/Users/Akasha/project-provisioning/provisioning"
data_dir = "{{paths.base}}/data" # Resolves to: /Users/.../data
[database]
url = "rocksdb://{{paths.data_dir}}/control-center.db"
# Resolves to: rocksdb:///Users/.../data/control-center.db
-```plaintext
-
-**Supported Variables**:
-
-- `{{paths.*}}` - Path variables from config
-- `{{workspace.path}}` - Current workspace path
-- `{{env.HOME}}` - Environment variables
-- `{{now.date}}` - Current date/time
-- `{{git.branch}}` - Git branch name
-
-### Service-Specific Config Files
-
-Each platform service has its own `config.defaults.toml`:
-
-| Service | Config File | Purpose |
-|---------|-------------|---------|
-| **Orchestrator** | `provisioning/platform/orchestrator/config.defaults.toml` | Workflow management, queue settings |
-| **Control-Center** | `provisioning/platform/control-center/config.defaults.toml` | Web UI, auth, database |
-| **MCP Server** | `provisioning/platform/mcp-server/config.defaults.toml` | AI integration settings |
-| **KMS** | `provisioning/core/services/kms/config.defaults.toml` | Key management |
-
-### Central Configuration
-
-**Master config**: `provisioning/config/config.defaults.toml`
-
-Contains:
-
-- Global paths
-- Provider configurations
-- Cache settings
-- Debug flags
-- Environment-specific overrides
-
-### Workspace-Aware Paths
-
-All services use workspace-aware paths:
-
-**Orchestrator**:
-
-```toml
-[orchestrator.paths]
+
+**Supported Variables**:
+
+- `{{paths.*}}` - Path variables from config
+- `{{workspace.path}}` - Current workspace path
+- `{{env.HOME}}` - Environment variables
+- `{{now.date}}` - Current date/time
+- `{{git.branch}}` - Git branch name
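A single-level illustration of how one `{{var}}` placeholder resolves (the real loader also handles nested references; `interpolate` is a hypothetical helper, not part of the system):

```shell
# Replace every occurrence of one {{name}} placeholder with its value.
interpolate() {
  # $1 = template string, $2 = variable name, $3 = value
  printf '%s' "$1" | sed "s|{{$2}}|$3|g"
}

interpolate '{{paths.base}}/data' 'paths.base' '/opt/provisioning'
# → /opt/provisioning/data
```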
+
+
+Each platform service has its own `config.defaults.toml`:
+
+| Service | Config File | Purpose |
+|---------|-------------|---------|
+| **Orchestrator** | `provisioning/platform/orchestrator/config.defaults.toml` | Workflow management, queue settings |
+| **Control-Center** | `provisioning/platform/control-center/config.defaults.toml` | Web UI, auth, database |
+| **MCP Server** | `provisioning/platform/mcp-server/config.defaults.toml` | AI integration settings |
+| **KMS** | `provisioning/core/services/kms/config.defaults.toml` | Key management |
+
+
+
+**Master config**: `provisioning/config/config.defaults.toml`
+Contains:
+
+Global paths
+Provider configurations
+Cache settings
+Debug flags
+Environment-specific overrides
+
+
+All services use workspace-aware paths:
+**Orchestrator**:
+[orchestrator.paths]
base = "{{workspace.path}}/.orchestrator"
data_dir = "{{orchestrator.paths.base}}/data"
logs_dir = "{{orchestrator.paths.base}}/logs"
queue_dir = "{{orchestrator.paths.data_dir}}/queue"
-```plaintext
-
-**Control-Center**:
-
-```toml
-[paths]
+
+**Control-Center**:
+[paths]
base = "{{workspace.path}}/.control-center"
data_dir = "{{paths.base}}/data"
logs_dir = "{{paths.base}}/logs"
-```plaintext
-
-**Result** (workspace: `workspace-librecloud`):
-
-```plaintext
-workspace-librecloud/
+
+**Result** (workspace: `workspace-librecloud`):
+workspace-librecloud/
├── .orchestrator/
│ ├── data/
│ │ └── queue.rkvs
@@ -9422,18 +8925,12 @@ workspace-librecloud/
├── data/
│ └── control-center.db
└── logs/
-```plaintext
-
----
-
-## Environment Variable Overrides
-
-Any config value can be overridden via environment variables:
-
-### Control-Center
-
-```bash
-# Override server port
+
+
+
+Any config value can be overridden via environment variables:
+
+# Override server port
export CONTROL_CENTER_SERVER_PORT=8081
# Override database URL
@@ -9441,12 +8938,9 @@ export CONTROL_CENTER_DATABASE_URL="rocksdb:///custom/path/db"
# Override JWT secret
export CONTROL_CENTER_JWT_ISSUER="my-issuer"
-```plaintext
-
-### Orchestrator
-
-```bash
-# Override orchestrator port
+
+
+# Override orchestrator port
export ORCHESTRATOR_SERVER_PORT=8080
# Override storage backend
@@ -9455,39 +8949,27 @@ export ORCHESTRATOR_STORAGE_SURREALDB_URL="ws://localhost:8000"
# Override concurrency
export ORCHESTRATOR_QUEUE_MAX_CONCURRENT_TASKS=10
-```plaintext
-
-### Naming Convention
-
-```plaintext
-{SERVICE}_{SECTION}_{KEY} = value
-```plaintext
-
-**Examples**:
-
-- `CONTROL_CENTER_SERVER_PORT` → `[server] port`
-- `ORCHESTRATOR_QUEUE_MAX_CONCURRENT_TASKS` → `[queue] max_concurrent_tasks`
-- `PROVISIONING_DEBUG_ENABLED` → `[debug] enabled`
-
----
-
-## Docker vs Native Configuration
-
-### Docker Deployment
-
-**Container paths** (resolved inside container):
-
-```toml
-[paths]
+
+
+{SERVICE}_{SECTION}_{KEY} = value
+
+**Examples**:
+
+- `CONTROL_CENTER_SERVER_PORT` → `[server] port`
+- `ORCHESTRATOR_QUEUE_MAX_CONCURRENT_TASKS` → `[queue] max_concurrent_tasks`
+- `PROVISIONING_DEBUG_ENABLED` → `[debug] enabled`
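The convention can be decoded mechanically. A hedged sketch, assuming the service prefix is known and the section is the first underscore-separated token after it (multi-token sections would need a lookup table):

```shell
# Map a {SERVICE}_{SECTION}_{KEY} env var name to its "[section] key" form.
env_to_key() {
  # $1 = env var name, $2 = service prefix (e.g. CONTROL_CENTER)
  rest=${1#${2}_}
  section=${rest%%_*}
  key=${rest#*_}
  printf '[%s] %s\n' "$(echo "$section" | tr '[:upper:]' '[:lower:]')" \
                     "$(echo "$key" | tr '[:upper:]' '[:lower:]')"
}

env_to_key CONTROL_CENTER_SERVER_PORT CONTROL_CENTER
# → [server] port
```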
+
+
+
+
+Container paths (resolved inside container):
+[paths]
base = "/app/provisioning"
data_dir = "/data" # Mounted volume
logs_dir = "/var/log/orchestrator" # Mounted volume
-```plaintext
-
-**Docker Compose volumes**:
-
-```yaml
-services:
+
+**Docker Compose volumes**:
+services:
orchestrator:
volumes:
- orchestrator-data:/data
@@ -9501,27 +8983,18 @@ volumes:
orchestrator-data:
orchestrator-logs:
control-center-data:
-```plaintext
-
-### Native Deployment
-
-**Host paths** (macOS/Linux):
-
-```toml
-[paths]
+
+
+Host paths (macOS/Linux):
+[paths]
base = "/Users/Akasha/project-provisioning/provisioning"
data_dir = "{{workspace.path}}/.orchestrator/data"
logs_dir = "{{workspace.path}}/.orchestrator/logs"
-```plaintext
-
----
-
-## Configuration Validation
-
-Check current configuration:
-
-```bash
-# Show effective configuration
+
+
+
+Check current configuration:
+# Show effective configuration
provisioning env
# Show all config and environment
@@ -9532,26 +9005,18 @@ provisioning validate config
# Show service-specific config
PROVISIONING_DEBUG=true ./orchestrator --show-config
-```plaintext
-
----
-
-## KMS Database
-
-**Cosmian KMS** uses its own database (when deployed):
-
-```bash
-# KMS database location (Docker)
+
+
+
+Cosmian KMS uses its own database (when deployed):
+# KMS database location (Docker)
/data/kms.db # SQLite database inside KMS container
# KMS database location (Native)
{{workspace.path}}/.kms/data/kms.db
-```plaintext
-
-KMS also integrates with Control-Center's KMS hybrid backend (local + remote):
-
-```toml
-[kms]
+
+KMS also integrates with Control-Center’s KMS hybrid backend (local + remote):
+[kms]
mode = "hybrid" # local, remote, or hybrid
[kms.local]
@@ -9559,49 +9024,45 @@ database_path = "{{paths.data_dir}}/kms.db"
[kms.remote]
server_url = "http://localhost:9998" # Cosmian KMS server
-```plaintext
-
----
-
-## Summary
-
-### Control-Center Database
-
-- **Type**: RocksDB (embedded)
-- **Location**: `{{workspace.path}}/.control-center/data/control-center.db`
-- **No server required**: Embedded in control-center process
-
-### Orchestrator Database
-
-- **Type**: Filesystem (default) or SurrealDB (production)
-- **Location**: `{{workspace.path}}/.orchestrator/data/queue.rkvs`
-- **Optional server**: SurrealDB for production
-
-### Configuration Loading
-
-1. System defaults (provisioning/config/)
-2. Service defaults (platform/{service}/)
-3. Workspace config
-4. User config
-5. Environment variables
-6. Runtime overrides
-
-### Best Practices
-
-- ✅ Use workspace-aware paths
-- ✅ Override via environment variables in Docker
-- ✅ Keep secrets in KMS, not config files
-- ✅ Use RocksDB for single-node deployments
-- ✅ Use SurrealDB for distributed/production deployments
-
----
-
-**Related Documentation**:
-
-- [Configuration System](../infrastructure/configuration-guide.md)
-- [KMS Architecture](../security/kms-architecture.md)
-- [Workspace Switching](../infrastructure/workspace-switching-guide.md)
+
+
+
+
+## Summary
+
+### Control-Center Database
+
+- **Type**: RocksDB (embedded)
+- **Location**: `{{workspace.path}}/.control-center/data/control-center.db`
+- **No server required**: Embedded in the control-center process
+
+### Orchestrator Database
+
+- **Type**: Filesystem (default) or SurrealDB (production)
+- **Location**: `{{workspace.path}}/.orchestrator/data/queue.rkvs`
+- **Optional server**: SurrealDB for production
+
+
+
+### Configuration Loading
+
+1. System defaults (`provisioning/config/`)
+2. Service defaults (`platform/{service}/`)
+3. Workspace config
+4. User config
+5. Environment variables
+6. Runtime overrides
+
+
+
+### Best Practices
+
+- ✅ Use workspace-aware paths
+- ✅ Override via environment variables in Docker
+- ✅ Keep secrets in KMS, not in config files
+- ✅ Use RocksDB for single-node deployments
+- ✅ Use SurrealDB for distributed/production deployments
+
+
+**Related Documentation**:
+
+- [Configuration System](../infrastructure/configuration-guide.md)
+- [KMS Architecture](../security/kms-architecture.md)
+- [Workspace Switching](../infrastructure/workspace-switching-guide.md)
+
Date : 2025-11-23
Version : 1.0.0
@@ -9615,7 +9076,7 @@ server_url = "http://localhost:9998" # Cosmian KMS server
GitOps Events - Event-driven deployments from Git
-
+
┌─────────────────────────────────────────────┐
│ Provisioning CLI (provisioning/core/cli/) │
@@ -9651,24 +9112,16 @@ server_url = "http://localhost:9998" # Cosmian KMS server
│ ✅ gitops: Event-driven automation │
│ ✅ provctl-machines: SSH advanced │
└─────────────────────────────────────────────┘
-```plaintext
-
----
-
-## Components
-
-### 1. Runtime Abstraction
-
-**Location**: `provisioning/platform/integrations/provisioning-bridge/src/runtime.rs`
-**Nushell**: `provisioning/core/nulib/integrations/runtime.nu`
-**KCL Schema**: `provisioning/kcl/integrations/runtime.k`
-
-**Purpose**: Unified interface for Docker, Podman, OrbStack, Colima, nerdctl
-
-**Key Types**:
-
-```rust
-pub enum ContainerRuntime {
+
+
+
+
+## Components
+
+### 1. Runtime Abstraction
+
+**Location**: `provisioning/platform/integrations/provisioning-bridge/src/runtime.rs`
+**Nushell**: `provisioning/core/nulib/integrations/runtime.nu`
+**Nickel Schema**: `provisioning/schemas/integrations/runtime.ncl`
+
+**Purpose**: Unified interface for Docker, Podman, OrbStack, Colima, nerdctl
+
+**Key Types**:
+pub enum ContainerRuntime {
Docker,
Podman,
OrbStack,
@@ -9677,81 +9130,59 @@ pub enum ContainerRuntime {
}
pub struct RuntimeDetector { ... }
-pub struct ComposeAdapter { ... }
-```plaintext
-
-**Nushell Functions**:
-
-```nushell
-runtime-detect # Auto-detect available runtime
+pub struct ComposeAdapter { ... }
+**Nushell Functions**:
+runtime-detect # Auto-detect available runtime
runtime-exec # Execute command in detected runtime
runtime-compose # Adapt docker-compose for runtime
runtime-info # Get runtime details
runtime-list # List all available runtimes
-```plaintext
-
-**Benefits**:
-
-- ✅ Eliminates Docker hardcoding
-- ✅ Platform-aware detection
-- ✅ Automatic runtime selection
-- ✅ Docker Compose adaptation
-
----
-
-### 2. SSH Advanced
-
-**Location**: `provisioning/platform/integrations/provisioning-bridge/src/ssh.rs`
-**Nushell**: `provisioning/core/nulib/integrations/ssh_advanced.nu`
-**KCL Schema**: `provisioning/kcl/integrations/ssh_advanced.k`
-
-**Purpose**: Advanced SSH operations with pooling, circuit breaker, retry strategies
-
-**Key Types**:
-
-```rust
-pub struct SshConfig { ... }
+
+**Benefits**:
+
+- ✅ Eliminates Docker hardcoding
+- ✅ Platform-aware detection
+- ✅ Automatic runtime selection
+- ✅ Docker Compose adaptation
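Conceptually, `runtime-detect` probes known CLIs in a preference order. A simplified shell sketch — the candidate list and ordering here are assumptions, not the bridge's actual policy:

```shell
# Probe container runtime CLIs in order; print the first one found.
detect_runtime() {
  for rt in docker podman nerdctl; do
    if command -v "$rt" >/dev/null 2>&1; then
      echo "$rt"
      return 0
    fi
  done
  echo "none"
  return 1
}

detect_runtime || true
```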
+
+
+
+### 2. SSH Advanced
+
+**Location**: `provisioning/platform/integrations/provisioning-bridge/src/ssh.rs`
+**Nushell**: `provisioning/core/nulib/integrations/ssh_advanced.nu`
+**Nickel Schema**: `provisioning/schemas/integrations/ssh_advanced.ncl`
+
+**Purpose**: Advanced SSH operations with pooling, circuit breaker, and retry strategies
+
+**Key Types**:
+pub struct SshConfig { ... }
pub struct SshPool { ... }
pub enum DeploymentStrategy {
Rolling,
BlueGreen,
Canary,
-}
-```plaintext
-
-**Nushell Functions**:
-
-```nushell
-ssh-pool-connect # Create SSH pool connection
+}
+**Nushell Functions**:
+ssh-pool-connect # Create SSH pool connection
ssh-pool-exec # Execute on SSH pool
ssh-pool-status # Check pool status
ssh-deployment-strategies # List strategies
ssh-retry-config # Configure retry strategy
ssh-circuit-breaker-status # Check circuit breaker
-```plaintext
-
-**Features**:
-
-- ✅ Connection pooling (90% faster)
-- ✅ Circuit breaker for fault isolation
-- ✅ Three deployment strategies (rolling, blue-green, canary)
-- ✅ Retry strategies (exponential, linear, fibonacci)
-- ✅ Health check integration
-
----
-
-### 3. Backup System
-
-**Location**: `provisioning/platform/integrations/provisioning-bridge/src/backup.rs`
-**Nushell**: `provisioning/core/nulib/integrations/backup.nu`
-**KCL Schema**: `provisioning/kcl/integrations/backup.k`
-
-**Purpose**: Multi-backend backup with retention policies
-
-**Key Types**:
-
-```rust
-pub enum BackupBackend {
+
+**Features**:
+
+- ✅ Connection pooling (90% faster)
+- ✅ Circuit breaker for fault isolation
+- ✅ Three deployment strategies (rolling, blue-green, canary)
+- ✅ Retry strategies (exponential, linear, Fibonacci)
+- ✅ Health check integration
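The exponential retry strategy, for instance, doubles the delay between attempts. An illustrative sketch (initial delay and growth factor are assumptions, not the bridge's exact parameters):

```shell
# Retry a command with exponentially growing delays between attempts.
retry_exponential() {
  # $1 = max attempts; remaining args = command to run
  max=$1; shift
  attempt=1
  delay=1
  while [ "$attempt" -le "$max" ]; do
    if "$@"; then
      return 0
    fi
    if [ "$attempt" -lt "$max" ]; then
      sleep "$delay"
    fi
    delay=$((delay * 2))
    attempt=$((attempt + 1))
  done
  return 1
}
```

Usage: `retry_exponential 5 ssh server01 systemctl is-active provisioning` retries up to five times with 1 s, 2 s, 4 s, 8 s pauses.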
+
+
+
+### 3. Backup System
+
+**Location**: `provisioning/platform/integrations/provisioning-bridge/src/backup.rs`
+**Nushell**: `provisioning/core/nulib/integrations/backup.nu`
+**Nickel Schema**: `provisioning/schemas/integrations/backup.ncl`
+
+**Purpose**: Multi-backend backup with retention policies
+
+**Key Types**:
+pub enum BackupBackend {
Restic,
Borg,
Tar,
@@ -9761,87 +9192,65 @@ pub enum BackupBackend {
pub struct BackupJob { ... }
pub struct RetentionPolicy { ... }
-pub struct BackupManager { ... }
-```plaintext
-
-**Nushell Functions**:
-
-```nushell
-backup-create # Create backup job
+pub struct BackupManager { ... }
+**Nushell Functions**:
+backup-create # Create backup job
backup-restore # Restore from snapshot
backup-list # List snapshots
backup-schedule # Schedule regular backups
backup-retention # Configure retention policy
backup-status # Check backup status
-```plaintext
-
-**Features**:
-
-- ✅ Multiple backends (Restic, Borg, Tar, Rsync, CPIO)
-- ✅ Flexible repositories (local, S3, SFTP, REST, B2)
-- ✅ Retention policies (daily/weekly/monthly/yearly)
-- ✅ Pre/post backup hooks
-- ✅ Automatic scheduling
-- ✅ Compression support
-
----
-
-### 4. GitOps Events
-
-**Location**: `provisioning/platform/integrations/provisioning-bridge/src/gitops.rs`
-**Nushell**: `provisioning/core/nulib/integrations/gitops.nu`
-**KCL Schema**: `provisioning/kcl/integrations/gitops.k`
-
-**Purpose**: Event-driven deployments from Git
-
-**Key Types**:
-
-```rust
-pub enum GitProvider {
+
+**Features**:
+
+- ✅ Multiple backends (Restic, Borg, Tar, Rsync, CPIO)
+- ✅ Flexible repositories (local, S3, SFTP, REST, B2)
+- ✅ Retention policies (daily/weekly/monthly/yearly)
+- ✅ Pre/post backup hooks
+- ✅ Automatic scheduling
+- ✅ Compression support
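A retention policy ultimately decides which snapshots to prune. A toy "keep last N" rule as a sketch — real policies combine daily/weekly/monthly/yearly buckets, and the function name is hypothetical:

```shell
# Print the snapshots to prune, given an oldest-first list of snapshot
# names and a count of newest entries to keep.
snapshots_to_prune() {
  keep=$1; shift
  total=$#
  prune=$((total - keep))
  i=0
  for snap in "$@"; do
    if [ "$i" -lt "$prune" ]; then
      echo "$snap"
    fi
    i=$((i + 1))
  done
}

snapshots_to_prune 2 snap-001 snap-002 snap-003 snap-004
# → snap-001
#   snap-002
```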
+
+
+
+### 4. GitOps Events
+
+**Location**: `provisioning/platform/integrations/provisioning-bridge/src/gitops.rs`
+**Nushell**: `provisioning/core/nulib/integrations/gitops.nu`
+**Nickel Schema**: `provisioning/schemas/integrations/gitops.ncl`
+
+**Purpose**: Event-driven deployments from Git
+
+**Key Types**:
+pub enum GitProvider {
GitHub,
GitLab,
Gitea,
}
pub struct GitOpsRule { ... }
-pub struct GitOpsOrchestrator { ... }
-```plaintext
-
-**Nushell Functions**:
-
-```nushell
-gitops-rules # Load rules from config
+pub struct GitOpsOrchestrator { ... }
+**Nushell Functions**:
+gitops-rules # Load rules from config
gitops-watch # Watch for Git events
gitops-trigger # Manually trigger deployment
gitops-event-types # List supported events
gitops-rule-config # Configure GitOps rule
gitops-deployments # List active deployments
gitops-status # Get GitOps status
-```plaintext
-
-**Features**:
-
-- ✅ Event-driven automation (push, PR, webhook, scheduled)
-- ✅ Multi-provider support (GitHub, GitLab, Gitea)
-- ✅ Three deployment strategies
-- ✅ Manual approval workflow
-- ✅ Health check triggers
-- ✅ Audit logging
-
----
-
-### 5. Service Management
-
-**Location**: `provisioning/platform/integrations/provisioning-bridge/src/service.rs`
-**Nushell**: `provisioning/core/nulib/integrations/service.nu`
-**KCL Schema**: `provisioning/kcl/integrations/service.k`
-
-**Purpose**: Cross-platform service management (systemd, launchd, runit, OpenRC)
-
-**Nushell Functions**:
-
-```nushell
-service-install # Install service
+
+**Features**:
+
+- ✅ Event-driven automation (push, PR, webhook, scheduled)
+- ✅ Multi-provider support (GitHub, GitLab, Gitea)
+- ✅ Three deployment strategies
+- ✅ Manual approval workflow
+- ✅ Health check triggers
+- ✅ Audit logging
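Rule evaluation boils down to matching incoming event fields against the rule before a deployment fires. A minimal sketch with an illustrative field set (real rules carry more context, such as paths and approval state):

```shell
# A rule fires only when both the event type and the branch match.
rule_matches() {
  # $1 = rule event type, $2 = rule branch
  # $3 = incoming event type, $4 = incoming branch
  [ "$1" = "$3" ] && [ "$2" = "$4" ]
}
```

For example, a rule for `push` events on `main` ignores pull-request events on the same branch.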
+
+
+
+### 5. Service Management
+
+**Location**: `provisioning/platform/integrations/provisioning-bridge/src/service.rs`
+**Nushell**: `provisioning/core/nulib/integrations/service.nu`
+**Nickel Schema**: `provisioning/schemas/integrations/service.ncl`
+
+**Purpose**: Cross-platform service management (systemd, launchd, runit, OpenRC)
+
+**Nushell Functions**:
+service-install # Install service
service-start # Start service
service-stop # Stop service
service-restart # Restart service
@@ -9849,56 +9258,49 @@ service-status # Get service status
service-list # List all services
service-restart-policy # Configure restart policy
service-detect-init # Detect init system
-```plaintext
-
-**Features**:
-
-- ✅ Multi-platform support (systemd, launchd, runit, OpenRC)
-- ✅ Service file generation
-- ✅ Restart policies (always, on-failure, no)
-- ✅ Health checks
-- ✅ Logging configuration
-- ✅ Metrics collection
-
----
-
-## Code Quality Standards
-
-All implementations follow project standards:
-
-### Rust (`provisioning-bridge`)
-
-- ✅ **Zero unsafe code** - `#![forbid(unsafe_code)]`
-- ✅ **Idiomatic error handling** - `Result<T, BridgeError>` pattern
-- ✅ **Comprehensive docs** - Full rustdoc with examples
-- ✅ **Tests** - Unit and integration tests for each module
-- ✅ **No unwrap()** - Only in tests with comments
-- ✅ **No clippy warnings** - All warnings suppressed
-
-### Nushell
-
-- ✅ **17 Nushell rules** - See Nushell Development Guide
-- ✅ **Explicit types** - Colon notation: `[param: type]: return_type`
-- ✅ **Early return** - Validate inputs immediately
-- ✅ **Single purpose** - Each function does one thing
-- ✅ **Atomic operations** - Succeed or fail completely
-- ✅ **Pure functions** - No hidden side effects
-
-### KCL
-
-- ✅ **Schema-first** - All configs have schemas
-- ✅ **Explicit types** - Full type annotations
-- ✅ **Direct imports** - No re-exports
-- ✅ **Immutability-first** - Mutable only when needed
-- ✅ **Validation** - Check blocks for constraints
-- ✅ **Security defaults** - TLS enabled, secrets referenced
-
----
-
-## File Structure
-
-```plaintext
-provisioning/
+
+**Features**:
+
+- ✅ Multi-platform support (systemd, launchd, runit, OpenRC)
+- ✅ Service file generation
+- ✅ Restart policies (always, on-failure, no)
+- ✅ Health checks
+- ✅ Logging configuration
+- ✅ Metrics collection
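Cross-platform support starts with detecting the init system, as `service-detect-init` does. An illustrative probe — the markers and their order are assumptions about common setups, not the bridge's exact logic:

```shell
# Guess the active init system from well-known markers.
detect_init() {
  if [ -d /run/systemd/system ]; then
    echo systemd        # systemd creates this directory at boot
  elif command -v launchctl >/dev/null 2>&1; then
    echo launchd        # macOS
  elif command -v rc-service >/dev/null 2>&1; then
    echo openrc
  elif command -v sv >/dev/null 2>&1; then
    echo runit
  else
    echo unknown
  fi
}

detect_init
```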
+
+
+
+## Code Quality Standards
+
+All implementations follow project standards:
+
+### Rust (`provisioning-bridge`)
+
+- ✅ **Zero unsafe code** - `#![forbid(unsafe_code)]`
+- ✅ **Idiomatic error handling** - `Result<T, BridgeError>` pattern
+- ✅ **Comprehensive docs** - Full rustdoc with examples
+- ✅ **Tests** - Unit and integration tests for each module
+- ✅ **No unwrap()** - Only in tests, with comments
+- ✅ **No clippy warnings** - All lints resolved
+
+### Nushell
+
+- ✅ **17 Nushell rules** - See Nushell Development Guide
+- ✅ **Explicit types** - Colon notation: `[param: type]: return_type`
+- ✅ **Early return** - Validate inputs immediately
+- ✅ **Single purpose** - Each function does one thing
+- ✅ **Atomic operations** - Succeed or fail completely
+- ✅ **Pure functions** - No hidden side effects
+
+### Nickel
+
+- ✅ **Schema-first** - All configs have schemas
+- ✅ **Explicit types** - Full type annotations
+- ✅ **Direct imports** - No re-exports
+- ✅ **Immutability-first** - Mutable only when needed
+- ✅ **Lazy evaluation** - Efficient computation
+- ✅ **Security defaults** - TLS enabled, secrets referenced
+
+
+
+provisioning/
├── platform/integrations/
│ └── provisioning-bridge/ # Rust bridge crate
│ ├── Cargo.toml
@@ -9920,23 +9322,18 @@ provisioning/
│ ├── gitops.nu # GitOps functions
│ └── service.nu # Service functions
│
-└── kcl/integrations/ # KCL schemas
- ├── main.k # Main integration schema
- ├── runtime.k # Runtime schema
- ├── ssh_advanced.k # SSH schema
- ├── backup.k # Backup schema
- ├── gitops.k # GitOps schema
- └── service.k # Service schema
-```plaintext
-
----
-
-## Usage
-
-### Runtime Abstraction
-
-```nushell
-# Auto-detect available runtime
+└── schemas/integrations/ # Nickel schemas
+ ├── main.ncl # Main integration schema
+ ├── runtime.ncl # Runtime schema
+ ├── ssh_advanced.ncl # SSH schema
+ ├── backup.ncl # Backup schema
+ ├── gitops.ncl # GitOps schema
+ └── service.ncl # Service schema
+
+
+
+
+# Auto-detect available runtime
let runtime = (runtime-detect)
# Execute command in detected runtime
@@ -9944,12 +9341,9 @@ runtime-exec "docker ps" --check
# Adapt compose file
let compose_cmd = (runtime-compose "./docker-compose.yml")
-```plaintext
-
-### SSH Advanced
-
-```nushell
-# Connect to SSH pool
+
+
+# Connect to SSH pool
let pool = (ssh-pool-connect "server01.example.com" "root" --port 22)
# Execute distributed command
@@ -9957,12 +9351,9 @@ let results = (ssh-pool-exec $hosts "systemctl status provisioning" --strategy p
# Check circuit breaker
ssh-circuit-breaker-status
-```plaintext
-
-### Backup System
-
-```nushell
-# Schedule regular backups
+
+
+# Schedule regular backups
backup-schedule "daily-app-backup" "0 2 * * *" \
--paths ["/opt/app" "/var/lib/app"] \
--backend "restic"
@@ -9974,12 +9365,9 @@ backup-create "full-backup" ["/home" "/opt"] \
# Restore from snapshot
backup-restore "snapshot-001" --restore_path "."
-```plaintext
-
-### GitOps Events
-
-```nushell
-# Load GitOps rules
+
+
+# Load GitOps rules
let rules = (gitops-rules "./gitops-rules.yaml")
# Watch for Git events
@@ -9987,12 +9375,9 @@ gitops-watch --provider "github" --webhook-port 8080
# Manually trigger deployment
gitops-trigger "deploy-app" --environment "prod"
-```plaintext
-
-### Service Management
-
-```nushell
-# Install service
+
+
+# Install service
service-install "my-app" "/usr/local/bin/my-app" \
--user "appuser" \
--working-dir "/opt/myapp"
@@ -10005,121 +9390,90 @@ service-status "my-app"
# Set restart policy
service-restart-policy "my-app" --policy "on-failure" --delay-secs 5
-```plaintext
-
----
-
-## Integration Points
-
-### CLI Commands
-
-Existing `provisioning` CLI will gain new command tree:
-
-```bash
-provisioning runtime detect|exec|compose|info|list
+
+
+
+
+The existing `provisioning` CLI will gain a new command tree:
+provisioning runtime detect|exec|compose|info|list
provisioning ssh pool connect|exec|status|strategies
provisioning backup create|restore|list|schedule|retention|status
provisioning gitops rules|watch|trigger|events|config|deployments|status
provisioning service install|start|stop|restart|status|list|policy|detect-init
-```plaintext
-
-### Configuration
-
-All integrations use KCL schemas from `provisioning/kcl/integrations/`:
-
-```kcl
-import provisioning.integrations as integrations
-
-config: integrations.IntegrationConfig = {
- runtime = { ... }
- ssh = { ... }
- backup = { ... }
- gitops = { ... }
- service = { ... }
+
+
+All integrations use Nickel schemas from `provisioning/schemas/integrations/`:
+let { IntegrationConfig } = import "provisioning/integrations.ncl" in
+{
+ runtime = { ... },
+ ssh = { ... },
+ backup = { ... },
+ gitops = { ... },
+ service = { ... },
}
-```plaintext
-
-### Plugins
-
-Nushell plugins can be created for performance-critical operations:
-
-```bash
-provisioning plugin list
+
+
+Nushell plugins can be created for performance-critical operations:
+provisioning plugin list
# [installed]
# nu_plugin_runtime
# nu_plugin_ssh_advanced
# nu_plugin_backup
# nu_plugin_gitops
-```plaintext
-
----
-
-## Testing
-
-### Rust Tests
-
-```bash
-cd provisioning/platform/integrations/provisioning-bridge
+
+
+
+
+cd provisioning/platform/integrations/provisioning-bridge
cargo test --all
cargo test -p provisioning-bridge --lib
cargo test -p provisioning-bridge --doc
-```plaintext
-
-### Nushell Tests
-
-```bash
-nu provisioning/core/nulib/integrations/runtime.nu
-nu provisioning/core/nulib/integrations/ssh_advanced.nu
-```plaintext
-
----
-
-## Performance
-
-| Operation | Performance |
-|-----------|-------------|
-| Runtime detection | ~50ms (cached: ~1ms) |
-| SSH pool init | ~100ms per connection |
-| SSH command exec | 90% faster with pooling |
-| Backup initiation | <100ms |
-| GitOps rule load | <10ms |
-
----
-
-## Migration Path
-
-If you want to fully migrate from provisioning to provctl + prov-ecosystem:
-
-1. **Phase 1**: Use integrations for new features (runtime, backup, gitops)
-2. **Phase 2**: Migrate SSH operations to `provctl-machines`
-3. **Phase 3**: Adopt provctl CLI for machine orchestration
-4. **Phase 4**: Use prov-ecosystem crates directly where beneficial
-
-Currently we implement **Phase 1** with selective integration.
-
----
-
-## Next Steps
-
-1. ✅ **Implement**: Integrate bridge into provisioning CLI
-2. ⏳ **Document**: Add to `docs/user/` for end users
-3. ⏳ **Examples**: Create example configurations
-4. ⏳ **Tests**: Integration tests with real providers
-5. ⏳ **Plugins**: Nushell plugins for performance
-
----
-
-## References
-
-- **Rust Bridge**: `provisioning/platform/integrations/provisioning-bridge/`
-- **Nushell Integration**: `provisioning/core/nulib/integrations/`
-- **KCL Schemas**: `provisioning/kcl/integrations/`
-- **Prov-Ecosystem**: `/Users/Akasha/Development/prov-ecosystem/`
-- **Provctl**: `/Users/Akasha/Development/provctl/`
-- **Rust Guidelines**: See Rust Development
-- **Nushell Guidelines**: See Nushell Development
-- **KCL Guidelines**: See KCL Module System
+
+nu provisioning/core/nulib/integrations/runtime.nu
+nu provisioning/core/nulib/integrations/ssh_advanced.nu
+
+
+
+## Performance
+
+| Operation | Performance |
+|-----------|-------------|
+| Runtime detection | ~50 ms (cached: ~1 ms) |
+| SSH pool init | ~100 ms per connection |
+| SSH command exec | 90% faster with pooling |
+| Backup initiation | <100 ms |
+| GitOps rule load | <10 ms |
+
+
+
+
+If you want to fully migrate from provisioning to provctl + prov-ecosystem:
+
+1. Phase 1: Use integrations for new features (runtime, backup, gitops)
+2. Phase 2: Migrate SSH operations to provctl-machines
+3. Phase 3: Adopt provctl CLI for machine orchestration
+4. Phase 4: Use prov-ecosystem crates directly where beneficial
+
+Currently we implement Phase 1 with selective integration.
+
+
+
+1. ✅ Implement: Integrate bridge into provisioning CLI
+2. ⏳ Document: Add to docs/user/ for end users
+3. ⏳ Examples: Create example configurations
+4. ⏳ Tests: Integration tests with real providers
+5. ⏳ Plugins: Nushell plugins for performance
+
+
+
+
+Rust Bridge : provisioning/platform/integrations/provisioning-bridge/
+Nushell Integration : provisioning/core/nulib/integrations/
+Nickel Schemas : provisioning/schemas/integrations/
+Prov-Ecosystem : /Users/Akasha/Development/prov-ecosystem/
+Provctl : /Users/Akasha/Development/provctl/
+Rust Guidelines : See Rust Development
+Nushell Guidelines : See Nushell Development
+Nickel Guidelines : See Nickel Module System
+
This document describes the new package-based architecture implemented for the provisioning system, replacing hardcoded extension paths with a flexible module discovery and loading system.
@@ -10135,15 +9489,15 @@ Currently we implement **Phase 1** with selective integration.
Version Management : Core package and extensions can be versioned independently
Developer Friendly : Easy workspace setup and module management
-
+
Contains fundamental schemas for provisioning:
-settings.k - System settings and configuration
-server.k - Server definitions and schemas
-defaults.k - Default configurations
-lib.k - Common library schemas
-dependencies.k - Dependency management schemas
+settings.ncl - System settings and configuration
+server.ncl - Server definitions and schemas
+defaults.ncl - Default configurations
+lib.ncl - Common library schemas
+dependencies.ncl - Dependency management schemas
Key Features:
@@ -10157,20 +9511,16 @@ Currently we implement **Phase 1** with selective integration.
module-loader discover taskservs # List all taskservs
module-loader discover providers --format yaml # List providers as YAML
module-loader discover clusters redis # Search for redis clusters
-```plaintext
-
-#### Supported Module Types
-
-- **Taskservs**: Infrastructure services (kubernetes, redis, postgres, etc.)
-- **Providers**: Cloud providers (upcloud, aws, local)
-- **Clusters**: Complete configurations (buildkit, web, oci-reg)
-
-### 3. Module Loading System
-
-#### Loading Commands
-
-```bash
-# Load modules into workspace
+
+
+
+Taskservs : Infrastructure services (kubernetes, redis, postgres, etc.)
+Providers : Cloud providers (upcloud, aws, local)
+Clusters : Complete configurations (buildkit, web, oci-reg)
+
+
+
+ # Load modules into workspace
module-loader load taskservs . [kubernetes, cilium, containerd]
module-loader load providers . [upcloud]
module-loader load clusters . [buildkit]
@@ -10179,26 +9529,22 @@ module-loader load clusters . [buildkit]
module-loader init workspace/infra/production \
--taskservs [kubernetes, cilium] \
--providers [upcloud]
-```plaintext
-
-#### Generated Files
-
-- `taskservs.k` - Auto-generated taskserv imports
-- `providers.k` - Auto-generated provider imports
-- `clusters.k` - Auto-generated cluster imports
-- `.manifest/*.yaml` - Module loading manifests
-
-## Workspace Structure
-
-### New Workspace Layout
-
-```plaintext
-workspace/infra/my-project/
+
+
+
+taskservs.ncl - Auto-generated taskserv imports
+providers.ncl - Auto-generated provider imports
+clusters.ncl - Auto-generated cluster imports
+.manifest/*.yaml - Module loading manifests
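Each generated file is a thin Nickel module that re-exports the loaded modules. As an illustration (the exact paths and field names below are assumptions, not the tool's verbatim output), a generated `taskservs.ncl` could look like:

```nickel
# Hypothetical shape of an auto-generated taskservs.ncl (illustrative only)
let kubernetes = import "./.taskservs/kubernetes/main.ncl" in
let cilium = import "./.taskservs/cilium/main.ncl" in
{
  # Re-export each loaded taskserv under a stable field name
  taskservs = {
    kubernetes = kubernetes,
    cilium = cilium,
  },
}
```

Workspace configs then import this single file instead of reaching into `.taskservs/` directly.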
+
+
+
+workspace/infra/my-project/
├── kcl.mod # Package dependencies
-├── servers.k # Main server configuration
-├── taskservs.k # Auto-generated taskserv imports
-├── providers.k # Auto-generated provider imports
-├── clusters.k # Auto-generated cluster imports
+├── servers.ncl # Main server configuration
+├── taskservs.ncl # Auto-generated taskserv imports
+├── providers.ncl # Auto-generated provider imports
+├── clusters.ncl # Auto-generated cluster imports
├── .taskservs/ # Loaded taskserv modules
│ ├── kubernetes/
│ ├── cilium/
@@ -10215,34 +9561,23 @@ workspace/infra/my-project/
├── tmp/ # Temporary files
├── resources/ # Resource definitions
└── clusters/ # Cluster configurations
-```plaintext
-
-### Import Patterns
-
-#### Before (Old System)
-
-```kcl
-# Hardcoded relative paths
+
+
+
+# Hardcoded relative paths
import ../../../kcl/server as server
import ../../../extensions/taskservs/kubernetes/kcl/kubernetes as k8s
-```plaintext
-
-#### After (New System)
-
-```kcl
-# Package-based imports
+
+
+# Package-based imports
import provisioning.server as server
# Auto-generated module imports (after loading)
-import .taskservs.kubernetes.kubernetes as k8s
-```plaintext
-
-## Package Distribution
-
-### Building Core Package
-
-```bash
-# Build distributable package
+import .taskservs.kubernetes.kubernetes as k8s
+
+
+
+# Build distributable package
./provisioning/tools/kcl-packager.nu build --version 1.0.0
# Install locally
@@ -10250,37 +9585,23 @@ import .taskservs.kubernetes.kubernetes as k8s
# Create release
./provisioning/tools/kcl-packager.nu build --format tar.gz --include-docs
-```plaintext
-
-### Package Installation Methods
-
-#### Method 1: Local Installation (Recommended for development)
-
-```toml
-[dependencies]
+
+
+
+[dependencies]
provisioning = { path = "~/.kcl/packages/provisioning", version = "0.0.1" }
-```plaintext
-
-#### Method 2: Git Repository (For distributed teams)
-
-```toml
-[dependencies]
+
+
+[dependencies]
provisioning = { git = "https://github.com/your-org/provisioning-kcl", version = "v0.0.1" }
-```plaintext
-
-#### Method 3: KCL Registry (When available)
-
-```toml
-[dependencies]
+
+
+[dependencies]
provisioning = { version = "0.0.1" }
-```plaintext
-
-## Developer Workflows
-
-### 1. New Project Setup
-
-```bash
-# Create workspace from template
+
+
+
+# Create workspace from template
cp -r provisioning/templates/workspaces/kubernetes ./my-k8s-cluster
cd my-k8s-cluster
@@ -10292,14 +9613,11 @@ module-loader load taskservs . [kubernetes, cilium, containerd]
module-loader load providers . [upcloud]
# Validate and deploy
-kcl run servers.k
+nickel export servers.ncl
provisioning server create --infra . --check
-```plaintext
-
-### 2. Extension Development
-
-```bash
-# Create new taskserv
+
+
+# Create new taskserv
mkdir -p extensions/taskservs/my-service/kcl
cd extensions/taskservs/my-service/kcl
@@ -10309,12 +9627,9 @@ echo 'provisioning = { path = "~/.kcl/packages/provisioning", version = "0.0.1"
# Develop and test
module-loader discover taskservs # Should find your service
-```plaintext
-
-### 3. Workspace Migration
-
-```bash
-# Analyze existing workspace
+
+
+# Analyze existing workspace
workspace-migrate.nu workspace/infra/old-project dry-run
# Perform migration
@@ -10322,12 +9637,9 @@ workspace-migrate.nu workspace/infra/old-project
# Verify migration
module-loader validate workspace/infra/old-project
-```plaintext
-
-### 4. Multi-Environment Management
-
-```bash
-# Development environment
+
+
+# Development environment
cd workspace/infra/dev
module-loader load taskservs . [redis, postgres]
module-loader load providers . [local]
@@ -10336,14 +9648,10 @@ module-loader load providers . [local]
cd workspace/infra/prod
module-loader load taskservs . [redis, postgres, kubernetes, monitoring]
module-loader load providers . [upcloud, aws] # Multi-cloud
-```plaintext
-
-## Module Management
-
-### Listing and Validation
-
-```bash
-# List loaded modules
+
+
+
+# List loaded modules
module-loader list taskservs .
module-loader list providers .
module-loader list clusters .
@@ -10353,33 +9661,23 @@ module-loader validate .
# Show workspace info
workspace-init.nu . info
-```plaintext
-
-### Unloading Modules
-
-```bash
-# Remove specific modules
+
+
+# Remove specific modules
module-loader unload taskservs . redis
module-loader unload providers . aws
# This regenerates import files automatically
-```plaintext
-
-### Module Information
-
-```bash
-# Get detailed module info
+
+
+# Get detailed module info
module-loader info taskservs kubernetes
module-loader info providers upcloud
module-loader info clusters buildkit
-```plaintext
-
-## CI/CD Integration
-
-### Pipeline Example
-
-```bash
-#!/usr/bin/env nu
+
+
+
+#!/usr/bin/env nu
# deploy-pipeline.nu
# Install specific versions
@@ -10395,1323 +9693,317 @@ module-loader validate $env.WORKSPACE_PATH
# Deploy infrastructure
provisioning server create --infra $env.WORKSPACE_PATH
-```plaintext
-
-## Troubleshooting
-
-### Common Issues
-
-#### Module Import Errors
-
-```plaintext
-Error: module not found
-```plaintext
-
-**Solution**: Verify modules are loaded and regenerate imports
-
-```bash
-module-loader list taskservs .
+
+
+
+
+Error: module not found
+
+Solution : Verify modules are loaded and regenerate imports
+module-loader list taskservs .
module-loader load taskservs . [kubernetes, cilium, containerd]
-```plaintext
-
-#### Provider Configuration Issues
-
-**Solution**: Check provider-specific configuration in `.providers/` directory
-
-#### KCL Compilation Errors
-
-**Solution**: Verify core package installation and kcl.mod configuration
-
-```bash
-kcl-packager.nu install --version latest
-kcl run --dry-run servers.k
-```plaintext
-
-### Debug Commands
-
-```bash
-# Show workspace structure
+
+
+Solution : Check provider-specific configuration in .providers/ directory
+
+Solution : Verify core package installation and kcl.mod configuration
+kcl-packager.nu install --version latest
+nickel typecheck servers.ncl
+
+
+# Show workspace structure
tree -a workspace/infra/my-project
# Check generated imports
-cat workspace/infra/my-project/taskservs.k
+cat workspace/infra/my-project/taskservs.ncl
# Validate KCL files
-kcl check workspace/infra/my-project/*.k
+nickel typecheck workspace/infra/my-project/*.ncl
# Show module manifests
cat workspace/infra/my-project/.manifest/taskservs.yaml
-```plaintext
-
-## Best Practices
-
-### 1. Version Management
-
-- Pin core package versions in production
-- Use semantic versioning for extensions
-- Test compatibility before upgrading
-
-### 2. Module Organization
-
-- Load only required modules to keep workspaces clean
-- Use meaningful workspace names
-- Document required modules in README
-
-### 3. Security
-
-- Exclude `.manifest/` and `data/` from version control
-- Use secrets management for sensitive configuration
-- Validate modules before loading in production
-
-### 4. Performance
-
-- Load modules at workspace initialization, not runtime
-- Cache discovery results when possible
-- Use parallel loading for multiple modules
-
-## Migration Guide
-
-For existing workspaces, follow these steps:
-
-### 1. Backup Current Workspace
-
-```bash
-cp -r workspace/infra/existing workspace/infra/existing-backup
-```plaintext
-
-### 2. Analyze Migration Requirements
-
-```bash
-workspace-migrate.nu workspace/infra/existing dry-run
-```plaintext
-
-### 3. Perform Migration
-
-```bash
-workspace-migrate.nu workspace/infra/existing
-```plaintext
-
-### 4. Load Required Modules
-
-```bash
-cd workspace/infra/existing
+
+
+
+
+Pin core package versions in production
+Use semantic versioning for extensions
+Test compatibility before upgrading
+
+
+
+Load only required modules to keep workspaces clean
+Use meaningful workspace names
+Document required modules in README
+
+
+
+Exclude .manifest/ and data/ from version control
+Use secrets management for sensitive configuration
+Validate modules before loading in production
+
+
+
+Load modules at workspace initialization, not runtime
+Cache discovery results when possible
+Use parallel loading for multiple modules
+
+
+For existing workspaces, follow these steps:
+
+cp -r workspace/infra/existing workspace/infra/existing-backup
+
+
+workspace-migrate.nu workspace/infra/existing dry-run
+
+
+workspace-migrate.nu workspace/infra/existing
+
+
+cd workspace/infra/existing
module-loader load taskservs . [kubernetes, cilium]
module-loader load providers . [upcloud]
-```plaintext
-
-### 5. Test and Validate
-
-```bash
-kcl run servers.k
+
+
+nickel export servers.ncl
module-loader validate .
-```plaintext
-
-### 6. Deploy
-
-```bash
-provisioning server create --infra . --check
-```plaintext
-
-## Future Enhancements
-
-- Registry-based module distribution
-- Module dependency resolution
-- Automatic version updates
-- Module templates and scaffolding
-- Integration with external package managers
-
-Status : Reference Guide
-Last Updated : 2025-12-15
-Related : ADR-011: Migration from KCL to Nickel
-
-
-Need to define infrastructure/schemas?
-├─ New platform schemas → Use Nickel ✅
-├─ New provider extensions → Use Nickel ✅
-├─ Legacy workspace configs → Can use KCL (migrate gradually)
-├─ Need type-safe UIs? → Nickel + TypeDialog ✅
-├─ Application settings? → Use TOML (not KCL/Nickel)
-└─ K8s/CI-CD config? → Use YAML (not KCL/Nickel)
-```plaintext
-
----
-
-## 1. Side-by-Side Code Examples
-
-### Simple Schema: Server Configuration
-
-#### KCL Approach
-
-```kcl
-schema ServerDefaults:
- name: str
- cpu_cores: int = 2
- memory_gb: int = 4
- os: str = "ubuntu"
-
- check:
- cpu_cores > 0, "CPU cores must be positive"
- memory_gb > 0, "Memory must be positive"
-
-server_defaults: ServerDefaults = {
- name = "web-server",
- cpu_cores = 4,
- memory_gb = 8,
- os = "ubuntu",
-}
-```plaintext
-
-#### Nickel Approach (Three-File Pattern)
-
-**server_contracts.ncl**:
-
-```nickel
-{
- ServerDefaults = {
- name | String,
- cpu_cores | Number,
- memory_gb | Number,
- os | String,
- },
-}
-```plaintext
-
-**server_defaults.ncl**:
-
-```nickel
-{
- server = {
- name = "web-server",
- cpu_cores = 4,
- memory_gb = 8,
- os = "ubuntu",
- },
-}
-```plaintext
-
-**server.ncl**:
-
-```nickel
-let contracts = import "./server_contracts.ncl" in
-let defaults = import "./server_defaults.ncl" in
-
-{
- defaults = defaults,
-
- make_server | not_exported = fun overrides =>
- defaults.server & overrides,
-
- DefaultServer = defaults.server,
-}
-```plaintext
-
-**Usage**:
-
-```nickel
-let server = import "./server.ncl" in
-
-# Simple override
-my_server = server.make_server { cpu_cores = 8 }
-
-# With custom field (Nickel allows this!)
-my_custom = server.defaults.server & {
- cpu_cores = 16,
- custom_monitoring_level = "verbose" # ✅ Works!
-}
-```plaintext
-
-**Key Differences**:
-
-- **KCL**: Validation inline, single file, rigid schema
-- **Nickel**: Separated concerns (contracts, defaults, instances), flexible composition
-
----
-
-### Complex Schema: Provider with Multiple Types
-
-#### KCL (from `provisioning/extensions/providers/upcloud/kcl/`)
-
-```kcl
-schema StorageBackup:
- backup_id: str
- frequency: str
- retention_days: int = 7
-
-schema ServerUpcloud:
- name: str
- plan: str
- zone: str
- storage_backups: [StorageBackup] = []
-
-schema ProvisionUpcloud:
- api_key: str
- api_password: str
- servers: [ServerUpcloud] = []
-
-provision_upcloud: ProvisionUpcloud = {
- api_key = ""
- api_password = ""
- servers = []
-}
-```plaintext
-
-#### Nickel (from `provisioning/extensions/providers/upcloud/nickel/`)
-
-**upcloud_contracts.ncl**:
-
-```nickel
-{
- StorageBackup = {
- backup_id | String,
- frequency | String,
- retention_days | Number,
- },
-
- ServerUpcloud = {
- name | String,
- plan | String,
- zone | String,
- storage_backups | Array,
- },
-
- ProvisionUpcloud = {
- api_key | String,
- api_password | String,
- servers | Array,
- },
-}
-```plaintext
-
-**upcloud_defaults.ncl**:
-
-```nickel
-{
- storage_backup = {
- backup_id = "",
- frequency = "daily",
- retention_days = 7,
- },
-
- server_upcloud = {
- name = "",
- plan = "1xCPU-1GB",
- zone = "us-nyc1",
- storage_backups = [],
- },
-
- provision_upcloud = {
- api_key = "",
- api_password = "",
- servers = [],
- },
-}
-```plaintext
-
-**upcloud_main.ncl** (from actual codebase):
-
-```nickel
-let contracts = import "./upcloud_contracts.ncl" in
-let defaults = import "./upcloud_defaults.ncl" in
-
-{
- defaults = defaults,
-
- make_storage_backup | not_exported = fun overrides =>
- defaults.storage_backup & overrides,
-
- make_server_upcloud | not_exported = fun overrides =>
- defaults.server_upcloud & overrides,
-
- make_provision_upcloud | not_exported = fun overrides =>
- defaults.provision_upcloud & overrides,
-
- DefaultStorageBackup = defaults.storage_backup,
- DefaultServerUpcloud = defaults.server_upcloud,
- DefaultProvisionUpcloud = defaults.provision_upcloud,
-}
-```plaintext
-
-**Usage Comparison**:
-
-```nickel
-# KCL way (KCL does not support this well)
-# Cannot easily extend without schema modification
-
-# Nickel way (flexible!)
-let upcloud = import "./upcloud.ncl" in
-
-# Simple override
-staging_server = upcloud.make_server_upcloud {
- name = "staging-01",
- zone = "eu-fra1",
-}
-
-# Complex config with custom fields
-production_stack = upcloud.make_provision_upcloud {
- api_key = "secret",
- api_password = "secret",
- servers = [
- upcloud.make_server_upcloud { name = "prod-web-01" },
- upcloud.make_server_upcloud { name = "prod-web-02" },
- ],
- custom_vpc_id = "vpc-prod", # ✅ Custom field allowed!
- monitoring_enabled = true, # ✅ Custom field allowed!
- backup_schedule = "24h", # ✅ Custom field allowed!
-}
-```plaintext
-
----
-
-## 2. Performance Benchmarks
-
-### Evaluation Speed
-
-| File Type | KCL | Nickel | Improvement |
-|-----------|-----|--------|------------|
-| Simple schema (100 lines) | 45ms | 18ms | 60% faster |
-| Complex config (500 lines) | 180ms | 72ms | 60% faster |
-| Large nested (2000 lines) | 420ms | 160ms | 62% faster |
-| Infrastructure full stack | 850ms | 340ms | 60% faster |
-
-**Test Conditions**:
-
-- MacOS 13.x, M1 Pro
-- Single evaluation run
-- JSON output export
-- Average of 5 runs
-
-### Memory Usage
-
-| Configuration | KCL | Nickel | Improvement |
-|---------------|-----|--------|------------|
-| Platform schemas (422 files) | ~180MB | ~85MB | 53% less |
-| Full workspace (47 files) | ~45MB | ~22MB | 51% less |
-| Single provider ext | ~8MB | ~4MB | 50% less |
-
-**Lazy Evaluation Benefit**:
-
-- KCL: Evaluates all schemas upfront
-- Nickel: Only evaluates what's used (lazy)
-- Nickel advantage: 40-50% memory savings on large configs
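The lazy-evaluation difference is easy to demonstrate in Nickel itself: a binding whose evaluation would fail is simply never forced if nothing uses it. A minimal sketch (using the standard `std.fail_with`):

```nickel
# `expensive` would abort evaluation, but it is never forced:
# only `config.workers` is demanded, so this program evaluates to 4.
let expensive = std.fail_with "only raised if this binding is forced" in
let config = { workers = 4 } in
config.workers
```

KCL, by contrast, evaluates every schema and value eagerly, which is where the memory gap above comes from.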
-
----
-
-## 3. Use Case Examples
-
-### Use Case 1: Simple Server Definition
-
-**KCL**:
-
-```kcl
-schema ServerConfig:
- name: str
- zone: str = "us-nyc1"
-
-web_server: ServerConfig = {
- name = "web-01",
-}
-```plaintext
-
-**Nickel**:
-
-```nickel
-let server = import "./server.ncl" in
-web_server = server.make_server { name = "web-01" }
-```plaintext
-
-**Winner**: Nickel (simpler, cleaner)
-
----
-
-### Use Case 2: Multiple Taskservs with Dependencies
-
-**KCL** (from wuji infrastructure):
-
-```kcl
-schema TaskServDependency:
- name: str
- wait_for_health: bool = false
-
-schema TaskServ:
- name: str
- version: str
- dependencies: [TaskServDependency] = []
-
-taskserv_kubernetes: TaskServ = {
- name = "kubernetes",
- version = "1.28.0",
- dependencies = [
- {name = "containerd"},
- {name = "etcd"},
- ]
-}
-
-taskserv_cilium: TaskServ = {
- name = "cilium",
- version = "1.14.0",
- dependencies = [
- {name = "kubernetes", wait_for_health = true}
- ]
-}
-```plaintext
-
-**Nickel** (from wuji/main.ncl):
-
-```nickel
-let ts_kubernetes = import "./taskservs/kubernetes.ncl" in
-let ts_cilium = import "./taskservs/cilium.ncl" in
-let ts_containerd = import "./taskservs/containerd.ncl" in
-
-{
- taskservs = {
- kubernetes = ts_kubernetes.kubernetes,
- cilium = ts_cilium.cilium,
- containerd = ts_containerd.containerd,
- },
-}
-```plaintext
-
-**Winner**: Nickel (modular, scalable to 20 taskservs)
-
----
-
-### Use Case 3: Configuration Extension with Custom Fields
-
-**Scenario**: Need to add monitoring configuration to server definition
-
-**KCL**:
-
-```kcl
-schema ServerConfig:
- name: str
- # Would need to modify schema!
- monitoring_enabled: bool = false
- monitoring_level: str = "basic"
-
-# All existing configs need updating...
-```plaintext
-
-**Nickel**:
-
-```nickel
-let server = import "./server.ncl" in
-
-# Add custom fields without modifying schema!
-my_server = server.defaults.server & {
- name = "web-01",
- monitoring_enabled = true,
- monitoring_level = "detailed",
- custom_tags = ["production", "critical"],
- grafana_dashboard = "web-servers",
-}
-```plaintext
-
-**Winner**: Nickel (no schema modifications needed)
-
----
-
-## 4. Architecture Patterns Comparison
-
-### Schema Inheritance
-
-**KCL Approach**:
-
-```kcl
-schema ServerDefaults:
- cpu: int = 2
- memory: int = 4
-
-schema Server(ServerDefaults):
- name: str
-
-server: Server = {
- name = "web-01",
- cpu = 4,
- memory = 8,
-}
-```plaintext
-
-**Problem**: Inheritance creates rigid hierarchies, breaking changes propagate
-
----
-
-**Nickel Approach**:
-
-```nickel
-# defaults.ncl
-server_defaults = {
- cpu = 2,
- memory = 4,
-}
-
-# main.ncl
-let defaults = import "./defaults.ncl" in
-let make_server = fun overrides =>
-  defaults.server_defaults & overrides
-in
-
-server = make_server {
- name = "web-01",
- cpu = 4,
- memory = 8,
-}
-```plaintext
-
-**Advantage**: Flexible composition via record merging, no inheritance rigidity
-
----
-
-### Validation
-
-**KCL Validation** (compile-time, inline):
-
-```kcl
-schema Config:
- timeout: int = 5
-
- check:
- timeout > 0, "Timeout must be positive"
- timeout < 300, "Timeout must be < 5min"
-```plaintext
-
-**Pros**: Validation at schema definition
-**Cons**: Overhead during compilation, rigid
-
----
-
-**Nickel Validation** (runtime, contract-based):
-
-```nickel
-# contracts.ncl - Pure type definitions
-Config = {
- timeout | Number,
-}
-
-# Usage - Optional validation
-let validate_config = fun config =>
-  if config.timeout <= 0 then
-    std.fail_with "Timeout must be positive"
-  else if config.timeout >= 300 then
-    std.fail_with "Timeout must be < 5min"
-  else
-    config
-
-# Apply only when needed
-my_config = validate_config { timeout = 10 }
-```plaintext
-
-**Pros**: Lazy evaluation, optional, fine-grained control
-**Cons**: Must invoke validation explicitly
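When the declarative style is preferred over an explicit validation call, a predicate can also be turned into a contract and attached directly to the field; a sketch using `std.contract.from_predicate`:

```nickel
# Build a contract from a boolean predicate; it is checked lazily,
# only when the annotated field is actually evaluated.
let Timeout = std.contract.from_predicate (fun t => t > 0 && t < 300) in
{
  timeout | Timeout = 10,
}
```

This keeps KCL-style per-field checks while preserving Nickel's lazy, opt-in evaluation model.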
-
----
-
-## 5. Migration Patterns (Before/After)
-
-### Pattern 1: Simple Schema Migration
-
-**Before (KCL)**:
-
-```kcl
-schema Scheduler:
- strategy: str = "fifo"
- workers: int = 4
-
- check:
- workers > 0, "Workers must be positive"
-
-scheduler_config: Scheduler = {
- strategy = "priority",
- workers = 8,
-}
-```plaintext
-
-**After (Nickel)**:
-
-`scheduler_contracts.ncl`:
-
-```nickel
-{
- Scheduler = {
- strategy | String,
- workers | Number,
- },
-}
-```plaintext
-
-`scheduler_defaults.ncl`:
-
-```nickel
-{
- scheduler = {
- strategy = "fifo",
- workers = 4,
- },
-}
-```plaintext
-
-`scheduler.ncl`:
-
-```nickel
-let contracts = import "./scheduler_contracts.ncl" in
-let defaults = import "./scheduler_defaults.ncl" in
-
-{
- defaults = defaults,
- make_scheduler | not_exported = fun o =>
- defaults.scheduler & o,
- DefaultScheduler = defaults.scheduler,
- SchedulerConfig = defaults.scheduler & {
- strategy = "priority",
- workers = 8,
- },
-}
-```plaintext
-
----
-
-### Pattern 2: Union Types → Enums
-
-**Before (KCL)**:
-
-```kcl
-schema Mode:
- deployment_type: str = "solo" # "solo" | "multiuser" | "cicd" | "enterprise"
-
- check:
- deployment_type in ["solo", "multiuser", "cicd", "enterprise"],
- "Invalid deployment type"
-```plaintext
-
-**After (Nickel)**:
-
-```nickel
-# contracts.ncl
-{
- Mode = {
- deployment_type | [| 'solo, 'multiuser, 'cicd, 'enterprise |],
- },
-}
-
-# defaults.ncl
-{
- mode = {
- deployment_type = 'solo,
- },
-}
-```plaintext
-
-**Benefits**: Type-safe, no string validation needed
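Enum tags also compose with pattern matching, so downstream logic can branch on the deployment type without any string comparison or manual validation. A small sketch:

```nickel
# Map each deployment type to a worker count; evaluates to 4 here.
let mode = 'multiuser in
mode |> match {
  'solo => 1,
  'multiuser => 4,
  'cicd => 8,
  'enterprise => 16,
}
```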
-
----
-
-### Pattern 3: Schema Inheritance → Record Merging
-
-**Before (KCL)**:
-
-```kcl
-schema ServerDefaults:
- cpu: int = 2
- memory: int = 4
-
-schema Server(ServerDefaults):
- name: str
-
-web_server: Server = {
- name = "web-01",
- cpu = 8,
- memory = 16,
-}
-```plaintext
-
-**After (Nickel)**:
-
-```nickel
-# defaults.ncl
-{
- server_defaults = {
- cpu = 2,
- memory = 4,
- },
-
- web_server = {
- name = "web-01",
- cpu = 8,
- memory = 16,
- },
-}
-
-# main.ncl - Composition
-let defaults = import "./defaults.ncl" in
-let make_server = fun config =>
-  defaults.server_defaults & config
-in
-web_server = make_server { name = "web-01" }
-```plaintext
-
-**Advantage**: Explicit, flexible, composable
-
----
-
-## 6. Deployment Workflows
-
-### Development Mode (Single Source of Truth)
-
-**When to Use**: Local development, testing, iterations
-
-**Workflow**:
-
-```bash
-# Edit workspace config
-cd workspace_librecloud/nickel
-vim wuji/main.ncl
-
-# Test immediately (relative imports)
-nickel export wuji/main.ncl --format json
-
-# Changes to central provisioning reflected immediately
-vim ../../provisioning/schemas/lib/main.ncl
-nickel export wuji/main.ncl # Uses updated schemas
-```plaintext
-
-**Imports** (relative, central):
-
-```nickel
-import "../../provisioning/schemas/main.ncl"
-import "../../provisioning/extensions/taskservs/kubernetes/nickel/main.ncl"
-```plaintext
-
----
-
-### Production Mode (Frozen Snapshots)
-
-**When to Use**: Deployments, releases, reproducibility
-
-**Workflow**:
-
-```bash
-# 1. Create immutable snapshot
-provisioning workspace freeze \
- --version "2025-12-15-prod-v1" \
- --env production
-
-# 2. Frozen structure created
-.frozen/2025-12-15-prod-v1/
-├── provisioning/schemas/ # Snapshot
-├── extensions/ # Snapshot
-└── workspace/ # Snapshot
-
-# 3. Deploy from frozen
-provisioning deploy \
- --frozen "2025-12-15-prod-v1" \
- --infra wuji
-
-# 4. Rollback if needed
-provisioning deploy \
- --frozen "2025-12-10-prod-v0" \
- --infra wuji
-```plaintext
-
-**Frozen Imports** (rewritten to local):
-
-```nickel
-# Original in workspace
-import "../../provisioning/schemas/main.ncl"
-
-# Rewritten in frozen snapshot
-import "./provisioning/schemas/main.ncl"
-```plaintext
-
-**Benefits**:
-
-- ✅ Immutable deployments
-- ✅ No external dependencies
-- ✅ Reproducible across environments
-- ✅ Works offline/air-gapped
-- ✅ Easy rollback
-
----
-
-## 7. Troubleshooting Guide
-
-### Error: "unexpected token" with Multiple Let Bindings
-
-**Problem**:
-
-```nickel
-# ❌ WRONG
-let A = { x = 1 }
-let B = { y = 2 }
-{ A = A, B = B }
-```plaintext
-
-Error: `unexpected token`
-
-**Solution**: Use `let...in` chaining:
-
-```nickel
-# ✅ CORRECT
-let A = { x = 1 } in
-let B = { y = 2 } in
-{ A = A, B = B }
-```plaintext
-
----
-
-### Error: "this can't be used as a contract"
-
-**Problem**:
-
-```nickel
-# ❌ WRONG
-let StorageVol = {
- mount_path : String | null = null,
-}
-```plaintext
-
-Error: `this can't be used as a contract`
-
-**Explanation**: Union types with `null` don't work in field annotations
-
-**Solution**: Use untyped assignment:
-
-```nickel
-# ✅ CORRECT
-let StorageVol = {
- mount_path = null,
-}
-```plaintext
-
----
-
-### Error: "infinite recursion" when Exporting
-
-**Problem**:
-
-```nickel
-# ❌ WRONG
-{
- get_value = fun x => x + 1,
- result = get_value 5,
-}
-```plaintext
-
-Error: Functions can't be serialized
-
-**Solution**: Mark helper functions `not_exported`:
-
-```nickel
-# ✅ CORRECT
-{
- get_value | not_exported = fun x => x + 1,
- result = get_value 5,
-}
-```plaintext
-
----
-
-### Error: "field not found" After Renaming
-
-**Problem**:
-
-```nickel
-let defaults = import "./defaults.ncl" in
-defaults.scheduler_config # But file has "scheduler"
-```plaintext
-
-Error: `field not found`
-
-**Solution**: Use exact field names:
-
-```nickel
-let defaults = import "./defaults.ncl" in
-defaults.scheduler # Correct name from defaults.ncl
-```plaintext
-
----
-
-### Performance Issue: Slow Exports
-
-**Problem**: Large nested configs slow to export
-
-**Solution**: Check for circular references or missing `not_exported`:
-
-```nickel
-# ❌ Slow - functions being serialized
-{
- validate_config = fun x => x,
- data = { foo = "bar" },
-}
-
-# ✅ Fast - functions excluded
-{
- validate_config | not_exported = fun x => x,
- data = { foo = "bar" },
-}
-```plaintext
-
----
-
-## 8. Best Practices
-
-### For Nickel Schemas
-
-1. **Follow Three-File Pattern**
-
+
+provisioning server create --infra . --check
-module_contracts.ncl # Types only
-module_defaults.ncl # Values only
-module.ncl # Instances + interface
-
-2. **Use Hybrid Interface** (4 levels)
- - Level 1: Direct defaults (inspection)
- - Level 2: Maker functions (customization)
- - Level 3: Default instances (pre-built)
- - Level 4: Contracts (optional, advanced)
-
-3. **Record Merging for Composition**
-
- ```nickel
- let defaults = import "./defaults.ncl" in
- my_config = defaults.server & { custom_field = "value" }
-
-
-
-Mark Helper Functions not_exported
-validate | not_exported = fun x => x,
-
-
-
-No Null Values in Defaults
-# ✅ Good
-{ field = "" } # empty string for optional
-
-# ❌ Avoid
-{ field = null } # causes export issues
-
-
-
-
-
-
-
-Schema-First Development
+
-Define schemas before configs
-Explicit validation
+Registry-based module distribution
+Module dependency resolution
+Automatic version updates
+Module templates and scaffolding
+Integration with external package managers
-
-
-Immutability by Default
+
+
+The configuration system has been refactored into modular components to achieve 2-3x performance improvements
+for regular commands while maintaining full functionality for complex operations.
+
+
+File : loader-minimal.nu (~150 lines)
+Contains only essential functions needed for:
-KCL enforces immutability
-Use _ prefix only when necessary
+Workspace detection
+Environment determination
+Project root discovery
+Fast path detection
-
-
-Direct Submodule Imports
-import provisioning.lib as lib
+Exported Functions :
+
+get-active-workspace - Get current workspace
+detect-current-environment - Determine dev/test/prod
+get-project-root - Find project directory
+get-defaults-config-path - Path to default config
+check-if-sops-encrypted - SOPS file detection
+find-sops-config-path - Locate SOPS config
+
+Used by :
+
+Help commands (help infrastructure, help workspace, etc.)
+Status commands
+Workspace listing
+Quick reference operations
+
+
+File : loader-lazy.nu (~80 lines)
+Smart loader that decides which configuration to load:
+
+Fast path for help/status commands
+Full path for operations that need config
+
+Key Function :
+
+command-needs-full-config - Determines if full config required
+
+
+File : loader.nu (1990 lines)
+Original comprehensive loader that handles:
+
+Hierarchical config loading
+Variable interpolation
+Config validation
+Provider configuration
+Platform configuration
+
+Used by :
+
+Server creation
+Infrastructure operations
+Deployment commands
+Anything needing full config
+
+
+
+| Operation | Time | Notes |
+|-----------|------|-------|
+| Workspace detection | 0.023s | 23 ms for minimal load |
+| Full config load | 0.091s | ~4x slower than minimal |
+| Help command | 0.040s | Uses minimal loader only |
+| Status command | 0.030s | Fast path, no full config |
+| Server operations | 0.150s+ | Requires full config load |
+
+
+
+
+Help commands : 30-40% faster (40ms vs 60ms with full config)
+Workspace operations : 50% faster (uses minimal loader)
+Status checks : Nearly instant (23ms)
+
+
+Help/Status Commands
+ ↓
+loader-lazy.nu
+ ↓
+loader-minimal.nu (workspace, environment detection)
+ ↓
+ (no further deps)
+
+Infrastructure/Server Commands
+ ↓
+loader-lazy.nu
+ ↓
+loader.nu (full configuration)
+ ├── loader-minimal.nu (for workspace detection)
+ ├── Interpolation functions
+ ├── Validation functions
+ └── Config merging logic
-
-
-Complex Validation
-check:
- timeout > 0, "Must be positive"
- timeout < 300, "Must be < 5min"
+
+
+# Uses minimal loader - 23ms
+./provisioning help infrastructure
+./provisioning workspace list
+./provisioning version
-
+
+# Uses minimal loader with some full config - ~50ms
+./provisioning status
+./provisioning workspace active
+./provisioning config validate
+
+
+# Uses full loader - ~150ms
+./provisioning server create --infra myinfra
+./provisioning taskserv create kubernetes
+./provisioning workflow submit batch.yaml
+
+
+
+# In loader-lazy.nu
+let is_fast_command = (
+ $command == "help" or
+ $command == "status" or
+ $command == "version"
+)
+
+if $is_fast_command {
+ # Use minimal loader only (0.023s)
+ get-minimal-config
+} else {
+ # Load full configuration (0.091s)
+ load-provisioning-config
+}
+
+
+The minimal loader returns a lightweight config record:
+{
+ workspace: {
+ name: "librecloud"
+ path: "/path/to/workspace_librecloud"
+ }
+ environment: "dev"
+ debug: false
+ paths: {
+ base: "/path/to/workspace_librecloud"
+ }
+}
+
+This is sufficient for:
+
+Workspace identification
+Environment determination
+Path resolution
+Help text generation
+
+
+The full loader returns comprehensive configuration with:
+
+Workspace settings
+Provider configurations
+Platform settings
+Interpolated variables
+Validation results
+Environment-specific overrides
+
+
+
+
+Commands are already categorized (help, workspace, server, etc.)
+Help system uses fast path (minimal loader)
+Infrastructure commands use full path (full loader)
+No changes needed to command implementations
-
-
-
-Type-safe prompts, forms, and schemas that bidirectionally integrate with Nickel .
-Location : /Users/Akasha/Development/typedialog
-
-# 1. Define schema in Nickel
-cat > server.ncl << 'EOF'
-let contracts = import "./contracts.ncl" in
-{
- DefaultServer = {
- name = "web-01",
- cpu = 4,
- memory = 8,
- zone = "us-nyc1",
- },
-}
-EOF
+
+When creating new modules:
+
+Check if full config is needed
+If not, use loader-minimal.nu functions only
+If yes, use get-config from main config accessor
+
+
+
+
+Cache full config for 60 seconds
+Reuse config across related commands
+Potential: Additional 50% improvement
+
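The 60-second caching idea above can be sketched as a small TTL wrapper. This is an illustrative Rust sketch (the loaders themselves are Nushell), and `CachedConfig` and its API are hypothetical names, not part of the codebase:

```rust
use std::time::{Duration, Instant};

// Hypothetical sketch of the proposed 60-second config cache.
// `CachedConfig` is an illustrative name, not an existing loader API.
struct CachedConfig<T> {
    ttl: Duration,
    entry: Option<(Instant, T)>,
}

impl<T: Clone> CachedConfig<T> {
    fn new(ttl: Duration) -> Self {
        Self { ttl, entry: None }
    }

    // Return the cached value while it is fresh; otherwise run the loader
    // and cache its result with the current timestamp.
    fn get(&mut self, now: Instant, load: impl FnOnce() -> T) -> T {
        let fresh = matches!(&self.entry, Some((at, _)) if now.duration_since(*at) < self.ttl);
        if !fresh {
            self.entry = Some((now, load()));
        }
        self.entry.as_ref().unwrap().1.clone()
    }
}

fn main() {
    let mut cache = CachedConfig::new(Duration::from_secs(60));
    let t0 = Instant::now();
    assert_eq!(cache.get(t0, || "full-config"), "full-config");
    // Within the TTL the loader is not called again.
    assert_eq!(cache.get(t0 + Duration::from_secs(30), || "reloaded"), "full-config");
    // After expiry the loader runs again.
    assert_eq!(cache.get(t0 + Duration::from_secs(61), || "reloaded"), "reloaded");
}
```

The same pattern ports to Nushell by storing a timestamp alongside the cached record and comparing it before reloading.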
+
+
+Create thin config profiles for common scenarios
+Pre-loaded templates for workspace/infra combinations
+Fast switching between profiles
+
+
+
+Load workspace and provider configs in parallel
+Async validation and interpolation
+Potential: 30% improvement for full config load
+
+
+
+Only add if:
+
+Used by help/status commands
+Doesn’t require full config
+Performance-critical path
+
+
+
+Changes are backward compatible
+Validate against existing config files
+Update tests in test suite
+
+
+# Benchmark minimal loader
+time nu -n -c "use loader-minimal.nu *; get-active-workspace"
-# 2. Generate interactive form from schema
-typedialog form --schema server.ncl --output json
+# Benchmark full loader
+time nu -c "use config/accessor.nu *; get-config"
-# 3. User fills form interactively (CLI, TUI, or Web)
-# Prompts generated from field names
-# Defaults populated from Nickel config
-
-# 4. Output back to Nickel
-typedialog form --input form.toml --output nickel
-```plaintext
-
-### Benefits
-
-- **Type-Safe UIs**: Forms validated against Nickel contracts
-- **Auto-Generated**: No UI code to maintain
-- **Multiple Backends**: CLI (inquire), TUI (ratatui), Web (axum)
-- **Multiple Formats**: JSON, YAML, TOML, Nickel output
-- **Bidirectional**: Nickel → UIs → Nickel
-
-### Example: Infrastructure Wizard
-
-```bash
-# User runs
-provisioning init --wizard
-
-# Backend generates TypeDialog form from:
-provisioning/schemas/config/workspace_config/main.ncl
-
-# Interactive form with:
-- workspace_name (text prompt)
-- deployment_mode (select: solo/multiuser/cicd/enterprise)
-- preferred_provider (select: upcloud/aws/hetzner)
-- taskservs (multi-select: kubernetes, cilium, etcd, etc)
-- custom_settings (advanced, optional)
-
-# Output: workspace_config.ncl (valid Nickel!)
-```plaintext
-
----
-
-## 10. Migration Checklist
-
-### Before Starting Migration
-
-- [ ] Read ADR-011
-- [ ] Review [Nickel Migration Guide](../development/nickel-executable-examples.md)
-- [ ] Identify which module to migrate
-- [ ] Check for dependencies on other modules
-
-### During Migration
-
-- [ ] Extract contracts from KCL schema
-- [ ] Extract defaults from KCL config
-- [ ] Create main.ncl with hybrid interface
-- [ ] Validate JSON export: `nickel export main.ncl --format json`
-- [ ] Compare JSON output with original KCL
-
-### Validation
-
-- [ ] All required fields present
-- [ ] No null values (use empty strings/arrays)
-- [ ] Contracts are pure definitions
-- [ ] Defaults are complete values
-- [ ] Main file has 4-level interface
-- [ ] Syntax validation passes
-- [ ] No `...` as code omission indicators
-
-### Post-Migration
-
-- [ ] Update imports in dependent files
-- [ ] Test in development mode
-- [ ] Create frozen snapshot
-- [ ] Test production deployment
-- [ ] Update documentation
-
----
-
-## 11. Real-World Examples from Codebase
-
-### Example 1: Platform Schemas Entry Point
-
-**File**: `provisioning/schemas/main.ncl` (174 lines)
-
-```nickel
-# Domain-organized architecture
-{
- lib | doc "Core library types"
- = import "./lib/main.ncl",
-
- config | doc "Settings, defaults, workspace_config"
- = {
- settings = import "./config/settings/main.ncl",
- defaults = import "./config/defaults/main.ncl",
- workspace_config = import "./config/workspace_config/main.ncl",
- },
-
- infrastructure | doc "Compute, storage, provisioning"
- = {
- compute = {
- server = import "./infrastructure/compute/server/main.ncl",
- cluster = import "./infrastructure/compute/cluster/main.ncl",
- },
- storage = {
- vm = import "./infrastructure/storage/vm/main.ncl",
- },
- },
-
- operations | doc "Workflows, batch, dependencies, tasks"
- = {
- workflows = import "./operations/workflows/main.ncl",
- batch = import "./operations/batch/main.ncl",
- },
-
- deployment | doc "Kubernetes, modes"
- = {
- kubernetes = import "./deployment/kubernetes/main.ncl",
- modes = import "./deployment/modes/main.ncl",
- },
-}
-```plaintext
-
-**Usage**:
-
-```nickel
-let provisioning = import "./main.ncl" in
-
-provisioning.lib.Storage
-provisioning.config.settings
-provisioning.infrastructure.compute.server
-provisioning.operations.workflows
-```plaintext
-
----
-
-### Example 2: Provider Extension (UpCloud)
-
-**File**: `provisioning/extensions/providers/upcloud/nickel/main.ncl` (38 lines)
-
-```nickel
-let contracts_lib = import "./contracts.ncl" in
-let defaults_lib = import "./defaults.ncl" in
-
-{
- defaults = defaults_lib,
-
- make_storage_backup | not_exported = fun overrides =>
- defaults_lib.storage_backup & overrides,
-
- make_storage | not_exported = fun overrides =>
- defaults_lib.storage & overrides,
-
- make_provision_env | not_exported = fun overrides =>
- defaults_lib.provision_env & overrides,
-
- make_provision_upcloud | not_exported = fun overrides =>
- defaults_lib.provision_upcloud & overrides,
-
- make_server_defaults_upcloud | not_exported = fun overrides =>
- defaults_lib.server_defaults_upcloud & overrides,
-
- make_server_upcloud | not_exported = fun overrides =>
- defaults_lib.server_upcloud & overrides,
-
- DefaultStorageBackup = defaults_lib.storage_backup,
- DefaultStorage = defaults_lib.storage,
- DefaultProvisionEnv = defaults_lib.provision_env,
- DefaultProvisionUpcloud = defaults_lib.provision_upcloud,
- DefaultServerDefaults_upcloud = defaults_lib.server_defaults_upcloud,
- DefaultServerUpcloud = defaults_lib.server_upcloud,
-}
-```plaintext
-
----
-
-### Example 3: Workspace Infrastructure (wuji)
-
-**File**: `workspace_librecloud/nickel/wuji/main.ncl` (53 lines)
-
-```nickel
-let settings_config = import "./settings.ncl" in
-let ts_cilium = import "./taskservs/cilium.ncl" in
-let ts_containerd = import "./taskservs/containerd.ncl" in
-let ts_coredns = import "./taskservs/coredns.ncl" in
-let ts_crio = import "./taskservs/crio.ncl" in
-let ts_crun = import "./taskservs/crun.ncl" in
-let ts_etcd = import "./taskservs/etcd.ncl" in
-let ts_external_nfs = import "./taskservs/external-nfs.ncl" in
-let ts_k8s_nodejoin = import "./taskservs/k8s-nodejoin.ncl" in
-let ts_kubernetes = import "./taskservs/kubernetes.ncl" in
-let ts_mayastor = import "./taskservs/mayastor.ncl" in
-let ts_os = import "./taskservs/os.ncl" in
-let ts_podman = import "./taskservs/podman.ncl" in
-let ts_postgres = import "./taskservs/postgres.ncl" in
-let ts_proxy = import "./taskservs/proxy.ncl" in
-let ts_redis = import "./taskservs/redis.ncl" in
-let ts_resolv = import "./taskservs/resolv.ncl" in
-let ts_rook_ceph = import "./taskservs/rook_ceph.ncl" in
-let ts_runc = import "./taskservs/runc.ncl" in
-let ts_webhook = import "./taskservs/webhook.ncl" in
-let ts_youki = import "./taskservs/youki.ncl" in
-
-{
- settings = settings_config.settings,
- servers = settings_config.servers,
-
- taskservs = {
- cilium = ts_cilium.cilium,
- containerd = ts_containerd.containerd,
- coredns = ts_coredns.coredns,
- crio = ts_crio.crio,
- crun = ts_crun.crun,
- etcd = ts_etcd.etcd,
- external_nfs = ts_external_nfs.external_nfs,
- k8s_nodejoin = ts_k8s_nodejoin.k8s_nodejoin,
- kubernetes = ts_kubernetes.kubernetes,
- mayastor = ts_mayastor.mayastor,
- os = ts_os.os,
- podman = ts_podman.podman,
- postgres = ts_postgres.postgres,
- proxy = ts_proxy.proxy,
- redis = ts_redis.redis,
- resolv = ts_resolv.resolv,
- rook_ceph = ts_rook_ceph.rook_ceph,
- runc = ts_runc.runc,
- webhook = ts_webhook.webhook,
- youki = ts_youki.youki,
- },
-}
-```plaintext
-
----
-
-## Summary Table
-
-| Aspect | KCL | Nickel | Recommendation |
-|--------|-----|--------|---|
-| **Learning Curve** | 10 hours | 3 hours | Nickel |
-| **Performance** | Baseline | 60% faster | Nickel |
-| **Flexibility** | Limited | Excellent | Nickel |
-| **Type Safety** | Strong | Good (gradual) | KCL (slightly) |
-| **Extensibility** | Rigid | Excellent | Nickel |
-| **Boilerplate** | High | Low | Nickel |
-| **Ecosystem** | Small | Growing | Nickel |
-| **For New Projects** | ❌ | ✅ | Nickel |
-| **For Legacy Configs** | ✅ Supported | ⏳ Gradual | Both (migrate gradually) |
-
----
-
-## Key Takeaways
-
-1. **Nickel is the future** - 60% faster, more flexible, simpler mental model
-2. **Three-file pattern** - Cleanly separates contracts, defaults, instances
-3. **Hybrid interface** - 4 levels cover all use cases (90% makers, 9% defaults, 1% contracts)
-4. **Domain organization** - 8 logical domains for clarity and scalability
-5. **Two deployment modes** - Development (fast iteration) + Production (immutable snapshots)
-6. **TypeDialog integration** - Amplifies Nickel beyond IaC (UI generation)
-7. **KCL still supported** - For legacy workspace configs during gradual migration
-8. **Production validated** - 47 active files, 20 taskservs, 422 total schemas
-
----
-
-**Next Steps**:
-
-- For new schemas → Use Nickel (three-file pattern)
-- For workspace configs → Can migrate gradually
-- For UI generation → Combine Nickel + TypeDialog
-- For application settings → Use TOML (not KCL/Nickel)
-- For K8s/CI-CD → Use YAML (not KCL/Nickel)
-
----
-
-**Version**: 1.0.0
-**Status**: Complete Reference Guide
-**Last Updated**: 2025-12-15
+# Benchmark help command
+time ./provisioning help infrastructure
+
+
+loader.nu - Full configuration loading system
+loader-minimal.nu - Fast path loader
+loader-lazy.nu - Smart loader decision logic
+config/ARCHITECTURE.md - Configuration architecture details
+
Status: Practical Developer Guide
Last Updated: 2025-12-15
@@ -11884,7 +10176,7 @@ EOF
server = {
name = "",
- plan = "1xCPU-1GB",
+ plan = "1xCPU-1GB",
zone = "us-nyc1",
backups = [],
},
@@ -11925,7 +10217,7 @@ let defaults = import "./upcloud_defaults.ncl" in
servers = [
defaults.server & {
name = "web-01",
- plan = "2xCPU-4GB",
+ plan = "2xCPU-4GB",
zone = "us-nyc1",
backups = [
defaults.backup & { frequency = "hourly" },
@@ -11933,7 +10225,7 @@ let defaults = import "./upcloud_defaults.ncl" in
},
defaults.server & {
name = "web-02",
- plan = "2xCPU-4GB",
+ plan = "2xCPU-4GB",
zone = "eu-fra1",
backups = [
defaults.backup & { frequency = "hourly" },
@@ -11941,7 +10233,7 @@ let defaults = import "./upcloud_defaults.ncl" in
},
defaults.server & {
name = "db-01",
- plan = "4xCPU-16GB",
+ plan = "4xCPU-16GB",
zone = "us-nyc1",
backups = [
defaults.backup & { frequency = "every-6h", retention_days = 30 },
@@ -11974,8 +10266,8 @@ let upcloud = import "./upcloud_main.ncl" in
api_key = "prod-key",
api_password = "prod-secret",
servers = [
- upcloud.make_server { name = "web-01", plan = "2xCPU-4GB" },
- upcloud.make_server { name = "web-02", plan = "2xCPU-4GB" },
+ upcloud.make_server { name = "web-01", plan = "2xCPU-4GB" },
+ upcloud.make_server { name = "web-02", plan = "2xCPU-4GB" },
],
},
@@ -12264,43 +10556,6 @@ nickel export validated_config.ncl --format json
# }
-
-
-schema ServerConfig:
- name: str
- cpu_cores: int = 4
- memory_gb: int = 8
-
- check:
- cpu_cores > 0, "CPU must be positive"
- memory_gb > 0, "Memory must be positive"
-
-server_config: ServerConfig = {
- name = "web-01",
-}
-
-
-# server_contracts.ncl
-{ ServerConfig = { name | String, cpu_cores | Number, memory_gb | Number } }
-
-# server_defaults.ncl
-{ server = { name = "web-01", cpu_cores = 4, memory_gb = 8 } }
-
-# server.ncl
-let contracts = import "./server_contracts.ncl" in
-let defaults = import "./server_defaults.ncl" in
-{
- defaults = defaults,
- DefaultServer = defaults.server,
- make_server | not_exported = fun o => defaults.server & o,
-}
-
-
-
-KCL : All-in-one, validation inline, rigid
-Nickel : Separated (3 files), validation optional, flexible
-
-
#!/bin/bash
@@ -12405,7 +10660,7 @@ let B = {y = 2} in
{ optional_field = {} } # for objects
-
+
These examples are:
✅ Copy-paste ready - Can run directly
@@ -12421,7 +10676,7 @@ let B = {y = 2} in
Status: Tested & Verified
Last Updated: 2025-12-15
-Perfect question! Let me explain clearly:
The Orchestrator IS USED and IS CRITICAL
That code example was misleading. Here’s the real architecture:
How It Actually Works
@@ -12466,81 +10721,102 @@ params: {…}
Orchestrator receives and queues:
-// Orchestrator receives HTTP request
+
// Orchestrator receives HTTP request
async fn create_server_workflow(request) {
-let task = Task::new(TaskType::ServerCreate, request);
-task_queue.enqueue(task).await; // Queue for execution
-return workflow_id; // Return immediately
-}
-4. Orchestrator executes via Nushell subprocess:
-// Orchestrator spawns Nushell to run business logic
+ let task = Task::new(TaskType::ServerCreate, request);
+ task_queue.enqueue(task).await; // Queue for execution
+ return workflow_id; // Return immediately
+}
+
+Orchestrator executes via Nushell subprocess:
+
+// Orchestrator spawns Nushell to run business logic
async fn execute_task(task: Task) {
-let output = Command::new(“nu”)
-.arg(“-c”)
-.arg(“use /usr/local/lib/provisioning/servers/create.nu; create-server ‘wuji’”)
-.output()
-.await?;
-// Orchestrator manages: retry, checkpointing, monitoring
-
-}
-5. Nushell executes the actual work:
-
-export def create-server [name: string] {
-# This is the business logic
-# Calls UpCloud API, creates server, etc.
-let provider = (load-provider)
-$provider | create-vm $name
+ let output = Command::new("nu")
+ .arg("-c")
+ .arg("use /usr/local/lib/provisioning/servers/create.nu; create-server 'wuji'")
+ .output()
+ .await?;
+
+ // Orchestrator manages: retry, checkpointing, monitoring
+}
+
+Nushell executes the actual work:
+
+# servers/create.nu
+
+export def create-server [name: string] {
+ # This is the business logic
+ # Calls UpCloud API, creates server, etc.
+ let provider = (load-provider)
+ $provider | create-vm $name
}
-Why This Architecture?
-Problem It Solves
-Without Orchestrator (Old Way):
-provisioning → template.nu → cluster.nu → taskserv.nu → provider.nu
-(Deep call stack = crashes!)
-With Orchestrator (Current):
-provisioning → Orchestrator → spawns fresh Nushell subprocess for each task
-(No deep nesting, parallel execution, recovery)
-What Orchestrator Provides
-Task Queue - Reliable execution even if system crashes
+Why This Architecture?
+
+Problem It Solves
+
+Without Orchestrator (Old Way):
+
+provisioning → template.nu → cluster.nu → taskserv.nu → provider.nu
+ (Deep call stack = crashes!)
+With Orchestrator (Current):
+
+provisioning → Orchestrator → spawns fresh Nushell subprocess for each task
+ (No deep nesting, parallel execution, recovery)
+What Orchestrator Provides
+
+Task Queue - Reliable execution even if system crashes
Parallel Execution - Run 10 tasks at once (Rust async)
Workflow Engine - Handle complex dependencies
Checkpointing - Resume from failure
Monitoring - Real-time progress tracking
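The queue-then-return pattern above (enqueue, hand back a workflow id immediately, execute out-of-band) can be sketched with standard-library primitives. This is a simplified in-memory stand-in, not the orchestrator's file-based implementation, and the `Task` shape is hypothetical:

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

// Hypothetical task shape: real orchestrator tasks carry a TaskType and payload.
#[derive(Debug, Clone)]
struct Task {
    workflow_id: u64,
    command: String,
}

// In-memory stand-in for the task queue: enqueue returns immediately with a
// workflow id; a background worker drains the queue, mirroring the
// "queue, then execute" flow described above.
struct TaskQueue {
    inner: Mutex<(VecDeque<Task>, u64)>,
    ready: Condvar,
}

impl TaskQueue {
    fn new() -> Arc<Self> {
        Arc::new(TaskQueue { inner: Mutex::new((VecDeque::new(), 0)), ready: Condvar::new() })
    }

    // Enqueue and return the workflow id without waiting for execution.
    fn enqueue(&self, command: &str) -> u64 {
        let mut guard = self.inner.lock().unwrap();
        guard.1 += 1;
        let id = guard.1;
        guard.0.push_back(Task { workflow_id: id, command: command.to_string() });
        self.ready.notify_one();
        id
    }

    // Block until a task is available (worker side).
    fn dequeue(&self) -> Task {
        let mut guard = self.inner.lock().unwrap();
        loop {
            if let Some(task) = guard.0.pop_front() {
                return task;
            }
            guard = self.ready.wait(guard).unwrap();
        }
    }
}

fn main() {
    let queue = TaskQueue::new();
    let worker_queue = Arc::clone(&queue);
    let worker = thread::spawn(move || {
        // The real worker spawns a fresh `nu` subprocess here.
        let task = worker_queue.dequeue();
        format!("executed workflow {}: {}", task.workflow_id, task.command)
    });

    // Caller gets an id back immediately, before the task runs.
    let id = queue.enqueue("create-server wuji");
    assert_eq!(id, 1);
    assert!(worker.join().unwrap().contains("workflow 1"));
}
```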
-What Nushell Provides
-Business Logic - Provider integrations, config generation
+What Nushell Provides
+
+Business Logic - Provider integrations, config generation
Flexibility - Easy to modify without recompiling
Readability - Shell-like syntax for infrastructure ops
-Multi-Repo Impact: NONE on Integration
-In Monorepo:
-provisioning/
+Multi-Repo Impact: NONE on Integration
+
+In Monorepo:
+
+provisioning/
├── core/nulib/ # Nushell code
└── platform/orchestrator/ # Rust code
-In Multi-Repo:
-provisioning-core/ # Separate repo, installs to /usr/local/lib/provisioning
+In Multi-Repo:
+
+provisioning-core/ # Separate repo, installs to /usr/local/lib/provisioning
provisioning-platform/ # Separate repo, installs to /usr/local/bin/provisioning-orchestrator
-Integration is the same:
-Orchestrator calls: nu -c “use /usr/local/lib/provisioning/servers/create.nu”
-Nushell calls: http post http://localhost:9090/workflows/ …
-No code dependency, just runtime coordination!
-The Orchestrator IS Essential
-The orchestrator:
-✅ IS USED for all complex operations
+Integration is the same:
+
+Orchestrator calls: nu -c "use /usr/local/lib/provisioning/servers/create.nu"
+Nushell calls: http post http://localhost:9090/workflows/ ...
+No code dependency, just runtime coordination!
+
+The Orchestrator IS Essential
+
+The orchestrator:
+
+✅ IS USED for all complex operations
✅ IS CRITICAL for workflow system (v3.0)
✅ IS REQUIRED for batch operations (v3.1)
✅ SOLVES deep call stack issues
✅ PROVIDES performance and reliability
-That misleading code example showed how Platform doesn’t link to Core code, but it absolutely uses the orchestrator for coordination.
-Does this clear it up? The orchestrator is the performance and reliability layer that makes the whole system work!
-Cost: $0.1565 USD
+That misleading code example showed how Platform doesn't link to Core code, but it absolutely uses the orchestrator for coordination.
+
+The orchestrator is the performance and reliability layer that makes the whole system work.
+
+
Version: 1.0.0
Date: 2025-10-08
Status: Implemented
-
+
Complete authentication and authorization flow integration for the Provisioning Orchestrator, connecting all security components (JWT validation, MFA verification, Cedar authorization, rate limiting, and audit logging) into a cohesive security middleware chain.
-
+
The middleware chain is applied in this specific order to ensure proper security:
┌─────────────────────────────────────────────────────────────────┐
@@ -12597,28 +10873,21 @@ Total tokens: 7466(7 in, 7459 out)
│ - Access security context │
│ - Execute business logic │
└────────────────────────────────┘
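The ordering above can be expressed abstractly: each layer either passes the request along or short-circuits with a status code. A minimal sketch with hypothetical simplified types (the real chain is built from axum middleware layers):

```rust
// Hypothetical simplified request: the real one carries headers, body, etc.
struct Request {
    has_token: bool,
    mfa_verified: bool,
}

// Each middleware layer either allows the request or rejects with a status.
type Middleware = fn(&Request) -> Result<(), u16>;

fn auth(req: &Request) -> Result<(), u16> {
    if req.has_token { Ok(()) } else { Err(401) }
}

fn mfa(req: &Request) -> Result<(), u16> {
    if req.mfa_verified { Ok(()) } else { Err(403) }
}

// Run layers in order; the first failure stops the chain, as in the diagram.
fn run_chain(req: &Request, chain: &[Middleware]) -> Result<(), u16> {
    for layer in chain {
        layer(req)?;
    }
    Ok(())
}

fn main() {
    let chain: Vec<Middleware> = vec![auth, mfa];
    assert_eq!(run_chain(&Request { has_token: false, mfa_verified: false }, &chain), Err(401));
    assert_eq!(run_chain(&Request { has_token: true, mfa_verified: false }, &chain), Err(403));
    assert_eq!(run_chain(&Request { has_token: true, mfa_verified: true }, &chain), Ok(()));
}
```

Ordering matters: authentication must run before MFA and authorization, since later layers read the security context the earlier ones establish.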
-```plaintext
-
-## Implementation Details
-
-### 1. Security Context Builder (`middleware/security_context.rs`)
-
-**Purpose**: Build complete security context from authenticated requests.
-
-**Key Features**:
-
-- Extracts JWT token claims
-- Determines MFA verification status
-- Extracts IP address (X-Forwarded-For, X-Real-IP)
-- Extracts user agent and session info
-- Provides permission checking methods
-
-**Lines of Code**: 275
-
-**Example**:
-
-```rust
-pub struct SecurityContext {
+
+
+
+Purpose: Build complete security context from authenticated requests.
+Key Features:
+
+Extracts JWT token claims
+Determines MFA verification status
+Extracts IP address (X-Forwarded-For, X-Real-IP)
+Extracts user agent and session info
+Provides permission checking methods
+
+Lines of Code: 275
+Example:
+pub struct SecurityContext {
pub user_id: String,
pub token: ValidatedToken,
pub mfa_verified: bool,
@@ -12634,162 +10903,123 @@ impl SecurityContext {
pub fn has_permission(&self, permission: &str) -> bool { ... }
pub fn has_any_permission(&self, permissions: &[&str]) -> bool { ... }
pub fn has_all_permissions(&self, permissions: &[&str]) -> bool { ... }
-}
-```plaintext
-
-### 2. Enhanced Authentication Middleware (`middleware/auth.rs`)
-
-**Purpose**: JWT token validation with revocation checking.
-
-**Key Features**:
-
-- Bearer token extraction
-- JWT signature validation (RS256)
-- Expiry, issuer, audience checks
-- Token revocation status
-- Security context injection
-
-**Lines of Code**: 245
-
-**Flow**:
-
-1. Extract `Authorization: Bearer <token>` header
-2. Validate JWT with TokenValidator
-3. Build SecurityContext
-4. Inject into request extensions
-5. Continue to next middleware or return 401
-
-**Error Responses**:
-
-- `401 Unauthorized`: Missing/invalid token, expired, revoked
-- `403 Forbidden`: Insufficient permissions
-
-### 3. MFA Verification Middleware (`middleware/mfa.rs`)
-
-**Purpose**: Enforce MFA for sensitive operations.
-
-**Key Features**:
-
-- Path-based MFA requirements
-- Method-based enforcement (all DELETEs)
-- Production environment protection
-- Clear error messages
-
-**Lines of Code**: 290
-
-**MFA Required For**:
-
-- Production deployments (`/production/`, `/prod/`)
-- All DELETE operations
-- Server operations (POST, PUT, DELETE)
-- Cluster operations (POST, PUT, DELETE)
-- Batch submissions
-- Rollback operations
-- Configuration changes (POST, PUT, DELETE)
-- Secret management
-- User/role management
-
-**Example**:
-
-```rust
-fn requires_mfa(method: &str, path: &str) -> bool {
+}
+
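The permission helpers shown above reduce to simple membership checks. A minimal sketch, assuming permissions are stored as strings on the context (the field set here is trimmed down and partly hypothetical):

```rust
// Trimmed-down sketch of the SecurityContext permission helpers; the real
// struct also carries the validated token, IP address, and session info.
pub struct SecurityContext {
    pub user_id: String,
    pub permissions: Vec<String>,
    pub mfa_verified: bool,
}

impl SecurityContext {
    pub fn has_permission(&self, permission: &str) -> bool {
        self.permissions.iter().any(|p| p == permission)
    }

    // True if at least one of the listed permissions is held.
    pub fn has_any_permission(&self, permissions: &[&str]) -> bool {
        permissions.iter().any(|p| self.has_permission(p))
    }

    // True only if every listed permission is held.
    pub fn has_all_permissions(&self, permissions: &[&str]) -> bool {
        permissions.iter().all(|p| self.has_permission(p))
    }
}

fn main() {
    let ctx = SecurityContext {
        user_id: "alice".into(),
        permissions: vec!["servers:read".into(), "servers:create".into()],
        mfa_verified: false,
    };
    assert!(ctx.has_permission("servers:read"));
    assert!(ctx.has_any_permission(&["servers:delete", "servers:read"]));
    assert!(!ctx.has_all_permissions(&["servers:read", "servers:delete"]));
}
```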
+Purpose: JWT token validation with revocation checking.
+Key Features:
+
+Bearer token extraction
+JWT signature validation (RS256)
+Expiry, issuer, audience checks
+Token revocation status
+Security context injection
+
+Lines of Code: 245
+Flow:
+
+Extract Authorization: Bearer <token> header
+Validate JWT with TokenValidator
+Build SecurityContext
+Inject into request extensions
+Continue to next middleware or return 401
+
+Error Responses:
+
+401 Unauthorized: Missing/invalid token, expired, revoked
+403 Forbidden: Insufficient permissions
+
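Step 1 of the flow, bearer-token extraction, can be sketched as plain header parsing. The function name and error strings are illustrative; signature validation itself is done afterwards by the TokenValidator:

```rust
// Extract the token from an `Authorization: Bearer <token>` header value.
// Returns the raw token string; JWT validation happens in a later step.
fn extract_bearer_token(authorization: Option<&str>) -> Result<&str, &'static str> {
    let header = authorization.ok_or("missing Authorization header")?;
    header
        .strip_prefix("Bearer ")
        .map(str::trim)
        .filter(|t| !t.is_empty())
        .ok_or("malformed Authorization header")
}

fn main() {
    assert_eq!(extract_bearer_token(Some("Bearer abc.def.ghi")), Ok("abc.def.ghi"));
    assert!(extract_bearer_token(None).is_err());          // no header → 401
    assert!(extract_bearer_token(Some("Basic xyz")).is_err()); // wrong scheme → 401
}
```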
+
+Purpose: Enforce MFA for sensitive operations.
+Key Features:
+
+Path-based MFA requirements
+Method-based enforcement (all DELETEs)
+Production environment protection
+Clear error messages
+
+Lines of Code: 290
+MFA Required For:
+
+Production deployments (/production/, /prod/)
+All DELETE operations
+Server operations (POST, PUT, DELETE)
+Cluster operations (POST, PUT, DELETE)
+Batch submissions
+Rollback operations
+Configuration changes (POST, PUT, DELETE)
+Secret management
+User/role management
+
+Example:
+fn requires_mfa(method: &str, path: &str) -> bool {
if path.contains("/production/") { return true; }
if method == "DELETE" { return true; }
if path.contains("/deploy") { return true; }
// ...
-}
-```plaintext
-
-### 4. Enhanced Authorization Middleware (`middleware/authz.rs`)
-
-**Purpose**: Cedar policy evaluation with audit logging.
-
-**Key Features**:
-
-- Builds Cedar authorization request from HTTP request
-- Maps HTTP methods to Cedar actions (GET→Read, POST→Create, etc.)
-- Extracts resource types from paths
-- Evaluates Cedar policies with context (MFA, IP, time, workspace)
-- Logs all authorization decisions to audit log
-- Non-blocking audit logging (tokio::spawn)
-
-**Lines of Code**: 380
-
-**Resource Mapping**:
-
-```rust
-/api/v1/servers/srv-123 → Resource::Server("srv-123")
+}
+
+Purpose: Cedar policy evaluation with audit logging.
+Key Features:
+
+Builds Cedar authorization request from HTTP request
+Maps HTTP methods to Cedar actions (GET→Read, POST→Create, etc.)
+Extracts resource types from paths
+Evaluates Cedar policies with context (MFA, IP, time, workspace)
+Logs all authorization decisions to audit log
+Non-blocking audit logging (tokio::spawn)
+
+Lines of Code: 380
+Resource Mapping:
+/api/v1/servers/srv-123 → Resource::Server("srv-123")
/api/v1/taskserv/kubernetes → Resource::TaskService("kubernetes")
/api/v1/cluster/prod → Resource::Cluster("prod")
-/api/v1/config/settings → Resource::Config("settings")
-```plaintext
-
-**Action Mapping**:
-
-```rust
-GET → Action::Read
+/api/v1/config/settings → Resource::Config("settings")
+Action Mapping:
+GET → Action::Read
POST → Action::Create
PUT → Action::Update
-DELETE → Action::Delete
-```plaintext
-
-### 5. Rate Limiting Middleware (`middleware/rate_limit.rs`)
-
-**Purpose**: Prevent API abuse with per-IP rate limiting.
-
-**Key Features**:
-
-- Sliding window rate limiting
-- Per-IP request tracking
-- Configurable limits and windows
-- Exempt IP support
-- Automatic cleanup of old entries
-- Statistics tracking
-
-**Lines of Code**: 420
-
-**Configuration**:
-
-```rust
-pub struct RateLimitConfig {
- pub max_requests: u32, // e.g., 100
- pub window_duration: Duration, // e.g., 60 seconds
- pub exempt_ips: Vec<IpAddr>, // e.g., internal services
+DELETE → Action::Delete
+
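The two mappings above can be sketched together as match functions whose arms mirror the tables; the arm lists are illustrative and the real middleware covers more resource types:

```rust
// Sketch of the HTTP-method → Cedar-action mapping described above.
#[derive(Debug, PartialEq)]
enum Action { Read, Create, Update, Delete }

fn map_action(method: &str) -> Option<Action> {
    match method {
        "GET" => Some(Action::Read),
        "POST" => Some(Action::Create),
        "PUT" => Some(Action::Update),
        "DELETE" => Some(Action::Delete),
        _ => None,
    }
}

// Returns (resource type, resource id) for recognised /api/v1 paths.
fn map_resource(path: &str) -> Option<(&str, &str)> {
    let rest = path.strip_prefix("/api/v1/")?;
    let mut parts = rest.splitn(2, '/');
    let kind = match parts.next()? {
        "servers" => "Server",
        "taskserv" => "TaskService",
        "cluster" => "Cluster",
        "config" => "Config",
        _ => return None,
    };
    Some((kind, parts.next().unwrap_or("")))
}

fn main() {
    assert_eq!(map_action("GET"), Some(Action::Read));
    assert_eq!(map_action("PATCH"), None); // unmapped methods are denied upstream
    assert_eq!(map_resource("/api/v1/servers/srv-123"), Some(("Server", "srv-123")));
    assert_eq!(map_resource("/api/v1/taskserv/kubernetes"), Some(("TaskService", "kubernetes")));
    assert_eq!(map_resource("/health"), None);
}
```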
+Purpose: Prevent API abuse with per-IP rate limiting.
+Key Features:
+
+Sliding window rate limiting
+Per-IP request tracking
+Configurable limits and windows
+Exempt IP support
+Automatic cleanup of old entries
+Statistics tracking
+
+Lines of Code: 420
+Configuration:
+pub struct RateLimitConfig {
+ pub max_requests: u32, // for example, 100
+ pub window_duration: Duration, // for example, 60 seconds
+ pub exempt_ips: Vec<IpAddr>, // for example, internal services
pub enabled: bool,
}
-// Default: 100 requests per minute
-```plaintext
-
-**Statistics**:
-
-```rust
-pub struct RateLimitStats {
+// Default: 100 requests per minute
+Statistics:
+pub struct RateLimitStats {
pub total_ips: usize, // Number of tracked IPs
pub total_requests: u32, // Total requests made
pub limited_ips: usize, // IPs that hit the limit
pub config: RateLimitConfig,
-}
-```plaintext
-
-### 6. Security Integration Module (`security_integration.rs`)
-
-**Purpose**: Helper module to integrate all security components.
-
-**Key Features**:
-
-- `SecurityComponents` struct grouping all middleware
-- `SecurityConfig` for configuration
-- `initialize()` method to set up all components
-- `disabled()` method for development mode
-- `apply_security_middleware()` helper for router setup
-
-**Lines of Code**: 265
-
-**Usage Example**:
-
-```rust
-use provisioning_orchestrator::security_integration::{
+}
+
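The sliding-window behaviour can be sketched with a per-IP deque of timestamps. This is a single-threaded simplification (no exempt IPs, background cleanup, or shared state) of what the middleware implements:

```rust
use std::collections::{HashMap, VecDeque};
use std::net::IpAddr;
use std::time::{Duration, Instant};

// Minimal per-IP sliding-window limiter, as described above.
struct RateLimiter {
    max_requests: usize,
    window: Duration,
    hits: HashMap<IpAddr, VecDeque<Instant>>,
}

impl RateLimiter {
    fn new(max_requests: usize, window: Duration) -> Self {
        Self { max_requests, window, hits: HashMap::new() }
    }

    // Returns true if the request is allowed under the sliding window.
    fn check(&mut self, ip: IpAddr, now: Instant) -> bool {
        let entry = self.hits.entry(ip).or_default();
        // Drop timestamps that have fallen out of the window.
        while entry.front().map_or(false, |t| now.duration_since(*t) >= self.window) {
            entry.pop_front();
        }
        if entry.len() >= self.max_requests {
            false // limit hit: the middleware would respond 429
        } else {
            entry.push_back(now);
            true
        }
    }
}

fn main() {
    let mut rl = RateLimiter::new(2, Duration::from_secs(60));
    let ip: IpAddr = "192.168.1.100".parse().unwrap();
    let t0 = Instant::now();
    assert!(rl.check(ip, t0));
    assert!(rl.check(ip, t0));
    assert!(!rl.check(ip, t0)); // third request inside the window is limited
    // Once the window has passed, requests are allowed again.
    assert!(rl.check(ip, t0 + Duration::from_secs(61)));
}
```

Because timestamps expire individually, this avoids the burst-at-boundary problem of fixed-window counters.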
+Purpose: Helper module to integrate all security components.
+Key Features:
+
+SecurityComponents struct grouping all middleware
+SecurityConfig for configuration
+initialize() method to set up all components
+disabled() method for development mode
+apply_security_middleware() helper for router setup
+
+Lines of Code: 265
+Usage Example:
+use provisioning_orchestrator::security_integration::{
SecurityComponents, SecurityConfig
};
@@ -12812,15 +11042,10 @@ let app = Router::new()
.route("/api/v1/servers", post(create_server))
.route("/api/v1/servers/:id", delete(delete_server));
-let secured_app = apply_security_middleware(app, &security);
-```plaintext
-
-## Integration with AppState
-
-### Updated AppState Structure
-
-```rust
-pub struct AppState {
+let secured_app = apply_security_middleware(app, &security);
+
+
+pub struct AppState {
// Existing fields
pub task_storage: Arc<dyn TaskStorage>,
pub batch_coordinator: BatchCoordinator,
@@ -12839,13 +11064,9 @@ pub struct AppState {
// NEW: Security components
pub security: SecurityComponents,
-}
-```plaintext
-
-### Initialization in main.rs
-
-```rust
-#[tokio::main]
+}
+
+#[tokio::main]
async fn main() -> Result<()> {
let args = Args::parse();
@@ -12900,33 +11121,26 @@ async fn main() -> Result<()> {
axum::serve(listener, app).await?;
Ok(())
-}
-```plaintext
-
-## Protected Endpoints
-
-### Endpoint Categories
-
-| Category | Example Endpoints | Auth Required | MFA Required | Cedar Policy |
-|----------|-------------------|---------------|--------------|--------------|
-| **Health** | `/health` | ❌ | ❌ | ❌ |
-| **Read-Only** | `GET /api/v1/servers` | ✅ | ❌ | ✅ |
-| **Server Mgmt** | `POST /api/v1/servers` | ✅ | ❌ | ✅ |
-| **Server Delete** | `DELETE /api/v1/servers/:id` | ✅ | ✅ | ✅ |
-| **Taskserv Mgmt** | `POST /api/v1/taskserv` | ✅ | ❌ | ✅ |
-| **Cluster Mgmt** | `POST /api/v1/cluster` | ✅ | ✅ | ✅ |
-| **Production** | `POST /api/v1/production/*` | ✅ | ✅ | ✅ |
-| **Batch Ops** | `POST /api/v1/batch/submit` | ✅ | ✅ | ✅ |
-| **Rollback** | `POST /api/v1/rollback` | ✅ | ✅ | ✅ |
-| **Config Write** | `POST /api/v1/config` | ✅ | ✅ | ✅ |
-| **Secrets** | `GET /api/v1/secret/*` | ✅ | ✅ | ✅ |
-
-## Complete Authentication Flow
-
-### Step-by-Step Flow
-
-```plaintext
-1. CLIENT REQUEST
+}
+
+
+Category        Example Endpoints             Auth Required   MFA Required   Cedar Policy
+Health          /health                       ❌              ❌             ❌
+Read-Only       GET /api/v1/servers           ✅              ❌             ✅
+Server Mgmt     POST /api/v1/servers          ✅              ❌             ✅
+Server Delete   DELETE /api/v1/servers/:id    ✅              ✅             ✅
+Taskserv Mgmt   POST /api/v1/taskserv         ✅              ❌             ✅
+Cluster Mgmt    POST /api/v1/cluster          ✅              ✅             ✅
+Production      POST /api/v1/production/*     ✅              ✅             ✅
+Batch Ops       POST /api/v1/batch/submit     ✅              ✅             ✅
+Rollback        POST /api/v1/rollback         ✅              ✅             ✅
+Config Write    POST /api/v1/config           ✅              ✅             ✅
+Secrets         GET /api/v1/secret/*          ✅              ✅             ✅
+
+
+
+
+1. CLIENT REQUEST
├─ Headers:
│ ├─ Authorization: Bearer <jwt_token>
│ ├─ X-Forwarded-For: 192.168.1.100
@@ -13006,14 +11220,10 @@ async fn main() -> Result<()> {
9. CLIENT RESPONSE
└─ 200 OK: Server deleted successfully
-```plaintext
-
-## Configuration
-
-### Environment Variables
-
-```bash
-# JWT Configuration
+
+
+
+# JWT Configuration
JWT_ISSUER=control-center
JWT_AUDIENCE=orchestrator
PUBLIC_KEY_PATH=/path/to/keys/public.pem
@@ -13034,119 +11244,101 @@ RATE_LIMIT_EXEMPT_IPS=10.0.0.1,10.0.0.2
# Audit Logging
AUDIT_ENABLED=true
AUDIT_RETENTION_DAYS=365
-```plaintext
-
-### Development Mode
-
-For development/testing, all security can be disabled:
-
-```rust
-// In main.rs
+
+
+For development/testing, all security can be disabled:
+// In main.rs
let security = if env::var("DEVELOPMENT_MODE").unwrap_or("false".to_string()) == "true" {
SecurityComponents::disabled(audit_logger.clone())
} else {
SecurityComponents::initialize(security_config, audit_logger.clone()).await?
-};
-```plaintext
-
-## Testing
-
-### Integration Tests
-
-Location: `provisioning/platform/orchestrator/tests/security_integration_tests.rs`
-
-**Test Coverage**:
-
-- ✅ Rate limiting enforcement
-- ✅ Rate limit statistics
-- ✅ Exempt IP handling
-- ✅ Authentication missing token
-- ✅ MFA verification for sensitive operations
-- ✅ Cedar policy evaluation
-- ✅ Complete security flow
-- ✅ Security components initialization
-- ✅ Configuration defaults
-
-**Lines of Code**: 340
-
-**Run Tests**:
-
-```bash
-cd provisioning/platform/orchestrator
+};
+
+
+Location: provisioning/platform/orchestrator/tests/security_integration_tests.rs
+Test Coverage:
+
+✅ Rate limiting enforcement
+✅ Rate limit statistics
+✅ Exempt IP handling
+✅ Authentication missing token
+✅ MFA verification for sensitive operations
+✅ Cedar policy evaluation
+✅ Complete security flow
+✅ Security components initialization
+✅ Configuration defaults
+
+Lines of Code: 340
+Run Tests:
+cd provisioning/platform/orchestrator
cargo test security_integration_tests
-```plaintext
-
-## File Summary
-
-| File | Purpose | Lines | Tests |
-|------|---------|-------|-------|
-| `middleware/security_context.rs` | Security context builder | 275 | 8 |
-| `middleware/auth.rs` | JWT authentication | 245 | 5 |
-| `middleware/mfa.rs` | MFA verification | 290 | 15 |
-| `middleware/authz.rs` | Cedar authorization | 380 | 4 |
-| `middleware/rate_limit.rs` | Rate limiting | 420 | 8 |
-| `middleware/mod.rs` | Module exports | 25 | 0 |
-| `security_integration.rs` | Integration helpers | 265 | 2 |
-| `tests/security_integration_tests.rs` | Integration tests | 340 | 11 |
-| **Total** | | **2,240** | **53** |
-
-## Benefits
-
-### Security
-
-- ✅ Complete authentication flow with JWT validation
-- ✅ MFA enforcement for sensitive operations
-- ✅ Fine-grained authorization with Cedar policies
-- ✅ Rate limiting prevents API abuse
-- ✅ Complete audit trail for compliance
-
-### Architecture
-
-- ✅ Modular middleware design
-- ✅ Clear separation of concerns
-- ✅ Reusable security components
-- ✅ Easy to test and maintain
-- ✅ Configuration-driven behavior
-
-### Operations
-
-- ✅ Can enable/disable features independently
-- ✅ Development mode for testing
-- ✅ Comprehensive error messages
-- ✅ Real-time statistics and monitoring
-- ✅ Non-blocking audit logging
-
-## Future Enhancements
-
-1. **Token Refresh**: Automatic token refresh before expiry
-2. **IP Whitelisting**: Additional IP-based access control
-3. **Geolocation**: Block requests from specific countries
-4. **Advanced Rate Limiting**: Per-user, per-endpoint limits
-5. **Session Management**: Track active sessions, force logout
-6. **2FA Integration**: Direct integration with TOTP/SMS providers
-7. **Policy Hot Reload**: Update Cedar policies without restart
-8. **Metrics Dashboard**: Real-time security metrics visualization
-
-## Related Documentation
-
-- Cedar Policy Language
-- JWT Token Management
-- MFA Setup Guide
-- Audit Log Format
-- Rate Limiting Best Practices
-
-## Version History
-
-| Version | Date | Changes |
-|---------|------|---------|
-| 1.0.0 | 2025-10-08 | Initial implementation |
-
----
-
-**Maintained By**: Security Team
-**Review Cycle**: Quarterly
-**Last Reviewed**: 2025-10-08
+File Summary
+| File | Purpose | Lines | Tests |
+|------|---------|-------|-------|
+| middleware/security_context.rs | Security context builder | 275 | 8 |
+| middleware/auth.rs | JWT authentication | 245 | 5 |
+| middleware/mfa.rs | MFA verification | 290 | 15 |
+| middleware/authz.rs | Cedar authorization | 380 | 4 |
+| middleware/rate_limit.rs | Rate limiting | 420 | 8 |
+| middleware/mod.rs | Module exports | 25 | 0 |
+| security_integration.rs | Integration helpers | 265 | 2 |
+| tests/security_integration_tests.rs | Integration tests | 340 | 11 |
+| Total | | 2,240 | 53 |
+
+
+
+Benefits
+Security
+✅ Complete authentication flow with JWT validation
+✅ MFA enforcement for sensitive operations
+✅ Fine-grained authorization with Cedar policies
+✅ Rate limiting prevents API abuse
+✅ Complete audit trail for compliance
+
+
+Architecture
+✅ Modular middleware design
+✅ Clear separation of concerns
+✅ Reusable security components
+✅ Easy to test and maintain
+✅ Configuration-driven behavior
+
+
+Operations
+✅ Can enable/disable features independently
+✅ Development mode for testing
+✅ Comprehensive error messages
+✅ Real-time statistics and monitoring
+✅ Non-blocking audit logging
+
+
+Future Enhancements
+Token Refresh: Automatic token refresh before expiry
+IP Whitelisting: Additional IP-based access control
+Geolocation: Block requests from specific countries
+Advanced Rate Limiting: Per-user, per-endpoint limits
+Session Management: Track active sessions, force logout
+2FA Integration: Direct integration with TOTP/SMS providers
+Policy Hot Reload: Update Cedar policies without restart
+Metrics Dashboard: Real-time security metrics visualization
+
+
+Related Documentation
+Cedar Policy Language
+JWT Token Management
+MFA Setup Guide
+Audit Log Format
+Rate Limiting Best Practices
+
+Version History
+
+| Version | Date | Changes |
+|---------|------|---------|
+| 1.0.0 | 2025-10-08 | Initial implementation |
+
+
+
+Maintained By: Security Team
+Review Cycle: Quarterly
+Last Reviewed: 2025-10-08
Date: 2025-10-01
Status: Analysis Complete - Implementation Planning
@@ -13274,11 +11466,11 @@ cargo test security_integration_tests
│ │ └── api-gateway/ # REST API gateway
│ │
│ ├── kcl/ # KCL configuration schemas
-│ │ ├── main.k # Main entry point
-│ │ ├── settings.k # Settings schema
-│ │ ├── server.k # Server definitions
-│ │ ├── cluster.k # Cluster definitions
-│ │ ├── workflows.k # Workflow definitions
+│ │ ├── main.ncl # Main entry point
+│ │ ├── settings.ncl # Settings schema
+│ │ ├── server.ncl # Server definitions
+│ │ ├── cluster.ncl # Cluster definitions
+│ │ ├── workflows.ncl # Workflow definitions
│ │ └── docs/ # KCL documentation
│ │
│ ├── templates/ # Jinja2 templates
@@ -13382,38 +11574,30 @@ cargo test security_integration_tests
├── README.md # Project README
├── CHANGELOG.md # Changelog
└── CLAUDE.md # AI assistant instructions
-```plaintext
-
-### Key Principles
-
-1. **Clear Separation**: Source code (`provisioning/`), runtime data (`workspace/`), build artifacts (`distribution/`)
-2. **Single Source of Truth**: One location for each type of content
-3. **Gitignore Strategy**: Runtime and build artifacts ignored, templates tracked
-4. **Standard Paths**: Follow Unix conventions for installation
-
----
-
-## Distribution Strategy
-
-### Package Types
-
-#### 1. **provisioning-core** (Required)
-
-**Contents:**
-
-- Nushell CLI and libraries
-- Core providers (local, upcloud, aws)
-- Essential taskservs (kubernetes, containerd, cilium)
-- KCL schemas
-- Configuration system
-- Templates
-
-**Size:** ~50MB (compressed)
-
-**Installation:**
-
-```bash
-/usr/local/
+
+
+Key Principles
+Clear Separation: Source code (provisioning/), runtime data (workspace/), build artifacts (distribution/)
+Single Source of Truth: One location for each type of content
+Gitignore Strategy: Runtime and build artifacts ignored, templates tracked
+Standard Paths: Follow Unix conventions for installation
+
+
+Distribution Strategy
+Package Types
+1. provisioning-core (Required)
+Contents:
+
+Nushell CLI and libraries
+Core providers (local, upcloud, aws)
+Essential taskservs (kubernetes, containerd, cilium)
+KCL schemas
+Configuration system
+Templates
+
+Size: ~50 MB (compressed)
+Installation:
+/usr/local/
├── bin/
│ └── provisioning
├── lib/
@@ -13426,73 +11610,54 @@ cargo test security_integration_tests
├── templates/
├── config/
└── docs/
-```plaintext
-
-#### 2. **provisioning-platform** (Optional)
-
-**Contents:**
-
-- Rust orchestrator binary
-- Control center web UI
-- MCP server
-- API gateway
-
-**Size:** ~30MB (compressed)
-
-**Installation:**
-
-```bash
-/usr/local/
+
+2. provisioning-platform (Optional)
+Contents:
+
+Rust orchestrator binary
+Control center web UI
+MCP server
+API gateway
+
+Size: ~30 MB (compressed)
+Installation:
+/usr/local/
├── bin/
│ ├── provisioning-orchestrator
│ └── provisioning-control-center
└── share/
└── provisioning/
└── platform/
-```plaintext
-
-#### 3. **provisioning-extensions** (Optional)
-
-**Contents:**
-
-- Additional taskservs (radicle, gitea, postgres, etc.)
-- Cluster templates
-- Workflow templates
-
-**Size:** ~20MB (compressed)
-
-**Installation:**
-
-```bash
-/usr/local/lib/provisioning/extensions/
+
+3. provisioning-extensions (Optional)
+Contents:
+
+Additional taskservs (radicle, gitea, postgres, etc.)
+Cluster templates
+Workflow templates
+
+Size: ~20 MB (compressed)
+Installation:
+/usr/local/lib/provisioning/extensions/
├── taskservs/
├── clusters/
└── workflows/
-```plaintext
-
-#### 4. **provisioning-plugins** (Optional)
-
-**Contents:**
-
-- Pre-built Nushell plugins
-- `nu_plugin_kcl`
-- `nu_plugin_tera`
-- Other custom plugins
-
-**Size:** ~15MB (compressed)
-
-**Installation:**
-
-```bash
-~/.config/nushell/plugins/
-```plaintext
-
-### Installation Paths
-
-#### System Installation (Root)
-
-```bash
-/usr/local/
+
+4. provisioning-plugins (Optional)
+Contents:
+
+Pre-built Nushell plugins
+nu_plugin_kcl
+nu_plugin_tera
+Other custom plugins
+
+Size: ~15 MB (compressed)
+Installation:
+~/.config/nushell/plugins/
+
+Installation Paths
+System Installation (Root)
+/usr/local/
├── bin/
│ ├── provisioning # Main CLI
│ ├── provisioning-orchestrator # Orchestrator binary
@@ -13513,12 +11678,9 @@ cargo test security_integration_tests
├── config/ # Default configs
│ └── config.defaults.toml
└── docs/ # Documentation
-```plaintext
-
-#### User Configuration
-
-```bash
-~/.provisioning/
+
+User Configuration
+~/.provisioning/
├── config/
│ └── config.user.toml # User overrides
├── extensions/ # User extensions
@@ -13527,12 +11689,9 @@ cargo test security_integration_tests
│ └── clusters/
├── cache/ # Cache directory
└── plugins/ # User plugins
-```plaintext
-
-#### Project Workspace
-
-```bash
-./workspace/
+
+Project Workspace
+./workspace/
├── infra/ # Infrastructure definitions
│ ├── my-cluster/
│ │ ├── config.toml
@@ -13546,29 +11705,20 @@ cargo test security_integration_tests
│ ├── state/
│ └── cache/
└── extensions/ # Project-specific extensions
-```plaintext
-
-### Configuration Hierarchy
-
-```plaintext
-Priority (highest to lowest):
+
+Configuration Hierarchy
+Priority (highest to lowest):
1. CLI flags --debug, --infra=my-cluster
2. Runtime overrides PROVISIONING_DEBUG=true
3. Project config ./workspace/config/config.toml
4. User config ~/.provisioning/config/config.user.toml
5. System config /usr/local/share/provisioning/config/config.defaults.toml
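The precedence rule above can be made concrete with a tiny resolver sketch. Assuming the `--debug` flag and `PROVISIONING_DEBUG` variable named in the list (the function and its internals are illustrative, not the actual implementation):

```shell
# Resolve the debug flag by precedence: CLI flag > runtime override > config value.
resolve_debug() {
  debug="false"                                                    # 3-5: value from config files (illustrative)
  [ -n "${PROVISIONING_DEBUG:-}" ] && debug="$PROVISIONING_DEBUG"  # 2: runtime override
  for arg in "$@"; do                                              # 1: CLI flag wins over everything
    [ "$arg" = "--debug" ] && debug="true"
  done
  echo "$debug"
}

resolve_debug --debug   # → true
```

A higher-priority source only replaces the value when it is actually present, which is what makes the layering safe to apply top-down.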
-```plaintext
-
----
-
-## Build System
-
-### Build Tools Structure
-
-**`provisioning/tools/build/`:**
-
-```plaintext
-build/
+
+
+Build System
+Build Tools Structure
+provisioning/tools/build/:
+build/
├── build-system.nu # Main build orchestrator
├── package-core.nu # Core packaging
├── package-platform.nu # Platform packaging
@@ -13577,14 +11727,10 @@ build/
├── create-installers.nu # Installer generation
├── validate-package.nu # Package validation
└── publish-registry.nu # Registry publishing
-```plaintext
-
-### Build System Implementation
-
-**`provisioning/tools/build/build-system.nu`:**
-
-```nushell
-#!/usr/bin/env nu
+
+Build System Implementation
+provisioning/tools/build/build-system.nu:
+#!/usr/bin/env nu
# Build system for provisioning project
use ../core/nulib/lib_provisioning/config/accessor.nu *
@@ -13755,14 +11901,10 @@ export def "main status" [] {
$packages | select name size
}
}
-```plaintext
-
-### Justfile Integration
-
-**`Justfile`:**
-
-```makefile
-# Provisioning Build System
+
+Justfile Integration
+Justfile:
+# Provisioning Build System
# Use 'just --list' to see all available commands
# Default recipe
@@ -13883,18 +12025,12 @@ check-licenses:
# Security audit
audit:
cargo audit --file provisioning/platform/Cargo.lock
-```plaintext
-
----
-
-## Installation System
-
-### Installer Script
-
-**`distribution/installers/install.nu`:**
-
-```nushell
-#!/usr/bin/env nu
+
+
+Installation System
+Installer Script
+distribution/installers/install.nu:
+#!/usr/bin/env nu
# Provisioning installation script
const DEFAULT_PREFIX = "/usr/local"
@@ -14144,14 +12280,10 @@ export def "main upgrade" [
error make { msg: "Upgrade failed" }
}
}
-```plaintext
-
-### Bash Installer (For Systems Without Nushell)
-
-**`distribution/installers/install.sh`:**
-
-```bash
-#!/usr/bin/env bash
+
+Bash Installer (For Systems Without Nushell)
+distribution/installers/install.sh:
+#!/usr/bin/env bash
# Provisioning installation script (Bash version)
# This script installs Nushell first, then runs the Nushell installer
@@ -14258,52 +12390,41 @@ main() {
# Run main
main "$@"
-```plaintext
-
----
-
-## Implementation Plan
-
-### Phase 1: Repository Restructuring (3-4 days)
-
-#### Day 1: Cleanup and Preparation
-
-**Tasks:**
-
-1. Create backup of current state
-2. Analyze and document all workspace directories
-3. Identify active workspace vs backups
-4. Map all file dependencies
-
-**Commands:**
-
-```bash
-# Backup current state
+
+
+Implementation Plan
+Phase 1: Repository Restructuring (3-4 days)
+Day 1: Cleanup and Preparation
+Tasks:
+
+Create backup of current state
+Analyze and document all workspace directories
+Identify active workspace vs backups
+Map all file dependencies
+
+Commands:
+# Backup current state
cp -r /Users/Akasha/project-provisioning /Users/Akasha/project-provisioning.backup
# Analyze workspaces
fd workspace -t d > workspace-dirs.txt
-```plaintext
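If `fd` is unavailable on the machine doing the analysis, POSIX `find` produces the same inventory (a sketch; the output filename mirrors the command above):

```shell
# List every directory whose name starts with "workspace" (portable find).
find . -type d -name 'workspace*' > workspace-dirs.txt
cat workspace-dirs.txt
```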
-
-**Deliverables:**
-
-- Complete backup
-- Workspace analysis document
-- Dependency map
-
-#### Day 2: Directory Restructuring
-
-**Tasks:**
-
-1. Consolidate workspace directories
-2. Move build artifacts to `distribution/`
-3. Remove obsolete directories (`NO/`, `wrks/`, presentation artifacts)
-4. Create proper `.gitignore`
-
-**Commands:**
-
-```bash
-# Create distribution directory
+
+Deliverables:
+
+Complete backup
+Workspace analysis document
+Dependency map
+
+Day 2: Directory Restructuring
+Tasks:
+
+Consolidate workspace directories
+Move build artifacts to distribution/
+Remove obsolete directories (NO/, wrks/, presentation artifacts)
+Create proper .gitignore
+
+Commands:
+# Create distribution directory
mkdir -p distribution/{packages,installers,registry}
# Move build artifacts
@@ -14312,272 +12433,248 @@ mv provisioning/tools/dist distribution/packages/
# Remove obsolete
rm -rf NO/ wrks/ presentations/
-```plaintext
-
-**Deliverables:**
-
-- Clean directory structure
-- Updated `.gitignore`
-- Migration log
-
-#### Day 3: Update Path References
-
-**Tasks:**
-
-1. Update all hardcoded paths in Nushell scripts
-2. Update CLAUDE.md with new paths
-3. Update documentation references
-4. Test all path changes
-
-**Files to Update:**
-
-- `provisioning/core/nulib/**/*.nu` (~65 files)
-- `CLAUDE.md`
-- `docs/**/*.md`
-
-**Deliverables:**
-
-- Updated scripts
-- Updated documentation
-- Test results
-
-#### Day 4: Validation and Documentation
-
-**Tasks:**
-
-1. Run full test suite
-2. Verify all commands work
-3. Update README.md
-4. Create migration guide
-
-**Deliverables:**
-
-- Passing tests
-- Updated README
-- Migration guide for users
-
-### Phase 2: Build System Implementation (3-4 days)
-
-#### Day 5: Build System Core
-
-**Tasks:**
-
-1. Create `provisioning/tools/build/` structure
-2. Implement `build-system.nu`
-3. Implement `package-core.nu`
-4. Create Justfile
-
-**Files to Create:**
-
-- `provisioning/tools/build/build-system.nu`
-- `provisioning/tools/build/package-core.nu`
-- `provisioning/tools/build/validate-package.nu`
-- `Justfile`
-
-**Deliverables:**
-
-- Working build system
-- Core packaging capability
-- Justfile with basic recipes
-
-#### Day 6: Platform and Extension Packaging
-
-**Tasks:**
-
-1. Implement `package-platform.nu`
-2. Implement `package-extensions.nu`
-3. Implement `package-plugins.nu`
-4. Add checksum generation
-
-**Deliverables:**
-
-- Platform packaging
-- Extension packaging
-- Plugin packaging
-- Checksum generation
-
-#### Day 7: Package Validation
-
-**Tasks:**
-
-1. Create package validation system
-2. Implement integrity checks
-3. Create test suite for packages
-4. Document package format
-
-**Deliverables:**
-
-- Package validation
-- Test suite
-- Package format documentation
-
-#### Day 8: Build System Testing
-
-**Tasks:**
-
-1. Test full build pipeline
-2. Test all package types
-3. Optimize build performance
-4. Document build system
-
-**Deliverables:**
-
-- Tested build system
-- Performance optimizations
-- Build system documentation
-
-### Phase 3: Installation System (2-3 days)
-
-#### Day 9: Nushell Installer
-
-**Tasks:**
-
-1. Create `install.nu`
-2. Implement installation logic
-3. Implement upgrade logic
-4. Implement uninstallation
-
-**Files to Create:**
-
-- `distribution/installers/install.nu`
-
-**Deliverables:**
-
-- Working Nushell installer
-- Upgrade mechanism
-- Uninstall mechanism
-
-#### Day 10: Bash Installer and CLI
-
-**Tasks:**
-
-1. Create `install.sh`
-2. Replace bash CLI wrapper with pure Nushell
-3. Update PATH handling
-4. Test installation on clean system
-
-**Files to Create:**
-
-- `distribution/installers/install.sh`
-- Updated `provisioning/core/cli/provisioning`
-
-**Deliverables:**
-
-- Bash installer
-- Pure Nushell CLI
-- Installation tests
-
-#### Day 11: Installation Testing
-
-**Tasks:**
-
-1. Test installation on multiple OSes
-2. Test upgrade scenarios
-3. Test uninstallation
-4. Create installation documentation
-
-**Deliverables:**
-
-- Multi-OS installation tests
-- Installation guide
-- Troubleshooting guide
-
-### Phase 4: Package Registry (Optional, 2-3 days)
-
-#### Day 12: Registry System
-
-**Tasks:**
-
-1. Design registry format
-2. Implement registry indexing
-3. Create package metadata
-4. Implement search functionality
-
-**Files to Create:**
-
-- `provisioning/tools/build/publish-registry.nu`
-- `distribution/registry/index.json`
-
-**Deliverables:**
-
-- Registry system
-- Package metadata
-- Search functionality
-
-#### Day 13: Registry Commands
-
-**Tasks:**
-
-1. Implement `provisioning registry list`
-2. Implement `provisioning registry search`
-3. Implement `provisioning registry install`
-4. Implement `provisioning registry update`
-
-**Deliverables:**
-
-- Registry commands
-- Package installation from registry
-- Update mechanism
-
-#### Day 14: Registry Hosting
-
-**Tasks:**
-
-1. Set up registry hosting (S3, GitHub releases, etc.)
-2. Implement upload mechanism
-3. Create CI/CD for automatic publishing
-4. Document registry system
-
-**Deliverables:**
-
-- Hosted registry
-- CI/CD pipeline
-- Registry documentation
-
-### Phase 5: Documentation and Release (2 days)
-
-#### Day 15: Documentation
-
-**Tasks:**
-
-1. Update all documentation for new structure
-2. Create user guides
-3. Create development guides
-4. Create API documentation
-
-**Deliverables:**
-
-- Updated documentation
-- User guides
-- Developer guides
-- API docs
-
-#### Day 16: Release Preparation
-
-**Tasks:**
-
-1. Create CHANGELOG.md
-2. Build release packages
-3. Test installation from packages
-4. Create release announcement
-
-**Deliverables:**
-
-- CHANGELOG
-- Release packages
-- Installation verification
-- Release announcement
-
----
-
-## Migration Strategy
-
-### For Existing Users
-
-#### Option 1: Clean Migration
-
-```bash
-# Backup current workspace
+
+Deliverables:
+
+Clean directory structure
+Updated .gitignore
+Migration log
+
+Day 3: Update Path References
+Tasks:
+
+Update all hardcoded paths in Nushell scripts
+Update CLAUDE.md with new paths
+Update documentation references
+Test all path changes
+
+Files to Update:
+
+provisioning/core/nulib/**/*.nu (~65 files)
+CLAUDE.md
+docs/**/*.md
+
+Deliverables:
+
+Updated scripts
+Updated documentation
+Test results
+
+Day 4: Validation and Documentation
+Tasks:
+
+Run full test suite
+Verify all commands work
+Update README.md
+Create migration guide
+
+Deliverables:
+
+Passing tests
+Updated README
+Migration guide for users
+
+Phase 2: Build System Implementation (3-4 days)
+Day 5: Build System Core
+Tasks:
+
+Create provisioning/tools/build/ structure
+Implement build-system.nu
+Implement package-core.nu
+Create Justfile
+
+Files to Create:
+
+provisioning/tools/build/build-system.nu
+provisioning/tools/build/package-core.nu
+provisioning/tools/build/validate-package.nu
+Justfile
+
+Deliverables:
+
+Working build system
+Core packaging capability
+Justfile with basic recipes
+
+Day 6: Platform and Extension Packaging
+Tasks:
+
+Implement package-platform.nu
+Implement package-extensions.nu
+Implement package-plugins.nu
+Add checksum generation
+
+Deliverables:
+
+Platform packaging
+Extension packaging
+Plugin packaging
+Checksum generation
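The checksum-generation task can be as small as a loop over the built archives. A sketch assuming SHA-256 via GNU coreutils and the `distribution/packages/` output directory named earlier (both are assumptions about the eventual build script):

```shell
# Write a SHA256SUMS manifest next to the packages (paths are assumptions).
cd distribution/packages
: > SHA256SUMS
for pkg in *.tar.gz; do
  sha256sum "$pkg" >> SHA256SUMS
done

# Later, verification is one command:
sha256sum -c SHA256SUMS
```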
+
+Day 7: Package Validation
+Tasks:
+
+Create package validation system
+Implement integrity checks
+Create test suite for packages
+Document package format
+
+Deliverables:
+
+Package validation
+Test suite
+Package format documentation
+
+Day 8: Build System Testing
+Tasks:
+
+Test full build pipeline
+Test all package types
+Optimize build performance
+Document build system
+
+Deliverables:
+
+Tested build system
+Performance optimizations
+Build system documentation
+
+Phase 3: Installation System (2-3 days)
+Day 9: Nushell Installer
+Tasks:
+
+Create install.nu
+Implement installation logic
+Implement upgrade logic
+Implement uninstallation
+
+Files to Create:
+
+distribution/installers/install.nu
+
+Deliverables:
+
+Working Nushell installer
+Upgrade mechanism
+Uninstall mechanism
+
+Day 10: Bash Installer and CLI
+Tasks:
+
+Create install.sh
+Replace bash CLI wrapper with pure Nushell
+Update PATH handling
+Test installation on clean system
+
+Files to Create:
+
+distribution/installers/install.sh
+Updated provisioning/core/cli/provisioning
+
+Deliverables:
+
+Bash installer
+Pure Nushell CLI
+Installation tests
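A clean-system installation test usually starts with a prerequisite probe before the installer runs. A minimal sketch (the helper name and the particular command list are illustrative):

```shell
# Fail fast if a required tool is missing from PATH.
check_prereqs() {
  for cmd in "$@"; do
    command -v "$cmd" >/dev/null 2>&1 || { echo "missing: $cmd" >&2; return 1; }
  done
  echo "prerequisites ok"
}

check_prereqs tar curl
```

Running this at the top of `install.sh` turns a confusing mid-install failure into a one-line diagnostic.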
+
+Day 11: Installation Testing
+Tasks:
+
+Test installation on multiple OSes
+Test upgrade scenarios
+Test uninstallation
+Create installation documentation
+
+Deliverables:
+
+Multi-OS installation tests
+Installation guide
+Troubleshooting guide
+
+Phase 4: Package Registry (Optional, 2-3 days)
+Day 12: Registry System
+Tasks:
+
+Design registry format
+Implement registry indexing
+Create package metadata
+Implement search functionality
+
+Files to Create:
+
+provisioning/tools/build/publish-registry.nu
+distribution/registry/index.json
+
+Deliverables:
+
+Registry system
+Package metadata
+Search functionality
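The search deliverable can be prototyped directly against the planned `index.json`. A sketch with an assumed index layout (the real schema is not yet defined, so the `packages`/`name` fields here are placeholders):

```shell
# Query a registry index for a package name (index format is an assumption).
tmp=$(mktemp -d)
cat > "$tmp/index.json" <<'EOF'
{"packages": [
  {"name": "postgres", "version": "1.0.0"},
  {"name": "gitea", "version": "0.9.2"}
]}
EOF

grep -o '"name": "[^"]*"' "$tmp/index.json" | grep postgres
# → "name": "postgres"
```

A real implementation would parse JSON properly, but even this grep-level probe is enough to validate the index format during Day 12.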
+
+Day 13: Registry Commands
+Tasks:
+
+Implement provisioning registry list
+Implement provisioning registry search
+Implement provisioning registry install
+Implement provisioning registry update
+
+Deliverables:
+
+Registry commands
+Package installation from registry
+Update mechanism
+
+Day 14: Registry Hosting
+Tasks:
+
+Set up registry hosting (S3, GitHub releases, etc.)
+Implement upload mechanism
+Create CI/CD for automatic publishing
+Document registry system
+
+Deliverables:
+
+Hosted registry
+CI/CD pipeline
+Registry documentation
+
+Phase 5: Documentation and Release (2 days)
+Day 15: Documentation
+Tasks:
+
+Update all documentation for new structure
+Create user guides
+Create development guides
+Create API documentation
+
+Deliverables:
+
+Updated documentation
+User guides
+Developer guides
+API docs
+
+Day 16: Release Preparation
+Tasks:
+
+Create CHANGELOG.md
+Build release packages
+Test installation from packages
+Create release announcement
+
+Deliverables:
+
+CHANGELOG
+Release packages
+Installation verification
+Release announcement
+
+
+Migration Strategy
+For Existing Users
+Option 1: Clean Migration
+# Backup current workspace
cp -r workspace workspace.backup
# Upgrade to new version
@@ -14585,20 +12682,14 @@ provisioning upgrade --version 3.2.0
# Migrate workspace
provisioning workspace migrate --from workspace.backup --to workspace/
-```plaintext
-
-#### Option 2: In-Place Migration
-
-```bash
-# Run migration script
+
+Option 2: In-Place Migration
+# Run migration script
provisioning migrate --check # Dry run
provisioning migrate # Execute migration
-```plaintext
-
-### For Developers
-
-```bash
-# Pull latest changes
+
+For Developers
+# Pull latest changes
git pull origin main
# Rebuild
@@ -14610,176 +12701,169 @@ just install-dev
# Verify
provisioning --version
-```plaintext
-
----
-
-## Success Criteria
-
-### Repository Structure
-
-- ✅ Single `workspace/` directory for all runtime data
-- ✅ Clear separation: source (`provisioning/`), runtime (`workspace/`), artifacts (`distribution/`)
-- ✅ All build artifacts in `distribution/` and gitignored
-- ✅ Clean root directory (no `wrks/`, `NO/`, etc.)
-- ✅ Unified documentation in `docs/`
-
-### Build System
-
-- ✅ Single command builds all packages: `just build`
-- ✅ Packages can be built independently
-- ✅ Checksums generated automatically
-- ✅ Validation before packaging
-- ✅ Build time < 5 minutes for full build
-
-### Installation
-
-- ✅ One-line installation: `curl -fsSL https://get.provisioning.io | sh`
-- ✅ Works on Linux and macOS
-- ✅ Standard installation paths (`/usr/local/`)
-- ✅ User configuration in `~/.provisioning/`
-- ✅ Clean uninstallation
-
-### Distribution
-
-- ✅ Packages available at stable URL
-- ✅ Automated releases via CI/CD
-- ✅ Package registry for extensions
-- ✅ Upgrade mechanism works reliably
-
-### Documentation
-
-- ✅ Complete installation guide
-- ✅ Quick start guide
-- ✅ Developer contributing guide
-- ✅ API documentation
-- ✅ Architecture documentation
-
----
-
-## Risks and Mitigations
-
-### Risk 1: Breaking Changes for Existing Users
-
-**Impact:** High
-**Probability:** High
-**Mitigation:**
-
-- Provide migration script
-- Support both old and new paths during transition (v3.2.x)
-- Clear migration guide
-- Automated backup before migration
-
-### Risk 2: Build System Complexity
-
-**Impact:** Medium
-**Probability:** Medium
-**Mitigation:**
-
-- Start with simple packaging
-- Iterate and improve
-- Document thoroughly
-- Provide examples
-
-### Risk 3: Installation Path Conflicts
-
-**Impact:** Medium
-**Probability:** Low
-**Mitigation:**
-
-- Check for existing installations
-- Support custom prefix
-- Clear uninstallation
-- Non-conflicting binary names
-
-### Risk 4: Cross-Platform Issues
-
-**Impact:** High
-**Probability:** Medium
-**Mitigation:**
-
-- Test on multiple OSes (Linux, macOS)
-- Use portable commands
-- Provide fallbacks
-- Clear error messages
-
-### Risk 5: Dependency Management
-
-**Impact:** Medium
-**Probability:** Medium
-**Mitigation:**
-
-- Document all dependencies
-- Check prerequisites during installation
-- Provide installation instructions for dependencies
-- Consider bundling critical dependencies
-
----
-
-## Timeline Summary
-
-| Phase | Duration | Key Deliverables |
-|-------|----------|------------------|
-| Phase 1: Restructuring | 3-4 days | Clean directory structure, updated paths |
-| Phase 2: Build System | 3-4 days | Working build system, all package types |
-| Phase 3: Installation | 2-3 days | Installers, pure Nushell CLI |
-| Phase 4: Registry (Optional) | 2-3 days | Package registry, extension management |
-| Phase 5: Documentation | 2 days | Complete documentation, release |
-| **Total** | **12-16 days** | Production-ready distribution system |
-
----
-
-## Next Steps
-
-1. **Review and Approval** (Day 0)
- - Review this analysis
- - Approve implementation plan
- - Assign resources
-
-2. **Kickoff** (Day 1)
- - Create implementation branch
- - Set up project tracking
- - Begin Phase 1
-
-3. **Weekly Reviews**
- - End of Phase 1: Structure review
- - End of Phase 2: Build system review
- - End of Phase 3: Installation review
- - Final review before release
-
----
-
-## Conclusion
-
-This comprehensive plan transforms the provisioning system into a professional-grade infrastructure automation platform with:
-
-- **Clean Architecture**: Clear separation of concerns
-- **Professional Distribution**: Standard installation paths and packaging
-- **Easy Installation**: One-command installation for users
-- **Developer Friendly**: Simple build system and clear development workflow
-- **Extensible**: Package registry for community extensions
-- **Well Documented**: Complete guides for users and developers
-
-The implementation will take approximately **2-3 weeks** and will result in a production-ready system suitable for both individual developers and enterprise deployments.
-
----
-
-## References
-
-- Current codebase structure
-- Unix FHS (Filesystem Hierarchy Standard)
-- Rust cargo packaging conventions
-- npm/yarn package management patterns
-- Homebrew formula best practices
-- KCL package management design
+
+
+Success Criteria
+Repository Structure
+✅ Single workspace/ directory for all runtime data
+✅ Clear separation: source (provisioning/), runtime (workspace/), artifacts (distribution/)
+✅ All build artifacts in distribution/ and gitignored
+✅ Clean root directory (no wrks/, NO/, etc.)
+✅ Unified documentation in docs/
+
+
+Build System
+✅ Single command builds all packages: just build
+✅ Packages can be built independently
+✅ Checksums generated automatically
+✅ Validation before packaging
+✅ Build time < 5 minutes for full build
+
+
+Installation
+✅ One-line installation: curl -fsSL https://get.provisioning.io | sh
+✅ Works on Linux and macOS
+✅ Standard installation paths (/usr/local/)
+✅ User configuration in ~/.provisioning/
+✅ Clean uninstallation
+
+
+Distribution
+✅ Packages available at stable URL
+✅ Automated releases via CI/CD
+✅ Package registry for extensions
+✅ Upgrade mechanism works reliably
+
+
+
+Documentation
+✅ Quick start guide
+✅ Developer contributing guide
+✅ API documentation
+✅ Architecture documentation
+
+
+Risks and Mitigations
+Risk 1: Breaking Changes for Existing Users
+Impact: High
+Probability: High
+Mitigation:
+
+Provide migration script
+Support both old and new paths during transition (v3.2.x)
+Clear migration guide
+Automated backup before migration
+
+Risk 2: Build System Complexity
+Impact: Medium
+Probability: Medium
+Mitigation:
+
+Start with simple packaging
+Iterate and improve
+Document thoroughly
+Provide examples
+
+Risk 3: Installation Path Conflicts
+Impact: Medium
+Probability: Low
+Mitigation:
+
+Check for existing installations
+Support custom prefix
+Clear uninstallation
+Non-conflicting binary names
+
+Risk 4: Cross-Platform Issues
+Impact: High
+Probability: Medium
+Mitigation:
+
+Test on multiple OSes (Linux, macOS)
+Use portable commands
+Provide fallbacks
+Clear error messages
+
+Risk 5: Dependency Management
+Impact: Medium
+Probability: Medium
+Mitigation:
+
+Document all dependencies
+Check prerequisites during installation
+Provide installation instructions for dependencies
+Consider bundling critical dependencies
+
+
+Timeline Summary
+
+| Phase | Duration | Key Deliverables |
+|-------|----------|------------------|
+| Phase 1: Restructuring | 3-4 days | Clean directory structure, updated paths |
+| Phase 2: Build System | 3-4 days | Working build system, all package types |
+| Phase 3: Installation | 2-3 days | Installers, pure Nushell CLI |
+| Phase 4: Registry (Optional) | 2-3 days | Package registry, extension management |
+| Phase 5: Documentation | 2 days | Complete documentation, release |
+| Total | 12-16 days | Production-ready distribution system |
+
+
+
+
+
+Next Steps
+Review and Approval (Day 0)
+
+Review this analysis
+Approve implementation plan
+Assign resources
+
+
+
+Kickoff (Day 1)
+
+Create implementation branch
+Set up project tracking
+Begin Phase 1
+
+
+
+Weekly Reviews
+
+End of Phase 1: Structure review
+End of Phase 2: Build system review
+End of Phase 3: Installation review
+Final review before release
+
+
+
+
+Conclusion
+This comprehensive plan transforms the provisioning system into a professional-grade infrastructure automation platform with:
+
+Clean Architecture: Clear separation of concerns
+Professional Distribution: Standard installation paths and packaging
+Easy Installation: One-command installation for users
+Developer Friendly: Simple build system and clear development workflow
+Extensible: Package registry for community extensions
+Well Documented: Complete guides for users and developers
+
+The implementation will take approximately 2-3 weeks and will result in a production-ready system suitable for both individual developers and enterprise deployments.
+
+
+References
+Current codebase structure
+Unix FHS (Filesystem Hierarchy Standard)
+Rust cargo packaging conventions
+npm/yarn package management patterns
+Homebrew formula best practices
+KCL package management design
+
Status: Implementation Guide
Last Updated: 2025-12-15
Project: TypeDialog at /Users/Akasha/Development/typedialog
Purpose: Type-safe UI generation from Nickel schemas
-
+
TypeDialog generates type-safe interactive forms from configuration schemas with bidirectional Nickel integration.
Nickel Schema
↓
@@ -14788,28 +12872,20 @@ TypeDialog Form (Auto-generated)
User fills form interactively
↓
Nickel output config (Type-safe)
-```plaintext
-
----
-
-## Architecture
-
-### Three Layers
-
-```plaintext
-CLI/TUI/Web Layer
+
+
+Architecture
+Three Layers
+CLI/TUI/Web Layer
↓
TypeDialog Form Engine
↓
Nickel Integration
↓
Schema Contracts
-```plaintext
-
-### Data Flow
-
-```plaintext
-Input (Nickel)
+
+Data Flow
+Input (Nickel)
↓
Form Definition (TOML)
↓
@@ -14820,16 +12896,11 @@ User Input
Validation (against Nickel contracts)
↓
Output (JSON/YAML/TOML/Nickel)
-```plaintext
-
----
-
-## Setup
-
-### Installation
-
-```bash
-# Clone TypeDialog
+
+
+Setup
+Installation
+# Clone TypeDialog
git clone https://github.com/jesusperezlorenzo/typedialog.git
cd typedialog
@@ -14838,23 +12909,15 @@ cargo build --release
# Install (optional)
cargo install --path ./crates/typedialog
-```plaintext
-
-### Verify Installation
-
-```bash
-typedialog --version
+
+Verify Installation
+typedialog --version
typedialog --help
-```plaintext
-
----
-
-## Basic Workflow
-
-### Step 1: Define Nickel Schema
-
-```nickel
-# server_config.ncl
+
+
+Basic Workflow
+Step 1: Define Nickel Schema
+# server_config.ncl
let contracts = import "./contracts.ncl" in
let defaults = import "./defaults.ncl" in
@@ -14866,12 +12929,9 @@ let defaults = import "./defaults.ncl" in
DefaultServer = defaults.server,
}
-```plaintext
-
-### Step 2: Define TypeDialog Form (TOML)
-
-```toml
-# server_form.toml
+
+Step 2: Define TypeDialog Form (TOML)
+# server_form.toml
[form]
title = "Server Configuration"
description = "Create a new server configuration"
@@ -14920,18 +12980,12 @@ label = "Tags"
type = "multiselect"
options = ["production", "staging", "testing", "development"]
help = "Select applicable tags"
-```plaintext
-
-### Step 3: Render Form (CLI)
-
-```bash
-typedialog form --config server_form.toml --backend cli
-```plaintext
-
-**Output**:
-
-```plaintext
-Server Configuration
+
+Step 3: Render Form (CLI)
+typedialog form --config server_form.toml --backend cli
+
+Output:
+Server Configuration
Create a new server configuration
? Server Name: web-01
@@ -14944,28 +12998,19 @@ Create a new server configuration
◯ staging
◯ testing
◯ development
-```plaintext
-
-### Step 4: Validate Against Nickel Schema
-
-```bash
-# Validation happens automatically
+
+Step 4: Validate Against Nickel Schema
+# Validation happens automatically
# If input matches Nickel contract, proceeds to output
-```plaintext
-
-### Step 5: Output to Nickel
-
-```bash
-typedialog form \
+
+Step 5: Output to Nickel
+typedialog form \
--config server_form.toml \
--output nickel \
--backend cli
```

**Output file** (`server_config_output.ncl`):

```nickel
{
server_name = "web-01",
cpu_cores = 4,
memory_gb = 8,
monitoring = true,
tags = ["production"],
}
```

---

## Real-World Example 1: Infrastructure Wizard

### Scenario

You want an interactive CLI wizard for infrastructure provisioning.

### Step 1: Define Nickel Schema for Infrastructure

```nickel
# infrastructure_schema.ncl
{
InfrastructureConfig = {
workspace_name | String,
DefaultInfra = defaults,
}
```

### Step 2: Create Comprehensive Form

```toml
# infrastructure_wizard.toml
[form]
title = "Infrastructure Provisioning Wizard"
description = "Create a complete infrastructure setup"
label = "Deployment Mode"
type = "select"
required = true
options = [
  { value = "solo", label = "Solo (Single user, 2 CPU, 4 GB RAM)" },
  { value = "multiuser", label = "MultiUser (Team, 4 CPU, 8 GB RAM)" },
  { value = "cicd", label = "CI/CD (Pipelines, 8 CPU, 16 GB RAM)" },
  { value = "enterprise", label = "Enterprise (Production, 16 CPU, 32 GB RAM)" },
]
default = "solo"
required = true
validation_pattern = "^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\\.[a-zA-Z]{2,}$"
help = "For alerts and notifications"
placeholder = "admin@company.com"
```

### Step 3: Run Interactive Wizard

```bash
typedialog form \
--config infrastructure_wizard.toml \
--backend tui \
--output nickel
```

**Output** (`infrastructure_config.ncl`):

```nickel
{
workspace_name = "production-eu",
deployment_mode = 'enterprise,
provider = 'upcloud,
backup_retention_days = 30,
email = "ops@company.com",
}
```

### Step 4: Use Output in Infrastructure

```nickel
# main_infrastructure.ncl
let config = import "./infrastructure_config.ncl" in
let schemas = import "../../provisioning/schemas/main.ncl" in
# default fallback
{},
}
```

---

## Real-World Example 2: Server Configuration Form

### Form Definition (Advanced)

```toml
# server_advanced_form.toml
[form]
title = "Server Configuration"
description = "Configure server settings with validation"
section = "advanced"
label = "Tags"
type = "multiselect"
options = ["production", "staging", "testing", "development"]
```

### Output Structure

```nickel
{
# Basic
server_name = "web-prod-01",
description = "Primary web server",
monitoring_interval = 30,
tags = ["production"],
}
```

---

## API Integration

### TypeDialog REST Endpoints

```bash
# Start TypeDialog server
typedialog server --port 8080
# Render form via HTTP
curl -X POST http://localhost:8080/forms \
-H "Content-Type: application/json" \
-d @server_form.toml
```

### Response Format

```json
{
"form_id": "srv_abc123",
"status": "rendered",
"fields": [
}
]
}
```

### Submit Form

```bash
curl -X POST http://localhost:8080/forms/srv_abc123/submit \
-H "Content-Type: application/json" \
-d '{
"server_name": "web-01",
@@ -15372,12 +13379,9 @@ curl -X POST http://localhost:8080/forms/srv_abc123/submit \
"monitoring": true,
"tags": ["production"]
}'
```

### Response

```json
{
"status": "success",
"validation": "passed",
"output_format": "nickel",
"tags": ["production"]
}
}
```

---

## Validation

### Contract-Based Validation

TypeDialog validates user input against Nickel contracts:

```nickel
# Nickel contract
ServerConfig = {
cpu_cores | Number, # Must be number
memory_gb | Number, # Must be number
# If user enters invalid value
# TypeDialog rejects before serializing
```

### Validation Rules in Form

```toml
[[fields]]
name = "cpu_cores"
type = "number"
min = 1
max = 32
help = "Must be 1-32 cores"
# TypeDialog enforces before user can submit
```

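When the same values arrive through scripts rather than the interactive form, the bounds can be re-checked before submission. The helper below is an illustrative shell sketch of the same 1-32 rule, not part of TypeDialog itself:

```shell
#!/bin/sh
# Illustrative pre-submit check mirroring the form's min/max rule (1-32 cores).
validate_cpu_cores() {
  case "$1" in
    ''|*[!0-9]*) echo "cpu_cores must be a positive integer" >&2; return 1 ;;
  esac
  if [ "$1" -lt 1 ] || [ "$1" -gt 32 ]; then
    echo "cpu_cores must be between 1 and 32" >&2
    return 1
  fi
  echo "cpu_cores=$1 OK"
}

validate_cpu_cores 4            # accepted
validate_cpu_cores 64 || true   # rejected before anything is serialized
```

Rejecting out-of-range values before serialization is the same fail-early property the interactive backends enforce at the prompt.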
---

## Integration with Provisioning Platform

### Use Case: Infrastructure Initialization

```bash
# 1. User runs initialization
provisioning init --wizard
# 2. Behind the scenes:
# 4. Provisioning uses output
# provisioning server create --from-config infrastructure_config.ncl
```

### Implementation in Nushell

```nushell
# provisioning/core/nulib/provisioning_init.nu
def provisioning_init_wizard [] {
# Launch TypeDialog form
print "Infrastructure configuration created!"
print "Use: provisioning deploy --from-config"
}
```

---

## Advanced Features

### Conditional Visibility

Show/hide fields based on user selections:

```toml
[[fields]]
name = "backup_retention"
label = "Backup Retention (days)"
type = "number"
visible_if = "enable_backup == true" # Only shown if backup enabled
```

### Dynamic Defaults

Set defaults based on other fields:

```toml
[[fields]]
name = "deployment_mode"
type = "select"
options = ["solo", "enterprise"]
[[fields]]
name = "cpu_cores"
type = "number"
default_from = "deployment_mode" # Can reference other fields
# solo → default 2, enterprise → default 16
```

### Custom Validation

```toml
[[fields]]
name = "memory_gb"
type = "number"
validation_rule = "memory_gb >= cpu_cores * 2"
help = "Memory must be at least 2 GB per CPU core"
```

---

## Output Formats

TypeDialog can output to multiple formats:

```bash
# Output to Nickel (recommended for IaC)
typedialog form --config form.toml --output nickel
# Output to JSON (for APIs)
# Output to TOML (for application config)
typedialog form --config form.toml --output toml
```

---

## Backends

TypeDialog supports three rendering backends:

### 1. CLI (Command-line prompts)

```bash
typedialog form --config form.toml --backend cli
```

**Pros**: Lightweight, SSH-friendly, no dependencies
**Cons**: Basic UI

### 2. TUI (Terminal User Interface - Ratatui)

```bash
typedialog form --config form.toml --backend tui
```

**Pros**: Rich UI, keyboard navigation, sections
**Cons**: Requires terminal support

### 3. Web (HTTP Server - Axum)

```bash
typedialog form --config form.toml --backend web --port 3000
# Opens http://localhost:3000
```

**Pros**: Beautiful UI, remote access, multi-user
**Cons**: Requires browser, network

---

## Troubleshooting

### Problem: Form doesn't match Nickel contract

**Cause**: Field names or types don't match contract

**Solution**: Verify field definitions match Nickel schema:

```toml
# Form field
[[fields]]
name = "cpu_cores" # Must match Nickel field name
type = "number" # Must match Nickel type
```

### Problem: Validation fails

**Cause**: User input violates contract constraints

**Solution**: Add help text and validation rules:

```toml
[[fields]]
name = "cpu_cores"
validation_pattern = "^[1-9][0-9]*$"
help = "Must be positive integer"
```

### Problem: Output not valid Nickel

**Cause**: Missing required fields

**Solution**: Ensure all required fields in form:

```toml
[[fields]]
name = "required_field"
required = true # User must provide value
```

---

## Complete Example: End-to-End Workflow

### Step 1: Define Nickel Schema

```nickel
# workspace_schema.ncl
{
workspace = {
name = "",
email = "",
},
}
```

### Step 2: Define Form

```toml
# workspace_form.toml
[[fields]]
name = "name"
type = "text"
[[fields]]
name = "email"
type = "text"
required = true
```

### Step 3: User Interaction

```bash
$ typedialog form --config workspace_form.toml --backend tui
# User fills form interactively
```

### Step 4: Output

```nickel
{
workspace = {
name = "production",
mode = 'enterprise,
email = "ops@company.com",
},
}
```

### Step 5: Use in Provisioning

```nickel
# main.ncl
let config = import "./workspace.ncl" in
let schemas = import "provisioning/schemas/main.ncl" in
provider = config.workspace.provider,
},
}
```

---

## Summary

TypeDialog + Nickel provides:

✅ **Type-Safe UIs**: Forms validated against Nickel contracts
✅ **Auto-Generated**: No UI code to maintain
✅ **Bidirectional**: Nickel → Forms → Nickel
✅ **Multiple Outputs**: JSON, YAML, TOML, Nickel
✅ **Three Backends**: CLI, TUI, Web
✅ **Production-Ready**: Used in real infrastructure

**Key Benefit**: Reduce configuration errors by enforcing schema validation at the UI level, not after deployment.

---

**Version**: 1.0.0
**Status**: Implementation Guide
**Last Updated**: 2025-12-15
Accepted
Provisioning had evolved from a monolithic structure into a complex system with mixed organizational patterns. The original structure had several issues:
- **Provider-specific code scattered**: Cloud provider implementations were mixed with core logic
- **Task services fragmented**: Infrastructure services lacked consistent structure
├── control-center/ # Web UI management interface
├── tools/ # Development and utility tools
└── extensions/ # Plugin and extension framework
```

### Key Structural Principles

1. **Domain Separation**: Each major component has clear boundaries and responsibilities
2. **Hybrid Architecture**: Rust for performance-critical coordination, Nushell for business logic
3. **Provider Abstraction**: Standardized interfaces across cloud providers
4. **Service Modularity**: Reusable task services with consistent structure
5. **Clean Distribution**: Development tools separated from user-facing components
6. **Configuration Hierarchy**: Systematic config management with interpolation support

### Domain Organization

- **Core**: CLI interface, library modules, and common utilities
- **Platform**: High-performance Rust orchestrator for workflow coordination
- **Provisioning**: Main business logic with providers, task services, and clusters
- **Control Center**: Web-based management interface
- **Tools**: Development utilities and build systems
- **Extensions**: Plugin framework and custom extensions

## Consequences

### Positive

- **Clear Boundaries**: Each domain has well-defined responsibilities and interfaces
- **Scalable Growth**: New providers and services can be added without structural changes
- **Development Efficiency**: Developers can focus on specific domains without system-wide knowledge
- **Clean Distribution**: Users receive only necessary components without development artifacts
- **Maintenance Clarity**: Issues can be isolated to specific domains
- **Hybrid Benefits**: Leverage Rust performance where needed while maintaining Nushell productivity
- **Configuration Consistency**: Systematic approach to configuration management across all domains

### Negative

- **Migration Complexity**: Required systematic migration of existing components
- **Learning Curve**: New developers need to understand domain boundaries
- **Coordination Overhead**: Cross-domain features require careful interface design
- **Path Management**: More complex path resolution with domain separation
- **Build Complexity**: Multiple domains require coordinated build processes

### Neutral

- **Development Patterns**: Each domain may develop its own patterns within architectural guidelines
- **Testing Strategy**: Domain-specific testing strategies while maintaining integration coverage
- **Documentation**: Domain-specific documentation with clear cross-references

## Alternatives Considered

### Alternative 1: Monolithic Structure

Keep all code in a single flat structure with minimal organization.
**Rejected**: Would not solve maintainability or scalability issues. Continued technical debt accumulation.

### Alternative 2: Microservice Architecture

Split into completely separate services with network communication.
**Rejected**: Overhead too high for single-machine deployment use case. Would complicate installation and configuration.

### Alternative 3: Language-Based Organization

Organize by implementation language (rust/, nushell/, kcl/).
**Rejected**: Does not align with functional boundaries. Cross-cutting concerns would be scattered.

### Alternative 4: Feature-Based Organization

Organize by user-facing features (servers/, clusters/, networking/).
**Rejected**: Would duplicate cross-cutting infrastructure and provider logic across features.

### Alternative 5: Layer-Based Architecture

Organize by architectural layers (presentation/, business/, data/).
**Rejected**: Does not align with domain complexity. Infrastructure provisioning has different layering needs.

## References

- Configuration System Migration (ADR-002)
- Hybrid Architecture Decision (ADR-004)
- Extension Framework Design (ADR-005)
- Project Architecture Principles (PAP) Guidelines

Accepted
├── scripts/ # Development tools
├── tests/ # Test suites
└── tools/ # Build and development utilities
```

### Key Distribution Principles

1. **Clean Separation**: Development artifacts never appear in user installations
2. **Hierarchical Configuration**: Clear precedence from system defaults to user overrides
3. **Self-Contained User Tools**: Users can work without accessing development directories
4. **Workspace Isolation**: User data and customizations isolated from system installation
5. **Consistent Paths**: Predictable path resolution across different installation types
6. **Version Management**: Clear versioning and upgrade paths for distributed components

## Consequences

### Positive

- **Clean User Experience**: Users interact only with production-ready tools and interfaces
- **Simplified Installation**: Clear installation process without development complexity
- **Workspace Isolation**: User customizations don't interfere with system installation
- **Development Efficiency**: Developers can work with full toolset without affecting users
- **Configuration Clarity**: Clear hierarchy and precedence for configuration settings
- **Maintainable Updates**: System updates don't affect user customizations
- **Path Simplicity**: Predictable path resolution without development-specific logic
- **Security Isolation**: User workspace separated from system components

### Negative

- **Distribution Complexity**: Multiple distribution targets require coordinated build processes
- **Path Management**: More complex path resolution logic to support multiple layers
- **Migration Overhead**: Existing users need to migrate to new workspace structure
- **Documentation Burden**: Need clear documentation for different user types
- **Testing Complexity**: Must validate distribution across different installation scenarios

### Neutral

- **Development Patterns**: Different patterns for development versus production deployment
- **Configuration Strategy**: Layer-specific configuration management approaches
- **Tool Integration**: Different integration patterns for development versus user tools

## Alternatives Considered

### Alternative 1: Monolithic Distribution

Ship everything (development and production) in single package.
**Rejected**: Creates confusing user experience and bloated installations. Mixes development concerns with user needs.

### Alternative 2: Container-Only Distribution

Package entire system as container images only.
**Rejected**: Limits deployment flexibility and complicates local development workflows. Not suitable for all use cases.

### Alternative 3: Source-Only Distribution

Require users to build from source with development environment.
**Rejected**: Creates high barrier to entry and mixes user concerns with development complexity.

### Alternative 4: Plugin-Based Distribution

Minimal core with everything else as downloadable plugins.
**Rejected**: Would fragment essential functionality and complicate initial setup. Network dependency for basic functionality.

### Alternative 5: Environment-Based Distribution

Use environment variables to control what gets installed.
**Rejected**: Creates complex configuration matrix and potential for inconsistent installations.

## Implementation Details

### Distribution Build Process

1. **Core Layer Build**: Extract essential user components from source
2. **Template Processing**: Generate configuration templates with proper defaults
3. **Path Resolution**: Generate path resolution logic for different installation types
4. **Documentation Generation**: Create user-specific documentation excluding development details
5. **Package Creation**: Build distribution packages for different platforms
6. **Validation Testing**: Test installations in clean environments
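Sketched as a script, the six steps could look like the following. Everything here is hypothetical — directory names, the `{{version}}` template marker, and the archive layout are invented to illustrate the ordering of the steps, not the real build tooling:

```shell
#!/bin/sh
# Hypothetical sketch of the six-step distribution build; all paths invented.
set -e
WORK=$(mktemp -d); cd "$WORK"

# Tiny stand-in for the real source tree, just so the steps have input.
mkdir -p src/core src/templates src/docs
echo 'core tool' > src/core/cli.nu
echo 'version = "{{version}}"' > src/templates/defaults.toml.tpl
echo '# user guide' > src/docs/user-guide.md

mkdir -p dist/core dist/config dist/docs dist/pkg
cp -R src/core/. dist/core/                          # 1. core layer build
sed 's/{{version}}/1.0.0/' src/templates/defaults.toml.tpl \
  > dist/config/defaults.toml                        # 2. template processing
printf 'PREFIX=%s\n' "$WORK/dist" > dist/config/paths.env      # 3. path resolution
cp src/docs/user-guide.md dist/docs/                 # 4. user docs only, no dev internals
tar -czf dist/pkg/provisioning.tar.gz -C dist core config docs # 5. package creation
tar -tzf dist/pkg/provisioning.tar.gz > /dev/null && echo "package OK"  # 6. validate
```

The point of the sketch is the dependency order: templates and paths must be resolved before packaging, and validation always runs against the finished artifact, never the source tree.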

### Configuration Hierarchy

```plaintext
System Defaults (lowest precedence)
└── User Configuration
└── Project Configuration
└── Infrastructure Configuration
└── Environment Configuration
└── Runtime Configuration (highest precedence)
```

### Workspace Management

- **Automatic Creation**: User workspace created on first run
- **Template Initialization**: Workspace populated with configuration templates
- **Version Tracking**: Workspace tracks compatible system versions
- **Migration Support**: Automatic migration between workspace versions
- **Backup Integration**: Workspace backup and restore capabilities
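The first-run behavior above can be sketched as an idempotent bootstrap. This is illustrative only: the directory list follows the workspace tree documented elsewhere in this book, and the `state/version` marker file is an assumption, not a documented artifact:

```shell
#!/bin/sh
# Sketch: idempotent first-run creation of a user workspace.
init_workspace() {
  WS=$1
  for d in config infra extensions cache logs state backups; do
    mkdir -p "$WS/$d"
  done
  # Seed a user config template only on the first run; never clobber it later
  [ -f "$WS/config/user.toml" ] || printf '# user preferences and overrides\n' > "$WS/config/user.toml"
  # Hypothetical marker recording which system version created the workspace
  [ -f "$WS/state/version" ] || echo "1.0.0" > "$WS/state/version"
  echo "workspace ready: $WS"
}

init_workspace "$(mktemp -d)/provisioning"
```

Because every step is guarded, re-running the bootstrap after a system upgrade leaves user customizations untouched, which is the "Maintainable Updates" property this ADR claims.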

## References

- Project Structure Decision (ADR-001)
- Workspace Isolation Decision (ADR-003)
- Configuration System Migration (CLAUDE.md)
- User Experience Guidelines (Design Principles)
- Installation and Deployment Procedures

Accepted
Implement isolated user workspaces with clear boundaries and hierarchical configuration:

```plaintext
~/workspace/provisioning/ # User workspace root
├── config/
│ ├── user.toml # User preferences and overrides
@@ -16065,91 +13947,75 @@ System Defaults (lowest precedence)
├── logs/ # User-specific logs
├── state/ # Local state files
└── backups/ # Automatic workspace backups
```

### Configuration Hierarchy (Precedence Order)

1. **Runtime Parameters** (command line, environment variables)
2. **Environment Configuration** (`config/environments/{env}.toml`)
3. **Infrastructure Configuration** (`infra/{name}/config.toml`)
4. **Project Configuration** (project-specific settings)
5. **User Configuration** (`config/user.toml`)
6. **System Defaults** (system-wide defaults)
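One way to read this chain is "the first layer that defines a key wins, scanning from highest precedence down". The sketch below illustrates that rule with a deliberately naive line-oriented lookup; the file names echo the list above, and a real implementation would parse TOML properly rather than grep it:

```shell
#!/bin/sh
# Sketch: resolve a key by scanning layers from highest to lowest precedence.
lookup() {
  key=$1; shift
  for layer in "$@"; do
    [ -f "$layer" ] || continue            # absent layers are simply skipped
    val=$(grep "^${key}[[:space:]]*=" "$layer" | head -n 1)
    if [ -n "$val" ]; then
      echo "$val  # from $layer"
      return 0
    fi
  done
  echo "$key not set in any layer" >&2
  return 1
}

# Fixture: the user layer overrides one key; defaults supply the rest
WORK=$(mktemp -d)
printf 'editor = "helix"\n' > "$WORK/user.toml"
printf 'editor = "vi"\nlog_level = "info"\n' > "$WORK/defaults.toml"

lookup editor    "$WORK/runtime.toml" "$WORK/user.toml" "$WORK/defaults.toml"
lookup log_level "$WORK/runtime.toml" "$WORK/user.toml" "$WORK/defaults.toml"
```

The skip-if-absent behavior matters: a workspace with no project or environment layer still resolves every key, falling through to system defaults.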

### Key Isolation Principles

1. **Complete Isolation**: User workspace completely independent of system installation
2. **Hierarchical Inheritance**: Clear configuration inheritance with user overrides
3. **Security Boundaries**: User workspace in user-writable area only
4. **Multi-User Safe**: Multiple users can have independent workspaces
5. **Portable**: Entire user workspace can be backed up and restored
6. **Version Independent**: Workspace compatible across system version upgrades
7. **Extension Safe**: User extensions cannot affect system behavior
8. **State Isolation**: All user state contained within workspace

## Consequences

### Positive

- **User Independence**: Users can customize without affecting system or other users
- **Configuration Clarity**: Clear hierarchy and precedence for all configuration
- **Security Isolation**: User modifications cannot compromise system installation
- **Easy Backup**: Complete user environment can be backed up and restored
- **Development Flexibility**: Developers can have multiple isolated workspaces
- **System Upgrades**: System updates don't affect user customizations
- **Multi-User Support**: Multiple users can work independently on same system
- **Portable Configurations**: User workspace can be moved between systems
- **State Management**: All user state in predictable locations

### Negative

- **Initial Setup**: Users must initialize workspace before first use
- **Path Complexity**: More complex path resolution to support workspace isolation
- **Disk Usage**: Each user maintains separate cache and state
- **Configuration Duplication**: Some configuration may be duplicated across users
- **Migration Overhead**: Existing users need workspace migration
- **Documentation Complexity**: Need clear documentation for workspace management

### Neutral

- **Backup Strategy**: Users responsible for their own workspace backup
- **Extension Management**: User-specific extension installation and management
- **Version Compatibility**: Workspace versions must be compatible with system versions
- **Performance Implications**: Additional path resolution overhead

## Alternatives Considered

### Alternative 1: System-Wide Configuration Only

All configuration in system directories with user overrides via environment variables.
**Rejected**: Creates conflicts between users and makes customization difficult. Poor isolation and security.

### Alternative 2: Home Directory Dotfiles

Use traditional dotfile approach (~/.provisioning/).
**Rejected**: Clutters home directory and provides less structured organization. Harder to backup and migrate.

### Alternative 3: XDG Base Directory Specification

Follow XDG specification for config/data/cache separation.
**Rejected**: While standards-compliant, would fragment user data across multiple directories making management complex.

### Alternative 4: Container-Based Isolation

Each user gets containerized environment.
**Rejected**: Too heavy for simple configuration isolation. Adds deployment complexity without sufficient benefits.

### Alternative 5: Database-Based Configuration

Store all user configuration in database.
**Rejected**: Adds dependency complexity and makes backup/restore more difficult. Over-engineering for configuration needs.

## Implementation Details

### Workspace Initialization

```bash
# Automatic workspace creation on first run
+
+
+
+Runtime Parameters (command line, environment variables)
+Environment Configuration (config/environments/{env}.toml)
+Infrastructure Configuration (infra/{name}/config.toml)
+Project Configuration (project-specific settings)
+User Configuration (config/user.toml)
+System Defaults (system-wide defaults)
+
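The precedence list above is effectively a last-writer-wins merge over configuration layers. A minimal Rust sketch, with flat key/value maps standing in for the real TOML trees (layer names and keys are illustrative):

```rust
use std::collections::HashMap;

/// Merge layers lowest-precedence first: a later layer overrides any key
/// already set by an earlier one, so the last writer wins.
fn merge_layers(layers: &[HashMap<String, String>]) -> HashMap<String, String> {
    let mut merged = HashMap::new();
    for layer in layers {
        for (key, value) in layer {
            merged.insert(key.clone(), value.clone());
        }
    }
    merged
}

fn main() {
    let system_defaults = HashMap::from([("region".to_string(), "us-east-1".to_string())]);
    let user_config = HashMap::from([("region".to_string(), "eu-west-1".to_string())]);
    let runtime = HashMap::from([("debug".to_string(), "true".to_string())]);

    // Apply in ascending precedence: system defaults first, runtime parameters last.
    let merged = merge_layers(&[system_defaults, user_config, runtime]);
    assert_eq!(merged["region"], "eu-west-1"); // user config overrides system default
    assert_eq!(merged["debug"], "true");       // runtime-only key survives the merge
}
```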
+
+Decision Rationale
+
+Complete Isolation : User workspace completely independent of system installation
+Hierarchical Inheritance : Clear configuration inheritance with user overrides
+Security Boundaries : User workspace in user-writable area only
+Multi-User Safe : Multiple users can have independent workspaces
+Portable : Entire user workspace can be backed up and restored
+Version Independent : Workspace compatible across system version upgrades
+Extension Safe : User extensions cannot affect system behavior
+State Isolation : All user state contained within workspace
+
+
+
+Consequences
+Positive
+User Independence : Users can customize without affecting system or other users
+Configuration Clarity : Clear hierarchy and precedence for all configuration
+Security Isolation : User modifications cannot compromise system installation
+Easy Backup : Complete user environment can be backed up and restored
+Development Flexibility : Developers can have multiple isolated workspaces
+System Upgrades : System updates don’t affect user customizations
+Multi-User Support : Multiple users can work independently on same system
+Portable Configurations : User workspace can be moved between systems
+State Management : All user state in predictable locations
+
+
+Negative
+Initial Setup : Users must initialize workspace before first use
+Path Complexity : More complex path resolution to support workspace isolation
+Disk Usage : Each user maintains separate cache and state
+Configuration Duplication : Some configuration may be duplicated across users
+Migration Overhead : Existing users need workspace migration
+Documentation Complexity : Need clear documentation for workspace management
+
+
+Neutral
+Backup Strategy : Users responsible for their own workspace backup
+Extension Management : User-specific extension installation and management
+Version Compatibility : Workspace versions must be compatible with system versions
+Performance Implications : Additional path resolution overhead
+
+
+Alternatives Considered
+Alternative 1: System-Wide Configuration Only
+All configuration in system directories with user overrides via environment variables.
+Rejected : Creates conflicts between users and makes customization difficult. Poor isolation and security.
+
+Alternative 2: Home Directory Dotfiles
+Use traditional dotfile approach (~/.provisioning/).
+Rejected : Clutters home directory and provides less structured organization. Harder to backup and migrate.
+
+Alternative 3: XDG Base Directory Specification
+Follow XDG specification for config/data/cache separation.
+Rejected : While standards-compliant, would fragment user data across multiple directories making management complex.
+
+Alternative 4: Container-Based Isolation
+Each user gets containerized environment.
+Rejected : Too heavy for simple configuration isolation. Adds deployment complexity without sufficient benefits.
+
+Alternative 5: Database-Based Configuration
+Store all user configuration in database.
+Rejected : Adds dependency complexity and makes backup/restore more difficult. Over-engineering for configuration needs.
+
+Implementation Details
+Workspace Initialization
+# Automatic workspace creation on first run
provisioning workspace init
# Manual workspace creation with template
@@ -16158,20 +14024,17 @@ provisioning workspace init --template=developer
# Workspace status and validation
provisioning workspace status
provisioning workspace validate
-```plaintext
-
-### Configuration Resolution Process
-
-1. **Workspace Discovery**: Locate user workspace (env var → default location)
-2. **Configuration Loading**: Load configuration hierarchy with proper precedence
-3. **Path Resolution**: Resolve all paths relative to workspace and system installation
-4. **Variable Interpolation**: Process configuration variables and templates
-5. **Validation**: Validate merged configuration for completeness and correctness
-
-### Backup and Migration
-
-```bash
-# Backup entire workspace
+
+
+Configuration Resolution Process
+Workspace Discovery : Locate user workspace (env var → default location)
+Configuration Loading : Load configuration hierarchy with proper precedence
+Path Resolution : Resolve all paths relative to workspace and system installation
+Variable Interpolation : Process configuration variables and templates
+Validation : Validate merged configuration for completeness and correctness
+
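Step 1 of the process above (workspace discovery) can be sketched as a small resolver. The `PROVISIONING_WORKSPACE` variable and default directory name are assumptions for illustration, not the documented interface:

```rust
use std::env;
use std::path::PathBuf;

/// Workspace discovery: an explicit environment override wins, otherwise
/// fall back to a per-user default under the home directory. Variable and
/// directory names here are illustrative, not the tool's real ones.
fn discover_workspace(env_override: Option<&str>) -> PathBuf {
    if let Some(dir) = env_override {
        return PathBuf::from(dir);
    }
    let home = env::var("HOME").unwrap_or_else(|_| ".".to_string());
    PathBuf::from(home).join("provisioning-workspace")
}

fn main() {
    // The caller reads the override once, e.g. from PROVISIONING_WORKSPACE.
    let override_val = env::var("PROVISIONING_WORKSPACE").ok();
    let ws = discover_workspace(override_val.as_deref());
    println!("workspace: {}", ws.display());
    assert_eq!(discover_workspace(Some("/tmp/ws")), PathBuf::from("/tmp/ws"));
}
```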
+Backup and Migration
+# Backup entire workspace
provisioning workspace backup --output ~/backup/provisioning-workspace.tar.gz
# Restore workspace from backup
@@ -16179,24 +14042,23 @@ provisioning workspace restore --input ~/backup/provisioning-workspace.tar.gz
# Migrate workspace to new version
provisioning workspace migrate --from-version 2.0.0 --to-version 3.0.0
-```plaintext
-
-### Security Considerations
-
-- **File Permissions**: Workspace created with appropriate user permissions
-- **Secret Management**: Secrets encrypted and isolated within workspace
-- **Extension Sandboxing**: User extensions cannot access system directories
-- **Path Validation**: All paths validated to prevent directory traversal
-- **Configuration Validation**: User configuration validated against schemas
-
-## References
-
-- Distribution Strategy (ADR-002)
-- Configuration System Migration (CLAUDE.md)
-- Security Guidelines (Design Principles)
-- Extension Framework (ADR-005)
-- Multi-User Deployment Patterns
+
+Security Considerations
+File Permissions : Workspace created with appropriate user permissions
+Secret Management : Secrets encrypted and isolated within workspace
+Extension Sandboxing : User extensions cannot access system directories
+Path Validation : All paths validated to prevent directory traversal
+Configuration Validation : User configuration validated against schemas
+
+
+References
+Distribution Strategy (ADR-002)
+Configuration System Migration (CLAUDE.md)
+Security Guidelines (Design Principles)
+Extension Framework (ADR-005)
+Multi-User Deployment Patterns
+
Accepted
@@ -16222,7 +14084,7 @@ provisioning workspace migrate --from-version 2.0.0 --to-version 3.0.0
Implement a Hybrid Rust/Nushell Architecture with clear separation of concerns:
-
+
Orchestrator : High-performance workflow coordination and task scheduling
@@ -16241,7 +14103,7 @@ provisioning workspace migrate --from-version 2.0.0 --to-version 3.0.0
CLI Interface : User-facing command-line tools and workflows
Domain Operations : All business-specific logic and operations
-
+
// Rust orchestrator invokes Nushell scripts via process execution
let result = Command::new("nu")
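The truncated invocation above can be completed along these lines. The interpreter is parameterized so the sketch runs even where Nushell is not installed; the script content and error handling are illustrative, not the orchestrator's actual code:

```rust
use std::process::Command;

/// Run a script through an external interpreter and return trimmed stdout.
/// The orchestrator would call this with "nu"; keeping the interpreter as a
/// parameter makes the pattern visible without assuming Nushell is present.
fn run_script(interpreter: &str, code: &str) -> std::io::Result<String> {
    let out = Command::new(interpreter).args(["-c", code]).output()?;
    if !out.status.success() {
        return Err(std::io::Error::new(
            std::io::ErrorKind::Other,
            format!("{interpreter} exited with {}", out.status),
        ));
    }
    Ok(String::from_utf8_lossy(&out.stdout).trim().to_string())
}

fn main() {
    // With Nushell installed this would be: run_script("nu", "ls | to json")
    match run_script("sh", "echo orchestrator-ok") {
        Ok(s) => println!("{s}"),
        Err(e) => eprintln!("spawn failed: {e}"),
    }
}
```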
@@ -16272,8 +14134,8 @@ http post "http://localhost:9090/workflows/servers/create" {
Error Handling : Coordinated error handling across language boundaries
State Consistency : Consistent state management across hybrid system
-
-
+
+
Technical Limitations Solved : Eliminates Nushell deep call stack issues
Performance Optimized : High-performance coordination while preserving productivity
@@ -16285,7 +14147,7 @@ http post "http://localhost:9090/workflows/servers/create" {
Scalability : Architecture scales to complex multi-provider workflows
Maintainability : Clear separation of concerns between layers
-
+
Complexity Increase : Two-language system requires more architectural coordination
Integration Overhead : Data serialization/deserialization between languages
@@ -16294,14 +14156,14 @@ http post "http://localhost:9090/workflows/servers/create" {
Deployment Complexity : Two runtime environments must be coordinated
Debugging Challenges : Debugging across language boundaries more complex
-
+
Development Patterns : Different patterns for each layer while maintaining consistency
Documentation Strategy : Language-specific documentation with integration guides
Tool Chain : Multiple development tool chains must be maintained
Performance Characteristics : Different performance characteristics for different operations
-
+
Continue with Nushell-only approach and work around limitations.
Rejected : Technical limitations are fundamental and cannot be worked around without compromising functionality. Deep call stack issues are architectural.
@@ -16317,7 +14179,7 @@ http post "http://localhost:9090/workflows/servers/create" {
Run Nushell and coordination layer in separate containers.
Rejected : Adds deployment complexity and network communication overhead. Complicates local development significantly.
-
+
Task Queue : File-based persistent queue for reliable workflow management
@@ -16333,21 +14195,21 @@ http post "http://localhost:9090/workflows/servers/create" {
File-based State : Lightweight persistence without database dependencies
Process Execution : Secure subprocess execution for Nushell operations
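The file-based task queue listed above can be approximated with one file per task, ordered by a zero-padded sequence number so ordering doubles as priority. The layout is a sketch, not the orchestrator's actual on-disk format:

```rust
use std::fs;
use std::path::{Path, PathBuf};

/// Enqueue: each task is one file in a spool directory; the zero-padded
/// sequence number in the name provides a stable lexicographic order.
fn enqueue(dir: &Path, seq: u64, payload: &str) -> std::io::Result<PathBuf> {
    fs::create_dir_all(dir)?;
    let path = dir.join(format!("{seq:012}.task"));
    fs::write(&path, payload)?;
    Ok(path)
}

/// Dequeue: take the lexicographically smallest task file. The file is
/// removed only after a successful read, so tasks survive crashes.
fn dequeue(dir: &Path) -> std::io::Result<Option<String>> {
    let mut names: Vec<PathBuf> = fs::read_dir(dir)?
        .filter_map(|e| e.ok().map(|e| e.path()))
        .collect();
    names.sort();
    match names.first() {
        Some(p) => {
            let payload = fs::read_to_string(p)?;
            fs::remove_file(p)?;
            Ok(Some(payload))
        }
        None => Ok(None),
    }
}

fn main() -> std::io::Result<()> {
    let dir = std::env::temp_dir().join("provisioning-queue-demo");
    let _ = fs::remove_dir_all(&dir); // start from a clean spool for the demo
    enqueue(&dir, 2, "create server beta")?;
    enqueue(&dir, 1, "create server alpha")?;
    assert_eq!(dequeue(&dir)?, Some("create server alpha".to_string()));
    Ok(())
}
```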
-
+
Rust Development : Focus on coordination, performance, and integration
Nushell Development : Focus on business logic, providers, and task services
Integration Testing : Validate communication between layers
End-to-End Validation : Complete workflow testing across both layers
-
+
Structured Logging : JSON logs from both Rust and Nushell components
Metrics Collection : Performance metrics from coordination layer
Health Checks : System health monitoring across both layers
Workflow Tracking : Complete audit trail of workflow execution
-
+
✅ Rust orchestrator implementation
@@ -16369,7 +14231,7 @@ http post "http://localhost:9090/workflows/servers/create" {
✅ Rollback capabilities
✅ Real-time monitoring
-
+
Deep Call Stack Limitations (CLAUDE.md - Architectural Lessons Learned)
Configuration-Driven Architecture (ADR-002)
@@ -16441,12 +14303,9 @@ http post "http://localhost:9090/workflows/servers/create" {
└── external-tool/
├── extension.toml
└── nulib/
-```plaintext
-
-### Extension Manifest (extension.toml)
-
-```toml
-[extension]
+
+Extension Manifest (extension.toml)
+ [extension]
name = "custom-provider"
version = "1.0.0"
type = "provider"
@@ -16467,97 +14326,81 @@ json_parser = ">=2.0.0"
[entry_points]
cli = "nulib/cli.nu"
provider = "nulib/provider.nu"
-config_schema = "kcl/schema.k"
+config_schema = "schemas/schema.ncl"
[configuration]
config_prefix = "custom_provider"
required_env_vars = ["CUSTOM_PROVIDER_API_KEY"]
optional_config = ["custom_provider.region", "custom_provider.timeout"]
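A loader for a manifest like this only needs section/key lookup. The sketch below hand-parses the flat `[section]` / `key = "value"` subset shown above; a real implementation would use a proper TOML parser, and the array-valued keys are out of scope here:

```rust
use std::collections::HashMap;

/// Parse the flat `[section]` / `key = "value"` subset of TOML into a
/// (section, key) -> value map. Illustrative only: no arrays, no escapes.
fn parse_manifest(text: &str) -> HashMap<(String, String), String> {
    let mut out = HashMap::new();
    let mut section = String::new();
    for line in text.lines().map(str::trim) {
        if line.is_empty() || line.starts_with('#') {
            continue;
        } else if line.starts_with('[') && line.ends_with(']') {
            section = line[1..line.len() - 1].to_string();
        } else if let Some((k, v)) = line.split_once('=') {
            let value = v.trim().trim_matches('"').to_string();
            out.insert((section.clone(), k.trim().to_string()), value);
        }
    }
    out
}

fn main() {
    let manifest = "[extension]\nname = \"custom-provider\"\ntype = \"provider\"\n\n[entry_points]\nprovider = \"nulib/provider.nu\"\n";
    let m = parse_manifest(manifest);
    assert_eq!(m[&("extension".to_string(), "name".to_string())], "custom-provider");
    assert_eq!(m[&("entry_points".to_string(), "provider".to_string())], "nulib/provider.nu");
}
```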
-```plaintext
-
-### Key Framework Principles
-
-1. **Registry-Based Discovery**: Extensions registered in structured directories
-2. **Manifest-Driven Loading**: Extension capabilities declared in manifest files
-3. **Version Compatibility**: Explicit compatibility declarations and validation
-4. **Configuration Integration**: Extensions integrate with system configuration hierarchy
-5. **Isolation Boundaries**: Extensions isolated from core system and each other
-6. **Standard Interfaces**: Consistent interfaces across extension types
-7. **Development Patterns**: Clear patterns for extension development
-8. **Community Support**: Framework designed for community contributions
-
-## Consequences
-
-### Positive
-
-- **Extensibility**: System can be extended without modifying core code
-- **Community Growth**: Enable community contributions and ecosystem development
-- **Organization Customization**: Organizations can add proprietary integrations
-- **Innovation Support**: New technologies can be integrated via extensions
-- **Isolation Safety**: Extensions cannot compromise system stability
-- **Configuration Consistency**: Extensions integrate with configuration-driven architecture
-- **Development Efficiency**: Clear patterns reduce extension development time
-- **Version Management**: Compatibility system prevents breaking changes
-- **Discovery Automation**: Extensions automatically discovered and loaded
-
-### Negative
-
-- **Complexity Increase**: Additional layer of abstraction and management
-- **Performance Overhead**: Extension loading and isolation adds runtime cost
-- **Testing Complexity**: Must test extension framework and individual extensions
-- **Documentation Burden**: Need comprehensive extension development documentation
-- **Version Coordination**: Extension compatibility matrix requires management
-- **Support Complexity**: Community extensions may require support resources
-
-### Neutral
-
-- **Development Patterns**: Different patterns for extension vs core development
-- **Quality Control**: Community extensions may vary in quality and maintenance
-- **Security Considerations**: Extensions need security review and validation
-- **Dependency Management**: Extension dependencies must be managed carefully
-
-## Alternatives Considered
-
-### Alternative 1: Filesystem-Based Extensions
-
-Simple filesystem scanning for extension discovery.
-**Rejected**: No manifest validation or version compatibility checking. Fragile discovery mechanism.
-
-### Alternative 2: Database-Backed Registry
-
-Store extension metadata in database for discovery.
-**Rejected**: Adds database dependency complexity. Over-engineering for extension discovery needs.
-
-### Alternative 3: Package Manager Integration
-
-Use existing package managers (cargo, npm) for extension distribution.
-**Rejected**: Complicates installation and creates external dependencies. Not suitable for corporate environments.
-
-### Alternative 4: Container-Based Extensions
-
-Each extension runs in isolated container.
-**Rejected**: Too heavy for simple extensions. Complicates development and deployment significantly.
-
-### Alternative 5: Plugin Architecture
-
-Traditional plugin architecture with dynamic loading.
-**Rejected**: Complex for shell-based system. Security and isolation challenges in Nushell environment.
-
-## Implementation Details
-
-### Extension Discovery Process
-
-1. **Directory Scanning**: Scan extension directories for manifest files
-2. **Manifest Validation**: Parse and validate extension manifest
-3. **Compatibility Check**: Verify version compatibility requirements
-4. **Dependency Resolution**: Resolve extension dependencies
-5. **Configuration Integration**: Merge extension configuration schemas
-6. **Entry Point Registration**: Register extension entry points with system
-
-### Extension Loading Lifecycle
-
-```bash
-# Extension discovery and validation
+
+
+Key Framework Principles
+Registry-Based Discovery : Extensions registered in structured directories
+Manifest-Driven Loading : Extension capabilities declared in manifest files
+Version Compatibility : Explicit compatibility declarations and validation
+Configuration Integration : Extensions integrate with system configuration hierarchy
+Isolation Boundaries : Extensions isolated from core system and each other
+Standard Interfaces : Consistent interfaces across extension types
+Development Patterns : Clear patterns for extension development
+Community Support : Framework designed for community contributions
+
+
+
+Consequences
+Positive
+Extensibility : System can be extended without modifying core code
+Community Growth : Enable community contributions and ecosystem development
+Organization Customization : Organizations can add proprietary integrations
+Innovation Support : New technologies can be integrated via extensions
+Isolation Safety : Extensions cannot compromise system stability
+Configuration Consistency : Extensions integrate with configuration-driven architecture
+Development Efficiency : Clear patterns reduce extension development time
+Version Management : Compatibility system prevents breaking changes
+Discovery Automation : Extensions automatically discovered and loaded
+
+
+Negative
+Complexity Increase : Additional layer of abstraction and management
+Performance Overhead : Extension loading and isolation adds runtime cost
+Testing Complexity : Must test extension framework and individual extensions
+Documentation Burden : Need comprehensive extension development documentation
+Version Coordination : Extension compatibility matrix requires management
+Support Complexity : Community extensions may require support resources
+
+
+Neutral
+Development Patterns : Different patterns for extension vs core development
+Quality Control : Community extensions may vary in quality and maintenance
+Security Considerations : Extensions need security review and validation
+Dependency Management : Extension dependencies must be managed carefully
+
+
+Alternatives Considered
+Alternative 1: Filesystem-Based Extensions
+Simple filesystem scanning for extension discovery.
+Rejected : No manifest validation or version compatibility checking. Fragile discovery mechanism.
+
+Alternative 2: Database-Backed Registry
+Store extension metadata in database for discovery.
+Rejected : Adds database dependency complexity. Over-engineering for extension discovery needs.
+
+Alternative 3: Package Manager Integration
+Use existing package managers (cargo, npm) for extension distribution.
+Rejected : Complicates installation and creates external dependencies. Not suitable for corporate environments.
+
+Alternative 4: Container-Based Extensions
+Each extension runs in isolated container.
+Rejected : Too heavy for simple extensions. Complicates development and deployment significantly.
+
+Alternative 5: Plugin Architecture
+Traditional plugin architecture with dynamic loading.
+Rejected : Complex for shell-based system. Security and isolation challenges in Nushell environment.
+
+
+Implementation Details
+Extension Discovery Process
+Directory Scanning : Scan extension directories for manifest files
+Manifest Validation : Parse and validate extension manifest
+Compatibility Check : Verify version compatibility requirements
+Dependency Resolution : Resolve extension dependencies
+Configuration Integration : Merge extension configuration schemas
+Entry Point Registration : Register extension entry points with system
+
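Step 1 of the discovery process (directory scanning) amounts to walking the registry tree for manifest files. A sketch, assuming the `extensions/<type>/<name>/extension.toml` layout shown earlier and simplified error handling:

```rust
use std::fs;
use std::path::{Path, PathBuf};

/// Walk an extensions root and collect every extension directory that
/// carries a manifest file. Unreadable directories are skipped silently,
/// which a real implementation would log instead.
fn discover_extensions(root: &Path) -> Vec<PathBuf> {
    let mut found = Vec::new();
    let Ok(types) = fs::read_dir(root) else { return found };
    for type_dir in types.flatten() {
        let Ok(exts) = fs::read_dir(type_dir.path()) else { continue };
        for ext_dir in exts.flatten() {
            if ext_dir.path().join("extension.toml").is_file() {
                found.push(ext_dir.path());
            }
        }
    }
    found.sort();
    found
}

fn main() -> std::io::Result<()> {
    let base = std::env::temp_dir().join("ext-demo");
    let _ = fs::remove_dir_all(&base);
    let root = base.join("extensions");
    fs::create_dir_all(root.join("providers/custom-cloud"))?;
    fs::write(
        root.join("providers/custom-cloud/extension.toml"),
        "[extension]\nname = \"custom-cloud\"\n",
    )?;
    let found = discover_extensions(&root);
    assert_eq!(found.len(), 1);
    println!("discovered: {}", found[0].display());
    Ok(())
}
```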
+Extension Loading Lifecycle
+# Extension discovery and validation
provisioning extension discover
provisioning extension validate --extension custom-provider
@@ -16572,14 +14415,10 @@ provisioning server create --provider custom-provider
# Extension management
provisioning extension disable custom-provider
provisioning extension update custom-provider
-```plaintext
-
-### Configuration Integration
-
-Extensions integrate with hierarchical configuration system:
-
-```toml
-# System configuration includes extension settings
+
+Configuration Integration
+Extensions integrate with the hierarchical configuration system:
+# System configuration includes extension settings
[custom_provider]
api_endpoint = "https://api.custom-cloud.com"
region = "us-west-1"
@@ -16587,29 +14426,25 @@ timeout = 30
# Extension configuration follows same hierarchy rules
# System defaults → User config → Environment config → Runtime
-```plaintext
-
-### Security and Isolation
-
-- **Sandboxed Execution**: Extensions run in controlled environment
-- **Permission Model**: Extensions declare required permissions in manifest
-- **Code Review**: Community extensions require review process
-- **Digital Signatures**: Extensions can be digitally signed for authenticity
-- **Audit Logging**: Extension usage tracked in system audit logs
-
-### Development Support
-
-- **Extension Templates**: Scaffold new extensions from templates
-- **Development Tools**: Testing and validation tools for extension developers
-- **Documentation Generation**: Automatic documentation from extension manifests
-- **Integration Testing**: Framework for testing extensions with core system
-
-## Extension Development Patterns
-
-### Provider Extension Pattern
-
-```nushell
-# extensions/providers/custom-cloud/nulib/provider.nu
+
+
+Security and Isolation
+Sandboxed Execution : Extensions run in controlled environment
+Permission Model : Extensions declare required permissions in manifest
+Code Review : Community extensions require review process
+Digital Signatures : Extensions can be digitally signed for authenticity
+Audit Logging : Extension usage tracked in system audit logs
+
+
+Development Support
+Extension Templates : Scaffold new extensions from templates
+Development Tools : Testing and validation tools for extension developers
+Documentation Generation : Automatic documentation from extension manifests
+Integration Testing : Framework for testing extensions with core system
+
+
+Extension Development Patterns
+Provider Extension Pattern
+# extensions/providers/custom-cloud/nulib/provider.nu
export def list-servers [] -> table {
http get $"($config.custom_provider.api_endpoint)/servers"
| from json
@@ -16626,12 +14461,9 @@ export def create-server [name: string, config: record] -> record {
http post $"($config.custom_provider.api_endpoint)/servers" $payload
| from json
}
-```plaintext
-
-### Task Service Extension Pattern
-
-```nushell
-# extensions/taskservs/custom-service/nulib/service.nu
+
+Task Service Extension Pattern
+# extensions/taskservs/custom-service/nulib/service.nu
export def install [server: string] -> nothing {
let manifest_data = open ./manifests/deployment.yaml
| str replace "{{server}}" $server
@@ -16642,24 +14474,23 @@ export def install [server: string] -> nothing {
export def uninstall [server: string] -> nothing {
kubectl delete deployment custom-service --server $server
}
-```plaintext
-
-## References
-
-- Workspace Isolation (ADR-003)
-- Configuration System Architecture (ADR-002)
-- Hybrid Architecture Integration (ADR-004)
-- Community Extension Guidelines
-- Extension Security Framework
-- Extension Development Documentation
+
+References
+Workspace Isolation (ADR-003)
+Configuration System Architecture (ADR-002)
+Hybrid Architecture Integration (ADR-004)
+Community Extension Guidelines
+Extension Security Framework
+Extension Development Documentation
+
Status : Implemented ✅
Date : 2025-09-30
Authors : Infrastructure Team
Related : ADR-001 (Project Structure), ADR-004 (Hybrid Architecture)
-The main provisioning CLI script (provisioning/core/nulib/provisioning) had grown to 1,329 lines with a massive 1,100+ line match statement handling all commands. This monolithic structure created several critical problems:
+The main provisioning CLI script (provisioning/core/nulib/provisioning) had grown to 1,329 lines with a massive 1,100+ line match statement handling all commands. This monolithic structure created multiple critical problems:
@@ -16714,195 +14545,161 @@ export def uninstall [server: string] -> nothing {
│ ├── orchestration.nu (64 lines)
│ ├── utilities.nu (157 lines)
│ └── workspace.nu (56 lines)
-```plaintext
-
-### Key Components
-
-#### 1. Centralized Flag Handling (`flags.nu`)
-
-Single source of truth for all flag parsing and argument building:
-
-```nushell
-export def parse_common_flags [flags: record]: nothing -> record
+
+
+Key Components
+1. Centralized Flag Handling (flags.nu)
+Single source of truth for all flag parsing and argument building:
+export def parse_common_flags [flags: record]: nothing -> record
export def build_module_args [flags: record, extra: string = ""]: nothing -> string
export def set_debug_env [flags: record]
export def get_debug_flag [flags: record]: nothing -> string
-```plaintext
-
-**Benefits:**
-
-- Eliminates 50+ instances of duplicate code
-- Single place to add/modify flags
-- Consistent flag handling across all commands
-- Reduced from 10 lines to 3 lines per command handler
-
-#### 2. Command Dispatcher (`dispatcher.nu`)
-
-Central routing with 80+ command mappings:
-
-```nushell
-export def get_command_registry []: nothing -> record # 80+ shortcuts
+
+Benefits:
+
+Eliminates 50+ instances of duplicate code
+Single place to add/modify flags
+Consistent flag handling across all commands
+Reduced from 10 lines to 3 lines per command handler
+
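The flag-handling functions above centralize one job: turning a normalized flag record into the argument string every handler forwards. A Rust rendering of that idea (the name mirrors `build_module_args`; the flag set is illustrative):

```rust
use std::collections::BTreeMap;

/// Build the forwarded argument string from a normalized flag map:
/// `None` marks a boolean switch, `Some(v)` a valued flag. BTreeMap keeps
/// the output deterministic for a given flag set.
fn build_module_args(flags: &BTreeMap<&str, Option<&str>>, extra: &str) -> String {
    let mut parts: Vec<String> = Vec::new();
    for (name, value) in flags {
        match value {
            Some(v) => parts.push(format!("--{name} {v}")), // valued flag
            None => parts.push(format!("--{name}")),        // boolean switch
        }
    }
    if !extra.is_empty() {
        parts.push(extra.to_string());
    }
    parts.join(" ")
}

fn main() {
    let flags = BTreeMap::from([("check", None), ("infra", Some("prod"))]);
    assert_eq!(build_module_args(&flags, "--wait"), "--check --infra prod --wait");
}
```

One function owns the translation, so adding a flag is a single-site change, which is the duplication-removal claim above.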
+2. Command Dispatcher (dispatcher.nu)
+Central routing with 80+ command mappings:
+export def get_command_registry []: nothing -> record # 80+ shortcuts
export def dispatch_command [args: list, flags: record] # Main router
-```plaintext
-
-**Features:**
-
-- Command registry with shortcuts (ws → workspace, orch → orchestrator, etc.)
-- Bi-directional help support (`provisioning ws help` works)
-- Domain-based routing (infrastructure, orchestration, development, etc.)
-- Special command handling (create, delete, price, etc.)
-
-#### 3. Domain Command Handlers (`commands/*.nu`)
-
-Seven focused modules organized by domain:
-
-| Module | Lines | Responsibility |
-|--------|-------|----------------|
-| `infrastructure.nu` | 117 | Server, taskserv, cluster, infra |
-| `orchestration.nu` | 64 | Workflow, batch, orchestrator |
-| `development.nu` | 72 | Module, layer, version, pack |
-| `workspace.nu` | 56 | Workspace, template |
-| `generation.nu` | 78 | Generate commands |
-| `utilities.nu` | 157 | SSH, SOPS, cache, providers |
-| `configuration.nu` | 316 | Env, show, init, validate |
-
-Each handler:
-
-- Exports `handle_<domain>_command` function
-- Uses shared flag handling
-- Provides error messages with usage hints
-- Isolated and testable
-
-## Architecture Principles
-
-### 1. Separation of Concerns
-
-- **Routing** → `dispatcher.nu`
-- **Flag parsing** → `flags.nu`
-- **Business logic** → `commands/*.nu`
-- **Help system** → `help_system.nu` (existing)
-
-### 2. Single Responsibility
-
-Each module has ONE clear purpose:
-
-- Command handlers execute specific domains
-- Dispatcher routes to correct handler
-- Flags module normalizes all inputs
-
-### 3. DRY (Don't Repeat Yourself)
-
-Eliminated repetition:
-
-- Flag handling: 50+ instances → 1 function
-- Command routing: Scattered logic → Command registry
-- Error handling: Consistent across all domains
-
-### 4. Open/Closed Principle
-
-- Open for extension: Add new handlers easily
-- Closed for modification: Core routing unchanged
-
-### 5. Dependency Inversion
-
-All handlers depend on abstractions (flag records, not concrete flags):
-
-```nushell
-# Handler signature
+
+Features:
+
+Command registry with shortcuts (ws → workspace, orch → orchestrator, etc.)
+Bi-directional help support (provisioning ws help works)
+Domain-based routing (infrastructure, orchestration, development, etc.)
+Special command handling (create, delete, price, etc.)
+
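The registry-plus-routing pattern reduces to a lookup table consulted before dispatch. A sketch with a handful of the 80+ mappings; unknown names pass through unchanged:

```rust
use std::collections::HashMap;

/// A slice of the dispatcher's shortcut registry: shortcut -> canonical
/// command name. The real table carries 80+ entries.
fn command_registry() -> HashMap<&'static str, &'static str> {
    HashMap::from([
        ("ws", "workspace"),
        ("orch", "orchestrator"),
        ("wf", "workflow"),
        ("flow", "workflow"),
        ("t", "taskserv"),
        ("task", "taskserv"),
    ])
}

/// Resolve a shortcut before routing; a name that is not a shortcut is
/// already canonical and passes through unchanged.
fn resolve_command(name: &str) -> String {
    command_registry().get(name).copied().unwrap_or(name).to_string()
}

fn main() {
    assert_eq!(resolve_command("ws"), "workspace");
    assert_eq!(resolve_command("server"), "server"); // not a shortcut: unchanged
}
```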
+3. Domain Command Handlers (commands/*.nu)
+Seven focused modules organized by domain:
+Module Lines Responsibility
+infrastructure.nu 117 Server, taskserv, cluster, infra
+orchestration.nu 64 Workflow, batch, orchestrator
+development.nu 72 Module, layer, version, pack
+workspace.nu 56 Workspace, template
+generation.nu 78 Generate commands
+utilities.nu 157 SSH, SOPS, cache, providers
+configuration.nu 316 Env, show, init, validate
+
+
+Each handler:
+
+Exports handle_<domain>_command function
+Uses shared flag handling
+Provides error messages with usage hints
+Isolated and testable
+
+
+
+Architecture Principles
+1. Separation of Concerns
+Routing → dispatcher.nu
+Flag parsing → flags.nu
+Business logic → commands/*.nu
+Help system → help_system.nu (existing)
+
+2. Single Responsibility
+Each module has ONE clear purpose:
+
+Command handlers execute specific domains
+Dispatcher routes to correct handler
+Flags module normalizes all inputs
+
+3. DRY (Don't Repeat Yourself)
+Eliminated repetition:
+
+Flag handling: 50+ instances → 1 function
+Command routing: Scattered logic → Command registry
+Error handling: Consistent across all domains
+
+
+4. Open/Closed Principle
+Open for extension: Add new handlers easily
+Closed for modification: Core routing unchanged
+
+5. Dependency Inversion
+All handlers depend on abstractions (flag records, not concrete flags):
+# Handler signature
export def handle_infrastructure_command [
command: string
ops: string
flags: record # ⬅️ Abstraction, not concrete flags
]
-```plaintext
-
-## Implementation Details
-
-### Migration Path (Completed in 2 Phases)
-
-**Phase 1: Foundation**
-
-1. ✅ Created `commands/` directory structure
-2. ✅ Created `flags.nu` with common flag handling
-3. ✅ Created initial command handlers (infrastructure, utilities, configuration)
-4. ✅ Created `dispatcher.nu` with routing logic
-5. ✅ Refactored main file (1,329 → 211 lines)
-6. ✅ Tested basic functionality
-
-**Phase 2: Completion**
-
-1. ✅ Fixed bi-directional help (`provisioning ws help` now works)
-2. ✅ Created remaining handlers (orchestration, development, workspace, generation)
-3. ✅ Removed duplicate code from dispatcher
-4. ✅ Added comprehensive test suite
-5. ✅ Verified all shortcuts work
-
-### Bi-directional Help System
-
-Users can now access help in multiple ways:
-
-```bash
-# All these work equivalently:
+
+
+Implementation Details
+Migration Path (Completed in 2 Phases)
+Phase 1: Foundation
+
+✅ Created commands/ directory structure
+✅ Created flags.nu with common flag handling
+✅ Created initial command handlers (infrastructure, utilities, configuration)
+✅ Created dispatcher.nu with routing logic
+✅ Refactored main file (1,329 → 211 lines)
+✅ Tested basic functionality
+
+Phase 2: Completion
+
+✅ Fixed bi-directional help (provisioning ws help now works)
+✅ Created remaining handlers (orchestration, development, workspace, generation)
+✅ Removed duplicate code from dispatcher
+✅ Added comprehensive test suite
+✅ Verified all shortcuts work
+
+Bi-directional Help System
+Users can now access help in multiple ways:
+# All these work equivalently:
provisioning help workspace
provisioning workspace help # ⬅️ NEW: Bi-directional
provisioning ws help # ⬅️ NEW: With shortcuts
provisioning help ws # ⬅️ NEW: Shortcut in help
-```plaintext
-
-**Implementation:**
-
-```nushell
-# Intercept "command help" → "help command"
+
+Implementation:
+# Intercept "command help" → "help command"
let first_op = if ($ops_list | length) > 0 { ($ops_list | get 0) } else { "" }
if $first_op in ["help" "h"] {
exec $"($env.PROVISIONING_NAME)" help $task --notitles
}
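The interception above is an argv rewrite: `<command> help` becomes `help <command>`. The same logic in a Rust sketch:

```rust
/// Bi-directional help: if the first operation after a command is `help`
/// (or the `h` shortcut), rewrite the argv as `help <command>` so both
/// orders route through the same help system.
fn normalize_help(args: Vec<String>) -> Vec<String> {
    if args.len() >= 2 && (args[1] == "help" || args[1] == "h") {
        return vec!["help".to_string(), args[0].clone()];
    }
    args
}

fn main() {
    let argv = vec!["workspace".to_string(), "help".to_string()];
    assert_eq!(normalize_help(argv), vec!["help".to_string(), "workspace".to_string()]);
}
```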
-```plaintext
-
-### Command Shortcuts
-
-Comprehensive shortcut system with 30+ mappings:
-
-**Infrastructure:**
-
-- `s` → `server`
-- `t`, `task` → `taskserv`
-- `cl` → `cluster`
-- `i` → `infra`
-
-**Orchestration:**
-
-- `wf`, `flow` → `workflow`
-- `bat` → `batch`
-- `orch` → `orchestrator`
-
-**Development:**
-
-- `mod` → `module`
-- `lyr` → `layer`
-
-**Workspace:**
-
-- `ws` → `workspace`
-- `tpl`, `tmpl` → `template`
-
-## Testing
-
-Comprehensive test suite created (`tests/test_provisioning_refactor.nu`):
-
-### Test Coverage
-
-- ✅ Main help display
-- ✅ Category help (infrastructure, orchestration, development, workspace)
-- ✅ Bi-directional help routing
-- ✅ All command shortcuts
-- ✅ Category shortcut help
-- ✅ Command routing to correct handlers
-
-### Test Results
-
-```plaintext
-📋 Testing main help... ✅
+
+Command Shortcuts
+Comprehensive shortcut system with 30+ mappings:
+Infrastructure:
+
+s → server
+t, task → taskserv
+cl → cluster
+i → infra
+
+Orchestration:
+
+wf, flow → workflow
+bat → batch
+orch → orchestrator
+
+Development:
+
+mod → module
+lyr → layer
+
+Workspace:
+
+ws → workspace
+tpl, tmpl → template
+
+Testing
+Comprehensive test suite created (tests/test_provisioning_refactor.nu):
+
+Test Coverage
+✅ Main help display
+✅ Category help (infrastructure, orchestration, development, workspace)
+✅ Bi-directional help routing
+✅ All command shortcuts
+✅ Category shortcut help
+✅ Command routing to correct handlers
+
+Test Results
+📋 Testing main help... ✅
📋 Testing category help... ✅
🔄 Testing bi-directional help... ✅
⚡ Testing command shortcuts... ✅
@@ -16910,76 +14707,67 @@ Comprehensive test suite created (`tests/test_provisioning_refactor.nu`):
🎯 Testing command routing... ✅
📊 TEST RESULTS: 6 passed, 0 failed
-```plaintext
-
-## Results
-
-### Quantitative Improvements
-
-| Metric | Before | After | Improvement |
-|--------|--------|-------|-------------|
-| **Main file size** | 1,329 lines | 211 lines | **84% reduction** |
-| **Command handler** | 1 massive match (1,100+ lines) | 7 focused modules | **Domain separation** |
-| **Flag handling** | Repeated 50+ times | 1 function | **98% duplication removal** |
-| **Code per command** | 10 lines | 3 lines | **70% reduction** |
-| **Modules count** | 1 monolith | 9 modules | **Modular architecture** |
-| **Test coverage** | None | 6 test groups | **Comprehensive testing** |
-
-### Qualitative Improvements
-
-**Maintainability**
-
-- ✅ Easy to find specific command logic
-- ✅ Clear separation of concerns
-- ✅ Self-documenting structure
-- ✅ Focused modules (< 320 lines each)
-
-**Extensibility**
-
-- ✅ Add new commands: Just update appropriate handler
-- ✅ Add new flags: Single function update
-- ✅ Add new shortcuts: Update command registry
-- ✅ No massive file edits required
-
-**Testability**
-
-- ✅ Isolated command handlers
-- ✅ Mockable dependencies
-- ✅ Test individual domains
-- ✅ Fast test execution
-
-**Developer Experience**
-
-- ✅ Lower cognitive load
-- ✅ Faster onboarding
-- ✅ Easier code review
-- ✅ Better IDE navigation
-
-## Trade-offs
-
-### Advantages
-
-1. **Dramatically reduced complexity**: 84% smaller main file
-2. **Better organization**: Domain-focused modules
-3. **Easier testing**: Isolated, testable units
-4. **Improved maintainability**: Clear structure, less duplication
-5. **Enhanced UX**: Bi-directional help, shortcuts
-6. **Future-proof**: Easy to extend
-
-### Disadvantages
-
-1. **More files**: 1 file → 9 files (but smaller, focused)
-2. **Module imports**: Need to import multiple modules (automated via mod.nu)
-3. **Learning curve**: New structure requires documentation (this ADR)
-
-**Decision**: Advantages significantly outweigh disadvantages.
-
-## Examples
-
-### Before: Repetitive Flag Handling
-
-```nushell
-"server" => {
+
+Results
+
+Quantitative Improvements
+
+Metric | Before | After | Improvement
+Main file size | 1,329 lines | 211 lines | 84% reduction
+Command handler | 1 massive match (1,100+ lines) | 7 focused modules | Domain separation
+Flag handling | Repeated 50+ times | 1 function | 98% duplication removal
+Code per command | 10 lines | 3 lines | 70% reduction
+Modules count | 1 monolith | 9 modules | Modular architecture
+Test coverage | None | 6 test groups | Comprehensive testing
+
+Qualitative Improvements
+
+Maintainability
+
+✅ Easy to find specific command logic
+✅ Clear separation of concerns
+✅ Self-documenting structure
+✅ Focused modules (< 320 lines each)
+
+Extensibility
+
+✅ Add new commands: Just update appropriate handler
+✅ Add new flags: Single function update
+✅ Add new shortcuts: Update command registry
+✅ No massive file edits required
+
+Testability
+
+✅ Isolated command handlers
+✅ Mockable dependencies
+✅ Test individual domains
+✅ Fast test execution
+
+Developer Experience
+
+✅ Lower cognitive load
+✅ Faster onboarding
+✅ Easier code review
+✅ Better IDE navigation
+
+Trade-offs
+
+Advantages
+
+Dramatically reduced complexity : 84% smaller main file
+Better organization : Domain-focused modules
+Easier testing : Isolated, testable units
+Improved maintainability : Clear structure, less duplication
+Enhanced UX : Bi-directional help, shortcuts
+Future-proof : Easy to extend
+
+Disadvantages
+
+More files : 1 file → 9 files (but smaller, focused)
+Module imports : Need to import multiple modules (automated via mod.nu)
+Learning curve : New structure requires documentation (this ADR)
+
+Decision : Advantages significantly outweigh disadvantages.
+Examples
+
+Before: Repetitive Flag Handling
+
+"server" => {
let use_check = if $check { "--check "} else { "" }
let use_yes = if $yes { "--yes" } else { "" }
let use_wait = if $wait { "--wait" } else { "" }
@@ -16990,62 +14778,50 @@ Comprehensive test suite created (`tests/test_provisioning_refactor.nu`):
let arg_include_notuse = if $include_notuse { $"--include_notuse "} else { "" }
run_module $"($str_ops) ($str_infra) ($use_check)..." "server" --exec
}
-```plaintext
-
-### After: Clean, Reusable
-
-```nushell
-def handle_server [ops: string, flags: record] {
+
+After: Clean, Reusable
+
+def handle_server [ops: string, flags: record] {
let args = build_module_args $flags $ops
run_module $args "server" --exec
}
-```plaintext
-
-**Reduction: 10 lines → 3 lines (70% reduction)**
-
-## Future Considerations
-
-### Potential Enhancements
-
-1. **Unit test expansion**: Add tests for each command handler
-2. **Integration tests**: End-to-end workflow tests
-3. **Performance profiling**: Measure routing overhead (expected to be negligible)
-4. **Documentation generation**: Auto-generate docs from handlers
-5. **Plugin architecture**: Allow third-party command extensions
-
-### Migration Guide for Contributors
-
-See `docs/development/COMMAND_HANDLER_GUIDE.md` for:
-
-- How to add new commands
-- How to modify existing handlers
-- How to add new shortcuts
-- Testing guidelines
-
-## Related Documentation
-
-- **Architecture Overview**: `docs/architecture/system-overview.md`
-- **Developer Guide**: `docs/development/COMMAND_HANDLER_GUIDE.md`
-- **Main Project Docs**: `CLAUDE.md` (updated with new structure)
-- **Test Suite**: `tests/test_provisioning_refactor.nu`
-
-## Conclusion
-
-This refactoring transforms the provisioning CLI from a monolithic, hard-to-maintain script into a modular, well-organized system following software engineering best practices. The 84% reduction in main file size, elimination of code duplication, and comprehensive test coverage position the project for sustainable long-term growth.
-
-The new architecture enables:
-
-- **Faster development**: Add commands in minutes, not hours
-- **Better quality**: Isolated testing catches bugs early
-- **Easier maintenance**: Clear structure reduces cognitive load
-- **Enhanced UX**: Shortcuts and bi-directional help improve usability
-
-**Status**: Successfully implemented and tested. All commands operational. Ready for production use.
-
----
-
-*This ADR documents a major architectural improvement completed on 2025-09-30.*
+Reduction: 10 lines → 3 lines (70% reduction)
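The pattern behind the reduction is a single, centralized flag builder. The real helper (`build_module_args`) is Nushell; this Rust sketch only models the idea, with illustrative names:

```rust
// Illustrative model of a centralized flag builder: turn a record of
// flag values into one argument string, so each handler stays tiny.
struct Flags {
    check: bool,
    yes: bool,
    wait: bool,
}

fn build_module_args(flags: &Flags, ops: &str) -> String {
    let mut args = vec![ops.to_string()];
    if flags.check { args.push("--check".into()); }
    if flags.yes { args.push("--yes".into()); }
    if flags.wait { args.push("--wait".into()); }
    args.join(" ")
}

fn main() {
    let f = Flags { check: true, yes: false, wait: true };
    assert_eq!(build_module_args(&f, "create"), "create --check --wait");
    println!("ok");
}
```

Every handler then delegates to this one function instead of repeating the `if $check { "--check " } else { "" }` pattern.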
+
+Future Considerations
+
+Potential Enhancements
+
+Unit test expansion : Add tests for each command handler
+Integration tests : End-to-end workflow tests
+Performance profiling : Measure routing overhead (expected to be negligible)
+Documentation generation : Auto-generate docs from handlers
+Plugin architecture : Allow third-party command extensions
+
+Migration Guide for Contributors
+
+See docs/development/COMMAND_HANDLER_GUIDE.md for:
+
+How to add new commands
+How to modify existing handlers
+How to add new shortcuts
+Testing guidelines
+
+Related Documentation
+
+Architecture Overview : docs/architecture/system-overview.md
+Developer Guide : docs/development/COMMAND_HANDLER_GUIDE.md
+Main Project Docs : CLAUDE.md (updated with new structure)
+Test Suite : tests/test_provisioning_refactor.nu
+
+Conclusion
+
+This refactoring transforms the provisioning CLI from a monolithic, hard-to-maintain script into a modular, well-organized system following software engineering best practices. The 84% reduction in main file size, elimination of code duplication, and comprehensive test coverage position the project for sustainable long-term growth.
+The new architecture enables:
+
+Faster development : Add commands in minutes, not hours
+Better quality : Isolated testing catches bugs early
+Easier maintenance : Clear structure reduces cognitive load
+Enhanced UX : Shortcuts and bi-directional help improve usability
+
+Status : Successfully implemented and tested. All commands operational. Ready for production use.
+
+This ADR documents a major architectural improvement completed on 2025-09-30.
Status : Accepted
Date : 2025-10-08
@@ -17056,7 +14832,7 @@ The new architecture enables:
Complexity : Supporting 4 different backends increased maintenance burden
-Dependencies : AWS SDK added significant compile time (~30s) and binary size
+Dependencies : AWS SDK added significant compile time (~30 s) and binary size
Confusion : No clear guidance on which backend to use when
Cloud Lock-in : AWS KMS dependency limited infrastructure flexibility
Operational Overhead : Vault requires server setup even for simple dev environments
@@ -17100,8 +14876,8 @@ The new architecture enables:
❌ HashiCorp Vault (redundant with Cosmian)
❌ AWS KMS (cloud lock-in, complexity)
-
-
+
+
Simpler Code : 2 backends instead of 4 reduces complexity by 50%
Faster Compilation : Removing AWS SDK saves ~30 seconds compile time
@@ -17112,17 +14888,17 @@ The new architecture enables:
Easier Testing : Age backend requires no setup
Reduced Dependencies : Fewer external crates to maintain
-
+
Migration Required : Existing Vault/AWS KMS users must migrate
Learning Curve : Teams must learn Age and Cosmian
Cosmian Dependency : Production depends on Cosmian availability
Cost : Cosmian may have licensing costs (cloud or self-hosted)
-
+
Feature Parity : Cosmian provides all features Vault/AWS had
-API Compatibility : Encrypt/decrypt API remains largely the same
+API Compatibility : Encrypt/decrypt API remains largely the same
Configuration Change : TOML config structure updated but similar
@@ -17170,7 +14946,7 @@ The new architecture enables:
reqwest (for Cosmian HTTP API)
base64, serde, tokio, etc.
-
+
# 1. Install Age
brew install age # or apt install age
@@ -17190,7 +14966,7 @@ age-keygen -y ~/.config/provisioning/age/private_key.txt > ~/.config/provisio
# 5. Deploy new KMS service
See docs/migration/KMS_SIMPLIFICATION.md for detailed steps.
-
+
Pros :
@@ -17264,7 +15040,7 @@ age-keygen -y ~/.config/provisioning/age/private_key.txt > ~/.config/provisio
Improvement : 33% faster
-
+
Age Security : X25519 (Curve25519) encryption, modern and secure
Cosmian Security : Confidential computing, zero-knowledge, enterprise-grade
@@ -17278,11 +15054,11 @@ age-keygen -y ~/.config/provisioning/age/private_key.txt > ~/.config/provisio
Cosmian Tests : Require test server (marked as #[ignore])
Migration Tests : Verify old configs fail gracefully
-
+
@@ -17313,7 +15089,7 @@ age-keygen -y ~/.config/provisioning/age/private_key.txt > ~/.config/provisio
Security : Critical for production infrastructure access
Auditability : Compliance requirements demand clear authorization policies
Flexibility : Policies change more frequently than code
-Performance : Low-latency authorization decisions (<10ms)
+Performance : Low-latency authorization decisions (<10 ms)
Maintainability : Security team should update policies without developers
Type Safety : Prevent policy errors before deployment
@@ -17363,7 +15139,7 @@ age-keygen -y ~/.config/provisioning/age/private_key.txt > ~/.config/provisio
Cons :
-Relatively new (2023)
+Recently introduced (2023)
Smaller ecosystem than OPA
Learning curve for policy authors
@@ -17385,14 +15161,14 @@ age-keygen -y ~/.config/provisioning/age/private_key.txt > ~/.config/provisio
Type Safety : Cedar’s schema validation prevents policy errors before deployment
-Performance : Native Rust library, no network overhead, <1ms authorization decisions
+Performance : Native Rust library, no network overhead, <1 ms authorization decisions
Auditability : Declarative policies in version control
Hot Reload : Update policies without orchestrator restart
AWS Standard : Used in production by AWS for AVP (Amazon Verified Permissions)
Deny-by-Default : Secure by design
-
-
+
+
┌─────────────────────────────────────────────────────────┐
│ Orchestrator │
├─────────────────────────────────────────────────────────┤
@@ -17414,59 +15190,63 @@ age-keygen -y ~/.config/provisioning/age/private_key.txt > ~/.config/provisio
│ Allow / Deny │
│ │
└─────────────────────────────────────────────────────────┘
-```plaintext
-
-#### Policy Organization
-
-```plaintext
-provisioning/config/cedar-policies/
+
+Policy Organization
+
+provisioning/config/cedar-policies/
├── schema.cedar # Entity and action definitions
├── production.cedar # Production environment policies
├── development.cedar # Development environment policies
├── admin.cedar # Administrative policies
└── README.md # Documentation
-```plaintext
-
-#### Rust Implementation
-
-```plaintext
-provisioning/platform/orchestrator/src/security/
+
+Rust Implementation
+
+provisioning/platform/orchestrator/src/security/
├── cedar.rs # Cedar engine integration (450 lines)
├── policy_loader.rs # Policy loading with hot reload (320 lines)
├── authorization.rs # Middleware integration (380 lines)
├── mod.rs # Module exports
└── tests.rs # Comprehensive tests (450 lines)
-```plaintext
-
-#### Key Components
-
-1. **CedarEngine**: Core authorization engine
- - Load policies from strings
- - Load schema for validation
- - Authorize requests
- - Policy statistics
-
-2. **PolicyLoader**: File-based policy management
- - Load policies from directory
- - Hot reload on file changes (notify crate)
- - Validate policy syntax
- - Schema validation
-
-3. **Authorization Middleware**: Axum integration
- - Extract JWT claims
- - Build authorization context (IP, MFA, time)
- - Check authorization
- - Return 403 Forbidden on deny
-
-4. **Policy Files**: Declarative authorization rules
- - Production: MFA, approvals, IP restrictions, business hours
- - Development: Permissive for developers
- - Admin: Platform admin, SRE, audit team policies
-
-#### Context Variables
-
-```rust
-AuthorizationContext {
+
+Key Components
+
+CedarEngine : Core authorization engine
+
+Load policies from strings
+Load schema for validation
+Authorize requests
+Policy statistics
+
+
+
+PolicyLoader : File-based policy management
+
+Load policies from directory
+Hot reload on file changes (notify crate)
+Validate policy syntax
+Schema validation
+
+
+
+Authorization Middleware : Axum integration
+
+Extract JWT claims
+Build authorization context (IP, MFA, time)
+Check authorization
+Return 403 Forbidden on deny
+
+
+
+Policy Files : Declarative authorization rules
+
+Production: MFA, approvals, IP restrictions, business hours
+Development: Permissive for developers
+Admin: Platform admin, SRE, audit team policies
+
+
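The PolicyLoader's hot reload can be approximated with modification-time polling. A minimal stand-alone Rust sketch (the real loader is event-driven via the `notify` crate; names here are illustrative):

```rust
use std::fs;
use std::time::SystemTime;

// Minimal stand-in for a hot-reloading policy loader: re-read a policy
// file only when its modification time changes.
struct PolicyFile {
    path: String,
    last_modified: SystemTime,
    contents: String,
}

impl PolicyFile {
    fn load(path: &str) -> std::io::Result<Self> {
        let meta = fs::metadata(path)?;
        Ok(Self {
            path: path.to_string(),
            last_modified: meta.modified()?,
            contents: fs::read_to_string(path)?,
        })
    }

    // Returns true if the file changed and was reloaded.
    fn reload_if_changed(&mut self) -> std::io::Result<bool> {
        let modified = fs::metadata(&self.path)?.modified()?;
        if modified > self.last_modified {
            self.contents = fs::read_to_string(&self.path)?;
            self.last_modified = modified;
            return Ok(true);
        }
        Ok(false)
    }
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("demo.cedar");
    fs::write(&path, "permit(...);")?;
    let mut pf = PolicyFile::load(path.to_str().unwrap())?;
    assert!(!pf.reload_if_changed()?); // file unchanged since load
    println!("ok");
    Ok(())
}
```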
+Context Variables
+
+AuthorizationContext {
mfa_verified: bool, // MFA verification status
ip_address: String, // Client IP address
time: String, // ISO 8601 timestamp
@@ -17474,13 +15254,9 @@ AuthorizationContext {
reason: Option<String>, // Reason for operation
force: bool, // Force flag
additional: HashMap, // Additional context
-}
-```plaintext
-
-#### Example Policy
-
-```cedar
-// Production deployments require MFA verification
+}
+
+Example Policy
+
+// Production deployments require MFA verification
@id("prod-deploy-mfa")
@description("All production deployments must have MFA verification")
permit (
@@ -17490,141 +15266,127 @@ permit (
) when {
context.mfa_verified == true
};
-```plaintext
-
-### Integration Points
-
-1. **JWT Tokens**: Extract principal and context from validated JWT
-2. **Audit System**: Log all authorization decisions
-3. **Control Center**: UI for policy management and testing
-4. **CLI**: Policy validation and testing commands
-
-### Security Best Practices
-
-1. **Deny by Default**: Cedar defaults to deny all actions
-2. **Schema Validation**: Type-check policies before loading
-3. **Version Control**: All policies in git for auditability
-4. **Principle of Least Privilege**: Grant minimum necessary permissions
-5. **Defense in Depth**: Combine with JWT validation and rate limiting
-6. **Separation of Concerns**: Security team owns policies, developers own code
-
-## Consequences
-
-### Positive
-
-1. ✅ **Auditable**: All policies in version control
-2. ✅ **Type-Safe**: Schema validation prevents errors
-3. ✅ **Fast**: <1ms authorization decisions
-4. ✅ **Maintainable**: Security team can update policies independently
-5. ✅ **Hot Reload**: No downtime for policy updates
-6. ✅ **Testable**: Comprehensive test suite for policies
-7. ✅ **Declarative**: Clear intent, no hidden logic
-
-### Negative
-
-1. ❌ **Learning Curve**: Team must learn Cedar policy language
-2. ❌ **New Technology**: Cedar is relatively new (2023)
-3. ❌ **Ecosystem**: Smaller community than OPA
-4. ❌ **Tooling**: Limited IDE support compared to Rego
-
-### Neutral
-
-1. 🔶 **Migration**: Existing authorization logic needs migration to Cedar
-2. 🔶 **Policy Complexity**: Complex rules may be harder to express
-3. 🔶 **Debugging**: Policy debugging requires understanding Cedar evaluation
-
-## Compliance
-
-### Security Standards
-
-- **SOC 2**: Auditable access control policies
-- **ISO 27001**: Access control management
-- **GDPR**: Data access authorization and logging
-- **NIST 800-53**: AC-3 Access Enforcement
-
-### Audit Requirements
-
-All authorization decisions include:
-
-- Principal (user/team)
-- Action performed
-- Resource accessed
-- Context (MFA, IP, time)
-- Decision (allow/deny)
-- Policies evaluated
-
-## Migration Path
-
-### Phase 1: Implementation (Completed)
-
-- ✅ Cedar engine integration
-- ✅ Policy loader with hot reload
-- ✅ Authorization middleware
-- ✅ Production, development, and admin policies
-- ✅ Comprehensive tests
-
-### Phase 2: Rollout (Next)
-
-- 🔲 Enable Cedar authorization in orchestrator
-- 🔲 Migrate existing authorization logic to Cedar policies
-- 🔲 Add authorization checks to all API endpoints
-- 🔲 Integrate with audit logging
-
-### Phase 3: Enhancement (Future)
-
-- 🔲 Control Center policy editor UI
-- 🔲 Policy testing UI
-- 🔲 Policy simulation and dry-run mode
-- 🔲 Policy analytics and insights
-- 🔲 Advanced context variables (location, device type)
-
-## Alternatives Considered
-
-### Alternative 1: Continue with Code-Based Authorization
-
-Keep authorization logic in Rust/Nushell code.
-
-**Rejected Because**:
-
-- Not auditable
-- Requires code changes for policy updates
-- Difficult to test all combinations
-- Not compliant with security standards
-
-### Alternative 2: Hybrid Approach
-
-Use Cedar for high-level policies, code for fine-grained checks.
-
-**Rejected Because**:
-
-- Complexity of two authorization systems
-- Unclear separation of concerns
-- Harder to audit
-
-## References
-
-- **Cedar Documentation**: <https://docs.cedarpolicy.com/>
-- **Cedar GitHub**: <https://github.com/cedar-policy/cedar>
-- **AWS AVP**: <https://aws.amazon.com/verified-permissions/>
-- **Policy Files**: `/provisioning/config/cedar-policies/`
-- **Implementation**: `/provisioning/platform/orchestrator/src/security/`
-
-## Related ADRs
-
-- ADR-003: JWT Token-Based Authentication
-- ADR-004: Audit Logging System
-- ADR-005: KMS Key Management
-
-## Notes
-
-Cedar policy language is inspired by decades of authorization research (XACML, AWS IAM) and production experience at AWS. It balances expressiveness with safety.
-
----
-
-**Approved By**: Architecture Team
-**Implementation Date**: 2025-10-08
-**Review Date**: 2026-01-08 (Quarterly)
+
+Integration Points
+
+JWT Tokens : Extract principal and context from validated JWT
+Audit System : Log all authorization decisions
+Control Center : UI for policy management and testing
+CLI : Policy validation and testing commands
+
+Security Best Practices
+
+Deny by Default : Cedar defaults to deny all actions
+Schema Validation : Type-check policies before loading
+Version Control : All policies in git for auditability
+Principle of Least Privilege : Grant minimum necessary permissions
+Defense in Depth : Combine with JWT validation and rate limiting
+Separation of Concerns : Security team owns policies, developers own code
+
+Consequences
+
+Positive
+
+✅ Auditable : All policies in version control
+✅ Type-Safe : Schema validation prevents errors
+✅ Fast : <1 ms authorization decisions
+✅ Maintainable : Security team can update policies independently
+✅ Hot Reload : No downtime for policy updates
+✅ Testable : Comprehensive test suite for policies
+✅ Declarative : Clear intent, no hidden logic
+
+Negative
+
+❌ Learning Curve : Team must learn Cedar policy language
+❌ New Technology : Cedar is relatively new (2023)
+❌ Ecosystem : Smaller community than OPA
+❌ Tooling : Limited IDE support compared to Rego
+
+Neutral
+
+🔶 Migration : Existing authorization logic needs migration to Cedar
+🔶 Policy Complexity : Complex rules may be harder to express
+🔶 Debugging : Policy debugging requires understanding Cedar evaluation
+
+Compliance
+
+Security Standards
+
+SOC 2 : Auditable access control policies
+ISO 27001 : Access control management
+GDPR : Data access authorization and logging
+NIST 800-53 : AC-3 Access Enforcement
+
+
+Audit Requirements
+
+All authorization decisions include:
+
+Principal (user/team)
+Action performed
+Resource accessed
+Context (MFA, IP, time)
+Decision (allow/deny)
+Policies evaluated
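One plausible shape for a record carrying those fields (field names illustrative; the actual log format is structured JSON):

```rust
use std::collections::HashMap;

// Illustrative audit record for an authorization decision.
#[derive(Debug)]
struct AuditRecord {
    principal: String,                // user or team
    action: String,                   // action performed
    resource: String,                 // resource accessed
    context: HashMap<String, String>, // MFA, IP, time
    decision: bool,                   // allow = true, deny = false
    policies_evaluated: Vec<String>,
}

fn main() {
    let rec = AuditRecord {
        principal: "team:sre".into(),
        action: "deploy".into(),
        resource: "production".into(),
        context: HashMap::from([("mfa_verified".into(), "true".into())]),
        decision: true,
        policies_evaluated: vec!["prod-deploy-mfa".into()],
    };
    assert!(rec.decision);
    assert_eq!(rec.policies_evaluated.len(), 1);
    println!("{rec:?}");
}
```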
+
+Migration Path
+
+Phase 1: Implementation (Completed)
+
+✅ Cedar engine integration
+✅ Policy loader with hot reload
+✅ Authorization middleware
+✅ Production, development, and admin policies
+✅ Comprehensive tests
+
+Phase 2: Rollout (Next)
+
+🔲 Enable Cedar authorization in orchestrator
+🔲 Migrate existing authorization logic to Cedar policies
+🔲 Add authorization checks to all API endpoints
+🔲 Integrate with audit logging
+
+Phase 3: Enhancement (Future)
+
+🔲 Control Center policy editor UI
+🔲 Policy testing UI
+🔲 Policy simulation and dry-run mode
+🔲 Policy analytics and insights
+🔲 Advanced context variables (location, device type)
+
+Alternatives Considered
+
+Alternative 1: Continue with Code-Based Authorization
+
+Keep authorization logic in Rust/Nushell code.
+Rejected Because :
+
+Not auditable
+Requires code changes for policy updates
+Difficult to test all combinations
+Not compliant with security standards
+
+
+Alternative 2: Hybrid Approach
+
+Use Cedar for high-level policies, code for fine-grained checks.
+Rejected Because :
+
+Complexity of two authorization systems
+Unclear separation of concerns
+Harder to audit
+
+
+References
+
+Cedar Documentation: https://docs.cedarpolicy.com/
+Cedar GitHub: https://github.com/cedar-policy/cedar
+AWS AVP: https://aws.amazon.com/verified-permissions/
+Policy Files: /provisioning/config/cedar-policies/
+Implementation: /provisioning/platform/orchestrator/src/security/
+
+Related ADRs
+
+ADR-003: JWT Token-Based Authentication
+ADR-004: Audit Logging System
+ADR-005: KMS Key Management
+
+
+Notes
+
+Cedar policy language is inspired by decades of authorization research (XACML, AWS IAM) and production experience at AWS. It balances expressiveness with safety.
+
+Approved By : Architecture Team
+Implementation Date : 2025-10-08
+Review Date : 2026-01-08 (Quarterly)
Status : Implemented
Date : 2025-10-08
@@ -17653,7 +15415,7 @@ Cedar policy language is inspired by decades of authorization research (XACML, A
Features :
RS256 asymmetric signing
-Access tokens (15min) + refresh tokens (7d)
+Access tokens (15 min) + refresh tokens (7 days)
Token rotation and revocation
Argon2id password hashing
5 user roles (Admin, Developer, Operator, Viewer, Auditor)
@@ -17719,7 +15481,7 @@ Cedar policy language is inspired by decades of authorization research (XACML, A
Location : provisioning/platform/orchestrator/src/secrets/
Features :
-AWS STS temporary credentials (15min-12h)
+AWS STS temporary credentials (15 min-12 h)
SSH key pair generation (Ed25519)
UpCloud API subaccounts
TTL manager with auto-cleanup
@@ -17736,7 +15498,7 @@ Cedar policy language is inspired by decades of authorization research (XACML, A
Vault OTP (one-time passwords)
Vault CA (certificate authority signing)
Auto-deployment to authorized_keys
-Background cleanup every 5min
+Background cleanup every 5 min
API : 7 endpoints
CLI : 10 commands
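The TTL manager with auto-cleanup described above can be sketched as an expiry map plus a cleanup pass (illustrative only; the real service runs cleanup in a background task):

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Illustrative TTL manager: secrets expire after a fixed lifetime and a
// periodic cleanup pass removes them.
struct TtlStore {
    secrets: HashMap<String, Instant>, // name -> expiry instant
}

impl TtlStore {
    fn new() -> Self { Self { secrets: HashMap::new() } }

    fn issue(&mut self, name: &str, ttl: Duration) {
        self.secrets.insert(name.to_string(), Instant::now() + ttl);
    }

    // Drop every secret whose expiry has passed; returns how many were removed.
    fn cleanup(&mut self) -> usize {
        let now = Instant::now();
        let before = self.secrets.len();
        self.secrets.retain(|_, expiry| *expiry > now);
        before - self.secrets.len()
    }
}

fn main() {
    let mut store = TtlStore::new();
    store.issue("ssh-key-1", Duration::from_secs(0)); // already expired
    store.issue("sts-cred-1", Duration::from_secs(3600));
    assert_eq!(store.cleanup(), 1);
    assert_eq!(store.secrets.len(), 1);
    println!("ok");
}
```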
@@ -17747,12 +15509,12 @@ Cedar policy language is inspired by decades of authorization research (XACML, A
Location : provisioning/platform/control-center/src/mfa/
Features :
-TOTP (RFC 6238, 6-digit codes, 30s window)
+TOTP (RFC 6238, 6-digit codes, 30 s window)
WebAuthn/FIDO2 (YubiKey, Touch ID, Windows Hello)
QR code generation
10 backup codes per user
Multiple devices per user
-Rate limiting (5 attempts/5min)
+Rate limiting (5 attempts/5 min)
API : 13 endpoints
CLI : 15 commands
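The 5-attempts-per-5-minutes limit above can be implemented as a sliding window. A hedged Rust sketch (the limits match the document; the implementation is illustrative, not the actual code):

```rust
use std::collections::VecDeque;
use std::time::{Duration, Instant};

// Sliding-window rate limiter: at most `max_attempts` per `window`.
struct RateLimiter {
    window: Duration,
    max_attempts: usize,
    attempts: VecDeque<Instant>,
}

impl RateLimiter {
    fn new(max_attempts: usize, window: Duration) -> Self {
        Self { window, max_attempts, attempts: VecDeque::new() }
    }

    fn allow(&mut self, now: Instant) -> bool {
        // Evict attempts that fell out of the window.
        while let Some(&t) = self.attempts.front() {
            if now.duration_since(t) > self.window {
                self.attempts.pop_front();
            } else {
                break;
            }
        }
        if self.attempts.len() < self.max_attempts {
            self.attempts.push_back(now);
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut rl = RateLimiter::new(5, Duration::from_secs(300)); // 5 per 5 min
    let now = Instant::now();
    for _ in 0..5 { assert!(rl.allow(now)); }
    assert!(!rl.allow(now)); // sixth attempt in the window is rejected
    println!("ok");
}
```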
@@ -17791,7 +15553,7 @@ Cedar policy language is inspired by decades of authorization research (XACML, A
Features :
Multi-party approval (2+ approvers, different teams)
-Emergency JWT tokens (4h max, special claims)
+Emergency JWT tokens (4 h max, special claims)
Auto-revocation (expiration + inactivity)
Enhanced audit (7-year retention)
Real-time alerts
@@ -17821,7 +15583,7 @@ Cedar policy language is inspired by decades of authorization research (XACML, A
↓
2. Rate Limiting (100 req/min per IP)
↓
-3. JWT Authentication (RS256, 15min tokens)
+3. JWT Authentication (RS256, 15 min tokens)
↓
4. MFA Verification (TOTP/WebAuthn for sensitive ops)
↓
@@ -17834,12 +15596,9 @@ Cedar policy language is inspired by decades of authorization research (XACML, A
8. Audit Logging (structured JSON, GDPR-compliant)
↓
9. Response
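The numbered flow above reads as a short-circuiting chain of checks: the first failing stage rejects the request. A sketch with stubbed stages (stage order from the document; everything else is illustrative):

```rust
// Stub of the security request pipeline: each stage either passes the
// request along or short-circuits with a denial.
#[derive(Debug, PartialEq)]
enum Denied { RateLimited, Unauthenticated, MfaRequired, Forbidden }

struct Request { authenticated: bool, mfa_verified: bool, sensitive: bool }

fn handle(req: &Request, under_limit: bool, policy_allows: bool)
    -> Result<&'static str, Denied>
{
    if !under_limit { return Err(Denied::RateLimited); }           // 2. rate limiting
    if !req.authenticated { return Err(Denied::Unauthenticated); } // 3. JWT auth
    if req.sensitive && !req.mfa_verified {
        return Err(Denied::MfaRequired);                           // 4. MFA check
    }
    if !policy_allows { return Err(Denied::Forbidden); }           // 5. Cedar authz
    Ok("response")                                                 // 9. response
}

fn main() {
    let req = Request { authenticated: true, mfa_verified: false, sensitive: true };
    assert_eq!(handle(&req, true, true), Err(Denied::MfaRequired));
    let req2 = Request { mfa_verified: true, ..req };
    assert_eq!(handle(&req2, true, true), Ok("response"));
    println!("ok");
}
```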
-```plaintext
-
-### Emergency Access Flow
-
-```plaintext
-1. Emergency Request (reason + justification)
+
+
+1. Emergency Request (reason + justification)
↓
2. Multi-Party Approval (2+ approvers, different teams)
↓
@@ -17848,118 +15607,93 @@ Cedar policy language is inspired by decades of authorization research (XACML, A
4. Enhanced Audit (7-year retention, immutable)
↓
5. Auto-Revocation (expiration/inactivity)
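The multi-party approval rule above (2+ approvers, different teams) reduces to a small predicate. A sketch under those stated constraints (names illustrative):

```rust
use std::collections::HashSet;

// Break-glass approval rule: at least 2 approvers from different teams.
struct Approval { approver: &'static str, team: &'static str }

fn approved(approvals: &[Approval]) -> bool {
    let teams: HashSet<&str> = approvals.iter().map(|a| a.team).collect();
    approvals.len() >= 2 && teams.len() >= 2
}

fn main() {
    let same_team = [
        Approval { approver: "alice", team: "sre" },
        Approval { approver: "bob", team: "sre" },
    ];
    assert!(!approved(&same_team)); // two approvers, but one team

    let cross_team = [
        Approval { approver: "alice", team: "sre" },
        Approval { approver: "carol", team: "security" },
    ];
    assert!(approved(&cross_team));
    println!("ok");
}
```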
-```plaintext
-
----
-
-## Technology Stack
-
-### Backend (Rust)
-
-- **axum**: HTTP framework
-- **jsonwebtoken**: JWT handling (RS256)
-- **cedar-policy**: Authorization engine
-- **totp-rs**: TOTP implementation
-- **webauthn-rs**: WebAuthn/FIDO2
-- **aws-sdk-kms**: AWS KMS integration
-- **argon2**: Password hashing
-- **tracing**: Structured logging
-
-### Frontend (TypeScript/React)
-
-- **React 18**: UI framework
-- **Leptos**: Rust WASM framework
-- **@simplewebauthn/browser**: WebAuthn client
-- **qrcode.react**: QR code generation
-
-### CLI (Nushell)
-
-- **Nushell 0.107**: Shell and scripting
-- **nu_plugin_kcl**: KCL integration
-
-### Infrastructure
-
-- **HashiCorp Vault**: Secrets management, KMS, SSH CA
-- **AWS KMS**: Key management service
-- **PostgreSQL/SurrealDB**: Data storage
-- **SOPS**: Config encryption
-
----
-
-## Security Guarantees
-
-### Authentication
-
-✅ RS256 asymmetric signing (no shared secrets)
-✅ Short-lived access tokens (15min)
+
+Technology Stack
+
+Backend (Rust)
+
+axum : HTTP framework
+jsonwebtoken : JWT handling (RS256)
+cedar-policy : Authorization engine
+totp-rs : TOTP implementation
+webauthn-rs : WebAuthn/FIDO2
+aws-sdk-kms : AWS KMS integration
+argon2 : Password hashing
+tracing : Structured logging
+
+Frontend (TypeScript/React)
+
+React 18 : UI framework
+Leptos : Rust WASM framework
+@simplewebauthn/browser : WebAuthn client
+qrcode.react : QR code generation
+
+CLI (Nushell)
+
+Nushell 0.107 : Shell and scripting
+nu_plugin_kcl : KCL integration
+
+Infrastructure
+
+HashiCorp Vault : Secrets management, KMS, SSH CA
+AWS KMS : Key management service
+PostgreSQL/SurrealDB : Data storage
+SOPS : Config encryption
+
+Security Guarantees
+
+Authentication
+
+✅ RS256 asymmetric signing (no shared secrets)
+✅ Short-lived access tokens (15 min)
✅ Token revocation support
✅ Argon2id password hashing (memory-hard)
-✅ MFA enforced for production operations
-
-### Authorization
-
-✅ Fine-grained permissions (Cedar policies)
+✅ MFA enforced for production operations
+
+Authorization
+
+✅ Fine-grained permissions (Cedar policies)
✅ Context-aware (MFA, IP, time windows)
✅ Hot reload policies (no downtime)
-✅ Deny by default
-
-### Secrets Management
-
-✅ No static credentials stored
+✅ Deny by default
+
+Secrets Management
+
+✅ No static credentials stored
✅ Time-limited secrets (1h default)
✅ Auto-revocation on expiry
✅ Encryption at rest (KMS)
-✅ Memory-only decryption
-
-### Audit & Compliance
-
-✅ Immutable audit logs
+✅ Memory-only decryption
+
+Audit & Compliance
+
+✅ Immutable audit logs
✅ GDPR-compliant (PII anonymization)
✅ SOC2 controls implemented
✅ ISO 27001 controls verified
-✅ 7-year retention for break-glass
-
-### Emergency Access
-
-✅ Multi-party approval required
+✅ 7-year retention for break-glass
+
+Emergency Access
+
+✅ Multi-party approval required
✅ Time-limited sessions (4h max)
✅ Enhanced audit logging
✅ Auto-revocation
-✅ Cannot be disabled
-
----
-
-## Performance Characteristics
-
-| Component | Latency | Throughput | Memory |
-|-----------|---------|------------|--------|
-| JWT Auth | <5ms | 10,000/s | ~10MB |
-| Cedar Authz | <10ms | 5,000/s | ~50MB |
-| Audit Log | <5ms | 20,000/s | ~100MB |
-| KMS Encrypt | <50ms | 1,000/s | ~20MB |
-| Dynamic Secrets | <100ms | 500/s | ~50MB |
-| MFA Verify | <50ms | 2,000/s | ~30MB |
-
-**Total Overhead**: ~10-20ms per request
-**Memory Usage**: ~260MB total for all security components
-
----
-
-## Deployment Options
-
-### Development
-
-```bash
-# Start all services
+✅ Cannot be disabled
+
+Performance Characteristics
+
+Component | Latency | Throughput | Memory
+JWT Auth | <5 ms | 10,000/s | ~10 MB
+Cedar Authz | <10 ms | 5,000/s | ~50 MB
+Audit Log | <5 ms | 20,000/s | ~100 MB
+KMS Encrypt | <50 ms | 1,000/s | ~20 MB
+Dynamic Secrets | <100 ms | 500/s | ~50 MB
+MFA Verify | <50 ms | 2,000/s | ~30 MB
+
+
+Total Overhead : ~10-20 ms per request
+Memory Usage : ~260 MB total for all security components
+
+Deployment Options
+
+Development
+
+# Start all services
cd provisioning/platform/kms-service && cargo run &
cd provisioning/platform/orchestrator && cargo run &
cd provisioning/platform/control-center && cargo run &
-```plaintext
-
-### Production
-
-```bash
-# Kubernetes deployment
+
+
+Production
+
+# Kubernetes deployment
kubectl apply -f k8s/security-stack.yaml
# Docker Compose
@@ -17969,16 +15703,11 @@ docker-compose up -d kms orchestrator control-center
systemctl start provisioning-kms
systemctl start provisioning-orchestrator
systemctl start provisioning-control-center
-```plaintext
-
----
-
-## Configuration
-
-### Environment Variables
-
-```bash
-# JWT
+
+Configuration
+
+Environment Variables
+
+# JWT
export JWT_ISSUER="control-center"
export JWT_AUDIENCE="orchestrator,cli"
export JWT_PRIVATE_KEY_PATH="/keys/private.pem"
@@ -17996,12 +15725,9 @@ export VAULT_TOKEN="..."
# MFA
export MFA_TOTP_ISSUER="Provisioning"
export MFA_WEBAUTHN_RP_ID="provisioning.example.com"
-```plaintext
-
-### Config Files
-
-```toml
-# provisioning/config/security.toml
+
+
+Config Files
+
+# provisioning/config/security.toml
[jwt]
issuer = "control-center"
audience = ["orchestrator", "cli"]
@@ -18029,16 +15755,11 @@ retention_days = 365
retention_break_glass_days = 2555 # 7 years
export_format = "json"
pii_anonymization = true
-```plaintext
-
----
-
-## Testing
-
-### Run All Tests
-
-```bash
-# Control Center (JWT, MFA)
+
+Testing
+
+Run All Tests
+
+# Control Center (JWT, MFA)
cd provisioning/platform/control-center
cargo test
@@ -18052,181 +15773,173 @@ cargo test
# Config Encryption (Nushell)
nu provisioning/core/nulib/lib_provisioning/config/encryption_tests.nu
-```plaintext
-
-### Integration Tests
-
-```bash
-# Full security flow
+
+
+Integration Tests
+
+# Full security flow
cd provisioning/platform/orchestrator
cargo test --test security_integration_tests
cargo test --test break_glass_integration_tests
-```plaintext
-
----
-
-## Monitoring & Alerts
-
-### Metrics to Monitor
-
-- Authentication failures (rate, sources)
-- Authorization denials (policies, resources)
-- MFA failures (attempts, users)
-- Token revocations (rate, reasons)
-- Break-glass activations (frequency, duration)
-- Secrets generation (rate, types)
-- Audit log volume (events/sec)
-
-### Alerts to Configure
-
-- Multiple failed auth attempts (5+ in 5min)
-- Break-glass session created
-- Compliance report non-compliant
-- Incident severity critical/high
-- Token revocation spike
-- KMS errors
-- Audit log export failures
-
----
-
-## Maintenance
-
-### Daily
-
-- Monitor audit logs for anomalies
-- Review failed authentication attempts
-- Check break-glass sessions (should be zero)
-
-### Weekly
-
-- Review compliance reports
-- Check incident response status
-- Verify backup code usage
-- Review MFA device additions/removals
-
-### Monthly
-
-- Rotate KMS keys
-- Review and update Cedar policies
-- Generate compliance reports (GDPR, SOC2, ISO)
-- Audit access control matrix
-
-### Quarterly
-
-- Full security audit
-- Penetration testing
-- Compliance certification review
-- Update security documentation
-
----
-
-## Migration Path
-
-### From Existing System
-
-1. **Phase 1**: Deploy security infrastructure
- - KMS service
- - Orchestrator with auth middleware
- - Control Center
-
-2. **Phase 2**: Migrate authentication
- - Enable JWT authentication
- - Migrate existing users
- - Disable old auth system
-
-3. **Phase 3**: Enable MFA
- - Require MFA enrollment for admins
- - Gradual rollout to all users
-
-4. **Phase 4**: Enable Cedar authorization
- - Deploy initial policies (permissive)
- - Monitor authorization decisions
- - Tighten policies incrementally
-
-5. **Phase 5**: Enable advanced features
- - Break-glass procedures
- - Compliance reporting
- - Incident response
-
----
-
-## Future Enhancements
-
-### Planned (Not Implemented)
-
-- **Hardware Security Module (HSM)** integration
-- **OAuth2/OIDC** federation
-- **SAML SSO** for enterprise
-- **Risk-based authentication** (IP reputation, device fingerprinting)
-- **Behavioral analytics** (anomaly detection)
-- **Zero-Trust Network** (service mesh integration)
-
-### Under Consideration
-
-- **Blockchain audit log** (immutable append-only log)
-- **Quantum-resistant cryptography** (post-quantum algorithms)
-- **Confidential computing** (SGX/SEV enclaves)
-- **Distributed break-glass** (multi-region approval)
-
----
-
-## Consequences
-
-### Positive
-
-✅ **Enterprise-grade security** meeting GDPR, SOC2, ISO 27001
-✅ **Zero static credentials** (all dynamic, time-limited)
-✅ **Complete audit trail** (immutable, GDPR-compliant)
-✅ **MFA-enforced** for sensitive operations
-✅ **Emergency access** with enhanced controls
-✅ **Fine-grained authorization** (Cedar policies)
-✅ **Automated compliance** (reports, incident response)
-
-### Negative
-
-⚠️ **Increased complexity** (12 components to manage)
-⚠️ **Performance overhead** (~10-20ms per request)
-⚠️ **Memory footprint** (~260MB additional)
-⚠️ **Learning curve** (Cedar policy language, MFA setup)
-⚠️ **Operational overhead** (key rotation, policy updates)
-
-### Mitigations
-
-- Comprehensive documentation (ADRs, guides, API docs)
-- CLI commands for all operations
-- Automated monitoring and alerting
-- Gradual rollout with feature flags
-- Training materials for operators
-
----
-
-## Related Documentation
-
-- **JWT Auth**: `docs/architecture/JWT_AUTH_IMPLEMENTATION.md`
-- **Cedar Authz**: `docs/architecture/CEDAR_AUTHORIZATION_IMPLEMENTATION.md`
-- **Audit Logging**: `docs/architecture/AUDIT_LOGGING_IMPLEMENTATION.md`
-- **MFA**: `docs/architecture/MFA_IMPLEMENTATION_SUMMARY.md`
-- **Break-Glass**: `docs/architecture/BREAK_GLASS_IMPLEMENTATION_SUMMARY.md`
-- **Compliance**: `docs/architecture/COMPLIANCE_IMPLEMENTATION_SUMMARY.md`
-- **Config Encryption**: `docs/user/CONFIG_ENCRYPTION_GUIDE.md`
-- **Dynamic Secrets**: `docs/user/DYNAMIC_SECRETS_QUICK_REFERENCE.md`
-- **SSH Keys**: `docs/user/SSH_TEMPORAL_KEYS_USER_GUIDE.md`
-
----
-
-## Approval
-
-**Architecture Team**: Approved
-**Security Team**: Approved (pending penetration test)
-**Compliance Team**: Approved (pending audit)
-**Engineering Team**: Approved
-
----
-
-**Date**: 2025-10-08
-**Version**: 1.0.0
-**Status**: Implemented and Production-Ready
+
+
+
+
+Authentication failures (rate, sources)
+Authorization denials (policies, resources)
+MFA failures (attempts, users)
+Token revocations (rate, reasons)
+Break-glass activations (frequency, duration)
+Secrets generation (rate, types)
+Audit log volume (events/sec)
+
+
+
+Multiple failed auth attempts (5+ in 5 min)
+Break-glass session created
+Compliance report non-compliant
+Incident severity critical/high
+Token revocation spike
+KMS errors
+Audit log export failures
+
+
+
+
+
+Monitor audit logs for anomalies
+Review failed authentication attempts
+Check break-glass sessions (should be zero)
+
+
+
+Review compliance reports
+Check incident response status
+Verify backup code usage
+Review MFA device additions/removals
+
+
+
+Rotate KMS keys
+Review and update Cedar policies
+Generate compliance reports (GDPR, SOC2, ISO)
+Audit access control matrix
+
+
+
+Full security audit
+Penetration testing
+Compliance certification review
+Update security documentation
+
+
+
+
+
+
+Phase 1 : Deploy security infrastructure
+
+KMS service
+Orchestrator with auth middleware
+Control Center
+
+
+
+Phase 2 : Migrate authentication
+
+Enable JWT authentication
+Migrate existing users
+Disable old auth system
+
+
+
+Phase 3 : Enable MFA
+
+Require MFA enrollment for admins
+Gradual rollout to all users
+
+
+
+Phase 4 : Enable Cedar authorization
+
+Deploy initial policies (permissive)
+Monitor authorization decisions
+Tighten policies incrementally
+
+
+
+Phase 5 : Enable advanced features
+
+Break-glass procedures
+Compliance reporting
+Incident response
+
+
+
+
+
+
+
+Hardware Security Module (HSM) integration
+OAuth2/OIDC federation
+SAML SSO for enterprise
+Risk-based authentication (IP reputation, device fingerprinting)
+Behavioral analytics (anomaly detection)
+Zero-Trust Network (service mesh integration)
+
+
+
+Blockchain audit log (immutable append-only log)
+Quantum-resistant cryptography (post-quantum algorithms)
+Confidential computing (SGX/SEV enclaves)
+Distributed break-glass (multi-region approval)
+
+
+
+
+✅ Enterprise-grade security meeting GDPR, SOC2, ISO 27001
+✅ Zero static credentials (all dynamic, time-limited)
+✅ Complete audit trail (immutable, GDPR-compliant)
+✅ MFA-enforced for sensitive operations
+✅ Emergency access with enhanced controls
+✅ Fine-grained authorization (Cedar policies)
+✅ Automated compliance (reports, incident response)
+
+⚠️ Increased complexity (12 components to manage)
+⚠️ Performance overhead (~10-20 ms per request)
+⚠️ Memory footprint (~260 MB additional)
+⚠️ Learning curve (Cedar policy language, MFA setup)
+⚠️ Operational overhead (key rotation, policy updates)
+
+
+Comprehensive documentation (ADRs, guides, API docs)
+CLI commands for all operations
+Automated monitoring and alerting
+Gradual rollout with feature flags
+Training materials for operators
+
+
+
+
+JWT Auth : docs/architecture/JWT_AUTH_IMPLEMENTATION.md
+Cedar Authz : docs/architecture/CEDAR_AUTHORIZATION_IMPLEMENTATION.md
+Audit Logging : docs/architecture/AUDIT_LOGGING_IMPLEMENTATION.md
+MFA : docs/architecture/MFA_IMPLEMENTATION_SUMMARY.md
+Break-Glass : docs/architecture/BREAK_GLASS_IMPLEMENTATION_SUMMARY.md
+Compliance : docs/architecture/COMPLIANCE_IMPLEMENTATION_SUMMARY.md
+Config Encryption : docs/user/CONFIG_ENCRYPTION_GUIDE.md
+Dynamic Secrets : docs/user/DYNAMIC_SECRETS_QUICK_REFERENCE.md
+SSH Keys : docs/user/SSH_TEMPORAL_KEYS_USER_GUIDE.md
+
+
+
+Architecture Team : Approved
+Security Team : Approved (pending penetration test)
+Compliance Team : Approved (pending audit)
+Engineering Team : Approved
+
+Date : 2025-10-08
+Version : 1.0.0
+Status : Implemented and Production-Ready
Status : Accepted
Date : 2025-12-03
@@ -18252,7 +15965,7 @@ cargo test --test break_glass_integration_tests
-
+
Define and document the three-format approach through:
@@ -18270,48 +15983,43 @@ cargo test --test break_glass_integration_tests
Expected Outcome :
-workspace/config/provisioning.k (KCL, type-safe, validated)
+workspace/config/provisioning.ncl (Nickel, type-safe, validated)
Full schema validation with semantic versioning checks
Automatic validation at config load time
Move template files to proper directory structure and correct extensions :
-Current (wrong):
- provisioning/kcl/templates/*.k (has Nushell/Jinja2 code, not KCL)
+Previous (KCL):
+ provisioning/kcl/templates/*.k (had Nushell/Jinja2 code, not KCL)
-Desired:
+Current (Nickel):
provisioning/templates/
├── nushell/*.nu.j2
├── config/*.toml.j2
- ├── kcl/*.k.j2
+ ├── nickel/*.ncl.j2
└── README.md
-```plaintext
-
-**Expected Outcome**:
-
-- Templates properly classified and discoverable
-- KCL validation passes (15/16 errors eliminated)
-- Template system clean and maintainable
-
----
-
-## Rationale for Each Format
-
-### KCL for Workspace Configuration
-
-**Why KCL over YAML or TOML?**
-
-1. **Type Safety**: Catch configuration errors at schema validation time, not runtime
-
- ```kcl
- schema WorkspaceDeclaration:
- metadata: Metadata
- check:
- regex.match(metadata.version, r"^\d+\.\d+\.\d+$"), \
- "Version must be semantic versioning"
+Expected Outcome :
+
+Templates properly classified and discoverable
+KCL validation passes (15/16 errors eliminated)
+Template system clean and maintainable
+
+
+
+
+Why KCL over YAML or TOML?
+Type Safety : Catch configuration errors at schema validation time, not runtime
+schema WorkspaceDeclaration:
+ metadata: Metadata
+ check:
+ regex.match(metadata.version, r"^\d+\.\d+\.\d+$"), \
+ "Version must be semantic versioning"
+
+
+
Schema-First Development : Schemas are first-class citizens
Document expected structure upfront
@@ -18346,7 +16054,7 @@ Desired:
-Existing Schemas : provisioning/kcl/generator/declaration.k already defines complete workspace schemas
+Existing Schemas : provisioning/kcl/generator/declaration.ncl already defines complete workspace schemas
-
+
-Backward Compatibility : Config loader checks for .k first, falls back to .yaml
-# Try KCL first
-if ($config_kcl | path exists) {
- let config = (load_kcl_workspace_config $config_kcl)
+Migration Path : Config loader checks for .ncl first, then falls back to .yaml for legacy systems
+# Try Nickel first (current)
+if ($config_nickel | path exists) {
+ let config = (load_nickel_workspace_config $config_nickel)
} else if ($config_yaml | path exists) {
- # Legacy YAML support
+ # Legacy YAML support (from pre-migration)
let config = (open $config_yaml)
}
-Automatic Migration : Migration script converts YAML → KCL
+Automatic Migration : Migration script converts YAML/KCL → Nickel
provisioning workspace migrate-config --all
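As an illustration of what the migration script produces, a YAML config might map to Nickel like this (the field names are examples, not taken from the source):

```nickel
# Before (provisioning.yaml):
#   workspace:
#     name: my-workspace
#     version: "1.0.0"
# After (provisioning.ncl), with a contract annotation on version:
{
  workspace = {
    name = "my-workspace",
    version | String = "1.0.0",
  },
}
```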
@@ -18497,11 +16205,11 @@ if ($config_kcl | path exists) {
Generate Nickel : Workspace initialization creates .ncl files
provisioning workspace create my-workspace
-# Creates: workspace/my-workspace/config/provisioning.k
+# Creates: workspace/my-workspace/config/provisioning.ncl
-Use Existing Schemas : Leverage provisioning/kcl/generator/declaration.k
+Use Existing Schemas : Leverage provisioning/kcl/generator/declaration.ncl
Schema Validation : Automatic validation during config load
@@ -18535,8 +16243,8 @@ if ($config_kcl | path exists) {
Version control metadata
-
-
+
+
✅ Type Safety : KCL schema validation catches config errors early
✅ Consistency : Infrastructure definitions and configs use same language
✅ Maintainability : Clear separation of concerns (IaC vs settings vs metadata)
@@ -18544,7 +16252,7 @@ if ($config_kcl | path exists) {
✅ Tooling : IDE support for KCL auto-completion
✅ Documentation : Self-documenting schemas with descriptions
✅ Ecosystem Alignment : TOML for settings (Rust standard), YAML for K8s
-
+
⚠️ Learning Curve : Developers must understand three formats
⚠️ Migration Effort : Existing YAML configs need conversion
⚠️ Tooling Requirements : KCL compiler needed (already a dependency)
@@ -18560,23 +16268,19 @@ if ($config_kcl | path exists) {
Currently, 15/16 files in provisioning/kcl/templates/ have .k extension but contain Nushell/Jinja2 code, not KCL:
provisioning/kcl/templates/
-├── server.k # Actually Nushell/Jinja2 template
-├── taskserv.k # Actually Nushell/Jinja2 template
+├── server.ncl # Actually Nushell/Jinja2 template
+├── taskserv.ncl # Actually Nushell/Jinja2 template
└── ... # 15 more template files
-```plaintext
-
-This causes:
-
-- KCL validation failures (96.6% of errors)
-- Misclassification (templates in KCL directory)
-- Confusing directory structure
-
-### Solution
-
-Reorganize into type-specific directories:
-
-```plaintext
-provisioning/templates/
+
+This causes:
+
+KCL validation failures (96.6% of errors)
+Misclassification (templates in KCL directory)
+Confusing directory structure
+
+
+Reorganize into type-specific directories:
+provisioning/templates/
├── nushell/ # Nushell code generation (*.nu.j2)
│ ├── server.nu.j2
│ ├── taskserv.nu.j2
@@ -18585,65 +16289,65 @@ provisioning/templates/
│ ├── provider.toml.j2
│ └── ...
├── nickel/ # Nickel file generation (*.ncl.j2)
-│ ├── workspace.k.j2
+│ ├── workspace.ncl.j2
│ └── ...
└── README.md
-```plaintext
-
-### Outcome
-
-✅ Correct file classification
+
+
+✅ Correct file classification
✅ KCL validation passes completely
✅ Clear template organization
-✅ Easier to discover and maintain templates
-
----
-
-## References
-
-### Existing KCL Schemas
-
-1. **Workspace Declaration**: `provisioning/kcl/generator/declaration.k`
- - `WorkspaceDeclaration` - Complete workspace specification
- - `Metadata` - Name, version, author, timestamps
- - `DeploymentConfig` - Deployment modes, servers, HA settings
- - Includes validation rules and semantic versioning
-
-2. **Workspace Layer**: `provisioning/workspace/layers/workspace.layer.k`
- - `WorkspaceLayer` - Template paths, priorities, metadata
-
-3. **Core Settings**: `provisioning/kcl/settings.k`
- - `Settings` - Main provisioning settings
- - `SecretProvider` - SOPS/KMS configuration
- - `AIProvider` - AI provider configuration
-
-### Related ADRs
-
-- **ADR-001**: Project Structure
-- **ADR-005**: Extension Framework
-- **ADR-006**: Provisioning CLI Refactoring
-- **ADR-009**: Security System Complete
-
----
-
-## Decision Status
-
-**Status**: Accepted
-
-**Next Steps**:
-
-1. ✅ Document strategy (this ADR)
-2. ⏳ Create workspace configuration KCL schema
-3. ⏳ Implement backward-compatible config loader
-4. ⏳ Create migration script for YAML → KCL
-5. ⏳ Move template files to proper directories
-6. ⏳ Update documentation with examples
-7. ⏳ Migrate workspace_librecloud to KCL
-
----
-
-**Last Updated**: 2025-12-03
-
+✅ Easier to discover and maintain templates
+
+
+
+
+
+Workspace Declaration : provisioning/kcl/generator/declaration.ncl
+
+WorkspaceDeclaration - Complete workspace specification
+Metadata - Name, version, author, timestamps
+DeploymentConfig - Deployment modes, servers, HA settings
+Includes validation rules and semantic versioning
+
+
+
+Workspace Layer : provisioning/workspace/layers/workspace.layer.ncl
+
+WorkspaceLayer - Template paths, priorities, metadata
+
+
+
+Core Settings : provisioning/kcl/settings.ncl
+
+Settings - Main provisioning settings
+SecretProvider - SOPS/KMS configuration
+AIProvider - AI provider configuration
+
+
+
+
+
+ADR-001 : Project Structure
+ADR-005 : Extension Framework
+ADR-006 : Provisioning CLI Refactoring
+ADR-009 : Security System Complete
+
+
+
+Status : Accepted
+Next Steps :
+
+✅ Document strategy (this ADR)
+⏳ Create workspace configuration KCL schema
+⏳ Implement backward-compatible config loader
+⏳ Create migration script for YAML → KCL
+⏳ Move template files to proper directories
+⏳ Update documentation with examples
+⏳ Migrate workspace_librecloud to KCL
+
+
+Last Updated : 2025-12-03
Status : Implemented
Date : 2025-12-15
@@ -18651,7 +16355,7 @@ provisioning/templates/
Implementation : Complete for platform schemas (100%)
-The provisioning platform historically used KCL (KLang) as the primary infrastructure-as-code language for all configuration schemas. As the system evolved through four migration phases (Foundation, Core, Complex, Very Complex), KCL’s limitations became increasingly apparent:
+The provisioning platform historically used KCL (KLang) as the primary infrastructure-as-code language for all configuration schemas. As the system evolved through four migration phases (Foundation, Core, Complex, Highly Complex), KCL’s limitations became increasingly apparent:
@@ -18773,7 +16477,7 @@ provisioning/templates/
Consistent structure : Each extension has nickel/ subdirectory with contracts, defaults, main, version
Example - UpCloud Provider :
-# upcloud/nickel/main.ncl
+# upcloud/nickel/main.ncl (migrated from upcloud/kcl/)
let contracts = import "./contracts.ncl" in
let defaults = import "./defaults.ncl" in
@@ -18788,51 +16492,46 @@ let defaults = import "./defaults.ncl" in
DefaultServerDefaults_upcloud = defaults.server_defaults_upcloud,
DefaultServerUpcloud = defaults.server_upcloud,
}
-```plaintext
-
-### Active Workspaces (`workspace_librecloud/nickel/`)
-
-- **47 Nickel files** in productive use
-- **2 infrastructures**:
- - `wuji` - Kubernetes cluster with 20 taskservs
- - `sgoyol` - Support servers group
-- **Two deployment modes** fully implemented and tested
-- **Daily production usage** validated ✅
-
-### Backward Compatibility
-
-- **955 KCL files** remain in workspaces/ (legacy user configs)
-- 100% backward compatible - old KCL code still works
-- Config loader supports both formats during transition
-- No breaking changes to APIs
-
----
-
-## Comparison: KCL vs Nickel
-
-| Aspect | KCL | Nickel | Winner |
-|--------|-----|--------|--------|
-| **Mental Model** | Python-like with schemas | JSON with functions | Nickel |
-| **Performance** | Baseline | 60% faster evaluation | Nickel |
-| **Type System** | Rigid schemas | Gradual typing + contracts | Nickel |
-| **Composition** | Schema inheritance | Record merging (`&`) | Nickel |
-| **Extensibility** | Requires schema modifications | Merging with custom fields | Nickel |
-| **Validation** | Compile-time (overhead) | Runtime contracts (lazy) | Nickel |
-| **Boilerplate** | High | Low (3-file pattern) | Nickel |
-| **Exports** | JSON/YAML | JSON/TOML/YAML | Nickel |
-| **Learning Curve** | Medium-High | Low | Nickel |
-| **Lazy Evaluation** | No | Yes (built-in) | Nickel |
-
----
-
-## Architecture Patterns
-
-### Three-File Pattern
-
-**File 1: Contracts** (`batch_contracts.ncl`):
-
-```nickel
-{
+
+
+
+47 Nickel files in productive use
+2 infrastructures :
+
+wuji - Kubernetes cluster with 20 taskservs
+sgoyol - Support servers group
+
+
+Two deployment modes fully implemented and tested
+Daily production usage validated ✅
+
+
+
+955 KCL files remain in workspaces/ (legacy user configs)
+100% backward compatible - old KCL code still works
+Config loader supports both formats during transition
+No breaking changes to APIs
+
+
+
+Aspect | KCL | Nickel | Winner
+Mental Model | Python-like with schemas | JSON with functions | Nickel
+Performance | Baseline | 60% faster evaluation | Nickel
+Type System | Rigid schemas | Gradual typing + contracts | Nickel
+Composition | Schema inheritance | Record merging (&) | Nickel
+Extensibility | Requires schema modifications | Merging with custom fields | Nickel
+Validation | Compile-time (overhead) | Runtime contracts (lazy) | Nickel
+Boilerplate | High | Low (3-file pattern) | Nickel
+Exports | JSON/YAML | JSON/TOML/YAML | Nickel
+Learning Curve | Medium-High | Low | Nickel
+Lazy Evaluation | No | Yes (built-in) | Nickel
+
+
+
+
+
+File 1: Contracts (batch_contracts.ncl):
+{
BatchScheduler = {
strategy | String,
resource_limits,
@@ -18840,12 +16539,9 @@ let defaults = import "./defaults.ncl" in
enable_preemption | Bool,
},
}
-```plaintext
-
-**File 2: Defaults** (`batch_defaults.ncl`):
-
-```nickel
-{
+
+File 2: Defaults (batch_defaults.ncl):
+{
scheduler = {
strategy = "dependency_first",
resource_limits = {"max_cpu_cores" = 0},
@@ -18853,12 +16549,9 @@ let defaults = import "./defaults.ncl" in
enable_preemption = false,
},
}
-```plaintext
-
-**File 3: Main** (`batch.ncl`):
-
-```nickel
-let contracts = import "./batch_contracts.ncl" in
+
+File 3: Main (batch.ncl):
+let contracts = import "./batch_contracts.ncl" in
let defaults = import "./batch_defaults.ncl" in
{
@@ -18867,19 +16560,16 @@ let defaults = import "./batch_defaults.ncl" in
defaults.scheduler & o, # Level 2: Makers
DefaultScheduler = defaults.scheduler, # Level 3: Instances
}
-```plaintext
-
-### Hybrid Pattern Benefits
-
-- **90% of users**: Use makers for simple customization
-- **9% of users**: Reference defaults for inspection
-- **1% of users**: Access contracts for advanced combinations
-- **No validation conflicts**: Record merging works without contract constraints
-
-### Domain-Organized Architecture
-
-```plaintext
-provisioning/schemas/
+
+
+
+90% of users : Use makers for simple customization
+9% of users : Reference defaults for inspection
+1% of users : Access contracts for advanced combinations
+No validation conflicts : Record merging works without contract constraints
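To make the levels concrete, a consumer might take the Level 3 instance and merge in a custom field (the `labels` field is illustrative, not from the source):

```nickel
let batch = import "./batch.ncl" in
# Record merging (`&`) extends the instance without touching contracts,
# which is why custom fields cause no validation conflicts.
batch.DefaultScheduler & { labels = { team = "platform" } }
```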
+
+
+provisioning/schemas/
├── lib/ # Storage, TaskServDef, ClusterDef
├── config/ # Settings, defaults, workspace_config
├── infrastructure/ # Compute, storage, provisioning
@@ -18889,255 +16579,211 @@ provisioning/schemas/
├── generator/ # Declarations, gap analysis, changes
├── integrations/ # Runtime, GitOps, main
└── main.ncl # Entry point with namespace organization
-```plaintext
-
-**Import pattern**:
-
-```nickel
-let provisioning = import "./main.ncl" in
+
+Import pattern :
+let provisioning = import "./main.ncl" in
provisioning.lib # For Storage, TaskServDef
provisioning.config.settings # For Settings, Defaults
provisioning.infrastructure.compute.server
provisioning.operations.workflows
-```plaintext
-
----
-
-## Production Deployment Patterns
-
-### Two-Mode Strategy
-
-#### 1. Development Mode (Single Source of Truth)
-
-- Relative imports to central provisioning
-- Fast iteration with immediate schema updates
-- No snapshot overhead
-- Usage: Local development, testing, experimentation
-
-```bash
-# workspace_librecloud/nickel/main.ncl
+
+
+
+
+
+
+Relative imports to central provisioning
+Fast iteration with immediate schema updates
+No snapshot overhead
+Usage: Local development, testing, experimentation
+
+# workspace_librecloud/nickel/main.ncl
import "../../provisioning/schemas/main.ncl"
import "../../provisioning/extensions/taskservs/kubernetes/nickel/main.ncl"
-```plaintext
-
-#### 2. Production Mode (Hermetic Deployment)
-
-Create immutable snapshots for reproducible deployments:
-
-```bash
-provisioning workspace freeze --version "2025-12-15-prod-v1" --env production
-```plaintext
-
-**Frozen structure** (`.frozen/{version}/`):
-
-```plaintext
-├── provisioning/schemas/ # Snapshot of central schemas
+
+
+Create immutable snapshots for reproducible deployments:
+provisioning workspace freeze --version "2025-12-15-prod-v1" --env production
+
+Frozen structure (.frozen/{version}/):
+├── provisioning/schemas/ # Snapshot of central schemas
├── extensions/ # Snapshot of all extensions
└── workspace/ # Snapshot of workspace configs
-```plaintext
-
-**All imports rewritten to local paths**:
-
-- `import "../../provisioning/schemas/main.ncl"` → `import "./provisioning/schemas/main.ncl"`
-- Guarantees immutability and reproducibility
-- No external dependencies
-- Can be deployed to air-gapped environments
-
-**Deploy from frozen snapshot**:
-
-```bash
-provisioning deploy --frozen "2025-12-15-prod-v1" --infra wuji
-```plaintext
-
-**Benefits**:
-
-- ✅ Development: Fast iteration with central updates
-- ✅ Production: Immutable, reproducible deployments
-- ✅ Audit trail: Each frozen version timestamped
-- ✅ Rollback: Easy rollback to previous versions
-- ✅ Air-gapped: Works in offline environments
-
----
-
-## Ecosystem Integration
-
-### TypeDialog (Bidirectional Nickel Integration)
-
-**Location**: `/Users/Akasha/Development/typedialog`
-**Purpose**: Type-safe prompts, forms, and schemas with Nickel output
-
-**Key Feature**: Nickel schemas → Type-safe UIs → Nickel output
-
-```bash
-# Nickel schema → Interactive form
+
+All imports rewritten to local paths :
+
+import "../../provisioning/schemas/main.ncl" → import "./provisioning/schemas/main.ncl"
+Guarantees immutability and reproducibility
+No external dependencies
+Can be deployed to air-gapped environments
+
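The rewrite can be pictured as a plain path substitution over each frozen `.ncl` file (a sketch, not the actual freeze implementation):

```shell
# Write a sample import line, then rewrite the central-schema prefix
# to the snapshot-local one, as `workspace freeze` does conceptually.
printf 'import "../../provisioning/schemas/main.ncl"\n' > main.ncl
sed 's|\.\./\.\./provisioning/|./provisioning/|' main.ncl
# → import "./provisioning/schemas/main.ncl"
```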
+Deploy from frozen snapshot :
+provisioning deploy --frozen "2025-12-15-prod-v1" --infra wuji
+
+Benefits :
+
+✅ Development: Fast iteration with central updates
+✅ Production: Immutable, reproducible deployments
+✅ Audit trail: Each frozen version timestamped
+✅ Rollback: Easy rollback to previous versions
+✅ Air-gapped: Works in offline environments
+
+
+
+
+Location : /Users/Akasha/Development/typedialog
+Purpose : Type-safe prompts, forms, and schemas with Nickel output
+Key Feature : Nickel schemas → Type-safe UIs → Nickel output
+# Nickel schema → Interactive form
typedialog form --schema server.ncl --output json
# Interactive form → Nickel output
typedialog form --input form.toml --output nickel
-```plaintext
-
-**Value**: Amplifies Nickel ecosystem beyond IaC:
-
-- Schemas auto-generate type-safe UIs
-- Forms output configurations back to Nickel
-- Multiple backends: CLI, TUI, Web
-- Multiple output formats: JSON, YAML, TOML, Nickel
-
----
-
-## Technical Patterns
-
-### Expression-Based Structure
-
-| KCL | Nickel |
-|-----|--------|
-| Multiple top-level let bindings | Single root expression with `let...in` chaining |
-
-### Schema Inheritance → Record Merging
-
-| KCL | Nickel |
-|-----|--------|
-| `schema Server(defaults.ServerDefaults)` | `defaults.ServerDefaults & { overrides }` |
-
-### Optional Fields
-
-| KCL | Nickel |
-|-----|--------|
-| `field?: type` | `field = null` or `field = ""` |
-
-### Union Types
-
-| KCL | Nickel |
-|-----|--------|
-| `"ubuntu" \| "debian" \| "centos"` | `[\\| 'ubuntu, 'debian, 'centos \\|]` |
-
-### Boolean/Null Conversion
-
-| KCL | Nickel |
-|-----|--------|
-| `True` / `False` / `None` | `true` / `false` / `null` |
-
----
-
-## Quality Metrics
-
-- **Syntax Validation**: 100% (all files compile)
-- **JSON Export**: 100% success rate (4,680+ lines)
-- **Pattern Coverage**: All 5 templates tested and proven
-- **Backward Compatibility**: 100%
-- **Performance**: 60% faster evaluation than KCL
-- **Test Coverage**: 422 Nickel files validated in production
-
----
-
-## Consequences
-
-### Positive ✅
-
-- **60% performance gain** in evaluation speed
-- **Reduced boilerplate** (contracts + defaults separation)
-- **Greater flexibility** (record merging without validation)
-- **Extensibility without conflicts** (custom fields allowed)
-- **Simplified mental model** ("JSON with functions")
-- **Lazy evaluation** (better performance for large configs)
-- **Clean exports** (100% JSON/TOML compatible)
-- **Hybrid pattern** (4 levels covering all use cases)
-- **Domain-organized architecture** (8 logical domains, clear imports)
-- **Production deployment** with frozen snapshots (immutable, reproducible)
-- **Ecosystem expansion** (TypeDialog integration for UI generation)
-- **Real-world validation** (47 files in productive use)
-- **20 taskservs** deployed in production infrastructure
-
-### Challenges ⚠️
-
-- **Dual format support** during transition (KCL + Nickel)
-- **Learning curve** for team (new language)
-- **Migration effort** (40 files migrated manually)
-- **Documentation updates** (guides, examples, training)
-- **955 KCL files remain** (gradual workspace migration)
-- **Frozen snapshots workflow** (requires understanding workspace freeze)
-- **TypeDialog dependency** (external Rust project)
-
-### Mitigations
-
-- ✅ Complete documentation in `docs/development/kcl-module-system.md`
-- ✅ 100% backward compatibility maintained
-- ✅ Migration framework established (5 templates, validation checklist)
-- ✅ Validation checklist for each migration step
-- ✅ 100% syntax validation on all files
-- ✅ Real-world usage validated (47 files in production)
-- ✅ Frozen snapshots guarantee reproducibility
-- ✅ Two deployment modes cover development and production
-- ✅ Gradual migration strategy (workspace-level, no hard cutoff)
-
----
-
-## Migration Status
-
-### Completed (Phase 1-4)
-
-- ✅ Foundation (8 files) - Basic schemas, validation library
-- ✅ Core Schemas (8 files) - Settings, workspace config, gitea
-- ✅ Complex Features (7 files) - VM lifecycle, system config, services
-- ✅ Very Complex (9+ files) - Modes, commands, orchestrator, main entry point
-- ✅ Platform schemas (422 files total)
-- ✅ Extensions (providers, clusters)
-- ✅ Production workspace (47 files, 20 taskservs)
-
-### In Progress (Workspace-Level)
-
-- ⏳ Workspace migration (323+ files in workspace_librecloud)
-- ⏳ Extension migration (taskservs, clusters, providers)
-- ⏳ Parallel testing against original KCL
-- ⏳ CI/CD integration updates
-
-### Future (Optional)
-
-- User workspace KCL to Nickel (gradual, as needed)
-- Full migration of legacy configurations
-- TypeDialog UI generation for infrastructure
-
----
-
-## Related Documentation
-
-### Development Guides
-
-- KCL Module System - Critical syntax differences and patterns
-- [Nickel Migration Guide](../development/nickel-executable-examples.md) - Three-file pattern specification and examples
-- [Configuration Architecture](../development/configuration.md) - Composition patterns and best practices
-
-### Related ADRs
-
-- **ADR-010**: Configuration Format Strategy (multi-format approach)
-- **ADR-006**: CLI Refactoring (domain-driven design)
-- **ADR-004**: Hybrid Rust/Nushell Architecture (platform architecture)
-
-### Referenced Files
-
-- **Entry point**: `provisioning/schemas/main.ncl`
-- **Workspace pattern**: `workspace_librecloud/nickel/main.ncl`
-- **Example extension**: `provisioning/extensions/providers/upcloud/nickel/main.ncl`
-- **Production infrastructure**: `workspace_librecloud/nickel/wuji/main.ncl` (20 taskservs)
-
----
-
-## Approval
-
-**Status**: Implemented and Production-Ready
-
-- ✅ Architecture Team: Approved
-- ✅ Platform implementation: Complete (422 files)
-- ✅ Production validation: Passed (47 files active)
-- ✅ Backward compatibility: 100%
-- ✅ Real-world usage: Validated in wuji infrastructure
-
----
-
-**Last Updated**: 2025-12-15
-**Version**: 1.0.0
-**Implementation**: Complete (Phase 1-4 finished, workspace-level in progress)
+Value : Amplifies Nickel ecosystem beyond IaC:
+
+Schemas auto-generate type-safe UIs
+Forms output configurations back to Nickel
+Multiple backends: CLI, TUI, Web
+Multiple output formats: JSON, YAML, TOML, Nickel
+
+
+
+
+Expression-Based Structure
+KCL: Multiple top-level let bindings → Nickel: Single root expression with let...in chaining
+
+
+
+Schema Inheritance → Record Merging
+KCL: schema Server(defaults.ServerDefaults) → Nickel: defaults.ServerDefaults & { overrides }
+
+
+
+Optional Fields
+KCL: field?: type → Nickel: field = null or field = ""
+
+
+
+Union Types
+KCL: "ubuntu" | "debian" | "centos" → Nickel: [| 'ubuntu, 'debian, 'centos |]
+
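For example, the union type above corresponds to a Nickel enum contract like this (the `os` field name is illustrative):

```nickel
{
  os | [| 'ubuntu, 'debian, 'centos |]
     | default
     = 'ubuntu,
}
```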
+
+
+Boolean/Null Conversion
+KCL: True / False / None → Nickel: true / false / null
+
+
+
+
+
+Syntax Validation : 100% (all files compile)
+JSON Export : 100% success rate (4,680+ lines)
+Pattern Coverage : All 5 templates tested and proven
+Backward Compatibility : 100%
+Performance : 60% faster evaluation than KCL
+Test Coverage : 422 Nickel files validated in production
+
+
+
+
+
+60% performance gain in evaluation speed
+Reduced boilerplate (contracts + defaults separation)
+Greater flexibility (record merging without validation)
+Extensibility without conflicts (custom fields allowed)
+Simplified mental model (“JSON with functions”)
+Lazy evaluation (better performance for large configs)
+Clean exports (100% JSON/TOML compatible)
+Hybrid pattern (4 levels covering all use cases)
+Domain-organized architecture (8 logical domains, clear imports)
+Production deployment with frozen snapshots (immutable, reproducible)
+Ecosystem expansion (TypeDialog integration for UI generation)
+Real-world validation (47 files in productive use)
+20 taskservs deployed in production infrastructure
+
+
+
+Dual format support during transition (KCL + Nickel)
+Learning curve for team (new language)
+Migration effort (40 files migrated manually)
+Documentation updates (guides, examples, training)
+955 KCL files remain (gradual workspace migration)
+Frozen snapshots workflow (requires understanding workspace freeze)
+TypeDialog dependency (external Rust project)
+
+
+
+✅ Complete documentation in docs/development/kcl-module-system.md
+✅ 100% backward compatibility maintained
+✅ Migration framework established (5 templates, validation checklist)
+✅ Validation checklist for each migration step
+✅ 100% syntax validation on all files
+✅ Real-world usage validated (47 files in production)
+✅ Frozen snapshots guarantee reproducibility
+✅ Two deployment modes cover development and production
+✅ Gradual migration strategy (workspace-level, no hard cutoff)
+
+
+
+
+
+✅ Foundation (8 files) - Basic schemas, validation library
+✅ Core Schemas (8 files) - Settings, workspace config, gitea
+✅ Complex Features (7 files) - VM lifecycle, system config, services
+✅ Highly Complex (9+ files) - Modes, commands, orchestrator, main entry point
+✅ Platform schemas (422 files total)
+✅ Extensions (providers, clusters)
+✅ Production workspace (47 files, 20 taskservs)
+
+
+
+⏳ Workspace migration (323+ files in workspace_librecloud)
+⏳ Extension migration (taskservs, clusters, providers)
+⏳ Parallel testing against original KCL
+⏳ CI/CD integration updates
+
+
+
+User workspace KCL to Nickel (gradual, as needed)
+Full migration of legacy configurations
+TypeDialog UI generation for infrastructure
+
+
+
+
+
+
+
+ADR-010 : Configuration Format Strategy (multi-format approach)
+ADR-006 : CLI Refactoring (domain-driven design)
+ADR-004 : Hybrid Rust/Nushell Architecture (platform architecture)
+
+
+
+Entry point : provisioning/schemas/main.ncl
+Workspace pattern : workspace_librecloud/nickel/main.ncl
+Example extension : provisioning/extensions/providers/upcloud/nickel/main.ncl
+Production infrastructure : workspace_librecloud/nickel/wuji/main.ncl (20 taskservs)
+
+
+
+Status : Implemented and Production-Ready
+
+✅ Architecture Team: Approved
+✅ Platform implementation: Complete (422 files)
+✅ Production validation: Passed (47 files active)
+✅ Backward compatibility: 100%
+✅ Real-world usage: Validated in wuji infrastructure
+
+
+Last Updated : 2025-12-15
+Version : 1.0.0
+Implementation : Complete (Phase 1-4 finished, workspace-level in progress)
Accepted - 2025-12-15
@@ -19162,23 +16808,18 @@ import "lib/validation" as valid
}
}
}
-```plaintext
-
-Module system includes:
-
-- Import resolution with search paths
-- Standard library (`builtins`, stdlib packages)
-- Module caching
-- Complex evaluation context
-
-## Decision
-
-Implement the `nu_plugin_nickel` plugin as a **CLI wrapper** that invokes the external `nickel` command.
-
-### Architecture Diagram
-
-```plaintext
-┌─────────────────────────────┐
+
+Module system includes:
+
+Import resolution with search paths
+Standard library (builtins, stdlib packages)
+Module caching
+Complex evaluation context
+
+
+Implement the nu_plugin_nickel plugin as a CLI wrapper that invokes the external nickel command.
+
+┌─────────────────────────────┐
│ Nushell Script │
│ │
│ nickel-export json /file │
@@ -19222,55 +16863,48 @@ Implement the `nu_plugin_nickel` plugin as a **CLI wrapper** that invokes the ex
│ ✅ Cell path access works │
│ ✅ Piping works │
└─────────────────────────────┘
-```plaintext
-
-### Implementation Characteristics
-
-**Plugin provides**:
-
-- ✅ Nushell commands: `nickel-export`, `nickel-eval`, `nickel-format`, `nickel-validate`
-- ✅ JSON/YAML output parsing (serde_json → nu_protocol::Value)
-- ✅ Automatic caching (SHA256-based, ~80-90% hit rate)
-- ✅ Error handling (CLI errors → Nushell errors)
-- ✅ Type-safe output (nu_protocol::Value::Record, not strings)
-
-**Plugin delegates to Nickel CLI**:
-
-- ✅ Module resolution with search paths
-- ✅ Standard library access and discovery
-- ✅ Evaluation context setup
-- ✅ Module caching
-- ✅ Output formatting
-
-## Rationale
-
-### Why CLI Wrapper Is The Correct Choice
-
-| Aspect | Pure Rust (nickel-lang-core) | CLI Wrapper (chosen) |
-|--------|-------------------------------|----------------------|
-| **Module resolution** | ❓ Undocumented API | ✅ Official, proven |
-| **Search paths** | ❓ How to configure? | ✅ CLI handles it |
-| **Standard library** | ❓ How to access? | ✅ Automatic discovery |
-| **Import system** | ❌ API unclear | ✅ Built-in |
-| **Evaluation context** | ❌ Complex setup needed | ✅ CLI provides |
-| **Future versions** | ⚠️ Maintain parity | ✅ Automatic support |
-| **Maintenance burden** | 🔴 High | 🟢 Low |
-| **Complexity** | 🔴 High | 🟢 Low |
-| **Correctness** | ⚠️ Risk of divergence | ✅ Single source of truth |
-
-### The Module System Problem
-
-Using `nickel-lang-core` directly would require the plugin to:
-
-1. **Configure import search paths**:
-
- ```rust
- // Where should Nickel look for modules?
- // Current directory? Workspace? System paths?
- // This is complex and configuration-dependent
+
+Plugin provides :
+
+✅ Nushell commands: nickel-export, nickel-eval, nickel-format, nickel-validate
+✅ JSON/YAML output parsing (serde_json → nu_protocol::Value)
+✅ Automatic caching (SHA256-based, ~80-90% hit rate)
+✅ Error handling (CLI errors → Nushell errors)
+✅ Type-safe output (nu_protocol::Value::Record, not strings)
+
+Plugin delegates to Nickel CLI :
+
+✅ Module resolution with search paths
+✅ Standard library access and discovery
+✅ Evaluation context setup
+✅ Module caching
+✅ Output formatting
+
+
+
+| Aspect | Pure Rust (nickel-lang-core) | CLI Wrapper (chosen) |
+|--------|-------------------------------|----------------------|
+| Module resolution | ❓ Undocumented API | ✅ Official, proven |
+| Search paths | ❓ How to configure? | ✅ CLI handles it |
+| Standard library | ❓ How to access? | ✅ Automatic discovery |
+| Import system | ❌ API unclear | ✅ Built-in |
+| Evaluation context | ❌ Complex setup needed | ✅ CLI provides |
+| Future versions | ⚠️ Maintain parity | ✅ Automatic support |
+| Maintenance burden | 🔴 High | 🟢 Low |
+| Complexity | 🔴 High | 🟢 Low |
+| Correctness | ⚠️ Risk of divergence | ✅ Single source of truth |
+
+
+
+Using nickel-lang-core directly would require the plugin to:
+Configure import search paths :
+// Where should Nickel look for modules?
+// Current directory? Workspace? System paths?
+// This is complex and configuration-dependent
+
+
Access standard library :
// Where is the Nickel stdlib installed?
// How to handle different Nickel versions?
@@ -19316,8 +16950,8 @@ Using `nickel-lang-core` directly would require the plugin to:
Import resolution with multiple fallbacks
Evaluation context that mirrors CLI
-
-
+
+
Correctness : Module resolution guaranteed by official Nickel CLI
Reliability : No risk from reverse-engineering undocumented APIs
@@ -19327,10 +16961,10 @@ Using `nickel-lang-core` directly would require the plugin to:
User Expectations : Same behavior as CLI users experience
Community Alignment : Uses official Nickel distribution
-
+
External Dependency : Requires nickel binary installed in PATH
-Process Overhead : ~100-200ms per execution (heavily cached)
+Process Overhead : ~100-200 ms per execution (heavily cached)
Subprocess Management : Spawn handling and stderr capture needed
Distribution : Provisioning must include Nickel binary
@@ -19345,7 +16979,7 @@ Using `nickel-lang-core` directly would require the plugin to:
Performance :
Aggressive caching (80-90% typical hit rate)
-Cache hits: ~1-5ms (not 100-200ms)
+Cache hits: ~1-5 ms (not 100-200 ms)
Cache directory: ~/.cache/provisioning/config-cache/
Distribution :
@@ -19354,7 +16988,7 @@ Using `nickel-lang-core` directly would require the plugin to:
Installers set up Nickel automatically
CI/CD has Nickel available
-
+
Pros : No external dependency
Cons : Undocumented API, high risk, maintenance burden
@@ -19371,7 +17005,7 @@ Using `nickel-lang-core` directly would require the plugin to:
Pros : Uses official interface
Cons : LSP not designed for evaluation, wrong abstraction
Decision : REJECTED - Inappropriate tool
-
+
@@ -19405,90 +17039,2416 @@ cmd.arg("export").arg(file).arg("--format").arg(format);
// WRONG (previously):
cmd.arg("export").arg(format).arg(file);
// Results in: "nickel export json /file"
-// ↑ This triggers auto-import of nonexistent JSON module
-```plaintext
+// ↑ This triggers auto-import of nonexistent JSON module
+
+Cache Key : SHA256(file_content + format)
+Cache Hit Rate : 80-90% (typical provisioning workflows)
+Performance :
+
+Cache miss: ~100-200 ms (process fork)
+Cache hit: ~1-5 ms (filesystem read + parse)
+Speedup: 50-100x for cached runs
+
+Storage : ~/.cache/provisioning/config-cache/
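The caching strategy above (a hash of file content plus output format, with fast hits and slow misses) can be sketched as follows. This is an illustrative sketch, not the plugin's code: the ADR describes SHA-256 keys, but std's `DefaultHasher` stands in here so the example needs no external crate.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

// Sketch only: the plugin is described as keying on
// SHA256(file_content + format); DefaultHasher stands in here.
fn cache_key(file_content: &str, format: &str) -> u64 {
    let mut h = DefaultHasher::new();
    file_content.hash(&mut h);
    format.hash(&mut h);
    h.finish()
}

struct ExportCache {
    entries: HashMap<u64, String>, // key -> cached export output
}

impl ExportCache {
    fn new() -> Self {
        Self { entries: HashMap::new() }
    }

    // Return the cached output (the ~1-5 ms path) or run the expensive
    // export (the ~100-200 ms process fork) and store its result.
    fn get_or_insert_with(
        &mut self,
        content: &str,
        format: &str,
        run_export: impl FnOnce() -> String,
    ) -> String {
        let key = cache_key(content, format);
        self.entries.entry(key).or_insert_with(run_export).clone()
    }
}

fn main() {
    let mut cache = ExportCache::new();
    let first = cache.get_or_insert_with("{ a = 1 }", "json", || String::from("{\"a\":1}"));
    // Second call with identical content + format hits the cache;
    // the export closure is never invoked again.
    let second = cache.get_or_insert_with("{ a = 1 }", "json", || unreachable!());
    assert_eq!(first, second);
}
```

The same structure explains the 50-100x speedup: only the first evaluation of a given (content, format) pair pays the subprocess cost.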
+
+Plugin correctly processes JSON output:
+
+Invokes: nickel export /file.ncl --format json
+Receives: JSON string from stdout
+Parses: serde_json::Value
+Converts: json_value_to_nu_value() (recursive)
+Returns: nu_protocol::Value::Record (not string!)
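The five steps above hinge on a recursive conversion. The `Json` and `NuValue` enums below are simplified stand-ins for `serde_json::Value` and `nu_protocol::Value`, assumed here only so the pattern is self-contained:

```rust
// Minimal stand-ins for serde_json::Value and nu_protocol::Value,
// so the recursive conversion pattern is visible without the real crates.
#[derive(Debug)]
enum Json {
    Null,
    Bool(bool),
    Num(f64),
    Str(String),
    Array(Vec<Json>),
    Object(Vec<(String, Json)>),
}

#[derive(Debug, PartialEq)]
enum NuValue {
    Nothing,
    Bool(bool),
    Float(f64),
    String(String),
    List(Vec<NuValue>),
    Record(Vec<(String, NuValue)>),
}

// Mirrors the json_value_to_nu_value() step described above: walk the
// JSON tree and emit structured values, never one flat string.
fn json_to_nu(v: &Json) -> NuValue {
    match v {
        Json::Null => NuValue::Nothing,
        Json::Bool(b) => NuValue::Bool(*b),
        Json::Num(n) => NuValue::Float(*n),
        Json::Str(s) => NuValue::String(s.clone()),
        Json::Array(items) => NuValue::List(items.iter().map(json_to_nu).collect()),
        Json::Object(fields) => NuValue::Record(
            fields.iter().map(|(k, v)| (k.clone(), json_to_nu(v))).collect(),
        ),
    }
}

fn main() {
    let json = Json::Object(vec![(
        "database".to_string(),
        Json::Object(vec![("host".to_string(), Json::Str("db1".to_string()))]),
    )]);
    // A record (not a string) is what makes cell path access possible.
    assert!(matches!(json_to_nu(&json), NuValue::Record(_)));
}
```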
+
+This enables Nushell cell path access:
+nickel-export json /config.ncl | get database.host # ✅ Works
+
+
+Unit Tests :
+
+JSON parsing correctness
+Value type conversions
+Cache logic
+
+Integration Tests :
+
+Real Nickel file execution
+Module imports verification
+Search path resolution
+
+Manual Verification :
+# Test module imports
+nickel-export json /workspace/config.ncl
-## Caching Strategy
+# Test cell path access
+nickel-export json /workspace/config.ncl | get database
-**Cache Key**: SHA256(file_content + format)
-**Cache Hit Rate**: 80-90% (typical provisioning workflows)
-**Performance**:
+# Verify output types
+nickel-export json /workspace/config.ncl | describe
+# Should show: record, not string
+
+
+Plugin integrates with provisioning config system:
+
+Nickel path auto-detected: which nickel
+Cache location: platform-specific cache_dir()
+Errors: consistent with provisioning patterns
+
+
+
+
+Status : Accepted and Implemented
+Last Updated : 2025-12-15
+Implementation : Complete
+Tests : Passing
+
+
+Accepted - 2025-01-08
+
+The provisioning system requires interactive user input for configuration workflows, workspace initialization, credential setup, and guided deployment scenarios. The system architecture combines Rust (performance-critical), Nushell (scripting), and Nickel (declarative configuration), creating challenges for interactive form-based input and multi-user collaboration.
+
+Current limitations :
+
+
+Nushell CLI : Terminal-only interaction
+
+input command: Single-line text prompts only
+No form validation, no complex multi-field forms
+Limited to single-user, terminal-bound workflows
+User experience: Basic and error-prone
+
+
+
+Nickel : Declarative configuration language
+
+Cannot handle interactive prompts (by design)
+Pure evaluation model (no side effects)
+Forms must be defined statically, not interactively
+No runtime user interaction
+
+
+
+Existing Solutions : Inadequate for modern infrastructure provisioning
+
+Shell-based prompts : Error-prone, no validation, single-user
+Custom web forms : High maintenance, inconsistent UX
+Separate admin panels : Disconnected from IaC workflow
+Terminal-only TUI : Limited to SSH sessions, no collaboration
+
+
+
+
+
+
+Workspace Initialization :
+# Current: Error-prone prompts
+let workspace_name = input "Workspace name: "
+let provider = input "Provider (aws/azure/oci): "
+# No validation, no autocomplete, no guidance
+
+
+
+Credential Setup :
+# Current: Insecure and basic
+let api_key = input "API Key: " # Shows in terminal history
+let region = input "Region: " # No validation
+
+
+
+Configuration Wizards :
+
+Database connection setup (host, port, credentials, SSL)
+Network configuration (CIDR blocks, subnets, gateways)
+Security policies (encryption, access control, audit)
+
+
+
+Guided Deployments :
+
+Multi-step infrastructure provisioning
+Service selection with dependencies
+Environment-specific overrides
+
+
+
+
+
+✅ Terminal UI widgets : Text input, password, select, multi-select, confirm
+✅ Validation : Type checking, regex patterns, custom validators
+✅ Security : Password masking, sensitive data handling
+✅ User Experience : Arrow key navigation, autocomplete, help text
+✅ Composability : Chain multiple prompts into forms
+✅ Error Handling : Clear validation errors, retry logic
+✅ Rust Integration : Native Rust library (no subprocess overhead)
+✅ Cross-Platform : Works on Linux, macOS, Windows
+
+
+Integrate typdialog with its Web UI backend as the standard interactive configuration interface for the provisioning platform. The major achievement of typdialog is not the TUI; it is the Web UI backend that enables browser-based forms, multi-user collaboration, and seamless integration with the provisioning orchestrator.
+
+┌─────────────────────────────────────────┐
+│ Nushell Script │
+│ │
+│ provisioning workspace init │
+│ provisioning config setup │
+│ provisioning deploy guided │
+└────────────┬────────────────────────────┘
+ │
+ ▼
+┌─────────────────────────────────────────┐
+│ Rust CLI Handler │
+│ (provisioning/core/cli/) │
+│ │
+│ - Parse command │
+│ - Determine if interactive needed │
+│ - Invoke TUI dialog module │
+└────────────┬────────────────────────────┘
+ │
+ ▼
+┌─────────────────────────────────────────┐
+│ TUI Dialog Module │
+│ (typdialog wrapper) │
+│ │
+│ - Form definition (validation rules) │
+│ - Widget rendering (text, select) │
+│ - User input capture │
+│ - Validation execution │
+│ - Result serialization (JSON/TOML) │
+└────────────┬────────────────────────────┘
+ │
+ ▼
+┌─────────────────────────────────────────┐
+│ typdialog Library │
+│ │
+│ - Terminal rendering (crossterm) │
+│ - Event handling (keyboard, mouse) │
+│ - Widget state management │
+│ - Input validation engine │
+└────────────┬────────────────────────────┘
+ │
+ ▼
+┌─────────────────────────────────────────┐
+│ Terminal (stdout/stdin) │
+│ │
+│ ✅ Rich TUI with validation │
+│ ✅ Secure password input │
+│ ✅ Guided multi-step forms │
+└─────────────────────────────────────────┘
+
+
+CLI Integration Provides :
+
+✅ Native Rust commands with TUI dialogs
+✅ Form-based input for complex configurations
+✅ Validation rules defined in Rust (type-safe)
+✅ Secure input (password masking, no history)
+✅ Error handling with retry logic
+✅ Serialization to Nickel/TOML/JSON
+
+TUI Dialog Library Handles :
+
+✅ Terminal UI rendering and event loop
+✅ Widget management (text, select, checkbox, confirm)
+✅ Input validation and error display
+✅ Navigation (arrow keys, tab, enter)
+✅ Cross-platform terminal compatibility
+
+
+
+| Aspect | Shell Prompts (current) | Web Forms | TUI Dialog (chosen) |
+|--------|-------------------------|-----------|---------------------|
+| User Experience | ❌ Basic text only | ✅ Rich UI | ✅ Rich TUI |
+| Validation | ❌ Manual, error-prone | ✅ Built-in | ✅ Built-in |
+| Security | ❌ Plain text, history | ⚠️ Network risk | ✅ Secure terminal |
+| Setup Complexity | ✅ None | ❌ Server required | ✅ Minimal |
+| Terminal Workflow | ✅ Native | ❌ Browser switch | ✅ Native |
+| Offline Support | ✅ Always | ❌ Requires server | ✅ Always |
+| Dependencies | ✅ None | ❌ Web stack | ✅ Single crate |
+| Error Handling | ❌ Manual | ⚠️ Complex | ✅ Built-in retry |
+
+
+
+Nushell’s input command is limited:
+# Current: No validation, no security
+let password = input "Password: " # ❌ Shows in terminal
+let region = input "AWS Region: " # ❌ No autocomplete/validation
-- Cache miss: ~100-200ms (process fork)
-- Cache hit: ~1-5ms (filesystem read + parse)
-- Speedup: 50-100x for cached runs
+# Cannot do:
+# - Multi-select from options
+# - Conditional fields (if X then ask Y)
+# - Password masking
+# - Real-time validation
+# - Autocomplete/fuzzy search
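The validate-and-retry behaviour a form library adds over plain `input` can be sketched as below. `prompt_validated` is a hypothetical helper, not part of any named crate; the input source is an iterator so the loop can run without a terminal.

```rust
// Retry until a value passes validation or input is exhausted —
// the guarantee a TUI form gives and a bare `input` prompt does not.
fn prompt_validated<I>(
    mut inputs: I,
    validate: impl Fn(&str) -> Result<(), String>,
) -> Option<String>
where
    I: Iterator<Item = String>,
{
    while let Some(candidate) = inputs.next() {
        match validate(&candidate) {
            Ok(()) => return Some(candidate),
            Err(msg) => eprintln!("invalid: {msg}"), // user sees why, then retries
        }
    }
    None
}

fn validate_region(region: &str) -> Result<(), String> {
    const VALID: &[&str] = &["us-west-1", "us-west-2", "us-east-1", "eu-west-1"];
    if VALID.contains(&region) {
        Ok(())
    } else {
        Err(format!("must be one of: {}", VALID.join(", ")))
    }
}

fn main() {
    // First answer is rejected, second passes — the bad value never
    // silently reaches the generated configuration.
    let answers = vec!["mars-1".to_string(), "us-west-2".to_string()];
    let got = prompt_validated(answers.into_iter(), validate_region);
    assert_eq!(got.as_deref(), Some("us-west-2"));
}
```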
+
+
+Nickel is declarative and cannot prompt users:
+# Nickel defines what the config looks like, NOT how to get it
+{
+ database = {
+ host | String,
+ port | Number,
+ credentials | { username: String, password: String },
+ }
+}
-**Storage**: `~/.cache/provisioning/config-cache/`
+# Nickel cannot:
+# - Prompt user for values
+# - Show interactive forms
+# - Validate input interactively
+
+
+Rust provides :
+
+Native terminal control (crossterm, termion)
+Type-safe form definitions
+Validation rules as functions
+Secure memory handling (password zeroization)
+Performance (no subprocess overhead)
+
+TUI Dialog provides :
+
+Widget library (text, select, multi-select, confirm)
+Event loop and rendering
+Validation framework
+Error display and retry logic
+
+Integration enables :
+
+Nushell calls Rust CLI → Shows TUI dialog → Returns validated config
+Nickel receives validated config → Type checks → Merges with defaults
+
+
+
+
+User Experience : Professional TUI with validation and guidance
+Security : Password masking, sensitive data protection, no terminal history
+Validation : Type-safe rules enforced before config generation
+Developer Experience : Reusable form components across CLI commands
+Error Handling : Clear validation errors with retry options
+Offline First : No network dependencies for interactive input
+Terminal Native : Fits CLI workflow, no context switching
+Maintainability : Single library for all interactive input
+
+
+
+Terminal Dependency : Requires interactive terminal (not scriptable)
+Learning Curve : Developers must learn TUI dialog patterns
+Library Lock-in : Tied to specific TUI library API
+Testing Complexity : Interactive tests require terminal mocking
+Non-Interactive Fallback : Need alternative for CI/CD and scripts
+
+
+Non-Interactive Mode :
+// Support both interactive and non-interactive
+let config = if terminal::is_interactive() {
+    // Show TUI dialog
+    show_workspace_form()?
+} else {
+    // Use config file or CLI args
+    load_config_from_file(args.config)?
+};
+Testing :
+// Unit tests: Test form validation logic (no TUI)
+#[test]
+fn test_validate_workspace_name() {
+ assert!(validate_name("my-workspace").is_ok());
+ assert!(validate_name("invalid name!").is_err());
+}
-## JSON Output Processing
+// Integration tests: Use mock terminal or config files
+Scriptability :
+# Batch mode: Provide config via file
+provisioning workspace init --config workspace.toml
-Plugin correctly processes JSON output:
+# Interactive mode: Show TUI dialog
+provisioning workspace init --interactive
+
+Documentation :
+
+Form schemas documented in docs/
+Config file examples provided
+Screenshots of TUI forms in guides
+
+
+
+Pros : Simple, no dependencies
+Cons : No validation, poor UX, security risks
+Decision : REJECTED - Inadequate for production use
+
+Pros : Rich UI, well-known patterns
+Cons : Requires server, network dependency, context switch
+Decision : REJECTED - Too complex for CLI tool
+
+Pros : Tailored to each need
+Cons : High maintenance, code duplication, inconsistent UX
+Decision : REJECTED - Not sustainable
+
+Pros : Mature, cross-platform
+Cons : Subprocess overhead, limited validation, shell escaping issues
+Decision : REJECTED - Poor Rust integration
+
+Pros : Fully scriptable, no interactive complexity
+Cons : Steep learning curve, no guidance for new users
+Decision : REJECTED - Poor user onboarding experience
+
+
+use typdialog::Form;
-1. Invokes: `nickel export /file.ncl --format json`
-2. Receives: JSON string from stdout
-3. Parses: serde_json::Value
-4. Converts: `json_value_to_nu_value()` (recursive)
-5. Returns: nu_protocol::Value::Record (not string!)
+pub fn workspace_initialization_form() -> Result<WorkspaceConfig> {
+ let form = Form::new("Workspace Initialization")
+ .add_text_input("name", "Workspace Name")
+ .required()
+ .validator(|s| validate_workspace_name(s))
+ .add_select("provider", "Cloud Provider")
+ .options(&["aws", "azure", "oci", "local"])
+ .required()
+ .add_text_input("region", "Region")
+ .default("us-west-2")
+ .validator(|s| validate_region(s))
+ .add_password("admin_password", "Admin Password")
+ .required()
+ .min_length(12)
+ .add_confirm("enable_monitoring", "Enable Monitoring?")
+ .default(true);
-This enables Nushell cell path access:
+ let responses = form.run()?;
-```nushell
-nickel-export json /config.ncl | .database.host # ✅ Works
-```plaintext
+ // Convert to strongly-typed config
+ let config = WorkspaceConfig {
+ name: responses.get_string("name")?,
+ provider: responses.get_string("provider")?.parse()?,
+ region: responses.get_string("region")?,
+ admin_password: responses.get_password("admin_password")?,
+ enable_monitoring: responses.get_bool("enable_monitoring")?,
+ };
-# Testing Strategy
+ Ok(config)
+}
+
+// 1. Get validated input from TUI dialog
+let config = workspace_initialization_form()?;
-**Unit Tests**:
+// 2. Serialize to TOML/JSON
+let config_toml = toml::to_string(&config)?;
-- JSON parsing correctness
-- Value type conversions
-- Cache logic
+// 3. Write to workspace config
+fs::write("workspace/config.toml", config_toml)?;
-**Integration Tests**:
+// 4. Nickel merges with defaults
+// nickel export workspace/main.ncl --format json
+// (uses workspace/config.toml as input)
+
+// provisioning/core/cli/src/commands/workspace.rs
-- Real Nickel file execution
-- Module imports verification
-- Search path resolution
+#[derive(Parser)]
+pub enum WorkspaceCommand {
+ Init {
+ #[arg(long)]
+ interactive: bool,
-**Manual Verification**:
+ #[arg(long)]
+ config: Option<PathBuf>,
+ },
+}
-```bash
-Test module imports
- nickel-export json /workspace/config.ncl
+pub fn handle_workspace_init(args: InitArgs) -> Result<()> {
+ if args.interactive || terminal::is_interactive() {
+ // Show TUI dialog
+ let config = workspace_initialization_form()?;
+ config.save("workspace/config.toml")?;
+ } else if let Some(config_path) = args.config {
+ // Use provided config
+ let config = WorkspaceConfig::load(config_path)?;
+ config.save("workspace/config.toml")?;
+ } else {
+ bail!("Either --interactive or --config required");
+ }
-Test cell path access
- nickel-export json /workspace/config.ncl | .database
+ // Continue with workspace setup
+ Ok(())
+}
+
+pub fn validate_workspace_name(name: &str) -> Result<(), String> {
+ // Alphanumeric, hyphens, 3-32 chars
+ let re = Regex::new(r"^[a-z0-9-]{3,32}$").unwrap();
+ if !re.is_match(name) {
+ return Err("Name must be 3-32 lowercase alphanumeric chars with hyphens".into());
+ }
+ Ok(())
+}
-Verify output types
- nickel-export json /workspace/config.ncl | type
-Should show: record, not string
- ```plaintext
+pub fn validate_region(region: &str) -> Result<(), String> {
+ const VALID_REGIONS: &[&str] = &["us-west-1", "us-west-2", "us-east-1", "eu-west-1"];
+ if !VALID_REGIONS.contains(&region) {
+ return Err(format!("Invalid region. Must be one of: {}", VALID_REGIONS.join(", ")));
+ }
+ Ok(())
+}
+
+use zeroize::Zeroizing;
-# Configuration Integration
+pub fn get_secure_password() -> Result<Zeroizing<String>> {
+ let form = Form::new("Secure Input")
+ .add_password("password", "Password")
+ .required()
+ .min_length(12)
+ .validator(password_strength_check);
-Plugin integrates with provisioning config system:
+ let responses = form.run()?;
-- Nickel path auto-detected: `which nickel`
-- Cache location: platform-specific `cache_dir()`
-- Errors: consistent with provisioning patterns
+ // Password automatically zeroized when dropped
+ let password = Zeroizing::new(responses.get_password("password")?);
-# References
+ Ok(password)
+}
+
+Unit Tests :
+#[test]
+fn test_workspace_name_validation() {
+ assert!(validate_workspace_name("my-workspace").is_ok());
+ assert!(validate_workspace_name("UPPERCASE").is_err());
+ assert!(validate_workspace_name("ab").is_err()); // Too short
+}
+Integration Tests :
+// Use non-interactive mode with config files
+#[test]
+fn test_workspace_init_non_interactive() {
+ let config = WorkspaceConfig {
+ name: "test-workspace".into(),
+ provider: Provider::Local,
+ region: "us-west-2".into(),
+ admin_password: "secure-password-123".into(),
+ enable_monitoring: true,
+ };
-- ADR-012: Nushell Plugins (general framework)
-- [Nickel Official Documentation](https://nickel-lang.org/)
-- [nickel-lang-core Rust Crate](https://crates.io/crates/nickel-lang-core/)
-- nu_plugin_nickel Implementation: `provisioning/core/plugins/nushell-plugins/nu_plugin_nickel/`
-- [Related: ADR-013-NUSHELL-KCL-PLUGIN](adr/adr-nushell-kcl-plugin-cli-wrapper.md)
+ config.save("/tmp/test-config.toml").unwrap();
----
+ let result = handle_workspace_init(InitArgs {
+ interactive: false,
+ config: Some("/tmp/test-config.toml".into()),
+ });
-**Status**: Accepted and Implemented
-**Last Updated**: 2025-12-15
-**Implementation**: Complete
-**Tests**: Passing
+ assert!(result.is_ok());
+}
+Manual Testing :
+# Test interactive flow
+cargo build --release
+./target/release/provisioning workspace init --interactive
+
+# Test validation errors
+# - Try invalid workspace name
+# - Try weak password
+# - Try invalid region
+
+
+CLI Flag :
+# provisioning/config/config.defaults.toml
+[ui]
+interactive_mode = "auto" # "auto" | "always" | "never"
+dialog_theme = "default" # "default" | "minimal" | "colorful"
+
+Environment Override :
+# Force non-interactive mode (for CI/CD)
+export PROVISIONING_INTERACTIVE=false
+
+# Force interactive mode
+export PROVISIONING_INTERACTIVE=true
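The `interactive_mode` setting and `PROVISIONING_INTERACTIVE` override could be resolved as sketched below. The precedence (environment over config over TTY detection) and the accepted values are assumptions for illustration, not the shipped behaviour.

```rust
// Resolve the effective interactivity from three inputs:
// the env override, the config default, and TTY detection.
fn resolve_interactive(config_mode: &str, env_override: Option<&str>, tty: bool) -> bool {
    // Environment variable wins over config (so CI/CD can force batch mode).
    if let Some(v) = env_override {
        return matches!(v.to_ascii_lowercase().as_str(), "true" | "1" | "always");
    }
    match config_mode {
        "always" => true,
        "never" => false,
        // "auto": interactive only when attached to a terminal
        _ => tty,
    }
}

fn main() {
    assert!(resolve_interactive("always", None, false));
    assert!(!resolve_interactive("auto", None, false)); // headless => batch
    assert!(!resolve_interactive("always", Some("false"), true)); // env wins
}
```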
+
+
+User Guides :
+
+docs/user/interactive-configuration.md - How to use TUI dialogs
+docs/guides/workspace-setup.md - Workspace initialization with screenshots
+
+Developer Documentation :
+
+docs/development/tui-forms.md - Creating new TUI forms
+Form definition best practices
+Validation rule patterns
+
+Configuration Schema :
+# provisioning/schemas/workspace.ncl
+{
+ WorkspaceConfig = {
+ name
+ | doc "Workspace identifier (3-32 alphanumeric chars with hyphens)"
+ | String,
+ provider
+ | doc "Cloud provider"
+ | [| 'aws, 'azure, 'oci, 'local |],
+ region
+ | doc "Deployment region"
+ | String,
+ admin_password
+ | doc "Admin password (min 12 characters)"
+ | String,
+ enable_monitoring
+ | doc "Enable monitoring services"
+ | Bool,
+ }
+}
+
+
+Phase 1: Add Library
+
+Add typdialog dependency to provisioning/core/cli/Cargo.toml
+Create TUI dialog wrapper module
+Implement basic text/select widgets
+
+Phase 2: Implement Forms
+
+Workspace initialization form
+Credential setup form
+Configuration wizard forms
+
+Phase 3: CLI Integration
+
+Update CLI commands to use TUI dialogs
+Add --interactive / --config flags
+Implement non-interactive fallback
+
+Phase 4: Documentation
+
+User guides with screenshots
+Developer documentation for form creation
+Example configs for non-interactive use
+
+Phase 5: Testing
+
+Unit tests for validation logic
+Integration tests with config files
+Manual testing on all platforms
+
+
+
+typdialog Crate (or similar: dialoguer, inquire)
+crossterm - Terminal manipulation
+zeroize - Secure memory zeroization
+ADR-004: Hybrid Architecture (Rust/Nushell integration)
+ADR-011: Nickel Migration (declarative config language)
+ADR-012: Nushell Plugins (CLI wrapper patterns)
+Nushell input command limitations: Nushell Book - Input
+
+
+Status : Accepted
+Last Updated : 2025-01-08
+Implementation : Planned
+Priority : High (User onboarding and security)
+Estimated Complexity : Moderate
+
+
+Accepted - 2025-01-08
+
+The provisioning system manages sensitive data across multiple infrastructure layers: cloud provider credentials, database passwords, API keys, SSH keys, encryption keys, and service tokens. The current security architecture (ADR-009) includes SOPS for encrypted config files and Age for key management, but lacks a centralized secrets management solution with dynamic secrets, access control, and audit logging.
+
+Existing Approach :
+
+
+SOPS + Age : Static secrets encrypted in config files
+
+Good: Version-controlled, gitops-friendly
+Limited: Static rotation, no audit trail, manual key distribution
+
+
+
+Nickel Configuration : Declarative secrets references
+
+Good: Type-safe configuration
+Limited: Cannot generate dynamic secrets, no lifecycle management
+
+
+
+Manual Secret Injection : Environment variables, CLI flags
+
+Good: Simple for development
+Limited: No security guarantees, prone to leakage
+
+
+
+
+Security Issues :
+
+❌ No centralized audit trail (who accessed which secret when)
+❌ No automatic secret rotation policies
+❌ No fine-grained access control (Cedar policies not enforced on secrets)
+❌ Secrets scattered across: SOPS files, env vars, config files, K8s secrets
+❌ No detection of secret sprawl or leaked credentials
+
+Operational Issues :
+
+❌ Manual secret rotation (error-prone, often neglected)
+❌ No secret versioning (cannot rollback to previous credentials)
+❌ Difficult onboarding (manual key distribution)
+❌ No dynamic secrets (credentials exist indefinitely)
+
+Compliance Issues :
+
+❌ Cannot prove compliance with secret access policies
+❌ No audit logs for regulatory requirements
+❌ Cannot enforce secret expiration policies
+❌ Difficult to demonstrate least-privilege access
+
+
+
+
+Dynamic Database Credentials :
+
+Generate short-lived DB credentials for applications
+Automatic rotation based on policies
+Revocation on application termination
+
+
+
+Cloud Provider API Keys :
+
+Centralized storage with access control
+Audit trail of credential usage
+Automatic rotation schedules
+
+
+
+Service-to-Service Authentication :
+
+Dynamic tokens for microservices
+Short-lived certificates for mTLS
+Automatic renewal before expiration
+
+
+
+SSH Key Management :
+
+Temporal SSH keys (ADR-009 SSH integration)
+Centralized certificate authority
+Audit trail of SSH access
+
+
+
+Encryption Key Management :
+
+Master encryption keys for data at rest
+Key rotation and versioning
+Integration with KMS systems
+
+
+
+
+
+✅ Dynamic Secrets : Generate credentials on-demand with TTL
+✅ Access Control : Integration with Cedar authorization policies
+✅ Audit Logging : Complete trail of secret access and modifications
+✅ Secret Rotation : Automatic and manual rotation policies
+✅ Versioning : Track secret versions, enable rollback
+✅ High Availability : Distributed, fault-tolerant architecture
+✅ Encryption at Rest : AES-256-GCM for stored secrets
+✅ API-First : RESTful API for integration
+✅ Plugin Ecosystem : Extensible backends (AWS, Azure, databases)
+✅ Open Source : Self-hosted, no vendor lock-in
+
+
+Integrate SecretumVault as the centralized secrets management system for the provisioning platform.
+
+┌─────────────────────────────────────────────────────────────┐
+│ Provisioning CLI / Orchestrator / Services │
+│ │
+│ - Workspace initialization (credentials) │
+│ - Infrastructure deployment (cloud API keys) │
+│ - Service configuration (database passwords) │
+│ - SSH temporal keys (certificate generation) │
+└────────────┬────────────────────────────────────────────────┘
+ │
+ ▼
+┌─────────────────────────────────────────────────────────────┐
+│ SecretumVault Client Library (Rust) │
+│ (provisioning/core/libs/secretum-client/) │
+│ │
+│ - Authentication (token, mTLS) │
+│ - Secret CRUD operations │
+│ - Dynamic secret generation │
+│ - Lease renewal and revocation │
+│ - Policy enforcement │
+└────────────┬────────────────────────────────────────────────┘
+ │ HTTPS + mTLS
+ ▼
+┌─────────────────────────────────────────────────────────────┐
+│ SecretumVault Server │
+│ (Rust-based Vault implementation) │
+│ │
+│ ┌───────────────────────────────────────────────────┐ │
+│ │ API Layer (REST + gRPC) │ │
+│ ├───────────────────────────────────────────────────┤ │
+│ │ Authentication & Authorization │ │
+│ │ - Token auth, mTLS, OIDC integration │ │
+│ │ - Cedar policy enforcement │ │
+│ ├───────────────────────────────────────────────────┤ │
+│ │ Secret Engines │ │
+│ │ - KV (key-value v2 with versioning) │ │
+│ │ - Database (dynamic credentials) │ │
+│ │ - SSH (certificate authority) │ │
+│ │ - PKI (X.509 certificates) │ │
+│ │ - Cloud Providers (AWS/Azure/OCI) │ │
+│ ├───────────────────────────────────────────────────┤ │
+│ │ Storage Backend │ │
+│ │ - Encrypted storage (AES-256-GCM) │ │
+│ │ - PostgreSQL / Raft cluster │ │
+│ ├───────────────────────────────────────────────────┤ │
+│ │ Audit Backend │ │
+│ │ - Structured logging (JSON) │ │
+│ │ - Syslog, file, database sinks │ │
+│ └───────────────────────────────────────────────────┘ │
+└─────────────────────────────────────────────────────────────┘
+ │
+ ▼
+┌─────────────────────────────────────────────────────────────┐
+│ Backends (Dynamic Secret Generation) │
+│ │
+│ - PostgreSQL/MySQL (database credentials) │
+│ - AWS IAM (temporary access keys) │
+│ - Azure AD (service principals) │
+│ - SSH CA (signed certificates) │
+│ - PKI (X.509 certificates) │
+└─────────────────────────────────────────────────────────────┘
+
+
+SecretumVault Provides :
+
+✅ Dynamic secret generation with configurable TTL
+✅ Secret versioning and rollback capabilities
+✅ Fine-grained access control (Cedar policies)
+✅ Complete audit trail (all operations logged)
+✅ Automatic secret rotation policies
+✅ High availability (Raft consensus)
+✅ Encryption at rest (AES-256-GCM)
+✅ Plugin architecture for secret backends
+✅ RESTful and gRPC APIs
+✅ Rust implementation (performance, safety)
+
+Integration with Provisioning System :
+
+✅ Rust client library (native integration)
+✅ Nushell commands via CLI wrapper
+✅ Nickel configuration references secrets
+✅ Cedar policies control secret access
+✅ Orchestrator manages secret lifecycle
+✅ SSH integration for temporal keys
+✅ KMS integration for encryption keys
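One common client-side pattern for the dynamic secrets listed above — an assumption here, not a documented SecretumVault API — is to renew a lease well before its TTL expires, so a credential never lapses mid-operation:

```rust
use std::time::Duration;

// Hypothetical lease shape: a dynamic secret carries a TTL granted
// by the server.
struct Lease {
    ttl: Duration,
}

impl Lease {
    // Schedule renewal at ~2/3 of the TTL, leaving headroom for retries.
    fn renew_after(&self) -> Duration {
        Duration::from_secs(self.ttl.as_secs() * 2 / 3)
    }
}

fn main() {
    // A 1-hour dynamic DB credential would be renewed after 40 minutes.
    let db_lease = Lease { ttl: Duration::from_secs(3600) };
    assert_eq!(db_lease.renew_after(), Duration::from_secs(2400));
}
```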
+
+
+
+| Aspect | SOPS + Age (current) | HashiCorp Vault | SecretumVault (chosen) |
+|--------|----------------------|-----------------|------------------------|
+| Dynamic Secrets | ❌ Static only | ✅ Full support | ✅ Full support |
+| Rust Native | ⚠️ External CLI | ❌ Go binary | ✅ Pure Rust |
+| Cedar Integration | ❌ None | ❌ Custom policies | ✅ Native Cedar |
+| Audit Trail | ❌ Git only | ✅ Comprehensive | ✅ Comprehensive |
+| Secret Rotation | ❌ Manual | ✅ Automatic | ✅ Automatic |
+| Open Source | ✅ Yes | ⚠️ MPL 2.0 (BSL now) | ✅ Yes |
+| Self-Hosted | ✅ Yes | ✅ Yes | ✅ Yes |
+| License | ✅ Permissive | ⚠️ BSL (proprietary) | ✅ Permissive |
+| Versioning | ⚠️ Git commits | ✅ Built-in | ✅ Built-in |
+| High Availability | ❌ Single file | ✅ Raft cluster | ✅ Raft cluster |
+| Performance | ✅ Fast (local) | ⚠️ Network latency | ✅ Rust performance |
+
+
+
+SOPS is excellent for static secrets in git, but inadequate for:
+
+Dynamic Credentials : Cannot generate temporary DB passwords
+Audit Trail : Git commits are insufficient for compliance
+Rotation Policies : Manual rotation is error-prone
+Access Control : No runtime policy enforcement
+Secret Lifecycle : Cannot track usage or revoke access
+Multi-System Integration : Limited to files, not API-accessible
+
+Complementary Approach :
+
+SOPS: Configuration files with long-lived secrets (gitops workflow)
+SecretumVault: Runtime dynamic secrets, short-lived credentials, audit trail
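The complementary split above can be expressed as a small routing rule. The `route` function and its secret-kind strings are illustrative assumptions, not part of the provisioning CLI:

```rust
// Illustrative routing rule for the SOPS / SecretumVault split.
#[derive(Debug, PartialEq)]
enum SecretStore {
    Sops,          // static, version-controlled config secrets (gitops)
    SecretumVault, // dynamic, short-lived, audited runtime secrets
}

fn route(secret_kind: &str) -> SecretStore {
    match secret_kind {
        // Dynamic / short-lived material belongs in the vault.
        "db-credential" | "ssh-cert" | "service-token" | "cloud-api-key" => {
            SecretStore::SecretumVault
        }
        // Long-lived, file-shaped secrets stay SOPS-encrypted in git.
        _ => SecretStore::Sops,
    }
}

fn main() {
    assert_eq!(route("db-credential"), SecretStore::SecretumVault);
    assert_eq!(route("workspace-config-secret"), SecretStore::Sops);
}
```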
+
+
+HashiCorp Vault Limitations:
+
+License Change: BSL (Business Source License), which restricts some production use
+Not Rust Native: Go binary, subprocess overhead
+Custom Policy Language: HCL policies, not Cedar (provisioning standard)
+Complex Deployment: Heavy operational burden
+Vendor Lock-In: HashiCorp ecosystem dependency
+
+SecretumVault Advantages:
+
+Rust Native: Zero-cost integration, no subprocess spawning
+Cedar Policies: Consistent with ADR-008 authorization model
+Lightweight: Smaller binary, lower resource usage
+Open Source: Permissive license, community-driven
+Provisioning-First: Designed for IaC workflows
+
+
+ADR-009 (Security System):
+
+SOPS: Static config encryption (unchanged)
+Age: Key management for SOPS (unchanged)
+SecretumVault: Dynamic secrets, runtime access control (new)
+
+ADR-008 (Cedar Authorization):
+
+Cedar policies control SecretumVault secret access
+Fine-grained permissions: read:secret:database/prod/password
+Audit trail records Cedar policy decisions
+
+SSH Temporal Keys:
+
+SecretumVault SSH CA signs user certificates
+Short-lived certificates (1-24 hours)
+Audit trail of SSH access
+
+
+
+
+Security Posture: Centralized secrets with audit trail and rotation
+Compliance: Complete audit logs for regulatory requirements
+Operational Excellence: Automatic rotation, dynamic credentials
+Developer Experience: Simple API for secret access
+Performance: Rust implementation, zero-cost abstractions
+Consistency: Cedar policies across entire system (auth + secrets)
+Observability: Metrics, logs, traces for secret access
+Disaster Recovery: Secret versioning enables rollback
+
+
+
+Infrastructure Complexity: Additional service to deploy and operate
+High Availability Requirements: Raft cluster needs 3+ nodes
+Migration Effort: Existing SOPS secrets need migration path
+Learning Curve: Operators must learn vault concepts
+Dependency Risk: Critical path service (secrets unavailable = system down)
+
+
+High Availability:
+# Deploy SecretumVault cluster (3 nodes)
+provisioning deploy secretum-vault --ha --replicas 3
+
+# Automatic leader election via Raft
+# Clients auto-reconnect to leader
+
+Migration from SOPS:
+# Phase 1: Import existing SOPS secrets into SecretumVault
+provisioning secrets migrate --from-sops config/secrets.yaml
+
+# Phase 2: Update Nickel configs to reference vault paths
+# Phase 3: Deprecate SOPS for runtime secrets (keep for config files)
+
+Fallback Strategy:
+// Graceful degradation if vault unavailable
+let secret = match vault_client.get_secret("database/password").await {
+ Ok(s) => s,
+ Err(VaultError::Unavailable) => {
+ // Fallback to SOPS for read-only operations
+ warn!("Vault unavailable, using SOPS fallback");
+ sops_decrypt("config/secrets.yaml", "database.password")?
+ },
+ Err(e) => return Err(e),
+};
+Operational Monitoring:
+# prometheus metrics
+secretum_vault_request_duration_seconds
+secretum_vault_secret_lease_expiry
+secretum_vault_auth_failures_total
+secretum_vault_raft_leader_changes
+
+# Alerts: Vault unavailable, high auth failure rate, lease expiry
+
+
+
+Pros: No new infrastructure, simple
+Cons: No dynamic secrets, no audit trail, manual rotation
+Decision: REJECTED - Insufficient for production security
+
+Pros: Mature, feature-rich, widely adopted
+Cons: BSL license, Go binary, HCL policies (not Cedar), complex deployment
+Decision: REJECTED - License and integration concerns
+
+Pros: Fully managed, high availability
+Cons: Vendor lock-in, multi-cloud complexity, cost at scale
+Decision: REJECTED - Against open-source and multi-cloud principles
+
+Pros: Enterprise features
+Cons: Proprietary, expensive, poor API integration
+Decision: REJECTED - Not suitable for IaC automation
+
+Pros: Full control, tailored to needs
+Cons: High maintenance burden, security risk, reinventing wheel
+Decision: REJECTED - SecretumVault provides this already
+
+
+# Deploy via provisioning system
+provisioning deploy secretum-vault \
+ --ha \
+ --replicas 3 \
+ --storage postgres \
+ --tls-cert /path/to/cert.pem \
+ --tls-key /path/to/key.pem
+
+# Initialize and unseal
+provisioning vault init
+provisioning vault unseal --key-shares 5 --key-threshold 3
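
The `--key-shares 5 --key-threshold 3` flags describe a Shamir-style k-of-n split: the master key is divided into five shares and any three of them reconstruct it. A minimal sketch of the quorum bookkeeping during unsealing (types and names are illustrative, not the SecretumVault API):

```rust
use std::collections::HashSet;

/// Tracks unseal progress for a vault initialized with a Shamir k-of-n
/// split: `key_threshold` distinct shares must be submitted before the
/// master key can be reconstructed. Illustrative sketch only.
pub struct UnsealProgress {
    key_threshold: usize,
    submitted: HashSet<String>,
}

impl UnsealProgress {
    pub fn new(key_threshold: usize) -> Self {
        Self { key_threshold, submitted: HashSet::new() }
    }

    /// Submit one share; returns true once the quorum is reached.
    /// Duplicate shares do not count toward the threshold.
    pub fn submit(&mut self, share: &str) -> bool {
        self.submitted.insert(share.to_string());
        self.submitted.len() >= self.key_threshold
    }
}
```

With `--key-threshold 3`, the vault stays sealed until a third distinct share arrives, which is why unsealing is typically performed by multiple operators.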
+
+
+// provisioning/core/libs/secretum-client/src/lib.rs
+
+use secretum_vault::{Client, SecretEngine, Auth};
+
+pub struct VaultClient {
+ client: Client,
+}
+
+impl VaultClient {
+ pub async fn new(addr: &str, token: &str) -> Result<Self> {
+ let client = Client::new(addr)
+ .auth(Auth::Token(token))
+ .tls_config(TlsConfig::from_files("ca.pem", "cert.pem", "key.pem"))?
+ .build()?;
+
+ Ok(Self { client })
+ }
+
+ pub async fn get_secret(&self, path: &str) -> Result<Secret> {
+ self.client.kv2().get(path).await
+ }
+
+ pub async fn create_dynamic_db_credentials(&self, role: &str) -> Result<DbCredentials> {
+ self.client.database().generate_credentials(role).await
+ }
+
+ pub async fn sign_ssh_key(&self, public_key: &str, ttl: Duration) -> Result<Certificate> {
+ self.client.ssh().sign_key(public_key, ttl).await
+ }
+}
+
+# Nushell commands via Rust CLI wrapper
+provisioning secrets get database/prod/password
+provisioning secrets set api/keys/stripe --value "sk_live_xyz"
+provisioning secrets rotate database/prod/password
+provisioning secrets lease renew lease_id_12345
+provisioning secrets list database/
+
+
+# provisioning/schemas/database.ncl
+{
+ database = {
+ host = "postgres.example.com",
+ port = 5432,
+ username = secrets.get "database/prod/username",
+ password = secrets.get "database/prod/password",
+ }
+}
+
+# Nickel function: secrets.get resolves to SecretumVault API call
+
+
+// policy: developers can read dev secrets, not prod
+permit(
+ principal in Group::"developers",
+ action == Action::"read",
+ resource in Secret::"database/dev"
+);
+
+forbid(
+ principal in Group::"developers",
+ action == Action::"read",
+ resource in Secret::"database/prod"
+);
+
+// policy: CI/CD can generate dynamic DB credentials
+permit(
+ principal == Service::"github-actions",
+ action == Action::"generate",
+ resource in Secret::"database/dynamic"
+) when {
+ context.ttl <= duration("1h")
+};
+
+
+// Application requests temporary DB credentials
+let creds = vault_client
+ .database()
+ .generate_credentials("postgres-readonly")
+ .await?;
+
+println!("Username: {}", creds.username); // v-app-abcd1234
+println!("Password: {}", creds.password); // random-secure-password
+println!("TTL: {}", creds.lease_duration); // 1h
+
+// Credentials automatically revoked after TTL
+// No manual cleanup needed
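
Because leases expire automatically, long-running clients must renew before the TTL elapses or request fresh credentials. A common heuristic, sketched below, schedules renewal once two-thirds of the lease has passed; the fraction is an assumption for illustration, not a documented SecretumVault default:

```rust
use std::time::Duration;

/// How long to wait before renewing a leased credential. Renewing at
/// two-thirds of the lease leaves headroom for retries before the
/// vault revokes the credential. The 2/3 fraction is a heuristic.
pub fn renewal_deadline(lease: Duration) -> Duration {
    lease * 2 / 3
}
```

For the 1-hour lease in the example above, renewal would be attempted after 40 minutes.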
+
+# secretum-vault config
+[[rotation_policies]]
+path = "database/prod/password"
+schedule = "0 0 * * 0" # Weekly on Sunday midnight
+max_age = "30d"
+
+[[rotation_policies]]
+path = "api/keys/stripe"
+schedule = "0 0 1 * *" # Monthly on 1st
+max_age = "90d"
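
At evaluation time, a `max_age` policy like the ones above reduces to a staleness check on the secret's age. A sketch, with a hypothetical helper for the `"30d"` duration form used in the config:

```rust
use std::time::Duration;

/// True when a secret's age has reached the policy's `max_age`,
/// meaning rotation is due regardless of the cron schedule.
pub fn rotation_overdue(age: Duration, max_age: Duration) -> bool {
    age >= max_age
}

/// Parse the simple "<n>d" day form used by the rotation policies
/// above (e.g. "30d"). Helper name is illustrative.
pub fn parse_days(s: &str) -> Option<Duration> {
    s.strip_suffix('d')?
        .parse::<u64>()
        .ok()
        .map(|days| Duration::from_secs(days * 86_400))
}
```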
+
+
+{
+ "timestamp": "2025-01-08T12:34:56Z",
+ "type": "request",
+ "auth": {
+ "client_token": "sha256:abc123...",
+ "accessor": "hmac:def456...",
+ "display_name": "service-orchestrator",
+ "policies": ["default", "service-policy"]
+ },
+ "request": {
+ "operation": "read",
+ "path": "secret/data/database/prod/password",
+ "remote_address": "10.0.1.5"
+ },
+ "response": {
+ "status": 200
+ },
+ "cedar_policy": {
+ "decision": "permit",
+ "policy_id": "allow-orchestrator-read-secrets"
+ }
+}
+
+
+Unit Tests:
+#[tokio::test]
+async fn test_get_secret() {
+ let vault = mock_vault_client();
+ let secret = vault.get_secret("test/secret").await.unwrap();
+ assert_eq!(secret.value, "expected-value");
+}
+
+#[tokio::test]
+async fn test_dynamic_credentials_generation() {
+ let vault = mock_vault_client();
+ let creds = vault.create_dynamic_db_credentials("postgres-readonly").await.unwrap();
+ assert!(creds.username.starts_with("v-"));
+ assert_eq!(creds.lease_duration, Duration::from_secs(3600));
+}
+Integration Tests:
+# Test vault deployment
+provisioning deploy secretum-vault --test-mode
+provisioning vault init
+provisioning vault unseal
+
+# Test secret operations
+provisioning secrets set test/secret --value "test-value"
+provisioning secrets get test/secret | assert "test-value"
+
+# Test dynamic credentials
+provisioning secrets db-creds postgres-readonly | jq '.username' | assert-contains "v-"
+
+# Test rotation
+provisioning secrets rotate test/secret
+
+Security Tests:
+#[tokio::test]
+async fn test_unauthorized_access_denied() {
+ let vault = vault_client_with_limited_token();
+ let result = vault.get_secret("database/prod/password").await;
+ assert!(matches!(result, Err(VaultError::PermissionDenied)));
+}
+
+Provisioning Config:
+# provisioning/config/config.defaults.toml
+[secrets]
+provider = "secretum-vault" # "secretum-vault" | "sops" | "env"
+vault_addr = "https://vault.example.com:8200"
+vault_namespace = "provisioning"
+vault_mount = "secret"
+
+[secrets.tls]
+ca_cert = "/etc/provisioning/vault-ca.pem"
+client_cert = "/etc/provisioning/vault-client.pem"
+client_key = "/etc/provisioning/vault-client-key.pem"
+
+[secrets.cache]
+enabled = true
+ttl = "5m"
+max_size = "100MB"
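
The `[secrets.cache]` settings above imply a TTL-bounded lookup between the client and the vault. A std-only sketch of that behavior (the real client would also enforce `max_size` and invalidate entries on rotation events):

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Minimal TTL cache for secret lookups: entries expire after `ttl`,
/// forcing a fresh read from the vault. Illustrative sketch only.
pub struct SecretCache {
    ttl: Duration,
    entries: HashMap<String, (String, Instant)>,
}

impl SecretCache {
    pub fn new(ttl: Duration) -> Self {
        Self { ttl, entries: HashMap::new() }
    }

    pub fn put(&mut self, path: &str, value: String) {
        self.entries.insert(path.to_string(), (value, Instant::now()));
    }

    /// Returns the cached value unless its TTL has elapsed.
    pub fn get(&self, path: &str) -> Option<&str> {
        self.entries.get(path).and_then(|(value, stored_at)| {
            (stored_at.elapsed() < self.ttl).then_some(value.as_str())
        })
    }
}
```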
+
+Environment Variables:
+export VAULT_ADDR="https://vault.example.com:8200"
+export VAULT_TOKEN="s.abc123def456..."
+export VAULT_NAMESPACE="provisioning"
+export VAULT_CACERT="/etc/provisioning/vault-ca.pem"
+
+
+Phase 1: Deploy SecretumVault
+
+Deploy vault cluster in HA mode
+Initialize and configure backends
+Set up Cedar policies
+
+Phase 2: Migrate Static Secrets
+
+Import SOPS secrets into vault KV store
+Update Nickel configs to reference vault paths
+Verify secret access via new API
+
+Phase 3: Enable Dynamic Secrets
+
+Configure database secret engine
+Configure SSH CA secret engine
+Update applications to use dynamic credentials
+
+Phase 4: Deprecate SOPS for Runtime
+
+SOPS remains for gitops config files
+Runtime secrets exclusively from vault
+Audit trail enforcement
+
+Phase 5: Automation
+
+Automatic rotation policies
+Lease renewal automation
+Monitoring and alerting
+
+
+User Guides:
+
+docs/user/secrets-management.md - Using SecretumVault
+docs/user/dynamic-credentials.md - Dynamic secret workflows
+docs/user/secret-rotation.md - Rotation policies and procedures
+
+Operations Documentation:
+
+docs/operations/vault-deployment.md - Deploying and configuring vault
+docs/operations/vault-backup-restore.md - Backup and disaster recovery
+docs/operations/vault-monitoring.md - Metrics, logs, alerts
+
+Developer Documentation:
+
+docs/development/secrets-api.md - Rust client library usage
+docs/development/cedar-secret-policies.md - Writing Cedar policies for secrets
+Secret engine development guide
+
+Security Documentation:
+
+docs/security/secrets-architecture.md - Security architecture overview
+docs/security/audit-logging.md - Audit trail and compliance
+Threat model and risk assessment
+
+
+
+
+Status: Accepted
+Last Updated: 2025-01-08
+Implementation: Planned
+Priority: High (Security and compliance)
+Estimated Complexity: Complex
+
+
+Accepted - 2025-01-08
+
+The provisioning platform has evolved to include complex workflows for infrastructure configuration, deployment, and management.
+Current interaction patterns require deep technical knowledge of Nickel schemas, cloud provider APIs, networking concepts, and security best practices.
+This creates barriers to entry and slows down infrastructure provisioning for operators who are not infrastructure experts.
+
+Current state challenges:
+
+
+Knowledge Barrier: Deep Nickel, cloud, and networking expertise required
+
+Understanding Nickel type system and contracts
+Knowing cloud provider resource relationships
+Configuring security policies correctly
+Debugging deployment failures
+
+
+
+Manual Configuration: All configs hand-written
+
+Repetitive boilerplate for common patterns
+Easy to make mistakes (typos, missing fields)
+No intelligent suggestions or autocomplete
+Trial-and-error debugging
+
+
+
+Limited Assistance: No contextual help
+
+Documentation is separate from workflow
+No explanation of validation errors
+No suggestions for fixing issues
+No learning from past deployments
+
+
+
+Troubleshooting Difficulty: Manual log analysis
+
+Deployment failures require expert analysis
+No automated root cause detection
+No suggested fixes based on similar issues
+Long time-to-resolution
+
+
+
+
+
+
+Natural Language to Configuration:
+
+User: “Create a production PostgreSQL cluster with encryption and daily backups”
+AI: Generates validated Nickel configuration
+
+
+
+AI-Assisted Form Filling:
+
+User starts typing in typdialog web form
+AI suggests values based on context
+AI explains validation errors in plain language
+
+
+
+Intelligent Troubleshooting:
+
+Deployment fails
+AI analyzes logs and suggests fixes
+AI generates corrected configuration
+
+
+
+Configuration Optimization:
+
+AI analyzes workload patterns
+AI suggests performance improvements
+AI detects security misconfigurations
+
+
+
+Learning from Operations:
+
+AI indexes past deployments
+AI suggests configurations based on similar workloads
+AI predicts potential issues
+
+
+
+
+The system integrates multiple AI components:
+
+typdialog-ai: AI-assisted form interactions
+typdialog-ag: AI agents for autonomous operations
+typdialog-prov-gen: AI-powered configuration generation
+platform/crates/ai-service: Core AI service backend
+platform/crates/mcp-server: Model Context Protocol server
+platform/crates/rag: Retrieval-Augmented Generation system
+
+
+
+✅ Natural Language Understanding: Parse user intent from free-form text
+✅ Schema-Aware Generation: Generate valid Nickel configurations
+✅ Context Retrieval: Access documentation, schemas, past deployments
+✅ Security Enforcement: Cedar policies control AI access
+✅ Human-in-the-Loop: All AI actions require human approval
+✅ Audit Trail: Complete logging of AI operations
+✅ Multi-Provider Support: OpenAI, Anthropic, local models
+✅ Cost Control: Rate limiting and budget management
+✅ Observability: Trace AI decisions and reasoning
+
+
+Integrate a comprehensive AI system consisting of:
+
+AI-Assisted Interfaces (typdialog-ai)
+Autonomous AI Agents (typdialog-ag)
+AI Configuration Generator (typdialog-prov-gen)
+Core AI Infrastructure (ai-service, mcp-server, rag)
+
+All AI components are schema-aware, security-enforced, and human-supervised.
+
+┌─────────────────────────────────────────────────────────────────┐
+│ User Interfaces │
+│ │
+│ Natural Language: "Create production K8s cluster in AWS" │
+│ Typdialog Forms: AI-assisted field suggestions │
+│ CLI: provisioning ai generate-config "description" │
+└────────────┬────────────────────────────────────────────────────┘
+ │
+ ▼
+┌─────────────────────────────────────────────────────────────────┐
+│ AI Frontend Layer │
+│ ┌───────────────────────────────────────────────────────┐ │
+│ │ typdialog-ai (AI-Assisted Forms) │ │
+│ │ - Natural language form filling │ │
+│ │ - Real-time AI suggestions │ │
+│ │ - Validation error explanations │ │
+│ │ - Context-aware autocomplete │ │
+│ ├───────────────────────────────────────────────────────┤ │
+│ │ typdialog-ag (AI Agents) │ │
+│ │ - Autonomous task execution │ │
+│ │ - Multi-step workflow automation │ │
+│ │ - Learning from feedback │ │
+│ │ - Agent collaboration │ │
+│ ├───────────────────────────────────────────────────────┤ │
+│ │ typdialog-prov-gen (Config Generator) │ │
+│ │ - Natural language → Nickel config │ │
+│ │ - Template-based generation │ │
+│ │ - Best practice injection │ │
+│ │ - Validation and refinement │ │
+│ └───────────────────────────────────────────────────────┘ │
+└────────────┬────────────────────────────────────────────────────┘
+ │
+ ▼
+┌────────────────────────────────────────────────────────────────┐
+│ Core AI Infrastructure (platform/crates/) │
+│ ┌───────────────────────────────────────────────────────┐ │
+│ │ ai-service (Central AI Service) │ │
+│ │ │ │
+│ │ - Request routing and orchestration │ │
+│ │ - Authentication and authorization (Cedar) │ │
+│ │ - Rate limiting and cost control │ │
+│ │ - Caching and optimization │ │
+│ │ - Audit logging and observability │ │
+│ │ - Multi-provider abstraction │ │
+│ └─────────────┬─────────────────────┬───────────────────┘ │
+│ │ │ │
+│ ▼ ▼ │
+│ ┌─────────────────────┐ ┌─────────────────────┐ │
+│ │ mcp-server │ │ rag │ │
+│ │ (Model Context │ │ (Retrieval-Aug Gen) │ │
+│ │ Protocol) │ │ │ │
+│ │ │ │ ┌─────────────────┐ │ │
+│ │ - LLM integration │ │ │ Vector Store │ │ │
+│ │ - Tool calling │ │ │ (Qdrant/Milvus) │ │ │
+│ │ - Context mgmt │ │ └─────────────────┘ │ │
+│ │ - Multi-provider │ │ ┌─────────────────┐ │ │
+│ │ (OpenAI, │ │ │ Embeddings │ │ │
+│ │ Anthropic, │ │ │ (text-embed) │ │ │
+│ │ Local models) │ │ └─────────────────┘ │ │
+│ │ │ │ ┌─────────────────┐ │ │
+│ │ Tools: │ │ │ Index: │ │ │
+│ │ - nickel_validate │ │ │ - Nickel schemas│ │ │
+│ │ - schema_query │ │ │ - Documentation │ │ │
+│ │ - config_generate │ │ │ - Past deploys │ │ │
+│ │ - cedar_check │ │ │ - Best practices│ │ │
+│ └─────────────────────┘ │ └─────────────────┘ │ │
+│ │ │ │
+│ │ Query: "How to │ │
+│ │ configure Postgres │ │
+│ │ with encryption?" │ │
+│ │ │ │
+│ │ Retrieval: Relevant │ │
+│ │ docs + examples │ │
+│ └─────────────────────┘ │
+└────────────┬───────────────────────────────────────────────────┘
+ │
+ ▼
+┌─────────────────────────────────────────────────────────────────┐
+│ Integration Points │
+│ │
+│ ┌─────────────┐ ┌──────────────┐ ┌─────────────────────┐ │
+│ │ Nickel │ │ SecretumVault│ │ Cedar Authorization │ │
+│ │ Validation │ │ (Secrets) │ │ (AI Policies) │ │
+│ └─────────────┘ └──────────────┘ └─────────────────────┘ │
+│ │
+│ ┌─────────────┐ ┌──────────────┐ ┌─────────────────────┐ │
+│ │ Orchestrator│ │ Typdialog │ │ Audit Logging │ │
+│ │ (Deploy) │ │ (Forms) │ │ (All AI Ops) │ │
+│ └─────────────┘ └──────────────┘ └─────────────────────┘ │
+└─────────────────────────────────────────────────────────────────┘
+ │
+ ▼
+┌─────────────────────────────────────────────────────────────────┐
+│ Output: Validated Nickel Configuration │
+│ │
+│ ✅ Schema-validated │
+│ ✅ Security-checked (Cedar policies) │
+│ ✅ Human-approved │
+│ ✅ Audit-logged │
+│ ✅ Ready for deployment │
+└─────────────────────────────────────────────────────────────────┘
+
+
+typdialog-ai (AI-Assisted Forms):
+
+Real-time form field suggestions based on context
+Natural language form filling
+Validation error explanations in plain English
+Context-aware autocomplete for configuration values
+Integration with typdialog web UI
+
+typdialog-ag (AI Agents):
+
+Autonomous task execution (multi-step workflows)
+Agent collaboration (multiple agents working together)
+Learning from user feedback and past operations
+Goal-oriented behavior (achieve outcome, not just execute steps)
+Safety boundaries (cannot deploy without approval)
+
+typdialog-prov-gen (Config Generator):
+
+Natural language → Nickel configuration
+Template-based generation with customization
+Best practice injection (security, performance, HA)
+Iterative refinement based on validation feedback
+Integration with Nickel schema system
+
+ai-service (Core AI Service):
+
+Central request router for all AI operations
+Authentication and authorization (Cedar policies)
+Rate limiting and cost control
+Caching (reduce LLM API calls)
+Audit logging (all AI operations)
+Multi-provider abstraction (OpenAI, Anthropic, local)
+
+mcp-server (Model Context Protocol):
+
+LLM integration (OpenAI, Anthropic, local models)
+Tool calling framework (nickel_validate, schema_query, etc.)
+Context management (conversation history, schemas)
+Streaming responses for real-time feedback
+Error handling and retries
+
+rag (Retrieval-Augmented Generation):
+
+Vector store (Qdrant/Milvus) for embeddings
+Document indexing (Nickel schemas, docs, deployments)
+Semantic search (find relevant context)
+Embedding generation (text-embedding-3-large)
+Query expansion and reranking
+
+
+
+Aspect | Manual Config | AI-Assisted (chosen)
+Learning Curve | 🔴 Steep | 🟢 Gentle
+Time to Deploy | 🔴 Hours | 🟢 Minutes
+Error Rate | 🔴 High | 🟢 Low (validated)
+Documentation Access | 🔴 Separate | 🟢 Contextual
+Troubleshooting | 🔴 Manual | 🟢 AI-assisted
+Best Practices | ⚠️ Manual enforcement | ✅ Auto-injected
+Consistency | ⚠️ Varies by operator | ✅ Standardized
+Scalability | 🔴 Limited by expertise | 🟢 AI scales knowledge
+
+
+
+Traditional AI code generation fails for infrastructure because:
+Generic AI (like GitHub Copilot):
+❌ Generates syntactically correct but semantically wrong configs
+❌ Doesn't understand cloud provider constraints
+❌ No validation against schemas
+❌ No security policy enforcement
+❌ Hallucinated resource names/IDs
+
+Schema-aware AI (our approach):
+# Nickel schema provides ground truth
+{
+ Database = {
+ engine | [| 'postgres, 'mysql, 'mongodb |],
+ version | String,
+ storage_gb | Number,
+ backup_retention_days | Number,
+ }
+}
+
+# AI generates ONLY valid configs
+# AI knows:
+# - Valid engine values ('postgres', not 'postgresql')
+# - Required fields (all listed above)
+# - Type constraints (storage_gb is Number, not String)
+# - Nickel contracts (if defined)
+
+Result: AI cannot generate invalid configs.
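
The guarantee that schema-aware generation cannot emit an invalid engine value can be made concrete by parsing into a closed enum, so `'postgresql'` is rejected instead of silently accepted. A sketch of the idea, not the actual validator:

```rust
/// Closed set of engines mirroring the Database schema's
/// `[| 'postgres, 'mysql, 'mongodb |]` enum: anything outside the set
/// is a validation error, never an accepted string.
#[derive(Debug, PartialEq)]
pub enum Engine {
    Postgres,
    Mysql,
    Mongodb,
}

pub fn parse_engine(s: &str) -> Result<Engine, String> {
    match s {
        "postgres" => Ok(Engine::Postgres),
        "mysql" => Ok(Engine::Mysql),
        "mongodb" => Ok(Engine::Mongodb),
        other => Err(format!(
            "invalid engine '{other}': expected postgres | mysql | mongodb"
        )),
    }
}
```

The hallucination case from the text ("postgresql" instead of "postgres") fails at parse time rather than at deploy time.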
+
+LLMs alone have limitations:
+Pure LLM:
+❌ Knowledge cutoff (no recent updates)
+❌ Hallucinations (invents plausible-sounding configs)
+❌ No project-specific knowledge
+❌ No access to past deployments
+
+RAG-enhanced LLM:
+Query: "How to configure Postgres with encryption?"
+
+RAG retrieves:
+- Nickel schema: provisioning/schemas/database.ncl
+- Documentation: docs/user/database-encryption.md
+- Past deployment: workspaces/prod/postgres-encrypted.ncl
+- Best practice: .claude/patterns/secure-database.md
+
+LLM generates answer WITH retrieved context:
+✅ Accurate (based on actual schemas)
+✅ Project-specific (uses our patterns)
+✅ Proven (learned from past deployments)
+✅ Secure (follows our security guidelines)
+
+
+AI-generated infrastructure configs require human approval:
+// All AI operations require approval (illustrative sketch)
+pub async fn ai_generate_config(request: GenerateRequest) -> Result<Config> {
+    let ai_generated = ai_service.generate(request).await?;
+
+    // Validate against Nickel schema
+    let validation = nickel_validate(&ai_generated)?;
+    if !validation.is_valid() {
+        return Err(AIError::InvalidGeneration(validation.errors).into());
+    }
+
+    // Check Cedar policies: may this user approve AI-generated configs?
+    let authorized = cedar_authorize(&current_user(), "approve_ai_config", &ai_generated)?;
+    if !authorized {
+        return Err(AIError::Unauthorized("approve_ai_config").into());
+    }
+
+    // Require explicit human approval
+    let approval = prompt_user_approval(&ai_generated).await?;
+    if !approval.approved {
+        audit_log("AI config rejected by user", &ai_generated);
+        return Err(AIError::RejectedByUser.into());
+    }
+
+    audit_log("AI config approved by user", &ai_generated);
+    Ok(ai_generated)
+}
+Why:
+
+Infrastructure changes have real-world cost and security impact
+AI can make mistakes (hallucinations, misunderstandings)
+Compliance requires human accountability
+Learning opportunity (human reviews teach AI)
+
+
+No single LLM provider is best for all tasks:
+Provider | Best For | Considerations
+Anthropic (Claude) | Long context, accuracy | ✅ Best for complex configs
+OpenAI (GPT-4) | Tool calling, speed | ✅ Best for quick suggestions
+Local (Llama, Mistral) | Privacy, cost | ✅ Best for air-gapped envs
+
+
+Strategy:
+
+Complex config generation → Claude (long context)
+Real-time form suggestions → GPT-4 (fast)
+Air-gapped deployments → Local models (privacy)
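
The routing strategy above amounts to a small dispatch table; provider and task names here are illustrative placeholders, not the ai-service API:

```rust
/// Target provider for a given task, per the strategy above.
#[derive(Debug, PartialEq)]
pub enum Provider {
    Claude,
    Gpt4,
    Local,
}

pub enum Task {
    ComplexConfigGeneration,
    RealtimeSuggestion,
    AirGapped,
}

pub fn route(task: &Task) -> Provider {
    match task {
        Task::ComplexConfigGeneration => Provider::Claude, // long context
        Task::RealtimeSuggestion => Provider::Gpt4,        // low latency
        Task::AirGapped => Provider::Local,                // privacy
    }
}
```

Keeping the routing in one place means new providers or task classes only touch this table, not every call site.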
+
+
+
+
+Accessibility: Non-experts can provision infrastructure
+Productivity: 10x faster configuration creation
+Quality: AI injects best practices automatically
+Consistency: Standardized configurations across teams
+Learning: Users learn from AI explanations
+Troubleshooting: AI-assisted debugging reduces MTTR
+Documentation: Contextual help embedded in workflow
+Safety: Schema validation prevents invalid configs
+Security: Cedar policies control AI access
+Auditability: Complete trail of AI operations
+
+
+
+Dependency: Requires LLM API access (or local models)
+Cost: LLM API calls have per-token cost
+Latency: AI responses take 1-5 seconds
+Accuracy: AI can still make mistakes (needs validation)
+Trust: Users must understand AI limitations
+Complexity: Additional infrastructure to operate
+Privacy: Configs sent to LLM providers (unless local)
+
+
+Cost Control:
+[ai.rate_limiting]
+requests_per_minute = 60
+tokens_per_day = 1000000
+cost_limit_per_day = "100.00" # USD
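
A `requests_per_minute` limit like the one above can be enforced with a fixed-window counter. A sketch only; the real service also tracks per-user token and cost budgets:

```rust
use std::time::{Duration, Instant};

/// Fixed-window rate limiter: admits at most `limit` requests per
/// `window`, then rejects until the window rolls over.
pub struct RateLimiter {
    limit: u32,
    window: Duration,
    window_start: Instant,
    count: u32,
}

impl RateLimiter {
    pub fn new(limit: u32, window: Duration) -> Self {
        Self { limit, window, window_start: Instant::now(), count: 0 }
    }

    /// Returns true if the request is admitted under the current window.
    pub fn check(&mut self) -> bool {
        if self.window_start.elapsed() >= self.window {
            // Window expired: start a fresh one.
            self.window_start = Instant::now();
            self.count = 0;
        }
        if self.count < self.limit {
            self.count += 1;
            true
        } else {
            false
        }
    }
}
```

A sliding-window or token-bucket variant smooths out bursts at window boundaries; the fixed window is the simplest form of the idea.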
+
+[ai.caching]
+enabled = true
+ttl = "1h"
+# Cache similar queries to reduce API calls
+
+Latency Optimization:
+// Streaming responses for real-time feedback
+pub async fn ai_generate_stream(request: GenerateRequest) -> impl Stream<Item = String> {
+ ai_service
+ .generate_stream(request)
+ .await
+ .map(|chunk| chunk.text)
+}
+Privacy (Local Models):
+[ai]
+provider = "local"
+model_path = "/opt/provisioning/models/llama-3-70b"
+
+# No data leaves the network
+
+Validation (Defense in Depth):
+AI generates config
+ ↓
+Nickel schema validation (syntax, types, contracts)
+ ↓
+Cedar policy check (security, compliance)
+ ↓
+Human approval (final gate)
+ ↓
+Deployment
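
The pipeline above reads as sequential gates where any failed stage stops the deployment. Sketched with boolean stand-ins for the real Nickel, Cedar, and approval integrations:

```rust
/// Defense-in-depth gates, in order: schema validation, policy check,
/// human approval. The first failure short-circuits the deployment.
/// Boolean inputs stand in for the real integration calls.
pub fn validate_pipeline(
    schema_ok: bool,
    policy_ok: bool,
    human_approved: bool,
) -> Result<&'static str, &'static str> {
    if !schema_ok {
        return Err("rejected: Nickel schema validation failed");
    }
    if !policy_ok {
        return Err("rejected: Cedar policy check failed");
    }
    if !human_approved {
        return Err("rejected: human approval withheld");
    }
    Ok("deploy")
}
```

Ordering matters: the cheap, deterministic checks run first, so a human is only asked to review configs that are already valid and policy-compliant.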
+
+Observability:
+[ai.observability]
+trace_all_requests = true
+store_conversations = true
+conversation_retention = "30d"
+
+# Every AI operation logged:
+# - Input prompt
+# - Retrieved context (RAG)
+# - Generated output
+# - Validation results
+# - Human approval decision
+
+
+
+Pros: Simpler, no LLM dependencies
+Cons: Steep learning curve, slow provisioning, manual troubleshooting
+Decision: REJECTED - Poor user experience (10x slower provisioning, high error rate)
+
+Pros: Existing tools, well-known UX
+Cons: Not schema-aware, generates invalid configs, no validation
+Decision: REJECTED - Inadequate for infrastructure (correctness critical)
+
+Pros: Lower risk (AI doesn’t generate configs)
+Cons: Missed opportunity for 10x productivity gains
+Decision: REJECTED - Too conservative
+
+Pros: Maximum automation
+Cons: Unacceptable risk for infrastructure changes
+Decision: REJECTED - Safety and compliance requirements
+
+Pros: Simpler integration
+Cons: Vendor lock-in, no flexibility for different use cases
+Decision: REJECTED - Multi-provider abstraction provides flexibility
+
+
+// platform/crates/ai-service/src/lib.rs
+
+#[async_trait]
+pub trait AIService {
+ async fn generate_config(
+ &self,
+ prompt: &str,
+ schema: &NickelSchema,
+ context: Option<RAGContext>,
+ ) -> Result<GeneratedConfig>;
+
+ async fn suggest_field_value(
+ &self,
+ field: &FieldDefinition,
+ partial_input: &str,
+ form_context: &FormContext,
+ ) -> Result<Vec<Suggestion>>;
+
+ async fn explain_validation_error(
+ &self,
+ error: &ValidationError,
+ config: &Config,
+ ) -> Result<Explanation>;
+
+ async fn troubleshoot_deployment(
+ &self,
+ deployment_id: &str,
+ logs: &DeploymentLogs,
+ ) -> Result<TroubleshootingReport>;
+}
+
+pub struct AIServiceImpl {
+ mcp_client: MCPClient,
+ rag: RAGService,
+ cedar: CedarEngine,
+ audit: AuditLogger,
+ rate_limiter: RateLimiter,
+ cache: Cache,
+}
+
+impl AIService for AIServiceImpl {
+ async fn generate_config(
+ &self,
+ prompt: &str,
+ schema: &NickelSchema,
+ context: Option<RAGContext>,
+ ) -> Result<GeneratedConfig> {
+        // Check authorization (positional args; Rust has no named arguments)
+        self.cedar
+            .authorize(current_user(), "ai:generate_config", schema)?;
+
+ // Rate limiting
+ self.rate_limiter.check(current_user()).await?;
+
+ // Retrieve relevant context via RAG
+ let rag_context = match context {
+ Some(ctx) => ctx,
+ None => self.rag.retrieve(prompt, schema).await?,
+ };
+
+        // Generate config via MCP
+        let generated = self.mcp_client
+            .generate(prompt, schema, &rag_context, &["nickel_validate", "schema_query"])
+            .await?;
+
+ // Validate generated config
+ let validation = nickel_validate(&generated.config)?;
+ if !validation.is_valid() {
+ return Err(AIError::InvalidGeneration(validation.errors));
+ }
+
+ // Audit log
+ self.audit.log(AIOperation::GenerateConfig {
+ user: current_user(),
+ prompt: prompt,
+ schema: schema.name(),
+ generated: &generated.config,
+ validation: validation,
+ });
+
+ Ok(GeneratedConfig {
+ config: generated.config,
+ explanation: generated.explanation,
+ confidence: generated.confidence,
+ validation: validation,
+ })
+ }
+}
+
+// platform/crates/mcp-server/src/lib.rs
+
+pub struct MCPClient {
+ provider: Box<dyn LLMProvider>,
+ tools: ToolRegistry,
+}
+
+#[async_trait]
+pub trait LLMProvider {
+ async fn generate(&self, request: GenerateRequest) -> Result<GenerateResponse>;
+ async fn generate_stream(&self, request: GenerateRequest) -> Result<impl Stream<Item = String>>;
+}
+
+// Tool definitions for LLM
+pub struct ToolRegistry {
+ tools: HashMap<String, Tool>,
+}
+
+impl ToolRegistry {
+ pub fn new() -> Self {
+ let mut tools = HashMap::new();
+
+        tools.insert("nickel_validate".to_string(), Tool {
+ name: "nickel_validate",
+ description: "Validate Nickel configuration against schema",
+ parameters: json!({
+ "type": "object",
+ "properties": {
+ "config": {"type": "string"},
+ "schema_path": {"type": "string"},
+ },
+ "required": ["config", "schema_path"],
+ }),
+ handler: Box::new(|params| async {
+ let config = params["config"].as_str().unwrap();
+ let schema = params["schema_path"].as_str().unwrap();
+ nickel_validate_tool(config, schema).await
+ }),
+ });
+
+        tools.insert("schema_query".to_string(), Tool {
+ name: "schema_query",
+ description: "Query Nickel schema for field information",
+ parameters: json!({
+ "type": "object",
+ "properties": {
+ "schema_path": {"type": "string"},
+ "query": {"type": "string"},
+ },
+ "required": ["schema_path"],
+ }),
+ handler: Box::new(|params| async {
+ let schema = params["schema_path"].as_str().unwrap();
+ let query = params.get("query").and_then(|v| v.as_str());
+ schema_query_tool(schema, query).await
+ }),
+ });
+
+ Self { tools }
+ }
+}
+
+// platform/crates/rag/src/lib.rs
+
+pub struct RAGService {
+ vector_store: Box<dyn VectorStore>,
+ embeddings: EmbeddingModel,
+ indexer: DocumentIndexer,
+}
+
+impl RAGService {
+ pub async fn index_all(&self) -> Result<()> {
+ // Index Nickel schemas
+ self.index_schemas("provisioning/schemas").await?;
+
+ // Index documentation
+ self.index_docs("docs").await?;
+
+ // Index past deployments
+ self.index_deployments("workspaces").await?;
+
+ // Index best practices
+ self.index_patterns(".claude/patterns").await?;
+
+ Ok(())
+ }
+
+ pub async fn retrieve(
+ &self,
+ query: &str,
+ schema: &NickelSchema,
+ ) -> Result<RAGContext> {
+ // Generate query embedding
+ let query_embedding = self.embeddings.embed(query).await?;
+
+        // Search vector store (matches the VectorStore trait signature)
+        let filter = Some(json!({ "schema": schema.name() }));
+        let results = self.vector_store.search(query_embedding, 10, filter).await?;
+
+ // Rerank results
+ let reranked = self.rerank(query, results).await?;
+
+ // Build context
+ Ok(RAGContext {
+ query: query.to_string(),
+ schema_definition: schema.to_string(),
+ relevant_docs: reranked.iter()
+ .take(5)
+ .map(|r| r.content.clone())
+ .collect(),
+ similar_configs: self.find_similar_configs(schema).await?,
+ best_practices: self.find_best_practices(schema).await?,
+ })
+ }
+}
+
+#[async_trait]
+pub trait VectorStore {
+ async fn insert(&self, id: &str, embedding: Vec<f32>, metadata: Value) -> Result<()>;
+ async fn search(&self, embedding: Vec<f32>, top_k: usize, filter: Option<Value>) -> Result<Vec<SearchResult>>;
+}
+
+// Qdrant implementation
+pub struct QdrantStore {
+ client: qdrant::QdrantClient,
+ collection: String,
+}
+
+// typdialog-ai/src/form_assistant.rs
+
+pub struct FormAssistant {
+ ai_service: Arc<AIService>,
+}
+
+impl FormAssistant {
+ pub async fn suggest_field_value(
+ &self,
+ field: &FieldDefinition,
+ partial_input: &str,
+ form_context: &FormContext,
+ ) -> Result<Vec<Suggestion>> {
+ self.ai_service.suggest_field_value(
+ field,
+ partial_input,
+ form_context,
+ ).await
+ }
+
+ pub async fn explain_error(
+ &self,
+ error: &ValidationError,
+ field_value: &str,
+ ) -> Result<String> {
+ let explanation = self.ai_service.explain_validation_error(
+ error,
+ field_value,
+ ).await?;
+
+ Ok(format!(
+ "Error: {}\n\nExplanation: {}\n\nSuggested fix: {}",
+ error.message,
+ explanation.plain_english,
+ explanation.suggested_fix,
+ ))
+ }
+
+ pub async fn fill_from_natural_language(
+ &self,
+ description: &str,
+ form_schema: &FormSchema,
+ ) -> Result<HashMap<String, Value>> {
+ let prompt = format!(
+ "User wants to: {}\n\nForm schema: {}\n\nGenerate field values:",
+ description,
+ serde_json::to_string_pretty(form_schema)?,
+ );
+
+ let generated = self.ai_service.generate_config(
+ &prompt,
+ &form_schema.nickel_schema,
+ None,
+ ).await?;
+
+ Ok(generated.field_values)
+ }
+}
+
+// typdialog-ag/src/agent.rs
+
+pub struct ProvisioningAgent {
+ ai_service: Arc<AIService>,
+ orchestrator: Arc<OrchestratorClient>,
+ max_iterations: usize,
+}
+
+impl ProvisioningAgent {
+ pub async fn execute_goal(&self, goal: &str) -> Result<AgentResult> {
+ let mut state = AgentState::new(goal);
+
+ for iteration in 0..self.max_iterations {
+ // AI determines next action
+ let action = self.ai_service.agent_next_action(&state).await?;
+
+ // Execute action (with human approval for critical operations)
+ let result = self.execute_action(&action, &state).await?;
+
+ // Update state
+ state.update(action, result);
+
+ // Check if goal achieved
+ if state.goal_achieved() {
+ return Ok(AgentResult::Success(state));
+ }
+ }
+
+ Err(AgentError::MaxIterationsReached)
+ }
+
+ async fn execute_action(
+ &self,
+ action: &AgentAction,
+ state: &AgentState,
+ ) -> Result<ActionResult> {
+ match action {
+ AgentAction::GenerateConfig { description } => {
+ let config = self.ai_service.generate_config(
+ description,
+ &state.target_schema,
+ Some(state.context.clone()),
+ ).await?;
+
+ Ok(ActionResult::ConfigGenerated(config))
+ },
+
+ AgentAction::Deploy { config } => {
+ // Require human approval for deployment
+ let approval = prompt_user_approval(
+ "Agent wants to deploy. Approve?",
+ config,
+ ).await?;
+
+ if !approval.approved {
+ return Ok(ActionResult::DeploymentRejected);
+ }
+
+ let deployment = self.orchestrator.deploy(config).await?;
+ Ok(ActionResult::Deployed(deployment))
+ },
+
+ AgentAction::Troubleshoot { deployment_id } => {
+ let report = self.ai_service.troubleshoot_deployment(
+ deployment_id,
+ &self.orchestrator.get_logs(deployment_id).await?,
+ ).await?;
+
+ Ok(ActionResult::TroubleshootingReport(report))
+ },
+ }
+ }
+}
+
+// AI cannot access secrets without explicit permission
+forbid(
+ principal == Service::"ai-service",
+ action == Action::"read",
+ resource in Secret::"*"
+);
+
+// AI can generate configs for non-production environments without approval
+permit(
+ principal == Service::"ai-service",
+ action == Action::"generate_config",
+ resource in Schema::"*"
+) when {
+ resource.environment in ["dev", "staging"]
+};
+
+// AI config generation for production requires senior engineer approval
+permit(
+ principal in Group::"senior-engineers",
+ action == Action::"approve_ai_config",
+ resource in Config::"*"
+) when {
+ resource.environment == "production" &&
+ resource.generated_by == "ai-service"
+};
+
+// AI agents cannot deploy without human approval
+forbid(
+ principal == Service::"ai-agent",
+ action == Action::"deploy",
+ resource == Infrastructure::"*"
+) unless {
+ context.human_approved == true
+};
+
+
+Unit Tests:
+#[tokio::test]
+async fn test_ai_config_generation_validates() {
+ let ai_service = mock_ai_service();
+
+ let generated = ai_service.generate_config(
+ "Create a PostgreSQL database with encryption",
+ &postgres_schema(),
+ None,
+ ).await.unwrap();
+
+ // Must validate against schema
+ assert!(generated.validation.is_valid());
+ assert_eq!(generated.config["engine"], "postgres");
+ assert_eq!(generated.config["encryption_enabled"], true);
+}
+
+#[tokio::test]
+async fn test_ai_cannot_access_secrets() {
+ let ai_service = ai_service_with_cedar();
+
+ let result = ai_service.get_secret("database/password").await;
+
+ assert!(result.is_err());
+ assert_eq!(result.unwrap_err(), AIError::PermissionDenied);
+}
+Integration Tests:
+#[tokio::test]
+async fn test_end_to_end_ai_config_generation() {
+ // User provides natural language
+ let description = "Create a production Kubernetes cluster in AWS with 5 nodes";
+
+ // AI generates config (generate_config takes the description, a target schema,
+ // and optional context; cluster_schema() is an illustrative helper like postgres_schema() above)
+ let generated = ai_service.generate_config(description, &cluster_schema(), None).await.unwrap();
+
+ // Nickel validation
+ let validation = nickel_validate(&generated.config).await.unwrap();
+ assert!(validation.is_valid());
+
+ // Human approval
+ let approval = Approval {
+ user: "senior-engineer@example.com",
+ approved: true,
+ timestamp: Utc::now(),
+ };
+
+ // Deploy
+ let deployment = orchestrator.deploy_with_approval(
+ generated.config,
+ approval,
+ ).await.unwrap();
+
+ assert_eq!(deployment.status, DeploymentStatus::Success);
+}
+RAG Quality Tests:
+#[tokio::test]
+async fn test_rag_retrieval_accuracy() {
+ let rag = rag_service();
+
+ // Index test documents
+ rag.index_all().await.unwrap();
+
+ // Query
+ let context = rag.retrieve(
+ "How to configure PostgreSQL with encryption?",
+ &postgres_schema(),
+ ).await.unwrap();
+
+ // Should retrieve relevant docs
+ assert!(context.relevant_docs.iter().any(|doc| {
+ doc.contains("encryption") && doc.contains("postgres")
+ }));
+
+ // Should retrieve similar configs
+ assert!(!context.similar_configs.is_empty());
+}
+
+AI Access Control:
+AI Service Permissions (enforced by Cedar):
+✅ CAN: Read Nickel schemas
+✅ CAN: Generate configurations
+✅ CAN: Query documentation
+✅ CAN: Analyze deployment logs (sanitized)
+❌ CANNOT: Access secrets directly
+❌ CANNOT: Deploy without approval
+❌ CANNOT: Modify Cedar policies
+❌ CANNOT: Access user credentials
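The permission matrix above can be sketched as a small allow/deny check. This is a minimal illustration in Python, not the actual Cedar engine; the function and rule names are assumptions made for the example:

```python
# Illustrative sketch of the AI access rules above (NOT the real Cedar
# engine; check_ai_permission and the rule set are hypothetical names).

# Hard forbids that no context can override
FORBIDDEN = {("read", "secret"), ("modify", "policy"), ("read", "credential")}

def check_ai_permission(action: str, resource_type: str,
                        environment: str = "dev",
                        human_approved: bool = False) -> bool:
    """Return True if the AI service may perform the action."""
    if (action, resource_type) in FORBIDDEN:
        return False                      # secrets/policies: always denied
    if action == "deploy":
        return human_approved             # deploys need a human in the loop
    if action == "generate_config":
        # production configs go through a separate approval workflow
        return environment in ("dev", "staging")
    return action in ("read_schema", "query_docs", "analyze_logs")

print(check_ai_permission("read", "secret"))                         # False
print(check_ai_permission("generate_config", "schema", "staging"))   # True
print(check_ai_permission("deploy", "infra", human_approved=False))  # False
```

The real enforcement lives in Cedar; a sketch like this is only useful for reasoning about the rule interactions.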
+
+Data Privacy:
+[ai.privacy]
+# Sanitize before sending to LLM
+sanitize_secrets = true
+sanitize_pii = true
+sanitize_credentials = true
+
+# What gets sent to LLM:
+# ✅ Nickel schemas (public)
+# ✅ Documentation (public)
+# ✅ Error messages (sanitized)
+# ❌ Secret values (never)
+# ❌ Passwords (never)
+# ❌ API keys (never)
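The sanitization settings above imply a redaction pass over every outgoing prompt. A minimal sketch, assuming regex-based redaction (the patterns and function name are illustrative; a real implementation would also consult the platform's secret inventory):

```python
import re

# Sketch of prompt sanitization before text leaves for an LLM provider.
# Patterns are examples only, not an exhaustive secret detector.
PATTERNS = [
    (re.compile(r'(password\s*[=:]\s*)\S+', re.I), r'\1[REDACTED]'),
    (re.compile(r'(api[_-]?key\s*[=:]\s*)\S+', re.I), r'\1[REDACTED]'),
    (re.compile(r'AKIA[0-9A-Z]{16}'), '[REDACTED-AWS-KEY]'),
]

def sanitize(text: str) -> str:
    """Replace likely secret values with redaction markers."""
    for pattern, replacement in PATTERNS:
        text = pattern.sub(replacement, text)
    return text

prompt = "deploy failed: password=s3cr3t api_key: abc123 AKIAABCDEFGHIJKLMNOP"
print(sanitize(prompt))  # secret values replaced with [REDACTED] markers
```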
+
+Audit Trail:
+// Every AI operation logged
+pub struct AIAuditLog {
+ timestamp: DateTime<Utc>,
+ user: UserId,
+ operation: AIOperation,
+ input_prompt: String,
+ generated_output: String,
+ validation_result: ValidationResult,
+ human_approval: Option<Approval>,
+ deployment_outcome: Option<DeploymentResult>,
+}
+
+Estimated Costs (per month, based on typical usage):
+Assumptions:
+- 100 active users
+- 10 AI config generations per user per day
+- Average prompt: 2000 tokens
+- Average response: 1000 tokens
+
+Provider: Anthropic Claude Sonnet
+Cost: $3 per 1M input tokens, $15 per 1M output tokens
+
+Monthly cost:
+= 100 users × 10 generations × 30 days × (2000 input + 1000 output tokens)
+= 100 × 10 × 30 × 3000 tokens
+= 90M tokens
+= (60M input × $3/1M) + (30M output × $15/1M)
+= $180 + $450
+= $630/month
+
+With caching (50% hit rate):
+= $315/month
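The arithmetic above can be captured in a small cost model, which also makes it easy to rerun with different assumptions:

```python
# Reproduces the estimate above ($3/M input tokens, $15/M output tokens).
def monthly_cost(users=100, gens_per_day=10, days=30,
                 input_tokens=2000, output_tokens=1000,
                 input_rate=3.0, output_rate=15.0,
                 cache_hit_rate=0.0):
    """Estimated monthly LLM spend in USD."""
    gens = users * gens_per_day * days          # 30,000 generations
    input_m = gens * input_tokens / 1_000_000   # 60M input tokens
    output_m = gens * output_tokens / 1_000_000 # 30M output tokens
    cost = input_m * input_rate + output_m * output_rate
    return cost * (1 - cache_hit_rate)

print(monthly_cost())                     # 630.0
print(monthly_cost(cache_hit_rate=0.5))   # 315.0
```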
+
+Cost optimization strategies:
+
+Caching (50-80% cost reduction)
+Streaming (lower latency, same cost)
+Local models for non-critical operations (zero marginal cost)
+Rate limiting (prevent runaway costs)
+
+
+
+
+Status: Accepted
+Last Updated: 2025-01-08
+Implementation: Planned (High Priority)
+Estimated Complexity: Very Complex
+Dependencies: ADR-008, ADR-011, ADR-013, ADR-014
+
+The provisioning platform integrates AI capabilities to provide intelligent assistance for infrastructure configuration, deployment, and troubleshooting.
+This section documents the AI system architecture, features, and usage patterns.
+
+The AI integration consists of multiple components working together to provide intelligent infrastructure provisioning:
+
+typdialog-ai: AI-assisted form filling and configuration
+typdialog-ag: Autonomous AI agents for complex workflows
+typdialog-prov-gen: Natural language to Nickel configuration generation
+ai-service: Core AI service backend with multi-provider support
+mcp-server: Model Context Protocol server for LLM integration
+rag: Retrieval-Augmented Generation for contextual knowledge
+
+
+
+Generate infrastructure configurations from plain English descriptions:
+provisioning ai generate "Create a production PostgreSQL cluster with encryption and daily backups"
+
+
+Real-time suggestions and explanations as you fill out configuration forms via typdialog web UI.
+
+AI analyzes deployment failures and suggests fixes:
+provisioning ai troubleshoot deployment-12345
+
+
+Configuration Optimization
+AI reviews configurations and suggests performance and security improvements:
+provisioning ai optimize workspaces/prod/config.ncl
+
+
+AI agents execute multi-step workflows with minimal human intervention:
+provisioning ai agent --goal "Set up complete dev environment for Python app"
+
+
+
+
+
+# Edit provisioning config
+vim provisioning/config/ai.toml
+
+# Set provider and enable features
+[ai]
+enabled = true
+provider = "anthropic" # or "openai" or "local"
+model = "claude-sonnet-4"
+
+[ai.features]
+form_assistance = true
+config_generation = true
+troubleshooting = true
+
+
+# Simple generation
+provisioning ai generate "PostgreSQL database with encryption"
+
+# With specific schema
+provisioning ai generate \
+ --schema database \
+ --output workspaces/dev/db.ncl \
+ "Production PostgreSQL with 100GB storage and daily backups"
+
+
+# Open typdialog web UI with AI assistance
+provisioning workspace init --interactive --ai-assist
+
+# AI provides real-time suggestions as you type
+# AI explains validation errors in plain English
+# AI fills multiple fields from natural language description
+
+
+# Analyze failed deployment
+provisioning ai troubleshoot deployment-12345
+
+# AI analyzes logs and suggests fixes
+# AI generates corrected configuration
+# AI explains root cause in plain language
+
+
+The AI system implements strict security controls:
+
+✅ Cedar Policies: AI access controlled by Cedar authorization
+✅ Secret Isolation: AI cannot access secrets directly
+✅ Human Approval: Critical operations require human approval
+✅ Audit Trail: All AI operations logged
+✅ Data Sanitization: Secrets/PII sanitized before sending to LLM
+✅ Local Models: Support for air-gapped deployments
+
+See Security Policies for complete details.
+
+Provider | Models | Best For
+Anthropic | Claude Sonnet 4, Claude Opus 4 | Complex configs, long context
+OpenAI | GPT-4 Turbo, GPT-4 | Fast suggestions, tool calling
+Local | Llama 3, Mistral | Air-gapped, privacy-critical
+
+
+
+AI features incur LLM API costs. The system implements cost controls:
+
+Caching: Reduces API calls by 50-80%
+Rate Limiting: Prevents runaway costs
+Budget Limits: Daily/monthly cost caps
+Local Models: Zero marginal cost for air-gapped deployments
+
+See Cost Management for optimization strategies.
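A daily budget cap like the one listed above can be enforced with a simple guard in front of the LLM client; a sketch (the class name and interface are assumptions, not the platform's actual API):

```python
# Illustrative daily spend cap for LLM calls (BudgetGuard is a
# hypothetical name, not part of the provisioning codebase).
class BudgetGuard:
    def __init__(self, daily_limit_usd: float):
        self.daily_limit = daily_limit_usd
        self.spent_today = 0.0

    def charge(self, cost_usd: float) -> bool:
        """Record spend; refuse (return False) once the cap would be exceeded."""
        if self.spent_today + cost_usd > self.daily_limit:
            return False
        self.spent_today += cost_usd
        return True

guard = BudgetGuard(daily_limit_usd=1.0)
assert guard.charge(0.6)
assert guard.charge(0.3)
assert not guard.charge(0.2)   # would exceed the $1.00 cap
```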
+
+The AI integration is documented in:
+
+
+
+Read Architecture to understand AI system design
+Configure AI features in Configuration
+Try Natural Language Config for your first AI-generated config
+Explore AI Agents for automation workflows
+Review Security Policies to understand access controls
+
+
+Version: 1.0
+Last Updated: 2025-01-08
+Status: Active
+
+
+
+
+
+
+
+
+
+
+
+
This document provides comprehensive documentation for all REST API endpoints in provisioning.
-
+
Provisioning exposes two main REST APIs:
Orchestrator API (Port 8080): Core workflow management and batch operations
@@ -19499,16 +19459,13 @@ Plugin integrates with provisioning config system:
Orchestrator : http://localhost:9090
Control Center : http://localhost:9080
-
+
All API endpoints (except health checks) require JWT authentication via the Authorization header:
Authorization: Bearer <jwt_token>
-```plaintext
-
-### Getting Access Token
-
-```http
-POST /auth/login
+
+
+POST /auth/login
Content-Type: application/json
{
@@ -19516,41 +19473,28 @@ Content-Type: application/json
"password": "password",
"mfa_code": "123456"
}
-```plaintext
-
-## Orchestrator API Endpoints
-
-### Health Check
-
-#### GET /health
-
-Check orchestrator health status.
-
-**Response:**
-
-```json
-{
+
+
+
+
+Check orchestrator health status.
+Response:
+{
"success": true,
"data": "Orchestrator is healthy"
}
-```plaintext
-
-### Task Management
-
-#### GET /tasks
-
-List all workflow tasks.
-
-**Query Parameters:**
-
-- `status` (optional): Filter by task status (Pending, Running, Completed, Failed, Cancelled)
-- `limit` (optional): Maximum number of results
-- `offset` (optional): Pagination offset
-
-**Response:**
-
-```json
-{
+
+
+
+List all workflow tasks.
+Query Parameters:
+
+status (optional): Filter by task status (Pending, Running, Completed, Failed, Cancelled)
+limit (optional): Maximum number of results
+offset (optional): Pagination offset
+
+Response:
+{
"success": true,
"data": [
{
@@ -19568,20 +19512,15 @@ List all workflow tasks.
}
]
}
-```plaintext
-
-#### GET /tasks/{id}
-
-Get specific task status and details.
-
-**Path Parameters:**
-
-- `id`: Task UUID
-
-**Response:**
-
-```json
-{
+
+GET /tasks/{id}
+Get specific task status and details.
+Path Parameters:
+
+id: Task UUID
+
+Response:
+{
"success": true,
"data": {
"id": "uuid-string",
@@ -19597,96 +19536,65 @@ Get specific task status and details.
"error": null
}
}
-```plaintext
-
-### Workflow Submission
-
-#### POST /workflows/servers/create
-
-Submit server creation workflow.
-
-**Request Body:**
-
-```json
-{
+
+
+
+Submit server creation workflow.
+Request Body:
+{
"infra": "production",
- "settings": "config.k",
+ "settings": "config.ncl",
"check_mode": false,
"wait": true
}
-```plaintext
-
-**Response:**
-
-```json
-{
+
+Response:
+{
"success": true,
"data": "uuid-task-id"
}
-```plaintext
-
-#### POST /workflows/taskserv/create
-
-Submit task service workflow.
-
-**Request Body:**
-
-```json
-{
+
+
+Submit task service workflow.
+Request Body:
+{
"operation": "create",
"taskserv": "kubernetes",
"infra": "production",
- "settings": "config.k",
+ "settings": "config.ncl",
"check_mode": false,
"wait": true
}
-```plaintext
-
-**Response:**
-
-```json
-{
+
+Response:
+{
"success": true,
"data": "uuid-task-id"
}
-```plaintext
-
-#### POST /workflows/cluster/create
-
-Submit cluster workflow.
-
-**Request Body:**
-
-```json
-{
+
+
+Submit cluster workflow.
+Request Body:
+{
"operation": "create",
"cluster_type": "buildkit",
"infra": "production",
- "settings": "config.k",
+ "settings": "config.ncl",
"check_mode": false,
"wait": true
}
-```plaintext
-
-**Response:**
-
-```json
-{
+
+Response:
+{
"success": true,
"data": "uuid-task-id"
}
-```plaintext
-
-### Batch Operations
-
-#### POST /batch/execute
-
-Execute batch workflow operation.
-
-**Request Body:**
-
-```json
-{
+
+
+
+Execute batch workflow operation.
+Request Body:
+{
"name": "multi_cloud_deployment",
"version": "1.0.0",
"storage_backend": "surrealdb",
@@ -19699,8 +19607,8 @@ Execute batch workflow operation.
"provider": "upcloud",
"dependencies": [],
"server_configs": [
- {"name": "web-01", "plan": "1xCPU-2GB", "zone": "de-fra1"},
- {"name": "web-02", "plan": "1xCPU-2GB", "zone": "us-nyc1"}
+ {"name": "web-01", "plan": "1xCPU-2 GB", "zone": "de-fra1"},
+ {"name": "web-02", "plan": "1xCPU-2 GB", "zone": "us-nyc1"}
]
},
{
@@ -19712,12 +19620,9 @@ Execute batch workflow operation.
}
]
}
-```plaintext
-
-**Response:**
-
-```json
-{
+
+Response:
+{
"success": true,
"data": {
"batch_id": "uuid-string",
@@ -19736,16 +19641,11 @@ Execute batch workflow operation.
]
}
}
-```plaintext
-
-#### GET /batch/operations
-
-List all batch operations.
-
-**Response:**
-
-```json
-{
+
+
+List all batch operations.
+Response:
+{
"success": true,
"data": [
{
@@ -19757,20 +19657,15 @@ List all batch operations.
}
]
}
-```plaintext
-
-#### GET /batch/operations/{id}
-
-Get batch operation status.
-
-**Path Parameters:**
-
-- `id`: Batch operation ID
-
-**Response:**
-
-```json
-{
+
+GET /batch/operations/{id}
+Get batch operation status.
+Path Parameters:
+
+id: Batch operation ID
+
+Response:
+{
"success": true,
"data": {
"batch_id": "uuid-string",
@@ -19786,39 +19681,28 @@ Get batch operation status.
]
}
}
-```plaintext
-
-#### POST /batch/operations/{id}/cancel
-
-Cancel running batch operation.
-
-**Path Parameters:**
-
-- `id`: Batch operation ID
-
-**Response:**
-
-```json
-{
+
+
+Cancel running batch operation.
+Path Parameters:
+
+id: Batch operation ID
+
+Response:
+{
"success": true,
"data": "Operation cancelled"
}
-```plaintext
-
-### State Management
-
-#### GET /state/workflows/{id}/progress
-
-Get real-time workflow progress.
-
-**Path Parameters:**
-
-- `id`: Workflow ID
-
-**Response:**
-
-```json
-{
+
+
+
+Get real-time workflow progress.
+Path Parameters:
+
+id: Workflow ID
+
+Response:
+{
"success": true,
"data": {
"workflow_id": "uuid-string",
@@ -19829,20 +19713,15 @@ Get real-time workflow progress.
"estimated_time_remaining": 180
}
}
-```plaintext
-
-#### GET /state/workflows/{id}/snapshots
-
-Get workflow state snapshots.
-
-**Path Parameters:**
-
-- `id`: Workflow ID
-
-**Response:**
-
-```json
-{
+
+
+Get workflow state snapshots.
+Path Parameters:
+
+id: Workflow ID
+
+Response:
+{
"success": true,
"data": [
{
@@ -19853,16 +19732,11 @@ Get workflow state snapshots.
}
]
}
-```plaintext
-
-#### GET /state/system/metrics
-
-Get system-wide metrics.
-
-**Response:**
-
-```json
-{
+
+
+Get system-wide metrics.
+Response:
+{
"success": true,
"data": {
"total_workflows": 150,
@@ -19876,16 +19750,11 @@ Get system-wide metrics.
}
}
}
-```plaintext
-
-#### GET /state/system/health
-
-Get system health status.
-
-**Response:**
-
-```json
-{
+
+
+Get system health status.
+Response:
+{
"success": true,
"data": {
"overall_status": "Healthy",
@@ -19897,58 +19766,39 @@ Get system health status.
"last_check": "2025-09-26T10:00:00Z"
}
}
-```plaintext
-
-#### GET /state/statistics
-
-Get state manager statistics.
-
-**Response:**
-
-```json
-{
+
+
+Get state manager statistics.
+Response:
+{
"success": true,
"data": {
"total_workflows": 150,
"active_snapshots": 25,
- "storage_usage": "245MB",
+ "storage_usage": "245 MB",
"average_workflow_duration": 300
}
}
-```plaintext
-
-### Rollback and Recovery
-
-#### POST /rollback/checkpoints
-
-Create new checkpoint.
-
-**Request Body:**
-
-```json
-{
+
+
+
+Create new checkpoint.
+Request Body:
+{
"name": "before_major_update",
"description": "Checkpoint before deploying v2.0.0"
}
-```plaintext
-
-**Response:**
-
-```json
-{
+
+Response:
+{
"success": true,
"data": "checkpoint-uuid"
}
-```plaintext
-
-#### GET /rollback/checkpoints
-
-List all checkpoints.
-
-**Response:**
-
-```json
-{
+
+
+List all checkpoints.
+Response:
+{
"success": true,
"data": [
{
@@ -19956,60 +19806,44 @@ List all checkpoints.
"name": "before_major_update",
"description": "Checkpoint before deploying v2.0.0",
"created_at": "2025-09-26T10:00:00Z",
- "size": "150MB"
+ "size": "150 MB"
}
]
}
-```plaintext
-
-#### GET /rollback/checkpoints/{id}
-
-Get specific checkpoint details.
-
-**Path Parameters:**
-
-- `id`: Checkpoint ID
-
-**Response:**
-
-```json
-{
+
+GET /rollback/checkpoints/{id}
+Get specific checkpoint details.
+Path Parameters:
+
+id: Checkpoint ID
+
+Response:
+{
"success": true,
"data": {
"id": "checkpoint-uuid",
"name": "before_major_update",
"description": "Checkpoint before deploying v2.0.0",
"created_at": "2025-09-26T10:00:00Z",
- "size": "150MB",
+ "size": "150 MB",
"operations_count": 25
}
}
-```plaintext
-
-#### POST /rollback/execute
-
-Execute rollback operation.
-
-**Request Body:**
-
-```json
-{
+
+
+Execute rollback operation.
+Request Body:
+{
"checkpoint_id": "checkpoint-uuid"
}
-```plaintext
-
-Or for partial rollback:
-
-```json
-{
+
+Or for partial rollback:
+{
"operation_ids": ["op-1", "op-2", "op-3"]
}
-```plaintext
-
-**Response:**
-
-```json
-{
+
+Response:
+{
"success": true,
"data": {
"rollback_id": "rollback-uuid",
@@ -20019,33 +19853,23 @@ Or for partial rollback:
"duration": 45.5
}
}
-```plaintext
-
-#### POST /rollback/restore/{id}
-
-Restore system state from checkpoint.
-
-**Path Parameters:**
-
-- `id`: Checkpoint ID
-
-**Response:**
-
-```json
-{
+
+POST /rollback/restore/{id}
+Restore system state from checkpoint.
+Path Parameters:
+
+id: Checkpoint ID
+
+Response:
+{
"success": true,
"data": "State restored from checkpoint checkpoint-uuid"
}
-```plaintext
-
-#### GET /rollback/statistics
-
-Get rollback system statistics.
-
-**Response:**
-
-```json
-{
+
+
+Get rollback system statistics.
+Response:
+{
"success": true,
"data": {
"total_checkpoints": 10,
@@ -20054,30 +19878,20 @@ Get rollback system statistics.
"average_rollback_time": 30.5
}
}
-```plaintext
-
-## Control Center API Endpoints
-
-### Authentication
-
-#### POST /auth/login
-
-Authenticate user and get JWT token.
-
-**Request Body:**
-
-```json
-{
+
+
+
+
+Authenticate user and get JWT token.
+Request Body:
+{
"username": "admin",
"password": "secure_password",
"mfa_code": "123456"
}
-```plaintext
-
-**Response:**
-
-```json
-{
+
+Response:
+{
"success": true,
"data": {
"token": "jwt-token-string",
@@ -20090,60 +19904,41 @@ Authenticate user and get JWT token.
}
}
}
-```plaintext
-
-#### POST /auth/refresh
-
-Refresh JWT token.
-
-**Request Body:**
-
-```json
-{
+
+
+Refresh JWT token.
+Request Body:
+{
"token": "current-jwt-token"
}
-```plaintext
-
-**Response:**
-
-```json
-{
+
+Response:
+{
"success": true,
"data": {
"token": "new-jwt-token",
"expires_at": "2025-09-26T18:00:00Z"
}
}
-```plaintext
-
-#### POST /auth/logout
-
-Logout and invalidate token.
-
-**Response:**
-
-```json
-{
+
+
+Logout and invalidate token.
+Response:
+{
"success": true,
"data": "Successfully logged out"
}
-```plaintext
-
-### User Management
-
-#### GET /users
-
-List all users.
-
-**Query Parameters:**
-
-- `role` (optional): Filter by role
-- `enabled` (optional): Filter by enabled status
-
-**Response:**
-
-```json
-{
+
+
+
+List all users.
+Query Parameters:
+
+role (optional): Filter by role
+enabled (optional): Filter by enabled status
+
+Response:
+{
"success": true,
"data": [
{
@@ -20157,28 +19952,20 @@ List all users.
}
]
}
-```plaintext
-
-#### POST /users
-
-Create new user.
-
-**Request Body:**
-
-```json
-{
+
+
+Create new user.
+Request Body:
+{
"username": "newuser",
"email": "newuser@example.com",
"password": "secure_password",
"roles": ["operator"],
"enabled": true
}
-```plaintext
-
-**Response:**
-
-```json
-{
+
+Response:
+{
"success": true,
"data": {
"id": "new-user-uuid",
@@ -20188,62 +19975,43 @@ Create new user.
"enabled": true
}
}
-```plaintext
-
-#### PUT /users/{id}
-
-Update existing user.
-
-**Path Parameters:**
-
-- `id`: User ID
-
-**Request Body:**
-
-```json
-{
+
+PUT /users/{id}
+Update existing user.
+Path Parameters:
+
+id: User ID
+
+Request Body:
+{
"email": "updated@example.com",
"roles": ["admin", "operator"],
"enabled": false
}
-```plaintext
-
-**Response:**
-
-```json
-{
+
+Response:
+{
"success": true,
"data": "User updated successfully"
}
-```plaintext
-
-#### DELETE /users/{id}
-
-Delete user.
-
-**Path Parameters:**
-
-- `id`: User ID
-
-**Response:**
-
-```json
-{
+
+DELETE /users/{id}
+Delete user.
+Path Parameters:
+
+id: User ID
+
+Response:
+{
"success": true,
"data": "User deleted successfully"
}
-```plaintext
-
-### Policy Management
-
-#### GET /policies
-
-List all policies.
-
-**Response:**
-
-```json
-{
+
+
+
+List all policies.
+Response:
+{
"success": true,
"data": [
{
@@ -20256,16 +20024,11 @@ List all policies.
}
]
}
-```plaintext
-
-#### POST /policies
-
-Create new policy.
-
-**Request Body:**
-
-```json
-{
+
+
+Create new policy.
+Request Body:
+{
"name": "new_policy",
"version": "1.0.0",
"rules": [
@@ -20277,12 +20040,9 @@ Create new policy.
}
]
}
-```plaintext
-
-**Response:**
-
-```json
-{
+
+Response:
+{
"success": true,
"data": {
"id": "new-policy-uuid",
@@ -20290,54 +20050,40 @@ Create new policy.
"version": "1.0.0"
}
}
-```plaintext
-
-#### PUT /policies/{id}
-
-Update policy.
-
-**Path Parameters:**
-
-- `id`: Policy ID
-
-**Request Body:**
-
-```json
-{
+
+PUT /policies/{id}
+Update policy.
+Path Parameters:
+
+id: Policy ID
+
+Request Body:
+{
"name": "updated_policy",
"rules": [...]
}
-```plaintext
-
-**Response:**
-
-```json
-{
+
+Response:
+{
"success": true,
"data": "Policy updated successfully"
}
-```plaintext
-
-### Audit Logging
-
-#### GET /audit/logs
-
-Get audit logs.
-
-**Query Parameters:**
-
-- `user_id` (optional): Filter by user
-- `action` (optional): Filter by action
-- `resource` (optional): Filter by resource
-- `from` (optional): Start date (ISO 8601)
-- `to` (optional): End date (ISO 8601)
-- `limit` (optional): Maximum results
-- `offset` (optional): Pagination offset
-
-**Response:**
-
-```json
-{
+
+
+
+Get audit logs.
+Query Parameters:
+
+user_id (optional): Filter by user
+action (optional): Filter by action
+resource (optional): Filter by resource
+from (optional): Start date (ISO 8601)
+to (optional): End date (ISO 8601)
+limit (optional): Maximum results
+offset (optional): Pagination offset
+
+Response:
+{
"success": true,
"data": [
{
@@ -20351,56 +20097,42 @@ Get audit logs.
}
]
}
-```plaintext
-
-## Error Responses
-
-All endpoints may return error responses in this format:
-
-```json
-{
+
+
+All endpoints may return error responses in this format:
+{
"success": false,
"error": "Detailed error message"
}
-```plaintext
-
-### HTTP Status Codes
-
-- `200 OK`: Successful request
-- `201 Created`: Resource created successfully
-- `400 Bad Request`: Invalid request parameters
-- `401 Unauthorized`: Authentication required or invalid
-- `403 Forbidden`: Permission denied
-- `404 Not Found`: Resource not found
-- `422 Unprocessable Entity`: Validation error
-- `500 Internal Server Error`: Server error
-
-## Rate Limiting
-
-API endpoints are rate-limited:
-
-- Authentication: 5 requests per minute per IP
-- General APIs: 100 requests per minute per user
-- Batch operations: 10 requests per minute per user
-
-Rate limit headers are included in responses:
-
-```http
-X-RateLimit-Limit: 100
+
+
+
+200 OK: Successful request
+201 Created: Resource created successfully
+400 Bad Request: Invalid request parameters
+401 Unauthorized: Authentication required or invalid
+403 Forbidden: Permission denied
+404 Not Found: Resource not found
+422 Unprocessable Entity: Validation error
+500 Internal Server Error: Server error
+
+
+API endpoints are rate-limited:
+
+Authentication: 5 requests per minute per IP
+General APIs: 100 requests per minute per user
+Batch operations: 10 requests per minute per user
+
+Rate limit headers are included in responses:
+X-RateLimit-Limit: 100
X-RateLimit-Remaining: 95
X-RateLimit-Reset: 1632150000
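A client can use these headers to back off before hitting the limit; a minimal sketch (header parsing omitted, values passed in directly):

```python
# Decide how long to wait based on the X-RateLimit-* headers above.
def backoff_seconds(remaining: int, reset_epoch: int, now: int) -> float:
    """Seconds to sleep before the next request."""
    if remaining > 0:
        return 0.0                          # budget left, go ahead
    return max(0.0, reset_epoch - now)      # wait until the window resets

# With X-RateLimit-Remaining: 0 and a reset 30 s away, wait 30 s:
print(backoff_seconds(0, 1632150000, 1632149970))   # 30.0
print(backoff_seconds(95, 1632150000, 1632149970))  # 0.0
```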
-```plaintext
-
-## Monitoring Endpoints
-
-### GET /metrics
-
-Prometheus-compatible metrics endpoint.
-
-**Response:**
-
-```plaintext
-# HELP orchestrator_tasks_total Total number of tasks
+
+
+
+Prometheus-compatible metrics endpoint.
+Response:
+# HELP orchestrator_tasks_total Total number of tasks
# TYPE orchestrator_tasks_total counter
orchestrator_tasks_total{status="completed"} 150
orchestrator_tasks_total{status="failed"} 5
@@ -20410,27 +20142,19 @@ orchestrator_tasks_total{status="failed"} 5
orchestrator_task_duration_seconds_bucket{le="10"} 50
orchestrator_task_duration_seconds_bucket{le="30"} 120
orchestrator_task_duration_seconds_bucket{le="+Inf"} 155
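Lines in this exposition format can be parsed with a few lines of code; a sketch only, since real clients should use a proper Prometheus parser library:

```python
# Minimal parse of Prometheus text exposition lines (sketch only).
def parse_metrics(text: str) -> dict:
    """Map 'name{labels}' -> numeric value, skipping comments."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, value = line.rpartition(" ")
        metrics[name] = float(value)
    return metrics

sample = '''
# HELP orchestrator_tasks_total Total number of tasks
# TYPE orchestrator_tasks_total counter
orchestrator_tasks_total{status="completed"} 150
orchestrator_tasks_total{status="failed"} 5
'''
m = parse_metrics(sample)
print(m['orchestrator_tasks_total{status="completed"}'])  # 150.0
```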
-```plaintext
-
-### WebSocket /ws
-
-Real-time event streaming via WebSocket connection.
-
-**Connection:**
-
-```javascript
-const ws = new WebSocket('ws://localhost:9090/ws?token=jwt-token');
+
+
+Real-time event streaming via WebSocket connection.
+Connection:
+const ws = new WebSocket('ws://localhost:9090/ws?token=jwt-token');
ws.onmessage = function(event) {
const data = JSON.parse(event.data);
console.log('Event:', data);
};
-```plaintext
-
-**Event Format:**
-
-```json
-{
+
+Event Format:
+{
"event_type": "TaskStatusChanged",
"timestamp": "2025-09-26T10:00:00Z",
"data": {
@@ -20442,14 +20166,10 @@ ws.onmessage = function(event) {
"status": "completed"
}
}
-```plaintext
-
-## SDK Examples
-
-### Python SDK Example
-
-```python
-import requests
+
+
+
+import requests
class ProvisioningClient:
def __init__(self, base_url, token):
@@ -20482,14 +20202,11 @@ class ProvisioningClient:
# Usage
client = ProvisioningClient('http://localhost:9090', 'your-jwt-token')
-result = client.create_server_workflow('production', 'config.k')
+result = client.create_server_workflow('production', 'config.ncl')
print(f"Task ID: {result['data']}")
-```plaintext
-
-### JavaScript/Node.js SDK Example
-
-```javascript
-const axios = require('axios');
+
+
+const axios = require('axios');
class ProvisioningClient {
constructor(baseUrl, token) {
@@ -20520,20 +20237,14 @@ class ProvisioningClient {
// Usage
const client = new ProvisioningClient('http://localhost:9090', 'your-jwt-token');
-const result = await client.createServerWorkflow('production', 'config.k');
+const result = await client.createServerWorkflow('production', 'config.ncl');
console.log(`Task ID: ${result.data}`);
-```plaintext
-
-## Webhook Integration
-
-The system supports webhooks for external integrations:
-
-### Webhook Configuration
-
-Configure webhooks in the system configuration:
-
-```toml
-[webhooks]
+
+
+The system supports webhooks for external integrations:
+
+Configure webhooks in the system configuration:
+[webhooks]
enabled = true
endpoints = [
{
@@ -20542,12 +20253,9 @@ endpoints = [
secret = "webhook-secret"
}
]
-```plaintext
-
-### Webhook Payload
-
-```json
-{
+
+
+{
"event": "task.completed",
"timestamp": "2025-09-26T10:00:00Z",
"data": {
@@ -20557,50 +20265,36 @@ endpoints = [
},
"signature": "sha256=calculated-signature"
}
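The `sha256=` signature is an HMAC of the raw payload using the configured webhook secret; receivers should verify it with a constant-time comparison. A sketch of the verification side:

```python
import hmac
import hashlib

def verify_signature(payload: bytes, secret: str, signature_header: str) -> bool:
    """Check an incoming webhook body against its sha256= signature header."""
    expected = "sha256=" + hmac.new(
        secret.encode(), payload, hashlib.sha256
    ).hexdigest()
    # compare_digest avoids timing side channels
    return hmac.compare_digest(expected, signature_header)

body = b'{"event": "task.completed"}'
sig = "sha256=" + hmac.new(b"webhook-secret", body, hashlib.sha256).hexdigest()
print(verify_signature(body, "webhook-secret", sig))  # True
print(verify_signature(body, "wrong-secret", sig))    # False
```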
-```plaintext
-
-## Pagination
-
-For endpoints that return lists, use pagination parameters:
-
-- `limit`: Maximum number of items per page (default: 50, max: 1000)
-- `offset`: Number of items to skip
-
-Pagination metadata is included in response headers:
-
-```http
-X-Total-Count: 1500
+
+
+For endpoints that return lists, use pagination parameters:
+
+limit: Maximum number of items per page (default: 50, max: 1000)
+offset: Number of items to skip
+
+Pagination metadata is included in response headers:
+X-Total-Count: 1500
X-Limit: 50
X-Offset: 100
Link: </api/endpoint?offset=150&limit=50>; rel="next"
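Walking a full list with `limit`/`offset` is a simple loop; in this sketch `fetch_page` stands in for the HTTP call returning the standard envelope:

```python
# Sketch of paginating a list endpoint with limit/offset.
# fetch_page is a stand-in for an authenticated HTTP GET.
ALL_ITEMS = [f"task-{i}" for i in range(120)]

def fetch_page(limit: int, offset: int) -> dict:
    return {"success": True, "data": ALL_ITEMS[offset:offset + limit]}

def fetch_all(limit: int = 50) -> list:
    items, offset = [], 0
    while True:
        page = fetch_page(limit, offset)["data"]
        items.extend(page)
        if len(page) < limit:      # short page means we reached the end
            break
        offset += limit
    return items

print(len(fetch_all()))  # 120
```

A production client would instead follow the `Link: ...; rel="next"` header, which avoids hardcoding the offset arithmetic.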
-```plaintext
-
-## API Versioning
-
-The API uses header-based versioning:
-
-```http
-Accept: application/vnd.provisioning.v1+json
-```plaintext
-
-Current version: v1
-
-## Testing
-
-Use the included test suite to validate API functionality:
-
-```bash
-# Run API integration tests
+
+
+The API uses header-based versioning:
+Accept: application/vnd.provisioning.v1+json
+
+Current version: v1
+
+Use the included test suite to validate API functionality:
+# Run API integration tests
cd src/orchestrator
cargo test --test api_tests
# Run load tests
cargo test --test load_tests --release
-```plaintext
This document provides comprehensive documentation for the WebSocket API used for real-time monitoring, event streaming, and live updates in provisioning.
-
+
The WebSocket API enables real-time communication between clients and the provisioning orchestrator, providing:
Live workflow progress updates
@@ -20642,7 +20336,7 @@ cargo test --test load_tests --release
Component-specific logs
Search and filtering
-
+
All WebSocket connections require authentication via JWT token:
// Include token in connection URL
@@ -21375,14 +21069,14 @@ ws.on('disconnected', (event) => {
Enable message compression for large events:
const ws = new WebSocket('ws://localhost:9090/ws?token=jwt&compression=true');
-
+
The server implements rate limiting to prevent abuse:
Maximum connections per user: 10
Maximum messages per second: 100
Maximum subscription events: 50
-
+
All connections require valid JWT tokens
@@ -21404,7 +21098,7 @@ ws.on('disconnected', (event) => {
This WebSocket API provides a robust, real-time communication channel for monitoring and managing provisioning with comprehensive security and performance features.
This document provides comprehensive guidance for developing extensions for provisioning, including providers, task services, and cluster configurations.
-
+
Provisioning supports three types of extensions:
Providers : Cloud infrastructure providers (AWS, UpCloud, Local, etc.)
@@ -21415,12 +21109,12 @@ ws.on('disconnected', (event) => {
extension-name/
-├── kcl.mod # KCL module definition
-├── kcl/ # KCL configuration files
-│ ├── mod.k # Main module
-│ ├── settings.k # Settings schema
-│ ├── version.k # Version configuration
-│ └── lib.k # Common functions
+├── manifest.toml # Extension metadata
+├── schemas/ # Nickel configuration files
+│ ├── main.ncl # Main schema
+│ ├── settings.ncl # Settings schema
+│ ├── version.ncl # Version configuration
+│ └── contracts.ncl # Contract definitions
├── nulib/ # Nushell library modules
│ ├── mod.nu # Main module
│ ├── create.nu # Creation operations
@@ -21433,62 +21127,55 @@ ws.on('disconnected', (event) => {
│ └── generate.nu # Generation commands
├── README.md # Extension documentation
└── metadata.toml # Extension metadata
-```plaintext
-
-## Provider Extension API
-
-### Provider Interface
-
-All providers must implement the following interface:
-
-#### Core Operations
-
-- `create-server(config: record) -> record`
-- `delete-server(server_id: string) -> null`
-- `list-servers() -> list<record>`
-- `get-server-info(server_id: string) -> record`
-- `start-server(server_id: string) -> null`
-- `stop-server(server_id: string) -> null`
-- `reboot-server(server_id: string) -> null`
-
-#### Pricing and Plans
-
-- `get-pricing() -> list<record>`
-- `get-plans() -> list<record>`
-- `get-zones() -> list<record>`
-
-#### SSH and Access
-
-- `get-ssh-access(server_id: string) -> record`
-- `configure-firewall(server_id: string, rules: list<record>) -> null`
-
-### Provider Development Template
-
-#### KCL Configuration Schema
-
-Create `kcl/settings.k`:
-
-```kcl
-# Provider settings schema
-schema ProviderSettings {
+
+
+
+All providers must implement the following interface:
+
+
+create-server(config: record) -> record
+delete-server(server_id: string) -> null
+list-servers() -> list<record>
+get-server-info(server_id: string) -> record
+start-server(server_id: string) -> null
+stop-server(server_id: string) -> null
+reboot-server(server_id: string) -> null
+
+
+
+get-pricing() -> list<record>
+get-plans() -> list<record>
+get-zones() -> list<record>
+
+
+
+get-ssh-access(server_id: string) -> record
+configure-firewall(server_id: string, rules: list<record>) -> null
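
The same contract, expressed as an abstract base class for readers more familiar with typed interfaces (a sketch only; real providers implement these operations as Nushell functions):

```python
from abc import ABC, abstractmethod

class Provider(ABC):
    """Typed sketch of the provider interface described above."""

    @abstractmethod
    def create_server(self, config: dict) -> dict: ...

    @abstractmethod
    def delete_server(self, server_id: str) -> None: ...

    @abstractmethod
    def list_servers(self) -> list[dict]: ...

    @abstractmethod
    def get_server_info(self, server_id: str) -> dict: ...

    @abstractmethod
    def get_pricing(self) -> list[dict]: ...

    @abstractmethod
    def get_ssh_access(self, server_id: str) -> dict: ...
```

A concrete provider must implement every method before it can be instantiated, mirroring the interface-validation rule applied to extensions.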
+
+
+
+Create schemas/settings.ncl:
+# Provider settings schema
+{
+ ProviderSettings = {
# Authentication configuration
- auth: {
- method: "api_key" | "certificate" | "oauth" | "basic"
- api_key?: str
- api_secret?: str
- username?: str
- password?: str
- certificate_path?: str
- private_key_path?: str
- }
+ auth | {
+ method | std.enum.TagOrString | [| 'api_key, 'certificate, 'oauth, 'basic |],
+ api_key | String | optional,
+ api_secret | String | optional,
+ username | String | optional,
+ password | String | optional,
+ certificate_path | String | optional,
+ private_key_path | String | optional,
+ },
# API configuration
- api: {
- base_url: str
- version?: str = "v1"
- timeout?: int = 30
- retries?: int = 3
- }
+ api | {
+ base_url | String,
+ version | String = "v1",
+ timeout | Number = 30,
+ retries | Number = 3,
+ },
# Default server configuration
defaults: {
@@ -21536,14 +21223,10 @@ schema ServerConfig {
bandwidth?: int
}
}
-```plaintext
-
-#### Nushell Implementation
-
-Create `nulib/mod.nu`:
-
-```nushell
-use std log
+
+
+Create nulib/mod.nu:
+use std log
# Provider name and version
export const PROVIDER_NAME = "my-provider"
@@ -21623,12 +21306,9 @@ export def "test-connection" [config: record] -> record {
}
}
}
-```plaintext
-
-Create `nulib/create.nu`:
-
-```nushell
-use std log
+
+Create nulib/create.nu:
+use std log
use utils.nu *
export def "create-server" [
@@ -21758,14 +21438,10 @@ def wait-for-server-ready [server_id: string] -> string {
error make { msg: "Server creation timeout" }
}
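
The wait-for-server-ready helper above polls until the server reports ready or the timeout elapses. The same pattern in Python, with the status lookup and clock injected so the sketch stays provider-agnostic and testable:

```python
import time

def wait_for_ready(get_status, timeout_s: float = 300.0,
                   interval_s: float = 5.0,
                   clock=time.monotonic, sleep=time.sleep) -> str:
    """Poll get_status() until it returns 'running', or raise on timeout."""
    deadline = clock() + timeout_s
    while clock() < deadline:
        status = get_status()
        if status == "running":
            return status
        sleep(interval_s)
    raise TimeoutError("Server creation timeout")
```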
-```plaintext
-
-### Provider Registration
-
-Add provider metadata in `metadata.toml`:
-
-```toml
-[extension]
+
+
+Add provider metadata in metadata.toml:
+[extension]
name = "my-provider"
type = "provider"
version = "1.0.0"
@@ -21776,7 +21452,7 @@ license = "MIT"
[compatibility]
provisioning_version = ">=2.0.0"
nushell_version = ">=0.107.0"
-kcl_version = ">=0.11.0"
+nickel_version = ">=1.15.0"
[capabilities]
server_management = true
@@ -21796,88 +21472,79 @@ available = ["us-east-1", "us-west-2", "eu-west-1"]
[support]
documentation = "https://docs.example.com/provider"
issues = "https://github.com/example/provider/issues"
-```plaintext
-
-## Task Service Extension API
-
-### Task Service Interface
-
-Task services must implement:
-
-#### Core Operations
-
-- `install(config: record) -> record`
-- `uninstall(config: record) -> null`
-- `configure(config: record) -> null`
-- `status() -> record`
-- `restart() -> null`
-- `upgrade(version: string) -> record`
-
-#### Version Management
-
-- `get-current-version() -> string`
-- `get-available-versions() -> list<string>`
-- `check-updates() -> record`
-
-### Task Service Development Template
-
-#### KCL Schema
-
-Create `kcl/version.k`:
-
-```kcl
-# Task service version configuration
-import version_management
-
-taskserv_version: version_management.TaskservVersion = {
- name = "my-service"
- version = "1.0.0"
+
+
+
+Task services must implement:
+
+
+install(config: record) -> record
+uninstall(config: record) -> null
+configure(config: record) -> null
+status() -> record
+restart() -> null
+upgrade(version: string) -> record
+
+
+
+get-current-version() -> string
+get-available-versions() -> list<string>
+check-updates() -> record
+
+
+
+Create schemas/version.ncl:
+# Task service version configuration
+{
+ taskserv_version = {
+ name | String = "my-service",
+ version | String = "1.0.0",
# Version source configuration
- source = {
- type = "github"
- repository = "example/my-service"
- release_pattern = "v{version}"
- }
+ source | {
+ type | String = "github",
+ repository | String = "example/my-service",
+ release_pattern | String = "v{version}",
+ },
# Installation configuration
- install = {
- method = "binary"
- binary_name = "my-service"
- binary_path = "/usr/local/bin"
- config_path = "/etc/my-service"
- data_path = "/var/lib/my-service"
- }
+ install | {
+ method | String = "binary",
+ binary_name | String = "my-service",
+ binary_path | String = "/usr/local/bin",
+ config_path | String = "/etc/my-service",
+ data_path | String = "/var/lib/my-service",
+ },
# Dependencies
- dependencies = [
- { name = "containerd", version = ">=1.6.0" }
- ]
+ dependencies = [
+ { name = "containerd", version = ">=1.6.0" }
+ ],
# Service configuration
- service = {
- type = "systemd"
- user = "my-service"
- group = "my-service"
- ports = [8080, 9090]
- }
+ service | {
+ type | String = "systemd",
+ user | String = "my-service",
+ group | String = "my-service",
+ ports | Array Number = [8080, 9090],
+ },
# Health check configuration
- health_check = {
- endpoint = "http://localhost:9090/health"
- interval = 30
- timeout = 5
- retries = 3
- }
+ health_check | {
+ endpoint | String = "http://localhost:9090/health",
+ interval | Number = 30,
+ timeout | Number = 5,
+ retries | Number = 3,
+ },
+ }
}
-```plaintext
-
-#### Nushell Implementation
-
-Create `nulib/mod.nu`:
-
-```nushell
-use std log
+
+
+Create nulib/mod.nu:
+use std log
use ../../../lib_provisioning *
export const SERVICE_NAME = "my-service"
@@ -22062,156 +21729,139 @@ def check-health [] -> record {
}
}
}
-```plaintext
-
-## Cluster Extension API
-
-### Cluster Interface
-
-Clusters orchestrate multiple components:
-
-#### Core Operations
-
-- `create(config: record) -> record`
-- `delete(config: record) -> null`
-- `status() -> record`
-- `scale(replicas: int) -> record`
-- `upgrade(version: string) -> record`
-
-#### Component Management
-
-- `list-components() -> list<record>`
-- `component-status(name: string) -> record`
-- `restart-component(name: string) -> null`
-
-### Cluster Development Template
-
-#### KCL Configuration
-
-Create `kcl/cluster.k`:
-
-```kcl
-# Cluster configuration schema
-schema ClusterConfig {
+
+
+
+Clusters orchestrate multiple components:
+
+
+create(config: record) -> record
+delete(config: record) -> null
+status() -> record
+scale(replicas: int) -> record
+upgrade(version: string) -> record
+
+
+
+list-components() -> list<record>
+component-status(name: string) -> record
+restart-component(name: string) -> null
+
+
+
+Create schemas/cluster.ncl:
+# Cluster configuration schema
+{
+ ClusterConfig = {
# Cluster metadata
- name: str
- version: str = "1.0.0"
- description?: str
+ name | String,
+ version | String = "1.0.0",
+ description | String = "",
# Components to deploy
- components: [Component]
+ components | Array Component,
# Resource requirements
- resources: {
- min_nodes?: int = 1
- cpu_per_node?: str = "2"
- memory_per_node?: str = "4Gi"
- storage_per_node?: str = "20Gi"
- }
+ resources | {
+ min_nodes | Number = 1,
+ cpu_per_node | String = "2",
+ memory_per_node | String = "4Gi",
+ storage_per_node | String = "20Gi",
+ },
# Network configuration
- network: {
- cluster_cidr?: str = "10.244.0.0/16"
- service_cidr?: str = "10.96.0.0/12"
- dns_domain?: str = "cluster.local"
- }
+ network | {
+ cluster_cidr | String = "10.244.0.0/16",
+ service_cidr | String = "10.96.0.0/12",
+ dns_domain | String = "cluster.local",
+ },
# Feature flags
- features: {
- monitoring?: bool = true
- logging?: bool = true
- ingress?: bool = false
- storage?: bool = true
- }
-}
+ features | {
+ monitoring | Bool = true,
+ logging | Bool = true,
+ ingress | Bool = false,
+ storage | Bool = true,
+ },
+ },
-schema Component {
- name: str
- type: "taskserv" | "application" | "infrastructure"
- version?: str
- enabled: bool = true
- dependencies?: [str] = []
-
- # Component-specific configuration
- config?: {str: any} = {}
-
- # Resource requirements
- resources?: {
- cpu?: str
- memory?: str
- storage?: str
- replicas?: int = 1
- }
-}
-
-# Example cluster configuration
-buildkit_cluster: ClusterConfig = {
- name = "buildkit"
- version = "1.0.0"
- description = "Container build cluster with BuildKit and registry"
+ Component = {
+ name | String,
+ type | std.enum.TagOrString | [| 'taskserv, 'application, 'infrastructure |],
+ version | String = "",
+ enabled | Bool = true,
+ dependencies | Array String = [],
+ config | { _ | Dyn } = {},
+ resources | {
+ cpu | String = "",
+ memory | String = "",
+ storage | String = "",
+ replicas | Number = 1,
+ } = {},
+ },
+ # Example cluster configuration
+ buildkit_cluster = {
+ name = "buildkit",
+ version = "1.0.0",
+ description = "Container build cluster with BuildKit and registry",
components = [
- {
- name = "containerd"
- type = "taskserv"
- version = "1.7.0"
- enabled = True
- dependencies = []
+ {
+ name = "containerd",
+ type = "taskserv",
+ version = "1.7.0",
+ enabled = true,
+ dependencies = [],
+ },
+ {
+ name = "buildkit",
+ type = "taskserv",
+ version = "0.12.0",
+ enabled = true,
+ dependencies = ["containerd"],
+ config = {
+ worker_count = 4,
+ cache_size = "10Gi",
+ registry_mirrors = ["registry:5000"],
},
- {
- name = "buildkit"
- type = "taskserv"
- version = "0.12.0"
- enabled = True
- dependencies = ["containerd"]
- config = {
- worker_count = 4
- cache_size = "10Gi"
- registry_mirrors = ["registry:5000"]
- }
+ },
+ {
+ name = "registry",
+ type = "application",
+ version = "2.8.0",
+ enabled = true,
+ dependencies = [],
+ config = {
+ storage_driver = "filesystem",
+ storage_path = "/var/lib/registry",
+ auth_enabled = false,
},
- {
- name = "registry"
- type = "application"
- version = "2.8.0"
- enabled = True
- dependencies = []
- config = {
- storage_driver = "filesystem"
- storage_path = "/var/lib/registry"
- auth_enabled = False
- }
- resources = {
- cpu = "500m"
- memory = "1Gi"
- storage = "50Gi"
- replicas = 1
- }
- }
- ]
-
+ resources = {
+ cpu = "500m",
+ memory = "1Gi",
+ storage = "50Gi",
+ replicas = 1,
+ },
+ },
+ ],
resources = {
- min_nodes = 1
- cpu_per_node = "4"
- memory_per_node = "8Gi"
- storage_per_node = "100Gi"
- }
-
+ min_nodes = 1,
+ cpu_per_node = "4",
+ memory_per_node = "8Gi",
+ storage_per_node = "100Gi",
+ },
features = {
- monitoring = True
- logging = True
- ingress = False
- storage = True
- }
+ monitoring = true,
+ logging = true,
+ ingress = false,
+ storage = true,
+ },
+ },
}
-```plaintext
-
-#### Nushell Implementation
-
-Create `nulib/mod.nu`:
-
-```nushell
-use std log
+
+
+Create nulib/mod.nu:
+use std log
use ../../../lib_provisioning *
export const CLUSTER_NAME = "my-cluster"
@@ -22408,63 +22058,44 @@ def resolve-component-dependencies [components: list<record>] -> list<record> {
$sorted
}
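
The dependency resolution above is a topological sort over the component records. A minimal Python equivalent using Kahn's algorithm:

```python
from collections import deque

def resolve_dependencies(components: list[dict]) -> list[str]:
    """Order components so every dependency precedes its dependents."""
    deps = {c["name"]: set(c.get("dependencies", [])) for c in components}
    dependents = {name: [] for name in deps}
    for name, ds in deps.items():
        for d in ds:
            dependents[d].append(name)
    # Start with components that have no unresolved dependencies.
    ready = deque(n for n, ds in deps.items() if not ds)
    order = []
    while ready:
        n = ready.popleft()
        order.append(n)
        for m in dependents[n]:
            deps[m].discard(n)
            if not deps[m]:
                ready.append(m)
    if len(order) != len(dependents):
        raise ValueError("dependency cycle detected")
    return order

comps = [
    {"name": "buildkit", "dependencies": ["containerd"]},
    {"name": "containerd", "dependencies": []},
    {"name": "registry", "dependencies": []},
]
print(resolve_dependencies(comps))
```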
-```plaintext
-
-## Extension Registration and Discovery
-
-### Extension Registry
-
-Extensions are registered in the system through:
-
-1. **Directory Structure**: Placed in appropriate directories (providers/, taskservs/, cluster/)
-2. **Metadata Files**: `metadata.toml` with extension information
-3. **Module Files**: `kcl.mod` for KCL dependencies
-
-### Registration API
-
-#### `register-extension(path: string, type: string) -> record`
-
-Registers a new extension with the system.
-
-**Parameters:**
-
-- `path`: Path to extension directory
-- `type`: Extension type (provider, taskserv, cluster)
-
-#### `unregister-extension(name: string, type: string) -> null`
-
-Removes extension from the registry.
-
-#### `list-registered-extensions(type?: string) -> list<record>`
-
-Lists all registered extensions, optionally filtered by type.
-
-### Extension Validation
-
-#### Validation Rules
-
-1. **Structure Validation**: Required files and directories exist
-2. **Schema Validation**: KCL schemas are valid
-3. **Interface Validation**: Required functions are implemented
-4. **Dependency Validation**: Dependencies are available
-5. **Version Validation**: Version constraints are met
-
-#### `validate-extension(path: string, type: string) -> record`
-
-Validates extension structure and implementation.
-
-## Testing Extensions
-
-### Test Framework
-
-Extensions should include comprehensive tests:
-
-#### Unit Tests
-
-Create `tests/unit_tests.nu`:
-
-```nushell
-use std testing
+
+
+
+Extensions are registered in the system through:
+
+Directory Structure : Placed in appropriate directories (providers/, taskservs/, cluster/)
+Metadata Files : metadata.toml with extension information
+Schema Files : schemas/ directory with Nickel schema files
+
+
+
+Registers a new extension with the system.
+Parameters:
+
+path: Path to extension directory
+type: Extension type (provider, taskserv, cluster)
+
+
+Removes extension from the registry.
+
+Lists all registered extensions, optionally filtered by type.
+
+
+
+Structure Validation : Required files and directories exist
+Schema Validation : Nickel schemas are valid
+Interface Validation : Required functions are implemented
+Dependency Validation : Dependencies are available
+Version Validation : Version constraints are met
+
+
+Validates extension structure and implementation.
+
+
+Extensions should include comprehensive tests:
+
+Create tests/unit_tests.nu:
+use std testing
export def test_provider_config_validation [] {
let config = {
@@ -22480,7 +22111,7 @@ export def test_provider_config_validation [] {
export def test_server_creation_check_mode [] {
let config = {
hostname: "test-server",
- plan: "1xCPU-1GB",
+ plan: "1xCPU-1GB",
zone: "test-zone"
}
@@ -22488,20 +22119,16 @@ export def test_server_creation_check_mode [] {
assert ($result.check_mode == true)
assert ($result.would_create == true)
}
-```plaintext
-
-#### Integration Tests
-
-Create `tests/integration_tests.nu`:
-
-```nushell
-use std testing
+
+
+Create tests/integration_tests.nu:
+use std testing
export def test_full_server_lifecycle [] {
# Test server creation
let create_config = {
hostname: "integration-test",
- plan: "1xCPU-1GB",
+ plan: "1xCPU-1GB",
zone: "test-zone"
}
@@ -22521,12 +22148,9 @@ export def test_full_server_lifecycle [] {
let final_info = try { get-server-info $server_id } catch { null }
assert ($final_info == null)
}
-```plaintext
-
-### Running Tests
-
-```bash
-# Run unit tests
+
+
+# Run unit tests
nu tests/unit_tests.nu
# Run integration tests
@@ -22534,23 +22158,18 @@ nu tests/integration_tests.nu
# Run all tests
nu tests/run_all_tests.nu
-```plaintext
-
-## Documentation Requirements
-
-### Extension Documentation
-
-Each extension must include:
-
-1. **README.md**: Overview, installation, and usage
-2. **API.md**: Detailed API documentation
-3. **EXAMPLES.md**: Usage examples and tutorials
-4. **CHANGELOG.md**: Version history and changes
-
-### API Documentation Template
-
-```markdown
-# Extension Name API
+
+
+
+Each extension must include:
+
+README.md : Overview, installation, and usage
+API.md : Detailed API documentation
+EXAMPLES.md : Usage examples and tutorials
+CHANGELOG.md : Version history and changes
+
+
+# Extension Name API
## Overview
Brief description of the extension and its purpose.
@@ -22569,39 +22188,36 @@ Common usage patterns and examples.
## Troubleshooting
Common issues and solutions.
-```plaintext
-
-## Best Practices
-
-### Development Guidelines
-
-1. **Follow Naming Conventions**: Use consistent naming for functions and variables
-2. **Error Handling**: Implement comprehensive error handling and recovery
-3. **Logging**: Use structured logging for debugging and monitoring
-4. **Configuration Validation**: Validate all inputs and configurations
-5. **Documentation**: Document all public APIs and configurations
-6. **Testing**: Include comprehensive unit and integration tests
-7. **Versioning**: Follow semantic versioning principles
-8. **Security**: Implement secure credential handling and API calls
-
-### Performance Considerations
-
-1. **Caching**: Cache expensive operations and API calls
-2. **Parallel Processing**: Use parallel execution where possible
-3. **Resource Management**: Clean up resources properly
-4. **Batch Operations**: Batch API calls when possible
-5. **Health Monitoring**: Implement health checks and monitoring
-
-### Security Best Practices
-
-1. **Credential Management**: Store credentials securely
-2. **Input Validation**: Validate and sanitize all inputs
-3. **Access Control**: Implement proper access controls
-4. **Audit Logging**: Log all security-relevant operations
-5. **Encryption**: Encrypt sensitive data in transit and at rest
-
-This extension development API provides a comprehensive framework for building robust, scalable, and maintainable extensions for provisioning.
+
+
+
+Follow Naming Conventions : Use consistent naming for functions and variables
+Error Handling : Implement comprehensive error handling and recovery
+Logging : Use structured logging for debugging and monitoring
+Configuration Validation : Validate all inputs and configurations
+Documentation : Document all public APIs and configurations
+Testing : Include comprehensive unit and integration tests
+Versioning : Follow semantic versioning principles
+Security : Implement secure credential handling and API calls
+
+
+
+Caching : Cache expensive operations and API calls
+Parallel Processing : Use parallel execution where possible
+Resource Management : Clean up resources properly
+Batch Operations : Batch API calls when possible
+Health Monitoring : Implement health checks and monitoring
+
+
+
+Credential Management : Store credentials securely
+Input Validation : Validate and sanitize all inputs
+Access Control : Implement proper access controls
+Audit Logging : Log all security-relevant operations
+Encryption : Encrypt sensitive data in transit and at rest
+
+This extension development API provides a comprehensive framework for building robust, scalable, and maintainable extensions for provisioning.
This document provides comprehensive documentation for the official SDKs and client libraries available for provisioning.
@@ -22620,14 +22236,14 @@ This extension development API provides a comprehensive framework for building r
PHP SDK - PHP client library
-
+
# Install from PyPI
pip install provisioning-client
# Or install development version
pip install git+https://github.com/provisioning-systems/python-client.git
-
+
from provisioning_client import ProvisioningClient
import asyncio
@@ -22648,7 +22264,7 @@ async def main():
# Create a server workflow
task_id = client.create_server_workflow(
infra="production",
- settings="prod-settings.k",
+ settings="prod-settings.ncl",
wait=False
)
print(f"Server workflow created: {task_id}")
@@ -22690,7 +22306,7 @@ if __name__ == "__main__":
# Keep connection alive
await asyncio.sleep(3600) # Monitor for 1 hour
-
+
async def execute_batch_deployment():
client = ProvisioningClient()
await client.authenticate()
@@ -22709,8 +22325,8 @@ if __name__ == "__main__":
"dependencies": [],
"config": {
"server_configs": [
- {"name": "web-01", "plan": "2xCPU-4GB", "zone": "de-fra1"},
- {"name": "web-02", "plan": "2xCPU-4GB", "zone": "de-fra1"}
+ {"name": "web-01", "plan": "2xCPU-4GB", "zone": "de-fra1"},
+ {"name": "web-02", "plan": "2xCPU-4GB", "zone": "de-fra1"}
]
}
},
@@ -22782,13 +22398,13 @@ async def robust_workflow():
try:
task_id = await client.create_server_workflow_with_retry(
infra="production",
- settings="config.k"
+ settings="config.ncl"
)
print(f"Workflow created successfully: {task_id}")
except Exception as e:
print(f"Failed after retries: {e}")
-
+
class ProvisioningClient:
def __init__(self,
@@ -22804,7 +22420,7 @@ async def robust_workflow():
def create_server_workflow(self,
infra: str,
- settings: str = "config.k",
+ settings: str = "config.ncl",
check_mode: bool = False,
wait: bool = False) -> str:
"""Create a server provisioning workflow"""
@@ -22813,7 +22429,7 @@ async def robust_workflow():
operation: str,
taskserv: str,
infra: str,
- settings: str = "config.k",
+ settings: str = "config.ncl",
check_mode: bool = False,
wait: bool = False) -> str:
"""Create a task service workflow"""
@@ -22834,7 +22450,7 @@ async def robust_workflow():
"""Register an event handler"""
-
+
# npm
npm install @provisioning/client
@@ -22844,7 +22460,7 @@ yarn add @provisioning/client
# pnpm
pnpm add @provisioning/client
-
+
import { ProvisioningClient } from '@provisioning/client';
async function main() {
@@ -22863,7 +22479,7 @@ async function main() {
// Create server workflow
const taskId = await client.createServerWorkflow({
infra: 'production',
- settings: 'prod-settings.k'
+ settings: 'prod-settings.ncl'
});
console.log(`Server workflow created: ${taskId}`);
@@ -22944,7 +22560,7 @@ const WorkflowDashboard: React.FC = () => {
try {
const taskId = await client.createServerWorkflow({
infra: 'production',
- settings: 'config.k'
+ settings: 'config.ncl'
});
// Add to tasks list
@@ -23020,7 +22636,7 @@ program
.command('create-server')
.description('Create a server workflow')
.requiredOption('-i, --infra <infra>', 'Infrastructure target')
- .option('-s, --settings <settings>', 'Settings file', 'config.k')
+ .option('-s, --settings <settings>', 'Settings file', 'config.ncl')
.option('-c, --check', 'Check mode only')
.option('-w, --wait', 'Wait for completion')
.action(async (options) => {
@@ -23154,7 +22770,7 @@ program
program.parse();
-
+
interface ProvisioningClientOptions {
baseUrl?: string;
authUrl?: string;
@@ -23204,10 +22820,10 @@ class ProvisioningClient extends EventEmitter {
}
-
+
go get github.com/provisioning-systems/go-client
-
+
package main
import (
@@ -23243,7 +22859,7 @@ func main() {
// Create server workflow
taskID, err := client.CreateServerWorkflow(ctx, &provisioning.CreateServerRequest{
Infra: "production",
- Settings: "prod-settings.k",
+ Settings: "prod-settings.ncl",
Wait: false,
})
if err != nil {
@@ -23403,7 +23019,7 @@ func main() {
// Create workflow with retry
taskID, err := client.CreateServerWorkflowWithRetry(ctx, &provisioning.CreateServerRequest{
Infra: "production",
- Settings: "config.k",
+ Settings: "config.ncl",
})
if err != nil {
log.Fatalf("Failed to create workflow: %v", err)
@@ -23413,13 +23029,13 @@ func main() {
}
-
+
Add to your Cargo.toml:
[dependencies]
provisioning-rs = "2.0.0"
tokio = { version = "1.0", features = ["full"] }
-
+
use provisioning_rs::{ProvisioningClient, Config, CreateServerRequest};
use tokio;
@@ -23443,7 +23059,7 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Create server workflow
let request = CreateServerRequest {
infra: "production".to_string(),
- settings: Some("prod-settings.k".to_string()),
+ settings: Some("prod-settings.ncl".to_string()),
check_mode: false,
wait: false,
};
@@ -23523,7 +23139,7 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
Ok(())
}
-
+
use provisioning_rs::{BatchOperationRequest, BatchOperation};
#[tokio::main]
@@ -23546,8 +23162,8 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
dependencies: vec![],
config: serde_json::json!({
"server_configs": [
- {"name": "web-01", "plan": "2xCPU-4GB", "zone": "de-fra1"},
- {"name": "web-02", "plan": "2xCPU-4GB", "zone": "de-fra1"}
+ {"name": "web-01", "plan": "2xCPU-4GB", "zone": "de-fra1"},
+ {"name": "web-02", "plan": "2xCPU-4GB", "zone": "de-fra1"}
]
}),
},
@@ -23580,7 +23196,7 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
Ok(())
}
-
+
Token Management : Store tokens securely and implement automatic refresh
@@ -23609,7 +23225,7 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
Error Handling : Handle WebSocket errors gracefully
Resource Cleanup : Properly close WebSocket connections
-
+
Unit Tests : Test SDK functionality with mocked responses
Integration Tests : Test against real API endpoints
@@ -23619,7 +23235,7 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> {
This comprehensive SDK documentation provides developers with everything needed to integrate with provisioning using their preferred programming language, complete with examples, best practices, and detailed API references.
This document provides comprehensive examples and patterns for integrating with provisioning APIs, including client libraries, SDKs, error handling strategies, and performance optimization.
-
+
Provisioning offers multiple integration points:
pip install provisioning-client
-
+
from provisioning_client import ProvisioningClient
# Initialize client
@@ -24893,7 +24509,7 @@ client = ProvisioningClient(
# Create workflow
task_id = await client.create_server_workflow(
infra="production",
- settings="config.k"
+ settings="config.ncl"
)
# Wait for completion
@@ -24917,10 +24533,10 @@ async with ProvisioningClient() as client:
client.on_event('TaskStatusChanged', handle_task_update)
-
+
npm install @provisioning/client
-
+
import { ProvisioningClient } from '@provisioning/client';
const client = new ProvisioningClient({
@@ -24932,7 +24548,7 @@ const client = new ProvisioningClient({
// Create workflow
const taskId = await client.createServerWorkflow({
infra: 'production',
- settings: 'config.k'
+ settings: 'config.ncl'
});
// Monitor progress
@@ -25157,7 +24773,7 @@ async def complex_deployment():
This comprehensive integration documentation provides developers with everything needed to successfully integrate with provisioning, including complete client implementations, error handling strategies, performance optimizations, and common integration patterns.
API documentation for creating and using infrastructure providers.
-
+
Providers handle cloud-specific operations and resource provisioning. The provisioning platform supports multiple cloud providers through a unified API.
@@ -25165,7 +24781,7 @@ async def complex_deployment():
AWS - Amazon Web Services
Local - Local development environment
-
+
All providers must implement the following interface:
# Provider initialization
@@ -25180,45 +24796,35 @@ export def list-servers [] -> table { ... }
export def get-server-plans [] -> table { ... }
export def get-regions [] -> list { ... }
export def get-pricing [plan: string] -> record { ... }
-```plaintext
-
-### Provider Configuration
-
-Each provider requires configuration in KCL format:
-
-```kcl
-# Example: UpCloud provider configuration
-provider: Provider = {
- name = "upcloud"
- type = "cloud"
- enabled = True
-
+
+
+Each provider requires configuration in Nickel format:
+# Example: UpCloud provider configuration
+{
+ provider = {
+ name = "upcloud",
+ type = "cloud",
+ enabled = true,
config = {
- username = "{{ env.UPCLOUD_USERNAME }}"
- password = "{{ env.UPCLOUD_PASSWORD }}"
- default_zone = "de-fra1"
- }
+ username = "{{env.UPCLOUD_USERNAME}}",
+ password = "{{env.UPCLOUD_PASSWORD}}",
+ default_zone = "de-fra1",
+ },
+ }
}
-```plaintext
-
-## Creating a Custom Provider
-
-### 1. Directory Structure
-
-```plaintext
-provisioning/extensions/providers/my-provider/
-├── nu/
+
+
+
+provisioning/extensions/providers/my-provider/
+├── nulib/
│ └── my_provider.nu # Provider implementation
-├── kcl/
-│ ├── my_provider.k # KCL schema
-│ └── defaults_my_provider.k # Default configuration
+├── schemas/
+│ ├── main.ncl # Nickel schema
+│ └── defaults.ncl # Default configuration
└── README.md # Provider documentation
-```plaintext
-
-### 2. Implementation Template
-
-```nushell
-# my_provider.nu
+
+
+# my_provider.nu
export def init [] {
{
name: "my-provider"
@@ -25238,48 +24844,38 @@ export def list-servers [] {
}
# ... other required functions
-```plaintext
+
+
+# main.ncl
+{
+ MyProvider = {
+ # My custom provider schema
+ name | String = "my-provider",
+ type | std.enum.TagOrString | [| 'cloud, 'local |] = "cloud",
+ config | MyProviderConfig,
+ },
-### 3. KCL Schema
-
-```kcl
-# my_provider.k
-import provisioning.lib as lib
-
-schema MyProvider(lib.Provider):
- """My custom provider schema"""
-
- name: str = "my-provider"
- type: "cloud" | "local" = "cloud"
-
- config: MyProviderConfig
-
-schema MyProviderConfig:
- api_key: str
- region: str = "us-east-1"
-```plaintext
-
-## Provider Discovery
-
-Providers are automatically discovered from:
-
-- `provisioning/extensions/providers/*/nu/*.nu`
-- User workspace: `workspace/extensions/providers/*/nu/*.nu`
-
-```bash
-# Discover available providers
+ MyProviderConfig = {
+ api_key | String,
+ region | String = "us-east-1",
+ },
+}
+
+
+Providers are automatically discovered from:
+
+provisioning/extensions/providers/*/nulib/*.nu
+User workspace: workspace/extensions/providers/*/nulib/*.nu
+
+# Discover available providers
provisioning module discover providers
# Load provider
provisioning module load providers workspace my-provider
-```plaintext
-
-## Provider API Examples
-
-### Create Servers
-
-```nushell
-use my_provider.nu *
+
+
+
+use my_provider.nu *
let plan = {
count: 3
@@ -25288,54 +24884,38 @@ let plan = {
}
create-servers $plan
-```plaintext
-
-### List Servers
-
-```nushell
-list-servers | where status == "running" | select hostname ip_address
-```plaintext
-
-### Get Pricing
-
-```nushell
-get-pricing "small" | to yaml
-```plaintext
-
-## Testing Providers
-
-Use the test environment system to test providers:
-
-```bash
-# Test provider without real resources
-provisioning test env single my-provider --check
-```plaintext
-
-## Provider Development Guide
-
-For complete provider development guide, see:
-
-- **[Provider Development](../development/QUICK_PROVIDER_GUIDE.md)** - Quick start guide
-- **[Extension Development](../development/extensions.md)** - Complete extension guide
-- **[Integration Examples](integration-examples.md)** - Example implementations
-
-## API Stability
-
-Provider API follows semantic versioning:
-
-- **Major**: Breaking changes
-- **Minor**: New features, backward compatible
-- **Patch**: Bug fixes
-
-Current API version: `2.0.0`
-
----
-
-For more examples, see [Integration Examples](integration-examples.md).
+
+list-servers | where status == "running" | select hostname ip_address
+
+
+get-pricing "small" | to yaml
+
+
+Use the test environment system to test providers:
+# Test provider without real resources
+provisioning test env single my-provider --check
+
+
+For complete provider development guide, see:
+
+Provider Development : Quick start guide
+Extension Development : Complete extension guide
+Integration Examples : Example implementations
+
+Provider API follows semantic versioning:
+
+Major : Breaking changes
+Minor : New features, backward compatible
+Patch : Bug fixes
+
+Current API version: 2.0.0
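
Compatibility constraints like >=2.0.0 in metadata.toml can be checked by comparing version tuples. A minimal sketch for single-operator constraints (real implementations should use a full semver library):

```python
def parse(v: str) -> tuple:
    """Split a dotted version string into an integer tuple, e.g. '2.0.0' -> (2, 0, 0)."""
    return tuple(int(p) for p in v.split("."))

def satisfies(constraint: str, version: str) -> bool:
    """Check a version against a single >=, <=, ==, >, or < constraint."""
    for op in (">=", "<=", "==", ">", "<"):  # two-char operators first
        if constraint.startswith(op):
            want = parse(constraint[len(op):])
            have = parse(version)
            return {
                ">=": have >= want, "<=": have <= want,
                "==": have == want, ">": have > want, "<": have < want,
            }[op]
    return parse(constraint) == parse(version)

print(satisfies(">=2.0.0", "2.1.0"))  # → True
```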
+
+For more examples, see Integration Examples.
API documentation for Nushell library functions in the provisioning platform.
-
+
The provisioning platform provides a comprehensive Nushell library with reusable functions for infrastructure automation.
@@ -25413,7 +24993,7 @@ next-steps
Pure functions: No side effects (mutations marked with !)
Pipeline-friendly: Output designed for Nu pipelines
-
+
See Nushell Best Practices for coding guidelines.
Browse the complete source code:
@@ -25425,7 +25005,7 @@ next-steps
For integration examples, see Integration Examples.
This document describes the path resolution system used throughout the provisioning infrastructure for discovering configurations, extensions, and resolving workspace paths.
-
+
The path resolution system provides a hierarchical and configurable mechanism for:
Configuration file discovery and loading
@@ -25442,76 +25022,55 @@ next-steps
4. Infrastructure config (infra/config.toml)
5. Environment config (config.{env}.toml)
6. Runtime overrides (CLI arguments, ENV vars)
-```plaintext
-
-### Configuration Search Paths
-
-The system searches for configuration files in these locations:
-
-```bash
-# Default search paths (in order)
+
+
+The system searches for configuration files in these locations:
+# Default search paths (in order)
/usr/local/provisioning/config.defaults.toml
$HOME/.config/provisioning/config.user.toml
$PWD/config.project.toml
$PROVISIONING_KLOUD_PATH/config.infra.toml
$PWD/config.{PROVISIONING_ENV}.toml
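The search reduces to a first-match lookup over the ordered candidate list. The following Python sketch illustrates that behavior (illustrative only; the platform implements this in Nushell, and `resolve_config_path` here is a stand-in, not the real API):

```python
import os
import tempfile

def resolve_config_path(search_paths: list[str]) -> str:
    """Return the first existing file from the ordered candidate list,
    mirroring the documented search order; '' if none exist."""
    for candidate in search_paths:
        if os.path.isfile(candidate):
            return candidate
    return ""

# Demonstration with temporary stand-ins for the documented locations
with tempfile.TemporaryDirectory() as root:
    user_cfg = os.path.join(root, "config.user.toml")
    project_cfg = os.path.join(root, "config.project.toml")
    open(project_cfg, "w").close()  # only the project-level file exists
    print(resolve_config_path([user_cfg, project_cfg]) == project_cfg)  # True
```

Candidates later in the list are only consulted when the earlier ones are absent.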
-```plaintext
-
-## Path Resolution API
-
-### Core Functions
-
-#### `resolve-config-path(pattern: string, search_paths: list<string>) -> string`
-
-Resolves configuration file paths using the search hierarchy.
-
-**Parameters:**
-
-- `pattern`: File pattern to search for (e.g., "config.*.toml")
-- `search_paths`: Additional paths to search (optional)
-
-**Returns:**
-
-- Full path to the first matching configuration file
-- Empty string if no file found
-
-**Example:**
-
-```nushell
-use path-resolution.nu *
+
+
+
+
+Resolves configuration file paths using the search hierarchy.
+Parameters:
+
+pattern: File pattern to search for (for example, "config.*.toml")
+search_paths: Additional paths to search (optional)
+
+Returns:
+
+Full path to the first matching configuration file
+Empty string if no file found
+
+Example:
+use path-resolution.nu *
let config_path = (resolve-config-path "config.user.toml" [])
# Returns: "/home/user/.config/provisioning/config.user.toml"
-```plaintext
-
-#### `resolve-extension-path(type: string, name: string) -> record`
-
-Discovers extension paths (providers, taskservs, clusters).
-
-**Parameters:**
-
-- `type`: Extension type ("provider", "taskserv", "cluster")
-- `name`: Extension name (e.g., "upcloud", "kubernetes", "buildkit")
-
-**Returns:**
-
-```nushell
-{
+
+
+Discovers extension paths (providers, taskservs, clusters).
+Parameters:
+
+type: Extension type ("provider", "taskserv", "cluster")
+name: Extension name (for example, "upcloud", "kubernetes", "buildkit")
+
+Returns:
+{
base_path: "/usr/local/provisioning/providers/upcloud",
- kcl_path: "/usr/local/provisioning/providers/upcloud/kcl",
+ schemas_path: "/usr/local/provisioning/providers/upcloud/schemas",
nulib_path: "/usr/local/provisioning/providers/upcloud/nulib",
templates_path: "/usr/local/provisioning/providers/upcloud/templates",
exists: true
}
-```plaintext
-
-#### `resolve-workspace-paths() -> record`
-
-Gets current workspace path configuration.
-
-**Returns:**
-
-```nushell
-{
+
+
+Gets current workspace path configuration.
+Returns:
+{
base: "/usr/local/provisioning",
current_infra: "/workspace/infra/production",
kloud_path: "/workspace/kloud",
@@ -25520,63 +25079,49 @@ Gets current workspace path configuration.
clusters: "/usr/local/provisioning/cluster",
extensions: "/workspace/extensions"
}
-```plaintext
-
-### Path Interpolation
-
-The system supports variable interpolation in configuration paths:
-
-#### Supported Variables
-
-- `{{paths.base}}` - Base provisioning path
-- `{{paths.kloud}}` - Current kloud path
-- `{{env.HOME}}` - User home directory
-- `{{env.PWD}}` - Current working directory
-- `{{now.date}}` - Current date (YYYY-MM-DD)
-- `{{now.time}}` - Current time (HH:MM:SS)
-- `{{git.branch}}` - Current git branch
-- `{{git.commit}}` - Current git commit hash
-
-#### `interpolate-path(template: string, context: record) -> string`
-
-Interpolates variables in path templates.
-
-**Parameters:**
-
-- `template`: Path template with variables
-- `context`: Variable context record
-
-**Example:**
-
-```nushell
-let template = "{{paths.base}}/infra/{{env.USER}}/{{git.branch}}"
+
+
+The system supports variable interpolation in configuration paths:
+
+
+{{paths.base}} - Base provisioning path
+{{paths.kloud}} - Current kloud path
+{{env.HOME}} - User home directory
+{{env.PWD}} - Current working directory
+{{now.date}} - Current date (YYYY-MM-DD)
+{{now.time}} - Current time (HH:MM:SS)
+{{git.branch}} - Current git branch
+{{git.commit}} - Current git commit hash
+
+
+Interpolates variables in path templates.
+Parameters:
+
+template: Path template with variables
+context: Variable context record
+
+Example:
+let template = "{{paths.base}}/infra/{{env.USER}}/{{git.branch}}"
let result = (interpolate-path $template {
paths: { base: "/usr/local/provisioning" },
env: { USER: "admin" },
git: { branch: "main" }
})
# Returns: "/usr/local/provisioning/infra/admin/main"
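The substitution itself is mechanical: each `{{group.key}}` placeholder is looked up in a nested context. As a hedged illustration (a Python stand-in, not the platform's Nushell implementation), a single regex pass suffices:

```python
import re

def interpolate_path(template: str, context: dict) -> str:
    """Replace {{group.key}} placeholders with values from a nested context.
    Unknown variables are left untouched rather than raising."""
    def lookup(match):
        group, key = match.group(1), match.group(2)
        return str(context.get(group, {}).get(key, match.group(0)))
    return re.sub(r"\{\{(\w+)\.(\w+)\}\}", lookup, template)

result = interpolate_path(
    "{{paths.base}}/infra/{{env.USER}}/{{git.branch}}",
    {"paths": {"base": "/usr/local/provisioning"},
     "env": {"USER": "admin"},
     "git": {"branch": "main"}},
)
print(result)  # /usr/local/provisioning/infra/admin/main
```

Leaving unknown placeholders intact (rather than failing) is an assumption of this sketch, chosen so partial contexts still produce usable paths.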
-```plaintext
-
-## Extension Discovery API
-
-### Provider Discovery
-
-#### `discover-providers() -> list<record>`
-
-Discovers all available providers.
-
-**Returns:**
-
-```nushell
-[
+
+
+
+
+Discovers all available providers.
+Returns:
+[
{
name: "upcloud",
path: "/usr/local/provisioning/providers/upcloud",
type: "provider",
version: "1.2.0",
enabled: true,
- has_kcl: true,
+ has_schemas: true,
has_nulib: true,
has_templates: true
},
@@ -25586,25 +25131,20 @@ Discovers all available providers.
type: "provider",
version: "2.1.0",
enabled: true,
- has_kcl: true,
+ has_schemas: true,
has_nulib: true,
has_templates: true
}
]
-```plaintext
-
-#### `get-provider-config(name: string) -> record`
-
-Gets provider-specific configuration and paths.
-
-**Parameters:**
-
-- `name`: Provider name
-
-**Returns:**
-
-```nushell
-{
+
+
+Gets provider-specific configuration and paths.
+Parameters:
+
+name: Provider name
+
+Returns:
+{
name: "upcloud",
base_path: "/usr/local/provisioning/providers/upcloud",
config: {
@@ -25613,7 +25153,7 @@ Gets provider-specific configuration and paths.
interface: "API"
},
paths: {
- kcl: "/usr/local/provisioning/providers/upcloud/kcl",
+ schemas: "/usr/local/provisioning/providers/upcloud/schemas",
nulib: "/usr/local/provisioning/providers/upcloud/nulib",
templates: "/usr/local/provisioning/providers/upcloud/templates"
},
@@ -25622,18 +25162,12 @@ Gets provider-specific configuration and paths.
description: "UpCloud provider for server provisioning"
}
}
-```plaintext
-
-### Task Service Discovery
-
-#### `discover-taskservs() -> list<record>`
-
-Discovers all available task services.
-
-**Returns:**
-
-```nushell
-[
+
+
+
+Discovers all available task services.
+Returns:
+[
{
name: "kubernetes",
path: "/usr/local/provisioning/taskservs/kubernetes",
@@ -25651,20 +25185,15 @@ Discovers all available task services.
enabled: true
}
]
-```plaintext
-
-#### `get-taskserv-config(name: string) -> record`
-
-Gets task service configuration and version information.
-
-**Parameters:**
-
-- `name`: Task service name
-
-**Returns:**
-
-```nushell
-{
+
+
+Gets task service configuration and version information.
+Parameters:
+
+name: Task service name
+
+Returns:
+{
name: "kubernetes",
path: "/usr/local/provisioning/taskservs/kubernetes",
version: {
@@ -25680,18 +25209,12 @@ Gets task service configuration and version information.
supports_versions: ["1.26.x", "1.27.x", "1.28.x"]
}
}
-```plaintext
-
-### Cluster Discovery
-
-#### `discover-clusters() -> list<record>`
-
-Discovers all available cluster configurations.
-
-**Returns:**
-
-```nushell
-[
+
+
+
+Discovers all available cluster configurations.
+Returns:
+[
{
name: "buildkit",
path: "/usr/local/provisioning/cluster/buildkit",
@@ -25701,37 +25224,29 @@ Discovers all available cluster configurations.
enabled: true
}
]
-```plaintext
-
-## Environment Management API
-
-### Environment Detection
-
-#### `detect-environment() -> string`
-
-Automatically detects the current environment based on:
-
-1. `PROVISIONING_ENV` environment variable
-2. Git branch patterns (main → prod, develop → dev, etc.)
-3. Directory structure analysis
-4. Configuration file presence
-
-**Returns:**
-
-- Environment name string (dev, test, prod, etc.)
-
-#### `get-environment-config(env: string) -> record`
-
-Gets environment-specific configuration.
-
-**Parameters:**
-
-- `env`: Environment name
-
-**Returns:**
-
-```nushell
-{
+
+
+
+
+Automatically detects the current environment based on:
+
+PROVISIONING_ENV environment variable
+Git branch patterns (main → prod, develop → dev, etc.)
+Directory structure analysis
+Configuration file presence
+
+Returns:
+
+Environment name string (dev, test, prod, etc.)
+
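The detection order can be sketched as a priority chain. This Python stand-in is illustrative only (the branch map and file patterns are assumptions based on the examples above), ending with the documented fallback to 'local':

```python
def detect_environment(env_vars: dict, git_branch=None, config_files=()) -> str:
    """Illustrative sketch of the documented detection order; the real
    implementation is Nushell and these names are stand-ins."""
    # 1. Explicit environment variable always wins
    if env_vars.get("PROVISIONING_ENV"):
        return env_vars["PROVISIONING_ENV"]
    # 2. Git branch patterns (main -> prod, develop -> dev)
    branch_map = {"main": "prod", "master": "prod", "develop": "dev"}
    if git_branch in branch_map:
        return branch_map[git_branch]
    # 3. Presence of an environment-specific config file
    for name in config_files:
        if name.startswith("config.") and name.endswith(".toml"):
            middle = name[len("config."):-len(".toml")]
            if middle in ("dev", "test", "prod"):
                return middle
    # 4. Documented fallback when nothing matches
    return "local"

print(detect_environment({}, git_branch="develop"))  # dev
```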
+
+Gets environment-specific configuration.
+Parameters:
+
+env: Environment name
+
+Returns:
+{
name: "production",
paths: {
base: "/opt/provisioning",
@@ -25748,43 +25263,33 @@ Gets environment-specific configuration.
rollback: true
}
}
-```plaintext
-
-### Environment Switching
-
-#### `switch-environment(env: string, validate: bool = true) -> null`
-
-Switches to a different environment and updates path resolution.
-
-**Parameters:**
-
-- `env`: Target environment name
-- `validate`: Whether to validate environment configuration
-
-**Effects:**
-
-- Updates `PROVISIONING_ENV` environment variable
-- Reconfigures path resolution for new environment
-- Validates environment configuration if requested
-
-## Workspace Management API
-
-### Workspace Discovery
-
-#### `discover-workspaces() -> list<record>`
-
-Discovers available workspaces and infrastructure directories.
-
-**Returns:**
-
-```nushell
-[
+
+
+
+Switches to a different environment and updates path resolution.
+Parameters:
+
+env: Target environment name
+validate: Whether to validate environment configuration
+
+Effects:
+
+Updates PROVISIONING_ENV environment variable
+Reconfigures path resolution for new environment
+Validates environment configuration if requested
+
+
+
+
+Discovers available workspaces and infrastructure directories.
+Returns:
+[
{
name: "production",
path: "/workspace/infra/production",
type: "infrastructure",
provider: "upcloud",
- settings: "settings.k",
+ settings: "settings.ncl",
valid: true
},
{
@@ -25792,39 +25297,31 @@ Discovers available workspaces and infrastructure directories.
path: "/workspace/infra/development",
type: "infrastructure",
provider: "local",
- settings: "dev-settings.k",
+ settings: "dev-settings.ncl",
valid: true
}
]
-```plaintext
-
-#### `set-current-workspace(path: string) -> null`
-
-Sets the current workspace for path resolution.
-
-**Parameters:**
-
-- `path`: Workspace directory path
-
-**Effects:**
-
-- Updates `CURRENT_INFRA_PATH` environment variable
-- Reconfigures workspace-relative path resolution
-
-### Project Structure Analysis
-
-#### `analyze-project-structure(path: string = $PWD) -> record`
-
-Analyzes project structure and identifies components.
-
-**Parameters:**
-
-- `path`: Project root path (defaults to current directory)
-
-**Returns:**
-
-```nushell
-{
+
+
+Sets the current workspace for path resolution.
+Parameters:
+
+path: Workspace directory path
+
+Effects:
+
+Updates CURRENT_INFRA_PATH environment variable
+Reconfigures workspace-relative path resolution
+
+
+
+Analyzes project structure and identifies components.
+Parameters:
+
+path: Project root path (defaults to current directory)
+
+Returns:
+{
root: "/workspace/project",
type: "provisioning_workspace",
components: {
@@ -25850,95 +25347,67 @@ Analyzes project structure and identifies components.
"config.prod.toml"
]
}
-```plaintext
-
-## Caching and Performance
-
-### Path Caching
-
-The path resolution system includes intelligent caching:
-
-#### `cache-paths(duration: duration = 5min) -> null`
-
-Enables path caching for the specified duration.
-
-**Parameters:**
-
-- `duration`: Cache validity duration
-
-#### `invalidate-path-cache() -> null`
-
-Invalidates the path resolution cache.
-
-#### `get-cache-stats() -> record`
-
-Gets path resolution cache statistics.
-
-**Returns:**
-
-```nushell
-{
+
+
+
+The path resolution system includes intelligent caching:
+
+Enables path caching for the specified duration.
+Parameters:
+
+duration: Cache validity duration
+
+
+Invalidates the path resolution cache.
+
+Gets path resolution cache statistics.
+Returns:
+{
enabled: true,
size: 150,
hit_rate: 0.85,
last_invalidated: "2025-09-26T10:00:00Z"
}
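A minimal sketch of what such a cache might track (illustrative Python; the field names follow the stats record above, everything else is an assumption):

```python
import time

class PathCache:
    """Minimal TTL cache for resolved paths, tracking the documented
    size and hit_rate statistics. A sketch, not the platform's internals."""
    def __init__(self, ttl_seconds: float = 300):
        self.ttl = ttl_seconds
        self.entries = {}   # key -> (value, stored_at)
        self.hits = 0
        self.misses = 0

    def get(self, key):
        entry = self.entries.get(key)
        if entry and (time.time() - entry[1]) < self.ttl:
            self.hits += 1
            return entry[0]
        self.misses += 1
        return None

    def put(self, key, value):
        self.entries[key] = (value, time.time())

    def stats(self) -> dict:
        total = self.hits + self.misses
        return {
            "size": len(self.entries),
            "hit_rate": (self.hits / total) if total else 0.0,
        }

cache = PathCache(ttl_seconds=300)
cache.put("provider:upcloud", "/usr/local/provisioning/providers/upcloud")
cache.get("provider:upcloud")   # hit
cache.get("provider:missing")   # miss
print(cache.stats())
```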
-```plaintext
-
-## Cross-Platform Compatibility
-
-### Path Normalization
-
-#### `normalize-path(path: string) -> string`
-
-Normalizes paths for cross-platform compatibility.
-
-**Parameters:**
-
-- `path`: Input path (may contain mixed separators)
-
-**Returns:**
-
-- Normalized path using platform-appropriate separators
-
-**Example:**
-
-```nushell
-# On Windows
+
+
+
+
+Normalizes paths for cross-platform compatibility.
+Parameters:
+
+path: Input path (may contain mixed separators)
+
+Returns:
+
+Normalized path using platform-appropriate separators
+
+Example:
+# On Windows
normalize-path "path/to/file" # Returns: "path\to\file"
# On Unix
normalize-path "path\to\file" # Returns: "path/to/file"
-```plaintext
-
-#### `join-paths(segments: list<string>) -> string`
-
-Safely joins path segments using platform separators.
-
-**Parameters:**
-
-- `segments`: List of path segments
-
-**Returns:**
-
-- Joined path string
-
-## Configuration Validation API
-
-### Path Validation
-
-#### `validate-paths(config: record) -> record`
-
-Validates all paths in configuration.
-
-**Parameters:**
-
-- `config`: Configuration record
-
-**Returns:**
-
-```nushell
-{
+
+
+Safely joins path segments using platform separators.
+Parameters:
+
+segments: List of path segments
+
+Returns:
+
+Joined path string
+
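Both helpers reduce to straightforward separator handling. A Python sketch of the described behavior (the real functions are Nushell commands; the edge-case handling here is an assumption):

```python
def normalize_path(path: str, sep: str = "/") -> str:
    """Rewrite mixed separators to the platform separator, as described above."""
    other = "\\" if sep == "/" else "/"
    return path.replace(other, sep)

def join_paths(segments: list[str], sep: str = "/") -> str:
    """Join segments with the platform separator, collapsing duplicate
    separators at segment boundaries."""
    joined = sep.join(s.strip(sep) for s in segments if s)
    # Preserve a leading separator if the first segment was absolute
    if segments and segments[0].startswith(sep):
        joined = sep + joined
    return joined

print(normalize_path("path\\to\\file"))  # path/to/file
print(join_paths(["/usr/local", "provisioning/", "providers"]))
# /usr/local/provisioning/providers
```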
+
+
+Validates all paths in configuration.
+Parameters:
+
+config: Configuration record
+
+Returns:
+{
valid: true,
errors: [],
warnings: [
@@ -25946,40 +25415,31 @@ Validates all paths in configuration.
],
checks_performed: 15
}
-```plaintext
-
-#### `validate-extension-structure(type: string, path: string) -> record`
-
-Validates extension directory structure.
-
-**Parameters:**
-
-- `type`: Extension type (provider, taskserv, cluster)
-- `path`: Extension base path
-
-**Returns:**
-
-```nushell
-{
+
+
+Validates extension directory structure.
+Parameters:
+
+type: Extension type (provider, taskserv, cluster)
+path: Extension base path
+
+Returns:
+{
valid: true,
required_files: [
- { file: "kcl.mod", exists: true },
+ { file: "manifest.toml", exists: true },
+ { file: "schemas/main.ncl", exists: true },
{ file: "nulib/mod.nu", exists: true }
],
optional_files: [
{ file: "templates/server.j2", exists: false }
]
}
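The check itself is a presence test over the required and optional file lists. An illustrative Python sketch (file names follow the example record above; the function is a stand-in, not the real API):

```python
import os
import tempfile

REQUIRED_FILES = ("manifest.toml", "schemas/main.ncl", "nulib/mod.nu")

def validate_extension_structure(path: str, required=REQUIRED_FILES) -> dict:
    """Report which required files exist under an extension directory."""
    checks = [
        {"file": name, "exists": os.path.isfile(os.path.join(path, name))}
        for name in required
    ]
    return {"valid": all(c["exists"] for c in checks), "required_files": checks}

# Demonstration against a throwaway extension skeleton
with tempfile.TemporaryDirectory() as ext:
    os.makedirs(os.path.join(ext, "schemas"))
    os.makedirs(os.path.join(ext, "nulib"))
    for name in REQUIRED_FILES:
        open(os.path.join(ext, name), "w").close()
    print(validate_extension_structure(ext)["valid"])  # True
```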
-```plaintext
-
-## Command-Line Interface
-
-### Path Resolution Commands
-
-The path resolution API is exposed via Nushell commands:
-
-```bash
-# Show current path configuration
+
+
+
+The path resolution API is exposed via Nushell commands:
+# Show current path configuration
provisioning show paths
# Discover available extensions
@@ -25995,14 +25455,10 @@ provisioning env switch prod
# Set workspace
provisioning workspace set /path/to/infra
-```plaintext
-
-## Integration Examples
-
-### Python Integration
-
-```python
-import subprocess
+
+
+
+import subprocess
import json
class PathResolver:
@@ -26025,12 +25481,9 @@ class PathResolver:
resolver = PathResolver()
paths = resolver.get_paths()
providers = resolver.discover_providers()
-```plaintext
-
-### JavaScript/Node.js Integration
-
-```javascript
-const { exec } = require('child_process');
+
+
+const { exec } = require('child_process');
const util = require('util');
const execAsync = util.promisify(exec);
@@ -26058,20 +25511,17 @@ class PathResolver {
const resolver = new PathResolver();
const paths = await resolver.getPaths();
const providers = await resolver.discoverExtensions('providers');
-```plaintext
-
-## Error Handling
-
-### Common Error Scenarios
-
-1. **Configuration File Not Found**
-
- ```nushell
- Error: Configuration file not found in search paths
- Searched: ["/usr/local/provisioning/config.defaults.toml", ...]
+
+
+Configuration File Not Found
+Error: Configuration file not found in search paths
+Searched: ["/usr/local/provisioning/config.defaults.toml", ...]
+
+
+
Extension Not Found
Error: Provider 'missing-provider' not found
Available providers: ["upcloud", "aws", "local"]
@@ -26098,8 +25548,8 @@ Available environments: ["dev", "test", "prod"]
Extension discovery continues if some paths are inaccessible
Environment detection falls back to 'local' if detection fails
-
-
+
+
Use Path Caching: Enable caching for frequently accessed paths
Batch Discovery: Discover all extensions at once rather than individually
@@ -26116,28 +25566,23 @@ provisioning debug cache-stats
# Profile path resolution
provisioning debug profile-paths
-```plaintext
-
-## Security Considerations
-
-### Path Traversal Protection
-
-The system includes protections against path traversal attacks:
-
-- All paths are normalized and validated
-- Relative paths are resolved within safe boundaries
-- Symlinks are validated before following
-
-### Access Control
-
-Path resolution respects file system permissions:
-
-- Configuration files require read access
-- Extension directories require read/execute access
-- Workspace directories may require write access for operations
-
-This path resolution API provides a comprehensive and flexible system for managing the complex path requirements of multi-provider, multi-environment infrastructure provisioning.
+
+
+The system includes protections against path traversal attacks:
+
+All paths are normalized and validated
+Relative paths are resolved within safe boundaries
+Symlinks are validated before following
+
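The boundary check at the heart of this protection can be sketched as follows (illustrative Python; the platform's actual validation is in Nushell and also covers symlinks, which this sketch omits):

```python
import os

def is_within_base(base: str, requested: str) -> bool:
    """Reject relative paths that escape the base directory after
    normalization, e.g. via '..' traversal segments."""
    base_abs = os.path.abspath(base)
    target = os.path.abspath(os.path.join(base_abs, requested))
    return target == base_abs or target.startswith(base_abs + os.sep)

print(is_within_base("/workspace/infra", "production/config.toml"))  # True
print(is_within_base("/workspace/infra", "../../etc/passwd"))        # False
```

Note the `base_abs + os.sep` comparison: a plain prefix check would wrongly accept a sibling directory such as `/workspace/infra-backup`.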
+
+Path resolution respects file system permissions:
+
+Configuration files require read access
+Extension directories require read/execute access
+Workspace directories may require write access for operations
+
+This path resolution API provides a comprehensive and flexible system for managing the complex path requirements of multi-provider, multi-environment infrastructure provisioning.
This guide will help you create custom providers, task services, and cluster configurations to extend provisioning for your specific needs.
@@ -26161,8 +25606,8 @@ This path resolution API provides a comprehensive and flexible system for managi
my-extension/
-├── kcl/ # KCL schemas and models
-│ ├── models/ # Data models
+├── schemas/ # Nickel schemas and models
+│ ├── contracts.ncl # Type contracts
│ ├── providers/ # Provider definitions
│ ├── taskservs/ # Task service definitions
│ └── clusters/ # Cluster definitions
@@ -26175,14 +25620,10 @@ This path resolution API provides a comprehensive and flexible system for managi
├── docs/ # Documentation
├── extension.toml # Extension metadata
└── README.md # Extension documentation
-```plaintext
-
-### Extension Metadata
-
-`extension.toml`:
-
-```toml
-[extension]
+
+
+extension.toml:
+[extension]
name = "my-custom-provider"
version = "1.0.0"
description = "Custom cloud provider integration"
@@ -26191,7 +25632,7 @@ license = "MIT"
[compatibility]
provisioning_version = ">=1.0.0"
-kcl_version = ">=0.11.2"
+nickel_version = ">=1.15.0"
[provides]
providers = ["custom-cloud"]
@@ -26205,91 +25646,76 @@ system_packages = ["curl", "jq"]
[configuration]
required_env = ["CUSTOM_CLOUD_API_KEY"]
optional_env = ["CUSTOM_CLOUD_REGION"]
-```plaintext
-
-## Creating Custom Providers
-
-### Provider Architecture
-
-A provider handles:
-
-- Authentication with cloud APIs
-- Resource lifecycle management (create, read, update, delete)
-- Provider-specific configurations
-- Cost estimation and billing integration
-
-### Step 1: Define Provider Schema
-
-`kcl/providers/custom_cloud.k`:
-
-```kcl
-# Custom cloud provider schema
-import models.base
-
-schema CustomCloudConfig(base.ProviderConfig):
- """Configuration for Custom Cloud provider"""
-
+
+
+
+A provider handles:
+
+Authentication with cloud APIs
+Resource lifecycle management (create, read, update, delete)
+Provider-specific configurations
+Cost estimation and billing integration
+
+
+schemas/providers/custom_cloud.ncl:
+# Custom cloud provider schema
+{
+ CustomCloudConfig = {
+ # Configuration for Custom Cloud provider
# Authentication
- api_key: str
- api_secret?: str
- region?: str = "us-west-1"
+ api_key | String,
+ api_secret | String = "",
+ region | String = "us-west-1",
# Provider-specific settings
- project_id?: str
- organization?: str
+ project_id | String = "",
+ organization | String = "",
# API configuration
- api_url?: str = "https://api.custom-cloud.com/v1"
- timeout?: int = 30
+ api_url | String = "https://api.custom-cloud.com/v1",
+ timeout | Number = 30,
# Cost configuration
- billing_account?: str
- cost_center?: str
-
-schema CustomCloudServer(base.ServerConfig):
- """Server configuration for Custom Cloud"""
+ billing_account | String = "",
+ cost_center | String = "",
+ },
+ CustomCloudServer = {
+ # Server configuration for Custom Cloud
# Instance configuration
- machine_type: str
- zone: str
- disk_size?: int = 20
- disk_type?: str = "ssd"
+ machine_type | String,
+ zone | String,
+ disk_size | Number = 20,
+ disk_type | String = "ssd",
# Network configuration
- vpc?: str
- subnet?: str
- external_ip?: bool = true
+ vpc | String = "",
+ subnet | String = "",
+ external_ip | Bool = true,
# Custom Cloud specific
- preemptible?: bool = false
- labels?: {str: str} = {}
+ preemptible | Bool = false,
+ labels | { _ : String } = {},
+ },
- # Validation rules
- check:
- len(machine_type) > 0, "machine_type cannot be empty"
- disk_size >= 10, "disk_size must be at least 10GB"
-
-# Provider capabilities
-provider_capabilities = {
- "name": "custom-cloud"
- "supports_auto_scaling": True
- "supports_load_balancing": True
- "supports_managed_databases": True
- "regions": [
- "us-west-1", "us-west-2", "us-east-1", "eu-west-1"
- ]
- "machine_types": [
- "micro", "small", "medium", "large", "xlarge"
- ]
+ # Provider capabilities
+ provider_capabilities = {
+ name = "custom-cloud",
+ supports_auto_scaling = true,
+ supports_load_balancing = true,
+ supports_managed_databases = true,
+ regions = [
+ "us-west-1", "us-west-2", "us-east-1", "eu-west-1"
+ ],
+ machine_types = [
+ "micro", "small", "medium", "large", "xlarge"
+ ],
+ },
}
-```plaintext
-
-### Step 2: Implement Provider Logic
-
-`nulib/providers/custom_cloud.nu`:
-
-```nushell
-# Custom Cloud provider implementation
+
+
+nulib/providers/custom_cloud.nu:
+# Custom Cloud provider implementation
# Provider initialization
export def custom_cloud_init [] {
@@ -26482,14 +25908,10 @@ def custom_cloud_wait_for_server [
print $"Waiting for server status: ($current_status) -> ($target_status)"
}
}
-```plaintext
-
-### Step 3: Provider Registration
-
-`nulib/providers/mod.nu`:
-
-```nushell
-# Provider module exports
+
+
+nulib/providers/mod.nu:
+# Provider module exports
export use custom_cloud.nu *
# Provider registry
@@ -26507,92 +25929,78 @@ export def get_provider_info [] -> record {
auth_methods: ["api_key", "oauth"]
}
}
-```plaintext
-
-## Creating Custom Task Services
-
-### Task Service Architecture
-
-Task services handle:
-
-- Software installation and configuration
-- Service lifecycle management
-- Health checking and monitoring
-- Version management and updates
-
-### Step 1: Define Service Schema
-
-`kcl/taskservs/custom_database.k`:
-
-```kcl
-# Custom database task service
-import models.base
-
-schema CustomDatabaseConfig(base.TaskServiceConfig):
- """Configuration for Custom Database service"""
-
+
+
+
+Task services handle:
+
+Software installation and configuration
+Service lifecycle management
+Health checking and monitoring
+Version management and updates
+
+
+schemas/taskservs/custom_database.ncl:
+# Custom database task service
+{
+ CustomDatabaseConfig = {
+ # Configuration for Custom Database service
# Database configuration
- version?: str = "14.0"
- port?: int = 5432
- max_connections?: int = 100
- memory_limit?: str = "512MB"
+ version | String = "14.0",
+ port | Number = 5432,
+ max_connections | Number = 100,
+ memory_limit | String = "512MB",
# Data configuration
- data_directory?: str = "/var/lib/customdb"
- log_directory?: str = "/var/log/customdb"
+ data_directory | String = "/var/lib/customdb",
+ log_directory | String = "/var/log/customdb",
# Replication
- replication?: {
- enabled?: bool = false
- mode?: str = "async" # async, sync
- replicas?: int = 1
- }
+ replication | {
+ enabled | Bool = false,
+ mode | String = "async",
+ replicas | Number = 1,
+ } = {},
# Backup configuration
- backup?: {
- enabled?: bool = true
- schedule?: str = "0 2 * * *" # Daily at 2 AM
- retention_days?: int = 7
- storage_location?: str = "local"
- }
+ backup | {
+ enabled | Bool = true,
+ schedule | String = "0 2 * * *",
+ retention_days | Number = 7,
+ storage_location | String = "local",
+ } = {},
# Security
- ssl?: {
- enabled?: bool = true
- cert_file?: str = "/etc/ssl/certs/customdb.crt"
- key_file?: str = "/etc/ssl/private/customdb.key"
- }
+ ssl | {
+ enabled | Bool = true,
+ cert_file | String = "/etc/ssl/certs/customdb.crt",
+ key_file | String = "/etc/ssl/private/customdb.key",
+ } = {},
# Monitoring
- monitoring?: {
- enabled?: bool = true
- metrics_port?: int = 9187
- log_level?: str = "info"
- }
+ monitoring | {
+ enabled | Bool = true,
+ metrics_port | Number = 9187,
+ log_level | String = "info",
+ } = {},
+ },
- check:
- port > 1024 and port < 65536, "port must be between 1024 and 65535"
- max_connections > 0, "max_connections must be positive"
-
-# Service metadata
-service_metadata = {
- "name": "custom-database"
- "description": "Custom Database Server"
- "version": "14.0"
- "category": "database"
- "dependencies": ["systemd"]
- "supported_os": ["ubuntu", "debian", "centos", "rhel"]
- "ports": [5432, 9187]
- "data_directories": ["/var/lib/customdb"]
+ # Service metadata
+ service_metadata = {
+ name = "custom-database",
+ description = "Custom Database Server",
+ version = "14.0",
+ category = "database",
+ dependencies = ["systemd"],
+ supported_os = ["ubuntu", "debian", "centos", "rhel"],
+ ports = [5432, 9187],
+ data_directories = ["/var/lib/customdb"],
+ },
}
-```plaintext
-
-### Step 2: Implement Service Logic
-
-`nulib/taskservs/custom_database.nu`:
-
-```nushell
-# Custom Database task service implementation
+
+
+nulib/taskservs/custom_database.nu:
+# Custom Database task service implementation
# Install custom database
export def install_custom_database [
@@ -26818,7 +26226,7 @@ def validate_prerequisites [config: record] {
let memory_mb = (^free -m | lines | get 1 | split row ' ' | get 1 | into int)
if $memory_mb < 512 {
error make {
- msg: $"Insufficient memory: ($memory_mb)MB. Minimum 512MB required."
+ msg: $"Insufficient memory: ($memory_mb)MB. Minimum 512 MB required."
}
}
}
@@ -26845,7 +26253,7 @@ def configure_service [config: record] {
def generate_config [config: record] -> string {
let port = ($config.port | default 5432)
let max_connections = ($config.max_connections | default 100)
- let memory_limit = ($config.memory_limit | default "512MB")
+ let memory_limit = ($config.memory_limit | default "512MB")
return $"
# Custom Database Configuration
@@ -26936,129 +26344,111 @@ def check_port [port: int] -> bool {
let result = (^nc -z localhost $port | complete)
return ($result.exit_code == 0)
}
-```plaintext
-
-## Creating Custom Clusters
-
-### Cluster Architecture
-
-Clusters orchestrate multiple services to work together as a cohesive application stack.
-
-### Step 1: Define Cluster Schema
-
-`kcl/clusters/custom_web_stack.k`:
-
-```kcl
-# Custom web application stack
-import models.base
-import models.server
-import models.taskserv
-
-schema CustomWebStackConfig(base.ClusterConfig):
- """Configuration for Custom Web Application Stack"""
-
+
+
+
+Clusters orchestrate multiple services to work together as a cohesive application stack.
+
+schemas/clusters/custom_web_stack.ncl:
+# Custom web application stack
+{
+ CustomWebStackConfig = {
+ # Configuration for Custom Web Application Stack
# Application configuration
- app_name: str
- app_version?: str = "latest"
- environment?: str = "production"
+ app_name | String,
+ app_version | String = "latest",
+ environment | String = "production",
# Web tier configuration
- web_tier: {
- replicas?: int = 3
- instance_type?: str = "t3.medium"
- load_balancer?: {
- enabled?: bool = true
- ssl?: bool = true
- health_check_path?: str = "/health"
- }
- }
+ web_tier | {
+ replicas | Number = 3,
+ instance_type | String = "t3.medium",
+ load_balancer | {
+ enabled | Bool = true,
+ ssl | Bool = true,
+ health_check_path | String = "/health",
+ } = {},
+ },
# Application tier configuration
- app_tier: {
- replicas?: int = 5
- instance_type?: str = "t3.large"
- auto_scaling?: {
- enabled?: bool = true
- min_replicas?: int = 2
- max_replicas?: int = 10
- cpu_threshold?: int = 70
- }
- }
+ app_tier | {
+ replicas | Number = 5,
+ instance_type | String = "t3.large",
+ auto_scaling | {
+ enabled | Bool = true,
+ min_replicas | Number = 2,
+ max_replicas | Number = 10,
+ cpu_threshold | Number = 70,
+ } = {},
+ },
# Database tier configuration
- database_tier: {
- type?: str = "postgresql" # postgresql, mysql, custom-database
- instance_type?: str = "t3.xlarge"
- high_availability?: bool = true
- backup_enabled?: bool = true
- }
+ database_tier | {
+ type | String = "postgresql",
+ instance_type | String = "t3.xlarge",
+ high_availability | Bool = true,
+ backup_enabled | Bool = true,
+ } = {},
# Monitoring configuration
- monitoring: {
- enabled?: bool = true
- metrics_retention?: str = "30d"
- alerting?: bool = true
- }
+ monitoring | {
+ enabled | Bool = true,
+ metrics_retention | String = "30d",
+ alerting | Bool = true,
+ } = {},
# Networking
- network: {
- vpc_cidr?: str = "10.0.0.0/16"
- public_subnets?: [str] = ["10.0.1.0/24", "10.0.2.0/24"]
- private_subnets?: [str] = ["10.0.10.0/24", "10.0.20.0/24"]
- database_subnets?: [str] = ["10.0.100.0/24", "10.0.200.0/24"]
- }
+ network | {
+ vpc_cidr | String = "10.0.0.0/16",
+ public_subnets | Array String = ["10.0.1.0/24", "10.0.2.0/24"],
+ private_subnets | Array String = ["10.0.10.0/24", "10.0.20.0/24"],
+ database_subnets | Array String = ["10.0.100.0/24", "10.0.200.0/24"],
+ } = {},
+ },
- check:
- len(app_name) > 0, "app_name cannot be empty"
- web_tier.replicas >= 1, "web_tier replicas must be at least 1"
- app_tier.replicas >= 1, "app_tier replicas must be at least 1"
-
-# Cluster blueprint
-cluster_blueprint = {
- "name": "custom-web-stack"
- "description": "Custom web application stack with load balancer, app servers, and database"
- "version": "1.0.0"
- "components": [
- {
- "name": "load-balancer"
- "type": "taskserv"
- "service": "haproxy"
- "tier": "web"
- }
- {
- "name": "web-servers"
- "type": "server"
- "tier": "web"
- "scaling": "horizontal"
- }
- {
- "name": "app-servers"
- "type": "server"
- "tier": "app"
- "scaling": "horizontal"
- }
- {
- "name": "database"
- "type": "taskserv"
- "service": "postgresql"
- "tier": "database"
- }
- {
- "name": "monitoring"
- "type": "taskserv"
- "service": "prometheus"
- "tier": "monitoring"
- }
- ]
+ # Cluster blueprint
+ cluster_blueprint = {
+ name = "custom-web-stack",
+ description = "Custom web application stack with load balancer, app servers, and database",
+ version = "1.0.0",
+ components = [
+ {
+ name = "load-balancer",
+ type = "taskserv",
+ service = "haproxy",
+ tier = "web",
+ },
+ {
+ name = "web-servers",
+ type = "server",
+ tier = "web",
+ scaling = "horizontal",
+ },
+ {
+ name = "app-servers",
+ type = "server",
+ tier = "app",
+ scaling = "horizontal",
+ },
+ {
+ name = "database",
+ type = "taskserv",
+ service = "postgresql",
+ tier = "database",
+ },
+ {
+ name = "monitoring",
+ type = "taskserv",
+ service = "prometheus",
+ tier = "monitoring",
+ },
+ ],
+ },
}
-```plaintext
-
-### Step 2: Implement Cluster Logic
-
-`nulib/clusters/custom_web_stack.nu`:
-
-```nushell
-# Custom Web Stack cluster implementation
+
+
+nulib/clusters/custom_web_stack.nu:
+# Custom Web Stack cluster implementation
# Deploy web stack cluster
export def deploy_custom_web_stack [
@@ -27321,14 +26711,10 @@ def calculate_cluster_cost [config: record] -> float {
return ($web_cost + $app_cost + $db_cost + $lb_cost)
}
-```plaintext
-
-## Extension Testing
-
-### Test Structure
-
-```plaintext
-tests/
+
+
+
+tests/
├── unit/ # Unit tests
│ ├── provider_test.nu # Provider unit tests
│ ├── taskserv_test.nu # Task service unit tests
@@ -27342,14 +26728,10 @@ tests/
└── fixtures/ # Test data
├── configs/
└── mocks/
-```plaintext
-
-### Example Unit Test
-
-`tests/unit/provider_test.nu`:
-
-```nushell
-# Unit tests for custom cloud provider
+
+
+tests/unit/provider_test.nu:
+# Unit tests for custom cloud provider
use std testing
@@ -27382,7 +26764,7 @@ export def test_cost_calculation [] {
}
let cost = (calculate_server_cost $server_config)
- assert equal $cost 0.15 # 0.10 (medium) + 0.05 (50GB storage)
+ assert equal $cost 0.15 # 0.10 (medium) + 0.05 (50 GB storage)
}
export def test_api_call_formatting [] {
@@ -27398,14 +26780,10 @@ export def test_api_call_formatting [] {
assert equal $api_payload.machine_type "small"
assert equal $api_payload.zone "us-west-1a"
}
-```plaintext
-
-### Integration Test
-
-`tests/integration/provider_integration_test.nu`:
-
-```nushell
-# Integration tests for custom cloud provider
+
+
+tests/integration/provider_integration_test.nu:
+# Integration tests for custom cloud provider
use std testing
@@ -27436,14 +26814,10 @@ export def test_server_listing [] {
assert ($servers | is-not-empty)
}
}
-```plaintext
-
-## Publishing Extensions
-
-### Extension Package Structure
-
-```plaintext
-my-extension-package/
+
+
+
+my-extension-package/
├── extension.toml # Extension metadata
├── README.md # Documentation
├── LICENSE # License file
@@ -27454,14 +26828,10 @@ my-extension-package/
│ ├── nulib/
│ └── templates/
└── tests/ # Test files
-```plaintext
-
-### Publishing Configuration
-
-`extension.toml`:
-
-```toml
-[extension]
+
+
+extension.toml:
+[extension]
name = "my-custom-provider"
version = "1.0.0"
description = "Custom cloud provider integration"
@@ -27474,7 +26844,7 @@ categories = ["providers"]
[compatibility]
provisioning_version = ">=1.0.0"
-kcl_version = ">=0.11.2"
+nickel_version = ">=1.15.0"
[provides]
providers = ["custom-cloud"]
@@ -27488,12 +26858,9 @@ extensions = []
[build]
include = ["src/**", "examples/**", "README.md", "LICENSE"]
exclude = ["tests/**", ".git/**", "*.tmp"]
-```plaintext
-
-### Publishing Process
-
-```bash
-# 1. Validate extension
+
+
+# 1. Validate extension
provisioning extension validate .
# 2. Run tests
@@ -27504,26 +26871,19 @@ provisioning extension build .
# 4. Publish to registry
provisioning extension publish ./dist/my-custom-provider-1.0.0.tar.gz
-```plaintext
-
-## Best Practices
-
-### 1. Code Organization
-
-```plaintext
-# Follow standard structure
+
+
+
+# Follow standard structure
extension/
-├── kcl/ # Schemas and models
-├── nulib/ # Implementation
+├── schemas/ # Nickel schemas and models
+├── nulib/ # Nushell implementation
├── templates/ # Configuration templates
├── tests/ # Comprehensive tests
└── docs/ # Documentation
-```plaintext
-
-### 2. Error Handling
-
-```nushell
-# Always provide meaningful error messages
+
+
+# Always provide meaningful error messages
if ($api_response | get -o status | default "" | str contains "error") {
error make {
msg: $"API Error: ($api_response.message)"
@@ -27534,50 +26894,52 @@ if ($api_response | get -o status | default "" | str contains "error") {
help: "Check your API key and network connectivity"
}
}
-```plaintext
-
-### 3. Configuration Validation
-
-```kcl
-# Use KCL's validation features
-schema CustomConfig:
- name: str
- size: int
-
- check:
- len(name) > 0, "name cannot be empty"
- size > 0, "size must be positive"
- size <= 1000, "size cannot exceed 1000"
-```plaintext
-
-### 4. Testing
-
-- Write comprehensive unit tests
-- Include integration tests
-- Test error conditions
-- Use fixtures for consistent test data
-- Mock external dependencies
-
-### 5. Documentation
-
-- Include README with examples
-- Document all configuration options
-- Provide troubleshooting guide
-- Include architecture diagrams
-- Write API documentation
-
-## Next Steps
-
-Now that you understand extension development:
-
-1. **Study existing extensions** in the `providers/` and `taskservs/` directories
-2. **Practice with simple extensions** before building complex ones
-3. **Join the community** to share and collaborate on extensions
-4. **Contribute to the core system** by improving extension APIs
-5. **Build a library** of reusable templates and patterns
-
-You're now equipped to extend provisioning for any custom requirements!
+
+# Use Nickel's validation features with contracts
+{
+ CustomConfig = {
+ # Configuration with validation
+ name | String | doc "Name must not be empty",
+ size | Number | doc "Size must be positive and at most 1000",
+ },
+
+ # Validation rules
+ validate_config = fun config =>
+ let valid_name = (std.string.length config.name) > 0 in
+ let valid_size = config.size > 0 && config.size <= 1000 in
+ if valid_name && valid_size then
+ config
+ else
+      (std.fail_with "Configuration validation failed"),
+}
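A hedged usage sketch of the validator above (the file name and field values are illustrative, not part of the toolkit):

```nickel
# Hypothetical usage, assuming the record above is saved as custom_config.ncl
let cfg = import "custom_config.ncl" in
cfg.validate_config { name = "web-01", size = 500 }
```

A valid record evaluates to itself; an empty name or a size above 1000 aborts evaluation with the failure message.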
+
+
+
+Write comprehensive unit tests
+Include integration tests
+Test error conditions
+Use fixtures for consistent test data
+Mock external dependencies
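The "mock external dependencies" practice can be sketched in any language; a minimal Python illustration (the pricing API and function names here are hypothetical, not part of the toolkit):

```python
from unittest import mock

def calculate_server_cost(api, plan, storage_gb):
    # Cost = base price for the plan plus a per-GB storage rate,
    # both fetched from the (mocked) provider pricing API.
    prices = api.get_prices()
    return prices[plan] + storage_gb * prices["storage_per_gb"]

# Unit test: mock the external pricing API instead of calling it for real.
fake_api = mock.Mock()
fake_api.get_prices.return_value = {"medium": 0.10, "storage_per_gb": 0.001}

cost = calculate_server_cost(fake_api, "medium", 50)
assert abs(cost - 0.15) < 1e-9
fake_api.get_prices.assert_called_once()
print("mock test passed")
```

The test stays fast and deterministic because no network call is made, and the mock records how the dependency was used.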
+
+
+
+Include README with examples
+Document all configuration options
+Provide troubleshooting guide
+Include architecture diagrams
+Write API documentation
+
+
+Now that you understand extension development:
+
+Study existing extensions in the providers/ and taskservs/ directories
+Practice with simple extensions before building complex ones
+Join the community to share and collaborate on extensions
+Contribute to the core system by improving extension APIs
+Build a library of reusable templates and patterns
+
+You’re now equipped to extend provisioning for any custom requirements!
This guide focuses on creating extensions tailored to specific infrastructure requirements, business needs, and organizational constraints.
@@ -27590,7 +26952,7 @@ You're now equipped to extend provisioning for any custom requirements!
Integration Patterns
Real-World Examples
-
+
Infrastructure-specific extensions address unique requirements that generic modules cannot cover:
Company-specific applications and services
@@ -27673,7 +27035,7 @@ EOF
-"""
+"""
Business Requirements Schema for Custom Extensions
Use this template to document requirements before development
"""
@@ -27741,11 +27103,11 @@ schema Integration:
# Create company-specific taskserv
-mkdir -p extensions/taskservs/company-specific/legacy-erp/kcl
-cd extensions/taskservs/company-specific/legacy-erp/kcl
+mkdir -p extensions/taskservs/company-specific/legacy-erp/nickel
+cd extensions/taskservs/company-specific/legacy-erp/nickel
-Create legacy-erp.k:
-"""
+Create legacy-erp.ncl:
+"""
Legacy ERP System Taskserv
Handles deployment and management of company's legacy ERP system
"""
@@ -27990,8 +27352,8 @@ legacy_erp_default: LegacyERPTaskserv = {
}
-Create compliance-monitor.k:
-"""
+Create compliance-monitor.ncl:
+"""
Compliance Monitoring Taskserv
Automated compliance checking and reporting for regulated environments
"""
@@ -28157,11 +27519,11 @@ compliance_monitor_default: ComplianceMonitorTaskserv = {
When working with specialized or private cloud providers:
# Create custom provider extension
-mkdir -p extensions/providers/company-private-cloud/kcl
-cd extensions/providers/company-private-cloud/kcl
+mkdir -p extensions/providers/company-private-cloud/nickel
+cd extensions/providers/company-private-cloud/nickel
-Create provision_company-private-cloud.k:
-"""
+Create provision_company-private-cloud.ncl:
+"""
Company Private Cloud Provider
Integration with company's private cloud infrastructure
"""
@@ -28228,7 +27590,7 @@ schema CompanyPrivateCloudServer(server.Server):
environment: "dev" | "test" | "staging" | "prod" = "prod"
check:
- root_disk_size >= 20, "Root disk must be at least 20GB"
+ root_disk_size >= 20, "Root disk must be at least 20 GB"
len(cost_center) > 0, "Cost center required"
len(department) > 0, "Department required"
@@ -28304,11 +27666,11 @@ company_private_cloud_defaults: defaults.ServerDefaults = {
Create environment-specific extensions that handle different deployment patterns:
# Create environment management extension
-mkdir -p extensions/clusters/company-environments/kcl
-cd extensions/clusters/company-environments/kcl
+mkdir -p extensions/clusters/company-environments/nickel
+cd extensions/clusters/company-environments/nickel
-Create company-environments.k:
-"""
+Create company-environments.ncl:
+"""
Company Environment Management
Standardized environment configurations for different deployment stages
"""
@@ -28480,15 +27842,15 @@ environment_templates = {
schema: CompanyEnvironment
}
-
+
Create integration patterns for common legacy system scenarios:
# Create integration patterns
-mkdir -p extensions/taskservs/integrations/legacy-bridge/kcl
-cd extensions/taskservs/integrations/legacy-bridge/kcl
+mkdir -p extensions/taskservs/integrations/legacy-bridge/nickel
+cd extensions/taskservs/integrations/legacy-bridge/nickel
-Create legacy-bridge.k:
-"""
+Create legacy-bridge.ncl:
+"""
Legacy System Integration Bridge
Provides standardized integration patterns for legacy systems
"""
@@ -28688,17 +28050,17 @@ legacy_bridge_dependencies: deps.TaskservDependencies = {
# Financial services specific extensions
-mkdir -p extensions/taskservs/financial-services/{trading-system,risk-engine,compliance-reporter}/kcl
+mkdir -p extensions/taskservs/financial-services/{trading-system,risk-engine,compliance-reporter}/nickel
# Healthcare specific extensions
-mkdir -p extensions/taskservs/healthcare/{hl7-processor,dicom-storage,hipaa-audit}/kcl
+mkdir -p extensions/taskservs/healthcare/{hl7-processor,dicom-storage,hipaa-audit}/nickel
# Manufacturing specific extensions
-mkdir -p extensions/taskservs/manufacturing/{iot-gateway,scada-bridge,quality-system}/kcl
+mkdir -p extensions/taskservs/manufacturing/{iot-gateway,scada-bridge,quality-system}/nickel
-
+
# Load company-specific extensions
cd workspace/infra/production
@@ -28711,7 +28073,7 @@ module-loader list taskservs .
module-loader validate .
-# Import loaded extensions
+# Import loaded extensions
import .taskservs.legacy-erp.legacy-erp as erp
import .taskservs.compliance-monitor.compliance-monitor as compliance
import .providers.company-private-cloud as private_cloud
@@ -28851,16 +28213,17 @@ nu -c "use provisioning/core/nulib/lib_provisioning/providers/loader.nu *; load-
nu -c "use provisioning/extensions/providers/your_provider_name/provider.nu *; query_servers"
-Add to your KCL configuration:
-# workspace/infra/example/servers.k
-servers = [
+Add to your Nickel configuration:
+# workspace/infra/example/servers.ncl
+let servers = [
{
- hostname = "test-server"
- provider = "your_provider_name"
- zone = "your-region-1"
- plan = "your-instance-type"
+ hostname = "test-server",
+ provider = "your_provider_name",
+ zone = "your-region-1",
+ plan = "your-instance-type",
}
-]
+] in
+servers
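Because the file now evaluates to a plain Nickel list, other Nickel code can import it directly; a minimal sketch (path reused from above, the check itself is illustrative):

```nickel
# Hypothetical: import the server list and check it is non-empty before deploying
let servers = import "workspace/infra/example/servers.ncl" in
std.array.length servers > 0
```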
@@ -28892,7 +28255,7 @@ export def baremetal_query_servers [find?: string, cols?: string]: nothing ->
$inventory.servers | select hostname ip_address status
}
-
+
export def provider_operation []: nothing -> any {
try {
@@ -28971,7 +28334,7 @@ Error handling working
Compatible with existing infrastructure configs
-
+
# Check provider directory structure
ls -la provisioning/extensions/providers/your_provider_name/
@@ -28990,7 +28353,7 @@ env | grep PROVIDER
# Test API access manually
curl -H "Authorization: Bearer $PROVIDER_TOKEN" https://api.provider.com/test
-
+
Documentation : Add provider-specific documentation to docs/providers/
Examples : Create example infrastructure using your provider
@@ -28998,7 +28361,7 @@ curl -H "Authorization: Bearer $PROVIDER_TOKEN" https://api.provider.com/test
Optimization : Implement caching and performance optimizations
Features : Add provider-specific advanced features
-
+
Check existing providers for implementation patterns
Review the Provider Interface Documentation
@@ -29008,8 +28371,8 @@ curl -H "Authorization: Bearer $PROVIDER_TOKEN" https://api.provider.com/test
Target Audience : Developers working on the provisioning CLI
Last Updated : 2025-09-30
-Related : ADR-006 CLI Refactoring
-
+Related : ADR-006 CLI Refactoring
+
The provisioning CLI uses a modular, domain-driven architecture that separates concerns into focused command handlers. This guide shows you how to work with this architecture.
@@ -29034,32 +28397,24 @@ curl -H "Authorization: Bearer $PROVIDER_TOKEN" https://api.provider.com/test
│ ├── generation.nu (78 lines) - Generate commands
│ ├── utilities.nu (157 lines) - SSH, SOPS, cache, providers
│ └── configuration.nu (316 lines) - Env, show, init, validate
-```plaintext
-
-## Adding New Commands
-
-### Step 1: Choose the Right Domain Handler
-
-Commands are organized by domain. Choose the appropriate handler:
-
-| Domain | Handler | Responsibility |
-|--------|---------|----------------|
-| `infrastructure.nu` | Server/taskserv/cluster/infra lifecycle |
-| `orchestration.nu` | Workflow/batch operations, orchestrator control |
-| `development.nu` | Module discovery, layers, versions, packaging |
-| `workspace.nu` | Workspace and template management |
-| `configuration.nu` | Environment, settings, initialization |
-| `utilities.nu` | SSH, SOPS, cache, providers, utilities |
-| `generation.nu` | Generate commands (server, taskserv, etc.) |
-
-### Step 2: Add Command to Handler
-
-**Example: Adding a new server command `server status`**
-
-Edit `provisioning/core/nulib/main_provisioning/commands/infrastructure.nu`:
-
-```nushell
-# Add to the handle_infrastructure_command match statement
+
+
+
+Commands are organized by domain. Choose the appropriate handler:
+Handler: Responsibility
+infrastructure.nu: Server/taskserv/cluster/infra lifecycle
+orchestration.nu: Workflow/batch operations, orchestrator control
+development.nu: Module discovery, layers, versions, packaging
+workspace.nu: Workspace and template management
+configuration.nu: Environment, settings, initialization
+utilities.nu: SSH, SOPS, cache, providers, utilities
+generation.nu: Generate commands (server, taskserv, etc.)
+
+
+
+Example: Adding a new server command server status
+Edit provisioning/core/nulib/main_provisioning/commands/infrastructure.nu:
+# Add to the handle_infrastructure_command match statement
export def handle_infrastructure_command [
command: string
ops: string
@@ -29092,18 +28447,12 @@ def handle_server [ops: string, flags: record] {
let args = build_module_args $flags $ops
run_module $args "server" --exec
}
-```plaintext
-
-**That's it!** The command is now available as `provisioning server status`.
-
-### Step 3: Add Shortcuts (Optional)
-
-If you want shortcuts like `provisioning s status`:
-
-Edit `provisioning/core/nulib/main_provisioning/dispatcher.nu`:
-
-```nushell
-export def get_command_registry []: nothing -> record {
+
+That’s it! The command is now available as provisioning server status.
+
+If you want shortcuts like provisioning s status:
+Edit provisioning/core/nulib/main_provisioning/dispatcher.nu:
+export def get_command_registry []: nothing -> record {
{
# Infrastructure commands
"s" => "infrastructure server" # Already exists
@@ -29115,29 +28464,19 @@ export def get_command_registry []: nothing -> record {
# ... rest of registry
}
}
-```plaintext
-
-**Note**: Most shortcuts are already configured. You only need to add new shortcuts if you're creating completely new command categories.
-
-## Modifying Existing Handlers
-
-### Example: Enhancing the `taskserv` Command
-
-Let's say you want to add better error handling to the taskserv command:
-
-**Before:**
-
-```nushell
-def handle_taskserv [ops: string, flags: record] {
+
+Note : Most shortcuts are already configured. You only need to add new shortcuts if you’re creating completely new command categories.
+
+
+Let’s say you want to add better error handling to the taskserv command:
+Before:
+def handle_taskserv [ops: string, flags: record] {
let args = build_module_args $flags $ops
run_module $args "taskserv" --exec
}
-```plaintext
-
-**After:**
-
-```nushell
-def handle_taskserv [ops: string, flags: record] {
+
+After:
+def handle_taskserv [ops: string, flags: record] {
# Validate taskserv name if provided
let first_arg = ($ops | split row " " | get -o 0)
if ($first_arg | is-not-empty) and $first_arg not-in ["create", "delete", "list", "generate", "check-updates", "help"] {
@@ -29155,16 +28494,11 @@ def handle_taskserv [ops: string, flags: record] {
let args = build_module_args $flags $ops
run_module $args "taskserv" --exec
}
-```plaintext
-
-## Working with Flags
-
-### Using Centralized Flag Handling
-
-The `flags.nu` module provides centralized flag handling:
-
-```nushell
-# Parse all flags into normalized record
+
+
+
+The flags.nu module provides centralized flag handling:
+# Parse all flags into normalized record
let parsed_flags = (parse_common_flags {
version: $version, v: $v, info: $info,
debug: $debug, check: $check, yes: $yes,
@@ -29176,42 +28510,36 @@ let args = build_module_args $parsed_flags $ops
# Set environment variables based on flags
set_debug_env $parsed_flags
-```plaintext
-
-### Available Flag Parsing
-
-The `parse_common_flags` function normalizes these flags:
-
-| Flag Record Field | Description |
-|-------------------|-------------|
-| `show_version` | Version display (`--version`, `-v`) |
-| `show_info` | Info display (`--info`, `-i`) |
-| `show_about` | About display (`--about`, `-a`) |
-| `debug_mode` | Debug mode (`--debug`, `-x`) |
-| `check_mode` | Check mode (`--check`, `-c`) |
-| `auto_confirm` | Auto-confirm (`--yes`, `-y`) |
-| `wait` | Wait for completion (`--wait`, `-w`) |
-| `keep_storage` | Keep storage (`--keepstorage`) |
-| `infra` | Infrastructure name (`--infra`) |
-| `outfile` | Output file (`--outfile`) |
-| `output_format` | Output format (`--out`) |
-| `template` | Template name (`--template`) |
-| `select` | Selection (`--select`) |
-| `settings` | Settings file (`--settings`) |
-| `new_infra` | New infra name (`--new`) |
-
-### Adding New Flags
-
-If you need to add a new flag:
-
-1. **Update main `provisioning` file** to accept the flag
-2. **Update `flags.nu:parse_common_flags`** to normalize it
-3. **Update `flags.nu:build_module_args`** to pass it to modules
-
-**Example: Adding `--timeout` flag**
-
-```nushell
-# 1. In provisioning main file (parameter list)
+
+
+The parse_common_flags function normalizes these flags:
+Flag Record Field: Description
+show_version: Version display (--version, -v)
+show_info: Info display (--info, -i)
+show_about: About display (--about, -a)
+debug_mode: Debug mode (--debug, -x)
+check_mode: Check mode (--check, -c)
+auto_confirm: Auto-confirm (--yes, -y)
+wait: Wait for completion (--wait, -w)
+keep_storage: Keep storage (--keepstorage)
+infra: Infrastructure name (--infra)
+outfile: Output file (--outfile)
+output_format: Output format (--out)
+template: Template name (--template)
+select: Selection (--select)
+settings: Settings file (--settings)
+new_infra: New infra name (--new)
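The normalization the table describes is a mapping from raw CLI flags (long and short forms) to one canonical record; a language-neutral sketch in Python (field names follow the table, everything else is illustrative):

```python
def parse_common_flags(raw: dict) -> dict:
    """Normalize raw CLI flags (long and short forms) into one record."""
    return {
        "show_version": bool(raw.get("version") or raw.get("v")),
        "debug_mode": bool(raw.get("debug") or raw.get("x")),
        "check_mode": bool(raw.get("check") or raw.get("c")),
        "auto_confirm": bool(raw.get("yes") or raw.get("y")),
        "infra": raw.get("infra", ""),
        "output_format": raw.get("out", ""),
    }

flags = parse_common_flags({"v": True, "check": True, "infra": "dev"})
assert flags["show_version"] and flags["check_mode"]
assert flags["infra"] == "dev" and not flags["debug_mode"]
print("flags normalized")
```

Handlers can then read one well-known record instead of re-checking both the long and short spelling of every flag.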
+
+
+
+If you need to add a new flag:
+
+Update main provisioning file to accept the flag
+Update flags.nu:parse_common_flags to normalize it
+Update flags.nu:build_module_args to pass it to modules
+
+Example: Adding --timeout flag
+# 1. In provisioning main file (parameter list)
def main [
# ... existing parameters
--timeout: int = 300 # Timeout in seconds
@@ -29239,22 +28567,17 @@ export def build_module_args [flags: record, extra: string = ""]: nothing ->
# ... rest of function
$"($extra) ($use_check)($use_yes)($use_wait)($str_timeout)..."
}
-```plaintext
-
-## Adding New Shortcuts
-
-### Shortcut Naming Conventions
-
-- **1-2 letters**: Ultra-short for common commands (`s` for server, `ws` for workspace)
-- **3-4 letters**: Abbreviations (`orch` for orchestrator, `tmpl` for template)
-- **Aliases**: Alternative names (`task` for taskserv, `flow` for workflow)
-
-### Example: Adding a New Shortcut
-
-Edit `provisioning/core/nulib/main_provisioning/dispatcher.nu`:
-
-```nushell
-export def get_command_registry []: nothing -> record {
+
+
+
+
+1-2 letters : Ultra-short for common commands (s for server, ws for workspace)
+3-4 letters : Abbreviations (orch for orchestrator, tmpl for template)
+Aliases : Alternative names (task for taskserv, flow for workflow)
+
+
+Edit provisioning/core/nulib/main_provisioning/dispatcher.nu:
+export def get_command_registry []: nothing -> record {
{
# ... existing shortcuts
@@ -29265,36 +28588,26 @@ export def get_command_registry []: nothing -> record {
# ... rest of registry
}
}
-```plaintext
-
-**Important**: After adding a shortcut, update the help system in `help_system.nu` to document it.
-
-## Testing Your Changes
-
-### Running the Test Suite
-
-```bash
-# Run comprehensive test suite
+
+Important : After adding a shortcut, update the help system in help_system.nu to document it.
+
+
+# Run comprehensive test suite
nu tests/test_provisioning_refactor.nu
-```plaintext
-
-### Test Coverage
-
-The test suite validates:
-
-- ✅ Main help display
-- ✅ Category help (infrastructure, orchestration, development, workspace)
-- ✅ Bi-directional help routing
-- ✅ All command shortcuts
-- ✅ Category shortcut help
-- ✅ Command routing to correct handlers
-
-### Adding Tests for Your Changes
-
-Edit `tests/test_provisioning_refactor.nu`:
-
-```nushell
-# Add your test function
+
+
+The test suite validates:
+
+✅ Main help display
+✅ Category help (infrastructure, orchestration, development, workspace)
+✅ Bi-directional help routing
+✅ All command shortcuts
+✅ Category shortcut help
+✅ Command routing to correct handlers
+
+
+Edit tests/test_provisioning_refactor.nu:
+# Add your test function
export def test_my_new_feature [] {
print "\n🧪 Testing my new feature..."
@@ -29313,12 +28626,9 @@ export def main [] {
# ... rest of main
}
-```plaintext
-
-### Manual Testing
-
-```bash
-# Test command execution
+
+
+# Test command execution
provisioning/core/cli/provisioning my-command test --check
# Test with debug mode
@@ -29327,27 +28637,18 @@ provisioning/core/cli/provisioning --debug my-command test
# Test help
provisioning/core/cli/provisioning my-command help
provisioning/core/cli/provisioning help my-command # Bi-directional
-```plaintext
-
-## Common Patterns
-
-### Pattern 1: Simple Command Handler
-
-**Use Case**: Command just needs to execute a module with standard flags
-
-```nushell
-def handle_simple_command [ops: string, flags: record] {
+
+
+
+Use Case : Command just needs to execute a module with standard flags
+def handle_simple_command [ops: string, flags: record] {
let args = build_module_args $flags $ops
run_module $args "module_name" --exec
}
-```plaintext
-
-### Pattern 2: Command with Validation
-
-**Use Case**: Need to validate input before execution
-
-```nushell
-def handle_validated_command [ops: string, flags: record] {
+
+
+Use Case : Need to validate input before execution
+def handle_validated_command [ops: string, flags: record] {
# Validate
let first_arg = ($ops | split row " " | get -o 0)
if ($first_arg | is-empty) {
@@ -29360,14 +28661,10 @@ def handle_validated_command [ops: string, flags: record] {
let args = build_module_args $flags $ops
run_module $args "module_name" --exec
}
-```plaintext
-
-### Pattern 3: Command with Subcommands
-
-**Use Case**: Command has multiple subcommands (like `server create`, `server delete`)
-
-```nushell
-def handle_complex_command [ops: string, flags: record] {
+
+
+Use Case : Command has multiple subcommands (like server create, server delete)
+def handle_complex_command [ops: string, flags: record] {
let subcommand = ($ops | split row " " | get -o 0)
let rest_ops = ($ops | split row " " | skip 1 | str join " ")
@@ -29382,14 +28679,10 @@ def handle_complex_command [ops: string, flags: record] {
}
}
}
-```plaintext
-
-### Pattern 4: Command with Flag-Based Routing
-
-**Use Case**: Command behavior changes based on flags
-
-```nushell
-def handle_flag_routed_command [ops: string, flags: record] {
+
+
+Use Case : Command behavior changes based on flags
+def handle_flag_routed_command [ops: string, flags: record] {
if $flags.check_mode {
# Dry-run mode
print "🔍 Check mode: simulating command..."
@@ -29401,21 +28694,16 @@ def handle_flag_routed_command [ops: string, flags: record] {
run_module $args "module_name" --exec
}
}
-```plaintext
-
-## Best Practices
-
-### 1. Keep Handlers Focused
-
-Each handler should do **one thing well**:
-
-- ✅ Good: `handle_server` manages all server operations
-- ❌ Bad: `handle_server` also manages clusters and taskservs
-
-### 2. Use Descriptive Error Messages
-
-```nushell
-# ❌ Bad
+
+
+
+Each handler should do one thing well :
+
+✅ Good: handle_server manages all server operations
+❌ Bad: handle_server also manages clusters and taskservs
+
+
+# ❌ Bad
print "Error"
# ✅ Good
@@ -29427,14 +28715,10 @@ print " • containerd"
print " • cilium"
print ""
print "Use 'provisioning taskserv list' to see all available taskservs"
-```plaintext
-
-### 3. Leverage Centralized Functions
-
-Don't repeat code - use centralized functions:
-
-```nushell
-# ❌ Bad: Repeating flag handling
+
+
+Don’t repeat code; use centralized functions:
+# ❌ Bad: Repeating flag handling
def handle_bad [ops: string, flags: record] {
let use_check = if $flags.check_mode { "--check " } else { "" }
let use_yes = if $flags.auto_confirm { "--yes " } else { "" }
@@ -29448,54 +28732,47 @@ def handle_good [ops: string, flags: record] {
let args = build_module_args $flags $ops
run_module $args "module" --exec
}
-```plaintext
-
-### 4. Document Your Changes
-
-Update relevant documentation:
-
-- **ADR-006**: If architectural changes
-- **CLAUDE.md**: If new commands or shortcuts
-- **help_system.nu**: If new categories or commands
-- **This guide**: If new patterns or conventions
-
-### 5. Test Thoroughly
-
-Before committing:
-
-- [ ] Run test suite: `nu tests/test_provisioning_refactor.nu`
-- [ ] Test manual execution
-- [ ] Test with `--check` flag
-- [ ] Test with `--debug` flag
-- [ ] Test help: both `provisioning cmd help` and `provisioning help cmd`
-- [ ] Test shortcuts
-
-## Troubleshooting
-
-### Issue: "Module not found"
-
-**Cause**: Incorrect import path in handler
-
-**Fix**: Use relative imports with `.nu` extension:
-
-```nushell
-# ✅ Correct
+
+
+Update relevant documentation:
+
+ADR-006 : If architectural changes
+CLAUDE.md : If new commands or shortcuts
+help_system.nu : If new categories or commands
+This guide : If new patterns or conventions
+
+
+Before committing:
+
+Run test suite: nu tests/test_provisioning_refactor.nu
+Test manual execution
+Test with --check flag
+Test with --debug flag
+Test help: both provisioning cmd help and provisioning help cmd
+Test shortcuts
+
+
+Cause : Incorrect import path in handler
+Fix : Use relative imports with .nu extension:
+# ✅ Correct
use ../flags.nu *
use ../../lib_provisioning *
# ❌ Wrong
use ../main_provisioning/flags *
use lib_provisioning *
-```plaintext
-
-### Issue: "Parse mismatch: expected colon"
-
-**Cause**: Missing type signature format
-
-**Fix**: Use proper Nushell 0.107 type signature:
-
-```nushell
-# ✅ Correct
+
+
+Cause : Missing type signature format
+Fix : Use proper Nushell 0.107 type signature:
+# ✅ Correct
export def my_function [param: string]: nothing -> string {
"result"
}
@@ -29504,35 +28781,21 @@ export def my_function [param: string]: nothing -> string {
export def my_function [param: string] -> string {
"result"
}
-```plaintext
-
-### Issue: "Command not routing correctly"
-
-**Cause**: Shortcut not in command registry
-
-**Fix**: Add to `dispatcher.nu:get_command_registry`:
-
-```nushell
-"myshortcut" => "domain command"
-```plaintext
-
-### Issue: "Flags not being passed"
-
-**Cause**: Not using `build_module_args`
-
-**Fix**: Use centralized flag builder:
-
-```nushell
-let args = build_module_args $flags $ops
+
+
+Cause : Shortcut not in command registry
+Fix : Add to dispatcher.nu:get_command_registry:
+"myshortcut" => "domain command"
+
+
+Cause : Not using build_module_args
+Fix : Use centralized flag builder:
+let args = build_module_args $flags $ops
run_module $args "module" --exec
-```plaintext
-
-## Quick Reference
-
-### File Locations
-
-```plaintext
-provisioning/core/nulib/
+
+
+
+provisioning/core/nulib/
├── provisioning - Main entry, flag definitions
├── main_provisioning/
│ ├── flags.nu - Flag parsing (parse_common_flags, build_module_args)
@@ -29543,15 +28806,12 @@ tests/
└── test_provisioning_refactor.nu - Test suite
docs/
├── architecture/
-│ └── ADR-006-provisioning-cli-refactoring.md - Architecture docs
+│ └── adr-006-provisioning-cli-refactoring.md - Architecture docs
└── development/
└── COMMAND_HANDLER_GUIDE.md - This guide
-```plaintext
-
-### Key Functions
-
-```nushell
-# In flags.nu
+
+
+# In flags.nu
parse_common_flags [flags: record]: nothing -> record
build_module_args [flags: record, extra: string = ""]: nothing -> string
set_debug_env [flags: record]
@@ -29570,12 +28830,9 @@ help-orchestration []: nothing -> string
# In commands/*.nu
handle_*_command [command: string, ops: string, flags: record]
# Example: handle_infrastructure_command, handle_workspace_command
-```plaintext
-
-### Testing Commands
-
-```bash
-# Run full test suite
+
+
+# Run full test suite
nu tests/test_provisioning_refactor.nu
# Test specific command
@@ -29587,32 +28844,27 @@ provisioning/core/cli/provisioning --debug my-command test
# Test help
provisioning/core/cli/provisioning help my-command
provisioning/core/cli/provisioning my-command help # Bi-directional
-```plaintext
-
-## Further Reading
-
-- **[ADR-006: CLI Refactoring](../architecture/adr/ADR-006-provisioning-cli-refactoring.md)** - Complete architectural decision record
-- **[Project Structure](project-structure.md)** - Overall project organization
-- **[Workflow Development](workflow.md)** - Workflow system architecture
-- **[Development Integration](integration.md)** - Integration patterns
-
-## Contributing
-
-When contributing command handler changes:
-
-1. **Follow existing patterns** - Use the patterns in this guide
-2. **Update documentation** - Keep docs in sync with code
-3. **Add tests** - Cover your new functionality
-4. **Run test suite** - Ensure nothing breaks
-5. **Update CLAUDE.md** - Document new commands/shortcuts
-
-For questions or issues, refer to ADR-006 or ask the team.
-
----
-
-*This guide is part of the provisioning project documentation. Last updated: 2025-09-30*
-
+
+
+
+When contributing command handler changes:
+
+Follow existing patterns - Use the patterns in this guide
+Update documentation - Keep docs in sync with code
+Add tests - Cover your new functionality
+Run test suite - Ensure nothing breaks
+Update CLAUDE.md - Document new commands/shortcuts
+
+For questions or issues, refer to ADR-006 or ask the team.
+
+This guide is part of the provisioning project documentation. Last updated: 2025-09-30
+
This document outlines the recommended development workflows, coding practices, testing strategies, and debugging techniques for the provisioning project.
@@ -29628,7 +28880,7 @@ For questions or issues, refer to ADR-006 or ask the team.
Quality Assurance
Best Practices
-
+
The provisioning project employs a multi-language, multi-component architecture requiring specific development workflows to maintain consistency, quality, and efficiency.
Key Technologies :
@@ -29654,32 +28906,23 @@ cd provisioning-system
# Navigate to workspace
cd workspace/tools
**2. Initialize Workspace**:

```bash
# Initialize development workspace
nu workspace.nu init --user-name $USER --infra-name dev-env

# Check workspace health
nu workspace.nu health --detailed --fix-issues
```
**3. Configure Development Environment**:

```bash
# Create user configuration
cp workspace/config/local-overrides.toml.example workspace/config/$USER.toml

# Edit configuration for development
$EDITOR workspace/config/$USER.toml
```
**4. Set Up Build System**:

```bash
# Navigate to build tools
cd src/tools

# Check build prerequisites
make info

# Perform initial build
make dev-build
```
### Tool Installation

**Required Tools**:

```bash
# Install Nushell
cargo install nu

# Install Nickel (the CLI is published as nickel-lang-cli)
cargo install nickel-lang-cli

# Install additional tools
cargo install cross        # Cross-compilation
cargo install cargo-audit  # Security auditing
cargo install cargo-watch  # File watching
```
**Optional Development Tools**:

```bash
# Install development enhancers
cargo install nu_plugin_tera  # Template plugin
brew install sops             # Secrets management (sops is not a cargo crate)
brew install k9s              # Kubernetes management
```
### IDE Configuration

**VS Code Setup** (`.vscode/settings.json`):

```json
{
  "files.associations": {
    "*.nu": "shellscript",
    "*.ncl": "nickel",
    "*.toml": "toml"
  },
  "nushell.shellPath": "/usr/local/bin/nu",
  "editor.rulers": [100],
  "files.trimTrailingWhitespace": true
}
```
**Recommended Extensions**:

- Nushell Language Support
- Rust Analyzer
- Nickel Language Support
- TOML Language Support
- Better TOML

## Daily Development Workflow

### Morning Routine

**1. Sync and Update**:

```bash
# Sync with upstream
git pull origin main

# Update workspace
nu workspace.nu health --fix-issues

# Check for updates
nu workspace.nu status --detailed
```
**2. Review Current State**:

```bash
# Check current infrastructure
provisioning show servers
provisioning show settings

# Review workspace status
nu workspace.nu status
```
### Development Cycle

**1. Feature Development**:

```bash
# Create feature branch
git checkout -b feature/new-provider-support

# Start development environment
nu workspace.nu init --workspace-type development

# Begin development
$EDITOR workspace/extensions/providers/new-provider/nulib/provider.nu
```
**2. Incremental Testing**:

```bash
# Test syntax during development
nu --check workspace/extensions/providers/new-provider/nulib/provider.nu

# Run unit tests
nu workspace/extensions/providers/new-provider/tests/unit/basic-test.nu

# Integration testing
nu workspace.nu tools test-extension providers/new-provider
```
**3. Build and Validate**:

```bash
# Quick development build
cd src/tools
make dev-build
make validate-all

# Test distribution
make test-dist
```
### Testing During Development

**Unit Testing**:

```nushell
# Add test examples to functions
def create-server [name: string] -> record {
    # @test: "test-server" -> {name: "test-server", status: "created"}
    # Implementation here
}
```
**Integration Testing**:

```bash
# Test with real infrastructure
nu workspace/extensions/providers/new-provider/nulib/provider.nu \
    create-server test-server --dry-run

# Test with workspace isolation
PROVISIONING_WORKSPACE_USER=$USER provisioning server create test-server --check
```
### End-of-Day Routine

**1. Commit Progress**:

```bash
# Stage changes
git add .

# Commit with descriptive message
git commit -m "feat(provider): add new cloud provider support"

# Push to feature branch
git push origin feature/new-provider-support
```
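The conventional-commit prefix used in these messages can be checked mechanically. A minimal sketch (the prefix list is assumed from the examples in this guide, not from any project tooling):

```shell
# Sketch: classify a commit message by its conventional prefix (hypothetical helper)
check_msg() {
  case "$1" in
    feat*|fix*|docs*|refactor*|test*) echo "conventional" ;;
    *) echo "nonconventional" ;;
  esac
}

check_msg 'feat(provider): add new cloud provider support'   # prints "conventional"
check_msg 'misc tweaks'                                      # prints "nonconventional"
```

A check like this is commonly wired into a `commit-msg` hook so malformed messages are rejected before they reach review.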
**2. Workspace Maintenance**:

```bash
# Clean up development data
nu workspace.nu cleanup --type cache --age 1d

# Backup current state
nu workspace.nu backup --auto-name --components config,extensions

# Check workspace health
nu workspace.nu health
```
## Code Organization

### Nushell Code Structure

**File Organization**:

```plaintext
Extension Structure:
├── nulib/
│   ├── main.nu          # Main entry point
│   ├── core/            # Core functionality
└── templates/           # Template files
    ├── config.j2        # Configuration templates
    └── manifest.j2      # Manifest templates
```
**Function Naming Conventions**:

```nushell
# Use kebab-case for commands
def create-server [name: string] -> record { ... }
def validate-config [config: record] -> bool { ... }

def parse_config_file [path: string] -> record { ... }

def check-server-status [server: string] -> string { ... }
def get-server-info [server: string] -> record { ... }
def list-available-zones [] -> list<string> { ... }
```
**Error Handling Pattern**:

```nushell
def create-server [
    name: string
    --dry-run: bool = false
] -> record {
    # ... validation, dry-run handling, and provisioning ...

    # 4. Return result
    {server: $name, status: "created", id: (generate-id)}
}
```
### Rust Code Structure

**Project Organization**:

```plaintext
src/
├── lib.rs              # Library root
├── main.rs             # Binary entry point
├── config/             # Configuration handling
    ├── mod.rs
    ├── workflow.rs     # Workflow management
    └── task_queue.rs   # Task queue management
```
**Error Handling**:

```rust
use anyhow::{Context, Result};
use thiserror::Error;

#[derive(Error, Debug)]
// ... error type definition ...

pub fn create_server(name: &str) -> Result<ServerInfo> {
    // ... provisioning call ...
        .context("Failed to provision server")?;

    Ok(server)
}
```
### Nickel Schema Organization

**Schema Structure**:

```nickel
# Base schema definitions
let ServerConfig = {
  name | String,
  plan | String,
  zone | String,
  tags | { _ : String } | default = {},
} in

# Provider-specific extensions
let UpCloudServerConfig = ServerConfig & {
  template | String | default = "Ubuntu Server 22.04 LTS (Jammy Jellyfish)",
  storage | Number | default = 25,
} in

# Composition schemas
let InfrastructureConfig = {
  servers | Array Dyn,
  networks | Array Dyn | default = [],
  load_balancers | Array Dyn | default = [],
} in
InfrastructureConfig
```

## Testing Strategies

### Test-Driven Development

**TDD Workflow**:

1. **Write Test First**: Define expected behavior
2. **Run Test (Fail)**: Confirm test fails as expected
3. **Write Code**: Implement minimal code to pass
4. **Run Test (Pass)**: Confirm test now passes
5. **Refactor**: Improve code while keeping tests green

### Nushell Testing

**Unit Test Pattern**:

```nushell
# Function with embedded test
def validate-server-name [name: string] -> bool {
    # @test: "valid-name" -> true
    # @test: "" -> false
    # ... validation logic ...
}

def test_validate_server_name [] {
    # ... assertions ...
    print "✅ validate-server-name tests passed"
}
```
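As a sketch of how such a record contract might be exercised (the value and the reduced field set here are illustrative, not taken from the project's schemas), a value can be checked by applying the contract with `|`:

```nickel
# Hypothetical usage: apply a ServerConfig-style record contract to a value
let ServerConfig = {
  name | String,
  plan | String,
  zone | String,
} in
{
  name = "web-01",
  plan = "2xCPU-4GB",
  zone = "auto",
} | ServerConfig
```

Evaluating this with `nickel eval` should succeed, while dropping a field or passing a number where a string is expected should raise a contract violation at the offending field.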
**Integration Test Pattern**:

```nushell
# tests/integration/server-lifecycle-test.nu
def test_complete_server_lifecycle [] {
    # Setup
    let test_server = "test-server-" + (date now | format date "%Y%m%d%H%M%S")

    # ... create, verify, and tear down; on failure:
        exit 1
    # ...
}
```
### Rust Testing

**Unit Testing**:

```rust
#[cfg(test)]
mod tests {
    use super::*;
    use tokio_test;

    // ...

        assert_eq!(server.name, "test-server");
        assert_eq!(server.status, "created");
    }
}
```
**Integration Testing**:

```rust
#[cfg(test)]
mod integration_tests {
    use super::*;
    use testcontainers::*;

    // ...

        assert_eq!(result.status, WorkflowStatus::Completed);
    }
}
```
### Nickel Testing

**Schema Validation Testing**:

```bash
# Test Nickel schemas
nickel check schemas/

# Validate specific schemas
nickel typecheck schemas/server.ncl

# Test with examples
nickel eval schemas/server.ncl
```

### Test Automation

**Continuous Testing**:

```bash
# Watch for changes and run tests
cargo watch -x test -x check

# Watch Nushell files
find . -name "*.nu" | entr -r nu tests/run-all-tests.nu

# Automated testing in workspace
nu workspace.nu tools test-all --watch
```
## Debugging Techniques

### Debug Configuration

**Enable Debug Mode**:

```bash
# Environment variables
export PROVISIONING_DEBUG=true
export PROVISIONING_LOG_LEVEL=debug
export RUST_LOG=debug
export RUST_BACKTRACE=1

# Workspace debug
export PROVISIONING_WORKSPACE_USER=$USER
```
### Nushell Debugging

**Debug Techniques**:

```nushell
# Debug prints
def debug-server-creation [name: string] {
    print $"🐛 Creating server: ($name)"

    # ...
}

def debug-interactive [] {
    # Drop into interactive shell
    nu --interactive
}
```
**Error Investigation**:

```nushell
# Comprehensive error handling
def safe-server-creation [name: string] {
    try {
        create-server $name
    } catch { |e|
        # ... inspect and report the error ...
    }
}
```
### Rust Debugging

**Debug Logging**:

```rust
use tracing::{debug, info, warn, error, instrument};

#[instrument]
pub async fn create_server(name: &str) -> Result<ServerInfo> {
    // ...

    info!("Server {} created successfully", name);
    Ok(server)
}
```
**Interactive Debugging**:

```rust
// Use debugger breakpoints
#[cfg(debug_assertions)]
{
    println!("Debug: server creation starting");
    dbg!(&config);
    // Add breakpoint here in IDE
}
```
### Log Analysis

**Log Monitoring**:

```bash
# Follow all logs
tail -f workspace/runtime/logs/$USER/*.log

# Filter workflow activity
tail -f workspace/runtime/logs/$USER/orchestrator.log | grep -i workflow

# Structured log analysis (select filters entries; a bare comparison only prints booleans)
jq 'select(.level == "ERROR")' workspace/runtime/logs/$USER/structured.jsonl
```
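When `jq` is unavailable, a plain `grep` over the JSON-lines file gives a rough equivalent. A minimal, self-contained sketch (the log entries here are fabricated for illustration):

```shell
# Sketch: count ERROR entries in JSON-lines logs with grep alone
error_count=$(printf '%s\n' \
  '{"level":"INFO","msg":"started"}' \
  '{"level":"ERROR","msg":"queue write failed"}' \
  '{"level":"INFO","msg":"done"}' \
  | grep -c '"level":"ERROR"')

echo "$error_count"   # prints 1
```

This matches on the raw text rather than parsed JSON, so it can misfire on reordered keys or extra whitespace; `jq` remains the robust option.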
**Debug Log Levels**:

```bash
# Different verbosity levels
PROVISIONING_LOG_LEVEL=trace provisioning server create test
PROVISIONING_LOG_LEVEL=debug provisioning server create test
PROVISIONING_LOG_LEVEL=info provisioning server create test
```
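The level names imply an ordering. A sketch of the numeric threshold comparison such filtering typically performs (the mapping is an assumption, not taken from the provisioning code):

```shell
# Sketch: assumed numeric ordering behind PROVISIONING_LOG_LEVEL filtering
level_num() {
  case "$1" in
    trace) echo 0 ;;
    debug) echo 1 ;;
    info)  echo 2 ;;
    warn)  echo 3 ;;
    error) echo 4 ;;
  esac
}

threshold=$(level_num debug)

# A message is emitted when its level meets or exceeds the threshold
[ "$(level_num info)" -ge "$threshold" ]  && echo "info: emitted"
[ "$(level_num trace)" -ge "$threshold" ] || echo "trace: suppressed"
```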
## Integration Workflows

### Existing System Integration

**Working with Legacy Components**:

```bash
# Test integration with existing system
provisioning --version                 # Legacy system
src/core/nulib/provisioning --version  # New system

PROVISIONING_WORKSPACE_USER=$USER provisioning server list

# Validate configuration compatibility
provisioning validate config
nu workspace.nu config validate
```
### API Integration Testing

**REST API Testing**:

```bash
# Test orchestrator API
curl -X GET http://localhost:9090/health
curl -X GET http://localhost:9090/tasks

# Test workflow creation
curl -X POST http://localhost:9090/workflows/servers/create \
  -H "Content-Type: application/json" \
  -d '{"name": "test-server", "plan": "2xCPU-4GB"}'

# Monitor workflow
curl -X GET http://localhost:9090/workflows/batch/status/workflow-id
```
### Database Integration

**SurrealDB Integration**:

```nushell
# Test database connectivity
use core/nulib/lib_provisioning/database/surreal.nu
let db = (connect-database)
(test-connection $db)

let workflow_id = (create-workflow-record "test-workflow")
let status = (get-workflow-status $workflow_id)
assert ($status.status == "pending")
```
### External Tool Integration

**Container Integration**:

```bash
# Test with Docker
docker run --rm -v $(pwd):/work provisioning:dev provisioning --version

# Test with Kubernetes
kubectl logs test-pod

# Validate in different environments
make test-dist PLATFORM=docker
make test-dist PLATFORM=kubernetes
```
## Collaboration Guidelines

### Branch Strategy

**Branch Naming**:

- `feature/description` - New features
- `fix/description` - Bug fixes
- `docs/description` - Documentation updates
- `refactor/description` - Code refactoring
- `test/description` - Test improvements

**Workflow**:

```bash
# Start new feature
git checkout main
git pull origin main
git checkout -b feature/new-provider-support

git commit -m "feat(provider): implement server creation API"

# Push and create PR
git push origin feature/new-provider-support
gh pr create --title "Add new provider support" --body "..."
```
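A pre-push check for this naming convention might look like the following sketch (a hypothetical hook, not part of the project's tooling):

```shell
# Sketch: validate a branch name against the prefixes listed above
valid_branch() {
  case "$1" in
    feature/?*|fix/?*|docs/?*|refactor/?*|test/?*) return 0 ;;
    *) return 1 ;;
  esac
}

valid_branch "feature/new-provider-support" && echo "ok"
valid_branch "misc-changes"                 || echo "rejected"
```

The `?*` after each prefix requires at least one character after the slash, so a bare `feature/` is rejected as well.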
### Code Review Process

**Review Checklist**:

- [ ] Code follows project conventions
- [ ] Tests are included and passing
- [ ] Documentation is updated
- [ ] No hardcoded values
- [ ] Error handling is comprehensive
- [ ] Performance considerations addressed

**Review Commands**:

```bash
# Test PR locally
gh pr checkout 123
cd src/tools && make ci-test

nu workspace/extensions/providers/new-provider/tests/run-all.nu

# Check code quality
cargo clippy -- -D warnings
nu --check $(find . -name "*.nu")
```
### Documentation Requirements

**Code Documentation**:

```nushell
# Function documentation
def create-server [
    name: string     # Server name (must be unique)
    plan: string     # Server plan (for example, "2xCPU-4GB")
    --dry-run: bool  # Show what would be created without doing it
] -> record {        # Returns server creation result
    # Creates a new server with the specified configuration
    #
    # Examples:
    #   create-server "web-01" "2xCPU-4GB"
    #   create-server "test" "1xCPU-2GB" --dry-run

    # Implementation
}
```
### Communication

**Progress Updates**:

- Daily standup participation
- Weekly architecture reviews
- PR descriptions with context
- Issue tracking with details

**Knowledge Sharing**:

- Technical blog posts
- Architecture decision records
- Code review discussions
- Team documentation updates

## Quality Assurance

### Code Quality Checks

**Automated Quality Gates**:

```bash
# Pre-commit hooks
pre-commit install

# Manual quality check
make validate-all

# Security audit
cargo audit
```
**Quality Metrics**:

- Code coverage > 80%
- No critical security vulnerabilities
- All tests passing
- Documentation coverage complete
- Performance benchmarks met

### Performance Monitoring

**Performance Testing**:

```bash
# Benchmark builds
make benchmark

# Performance profiling
cargo flamegraph --bin provisioning-orchestrator

# Load testing
ab -n 1000 -c 10 http://localhost:9090/health
```
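The coverage target in the quality metrics can be turned into a simple scripted gate. A sketch with an illustrative coverage value (real pipelines would read the number from a coverage report):

```shell
# Sketch: enforce the >80% coverage quality metric (value is illustrative)
coverage=83

if [ "$coverage" -gt 80 ]; then
  echo "coverage gate passed"
else
  echo "coverage gate failed"
fi
```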
**Resource Monitoring**:

```bash
# Monitor during development
nu workspace/tools/runtime-manager.nu monitor --duration 5m

# Check resource usage
du -sh workspace/runtime/
df -h
```
## Best Practices

### Configuration Management

**Never Hardcode**:

```nushell
# Bad
def get-api-url [] { "https://api.upcloud.com" }

# Good
def get-api-url [] {
    get-config-value "providers.upcloud.api_url" "https://api.upcloud.com"
}
```
### Error Handling

**Comprehensive Error Context**:

```nushell
def create-server [name: string] {
    try {
        validate-server-name $name
    } catch { |e|
        # ... re-raise with actionable context ...
    }
}
```
### Resource Management

**Clean Up Resources**:

```nushell
def with-temporary-server [name: string, action: closure] {
    let server = (create-server $name)
    try {
        # ... run the action, cleaning up on failure ...
    }

    # Clean up on success
    delete-server $name
}
```
### Testing Best Practices

**Test Isolation**:

```nushell
def test-with-isolation [test_name: string, test_action: closure] {
    let test_workspace = $"test-($test_name)-(date now | format date '%Y%m%d%H%M%S')"
    try {
        # ... run the test inside the isolated workspace ...
    } catch { |e|
        # Always clean up the isolated workspace
        nu workspace.nu cleanup --user-name $test_workspace --type all --force
    }
}
```
This development workflow provides a comprehensive framework for efficient, quality-focused development while maintaining the project's architectural principles and ensuring smooth collaboration across the team.
This document explains how the new project structure integrates with existing systems, API compatibility and versioning, database migration strategies, deployment considerations, and monitoring and observability.
- Migration Pathways
- Troubleshooting Integration Issues
Provisioning has been designed with integration as a core principle, ensuring seamless compatibility between new development-focused components and existing production systems while providing clear migration pathways.
**Integration Principles**:
```bash
# All existing commands continue to work unchanged
./core/nulib/provisioning server create web-01 2xCPU-4GB
./core/nulib/provisioning taskserv install kubernetes
./core/nulib/provisioning cluster create buildkit

# New commands available alongside existing ones
./src/core/nulib/provisioning server create web-01 2xCPU-4GB --orchestrated
nu workspace/tools/workspace.nu health --detailed
```

**Path Resolution Integration**:

```nushell
# Automatic path resolution between systems
use workspace/lib/path-resolver.nu

# Resolves to workspace path if available, falls back to core
let config_path = (path-resolver resolve_path "config" "user" --fallback-to-core)

# Seamless extension discovery
let provider_path = (path-resolver resolve_extension "providers" "upcloud")
```
### Configuration System Bridge

**Dual Configuration Support**:

```nushell
# Configuration bridge supports both ENV and TOML
def get-config-value-bridge [key: string, default: string = ""] -> string {
    # Try new TOML configuration first
    let toml_value = try {
        # ...
    }

    # ... fall back to ENV, warning with:
    #   help: $"Migrate from ($env_key) environment variable to ($key) in config file"
}
```
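The TOML-first, ENV-fallback order reduces to a default-expansion in shell terms. A sketch with illustrative variable names (not the project's actual keys):

```shell
# Sketch: prefer the TOML-derived value, fall back to the legacy environment variable
toml_value=""                               # pretend the key is absent from TOML
env_value="https://api.example.invalid"     # legacy environment variable value

value="${toml_value:-$env_value}"
echo "$value"   # prints the ENV fallback
```

The `${a:-b}` expansion uses `b` only when `a` is unset or empty, which is exactly the bridge's lookup order.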
### Data Integration

**Shared Data Access**:

```nushell
# Unified data access across old and new systems
def get-server-info [server_name: string] -> record {
    # Try new orchestrator data store first
    let orchestrator_data = try {
        # ...
    }

    # ... fall back to legacy storage; if both miss:
    error make {msg: $"Server not found: ($server_name)"}
}
```
### Process Integration

**Hybrid Process Management**:

```nushell
# Orchestrator-aware process management
def create-server-integrated [
    name: string,
    plan: string,
    # ...
] {
    # ...
}

def check-orchestrator-available [] -> bool {
    try {
        # ...
    } catch {
        false
    }
}
```
## API Compatibility and Versioning

### REST API Versioning

**API Version Strategy**:

- **v1**: Legacy compatibility API (existing functionality)
- **v2**: Enhanced API with orchestrator features
- **v3**: Full workflow and batch operation support

**Version Header Support**:

```bash
# API calls with version specification
curl -H "API-Version: v1" http://localhost:9090/servers
curl -H "API-Version: v2" http://localhost:9090/workflows/servers/create
curl -H "API-Version: v3" http://localhost:9090/workflows/batch/submit
```
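Server-side, header-based dispatch boils down to a switch on the version string. A sketch with illustrative handler names (the real routing lives in the Rust compatibility layer below):

```shell
# Sketch: dispatch on the API-Version header value
route() {
  case "$1" in
    v1) echo "legacy handler" ;;
    v2) echo "workflow handler" ;;
    v3) echo "batch handler" ;;
    *)  echo "unsupported" ;;
  esac
}

route v2   # prints "workflow handler"
```

Unknown versions fall through to an explicit "unsupported" branch rather than silently defaulting, which keeps client errors visible.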
### API Compatibility Layer

**Backward Compatible Endpoints**:

```rust
// Rust API compatibility layer
#[derive(Debug, Serialize, Deserialize)]
struct ApiRequest {
    version: Option<String>,
    // ...
}

async fn handle_v1_request(payload: serde_json::Value) -> Result<ApiResponse> {
    // ...

    // Transform response to v1 format
    Ok(transform_to_v1_response(result))
}
```
### Schema Evolution

**Backward Compatible Schema Changes**:

```nickel
# API schema with version support
let ServerCreateRequest = {
  # V1 fields (always supported)
  name | String,
  plan | String,
  zone | String | default = "auto",

  # V2 additions (optional for backward compatibility)
  orchestrated | Bool | default = false,
  workflow_options | Dyn | optional,

  # V3 additions
  batch_options | Dyn | optional,
  dependencies | Array String | default = [],

  # Version constraints
  api_version | String | default = "v1",
} in

# Conditional validation based on API version
let WorkflowOptions = {
  wait_for_completion | Bool | default = true,
  timeout_seconds | Number | default = 300,
  retry_count | Number | default = 3,
} in
WorkflowOptions
```

### Client SDK Compatibility

**Multi-Version Client Support**:

```nushell
# Nushell client with version support
def "client create-server" [
    name: string,
    plan: string,
    # ...
] {
    # ... build the request, sending headers including:
    {
        "API-Version": $api_version
    }
}
```
## Database Migration Strategies

### Database Architecture Evolution

**Migration Strategy**:

```plaintext
Database Evolution Path
┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│   File-based    │ →  │     SQLite      │ →  │    SurrealDB    │
│    Storage      │    │    Migration    │    │   Full Schema   │
│                 │    │                 │    │                 │
│ - Text logs     │    │ - Transactions  │    │ - Real-time     │
│ - Simple state  │    │ - Backup/restore│    │ - Clustering    │
└─────────────────┘    └─────────────────┘    └─────────────────┘
```
### Migration Scripts

**Automated Database Migration**:

```nushell
# Database migration orchestration
def migrate-database [
    --from: string = "filesystem",
    --to: string = "surrealdb",
    # ...
] {
    # ... run and verify the migration ...

    print $"Migration from ($from) to ($to) completed successfully"
    {from: $from, to: $to, status: "completed", migrated_at: (date now)}
}
```
**File System to SurrealDB Migration**:

```nushell
def migrate_filesystem_to_surrealdb [] -> record {
    # Initialize SurrealDB connection
    let db = (connect-surrealdb)

    # ... copy records into SurrealDB ...

    {
        status: "completed"
    }
}
```
### Data Integrity Verification

**Migration Verification**:

```nushell
def verify-migration [from: string, to: string] -> record {
    print "Verifying data integrity..."

    let source_data = (read-source-data $from)

    # ... compare source and target records ...

    {
        verified_at: (date now)
    }
}
```
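At its core, verification compares record counts (and usually checksums) between source and target. A minimal count-comparison sketch with illustrative numbers:

```shell
# Sketch: the record-count comparison at the heart of verify-migration
src_count=42   # records read from the source store (illustrative)
dst_count=42   # records found in the target store (illustrative)

if [ "$src_count" -eq "$dst_count" ]; then
  echo "counts match"
else
  echo "count mismatch: src=$src_count dst=$dst_count"
fi
```

Count equality is necessary but not sufficient; per-record checksums catch corrupted or partially written entries that a count alone would miss.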
## Deployment Considerations

### Deployment Architecture

**Hybrid Deployment Model**:

```plaintext
Deployment Architecture
┌─────────────────────────────────────────────────────────────────┐
│                Load Balancer / Reverse Proxy                    │
└─────────────────────┬───────────────────────────────────────────┘
                      │
┌────────┐  ┌────────────┐  ┌────────┐
│- Files │  │- Compat    │  │- DB    │
│- Logs  │  │- Monitor   │  │- Queue │
└────────┘  └────────────┘  └────────┘
```
### Deployment Strategies

**Blue-Green Deployment**:

```bash
# Blue-Green deployment with integration bridge

# Phase 1: Deploy new system alongside existing (Green environment)
cd src/tools
make all

nginx-traffic-split --new-backend 90%

# Phase 4: Complete cutover
nginx-traffic-split --new-backend 100%
/opt/provisioning-v1/bin/orchestrator stop
```
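The traffic-split percentages determine complementary backend weights. A trivial sketch of the arithmetic behind the 90% step:

```shell
# Sketch: old/new backend weights behind a 90% split to the new system
new=90
old=$((100 - new))
echo "new=$new old=$old"   # prints "new=90 old=10"
```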
**Rolling Update**:

```nushell
def rolling-deployment [
    --target-version: string,
    --batch-size: int = 3,
    --health-check-interval: duration = 30sec
] {
    # ... update hosts in batches, health-checking between batches ...

    {
        completed_at: (date now)
    }
}
```
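The batching logic behind `--batch-size` can be sketched as follows (host names are illustrative):

```shell
# Sketch: group hosts into batches of --batch-size for a rolling update
batch_size=3
count=0
batch=1

for h in host1 host2 host3 host4 host5; do
  echo "batch $batch: $h"
  count=$((count + 1))
  if [ "$count" -eq "$batch_size" ]; then
    count=0
    batch=$((batch + 1))
  fi
done
```

With five hosts and a batch size of three, the loop yields two batches; a real deployment would run health checks between them before proceeding.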
### Configuration Deployment

**Environment-Specific Deployment**:

```bash
# Development deployment
PROVISIONING_ENV=dev ./deploy.sh \
    --config-source config.dev.toml \
    --enable-debug

# Production deployment
PROVISIONING_ENV=prod ./deploy.sh \
    --enable-all-monitoring \
    --backup-before-deploy \
    --health-check-timeout 5m
```
### Container Integration

**Docker Deployment with Bridge**:

```dockerfile
# Multi-stage Docker build supporting both systems
FROM rust:1.70 as builder
WORKDIR /app
COPY . .
# ...

ENV PROVISIONING_NEW_PATH=/app/bin
EXPOSE 8080
CMD ["/app/bin/bridge-start.sh"]
```
**Kubernetes Integration**:

```yaml
# Kubernetes deployment with bridge sidecar
apiVersion: apps/v1
kind: Deployment
metadata:
  # ...
spec:
  # ...
      volumes:
        - name: legacy-data
          persistentVolumeClaim:
            claimName: provisioning-data
```
## Monitoring and Observability

### Integrated Monitoring Architecture

**Monitoring Stack Integration**:

```plaintext
Observability Architecture
┌─────────────────────────────────────────────────────────────────┐
│                     Monitoring Dashboard                        │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐              │
└─────────────────────────────────────────────────────────────────┘
        ┌───────────────────┐
        │ - Compatibility   │
        │ - Migration       │
        └───────────────────┘
```
### Metrics Integration

**Unified Metrics Collection**:

```nushell
# Metrics bridge for legacy and new systems
def collect-system-metrics [] -> record {
    let legacy_metrics = collect-legacy-metrics
    let new_metrics = collect-new-metrics
    # ... merge and return ...
}

def collect-new-metrics [] -> record {
    {
        # ...
        database_stats: (get-database-metrics)
    }
}
```
-
-### Logging Integration
-
-**Unified Logging Strategy**:
-
-```nushell
-# Structured logging bridge
+
+
+Unified Logging Strategy :
+# Structured logging bridge
def log-integrated [
level: string,
message: string,
@@ -31471,14 +30438,10 @@ def log-integrated [
# Send to monitoring system
send-to-monitoring $log_entry
}
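A shell equivalent of emitting one structured log entry (a sketch; the field names are illustrative and the message is assumed to contain no double quotes):

```bash
# level, component, message -> one JSON log line on stdout
log_json() {
  printf '{"timestamp":"%s","level":"%s","component":"%s","message":"%s"}\n' \
    "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$1" "$2" "$3"
}

log_json info bridge "request routed to new system"
```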
-```plaintext
-
-### Health Check Integration
-
-**Comprehensive Health Monitoring**:
-
-```nushell
-def health-check-integrated [] -> record {
+
+
+Comprehensive Health Monitoring :
+def health-check-integrated [] -> record {
let health_checks = [
{name: "legacy-system", check: (check-legacy-health)},
{name: "orchestrator", check: (check-orchestrator-health)},
@@ -31508,16 +30471,11 @@ def health-check-integrated [] -> record {
checked_at: (date now)
}
}
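The roll-up rule (overall status is healthy only when every sub-check passes) can be sketched in shell; `overall_status` is an illustrative helper, not a real command:

```bash
# Arguments are name=pass|fail pairs; overall status is healthy only if all pass.
overall_status() {
  local failed=0
  for result in "$@"; do
    [ "${result#*=}" = "pass" ] || failed=$((failed + 1))
  done
  if [ "$failed" -eq 0 ]; then
    echo "healthy"
  else
    echo "degraded: $failed check(s) failing"
  fi
}

overall_status legacy-system=pass orchestrator=pass bridge=fail
```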
-```plaintext
-
-## Legacy System Bridge
-
-### Bridge Architecture
-
-**Bridge Component Design**:
-
-```nushell
-# Legacy system bridge module
+
+
+
+Bridge Component Design :
+# Legacy system bridge module
export module bridge {
# Bridge state management
export def init-bridge [] -> record {
@@ -31571,14 +30529,10 @@ export module bridge {
}
}
}
-```plaintext
-
-### Bridge Operation Modes
-
-**Compatibility Mode**:
-
-```nushell
-# Full compatibility with legacy system
+
+
+Compatibility Mode :
+# Full compatibility with legacy system
def run-compatibility-mode [] {
print "Starting bridge in compatibility mode..."
@@ -31599,12 +30553,9 @@ def run-compatibility-mode [] {
}
}
}
-```plaintext
-
-**Migration Mode**:
-
-```nushell
-# Gradual migration with traffic splitting
+
+Migration Mode :
+# Gradual migration with traffic splitting
def run-migration-mode [
--new-system-percentage: int = 50
] {
@@ -31627,39 +30578,33 @@ def run-migration-mode [
}
}
}
-```plaintext
-
-## Migration Pathways
-
-### Migration Phases
-
-**Phase 1: Parallel Deployment**
-
-- Deploy new system alongside existing
-- Enable bridge for compatibility
-- Begin data synchronization
-- Monitor integration health
-
-**Phase 2: Gradual Migration**
-
-- Route increasing traffic to new system
-- Migrate data in background
-- Validate consistency
-- Address integration issues
-
-**Phase 3: Full Migration**
-
-- Complete traffic cutover
-- Decommission legacy system
-- Clean up bridge components
-- Finalize data migration
-
-### Migration Automation
-
-**Automated Migration Orchestration**:
-
-```nushell
-def execute-migration-plan [
+
+
+
+Phase 1: Parallel Deployment
+
+Deploy new system alongside existing
+Enable bridge for compatibility
+Begin data synchronization
+Monitor integration health
+
+Phase 2: Gradual Migration
+
+Route increasing traffic to new system
+Migrate data in background
+Validate consistency
+Address integration issues
+
+Phase 3: Full Migration
+
+Complete traffic cutover
+Decommission legacy system
+Clean up bridge components
+Finalize data migration
+
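A phased cutover like the one above is usually driven by a ramp schedule; a minimal shell sketch (the percentages are illustrative, and a real rollout would invoke the bridge's migration mode and soak-test between steps instead of just printing):

```bash
# Print the staged cutover steps of a gradual migration.
ramp_schedule() {
  for pct in 10 25 50 75 100; do
    echo "cutover: route ${pct}% of traffic to new system"
  done
}

ramp_schedule
```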
+
+Automated Migration Orchestration :
+def execute-migration-plan [
migration_plan: string,
--dry-run: bool = false,
--skip-backup: bool = false
@@ -31709,12 +30654,9 @@ def execute-migration-plan [
results: $migration_results
}
}
-```plaintext
-
-**Migration Validation**:
-
-```nushell
-def validate-migration-readiness [] -> record {
+
+Migration Validation :
+def validate-migration-readiness [] -> record {
let checks = [
{name: "backup-available", check: (check-backup-exists)},
{name: "new-system-healthy", check: (check-new-system-health)},
@@ -31741,18 +30683,12 @@ def validate-migration-readiness [] -> record {
validated_at: (date now)
}
}
-```plaintext
-
-## Troubleshooting Integration Issues
-
-### Common Integration Problems
-
-#### API Compatibility Issues
-
-**Problem**: Version mismatch between client and server
-
-```bash
-# Diagnosis
+
+
+
+
+Problem : Version mismatch between client and server
+# Diagnosis
curl -H "API-Version: v1" http://localhost:9090/health
curl -H "API-Version: v2" http://localhost:9090/health
@@ -31761,14 +30697,10 @@ curl http://localhost:9090/api/versions
# Update client API version
export PROVISIONING_API_VERSION=v2
-```plaintext
-
-#### Configuration Bridge Issues
-
-**Problem**: Configuration not found in either system
-
-```nushell
-# Diagnosis
+
+
+Problem : Configuration not found in either system
+# Diagnosis
def diagnose-config-issue [key: string] -> record {
let toml_result = try {
get-config-value $key
@@ -31797,14 +30729,10 @@ def migrate-single-config [key: string] {
print $"Migrated ($key) from environment variable"
}
}
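The lookup-then-fallback order (TOML config first, then a legacy environment variable) can be sketched in shell; the `config.toml` location and the `PROVISIONING_*` naming here are assumptions for illustration:

```bash
# Try a flat key in a TOML file first, then fall back to a PROVISIONING_* env var.
get_config_value() {
  local key="$1" file="${2:-config.toml}" val env_name
  val=$(grep -E "^${key}[[:space:]]*=" "$file" 2>/dev/null \
        | head -n1 | cut -d= -f2- | tr -d ' "')
  if [ -n "$val" ]; then
    echo "$val"
    return 0
  fi
  env_name="PROVISIONING_$(echo "$key" | tr '[:lower:]' '[:upper:]' | tr '.' '_')"
  printenv "$env_name"
}
```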
-```plaintext
-
-#### Database Integration Issues
-
-**Problem**: Data inconsistency between systems
-
-```nushell
-# Diagnosis and repair
+
+
+Problem : Data inconsistency between systems
+# Diagnosis and repair
def repair-data-consistency [] -> record {
let legacy_data = (read-legacy-data)
let new_data = (read-new-data)
@@ -31832,27 +30760,20 @@ def repair-data-consistency [] -> record {
repaired_at: (date now)
}
}
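For flat ID lists, the missing/extra detection above reduces to set differences, for example with `comm` over sorted exports (a sketch; the file arguments are hypothetical):

```bash
# Set differences between two ID lists (one ID per line).
missing_in_new()    { comm -23 <(sort "$1") <(sort "$2"); }   # in legacy only
unexpected_in_new() { comm -13 <(sort "$1") <(sort "$2"); }   # in new only
```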
-```plaintext
-
-### Debug Tools
-
-**Integration Debug Mode**:
-
-```bash
-# Enable comprehensive debugging
+
+
+Integration Debug Mode :
+# Enable comprehensive debugging
export PROVISIONING_DEBUG=true
export PROVISIONING_LOG_LEVEL=debug
export PROVISIONING_BRIDGE_DEBUG=true
export PROVISIONING_INTEGRATION_TRACE=true
# Run with integration debugging
-provisioning server create test-server 2xCPU-4GB --debug-integration
-```plaintext
-
-**Health Check Debugging**:
-
-```nushell
-def debug-integration-health [] -> record {
+provisioning server create test-server 2xCPU-4GB --debug-integration
+
+Health Check Debugging :
+def debug-integration-health [] -> record {
print "=== Integration Health Debug ==="
# Check all integration points
@@ -31885,10 +30806,8 @@ def debug-integration-health [] -> record {
debug_timestamp: (date now)
}
}
-```plaintext
-
-This integration guide provides a comprehensive framework for seamlessly integrating new development components with existing production systems while maintaining reliability, compatibility, and clear migration pathways.
+This integration guide provides a comprehensive framework for seamlessly integrating new development components with existing production systems while maintaining reliability, compatibility, and clear migration pathways.
This document provides comprehensive documentation for the provisioning project’s build system, including the complete Makefile reference with 40+ targets, build tools, compilation instructions, and troubleshooting.
@@ -31902,19 +30821,19 @@ This integration guide provides a comprehensive framework for seamlessly integra
Troubleshooting
CI/CD Integration
-
+
The build system is a comprehensive, Makefile-based solution that orchestrates:
Rust compilation : Platform binaries (orchestrator, control-center, etc.)
Nushell bundling : Core libraries and CLI tools
-KCL validation : Configuration schema validation
+Nickel validation : Configuration schema validation
Distribution generation : Multi-platform packages
Release management : Automated release pipelines
Documentation generation : API and user documentation
Location : /src/tools/
Main entry point : /src/tools/Makefile
-
+
# Navigate to build system
cd src/tools
@@ -31966,7 +30885,7 @@ PARALLEL := true
make build-all - Build all components
-Runs: build-platform build-core validate-kcl
+Runs: build-platform build-core validate-nickel
Use for: Complete system compilation
make build-platform - Build platform binaries for all targets
@@ -31987,11 +30906,11 @@ nu tools/build/bundle-core.nu \
--validate \
--exclude-dev
-make validate-kcl - Validate and compile KCL schemas
-make validate-kcl
+make validate-nickel - Validate and compile Nickel schemas
+make validate-nickel
# Equivalent to:
-nu tools/build/validate-kcl.nu \
- --output-dir dist/kcl \
+nu tools/build/validate-nickel.nu \
+ --output-dir dist/schemas \
--format-code \
--check-dependencies
@@ -32091,7 +31010,7 @@ make dist-generate PLATFORMS=linux-amd64,macos-amd64 VARIANTS=complete
make validate-all - Validate all components
-KCL schema validation
+Nickel schema validation
Package validation
Configuration validation
@@ -32267,22 +31186,22 @@ Options:
Function signature validation
Test execution (if tests present)
-
-Purpose : Validates and compiles KCL schemas
+
+Purpose : Validates and compiles Nickel schemas
Validation Process :
-Syntax validation of all .k files
+Syntax validation of all .ncl files
Schema dependency checking
Type constraint validation
Example validation against schemas
Documentation generation
Usage :
-nu validate-kcl.nu [options]
+nu validate-nickel.nu [options]
Options:
- --output-dir STRING Output directory (default: dist/kcl)
- --format-code Format KCL code during validation
+ --output-dir STRING Output directory (default: dist/schemas)
+ --format-code Format Nickel code during validation
--check-dependencies Validate schema dependencies
--verbose Enable verbose logging
@@ -32330,7 +31249,7 @@ Options:
Platform binary compilation
Core library bundling
-KCL schema validation and packaging
+Nickel schema validation and packaging
Configuration system preparation
Documentation generation
Archive creation and compression
@@ -32502,7 +31421,7 @@ make linux # Linux AMD64
make macos # macOS AMD64
make windows # Windows AMD64
-
+
Required Tools :
@@ -32535,8 +31454,8 @@ make windows # Windows AMD64
# Install Nushell
cargo install nu
-# Install KCL
-cargo install kcl-cli
+# Install Nickel
+cargo install nickel-lang-cli
# Install Cross (for cross-compilation)
cargo install cross
@@ -32558,7 +31477,7 @@ cross clean
# Clean all caches
make clean SCOPE=cache
-
+
Error : linker 'cc' not found
@@ -32590,17 +31509,17 @@ chmod +x src/tools/build/*.nu
cd src/tools
nu build/compile-platform.nu --help
-
-Error : kcl command not found
-# Solution: Install KCL
-cargo install kcl-cli
+
+Error : nickel command not found
+# Solution: Install Nickel
+cargo install nickel-lang-cli
# or
-brew install kcl
+brew install nickel
Error : Schema validation failed
-# Solution: Check KCL syntax
-kcl fmt kcl/
-kcl check kcl/
+# Solution: Check Nickel syntax
+nickel format schemas/
+nickel typecheck schemas/
@@ -32674,7 +31593,7 @@ make status
# Tool information
make info
-
+
Example Workflow (.github/workflows/build.yml):
name: Build and Test
@@ -32754,7 +31673,7 @@ make ci-release
Best Practices
Troubleshooting
-
+
Provisioning supports three types of extensions that enable customization and expansion of functionality:
Providers : Cloud provider implementations for resource management
@@ -32790,21 +31709,17 @@ make ci-release
├── CI/CD Pipeline # Continuous integration/deployment
├── Data Platform # Data processing and analytics
└── Custom Clusters # User-defined clusters
-```plaintext
-
-### Extension Discovery
-
-**Discovery Order**:
-
-1. `workspace/extensions/{type}/{user}/{name}` - User-specific extensions
-2. `workspace/extensions/{type}/{name}` - Workspace shared extensions
-3. `workspace/extensions/{type}/template` - Templates
-4. Core system paths (fallback)
-
-**Path Resolution**:
-
-```nushell
-# Automatic extension discovery
+
+
+Discovery Order :
+
+workspace/extensions/{type}/{user}/{name} - User-specific extensions
+workspace/extensions/{type}/{name} - Workspace shared extensions
+workspace/extensions/{type}/template - Templates
+Core system paths (fallback)
+
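The precedence above is a first-match lookup over candidate directories; a shell sketch of the same rule (the helper name and root default are illustrative):

```bash
# First existing candidate wins: user-specific, then shared, then template.
resolve_extension_dir() {
  local type="$1" name="$2" user="$3" root="${4:-workspace/extensions}"
  local candidate
  for candidate in \
    "$root/$type/$user/$name" \
    "$root/$type/$name" \
    "$root/$type/template"; do
    if [ -d "$candidate" ]; then
      echo "$candidate"
      return 0
    fi
  done
  return 1   # caller falls through to core system paths
}
```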
+Path Resolution :
+# Automatic extension discovery
use workspace/lib/path-resolver.nu
# Find provider extension
@@ -32815,55 +31730,42 @@ let taskservs = (path-resolver list_extensions "taskservs" --include-core)
# Resolve cluster definition
let cluster_path = (path-resolver resolve_extension "clusters" "web-stack")
-```plaintext
-
-## Provider Development
-
-### Provider Architecture
-
-Providers implement cloud resource management through a standardized interface that supports multiple cloud platforms while maintaining consistent APIs.
-
-**Core Responsibilities**:
-
-- **Authentication**: Secure API authentication and credential management
-- **Resource Management**: Server creation, deletion, and lifecycle management
-- **Configuration**: Provider-specific settings and validation
-- **Error Handling**: Comprehensive error handling and recovery
-- **Rate Limiting**: API rate limiting and retry logic
-
-### Creating a New Provider
-
-**1. Initialize from Template**:
-
-```bash
-# Copy provider template
+
+
+
+Providers implement cloud resource management through a standardized interface that supports multiple cloud platforms while maintaining consistent APIs.
+Core Responsibilities :
+
+Authentication : Secure API authentication and credential management
+Resource Management : Server creation, deletion, and lifecycle management
+Configuration : Provider-specific settings and validation
+Error Handling : Comprehensive error handling and recovery
+Rate Limiting : API rate limiting and retry logic
+
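The rate-limiting responsibility usually amounts to retry-with-exponential-backoff around API calls; a generic sketch (not a real provisioning command):

```bash
# Usage: with_retries <max_attempts> <command...>
with_retries() {
  local max="$1"; shift
  local attempt=1 delay=1
  while ! "$@"; do
    if [ "$attempt" -ge "$max" ]; then
      return 1
    fi
    sleep "$delay"
    delay=$((delay * 2))      # exponential backoff
    attempt=$((attempt + 1))
  done
}
```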
+
+1. Initialize from Template :
+# Copy provider template
cp -r workspace/extensions/providers/template workspace/extensions/providers/my-cloud
# Navigate to new provider
cd workspace/extensions/providers/my-cloud
-```plaintext
-
-**2. Update Configuration**:
-
-```bash
-# Initialize provider metadata
+
+2. Update Configuration :
+# Initialize provider metadata
nu init-provider.nu \
--name "my-cloud" \
--display-name "MyCloud Provider" \
--author "$USER" \
--description "MyCloud platform integration"
-```plaintext
-
-### Provider Structure
-
-```plaintext
-my-cloud/
+
+
+my-cloud/
├── README.md # Provider documentation
-├── kcl/ # KCL configuration schemas
-│ ├── settings.k # Provider settings schema
-│ ├── servers.k # Server configuration schema
-│ ├── networks.k # Network configuration schema
-│ └── kcl.mod # KCL module dependencies
+├── schemas/ # Nickel configuration schemas
+│ ├── settings.ncl # Provider settings schema
+│ ├── servers.ncl # Server configuration schema
+│ ├── networks.ncl # Network configuration schema
+│ └── manifest.toml # Nickel module dependencies
├── nulib/ # Nushell implementation
│ ├── provider.nu # Main provider interface
│ ├── servers/ # Server management
@@ -32898,14 +31800,10 @@ my-cloud/
└── mock/ # Mock data and services
├── api-responses.json # Mock API responses
└── test-configs.toml # Test configurations
-```plaintext
-
-### Provider Implementation
-
-**Main Provider Interface** (`nulib/provider.nu`):
-
-```nushell
-#!/usr/bin/env nu
+
+
+Main Provider Interface (nulib/provider.nu):
+#!/usr/bin/env nu
# MyCloud Provider Implementation
# Provider metadata
@@ -33062,12 +31960,9 @@ export def "provider test" [
_ => (error make {msg: $"Unknown test type: ($test_type)"})
}
}
-```plaintext
-
-**Authentication Module** (`nulib/auth/client.nu`):
-
-```nushell
-# API client setup and authentication
+
+Authentication Module (nulib/auth/client.nu):
+# API client setup and authentication
export def setup_api_client [config: record] -> record {
# Validate credentials
@@ -33109,83 +32004,64 @@ def test_auth_api [client: record] -> bool {
$response.status == "success"
}
-```plaintext
+
+Nickel Configuration Schema (schemas/settings.ncl):
+# MyCloud Provider Configuration Schema
-**KCL Configuration Schema** (`kcl/settings.k`):
-
-```kcl
-# MyCloud Provider Configuration Schema
-
-schema MyCloudConfig:
- """MyCloud provider configuration"""
-
- api_url?: str = "https://api.my-cloud.com"
- api_key: str
- api_secret: str
- timeout?: int = 30
- retries?: int = 3
+let MyCloudConfig = {
+ # MyCloud provider configuration
+  api_url | String | default = "https://api.my-cloud.com",
+  api_key | String,
+  api_secret | String,
+  timeout | Number | default = 30,
+  retries | Number | default = 3,
# Rate limiting
- rate_limit?: {
- requests_per_minute?: int = 60
- burst_size?: int = 10
- } = {}
+ rate_limit | {
+    requests_per_minute | Number | default = 60,
+    burst_size | Number | default = 10,
+ } | default = {},
# Default settings
- defaults?: {
- zone?: str = "us-east-1"
- template?: str = "ubuntu-22.04"
- network?: str = "default"
- } = {}
+ defaults | {
+    zone | String | default = "us-east-1",
+    template | String | default = "ubuntu-22.04",
+    network | String | default = "default",
+ } | default = {},
+} in
+MyCloudConfig
- check:
- len(api_key) > 0, "API key cannot be empty"
- len(api_secret) > 0, "API secret cannot be empty"
- timeout > 0, "Timeout must be positive"
- retries >= 0, "Retries must be non-negative"
-
-schema MyCloudServerConfig:
- """MyCloud server configuration"""
-
- name: str
- plan: str
- zone?: str
- template?: str = "ubuntu-22.04"
- storage?: int = 25
- tags?: {str: str} = {}
+let MyCloudServerConfig = {
+ # MyCloud server configuration
+  name | String,
+  plan | String,
+  zone | String | optional,
+  template | String | default = "ubuntu-22.04",
+  storage | Number | default = 25,
+  tags | { _ : String } | default = {},
# Network configuration
- network?: {
- vpc_id?: str
- subnet_id?: str
- public_ip?: bool = true
- firewall_rules?: [FirewallRule] = []
- }
+ network | {
+    vpc_id | String | optional,
+    subnet_id | String | optional,
+    public_ip | Bool | default = true,
+    firewall_rules | Array Dyn | default = [],
+ } | optional,
+} in
+MyCloudServerConfig
- check:
- len(name) > 0, "Server name cannot be empty"
- plan in ["small", "medium", "large", "xlarge"], "Invalid plan"
- storage >= 10, "Minimum storage is 10GB"
- storage <= 2048, "Maximum storage is 2TB"
-
-schema FirewallRule:
- """Firewall rule configuration"""
-
- port: int | str
- protocol: str = "tcp"
- source: str = "0.0.0.0/0"
- description?: str
-
- check:
- protocol in ["tcp", "udp", "icmp"], "Invalid protocol"
-```plaintext
-
-### Provider Testing
-
-**Unit Testing** (`tests/unit/test-servers.nu`):
-
-```nushell
-# Unit tests for server management
+let FirewallRule = {
+ # Firewall rule configuration
+  port | Dyn,               # port number or named service string
+  protocol | String | default = "tcp",
+  source | String | default = "0.0.0.0/0",
+  description | String | optional,
+} in
+FirewallRule
+
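The schema-level checks from the original KCL contract (non-empty name, plan limited to small/medium/large/xlarge, storage between 10 and 2048 GB) can also be enforced imperatively at call time; an illustrative shell version:

```bash
# Validate server parameters against the documented constraints.
validate_server_config() {
  local name="$1" plan="$2" storage="$3"
  [ -n "$name" ] || { echo "server name cannot be empty"; return 1; }
  case "$plan" in
    small|medium|large|xlarge) ;;
    *) echo "invalid plan: $plan"; return 1 ;;
  esac
  if [ "$storage" -lt 10 ] || [ "$storage" -gt 2048 ]; then
    echo "storage must be between 10 and 2048 GB"
    return 1
  fi
  echo "ok"
}
```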
+
+Unit Testing (tests/unit/test-servers.nu):
+# Unit tests for server management
use ../../../nulib/provider.nu
@@ -33232,12 +32108,9 @@ def main [] {
test_invalid_plan
print "✅ All server management tests passed"
}
-```plaintext
-
-**Integration Testing** (`tests/integration/test-lifecycle.nu`):
-
-```nushell
-# Integration tests for complete server lifecycle
+
+Integration Testing (tests/integration/test-lifecycle.nu):
+# Integration tests for complete server lifecycle
use ../../../nulib/provider.nu
@@ -33270,54 +32143,41 @@ def main [] {
test_complete_lifecycle
print "✅ All integration tests passed"
}
-```plaintext
-
-## Task Service Development
-
-### Task Service Architecture
-
-Task services are infrastructure components that can be deployed and managed across different environments. They provide standardized interfaces for installation, configuration, and lifecycle management.
-
-**Core Responsibilities**:
-
-- **Installation**: Service deployment and setup
-- **Configuration**: Dynamic configuration management
-- **Health Checking**: Service status monitoring
-- **Version Management**: Automatic version updates from GitHub
-- **Integration**: Integration with other services and clusters
-
-### Creating a New Task Service
-
-**1. Initialize from Template**:
-
-```bash
-# Copy task service template
+
+
+
+Task services are infrastructure components that can be deployed and managed across different environments. They provide standardized interfaces for installation, configuration, and lifecycle management.
+Core Responsibilities :
+
+Installation : Service deployment and setup
+Configuration : Dynamic configuration management
+Health Checking : Service status monitoring
+Version Management : Automatic version updates from GitHub
+Integration : Integration with other services and clusters
+
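Version management resolves release downloads from asset patterns such as `my-service-{version}-linux-amd64.tar.gz` (shown in the version schema later in this guide); the substitution itself is a simple templating step:

```bash
# Expand a {version} placeholder in a release-asset pattern.
asset_name() {
  local pattern="$1" version="$2"
  echo "${pattern//\{version\}/$version}"
}

asset_name "my-service-{version}-linux-amd64.tar.gz" 1.4.2
# prints my-service-1.4.2-linux-amd64.tar.gz
```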
+
+1. Initialize from Template :
+# Copy task service template
cp -r workspace/extensions/taskservs/template workspace/extensions/taskservs/my-service
# Navigate to new service
cd workspace/extensions/taskservs/my-service
-```plaintext
-
-**2. Initialize Service**:
-
-```bash
-# Initialize service metadata
+
+2. Initialize Service :
+# Initialize service metadata
nu init-service.nu \
--name "my-service" \
--display-name "My Custom Service" \
--type "database" \
--github-repo "myorg/my-service"
-```plaintext
-
-### Task Service Structure
-
-```plaintext
-my-service/
+
+
+my-service/
├── README.md # Service documentation
-├── kcl/ # KCL schemas
-│ ├── version.k # Version and GitHub integration
-│ ├── config.k # Service configuration schema
-│ └── kcl.mod # Module dependencies
+├── schemas/ # Nickel schemas
+│ ├── version.ncl # Version and GitHub integration
+│ ├── config.ncl # Service configuration schema
+│ └── manifest.toml # Module dependencies
├── nushell/ # Nushell implementation
│ ├── taskserv.nu # Main service interface
│ ├── install.nu # Installation logic
@@ -33344,14 +32204,10 @@ my-service/
├── unit/ # Unit tests
├── integration/ # Integration tests
└── fixtures/ # Test fixtures and data
-```plaintext
-
-### Task Service Implementation
-
-**Main Service Interface** (`nushell/taskserv.nu`):
-
-```nushell
-#!/usr/bin/env nu
+
+
+Main Service Interface (nushell/taskserv.nu):
+#!/usr/bin/env nu
# My Custom Service Task Service Implementation
export const SERVICE_NAME = "my-service"
@@ -33554,136 +32410,120 @@ export def "taskserv test" [
_ => (error make {msg: $"Unknown test type: ($test_type)"})
}
}
-```plaintext
+
+Version Configuration (schemas/version.ncl):
+# Version management with GitHub integration
-**Version Configuration** (`kcl/version.k`):
-
-```kcl
-# Version management with GitHub integration
-
-version_config: VersionConfig = {
- service_name = "my-service"
+let version_config = {
+ service_name = "my-service",
# GitHub repository for version checking
github = {
- owner = "myorg"
- repo = "my-service"
+ owner = "myorg",
+ repo = "my-service",
# Release configuration
release = {
- tag_prefix = "v"
- prerelease = false
- draft = false
- }
+ tag_prefix = "v",
+ prerelease = false,
+ draft = false,
+ },
# Asset patterns for different platforms
assets = {
- linux_amd64 = "my-service-{version}-linux-amd64.tar.gz"
- darwin_amd64 = "my-service-{version}-darwin-amd64.tar.gz"
- windows_amd64 = "my-service-{version}-windows-amd64.zip"
- }
- }
+ linux_amd64 = "my-service-{version}-linux-amd64.tar.gz",
+ darwin_amd64 = "my-service-{version}-darwin-amd64.tar.gz",
+ windows_amd64 = "my-service-{version}-windows-amd64.zip",
+ },
+ },
# Version constraints and compatibility
compatibility = {
- min_kubernetes_version = "1.20.0"
- max_kubernetes_version = "1.28.*"
+ min_kubernetes_version = "1.20.0",
+ max_kubernetes_version = "1.28.*",
# Dependencies
requires = {
- "cert-manager": ">=1.8.0"
- "ingress-nginx": ">=1.0.0"
- }
+ "cert-manager" = ">=1.8.0",
+ "ingress-nginx" = ">=1.0.0",
+ },
# Conflicts
conflicts = {
- "old-my-service": "*"
- }
- }
+ "old-my-service" = "*",
+ },
+ },
# Installation configuration
installation = {
- default_namespace = "my-service"
- create_namespace = true
+ default_namespace = "my-service",
+ create_namespace = true,
# Resource requirements
resources = {
requests = {
- cpu = "100m"
- memory = "128Mi"
- }
+ cpu = "100m",
+ memory = "128Mi",
+ },
limits = {
- cpu = "500m"
- memory = "512Mi"
- }
- }
+ cpu = "500m",
+ memory = "512Mi",
+ },
+ },
# Persistence
persistence = {
- enabled = true
- storage_class = "default"
- size = "10Gi"
- }
- }
+ enabled = true,
+ storage_class = "default",
+ size = "10Gi",
+ },
+ },
# Health check configuration
health_check = {
- initial_delay_seconds = 30
- period_seconds = 10
- timeout_seconds = 5
- failure_threshold = 3
+ initial_delay_seconds = 30,
+ period_seconds = 10,
+ timeout_seconds = 5,
+ failure_threshold = 3,
# Health endpoints
endpoints = {
- liveness = "/health/live"
- readiness = "/health/ready"
- }
- }
-}
-```plaintext
-
-## Cluster Development
-
-### Cluster Architecture
-
-Clusters represent complete deployment solutions that combine multiple task services, providers, and configurations to create functional environments.
-
-**Core Responsibilities**:
-
-- **Service Orchestration**: Coordinate multiple task service deployments
-- **Dependency Management**: Handle service dependencies and startup order
-- **Configuration Management**: Manage cross-service configuration
-- **Health Monitoring**: Monitor overall cluster health
-- **Scaling**: Handle cluster scaling operations
-
-### Creating a New Cluster
-
-**1. Initialize from Template**:
-
-```bash
-# Copy cluster template
+ liveness = "/health/live",
+ readiness = "/health/ready",
+ },
+ },
+} in
+version_config
+
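The `requires`/`min_kubernetes_version` constraints above need a version comparison at install time; with version sort (`sort -V`) that is a one-liner (a sketch, assuming plain `x.y.z` version strings):

```bash
# True when $1 >= $2 under version-sort ordering.
version_at_least() {
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

version_at_least 1.21.3 1.20.0 && echo "kubernetes version OK"
```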
+
+
+Clusters represent complete deployment solutions that combine multiple task services, providers, and configurations to create functional environments.
+Core Responsibilities :
+
+Service Orchestration : Coordinate multiple task service deployments
+Dependency Management : Handle service dependencies and startup order
+Configuration Management : Manage cross-service configuration
+Health Monitoring : Monitor overall cluster health
+Scaling : Handle cluster scaling operations
+
+
+1. Initialize from Template :
+# Copy cluster template
cp -r workspace/extensions/clusters/template workspace/extensions/clusters/my-stack
# Navigate to new cluster
cd workspace/extensions/clusters/my-stack
-```plaintext
-
-**2. Initialize Cluster**:
-
-```bash
-# Initialize cluster metadata
+
+2. Initialize Cluster :
+# Initialize cluster metadata
nu init-cluster.nu \
--name "my-stack" \
--display-name "My Application Stack" \
--type "web-application"
-```plaintext
-
-### Cluster Implementation
-
-**Main Cluster Interface** (`nushell/cluster.nu`):
-
-```nushell
-#!/usr/bin/env nu
+
+
+Main Cluster Interface (nushell/cluster.nu):
+#!/usr/bin/env nu
# My Application Stack Cluster Implementation
export const CLUSTER_NAME = "my-stack"
@@ -33791,26 +32631,20 @@ export def "cluster delete" [
deleted_at: (date now)
}
}
-```plaintext
-
-## Testing and Validation
-
-### Testing Framework
-
-**Test Types**:
-
-- **Unit Tests**: Individual function and module testing
-- **Integration Tests**: Cross-component interaction testing
-- **End-to-End Tests**: Complete workflow testing
-- **Performance Tests**: Load and performance validation
-- **Security Tests**: Security and vulnerability testing
-
-### Extension Testing Commands
-
-**Workspace Testing Tools**:
-
-```bash
-# Validate extension syntax and structure
+
+
+
+Test Types :
+
+Unit Tests : Individual function and module testing
+Integration Tests : Cross-component interaction testing
+End-to-End Tests : Complete workflow testing
+Performance Tests : Load and performance validation
+Security Tests : Security and vulnerability testing
+
+
+Workspace Testing Tools :
+# Validate extension syntax and structure
nu workspace.nu tools validate-extension providers/my-cloud
# Run extension unit tests
@@ -33821,14 +32655,10 @@ nu workspace.nu tools test-extension clusters/my-stack --test-type integration -
# Performance testing
nu workspace.nu tools test-extension providers/my-cloud --test-type performance --duration 5m
-```plaintext
-
-### Automated Testing
-
-**Test Runner** (`tests/run-tests.nu`):
-
-```nushell
-#!/usr/bin/env nu
+
+
+Test Runner (tests/run-tests.nu):
+#!/usr/bin/env nu
# Automated test runner for extensions
def main [
@@ -33888,24 +32718,19 @@ def main [
completed_at: (date now)
}
}
-```plaintext
-
-## Publishing and Distribution
-
-### Extension Publishing
-
-**Publishing Process**:
-
-1. **Validation**: Comprehensive testing and validation
-2. **Documentation**: Complete documentation and examples
-3. **Packaging**: Create distribution packages
-4. **Registry**: Publish to extension registry
-5. **Versioning**: Semantic version tagging
-
-### Publishing Commands
-
-```bash
-# Validate extension for publishing
+
+
+
+Publishing Process :
+
+Validation : Comprehensive testing and validation
+Documentation : Complete documentation and examples
+Packaging : Create distribution packages
+Registry : Publish to extension registry
+Versioning : Semantic version tagging
+
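Semantic version tagging (step 5) is mechanical; a patch-bump sketch in shell (the helper is illustrative, not a workspace tool):

```bash
# Increment the patch component of an x.y.z version string.
bump_patch() {
  local major minor patch
  IFS=. read -r major minor patch <<< "$1"
  echo "$major.$minor.$((patch + 1))"
}

bump_patch 1.0.3   # prints 1.0.4
```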
+
+# Validate extension for publishing
nu workspace.nu tools validate-for-publish providers/my-cloud
# Create distribution package
@@ -33916,14 +32741,10 @@ nu workspace.nu tools publish-extension providers/my-cloud --registry official
# Tag version
nu workspace.nu tools tag-extension providers/my-cloud --version 1.0.0 --push
-```plaintext
-
-### Extension Registry
-
-**Registry Structure**:
-
-```plaintext
-Extension Registry
+
+
+Registry Structure :
+Extension Registry
├── providers/
│ ├── aws/ # Official AWS provider
│ ├── upcloud/ # Official UpCloud provider
@@ -33936,16 +32757,11 @@ Extension Registry
├── web-stacks/ # Web application stacks
├── data-platforms/ # Data processing platforms
└── ci-cd/ # CI/CD pipelines
-```plaintext
-
-## Best Practices
-
-### Code Quality
-
-**Function Design**:
-
-```nushell
-# Good: Single responsibility, clear parameters, comprehensive error handling
+
+
+
+Function Design :
+# Good: Single responsibility, clear parameters, comprehensive error handling
export def "provider create-server" [
name: string # Server name (must be unique in region)
plan: string # Server plan (see list-plans for options)
@@ -33969,12 +32785,9 @@ def create [n, p] {
# Missing validation and error handling
api_call $n $p
}
-```plaintext
-
-**Configuration Management**:
-
-```nushell
-# Good: Configuration-driven with validation
+
+Configuration Management :
+# Good: Configuration-driven with validation
def get_api_endpoint [provider: string] -> string {
let config = get-config-value $"providers.($provider).api_url"
@@ -33992,14 +32805,10 @@ def get_api_endpoint [provider: string] -> string {
def get_api_endpoint [] {
"https://api.provider.com" # Never hardcode!
}
-```plaintext
-
-### Error Handling
-
-**Comprehensive Error Context**:
-
-```nushell
-def create_server_with_context [name: string, config: record] -> record {
+
+
+Comprehensive Error Context :
+def create_server_with_context [name: string, config: record] -> record {
try {
# Validate configuration
validate_server_config $config
@@ -34038,14 +32847,10 @@ def create_server_with_context [name: string, config: record] -> record {
}
}
}
-```plaintext
-
-### Testing Practices
-
-**Test Organization**:
-
-```nushell
-# Organize tests by functionality
+
+
+Test Organization :
+# Organize tests by functionality
# tests/unit/server-creation-test.nu
def test_valid_server_creation [] {
@@ -34081,14 +32886,10 @@ def test_invalid_inputs [] {
}
}
}
-```plaintext
-
-### Documentation Standards
-
-**Function Documentation**:
-
-```nushell
-# Comprehensive function documentation
+
+
+Function Documentation :
+# Comprehensive function documentation
def "provider create-server" [
name: string # Server name - must be unique within the provider
plan: string # Server size plan (run 'provider list-plans' for options)
@@ -34125,74 +32926,52 @@ def "provider create-server" [
# Implementation...
}
-```plaintext
-
-## Troubleshooting
-
-### Common Development Issues
-
-#### Extension Not Found
-
-**Error**: `Extension 'my-provider' not found`
-
-```bash
-# Solution: Check extension location and structure
+
+
+
+
+Error : Extension 'my-provider' not found
+# Solution: Check extension location and structure
ls -la workspace/extensions/providers/my-provider
nu workspace/lib/path-resolver.nu resolve_extension "providers" "my-provider"
# Validate extension structure
nu workspace.nu tools validate-extension providers/my-provider
-```plaintext
+
+
+Error : Invalid Nickel configuration
+# Solution: Validate Nickel syntax
+nickel typecheck workspace/extensions/providers/my-provider/schemas/
-#### Configuration Errors
-
-**Error**: `Invalid KCL configuration`
-
-```bash
-# Solution: Validate KCL syntax
-kcl check workspace/extensions/providers/my-provider/kcl/
-
-# Format KCL files
-kcl fmt workspace/extensions/providers/my-provider/kcl/
+# Format Nickel files
+nickel format workspace/extensions/providers/my-provider/schemas/
# Test with example data
-kcl run workspace/extensions/providers/my-provider/kcl/settings.k -D api_key="test"
-```plaintext
-
-#### API Integration Issues
-
-**Error**: `Authentication failed`
-
-```bash
-# Solution: Test credentials and connectivity
+nickel eval workspace/extensions/providers/my-provider/schemas/settings.ncl
+
+
+Error : Authentication failed
+# Solution: Test credentials and connectivity
curl -H "Authorization: Bearer $API_KEY" https://api.provider.com/auth/test
# Debug API calls
export PROVISIONING_DEBUG=true
export PROVISIONING_LOG_LEVEL=debug
nu workspace/extensions/providers/my-provider/nulib/provider.nu test --test-type basic
-```plaintext
-
-### Debug Mode
-
-**Enable Extension Debugging**:
-
-```bash
-# Set debug environment
+
+
+Enable Extension Debugging :
+# Set debug environment
export PROVISIONING_DEBUG=true
export PROVISIONING_LOG_LEVEL=debug
export PROVISIONING_WORKSPACE_USER=$USER
# Run extension with debug
nu workspace/extensions/providers/my-provider/nulib/provider.nu create-server test-server small --dry-run
-```plaintext
-
-### Performance Optimization
-
-**Extension Performance**:
-
-```bash
-# Profile extension performance
+
+
+Extension Performance :
+# Profile extension performance
time nu workspace/extensions/providers/my-provider/nulib/provider.nu list-servers
# Monitor resource usage
@@ -34201,10 +32980,8 @@ nu workspace/tools/runtime-manager.nu monitor --duration 1m --interval 5s
# Optimize API calls (use caching)
export PROVISIONING_CACHE_ENABLED=true
export PROVISIONING_CACHE_TTL=300 # 5 minutes
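The two cache variables above can be honored inside an extension with a small freshness check. A minimal sketch, assuming TTL-in-seconds semantics; the helper name and the `refresh_server_list` callee are hypothetical, not part of the provisioning API:

```shell
#!/bin/sh
# cache_fresh: succeed if FILE exists and is younger than
# PROVISIONING_CACHE_TTL seconds (default 300, matching the docs above).
# Hypothetical helper -- the variable semantics are assumed.
cache_fresh() {
  file="$1"
  ttl="${PROVISIONING_CACHE_TTL:-300}"
  [ -f "$file" ] || return 1
  # stat -c is GNU, stat -f is BSD/macOS; try both for portability
  mtime=$(stat -c %Y "$file" 2>/dev/null || stat -f %m "$file")
  now=$(date +%s)
  [ $((now - mtime)) -lt "$ttl" ]
}

# Usage: re-fetch the server list only when the cache has gone stale
# cache_fresh /tmp/provider-servers.json || refresh_server_list
```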
-```plaintext
-
-This extension development guide provides a comprehensive framework for creating high-quality, maintainable extensions that integrate seamlessly with provisioning's architecture and workflows.
+This extension development guide provides a comprehensive framework for creating high-quality, maintainable extensions that integrate seamlessly with provisioning’s architecture and workflows.
This document provides comprehensive documentation for the provisioning project’s distribution process, covering release workflows, package generation, multi-platform distribution, and rollback procedures.
@@ -34220,7 +32997,7 @@ This extension development guide provides a comprehensive framework for creating
CI/CD Integration
Troubleshooting
-
+
The distribution system provides a comprehensive solution for creating, packaging, and distributing provisioning across multiple platforms with automated release management.
Key Features :
@@ -34252,19 +33029,16 @@ This extension development guide provides a comprehensive framework for creating
├── Checksums # SHA256/MD5 verification
├── Signatures # Digital signatures
└── Metadata # Release information
-```plaintext
-
-### Build Pipeline
-
-```plaintext
-Build Pipeline Flow
+
+
+Build Pipeline Flow
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ Source Code │ -> │ Build Stage │ -> │ Package Stage │
│ │ │ │ │ │
│ - Rust code │ │ - compile- │ │ - create- │
│ - Nushell libs │ │ platform │ │ archives │
-│ - KCL schemas │ │ - bundle-core │ │ - build- │
-│ - Config files │ │ - validate-kcl │ │ containers │
+│ - Nickel schemas│ │ - bundle-core │ │ - build- │
+│ - Config files │ │ - validate-nickel│ │ containers │
└─────────────────┘ └─────────────────┘ └─────────────────┘
|
v
@@ -34276,45 +33050,37 @@ Build Pipeline Flow
│ - upload- │ │ package │ │ - create- │
│ artifacts │ │ - integration │ │ installers │
└─────────────────┘ └─────────────────┘ └─────────────────┘
-```plaintext
-
-### Distribution Variants
-
-**Complete Distribution**:
-
-- All Rust binaries (orchestrator, control-center, MCP server)
-- Full Nushell library suite
-- All providers, taskservs, and clusters
-- Complete documentation and examples
-- Development tools and templates
-
-**Minimal Distribution**:
-
-- Essential binaries only
-- Core Nushell libraries
-- Basic provider support
-- Essential task services
-- Minimal documentation
-
-## Release Process
-
-### Release Types
-
-**Release Classifications**:
-
-- **Major Release** (x.0.0): Breaking changes, new major features
-- **Minor Release** (x.y.0): New features, backward compatible
-- **Patch Release** (x.y.z): Bug fixes, security updates
-- **Pre-Release** (x.y.z-alpha/beta/rc): Development/testing releases
-
-### Step-by-Step Release Process
-
-#### 1. Preparation Phase
-
-**Pre-Release Checklist**:
-
-```bash
-# Update dependencies and security
+
+
+Complete Distribution :
+
+All Rust binaries (orchestrator, control-center, MCP server)
+Full Nushell library suite
+All providers, taskservs, and clusters
+Complete documentation and examples
+Development tools and templates
+
+Minimal Distribution :
+
+Essential binaries only
+Core Nushell libraries
+Basic provider support
+Essential task services
+Minimal documentation
+
+
+
+Release Classifications :
+
+Major Release (x.0.0): Breaking changes, new major features
+Minor Release (x.y.0): New features, backward compatible
+Patch Release (x.y.z): Bug fixes, security updates
+Pre-Release (x.y.z-alpha/beta/rc): Development/testing releases
+
+
+
+Pre-Release Checklist :
+# Update dependencies and security
cargo update
cargo audit
@@ -34326,12 +33092,9 @@ make docs
# Validate all configurations
make validate-all
-```plaintext
-
-**Version Planning**:
-
-```bash
-# Check current version
+
+Version Planning :
+# Check current version
git describe --tags --always
# Plan next version
@@ -34339,14 +33102,10 @@ make status | grep Version
# Validate version bump
nu src/tools/release/create-release.nu --dry-run --version 2.1.0
-```plaintext
-
-#### 2. Build Phase
-
-**Complete Build**:
-
-```bash
-# Clean build environment
+
+
+Complete Build :
+# Clean build environment
make clean
# Build all platforms and variants
@@ -34354,12 +33113,9 @@ make all
# Validate build output
make test-dist
-```plaintext
-
-**Build with Specific Parameters**:
-
-```bash
-# Build for specific platforms
+
+Build with Specific Parameters :
+# Build for specific platforms
make all PLATFORMS=linux-amd64,macos-amd64 VARIANTS=complete
# Build with custom version
@@ -34367,14 +33123,10 @@ make all VERSION=2.1.0-rc1
# Parallel build for speed
make all PARALLEL=true
-```plaintext
-
-#### 3. Package Generation
-
-**Create Distribution Packages**:
-
-```bash
-# Generate complete distributions
+
+
+Create Distribution Packages :
+# Generate complete distributions
make dist-generate
# Create binary packages
@@ -34385,12 +33137,9 @@ make package-containers
# Create installers
make create-installers
-```plaintext
-
-**Package Validation**:
-
-```bash
-# Validate packages
+
+Package Validation :
+# Validate packages
make test-dist
# Check package contents
@@ -34399,14 +33148,10 @@ nu src/tools/package/validate-package.nu packages/
# Test installation
make install
make uninstall
-```plaintext
-
-#### 4. Release Creation
-
-**Automated Release**:
-
-```bash
-# Create complete release
+
+
+Automated Release :
+# Create complete release
make release VERSION=2.1.0
# Create draft release for review
@@ -34418,22 +33163,18 @@ nu src/tools/release/create-release.nu \
--generate-changelog \
--push-tag \
--auto-upload
-```plaintext
-
-**Release Options**:
-
-- `--pre-release`: Mark as pre-release
-- `--draft`: Create draft release
-- `--generate-changelog`: Auto-generate changelog from commits
-- `--push-tag`: Push git tag to remote
-- `--auto-upload`: Upload assets automatically
-
-#### 5. Distribution and Notification
-
-**Upload Artifacts**:
-
-```bash
-# Upload to GitHub Releases
+
+Release Options :
+
+--pre-release: Mark as pre-release
+--draft: Create draft release
+--generate-changelog: Auto-generate changelog from commits
+--push-tag: Push git tag to remote
+--auto-upload: Upload assets automatically
+
+
+Upload Artifacts :
+# Upload to GitHub Releases
make upload-artifacts
# Update package registries
@@ -34441,12 +33182,9 @@ make update-registry
# Send notifications
make notify-release
-```plaintext
-
-**Registry Updates**:
-
-```bash
-# Update Homebrew formula
+
+Registry Updates :
+# Update Homebrew formula
nu src/tools/release/update-registry.nu \
--registries homebrew \
--version 2.1.0 \
@@ -34457,14 +33195,10 @@ nu src/tools/release/update-registry.nu \
--registries custom \
--registry-url https://packages.company.com \
--credentials-file ~/.registry-creds
-```plaintext
-
-### Release Automation
-
-**Complete Automated Release**:
-
-```bash
-# Full release pipeline
+
+
+Complete Automated Release :
+# Full release pipeline
make cd-deploy VERSION=2.1.0
# Equivalent manual steps:
@@ -34476,23 +33210,18 @@ make release VERSION=2.1.0
make upload-artifacts
make update-registry
make notify-release
-```plaintext
-
-## Package Generation
-
-### Binary Packages
-
-**Package Types**:
-
-- **Standalone Archives**: TAR.GZ and ZIP with all dependencies
-- **Platform Packages**: DEB, RPM, MSI, PKG with system integration
-- **Portable Packages**: Single-directory distributions
-- **Source Packages**: Source code with build instructions
-
-**Create Binary Packages**:
-
-```bash
-# Standard binary packages
+
+
+
+Package Types :
+
+Standalone Archives : TAR.GZ and ZIP with all dependencies
+Platform Packages : DEB, RPM, MSI, PKG with system integration
+Portable Packages : Single-directory distributions
+Source Packages : Source code with build instructions
+
+Create Binary Packages :
+# Standard binary packages
make package-binaries
# Custom package creation
@@ -34504,21 +33233,17 @@ nu src/tools/package/package-binaries.nu \
--compress \
--strip \
--checksum
-```plaintext
-
-**Package Features**:
-
-- **Binary Stripping**: Removes debug symbols for smaller size
-- **Compression**: GZIP, LZMA, and Brotli compression
-- **Checksums**: SHA256 and MD5 verification
-- **Signatures**: GPG and code signing support
-
-### Container Images
-
-**Container Build Process**:
-
-```bash
-# Build container images
+
+Package Features :
+
+Binary Stripping : Removes debug symbols for smaller size
+Compression : GZIP, LZMA, and Brotli compression
+Checksums : SHA256 and MD5 verification
+Signatures : GPG and code signing support
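The checksum feature above can be reproduced by hand for a directory of archives. A sketch under assumed names; the real packaging tools may lay files out differently:

```shell
#!/bin/sh
# checksum_dir: write a SHA256 manifest for every archive in DIR,
# then verify it the way a consumer would before installing.
# The directory layout and manifest name are assumptions.
checksum_dir() {
  dir="$1"
  ( cd "$dir" &&
    sha256sum *.tar.gz > checksums.sha256 &&
    sha256sum -c checksums.sha256 )
}

# Example: checksum_dir packages
```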
+
+
+Container Build Process :
+# Build container images
make package-containers
# Advanced container build
@@ -34530,38 +33255,34 @@ nu src/tools/package/build-containers.nu \
--optimize-size \
--security-scan \
--multi-stage
-```plaintext
-
-**Container Features**:
-
-- **Multi-Stage Builds**: Minimal runtime images
-- **Security Scanning**: Vulnerability detection
-- **Multi-Platform**: AMD64, ARM64 support
-- **Layer Optimization**: Efficient layer caching
-- **Runtime Configuration**: Environment-based configuration
-
-**Container Registry Support**:
-
-- Docker Hub
-- GitHub Container Registry
-- Amazon ECR
-- Google Container Registry
-- Azure Container Registry
-- Private registries
-
-### Installers
-
-**Installer Types**:
-
-- **Shell Script Installer**: Universal Unix/Linux installer
-- **Package Installers**: DEB, RPM, MSI, PKG
-- **Container Installer**: Docker/Podman setup
-- **Source Installer**: Build-from-source installer
-
-**Create Installers**:
-
-```bash
-# Generate all installer types
+
+Container Features :
+
+Multi-Stage Builds : Minimal runtime images
+Security Scanning : Vulnerability detection
+Multi-Platform : AMD64, ARM64 support
+Layer Optimization : Efficient layer caching
+Runtime Configuration : Environment-based configuration
+
+Container Registry Support :
+
+Docker Hub
+GitHub Container Registry
+Amazon ECR
+Google Container Registry
+Azure Container Registry
+Private registries
+
+
+Installer Types :
+
+Shell Script Installer : Universal Unix/Linux installer
+Package Installers : DEB, RPM, MSI, PKG
+Container Installer : Docker/Podman setup
+Source Installer : Build-from-source installer
+
+Create Installers :
+# Generate all installer types
make create-installers
# Custom installer creation
@@ -34573,43 +33294,37 @@ nu src/tools/distribution/create-installer.nu \
--include-services \
--create-uninstaller \
--validate-installer
-```plaintext
-
-**Installer Features**:
-
-- **System Integration**: Systemd/Launchd service files
-- **Path Configuration**: Automatic PATH updates
-- **User/System Install**: Support for both user and system-wide installation
-- **Uninstaller**: Clean removal capability
-- **Dependency Management**: Automatic dependency resolution
-- **Configuration Setup**: Initial configuration creation
-
-## Multi-Platform Distribution
-
-### Supported Platforms
-
-**Primary Platforms**:
-
-- **Linux AMD64** (x86_64-unknown-linux-gnu)
-- **Linux ARM64** (aarch64-unknown-linux-gnu)
-- **macOS AMD64** (x86_64-apple-darwin)
-- **macOS ARM64** (aarch64-apple-darwin)
-- **Windows AMD64** (x86_64-pc-windows-gnu)
-- **FreeBSD AMD64** (x86_64-unknown-freebsd)
-
-**Platform-Specific Features**:
-
-- **Linux**: SystemD integration, package manager support
-- **macOS**: LaunchAgent services, Homebrew packages
-- **Windows**: Windows Service support, MSI installers
-- **FreeBSD**: RC scripts, pkg packages
-
-### Cross-Platform Build
-
-**Cross-Compilation Setup**:
-
-```bash
-# Install cross-compilation targets
+
+Installer Features :
+
+System Integration : Systemd/Launchd service files
+Path Configuration : Automatic PATH updates
+User/System Install : Support for both user and system-wide installation
+Uninstaller : Clean removal capability
+Dependency Management : Automatic dependency resolution
+Configuration Setup : Initial configuration creation
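A user-level install with the features above reduces to a few lines. A deliberately minimal sketch; the binary name and default prefix are assumptions, and the real installers additionally handle services, PATH setup, and uninstallation:

```shell
#!/bin/sh
# install_bin: copy BINARY into PREFIX/bin with executable permissions,
# creating the directory if needed. Minimal sketch of the shell-script
# installer; not the real installer's logic.
install_bin() {
  binary="$1"
  prefix="${2:-$HOME/.local}"
  mkdir -p "$prefix/bin" || return 1
  install -m 0755 "$binary" "$prefix/bin/$(basename "$binary")"
}

# Example: install_bin ./dist/provisioning /usr/local   (system-wide needs sudo)
```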
+
+
+
+Primary Platforms :
+
+Linux AMD64 (x86_64-unknown-linux-gnu)
+Linux ARM64 (aarch64-unknown-linux-gnu)
+macOS AMD64 (x86_64-apple-darwin)
+macOS ARM64 (aarch64-apple-darwin)
+Windows AMD64 (x86_64-pc-windows-gnu)
+FreeBSD AMD64 (x86_64-unknown-freebsd)
+
+Platform-Specific Features :
+
+Linux : SystemD integration, package manager support
+macOS : LaunchAgent services, Homebrew packages
+Windows : Windows Service support, MSI installers
+FreeBSD : RC scripts, pkg packages
+
+
+Cross-Compilation Setup :
+# Install cross-compilation targets
rustup target add aarch64-unknown-linux-gnu
rustup target add x86_64-apple-darwin
rustup target add aarch64-apple-darwin
@@ -34617,12 +33332,9 @@ rustup target add x86_64-pc-windows-gnu
# Install cross-compilation tools
cargo install cross
-```plaintext
-
-**Platform-Specific Builds**:
-
-```bash
-# Build for specific platform
+
+Platform-Specific Builds :
+# Build for specific platform
make build-platform RUST_TARGET=aarch64-apple-darwin
# Build for multiple platforms
@@ -34632,14 +33344,10 @@ make build-cross PLATFORMS=linux-amd64,macos-arm64,windows-amd64
make linux
make macos
make windows
-```plaintext
-
-### Distribution Matrix
-
-**Generated Distributions**:
-
-```plaintext
-Distribution Matrix:
+
+
+Generated Distributions :
+Distribution Matrix:
provisioning-{version}-{platform}-{variant}.{format}
Examples:
@@ -34647,24 +33355,19 @@ Examples:
- provisioning-2.1.0-macos-arm64-minimal.tar.gz
- provisioning-2.1.0-windows-amd64-complete.zip
- provisioning-2.1.0-freebsd-amd64-minimal.tar.xz
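The naming scheme expands mechanically over the platform and variant sets. A sketch of generating the full artifact list; the platform list and format choices here are illustrative:

```shell
#!/bin/sh
# matrix: expand the distribution matrix into concrete artifact names,
# following provisioning-{version}-{platform}-{variant}.{format}.
matrix() {
  version="2.1.0"
  for platform in linux-amd64 linux-arm64 macos-amd64 macos-arm64 windows-amd64; do
    for variant in complete minimal; do
      case "$platform" in
        windows-*) fmt=zip ;;     # Windows archives are ZIP
        *)         fmt=tar.gz ;;  # Unix-like archives are TAR.GZ
      esac
      echo "provisioning-${version}-${platform}-${variant}.${fmt}"
    done
  done
}
matrix
```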
-```plaintext
-
-**Platform Considerations**:
-
-- **File Permissions**: Executable permissions on Unix systems
-- **Path Separators**: Platform-specific path handling
-- **Service Integration**: Platform-specific service management
-- **Package Formats**: TAR.GZ for Unix, ZIP for Windows
-- **Line Endings**: CRLF for Windows, LF for Unix
-
-## Validation and Testing
-
-### Distribution Validation
-
-**Validation Pipeline**:
-
-```bash
-# Complete validation
+
+Platform Considerations :
+
+File Permissions : Executable permissions on Unix systems
+Path Separators : Platform-specific path handling
+Service Integration : Platform-specific service management
+Package Formats : TAR.GZ for Unix, ZIP for Windows
+Line Endings : CRLF for Windows, LF for Unix
+
+
+
+Validation Pipeline :
+# Complete validation
make test-dist
# Custom validation
@@ -34674,42 +33377,34 @@ nu src/tools/build/test-distribution.nu \
--platform linux \
--cleanup \
--verbose
-```plaintext
-
-**Validation Types**:
-
-- **Basic**: Installation test, CLI help, version check
-- **Integration**: Server creation, configuration validation
-- **Complete**: Full workflow testing including cluster operations
-
-### Testing Framework
-
-**Test Categories**:
-
-- **Unit Tests**: Component-specific testing
-- **Integration Tests**: Cross-component testing
-- **End-to-End Tests**: Complete workflow testing
-- **Performance Tests**: Load and performance validation
-- **Security Tests**: Security scanning and validation
-
-**Test Execution**:
-
-```bash
-# Run all tests
+
+Validation Types :
+
+Basic : Installation test, CLI help, version check
+Integration : Server creation, configuration validation
+Complete : Full workflow testing including cluster operations
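The "basic" tier amounts to a smoke test of the installed CLI. A sketch; the exact checks the real test suite runs are assumptions:

```shell
#!/bin/sh
# smoke_test: minimal "basic" validation -- the binary is on PATH,
# runs, and reports a version. Checks are illustrative only.
smoke_test() {
  bin="$1"
  command -v "$bin" > /dev/null || return 1
  "$bin" --version > /dev/null 2>&1 || return 1
}

# Example: smoke_test provisioning && echo "basic validation passed"
```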
+
+
+Test Categories :
+
+Unit Tests : Component-specific testing
+Integration Tests : Cross-component testing
+End-to-End Tests : Complete workflow testing
+Performance Tests : Load and performance validation
+Security Tests : Security scanning and validation
+
+Test Execution :
+# Run all tests
make ci-test
# Specific test types
nu src/tools/build/test-distribution.nu --test-types basic
nu src/tools/build/test-distribution.nu --test-types integration
nu src/tools/build/test-distribution.nu --test-types complete
-```plaintext
-
-### Package Validation
-
-**Package Integrity**:
-
-```bash
-# Validate package structure
+
+
+Package Integrity :
+# Validate package structure
nu src/tools/package/validate-package.nu dist/
# Check checksums
@@ -34717,12 +33412,9 @@ sha256sum -c packages/checksums.sha256
# Verify signatures
gpg --verify packages/provisioning-2.1.0.tar.gz.sig
-```plaintext
-
-**Installation Testing**:
-
-```bash
-# Test installation process
+
+Installation Testing :
+# Test installation process
./packages/installers/install-provisioning-2.1.0.sh --dry-run
# Test uninstallation
@@ -34730,43 +33422,34 @@ gpg --verify packages/provisioning-2.1.0.tar.gz.sig
# Container testing
docker run --rm provisioning:2.1.0 provisioning --version
-```plaintext
-
-## Release Management
-
-### Release Workflow
-
-**GitHub Release Integration**:
-
-```bash
-# Create GitHub release
+
+
+
+GitHub Release Integration :
+# Create GitHub release
nu src/tools/release/create-release.nu \
--version 2.1.0 \
--asset-dir packages \
--generate-changelog \
--push-tag \
--auto-upload
-```plaintext
-
-**Release Features**:
-
-- **Automated Changelog**: Generated from git commit history
-- **Asset Management**: Automatic upload of all distribution artifacts
-- **Tag Management**: Semantic version tagging
-- **Release Notes**: Formatted release notes with change summaries
-
-### Versioning Strategy
-
-**Semantic Versioning**:
-
-- **MAJOR.MINOR.PATCH** format (e.g., 2.1.0)
-- **Pre-release** suffixes (e.g., 2.1.0-alpha.1, 2.1.0-rc.2)
-- **Build metadata** (e.g., 2.1.0+20250925.abcdef)
-
-**Version Detection**:
-
-```bash
-# Auto-detect next version
+
+Release Features :
+
+Automated Changelog : Generated from git commit history
+Asset Management : Automatic upload of all distribution artifacts
+Tag Management : Semantic version tagging
+Release Notes : Formatted release notes with change summaries
+
+
+Semantic Versioning :
+
+MAJOR.MINOR.PATCH format (for example, 2.1.0)
+Pre-release suffixes (for example, 2.1.0-alpha.1, 2.1.0-rc.2)
+Build metadata (for example, 2.1.0+20250925.abcdef)
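Auto-detecting the next version from the current one is simple string work. A hedged sketch of the idea; the actual logic in create-release.nu may differ:

```shell
#!/bin/sh
# bump_version: derive the next MAJOR.MINOR.PATCH from a version string
# and a release type. A leading "v" and any -prerelease/+build suffix
# are stripped first, per the semantic versioning rules above.
bump_version() {
  ver="${1#v}"; ver="${ver%%[-+]*}"
  major="${ver%%.*}"; rest="${ver#*.}"
  minor="${rest%%.*}"; patch="${rest#*.}"
  case "$2" in
    major) echo "$((major + 1)).0.0" ;;
    minor) echo "${major}.$((minor + 1)).0" ;;
    patch) echo "${major}.${minor}.$((patch + 1))" ;;
    *)     return 1 ;;
  esac
}

bump_version v2.1.0 minor      # prints 2.2.0
bump_version 2.1.0-rc.1 patch  # prints 2.1.1
```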
+
+Version Detection :
+# Auto-detect next version
nu src/tools/release/create-release.nu --release-type minor
# Manual version specification
@@ -34774,22 +33457,18 @@ nu src/tools/release/create-release.nu --version 2.1.0
# Pre-release versioning
nu src/tools/release/create-release.nu --version 2.1.0-rc.1 --pre-release
-```plaintext
-
-### Artifact Management
-
-**Artifact Types**:
-
-- **Source Archives**: Complete source code distributions
-- **Binary Archives**: Compiled binary distributions
-- **Container Images**: OCI-compliant container images
-- **Installers**: Platform-specific installation packages
-- **Documentation**: Generated documentation packages
-
-**Upload and Distribution**:
-
-```bash
-# Upload to GitHub Releases
+
+
+Artifact Types :
+
+Source Archives : Complete source code distributions
+Binary Archives : Compiled binary distributions
+Container Images : OCI-compliant container images
+Installers : Platform-specific installation packages
+Documentation : Generated documentation packages
+
+Upload and Distribution :
+# Upload to GitHub Releases
make upload-artifacts
# Upload to container registries
@@ -34797,26 +33476,20 @@ docker push provisioning:2.1.0
# Update package repositories
make update-registry
-```plaintext
-
-## Rollback Procedures
-
-### Rollback Scenarios
-
-**Common Rollback Triggers**:
-
-- Critical bugs discovered post-release
-- Security vulnerabilities identified
-- Performance regression
-- Compatibility issues
-- Infrastructure failures
-
-### Rollback Process
-
-**Automated Rollback**:
-
-```bash
-# Rollback latest release
+
+
+
+Common Rollback Triggers :
+
+Critical bugs discovered post-release
+Security vulnerabilities identified
+Performance regression
+Compatibility issues
+Infrastructure failures
+
+
+Automated Rollback :
+# Rollback latest release
nu src/tools/release/rollback-release.nu --version 2.1.0
# Rollback with specific target
@@ -34825,12 +33498,9 @@ nu src/tools/release/rollback-release.nu \
--to-version 2.0.5 \
--update-registries \
--notify-users
-```plaintext
-
-**Manual Rollback Steps**:
-
-```bash
-# 1. Identify target version
+
+Manual Rollback Steps :
+# 1. Identify target version
git tag -l | grep -v 2.1.0 | tail -5
# 2. Create rollback release
@@ -34849,21 +33519,17 @@ nu src/tools/release/notify-users.nu \
--channels slack,discord,email \
--message-type rollback \
--urgent
-```plaintext
-
-### Rollback Safety
-
-**Pre-Rollback Validation**:
-
-- Validate target version integrity
-- Check compatibility matrix
-- Verify rollback procedure testing
-- Confirm communication plan
-
-**Rollback Testing**:
-
-```bash
-# Test rollback in staging
+
+
+Pre-Rollback Validation :
+
+Validate target version integrity
+Check compatibility matrix
+Verify rollback procedure testing
+Confirm communication plan
+
+Rollback Testing :
+# Test rollback in staging
nu src/tools/release/rollback-release.nu \
--version 2.1.0 \
--target-version 2.0.5 \
@@ -34872,39 +33538,27 @@ nu src/tools/release/rollback-release.nu \
# Validate rollback success
make test-dist DIST_VERSION=2.0.5
-```plaintext
-
-### Emergency Procedures
-
-**Critical Security Rollback**:
-
-```bash
-# Emergency rollback (bypasses normal procedures)
+
+
+Critical Security Rollback :
+# Emergency rollback (bypasses normal procedures)
nu src/tools/release/rollback-release.nu \
--version 2.1.0 \
--emergency \
--security-issue \
--immediate-notify
-```plaintext
-
-**Infrastructure Failure Recovery**:
-
-```bash
-# Failover to backup infrastructure
+
+Infrastructure Failure Recovery :
+# Failover to backup infrastructure
nu src/tools/release/rollback-release.nu \
--infrastructure-failover \
--backup-registry \
--mirror-sync
-```plaintext
-
-## CI/CD Integration
-
-### GitHub Actions Integration
-
-**Build Workflow** (`.github/workflows/build.yml`):
-
-```yaml
-name: Build and Distribute
+
+
+
+Build Workflow (.github/workflows/build.yml):
+name: Build and Distribute
on:
push:
branches: [main]
@@ -34938,12 +33592,9 @@ jobs:
with:
name: build-${{ matrix.platform }}
path: src/dist/
-```plaintext
-
-**Release Workflow** (`.github/workflows/release.yml`):
-
-```yaml
-name: Release
+
+Release Workflow (.github/workflows/release.yml):
+name: Release
on:
push:
tags: ['v*']
@@ -34968,14 +33619,10 @@ jobs:
run: |
cd src/tools
make update-registry VERSION=${{ github.ref_name }}
-```plaintext
-
-### GitLab CI Integration
-
-**GitLab CI Configuration** (`.gitlab-ci.yml`):
-
-```yaml
-stages:
+
+
+GitLab CI Configuration (.gitlab-ci.yml):
+stages:
- build
- package
- test
@@ -35008,14 +33655,10 @@ release:
- make cd-deploy VERSION=${CI_COMMIT_TAG}
only:
- tags
-```plaintext
-
-### Jenkins Integration
-
-**Jenkinsfile**:
-
-```groovy
-pipeline {
+
+
+Jenkinsfile :
+pipeline {
agent any
stages {
@@ -35047,18 +33690,12 @@ pipeline {
}
}
}
-```plaintext
-
-## Troubleshooting
-
-### Common Issues
-
-#### Build Failures
-
-**Rust Compilation Errors**:
-
-```bash
-# Solution: Clean and rebuild
+
+
+
+
+Rust Compilation Errors :
+# Solution: Clean and rebuild
make clean
cargo clean
make build-platform
@@ -35066,112 +33703,79 @@ make build-platform
# Check Rust toolchain
rustup show
rustup update
-```plaintext
-
-**Cross-Compilation Issues**:
-
-```bash
-# Solution: Install missing targets
+
+Cross-Compilation Issues :
+# Solution: Install missing targets
rustup target list --installed
rustup target add x86_64-apple-darwin
# Use cross for problematic targets
cargo install cross
make build-platform CROSS=true
-```plaintext
-
-#### Package Generation Issues
-
-**Missing Dependencies**:
-
-```bash
-# Solution: Install build tools
+
+
+Missing Dependencies :
+# Solution: Install build tools
sudo apt-get install build-essential
brew install gnu-tar
# Check tool availability
make info
-```plaintext
-
-**Permission Errors**:
-
-```bash
-# Solution: Fix permissions
+
+Permission Errors :
+# Solution: Fix permissions
chmod +x src/tools/build/*.nu
chmod +x src/tools/distribution/*.nu
chmod +x src/tools/package/*.nu
-```plaintext
-
-#### Distribution Validation Failures
-
-**Package Integrity Issues**:
-
-```bash
-# Solution: Regenerate packages
+
+
+Package Integrity Issues :
+# Solution: Regenerate packages
make clean-dist
make package-all
# Verify manually
sha256sum packages/*.tar.gz
-```plaintext
-
-**Installation Test Failures**:
-
-```bash
-# Solution: Test in clean environment
+
+Installation Test Failures :
+# Solution: Test in clean environment
docker run --rm -v $(pwd):/work ubuntu:latest /work/packages/installers/install.sh
# Debug installation
./packages/installers/install.sh --dry-run --verbose
-```plaintext
-
-### Release Issues
-
-#### Upload Failures
-
-**Network Issues**:
-
-```bash
-# Solution: Retry with backoff
+
+
+
+Network Issues :
+# Solution: Retry with backoff
nu src/tools/release/upload-artifacts.nu \
--retry-count 5 \
--backoff-delay 30
# Manual upload
gh release upload v2.1.0 packages/*.tar.gz
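The retry-with-backoff behavior above can be emulated around any upload command. The helper below is a hypothetical sketch, not part of the provisioning tooling:

```shell
#!/bin/sh
# retry: run a command up to MAX times, doubling the delay between
# attempts (exponential backoff). Returns the command's failure after
# the final attempt.
retry() {
  max="$1"; delay="$2"; shift 2
  n=1
  until "$@"; do
    [ "$n" -ge "$max" ] && return 1
    sleep "$delay"
    delay=$((delay * 2))
    n=$((n + 1))
  done
}

# Example: retry 5 30 gh release upload v2.1.0 packages/*.tar.gz
```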
-```plaintext
-
-**Authentication Failures**:
-
-```bash
-# Solution: Refresh tokens
+
+Authentication Failures :
+# Solution: Refresh tokens
gh auth refresh
docker login ghcr.io
# Check credentials
gh auth status
docker system info
-```plaintext
-
-#### Registry Update Issues
-
-**Homebrew Formula Issues**:
-
-```bash
-# Solution: Manual PR creation
+
+
+Homebrew Formula Issues :
+# Solution: Manual PR creation
git clone https://github.com/Homebrew/homebrew-core
cd homebrew-core
# Edit formula
git add Formula/provisioning.rb
git commit -m "provisioning 2.1.0"
-```plaintext
-
-### Debug and Monitoring
-
-**Debug Mode**:
-
-```bash
-# Enable debug logging
+
+
+Debug Mode :
+# Enable debug logging
export PROVISIONING_DEBUG=true
export RUST_LOG=debug
@@ -35182,12 +33786,9 @@ make all VERBOSE=true
nu src/tools/distribution/generate-distribution.nu \
--verbose \
--dry-run
-```plaintext
-
-**Monitoring Build Progress**:
-
-```bash
-# Monitor build logs
+
+Monitoring Build Progress :
+# Monitor build logs
tail -f src/tools/build.log
# Check build status
@@ -35196,16 +33797,14 @@ make status
# Resource monitoring
top
df -h
-```plaintext
-
-This distribution process provides a robust, automated pipeline for creating, validating, and distributing provisioning across multiple platforms while maintaining high quality and reliability standards.
+This distribution process provides a robust, automated pipeline for creating, validating, and distributing provisioning across multiple platforms while maintaining high quality and reliability standards.
Status: Ready for Implementation
Estimated Time: 12-16 days
Priority: High
Related: Architecture Analysis
-
+
This guide provides step-by-step instructions for implementing the repository restructuring and distribution system improvements. Each phase includes specific commands, validation steps, and rollback procedures.
@@ -35313,7 +33912,7 @@ echo "✅ Implementation branch created: feat/repo-restructure"
✅ Implementation branch ready
-
+
cd /Users/Akasha/project-provisioning
@@ -35510,7 +34109,7 @@ echo "✅ Restructuring committed"
✅ Changes committed
-
+
# Create migration script
cat > provisioning/tools/migration/update-paths.nu << 'EOF'
@@ -35661,7 +34260,7 @@ export def main [] {
"provisioning/core"
"provisioning/extensions"
"provisioning/platform"
- "provisioning/kcl"
+ "provisioning/schemas"
"workspace"
"workspace/templates"
"distribution"
@@ -35835,7 +34434,7 @@ echo "✅ Phase 1 complete and merged"
-
+
mkdir -p provisioning/tools/build
cd provisioning/tools/build
@@ -35879,7 +34478,7 @@ just status
[Follow similar pattern for remaining build system components]
-
+
mkdir -p distribution/installers
@@ -35904,7 +34503,7 @@ nu distribution/installers/install.nu uninstall --prefix /tmp/provisioning-test
✅ No files left after uninstall
-
+
# Restore from backup
rm -rf /Users/Akasha/project-provisioning
@@ -35976,7 +34575,7 @@ Day 15: Documentation updated
Day 16: Release prepared
-
+
Take breaks between phases - Don’t rush
Test thoroughly - Each phase builds on previous
@@ -35985,7 +34584,7 @@ Day 16: Release prepared
Ask for review - Get feedback at phase boundaries
-
+
If you encounter issues:
Check the validation reports
@@ -35998,70 +34597,60 @@ Day 16: Release prepared
nu provisioning/tools/create-taskserv-helper.nu interactive
-```plaintext
-
-### Create a New Taskserv (Direct)
-
-```bash
-nu provisioning/tools/create-taskserv-helper.nu create my-api \
+
+
+nu provisioning/tools/create-taskserv-helper.nu create my-api \
--category development \
--port 8080 \
--description "My REST API service"
-```plaintext
-
-## 📋 5-Minute Setup
-
-### 1. Choose Your Method
-
-- **Interactive**: `nu provisioning/tools/create-taskserv-helper.nu interactive`
-- **Command Line**: Use the direct command above
-- **Manual**: Follow the structure guide below
-
-### 2. Basic Structure
-
-```plaintext
-my-service/
-├── kcl/
-│ ├── kcl.mod # Package definition
-│ ├── my-service.k # Main schema
-│ └── version.k # Version info
+
+
+
+
+Interactive : nu provisioning/tools/create-taskserv-helper.nu interactive
+Command Line : Use the direct command above
+Manual : Follow the structure guide below
+
+
+my-service/
+├── nickel/
+│ ├── manifest.toml # Package definition
+│ ├── my-service.ncl # Main schema
+│ └── version.ncl # Version info
├── default/
│ ├── defs.toml # Default config
│ └── install-*.sh # Install script
└── README.md # Documentation
-```plaintext
-
-### 3. Essential Files
-
-**kcl.mod** (package definition):
-
-```toml
-[package]
+
+
+manifest.toml (package definition):
+[package]
name = "my-service"
version = "1.0.0"
description = "My service"
[dependencies]
k8s = { oci = "oci://ghcr.io/kcl-lang/k8s", tag = "1.30" }
-```plaintext
+
+my-service.ncl (main schema):
+let MyService = {
+ name | String,
+ version | String,
+ port | Number,
+ replicas | Number,
+} in
-**my-service.k** (main schema):
-
-```kcl
-schema MyService {
- name: str = "my-service"
- version: str = "latest"
- port: int = 8080
- replicas: int = 1
+{
+  my_service_config | MyService = {
+ name = "my-service",
+ version = "latest",
+ port = 8080,
+ replicas = 1,
+ }
}
-
-my_service_config: MyService = MyService {}
-```plaintext
-
-### 4. Test Your Taskserv
-
-```bash
-# Discover your taskserv
+
+
+# Discover your taskserv
nu -c "use provisioning/core/nulib/taskservs/discover.nu *; get-taskserv-info my-service"
# Test layer resolution
@@ -36069,81 +34658,64 @@ nu -c "use provisioning/workspace/tools/layer-utils.nu *; test_layer_resolution
# Deploy with check
provisioning/core/cli/provisioning taskserv create my-service --infra wuji --check
-```plaintext
-
-## 🎯 Common Patterns
-
-### Web Service
-
-```kcl
-schema WebService {
- name: str
- version: str = "latest"
- port: int = 8080
- replicas: int = 1
-
- ingress: {
- enabled: bool = true
- hostname: str
- tls: bool = false
- }
-
- resources: {
- cpu: str = "100m"
- memory: str = "128Mi"
- }
-}
-```plaintext
-
-### Database Service
-
-```kcl
-schema DatabaseService {
- name: str
- version: str = "latest"
- port: int = 5432
-
- persistence: {
- enabled: bool = true
- size: str = "10Gi"
- storage_class: str = "ssd"
- }
-
- auth: {
- database: str = "app"
- username: str = "user"
- password_secret: str
- }
-}
-```plaintext
-
-### Background Worker
-
-```kcl
-schema BackgroundWorker {
- name: str
- version: str = "latest"
- replicas: int = 1
-
- job: {
- schedule?: str # Cron format for scheduled jobs
- parallelism: int = 1
- completions: int = 1
- }
-
- resources: {
- cpu: str = "500m"
- memory: str = "512Mi"
- }
-}
-```plaintext
-
-## 🛠️ CLI Shortcuts
-
-### Discovery
-
-```bash
-# List all taskservs
+
+## 🎯 Common Patterns
+
+### Web Service
+
+let WebService = {
+ name | String,
+ version | String | default = "latest",
+ port | Number | default = 8080,
+ replicas | Number | default = 1,
+ ingress | {
+ enabled | Bool | default = true,
+ hostname | String,
+ tls | Bool | default = false,
+ },
+ resources | {
+ cpu | String | default = "100m",
+ memory | String | default = "128Mi",
+ },
+} in
+WebService
+
+### Database Service
+
+let DatabaseService = {
+ name | String,
+ version | String | default = "latest",
+ port | Number | default = 5432,
+ persistence | {
+ enabled | Bool | default = true,
+ size | String | default = "10Gi",
+ storage_class | String | default = "ssd",
+ },
+ auth | {
+ database | String | default = "app",
+ username | String | default = "user",
+ password_secret | String,
+ },
+} in
+DatabaseService
+
+### Background Worker
+
+let BackgroundWorker = {
+ name | String,
+ version | String | default = "latest",
+ replicas | Number | default = 1,
+ job | {
+ schedule | String | optional, # Cron format for scheduled jobs
+ parallelism | Number | default = 1,
+ completions | Number | default = 1,
+ },
+ resources | {
+ cpu | String | default = "500m",
+ memory | String | default = "512Mi",
+ },
+} in
+BackgroundWorker
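Applying one of these contracts only requires supplying the fields without defaults; a sketch for `WebService` (the `web` field name and hostname are illustrative), where Nickel fills the `default`-annotated fields during contract application:

```nickel
let WebService = {
  name | String,
  version | String | default = "latest",
  port | Number | default = 8080,
  replicas | Number | default = 1,
  ingress | {
    enabled | Bool | default = true,
    hostname | String,
    tls | Bool | default = false,
  },
} in
{
  # `| WebService` checks the fields and merges in the declared defaults
  web | WebService = {
    name = "api-frontend",
    ingress = { hostname = "api.example.com" },
  },
}
```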
+
+## 🛠️ CLI Shortcuts
+
+### Discovery
+
+# List all taskservs
nu -c "use provisioning/core/nulib/taskservs/discover.nu *; discover-taskservs | select name group"
# Search taskservs
@@ -36151,13 +34723,10 @@ nu -c "use provisioning/core/nulib/taskservs/discover.nu *; search-taskservs red
# Show stats
nu -c "use provisioning/workspace/tools/layer-utils.nu *; show_layer_stats"
-```plaintext
-
-### Development
-
-```bash
-# Check KCL syntax
-kcl check provisioning/extensions/taskservs/{category}/{name}/kcl/{name}.k
+
+### Development
+
+# Check Nickel syntax
+nickel typecheck provisioning/extensions/taskservs/{category}/{name}/schemas/{name}.ncl
# Generate configuration
provisioning/core/cli/provisioning taskserv generate {name} --infra {infra}
@@ -36165,77 +34734,62 @@ provisioning/core/cli/provisioning taskserv generate {name} --infra {infra}
# Version management
provisioning/core/cli/provisioning taskserv versions {name}
provisioning/core/cli/provisioning taskserv check-updates
-```plaintext
-
-### Testing
-
-```bash
-# Dry run deployment
+
+### Testing
+
+# Dry run deployment
provisioning/core/cli/provisioning taskserv create {name} --infra {infra} --check
# Layer resolution debug
nu -c "use provisioning/workspace/tools/layer-utils.nu *; test_layer_resolution {name} {infra} {provider}"
-```plaintext
-
-## 📚 Categories Reference
-
-| Category | Examples | Use Case |
-|----------|----------|----------|
-| **container-runtime** | containerd, crio, podman | Container runtime engines |
-| **databases** | postgres, redis | Database services |
-| **development** | coder, gitea, desktop | Development tools |
-| **infrastructure** | kms, webhook, os | System infrastructure |
-| **kubernetes** | kubernetes | Kubernetes orchestration |
-| **networking** | cilium, coredns, etcd | Network services |
-| **storage** | rook-ceph, external-nfs | Storage solutions |
-
-## 🔧 Troubleshooting
-
-### Taskserv Not Found
-
-```bash
-# Check if discovered
+
+## 📚 Categories Reference
+
+| Category | Examples | Use Case |
+|----------|----------|----------|
+| **container-runtime** | containerd, crio, podman | Container runtime engines |
+| **databases** | postgres, redis | Database services |
+| **development** | coder, gitea, desktop | Development tools |
+| **infrastructure** | kms, webhook, os | System infrastructure |
+| **kubernetes** | kubernetes | Kubernetes orchestration |
+| **networking** | cilium, coredns, etcd | Network services |
+| **storage** | rook-ceph, external-nfs | Storage solutions |
+
+## 🔧 Troubleshooting
+
+### Taskserv Not Found
+
+# Check if discovered
nu -c "use provisioning/core/nulib/taskservs/discover.nu *; discover-taskservs | where name == my-service"
# Verify manifest.toml exists
ls provisioning/extensions/taskservs/{category}/my-service/nickel/manifest.toml
-```plaintext
-
-### Layer Resolution Issues
-
-```bash
-# Debug resolution
+
+### Layer Resolution Issues
+
+# Debug resolution
nu -c "use provisioning/workspace/tools/layer-utils.nu *; test_layer_resolution my-service wuji upcloud"
# Check template exists
-ls provisioning/workspace/templates/taskservs/{category}/my-service.k
-```plaintext
-
-### KCL Syntax Errors
-
-```bash
-# Check syntax
-kcl check provisioning/extensions/taskservs/{category}/my-service/kcl/my-service.k
+ls provisioning/workspace/templates/taskservs/{category}/my-service.ncl
+
+### Nickel Syntax Errors
+
+# Check syntax
+nickel typecheck provisioning/extensions/taskservs/{category}/my-service/schemas/my-service.ncl
# Format code
-kcl fmt provisioning/extensions/taskservs/{category}/my-service/kcl/
-```plaintext
-
-## 💡 Pro Tips
-
-1. **Use existing taskservs as templates** - Copy and modify similar services
-2. **Test with --check first** - Always use dry run before actual deployment
-3. **Follow naming conventions** - Use kebab-case for consistency
-4. **Document thoroughly** - Good docs save time later
-5. **Version your schemas** - Include version.k for compatibility tracking
-
-## 🔗 Next Steps
-
-1. Read the full [Taskserv Developer Guide](TASKSERV_DEVELOPER_GUIDE.md)
-2. Explore existing taskservs in `provisioning/extensions/taskservs/`
-3. Check out templates in `provisioning/workspace/templates/taskservs/`
-4. Join the development community for support
+nickel format provisioning/extensions/taskservs/{category}/my-service/schemas/
+
+## 💡 Pro Tips
+
+1. **Use existing taskservs as templates** - Copy and modify similar services
+2. **Test with `--check` first** - Always use dry run before actual deployment
+3. **Follow naming conventions** - Use kebab-case for consistency
+4. **Document thoroughly** - Good docs save time later
+5. **Version your schemas** - Include version.ncl for compatibility tracking
+
+## 🔗 Next Steps
+
+1. Read the full [Taskserv Developer Guide](TASKSERV_DEVELOPER_GUIDE.md)
+2. Explore existing taskservs in `provisioning/extensions/taskservs/`
+3. Check out templates in `provisioning/workspace/templates/taskservs/`
+4. Join the development community for support
+
This document provides a comprehensive overview of the provisioning project’s structure after the major reorganization, explaining both the new development-focused organization and the preserved existing functionality.
@@ -36248,7 +34802,7 @@ kcl fmt provisioning/extensions/taskservs/{category}/my-service/kcl/
Navigation Guide
Migration Path
-
+
The provisioning project has been restructured to support a dual-organization approach:
src/ : Development-focused structure with build tools, distribution system, and core components
@@ -36266,68 +34820,53 @@ kcl fmt provisioning/extensions/taskservs/{category}/my-service/kcl/
├── docs/ # Documentation (new)
├── extensions/ # Extension framework
├── generators/ # Code generation tools
-├── kcl/ # KCL configuration language files
+├── schemas/ # Nickel configuration schemas (migrated from kcl/)
├── orchestrator/ # Hybrid Rust/Nushell orchestrator
├── platform/ # Platform-specific code
├── provisioning/ # Main provisioning
├── templates/ # Template files
├── tools/ # Build and development tools
└── utils/ # Utility scripts
-```plaintext
-
-### Legacy Structure (Preserved)
-
-```plaintext
-repo-cnz/
+
+### Legacy Structure (Preserved)
+
+repo-cnz/
├── cluster/ # Cluster configurations (preserved)
├── core/ # Core system (preserved)
├── generate/ # Generation scripts (preserved)
-├── kcl/ # KCL files (preserved)
+├── schemas/ # Nickel schemas (migrated from kcl/)
├── klab/ # Development lab (preserved)
├── nushell-plugins/ # Plugin development (preserved)
├── providers/ # Cloud providers (preserved)
├── taskservs/ # Task services (preserved)
└── templates/ # Template files (preserved)
-```plaintext
-
-### Development Workspace (`/workspace/`)
-
-```plaintext
-workspace/
+
+### Development Workspace (`/workspace/`)
+
+workspace/
├── config/ # Development configuration
├── extensions/ # Extension development
├── infra/ # Development infrastructure
├── lib/ # Workspace libraries
├── runtime/ # Runtime data
└── tools/ # Workspace management tools
-```plaintext
-
-## Core Directories
-
-### `/src/core/` - Core Development Libraries
-
-**Purpose**: Development-focused core libraries and entry points
-
-**Key Files**:
-
-- `nulib/provisioning` - Main CLI entry point (symlinks to legacy location)
-- `nulib/lib_provisioning/` - Core provisioning libraries
-- `nulib/workflows/` - Workflow management (orchestrator integration)
-
-**Relationship to Legacy**: Preserves original `core/` functionality while adding development enhancements
-
-### `/src/tools/` - Build and Development Tools
-
-**Purpose**: Complete build system for the provisioning project
-
-**Key Components**:
-
-```plaintext
-tools/
+
+## Core Directories
+
+### `/src/core/` - Core Development Libraries
+
+**Purpose**: Development-focused core libraries and entry points
+
+**Key Files**:
+
+- `nulib/provisioning` - Main CLI entry point (symlinks to legacy location)
+- `nulib/lib_provisioning/` - Core provisioning libraries
+- `nulib/workflows/` - Workflow management (orchestrator integration)
+
+**Relationship to Legacy**: Preserves original `core/` functionality while adding development enhancements
+
+### `/src/tools/` - Build and Development Tools
+
+**Purpose**: Complete build system for the provisioning project
+
+**Key Components**:
+
+tools/
├── build/ # Build tools
│ ├── compile-platform.nu # Platform-specific compilation
│ ├── bundle-core.nu # Core library bundling
-│ ├── validate-kcl.nu # KCL validation
+│ ├── validate-nickel.nu # Nickel schema validation
│ ├── clean-build.nu # Build cleanup
│ └── test-distribution.nu # Distribution testing
├── distribution/ # Distribution tools
@@ -36348,122 +34887,94 @@ tools/
│ ├── notify-users.nu # Release notifications
│ └── update-registry.nu # Package registry updates
└── Makefile # Main build system (40+ targets)
-```plaintext
-
-### `/src/orchestrator/` - Hybrid Orchestrator
-
-**Purpose**: Rust/Nushell hybrid orchestrator for solving deep call stack limitations
-
-**Key Components**:
-
-- `src/` - Rust orchestrator implementation
-- `scripts/` - Orchestrator management scripts
-- `data/` - File-based task queue and persistence
-
-**Integration**: Provides REST API and workflow management while preserving all Nushell business logic
-
-### `/src/provisioning/` - Enhanced Provisioning
-
-**Purpose**: Enhanced version of the main provisioning with additional features
-
-**Key Features**:
-
-- Batch workflow system (v3.1.0)
-- Provider-agnostic design
-- Configuration-driven architecture (v2.0.0)
-
-### `/workspace/` - Development Workspace
-
-**Purpose**: Complete development environment with tools and runtime management
-
-**Key Components**:
-
-- `tools/workspace.nu` - Unified workspace management interface
-- `lib/path-resolver.nu` - Smart path resolution system
-- `config/` - Environment-specific development configurations
-- `extensions/` - Extension development templates and examples
-- `infra/` - Development infrastructure examples
-- `runtime/` - Isolated runtime data per user
-
-## Development Workspace
-
-### Workspace Management
-
-The workspace provides a sophisticated development environment:
-
-**Initialization**:
-
-```bash
-cd workspace/tools
+
+### `/src/orchestrator/` - Hybrid Orchestrator
+
+**Purpose**: Rust/Nushell hybrid orchestrator for solving deep call stack limitations
+
+**Key Components**:
+
+- `src/` - Rust orchestrator implementation
+- `scripts/` - Orchestrator management scripts
+- `data/` - File-based task queue and persistence
+
+**Integration**: Provides REST API and workflow management while preserving all Nushell business logic
+
+### `/src/provisioning/` - Enhanced Provisioning
+
+**Purpose**: Enhanced version of the main provisioning with additional features
+
+**Key Features**:
+
+- Batch workflow system (v3.1.0)
+- Provider-agnostic design
+- Configuration-driven architecture (v2.0.0)
+
+### `/workspace/` - Development Workspace
+
+**Purpose**: Complete development environment with tools and runtime management
+
+**Key Components**:
+
+- `tools/workspace.nu` - Unified workspace management interface
+- `lib/path-resolver.nu` - Smart path resolution system
+- `config/` - Environment-specific development configurations
+- `extensions/` - Extension development templates and examples
+- `infra/` - Development infrastructure examples
+- `runtime/` - Isolated runtime data per user
+
+## Development Workspace
+
+### Workspace Management
+
+The workspace provides a sophisticated development environment:
+
+**Initialization**:
+
+cd workspace/tools
nu workspace.nu init --user-name developer --infra-name my-infra
-```plaintext
-
-**Health Monitoring**:
-
-```bash
-nu workspace.nu health --detailed --fix-issues
-```plaintext
-
-**Path Resolution**:
-
-```nushell
-use lib/path-resolver.nu
+
+**Health Monitoring**:
+
+nu workspace.nu health --detailed --fix-issues
+
+**Path Resolution**:
+
+use lib/path-resolver.nu
let config = (path-resolver resolve_config "user" --workspace-user "john")
-```plaintext
-
-### Extension Development
-
-The workspace provides templates for developing:
-
-- **Providers**: Custom cloud provider implementations
-- **Task Services**: Infrastructure service components
-- **Clusters**: Complete deployment solutions
-
-Templates are available in `workspace/extensions/{type}/template/`
-
-### Configuration Hierarchy
-
-The workspace implements a sophisticated configuration cascade:
-
-1. Workspace user configuration (`workspace/config/{user}.toml`)
-2. Environment-specific defaults (`workspace/config/{env}-defaults.toml`)
-3. Workspace defaults (`workspace/config/dev-defaults.toml`)
-4. Core system defaults (`config.defaults.toml`)
-
-## File Naming Conventions
-
-### Nushell Files (`.nu`)
-
-- **Commands**: `kebab-case` - `create-server.nu`, `validate-config.nu`
-- **Modules**: `snake_case` - `lib_provisioning`, `path_resolver`
-- **Scripts**: `kebab-case` - `workspace-health.nu`, `runtime-manager.nu`
-
-### Configuration Files
-
-- **TOML**: `kebab-case.toml` - `config-defaults.toml`, `user-settings.toml`
-- **Environment**: `{env}-defaults.toml` - `dev-defaults.toml`, `prod-defaults.toml`
-- **Examples**: `*.toml.example` - `local-overrides.toml.example`
-
-### KCL Files (`.k`)
-
-- **Schemas**: `PascalCase` types - `ServerConfig`, `WorkflowDefinition`
-- **Files**: `kebab-case.k` - `server-config.k`, `workflow-schema.k`
-- **Modules**: `kcl.mod` - Module definition files
-
-### Build and Distribution
-
-- **Scripts**: `kebab-case.nu` - `compile-platform.nu`, `generate-distribution.nu`
-- **Makefiles**: `Makefile` - Standard naming
-- **Archives**: `{project}-{version}-{platform}-{variant}.{ext}`
-
-## Navigation Guide
-
-### Finding Components
-
-**Core System Entry Points**:
-
-```bash
-# Main CLI (development version)
+
+### Extension Development
+
+The workspace provides templates for developing:
+
+- **Providers**: Custom cloud provider implementations
+- **Task Services**: Infrastructure service components
+- **Clusters**: Complete deployment solutions
+
+Templates are available in `workspace/extensions/{type}/template/`
+
+### Configuration Hierarchy
+
+The workspace implements a sophisticated configuration cascade:
+
+1. Workspace user configuration (`workspace/config/{user}.toml`)
+2. Environment-specific defaults (`workspace/config/{env}-defaults.toml`)
+3. Workspace defaults (`workspace/config/dev-defaults.toml`)
+4. Core system defaults (`config.defaults.toml`)
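The cascade reduces to repeated priority merges; the principle can be illustrated in Nickel (this is a sketch of the semantics, not the actual TOML loader):

```nickel
let system_defaults = {
  log_level | default = "info",
  provider | default = "local",
} in
let user_config = {
  log_level = "debug",
} in
# The user's unannotated value overrides the `default`-annotated one;
# fields the user does not set keep their defaults (provider stays "local").
system_defaults & user_config
```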
+
+
+## File Naming Conventions
+
+### Nushell Files (`.nu`)
+
+- **Commands**: `kebab-case` - `create-server.nu`, `validate-config.nu`
+- **Modules**: `snake_case` - `lib_provisioning`, `path_resolver`
+- **Scripts**: `kebab-case` - `workspace-health.nu`, `runtime-manager.nu`
+
+### Configuration Files
+
+- **TOML**: `kebab-case.toml` - `config-defaults.toml`, `user-settings.toml`
+- **Environment**: `{env}-defaults.toml` - `dev-defaults.toml`, `prod-defaults.toml`
+- **Examples**: `*.toml.example` - `local-overrides.toml.example`
+
+### Nickel Files (`.ncl`)
+
+- **Schemas**: `kebab-case.ncl` - `server-config.ncl`, `workflow-schema.ncl`
+- **Configuration**: `manifest.toml` - Package metadata
+- **Structure**: Organized in `schemas/` directories per extension
+
+### Build and Distribution
+
+- **Scripts**: `kebab-case.nu` - `compile-platform.nu`, `generate-distribution.nu`
+- **Makefiles**: `Makefile` - Standard naming
+- **Archives**: `{project}-{version}-{platform}-{variant}.{ext}`
+
+## Navigation Guide
+
+### Finding Components
+
+**Core System Entry Points**:
+
+# Main CLI (development version)
/src/core/nulib/provisioning
# Legacy CLI (production version)
@@ -36471,12 +34982,9 @@ The workspace implements a sophisticated configuration cascade:
# Workspace management
/workspace/tools/workspace.nu
-```plaintext
-
-**Build System**:
-
-```bash
-# Main build system
+
+**Build System**:
+
+# Main build system
cd /src/tools && make help
# Quick development build
@@ -36484,12 +34992,9 @@ make dev-build
# Complete distribution
make all
-```plaintext
-
-**Configuration Files**:
-
-```bash
-# System defaults
+
+**Configuration Files**:
+
+# System defaults
/config.defaults.toml
# User configuration (workspace)
@@ -36497,12 +35002,9 @@ make all
# Environment-specific
/workspace/config/{env}-defaults.toml
-```plaintext
-
-**Extension Development**:
-
-```bash
-# Provider template
+
+**Extension Development**:
+
+# Provider template
/workspace/extensions/providers/template/
# Task service template
@@ -36510,25 +35012,18 @@ make all
# Cluster template
/workspace/extensions/clusters/template/
-```plaintext
-
-### Common Workflows
-
-**1. Development Setup**:
-
-```bash
-# Initialize workspace
+
+### Common Workflows
+
+**1. Development Setup**:
+
+# Initialize workspace
cd workspace/tools
nu workspace.nu init --user-name $USER
# Check health
nu workspace.nu health --detailed
-```plaintext
-
-**2. Building Distribution**:
-
-```bash
-# Complete build
+
+**2. Building Distribution**:
+
+# Complete build
cd src/tools
make all
@@ -36536,115 +35031,101 @@ make all
make linux
make macos
make windows
-```plaintext
-
-**3. Extension Development**:
-
-```bash
-# Create new provider
+
+**3. Extension Development**:
+
+# Create new provider
cp -r workspace/extensions/providers/template workspace/extensions/providers/my-provider
# Test extension
nu workspace/extensions/providers/my-provider/nulib/provider.nu test
-```plaintext
-
-### Legacy Compatibility
-
-**Existing Commands Still Work**:
-
-```bash
-# All existing commands preserved
+
+### Legacy Compatibility
+
+**Existing Commands Still Work**:
+
+# All existing commands preserved
./core/nulib/provisioning server create
./core/nulib/provisioning taskserv install kubernetes
./core/nulib/provisioning cluster create buildkit
-```plaintext
-
-**Configuration Migration**:
-
-- ENV variables still supported as fallbacks
-- New configuration system provides better defaults
-- Migration tools available in `src/tools/migration/`
-
-## Migration Path
-
-### For Users
-
-**No Changes Required**:
-
-- All existing commands continue to work
-- Configuration files remain compatible
-- Existing infrastructure deployments unaffected
-
-**Optional Enhancements**:
-
-- Migrate to new configuration system for better defaults
-- Use workspace for development environments
-- Leverage new build system for custom distributions
-
-### For Developers
-
-**Development Environment**:
-
-1. Initialize development workspace: `nu workspace/tools/workspace.nu init`
-2. Use new build system: `cd src/tools && make dev-build`
-3. Leverage extension templates for custom development
-
-**Build System**:
-
-1. Use new Makefile for comprehensive build management
-2. Leverage distribution tools for packaging
-3. Use release management for version control
-
-**Orchestrator Integration**:
-
-1. Start orchestrator for workflow management: `cd src/orchestrator && ./scripts/start-orchestrator.nu`
-2. Use workflow APIs for complex operations
-3. Leverage batch operations for efficiency
-
-### Migration Tools
-
-**Available Migration Scripts**:
-
-- `src/tools/migration/config-migration.nu` - Configuration migration
-- `src/tools/migration/workspace-setup.nu` - Workspace initialization
-- `src/tools/migration/path-resolver.nu` - Path resolution migration
-
-**Validation Tools**:
-
-- `src/tools/validation/system-health.nu` - System health validation
-- `src/tools/validation/compatibility-check.nu` - Compatibility verification
-- `src/tools/validation/migration-status.nu` - Migration status tracking
-
-## Architecture Benefits
-
-### Development Efficiency
-
-- **Build System**: Comprehensive 40+ target Makefile system
-- **Workspace Isolation**: Per-user development environments
-- **Extension Framework**: Template-based extension development
-
-### Production Reliability
-
-- **Backward Compatibility**: All existing functionality preserved
-- **Configuration Migration**: Gradual migration from ENV to config-driven
-- **Orchestrator Architecture**: Hybrid Rust/Nushell for performance and flexibility
-- **Workflow Management**: Batch operations with rollback capabilities
-
-### Maintenance Benefits
-
-- **Clean Separation**: Development tools separate from production code
-- **Organized Structure**: Logical grouping of related functionality
-- **Documentation**: Comprehensive documentation and examples
-- **Testing Framework**: Built-in testing and validation tools
-
-This structure represents a significant evolution in the project's organization while maintaining complete backward compatibility and providing powerful new development capabilities.
+**Configuration Migration**:
+
+- ENV variables still supported as fallbacks
+- New configuration system provides better defaults
+- Migration tools available in `src/tools/migration/`
+
+## Migration Path
+
+### For Users
+
+**No Changes Required**:
+
+- All existing commands continue to work
+- Configuration files remain compatible
+- Existing infrastructure deployments unaffected
+
+**Optional Enhancements**:
+
+- Migrate to new configuration system for better defaults
+- Use workspace for development environments
+- Leverage new build system for custom distributions
+
+### For Developers
+
+**Development Environment**:
+
+1. Initialize development workspace: `nu workspace/tools/workspace.nu init`
+2. Use new build system: `cd src/tools && make dev-build`
+3. Leverage extension templates for custom development
+
+**Build System**:
+
+1. Use new Makefile for comprehensive build management
+2. Leverage distribution tools for packaging
+3. Use release management for version control
+
+**Orchestrator Integration**:
+
+1. Start orchestrator for workflow management: `cd src/orchestrator && ./scripts/start-orchestrator.nu`
+2. Use workflow APIs for complex operations
+3. Leverage batch operations for efficiency
+
+### Migration Tools
+
+**Available Migration Scripts**:
+
+- `src/tools/migration/config-migration.nu` - Configuration migration
+- `src/tools/migration/workspace-setup.nu` - Workspace initialization
+- `src/tools/migration/path-resolver.nu` - Path resolution migration
+
+**Validation Tools**:
+
+- `src/tools/validation/system-health.nu` - System health validation
+- `src/tools/validation/compatibility-check.nu` - Compatibility verification
+- `src/tools/validation/migration-status.nu` - Migration status tracking
+
+## Architecture Benefits
+
+### Development Efficiency
+
+- **Build System**: Comprehensive 40+ target Makefile system
+- **Workspace Isolation**: Per-user development environments
+- **Extension Framework**: Template-based extension development
+
+### Production Reliability
+
+- **Backward Compatibility**: All existing functionality preserved
+- **Configuration Migration**: Gradual migration from ENV to config-driven
+- **Orchestrator Architecture**: Hybrid Rust/Nushell for performance and flexibility
+- **Workflow Management**: Batch operations with rollback capabilities
+
+### Maintenance Benefits
+
+- **Clean Separation**: Development tools separate from production code
+- **Organized Structure**: Logical grouping of related functionality
+- **Documentation**: Comprehensive documentation and examples
+- **Testing Framework**: Built-in testing and validation tools
+
+This structure represents a significant evolution in the project’s organization while maintaining complete backward compatibility and providing powerful new development capabilities.
-
+
The new provider-agnostic architecture eliminates hardcoded provider dependencies and enables true multi-provider infrastructure deployments. This addresses two critical limitations of the previous middleware:
Hardcoded provider dependencies - No longer requires importing specific provider modules
-Single-provider limitation - Now supports mixing multiple providers in the same deployment (e.g., AWS compute + Cloudflare DNS + UpCloud backup)
+Single-provider limitation - Now supports mixing multiple providers in the same deployment (for example, AWS compute + Cloudflare DNS + UpCloud backup)
@@ -36658,21 +35139,17 @@ This structure represents a significant evolution in the project's organization
- server_state
- get_ip
# ... and 20+ other functions
-```plaintext
-
-**Key Features:**
-
-- Type-safe function signatures
-- Comprehensive validation
-- Provider capability flags
-- Interface versioning
-
-### 2. Provider Registry (`registry.nu`)
-
-Manages provider discovery and registration:
-
-```nushell
-# Initialize registry
+
+**Key Features:**
+
+- Type-safe function signatures
+- Comprehensive validation
+- Provider capability flags
+- Interface versioning
+
+### 2. Provider Registry (`registry.nu`)
+
+Manages provider discovery and registration:
+
+# Initialize registry
init-provider-registry
# List available providers
@@ -36680,21 +35157,17 @@ list-providers --available-only
# Check provider availability
is-provider-available "aws"
-```plaintext
-
-**Features:**
-
-- Automatic provider discovery
-- Core and extension provider support
-- Caching for performance
-- Provider capability tracking
-
-### 3. Provider Loader (`loader.nu`)
-
-Handles dynamic provider loading and validation:
-
-```nushell
-# Load provider dynamically
+
+
+**Features:**
+
+- Automatic provider discovery
+- Core and extension provider support
+- Caching for performance
+- Provider capability tracking
+
+### 3. Provider Loader (`loader.nu`)
+
+Handles dynamic provider loading and validation:
+
+# Load provider dynamically
load-provider "aws"
# Get provider with auto-loading
@@ -36702,31 +35175,24 @@ get-provider "upcloud"
# Call provider function
call-provider-function "aws" "query_servers" $find $cols
-```plaintext
-
-**Features:**
-
-- Lazy loading (load only when needed)
-- Interface compliance validation
-- Error handling and recovery
-- Provider health checking
-
-### 4. Provider Adapters
-
-Each provider implements a standard adapter:
-
-```plaintext
-provisioning/extensions/providers/
+
+**Features:**
+
+- Lazy loading (load only when needed)
+- Interface compliance validation
+- Error handling and recovery
+- Provider health checking
+
+### 4. Provider Adapters
+
+Each provider implements a standard adapter:
+
+provisioning/extensions/providers/
├── aws/provider.nu # AWS adapter
├── upcloud/provider.nu # UpCloud adapter
├── local/provider.nu # Local adapter
└── {custom}/provider.nu # Custom providers
-```plaintext
-
-**Adapter Structure:**
-
-```nushell
-# AWS Provider Adapter
+
+**Adapter Structure:**
+
+# AWS Provider Adapter
export def query_servers [find?: string, cols?: string] {
aws_query_servers $find $cols
}
@@ -36734,50 +35200,40 @@ export def query_servers [find?: string, cols?: string] {
export def create_server [settings: record, server: record, check: bool, wait: bool] {
# AWS-specific implementation
}
-```plaintext
-
-### 5. Provider-Agnostic Middleware (`middleware_provider_agnostic.nu`)
-
-The new middleware that uses dynamic dispatch:
-
-```nushell
-# No hardcoded imports!
+
+### 5. Provider-Agnostic Middleware (`middleware_provider_agnostic.nu`)
+
+The new middleware that uses dynamic dispatch:
+
+# No hardcoded imports!
export def mw_query_servers [settings: record, find?: string, cols?: string] {
$settings.data.servers | each { |server|
# Dynamic provider loading and dispatch
dispatch_provider_function $server.provider "query_servers" $find $cols
}
}
-```plaintext
-
-## Multi-Provider Support
-
-### Example: Mixed Provider Infrastructure
-
-```kcl
-servers = [
- aws.Server {
- hostname = "compute-01"
- provider = "aws"
+
+## Multi-Provider Support
+
+### Example: Mixed Provider Infrastructure
+
+let servers = [
+ {
+ hostname = "compute-01",
+ provider = "aws",
# AWS-specific config
- }
- upcloud.Server {
- hostname = "backup-01"
- provider = "upcloud"
+ },
+ {
+ hostname = "backup-01",
+ provider = "upcloud",
# UpCloud-specific config
- }
- cloudflare.DNS {
- hostname = "api.example.com"
- provider = "cloudflare"
+ },
+ {
+ hostname = "api.example.com",
+ provider = "cloudflare",
# DNS-specific config
- }
-]
-```plaintext
-
-### Multi-Provider Deployment
-
-```nushell
-# Deploy across multiple providers automatically
+ },
+] in
+servers
+
+### Multi-Provider Deployment
+
+# Deploy across multiple providers automatically
mw_deploy_multi_provider_infra $settings $deployment_plan
# Get deployment strategy recommendations
@@ -36786,14 +35242,10 @@ mw_suggest_deployment_strategy {
high_availability: true
cost_optimization: true
}
-```plaintext
-
-## Provider Capabilities
-
-Providers declare their capabilities:
-
-```nushell
-capabilities: {
+
+## Provider Capabilities
+
+Providers declare their capabilities:
+
+capabilities: {
server_management: true
network_management: true
auto_scaling: true # AWS: yes, Local: no
@@ -36801,16 +35253,11 @@ capabilities: {
serverless: true # AWS: yes, UpCloud: no
compliance_certifications: ["SOC2", "HIPAA"]
}
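Declared capability records make capability-based provider selection a simple filter; a hedged sketch in Nickel (the provider names and flags are illustrative, not the registry's actual data model):

```nickel
let capabilities = {
  aws = { auto_scaling = true, serverless = true },
  upcloud = { auto_scaling = true, serverless = false },
  local = { auto_scaling = false, serverless = false },
} in
# Names of providers whose capabilities include serverless support:
std.record.to_array capabilities
|> std.array.filter (fun { field, value } => value.serverless)
|> std.array.map (fun { field, value } => field)
```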
-```plaintext
-
-## Migration Guide
-
-### From Old Middleware
-
-**Before (hardcoded):**
-
-```nushell
-# middleware.nu
+
+## Migration Guide
+
+### From Old Middleware
+
+**Before (hardcoded):**
+
+# middleware.nu
use ../aws/nulib/aws/servers.nu *
use ../upcloud/nulib/upcloud/servers.nu *
@@ -36818,31 +35265,26 @@ match $server.provider {
"aws" => { aws_query_servers $find $cols }
"upcloud" => { upcloud_query_servers $find $cols }
}
-```plaintext
-
-**After (provider-agnostic):**
-
-```nushell
-# middleware_provider_agnostic.nu
+
+**After (provider-agnostic):**
+
+# middleware_provider_agnostic.nu
# No hardcoded imports!
# Dynamic dispatch
dispatch_provider_function $server.provider "query_servers" $find $cols
-```plaintext
-
-### Migration Steps
-
-1. **Replace middleware file:**
-
- ```bash
- cp provisioning/extensions/providers/prov_lib/middleware.nu \
- provisioning/extensions/providers/prov_lib/middleware_legacy.backup
-
- cp provisioning/extensions/providers/prov_lib/middleware_provider_agnostic.nu \
- provisioning/extensions/providers/prov_lib/middleware.nu
+
+### Migration Steps
+
+1. **Replace middleware file:**
+
+cp provisioning/extensions/providers/prov_lib/middleware.nu \
+ provisioning/extensions/providers/prov_lib/middleware_legacy.backup
+
+cp provisioning/extensions/providers/prov_lib/middleware_provider_agnostic.nu \
+ provisioning/extensions/providers/prov_lib/middleware.nu
+
+
+
Test with existing infrastructure:
./provisioning/tools/test-provider-agnostic.nu run-all-tests
@@ -36876,71 +35318,68 @@ export def create_server [settings: record, server: record, check: bool, wait: b
}
# ... implement all required functions
-```plaintext
-
-### 2. Provider Discovery
-
-The registry will automatically discover the new provider on next initialization.
-
-### 3. Test New Provider
-
-```nushell
-# Check if discovered
+
+### 2. Provider Discovery
+
+The registry will automatically discover the new provider on next initialization.
+
+### 3. Test New Provider
+
+# Check if discovered
is-provider-available "digitalocean"
# Load and test
load-provider "digitalocean"
check-provider-health "digitalocean"
-```plaintext
-
-## Best Practices
-
-### Provider Development
-
-1. **Implement full interface** - All functions must be implemented
-2. **Handle errors gracefully** - Return appropriate error values
-3. **Follow naming conventions** - Use consistent function naming
-4. **Document capabilities** - Accurately declare what your provider supports
-5. **Test thoroughly** - Validate against the interface specification
-
-### Multi-Provider Deployments
-
-1. **Use capability-based selection** - Choose providers based on required features
-2. **Handle provider failures** - Design for provider unavailability
-3. **Optimize for cost/performance** - Mix providers strategically
-4. **Monitor cross-provider dependencies** - Understand inter-provider communication
-
-### Profile-Based Security
-
-```nushell
-# Environment profiles can restrict providers
+
+## Best Practices
+
+### Provider Development
+
+1. **Implement full interface** - All functions must be implemented
+2. **Handle errors gracefully** - Return appropriate error values
+3. **Follow naming conventions** - Use consistent function naming
+4. **Document capabilities** - Accurately declare what your provider supports
+5. **Test thoroughly** - Validate against the interface specification
+
+### Multi-Provider Deployments
+
+1. **Use capability-based selection** - Choose providers based on required features
+2. **Handle provider failures** - Design for provider unavailability
+3. **Optimize for cost/performance** - Mix providers strategically
+4. **Monitor cross-provider dependencies** - Understand inter-provider communication
+
+### Profile-Based Security
+
+# Environment profiles can restrict providers
PROVISIONING_PROFILE=production # Only allows certified providers
PROVISIONING_PROFILE=development # Allows all providers including local
-```plaintext
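The capability-based selection practice described above can be sketched in plain shell; the provider list and capability flags below are illustrative examples, not the real registry format:

```shell
# Pick the first provider whose declared capabilities include the required
# feature. Provider names and capability lists are hypothetical examples.
required="dns"

select_provider() {
  while read -r provider caps; do
    case " $caps " in
      *" $required "*) echo "$provider"; return 0 ;;
    esac
  done
  return 1
}

selected=$(select_provider <<'EOF'
local compute
aws compute dns storage
digitalocean compute storage
EOF
)
echo "selected: $selected"   # → selected: aws
```

The same "first provider that satisfies the requirement" rule is what a registry-backed implementation would apply against its discovered capability metadata.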
## Troubleshooting

### Common Issues

1. **Provider not found**
   - Check provider is in correct directory
   - Verify provider.nu exists and implements interface
   - Run `init-provider-registry` to refresh

2. **Interface validation failed**
   - Use `validate-provider-interface` to check compliance
   - Ensure all required functions are implemented
   - Check function signatures match interface

3. **Provider loading errors**
   - Check Nushell module syntax
   - Verify import paths are correct
   - Use `check-provider-health` for diagnostics

### Debug Commands

```nushell
# Registry diagnostics
get-provider-stats
list-providers --verbose
check-all-providers-health

# Loader diagnostics
get-loader-stats
```
## Performance Benefits

1. **Lazy Loading** - Providers loaded only when needed
2. **Caching** - Provider registry cached to disk
3. **Reduced Memory** - No hardcoded imports reducing memory usage
4. **Parallel Operations** - Multi-provider operations can run in parallel

## Future Enhancements

1. **Provider Plugins** - Support for external provider plugins
2. **Provider Versioning** - Multiple versions of same provider
3. **Provider Composition** - Compose providers for complex scenarios
4. **Provider Marketplace** - Community provider sharing

## API Reference

See the interface specification for complete function documentation:

```nushell
get-provider-interface-docs | table
```

This returns the complete API with signatures and descriptions for all provider interface functions.
Implemented graceful CTRL-C handling for sudo password prompts during server creation/generation operations.
When fix_local_hosts: true is set, the provisioning tool requires sudo access to modify /etc/hosts and SSH config. When a user cancels the sudo password prompt (no password, wrong password, timeout), the system would:
### 1. Sudo Cache Check (ssh.nu)

```nushell
def check_sudo_cached []: nothing -> bool {
    let result = (do --ignore-errors { ^sudo -n true } | complete)
    $result.exit_code == 0
}

def run_sudo_with_interrupt_check [
    # ...
] {
    # ...
    true
}
```
**Design Decision**: Return `bool` instead of throwing an error or calling `exit`. This lets the caller decide how to handle cancellation.

### 2. Pre-emptive Warning (ssh.nu:155-160)

```nushell
if $server.fix_local_hosts and not (check_sudo_cached) {
    print "\n⚠ Sudo access required for --fix-local-hosts"
    print "ℹ You will be prompted for your password, or press CTRL-C to cancel"
    print "  Tip: Run 'sudo -v' beforehand to cache credentials\n"
}
```

**Design Decision**: Warn users upfront so they're not surprised by the password prompt.

### 3. CTRL-C Detection (ssh.nu:171-199)

All sudo commands are wrapped with detection:

```nushell
let result = (do --ignore-errors { ^sudo <command> } | complete)
if $result.exit_code == 1 and ($result.stderr | str contains "password is required") {
    print "\n⚠ Operation cancelled"
    return false
}
```

**Design Decision**: Use `do --ignore-errors` + `complete` to capture both exit code and stderr without throwing exceptions.

### 4. State Accumulation Pattern (ssh.nu:122-129)

Using Nushell's `reduce` instead of mutable variables:

```nushell
let all_succeeded = ($settings.data.servers | reduce -f true { |server, acc|
    if $text_match == null or $server.hostname == $text_match {
        let result = (on_server_ssh $settings $server $ip_type $request_from $run)
        $acc and $result
    } else {
        $acc
    }
})
```

**Design Decision**: Nushell doesn't allow mutable variable capture in closures. Use `reduce` to accumulate boolean state across iterations.

### 5. Caller Handling (create.nu:262-266, generate.nu:269-273)

```nushell
let ssh_result = (on_server_ssh $settings $server "pub" "create" false)
if not $ssh_result {
    _print "\n✗ Server creation cancelled"
    return false
}
```

**Design Decision**: Check the return value and provide a context-specific message before returning.

## Error Flow Diagram

```plaintext
User presses CTRL-C during password prompt
    ↓
sudo exits with code 1, stderr: "password is required"
    ↓
...
    ↓
Return false to settings.nu
    ↓
settings.nu handles false gracefully (no append)
    ↓
Clean exit, no cryptic errors
```
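The detection step in the flow above can be exercised outside Nushell. The sketch below simulates it in plain shell, with a hypothetical `fake_sudo` standing in for `sudo` so the cancellation branch runs without a real password prompt:

```shell
# fake_sudo mimics sudo's behaviour when the password prompt is cancelled:
# exit code 1 plus a "password is required" message on stderr.
fake_sudo() {
  echo "sudo: a password is required" >&2
  return 1
}

# Same checks as the Nushell snippet: capture exit code and stderr,
# then match the failure text before treating it as a cancellation.
stderr=$(fake_sudo 2>&1 >/dev/null)
code=$?
if [ "$code" -eq 1 ] && printf '%s' "$stderr" | grep -q "password is required"; then
  echo "Operation cancelled"
fi
```

This prints `Operation cancelled`, mirroring the early-return branch in the real implementation.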
## Nushell Idioms Used

### 1. `do --ignore-errors` + `complete`

Captures stdout, stderr, and the exit code without throwing:

```nushell
let result = (do --ignore-errors { ^sudo command } | complete)
# result = { stdout: "...", stderr: "...", exit_code: 1 }
```

### 2. `reduce` for Accumulation

Instead of mutable variables in loops:

```nushell
# ❌ BAD - mutable capture in closure
mut all_succeeded = true
$servers | each { |s|
    $all_succeeded = false  # Error: capture of mutable variable
}

# ✅ GOOD - accumulate with reduce
let all_succeeded = ($servers | reduce -f true { |s, acc|
    $acc and (check_server $s)
})
```

### 3. Early Returns for Error Handling

```nushell
if not $condition {
    print "Error message"
    return false
}
# Continue with happy path
```
## Testing Scenarios

### Scenario 1: CTRL-C During First Sudo Command

```bash
provisioning -c server create
# Password: [CTRL-C]

# Expected Output:
# ⚠ Operation cancelled - sudo password required but not provided
# ℹ Run 'sudo -v' first to cache credentials
# ✗ Server creation cancelled
```

### Scenario 2: Pre-cached Credentials

```bash
sudo -v
provisioning -c server create
# Expected: No password prompt, smooth operation
```

### Scenario 3: Wrong Password 3 Times

```bash
provisioning -c server create
# Password: [wrong]
# Password: [wrong]
# Password: [wrong]
# Expected: Same as CTRL-C (treated as cancellation)
```

### Scenario 4: Multiple Servers, Cancel on Second

```bash
# If creating multiple servers and CTRL-C on second:
# - First server completes successfully
# - Second server shows cancellation message
# - Operation stops, doesn't proceed to third
```
## Maintenance Notes

### Adding New Sudo Commands

When adding new sudo commands to the codebase:

1. Wrap with `do --ignore-errors` + `complete`
2. Check for exit code 1 + "password is required"
3. Return `false` on cancellation
4. Let the caller handle the `false` return value

Example template:

```nushell
let result = (do --ignore-errors { ^sudo new-command } | complete)
if $result.exit_code == 1 and ($result.stderr | str contains "password is required") {
    print "\n⚠ Operation cancelled - sudo password required"
    return false
}
```
### Common Pitfalls

1. **Don't use `exit`**: It kills the entire process
2. **Don't use mutable variables in closures**: Use `reduce` instead
3. **Don't ignore return values**: Always check and propagate
4. **Don't forget the pre-check warning**: Users should know sudo is needed

## Future Improvements

1. **Sudo Credential Manager**: Optionally use a credential manager (keychain, etc.)
2. **Sudo-less Mode**: Alternative implementation that doesn't require root
3. **Timeout Handling**: Detect when sudo times out waiting for a password
4. **Multiple Password Attempts**: Distinguish between CTRL-C and a wrong password

## References

- Nushell `complete` command: <https://www.nushell.sh/commands/docs/complete.html>
- Nushell `reduce` command: <https://www.nushell.sh/commands/docs/reduce.html>
- Sudo exit codes: man sudo (exit code 1 = authentication failure)
- POSIX signal conventions: SIGINT (CTRL-C) = 130

## Related Files

- `provisioning/core/nulib/servers/ssh.nu` - Core implementation
- `provisioning/core/nulib/servers/create.nu` - Calls on_server_ssh
- `provisioning/core/nulib/servers/generate.nu` - Calls on_server_ssh
- `docs/troubleshooting/CTRL-C_SUDO_HANDLING.md` - User-facing docs
- `docs/quick-reference/SUDO_PASSWORD_HANDLING.md` - Quick reference

## Changelog

- **2025-01-XX**: Initial implementation with return values (v2)
- **2025-01-XX**: Fixed mutable variable capture with `reduce` pattern
- **2025-01-XX**: First attempt with `exit 130` (reverted, caused process termination)

**Status**: ✅ Complete and Production-Ready
**Version**: 1.0.0
- Testing
- Troubleshooting

This guide describes the metadata-driven authentication system implemented over 5 weeks across 14 command handlers and 12 major systems. The system provides:

- **Centralized Metadata**: All command definitions in Nickel with runtime validation
- **Automatic Auth Checks**: Pre-execution validation before handler logic
- **Performance Optimization**: 40-100x faster through metadata caching
- **Flexible Deployment**: Works with orchestrator, batch workflows, and direct CLI

```plaintext
┌─────────────────────────────────────────────────────────────┐
│                       User Command                          │
└─────────────────────────────────────────────────────────────┘
                             ...
                ┌────────────▼─────────────┐
                │     Result/Response      │
                └─────────────────────────┘
```
### Data Flow

1. **User Command** → CLI Dispatcher
2. **Dispatcher** → Load cached metadata (or parse Nickel)
3. **Validate** → Check auth, operation type, permissions
4. **Execute** → Call appropriate handler
5. **Return** → Result to user

### Metadata Caching

- **Location**: `~/.cache/provisioning/command_metadata.json`
- **Format**: Serialized JSON (pre-parsed for speed)
- **TTL**: 1 hour (configurable via `PROVISIONING_METADATA_TTL`)
- **Invalidation**: Automatic on `main.ncl` modification
- **Performance**: 40-100x faster than Nickel parsing
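The cache-invalidation rule under Metadata Caching can be sketched in shell terms. The cache path and the 3600-second default follow the bullets above; the helper itself is illustrative, not part of the tool:

```shell
cache="$HOME/.cache/provisioning/command_metadata.json"
ttl="${PROVISIONING_METADATA_TTL:-3600}"   # 1 hour default

cache_is_fresh() {
  [ -f "$cache" ] || return 1
  now=$(date +%s)
  # GNU stat first, BSD/macOS stat as fallback
  mtime=$(stat -c %Y "$cache" 2>/dev/null || stat -f %m "$cache")
  [ $(( now - mtime )) -lt "$ttl" ]
}

if cache_is_fresh; then
  echo "using cached metadata"
else
  echo "cache stale or missing - reparsing Nickel"
fi
```

A freshly written cache file passes the check; once it ages past the TTL (or `main.ncl` changes), the metadata is reparsed.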
## Installation

### Prerequisites

- Nushell 0.109.0+
- Nickel 1.15.0+
- SOPS 3.10.2 (for encrypted configs)
- Age 1.2.1 (for encryption)

### Installation Steps

```bash
# 1. Clone or update repository
git clone https://github.com/your-org/project-provisioning.git
cd project-provisioning

# ...

# Run tests
nu tests/test-fase5-e2e.nu
nu tests/test-security-audit-day20.nu
nu tests/test-metadata-cache-benchmark.nu
```
## Usage Guide

### Basic Commands

```bash
# Initialize authentication
provisioning login

# Enroll in MFA
provisioning mfa totp enroll

# Create infrastructure
provisioning server create --name web-01 --plan 1xCPU-2GB

# Deploy with orchestrator
provisioning workflow submit workflows/deployment.ncl --orchestrated

# Batch operations
provisioning batch submit workflows/batch-deploy.ncl

# Check without executing
provisioning server create --name test --check
```

### Authentication Flow

```bash
# 1. Login (required for production operations)
$ provisioning login
Username: alice@example.com
Password: ****

# ...
Are you sure? [yes/no] yes

$ provisioning taskserv delete postgres web-01
Auth check: Check auth for destructive operation
✓ Taskserv deleted
```

### Check Mode (Bypass Auth for Testing)

```bash
# Dry-run without auth checks
provisioning server create --name test --check

# Output: Shows what would happen, no auth checks
Dry-run mode - no changes will be made
✓ Would create server: test
✓ Would deploy taskservs: []
```
### Non-Interactive CI/CD Mode

```bash
# Automated mode - skip confirmations
provisioning server create --name web-01 --yes

# Batch operations
provisioning batch submit workflows/batch.ncl --yes --check

# With environment variable
PROVISIONING_NON_INTERACTIVE=1 provisioning server create --name web-02 --yes
```
## Migration Path

### Phase 1: From Old `input` to Metadata

**Old Pattern** (Before Fase 5):

```nushell
# Hardcoded auth check
let response = (input "Delete server? (yes/no): ")
if $response != "yes" { exit 1 }

export def delete-server [name: string, --yes] {
    if not $yes { ... manual confirmation ... }
    # ... deletion logic ...
}
```

**New Pattern** (After Fase 5):

```nushell
# Metadata header
# [command]
# name = "server delete"
# group = "infrastructure"
# ...

export def delete-server [name: string, --yes] {
    # Operation type: "delete" automatically detected
    # ... deletion logic ...
}
```
### Phase 2: Adding Metadata Headers

**For each script that was migrated:**

1. Add metadata header after shebang:

```nushell
#!/usr/bin/env nu
# [command]
# name = "server create"
# group = "infrastructure"
# ...

export def create-server [name: string] {
    # Logic here
}
```
2. Register in `provisioning/schemas/main.ncl`:

```nickel
let server_create = {
  name = "server create",
  domain = "infrastructure",
  description = "Create a new server",
  requirements = {
    interactive = false,
    requires_auth = true,
    auth_type = "jwt",
    side_effect_type = "create",
    min_permission = "write",
  },
} in
server_create
```

3. Handler integration (happens in dispatcher):

```nushell
# Dispatcher automatically:
# 1. Loads metadata for "server create"
# 2. Validates auth based on requirements
# 3. Checks permission levels
# 4. Calls handler if validation passes
```
### Phase 3: Validating Migration

```bash
# Validate metadata headers
nu utils/validate-metadata-headers.nu

# Find scripts by tag
nu utils/search-scripts.nu by-tags server delete

# List all migrated scripts
nu utils/search-scripts.nu list
```
## Developer Guide

### Adding New Commands with Metadata

**Step 1: Create metadata in main.ncl**

```nickel
let new_feature_command = {
  name = "feature command",
  domain = "infrastructure",
  description = "My new feature",
  requirements = {
    interactive = false,
    requires_auth = true,
    auth_type = "jwt",
    side_effect_type = "create",
    min_permission = "write",
  },
} in
new_feature_command
```

**Step 2: Add metadata header to script**

```nushell
#!/usr/bin/env nu
# [command]
# name = "feature command"
# group = "infrastructure"
# ...

export def feature-command [param: string] {
    # Implementation
}
```

**Step 3: Implement handler function**

```nushell
# Handler registered in dispatcher
export def handle-feature-command [
    action: string
    --flags
] {
    # ...
    # Your logic here
}
```

**Step 4: Test with check mode**

```bash
# Dry-run without auth
provisioning feature command --check

# Full execution
provisioning feature command --yes
```
### Metadata Field Reference

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| name | string | Yes | Command canonical name |
| domain | string | Yes | Command category (infrastructure, orchestration, etc.) |
| description | string | Yes | Human-readable description |
| requires_auth | bool | Yes | Whether auth is required |
| auth_type | enum | Yes | "none", "jwt", "mfa", "cedar" |
| side_effect_type | enum | Yes | "none", "create", "update", "delete", "deploy" |
| min_permission | enum | Yes | "read", "write", "admin", "superadmin" |
| interactive | bool | No | Whether command requires user input |
| slow_operation | bool | No | Whether operation takes >60 seconds |

### Standard Tags

**Groups**:

- infrastructure - Server, taskserv, cluster operations
- orchestration - Workflow, batch operations
- workspace - Workspace management
- authentication - Auth, MFA, tokens
- utilities - Helper commands

**Operations**:

- create, read, update, delete - CRUD operations
- destructive - Irreversible operations
- interactive - Requires user input

**Performance**:

- slow - Operation >60 seconds
- optimizable - Candidate for optimization

### Performance Optimization Patterns

**Pattern 1: For Long Operations**

```nushell
# Use orchestrator for operations >2 seconds
if (get-operation-duration "my-operation") > 2000 {
    submit-to-orchestrator $operation
    return "Operation submitted in background"
}
```

**Pattern 2: For Batch Operations**

```nushell
# Use batch workflows for multiple operations
nu -c "
use core/nulib/workflows/batch.nu *
batch submit workflows/batch-deploy.ncl --parallel-limit 5
"
```

**Pattern 3: For Metadata Overhead**

```nushell
# Cache hit rate optimization
# Current: 40-100x faster with warm cache
# Target: >95% cache hit rate
# Achieved: Metadata stays in cache for 1 hour (TTL)
```
## Testing

### Running Tests

```bash
# End-to-End Integration Tests
nu tests/test-fase5-e2e.nu

# Security Audit
nu tests/test-security-audit-day20.nu

# Performance Benchmarks
nu tests/test-metadata-cache-benchmark.nu

# Run all tests
for test in tests/test-*.nu { nu $test }
```
### Test Coverage

| Test Suite | Category | Coverage |
|-----------|----------|----------|
| E2E Tests | Integration | 7 test groups, 40+ checks |
| Security Audit | Auth | 5 audit categories, 100% pass |
| Benchmarks | Performance | 6 benchmark categories |

### Expected Results

- ✅ All tests pass
- ✅ No Nushell syntax violations
- ✅ Cache hit rate >95%
- ✅ Auth enforcement 100%
- ✅ Performance baselines met

## Troubleshooting

### Issue: Command not found

**Solution**: Ensure metadata is registered in `main.ncl`

```bash
# Check if command is in metadata
grep "command_name" provisioning/schemas/main.ncl
```
### Issue: Auth check failing

**Solution**: Verify user has required permission level

```bash
# Check current user permissions
provisioning auth whoami

# Check command requirements
nu -c "
use core/nulib/lib_provisioning/commands/traits.nu *
get-command-metadata 'server create'
"
```

### Issue: Slow command execution

**Solution**: Check cache status

```bash
# Force cache reload
rm ~/.cache/provisioning/command_metadata.json

# Check cache hit rate
nu tests/test-metadata-cache-benchmark.nu
```
### Issue: Nushell syntax error

**Solution**: Run compliance check

```bash
# Validate Nushell compliance
nu --ide-check 100 <file.nu>

# Check for common issues
grep "try {" <file.nu>     # Should be empty
grep "let mut" <file.nu>   # Should be empty
```
## Performance Characteristics

### Baseline Metrics

| Operation | Cold | Warm | Improvement |
|-----------|------|------|-------------|
| Metadata Load | 200ms | 2-5ms | 40-100x |
| Auth Check | <5ms | <5ms | Same |
| Command Dispatch | <10ms | <10ms | Same |
| Total Command | ~210ms | ~10ms | 21x |

### Real-World Impact

```plaintext
Scenario: 20 sequential commands
  Without cache: 20 × 200ms = 4 seconds
  With cache:    1 × 200ms + 19 × 5ms = 295ms
  Speedup:       ~13.5x faster
```
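The quoted figures follow directly from the baseline table: one cold load plus nineteen warm cache hits. A quick shell arithmetic check:

```shell
# One cold metadata load (200ms) plus 19 warm cache hits (5ms each).
without_cache=$((20 * 200))                    # 4000 ms
with_cache=$((1 * 200 + 19 * 5))               # 295 ms
speedup=$((without_cache * 10 / with_cache))   # speedup in tenths of x
echo "${without_cache}ms vs ${with_cache}ms (~$((speedup / 10)).$((speedup % 10))x)"
# → 4000ms vs 295ms (~13.5x)
```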
## Next Steps

1. **Deploy**: Use installer to deploy to production
2. **Monitor**: Watch cache hit rates (target >95%)
3. **Extend**: Add new commands following migration pattern
4. **Optimize**: Use profiling to identify slow operations
5. **Maintain**: Run validation scripts regularly

---

**For Support**: See `docs/troubleshooting-guide.md`
**For Architecture**: See `docs/architecture/`
**For User Guide**: See `docs/user/AUTHENTICATION_LAYER_GUIDE.md`
This guide walks through migrating from the old config.defaults.toml system to the new workspace-based target configuration system.
```plaintext
Old System                New System
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
config.defaults.toml    → ~/workspaces/{name}/config/provisioning.yaml
config.user.toml        → ~/Library/Application Support/provisioning/ws_{name}.yaml
providers/{name}/config → ~/workspaces/{name}/config/providers/{name}.toml
                        → ~/workspaces/{name}/config/platform/{service}.toml
```
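The mapping above can be expressed as a small helper that derives the new per-workspace locations from a workspace name. The `ws_paths` function is illustrative only (it merely prints paths; `aws` is an example provider):

```shell
# Derive the new config locations for a given workspace name.
ws_paths() {
  ws="$1"
  echo "main config:  $HOME/workspaces/$ws/config/provisioning.yaml"
  echo "user context: $HOME/Library/Application Support/provisioning/ws_${ws}.yaml"
  echo "provider:     $HOME/workspaces/$ws/config/providers/aws.toml"
}

ws_paths "my-project"
```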
## Step-by-Step Migration

### 1. Pre-Migration Check

```bash
# Check current configuration
provisioning env

# Backup current configuration
cp -r provisioning/config provisioning/config.backup.$(date +%Y%m%d)
```

### 2. Run Migration Script (Dry Run)

```bash
# Preview what will be done
./provisioning/scripts/migrate-to-target-configs.nu \
  --workspace-name "my-project" \
  --dry-run
```

### 3. Execute Migration

```bash
# Run with backup
./provisioning/scripts/migrate-to-target-configs.nu \
  --workspace-name "my-project" \
  --backup

# With a custom workspace path
./provisioning/scripts/migrate-to-target-configs.nu \
  --workspace-name "my-project" \
  --workspace-path "$HOME/my-custom-path" \
  --backup
```
### 4. Verify Migration

```bash
# Validate workspace configuration
provisioning workspace config validate

# Check workspace status
provisioning workspace info

# List all workspaces
provisioning workspace list
```

### 5. Test Configuration

```bash
# Test with new configuration
provisioning --check server list

# Test provider configuration
provisioning provider validate aws

# Test platform configuration
provisioning platform orchestrator status
```
### 6. Update Environment Variables (if any)

```bash
# Old approach (no longer needed)
# export PROVISIONING_CONFIG_PATH="/path/to/config.defaults.toml"

# New approach - workspace is auto-detected from context
# Or set explicitly:
export PROVISIONING_WORKSPACE="my-project"
```

### 7. Clean Up Old Configuration

```bash
# After verifying everything works
rm provisioning/config/config.defaults.toml
rm provisioning/config/config.user.toml

# Keep backup for reference
# provisioning/config.backup.YYYYMMDD/
```
## Migration Script Options

### Required Arguments

- `--workspace-name`: Name for the new workspace (default: "default")

### Optional Arguments

- `--workspace-path`: Custom path for workspace (default: `~/workspaces/{name}`)
- `--dry-run`: Preview migration without making changes
- `--backup`: Create backup of old configuration files

### Examples

```bash
# Basic migration with default workspace
./provisioning/scripts/migrate-to-target-configs.nu --backup

# Custom workspace name
# ...

# Dry run for production
./provisioning/scripts/migrate-to-target-configs.nu \
  --workspace-name "production" \
  --dry-run
```
-
-## New Workspace Structure
-
-After migration, your workspace will look like:
-
-```plaintext
-~/workspaces/{name}/
+
+
+After migration, your workspace will look like:
+~/workspaces/{name}/
├── config/
│ ├── provisioning.yaml # Main workspace config
│ ├── providers/
@@ -37941,21 +36202,14 @@ After migration, your workspace will look like:
│ └── {infra-name}/ # Infrastructure definitions
├── .cache/ # Cache directory
└── .runtime/ # Runtime data
-```plaintext
-
-User context stored at:
-
-```plaintext
-~/Library/Application Support/provisioning/
+
+User context stored at:
+~/Library/Application Support/provisioning/
└── ws_{name}.yaml # User workspace context
-```plaintext
-
-## Configuration Schema Validation
-
-### Validate Workspace Config
-
-```bash
-# Validate main workspace configuration
+
+
+
+# Validate main workspace configuration
provisioning workspace config validate
# Validate specific provider
@@ -37963,12 +36217,9 @@ provisioning provider validate aws
# Validate platform service
provisioning platform validate orchestrator
-```plaintext
-
-### Manual Validation
-
-```nushell
-use provisioning/core/nulib/lib_provisioning/config/schema_validator.nu *
+
+
+use provisioning/core/nulib/lib_provisioning/config/schema_validator.nu *
# Validate workspace config
let config = (open ~/workspaces/my-project/config/provisioning.yaml | from yaml)
@@ -37979,33 +36230,22 @@ print-validation-results $result
let aws_config = (open ~/workspaces/my-project/config/providers/aws.toml | from toml)
let result = (validate-provider-config "aws" $aws_config)
print-validation-results $result
-```plaintext
-
-## Troubleshooting
-
-### Migration Fails
-
-**Problem**: Migration script fails with "workspace path already exists"
-
-**Solution**:
-
-```bash
-# Use merge mode
+
+
+
+Problem : Migration script fails with “workspace path already exists”
+Solution :
+# Use merge mode
# The script will prompt for confirmation
./provisioning/scripts/migrate-to-target-configs.nu --workspace-name "existing"
# Or choose different workspace name
./provisioning/scripts/migrate-to-target-configs.nu --workspace-name "existing-v2"
-```plaintext
-
-### Config Not Found
-
-**Problem**: Commands can't find configuration after migration
-
-**Solution**:
-
-```bash
-# Check workspace context
+
+
+Problem : Commands can’t find configuration after migration
+Solution :
+# Check workspace context
provisioning workspace info
# Ensure workspace is active
@@ -38013,16 +36253,11 @@ provisioning workspace activate my-project
# Manually set workspace
export PROVISIONING_WORKSPACE="my-project"
-```plaintext
-
-### Validation Errors
-
-**Problem**: Configuration validation fails after migration
-
-**Solution**:
-
-```bash
-# Check validation output
+
+
+Problem : Configuration validation fails after migration
+Solution :
+# Check validation output
provisioning workspace config validate
# Review and fix errors in config files
@@ -38030,16 +36265,11 @@ vim ~/workspaces/my-project/config/provisioning.yaml
# Validate again
provisioning workspace config validate
-```plaintext
-
-### Provider Configuration Issues
-
-**Problem**: Provider authentication fails after migration
-
-**Solution**:
-
-```bash
-# Check provider configuration
+
+
+Problem : Provider authentication fails after migration
+Solution :
+# Check provider configuration
cat ~/workspaces/my-project/config/providers/aws.toml
# Update credentials
@@ -38047,14 +36277,10 @@ vim ~/workspaces/my-project/config/providers/aws.toml
# Validate provider config
provisioning provider validate aws
-```plaintext
-
-## Testing Migration
-
-Run the test suite to verify migration:
-
-```bash
-# Run configuration validation tests
+
+
+Run the test suite to verify migration:
+# Run configuration validation tests
nu provisioning/tests/config_validation_tests.nu
# Run integration tests
@@ -38063,14 +36289,10 @@ provisioning test --workspace my-project
# Test specific functionality
provisioning --check server list
provisioning --check taskserv list
-```plaintext
-
-## Rollback Procedure
-
-If migration causes issues, rollback:
-
-```bash
-# Restore old configuration
+
+
+If migration causes issues, rollback:
+# Restore old configuration
cp -r provisioning/config.backup.YYYYMMDD/* provisioning/config/
# Remove new workspace
@@ -38082,47 +36304,57 @@ unset PROVISIONING_WORKSPACE
# Verify old config works
provisioning env
-```plaintext
-
-## Migration Checklist
-
-- [ ] Backup current configuration
-- [ ] Run migration script in dry-run mode
-- [ ] Review dry-run output
-- [ ] Execute migration with backup
-- [ ] Verify workspace structure created
-- [ ] Validate all configurations
-- [ ] Test provider authentication
-- [ ] Test platform services
-- [ ] Run test suite
-- [ ] Update documentation/scripts if needed
-- [ ] Clean up old configuration files
-- [ ] Document any custom changes
-
-## Next Steps
-
-After successful migration:
-
-1. **Review Workspace Configuration**: Customize `provisioning.yaml` for your needs
-2. **Configure Providers**: Update provider configs in `config/providers/`
-3. **Configure Platform Services**: Update platform configs in `config/platform/`
-4. **Test Operations**: Run `--check` mode commands to verify
-5. **Update CI/CD**: Update pipelines to use new workspace system
-6. **Document Changes**: Update team documentation
-
-## Additional Resources
-
-- [Workspace Configuration Schema](../config/workspace.schema.toml)
-- [Provider Configuration Schemas](../extensions/providers/*/config.schema.toml)
-- [Platform Configuration Schemas](../platform/*/config.schema.toml)
-- [Configuration Validation Guide](CONFIG_VALIDATION.md)
-- [Workspace Management Guide](WORKSPACE_GUIDE.md)
+
+
+
+After successful migration:
+
+Review Workspace Configuration : Customize provisioning.yaml for your needs
+Configure Providers : Update provider configs in config/providers/
+Configure Platform Services : Update platform configs in config/platform/
+Test Operations : Run --check mode commands to verify
+Update CI/CD : Update pipelines to use new workspace system
+Document Changes : Update team documentation
+
+
+
Version : 0.2.0
Date : 2025-10-08
Status : Active
-
+
The KMS service has been simplified from supporting 4 backends (Vault, AWS KMS, Age, Cosmian) to supporting only 2 backends:
Age : Development and local testing
@@ -38154,7 +36386,7 @@ After successful migration:
🔄 README and documentation
🔄 Cargo.toml dependencies
-
+
Unnecessary Complexity : 4 backends for simple use cases
@@ -38166,12 +36398,12 @@ After successful migration:
Clear Separation : Age = dev, Cosmian = prod
-Faster Compilation : Removed AWS SDK (saves ~30s)
+Faster Compilation : Removed AWS SDK (saves ~30 s)
Offline Development : Age works without network
Enterprise Security : Cosmian provides confidential computing
Easier Maintenance : 2 backends instead of 4
-
+
If you were using Vault or AWS KMS for development:
@@ -38468,7 +36700,7 @@ curl -X POST $COSMIAN_KMS_URL/api/v1/encrypt \
export PROVISIONING_ENV=prod
cargo run --bin kms-service
-
+
# Check keys exist
ls -la ~/.config/provisioning/age/
@@ -38495,7 +36727,7 @@ cargo clean
cargo update
cargo build --release
-
+
-
+
The KMS simplification reduces complexity while providing better separation between development and production use cases. Age offers a fast, offline solution for development, while Cosmian KMS provides enterprise-grade security for production deployments.
For questions or issues, please refer to the documentation or open an issue.
@@ -38601,7 +36833,7 @@ Decommission old KMS infrastructure
See Also : Architecture Documentation
-Definition : A specialized component that performs a specific task in the system orchestration (e.g., autonomous execution units in the orchestrator).
+Definition : A specialized component that performs a specific task in the system orchestration (for example, autonomous execution units in the orchestrator).
Where Used :
Task orchestration
@@ -38653,7 +36885,7 @@ Decommission old KMS infrastructure
Auth Quick Reference
-
+
Definition : The process of determining user permissions using Cedar policy language.
Where Used :
@@ -38675,7 +36907,7 @@ Decommission old KMS infrastructure
Related Concepts : Workflow, Operation, Orchestrator
Commands :
-provisioning batch submit workflow.k
+provisioning batch submit workflow.ncl
provisioning batch list
provisioning batch status <id>
@@ -38757,7 +36989,7 @@ provisioning cluster delete <name>
See Also : Infrastructure Management
-
+
Definition : System capabilities ensuring adherence to regulatory requirements (GDPR, SOC2, ISO 27001).
Where Used :
@@ -38784,7 +37016,7 @@ provisioning cluster delete <name>
See Also : Configuration Guide
-
+
Definition : Web-based UI for managing provisioning operations built with Ratatui/Crossterm.
Where Used :
@@ -38832,8 +37064,8 @@ provisioning cluster delete <name>
Cluster deployment sequencing
Related Concepts : Version, Taskserv, Workflow
-Schema : provisioning/kcl/dependencies.k
-See Also : KCL Dependency Patterns
+Schema : provisioning/schemas/dependencies.ncl
+See Also : Nickel Dependency Patterns
Definition : System health checking and troubleshooting assistance.
@@ -38953,7 +37185,7 @@ provisioning guide customize
See Also : Guides
-
+
Definition : Automated verification that a component is running correctly.
Where Used :
@@ -38986,7 +37218,7 @@ provisioning guide customize
-
+
Definition : A named collection of servers, configurations, and deployments managed as a unit.
Where Used :
@@ -39045,8 +37277,8 @@ provisioning generate infra --new <name>
See Also : JWT Auth Implementation
-
-Definition : Declarative configuration language used for infrastructure definitions.
+
+Definition : Declarative configuration language with type safety and lazy evaluation for infrastructure definitions.
Where Used :
Infrastructure schemas
@@ -39054,9 +37286,9 @@ provisioning generate infra --new <name>
Configuration validation
Related Concepts : Schema, Configuration, Validation
-Version : 0.11.3+
-Location : provisioning/kcl/*.k
-See Also : KCL Quick Reference
+Version : 1.15.0+
+Location : provisioning/schemas/*.ncl
+See Also : Nickel Quick Reference
Definition : Encryption key management system; simplified to two backends (Age for development, Cosmian for production).
@@ -39152,7 +37384,7 @@ provisioning module list taskserv
See Also : Module System
-
+
Definition : Primary shell and scripting language (v0.107.1) used throughout the platform.
Where Used :
@@ -39186,7 +37418,7 @@ provisioning module list taskserv
Related Concepts : Workflow, Task, Action
-
+
Definition : Hybrid Rust/Nushell service coordinating complex infrastructure operations.
Where Used :
@@ -39258,7 +37490,7 @@ provisioning providers list
See Also : Quick Provider Guide
-
+
Definition : Condensed command and configuration reference for rapid lookup.
Where Used :
@@ -39334,26 +37566,25 @@ provisioning guide quickstart
-Definition : KCL type definition specifying structure and validation rules.
+Definition : Nickel type definition specifying structure and validation rules.
Where Used :
Configuration validation
Type safety
Documentation
-Related Concepts : KCL, Validation, Type
+Related Concepts : Nickel, Validation, Type
Example :
- schema ServerConfig:
- hostname: str
- cores: int
- memory: int
-
- check:
- cores > 0, "Cores must be positive"
+let ServerConfig = {
+  hostname | String,
+  cores | Number | std.contract.from_predicate (fun c => c > 0),
+  memory | Number,
+} in
+ServerConfig
-See Also : KCL Development
+See Also : Nickel Development
-
+
Definition : System for secure storage and retrieval of sensitive data.
Where Used :
@@ -39448,7 +37679,7 @@ provisioning ssh connect <server>
See Also : SSH Temporal Keys User Guide
-
+
Definition : Tracking and persisting workflow execution state.
Where Used :
@@ -39538,7 +37769,7 @@ provisioning test env cluster <cluster>
provisioning mfa totp verify <code>
-
+
Definition : System problem diagnosis and resolution guidance.
Where Used :
@@ -39576,7 +37807,7 @@ provisioning version apply
See Also : Update Infrastructure Guide
-
+
Definition : Verification that configuration or infrastructure meets requirements.
Where Used :
@@ -39584,7 +37815,7 @@ provisioning version apply
Schema validation
Pre-deployment verification
-Related Concepts : Schema, KCL, Check
+Related Concepts : Schema, Nickel, Check
Commands :
provisioning validate config
provisioning validate infrastructure
@@ -39672,7 +37903,7 @@ provisioning workspace create <name>
CLI Command-Line Interface User Interface
GDPR General Data Protection Regulation Compliance
JWT JSON Web Token Security
-KCL KCL Configuration Language Configuration
+Nickel Nickel Configuration Language Configuration
KMS Key Management Service Security
MCP Model Context Protocol Platform
MFA Multi-Factor Authentication Security
@@ -39700,7 +37931,7 @@ provisioning workspace create <name>
Configuration :
-Config, KCL, Schema, Validation, Environment, Layer, Workspace
+Config, Nickel, Schema, Validation, Environment, Layer, Workspace
Workflow & Operations :
@@ -39742,7 +37973,7 @@ provisioning workspace create <name>
Extension
Provider
Taskserv
-KCL
+Nickel
Schema
Template
Plugin
@@ -39759,10 +37990,10 @@ provisioning workspace create <name>
-Consistency : Use the same term throughout documentation (e.g., “Taskserv” not “task service” or “task-serv”)
+Consistency : Use the same term throughout documentation (for example, “Taskserv” not “task service” or “task-serv”)
Capitalization :
-Proper nouns and acronyms: CAPITALIZE (KCL, JWT, MFA)
+Proper nouns and acronyms: CAPITALIZE (Nickel, JWT, MFA)
Generic terms: lowercase (server, cluster, workflow)
Platform-specific terms: Title Case (Taskserv, Workspace, Orchestrator)
@@ -39816,7 +38047,7 @@ provisioning workspace create <name>
Review related terms for consistency
-
+
Version Date Changes
1.0.0 2025-10-10 Initial comprehensive glossary
@@ -39841,7 +38072,7 @@ provisioning workspace create <name>
Best Practices
-
+
The provisioning system supports two complementary approaches for provider management:
Module-Loader : Symlink-based local development with dynamic discovery
@@ -39858,32 +38089,27 @@ provisioning providers install upcloud wuji
# Internal Process:
# 1. Discovers provider in extensions/providers/upcloud/
-# 2. Creates symlink: workspace/infra/wuji/.kcl-modules/upcloud_prov -> extensions/providers/upcloud/kcl/
-# 3. Updates workspace/infra/wuji/kcl.mod with local path dependency
+# 2. Creates symlink: workspace/infra/wuji/.nickel-modules/upcloud_prov -> extensions/providers/upcloud/nickel/
+# 3. Updates workspace/infra/wuji/manifest.toml with local path dependency
# 4. Updates workspace/infra/wuji/providers.manifest.yaml
-```plaintext
-
-### Key Features
-
-✅ **Instant Changes**: Edit code in `extensions/providers/`, immediately available in infrastructure
-✅ **Auto-Discovery**: Automatically finds all providers in extensions/
-✅ **Simple Commands**: `providers install/remove/list/validate`
-✅ **Easy Debugging**: Direct access to source code
-✅ **No Packaging**: Skip build/package step during development
-
-### Best Use Cases
-
-- 🔧 **Active Development**: Writing new provider features
-- 🧪 **Testing**: Rapid iteration and testing cycles
-- 🏠 **Local Infrastructure**: Single machine or small team
-- 📝 **Debugging**: Need to modify and test provider code
-- 🎓 **Learning**: Understanding how providers work
-
-### Example Workflow
-
-```bash
-# 1. List available providers
-provisioning providers list --kcl
+
+
+✅ Instant Changes : Edit code in extensions/providers/, immediately available in infrastructure
+✅ Auto-Discovery : Automatically finds all providers in extensions/
+✅ Simple Commands : providers install/remove/list/validate
+✅ Easy Debugging : Direct access to source code
+✅ No Packaging : Skip build/package step during development
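The "instant changes" property follows from step 2 of the install: the workspace reaches the provider source through a symlink, so edits under `extensions/` are visible with no copy or repackage step. A stripped-down sketch of that mechanism (directory names taken from this guide):

```bash
set -eu
# Provider source lives under extensions/; the workspace links to it.
mkdir -p extensions/providers/upcloud/nickel
mkdir -p workspace/infra/wuji/.nickel-modules
echo 'version = "0.0.1"' > extensions/providers/upcloud/nickel/manifest.toml

ln -sfn "$(pwd)/extensions/providers/upcloud/nickel" \
        workspace/infra/wuji/.nickel-modules/upcloud_prov

# Any edit to the source is immediately visible through the link:
echo 'version = "0.0.2"' > extensions/providers/upcloud/nickel/manifest.toml
cat workspace/infra/wuji/.nickel-modules/upcloud_prov/manifest.toml
```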
+
+
+🔧 Active Development : Writing new provider features
+🧪 Testing : Rapid iteration and testing cycles
+🏠 Local Infrastructure : Single machine or small team
+📝 Debugging : Need to modify and test provider code
+🎓 Learning : Understanding how providers work
+
+
+# 1. List available providers
+provisioning providers list
# 2. Install provider for infrastructure
provisioning providers install upcloud wuji
@@ -39892,78 +38118,63 @@ provisioning providers install upcloud wuji
provisioning providers validate wuji
# 4. Edit provider code
-vim extensions/providers/upcloud/kcl/server_upcloud.k
+vim extensions/providers/upcloud/nickel/server_upcloud.ncl
# 5. Test changes immediately (no repackaging!)
cd workspace/infra/wuji
-kcl run defs/servers.k
+nickel export main.ncl
# 6. Remove when done
provisioning providers remove upcloud wuji
-```plaintext
-
-### File Structure
-
-```plaintext
-extensions/providers/upcloud/
-├── kcl/
-│ ├── kcl.mod
-│ ├── server_upcloud.k
-│ └── network_upcloud.k
+
+
+extensions/providers/upcloud/
+├── nickel/
+│ ├── manifest.toml
+│ ├── server_upcloud.ncl
+│ └── network_upcloud.ncl
└── README.md
workspace/infra/wuji/
-├── .kcl-modules/
-│ └── upcloud_prov -> ../../../../extensions/providers/upcloud/kcl/ # Symlink
-├── kcl.mod # Updated with local path dependency
+├── .nickel-modules/
+│ └── upcloud_prov -> ../../../../extensions/providers/upcloud/nickel/ # Symlink
+├── manifest.toml # Updated with local path dependency
├── providers.manifest.yaml # Tracks installed providers
-└── defs/
- └── servers.k
-```plaintext
-
----
-
-## Provider Packs Approach
-
-### Purpose
-
-Create versioned, distributable artifacts for production deployments and team collaboration.
-
-### How It Works
-
-```bash
-# Package providers into distributable artifacts
+└── schemas/
+ └── servers.ncl
+
+
+
+
+Create versioned, distributable artifacts for production deployments and team collaboration.
+
+# Package providers into distributable artifacts
export PROVISIONING=/Users/Akasha/project-provisioning/provisioning
./provisioning/core/cli/pack providers
# Internal Process:
-# 1. Enters each provider's kcl/ directory
-# 2. Runs: kcl mod pkg --target distribution/packages/
+# 1. Enters each provider's nickel/ directory
+# 2. Runs: nickel export . --format json (generates JSON for distribution)
# 3. Creates: upcloud_prov_0.0.1.tar
# 4. Generates metadata: distribution/registry/upcloud_prov.json
-```plaintext
-
-### Key Features
-
-✅ **Versioned Artifacts**: Immutable, reproducible packages
-✅ **Portable**: Share across teams and environments
-✅ **Registry Publishing**: Push to artifact registries
-✅ **Metadata**: Version, maintainer, license information
-✅ **Production-Ready**: What you package is what you deploy
-
-### Best Use Cases
-
-- 🚀 **Production Deployments**: Stable, tested provider versions
-- 📦 **Distribution**: Share across teams or organizations
-- 🔄 **CI/CD Pipelines**: Automated build and deploy
-- 📊 **Version Control**: Track provider versions explicitly
-- 🌐 **Registry Publishing**: Publish to artifact registries
-- 🔒 **Compliance**: Immutable artifacts for auditing
-
-### Example Workflow
-
-```bash
-# Set environment variable
+
+
+✅ Versioned Artifacts : Immutable, reproducible packages
+✅ Portable : Share across teams and environments
+✅ Registry Publishing : Push to artifact registries
+✅ Metadata : Version, maintainer, license information
+✅ Production-Ready : What you package is what you deploy
+
+
+🚀 Production Deployments : Stable, tested provider versions
+📦 Distribution : Share across teams or organizations
+🔄 CI/CD Pipelines : Automated build and deploy
+📊 Version Control : Track provider versions explicitly
+🌐 Registry Publishing : Publish to artifact registries
+🔒 Compliance : Immutable artifacts for auditing
+
+
+# Set environment variable
export PROVISIONING=/Users/Akasha/project-provisioning/provisioning
# 1. Package all providers
@@ -39986,12 +38197,9 @@ export PROVISIONING=/Users/Akasha/project-provisioning/provisioning
# 5. Upload to registry (your implementation)
# rsync distribution/packages/*.tar repo.jesusperez.pro:/registry/
-```plaintext
-
-### File Structure
-
-```plaintext
-provisioning/
+
+
+provisioning/
├── distribution/
│ ├── packages/
│ │ ├── provisioning_0.0.1.tar # Core schemas
@@ -40004,12 +38212,9 @@ provisioning/
│ ├── aws_prov.json
│ └── local_prov.json
└── extensions/providers/ # Source code
-```plaintext
-
-### Package Metadata Example
-
-```json
-{
+
+
+{
"name": "upcloud_prov",
"version": "0.0.1",
"package_file": "/path/to/upcloud_prov_0.0.1.tar",
@@ -40019,51 +38224,41 @@ provisioning/
"license": "MIT",
"homepage": "https://github.com/jesusperezlorenzo/provisioning"
}
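Consumers of the registry can read fields such as `version` straight from this metadata; with only POSIX tools it can be scraped like this (a sketch — a real pipeline would use a proper JSON parser):

```bash
set -eu
# A local copy of the registry metadata shown above.
cat > upcloud_prov.json <<'EOF'
{
  "name": "upcloud_prov",
  "version": "0.0.1",
  "license": "MIT"
}
EOF

# Pull the "version" value out of the metadata file.
version=$(sed -n 's/.*"version"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p' upcloud_prov.json)
echo "$version"   # prints: 0.0.1
```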
-```plaintext
-
----
-
-## Comparison Matrix
-
-| Feature | Module-Loader | Provider Packs |
-|---------|--------------|----------------|
-| **Speed** | ⚡ Instant (symlinks) | 📦 Requires packaging |
-| **Versioning** | ❌ No explicit versions | ✅ Semantic versioning |
-| **Portability** | ❌ Local filesystem only | ✅ Distributable archives |
-| **Development** | ✅ Excellent (live reload) | ⚠️ Need repackage cycle |
-| **Production** | ⚠️ Mutable source | ✅ Immutable artifacts |
-| **Discovery** | ✅ Auto-discovery | ⚠️ Manual tracking |
-| **Team Sharing** | ⚠️ Git repository only | ✅ Registry + Git |
-| **Debugging** | ✅ Direct source access | ❌ Need to unpack |
-| **Rollback** | ⚠️ Git revert | ✅ Version pinning |
-| **Compliance** | ❌ Hard to audit | ✅ Signed artifacts |
-| **Setup Time** | ⚡ Seconds | ⏱️ Minutes |
-| **CI/CD** | ⚠️ Not ideal | ✅ Perfect |
-
----
-
-## Recommended Hybrid Workflow
-
-### Development Phase
-
-```bash
-# 1. Start with module-loader for development
+
+
+
+Feature Module-Loader Provider Packs
+Speed ⚡ Instant (symlinks) 📦 Requires packaging
+Versioning ❌ No explicit versions ✅ Semantic versioning
+Portability ❌ Local filesystem only ✅ Distributable archives
+Development ✅ Excellent (live reload) ⚠️ Need repackage cycle
+Production ⚠️ Mutable source ✅ Immutable artifacts
+Discovery ✅ Auto-discovery ⚠️ Manual tracking
+Team Sharing ⚠️ Git repository only ✅ Registry + Git
+Debugging ✅ Direct source access ❌ Need to unpack
+Rollback ⚠️ Git revert ✅ Version pinning
+Compliance ❌ Hard to audit ✅ Signed artifacts
+Setup Time ⚡ Seconds ⏱️ Minutes
+CI/CD ⚠️ Not ideal ✅ Perfect
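The matrix reduces to a rule of thumb: pick by phase of work. That decision can be expressed as a sketch (the phase names are illustrative, not CLI values):

```bash
# Map a phase of work to the recommended provider-management approach.
pick_approach() {
  case "$1" in
    dev|debug|local) echo "module-loader" ;;
    release|prod|ci) echo "provider-packs" ;;
    *)               echo "module-loader" ;;
  esac
}

pick_approach dev    # prints: module-loader
pick_approach prod   # prints: provider-packs
```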
+
+
+
+
+
+# 1. Start with module-loader for development
provisioning providers list
provisioning providers install upcloud wuji
# 2. Develop and iterate quickly
-vim extensions/providers/upcloud/kcl/server_upcloud.k
+vim extensions/providers/upcloud/nickel/server_upcloud.ncl
# Test immediately - no packaging needed
# 3. Validate before release
provisioning providers validate wuji
-kcl run workspace/infra/wuji/defs/servers.k
-```plaintext
-
-### Release Phase
-
-```bash
-# 4. Create release packages
+nickel export workspace/infra/wuji/main.ncl
+
+
+# 4. Create release packages
export PROVISIONING=/Users/Akasha/project-provisioning/provisioning
./provisioning/core/cli/pack providers
@@ -40076,29 +38271,21 @@ git push origin v0.0.2
# 7. Publish to registry (your workflow)
rsync distribution/packages/*.tar user@repo.jesusperez.pro:/registry/v0.0.2/
-```plaintext
-
-### Production Deployment
-
-```bash
-# 8. Download specific version from registry
+
+
+# 8. Download specific version from registry
wget https://repo.jesusperez.pro/registry/v0.0.2/upcloud_prov_0.0.2.tar
# 9. Extract and install
tar -xf upcloud_prov_0.0.2.tar -C infrastructure/providers/
# 10. Use in production infrastructure
-# (Configure kcl.mod to point to extracted package)
-```plaintext
-
----
-
-## Command Reference
-
-### Module-Loader Commands
-
-```bash
-# List all available providers
+# (Configure manifest.toml to point to extracted package)
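Steps 8–10 can be exercised end-to-end with plain `tar`; a minimal round trip that builds a toy package and installs it the same way (file names follow the doc's convention):

```bash
set -eu
# Build a toy package the way pack would lay it out...
mkdir -p build/upcloud_prov
echo 'name = "upcloud_prov"' > build/upcloud_prov/manifest.toml
tar -cf upcloud_prov_0.0.2.tar -C build upcloud_prov

# ...then install it as in the production steps above.
mkdir -p infrastructure/providers
tar -xf upcloud_prov_0.0.2.tar -C infrastructure/providers/
ls infrastructure/providers/upcloud_prov/manifest.toml
```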
+
+
+
+
+# List all available providers
provisioning providers list [--format table|json|yaml]
# Show provider information
@@ -40118,12 +38305,9 @@ provisioning providers validate <infra>
# Sync KCL dependencies
./provisioning/core/cli/module-loader sync-kcl <infra>
-```plaintext
-
-### Provider Pack Commands
-
-```bash
-# Set environment variable (required)
+
+
+# Set environment variable (required)
export PROVISIONING=/path/to/provisioning
# Package core provisioning schemas
@@ -40140,37 +38324,23 @@ export PROVISIONING=/path/to/provisioning
# Clean old packages
./provisioning/core/cli/pack clean [--keep-latest 3] [--dry-run]
-```plaintext
-
----
-
-## Real-World Scenarios
-
-### Scenario 1: Solo Developer - Local Infrastructure
-
-**Situation**: Working alone on local infrastructure projects
-
-**Recommendation**: Module-Loader only
-
-```bash
-# Simple and fast
+
+
+
+
+Situation : Working alone on local infrastructure projects
+Recommendation : Module-Loader only
+# Simple and fast
providers install upcloud homelab
providers install aws cloud-backup
# Edit and test freely
-```plaintext
-
-**Why**: No need for versioning, packaging overhead unnecessary.
-
----
-
-### Scenario 2: Small Team - Shared Development
-
-**Situation**: 2-5 developers sharing code via Git
-
-**Recommendation**: Module-Loader + Git
-
-```bash
-# Each developer
+
+Why : No need for versioning, packaging overhead unnecessary.
+
+
+Situation : 2-5 developers sharing code via Git
+Recommendation : Module-Loader + Git
+# Each developer
git clone repo
providers install upcloud project-x
# Make changes, commit to Git
@@ -40179,20 +38349,13 @@ git push
# Others pull changes
git pull
# Changes immediately available via symlinks
-```plaintext
-
-**Why**: Git provides version control, symlinks provide instant updates.
-
----
-
-### Scenario 3: Medium Team - Multiple Projects
-
-**Situation**: 10+ developers, multiple infrastructure projects
-
-**Recommendation**: Hybrid (Module-Loader dev + Provider Packs releases)
-
-```bash
-# Development (team member)
+
+Why : Git provides version control, symlinks provide instant updates.
+
+
+Situation : 10+ developers, multiple infrastructure projects
+Recommendation : Hybrid (Module-Loader dev + Provider Packs releases)
+# Development (team member)
providers install upcloud staging-env
# Make changes...
@@ -40204,20 +38367,13 @@ git tag v0.2.0
# Other projects
# Download upcloud_prov_0.2.0.tar
# Use stable, tested version
-```plaintext
-
-**Why**: Developers iterate fast, other teams use stable versions.
-
----
-
-### Scenario 4: Enterprise - Production Infrastructure
-
-**Situation**: Critical production systems, compliance requirements
-
-**Recommendation**: Provider Packs only
-
-```bash
-# CI/CD Pipeline
+
+Why : Developers iterate fast, other teams use stable versions.
+
+
+Situation : Critical production systems, compliance requirements
+Recommendation : Provider Packs only
+# CI/CD Pipeline
pack providers # Build artifacts
# Run tests on packages
# Sign packages
@@ -40228,20 +38384,13 @@ pack providers # Build artifacts
# Verify signature
# Deploy immutable artifact
# Document exact versions for compliance
-```plaintext
-
-**Why**: Immutability, auditability, and rollback capabilities required.
-
----
-
-### Scenario 5: Open Source - Public Distribution
-
-**Situation**: Sharing providers with community
-
-**Recommendation**: Provider Packs + Registry
-
-```bash
-# Maintainer
+
+Why : Immutability, auditability, and rollback capabilities required.
+
+
+Situation : Sharing providers with community
+Recommendation : Provider Packs + Registry
+# Maintainer
pack providers
# Create release on GitHub
gh release create v1.0.0 distribution/packages/*.tar
@@ -40250,30 +38399,33 @@ gh release create v1.0.0 distribution/packages/*.tar
# Download from GitHub releases
wget https://github.com/project/releases/v1.0.0/upcloud_prov_1.0.0.tar
# Extract and use
-```plaintext
-
-**Why**: Easy distribution, versioning, and downloading for users.
-
----
-
-## Best Practices
-
-### For Development
-
-1. **Use Module-Loader by default**
- - Fast iteration is crucial during development
- - Symlinks allow immediate testing
-
-2. **Keep providers.manifest.yaml in Git**
- - Documents which providers are used
- - Team members can sync easily
-
-3. **Validate before committing**
-
- ```bash
- providers validate wuji
- kcl run defs/servers.k
+Why : Easy distribution, versioning, and downloading for users.
+
+
+
+
+
+Use Module-Loader by default
+
+Fast iteration is crucial during development
+Symlinks allow immediate testing
+
+
+
+Keep providers.manifest.yaml in Git
+
+Documents which providers are used
+Team members can sync easily
+
+
+
+Validate before committing
+providers validate wuji
+nickel eval defs/servers.ncl
+
+
+
@@ -40349,7 +38501,7 @@ git tag v0.2.0
-
+
When you’re ready to move to production:
# 1. Clean up development setup
@@ -40367,15 +38519,11 @@ tar -xf ../../../distribution/packages/upcloud_prov_1.0.0.tar vendor/
# To: upcloud_prov = { path = "./vendor/upcloud_prov", version = "1.0.0" }
# 5. Test
-kcl run defs/servers.k
-```plaintext
-
-### From Packs Back to Module-Loader
-
-When you need to debug or develop:
-
-```bash
-# 1. Remove vendored version
+nickel eval defs/servers.ncl
+
+
+When you need to debug or develop:
+# 1. Remove vendored version
rm -rf workspace/infra/wuji/vendor/upcloud_prov
# 2. Install via module-loader
@@ -40385,29 +38533,20 @@ providers install upcloud wuji
# 4. Test immediately
cd workspace/infra/wuji
-kcl run defs/servers.k
-```plaintext
-
----
-
-## Configuration
-
-### Environment Variables
-
-```bash
-# Required for pack commands
+nickel eval defs/servers.ncl
+
+
+
+
+# Required for pack commands
export PROVISIONING=/path/to/provisioning
# Alternative
export PROVISIONING_CONFIG=/path/to/provisioning
-```plaintext
-
-### Config Files
-
-Distribution settings in `provisioning/config/config.defaults.toml`:
-
-```toml
-[distribution]
+
+
+Distribution settings in provisioning/config/config.defaults.toml:
+[distribution]
pack_path = "{{paths.base}}/distribution/packages"
registry_path = "{{paths.base}}/distribution/registry"
cache_path = "{{paths.base}}/distribution/cache"
@@ -40425,18 +38564,12 @@ core_version = "0.0.1"
core_package_name = "provisioning_core"
use_module_loader = true
modules_dir = ".nickel-modules"
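The `{{paths.base}}` placeholders are resolved against the configured base path when the config is loaded; the substitution itself is plain string templating, e.g. (the resolver shown here is an assumption, not the actual implementation):

```bash
set -eu
base="/opt/provisioning"
template='{{paths.base}}/distribution/packages'

# Replace the {{paths.base}} token with the real base path.
resolved=$(printf '%s' "$template" | sed "s|{{paths.base}}|$base|")
echo "$resolved"   # prints: /opt/provisioning/distribution/packages
```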
-```plaintext
-
----
-
-## Troubleshooting
-
-### Module-Loader Issues
-
-**Problem**: Provider not found after install
-
-```bash
-# Check provider exists
+
+
+
+
+Problem : Provider not found after install
+# Check provider exists
providers list | grep upcloud
# Validate installation
@@ -40444,72 +38577,55 @@ providers validate wuji
# Check symlink
ls -la workspace/infra/wuji/.nickel-modules/
-```plaintext
-
-**Problem**: Changes not reflected
-
-```bash
-# Verify symlink is correct
+
+Problem : Changes not reflected
+# Verify symlink is correct
readlink workspace/infra/wuji/.nickel-modules/upcloud_prov
# Should point to extensions/providers/upcloud/nickel/
-```plaintext
-
-### Provider Pack Issues
-
-**Problem**: No .tar file created
-
-```bash
-# Check KCL version (need 0.11.3+)
+
+
+Problem : No .tar file created
+# Check Nickel version (need 1.15.0+)
nickel --version
# Check manifest.toml exists
ls extensions/providers/upcloud/nickel/manifest.toml
-```plaintext
-
-**Problem**: PROVISIONING environment variable not set
-
-```bash
-# Set it
+
+Problem : PROVISIONING environment variable not set
+# Set it
export PROVISIONING=/Users/Akasha/project-provisioning/provisioning
# Or add to shell profile
echo 'export PROVISIONING=/path/to/provisioning' >> ~/.zshrc
```

---

## Conclusion

**Both approaches are valuable and complementary:**

- **Module-Loader**: Development velocity, rapid iteration
- **Provider Packs**: Production stability, version control

**Default Strategy:**

- Use **Module-Loader** for day-to-day development
- Create **Provider Packs** for releases and production
- Both systems work seamlessly together

**The system is designed for flexibility** - choose the right tool for your current phase of work!

---

## Additional Resources

- [Module-Loader Implementation](../provisioning/core/nulib/lib_provisioning/kcl_module_loader.nu)
- [KCL Packaging Implementation](../provisioning/core/nulib/lib_provisioning/kcl_packaging.nu)
- [Providers CLI](.provisioning providers)
- [Pack CLI](../provisioning/core/cli/pack)
- [KCL Documentation](https://kcl-lang.io/)

---

**Document Version**: 1.0.0
**Last Updated**: 2025-09-29
**Maintained by**: JesusPerezLorenzo
info.md
manifest.toml
manifest.lock
README.md
REFERENCE.md
version.ncl
Total categorized: 32 taskservs + 6 root files = 38 items ✓
- **Async/Await**: High-performance async operations with Tokio
- **Backward Compatible**: Old single-instance configs auto-migrate to new multi-instance format

## Architecture

The extension registry uses a trait-based architecture separating source and distribution backends.

### Request Strategies

#### Aggregation Strategy (list_extensions, list_versions, search)

1. **Parallel Execution**: Spawn concurrent tasks for all source and distribution clients
2. **Merge Results**: Combine results from all backends
3. **Deduplication**: Remove duplicates, preferring more recent versions
4. **Pagination**: Apply limit/offset to merged results
5. **Caching**: Store merged results with composite cache key

#### Fallback Strategy (get_extension, download_extension)

1. **Sequential Retry**: Try source clients first (in configured order)
2. **Distribution Fallback**: If all sources fail, try distribution clients
3. **Return First Success**: Return result from first successful client
4. **Caching**: Cache successful result with backend-specific key
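The fallback strategy above amounts to a sequential scan over configured clients, sources first. The sketch below is illustrative only: the closure-based `Client` type, function name, and error strings are assumptions, not the registry's actual API.

```rust
// Illustrative fallback: try each client in configured order, sources first,
// then distributions, returning the first success. The Client type and error
// strings are assumptions for this sketch, not the registry's real signatures.
type Client<T> = Box<dyn Fn() -> Result<T, String>>;

fn fetch_with_fallback<T>(sources: &[Client<T>], distributions: &[Client<T>]) -> Result<T, String> {
    // Sequential retry: sources in configured order, then distributions
    for client in sources.iter().chain(distributions.iter()) {
        if let Ok(value) = client() {
            return Ok(value); // return first success
        }
    }
    Err("all backends failed".to_string())
}

fn main() {
    let down: Client<&str> = Box::new(|| Err("unreachable".to_string()));
    let ok: Client<&str> = Box::new(|| Ok("kubernetes_taskserv-1.2.0.tar.gz"));
    // Source fails, distribution client serves the artifact.
    assert_eq!(fetch_with_fallback(&[down], &[ok]), Ok("kubernetes_taskserv-1.2.0.tar.gz"));
    println!("fallback works");
}
```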

## Installation

```bash
cd provisioning/platform/extension-registry
cargo build --release
```

## Configuration

### Single-Instance Configuration (Legacy - Auto-Migrated)

Old format is automatically migrated to new multi-instance format:

```toml
[server]
host = "0.0.0.0"
port = 8082
auth_token_path = "/path/to/oci-token.txt"
[cache]
capacity = 1000
ttl_seconds = 300
```

### Multi-Instance Configuration (Recommended)

New format supporting multiple backends of each type:

```toml
[server]
host = "0.0.0.0"
port = 8082
workers = 4
capacity = 1000
ttl_seconds = 300
enable_metadata_cache = true
enable_list_cache = true
```

### Configuration Notes

- **Backend Identifiers**: Use `id` field to uniquely identify each backend instance (auto-generated if omitted)
- **Gitea/Forgejo Compatible**: Both use same config format; organization field is required for Git repos
- **GitHub Configuration**: Uses organization as owner; token_path points to GitHub Personal Access Token
- **OCI Registries**: Support any OCI-compliant registry (Zot, Harbor, Docker Hub, GHCR, Quay, etc.)
- **Optional Fields**: `id`, `verify_ssl`, `timeout_seconds` have sensible defaults
- **Token Files**: Should contain only the token with no extra whitespace; permissions should be `0600`

### Environment Variable Overrides

Legacy environment variable support (for backward compatibility):

```bash
REGISTRY_SERVER_HOST=127.0.0.1
REGISTRY_SERVER_PORT=8083
REGISTRY_SERVER_WORKERS=8
REGISTRY_GITEA_URL=https://gitea.example.com
REGISTRY_OCI_REGISTRY=registry.example.com
REGISTRY_OCI_NAMESPACE=extensions
REGISTRY_CACHE_CAPACITY=2000
REGISTRY_CACHE_TTL=600
```

## API Endpoints

### Extension Operations

#### List Extensions

```bash
GET /api/v1/extensions?type=provider&limit=10
```

#### Get Extension

```bash
GET /api/v1/extensions/{type}/{name}
```

#### List Versions

```bash
GET /api/v1/extensions/{type}/{name}/versions
```

#### Download Extension

```bash
GET /api/v1/extensions/{type}/{name}/{version}
```

#### Search Extensions

```bash
GET /api/v1/extensions/search?q=kubernetes&type=taskserv
```

### System Endpoints

#### Health Check

```bash
GET /api/v1/health
```

**Response** (with multi-backend aggregation):

```json
{
"status": "healthy|degraded|unhealthy",
"version": "0.1.0",
"uptime": 3600,
}
}
}
```

**Status Values**:

- `healthy`: All configured backends are healthy
- `degraded`: At least one backend is healthy, but some are failing
- `unhealthy`: No backends are responding
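The status rollup follows directly from per-backend health results. A minimal sketch of the aggregation rule (the enum and function names are illustrative, not the service's actual types):

```rust
// Aggregate status from per-backend health checks: all healthy => Healthy,
// some healthy => Degraded, none healthy => Unhealthy. Illustrative only.
#[derive(Debug, PartialEq)]
enum Status {
    Healthy,
    Degraded,
    Unhealthy,
}

fn aggregate(backends: &[bool]) -> Status {
    let healthy = backends.iter().filter(|ok| **ok).count();
    match healthy {
        0 => Status::Unhealthy,                      // no backends responding
        n if n == backends.len() => Status::Healthy, // all backends healthy
        _ => Status::Degraded,                       // at least one failing
    }
}

fn main() {
    assert_eq!(aggregate(&[true, true]), Status::Healthy);
    assert_eq!(aggregate(&[true, false]), Status::Degraded);
    assert_eq!(aggregate(&[false, false]), Status::Unhealthy);
    println!("status checks passed");
}
```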

#### Metrics

```bash
GET /api/v1/metrics
```

#### Cache Statistics

```bash
GET /api/v1/cache/stats
```

**Response**:

```json
{
"metadata_hits": 1024,
"metadata_misses": 256,
"list_hits": 512,
"version_misses": 512,
"size": 4096
}
```
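Hit rates follow directly from the hit/miss counters above. A sketch of the arithmetic (the counter values mirror the sample response; the function is illustrative):

```rust
// Cache hit rate = hits / (hits + misses); guards against a zero total.
fn hit_rate(hits: u64, misses: u64) -> f64 {
    if hits + misses == 0 {
        0.0
    } else {
        hits as f64 / (hits + misses) as f64
    }
}

fn main() {
    // metadata_hits = 1024, metadata_misses = 256 from the sample => 80%
    let rate = hit_rate(1024, 256);
    assert!((rate - 0.8).abs() < 1e-9);
    println!("metadata cache hit rate: {:.0}%", rate * 100.0);
}
```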

## Extension Naming Conventions

### Gitea Repositories

- **Providers**: `{name}_prov` (e.g., `aws_prov`)
- **Task Services**: `{name}_taskserv` (e.g., `kubernetes_taskserv`)
- **Clusters**: `{name}_cluster` (e.g., `buildkit_cluster`)

### OCI Artifacts

- **Providers**: `{namespace}/{name}-provider`
- **Task Services**: `{namespace}/{name}-taskserv`
- **Clusters**: `{namespace}/{name}-cluster`

## Deployment

### Docker

```bash
docker build -t extension-registry:latest .
docker run -d -p 8082:8082 -v $(pwd)/config.toml:/app/config.toml:ro extension-registry:latest
```

### Kubernetes

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: extension-registry
spec:
  template:
    spec:
      containers:
        - name: extension-registry
          image: extension-registry:latest
          ports:
            - containerPort: 8082
```

## Migration Guide: Single to Multi-Instance

### Automatic Migration

Old single-instance configs are automatically detected and migrated to the new multi-instance format during startup:

1. **Detection**: Registry checks if old-style fields (`gitea`, `oci`) contain values
2. **Migration**: Single instances are moved to new Vec-based format (`sources.gitea[0]`, `distributions.oci[0]`)
3. **Logging**: Migration event is logged for audit purposes
4. **Transparency**: No user action required; old configs continue to work
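The detect-and-move step above can be sketched as follows. The struct fields and function are illustrative assumptions for this sketch, not the registry's actual config types:

```rust
// Auto-migration sketch: if the legacy single-instance field holds a value,
// move it to index 0 of the Vec-based multi-instance form and log the event.
#[derive(Default)]
struct Config {
    gitea: Option<String>,      // legacy single-instance field
    sources_gitea: Vec<String>, // new multi-instance form (sources.gitea)
}

fn migrate(mut cfg: Config) -> Config {
    if let Some(legacy) = cfg.gitea.take() {
        // Detection + migration: legacy value becomes sources.gitea[0]
        cfg.sources_gitea.insert(0, legacy);
        eprintln!("migrated legacy gitea config"); // logged for audit
    }
    cfg
}

fn main() {
    let old = Config {
        gitea: Some("https://gitea.example.com".to_string()),
        ..Default::default()
    };
    let new = migrate(old);
    assert!(new.gitea.is_none());
    assert_eq!(new.sources_gitea, vec!["https://gitea.example.com".to_string()]);
    println!("migration sketch ok");
}
```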

### Before Migration

```toml
[gitea]
url = "https://gitea.example.com"
organization = "extensions"
token_path = "/path/to/token"
[oci]
registry = "registry.example.com"
namespace = "extensions"
```

### After Migration (Automatic)

```toml
[[sources.gitea]]
url = "https://gitea.example.com"
organization = "extensions"
token_path = "/path/to/token"
[[distributions.oci]]
registry = "registry.example.com"
namespace = "extensions"
```

### Gradual Upgrade Path

To adopt the new format manually:

1. **Backup current config** - Keep old format as reference
2. **Adopt new format** - Replace old fields with new structure
3. **Test** - Verify all backends are reachable and extensions are discovered
4. **Add new backends** - Use new format to add Forgejo, GitHub, or additional OCI registries
5. **Remove old fields** - Delete deprecated `gitea` and `oci` top-level sections

### Benefits of Upgrading

- **Multiple Sources**: Support Gitea, Forgejo, and GitHub simultaneously
- **Multiple Registries**: Distribute to multiple OCI registries
- **Better Resilience**: If one backend fails, others continue to work
- **Flexible Configuration**: Each backend can have different credentials and timeouts
- **Future-Proof**: New backends can be added without config restructuring

## Related Documentation

- **Extension Development**: [Module System](../development/extensions.md)
- **Extension Development Quickstart**: [Getting Started Guide](../guides/extension-development-quickstart.md)
- **ADR-005**: [Extension Framework Architecture](../architecture/adr/adr-005-extension-framework.md)
- **OCI Registry Integration**: [OCI Registry Guide](../integration/oci-registry-guide.md)

---
A Rust-native Model Context Protocol (MCP) server for infrastructure automation and AI-assisted DevOps operations.
**Source**: `provisioning/platform/mcp-server/`
**Status**: Proof of Concept Complete

Replaces the Python implementation with significant performance improvements while maintaining philosophical consistency with the Rust ecosystem approach.
🚀 Rust MCP Server Performance Analysis
• Configuration access: Microsecond latency
• Memory efficient: Small struct footprint
• Zero-copy string operations where possible

## Architecture

```plaintext
src/
├── simple_main.rs # Lightweight MCP server entry point
├── main.rs # Full MCP server (with SDK integration)
├── lib.rs # Library interface
├── tools.rs # AI-powered parsing tools
├── errors.rs # Error handling
└── performance_test.rs # Performance benchmarking
```

## Key Features

1. **AI-Powered Server Parsing**: Natural language to infrastructure config
2. **Multi-Provider Support**: AWS, UpCloud, Local
3. **Configuration Management**: TOML-based with environment overrides
4. **Error Handling**: Comprehensive error types with recovery hints
5. **Performance Monitoring**: Built-in benchmarking capabilities

## Rust vs Python Comparison

| Metric | Python MCP Server | Rust MCP Server | Improvement |
|--------|------------------|-----------------|-------------|
| **Startup Time** | ~500ms | ~50ms | **10x faster** |
| **Memory Usage** | ~50MB | ~5MB | **10x less** |
| **Parsing Latency** | ~1ms | ~0.001ms | **1000x faster** |
| **Binary Size** | Python + deps | ~15MB static | **Portable** |
| **Type Safety** | Runtime errors | Compile-time | **Zero runtime errors** |

## Usage

```bash
# Build and run
cargo run --bin provisioning-mcp-server --release
# Run tests
cargo test
# Run benchmarks
cargo run --bin provisioning-mcp-server --release
```

## Configuration

Set via environment variables:

```bash
export PROVISIONING_PATH=/path/to/provisioning
export PROVISIONING_AI_PROVIDER=openai
export OPENAI_API_KEY=your-key
export PROVISIONING_DEBUG=true
```

## Integration Benefits

1. **Philosophical Consistency**: Rust throughout the stack
2. **Performance**: Sub-millisecond response times
3. **Memory Safety**: No segfaults, no memory leaks
4. **Concurrency**: Native async/await support
5. **Distribution**: Single static binary
6. **Cross-compilation**: ARM64/x86_64 support

## Next Steps

1. Full MCP SDK integration (schema definitions)
2. WebSocket/TCP transport layer
3. Plugin system for extensibility
4. Metrics collection and monitoring
5. Documentation and examples

## Related Documentation

- **Architecture**: [MCP Integration](../architecture/orchestrator-integration-model.md)

---
**Version**: 2.0.0
**Last Updated**: 2026-01-05
**Target Audience**: DevOps Engineers, Infrastructure Administrators
**Services Covered**: 8 platform services (orchestrator, control-center, mcp-server, vault-service, extension-registry, rag, ai-service, provisioning-daemon)
Interactive configuration for cloud-native infrastructure platform services using TypeDialog forms and Nickel.
TypeDialog is an interactive form system that generates Nickel configurations for platform services. Instead of manually editing TOML or KCL files, you answer questions in an interactive form, and TypeDialog generates validated Nickel configuration.
**Benefits**:
@@ -41115,7 +39152,7 @@ export PROVISIONING_DEBUG=true
✅ Type-safe configuration (Nickel contracts)
✅ Generated configurations ready for deployment
```bash
# Launch interactive form for orchestrator
provisioning config platform orchestrator
```
}
}
All platform services support four deployment modes, each with different resource allocation and feature sets:
| Mode | Resources | Use Case | Storage | TLS |
|------|-----------|----------|---------|-----|
| solo | Minimal (2 workers) | Development, testing | Embedded/filesystem | No |
```bash
# Or edit via TypeDialog with existing values
typedialog form .typedialog/provisioning/platform/orchestrator/form.toml
```
**Problem**: Failed to parse config file

**Solution**: Check form.toml syntax and verify required fields are present (name, description, locales_path, templates_path)
- Check service path: `provisioning start orchestrator --check`
- Restart service: `provisioning restart orchestrator`
{
workspace = {
}
}
Start with TypeDialog forms for the best experience:
```bash
provisioning config platform orchestrator
```
**Better**: `api_password = "{{kms.decrypt('upcloud_key')}}"`
Add comments explaining custom settings in the Nickel file.
- **Configuration System**: See CLAUDE.md#configuration-file-format-selection
- **Deployment Modes (Presets)**: `provisioning/schemas/platform/defaults/deployment/`
- **Rust Integration**: `provisioning/platform/crates/*/src/config.rs`
Get detailed error messages and check available fields:
```bash
nickel typecheck workspace_librecloud/config/config.ncl 2>&1 | less
grep "prompt =" .typedialog/provisioning/platform/orchestrator/form.toml
```
```bash
# Check generated files
ls -la workspace_librecloud/config/generated/
```
---

This document provides a comprehensive comparison of supported cloud providers: Hetzner, UpCloud, AWS, and DigitalOcean. Use this matrix to make informed decisions about which provider is best suited for your workloads.

## Feature Comparison

### Compute

| Feature | Hetzner | UpCloud | AWS | DigitalOcean |
|---------|---------|---------|-----|--------------|
| Product Name | Cloud Servers | Servers | EC2 | Droplets |
| Instance Sizing | Standard, dedicated cores | 2-32 vCPUs | Extensive (t2, t3, m5, c5, etc) | 1-48 vCPUs |
| Custom CPU/RAM | ✓ | ✓ | Limited | ✗ |
| Hourly Billing | ✓ | ✓ | ✓ | ✓ |
| Monthly Discount | 30% | 25% | ~30% (RI) | ~25% |
| GPU Instances | ✓ | ✗ | ✓ | ✗ |
| Auto-scaling | Via API | Via API | Native (ASG) | Via API |
| Bare Metal | ✓ | ✗ | ✓ (EC2) | ✗ |

### Block Storage

| Feature | Hetzner | UpCloud | AWS | DigitalOcean |
|---------|---------|---------|-----|--------------|
| Product Name | Volumes | Storage | EBS | Volumes |
| SSD Volumes | ✓ | ✓ | ✓ (gp3, io1) | ✓ |
| HDD Volumes | ✗ | ✓ | ✓ (st1, sc1) | ✗ |
| Max Volume Size | 10 TB | Unlimited | 16 TB | 100 TB |
| IOPS Provisioning | Limited | ✓ | ✓ | ✗ |
| Snapshots | ✓ | ✓ | ✓ | ✓ |
| Encryption | ✓ | ✓ | ✓ | ✓ |
| Backup Service | ✗ | ✗ | ✓ (AWS Backup) | ✓ |

### Object Storage

| Feature | Hetzner | UpCloud | AWS | DigitalOcean |
|---------|---------|---------|-----|--------------|
| Product Name | Object Storage | — | S3 | Spaces |
| API Compatibility | S3-compatible | — | S3 (native) | S3-compatible |
| Pricing (per GB) | €0.025 | N/A | $0.023 | $0.015 |
| Regions | 2 | N/A | 30+ | 4 |
| Versioning | ✓ | N/A | ✓ | ✓ |
| Lifecycle Rules | ✓ | N/A | ✓ | ✓ |
| CDN Integration | ✗ | N/A | ✓ (CloudFront) | ✓ (CDN add-on) |
| Access Control | Bucket policies | N/A | IAM + bucket policies | Token-based |

### Load Balancers

| Feature | Hetzner | UpCloud | AWS | DigitalOcean |
|---------|---------|---------|-----|--------------|
| Product Name | Load Balancer | Load Balancer | ELB/ALB/NLB | Load Balancer |
| Type | Layer 4/7 | Layer 4 | Layer 4/7 | Layer 4/7 |
| Health Checks | ✓ | ✓ | ✓ | ✓ |
| SSL/TLS Termination | ✓ | Limited | ✓ | ✓ |
| Path-based Routing | ✓ | ✗ | ✓ (ALB) | ✗ |
| Host-based Routing | ✓ | ✗ | ✓ (ALB) | ✗ |
| Sticky Sessions | ✓ | ✓ | ✓ | ✓ |
| Geographic Distribution | ✗ | ✗ | ✓ (multi-region) | ✗ |
| DDoS Protection | Basic | ✓ | ✓ (Shield) | ✓ |

### Managed Databases

| Feature | Hetzner | UpCloud | AWS | DigitalOcean |
|---------|---------|---------|-----|--------------|
| PostgreSQL | ✗ | ✗ | ✓ (RDS) | ✓ |
| MySQL | ✗ | ✗ | ✓ (RDS) | ✓ |
| Redis | ✗ | ✗ | ✓ (ElastiCache) | ✓ |
| MongoDB | ✗ | ✗ | ✓ (DocumentDB) | ✗ |
| Multi-AZ | N/A | N/A | ✓ | ✓ |
| Automatic Backups | N/A | N/A | ✓ | ✓ |
| Read Replicas | N/A | N/A | ✓ | ✓ |
| Param Groups | N/A | N/A | ✓ | ✗ |

### Managed Kubernetes

| Feature | Hetzner | UpCloud | AWS | DigitalOcean |
|---------|---------|---------|-----|--------------|
| Service | Manual K8s | Manual K8s | EKS | DOKS |
| Managed Service | ✗ | ✗ | ✓ | ✓ |
| Control Plane Managed | ✗ | ✗ | ✓ | ✓ |
| Node Management | ✗ | ✗ | ✓ (node groups) | ✓ (node pools) |
| Multi-AZ | ✗ | ✗ | ✓ | ✓ |
| Ingress Support | Via add-on | Via add-on | ✓ (ALB) | ✓ |
| Storage Classes | Via add-on | Via add-on | ✓ (EBS) | ✓ |

### CDN

| Feature | Hetzner | UpCloud | AWS | DigitalOcean |
|---------|---------|---------|-----|--------------|
| CDN Service | ✗ | ✗ | ✓ (CloudFront) | ✓ |
| Edge Locations | — | — | 600+ | 12+ |
| Geographic Routing | — | — | ✓ | ✗ |
| Cache Invalidation | — | — | ✓ | ✓ |
| Origins | — | — | Any HTTP/S | Object Storage |
| SSL/TLS | — | — | ✓ | ✓ |
| DDoS Protection | — | — | ✓ (Shield) | ✓ |

### DNS

| Feature | Hetzner | UpCloud | AWS | DigitalOcean |
|---------|---------|---------|-----|--------------|
| DNS Service | ✓ (Basic) | ✗ | ✓ (Route53) | ✓ |
| Zones | ✓ | N/A | ✓ | ✓ |
| Failover | Manual | N/A | ✓ (health checks) | ✓ (health checks) |
| Geolocation | ✗ | N/A | ✓ | ✗ |
| DNSSEC | ✓ | N/A | ✓ | ✗ |
| API Management | Limited | N/A | Full | Full |

## Pricing Comparison

### Compute Pricing

Comparison for 1-year term where applicable:

| Configuration | Hetzner | UpCloud | AWS* | DigitalOcean |
|---------------|---------|---------|------|--------------|
| 1 vCPU, 1 GB RAM | €3.29 | $5 | $18 (t3.micro) | $6 |
| 2 vCPU, 4 GB RAM | €6.90 | $15 | $36 (t3.small) | $24 |
| 4 vCPU, 8 GB RAM | €13.80 | $30 | $73 (t3.medium) | $48 |
| 8 vCPU, 16 GB RAM | €27.60 | $60 | $146 (t3.large) | $96 |
| 16 vCPU, 32 GB RAM | €55.20 | $120 | $291 (t3.xlarge) | $192 |

*AWS pricing: on-demand; reserved instances 25-30% discount

### Storage Pricing

Per GB for block storage:

| Provider | Price/GB | Monthly Cost (100 GB) |
|----------|----------|-----------------------|
| Hetzner | €0.026 | €2.60 |
| UpCloud | $0.025 | $2.50 |
| AWS EBS | $0.10 | $10.00 |
| DigitalOcean | $0.10 | $10.00 |

### Data Transfer Pricing

Outbound data transfer (per GB):

| Provider | First 1 TB | Beyond 1 TB |
|----------|------------|-------------|
| Hetzner | Included | €0.12/GB |
| UpCloud | $0.02/GB | $0.01/GB |
| AWS | $0.09/GB | $0.085/GB |
| DigitalOcean | $0.01/GB | $0.01/GB |
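The tiered transfer pricing above can be turned into a small cost function. The rates used here are the table's examples (e.g. Hetzner: first TB included, €0.12/GB beyond), not live quotes:

```rust
// Tiered outbound-transfer cost: one rate for the first TB, another beyond.
// Rates come from the example table above, not a live pricing API.
fn transfer_cost(gb: f64, first_tb_rate: f64, beyond_rate: f64) -> f64 {
    const TB: f64 = 1024.0;
    if gb <= TB {
        gb * first_tb_rate
    } else {
        TB * first_tb_rate + (gb - TB) * beyond_rate
    }
}

fn main() {
    // Hetzner: first TB included (rate 0), €0.12/GB beyond
    let hetzner_2tb = transfer_cost(2048.0, 0.0, 0.12);
    assert!((hetzner_2tb - 122.88).abs() < 1e-9);
    // DigitalOcean: flat $0.01/GB at both tiers
    let do_2tb = transfer_cost(2048.0, 0.01, 0.01);
    assert!((do_2tb - 20.48).abs() < 1e-9);
    println!("Hetzner 2 TB: €{hetzner_2tb:.2}, DigitalOcean 2 TB: ${do_2tb:.2}");
}
```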

### Example Monthly Costs

Medium workload:

| Provider | Compute | Storage | Data Transfer | Monthly |
|----------|---------|---------|---------------|---------|
| Hetzner | €13.80 | €2.60 | Included | €16.40 |
| UpCloud | $30 | $2.50 | $20 | $52.50 |
| AWS | $72 | $10 | $45 | $127 |
| DigitalOcean | $48 | $10 | Included | $58 |

Larger workload with heavy transfer:

| Provider | Compute | Storage | Data Transfer | Monthly |
|----------|---------|---------|---------------|---------|
| Hetzner | €69 | €13 | €1,200 | €1,282 |
| UpCloud | $150 | $12.50 | $200 | $362.50 |
| AWS | $360 | $50 | $900 | $1,310 |
| DigitalOcean | $240 | $50 | Included | $290 |

## Regions

### Hetzner

| Region | Location | Data Centers | Highlights |
|--------|----------|--------------|------------|
| nbg1 | Nuremberg, Germany | 3 | EU hub, good performance |
| fsn1 | Falkenstein, Germany | 1 | Lower latency, German regulations |
| hel1 | Helsinki, Finland | 1 | Nordic region option |
| ash | Ashburn, USA | 1 | North American presence |

### UpCloud

| Region | Location | Highlights |
|--------|----------|------------|
| fi-hel1 | Helsinki, Finland | Primary EU location |
| de-fra1 | Frankfurt, Germany | EU alternative |
| gb-lon1 | London, UK | European coverage |
| us-nyc1 | New York, USA | North America |
| sg-sin1 | Singapore | Asia Pacific |
| jp-tok1 | Tokyo, Japan | APAC alternative |

### AWS

| Region | Location | Availability Zones | Highlights |
|--------|----------|--------------------|------------|
| us-east-1 | N. Virginia, USA | 6 | Largest, most services |
| eu-west-1 | Ireland | 3 | EU primary, GDPR compliant |
| eu-central-1 | Frankfurt, Germany | 3 | German data residency |
| ap-southeast-1 | Singapore | 3 | APAC primary |
| ap-northeast-1 | Tokyo, Japan | 4 | Asia alternative |

### DigitalOcean

| Region | Location | Highlights |
|--------|----------|------------|
| nyc3 | New York, USA | Primary US location |
| sfo3 | San Francisco, USA | US West Coast |
| lon1 | London, UK | European hub |
| fra1 | Frankfurt, Germany | German regulations |
| sgp1 | Singapore | APAC coverage |
| blr1 | Bangalore, India | India region |

### Regional Coverage Summary

- **Best Global Coverage**: AWS (30+ regions, most services)
- **Best EU Coverage**: All providers have good EU options
- **Best APAC Coverage**: AWS (most regions), DigitalOcean (Singapore)
- **Best North America**: All providers have coverage
- **Emerging Markets**: DigitalOcean (India via Bangalore)

## Compliance

### General Certifications

| Standard | Hetzner | UpCloud | AWS | DigitalOcean |
|----------|---------|---------|-----|--------------|
| GDPR | ✓ | ✓ | ✓ | ✓ |
| CCPA | ✓ | ✓ | ✓ | ✓ |
| SOC 2 Type II | ✓ | ✓ | ✓ | ✓ |
| ISO 27001 | ✓ | ✓ | ✓ | ✓ |
| ISO 9001 | ✗ | ✗ | ✓ | ✓ |
| FedRAMP | ✗ | ✗ | ✓ | ✗ |

### Industry-Specific Certifications

| Standard | Hetzner | UpCloud | AWS | DigitalOcean |
|----------|---------|---------|-----|--------------|
| HIPAA | ✗ | ✗ | ✓ | ✓** |
| PCI-DSS | ✓ | ✓ | ✓ | ✓ |
| HITRUST | ✗ | ✗ | ✓ | ✗ |
| FIPS 140-2 | ✗ | ✗ | ✓ | ✗ |
| SOX (Sarbanes-Oxley) | Limited | Limited | ✓ | Limited |

**DigitalOcean: Requires BAA for HIPAA compliance

### Data Residency

| Region | Hetzner | UpCloud | AWS | DigitalOcean |
|--------|---------|---------|-----|--------------|
| EU (GDPR) | ✓ DE, FI | ✓ FI, DE, GB | ✓ (multiple) | ✓ (multiple) |
| Germany (NIS2) | ✓ | ✓ | ✓ | ✓ |
| UK (Post-Brexit) | ✗ | ✓ GB | ✓ | ✓ |
| USA (CCPA) | ✗ | ✓ | ✓ | ✓ |
| Canada | ✗ | ✗ | ✓ | ✗ |
| Australia | ✗ | ✗ | ✓ | ✗ |
| India | ✗ | ✗ | ✓ | ✓ |

## Recommended Provider Combinations

### Small Teams and Side Projects

**Recommended**: Hetzner primary + DigitalOcean backup

**Rationale**:

- Hetzner has best price/performance ratio
- DigitalOcean for geographic diversification
- Both have simple interfaces and good documentation
- Monthly cost: $30-80 for basic HA setup

**Example Setup**:

- Primary: Hetzner cx31 (2 vCPU, 4 GB)
- Backup: DigitalOcean $24/month droplet
- Database: Self-managed PostgreSQL or Hetzner volume
- Total: ~$35/month

### Enterprise and Compliance-Sensitive Workloads

**Recommended**: AWS primary + UpCloud backup

**Rationale**:

- AWS for managed services and compliance
- UpCloud for cost-effective disaster recovery
- AWS compliance certifications (HIPAA, FIPS, SOC2)
- Multiple regions within AWS
- Mature enterprise support

**Example Setup**:

- Primary: AWS RDS (managed DB)
- Secondary: UpCloud for compute burst
- Compliance: Full audit trail and encryption

### Compute-Intensive Workloads

**Recommended**: Hetzner + AWS spot instances

**Rationale**:

- Hetzner for sustained compute (good price)
- AWS spot for burst workloads (70-90% discount)
- Hetzner bare metal for specialized workloads
- Cost-effective scaling

### Global Multi-Region Deployments

**Recommended**: AWS + DigitalOcean + Hetzner

**Rationale**:

- AWS for primary regions and managed services
- DigitalOcean for edge locations and simpler regions
- Hetzner for EU cost optimization
- Geographic redundancy across 3 providers

**Example Setup**:

- US: AWS (primary region)
- EU: Hetzner (cost-optimized)
- APAC: DigitalOcean (Singapore)
- Global: CloudFront CDN

### Managed Databases and Storage

**Recommended**: AWS RDS/ElastiCache + DigitalOcean Spaces

**Rationale**:

- AWS managed databases are feature-rich
- DigitalOcean managed DB for simpler needs
- Both support replicas and backups
- Cost: $60-200/month for medium database

### Startups and Fast-Moving Teams

**Recommended**: DigitalOcean + AWS

**Rationale**:

- DigitalOcean for simplicity and speed
- Droplets easy to manage and scale
- AWS for advanced features and multi-region
- Good community and documentation

## Category Winners

### Performance

| Category | Winner | Notes |
|----------|--------|-------|
| CPU Performance | Hetzner | Dedicated cores, good specs per price |
| Network Bandwidth | AWS | 1Gbps+ guaranteed in multiple regions |
| Storage IOPS | AWS | gp3 with 16K IOPS provisioning |
| Latency (Global) | AWS | Most regions, best infrastructure |

### Cost

| Category | Winner | Notes |
|----------|--------|-------|
| Compute | Hetzner | 50% cheaper than AWS on-demand |
| Managed Services | AWS | Only provider with full managed stack |
| Data Transfer | DigitalOcean | Included with many services |
| Storage | Hetzner | Object Storage €0.025/GB vs AWS S3 $0.023/GB |

### Developer Experience

| Category | Winner | Notes |
|----------|--------|-------|
| UI/Dashboard | DigitalOcean | Simple, intuitive, clear pricing |
| CLI Tools | AWS | Comprehensive aws-cli (but steep) |
| API Documentation | DigitalOcean | Clear examples, community-driven |
| Getting Started | DigitalOcean | Fastest path to first deployment |

### Enterprise Readiness

| Category | Winner | Notes |
|----------|--------|-------|
| Managed Services | AWS | RDS, ElastiCache, SQS, SNS, etc |
| Compliance | AWS | Most certifications (HIPAA, FIPS, etc) |
| Support | AWS | 24/7 support with paid plans |
| Scale | AWS | Best for 1000+ servers |

## Quick Decision Matrix

Use this matrix to quickly select a provider:

| If you need: | Then use: |
|--------------|-----------|
| Lowest cost compute | Hetzner |
| Simplest interface | DigitalOcean |
| Managed databases | AWS or DigitalOcean |
| Global multi-region | AWS |
| Compliance (HIPAA/FIPS) | AWS |
| European data residency | Hetzner or DigitalOcean |
| High performance compute | Hetzner or AWS (bare metal) |
| Disaster recovery setup | UpCloud or Hetzner |
| Quick startup | DigitalOcean |
| Enterprise SLA | AWS or UpCloud |

## Summary

- **Hetzner**: Best for cost-conscious teams, European focus, good performance
- **UpCloud**: Mid-market option, Nordic/EU focus, reliable alternative
- **AWS**: Enterprise standard, global coverage, most services, highest cost
- **DigitalOcean**: Developer-friendly, simplicity-focused, good value

For most organizations, a multi-provider strategy combining Hetzner (compute), AWS (managed services), and DigitalOcean (edge) provides the best balance of cost, capability, and resilience.
**Version**: 1.0.0
**Last Updated**: 2026-01-05
Troubleshooting
- **Rust**: 1.70+ (for building services)
Network Local Local/Cloud Cloud HA Cloud
```bash
# Ensure base directories exist
mkdir -p provisioning/schemas/platform
mkdir -p provisioning/platform/logs
mkdir -p provisioning/.typedialog/platform
mkdir -p provisioning/config/runtime
```
| Requirement | Recommended Mode |
|-------------|------------------|
| Development & testing | solo |
**Use Case**: Development, testing, demonstration

**Characteristics**:
**Startup Time**: ~3-8 minutes (database dependent)
**Data Persistence**: SurrealDB (shared)
**Use Case**: CI/CD pipelines, ephemeral environments

**Characteristics**:
**Startup Time**: ~1-2 minutes
**Data Persistence**: None (ephemeral)
**Use Case**: Production, high availability, compliance

**Characteristics**:
```bash
git clone https://github.com/your-org/project-provisioning.git
cd project-provisioning
```
**Perfect for**: Team environments, shared infrastructure
- **SurrealDB**: Running and accessible at http://surrealdb:8000
- **Network Access**: All machines can reach SurrealDB
- Don't persist data between runs
- Use in-memory storage
- Have RAG disabled
- Optimize for startup speed
- Suitable for containerized deployments
**Perfect for**: Production, high availability, compliance
- **3+ Machines**: Minimum 3 for HA
- **Etcd Cluster**: For distributed consensus
```bash
find "$BACKUP_DIR" -mtime +30 -delete
```
# Start one service
```bash
pkill -SIGTERM vault-service
sleep 2
cargo run --release -p vault-service &
```
```bash
# Check running processes
pgrep -a "cargo run --release"
```
summary: "Disk space below 20%"
**Problem**: `error: failed to bind to port 8200`

**Solutions**:
```bash
env | grep -E "VAULT_|SURREALDB_|ETCD_"
free -h && df -h && top -bn1 | head -10
```
```bash
# 1. Edit the schema definition
vim provisioning/schemas/platform/schemas/vault-service.ncl
```
@@ -43041,7 +41429,7 @@ Runbooks created for common operations
Disaster recovery plan tested
-
+
GitHub Issues : Report bugs at github.com/your-org/provisioning/issues
@@ -43050,7 +41438,7 @@ Disaster recovery plan tested
-Platform Team : platform@your-org.com
+Platform Team : platform@your-org.com
On-Call : Check PagerDuty for active rotation
Escalation : Contact infrastructure leadership
@@ -43088,9 +41476,9 @@ journalctl -fu provisioning-vault
Troubleshooting
-
+
The Service Management System provides comprehensive lifecycle management for all platform services (orchestrator, control-center, CoreDNS, Gitea, OCI registry, MCP server, API gateway).
-
+
Unified Service Management : Single interface for all services
Automatic Dependency Resolution : Start services in correct order
@@ -43112,7 +41500,7 @@ journalctl -fu provisioning-vault
-
+
┌─────────────────────────────────────────┐
│ Service Management CLI │
│ (platform/services commands) │
@@ -43139,52 +41527,44 @@ journalctl -fu provisioning-vault
│ Pre-flight │
│ (Validation) │
└────────────────┘
-```plaintext
-
-### Component Responsibilities
-
-**Manager** (`manager.nu`)
-
-- Service registry loading
-- Service status tracking
-- State persistence
-
-**Lifecycle** (`lifecycle.nu`)
-
-- Service start/stop operations
-- Deployment mode handling
-- Process management
-
-**Health** (`health.nu`)
-
-- Health check execution
-- HTTP/TCP/Command/File checks
-- Continuous monitoring
-
-**Dependencies** (`dependencies.nu`)
-
-- Dependency graph analysis
-- Topological sorting
-- Startup order calculation
-
-**Pre-flight** (`preflight.nu`)
-
-- Prerequisite validation
-- Conflict detection
-- Auto-start orchestration
-
----
-
-## Service Registry
-
-### Configuration File
-
-**Location**: `provisioning/config/services.toml`
-
-### Service Definition Structure
-
-```toml
-[services.<service-name>]
+
+
+Manager (manager.nu)
+
+Service registry loading
+Service status tracking
+State persistence
+
+Lifecycle (lifecycle.nu)
+
+Service start/stop operations
+Deployment mode handling
+Process management
+
+Health (health.nu)
+
+Health check execution
+HTTP/TCP/Command/File checks
+Continuous monitoring
+
+Dependencies (dependencies.nu)
+
+Dependency graph analysis
+Topological sorting
+Startup order calculation
+
+Pre-flight (preflight.nu)
+
+Prerequisite validation
+Conflict detection
+Auto-start orchestration
+
+
+
+
+Location : provisioning/config/services.toml
+
+[services.<service-name>]
name = "<service-name>"
type = "platform" | "infrastructure" | "utility"
category = "orchestration" | "auth" | "dns" | "git" | "registry" | "api" | "ui"
@@ -43220,12 +41600,9 @@ start_timeout = 30
start_order = 10
restart_on_failure = true
max_restarts = 3
-```plaintext
-
-### Example: Orchestrator Service
-
-```toml
-[services.orchestrator]
+
+
+[services.orchestrator]
name = "orchestrator"
type = "platform"
category = "orchestration"
@@ -43250,20 +41627,13 @@ expected_status = 200
auto_start = true
start_timeout = 30
start_order = 10
-```plaintext
-
----
-
-## Platform Commands
-
-Platform commands manage all services as a cohesive system.
-
-### Start Platform
-
-Start all auto-start services or specific services:
-
-```bash
-# Start all auto-start services
+
+
+
+Platform commands manage all services as a cohesive system.
+
+Start all auto-start services or specific services:
+# Start all auto-start services
provisioning platform start
# Start specific services (with dependencies)
@@ -43271,22 +41641,18 @@ provisioning platform start orchestrator control-center
# Force restart if already running
provisioning platform start --force orchestrator
-```plaintext
-
-**Behavior**:
-
-1. Resolves dependencies
-2. Calculates startup order (topological sort)
-3. Starts services in correct order
-4. Waits for health checks
-5. Reports success/failure
-
-### Stop Platform
-
-Stop all running services or specific services:
-
-```bash
-# Stop all running services
+
+Behavior :
+
+Resolves dependencies
+Calculates startup order (topological sort)
+Starts services in correct order
+Waits for health checks
+Reports success/failure
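
The "waits for health checks" step above is bounded by each service's `start_timeout`. A minimal sketch of that wait loop, assuming a generic readiness probe (`true` stands in for a real probe command; the function name is illustrative, not the platform's actual code):

```bash
# Poll a readiness probe once per second until it succeeds or the
# timeout (in seconds) elapses -- a sketch of start_timeout semantics.
wait_until_healthy() {
  probe=$1; timeout=$2
  elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    if eval "$probe" >/dev/null 2>&1; then
      echo ready        # service answered its health probe in time
      return 0
    fi
    sleep 1
    elapsed=$((elapsed + 1))
  done
  echo timeout          # start_timeout exceeded; startup reported as failed
  return 1
}
wait_until_healthy true 5   # → ready
```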
+
+
+Stop all running services or specific services:
+# Stop all running services
provisioning platform stop
# Stop specific services
@@ -43294,39 +41660,28 @@ provisioning platform stop orchestrator control-center
# Force stop (kill -9)
provisioning platform stop --force orchestrator
-```plaintext
-
-**Behavior**:
-
-1. Checks for dependent services
-2. Stops in reverse dependency order
-3. Updates service state
-4. Cleans up PID files
-
-### Restart Platform
-
-Restart running services:
-
-```bash
-# Restart all running services
+
+Behavior :
+
+Checks for dependent services
+Stops in reverse dependency order
+Updates service state
+Cleans up PID files
+
+
+Restart running services:
+# Restart all running services
provisioning platform restart
# Restart specific services
provisioning platform restart orchestrator
-```plaintext
-
-### Platform Status
-
-Show status of all services:
-
-```bash
-provisioning platform status
-```plaintext
-
-**Output**:
-
-```plaintext
-Platform Services Status
+
+
+Show status of all services:
+provisioning platform status
+
+Output :
+Platform Services Status
Running: 3/7
@@ -43348,20 +41703,13 @@ Running: 3/7
=== API ===
🟢 mcp-server - running (uptime: 3540s) ✅
⚪ api-gateway - stopped ❓
-```plaintext
-
-### Platform Health
-
-Check health of all running services:
-
-```bash
-provisioning platform health
-```plaintext
-
-**Output**:
-
-```plaintext
-Platform Health Check
+
+
+Check health of all running services:
+provisioning platform health
+
+Output :
+Platform Health Check
✅ orchestrator: Healthy - HTTP health check passed
✅ control-center: Healthy - HTTP status 200 matches expected
@@ -43369,14 +41717,10 @@ Platform Health Check
✅ mcp-server: Healthy - HTTP health check passed
Summary: 3 healthy, 0 unhealthy, 4 not running
-```plaintext
-
-### Platform Logs
-
-View service logs:
-
-```bash
-# View last 50 lines
+
+
+View service logs:
+# View last 50 lines
provisioning platform logs orchestrator
# View last 100 lines
@@ -43384,18 +41728,12 @@ provisioning platform logs orchestrator --lines 100
# Follow logs in real-time
provisioning platform logs orchestrator --follow
-```plaintext
-
----
-
-## Service Commands
-
-Individual service management commands.
-
-### List Services
-
-```bash
-# List all services
+
+
+
+Individual service management commands.
+
+# List all services
provisioning services list
# List only running services
@@ -43403,29 +41741,19 @@ provisioning services list --running
# Filter by category
provisioning services list --category orchestration
-```plaintext
-
-**Output**:
-
-```plaintext
-name type category status deployment_mode auto_start
+
+Output :
+name type category status deployment_mode auto_start
orchestrator platform orchestration running binary true
control-center platform ui stopped binary false
coredns infrastructure dns stopped docker false
-```plaintext
-
-### Service Status
-
-Get detailed status of a service:
-
-```bash
-provisioning services status orchestrator
-```plaintext
-
-**Output**:
-
-```plaintext
-Service: orchestrator
+
+
+Get detailed status of a service:
+provisioning services status orchestrator
+
+Output :
+Service: orchestrator
Type: platform
Category: orchestration
Status: running
@@ -43435,64 +41763,45 @@ Auto-start: true
PID: 12345
Uptime: 3600s
Dependencies: []
-```plaintext
-
-### Start Service
-
-```bash
-# Start service (with pre-flight checks)
+
+
+# Start service (with pre-flight checks)
provisioning services start orchestrator
# Force start (skip checks)
provisioning services start orchestrator --force
-```plaintext
-
-**Pre-flight Checks**:
-
-1. Validate prerequisites (binary exists, Docker running, etc.)
-2. Check for conflicts
-3. Verify dependencies are running
-4. Auto-start dependencies if needed
-
-### Stop Service
-
-```bash
-# Stop service (with dependency check)
+
+Pre-flight Checks :
+
+Validate prerequisites (binary exists, Docker running, etc.)
+Check for conflicts
+Verify dependencies are running
+Auto-start dependencies if needed
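
The first check above (a required binary exists) can be sketched as a small shell helper. This is illustrative only: the function name is invented, and `sh` stands in for the real service binary configured via `binary_path`.

```bash
# Sketch of a pre-flight prerequisite check: confirm the required
# binary is on PATH before attempting a start.
preflight_binary() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "ok: $1 found"
  else
    echo "fail: $1 missing"
    return 1
  fi
}
preflight_binary sh                        # → ok: sh found
preflight_binary no-such-binary-xyz || true  # → fail: no-such-binary-xyz missing
```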
+
+
+# Stop service (with dependency check)
provisioning services stop orchestrator
# Force stop (ignore dependents)
provisioning services stop orchestrator --force
-```plaintext
-
-### Restart Service
-
-```bash
-provisioning services restart orchestrator
-```plaintext
-
-### Service Health
-
-Check service health:
-
-```bash
-provisioning services health orchestrator
-```plaintext
-
-**Output**:
-
-```plaintext
-Service: orchestrator
+
+
+provisioning services restart orchestrator
+
+
+Check service health:
+provisioning services health orchestrator
+
+Output :
+Service: orchestrator
Status: healthy
Healthy: true
Message: HTTP health check passed
Check type: http
-Check duration: 15ms
-```plaintext
-
-### Service Logs
-
-```bash
-# View logs
+Check duration: 15 ms
+
+
+# View logs
provisioning services logs orchestrator
# Follow logs
@@ -43500,68 +41809,43 @@ provisioning services logs orchestrator --follow
# Custom line count
provisioning services logs orchestrator --lines 200
-```plaintext
-
-### Check Required Services
-
-Check which services are required for an operation:
-
-```bash
-provisioning services check server
-```plaintext
-
-**Output**:
-
-```plaintext
-Operation: server
+
+
+Check which services are required for an operation:
+provisioning services check server
+
+Output :
+Operation: server
Required services: orchestrator
All running: true
-```plaintext
-
-### Service Dependencies
-
-View dependency graph:
-
-```bash
-# View all dependencies
+
+
+View dependency graph:
+# View all dependencies
provisioning services dependencies
# View specific service dependencies
provisioning services dependencies control-center
-```plaintext
-
-### Validate Services
-
-Validate all service configurations:
-
-```bash
-provisioning services validate
-```plaintext
-
-**Output**:
-
-```plaintext
-Total services: 7
+
+
+Validate all service configurations:
+provisioning services validate
+
+Output :
+Total services: 7
Valid: 6
Invalid: 1
Invalid services:
❌ coredns:
- Docker is not installed or not running
-```plaintext
-
-### Readiness Report
-
-Get platform readiness report:
-
-```bash
-provisioning services readiness
-```plaintext
-
-**Output**:
-
-```plaintext
-Platform Readiness Report
+
+
+Get platform readiness report:
+provisioning services readiness
+
+Output :
+Platform Readiness Report
Total services: 7
Running: 3
@@ -43573,32 +41857,21 @@ Services:
🔴 coredns - infrastructure - dns
Issues: 1
🟡 gitea - infrastructure - git
-```plaintext
-
-### Monitor Service
-
-Continuous health monitoring:
-
-```bash
-# Monitor with default interval (30s)
+
+
+Continuous health monitoring:
+# Monitor with default interval (30s)
provisioning services monitor orchestrator
# Custom interval
provisioning services monitor orchestrator --interval 10
-```plaintext
-
----
-
-## Deployment Modes
-
-### Binary Deployment
-
-Run services as native binaries.
-
-**Configuration**:
-
-```toml
-[services.orchestrator.deployment]
+
+
+
+
+Run services as native binaries.
+Configuration :
+[services.orchestrator.deployment]
mode = "binary"
[services.orchestrator.deployment.binary]
@@ -43606,22 +41879,17 @@ binary_path = "${HOME}/.provisioning/bin/provisioning-orchestrator"
args = ["--port", "8080"]
working_dir = "${HOME}/.provisioning/orchestrator"
env = { RUST_LOG = "info" }
-```plaintext
-
-**Process Management**:
-
-- PID tracking in `~/.provisioning/services/pids/`
-- Log output to `~/.provisioning/services/logs/`
-- State tracking in `~/.provisioning/services/state/`
-
-### Docker Deployment
-
-Run services as Docker containers.
-
-**Configuration**:
-
-```toml
-[services.coredns.deployment]
+
+Process Management :
+
+PID tracking in ~/.provisioning/services/pids/
+Log output to ~/.provisioning/services/logs/
+State tracking in ~/.provisioning/services/state/
+
+
+Run services as Docker containers.
+Configuration :
+[services.coredns.deployment]
mode = "docker"
[services.coredns.deployment.docker]
@@ -43630,464 +41898,295 @@ container_name = "provisioning-coredns"
ports = ["5353:53/udp"]
volumes = ["${HOME}/.provisioning/coredns/Corefile:/Corefile:ro"]
restart_policy = "unless-stopped"
-```plaintext
-
-**Prerequisites**:
-
-- Docker daemon running
-- Docker CLI installed
-
-### Docker Compose Deployment
-
-Run services via Docker Compose.
-
-**Configuration**:
-
-```toml
-[services.platform.deployment]
+
+Prerequisites :
+
+Docker daemon running
+Docker CLI installed
+
+
+Run services via Docker Compose.
+Configuration :
+[services.platform.deployment]
mode = "docker-compose"
[services.platform.deployment.docker_compose]
compose_file = "${HOME}/.provisioning/platform/docker-compose.yaml"
service_name = "orchestrator"
project_name = "provisioning"
-```plaintext
-
-**File**: `provisioning/platform/docker-compose.yaml`
-
-### Kubernetes Deployment
-
-Run services on Kubernetes.
-
-**Configuration**:
-
-```toml
-[services.orchestrator.deployment]
+
+File : provisioning/platform/docker-compose.yaml
+
+Run services on Kubernetes.
+Configuration :
+[services.orchestrator.deployment]
mode = "kubernetes"
[services.orchestrator.deployment.kubernetes]
namespace = "provisioning"
deployment_name = "orchestrator"
manifests_path = "${HOME}/.provisioning/k8s/orchestrator/"
-```plaintext
-
-**Prerequisites**:
-
-- kubectl installed and configured
-- Kubernetes cluster accessible
-
-### Remote Deployment
-
-Connect to remotely-running services.
-
-**Configuration**:
-
-```toml
-[services.orchestrator.deployment]
+
+Prerequisites :
+
+kubectl installed and configured
+Kubernetes cluster accessible
+
+
+Connect to remotely-running services.
+Configuration :
+[services.orchestrator.deployment]
mode = "remote"
[services.orchestrator.deployment.remote]
endpoint = "https://orchestrator.example.com"
tls_enabled = true
auth_token_path = "${HOME}/.provisioning/tokens/orchestrator.token"
-```plaintext
-
----
-
-## Health Monitoring
-
-### Health Check Types
-
-#### HTTP Health Check
-
-```toml
-[services.orchestrator.health_check]
+
+
+
+
+
+[services.orchestrator.health_check]
type = "http"
[services.orchestrator.health_check.http]
endpoint = "http://localhost:9090/health"
expected_status = 200
method = "GET"
-```plaintext
-
-#### TCP Health Check
-
-```toml
-[services.coredns.health_check]
+
+
+[services.coredns.health_check]
type = "tcp"
[services.coredns.health_check.tcp]
host = "localhost"
port = 5353
-```plaintext
-
-#### Command Health Check
-
-```toml
-[services.custom.health_check]
+
+
+[services.custom.health_check]
type = "command"
[services.custom.health_check.command]
command = "systemctl is-active myservice"
expected_exit_code = 0
-```plaintext
-
-#### File Health Check
-
-```toml
-[services.custom.health_check]
+
+
+[services.custom.health_check]
type = "file"
[services.custom.health_check.file]
path = "/var/run/myservice.pid"
must_exist = true
-```plaintext
-
-### Health Check Configuration
-
-- `interval`: Seconds between checks (default: 10)
-- `retries`: Max retry attempts (default: 3)
-- `timeout`: Check timeout in seconds (default: 5)
-
-### Continuous Monitoring
-
-```bash
-provisioning services monitor orchestrator --interval 30
-```plaintext
-
-**Output**:
-
-```plaintext
-Starting health monitoring for orchestrator (interval: 30s)
+
+
+
+interval: Seconds between checks (default: 10)
+retries: Max retry attempts (default: 3)
+timeout: Check timeout in seconds (default: 5)
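
The interval/retries semantics above can be sketched for a command-type check. This is a minimal illustration, not the platform's `health.nu` implementation; `true`/`false` stand in for real probe commands.

```bash
# Run a probe command up to `retries` times, sleeping `interval`
# seconds between attempts; report healthy on the first success.
check_with_retries() {
  probe=$1; retries=$2; interval=$3
  attempt=1
  while [ "$attempt" -le "$retries" ]; do
    if eval "$probe" >/dev/null 2>&1; then
      echo healthy
      return 0
    fi
    attempt=$((attempt + 1))
    sleep "$interval"
  done
  echo unhealthy
  return 1
}
check_with_retries true 3 0          # → healthy
check_with_retries false 2 0 || true # → unhealthy
```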
+
+
+provisioning services monitor orchestrator --interval 30
+
+Output :
+Starting health monitoring for orchestrator (interval: 30s)
Press Ctrl+C to stop
2025-10-06 14:30:00 ✅ orchestrator: HTTP health check passed
2025-10-06 14:30:30 ✅ orchestrator: HTTP health check passed
2025-10-06 14:31:00 ✅ orchestrator: HTTP health check passed
-```plaintext
-
----
-
-## Dependency Management
-
-### Dependency Graph
-
-Services can depend on other services:
-
-```toml
-[services.control-center]
+
+
+
+
+Services can depend on other services:
+[services.control-center]
dependencies = ["orchestrator"]
[services.api-gateway]
dependencies = ["orchestrator", "control-center", "mcp-server"]
-```plaintext
-
-### Startup Order
-
-Services start in topological order:
-
-```plaintext
-orchestrator (order: 10)
+
+
+Services start in topological order:
+orchestrator (order: 10)
└─> control-center (order: 20)
└─> api-gateway (order: 45)
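
The dependency declarations above can be fed to the standard Unix tsort(1) to compute a valid startup order. This is a sketch of the idea using the edges from the registry (dep before dependent), not the platform's actual `dependencies.nu` implementation:

```bash
# Each "A B" pair means A must start before B.
order=$(printf '%s\n' \
  "orchestrator control-center" \
  "orchestrator mcp-server" \
  "control-center api-gateway" \
  "mcp-server api-gateway" | tsort)
echo "$order"
# orchestrator comes first (only node with no dependencies),
# api-gateway last (depends on everything else)
```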
-```plaintext
-
-### Dependency Resolution
-
-Automatic dependency resolution when starting services:
-
-```bash
-# Starting control-center automatically starts orchestrator first
+
+
+Automatic dependency resolution when starting services:
+# Starting control-center automatically starts orchestrator first
provisioning services start control-center
-```plaintext
-
-**Output**:
-
-```plaintext
-Starting dependency: orchestrator
+
+Output :
+Starting dependency: orchestrator
✅ Started orchestrator with PID 12345
Waiting for orchestrator to become healthy...
✅ Service orchestrator is healthy
Starting service: control-center
✅ Started control-center with PID 12346
✅ Service control-center is healthy
-```plaintext
-
-### Conflicts
-
-Services can conflict with each other:
-
-```toml
-[services.coredns]
+
+
+Services can conflict with each other:
+[services.coredns]
conflicts = ["dnsmasq", "systemd-resolved"]
-```plaintext
-
-Attempting to start a conflicting service will fail:
-
-```bash
-provisioning services start coredns
-```plaintext
-
-**Output**:
-
-```plaintext
-❌ Pre-flight check failed: conflicts
+
+Attempting to start a conflicting service will fail:
+provisioning services start coredns
+
+Output :
+❌ Pre-flight check failed: conflicts
Conflicting services running: dnsmasq
-```plaintext
-
-### Reverse Dependencies
-
-Check which services depend on a service:
-
-```bash
-provisioning services dependencies orchestrator
-```plaintext
-
-**Output**:
-
-```plaintext
-## orchestrator
+
+
+Check which services depend on a service:
+provisioning services dependencies orchestrator
+
+Output :
+## orchestrator
- Type: platform
- Category: orchestration
- Required by:
- control-center
- mcp-server
- api-gateway
-```plaintext
-
-### Safe Stop
-
-System prevents stopping services with running dependents:
-
-```bash
-provisioning services stop orchestrator
-```plaintext
-
-**Output**:
-
-```plaintext
-❌ Cannot stop orchestrator:
+
+
+The system prevents stopping services with running dependents:
+provisioning services stop orchestrator
+
+Output :
+❌ Cannot stop orchestrator:
Dependent services running: control-center, mcp-server, api-gateway
Use --force to stop anyway
-```plaintext
-
----
-
-## Pre-flight Checks
-
-### Purpose
-
-Pre-flight checks ensure services can start successfully before attempting to start them.
-
-### Check Types
-
-1. **Prerequisites**: Binary exists, Docker running, etc.
-2. **Conflicts**: No conflicting services running
-3. **Dependencies**: All dependencies available
-
-### Automatic Checks
-
-Pre-flight checks run automatically when starting services:
-
-```bash
-provisioning services start orchestrator
-```plaintext
-
-**Check Process**:
-
-```plaintext
-Running pre-flight checks for orchestrator...
+
+
+
+
+Pre-flight checks ensure services can start successfully before attempting to start them.
+
+
+Prerequisites : Binary exists, Docker running, etc.
+Conflicts : No conflicting services running
+Dependencies : All dependencies available
+
+
+Pre-flight checks run automatically when starting services:
+provisioning services start orchestrator
+
+Check Process :
+Running pre-flight checks for orchestrator...
✅ Binary found: /Users/user/.provisioning/bin/provisioning-orchestrator
✅ No conflicts detected
✅ All dependencies available
Starting service: orchestrator
-```plaintext
-
-### Manual Validation
-
-Validate all services:
-
-```bash
-provisioning services validate
-```plaintext
-
-Validate specific service:
-
-```bash
-provisioning services status orchestrator
-```plaintext
-
-### Auto-Start
-
-Services with `auto_start = true` can be started automatically when needed:
-
-```bash
-# Orchestrator auto-starts if needed for server operations
+
+
+Validate all services:
+provisioning services validate
+
+Validate specific service:
+provisioning services status orchestrator
+
+
+Services with auto_start = true can be started automatically when needed:
+# Orchestrator auto-starts if needed for server operations
provisioning server create
-```plaintext
-
-**Output**:
-
-```plaintext
-Starting required services...
+
+Output :
+Starting required services...
✅ Orchestrator started
Creating server...
-```plaintext
-
----
-
-## Troubleshooting
-
-### Service Won't Start
-
-**Check prerequisites**:
-
-```bash
-provisioning services validate
+
+
+
+
+Check prerequisites :
+provisioning services validate
provisioning services status <service>
-```plaintext
-
-**Common issues**:
-
-- Binary not found: Check `binary_path` in config
-- Docker not running: Start Docker daemon
-- Port already in use: Check for conflicting processes
-- Dependencies not running: Start dependencies first
-
-### Service Health Check Failing
-
-**View health status**:
-
-```bash
-provisioning services health <service>
-```plaintext
-
-**Check logs**:
-
-```bash
-provisioning services logs <service> --follow
-```plaintext
-
-**Common issues**:
-
-- Service not fully initialized: Wait longer or increase `start_timeout`
-- Wrong health check endpoint: Verify endpoint in config
-- Network issues: Check firewall, port bindings
-
-### Dependency Issues
-
-**View dependency tree**:
-
-```bash
-provisioning services dependencies <service>
-```plaintext
-
-**Check dependency status**:
-
-```bash
-provisioning services status <dependency>
-```plaintext
-
-**Start with dependencies**:
-
-```bash
-provisioning platform start <service>
-```plaintext
-
-### Circular Dependencies
-
-**Validate dependency graph**:
-
-```bash
-# This is done automatically but you can check manually
+
+Common issues :
+
+Binary not found: Check binary_path in config
+Docker not running: Start Docker daemon
+Port already in use: Check for conflicting processes
+Dependencies not running: Start dependencies first
+
+
+View health status :
+provisioning services health <service>
+
+Check logs :
+provisioning services logs <service> --follow
+
+Common issues :
+
+Service not fully initialized: Wait longer or increase start_timeout
+Wrong health check endpoint: Verify endpoint in config
+Network issues: Check firewall, port bindings
+
+
+View dependency tree :
+provisioning services dependencies <service>
+
+Check dependency status :
+provisioning services status <dependency>
+
+Start with dependencies :
+provisioning platform start <service>
+
+
+Validate dependency graph :
+# This is done automatically but you can check manually
nu -c "use lib_provisioning/services/mod.nu *; validate-dependency-graph"
-```plaintext
-
-### PID File Stale
-
-If service reports running but isn't:
-
-```bash
-# Manual cleanup
+
+
+If a service reports as running but isn’t:
+# Manual cleanup
rm ~/.provisioning/services/pids/<service>.pid
# Force restart
provisioning services restart <service>
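
The stale-PID situation above can be detected with a `kill -0` probe before trusting a recorded PID. A sketch, using a temp file in place of the real `~/.provisioning/services/pids/<service>.pid`:

```bash
# Record the PID of a short-lived stand-in process, let it exit,
# then check whether the PID file is stale.
pid_file=$(mktemp)
sleep 0 &
pid=$!
wait "$pid"                 # process has exited; the PID file is now stale
echo "$pid" > "$pid_file"

if kill -0 "$(cat "$pid_file")" 2>/dev/null; then
  status=running
else
  status=stale
  rm -f "$pid_file"         # clean up, mirroring the manual fix above
fi
echo "$status"
```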
-```plaintext
-
-### Port Conflicts
-
-**Find process using port**:
-
-```bash
-lsof -i :9090
-```plaintext
-
-**Kill conflicting process**:
-
-```bash
-kill <PID>
-```plaintext
-
-### Docker Issues
-
-**Check Docker status**:
-
-```bash
-docker ps
+
+
+Find process using port :
+lsof -i :9090
+
+Kill conflicting process :
+kill <PID>
+
+
+Check Docker status :
+docker ps
docker info
-```plaintext
-
-**View container logs**:
-
-```bash
-docker logs provisioning-<service>
-```plaintext
-
-**Restart Docker daemon**:
-
-```bash
-# macOS
+
+View container logs :
+docker logs provisioning-<service>
+
+Restart Docker daemon :
+# macOS
killall Docker && open /Applications/Docker.app
# Linux
systemctl restart docker
-```plaintext
-
-### Service Logs
-
-**View recent logs**:
-
-```bash
-tail -f ~/.provisioning/services/logs/<service>.log
-```plaintext
-
-**Search logs**:
-
-```bash
-grep "ERROR" ~/.provisioning/services/logs/<service>.log
-```plaintext
-
----
-
-## Advanced Usage
-
-### Custom Service Registration
-
-Add custom services by editing `provisioning/config/services.toml`.
-
-### Integration with Workflows
-
-Services automatically start when required by workflows:
-
-```bash
-# Orchestrator starts automatically if not running
+
+
+View recent logs :
+tail -f ~/.provisioning/services/logs/<service>.log
+
+Search logs :
+grep "ERROR" ~/.provisioning/services/logs/<service>.log
+
+
+
+
+Add custom services by editing provisioning/config/services.toml.
+
+Services automatically start when required by workflows:
+# Orchestrator starts automatically if not running
provisioning workflow submit my-workflow
-```plaintext
-
-### CI/CD Integration
-
-```yaml
-# GitLab CI
+
+
+# GitLab CI
before_script:
- provisioning platform start orchestrator
- provisioning services health orchestrator
@@ -44095,30 +42194,21 @@ before_script:
test:
script:
- provisioning test quick kubernetes
-```plaintext
-
-### Monitoring Integration
-
-Services can integrate with monitoring systems via health endpoints.
-
----
-
-## Related Documentation
-
-- Orchestrator README
-- [Test Environment Guide](test-environment-guide.md)
-- [Workflow Management](workflow-management.md)
-
----
-
-## Quick Reference
-
-**Version**: 1.0.0
-
-### Platform Commands (Manage All Services)
-
-```bash
-# Start all auto-start services
+
+
+Services can integrate with monitoring systems via health endpoints.
+
+
+
+
+
+Version : 1.0.0
+
+# Start all auto-start services
provisioning platform start
# Start specific services with dependencies
@@ -44141,14 +42231,10 @@ provisioning platform health
# View service logs
provisioning platform logs orchestrator --follow
-```plaintext
-
----
-
-### Service Commands (Individual Services)
-
-```bash
-# List all services
+
+
+
+# List all services
provisioning services list
# List only running services
@@ -44183,14 +42269,10 @@ provisioning services logs orchestrator --follow --lines 100
# Monitor health continuously
provisioning services monitor orchestrator --interval 30
-```plaintext
-
----
-
-### Dependency & Validation
-
-```bash
-# View dependency graph
+
+
+
+# View dependency graph
provisioning services dependencies
# View specific service dependencies
@@ -44204,28 +42286,22 @@ provisioning services readiness
# Check required services for operation
provisioning services check server
-```plaintext
-
----
-
-### Registered Services
-
-| Service | Port | Type | Auto-Start | Dependencies |
-|---------|------|------|------------|--------------|
-| orchestrator | 8080 | Platform | Yes | - |
-| control-center | 8081 | Platform | No | orchestrator |
-| coredns | 5353 | Infrastructure | No | - |
-| gitea | 3000, 222 | Infrastructure | No | - |
-| oci-registry | 5000 | Infrastructure | No | - |
-| mcp-server | 8082 | Platform | No | orchestrator |
-| api-gateway | 8083 | Platform | No | orchestrator, control-center, mcp-server |
-
----
-
-### Docker Compose
-
-```bash
-# Start all services
+
+
+
+Service Port Type Auto-Start Dependencies
+orchestrator 8080 Platform Yes -
+control-center 8081 Platform No orchestrator
+coredns 5353 Infrastructure No -
+gitea 3000, 222 Infrastructure No -
+oci-registry 5000 Infrastructure No -
+mcp-server 8082 Platform No orchestrator
+api-gateway 8083 Platform No orchestrator, control-center, mcp-server
+
+
+
+
+# Start all services
cd provisioning/platform
docker-compose up -d
@@ -44243,41 +42319,30 @@ docker-compose down
# Stop and remove volumes
docker-compose down -v
-```plaintext
-
----
-
-### Service State Directories
-
-```plaintext
-~/.provisioning/services/
+
+
+
+~/.provisioning/services/
├── pids/ # Process ID files
├── state/ # Service state (JSON)
└── logs/ # Service logs
-```plaintext
-
----
-
-### Health Check Endpoints
-
-| Service | Endpoint | Type |
-|---------|----------|------|
-| orchestrator | <http://localhost:9090/health> | HTTP |
-| control-center | <http://localhost:9080/health> | HTTP |
-| coredns | localhost:5353 | TCP |
-| gitea | <http://localhost:3000/api/healthz> | HTTP |
-| oci-registry | <http://localhost:5000/v2/> | HTTP |
-| mcp-server | <http://localhost:8082/health> | HTTP |
-| api-gateway | <http://localhost:8083/health> | HTTP |
-
----
-
-### Common Workflows
-
-#### Start Platform for Development
-
-```bash
-# Start core services
+
+
+
+
+
+
+
+# Start core services
provisioning platform start orchestrator
# Check status
@@ -44285,24 +42350,18 @@ provisioning platform status
# Check health
provisioning platform health
-```plaintext
-
-#### Start Full Platform Stack
-
-```bash
-# Use Docker Compose
+
+
+# Use Docker Compose
cd provisioning/platform
docker-compose up -d
# Verify
docker-compose ps
provisioning platform health
-```plaintext
-
-#### Debug Service Issues
-
-```bash
-# Check service status
+
+
+# Check service status
provisioning services status <service>
# View logs
@@ -44316,12 +42375,9 @@ provisioning services validate
# Restart service
provisioning services restart <service>
-```plaintext
-
-#### Safe Service Shutdown
-
-```bash
-# Check dependents
+
+
+# Check dependents
nu -c "use lib_provisioning/services/mod.nu *; can-stop-service orchestrator"
# Stop with dependency check
@@ -44329,16 +42385,11 @@ provisioning services stop orchestrator
# Force stop if needed
provisioning services stop orchestrator --force
-```plaintext
-
----
-
-### Troubleshooting
-
-#### Service Won't Start
-
-```bash
-# 1. Check prerequisites
+
+
+
+
+# 1. Check prerequisites
provisioning services validate
# 2. View detailed status
@@ -44350,12 +42401,9 @@ provisioning services logs <service>
# 4. Verify binary/image exists
ls ~/.provisioning/bin/<service>
docker images | grep <service>
-```plaintext
-
-#### Health Check Failing
-
-```bash
-# Check endpoint manually
+
+
+# Check endpoint manually
curl http://localhost:9090/health
# View health details
@@ -44363,22 +42411,16 @@ provisioning services health <service>
# Monitor continuously
provisioning services monitor <service> --interval 10
-```plaintext
-
-#### PID File Stale
-
-```bash
-# Remove stale PID file
+
+
+# Remove stale PID file
rm ~/.provisioning/services/pids/<service>.pid
# Restart service
provisioning services restart <service>
-```plaintext
-
-#### Port Already in Use
-
-```bash
-# Find process using port
+
+
+# Find process using port
lsof -i :9090
# Kill process
@@ -44386,68 +42428,47 @@ kill <PID>
# Restart service
provisioning services start <service>
-```plaintext
-
----
-
-### Integration with Operations
-
-#### Server Operations
-
-```bash
-# Orchestrator auto-starts if needed
+
+
+
+
+# Orchestrator auto-starts if needed
provisioning server create
# Manual check
provisioning services check server
-```plaintext
-
-#### Workflow Operations
-
-```bash
-# Orchestrator auto-starts
+
+
+# Orchestrator auto-starts
provisioning workflow submit my-workflow
# Check status
provisioning services status orchestrator
-```plaintext
-
-#### Test Operations
-
-```bash
-# Orchestrator required for test environments
+
+
+# Orchestrator required for test environments
provisioning test quick kubernetes
# Pre-flight check
provisioning services check test-env
-```plaintext
-
----
-
-### Advanced Usage
-
-#### Custom Service Startup Order
-
-Services start based on:
-
-1. Dependency order (topological sort)
-2. `start_order` field (lower = earlier)
-
-#### Auto-Start Configuration
-
-Edit `provisioning/config/services.toml`:
-
-```toml
-[services.<service>.startup]
+
+
+
+
+Services start based on:
+
+Dependency order (topological sort)
+start_order field (lower = earlier)
+
+
+Edit provisioning/config/services.toml:
+[services.<service>.startup]
auto_start = true # Enable auto-start
start_timeout = 30 # Timeout in seconds
start_order = 10 # Startup priority
-```plaintext
-
-#### Health Check Configuration
-
-```toml
-[services.<service>.health_check]
+
+
+[services.<service>.health_check]
type = "http" # http, tcp, command, file
interval = 10 # Seconds between checks
retries = 3 # Max retry attempts
@@ -44456,23 +42477,18 @@ timeout = 5 # Check timeout
[services.<service>.health_check.http]
endpoint = "http://localhost:9090/health"
expected_status = 200
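The retry semantics reduce to a simple loop: probe, and only report failure after `retries` consecutive misses. The probe below is a stub standing in for the real HTTP check, so the sketch runs anywhere.

```bash
retries=3
interval=1
probe() { [ -f /tmp/health_demo_ok ]; }   # stub for: curl -fsS http://localhost:9090/health

touch /tmp/health_demo_ok                  # simulate a healthy service
status="unhealthy"
for attempt in $(seq 1 "$retries"); do
  if probe; then status="healthy"; break; fi
  sleep "$interval"
done
echo "$status"
# → healthy
```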
-```plaintext
-
----
-
-### Key Files
-
-- **Service Registry**: `provisioning/config/services.toml`
-- **KCL Schema**: `provisioning/kcl/services.k`
-- **Docker Compose**: `provisioning/platform/docker-compose.yaml`
-- **User Guide**: `docs/user/SERVICE_MANAGEMENT_GUIDE.md`
-
----
-
-### Getting Help
-
-```bash
-# View documentation
+
+
+
+
+Service Registry : provisioning/config/services.toml
+KCL Schema : provisioning/kcl/services.k
+Docker Compose : provisioning/platform/docker-compose.yaml
+User Guide : docs/user/SERVICE_MANAGEMENT_GUIDE.md
+
+
+
+# View documentation
cat docs/user/SERVICE_MANAGEMENT_GUIDE.md | less
# Run verification
@@ -44480,17 +42496,12 @@ nu provisioning/core/nulib/tests/verify_services.nu
# Check readiness
provisioning services readiness
-```plaintext
-
----
-
-**Quick Tip**: Use `--help` flag with any command for detailed usage information.
-
----
-
-**Maintained By**: Platform Team
-**Support**: [GitHub Issues](https://github.com/your-org/provisioning/issues)
+
+Quick Tip : Use --help flag with any command for detailed usage information.
+
+Maintained By : Platform Team
+Support : GitHub Issues
Complete guide for monitoring the 9-service platform with Prometheus, Grafana, and AlertManager
Version : 1.0.0
@@ -44498,7 +42509,7 @@ provisioning services readiness
Target Audience : DevOps Engineers, Platform Operators
Status : Production Ready
-
+
This guide provides complete setup instructions for monitoring and alerting on the provisioning platform using industry-standard tools:
Prometheus : Metrics collection and time-series database
@@ -44506,8 +42517,8 @@ provisioning services readiness
AlertManager : Alert routing and notification
-
-Services (metrics endpoints)
+
+Services (metrics endpoints)
↓
Prometheus (scrapes every 30s)
↓
@@ -44522,7 +42533,7 @@ Grafana (queries)
Dashboards & Visualization
-
+
# Prometheus (for metrics)
wget https://github.com/prometheus/prometheus/releases/download/v2.48.0/prometheus-2.48.0.linux-amd64.tar.gz
@@ -45388,11 +43399,11 @@ find "$BACKUP_DIR" -mtime +$RETENTION_DAYS -delete
# Keep metrics for 15 days
/opt/prometheus/prometheus \
--storage.tsdb.retention.time=15d \
- --storage.tsdb.retention.size=50GB
+ --storage.tsdb.retention.size=50GB
-
+
# Check configuration
/opt/prometheus/promtool check config /etc/prometheus/prometheus.yml
@@ -45543,7 +43554,7 @@ If service doesn't recover after restart, escalate to on-call engineer
Advanced Topics
-
+
The CoreDNS integration provides comprehensive DNS management capabilities for the provisioning system. It supports:
Local DNS service - Run CoreDNS as binary or Docker container
@@ -45553,7 +43564,7 @@ If service doesn't recover after restart, escalate to on-call engineer
REST API - Programmatic DNS management
Docker deployment - Containerized CoreDNS with docker-compose
-
+
✅ Automatic Server Registration - Servers automatically registered in DNS on creation
✅ Zone File Management - Create, update, and manage zone files programmatically
✅ Multiple Deployment Modes - Binary, Docker, remote, or hybrid
@@ -45561,8 +43572,8 @@ If service doesn't recover after restart, escalate to on-call engineer
✅ CLI Interface - Comprehensive command-line tools
✅ API Integration - REST API for external integration
-
-
+
+
Nushell 0.107+ - For CLI and scripts
Docker (optional) - For containerized deployment
@@ -45577,130 +43588,101 @@ provisioning dns install 1.11.1
# Check mode
provisioning dns install --check
-```plaintext
-
-The binary will be installed to `~/.provisioning/bin/coredns`.
-
-### Verify Installation
-
-```bash
-# Check CoreDNS version
+
+The binary will be installed to ~/.provisioning/bin/coredns.
+
+# Check CoreDNS version
~/.provisioning/bin/coredns -version
# Verify installation
ls -lh ~/.provisioning/bin/coredns
-```plaintext
+
+
+
+
+Add CoreDNS configuration to your infrastructure config:
+# In workspace/infra/{name}/config.ncl
+let coredns_config = {
+ mode = "local",
----
+ local = {
+ enabled = true,
+ deployment_type = "binary", # or "docker"
+ binary_path = "~/.provisioning/bin/coredns",
+ config_path = "~/.provisioning/coredns/Corefile",
+ zones_path = "~/.provisioning/coredns/zones",
+ port = 5353,
+ auto_start = true,
+ zones = ["provisioning.local", "workspace.local"],
+ },
-## Configuration
+ dynamic_updates = {
+ enabled = true,
+ api_endpoint = "http://localhost:9090/dns",
+ auto_register_servers = true,
+ auto_unregister_servers = true,
+ ttl = 300,
+ },
-### KCL Configuration Schema
-
-Add CoreDNS configuration to your infrastructure config:
-
-```kcl
-# In workspace/infra/{name}/config.k
-import provisioning.coredns as dns
-
-coredns_config: dns.CoreDNSConfig = {
- mode = "local"
-
- local = {
- enabled = True
- deployment_type = "binary" # or "docker"
- binary_path = "~/.provisioning/bin/coredns"
- config_path = "~/.provisioning/coredns/Corefile"
- zones_path = "~/.provisioning/coredns/zones"
- port = 5353
- auto_start = True
- zones = ["provisioning.local", "workspace.local"]
- }
-
- dynamic_updates = {
- enabled = True
- api_endpoint = "http://localhost:9090/dns"
- auto_register_servers = True
- auto_unregister_servers = True
- ttl = 300
- }
-
- upstream = ["8.8.8.8", "1.1.1.1"]
- default_ttl = 3600
- enable_logging = True
- enable_metrics = True
- metrics_port = 9153
-}
-```plaintext
-
-### Configuration Modes
-
-#### Local Mode (Binary)
-
-Run CoreDNS as a local binary process:
-
-```kcl
-coredns_config: CoreDNSConfig = {
- mode = "local"
- local = {
- deployment_type = "binary"
- auto_start = True
- }
-}
-```plaintext
-
-#### Local Mode (Docker)
-
-Run CoreDNS in Docker container:
-
-```kcl
-coredns_config: CoreDNSConfig = {
- mode = "local"
- local = {
- deployment_type = "docker"
- docker = {
- image = "coredns/coredns:1.11.1"
- container_name = "provisioning-coredns"
- restart_policy = "unless-stopped"
- }
- }
-}
-```plaintext
-
-#### Remote Mode
-
-Connect to external CoreDNS service:
-
-```kcl
-coredns_config: CoreDNSConfig = {
- mode = "remote"
- remote = {
- enabled = True
- endpoints = ["https://dns1.example.com", "https://dns2.example.com"]
- zones = ["production.local"]
- verify_tls = True
- }
-}
-```plaintext
-
-#### Disabled Mode
-
-Disable CoreDNS integration:
-
-```kcl
-coredns_config: CoreDNSConfig = {
- mode = "disabled"
-}
-```plaintext
-
----
-
-## CLI Commands
-
-### Service Management
-
-```bash
-# Check status
+ upstream = ["8.8.8.8", "1.1.1.1"],
+ default_ttl = 3600,
+ enable_logging = true,
+ enable_metrics = true,
+ metrics_port = 9153,
+} in
+coredns_config
+
+
+
+Run CoreDNS as a local binary process:
+let coredns_config = {
+ mode = "local",
+ local = {
+ deployment_type = "binary",
+ auto_start = true,
+ },
+} in
+coredns_config
+
+
+Run CoreDNS in Docker container:
+let coredns_config = {
+ mode = "local",
+ local = {
+ deployment_type = "docker",
+ docker = {
+ image = "coredns/coredns:1.11.1",
+ container_name = "provisioning-coredns",
+ restart_policy = "unless-stopped",
+ },
+ },
+} in
+coredns_config
+
+
+Connect to external CoreDNS service:
+let coredns_config = {
+ mode = "remote",
+ remote = {
+ enabled = true,
+ endpoints = ["https://dns1.example.com", "https://dns2.example.com"],
+ zones = ["production.local"],
+ verify_tls = true,
+ },
+} in
+coredns_config
+
+
+Disable CoreDNS integration:
+let coredns_config = {
+ mode = "disabled",
+} in
+coredns_config
+
+
+
+
+# Check status
provisioning dns status
# Start service
@@ -45726,12 +43708,9 @@ provisioning dns logs --follow
# Show last 100 lines
provisioning dns logs --lines 100
-```plaintext
-
-### Health & Monitoring
-
-```bash
-# Check health
+
+
+# Check health
provisioning dns health
# View configuration
@@ -45742,42 +43721,28 @@ provisioning dns config validate
# Generate new Corefile
provisioning dns config generate
-```plaintext
-
----
-
-## Zone Management
-
-### List Zones
-
-```bash
-# List all zones
+
+
+
+
+# List all zones
provisioning dns zone list
-```plaintext
-
-**Output:**
-
-```plaintext
-DNS Zones
+
+Output:
+DNS Zones
=========
• provisioning.local ✓
• workspace.local ✓
-```plaintext
-
-### Create Zone
-
-```bash
-# Create new zone
+
+
+# Create new zone
provisioning dns zone create myapp.local
# Check mode
provisioning dns zone create myapp.local --check
-```plaintext
-
-### Show Zone Details
-
-```bash
-# Show all records in zone
+
+
+# Show all records in zone
provisioning dns zone show provisioning.local
# JSON format
@@ -45785,12 +43750,9 @@ provisioning dns zone show provisioning.local --format json
# YAML format
provisioning dns zone show provisioning.local --format yaml
-```plaintext
-
-### Delete Zone
-
-```bash
-# Delete zone (with confirmation)
+
+
+# Delete zone (with confirmation)
provisioning dns zone delete myapp.local
# Force deletion (skip confirmation)
@@ -45798,18 +43760,12 @@ provisioning dns zone delete myapp.local --force
# Check mode
provisioning dns zone delete myapp.local --check
-```plaintext
-
----
-
-## Record Management
-
-### Add Records
-
-#### A Record (IPv4)
-
-```bash
-provisioning dns record add server-01 A 10.0.1.10
+
+
+
+
+
+provisioning dns record add server-01 A 10.0.1.10
# With custom TTL
provisioning dns record add server-01 A 10.0.1.10 --ttl 600
@@ -45819,36 +43775,21 @@ provisioning dns record add server-01 A 10.0.1.10 --comment "Web server"
# Different zone
provisioning dns record add server-01 A 10.0.1.10 --zone myapp.local
-```plaintext
-
-#### AAAA Record (IPv6)
-
-```bash
-provisioning dns record add server-01 AAAA 2001:db8::1
-```plaintext
-
-#### CNAME Record
-
-```bash
-provisioning dns record add web CNAME server-01.provisioning.local
-```plaintext
-
-#### MX Record
-
-```bash
-provisioning dns record add @ MX mail.example.com --priority 10
-```plaintext
-
-#### TXT Record
-
-```bash
-provisioning dns record add @ TXT "v=spf1 mx -all"
-```plaintext
-
-### Remove Records
-
-```bash
-# Remove record
+
+
+provisioning dns record add server-01 AAAA 2001:db8::1
+
+
+provisioning dns record add web CNAME server-01.provisioning.local
+
+
+provisioning dns record add @ MX mail.example.com --priority 10
+
+
+provisioning dns record add @ TXT "v=spf1 mx -all"
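Under the hood, each of these commands writes one line into the zone file. The exact layout depends on the generator, but a standard A-record line looks like the following (name, TTL, and IP are examples):

```bash
name=server-01; ttl=300; ip=10.0.1.10
printf '%s\t%s\tIN\tA\t%s\n' "$name" "$ttl" "$ip"
# → server-01	300	IN	A	10.0.1.10
```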
+
+
+# Remove record
provisioning dns record remove server-01
# Different zone
@@ -45856,22 +43797,16 @@ provisioning dns record remove server-01 --zone myapp.local
# Check mode
provisioning dns record remove server-01 --check
-```plaintext
-
-### Update Records
-
-```bash
-# Update record value
+
+
+# Update record value
provisioning dns record update server-01 A 10.0.1.20
# With new TTL
provisioning dns record update server-01 A 10.0.1.20 --ttl 1800
-```plaintext
-
-### List Records
-
-```bash
-# List all records in zone
+
+
+# List all records in zone
provisioning dns record list
# Different zone
@@ -45882,12 +43817,9 @@ provisioning dns record list --format json
# YAML format
provisioning dns record list --format yaml
-```plaintext
-
-**Example Output:**
-
-```plaintext
-DNS Records - Zone: provisioning.local
+
+Example Output:
+DNS Records - Zone: provisioning.local
╭───┬──────────────┬──────┬─────────────┬─────╮
│ # │ name │ type │ value │ ttl │
@@ -45897,35 +43829,23 @@ DNS Records - Zone: provisioning.local
│ 2 │ db-01 │ A │ 10.0.2.10 │ 300 │
│ 3 │ web │ CNAME│ server-01 │ 300 │
╰───┴──────────────┴──────┴─────────────┴─────╯
-```plaintext
-
----
-
-## Docker Deployment
-
-### Prerequisites
-
-Ensure Docker and docker-compose are installed:
-
-```bash
-docker --version
+
+
+
+
+Ensure Docker and docker-compose are installed:
+docker --version
docker-compose --version
-```plaintext
-
-### Start CoreDNS in Docker
-
-```bash
-# Start CoreDNS container
+
+
+# Start CoreDNS container
provisioning dns docker start
# Check mode
provisioning dns docker start --check
-```plaintext
-
-### Manage Docker Container
-
-```bash
-# Check status
+
+
+# Check status
provisioning dns docker status
# View logs
@@ -45942,12 +43862,9 @@ provisioning dns docker stop
# Check health
provisioning dns docker health
-```plaintext
-
-### Update Docker Image
-
-```bash
-# Pull latest image
+
+
+# Pull latest image
provisioning dns docker pull
# Pull specific version
@@ -45955,12 +43872,9 @@ provisioning dns docker pull --version 1.11.1
# Update and restart
provisioning dns docker update
-```plaintext
-
-### Remove Container
-
-```bash
-# Remove container (with confirmation)
+
+
+# Remove container (with confirmation)
provisioning dns docker remove
# Remove with volumes
@@ -45971,34 +43885,22 @@ provisioning dns docker remove --force
# Check mode
provisioning dns docker remove --check
-```plaintext
-
-### View Configuration
-
-```bash
-# Show docker-compose config
+
+
+# Show docker-compose config
provisioning dns docker config
-```plaintext
-
----
-
-## Integration
-
-### Automatic Server Registration
-
-When dynamic DNS is enabled, servers are automatically registered:
-
-```bash
-# Create server (automatically registers in DNS)
+
+
+
+
+When dynamic DNS is enabled, servers are automatically registered:
+# Create server (automatically registers in DNS)
provisioning server create web-01 --infra myapp
# Server gets DNS record: web-01.provisioning.local -> <server-ip>
-```plaintext
-
-### Manual Registration
-
-```nushell
-use lib_provisioning/coredns/integration.nu *
+
+
+use lib_provisioning/coredns/integration.nu *
# Register server
register-server-in-dns "web-01" "10.0.1.10"
@@ -46012,38 +43914,27 @@ bulk-register-servers [
{hostname: "web-02", ip: "10.0.1.11"}
{hostname: "db-01", ip: "10.0.2.10"}
]
-```plaintext
-
-### Sync Infrastructure with DNS
-
-```bash
-# Sync all servers in infrastructure with DNS
+
+
+# Sync all servers in infrastructure with DNS
provisioning dns sync myapp
# Check mode
provisioning dns sync myapp --check
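Conceptually, sync is a set difference between the servers the infrastructure declares and the records DNS currently holds; `comm` over two sorted lists shows the records a sync would add (hostnames here are illustrative):

```bash
printf '%s\n' web-01 web-02 db-01 | sort > /tmp/desired_hosts   # from infra config
printf '%s\n' web-01 db-01        | sort > /tmp/current_hosts   # from the zone file

# Lines only in desired = records sync would create
comm -23 /tmp/desired_hosts /tmp/current_hosts
# → web-02
```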
-```plaintext
-
-### Service Registration
-
-```nushell
-use lib_provisioning/coredns/integration.nu *
+
+
+use lib_provisioning/coredns/integration.nu *
# Register service
register-service-in-dns "api" "10.0.1.10"
# Unregister service
unregister-service-from-dns "api"
-```plaintext
-
----
-
-## Query DNS
-
-### Using CLI
-
-```bash
-# Query A record
+
+
+
+
+# Query A record
provisioning dns query server-01
# Query specific type
@@ -46054,12 +43945,9 @@ provisioning dns query server-01 --server 8.8.8.8 --port 53
# Query from local CoreDNS
provisioning dns query server-01 --server 127.0.0.1 --port 5353
-```plaintext
-
-### Using dig
-
-```bash
-# Query from local CoreDNS
+
+
+# Query from local CoreDNS
dig @127.0.0.1 -p 5353 server-01.provisioning.local
# Query CNAME
@@ -46067,26 +43955,20 @@ dig @127.0.0.1 -p 5353 web.provisioning.local CNAME
# Query MX
dig @127.0.0.1 -p 5353 example.com MX
-```plaintext
-
----
-
-## Troubleshooting
-
-### CoreDNS Not Starting
-
-**Symptoms:** `dns start` fails or service doesn't respond
-
-**Solutions:**
-
-1. **Check if port is in use:**
-
- ```bash
- lsof -i :5353
- netstat -an | grep 5353
+
+
+
+Symptoms: dns start fails or service doesn’t respond
+Solutions:
+Check if port is in use:
+lsof -i :5353
+netstat -an | grep 5353
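If neither `lsof` nor `netstat` is installed, bash's `/dev/tcp` pseudo-device gives a dependency-free probe: a failed connect means nothing is listening on the port (5353 here, matching the CoreDNS default).

```bash
port=5353
if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
  echo "port $port in use"     # something accepted the connection
else
  echo "port $port free"       # connection refused: safe to start CoreDNS
fi
```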
+
+
+
Validate Corefile:
provisioning dns config validate
@@ -46229,47 +44111,34 @@ add-corefile-plugin \
"~/.provisioning/coredns/Corefile" \
"provisioning.local" \
"cache 30"
-```plaintext
-
-### Backup and Restore
-
-```bash
-# Backup configuration
+
+
+# Backup configuration
tar czf coredns-backup.tar.gz ~/.provisioning/coredns/
# Restore configuration
tar xzf coredns-backup.tar.gz -C ~/
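The one-liner above is safer with a timestamped archive name, so successive backups never overwrite each other. The paths below are a throwaway stand-in for `~/.provisioning/coredns/`.

```bash
stamp=$(date +%Y%m%d-%H%M%S)
mkdir -p /tmp/coredns_demo/zones
echo '@ IN SOA ...' > /tmp/coredns_demo/zones/provisioning.local.zone   # placeholder zone data

tar czf "/tmp/coredns-backup-$stamp.tar.gz" -C /tmp coredns_demo
tar tzf "/tmp/coredns-backup-$stamp.tar.gz" | grep provisioning.local.zone
# → coredns_demo/zones/provisioning.local.zone
```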
-```plaintext
-
-### Zone File Backup
-
-```nushell
-use lib_provisioning/coredns/zones.nu *
+
+
+use lib_provisioning/coredns/zones.nu *
# Backup zone
backup-zone-file "provisioning.local"
# Creates: ~/.provisioning/coredns/zones/provisioning.local.zone.YYYYMMDD-HHMMSS.bak
-```plaintext
-
-### Metrics and Monitoring
-
-CoreDNS exposes Prometheus metrics on port 9153:
-
-```bash
-# View metrics
+
+
+CoreDNS exposes Prometheus metrics on port 9153:
+# View metrics
curl http://localhost:9153/metrics
# Common metrics:
# - coredns_dns_request_duration_seconds
# - coredns_dns_requests_total
# - coredns_dns_responses_total
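The endpoint returns the Prometheus text exposition format, so plain `awk` is enough for ad-hoc checks. The sample below mimics a scrape; labels and values are made up.

```bash
# Stand-in for: curl -s http://localhost:9153/metrics
cat > /tmp/metrics_demo.txt <<'EOF'
coredns_dns_requests_total{server="dns://:5353",zone="provisioning.local."} 42
coredns_dns_responses_total{server="dns://:5353",rcode="NOERROR"} 40
EOF

# Extract the request counter's value (field 2 after the metric{labels} token)
awk '$1 ~ /^coredns_dns_requests_total/ {print $2}' /tmp/metrics_demo.txt
# → 42
```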
-```plaintext
-
-### Multi-Zone Setup
-
-```kcl
-coredns_config: CoreDNSConfig = {
+
+
+coredns_config: CoreDNSConfig = {
local = {
zones = [
"provisioning.local",
@@ -46280,14 +44149,10 @@ coredns_config: CoreDNSConfig = {
]
}
}
-```plaintext
-
-### Split-Horizon DNS
-
-Configure different zones for internal/external:
-
-```kcl
-coredns_config: CoreDNSConfig = {
+
+
+Configure different zones for internal/external:
+coredns_config: CoreDNSConfig = {
local = {
zones = ["internal.local"]
port = 5353
@@ -46297,58 +44162,48 @@ coredns_config: CoreDNSConfig = {
endpoints = ["https://dns.external.com"]
}
}
-```plaintext
-
----
-
-## Configuration Reference
-
-### CoreDNSConfig Fields
-
-| Field | Type | Default | Description |
-|-------|------|---------|-------------|
-| `mode` | `"local" \| "remote" \| "hybrid" \| "disabled"` | `"local"` | Deployment mode |
-| `local` | `LocalCoreDNS?` | - | Local config (required for local mode) |
-| `remote` | `RemoteCoreDNS?` | - | Remote config (required for remote mode) |
-| `dynamic_updates` | `DynamicDNS` | - | Dynamic DNS configuration |
-| `upstream` | `[str]` | `["8.8.8.8", "1.1.1.1"]` | Upstream DNS servers |
-| `default_ttl` | `int` | `300` | Default TTL (seconds) |
-| `enable_logging` | `bool` | `True` | Enable query logging |
-| `enable_metrics` | `bool` | `True` | Enable Prometheus metrics |
-| `metrics_port` | `int` | `9153` | Metrics port |
-
-### LocalCoreDNS Fields
-
-| Field | Type | Default | Description |
-|-------|------|---------|-------------|
-| `enabled` | `bool` | `True` | Enable local CoreDNS |
-| `deployment_type` | `"binary" \| "docker"` | `"binary"` | How to deploy |
-| `binary_path` | `str` | `"~/.provisioning/bin/coredns"` | Path to binary |
-| `config_path` | `str` | `"~/.provisioning/coredns/Corefile"` | Corefile path |
-| `zones_path` | `str` | `"~/.provisioning/coredns/zones"` | Zones directory |
-| `port` | `int` | `5353` | DNS listening port |
-| `auto_start` | `bool` | `True` | Auto-start on boot |
-| `zones` | `[str]` | `["provisioning.local"]` | Managed zones |
-
-### DynamicDNS Fields
-
-| Field | Type | Default | Description |
-|-------|------|---------|-------------|
-| `enabled` | `bool` | `True` | Enable dynamic updates |
-| `api_endpoint` | `str` | `"http://localhost:9090/dns"` | Orchestrator API |
-| `auto_register_servers` | `bool` | `True` | Auto-register on create |
-| `auto_unregister_servers` | `bool` | `True` | Auto-unregister on delete |
-| `ttl` | `int` | `300` | TTL for dynamic records |
-| `update_strategy` | `"immediate" \| "batched" \| "scheduled"` | `"immediate"` | Update strategy |
-
----
-
-## Examples
-
-### Complete Setup Example
-
-```bash
-# 1. Install CoreDNS
+
+
+
+
+| Field | Type | Default | Description |
+|-------|------|---------|-------------|
+| `mode` | `"local" \| "remote" \| "hybrid" \| "disabled"` | `"local"` | Deployment mode |
+| `local` | `LocalCoreDNS?` | - | Local config (required for local mode) |
+| `remote` | `RemoteCoreDNS?` | - | Remote config (required for remote mode) |
+| `dynamic_updates` | `DynamicDNS` | - | Dynamic DNS configuration |
+| `upstream` | `[str]` | `["8.8.8.8", "1.1.1.1"]` | Upstream DNS servers |
+| `default_ttl` | `int` | `300` | Default TTL (seconds) |
+| `enable_logging` | `bool` | `True` | Enable query logging |
+| `enable_metrics` | `bool` | `True` | Enable Prometheus metrics |
+| `metrics_port` | `int` | `9153` | Metrics port |
+
+
+
+| Field | Type | Default | Description |
+|-------|------|---------|-------------|
+| `enabled` | `bool` | `True` | Enable local CoreDNS |
+| `deployment_type` | `"binary" \| "docker"` | `"binary"` | How to deploy |
+| `binary_path` | `str` | `"~/.provisioning/bin/coredns"` | Path to binary |
+| `config_path` | `str` | `"~/.provisioning/coredns/Corefile"` | Corefile path |
+| `zones_path` | `str` | `"~/.provisioning/coredns/zones"` | Zones directory |
+| `port` | `int` | `5353` | DNS listening port |
+| `auto_start` | `bool` | `True` | Auto-start on boot |
+| `zones` | `[str]` | `["provisioning.local"]` | Managed zones |
+
+
+
+| Field | Type | Default | Description |
+|-------|------|---------|-------------|
+| `enabled` | `bool` | `True` | Enable dynamic updates |
+| `api_endpoint` | `str` | `"http://localhost:9090/dns"` | Orchestrator API |
+| `auto_register_servers` | `bool` | `True` | Auto-register on create |
+| `auto_unregister_servers` | `bool` | `True` | Auto-unregister on delete |
+| `ttl` | `int` | `300` | TTL for dynamic records |
+| `update_strategy` | `"immediate" \| "batched" \| "scheduled"` | `"immediate"` | Update strategy |
+
+
+
+
+
+# 1. Install CoreDNS
provisioning dns install
# 2. Generate configuration
@@ -46371,12 +44226,9 @@ provisioning dns query web-01 --server 127.0.0.1 --port 5353
# 7. Check status
provisioning dns status
provisioning dns health
-```plaintext
-
-### Docker Deployment Example
-
-```bash
-# 1. Start CoreDNS in Docker
+
+
+# 1. Start CoreDNS in Docker
provisioning dns docker start
# 2. Check status
@@ -46393,53 +44245,40 @@ dig @127.0.0.1 -p 5353 server-01.provisioning.local
# 6. Stop
provisioning dns docker stop
-```plaintext
-
----
-
-## Best Practices
-
-1. **Use TTL wisely** - Lower TTL (300s) for frequently changing records, higher (3600s) for stable
-2. **Enable logging** - Essential for troubleshooting
-3. **Regular backups** - Backup zone files before major changes
-4. **Validate before reload** - Always run `dns config validate` before reloading
-5. **Monitor metrics** - Track DNS query rates and error rates
-6. **Use comments** - Add comments to records for documentation
-7. **Separate zones** - Use different zones for different environments (dev, staging, prod)
-
----
-
-## See Also
-
-- [Architecture Documentation](../architecture/coredns-architecture.md)
-- [API Reference](../api/dns-api.md)
-- [Orchestrator Integration](../integration/orchestrator-dns.md)
-- KCL Schema Reference
-
----
-
-## Quick Reference
-
-**Quick command reference for CoreDNS DNS management**
-
----
-
-### Installation
-
-```bash
-# Install CoreDNS binary
+
+
+
+
+Use TTL wisely - Lower TTL (300s) for frequently changing records, higher (3600s) for stable ones
+Enable logging - Essential for troubleshooting
+Regular backups - Backup zone files before major changes
+Validate before reload - Always run dns config validate before reloading
+Monitor metrics - Track DNS query rates and error rates
+Use comments - Add comments to records for documentation
+Separate zones - Use different zones for different environments (dev, staging, prod)
+
+
+
+
+
+
+Quick command reference for CoreDNS DNS management
+
+
+# Install CoreDNS binary
provisioning dns install
# Install specific version
provisioning dns install 1.11.1
-```plaintext
-
----
-
-### Service Management
-
-```bash
-# Status
+
+
+
+# Status
provisioning dns status
# Start
@@ -46461,14 +44300,10 @@ provisioning dns logs --lines 100
# Health
provisioning dns health
-```plaintext
-
----
-
-### Zone Management
-
-```bash
-# List zones
+
+
+
+# List zones
provisioning dns zone list
# Create zone
@@ -46481,14 +44316,10 @@ provisioning dns zone show provisioning.local --format json
# Delete zone
provisioning dns zone delete myapp.local
provisioning dns zone delete myapp.local --force
-```plaintext
-
----
-
-### Record Management
-
-```bash
-# Add A record
+
+
+
+# Add A record
provisioning dns record add server-01 A 10.0.1.10
# Add with custom TTL
@@ -46520,14 +44351,10 @@ provisioning dns record update server-01 A 10.0.1.20
provisioning dns record list
provisioning dns record list --zone myapp.local
provisioning dns record list --format json
-```plaintext
-
----
-
-### DNS Queries
-
-```bash
-# Query A record
+
+
+
+# Query A record
provisioning dns query server-01
# Query CNAME
@@ -46539,14 +44366,10 @@ provisioning dns query server-01 --server 127.0.0.1 --port 5353
# Using dig
dig @127.0.0.1 -p 5353 server-01.provisioning.local
dig @127.0.0.1 -p 5353 provisioning.local SOA
-```plaintext
-
----
-
-### Configuration
-
-```bash
-# Show configuration
+
+
+
+# Show configuration
provisioning dns config show
# Validate configuration
@@ -46554,14 +44377,10 @@ provisioning dns config validate
# Generate Corefile
provisioning dns config generate
-```plaintext
-
----
-
-### Docker Deployment
-
-```bash
-# Start Docker container
+
+
+
+# Start Docker container
provisioning dns docker start
# Status
@@ -46594,16 +44413,11 @@ provisioning dns docker update
# Show config
provisioning dns docker config
-```plaintext
-
----
-
-### Common Workflows
-
-#### Initial Setup
-
-```bash
-# 1. Install
+
+
+
+
+# 1. Install
provisioning dns install
# 2. Start
@@ -46612,22 +44426,16 @@ provisioning dns start
# 3. Verify
provisioning dns status
provisioning dns health
-```plaintext
-
-#### Add Server
-
-```bash
-# Add DNS record for new server
+
+
+# Add DNS record for new server
provisioning dns record add web-01 A 10.0.1.10
# Verify
provisioning dns query web-01
-```plaintext
-
-#### Create Custom Zone
-
-```bash
-# 1. Create zone
+
+
+# 1. Create zone
provisioning dns zone create myapp.local
# 2. Add records
@@ -46639,12 +44447,9 @@ provisioning dns record list --zone myapp.local
# 4. Query
dig @127.0.0.1 -p 5353 web-01.myapp.local
-```plaintext
-
-#### Docker Setup
-
-```bash
-# 1. Start container
+
+
+# 1. Start container
provisioning dns docker start
# 2. Check status
@@ -46655,14 +44460,10 @@ provisioning dns record add server-01 A 10.0.1.10
# 4. Query
dig @127.0.0.1 -p 5353 server-01.provisioning.local
-```plaintext
-
----
-
-### Troubleshooting
-
-```bash
-# Check if CoreDNS is running
+
+
+
+# Check if CoreDNS is running
provisioning dns status
ps aux | grep coredns
@@ -46687,14 +44488,10 @@ provisioning dns restart
provisioning dns docker logs
provisioning dns docker health
docker ps -a | grep coredns
-```plaintext
-
----
-
-### File Locations
-
-```bash
-# Binary
+
+
+
+# Binary
~/.provisioning/bin/coredns
# Corefile
@@ -46711,14 +44508,10 @@ docker ps -a | grep coredns
# Docker compose
provisioning/config/coredns/docker-compose.yml
-```plaintext
-
----
-
-### Configuration Example
-
-```kcl
-import provisioning.coredns as dns
+
+
+
+import provisioning.coredns as dns
coredns_config: dns.CoreDNSConfig = {
mode = "local"
@@ -46734,45 +44527,35 @@ coredns_config: dns.CoreDNSConfig = {
}
upstream = ["8.8.8.8", "1.1.1.1"]
}
-```plaintext
-
----
-
-### Environment Variables
-
-```bash
-# None required - configuration via KCL
-```plaintext
-
----
-
-### Default Values
-
-| Setting | Default |
-|---------|---------|
-| Port | 5353 |
-| Zones | ["provisioning.local"] |
-| Upstream | ["8.8.8.8", "1.1.1.1"] |
-| TTL | 300 |
-| Deployment | binary |
-| Auto-start | true |
-| Logging | enabled |
-| Metrics | enabled |
-| Metrics Port | 9153 |
-
----
-
-## See Also
-
-- [Complete Guide](COREDNS_GUIDE.md) - Full documentation
-- Implementation Summary - Technical details
-- KCL Schema - Configuration schema
-
----
-
-**Last Updated**: 2025-10-06
-**Version**: 1.0.0
+
+
+# None required - configuration via KCL
+
+
+
+Setting Default
+Port 5353
+Zones ["provisioning.local"]
+Upstream ["8.8.8.8", "1.1.1.1"]
+TTL 300
+Deployment binary
+Auto-start true
+Logging enabled
+Metrics enabled
+Metrics Port 9153
+
+
+
+
+
+Complete Guide - Full documentation
+Implementation Summary - Technical details
+KCL Schema - Configuration schema
+
+
+Last Updated : 2025-10-06
+Version : 1.0.0
@@ -46782,7 +44565,7 @@ coredns_config: dns.CoreDNSConfig = {
Last Verified : 2025-12-09
The Provisioning Setup System is production-ready for enterprise deployment. All components have been tested, validated, and verified to meet production standards.
-
+
✅ Code Quality : 100% Nushell 0.109 compliant
✅ Test Coverage : 33/33 tests passing (100% pass rate)
@@ -47071,9 +44854,9 @@ provisioning setup validate provider upcloud
Operation Expected Time Maximum Time
Setup system 2-5 seconds 10 seconds
Health check < 3 seconds 5 seconds
-Configuration validation < 500ms 1 second
+Configuration validation < 500 ms 1 second
Server creation < 30 seconds 60 seconds
-Workspace switch < 100ms 500ms
+Workspace switch < 100 ms 500 ms
@@ -47100,7 +44883,7 @@ provisioning setup validate provider upcloud
Architecture changes
-
+
If issues occur post-deployment:
# 1. Take backup of current configuration
provisioning setup backup --path rollback-$(date +%Y%m%d-%H%M%S).tar.gz
@@ -47118,7 +44901,7 @@ provisioning setup validate --verbose
nu scripts/health-check.nu
-
+
System is production-ready when:
✅ All tests passing
@@ -47153,9 +44936,9 @@ nu scripts/health-check.nu
Training Duration : 45-60 minutes
Certification : Required annually
-
-Break-glass is an emergency access procedure that allows authorized personnel to bypass normal security controls during critical incidents (e.g., production outages, security breaches, data loss).
-
+
+Break-glass is an emergency access procedure that allows authorized personnel to bypass normal security controls during critical incidents (for example, production outages, security breaches, data loss).
+
Last Resort Only : Use only when normal access is insufficient
Multi-Party Approval : Requires 2+ approvers from different teams
@@ -47268,12 +45051,9 @@ Incident properly documented
│ - Sends notifications to approver pool │
│ - Starts approval timeout (1 hour) │
└─────────────────────────────────────────────────────────┘
-```plaintext
-
-### Phase 2: Approval (10-15 minutes)
-
-```plaintext
-┌─────────────────────────────────────────────────────────┐
+
+
+┌─────────────────────────────────────────────────────────┐
│ 3. First approver reviews request │
│ - Verifies emergency is real │
│ - Checks requester's justification │
@@ -47293,12 +45073,9 @@ Incident properly documented
│ - ✓ Within approval window │
│ - Status → APPROVED │
└─────────────────────────────────────────────────────────┘
-```plaintext
-
-### Phase 3: Activation (1-2 minutes)
-
-```plaintext
-┌─────────────────────────────────────────────────────────┐
+
+
+┌─────────────────────────────────────────────────────────┐
│ 6. Requester activates approved session │
│ - Receives emergency JWT token │
│ - Token valid for 2 hours (or requested duration) │
@@ -47310,12 +45087,9 @@ Incident properly documented
│ - Real-time alert: "Break-glass activated" │
│ - Monitoring dashboard shows active session │
└─────────────────────────────────────────────────────────┘
-```plaintext
-
-### Phase 4: Usage (Variable)
-
-```plaintext
-┌─────────────────────────────────────────────────────────┐
+
+
+┌─────────────────────────────────────────────────────────┐
│ 8. Requester performs emergency actions │
│ - Uses emergency token for access │
│ - Every action audited │
@@ -47328,12 +45102,9 @@ Incident properly documented
│ - Enforces inactivity timeout (30 min) │
│ - Alerts on unusual patterns │
└─────────────────────────────────────────────────────────┘
-```plaintext
-
-### Phase 5: Revocation (Immediate)
-
-```plaintext
-┌─────────────────────────────────────────────────────────┐
+
+
+┌─────────────────────────────────────────────────────────┐
│ 10. Session ends (one of): │
│ - Manual revocation by requester │
│ - Expiration (max 4 hours) │
@@ -47347,18 +45118,12 @@ Incident properly documented
│ - Incident report generated │
│ - Post-incident review scheduled │
└─────────────────────────────────────────────────────────┘
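The multi-party rule behind phases 2-3 reduces to a small check: count distinct teams among the recorded approvals and compare against the threshold. Names and team labels below are invented for illustration.

```bash
required=2
approvals="alice:security
bob:platform"

# Approvers must come from different teams, so count unique teams
teams=$(printf '%s\n' "$approvals" | cut -d: -f2 | sort -u | wc -l | tr -d ' ')
if [ "$teams" -ge "$required" ]; then echo "APPROVED"; else echo "PENDING"; fi
# → APPROVED
```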
-```plaintext
-
----
-
-## Using the System
-
-### CLI Commands
-
-#### 1. Request Emergency Access
-
-```bash
-provisioning break-glass request \
+
+
+
+
+
+provisioning break-glass request \
"Production database cluster unresponsive" \
--justification "Need direct SSH access to diagnose PostgreSQL failure. All monitoring shows cluster down. Application completely offline affecting 10,000+ users." \
--resources '["database/*", "server/db-*"]' \
@@ -47374,12 +45139,9 @@ provisioning break-glass request \
# Notifications sent to:
# - security-team@example.com
# - platform-admin@example.com
-```plaintext
-
-#### 2. Approve Request (Approver)
-
-```bash
-# First approver (Security team)
+
+
+# First approver (Security team)
provisioning break-glass approve BG-20251008-001 \
--reason "Emergency verified via incident INC-2025-234. Database cluster confirmed down, affecting production."
@@ -47388,10 +45150,8 @@ provisioning break-glass approve BG-20251008-001 \
# Approver: alice@example.com (Security Team)
# Approvals: 1/2
# Status: Pending (need 1 more approval)
-```plaintext
-
-```bash
-# Second approver (Platform team)
+
+# Second approver (Platform team)
provisioning break-glass approve BG-20251008-001 \
--reason "Confirmed with monitoring. PostgreSQL master node unreachable. Emergency access justified."
@@ -47402,12 +45162,9 @@ provisioning break-glass approve BG-20251008-001 \
# Status: APPROVED
#
# Requester can now activate session
-```plaintext
-
-#### 3. Activate Session
-
-```bash
-provisioning break-glass activate BG-20251008-001
+
+
+provisioning break-glass activate BG-20251008-001
# Output:
# ✓ Emergency session activated
@@ -47424,12 +45181,9 @@ provisioning break-glass activate BG-20251008-001
#
# Export token:
export EMERGENCY_TOKEN="eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9..."
-```plaintext
-
-#### 4. Use Emergency Access
-
-```bash
-# SSH to database server
+
+
+# SSH to database server
provisioning ssh connect db-master-01 \
--token $EMERGENCY_TOKEN
@@ -47439,12 +45193,9 @@ sudo tail -f /var/log/postgresql/postgresql.log
# Diagnose issue...
# Fix issue...
-```plaintext
-
-#### 5. Revoke Session
-
-```bash
-# When done, immediately revoke
+
+
+# When done, immediately revoke
provisioning break-glass revoke BGS-20251008-001 \
--reason "Database cluster restored. PostgreSQL master node restarted successfully. All services online."
@@ -47455,187 +45206,183 @@ provisioning break-glass revoke BGS-20251008-001 \
# Audit log: /var/log/provisioning/break-glass/BGS-20251008-001.json
#
# Post-incident review scheduled: 2025-10-09 10:00am
-```plaintext
-
-### Web UI (Control Center)
-
-#### Request Flow
-
-1. **Navigate**: Control Center → Security → Break-Glass
-2. **Click**: "Request Emergency Access"
-3. **Fill Form**:
- - Reason: "Production database cluster down"
- - Justification: (detailed description)
- - Duration: 2 hours
- - Resources: Select from dropdown or wildcard
-4. **Submit**: Request sent to approvers
-
-#### Approver Flow
-
-1. **Receive**: Email/Slack notification
-2. **Navigate**: Control Center → Break-Glass → Pending Requests
-3. **Review**: Request details, reason, justification
-4. **Decision**: Approve or Deny
-5. **Reason**: Provide approval/denial reason
-
-#### Monitor Active Sessions
-
-1. **Navigate**: Control Center → Security → Break-Glass → Active Sessions
-2. **View**: Real-time dashboard of active sessions
- - Who, What, When, How long
- - Actions performed (live)
- - Inactivity timer
-3. **Revoke**: Emergency revoke button (if needed)
-
----
-
-## Examples
-
-### Example 1: Production Database Outage
-
-**Scenario**: PostgreSQL cluster unresponsive, affecting all users
-
-**Request**:
-
-```bash
-provisioning break-glass request \
+
+
+
+
+Navigate : Control Center → Security → Break-Glass
+Click : “Request Emergency Access”
+Fill Form :
+
+Reason: “Production database cluster down”
+Justification: (detailed description)
+Duration: 2 hours
+Resources: Select from dropdown or wildcard
+
+
+Submit : Request sent to approvers
+
+
+
+Receive : Email/Slack notification
+Navigate : Control Center → Break-Glass → Pending Requests
+Review : Request details, reason, justification
+Decision : Approve or Deny
+Reason : Provide approval/denial reason
+
+
+
+Navigate : Control Center → Security → Break-Glass → Active Sessions
+View : Real-time dashboard of active sessions
+
+Who, What, When, How long
+Actions performed (live)
+Inactivity timer
+
+
+Revoke : Emergency revoke button (if needed)
+
+
+
+
+Scenario : PostgreSQL cluster unresponsive, affecting all users
+Request :
+provisioning break-glass request \
"Production PostgreSQL cluster completely unresponsive" \
--justification "Database cluster (3 nodes) not responding. All application services offline. 10,000+ users affected. Need direct SSH to diagnose and restore. Monitoring shows all nodes down. Last known state: replication failure during routine backup." \
--resources '["database/*", "server/db-prod-*"]' \
--duration 2hr
-```plaintext
-
-**Approval 1** (Security):
-> "Verified incident INC-2025-234. Database monitoring confirms cluster down. Application completely offline. Emergency justified."
-
-**Approval 2** (Platform):
-> "Confirmed. PostgreSQL master and replicas unreachable. On-call SRE needs immediate access. Approved."
-
-**Actions Taken**:
-
-1. SSH to db-prod-01, db-prod-02, db-prod-03
-2. Check PostgreSQL status: `systemctl status postgresql`
-3. Review logs: `/var/log/postgresql/`
-4. Diagnose: Disk full on master node
-5. Fix: Clear old WAL files, restart PostgreSQL
-6. Verify: Cluster restored, replication working
-7. Revoke access
-
-**Outcome**: Cluster restored in 47 minutes. Root cause: Backup retention not working.
-
----
-
-### Example 2: Security Incident
-
-**Scenario**: Suspicious activity detected, need immediate containment
-
-**Request**:
-
-```bash
-provisioning break-glass request \
+
+Approval 1 (Security):
+
+“Verified incident INC-2025-234. Database monitoring confirms cluster down. Application completely offline. Emergency justified.”
+
+Approval 2 (Platform):
+
+“Confirmed. PostgreSQL master and replicas unreachable. On-call SRE needs immediate access. Approved.”
+
+Actions Taken :
+
+SSH to db-prod-01, db-prod-02, db-prod-03
+Check PostgreSQL status: systemctl status postgresql
+Review logs: /var/log/postgresql/
+Diagnose: Disk full on master node
+Fix: Clear old WAL files, restart PostgreSQL
+Verify: Cluster restored, replication working
+Revoke access
+
+Outcome : Cluster restored in 47 minutes. Root cause: Backup retention not working.
+
+
+Scenario : Suspicious activity detected, need immediate containment
+Request :
+provisioning break-glass request \
"Active security breach detected - need immediate containment" \
--justification "IDS alerts show unauthorized access from IP 203.0.113.42 to production API servers. Multiple failed sudo attempts. Need to isolate affected servers and investigate. Potential data exfiltration in progress." \
--resources '["server/api-prod-*", "firewall/*", "network/*"]' \
--duration 4hr
-```plaintext
-
-**Approval 1** (Security):
-> "Security incident SI-2025-089 confirmed. IDS shows sustained attack from external IP. Immediate containment required. Approved."
-
-**Approval 2** (Engineering Director):
-> "Concur with security assessment. Production impact acceptable vs risk of data breach. Approved."
-
-**Actions Taken**:
-
-1. Firewall block on 203.0.113.42
-2. Isolate affected API servers
-3. Snapshot servers for forensics
-4. Review access logs
-5. Identify compromised service account
-6. Rotate credentials
-7. Restore from clean backup
-8. Re-enable servers with patched vulnerability
-
-**Outcome**: Breach contained in 3h 15min. No data loss. Vulnerability patched across fleet.
-
----
-
-### Example 3: Accidental Data Deletion
-
-**Scenario**: Critical production data accidentally deleted
-
-**Request**:
-
-```bash
-provisioning break-glass request \
+
+Approval 1 (Security):
+
+“Security incident SI-2025-089 confirmed. IDS shows sustained attack from external IP. Immediate containment required. Approved.”
+
+Approval 2 (Engineering Director):
+
+“Concur with security assessment. Production impact acceptable vs risk of data breach. Approved.”
+
+Actions Taken :
+
+Firewall block on 203.0.113.42
+Isolate affected API servers
+Snapshot servers for forensics
+Review access logs
+Identify compromised service account
+Rotate credentials
+Restore from clean backup
+Re-enable servers with patched vulnerability
+
+Outcome : Breach contained in 3h 15 min. No data loss. Vulnerability patched across fleet.
+
+
+Scenario : Critical production data accidentally deleted
+Request :
+provisioning break-glass request \
"Critical customer data accidentally deleted from production" \
--justification "Database migration script ran against production instead of staging. Deleted 50,000+ customer records. Need immediate restore from backup before data loss is noticed. Normal restore process requires change approval (4-6 hours). Data loss window critical." \
--resources '["database/customers", "backup/*"]' \
--duration 3hr
-```plaintext
-
-**Approval 1** (Platform):
-> "Verified data deletion in production database. 50,284 records deleted at 10:42am. Backup available from 10:00am (42 minutes ago). Time-critical restore needed. Approved."
-
-**Approval 2** (Security):
-> "Risk assessment: Restore from trusted backup less risky than data loss. Emergency justified. Ensure post-incident review of deployment process. Approved."
-
-**Actions Taken**:
-
-1. Stop application writes to affected tables
-2. Identify latest good backup (10:00am)
-3. Restore deleted records from backup
-4. Verify data integrity
-5. Compare record counts
-6. Re-enable application writes
-7. Notify affected users (if any noticed)
-
-**Outcome**: Data restored in 1h 38min. Only 42 minutes of data lost (from backup to deletion). Zero customer impact.
-
----
-
-## Auditing & Compliance
-
-### What is Logged
-
-Every break-glass session logs:
-
-1. **Request Details**:
- - Requester identity
- - Reason and justification
- - Requested resources
- - Requested duration
- - Timestamp
-
-2. **Approval Process**:
- - Each approver identity
- - Approval/denial reason
- - Approval timestamp
- - Team affiliation
-
-3. **Session Activity**:
- - Activation timestamp
- - Every action performed
- - Resources accessed
- - Commands executed
- - Inactivity periods
-
-4. **Revocation**:
- - Revocation reason
- - Who revoked (system or manual)
- - Total duration
- - Final status
-
-### Retention
-
-- **Break-glass logs**: 7 years (immutable)
-- **Cannot be deleted**: Only anonymized for GDPR
-- **Exported to SIEM**: Real-time
-
-### Compliance Reports
-
-```bash
-# Generate break-glass usage report
+
+Approval 1 (Platform):
+
+“Verified data deletion in production database. 50,284 records deleted at 10:42am. Backup available from 10:00am (42 minutes ago). Time-critical restore needed. Approved.”
+
+Approval 2 (Security):
+
+“Risk assessment: Restore from trusted backup less risky than data loss. Emergency justified. Ensure post-incident review of deployment process. Approved.”
+
+Actions Taken :
+
+Stop application writes to affected tables
+Identify latest good backup (10:00am)
+Restore deleted records from backup
+Verify data integrity
+Compare record counts
+Re-enable application writes
+Notify affected users (if any noticed)
+
+Outcome : Data restored in 1h 38 min. Only 42 minutes of data lost (from backup to deletion). Zero customer impact.
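The 42-minute data-loss window above is just the gap between the last good backup (10:00am) and the deletion (10:42am). A quick same-day minute calculation confirms it:

```shell
# Worked check of the data-loss window: backup at 10:00, deletion at 10:42.
# Pure-shell minute arithmetic for two same-day "HH:MM" times.
minutes_between() {
  local h1=${1%:*} m1=${1#*:} h2=${2%:*} m2=${2#*:}
  # 10# forces base-10 so "08"/"09" are not parsed as octal
  echo $(( (10#$h2 * 60 + 10#$m2) - (10#$h1 * 60 + 10#$m1) ))
}
```

`minutes_between 10:00 10:42` yields 42, matching the outcome reported above.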
+
+
+
+Every break-glass session logs:
+
+
+Request Details :
+
+Requester identity
+Reason and justification
+Requested resources
+Requested duration
+Timestamp
+
+
+
+Approval Process :
+
+Each approver identity
+Approval/denial reason
+Approval timestamp
+Team affiliation
+
+
+
+Session Activity :
+
+Activation timestamp
+Every action performed
+Resources accessed
+Commands executed
+Inactivity periods
+
+
+
+Revocation :
+
+Revocation reason
+Who revoked (system or manual)
+Total duration
+Final status
+
+
+
+
+
+Break-glass logs : 7 years (immutable)
+Cannot be deleted : Only anonymized for GDPR
+Exported to SIEM : Real-time
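The 7-year retention rule above reduces to a simple year comparison, sketched here with illustrative names (GDPR requests anonymize fields in place; the record itself is never deleted early):

```shell
# Sketch of the 7-year immutable retention rule for break-glass logs.
retention_status() {
  # $1 = year the log was written, $2 = current year
  if [ $(( $2 - $1 )) -lt 7 ]; then echo "retained (immutable)"
  else echo "eligible for purge"
  fi
}
```

A log written in 2025 stays immutable through 2031 and first becomes purgeable in 2032.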
+
+
+# Generate break-glass usage report
provisioning break-glass audit \
--from "2025-01-01" \
--to "2025-12-31" \
@@ -47649,45 +45396,45 @@ provisioning break-glass audit \
# - Approval times
# - Incidents resolved
# - Misuse incidents (if any)
-```plaintext
-
----
-
-## Post-Incident Review
-
-### Within 24 Hours
-
-**Required attendees**:
-
-- Requester
-- Approvers
-- Security team
-- Incident commander
-
-**Agenda**:
-
-1. **Timeline Review**: What happened, when
-2. **Actions Taken**: What was done with emergency access
-3. **Outcome**: Was issue resolved? Any side effects?
-4. **Process**: Did break-glass work as intended?
-5. **Lessons Learned**: What can be improved?
-
-### Review Checklist
-
-- [ ] Was break-glass appropriate for this incident?
-- [ ] Were approvals granted timely?
-- [ ] Was access used only for stated purpose?
-- [ ] Were any security policies violated?
-- [ ] Could incident be prevented in future?
-- [ ] Do we need policy updates?
-- [ ] Do we need system changes?
-
-### Output
-
-**Incident Report**:
-
-```markdown
-# Break-Glass Incident Report: BG-20251008-001
+
+
+
+
+Required attendees :
+
+Requester
+Approvers
+Security team
+Incident commander
+
+Agenda :
+
+Timeline Review : What happened, when
+Actions Taken : What was done with emergency access
+Outcome : Was issue resolved? Any side effects?
+Process : Did break-glass work as intended?
+Lessons Learned : What can be improved?
+
+
+
+
+Incident Report :
+# Break-Glass Incident Report: BG-20251008-001
**Incident**: Production database cluster outage
**Duration**: 47 minutes
@@ -47722,132 +45469,108 @@ Backup retention job failed silently for 2 weeks, causing WAL files to accumulat
- ✓ Timely approvals
- ✓ No policy violations
- ✓ Access revoked promptly
-```plaintext
-
----
-
-## FAQ
-
-### Q: How quickly can break-glass be activated?
-
-**A**: Typically 15-20 minutes:
-
-- 5 min: Request submission
-- 10 min: Approvals (2 people)
-- 2 min: Activation
-
-In extreme emergencies, approvers can be on standby.
-
-### Q: Can I use break-glass for scheduled maintenance?
-
-**A**: No. Break-glass is for emergencies only. Schedule maintenance through normal change process.
-
-### Q: What if I can't get 2 approvers?
-
-**A**: System requires 2 approvers from different teams. If unavailable:
-
-1. Escalate to on-call manager
-2. Contact security team directly
-3. Use emergency contact list
-
-### Q: Can approvers be from the same team?
-
-**A**: No. System enforces team diversity to prevent collusion.
-
-### Q: What if security team revokes my session?
-
-**A**: Security team can revoke for:
-
-- Suspicious activity
-- Policy violation
-- Incident resolved
-- Misuse detected
-
-You'll receive immediate notification. Contact security team for details.
-
-### Q: Can I extend an active session?
-
-**A**: No. Maximum duration is 4 hours. If you need more time, submit a new request with updated justification.
-
-### Q: What happens if I forget to revoke?
-
-**A**: Session auto-revokes after:
-
-- Maximum duration (4 hours), OR
-- Inactivity timeout (30 minutes)
-
-Always manually revoke when done.
-
-### Q: Is break-glass monitored?
-
-**A**: Yes. Security team monitors in real-time:
-
-- Session activation alerts
-- Action logging
-- Suspicious activity detection
-- Compliance verification
-
-### Q: Can I practice break-glass?
-
-**A**: Yes, in **development environment only**:
-
-```bash
-PROVISIONING_ENV=dev provisioning break-glass request "Test emergency access procedure"
-```plaintext
-
-Never practice in staging or production.
-
----
-
-## Emergency Contacts
-
-### During Incident
-
-| Role | Contact | Response Time |
-|------|---------|---------------|
-| **Security On-Call** | +1-555-SECURITY | 5 minutes |
-| **Platform On-Call** | +1-555-PLATFORM | 5 minutes |
-| **Engineering Director** | +1-555-ENG-DIR | 15 minutes |
-
-### Escalation Path
-
-1. **L1**: On-call SRE
-2. **L2**: Platform team lead
-3. **L3**: Engineering manager
-4. **L4**: Director of Engineering
-5. **L5**: CTO
-
-### Communication Channels
-
-- **Incident Slack**: `#incidents`
-- **Security Slack**: `#security-alerts`
-- **Email**: `security-team@example.com`
-- **PagerDuty**: Break-glass policy
-
----
-
-## Training Certification
-
-**I certify that I have**:
-
-- [ ] Read and understood this training guide
-- [ ] Understand when to use (and not use) break-glass
-- [ ] Know the approval workflow
-- [ ] Can use the CLI commands
-- [ ] Understand auditing and compliance requirements
-- [ ] Will follow post-incident review process
-
-**Signature**: _________________________
-**Date**: _________________________
-**Next Training Due**: _________________________ (1 year)
-
----
-
-**Version**: 1.0.0
-**Maintained By**: Security Team
-**Last Updated**: 2025-10-08
-**Next Review**: 2026-10-08
+
+
+
+A : Typically 15-20 minutes:
+
+5 min: Request submission
+10 min: Approvals (2 people)
+2 min: Activation
+
+In extreme emergencies, approvers can be on standby.
+
+A : No. Break-glass is for emergencies only. Schedule maintenance through normal change process.
+
+A : System requires 2 approvers from different teams. If unavailable:
+
+Escalate to on-call manager
+Contact security team directly
+Use emergency contact list
+
+
+A : No. System enforces team diversity to prevent collusion.
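The quorum rule described above (two approvals, from different teams) can be sketched as a small check. Function and argument names are illustrative; the real check runs inside the orchestrator.

```shell
# Sketch of the approval quorum: 2 approvers required, and they must come
# from different teams (team diversity prevents collusion).
quorum_met() {
  # $1/$2 = team of first/second approver, "-" if that approval is missing
  if [ "$1" = "-" ] || [ "$2" = "-" ]; then echo "pending"
  elif [ "$1" = "$2" ]; then echo "rejected: same team"
  else echo "approved"
  fi
}
```

So `quorum_met security platform` approves, while two approvers from `security` are rejected even though the count is met.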
+
+A : Security team can revoke for:
+
+Suspicious activity
+Policy violation
+Incident resolved
+Misuse detected
+
+You’ll receive immediate notification. Contact security team for details.
+
+A : No. Maximum duration is 4 hours. If you need more time, submit a new request with updated justification.
+
+A : Session auto-revokes after:
+
+Maximum duration (4 hours), OR
+Inactivity timeout (30 minutes)
+
+Always manually revoke when done.
+
+A : Yes. Security team monitors in real-time:
+
+Session activation alerts
+Action logging
+Suspicious activity detection
+Compliance verification
+
+
+A : Yes, in development environment only :
+PROVISIONING_ENV=dev provisioning break-glass request "Test emergency access procedure"
+
+Never practice in staging or production.
+
+
+
+Role Contact Response Time
+Security On-Call +1-555-SECURITY 5 minutes
+Platform On-Call +1-555-PLATFORM 5 minutes
+Engineering Director +1-555-ENG-DIR 15 minutes
+
+
+
+
+L1 : On-call SRE
+L2 : Platform team lead
+L3 : Engineering manager
+L4 : Director of Engineering
+L5 : CTO
+
+
+
+Incident Slack : #incidents
+Security Slack : #security-alerts
+Email : security-team@example.com
+PagerDuty : Break-glass policy
+
+
+
+I certify that I have :
+
+Signature : _________________________
+Date : _________________________
+Next Training Due : _________________________ (1 year)
+
+Version : 1.0.0
+Maintained By : Security Team
+Last Updated : 2025-10-08
+Next Review : 2026-10-08
Version : 1.0.0
Date : 2025-10-08
@@ -47870,7 +45593,7 @@ Never practice in staging or production.
Cedar policies control who can do what in the Provisioning platform. This guide helps you create, test, and deploy production-ready Cedar policies that balance security with operational efficiency.
-
+
Fine-grained : Control access at resource + action level
Context-aware : Decisions based on MFA, IP, time, approvals
@@ -47888,48 +45611,37 @@ Never practice in staging or production.
) when {
condition # Context (MFA, IP, time)
};
-```plaintext
-
-### Entities
-
-| Type | Examples | Description |
-|------|----------|-------------|
-| **User** | `User::"alice"` | Individual users |
-| **Team** | `Team::"platform-admin"` | User groups |
-| **Role** | `Role::"Admin"` | Permission levels |
-| **Resource** | `Server::"web-01"` | Infrastructure resources |
-| **Environment** | `Environment::"production"` | Deployment targets |
-
-### Actions
-
-| Category | Actions |
-|----------|---------|
-| **Read** | `read`, `list` |
-| **Write** | `create`, `update`, `delete` |
-| **Deploy** | `deploy`, `rollback` |
-| **Admin** | `ssh`, `execute`, `admin` |
-
----
-
-## Production Policy Strategy
-
-### Security Levels
-
-#### Level 1: Development (Permissive)
-
-```cedar
-// Developers have full access to dev environment
+
+
+Type Examples Description
+User User::"alice"Individual users
+Team Team::"platform-admin"User groups
+Role Role::"Admin"Permission levels
+Resource Server::"web-01"Infrastructure resources
+Environment Environment::"production"Deployment targets
+
+
+
+Category Actions
+Read read, list
+Write create, update, delete
+Deploy deploy, rollback
+Admin ssh, execute, admin
+
+
+
+
+
+
+// Developers have full access to dev environment
permit (
principal in Team::"developers",
action,
resource in Environment::"development"
);
-```plaintext
-
-#### Level 2: Staging (MFA Required)
-
-```cedar
-// All operations require MFA
+
+
+// All operations require MFA
permit (
principal in Team::"developers",
action,
@@ -47937,12 +45649,9 @@ permit (
) when {
context.mfa_verified == true
};
-```plaintext
-
-#### Level 3: Production (MFA + Approval)
-
-```cedar
-// Deployments require MFA + approval
+
+
+// Deployments require MFA + approval
permit (
principal in Team::"platform-admin",
action in [Action::"deploy", Action::"delete"],
@@ -47952,12 +45661,9 @@ permit (
context has approval_id &&
context.approval_id.startsWith("APPROVAL-")
};
-```plaintext
-
-#### Level 4: Critical (Break-Glass Only)
-
-```cedar
-// Only emergency access
+
+
+// Only emergency access
permit (
principal,
action,
@@ -47966,16 +45672,11 @@ permit (
context.emergency_access == true &&
context.session_approved == true
};
-```plaintext
-
----
-
-## Policy Templates
-
-### 1. Role-Based Access Control (RBAC)
-
-```cedar
-// Admin: Full access
+
+
+
+
+// Admin: Full access
permit (
principal in Role::"Admin",
action,
@@ -48012,12 +45713,9 @@ permit (
action in [Action::"read", Action::"list"],
resource is AuditLog
);
-```plaintext
-
-### 2. Team-Based Policies
-
-```cedar
-// Platform team: Infrastructure management
+
+
+// Platform team: Infrastructure management
permit (
principal in Team::"platform",
action in [
@@ -48045,12 +45743,9 @@ permit (
context.mfa_verified == true &&
context.has_approval == true
};
-```plaintext
-
-### 3. Time-Based Restrictions
-
-```cedar
-// Deployments only during business hours
+
+
+// Deployments only during business hours
permit (
principal,
action == Action::"deploy",
@@ -48069,12 +45764,9 @@ permit (
) when {
context.maintenance_window == true
};
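The business-hours condition from the deployment policy above (hour between 9 and 17, Monday through Friday) can be mirrored as a standalone check, useful for reasoning about the window outside Cedar:

```shell
# Mirror of the time-based Cedar condition above: deploys allowed only
# 9:00-17:59 on weekdays. Illustrative helper, not part of the platform.
in_deploy_window() {
  # $1 = hour (0-23), $2 = weekday name
  case "$2" in
    Saturday|Sunday) echo "deny"; return ;;
  esac
  if [ "$1" -ge 9 ] && [ "$1" -le 17 ]; then echo "allow"; else echo "deny"; fi
}
```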
-```plaintext
-
-### 4. IP-Based Restrictions
-
-```cedar
-// Production access only from office network
+
+
+// Production access only from office network
permit (
principal,
action,
@@ -48093,12 +45785,9 @@ permit (
context.vpn_connected == true &&
context.mfa_verified == true
};
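The `isInRange("10.0.0.0/8")` condition above only needs to inspect the first octet, since a /8 prefix covers everything beginning with `10.` — a simple prefix test captures the idea (illustrative only; real CIDR matching should use proper address parsing):

```shell
# Sketch of the 10.0.0.0/8 office-network check. For a /8, matching the
# first octet is sufficient; the glob "10.*" requires the literal "10."
in_office_range() {
  case "$1" in
    10.*) echo "in-range" ;;
    *)    echo "out-of-range" ;;
  esac
}
```

Note that `100.1.1.1` does not match, because the glob requires the dot right after `10`.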
-```plaintext
-
-### 5. Resource-Specific Policies
-
-```cedar
-// Database servers: Extra protection
+
+
+// Database servers: Extra protection
forbid (
principal,
action == Action::"delete",
@@ -48116,12 +45805,9 @@ permit (
context.approval_count >= 2 &&
context.mfa_verified == true
};
-```plaintext
-
-### 6. Self-Service Policies
-
-```cedar
-// Users can manage their own MFA devices
+
+
+// Users can manage their own MFA devices
permit (
principal,
action in [Action::"create", Action::"delete"],
@@ -48138,25 +45824,19 @@ permit (
) when {
resource.user_id == principal.id
};
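The self-service condition above (`resource.user_id == principal.id`) reduces to an ownership equality check, sketched here with illustrative names:

```shell
# Sketch of the self-service ownership rule: a user may act on a resource
# only when the resource's owner id equals the requesting principal's id.
owns_resource() {
  # $1 = principal id, $2 = resource owner id
  [ "$1" = "$2" ] && echo "permit" || echo "deny"
}
```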
-```plaintext
-
----
-
-## Policy Development Workflow
-
-### Step 1: Define Requirements
-
-**Document**:
-
-- Who needs access? (roles, teams, individuals)
-- To what resources? (servers, clusters, environments)
-- What actions? (read, write, deploy, delete)
-- Under what conditions? (MFA, IP, time, approvals)
-
-**Example Requirements Document**:
-
-```markdown
-# Requirement: Production Deployment
+
+
+
+
+Document :
+
+Who needs access? (roles, teams, individuals)
+To what resources? (servers, clusters, environments)
+What actions? (read, write, deploy, delete)
+Under what conditions? (MFA, IP, time, approvals)
+
+Example Requirements Document :
+# Requirement: Production Deployment
**Who**: DevOps team members
**What**: Deploy applications to production
@@ -48165,12 +45845,9 @@ permit (
- MFA verified
- Change request approved
- From office network or VPN
-```plaintext
-
-### Step 2: Write Policy
-
-```cedar
-@id("prod-deploy-devops")
+
+
+@id("prod-deploy-devops")
@description("DevOps can deploy to production during business hours with approval")
permit (
principal in Team::"devops",
@@ -48184,23 +45861,17 @@ permit (
context.time.weekday in ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"] &&
(context.ip_address.isInRange("10.0.0.0/8") || context.vpn_connected == true)
};
-```plaintext
-
-### Step 3: Validate Syntax
-
-```bash
-# Use Cedar CLI to validate
+
+
+# Use Cedar CLI to validate
cedar validate \
--policies provisioning/config/cedar-policies/production.cedar \
--schema provisioning/config/cedar-policies/schema.cedar
# Expected output: ✓ Policy is valid
-```plaintext
-
-### Step 4: Test in Development
-
-```bash
-# Deploy to development environment first
+
+
+# Deploy to development environment first
cp production.cedar provisioning/config/cedar-policies/development.cedar
# Restart orchestrator to load new policies
@@ -48208,24 +45879,27 @@ systemctl restart provisioning-orchestrator
# Test with real requests
provisioning server create test-server --check
-```plaintext
-
-### Step 5: Review & Approve
-
-**Review Checklist**:
-
-- [ ] Policy syntax valid
-- [ ] Policy ID unique
-- [ ] Description clear
-- [ ] Conditions appropriate for security level
-- [ ] Tested in development
-- [ ] Reviewed by security team
-- [ ] Documented in change log
-
-### Step 6: Deploy to Production
-
-```bash
-# Backup current policies
+
+
+Review Checklist :
+
+
+# Backup current policies
cp provisioning/config/cedar-policies/production.cedar \
provisioning/config/cedar-policies/production.cedar.backup.$(date +%Y%m%d)
@@ -48237,18 +45911,12 @@ provisioning cedar reload
# Verify loaded
provisioning cedar list
-```plaintext
-
----
-
-## Testing Policies
-
-### Unit Testing
-
-Create test cases for each policy:
-
-```yaml
-# tests/cedar/prod-deploy-devops.yaml
+
+
+
+
+Create test cases for each policy:
+# tests/cedar/prod-deploy-devops.yaml
policy_id: prod-deploy-devops
test_cases:
@@ -48282,20 +45950,13 @@ test_cases:
approval_id: "APPROVAL-123"
time: { hour: 22, weekday: "Monday" }
expected: Deny
-```plaintext
-
-Run tests:
-
-```bash
-provisioning cedar test tests/cedar/
-```plaintext
-
-### Integration Testing
-
-Test with real API calls:
-
-```bash
-# Setup test user
+
+Run tests:
+provisioning cedar test tests/cedar/
+
+
+Test with real API calls:
+# Setup test user
export TEST_USER="alice"
export TEST_TOKEN=$(provisioning login --user $TEST_USER --output token)
@@ -48312,30 +45973,21 @@ curl -H "Authorization: Bearer $TEST_TOKEN" \
-X DELETE
# Expected: 403 Forbidden (MFA required)
-```plaintext
-
-### Load Testing
-
-Verify policy evaluation performance:
-
-```bash
-# Generate load
+
+
+Verify policy evaluation performance:
+# Generate load
provisioning cedar bench \
--policies production.cedar \
--requests 10000 \
--concurrency 100
-# Expected: <10ms per evaluation
-```plaintext
-
----
-
-## Deployment
-
-### Development → Staging → Production
-
-```bash
-#!/bin/bash
+# Expected: <10 ms per evaluation
+
+
+
+
+#!/bin/bash
# deploy-policies.sh
ENVIRONMENT=$1 # dev, staging, prod
@@ -48364,12 +46016,9 @@ scp provisioning/config/cedar-policies/$ENVIRONMENT.cedar \
ssh $ENVIRONMENT-orchestrator "provisioning cedar reload"
echo "✅ Policies deployed to $ENVIRONMENT"
-```plaintext
-
-### Rollback Procedure
-
-```bash
-# List backups
+
+
+# List backups
ls -ltr provisioning/config/cedar-policies/backups/production/
# Restore previous version
@@ -48381,16 +46030,11 @@ provisioning cedar reload
# Verify
provisioning cedar list
-```plaintext
-
----
-
-## Monitoring & Auditing
-
-### Monitor Authorization Decisions
-
-```bash
-# Query denied requests (last 24 hours)
+
+
+
+
+# Query denied requests (last 24 hours)
provisioning audit query \
--action authorization_denied \
--from "24h" \
@@ -48403,12 +46047,9 @@ provisioning audit query \
# │ 10:15am │ bob │ deploy │ prod │ MFA not verif │
# │ 11:30am │ alice │ delete │ db-01 │ No approval │
# └─────────┴────────┴──────────┴────────┴────────────────┘
-```plaintext
-
-### Alert on Suspicious Activity
-
-```yaml
-# alerts/cedar-policies.yaml
+
+
+# alerts/cedar-policies.yaml
alerts:
- name: "High Denial Rate"
query: "authorization_denied"
@@ -48420,12 +46061,9 @@ alerts:
query: "action:deploy AND result:denied"
user: "critical-users"
action: "page:oncall"
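The "High Denial Rate" alert above compares denials against total authorization requests. A minimal sketch of that threshold check follows; the 10% cutoff is an assumption for illustration, not a value from the alerting config:

```shell
# Sketch of a denial-rate alert. The 10% threshold is an assumed example;
# tune it to match your actual alert definition.
denial_alert() {
  # $1 = denied count, $2 = total requests (must be > 0)
  if [ $(( $1 * 100 / $2 )) -gt 10 ]; then echo "alert"; else echo "ok"; fi
}
```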
-```plaintext
-
-### Policy Usage Statistics
-
-```bash
-# Which policies are most used?
+
+
+# Which policies are most used?
provisioning cedar stats --top 10
# Example output:
@@ -48434,26 +46072,20 @@ provisioning cedar stats --top 10
# prod-deploy-devops | 1,234 | 1,100 | 134
# admin-full-access | 892 | 892 | 0
# viewer-read-only | 5,421 | 5,421 | 0
-```plaintext
-
----
-
-## Troubleshooting
-
-### Policy Not Applying
-
-**Symptom**: Policy changes not taking effect
-
-**Solutions**:
-
-1. Verify hot reload:
-
- ```bash
- provisioning cedar reload
- provisioning cedar list # Should show updated timestamp
+
+
+
+Symptom : Policy changes not taking effect
+Solutions :
+Verify hot reload:
+provisioning cedar reload
+provisioning cedar list # Should show updated timestamp
+
+
+
Check orchestrator logs:
journalctl -u provisioning-orchestrator -f | grep cedar
@@ -48478,20 +46110,16 @@ provisioning audit query \
--out json | jq '.authorization'
# Shows which policy evaluated, context used, reason for denial
-```plaintext
-
-### Policy Conflicts
-
-**Symptom**: Multiple policies match, unclear which applies
-
-**Resolution**:
-
-- Cedar uses **deny-override**: If any `forbid` matches, request denied
-- Use `@priority` annotations (higher number = higher priority)
-- Make policies more specific to avoid conflicts
-
-```cedar
-@priority(100)
+
+
+Symptom : Multiple policies match, unclear which applies
+Resolution :
+
+Cedar uses deny-override : If any forbid matches, request denied
+Use @priority annotations (higher number = higher priority)
+Make policies more specific to avoid conflicts
+
+@priority(100)
permit (
principal in Role::"Admin",
action,
@@ -48506,16 +46134,11 @@ forbid (
);
// Admin can do anything EXCEPT delete databases
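Cedar's deny-override rule described above can be modeled in miniature: if any `forbid` policy matches, the request is denied regardless of permits; otherwise a matching `permit` allows; otherwise the default is deny. This is a deliberately simplified model (real Cedar evaluates full policy conditions and priorities):

```shell
# Simplified model of Cedar's deny-override combining rule.
cedar_decision() {
  # $1 = forbid results, $2 = permit results (space-separated "match"/"no")
  for f in $1; do
    [ "$f" = "match" ] && { echo "Deny"; return; }
  done
  for p in $2; do
    [ "$p" = "match" ] && { echo "Allow"; return; }
  done
  echo "Deny"
}
```

In the admin example above, a matching `forbid` on database deletion denies the request even though the broad admin `permit` also matches.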
-```plaintext
-
----
-
-## Best Practices
-
-### 1. Start Restrictive, Loosen Gradually
-
-```cedar
-// ❌ BAD: Too permissive initially
+
+
+
+
+// ❌ BAD: Too permissive initially
permit (principal, action, resource);
// ✅ GOOD: Explicit allow, expand as needed
@@ -48524,12 +46147,9 @@ permit (
action in [Action::"read", Action::"list"],
resource
);
-```plaintext
-
-### 2. Use Annotations
-
-```cedar
-@id("prod-deploy-mfa")
+
+
+@id("prod-deploy-mfa")
@description("Production deployments require MFA verification")
@owner("platform-team")
@reviewed("2025-10-08")
@@ -48541,14 +46161,10 @@ permit (
) when {
context.mfa_verified == true
};
-```plaintext
-
-### 3. Principle of Least Privilege
-
-Give users **minimum permissions** needed:
-
-```cedar
-// ❌ BAD: Overly broad
+
+
+Give users minimum permissions needed:
+// ❌ BAD: Overly broad
permit (principal in Team::"developers", action, resource);
// ✅ GOOD: Specific permissions
@@ -48557,12 +46173,9 @@ permit (
action in [Action::"read", Action::"create", Action::"update"],
resource in Environment::"development"
);
-```plaintext
-
-### 4. Document Context Requirements
-
-```cedar
-// Context required for this policy:
+
+
+// Context required for this policy:
// - mfa_verified: boolean (from JWT claims)
// - approval_id: string (from request header)
// - ip_address: IpAddr (from connection)
@@ -48575,14 +46188,10 @@ permit (
context has approval_id &&
context.ip_address.isInRange("10.0.0.0/8")
};
-```plaintext
-
-### 5. Separate Policies by Concern
-
-**File organization**:
-
-```plaintext
-cedar-policies/
+
+
+File organization :
+cedar-policies/
├── schema.cedar # Entity/action definitions
├── rbac.cedar # Role-based policies
├── teams.cedar # Team-based policies
@@ -48590,12 +46199,9 @@ cedar-policies/
├── ip-restrictions.cedar # Network-based policies
├── production.cedar # Production-specific
└── development.cedar # Development-specific
-```plaintext
-
-### 6. Version Control
-
-```bash
-# Git commit each policy change
+
+
+# Git commit each policy change
git add provisioning/config/cedar-policies/production.cedar
git commit -m "feat(cedar): Add MFA requirement for prod deployments
@@ -48608,26 +46214,25 @@ Reviewed by: security-team
Ticket: SEC-1234"
git push
```

### 7. Regular Policy Audits

**Quarterly review**:

- [ ] Remove unused policies
- [ ] Tighten overly permissive policies
- [ ] Update for new resources/actions
- [ ] Verify team memberships are current
- [ ] Test break-glass procedures

---

## Quick Reference

### Common Policy Patterns

```cedar
// Allow all
permit (principal, action, resource);
// Deny all
context.time.hour >= 9 &&
context.time.hour <= 17
};
```

### Useful Commands

```bash
# Validate policies
provisioning cedar validate
# Reload policies (hot reload)
# Policy statistics
provisioning cedar stats
```

---

## Support

- **Documentation**: `docs/architecture/CEDAR_AUTHORIZATION_IMPLEMENTATION.md`
- **Policy Examples**: `provisioning/config/cedar-policies/`
- **Issues**: Report to platform-team
- **Emergency**: Use break-glass procedure

---

**Version**: 1.0.0
**Maintained By**: Platform Team
**Last Updated**: 2025-10-08
Audit and Compliance
Multi-Factor Authentication (MFA) adds a second layer of security beyond passwords. Admins must provide:
- **Something they know**: Password
- **Something they have**: TOTP code (authenticator app) or WebAuthn device (YubiKey, Touch ID)
Administrators have elevated privileges including:
- Server creation/deletion
Week 5+: Maintenance
├─ Regular MFA device audits
├─ Backup code rotation
└─ User support for MFA issues
```

---

## Admin Enrollment Process

### Step 1: Initial Login (Password Only)

```bash
# Login with username/password
provisioning login --user admin@example.com --workspace production
# Response (partial token, MFA not yet verified):
"partial_token": "eyJhbGci...", # Limited access token
"message": "MFA enrollment required for production access"
}
```

**Partial token limitations**:

- Cannot access production resources
- Can only access MFA enrollment endpoints
- Expires in 15 minutes

### Step 2: Choose MFA Method

```bash
# Check available MFA methods
provisioning mfa methods
# Output:
MFA Status:
WebAuthn: Not enrolled
Backup Codes: Not generated
MFA Required: Yes (production workspace)
```

### Step 3: Enroll MFA Device

Choose one or both methods (TOTP + WebAuthn recommended):

- [TOTP Setup](#totp-setup-authenticator-apps)
- [WebAuthn Setup](#webauthn-setup-hardware-keys)

### Step 4: Verify and Activate

After enrollment, log in again with MFA:

```bash
# Login (returns partial token)
provisioning login --user admin@example.com --workspace production
# Verify MFA code (returns full access token)
provisioning mfa verify 123456
# Response:
{
"status": "authenticated",
"access_token": "eyJhbGci...",   # Full access token (15 min)
"refresh_token": "eyJhbGci...", # Refresh token (7 days)
"mfa_verified": true,
"expires_in": 900
}
```

---

## TOTP Setup (Authenticator Apps)

### Supported Authenticator Apps

| App | Platform | Notes |
|-----|----------|-------|
| **Google Authenticator** | iOS, Android | Simple, widely used |
| **Authy** | iOS, Android, Desktop | Cloud backup, multi-device |
| **1Password** | All platforms | Integrated with password manager |
| **Microsoft Authenticator** | iOS, Android | Enterprise integration |
| **Bitwarden** | All platforms | Open source |

### Step-by-Step TOTP Enrollment

#### 1. Initiate TOTP Enrollment

```bash
provisioning mfa totp enroll
```

**Output**:

```plaintext
╔════════════════════════════════════════════════════════════╗
║ TOTP ENROLLMENT ║
╚════════════════════════════════════════════════════════════╝
TOTP Configuration:
Algorithm: SHA1
Digits: 6
Period: 30 seconds
```

#### 2. Add to Authenticator App

**Option A: Scan QR Code (Recommended)**

1. Open authenticator app (Google Authenticator, Authy, etc.)
2. Tap "+" or "Add Account"
3. Select "Scan QR Code"
4. Point camera at QR code displayed in terminal
5. Account added automatically

**Option B: Manual Entry**

1. Open authenticator app
2. Tap "+" or "Add Account"
3. Select "Enter a setup key" or "Manual entry"
4. Enter:
   - **Account name**: <admin@example.com>
   - **Key**: `JBSWY3DPEHPK3PXP` (secret shown above)
   - **Type of key**: Time-based
5. Save account

#### 3. Verify TOTP Code

```bash
# Get current code from authenticator app (6 digits, changes every 30s)
# Example code: 123456
provisioning mfa totp verify 123456
```
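For reference, the 6-digit code your app produces is a pure function of the shared secret and the clock, following RFC 6238 with the SHA1/6-digit/30-second parameters listed above (apps accept the secret base32-encoded, like `JBSWY3DPEHPK3PXP`). A minimal illustrative sketch, not the provisioning implementation:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, period: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP (RFC 4226) computed over a 30-second time counter."""
    counter = unix_time // period                 # same counter for a whole period
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at t=59s yields "287082"
assert totp(b"12345678901234567890", 59) == "287082"
```

Because the counter only changes every 30 seconds, a code read near the end of a period can expire before you submit it, which is why a freshly read code sometimes fails on the first try.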

**Success Response**:

```plaintext
✓ TOTP verified successfully!
Backup Codes (SAVE THESE SECURELY):
1. A3B9-C2D7-E1F4
⚠ These codes allow access if you lose your authenticator device
TOTP enrollment complete. MFA is now active for your account.
```

#### 4. Save Backup Codes

**Critical**: Store backup codes in a secure location:

```bash
# Copy backup codes to password manager or encrypted file
# NEVER store in plaintext, email, or cloud storage
# Example: Store in encrypted file
provisioning mfa backup-codes --save-encrypted ~/secure/mfa-backup-codes.enc
# Or display again (requires existing MFA verification)
provisioning mfa backup-codes --show
```

#### 5. Test TOTP Login

```bash
# Logout to test full login flow
provisioning logout
# Login with password (returns partial token)
provisioning login --user admin@example.com --workspace production
provisioning mfa verify 654321
# ✓ Full access granted
```

---

## WebAuthn Setup (Hardware Keys)

### Supported WebAuthn Devices

| Device Type | Examples | Security Level |
|-------------|----------|----------------|
| **USB Security Keys** | YubiKey 5, SoloKey, Titan Key | Highest |
| **NFC Keys** | YubiKey 5 NFC, Google Titan | High (mobile compatible) |
| **Biometric** | Touch ID (macOS), Windows Hello, Face ID | High (convenience) |
| **Platform Authenticators** | Built-in laptop/phone biometrics | Medium-High |

### Step-by-Step WebAuthn Enrollment

#### 1. Check WebAuthn Support

```bash
# Verify WebAuthn support on your system
provisioning mfa webauthn check
# Output:
WebAuthn Support:
✓ Browser: Chrome 120.0 (WebAuthn supported)
✓ Platform: macOS 14.0 (Touch ID available)
✓ USB: YubiKey 5 NFC detected
```

#### 2. Initiate WebAuthn Registration

```bash
provisioning mfa webauthn register --device-name "YubiKey-Admin-Primary"
```

**Output**:

```plaintext
╔════════════════════════════════════════════════════════════╗
+╔════════════════════════════════════════════════════════════╗
║ WEBAUTHN DEVICE REGISTRATION ║
╚════════════════════════════════════════════════════════════╝
Relying Party: provisioning.example.com
⚠ Please insert your security key and touch it when it blinks
Waiting for device interaction...
```

#### 3. Complete Device Registration

**For USB Security Keys (YubiKey, SoloKey)**:

1. Insert USB key into computer
2. Terminal shows "Touch your security key"
3. Touch the gold/silver contact on the key (it will blink)
4. Registration completes

**For Touch ID (macOS)**:

1. Terminal shows "Touch ID prompt will appear"
2. Touch ID dialog appears on screen
3. Place finger on Touch ID sensor
4. Registration completes

**For Windows Hello**:

1. Terminal shows "Windows Hello prompt"
2. Windows Hello biometric prompt appears
3. Complete biometric scan (fingerprint/face)
4. Registration completes

**Success Response**:

```plaintext
✓ WebAuthn device registered successfully!
Device Details:
Name: YubiKey-Admin-Primary
Type: USB Security Key
AAGUID: 2fc0579f-8113-47ea-b116-bb5a8db9202a
Credential ID: kZj8C3bx...
Registered: 2025-10-08T14:32:10Z
You can now use this device for authentication.
```

#### 4. Register Additional Devices (Optional)

**Best Practice**: Register 2+ WebAuthn devices (primary + backup)

```bash
# Register backup YubiKey
provisioning mfa webauthn register --device-name "YubiKey-Admin-Backup"
# Register Touch ID (for convenience on personal laptop)
provisioning mfa webauthn register --device-name "MacBook-TouchID"
```

#### 5. List Registered Devices

```bash
provisioning mfa webauthn list
# Output:
Registered WebAuthn Devices:
Last Used: 2025-10-08T15:20:05Z
Total: 3 devices
```

#### 6. Test WebAuthn Login

```bash
# Logout to test
provisioning logout
# Login with password (partial token)
provisioning mfa webauthn verify
✓ WebAuthn verification successful
✓ Full access granted
```

---

## Enforcing MFA via Cedar Policies

### Production MFA Enforcement Policy

**Location**: `provisioning/config/cedar-policies/production.cedar`

```cedar
// Production operations require MFA verification
permit (
principal,
action in [
context.mfa_verified == true &&
principal.role in [Role::"Admin", Role::"SecurityLead"]
};
```
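At enforcement time, the authorization layer must translate decoded token claims into the `context` record that policies like the one above inspect. A hypothetical sketch in Python (the `mfa_time` claim name is an assumption; this is not the actual control-center implementation):

```python
import time

def build_authz_context(claims: dict) -> dict:
    """Map decoded JWT claims onto the Cedar context fields the policies read."""
    now = int(time.time())
    return {
        "mfa_verified": bool(claims.get("mfa_verified", False)),
        # Elapsed seconds since MFA completed, used for freshness conditions
        "mfa_age_seconds": now - int(claims.get("mfa_time", 0)),
    }

def allows_production_action(context: dict) -> bool:
    """Mirror of the policy condition above: context.mfa_verified == true."""
    return context["mfa_verified"] is True
```

A token without the MFA claim therefore fails the production policies even if it is otherwise valid, which is exactly the behavior the enforcement tests below exercise.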

### Development/Staging Policies (MFA Recommended, Not Required)

**Location**: `provisioning/config/cedar-policies/development.cedar`

```cedar
// Development: MFA recommended but not enforced
permit (
principal,
action,
// Allow without MFA but log warning
context.mfa_verified == true || context has mfa_warning_acknowledged
};
```

### Policy Deployment

```bash
# Validate Cedar policies
provisioning cedar validate --policies config/cedar-policies/
# Test policies with sample requests
# Verify policy is active
provisioning cedar status production
```

### Testing MFA Enforcement

```bash
# Test 1: Production access WITHOUT MFA (should fail)
provisioning login --user admin@example.com --workspace production
provisioning server create web-01 --plan medium --check
provisioning server create web-01 --plan medium --check
# Expected: Server creation initiated
```

---

## Backup Codes Management

### Generating Backup Codes

Backup codes are automatically generated during first MFA enrollment:

```bash
# View existing backup codes (requires MFA verification)
provisioning mfa backup-codes --show
# Regenerate backup codes (invalidates old ones)
New Backup Codes:
✓ Backup codes regenerated successfully
⚠ Save these codes in a secure location
```
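Backup codes are only useful if the server can recognize them later without storing them in the clear, and if each code works exactly once. An illustrative sketch of the generate/hash/consume shape (the 4-char-group format and alphabet are assumptions modeled on the `X7Y2-Z9A4-B6C1` examples in this guide, not the provisioning implementation):

```python
import hashlib
import secrets

ALPHABET = "ABCDEFGHJKMNPQRSTUVWXYZ23456789"  # assumed: skips ambiguous 0/O, 1/I/L

def generate_backup_codes(count: int = 10) -> list:
    """Produce codes shaped like the X7Y2-Z9A4-B6C1 examples above."""
    def group() -> str:
        return "".join(secrets.choice(ALPHABET) for _ in range(4))
    return ["-".join(group() for _ in range(3)) for _ in range(count)]

def hash_code(code: str) -> str:
    """The server stores digests only, never plaintext codes."""
    return hashlib.sha256(code.encode()).hexdigest()

def consume(stored_digests: set, code: str) -> bool:
    """Single-use: a successful match removes the digest, preventing replay."""
    digest = hash_code(code)
    if digest in stored_digests:
        stored_digests.remove(digest)
        return True
    return False
```

A production system would use a salted, deliberately slow hash (argon2, bcrypt) rather than bare SHA-256; the point here is the store-digest, consume-once shape, which also explains why regeneration invalidates all old codes: the stored digest set is simply replaced.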

### Using Backup Codes

**When to use backup codes**:

- Lost authenticator device (phone stolen, broken)
- WebAuthn key not available (traveling, left at office)
- Authenticator app not working (time sync issue)

**Login with backup code**:

```bash
# Login (partial token)
provisioning login --user admin@example.com --workspace production
# Use backup code instead of TOTP/WebAuthn
provisioning mfa verify-backup X7Y2-Z9A4-B6C1
⚠ Backup code consumed (9 remaining)
⚠ Enroll a new MFA device as soon as possible
✓ Full access granted (temporary)
```

### Backup Code Storage Best Practices

**✅ DO**:

- Store in password manager (1Password, Bitwarden, LastPass)
- Print and store in physical safe
- Encrypt and store in secure cloud storage (with encryption key stored separately)
- Share with trusted IT team member (encrypted)

**❌ DON'T**:

- Email to yourself
- Store in plaintext file on laptop
- Save in browser notes/bookmarks
- Share via Slack/Teams/unencrypted chat
- Screenshot and save to Photos

**Example: Encrypted Storage**:

```bash
# Encrypt backup codes with Age
provisioning mfa backup-codes --export | \
age -p -o ~/secure/mfa-backup-codes.age
# Decrypt when needed
age -d ~/secure/mfa-backup-codes.age
```

---

## Recovery Procedures

### Scenario 1: Lost Authenticator Device (TOTP)

**Situation**: Phone stolen/broken, authenticator app not accessible

**Recovery Steps**:

```bash
# Step 1: Use backup code to login
provisioning login --user admin@example.com --workspace production
provisioning mfa verify-backup X7Y2-Z9A4-B6C1
provisioning mfa totp verify 654321
# Step 4: Generate new backup codes
provisioning mfa backup-codes --regenerate
```

### Scenario 2: Lost WebAuthn Key (YubiKey)

**Situation**: YubiKey lost, stolen, or damaged

**Recovery Steps**:

```bash
# Step 1: Login with alternative method (TOTP or backup code)
provisioning login --user admin@example.com --workspace production
provisioning mfa verify 123456 # TOTP from authenticator app
# Step 4: Register new WebAuthn device
provisioning mfa webauthn register --device-name "YubiKey-Admin-Replacement"
```

### Scenario 3: All MFA Methods Lost

**Situation**: Lost phone (TOTP), lost YubiKey, no backup codes

**Recovery Steps** (Requires Admin Assistance):

```bash
# User contacts Security Team / Platform Admin
# Admin performs MFA reset (requires 2+ admin approval)
provisioning admin mfa-reset admin@example.com \
# User re-enrolls TOTP and WebAuthn
provisioning mfa totp enroll
provisioning mfa webauthn register --device-name "YubiKey-New"
```

### Scenario 4: Backup Codes Depleted

**Situation**: Used 9 out of 10 backup codes

**Recovery Steps**:

```bash
# Login with last backup code
provisioning login --user admin@example.com --workspace production
provisioning mfa verify-backup D9E2-F7G4-H6J1
provisioning mfa backup-codes --regenerate
# Save new codes securely
```

---

## Troubleshooting

### Issue 1: "Invalid TOTP code" Error

**Symptoms**:

```plaintext
provisioning mfa verify 123456
✗ Error: Invalid TOTP code
```

**Possible Causes**:

1. **Time sync issue** (most common)
2. Wrong secret key entered during enrollment
3. Code expired (30-second window)

**Solutions**:

```bash
# Check time sync (device clock must be accurate)
# macOS:
sudo sntp -sS time.apple.com
date && curl -s http://worldtimeapi.org/api/ip | grep datetime
# If time is off by >30 seconds, sync time and retry
```
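Small clock skew is usually absorbed server-side by also checking the neighbouring time steps. A self-contained sketch of window-based verification (the ±1-step window is a common convention for RFC 6238 verifiers, not a documented provisioning setting):

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, period: int = 30, digits: int = 6) -> str:
    """Standard RFC 6238 TOTP derivation (SHA1, 6 digits, 30-second steps)."""
    counter = unix_time // period
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

def verify_totp(secret: bytes, code: str, now: int, skew_steps: int = 1) -> bool:
    """Accept the current step and up to skew_steps neighbours on either side.

    With skew_steps=1, a client clock up to ~30s off still verifies; beyond
    that the user must sync their device clock, as described above.
    """
    return any(
        hmac.compare_digest(totp(secret, now + step * 30), code)
        for step in range(-skew_steps, skew_steps + 1)
    )
```

This is why a clock that is off by a minute or more produces "Invalid TOTP code" even though the authenticator app is working correctly.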

### Issue 2: WebAuthn Not Detected

**Symptoms**:

```plaintext
provisioning mfa webauthn register
✗ Error: No WebAuthn authenticator detected
```

**Solutions**:

```bash
# Check USB connection (for hardware keys)
# macOS:
system_profiler SPUSBDataType | grep -i yubikey
# For Touch ID: Ensure finger is enrolled in System Preferences
# For Windows Hello: Ensure biometrics are configured in Settings
```

### Issue 3: "MFA Required" Despite Verification

**Symptoms**:

```plaintext
provisioning server create web-01
✗ Error: Authorization denied (MFA verification required)
```

**Cause**: Access token expired (15 min) or MFA verification not in token claims

**Solution**:

```bash
# Check token expiration
provisioning auth status
# Output:
"iat": 1696766400,
"exp": 1696767300
}
```
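The decoded claims shown above are enough to diagnose this error: either `exp` has passed (access tokens live 15 minutes) or the token simply lacks the MFA claim. A hypothetical sketch of that check (claim names as displayed above; not the actual CLI internals):

```python
import time

def diagnose_token(claims: dict, now=None) -> str:
    """Explain an authorization denial from decoded access-token claims."""
    now = int(time.time()) if now is None else now
    if now >= claims.get("exp", 0):
        return "token expired - refresh or log in again"
    if not claims.get("mfa_verified", False):
        return "token lacks MFA claim - run `provisioning mfa verify`"
    return "token ok"
```

Used on the example claims above (`exp: 1696767300`) at any later timestamp, this reports an expired token, which is resolved by refreshing or logging in and verifying MFA again.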

### Issue 4: QR Code Not Displaying

**Symptoms**: QR code appears garbled or doesn't display in terminal

**Solutions**:

```bash
# Use manual entry instead
provisioning mfa totp enroll --manual
# Output (no QR code):
Enter this secret manually in your authenticator app.
# Or export QR code to image file
provisioning mfa totp enroll --qr-image ~/mfa-qr.png
open ~/mfa-qr.png # View in image viewer
```

### Issue 5: Backup Code Not Working

**Symptoms**:

```plaintext
provisioning mfa verify-backup X7Y2-Z9A4-B6C1
✗ Error: Invalid or already used backup code
```

**Possible Causes**:

1. Code already used (single-use only)
2. Backup codes regenerated (old codes invalidated)
3. Typo in code entry

**Solutions**:

```bash
# Check backup code status (requires alternative login method)
provisioning mfa backup-codes --status
# Output:
Backup Codes Status:
# Contact admin for MFA reset if all codes used
# Or use alternative MFA method (TOTP, WebAuthn)
```

---

## Best Practices

### For Individual Admins

#### 1. Use Multiple MFA Methods

**✅ Recommended Setup**:

- **Primary**: TOTP (Google Authenticator, Authy)
- **Backup**: WebAuthn (YubiKey or Touch ID)
- **Emergency**: Backup codes (stored securely)

```bash
# Enroll all three
provisioning mfa totp enroll
provisioning mfa webauthn register --device-name "YubiKey-Primary"
provisioning mfa backup-codes --save-encrypted ~/secure/codes.enc
```

#### 2. Secure Backup Code Storage

```bash
# Store in password manager (1Password example)
provisioning mfa backup-codes --show | \
op item create --category "Secure Note" \
--title "Provisioning MFA Backup Codes" \
# Or encrypted file
provisioning mfa backup-codes --export | \
age -p -o ~/secure/mfa-backup-codes.age
```

#### 3. Regular Device Audits

```bash
# Monthly: Review registered devices
provisioning mfa devices --all
# Remove unused/old devices
provisioning mfa webauthn remove "Old-YubiKey"
provisioning mfa totp remove "Old-Phone"
```

#### 4. Test Recovery Procedures

```bash
# Quarterly: Test backup code login
provisioning logout
provisioning login --user admin@example.com --workspace dev
provisioning mfa verify-backup [test-code]
# Verify backup codes are accessible
cat ~/secure/mfa-backup-codes.enc | age -d
```

### For Security Teams

#### 1. MFA Enrollment Verification

```bash
# Generate MFA enrollment report
provisioning admin mfa-report --format csv > mfa-enrollment.csv
# Output (CSV):
# User,MFA_Enabled,TOTP,WebAuthn,Backup_Codes,Last_MFA_Login,Role
# admin@example.com,Yes,Yes,Yes,10,2025-10-08T14:00:00Z,Admin
# dev@example.com,No,No,No,0,Never,Developer
```

#### 2. Enforce MFA Deadlines

```bash
# Set MFA enrollment deadline
provisioning admin mfa-deadline set 2025-11-01 \
--roles Admin,Developer \
--environment production
provisioning admin mfa-remind \
--users-without-mfa \
--template "MFA enrollment required by Nov 1"
```

#### 3. Monitor MFA Usage

```bash
# Audit: Find production logins without MFA
provisioning audit query \
--action "auth:login" \
--filter 'mfa_verified == false && environment == "production"' \
# Alert on repeated MFA failures
provisioning monitoring alert create \
--name "MFA Brute Force" \
--condition "mfa_failures > 5 in 5min" \
--action "notify security-team"
```

#### 4. MFA Reset Policy

**MFA Reset Requirements**:

- User verification (video call + ID check)
- Support ticket created (incident tracking)
- 2+ admin approvals (different teams)
- Time-limited reset window (24 hours)
- Mandatory re-enrollment before production access

```bash
# MFA reset workflow
provisioning admin mfa-reset create user@example.com \
--reason "Lost all devices" \
--ticket SUPPORT-12345 \
# Requires 2 approvals
provisioning admin mfa-reset approve MFA-RESET-001
```

### For Platform Admins

#### 1. Cedar Policy Best Practices

```cedar
// Require MFA for high-risk actions
permit (
principal,
action in [
context.mfa_verified == true &&
context.mfa_age_seconds < 300 // MFA verified within last 5 minutes
};
```
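The `mfa_age_seconds < 300` condition is a freshness (step-up) check: the access token may still be valid while the MFA proof is considered too old for a high-risk action. An illustrative sketch of how an enforcement point might compute it (the `mfa_time` timestamp source is an assumption; claim names vary by deployment):

```python
import time

MFA_MAX_AGE_SECONDS = 300  # mirror of: context.mfa_age_seconds < 300

def requires_step_up(mfa_time, now=None) -> bool:
    """True when the last successful MFA verification is too old for a
    high-risk action, so the user must run `provisioning mfa verify` again."""
    now = int(time.time()) if now is None else now
    return (now - mfa_time) >= MFA_MAX_AGE_SECONDS
```

So an admin who verified MFA two minutes ago can delete a server immediately, while one who verified ten minutes ago is asked to verify again first.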

#### 2. MFA Grace Periods (For Rollout)

```bash
# Development: No MFA required
export PROVISIONING_MFA_REQUIRED=false
# Staging: MFA recommended (warnings only)
export PROVISIONING_MFA_REQUIRED=warn
# Production: MFA mandatory (strict enforcement)
export PROVISIONING_MFA_REQUIRED=true
```

#### 3. Backup Admin Account

**Emergency Admin** (break-glass scenario):

- Separate admin account with MFA enrollment
- Credentials stored in physical safe
- Only used when primary admins locked out
- Requires incident report after use

```bash
# Create emergency admin
provisioning admin create emergency-admin@example.com \
--role EmergencyAdmin \
--mfa-required true \
# Print backup codes and store in safe
provisioning mfa backup-codes --show --user emergency-admin@example.com > emergency-codes.txt
# [Print and store in physical safe]
```

---

## Audit and Compliance

### MFA Audit Logging

All MFA events are logged to the audit system:

```bash
# View MFA enrollment events
provisioning audit query \
--action-type "mfa:*" \
--since 30d
"ip_address": "203.0.113.42"
}
]
```

### Compliance Reports

#### SOC2 Compliance (Access Control)

```bash
# Generate SOC2 access control report
provisioning compliance report soc2 \
--control "CC6.1" \
--period "2025-Q3"
Evidence:
- Cedar policy: production.cedar (lines 15-25)
- Audit logs: mfa-verification-logs-2025-q3.json
- Enrollment report: mfa-enrollment-status.csv
```

#### ISO 27001 Compliance (A.9.4.2 - Secure Log-on)

```bash
# ISO 27001 A.9.4.2 compliance report
provisioning compliance report iso27001 \
--control "A.9.4.2" \
--format pdf \
# 3. Audit Trail
# 4. Policy Enforcement
# 5. Recovery Procedures
```

#### GDPR Compliance (MFA Data Handling)

```bash
# GDPR data subject request (MFA data export)
provisioning compliance gdpr export admin@example.com \
--include mfa
# GDPR deletion (MFA data removal after account deletion)
provisioning compliance gdpr delete admin@example.com --include-mfa
```

### MFA Metrics Dashboard

```bash
# Generate MFA metrics
provisioning admin mfa-metrics --period 30d
# Output:
Incidents:
MFA Resets: 2
Lost Devices: 3
Lockouts: 1
```

---

## Quick Reference Card

### Daily Admin Operations

```bash
# Login with MFA
provisioning login --user admin@example.com --workspace production
provisioning mfa verify 123456
provisioning mfa status
# View registered devices
provisioning mfa devices
```

### MFA Management

```bash
# TOTP
provisioning mfa totp enroll # Enroll TOTP
provisioning mfa totp verify 123456 # Verify TOTP code
provisioning mfa totp unenroll # Remove TOTP
provisioning mfa webauthn remove "YubiKey"   # Remove device
provisioning mfa backup-codes --show # View codes
provisioning mfa backup-codes --regenerate # Generate new codes
provisioning mfa verify-backup X7Y2-Z9A4-B6C1 # Use backup code
```

### Emergency Procedures

```bash
# Lost device recovery (use backup code)
provisioning login --user admin@example.com
provisioning mfa verify-backup [code]
provisioning mfa totp enroll # Re-enroll new device
provisioning admin mfa-reset user@example.com --reason "Lost all devices"
# Check MFA compliance
provisioning admin mfa-report
```

---

## Summary Checklist

### For New Admins

- [ ] Complete initial login with password
- [ ] Enroll TOTP (Google Authenticator, Authy)
- [ ] Verify TOTP code successfully
- [ ] Save backup codes in password manager
- [ ] Register WebAuthn device (YubiKey or Touch ID)
- [ ] Test full login flow with MFA
- [ ] Store backup codes in secure location
- [ ] Verify production access works with MFA

### For Security Team

- [ ] Deploy Cedar MFA enforcement policies
- [ ] Verify 100% admin MFA enrollment
- [ ] Configure MFA audit logging
- [ ] Set up MFA compliance reports (SOC2, ISO 27001)
- [ ] Document MFA reset procedures
- [ ] Train admins on MFA usage
- [ ] Create emergency admin account (break-glass)
- [ ] Schedule quarterly MFA audits

### For Platform Team

- [ ] Configure MFA settings in `config/mfa.toml`
- [ ] Deploy Cedar policies with MFA requirements
- [ ] Set up monitoring for MFA failures
- [ ] Configure alerts for MFA bypass attempts
- [ ] Document MFA architecture in ADR
- [ ] Test MFA enforcement in all environments
- [ ] Verify audit logs capture MFA events
- [ ] Create runbooks for MFA incidents

---

## Support and Resources

### Documentation

- **MFA Implementation**: `/docs/architecture/MFA_IMPLEMENTATION_SUMMARY.md`
- **Cedar Policies**: `/docs/operations/CEDAR_POLICIES_PRODUCTION_GUIDE.md`
- **Break-Glass**: `/docs/operations/BREAK_GLASS_TRAINING_GUIDE.md`
- **Audit Logging**: `/docs/architecture/AUDIT_LOGGING_IMPLEMENTATION.md`

### Configuration Files

- **MFA Config**: `provisioning/config/mfa.toml`
- **Cedar Policies**: `provisioning/config/cedar-policies/production.cedar`
- **Control Center**: `provisioning/platform/control-center/config.toml`

### CLI Help

```bash
provisioning mfa help                # MFA command help
provisioning mfa totp --help # TOTP-specific help
provisioning mfa webauthn --help # WebAuthn-specific help
-```plaintext
-
-### Contact
-
-- **Security Team**: <security@example.com>
-- **Platform Team**: <platform@example.com>
-- **Support Ticket**: <https://support.example.com>
-
----
-
-**Document Status**: ✅ Complete
-**Review Date**: 2025-11-08
-**Maintained By**: Security Team, Platform Team
+
+
+
+Document Status : ✅ Complete
+Review Date : 2025-11-08
+Maintained By : Security Team, Platform Team
A Rust-based orchestrator service that coordinates infrastructure provisioning workflows with pluggable storage backends and comprehensive migration tools.
Source : provisioning/platform/orchestrator/
-
+
The orchestrator implements a hybrid multi-storage approach:
Rust Orchestrator : Handles coordination, queuing, and parallel execution
@@ -50069,7 +47464,7 @@ provisioning mfa webauthn --help # WebAuthn-specific help
Pluggable Storage : Multiple storage backends with seamless migration
REST API : HTTP interface for workflow submission and monitoring
-
+
Multi-Storage Backends : Filesystem, SurrealDB Embedded, and SurrealDB Server options
Task Queue : Priority-based task scheduling with retry logic
@@ -50084,7 +47479,7 @@ provisioning mfa webauthn --help # WebAuthn-specific help
Multi-Node Support : Test complex topologies including Kubernetes and etcd clusters
Docker Integration : Automated container lifecycle management via Docker API
-
+
Default Build (Filesystem Only) :
cd provisioning/platform/orchestrator
@@ -50113,7 +47508,7 @@ cargo run --features surrealdb -- --storage-type surrealdb-server \
"wait": true
}'
-
+
GET /health - Service health status
@@ -50176,7 +47571,7 @@ provisioning test topology load kubernetes_3node | test env cluster kubernetes
Best For : Development | Production | Distributed
-
+
cd provisioning/platform/control-center
cargo build --release
-
+
Copy and edit the configuration:
cp config.toml.example config.toml
@@ -50374,8 +47769,8 @@ detection_threshold = 2.5
context.geo.country in ["US", "CA", "GB", "DE"]
};
-
-
+
+
# Validate policies
control-center policy validate policies/
@@ -50395,7 +47790,7 @@ control-center compliance hipaa
# Generate compliance report
control-center compliance report --format html
-
+
POST /policies/evaluate - Evaluate policy decision
@@ -50410,7 +47805,7 @@ control-center compliance report --format html
GET /policies/{id}/versions/{version} - Get specific version
POST /policies/{id}/rollback/{version} - Rollback to version
-
+
GET /compliance/soc2 - SOC2 compliance check
GET /compliance/hipaa - HIPAA compliance check
@@ -50422,8 +47817,8 @@ control-center compliance report --format html
GET /anomalies/{id} - Get anomaly details
POST /anomalies/detect - Trigger anomaly detection
-
-
+
+
Policy Engine (src/policies/engine.rs)
@@ -50474,8 +47869,8 @@ control-center compliance report --format html
Template-based : Policy generation through templates
Environment-aware : Different configs for dev/test/prod
-
-
+
+
FROM rust:1.75 as builder
WORKDIR /app
COPY . .
@@ -50487,7 +47882,7 @@ COPY --from=builder /app/target/release/control-center /usr/local/bin/
EXPOSE 8080
CMD ["control-center", "server"]
-
+
apiVersion: apps/v1
kind: Deployment
metadata:
@@ -50505,7 +47900,7 @@ spec:
- name: DATABASE_URL
value: "surreal://surrealdb:8000"
-
+
Architecture : Cedar Authorization
User Guide : Authentication Layer
@@ -50525,33 +47920,26 @@ spec:
Live Progress : Real-time deployment progress and logs
Health Checks : Automatic service health verification
-
+
cd provisioning/platform/installer
cargo build --release
cargo install --path .
-```plaintext
-
-## Usage
-
-### Interactive TUI (Default)
-
-```bash
-provisioning-installer
-```plaintext
-
-The TUI guides you through:
-
-1. Platform detection (Docker, Podman, K8s, OrbStack)
-2. Deployment mode selection (Solo, Multi-User, CI/CD, Enterprise)
-3. Service selection (check/uncheck services)
-4. Configuration (domain, ports, secrets)
-5. Live deployment with progress tracking
-6. Success screen with access URLs
-
-### Headless Mode (Automation)
-
-```bash
-# Quick deploy with auto-detection
+
+
+
+provisioning-installer
+
+The TUI guides you through:
+
+Platform detection (Docker, Podman, K8s, OrbStack)
+Deployment mode selection (Solo, Multi-User, CI/CD, Enterprise)
+Service selection (check/uncheck services)
+Configuration (domain, ports, secrets)
+Live deployment with progress tracking
+Success screen with access URLs
+
+
+# Quick deploy with auto-detection
provisioning-installer --headless --mode solo --yes
# Fully specified
@@ -50565,82 +47953,58 @@ provisioning-installer \
# Use existing config file
provisioning-installer --headless --config my-deployment.toml --yes
-```plaintext
-
-### Configuration Generation
-
-```bash
-# Generate config without deploying
+
+
+# Generate config without deploying
provisioning-installer --config-only
# Deploy later with generated config
provisioning-installer --headless --config ~/.provisioning/installer-config.toml --yes
-```plaintext
-
-## Deployment Platforms
-
-### Docker Compose
-
-```bash
-provisioning-installer --platform docker --mode solo
-```plaintext
-
-**Requirements**: Docker 20.10+, docker-compose 2.0+
-
-### OrbStack (macOS)
-
-```bash
-provisioning-installer --platform orbstack --mode solo
-```plaintext
-
-**Requirements**: OrbStack installed, 4GB RAM, 2 CPU cores
-
-### Podman (Rootless)
-
-```bash
-provisioning-installer --platform podman --mode solo
-```plaintext
-
-**Requirements**: Podman 4.0+, systemd
-
-### Kubernetes
-
-```bash
-provisioning-installer --platform kubernetes --mode enterprise
-```plaintext
-
-**Requirements**: kubectl configured, Helm 3.0+
-
-## Deployment Modes
-
-### Solo Mode (Development)
-
-- **Services**: 5 core services
-- **Resources**: 2 CPU cores, 4GB RAM, 20GB disk
-- **Use case**: Single developer, local testing
-
-### Multi-User Mode (Team)
-
-- **Services**: 7 services
-- **Resources**: 4 CPU cores, 8GB RAM, 50GB disk
-- **Use case**: Team collaboration, shared infrastructure
-
-### CI/CD Mode (Automation)
-
-- **Services**: 8-10 services
-- **Resources**: 8 CPU cores, 16GB RAM, 100GB disk
-- **Use case**: Automated pipelines, webhooks
-
-### Enterprise Mode (Production)
-
-- **Services**: 15+ services
-- **Resources**: 16 CPU cores, 32GB RAM, 500GB disk
-- **Use case**: Production deployments, full observability
-
-## CLI Options
-
-```plaintext
-provisioning-installer [OPTIONS]
+
+
+
+provisioning-installer --platform docker --mode solo
+
+Requirements : Docker 20.10+, docker-compose 2.0+
+
+provisioning-installer --platform orbstack --mode solo
+
+Requirements : OrbStack installed, 4 GB RAM, 2 CPU cores
+
+provisioning-installer --platform podman --mode solo
+
+Requirements : Podman 4.0+, systemd
+
+provisioning-installer --platform kubernetes --mode enterprise
+
+Requirements : kubectl configured, Helm 3.0+
+
+
+
+Solo Mode (Development) :
+Services : 5 core services
+Resources : 2 CPU cores, 4 GB RAM, 20 GB disk
+Use case : Single developer, local testing
+
+Multi-User Mode (Team) :
+Services : 7 services
+Resources : 4 CPU cores, 8 GB RAM, 50 GB disk
+Use case : Team collaboration, shared infrastructure
+
+CI/CD Mode (Automation) :
+Services : 8-10 services
+Resources : 8 CPU cores, 16 GB RAM, 100 GB disk
+Use case : Automated pipelines, webhooks
+
+Enterprise Mode (Production) :
+Services : 15+ services
+Resources : 16 CPU cores, 32 GB RAM, 500 GB disk
+Use case : Production deployments, full observability
+
+
+provisioning-installer [OPTIONS]
OPTIONS:
--headless Run in headless mode (no TUI)
@@ -50653,43 +48017,31 @@ OPTIONS:
--config <FILE> Use existing config file
-h, --help Print help
-V, --version Print version
-```plaintext
-
-## CI/CD Integration
-
-### GitLab CI
-
-```yaml
-deploy_platform:
+
+
+
+deploy_platform:
stage: deploy
script:
- provisioning-installer --headless --mode cicd --platform kubernetes --yes
only:
- main
-```plaintext
-
-### GitHub Actions
-
-```yaml
-- name: Deploy Provisioning Platform
+
+
+- name: Deploy Provisioning Platform
run: |
provisioning-installer --headless --mode cicd --platform docker --yes
-```plaintext
-
-## Nushell Scripts (Fallback)
-
-If the Rust binary is unavailable:
-
-```bash
-cd provisioning/platform/installer/scripts
-nu deploy.nu --mode solo --platform orbstack --yes
-```plaintext
-
-## Related Documentation
-
-- **Deployment Guide**: [Platform Deployment](../guides/from-scratch.md)
-- **Architecture**: [Platform Overview](../architecture/ARCHITECTURE_OVERVIEW.md)
+
+If the Rust binary is unavailable:
+cd provisioning/platform/installer/scripts
+nu deploy.nu --mode solo --platform orbstack --yes
+
+
+
A comprehensive installer system supporting interactive, headless, and unattended deployment modes with automatic configuration management via TOML and MCP integration.
@@ -50750,13 +48102,13 @@ provisioning-installer --headless --mode cicd --config ci-config.toml --yes
Suitable for production deployments
Comprehensive logging and audit trails
-
+
Each mode configures resource allocation and features appropriately:
Mode CPUs Memory Use Case
-Solo 2 4GB Single user development
-MultiUser 4 8GB Team development, testing
-CICD 8 16GB CI/CD pipelines, testing
-Enterprise 16 32GB Production deployment
+Solo 2 4 GB Single user development
+MultiUser 4 8 GB Team development, testing
+CICD 8 16 GB CI/CD pipelines, testing
+Enterprise 16 32 GB Production deployment
@@ -50789,7 +48141,7 @@ endpoint = "http://localhost:9090"
MCP Integration - AI-powered intelligent defaults
Built-in Defaults - System defaults
-
+
Model Context Protocol integration provides intelligent configuration:
7 AI-Powered Settings Tools :
@@ -50826,7 +48178,7 @@ taskserv install-self kubernetes
taskserv install-self prometheus
taskserv install-self cilium
-
+
# Show interactive installer
provisioning-installer
@@ -50854,7 +48206,7 @@ provisioning-installer --unattended --config config.toml
# Headless with specific settings
provisioning-installer --headless --mode solo --provider upcloud --cpu 2 --memory 4096 --yes
-
+
# Define in Git
cat > infrastructure/installer.toml << EOF
@@ -50894,7 +48246,7 @@ resource "null_resource" "provisioning_installer" {
kubernetes-ha.toml - High-availability Kubernetes
multicloud.toml - Multi-provider setup
-
+
User Guide : user/provisioning-installer-guide.md
Deployment Guide : operations/installer-deployment-guide.md
@@ -50937,7 +48289,7 @@ provisioning-installer --config-suggest
CORS Support : Configurable cross-origin resource sharing
Health Checks : Built-in health and readiness endpoints
-
+
┌─────────────────┐
│ REST Client │
│ (curl, CI/CD) │
@@ -50964,21 +48316,14 @@ provisioning-installer --config-suggest
│ - CLI wrapper │
│ - Timeout │
└─────────────────┘
-```plaintext
-
-## Installation
-
-```bash
-cd provisioning/platform/provisioning-server
+
+
+cd provisioning/platform/provisioning-server
cargo build --release
-```plaintext
-
-## Configuration
-
-Create `config.toml`:
-
-```toml
-[server]
+
+
+Create config.toml:
+[server]
host = "0.0.0.0"
port = 8083
cors_enabled = true
@@ -50996,14 +48341,10 @@ max_concurrent_operations = 10
[logging]
level = "info"
json_format = false
-```plaintext
-
-## Usage
-
-### Starting the Server
-
-```bash
-# Using config file
+
+
+
+# Using config file
provisioning-server --config config.toml
# Custom settings
@@ -51013,114 +48354,90 @@ provisioning-server \
--jwt-secret "my-secret" \
--cli-path "/usr/local/bin/provisioning" \
--log-level debug
-```plaintext
-
-### Authentication
-
-#### Login
-
-```bash
-curl -X POST http://localhost:8083/v1/auth/login \
+
+
+
+curl -X POST http://localhost:8083/v1/auth/login \
-H "Content-Type: application/json" \
-d '{
"username": "admin",
"password": "admin123"
}'
-```plaintext
-
-Response:
-
-```json
-{
+
+Response:
+{
"token": "eyJhbGc...",
"refresh_token": "eyJhbGc...",
"expires_in": 86400
}
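For scripting, the token field of a response like the one above is typically extracted and attached as a Bearer header; a minimal Python sketch using the placeholder values shown:

```python
import json

# Placeholder values from the sample login response above.
response = '{"token": "eyJhbGc...", "refresh_token": "eyJhbGc...", "expires_in": 86400}'
payload = json.loads(response)
headers = {"Authorization": f"Bearer {payload['token']}"}  # attach to subsequent API calls
print(headers["Authorization"])  # → Bearer eyJhbGc...
```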
-```plaintext
-
-#### Using Token
-
-```bash
-export TOKEN="eyJhbGc..."
+
+
+export TOKEN="eyJhbGc..."
curl -X GET http://localhost:8083/v1/servers \
-H "Authorization: Bearer $TOKEN"
-```plaintext
-
-## API Endpoints
-
-### Authentication
-
-- `POST /v1/auth/login` - User login
-- `POST /v1/auth/refresh` - Refresh access token
-
-### Servers
-
-- `GET /v1/servers` - List all servers
-- `POST /v1/servers/create` - Create new server
-- `DELETE /v1/servers/{id}` - Delete server
-- `GET /v1/servers/{id}/status` - Get server status
-
-### Taskservs
-
-- `GET /v1/taskservs` - List all taskservs
-- `POST /v1/taskservs/create` - Create taskserv
-- `DELETE /v1/taskservs/{id}` - Delete taskserv
-- `GET /v1/taskservs/{id}/status` - Get taskserv status
-
-### Workflows
-
-- `POST /v1/workflows/submit` - Submit workflow
-- `GET /v1/workflows/{id}` - Get workflow details
-- `GET /v1/workflows/{id}/status` - Get workflow status
-- `POST /v1/workflows/{id}/cancel` - Cancel workflow
-
-### Operations
-
-- `GET /v1/operations` - List all operations
-- `GET /v1/operations/{id}` - Get operation status
-- `POST /v1/operations/{id}/cancel` - Cancel operation
-
-### System
-
-- `GET /health` - Health check (no auth required)
-- `GET /v1/version` - Version information
-- `GET /v1/metrics` - Prometheus metrics
-
-## RBAC Roles
-
-### Admin Role
-
-Full system access including all operations, workspace management, and system administration.
-
-### Operator Role
-
-Infrastructure operations including create/delete servers, taskservs, clusters, and workflow management.
-
-### Developer Role
-
-Read access plus SSH to servers, view workflows and operations.
-
-### Viewer Role
-
-Read-only access to all resources and status information.
-
-## Security Best Practices
-
-1. **Change Default Credentials**: Update all default usernames/passwords
-2. **Use Strong JWT Secret**: Generate secure random string (32+ characters)
-3. **Enable TLS**: Use HTTPS in production
-4. **Restrict CORS**: Configure specific allowed origins
-5. **Enable mTLS**: For client certificate authentication
-6. **Regular Token Rotation**: Implement token refresh strategy
-7. **Audit Logging**: Enable audit logs for compliance
-
-## CI/CD Integration
-
-### GitHub Actions
-
-```yaml
-- name: Deploy Infrastructure
+
+
+
+
+Authentication :
+POST /v1/auth/login - User login
+POST /v1/auth/refresh - Refresh access token
+
+Servers :
+GET /v1/servers - List all servers
+POST /v1/servers/create - Create new server
+DELETE /v1/servers/{id} - Delete server
+GET /v1/servers/{id}/status - Get server status
+
+Taskservs :
+GET /v1/taskservs - List all taskservs
+POST /v1/taskservs/create - Create taskserv
+DELETE /v1/taskservs/{id} - Delete taskserv
+GET /v1/taskservs/{id}/status - Get taskserv status
+
+Workflows :
+POST /v1/workflows/submit - Submit workflow
+GET /v1/workflows/{id} - Get workflow details
+GET /v1/workflows/{id}/status - Get workflow status
+POST /v1/workflows/{id}/cancel - Cancel workflow
+
+Operations :
+GET /v1/operations - List all operations
+GET /v1/operations/{id} - Get operation status
+POST /v1/operations/{id}/cancel - Cancel operation
+
+System :
+GET /health - Health check (no auth required)
+GET /v1/version - Version information
+GET /v1/metrics - Prometheus metrics
+
+
+
+Admin Role : Full system access including all operations, workspace management, and system administration.
+
+Operator Role : Infrastructure operations including create/delete servers, taskservs, clusters, and workflow management.
+
+Developer Role : Read access plus SSH to servers, view workflows and operations.
+
+Viewer Role : Read-only access to all resources and status information.
+
+
+Change Default Credentials : Update all default usernames/passwords
+Use Strong JWT Secret : Generate secure random string (32+ characters)
+Enable TLS : Use HTTPS in production
+Restrict CORS : Configure specific allowed origins
+Enable mTLS : For client certificate authentication
+Regular Token Rotation : Implement token refresh strategy
+Audit Logging : Enable audit logs for compliance
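For point 2 above, a secret comfortably over the 32-character minimum can be generated with Python's `secrets` module (one option among many; `openssl rand -base64 48` works equally well):

```python
import secrets

# 48 random bytes, URL-safe base64 without padding → 64 characters.
secret = secrets.token_urlsafe(48)
assert len(secret) >= 32  # the guide's stated minimum
print(len(secret))  # → 64
```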
+
+
+
+- name: Deploy Infrastructure
run: |
TOKEN=$(curl -X POST https://api.example.com/v1/auth/login \
-H "Content-Type: application/json" \
@@ -51130,14 +48447,13 @@ Read-only access to all resources and status information.
curl -X POST https://api.example.com/v1/servers/create \
-H "Authorization: Bearer $TOKEN" \
-H "Content-Type: application/json" \
- -d '{"workspace": "production", "provider": "upcloud", "plan": "2xCPU-4GB"}'
-```plaintext
-
-## Related Documentation
-
-- **API Reference**: [REST API Documentation](../api/rest-api.md)
-- **Architecture**: [API Gateway Integration](../architecture/integration-patterns.md)
+ -d '{"workspace": "production", "provider": "upcloud", "plan": "2xCPU-4GB"}'
+
+
This comprehensive guide covers creating, managing, and maintaining infrastructure using Infrastructure Automation.
@@ -51162,18 +48478,12 @@ Read-only access to all resources and status information.
Plan → Create → Deploy → Monitor → Scale → Update → Retire
-```plaintext
-
-Each phase has specific commands and considerations.
-
-## Server Management
-
-### Understanding Server Configuration
-
-Servers are defined in KCL configuration files:
-
-```kcl
-# Example server configuration
+
+Each phase has specific commands and considerations.
+
+
+Servers are defined in Nickel configuration files:
+# Example server configuration
import models.server
servers: [
@@ -51191,9 +48501,9 @@ servers: [
# Storage configuration
storage = {
- root_size = "50GB"
+ root_size = "50GB"
additional = [
- {name = "data", size = "100GB", type = "gp3"}
+ {name = "data", size = "100GB", type = "gp3"}
]
}
@@ -51212,14 +48522,10 @@ servers: [
}
}
]
-```plaintext
-
-### Server Lifecycle Commands
-
-#### Creating Servers
-
-```bash
-# Plan server creation (dry run)
+
+
+
+# Plan server creation (dry run)
provisioning server create --infra my-infra --check
# Create servers
@@ -51230,12 +48536,9 @@ provisioning server create --infra my-infra --wait --yes
# Create single server type
provisioning server create web --infra my-infra
-```plaintext
-
-#### Managing Existing Servers
-
-```bash
-# List all servers
+
+
+# List all servers
provisioning server list --infra my-infra
# Show detailed server information
@@ -51246,12 +48549,9 @@ provisioning show servers web-01 --infra my-infra
# Get server status
provisioning server status web-01 --infra my-infra
-```plaintext
-
-#### Server Operations
-
-```bash
-# Start/stop servers
+
+
+# Start/stop servers
provisioning server start web-01 --infra my-infra
provisioning server stop web-01 --infra my-infra
@@ -51263,12 +48563,9 @@ provisioning server resize web-01 --plan t3.large --infra my-infra
# Update server configuration
provisioning server update web-01 --infra my-infra
-```plaintext
-
-#### SSH Access
-
-```bash
-# SSH to server
+
+
+# SSH to server
provisioning server ssh web-01 --infra my-infra
# SSH with specific user
@@ -51280,12 +48577,9 @@ provisioning server exec web-01 "systemctl status kubernetes" --infra my-infra
# Copy files to/from server
provisioning server copy local-file.txt web-01:/tmp/ --infra my-infra
provisioning server copy web-01:/var/log/app.log ./logs/ --infra my-infra
-```plaintext
-
-#### Server Deletion
-
-```bash
-# Plan server deletion (dry run)
+
+
+# Plan server deletion (dry run)
provisioning server delete --infra my-infra --check
# Delete specific server
@@ -51296,25 +48590,20 @@ provisioning server delete web-01 --infra my-infra --yes
# Delete but keep storage
provisioning server delete web-01 --infra my-infra --keepstorage
-```plaintext
-
-## Task Service Management
-
-### Understanding Task Services
-
-Task services are software components installed on servers:
-
-- **Container Runtimes**: containerd, cri-o, docker
-- **Orchestration**: kubernetes, nomad
-- **Networking**: cilium, calico, haproxy
-- **Storage**: rook-ceph, longhorn, nfs
-- **Databases**: postgresql, mysql, mongodb
-- **Monitoring**: prometheus, grafana, alertmanager
-
-### Task Service Configuration
-
-```kcl
-# Task service configuration example
+
+
+
+Task services are software components installed on servers:
+
+Container Runtimes : containerd, cri-o, docker
+Orchestration : kubernetes, nomad
+Networking : cilium, calico, haproxy
+Storage : rook-ceph, longhorn, nfs
+Databases : postgresql, mysql, mongodb
+Monitoring : prometheus, grafana, alertmanager
+
+
+# Task service configuration example
taskservs: {
kubernetes: {
version = "1.28"
@@ -51340,7 +48629,7 @@ taskservs: {
version = "15"
port = 5432
max_connections = 200
- shared_buffers = "256MB"
+ shared_buffers = "256MB"
# High availability
replication = {
@@ -51357,14 +48646,10 @@ taskservs: {
}
}
}
-```plaintext
-
-### Task Service Commands
-
-#### Installing Services
-
-```bash
-# Install single service
+
+
+
+# Install single service
provisioning taskserv create kubernetes --infra my-infra
# Install multiple services
@@ -51375,12 +48660,9 @@ provisioning taskserv create kubernetes --version 1.28 --infra my-infra
# Install on specific servers
provisioning taskserv create postgresql --servers db-01,db-02 --infra my-infra
-```plaintext
-
-#### Managing Services
-
-```bash
-# List available services
+
+
+# List available services
provisioning taskserv list
# List installed services
@@ -51394,12 +48676,9 @@ provisioning taskserv status kubernetes --infra my-infra
# Check service health
provisioning taskserv health kubernetes --infra my-infra
-```plaintext
-
-#### Service Operations
-
-```bash
-# Start/stop services
+
+
+# Start/stop services
provisioning taskserv start kubernetes --infra my-infra
provisioning taskserv stop kubernetes --infra my-infra
@@ -51411,12 +48690,9 @@ provisioning taskserv update kubernetes --infra my-infra
# Configure services
provisioning taskserv configure kubernetes --config cluster.yaml --infra my-infra
-```plaintext
-
-#### Service Removal
-
-```bash
-# Remove service
+
+
+# Remove service
provisioning taskserv delete kubernetes --infra my-infra
# Remove with data cleanup
@@ -51424,12 +48700,9 @@ provisioning taskserv delete postgresql --cleanup-data --infra my-infra
# Remove from specific servers
provisioning taskserv delete kubernetes --servers worker-03 --infra my-infra
-```plaintext
-
-### Version Management
-
-```bash
-# Check for updates
+
+
+# Check for updates
provisioning taskserv check-updates --infra my-infra
# Check specific service updates
@@ -51443,16 +48716,11 @@ provisioning taskserv upgrade kubernetes --infra my-infra
# Upgrade to specific version
provisioning taskserv upgrade kubernetes --version 1.29 --infra my-infra
-```plaintext
-
-## Cluster Management
-
-### Understanding Clusters
-
-Clusters are collections of services that work together to provide functionality:
-
-```kcl
-# Cluster configuration example
+
+
+
+Clusters are collections of services that work together to provide functionality:
+# Cluster configuration example
clusters: {
web_cluster: {
name = "web-application"
@@ -51490,14 +48758,10 @@ clusters: {
}
}
}
-```plaintext
-
-### Cluster Commands
-
-#### Creating Clusters
-
-```bash
-# Create cluster
+
+
+
+# Create cluster
provisioning cluster create web-cluster --infra my-infra
# Create with specific configuration
@@ -51505,12 +48769,9 @@ provisioning cluster create web-cluster --config cluster.yaml --infra my-infra
# Create and deploy
provisioning cluster create web-cluster --deploy --infra my-infra
-```plaintext
-
-#### Managing Clusters
-
-```bash
-# List available clusters
+
+
+# List available clusters
provisioning cluster list
# List deployed clusters
@@ -51521,12 +48782,9 @@ provisioning cluster show web-cluster --infra my-infra
# Get cluster status
provisioning cluster status web-cluster --infra my-infra
-```plaintext
-
-#### Cluster Operations
-
-```bash
-# Deploy cluster
+
+
+# Deploy cluster
provisioning cluster deploy web-cluster --infra my-infra
# Scale cluster
@@ -51537,24 +48795,17 @@ provisioning cluster update web-cluster --infra my-infra
# Rolling update
provisioning cluster update web-cluster --rolling --infra my-infra
-```plaintext
-
-#### Cluster Deletion
-
-```bash
-# Delete cluster
+
+
+# Delete cluster
provisioning cluster delete web-cluster --infra my-infra
# Delete with data cleanup
provisioning cluster delete web-cluster --cleanup --infra my-infra
-```plaintext
-
-## Network Management
-
-### Network Configuration
-
-```kcl
-# Network configuration
+
+
+
+# Network configuration
network: {
vpc = {
cidr = "10.0.0.0/16"
@@ -51609,12 +48860,9 @@ network: {
}
]
}
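Subnet layouts carved from the `10.0.0.0/16` VPC range above can be validated with Python's `ipaddress` module; the /24 subnets below are example values of ours, not from the configuration:

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")  # the VPC cidr from the config above
# Example subnets (illustrative, not from the configuration):
subnets = [ipaddress.ip_network(s) for s in ("10.0.1.0/24", "10.0.2.0/24")]
assert all(s.subnet_of(vpc) for s in subnets)
print(vpc.num_addresses)  # → 65536
```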
-```plaintext
-
-### Network Commands
-
-```bash
-# Show network configuration
+
+
+# Show network configuration
provisioning network show --infra my-infra
# Create network resources
@@ -51625,20 +48873,16 @@ provisioning network update --infra my-infra
# Test network connectivity
provisioning network test --infra my-infra
-```plaintext
-
-## Storage Management
-
-### Storage Configuration
-
-```kcl
-# Storage configuration
+
+
+
+# Storage configuration
storage: {
# Block storage
volumes = [
{
name = "app-data"
- size = "100GB"
+ size = "100GB"
type = "gp3"
encrypted = true
}
@@ -51664,12 +48908,9 @@ storage: {
}
}
}
-```plaintext
-
-### Storage Commands
-
-```bash
-# Create storage resources
+
+
+# Create storage resources
provisioning storage create --infra my-infra
# List storage
@@ -51680,26 +48921,19 @@ provisioning storage backup --infra my-infra
# Restore from backup
provisioning storage restore --backup latest --infra my-infra
-```plaintext
-
-## Monitoring and Observability
-
-### Monitoring Setup
-
-```bash
-# Install monitoring stack
+
+
+
+# Install monitoring stack
provisioning taskserv create prometheus --infra my-infra
provisioning taskserv create grafana --infra my-infra
provisioning taskserv create alertmanager --infra my-infra
# Configure monitoring
provisioning taskserv configure prometheus --config monitoring.yaml --infra my-infra
-```plaintext
-
-### Health Checks
-
-```bash
-# Check overall infrastructure health
+
+
+# Check overall infrastructure health
provisioning health check --infra my-infra
# Check specific components
@@ -51709,12 +48943,9 @@ provisioning health check clusters --infra my-infra
# Continuous monitoring
provisioning health monitor --infra my-infra --watch
-```plaintext
-
-### Metrics and Alerting
-
-```bash
-# Get infrastructure metrics
+
+
+# Get infrastructure metrics
provisioning metrics get --infra my-infra
# Set up alerts
@@ -51722,14 +48953,10 @@ provisioning alerts create --config alerts.yaml --infra my-infra
# List active alerts
provisioning alerts list --infra my-infra
-```plaintext
-
-## Cost Management
-
-### Cost Monitoring
-
-```bash
-# Show current costs
+
+
+
+# Show current costs
provisioning cost show --infra my-infra
# Cost breakdown by component
@@ -51740,12 +48967,9 @@ provisioning cost trends --period 30d --infra my-infra
# Set cost alerts
provisioning cost alert --threshold 1000 --infra my-infra
-```plaintext
-
-### Cost Optimization
-
-```bash
-# Analyze cost optimization opportunities
+
+
+# Analyze cost optimization opportunities
provisioning cost optimize --infra my-infra
# Show unused resources
@@ -51753,14 +48977,10 @@ provisioning cost unused --infra my-infra
# Right-size recommendations
provisioning cost recommendations --infra my-infra
-```plaintext
-
-## Scaling Strategies
-
-### Manual Scaling
-
-```bash
-# Scale servers
+
+
+
+# Scale servers
provisioning server scale --count 5 --infra my-infra
# Scale specific service
@@ -51768,12 +48988,9 @@ provisioning taskserv scale kubernetes --nodes 3 --infra my-infra
# Scale cluster
provisioning cluster scale web-cluster --replicas 10 --infra my-infra
-```plaintext
-
-### Auto-scaling Configuration
-
-```kcl
-# Auto-scaling configuration
+
+
+# Auto-scaling configuration
auto_scaling: {
servers = {
min_count = 2
@@ -51800,14 +49017,10 @@ auto_scaling: {
}
}
}
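The min/max bounds and CPU metric above boil down to a clamped one-step scaling decision; an illustrative sketch (threshold and bound values are assumptions, since the configuration is truncated here):

```python
def desired_count(current: int, cpu_pct: float, min_count: int = 2, max_count: int = 10,
                  scale_up_at: float = 80.0, scale_down_at: float = 30.0) -> int:
    """One-step scale decision clamped to [min_count, max_count]; thresholds are illustrative."""
    if cpu_pct >= scale_up_at:
        return min(current + 1, max_count)
    if cpu_pct <= scale_down_at:
        return max(current - 1, min_count)
    return current

print(desired_count(3, 85.0), desired_count(2, 10.0), desired_count(10, 95.0))  # → 4 2 10
```

Stepping one unit at a time and clamping to the configured bounds avoids oscillation when the metric hovers near a threshold.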
-```plaintext
-
-## Disaster Recovery
-
-### Backup Strategies
-
-```bash
-# Full infrastructure backup
+
+
+
+# Full infrastructure backup
provisioning backup create --type full --infra my-infra
# Incremental backup
@@ -51815,12 +49028,9 @@ provisioning backup create --type incremental --infra my-infra
# Schedule automated backups
provisioning backup schedule --daily --time "02:00" --infra my-infra
-```plaintext
-
-### Recovery Procedures
-
-```bash
-# List available backups
+
+
+# List available backups
provisioning backup list --infra my-infra
# Restore infrastructure
@@ -51831,14 +49041,10 @@ provisioning restore --backup latest --components servers --infra my-infra
# Test restore (dry run)
provisioning restore --backup latest --test --infra my-infra
-```plaintext
-
-## Advanced Infrastructure Patterns
-
-### Multi-Region Deployment
-
-```kcl
-# Multi-region configuration
+
+
+
+# Multi-region configuration
regions: {
primary = {
name = "us-west-2"
@@ -51865,12 +49071,9 @@ regions: {
}
}
}
-```plaintext
-
-### Blue-Green Deployment
-
-```bash
-# Create green environment
+
+
+# Create green environment
provisioning generate infra --from production --name production-green
# Deploy to green
@@ -51883,12 +49086,9 @@ provisioning network switch --from production --to production-green
# Decommission blue
provisioning server delete --infra production --yes
-```plaintext
-
-### Canary Deployment
-
-```bash
-# Create canary environment
+
+
+# Create canary environment
provisioning cluster create web-cluster-canary --replicas 1 --infra my-infra
# Route small percentage of traffic
@@ -51901,16 +49101,11 @@ provisioning metrics monitor web-cluster-canary --infra my-infra
provisioning cluster promote web-cluster-canary --infra my-infra
# or
provisioning cluster rollback web-cluster-canary --infra my-infra
-```plaintext
-
-## Troubleshooting Infrastructure
-
-### Common Issues
-
-#### Server Creation Failures
-
-```bash
-# Check provider status
+
+
+
+
+# Check provider status
provisioning provider status aws
# Validate server configuration
@@ -51921,12 +49116,9 @@ provisioning provider quota --infra my-infra
# Debug server creation
provisioning --debug server create web-01 --infra my-infra
-```plaintext
-
-#### Service Installation Failures
-
-```bash
-# Check service prerequisites
+
+
+# Check service prerequisites
provisioning taskserv check kubernetes --infra my-infra
# Validate service configuration
@@ -51937,12 +49129,9 @@ provisioning taskserv logs kubernetes --infra my-infra
# Debug service installation
provisioning --debug taskserv create kubernetes --infra my-infra
-```plaintext
-
-#### Network Connectivity Issues
-
-```bash
-# Test network connectivity
+
+
+# Test network connectivity
provisioning network test --infra my-infra
# Check security groups
@@ -51950,12 +49139,9 @@ provisioning network security-groups --infra my-infra
# Trace network path
provisioning network trace --from web-01 --to db-01 --infra my-infra
-```plaintext
-
-### Performance Optimization
-
-```bash
-# Analyze performance bottlenecks
+
+
+# Analyze performance bottlenecks
provisioning performance analyze --infra my-infra
# Get performance recommendations
@@ -51963,30 +49149,22 @@ provisioning performance recommendations --infra my-infra
# Monitor resource utilization
provisioning performance monitor --infra my-infra --duration 1h
-```plaintext
-
-## Testing Infrastructure
-
-The provisioning system includes a comprehensive **Test Environment Service** for automated testing of infrastructure components before deployment.
-
-### Why Test Infrastructure?
-
-Testing infrastructure before production deployment helps:
-
-- **Validate taskserv configurations** before installing on production servers
-- **Test integration** between multiple taskservs
-- **Verify cluster topologies** (Kubernetes, etcd, etc.) before deployment
-- **Catch configuration errors** early in the development cycle
-- **Ensure compatibility** between components
-
-### Test Environment Types
-
-#### 1. Single Taskserv Testing
-
-Test individual taskservs in isolated containers:
-
-```bash
-# Quick test (create, run, cleanup automatically)
+
+
+The provisioning system includes a comprehensive Test Environment Service for automated testing of infrastructure components before deployment.
+
+Testing infrastructure before production deployment helps:
+
+Validate taskserv configurations before installing on production servers
+Test integration between multiple taskservs
+Verify cluster topologies (Kubernetes, etcd, etc.) before deployment
+Catch configuration errors early in the development cycle
+Ensure compatibility between components
+
+
+
+Test individual taskservs in isolated containers:
+# Quick test (create, run, cleanup automatically)
provisioning test quick kubernetes
# Single taskserv with custom resources
@@ -51998,14 +49176,10 @@ provisioning test env single postgres \
# Test with specific infrastructure context
provisioning test env single redis --infra my-infra
-```plaintext
-
-#### 2. Server Simulation
-
-Test complete server configurations with multiple taskservs:
-
-```bash
-# Simulate web server with multiple taskservs
+
+
+Test complete server configurations with multiple taskservs:
+# Simulate web server with multiple taskservs
provisioning test env server web-01 [containerd kubernetes cilium] \
--auto-start
@@ -52013,14 +49187,10 @@ provisioning test env server web-01 [containerd kubernetes cilium] \
provisioning test env server db-01 [postgres redis] \
--infra prod-stack \
--auto-start
-```plaintext
-
-#### 3. Multi-Node Cluster Testing
-
-Test complex cluster topologies before production deployment:
-
-```bash
-# Test 3-node Kubernetes cluster
+
+
+Test complex cluster topologies before production deployment:
+# Test 3-node Kubernetes cluster
provisioning test topology load kubernetes_3node | \
test env cluster kubernetes --auto-start
@@ -52031,12 +49201,9 @@ provisioning test topology load etcd_cluster | \
# Test single-node Kubernetes
provisioning test topology load kubernetes_single | \
test env cluster kubernetes --auto-start
-```plaintext
-
-### Managing Test Environments
-
-```bash
-# List all test environments
+
+
+# List all test environments
provisioning test env list
# Check environment status
@@ -52047,26 +49214,20 @@ provisioning test env logs <env-id>
# Cleanup environment when done
provisioning test env cleanup <env-id>
-```plaintext
-
-### Available Topology Templates
-
-Pre-configured multi-node cluster templates:
-
-| Template | Description | Use Case |
-|----------|-------------|----------|
-| `kubernetes_3node` | 3-node HA K8s cluster | Production-like K8s testing |
-| `kubernetes_single` | All-in-one K8s node | Development K8s testing |
-| `etcd_cluster` | 3-member etcd cluster | Distributed consensus testing |
-| `containerd_test` | Standalone containerd | Container runtime testing |
-| `postgres_redis` | Database stack | Database integration testing |
-
-### Test Environment Workflow
-
-Typical testing workflow:
-
-```bash
-# 1. Test new taskserv before deploying
+
+
+Pre-configured multi-node cluster templates:
+Template            Description              Use Case
+kubernetes_3node    3-node HA K8s cluster    Production-like K8s testing
+kubernetes_single   All-in-one K8s node      Development K8s testing
+etcd_cluster        3-member etcd cluster    Distributed consensus testing
+containerd_test     Standalone containerd    Container runtime testing
+postgres_redis      Database stack           Database integration testing
+
+
+
+Typical testing workflow:
+# 1. Test new taskserv before deploying
provisioning test quick kubernetes
# 2. If successful, test server configuration
@@ -52080,14 +49241,10 @@ provisioning test topology load kubernetes_3node | \
# 4. Deploy to production
provisioning server create --infra production
provisioning taskserv create kubernetes --infra production
-```plaintext
-
-### CI/CD Integration
-
-Integrate infrastructure testing into CI/CD pipelines:
-
-```yaml
-# GitLab CI example
+
+
+Integrate infrastructure testing into CI/CD pipelines:
+# GitLab CI example
test-infrastructure:
stage: test
script:
@@ -52107,19 +49264,16 @@ test-infrastructure:
when: on_failure
paths:
- test-logs/
-```plaintext
-
-### Prerequisites
-
-Test environments require:
-
-1. **Docker Running**: Test environments use Docker containers
-
- ```bash
- docker ps # Should work without errors
+
+Test environments require:
+Docker Running : Test environments use Docker containers
+docker ps # Should work without errors
+
+
+
Orchestrator Running : The orchestrator manages test containers
cd provisioning/platform/orchestrator
./scripts/start-orchestrator.nu --background
@@ -52149,20 +49303,13 @@ taskservs = ["postgres"]
[my_cluster.nodes.resources]
cpu_millicores = 1000
memory_mb = 2048
-```plaintext
-
-Load and test custom topology:
-
-```bash
-provisioning test env cluster custom-app custom-topology.toml --auto-start
-```plaintext
-
-#### Integration Testing
-
-Test taskserv dependencies:
-
-```bash
-# Test Kubernetes dependencies in order
+
+Load and test custom topology:
+provisioning test env cluster custom-app custom-topology.toml --auto-start
+
+
+Test taskserv dependencies:
+# Test Kubernetes dependencies in order
provisioning test quick containerd
provisioning test quick etcd
provisioning test quick kubernetes
@@ -52172,29 +49319,24 @@ provisioning test quick cilium
provisioning test env server k8s-stack \
[containerd etcd kubernetes cilium] \
--auto-start
-```plaintext
-
-### Documentation
-
-For complete test environment documentation:
-
-- **Test Environment Guide**: `docs/user/test-environment-guide.md`
-- **Detailed Usage**: `docs/user/test-environment-usage.md`
-- **Orchestrator README**: `provisioning/platform/orchestrator/README.md`
-
-## Best Practices
-
-### 1. Infrastructure Design
-
-- **Principle of Least Privilege**: Grant minimal necessary access
-- **Defense in Depth**: Multiple layers of security
-- **High Availability**: Design for failure resilience
-- **Scalability**: Plan for growth from the start
-
-### 2. Operational Excellence
-
-```bash
-# Always validate before applying changes
+
+
+For complete test environment documentation:
+
+Test Environment Guide : docs/user/test-environment-guide.md
+Detailed Usage : docs/user/test-environment-usage.md
+Orchestrator README : provisioning/platform/orchestrator/README.md
+
+
+
+
+Principle of Least Privilege : Grant minimal necessary access
+Defense in Depth : Multiple layers of security
+High Availability : Design for failure resilience
+Scalability : Plan for growth from the start
+
+
+# Always validate before applying changes
provisioning validate config --infra my-infra
# Use check mode for dry runs
@@ -52205,25 +49347,19 @@ provisioning health monitor --infra my-infra
# Regular backups
provisioning backup schedule --daily --infra my-infra
-```plaintext
-
-### 3. Security
-
-```bash
-# Regular security updates
+
+
+# Regular security updates
provisioning taskserv update --security-only --infra my-infra
# Encrypt sensitive data
-provisioning sops settings.k --infra my-infra
+provisioning sops settings.ncl --infra my-infra
# Audit access
provisioning audit logs --infra my-infra
-```plaintext
-
-### 4. Cost Optimization
-
-```bash
-# Regular cost reviews
+
+
+# Regular cost reviews
provisioning cost analyze --infra my-infra
# Right-size resources
@@ -52231,78 +49367,58 @@ provisioning cost optimize --apply --infra my-infra
# Use reserved instances for predictable workloads
provisioning server reserve --infra my-infra
-```plaintext
-
-## Next Steps
-
-Now that you understand infrastructure management:
-
-1. **Learn about extensions**: [Extension Development Guide](extension-development.md)
-2. **Master configuration**: [Configuration Guide](configuration.md)
-3. **Explore advanced examples**: [Examples and Tutorials](examples/)
-4. **Set up monitoring and alerting**
-5. **Implement automated scaling**
-6. **Plan disaster recovery procedures**
-
-You now have the knowledge to build and manage robust, scalable cloud infrastructure!
+
+Now that you understand infrastructure management:
+
+Learn about extensions : Extension Development Guide
+Master configuration : Configuration Guide
+Explore advanced examples : Examples and Tutorials
+Set up monitoring and alerting
+Implement automated scaling
+Plan disaster recovery procedures
+
+You now have the knowledge to build and manage robust, scalable cloud infrastructure!
-
+
The Infrastructure-from-Code system automatically detects technologies in your project and infers infrastructure requirements based on organization-specific rules. It consists of three main commands:
detect : Scan a project and identify technologies
complete : Analyze gaps and recommend infrastructure components
ifc : Full-pipeline orchestration (workflow)
-
+
Scan a project directory for detected technologies:
provisioning detect /path/to/project --out json
-```plaintext
-
-**Output Example:**
-
-```json
-{
+
+Output Example:
+{
"detections": [
{"technology": "nodejs", "confidence": 0.95},
{"technology": "postgres", "confidence": 0.92}
],
"overall_confidence": 0.93
}
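The `overall_confidence` field appears to aggregate the per-detection scores. As a rough sketch (an assumption — the detector may weight or truncate values differently), a plain average can be computed in shell:

```shell
# Average the per-detection confidences from sample values like the example above
# (hypothetical aggregation; the real detector may use a different formula).
printf '%s\n' 0.95 0.92 | awk '{ s += $1; n++ } END { printf "%.3f\n", s / n }'
```

The example report above shows 0.93, so truncation rather than rounding may be involved.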
-```plaintext
-
-### 2. Analyze Infrastructure Gaps
-
-Get a completeness assessment and recommendations:
-
-```bash
-provisioning complete /path/to/project --out json
-```plaintext
-
-**Output Example:**
-
-```json
-{
+
+
+Get a completeness assessment and recommendations:
+provisioning complete /path/to/project --out json
+
+Output Example:
+{
"completeness": 1.0,
"changes_needed": 2,
"is_safe": true,
"change_summary": "+ Adding: postgres-backup, pg-monitoring"
}
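In automation, the `completeness` value can gate a pipeline. A minimal jq-free sketch using only awk (the JSON string here is sample data, not real command output):

```shell
# Extract the "completeness" field from a sample report and gate on a 0.9 threshold.
report='{"completeness": 1.0, "changes_needed": 2, "is_safe": true}'
completeness=$(printf '%s' "$report" | awk -F'"completeness": ' '{ split($2, a, ","); print a[1] }')
awk -v c="$completeness" 'BEGIN { if (c >= 0.9) exit 0; exit 1 }' && echo "complete: $completeness"
```

A fuller CI/CD variant using `jq` and `bc` appears later in this guide.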
-```plaintext
-
-### 3. Run Full Workflow
-
-Orchestrate detection → completion → assessment pipeline:
-
-```bash
-provisioning ifc /path/to/project --org default
-```plaintext
-
-**Output:**
-
-```plaintext
-━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
+
+
+Orchestrate detection → completion → assessment pipeline:
+provisioning ifc /path/to/project --org default
+
+Output:
+━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔄 Infrastructure-from-Code Workflow
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
@@ -52315,35 +49431,26 @@ STEP 2: Infrastructure Completion
 ✓ Completeness: 100%
✅ Workflow Complete
-```plaintext
-
-## Command Reference
-
-### detect
-
-Scan and detect technologies in a project.
-
-**Usage:**
-
-```bash
-provisioning detect [PATH] [OPTIONS]
-```plaintext
-
-**Arguments:**
-
-- `PATH`: Project directory to analyze (default: current directory)
-
-**Options:**
-
-- `-o, --out TEXT`: Output format - `text`, `json`, `yaml` (default: `text`)
-- `-C, --high-confidence-only`: Only show detections with confidence > 0.8
-- `--pretty`: Pretty-print JSON/YAML output
-- `-x, --debug`: Enable debug output
-
-**Examples:**
-
-```bash
-# Detect with default text output
+
+
+
+Scan and detect technologies in a project.
+Usage:
+provisioning detect [PATH] [OPTIONS]
+
+Arguments:
+
+PATH: Project directory to analyze (default: current directory)
+
+Options:
+
+-o, --out TEXT: Output format - text, json, yaml (default: text)
+-C, --high-confidence-only: Only show detections with confidence > 0.8
+--pretty: Pretty-print JSON/YAML output
+-x, --debug: Enable debug output
+
+Examples:
+# Detect with default text output
provisioning detect /path/to/project
# Get JSON output for parsing
@@ -52354,33 +49461,25 @@ provisioning detect /path/to/project --high-confidence-only
# Pretty-printed YAML output
provisioning detect /path/to/project --out yaml --pretty
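The `--high-confidence-only` flag keeps detections with confidence above 0.8. The same filter can be sketched over sample detector output (hypothetical values) with awk:

```shell
# Keep only technologies whose confidence exceeds 0.8 (sample data, not real output).
printf '%s\n' "nodejs 0.95" "postgres 0.92" "ruby 0.41" |
  awk '$2 > 0.8 { print $1 }'   # prints: nodejs, postgres
```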
-```plaintext
-
-### complete
-
-Analyze infrastructure completeness and recommend changes.
-
-**Usage:**
-
-```bash
-provisioning complete [PATH] [OPTIONS]
-```plaintext
-
-**Arguments:**
-
-- `PATH`: Project directory to analyze (default: current directory)
-
-**Options:**
-
-- `-o, --out TEXT`: Output format - `text`, `json`, `yaml` (default: `text`)
-- `-c, --check`: Check mode (report only, no changes)
-- `--pretty`: Pretty-print JSON/YAML output
-- `-x, --debug`: Enable debug output
-
-**Examples:**
-
-```bash
-# Analyze completeness
+
+
+Analyze infrastructure completeness and recommend changes.
+Usage:
+provisioning complete [PATH] [OPTIONS]
+
+Arguments:
+
+PATH: Project directory to analyze (default: current directory)
+
+Options:
+
+-o, --out TEXT: Output format - text, json, yaml (default: text)
+-c, --check: Check mode (report only, no changes)
+--pretty: Pretty-print JSON/YAML output
+-x, --debug: Enable debug output
+
+Examples:
+# Analyze completeness
provisioning complete /path/to/project
# Get detailed JSON report
@@ -52388,35 +49487,27 @@ provisioning complete /path/to/project --out json
# Check mode (dry-run, no changes)
provisioning complete /path/to/project --check
-```plaintext
-
-### ifc (workflow)
-
-Run the full Infrastructure-from-Code pipeline.
-
-**Usage:**
-
-```bash
-provisioning ifc [PATH] [OPTIONS]
-```plaintext
-
-**Arguments:**
-
-- `PATH`: Project directory to process (default: current directory)
-
-**Options:**
-
-- `--org TEXT`: Organization name for rule loading (default: `default`)
-- `-o, --out TEXT`: Output format - `text`, `json` (default: `text`)
-- `--apply`: Apply recommendations (future feature)
-- `-v, --verbose`: Verbose output with timing
-- `--pretty`: Pretty-print output
-- `-x, --debug`: Enable debug output
-
-**Examples:**
-
-```bash
-# Run workflow with default rules
+
+
+Run the full Infrastructure-from-Code pipeline.
+Usage:
+provisioning ifc [PATH] [OPTIONS]
+
+Arguments:
+
+PATH: Project directory to process (default: current directory)
+
+Options:
+
+--org TEXT: Organization name for rule loading (default: default)
+-o, --out TEXT: Output format - text, json (default: text)
+--apply: Apply recommendations (future feature)
+-v, --verbose: Verbose output with timing
+--pretty: Pretty-print output
+-x, --debug: Enable debug output
+
+Examples:
+# Run workflow with default rules
provisioning ifc /path/to/project
# Run with organization-specific rules
@@ -52427,20 +49518,13 @@ provisioning ifc /path/to/project --verbose
# JSON output for automation
provisioning ifc /path/to/project --out json
-```plaintext
-
-## Organization-Specific Inference Rules
-
-Customize how infrastructure is inferred for your organization.
-
-### Understanding Inference Rules
-
-An inference rule tells the system: "If we detect technology X, we should recommend taskservice Y."
-
-**Rule Structure:**
-
-```yaml
-version: "1.0.0"
+
+
+Customize how infrastructure is inferred for your organization.
+
+An inference rule tells the system: "If we detect technology X, we should recommend taskserv Y."
+Rule Structure:
+version: "1.0.0"
organization: "your-org"
rules:
- name: "rule-name"
@@ -52449,14 +49533,10 @@ rules:
confidence: 0.85
reason: "Why this taskserv is needed"
required: true
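Conceptually, applying a rule is a lookup from detected technology to recommended taskserv. A minimal shell sketch with an inline rule table (hypothetical pairs, not the real rules-file format):

```shell
# Map a detected technology to the taskserv its inference rule recommends.
rules='nodejs:redis
postgres:postgres-backup
docker:kubernetes'
detected="postgres"
recommended=$(printf '%s\n' "$rules" | awk -F: -v t="$detected" '$1 == t { print $2 }')
echo "recommend: $recommended"   # recommend: postgres-backup
```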
-```plaintext
-
-### Creating Custom Rules
-
-Create an organization-specific rules file:
-
-```bash
-# ACME Corporation rules
+
+
+Create an organization-specific rules file:
+# ACME Corporation rules
cat > $PROVISIONING/config/inference-rules/acme-corp.yaml << 'EOF'
version: "1.0.0"
organization: "acme-corp"
@@ -52484,73 +49564,49 @@ rules:
reason: "ACME requires monitoring on production services"
required: true
EOF
-```plaintext
-
-Then use them:
-
-```bash
-provisioning ifc /path/to/project --org acme-corp
-```plaintext
-
-### Default Rules
-
-If no organization rules are found, the system uses sensible defaults:
-
-- Node.js + Express → Redis (caching)
-- Node.js → Nginx (reverse proxy)
-- Database → Backup (data protection)
-- Docker → Kubernetes (orchestration)
-- Python → Gunicorn (WSGI server)
-- PostgreSQL → Monitoring (production safety)
-
-## Output Formats
-
-### Text Output (Default)
-
-Human-readable format with visual indicators:
-
-```plaintext
-STEP 1: Technology Detection
+
+Then use them:
+provisioning ifc /path/to/project --org acme-corp
+
+
+If no organization rules are found, the system uses sensible defaults:
+
+Node.js + Express → Redis (caching)
+Node.js → Nginx (reverse proxy)
+Database → Backup (data protection)
+Docker → Kubernetes (orchestration)
+Python → Gunicorn (WSGI server)
+PostgreSQL → Monitoring (production safety)
+
+
+
+Human-readable format with visual indicators:
+STEP 1: Technology Detection
────────────────────────────
✓ Detected 2 technologies
STEP 2: Infrastructure Completion
─────────────────────────────────
✓ Completeness: 100%
-```plaintext
-
-### JSON Output
-
-Structured format for automation and parsing:
-
-```bash
-provisioning detect /path/to/project --out json | jq '.detections[0]'
-```plaintext
-
-Output:
-
-```json
-{
+
+
+Structured format for automation and parsing:
+provisioning detect /path/to/project --out json | jq '.detections[0]'
+
+Output:
+{
"technology": "nodejs",
"confidence": 0.8333333134651184,
"evidence_count": 1
}
-```plaintext
-
-### YAML Output
-
-Alternative structured format:
-
-```bash
-provisioning detect /path/to/project --out yaml
-```plaintext
-
-## Practical Examples
-
-### Example 1: Node.js + PostgreSQL Project
-
-```bash
-# Step 1: Detect
+
+
+Alternative structured format:
+provisioning detect /path/to/project --out yaml
+
+
+
+# Step 1: Detect
$ provisioning detect my-app
✓ Detected: nodejs, express, postgres, docker
@@ -52563,12 +49619,9 @@ $ provisioning complete my-app
# Step 3: Full workflow
$ provisioning ifc my-app --org acme-corp
-```plaintext
-
-### Example 2: Python Django Project
-
-```bash
-$ provisioning detect django-app --out json
+
+
+$ provisioning detect django-app --out json
{
"detections": [
{"technology": "python", "confidence": 0.95},
@@ -52577,12 +49630,9 @@ $ provisioning detect django-app --out json
}
# Inferred requirements (with gunicorn, monitoring, backup)
-```plaintext
-
-### Example 3: Microservices Architecture
-
-```bash
-$ provisioning ifc microservices/ --org mycompany --verbose
+
+
+$ provisioning ifc microservices/ --org mycompany --verbose
🔍 Processing microservices/
- service-a: nodejs + postgres
- service-b: python + redis
@@ -52591,14 +49641,10 @@ $ provisioning ifc microservices/ --org mycompany --verbose
✓ Detected common patterns
✓ Applied 12 inference rules
✓ Generated deployment plan
-```plaintext
-
-## Integration with Automation
-
-### CI/CD Pipeline Example
-
-```bash
-#!/bin/bash
+
+
+
+#!/bin/bash
# Check infrastructure completeness in CI/CD
PROJECT_PATH=${1:-.}
@@ -52610,86 +49656,61 @@ if (( $(echo "$COMPLETENESS < 0.9" | bc -l) )); then
fi
echo "✅ Infrastructure is complete: $COMPLETENESS"
-```plaintext
-
-### Configuration as Code Integration
-
-```bash
-# Generate JSON for infrastructure config
+
+
+# Generate JSON for infrastructure config
provisioning detect /path/to/project --out json > infra-report.json
# Use in your config processing
cat infra-report.json | jq '.detections[]' | while read -r tech; do
echo "Processing technology: $tech"
done
-```plaintext
-
-## Troubleshooting
-
-### "Detector binary not found"
-
-**Solution:** Ensure the provisioning project is properly built:
-
-```bash
-cd $PROVISIONING/platform
+
+
+
+Solution: Ensure the provisioning project is properly built:
+cd $PROVISIONING/platform
cargo build --release --bin provisioning-detector
-```plaintext
-
-### No technologies detected
-
-**Check:**
-
-1. Project path is correct: `provisioning detect /actual/path`
-2. Project contains recognizable technologies (package.json, Dockerfile, requirements.txt, etc.)
-3. Use `--debug` flag for more details: `provisioning detect /path --debug`
-
-### Organization rules not being applied
-
-**Check:**
-
-1. Rules file exists: `$PROVISIONING/config/inference-rules/{org}.yaml`
-2. Organization name is correct: `provisioning ifc /path --org myorg`
-3. Verify rules structure with: `cat $PROVISIONING/config/inference-rules/myorg.yaml`
-
-## Advanced Usage
-
-### Custom Rule Template
-
-Generate a template for a new organization:
-
-```bash
-# Template will be created with proper structure
+
+
+Check:
+
+Project path is correct: provisioning detect /actual/path
+Project contains recognizable technologies (package.json, Dockerfile, requirements.txt, etc.)
+Use --debug flag for more details: provisioning detect /path --debug
+
+
+Check:
+
+Rules file exists: $PROVISIONING/config/inference-rules/{org}.yaml
+Organization name is correct: provisioning ifc /path --org myorg
+Verify rules structure with: cat $PROVISIONING/config/inference-rules/myorg.yaml
+
+
+
+Generate a template for a new organization:
+# Template will be created with proper structure
provisioning rules create --org neworg
-```plaintext
-
-### Validate Rule Files
-
-```bash
-# Check for syntax errors
+
+
+# Check for syntax errors
provisioning rules validate /path/to/rules.yaml
-```plaintext
-
-### Export Rules for Integration
-
-Export as Rust code for embedding:
-
-```bash
-provisioning rules export myorg --format rust > rules.rs
-```plaintext
-
-## Best Practices
-
-1. **Organize by Organization**: Keep separate rules for different organizations
-2. **High Confidence First**: Start with rules you're confident about (confidence > 0.8)
-3. **Document Reasons**: Always fill in the `reason` field for maintainability
-4. **Test Locally**: Run on sample projects before applying organization-wide
-5. **Version Control**: Commit inference rules to version control
-6. **Review Changes**: Always inspect recommendations with `--check` first
-
-## Related Commands
-
-```bash
-# View available taskservs that can be inferred
+
+
+Export as Rust code for embedding:
+provisioning rules export myorg --format rust > rules.rs
+
+
+
+Organize by Organization : Keep separate rules for different organizations
+High Confidence First : Start with rules you’re confident about (confidence > 0.8)
+Document Reasons : Always fill in the reason field for maintainability
+Test Locally : Run on sample projects before applying organization-wide
+Version Control : Commit inference rules to version control
+Review Changes : Always inspect recommendations with --check first
+
+
+# View available taskservs that can be inferred
provisioning taskserv list
# Create inferred infrastructure
@@ -52697,23 +49718,18 @@ provisioning taskserv create {inferred-name}
# View current configuration
provisioning env | grep PROVISIONING
-```plaintext
-
-## Support and Documentation
-
-- **Full CLI Help**: `provisioning help`
-- **Specific Command Help**: `provisioning help detect`
-- **Configuration Guide**: See `CONFIG_ENCRYPTION_GUIDE.md`
-- **Task Services**: See `SERVICE_MANAGEMENT_GUIDE.md`
-
----
-
-## Quick Reference
-
-### 3-Step Workflow
-
-```bash
-# 1. Detect technologies
+
+
+
+Full CLI Help : provisioning help
+Specific Command Help : provisioning help detect
+Configuration Guide : See CONFIG_ENCRYPTION_GUIDE.md
+Task Services : See SERVICE_MANAGEMENT_GUIDE.md
+
+
+
+
+# 1. Detect technologies
provisioning detect /path/to/project
# 2. Analyze infrastructure gaps
@@ -52721,24 +49737,20 @@ provisioning complete /path/to/project
# 3. Run full workflow (detect + complete)
provisioning ifc /path/to/project --org myorg
-```plaintext
-
-### Common Commands
-
-| Task | Command |
-|------|---------|
-| **Detect technologies** | `provisioning detect /path` |
-| **Get JSON output** | `provisioning detect /path --out json` |
-| **Check completeness** | `provisioning complete /path` |
-| **Dry-run (check mode)** | `provisioning complete /path --check` |
-| **Full workflow** | `provisioning ifc /path --org myorg` |
-| **Verbose output** | `provisioning ifc /path --verbose` |
-| **Debug mode** | `provisioning detect /path --debug` |
-
-### Output Formats
-
-```bash
-# Text (human-readable)
+
+
+Task                   Command
+Detect technologies    provisioning detect /path
+Get JSON output        provisioning detect /path --out json
+Check completeness     provisioning complete /path
+Dry-run (check mode)   provisioning complete /path --check
+Full workflow          provisioning ifc /path --org myorg
+Verbose output         provisioning ifc /path --verbose
+Debug mode             provisioning detect /path --debug
+
+
+
+# Text (human-readable)
provisioning detect /path --out text
# JSON (for automation)
@@ -52746,20 +49758,13 @@ provisioning detect /path --out json | jq '.detections'
# YAML (for configuration)
provisioning detect /path --out yaml
-```plaintext
-
-### Organization Rules
-
-#### Use Organization Rules
-
-```bash
-provisioning ifc /path --org acme-corp
-```plaintext
-
-#### Create Rules File
-
-```bash
-mkdir -p $PROVISIONING/config/inference-rules
+
+
+
+provisioning ifc /path --org acme-corp
+
+
+mkdir -p $PROVISIONING/config/inference-rules
cat > $PROVISIONING/config/inference-rules/myorg.yaml << 'EOF'
version: "1.0.0"
organization: "myorg"
@@ -52771,12 +49776,9 @@ rules:
reason: "Caching layer"
required: false
EOF
-```plaintext
-
-### Example: Node.js + PostgreSQL
-
-```bash
-$ provisioning detect myapp
+
+
+$ provisioning detect myapp
✓ Detected: nodejs, postgres
$ provisioning complete myapp
@@ -52786,12 +49788,9 @@ $ provisioning ifc myapp --org default
✓ Detection: 2 technologies
✓ Completion: recommended changes
✅ Workflow complete
-```plaintext
-
-### CI/CD Integration
-
-```bash
-#!/bin/bash
+
+
+#!/bin/bash
# Check infrastructure is complete before deploy
COMPLETENESS=$(provisioning complete . --out json | jq '.completeness')
@@ -52799,74 +49798,61 @@ if (( $(echo "$COMPLETENESS < 0.9" | bc -l) )); then
echo "Infrastructure incomplete: $COMPLETENESS"
exit 1
fi
-```plaintext
-
-### JSON Output Examples
-
-#### Detect Output
-
-```json
-{
+
+
+
+{
"detections": [
{"technology": "nodejs", "confidence": 0.95},
{"technology": "postgres", "confidence": 0.92}
],
"overall_confidence": 0.93
}
-```plaintext
-
-#### Complete Output
-
-```json
-{
+
+
+{
"completeness": 1.0,
"changes_needed": 2,
"is_safe": true,
"change_summary": "+ redis, + monitoring"
}
-```plaintext
-
-### Flag Reference
-
-| Flag | Short | Purpose |
-|------|-------|---------|
-| `--out TEXT` | `-o` | Output format: text, json, yaml |
-| `--debug` | `-x` | Enable debug output |
-| `--pretty` | | Pretty-print JSON/YAML |
-| `--check` | `-c` | Dry-run (detect/complete) |
-| `--org TEXT` | | Organization name (ifc) |
-| `--verbose` | `-v` | Verbose output (ifc) |
-| `--apply` | | Apply changes (ifc, future) |
-
-### Troubleshooting
-
-| Issue | Solution |
-|-------|----------|
-| "Detector binary not found" | `cd $PROVISIONING/platform && cargo build --release` |
-| No technologies detected | Check file types (.py, .js, go.mod, package.json, etc.) |
-| Organization rules not found | Verify file exists: `$PROVISIONING/config/inference-rules/{org}.yaml` |
-| Invalid path error | Use absolute path: `provisioning detect /full/path` |
-
-### Environment Variables
-
-| Variable | Purpose |
-|----------|---------|
-| `$PROVISIONING` | Path to provisioning root |
-| `$PROVISIONING_ORG` | Default organization (optional) |
-
-### Default Inference Rules
-
-- Node.js + Express → Redis (caching)
-- Node.js → Nginx (reverse proxy)
-- Database → Backup (data protection)
-- Docker → Kubernetes (orchestration)
-- Python → Gunicorn (WSGI)
-- PostgreSQL → Monitoring (production)
-
-### Useful Aliases
-
-```bash
-# Add to shell config
+
+
+Flag         Short   Purpose
+--out TEXT   -o      Output format: text, json, yaml
+--debug      -x      Enable debug output
+--pretty             Pretty-print JSON/YAML
+--check      -c      Dry-run (detect/complete)
+--org TEXT           Organization name (ifc)
+--verbose    -v      Verbose output (ifc)
+--apply              Apply changes (ifc, future)
+
+
+
+Issue                          Solution
+"Detector binary not found"    cd $PROVISIONING/platform && cargo build --release
+No technologies detected       Check file types (.py, .js, go.mod, package.json, etc.)
+Organization rules not found   Verify file exists: $PROVISIONING/config/inference-rules/{org}.yaml
+Invalid path error             Use absolute path: provisioning detect /full/path
+
+
+
+Variable            Purpose
+$PROVISIONING       Path to provisioning root
+$PROVISIONING_ORG   Default organization (optional)
+
+
+
+
+Node.js + Express → Redis (caching)
+Node.js → Nginx (reverse proxy)
+Database → Backup (data protection)
+Docker → Kubernetes (orchestration)
+Python → Gunicorn (WSGI)
+PostgreSQL → Monitoring (production)
+
+
+# Add to shell config
alias detect='provisioning detect'
alias complete='provisioning complete'
alias ifc='provisioning ifc'
@@ -52875,63 +49861,49 @@ alias ifc='provisioning ifc'
detect /my/project
complete /my/project
ifc /my/project --org myorg
-```plaintext
-
-### Tips & Tricks
-
-**Parse JSON in bash:**
-
-```bash
-provisioning detect . --out json | \
+
+
+Parse JSON in bash:
+provisioning detect . --out json | \
jq '.detections[] | .technology' | \
sort | uniq
-```plaintext
-
-**Watch for changes:**
-
-```bash
-watch -n 5 'provisioning complete . --out json | jq ".completeness"'
-```plaintext
-
-**Generate reports:**
-
-```bash
-provisioning detect . --out yaml > detection-report.yaml
+
+Watch for changes:
+watch -n 5 'provisioning complete . --out json | jq ".completeness"'
+
+Generate reports:
+provisioning detect . --out yaml > detection-report.yaml
provisioning complete . --out yaml > completion-report.yaml
-```plaintext
-
-**Validate all organizations:**
-
-```bash
-for org in $PROVISIONING/config/inference-rules/*.yaml; do
+
+Validate all organizations:
+for org in $PROVISIONING/config/inference-rules/*.yaml; do
org_name=$(basename "$org" .yaml)
echo "Testing $org_name..."
provisioning ifc . --org "$org_name" --check
done
-```plaintext
-
-### Related Guides
-
-- Full guide: `docs/user/INFRASTRUCTURE_FROM_CODE_GUIDE.md`
-- Inference rules: `docs/user/INFRASTRUCTURE_FROM_CODE_GUIDE.md#organization-specific-inference-rules`
-- Service management: `docs/user/SERVICE_MANAGEMENT_QUICKREF.md`
-- Configuration: `docs/user/CONFIG_ENCRYPTION_QUICKREF.md`
+
+
+Full guide: docs/user/INFRASTRUCTURE_FROM_CODE_GUIDE.md
+Inference rules: docs/user/INFRASTRUCTURE_FROM_CODE_GUIDE.md#organization-specific-inference-rules
+Service management: docs/user/SERVICE_MANAGEMENT_QUICKREF.md
+Configuration: docs/user/CONFIG_ENCRYPTION_QUICKREF.md
+
A comprehensive batch workflow system has been implemented using 10 token-optimized agents, achieving 85-90% token efficiency compared with monolithic approaches. The system enables provider-agnostic batch operations with mixed-provider support (UpCloud + AWS + local).
-
+
Provider-Agnostic Design : Single workflows supporting multiple cloud providers
-KCL Schema Integration : Type-safe workflow definitions with comprehensive validation
+Nickel Schema Integration : Type-safe workflow definitions with comprehensive validation
Dependency Resolution : Topological sorting with soft/hard dependency support
State Management : Checkpoint-based recovery with rollback capabilities
Real-time Monitoring : Live workflow progress tracking and health monitoring
Token Optimization : 85-90% efficiency using parallel specialized agents
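The dependency resolution described above amounts to a topological sort over operation ids. The ordering can be sketched with the coreutils `tsort` utility (operation names are illustrative, matching the example workflow below):

```shell
# Each line is "dependency dependent"; tsort emits a valid execution order.
deps='upcloud_servers aws_taskservs
upcloud_servers dns_records
aws_taskservs app_deploy'
order=$(printf '%s\n' "$deps" | tsort)
printf '%s\n' "$order"   # upcloud_servers always comes first
```

Any valid order starts with `upcloud_servers`, the only operation with no dependencies; the real resolver additionally distinguishes soft from hard dependencies.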
-# Submit batch workflow from KCL definition
-nu -c "use core/nulib/workflows/batch.nu *; batch submit workflows/example_batch.k"
+# Submit batch workflow from Nickel definition
+nu -c "use core/nulib/workflows/batch.nu *; batch submit workflows/example_batch.ncl"
# Monitor batch workflow progress
nu -c "use core/nulib/workflows/batch.nu *; batch monitor <workflow_id>"
@@ -52948,35 +49920,37 @@ nu -c "use core/nulib/workflows/batch.nu *; batch rollback <workflow_id>"
# Show batch workflow statistics
nu -c "use core/nulib/workflows/batch.nu *; batch stats"
-
-Batch workflows are defined using KCL schemas in kcl/workflows.k:
-# Example batch workflow with mixed providers
-batch_workflow: BatchWorkflow = {
- name = "multi_cloud_deployment"
- version = "1.0.0"
- storage_backend = "surrealdb" # or "filesystem"
- parallel_limit = 5
- rollback_enabled = True
+
+Batch workflows are defined using Nickel configuration in schemas/workflows.ncl:
+# Example batch workflow with mixed providers
+{
+ batch_workflow = {
+ name = "multi_cloud_deployment",
+ version = "1.0.0",
+ storage_backend = "surrealdb", # or "filesystem"
+ parallel_limit = 5,
+ rollback_enabled = true,
operations = [
- {
- id = "upcloud_servers"
- type = "server_batch"
- provider = "upcloud"
- dependencies = []
- server_configs = [
- {name = "web-01", plan = "1xCPU-2GB", zone = "de-fra1"},
- {name = "web-02", plan = "1xCPU-2GB", zone = "us-nyc1"}
- ]
- },
- {
- id = "aws_taskservs"
- type = "taskserv_batch"
- provider = "aws"
- dependencies = ["upcloud_servers"]
- taskservs = ["kubernetes", "cilium", "containerd"]
- }
+ {
+ id = "upcloud_servers",
+ type = "server_batch",
+ provider = "upcloud",
+ dependencies = [],
+ server_configs = [
+            { name = "web-01", plan = "1xCPU-2GB", zone = "de-fra1" },
+            { name = "web-02", plan = "1xCPU-2GB", zone = "us-nyc1" }
+ ]
+ },
+ {
+ id = "aws_taskservs",
+ type = "taskserv_batch",
+ provider = "aws",
+ dependencies = ["upcloud_servers"],
+ taskservs = ["kubernetes", "cilium", "containerd"]
+ }
]
+ }
}
@@ -52992,11 +49966,785 @@ batch_workflow: BatchWorkflow = {
Provider Agnostic : Mix UpCloud, AWS, and local providers in single workflows
-Type Safety : KCL schema validation prevents runtime errors
+Type Safety : Nickel schema validation prevents runtime errors
Dependency Management : Automatic resolution with failure handling
State Recovery : Checkpoint-based recovery from any failure point
Real-time Monitoring : Live progress tracking with detailed status
+
+This document provides practical examples of orchestrating complex deployments and operations across multiple cloud providers using the batch workflow system.
+
+
+
+The batch workflow system enables declarative orchestration of operations across multiple providers with:
+
+Dependency Tracking : Define what must complete before what
+Error Handling : Automatic rollback on failure
+Idempotency : Safe to re-run workflows
+Status Tracking : Real-time progress monitoring
+Recovery Checkpoints : Resume from failure points
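
The recovery checkpoints above can be sketched as a ledger of completed operation ids that a re-run consults before executing anything; the file layout and names here are hypothetical, not the system's actual state format.

```python
import json
import os
import tempfile

def run_with_checkpoints(ops, state_file):
    """Execute ops in order, persisting completed ids so a re-run
    resumes after the last successful operation."""
    done = set()
    if os.path.exists(state_file):
        with open(state_file) as f:
            done = set(json.load(f))
    for op_id, action in ops:
        if op_id in done:
            continue  # already completed in a previous run
        action()
        done.add(op_id)
        with open(state_file, "w") as f:
            json.dump(sorted(done), f)

log = []
with tempfile.TemporaryDirectory() as d:
    sf = os.path.join(d, "state.json")
    ops = [("a", lambda: log.append("a")), ("b", lambda: log.append("b"))]
    run_with_checkpoints(ops, sf)
    run_with_checkpoints(ops, sf)  # second run skips both operations
print(log)  # each operation ran exactly once
```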
+
+
+Use Case : Deploy a web application across DigitalOcean, AWS, and Hetzner with proper sequencing and dependencies.
+Workflow Characteristics :
+
+Database created first (dependencies)
+Backup storage ready before compute
+Web servers scale once database ready
+Health checks before considering complete
+
+
+# file: workflows/multi-provider-deployment.yml
+
+name: multi-provider-app-deployment
+version: "1.0"
+description: "Deploy web app across three cloud providers"
+
+parameters:
+ do_region: "nyc3"
+ aws_region: "us-east-1"
+ hetzner_location: "nbg1"
+ web_server_count: 3
+
+phases:
+ # Phase 1: Create backup storage first (independent)
+ - name: "provision-backup-storage"
+ provider: "hetzner"
+ description: "Create backup storage volume in Hetzner"
+
+ operations:
+ - id: "create-backup-volume"
+ action: "create-volume"
+ config:
+ name: "webapp-backups"
+ size: 500
+ location: "{{ hetzner_location }}"
+ format: "ext4"
+
+ tags: ["storage", "backup"]
+
+ on_failure: "alert"
+ on_success: "proceed"
+
+ # Phase 2: Create database (independent, but must complete before app)
+ - name: "provision-database"
+ provider: "aws"
+ description: "Create managed PostgreSQL database"
+ depends_on: [] # Can run in parallel with Phase 1
+
+ operations:
+ - id: "create-rds-instance"
+ action: "create-db-instance"
+ config:
+ identifier: "webapp-db"
+ engine: "postgres"
+ engine_version: "14.6"
+ instance_class: "db.t3.medium"
+ allocated_storage: 100
+ multi_az: true
+ backup_retention_days: 30
+
+ tags: ["database", "primary"]
+
+ - id: "create-security-group"
+ action: "create-security-group"
+ config:
+ name: "webapp-db-sg"
+ description: "Security group for RDS"
+
+ depends_on: ["create-rds-instance"]
+
+ - id: "configure-db-access"
+ action: "authorize-security-group"
+ config:
+ group_id: "{{ create-security-group.id }}"
+ protocol: "tcp"
+ port: 5432
+ cidr: "10.0.0.0/8"
+
+ depends_on: ["create-security-group"]
+
+ timeout: 60
+
+ # Phase 3: Create web tier (depends on database being ready)
+ - name: "provision-web-tier"
+ provider: "digitalocean"
+ description: "Create web servers and load balancer"
+ depends_on: ["provision-database"] # Wait for database
+
+ operations:
+ - id: "create-droplets"
+ action: "create-droplet"
+ config:
+ name: "web-server"
+ size: "s-2vcpu-4gb"
+ region: "{{ do_region }}"
+ image: "ubuntu-22-04-x64"
+ count: "{{ web_server_count }}"
+ backups: true
+ monitoring: true
+
+ tags: ["web", "production"]
+
+ timeout: 300
+ retry:
+ max_attempts: 3
+ backoff: exponential
+
+ - id: "create-firewall"
+ action: "create-firewall"
+ config:
+ name: "web-firewall"
+ inbound_rules:
+ - protocol: "tcp"
+ ports: "22"
+ sources: ["0.0.0.0/0"]
+ - protocol: "tcp"
+ ports: "80"
+ sources: ["0.0.0.0/0"]
+ - protocol: "tcp"
+ ports: "443"
+ sources: ["0.0.0.0/0"]
+
+ depends_on: ["create-droplets"]
+
+ - id: "create-load-balancer"
+ action: "create-load-balancer"
+ config:
+ name: "web-lb"
+ algorithm: "round_robin"
+ region: "{{ do_region }}"
+ forwarding_rules:
+ - entry_protocol: "http"
+ entry_port: 80
+ target_protocol: "http"
+ target_port: 80
+ - entry_protocol: "https"
+ entry_port: 443
+ target_protocol: "http"
+ target_port: 80
+ health_check:
+ protocol: "http"
+ port: 80
+ path: "/health"
+ interval: 10
+
+ depends_on: ["create-droplets"]
+
+ # Phase 4: Network configuration (depends on all resources)
+ - name: "configure-networking"
+ description: "Setup VPN tunnels and security between providers"
+ depends_on: ["provision-web-tier"]
+
+ operations:
+ - id: "setup-vpn-tunnel-do-aws"
+ action: "create-vpn-tunnel"
+ config:
+ source_provider: "digitalocean"
+ destination_provider: "aws"
+ protocol: "ipsec"
+ encryption: "aes-256"
+
+ timeout: 120
+
+ - id: "setup-vpn-tunnel-aws-hetzner"
+ action: "create-vpn-tunnel"
+ config:
+ source_provider: "aws"
+ destination_provider: "hetzner"
+ protocol: "ipsec"
+ encryption: "aes-256"
+
+ # Phase 5: Validation and verification
+ - name: "verify-deployment"
+ description: "Verify all resources are operational"
+ depends_on: ["configure-networking"]
+
+ operations:
+ - id: "health-check-droplets"
+ action: "run-health-check"
+ config:
+ targets: "{{ create-droplets.ips }}"
+ endpoint: "/health"
+ expected_status: 200
+ timeout: 30
+
+ timeout: 300
+
+ - id: "health-check-database"
+ action: "verify-database"
+ config:
+ host: "{{ create-rds-instance.endpoint }}"
+ port: 5432
+ database: "postgres"
+ timeout: 30
+
+ - id: "health-check-backup"
+ action: "verify-volume"
+ config:
+ volume_id: "{{ create-backup-volume.id }}"
+ status: "available"
+
+# Rollback strategy: if any phase fails
+rollback:
+ strategy: "automatic"
+ on_phase_failure: "rollback-previous-phases"
+ preserve_data: true
+
+# Notifications
+notifications:
+ on_start: "slack:#deployments"
+ on_phase_complete: "slack:#deployments"
+ on_failure: "slack:#alerts"
+ on_success: "slack:#deployments"
+
+# Validation checks
+pre_flight:
+ - check: "credentials"
+ description: "Verify all provider credentials"
+ - check: "quotas"
+ description: "Verify sufficient quotas in each provider"
+ - check: "dependencies"
+ description: "Verify all dependencies are available"
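
The retry stanzas above (`max_attempts: 3`, `backoff: exponential`) suggest behavior like the following sketch; the delay schedule and function names are illustrative assumptions, not the engine's actual implementation.

```python
import time

def retry(fn, max_attempts=3, base_delay=1.0, sleep=time.sleep):
    """Retry fn with exponential backoff (1 s, 2 s, 4 s, ...).
    Re-raises the last error once max_attempts is exhausted."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            sleep(base_delay * 2 ** attempt)

calls = []
def flaky():
    # Simulates a transient provider error on the first two calls
    calls.append(1)
    if len(calls) < 3:
        raise RuntimeError("transient provider error")
    return "ok"

print(retry(flaky, sleep=lambda s: None))  # → ok (after 2 failures)
```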
+
+
+┌─────────────────────────────────────────────────────────┐
+│ Start Deployment │
+└──────────────────┬──────────────────────────────────────┘
+ │
+ ┌──────────┴──────────┐
+ │ │
+ ▼ ▼
+ ┌─────────────┐ ┌──────────────────┐
+ │ Hetzner │ │ AWS │
+ │ Backup │ │ Database │
+ │ (Phase 1) │ │ (Phase 2) │
+ └──────┬──────┘ └────────┬─────────┘
+ │ │
+ │ Ready │ Ready
+ └────────┬───────────┘
+ │
+ ▼
+ ┌──────────────────┐
+ │ DigitalOcean │
+ │ Web Tier │
+ │ (Phase 3) │
+ │ - Droplets │
+ │ - Firewall │
+ │ - Load Balancer │
+ └────────┬─────────┘
+ │
+ ▼
+ ┌──────────────────┐
+ │ Network Setup │
+ │ (Phase 4) │
+ │ - VPN Tunnels │
+ └────────┬─────────┘
+ │
+ ▼
+ ┌──────────────────┐
+ │ Verification │
+ │ (Phase 5) │
+ │ - Health Checks │
+ └────────┬─────────┘
+ │
+ ▼
+ ┌──────────────────┐
+ │ Deployment OK │
+ │ (Ready to use) │
+ └──────────────────┘
+
+
+Use Case : Automated failover from primary provider (DigitalOcean) to backup provider (Hetzner) on detection of failure.
+Workflow Characteristics :
+
+Continuous health monitoring
+Automatic failover trigger
+Database promotion
+DNS update
+Verification before considering complete
+
+
+# file: workflows/multi-provider-dr-failover.yml
+
+name: multi-provider-dr-failover
+version: "1.0"
+description: "Automated failover from DigitalOcean to Hetzner"
+
+parameters:
+ primary_provider: "digitalocean"
+ backup_provider: "hetzner"
+ dns_provider: "aws"
+ health_check_threshold: 3
+
+phases:
+ # Phase 1: Monitor primary provider
+ - name: "monitor-primary"
+ description: "Continuous health monitoring of primary"
+
+ operations:
+ - id: "health-check-primary"
+ action: "run-health-check"
+ config:
+ provider: "{{ primary_provider }}"
+ resources: ["web-servers", "load-balancer"]
+ checks:
+ - type: "http"
+ endpoint: "/health"
+ expected_status: 200
+ - type: "database"
+ host: "db.primary.example.com"
+ query: "SELECT 1"
+ - type: "connectivity"
+ test: "ping"
+ interval: 30 # Check every 30 seconds
+
+ timeout: 300
+
+ - id: "aggregate-health"
+ action: "aggregate-metrics"
+ config:
+ source: "{{ health-check-primary.results }}"
+ failure_threshold: 3 # 3 consecutive failures trigger failover
+
+ # Phase 2: Trigger failover (conditional on failure)
+ - name: "trigger-failover"
+ description: "Activate disaster recovery if primary fails"
+ depends_on: ["monitor-primary"]
+ condition: "{{ aggregate-health.status }} == 'FAILED'"
+
+ operations:
+ - id: "alert-on-failure"
+ action: "send-notification"
+ config:
+ type: "critical"
+ message: "Primary provider ({{ primary_provider }}) has failed. Initiating failover..."
+ recipients: ["ops-team@example.com", "slack:#alerts"]
+
+ - id: "enable-backup-infrastructure"
+ action: "scale-up"
+ config:
+ provider: "{{ backup_provider }}"
+ target: "warm-standby-servers"
+ desired_count: 3
+ instance_type: "cx31"
+
+ timeout: 300
+ retry:
+ max_attempts: 3
+
+ - id: "promote-database-replica"
+ action: "promote-read-replica"
+ config:
+ provider: "aws"
+ replica_identifier: "backup-db-replica"
+ to_master: true
+
+ timeout: 600 # Allow time for promotion
+
+ # Phase 3: Network failover
+ - name: "network-failover"
+ description: "Switch traffic to backup provider"
+ depends_on: ["trigger-failover"]
+
+ operations:
+ - id: "update-load-balancer"
+ action: "reconfigure-load-balancer"
+ config:
+ provider: "{{ dns_provider }}"
+ record: "api.example.com"
+ old_backend: "do-lb-{{ primary_provider }}"
+ new_backend: "hz-lb-{{ backup_provider }}"
+
+ - id: "update-dns"
+ action: "update-dns-record"
+ config:
+ provider: "route53"
+ record: "example.com"
+ old_value: "do-lb-ip"
+ new_value: "hz-lb-ip"
+ ttl: 60
+
+ - id: "update-cdn"
+ action: "update-cdn-origin"
+ config:
+ cdn_provider: "cloudfront"
+ distribution_id: "E123456789ABCDEF"
+ new_origin: "backup-lb.hetzner.com"
+
+ # Phase 4: Verify failover
+ - name: "verify-failover"
+ description: "Verify backup provider is operational"
+ depends_on: ["network-failover"]
+
+ operations:
+ - id: "health-check-backup"
+ action: "run-health-check"
+ config:
+ provider: "{{ backup_provider }}"
+ resources: ["backup-servers"]
+ endpoint: "/health"
+ expected_status: 200
+ timeout: 30
+
+ timeout: 300
+
+ - id: "verify-database"
+ action: "verify-database"
+ config:
+ provider: "aws"
+ database: "backup-db-promoted"
+ query: "SELECT COUNT(*) FROM users"
+ expected_rows: "> 0"
+
+ - id: "verify-traffic"
+ action: "verify-traffic-flow"
+ config:
+ endpoint: "https://example.com"
+        expected_response_time: "< 500ms"
+ expected_status: 200
+
+ # Phase 5: Activate backup fully
+ - name: "activate-backup"
+ description: "Run at full capacity on backup provider"
+ depends_on: ["verify-failover"]
+
+ operations:
+ - id: "scale-to-production"
+ action: "scale-up"
+ config:
+ provider: "{{ backup_provider }}"
+ target: "all-backup-servers"
+ desired_count: 6
+
+ timeout: 600
+
+ - id: "configure-persistence"
+ action: "enable-persistence"
+ config:
+ provider: "{{ backup_provider }}"
+ resources: ["backup-servers"]
+ persistence_type: "volume"
+
+# Recovery strategy for primary restoration
+recovery:
+ description: "Restore primary provider when recovered"
+ phases:
+ - name: "detect-primary-recovery"
+ operation: "health-check"
+ target: "primary-provider"
+ success_criteria: "3 consecutive successful checks"
+
+ - name: "resync-data"
+ operation: "database-resync"
+ direction: "backup-to-primary"
+ timeout: 3600
+
+ - name: "failback"
+ operation: "switch-traffic"
+ target: "primary-provider"
+ verification: "100% traffic restored"
+
+# Notifications
+notifications:
+ on_failover_start: "pagerduty:critical"
+ on_failover_complete: "slack:#ops"
+ on_failover_failed: ["pagerduty:critical", "email:cto@example.com"]
+ on_recovery_start: "slack:#ops"
+ on_recovery_complete: "slack:#ops"
+
+
+Time Event
+────────────────────────────────────────────────────
+00:00 Health check detects failure (3 consecutive failures)
+00:01 Alert sent to ops team
+00:02 Backup infrastructure scaled to 3 servers
+00:05 Database replica promoted to master
+00:10 DNS updated (TTL=60s, propagation ~2 minutes)
+00:12 Load balancer reconfigured
+00:15 Traffic verified flowing through backup
+00:20 Backup scaled to full production capacity (6 servers)
+00:25 Fully operational on backup provider
+
+Total RTO: 25 minutes (including DNS propagation)
+Data loss (RPO): < 5 minutes (database replication lag)
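
The failover trigger described above (three consecutive failed health checks, per `failure_threshold: 3`) can be sketched as a streak counter; a minimal illustration, not the actual aggregator.

```python
def should_failover(results, threshold=3):
    """Trigger failover only after `threshold` consecutive failures;
    any successful check resets the streak."""
    streak = 0
    for ok in results:
        streak = 0 if ok else streak + 1
        if streak >= threshold:
            return True
    return False

print(should_failover([True, False, False, True, False]))  # no failover: streak resets
print(should_failover([True, False, False, False]))        # failover: 3 in a row
```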
+
+
+Use Case : Migrate running workloads to cheaper provider (DigitalOcean to Hetzner) for cost reduction.
+Workflow Characteristics :
+
+Parallel deployment on target provider
+Gradual traffic migration
+Rollback capability
+Cost tracking
+
+
+# file: workflows/cost-optimization-migration.yml
+
+name: cost-optimization-migration
+version: "1.0"
+description: "Migrate workload from DigitalOcean to Hetzner for cost savings"
+
+parameters:
+ source_provider: "digitalocean"
+ target_provider: "hetzner"
+ migration_speed: "gradual" # or "aggressive"
+ traffic_split: [10, 25, 50, 75, 100] # Gradual percentages
+
+phases:
+ # Phase 1: Create target infrastructure
+ - name: "create-target-infrastructure"
+ description: "Deploy identical workload on Hetzner"
+
+ operations:
+ - id: "provision-servers"
+ action: "create-server"
+ config:
+ provider: "{{ target_provider }}"
+ name: "migration-app"
+ server_type: "cpx21" # Better price/performance than DO
+ count: 3
+
+ timeout: 300
+
+ # Phase 2: Verify target is ready
+ - name: "verify-target"
+ description: "Health checks on target infrastructure"
+ depends_on: ["create-target-infrastructure"]
+
+ operations:
+ - id: "health-check"
+ action: "run-health-check"
+ config:
+ provider: "{{ target_provider }}"
+ endpoint: "/health"
+
+ timeout: 300
+
+ # Phase 3: Gradual traffic migration
+ - name: "migrate-traffic"
+ description: "Gradually shift traffic to target provider"
+ depends_on: ["verify-target"]
+
+ operations:
+ - id: "set-traffic-10"
+ action: "set-traffic-split"
+ config:
+ source: "{{ source_provider }}"
+ target: "{{ target_provider }}"
+ percentage: 10
+ duration: 300
+
+ - id: "verify-10"
+ action: "verify-traffic-flow"
+ config:
+ target_percentage: 10
+ error_rate_threshold: 0.1
+
+ - id: "set-traffic-25"
+ action: "set-traffic-split"
+ config:
+ percentage: 25
+ duration: 600
+
+ - id: "set-traffic-50"
+ action: "set-traffic-split"
+ config:
+ percentage: 50
+ duration: 900
+
+ - id: "set-traffic-75"
+ action: "set-traffic-split"
+ config:
+ percentage: 75
+ duration: 900
+
+ - id: "set-traffic-100"
+ action: "set-traffic-split"
+ config:
+ percentage: 100
+ duration: 600
+
+ # Phase 4: Cleanup source
+ - name: "cleanup-source"
+ description: "Remove old infrastructure from source provider"
+ depends_on: ["migrate-traffic"]
+
+ operations:
+ - id: "verify-final"
+ action: "run-health-check"
+ config:
+ provider: "{{ target_provider }}"
+ duration: 3600 # Monitor for 1 hour
+
+ - id: "decommission-source"
+ action: "delete-resources"
+ config:
+ provider: "{{ source_provider }}"
+ resources: ["droplets", "load-balancer"]
+ preserve_backups: true
+
+# Cost tracking
+cost_tracking:
+ before:
+ provider: "{{ source_provider }}"
+ estimated_monthly: "$72"
+
+ after:
+ provider: "{{ target_provider }}"
+ estimated_monthly: "$42"
+
+ savings:
+ monthly: "$30"
+ annual: "$360"
+ percentage: "42%"
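
The savings figures in the cost_tracking block are plain arithmetic:

```python
before, after = 72, 42          # estimated monthly cost in USD
monthly = before - after        # 30
annual = monthly * 12           # 360
percentage = round(monthly / before * 100)  # 41.7% rounds to 42%
print(monthly, annual, f"{percentage}%")
```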
+
+
+Use Case : Setup database replication across multiple providers and regions for disaster recovery.
+Workflow Characteristics :
+
+Create primary database
+Setup read replicas in other providers
+Configure replication
+Monitor lag
+
+
+# file: workflows/multi-region-replication.yml
+
+name: multi-region-replication
+version: "1.0"
+description: "Setup database replication across providers"
+
+phases:
+ # Primary database
+ - name: "create-primary"
+ provider: "aws"
+ operations:
+ - id: "create-rds"
+ action: "create-db-instance"
+ config:
+ identifier: "app-db-primary"
+ engine: "postgres"
+ instance_class: "db.t3.medium"
+ region: "us-east-1"
+
+ # Secondary replica
+ - name: "create-secondary-replica"
+ depends_on: ["create-primary"]
+ provider: "aws"
+ operations:
+ - id: "create-replica"
+ action: "create-read-replica"
+ config:
+ source: "app-db-primary"
+ region: "eu-west-1"
+ identifier: "app-db-secondary"
+
+ # Tertiary replica in different provider
+ - name: "create-tertiary-replica"
+ depends_on: ["create-primary"]
+ operations:
+ - id: "setup-replication"
+ action: "setup-external-replication"
+ config:
+ source_provider: "aws"
+ source_db: "app-db-primary"
+ target_provider: "hetzner"
+ replication_slot: "hetzner_replica"
+ replication_type: "logical"
+
+ # Monitor replication
+ - name: "monitor-replication"
+ depends_on: ["create-tertiary-replica"]
+ operations:
+ - id: "check-lag"
+ action: "monitor-replication-lag"
+ config:
+ replicas:
+ - name: "secondary"
+ warning_threshold: 300
+ critical_threshold: 600
+ - name: "tertiary"
+ warning_threshold: 1000
+ critical_threshold: 2000
+ interval: 60
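
The warning/critical thresholds above map replication lag to a status; a minimal sketch of that classification (the monitoring action itself is part of the workflow engine, not shown here).

```python
def lag_status(lag_seconds, warning, critical):
    """Classify replication lag against the configured thresholds."""
    if lag_seconds >= critical:
        return "critical"
    if lag_seconds >= warning:
        return "warning"
    return "ok"

# Thresholds from the workflow: secondary 300/600, tertiary 1000/2000
print(lag_status(120, 300, 600))     # within limits
print(lag_status(450, 300, 600))     # above warning, below critical
print(lag_status(2500, 1000, 2000))  # above critical
```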
+
+
+
+
+Define Clear Dependencies : Explicitly state what must happen before what
+Use Idempotent Operations : Workflows should be safe to re-run
+Set Realistic Timeouts : Account for cloud provider delays
+Plan for Failures : Define rollback strategies
+Test Workflows : Run in staging before production
+
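
"Safe to re-run" typically means each operation checks for the resource before creating it; a hypothetical sketch (the function and inventory shape are illustrative, not the system's API).

```python
def ensure_server(existing, name, create):
    """Idempotent create: provision only if the resource is absent,
    so re-running a workflow never duplicates servers."""
    if name in existing:
        return existing[name]
    server = create(name)
    existing[name] = server
    return server

inventory = {}
created = []
make = lambda n: created.append(n) or {"name": n}
ensure_server(inventory, "web-01", make)
ensure_server(inventory, "web-01", make)  # second run is a no-op
print(created)  # the server was created exactly once
```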
+
+
+Parallel Execution : Run independent phases in parallel for speed
+Checkpoints : Add verification at each phase
+Progressive Deployment : Use gradual traffic shifting
+Monitoring Integration : Track metrics during workflow
+Notifications : Alert team at key points
+
+
+
+Calculate ROI : Track cost savings from optimizations
+Monitor Resource Usage : Watch for over-provisioning
+Implement Cleanup : Remove old resources after migration
+Review Regularly : Reassess provider choices
+
+
+
+Diagnosis :
+provisioning workflow status workflow-id --verbose
+
+Solution :
+
+Increase timeout if legitimate long operation
+Check provider logs for actual status
+Manually intervene if necessary
+Use --skip-phase to skip the problematic phase
+
+
+Diagnosis :
+provisioning workflow rollback workflow-id --dry-run
+
+Solution :
+
+Review what resources were created
+Manually delete resources if needed
+Fix root cause of failure
+Re-run workflow
+
+
+Diagnosis :
+provisioning database verify-consistency
+
+Solution :
+
+Check replication lag before failover
+Manually resync if necessary
+Use backup to restore consistency
+Run validation queries
+
+
+Batch workflows enable complex multi-provider orchestration with:
+
+Coordinated deployment across providers
+Automated failover and recovery
+Gradual workload migration
+Cost optimization
+Disaster recovery
+
+Start with simple workflows and gradually add complexity as you gain confidence.
A comprehensive CLI refactoring transformed the monolithic 1,329-line script into a modular, maintainable architecture with domain-driven design.
@@ -53010,7 +50758,7 @@ batch_workflow: BatchWorkflow = {
Test Coverage : Comprehensive test suite with 6 test groups
-
+
[Full docs: provisioning help infra]
s → server (create, delete, list, ssh, price)
@@ -53025,10 +50773,10 @@ batch_workflow: BatchWorkflow = {
bat → batch (submit, list, status, monitor, rollback, cancel, stats)
orch → orchestrator (start, stop, status, health, logs)
-
+
[Full docs: provisioning help dev]
-mod → module (discover, load, list, unload, sync-kcl)
+mod → module (discover, load, list, unload, sync-nickel)
lyr → layer (explain, show, test, stats)
version (check, show, updates, apply, taskserv)
pack (core, provider, list, clean)
@@ -53039,7 +50787,7 @@ batch_workflow: BatchWorkflow = {
ws → workspace (init, create, validate, info, list, migrate)
tpl, tmpl → template (list, types, show, apply, validate)
-
+
[Full docs: provisioning help config]
e → env (show environment variables)
@@ -53074,7 +50822,7 @@ batch_workflow: BatchWorkflow = {
price, cost, costs → price (show pricing)
cst, csts → create-server-task (create server with taskservs)
-
+
The help system works in both directions:
# All these work identically:
provisioning help workspace
@@ -53089,14 +50837,10 @@ provisioning help dev = provisioning dev help
provisioning help ws = provisioning ws help
provisioning help plat = provisioning plat help
provisioning help concept = provisioning concept help
-```plaintext
-
-## CLI Internal Architecture
-
-**File Structure:**
-
-```plaintext
-provisioning/core/nulib/
+
+
+File Structure:
+provisioning/core/nulib/
├── provisioning (211 lines) - Main entry point
├── main_provisioning/
│ ├── flags.nu (139 lines) - Centralized flag handling
@@ -53110,27 +50854,25 @@ provisioning/core/nulib/
│ ├── generation.nu (78 lines)
│ ├── utilities.nu (157 lines)
│ └── configuration.nu (316 lines)
-```plaintext
-
-**For Developers:**
-
-- **Adding commands**: Update appropriate domain handler in `commands/`
-- **Adding shortcuts**: Update command registry in `dispatcher.nu`
-- **Flag changes**: Modify centralized functions in `flags.nu`
-- **Testing**: Run `nu tests/test_provisioning_refactor.nu`
-
-See [ADR-006: CLI Refactoring](../architecture/adr/adr-006-provisioning-cli-refactoring.md) for complete refactoring details.
+For Developers:
+
+Adding commands : Update appropriate domain handler in commands/
+Adding shortcuts : Update command registry in dispatcher.nu
+Flag changes : Modify centralized functions in flags.nu
+Testing : Run nu tests/test_provisioning_refactor.nu
+
+See ADR-006: CLI Refactoring for complete refactoring details.
-The system has been completely migrated from ENV-based to config-driven architecture.
+The system has been migrated from ENV-based to config-driven architecture.
65+ files migrated across entire codebase
200+ ENV variables replaced with 476 config accessors
16 token-efficient agents used for systematic migration
92% token efficiency achieved vs monolithic approach
-
+
Primary Config : config.defaults.toml (system defaults)
User Config : config.user.toml (user preferences)
@@ -53138,7 +50880,7 @@ See [ADR-006: CLI Refactoring](../architecture/adr/adr-006-provisioning-cli-refa
Hierarchical Loading : defaults → user → project → infra → env → runtime
Interpolation : {{paths.base}}, {{env.HOME}}, {{now.date}}, {{git.branch}}
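
The hierarchical loading order (defaults → user → project → infra → env → runtime) and {{...}} interpolation can be sketched as follows; this is a simplified illustration under assumed semantics, not the loader's actual code.

```python
import re

def deep_merge(base, overlay):
    """Later layers override earlier ones, recursing into nested tables."""
    out = dict(base)
    for k, v in overlay.items():
        if isinstance(v, dict) and isinstance(out.get(k), dict):
            out[k] = deep_merge(out[k], v)
        else:
            out[k] = v
    return out

def interpolate(value, config):
    """Resolve {{dotted.path}} placeholders against the merged config."""
    def lookup(match):
        node = config
        for part in match.group(1).strip().split("."):
            node = node[part]
        return str(node)
    return re.sub(r"\{\{([^}]+)\}\}", lookup, value)

# Hypothetical two-layer example: user config references a defaults path
defaults = {"paths": {"base": "/usr/local/provisioning"}}
user = {"paths": {"cache": "{{paths.base}}/cache"}}
merged = deep_merge(defaults, user)
print(interpolate(merged["paths"]["cache"], merged))
```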
-
+
provisioning validate config - Validate configuration
provisioning env - Show environment variables
@@ -53165,409 +50907,242 @@ See [ADR-006: CLI Refactoring](../architecture/adr/adr-006-provisioning-cli-refa
For existing workspace configs :
-KCL still supported but gradually migrating to Nickel
-Config loader supports both formats during transition
+Nickel is the primary configuration language
+All new workspaces use Nickel exclusively
-This guide shows you how to set up a new infrastructure workspace and extend the provisioning system with custom configurations.
-
-
-# Navigate to the workspace directory
-cd workspace/infra
+This guide shows you how to set up a new infrastructure workspace with Nickel-based configuration and auto-generated documentation.
+
+
+# Interactive workspace creation with prompts
+provisioning workspace init
-# Create your infrastructure directory
-mkdir my-infra
-cd my-infra
+# Or non-interactive with explicit path
+provisioning workspace init my_workspace /path/to/my_workspace
+
+When you run provisioning workspace init, the system automatically:
+
+✅ Creates Nickel-based configuration (config/config.ncl)
+✅ Sets up infrastructure directories with Nickel files (infra/default/)
+✅ Generates 4 workspace guides (deployment, configuration, troubleshooting, README)
+✅ Configures local provider as default
+✅ Creates .gitignore for workspace
+
+
+After running workspace init, your workspace has this structure:
+my_workspace/
+├── config/
+│ ├── config.ncl # Master Nickel configuration
+│ ├── providers/
+│ └── platform/
+│
+├── infra/
+│ └── default/
+│ ├── main.ncl # Infrastructure definition
+│ └── servers.ncl # Server configurations
+│
+├── docs/ # ✨ AUTO-GENERATED GUIDES
+│ ├── README.md # Workspace overview & quick start
+│ ├── deployment-guide.md # Step-by-step deployment
+│ ├── configuration-guide.md # Configuration reference
+│ └── troubleshooting.md # Common issues & solutions
+│
+├── .providers/ # Provider state & cache
+├── .kms/ # KMS data
+├── .provisioning/ # Workspace metadata
+└── workspace.nu # Utility scripts
+
+
+The config/config.ncl file is the master configuration for your workspace:
+{
+ workspace = {
+ name = "my_workspace",
+ path = "/path/to/my_workspace",
+ description = "Workspace: my_workspace",
+ metadata = {
+ owner = "your_username",
+ created = "2025-01-07T19:30:00Z",
+ environment = "development",
+ },
+ },
-# Create the basic structure
-mkdir -p task-servs clusters defs data tmp
-```plaintext
-
-### 2. Set Up KCL Module Dependencies
-
-Create `kcl.mod`:
-
-```toml
-[package]
-name = "my-infra"
-edition = "v0.11.2"
-version = "0.0.1"
-
-[dependencies]
-provisioning = { path = "../../../provisioning/kcl", version = "0.0.1" }
-taskservs = { path = "../../../provisioning/extensions/taskservs", version = "0.0.1" }
-cluster = { path = "../../../provisioning/extensions/cluster", version = "0.0.1" }
-upcloud_prov = { path = "../../../provisioning/extensions/providers/upcloud/kcl", version = "0.0.1" }
-```plaintext
-
-### 3. Create Main Settings
-
-Create `settings.k`:
-
-```kcl
-import provisioning
-
-_settings = provisioning.Settings {
- main_name = "my-infra"
- main_title = "My Infrastructure Project"
-
- # Directories
- settings_path = "./settings.yaml"
- defaults_provs_dirpath = "./defs"
- prov_data_dirpath = "./data"
- created_taskservs_dirpath = "./tmp/NOW_deployment"
-
- # Cluster configuration
- cluster_admin_host = "my-infra-cp-0"
- cluster_admin_user = "root"
- servers_wait_started = 40
-
- # Runtime settings
- runset = {
- wait = True
- output_format = "yaml"
- output_path = "./tmp/NOW"
- inventory_file = "./inventory.yaml"
- use_time = True
- }
+ providers = {
+ local = {
+ name = "local",
+ enabled = true,
+ workspace = "my_workspace",
+ auth = { interface = "local" },
+ paths = {
+ base = ".providers/local",
+ cache = ".providers/local/cache",
+ state = ".providers/local/state",
+ },
+ },
+ },
}
+
+
+Every workspace gets 4 auto-generated guides tailored to your specific configuration:
+README.md - Overview with workspace structure and quick start
+deployment-guide.md - Step-by-step deployment instructions for your infrastructure
+configuration-guide.md - Configuration reference specific to your workspace
+troubleshooting.md - Common issues and solutions for your setup
+These guides are automatically generated based on your workspace’s:
+
+Configured providers
+Infrastructure definitions
+Server configurations
+Taskservs and services
+
+
+After creation, edit the Nickel configuration files:
+# Edit master configuration
+vim config/config.ncl
-_settings
-```plaintext
+# Edit infrastructure definition
+vim infra/default/main.ncl
-### 4. Test Your Setup
+# Edit server definitions
+vim infra/default/servers.ncl
-```bash
-# Test the configuration
-kcl run settings.k
+# Validate Nickel syntax
+nickel typecheck config/config.ncl
+
+
+
+Each workspace gets 4 auto-generated guides in the docs/ directory:
+cd my_workspace
-# Test with the provisioning system
-cd ../../../
-provisioning -c -i my-infra show settings
-```plaintext
+# Overview and quick start
+cat docs/README.md
-## Adding Taskservers
+# Step-by-step deployment
+cat docs/deployment-guide.md
-### Example: Redis
+# Configuration reference
+cat docs/configuration-guide.md
-Create `task-servs/redis.k`:
+# Common issues and solutions
+cat docs/troubleshooting.md
+
+
+Edit the Nickel configuration files to suit your needs:
+# Master configuration (providers, settings)
+vim config/config.ncl
-```kcl
-import taskservs.redis.kcl.redis as redis_schema
+# Infrastructure definition
+vim infra/default/main.ncl
-_taskserv = redis_schema.Redis {
- version = "7.2.3"
- port = 6379
- maxmemory = "512mb"
- maxmemory_policy = "allkeys-lru"
- persistence = True
- bind_address = "0.0.0.0"
+# Server configurations
+vim infra/default/servers.ncl
+
+
+# Check Nickel syntax
+nickel typecheck config/config.ncl
+nickel typecheck infra/default/main.ncl
+
+# Validate with provisioning system
+provisioning validate config
+
+
+To add more infrastructure environments:
+# Create new infrastructure directory
+mkdir infra/production
+mkdir infra/staging
+
+# Create Nickel files for each infrastructure
+cp infra/default/main.ncl infra/production/main.ncl
+cp infra/default/servers.ncl infra/production/servers.ncl
+
+# Edit them for your specific needs
+vim infra/production/servers.ncl
+
+
+To use cloud providers (UpCloud, AWS, etc.), update config/config.ncl:
+providers = {
+ upcloud = {
+ name = "upcloud",
+ enabled = true, # Set to true to enable
+ workspace = "my_workspace",
+ auth = { interface = "API" },
+ paths = {
+ base = ".providers/upcloud",
+ cache = ".providers/upcloud/cache",
+ state = ".providers/upcloud/state",
+ },
+ api = {
+ url = "https://api.upcloud.com/1.3",
+ timeout = 30,
+ },
+ },
}
-
-_taskserv
-```plaintext
-
-Test it:
-
-```bash
-kcl run task-servs/redis.k
-```plaintext
-
-### Example: Kubernetes
-
-Create `task-servs/kubernetes.k`:
-
-```kcl
-import taskservs.kubernetes.kcl.kubernetes as k8s_schema
-
-_taskserv = k8s_schema.Kubernetes {
- version = "1.29.1"
- major_version = "1.29"
- cri = "crio"
- runtime_default = "crun"
- cni = "cilium"
- bind_port = 6443
-}
-
-_taskserv
-```plaintext
-
-### Example: Cilium
-
-Create `task-servs/cilium.k`:
-
-```kcl
-import taskservs.cilium.kcl.cilium as cilium_schema
-
-_taskserv = cilium_schema.Cilium {
- version = "v1.16.5"
-}
-
-_taskserv
-```plaintext
-
-## Using the Provisioning System
-
-### Create Servers
-
-```bash
-# Check configuration first
-provisioning -c -i my-infra server create
+
+
+
+provisioning workspace list
+
+
+provisioning workspace activate my_workspace
+
+
+provisioning workspace active
+
+
+# Dry-run first (check mode)
+provisioning -c server create
# Actually create servers
-provisioning -i my-infra server create
-```plaintext
+provisioning server create
-### Install Taskservs
-
-```bash
-# Install Kubernetes
-provisioning -c -i my-infra taskserv create kubernetes
-
-# Install Cilium
-provisioning -c -i my-infra taskserv create cilium
-
-# Install Redis
-provisioning -c -i my-infra taskserv create redis
-```plaintext
-
-### Manage Clusters
-
-```bash
-# Create cluster
-provisioning -c -i my-infra cluster create
-
-# List cluster components
-provisioning -i my-infra cluster list
-```plaintext
-
-## Directory Structure
-
-Your workspace should look like this:
-
-```plaintext
-workspace/infra/my-infra/
-├── kcl.mod # Module dependencies
-├── settings.k # Main infrastructure settings
-├── task-servs/ # Taskserver configurations
-│ ├── kubernetes.k
-│ ├── cilium.k
-│ ├── redis.k
-│ └── {custom-service}.k
-├── clusters/ # Cluster definitions
-│ └── main.k
-├── defs/ # Provider defaults
-│ ├── upcloud_defaults.k
-│ └── {provider}_defaults.k
-├── data/ # Provider runtime data
-│ ├── upcloud_settings.k
-│ └── {provider}_settings.k
-├── tmp/ # Temporary files
-│ ├── NOW_deployment/
-│ └── NOW_clusters/
-├── inventory.yaml # Generated inventory
-└── settings.yaml # Generated settings
-```plaintext
-
-## Advanced Configuration
-
-### Custom Provider Defaults
-
-Create `defs/upcloud_defaults.k`:
-
-```kcl
-import upcloud_prov.upcloud as upcloud_schema
-
-_defaults = upcloud_schema.UpcloudDefaults {
- zone = "de-fra1"
- plan = "1xCPU-2GB"
- storage_size = 25
- storage_tier = "maxiops"
-}
-
-_defaults
-```plaintext
-
-### Cluster Definitions
-
-Create `clusters/main.k`:
-
-```kcl
-import cluster.main as cluster_schema
-
-_cluster = cluster_schema.MainCluster {
- name = "my-infra-cluster"
- control_plane_count = 1
- worker_count = 2
-
- services = [
- "kubernetes",
- "cilium",
- "redis"
- ]
-}
-
-_cluster
-```plaintext
-
-## Environment-Specific Configurations
-
-### Development Environment
-
-Create `settings-dev.k`:
-
-```kcl
-import provisioning
-
-_settings = provisioning.Settings {
- main_name = "my-infra-dev"
- main_title = "My Infrastructure (Development)"
-
- # Development-specific settings
- servers_wait_started = 20 # Faster for dev
-
- runset = {
- wait = False # Don't wait in dev
- output_format = "json"
- }
-}
-
-_settings
-```plaintext
-
-### Production Environment
-
-Create `settings-prod.k`:
-
-```kcl
-import provisioning
-
-_settings = provisioning.Settings {
- main_name = "my-infra-prod"
- main_title = "My Infrastructure (Production)"
-
- # Production-specific settings
- servers_wait_started = 60 # More conservative
-
- runset = {
- wait = True
- output_format = "yaml"
- use_time = True
- }
-
- # Production security
- secrets = {
- provider = "sops"
- }
-}
-
-_settings
-```plaintext
-
-## Troubleshooting
-
-### Common Issues
-
-#### KCL Module Not Found
-
-```plaintext
-Error: pkgpath provisioning not found
-```plaintext
-
-**Solution**: Ensure the provisioning module is in the expected location:
-
-```bash
-ls ../../../provisioning/extensions/kcl/provisioning/0.0.1/
-```plaintext
-
-If missing, copy the files:
-
-```bash
-mkdir -p ../../../provisioning/extensions/kcl/provisioning/0.0.1
-cp -r ../../../provisioning/kcl/* ../../../provisioning/extensions/kcl/provisioning/0.0.1/
-```plaintext
-
-#### Import Path Errors
-
-```plaintext
-Error: attribute 'Redis' not found in module
-```plaintext
-
-**Solution**: Check the import path:
-
-```kcl
-# Wrong
-import taskservs.redis.default.kcl.redis as redis_schema
-
-# Correct
-import taskservs.redis.kcl.redis as redis_schema
-```plaintext
-
-#### Boolean Value Errors
-
-```plaintext
-Error: name 'true' is not defined
-```plaintext
-
-**Solution**: Use capitalized booleans in KCL:
-
-```kcl
-# Wrong
-enabled = true
-
-# Correct
-enabled = True
-```plaintext
-
-### Debugging Commands
-
-```bash
-# Check KCL syntax
-kcl run settings.k
-
-# Validate configuration
-provisioning -c -i my-infra validate config
-
-# Show current settings
-provisioning -i my-infra show settings
-
-# List available taskservs
-provisioning -i my-infra taskserv list
-
-# Check infrastructure status
-provisioning -i my-infra show servers
-```plaintext
-
-## Next Steps
-
-1. **Customize your settings**: Modify `settings.k` for your specific needs
-2. **Add taskservs**: Create configurations for the services you need
-3. **Test thoroughly**: Use `--check` mode before actual deployment
-4. **Create clusters**: Define complete deployment configurations
-5. **Set up CI/CD**: Integrate with your deployment pipeline
-6. **Monitor**: Set up logging and monitoring for your infrastructure
-
-For more advanced topics, see:
-
-- [KCL Module Guide](../development/KCL_MODULE_GUIDE.md)
-- [Creating Custom Taskservers](../development/CUSTOM_TASKSERVERS.md)
-- [Provider Configuration](../user/PROVIDER_SETUP.md)
+# List created servers
+provisioning server list
+
+
+# Check syntax
+nickel typecheck config/config.ncl
+
+# Example error and solution
+Error: Type checking failed
+Solution: Fix the error at the file and line Nickel reports, then re-run the typecheck
+
+
+Refer to the auto-generated docs/troubleshooting.md in your workspace for:
+
+- Authentication & credentials issues
+- Server deployment problems
+- Configuration validation errors
+- Network connectivity issues
+- Performance issues
+
+
+
+- **Consult workspace guides**: Check the `docs/` directory
+- **Use built-in help**: `provisioning --help`, `provisioning workspace --help`
+- **Enable debug mode**: `provisioning --debug server create`
+- **Review logs**: Check logs for detailed error information
+
+
+
+1. Review auto-generated guides in `docs/`
+2. Customize configuration in Nickel files
+3. Test with dry-run before deployment
+4. Deploy infrastructure
+5. Monitor and maintain your workspace
+
+For detailed deployment instructions, see docs/deployment-guide.md in your workspace.
**Version**: 1.0.0
**Date**: 2025-10-06
**Status**: ✅ Production Ready
-
+
The provisioning system now includes a centralized workspace management system that allows you to easily switch between multiple workspaces without manually editing configuration files.
-
+
provisioning workspace list
-```plaintext
-
-Output:
-
-```plaintext
-Registered Workspaces:
+
+Output:
+Registered Workspaces:
● librecloud
Path: /Users/Akasha/project-provisioning/workspace_librecloud
@@ -53576,80 +51151,53 @@ Registered Workspaces:
production
Path: /opt/workspaces/production
Last used: 2025-10-05T10:15:30Z
-```plaintext
-
-The green ● indicates the currently active workspace.
-
-### Check Active Workspace
-
-```bash
-provisioning workspace active
-```plaintext
-
-Output:
-
-```plaintext
-Active Workspace:
+
+The green ● indicates the currently active workspace.
+
+provisioning workspace active
+
+Output:
+Active Workspace:
Name: librecloud
Path: /Users/Akasha/project-provisioning/workspace_librecloud
Last used: 2025-10-06T12:29:43Z
-```plaintext
-
-### Switch to Another Workspace
-
-```bash
-# Option 1: Using activate
+
+
+# Option 1: Using activate
provisioning workspace activate production
# Option 2: Using switch (alias)
provisioning workspace switch production
-```plaintext
-
-Output:
-
-```plaintext
-✓ Workspace 'production' activated
+
+Output:
+✓ Workspace 'production' activated
Current workspace: production
Path: /opt/workspaces/production
ℹ All provisioning commands will now use this workspace
-```plaintext
-
-### Register a New Workspace
-
-```bash
-# Register without activating
+
+
+# Register without activating
provisioning workspace register my-project ~/workspaces/my-project
# Register and activate immediately
provisioning workspace register my-project ~/workspaces/my-project --activate
-```plaintext
-
-### Remove Workspace from Registry
-
-```bash
-# With confirmation prompt
+
+
+# With confirmation prompt
provisioning workspace remove old-workspace
# Skip confirmation
provisioning workspace remove old-workspace --force
-```plaintext
-
-**Note**: This only removes the workspace from the registry. The workspace files are NOT deleted.
-
-## Architecture
-
-### Central User Configuration
-
-All workspace information is stored in a central user configuration file:
-
-**Location**: `~/Library/Application Support/provisioning/user_config.yaml`
-
-**Structure**:
-
-```yaml
-# Active workspace (current workspace in use)
+
+**Note**: This only removes the workspace from the registry. The workspace files are NOT deleted.
+
+
+All workspace information is stored in a central user configuration file:
+**Location**: `~/Library/Application Support/provisioning/user_config.yaml`
+**Structure**:
+# Active workspace (current workspace in use)
active_workspace: "librecloud"
# Known workspaces (automatically managed)
@@ -53676,31 +51224,34 @@ metadata:
created: "2025-10-06T12:29:43Z"
last_updated: "2025-10-06T13:46:16Z"
version: "1.0.0"
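Scripts can read the active workspace straight from this file. A minimal sketch, using `sed` as a stand-in for a real YAML parser (the sample line matches the structure above; production scripts should use a proper parser such as `yq`):

```shell
# Extract the active workspace name from a user_config.yaml-style line.
line='active_workspace: "librecloud"'
name=$(printf '%s\n' "$line" | sed -n 's/^active_workspace: *"\(.*\)"$/\1/p')
echo "$name"   # librecloud
```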
-```plaintext
-
-### How It Works
-
-1. **Workspace Registration**: When you register a workspace, it's added to the `workspaces` list in `user_config.yaml`
-
-2. **Activation**: When you activate a workspace:
- - `active_workspace` is updated to the workspace name
- - The workspace's `last_used` timestamp is updated
- - All provisioning commands now use this workspace's configuration
-
-3. **Configuration Loading**: The config loader reads `active_workspace` from `user_config.yaml` and loads:
- - `workspace_path/config/provisioning.yaml`
- - `workspace_path/config/providers/*.toml`
- - `workspace_path/config/platform/*.toml`
- - `workspace_path/config/kms.toml`
-
-## Advanced Features
-
-### User Preferences
-
-You can set global user preferences that apply across all workspaces:
-
-```bash
-# Get a preference value
+
+
+
+
+1. **Workspace Registration**: When you register a workspace, it's added to the `workspaces` list in `user_config.yaml`
+
+2. **Activation**: When you activate a workspace:
+   - `active_workspace` is updated to the workspace name
+   - The workspace's `last_used` timestamp is updated
+   - All provisioning commands now use this workspace's configuration
+
+3. **Configuration Loading**: The config loader reads `active_workspace` from `user_config.yaml` and loads:
+   - `workspace_path/config/provisioning.yaml`
+   - `workspace_path/config/providers/*.toml`
+   - `workspace_path/config/platform/*.toml`
+   - `workspace_path/config/kms.toml`
+
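The layering just described can be sketched in plain shell. This is illustrative only — the key name and layer values are made up, and the real loader parses YAML/TOML files rather than string variables:

```shell
# Later layers override earlier ones; an empty value means the layer
# does not set this key.
resolve_output_format() {
  workspace_layer="yaml"                        # workspace config (base)
  provider_layer=""                             # providers/*.toml
  platform_layer=""                             # platform/*.toml
  env_layer="${PROVISIONING_OUTPUT_FORMAT:-}"   # highest priority

  value="$workspace_layer"
  for layer in "$provider_layer" "$platform_layer" "$env_layer"; do
    [ -n "$layer" ] && value="$layer"
  done
  echo "$value"
}

PROVISIONING_OUTPUT_FORMAT=""   && resolve_output_format   # yaml (no override)
PROVISIONING_OUTPUT_FORMAT="json" && resolve_output_format # json (env wins)
```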
+
+You can set global user preferences that apply across all workspaces:
+# Get a preference value
provisioning workspace get-preference editor
# Set a preference value
@@ -53708,23 +51259,19 @@ provisioning workspace set-preference editor "code"
# View all preferences
provisioning workspace preferences
-```plaintext
-
-**Available Preferences**:
-
-- `editor`: Default editor for config files (vim, code, nano, etc.)
-- `output_format`: Default output format (yaml, json, toml)
-- `confirm_delete`: Require confirmation for deletions (true/false)
-- `confirm_deploy`: Require confirmation for deployments (true/false)
-- `default_log_level`: Default log level (debug, info, warn, error)
-- `preferred_provider`: Preferred cloud provider (aws, upcloud, local)
-
-### Output Formats
-
-List workspaces in different formats:
-
-```bash
-# Table format (default)
+
+**Available Preferences**:
+
+- `editor`: Default editor for config files (vim, code, nano, etc.)
+- `output_format`: Default output format (yaml, json, toml)
+- `confirm_delete`: Require confirmation for deletions (true/false)
+- `confirm_deploy`: Require confirmation for deployments (true/false)
+- `default_log_level`: Default log level (debug, info, warn, error)
+- `preferred_provider`: Preferred cloud provider (aws, upcloud, local)
+
+
+List workspaces in different formats:
+# Table format (default)
provisioning workspace list
# JSON format
@@ -53732,32 +51279,31 @@ provisioning workspace list --format json
# YAML format
provisioning workspace list --format yaml
-```plaintext
-
-### Quiet Mode
-
-Activate workspace without output messages:
-
-```bash
-provisioning workspace activate production --quiet
-```plaintext
-
-## Workspace Requirements
-
-For a workspace to be activated, it must have:
-
-1. **Directory exists**: The workspace directory must exist on the filesystem
-
-2. **Config directory**: Must have a `config/` directory
+
+
+Activate workspace without output messages:
+provisioning workspace activate production --quiet
+
+
+For a workspace to be activated, it must have:
+
+
+1. **Directory exists**: The workspace directory must exist on the filesystem
+
+2. **Config directory**: Must have a `config/` directory
+
+workspace_name/
+└── config/
+    ├── provisioning.yaml   # Required
+    ├── providers/          # Optional
+    ├── platform/           # Optional
+    └── kms.toml            # Optional
-workspace_name/
-└── config/
-├── provisioning.yaml # Required
-├── providers/ # Optional
-├── platform/ # Optional
-└── kms.toml # Optional
-
+
+
+
3. **Main config file**: Must have `config/provisioning.yaml`
If these requirements are not met, the activation will fail with helpful error messages:
@@ -53767,167 +51313,109 @@ If these requirements are not met, the activation will fail with helpful error m
💡 Available workspaces:
[list of workspaces]
💡 Register it first with: provisioning workspace register my-project <path>
-```plaintext
-
-```plaintext
-✗ Workspace is not migrated to new config system
+
+✗ Workspace is not migrated to new config system
💡 Missing: /path/to/workspace/config
💡 Run migration: provisioning workspace migrate my-project
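The checks behind those error messages can be sketched in a few lines of shell. The function name and messages are illustrative, not the CLI's actual implementation:

```shell
# Validate a workspace the same way activation does (sketch).
check_workspace() {
  ws=$1
  [ -d "$ws" ]                          || { echo "directory not found: $ws"; return 1; }
  [ -d "$ws/config" ]                   || { echo "not migrated: missing $ws/config"; return 1; }
  [ -f "$ws/config/provisioning.yaml" ] || { echo "missing config/provisioning.yaml"; return 1; }
  echo "ok: $ws"
}

mkdir -p /tmp/demo-ws/config
touch /tmp/demo-ws/config/provisioning.yaml
check_workspace /tmp/demo-ws              # ok: /tmp/demo-ws
check_workspace /tmp/missing-ws || true   # directory not found: /tmp/missing-ws
```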
-```plaintext
-
-## Migration from Old System
-
-If you have workspaces using the old context system (`ws_{name}.yaml` files), they still work but you should register them in the new system:
-
-```bash
-# Register existing workspace
+
+
+If you have workspaces using the old context system (ws_{name}.yaml files), they still work but you should register them in the new system:
+# Register existing workspace
provisioning workspace register old-workspace ~/workspaces/old-workspace
# Activate it
provisioning workspace activate old-workspace
-```plaintext
-
-The old `ws_{name}.yaml` files are still supported for backward compatibility, but the new centralized system is recommended.
-
-## Best Practices
-
-### 1. **One Active Workspace at a Time**
-
-Only one workspace can be active at a time. All provisioning commands use the active workspace's configuration.
-
-### 2. **Use Descriptive Names**
-
-Use clear, descriptive names for your workspaces:
-
-```bash
-# ✅ Good
+
+The old ws_{name}.yaml files are still supported for backward compatibility, but the new centralized system is recommended.
+
+
+Only one workspace can be active at a time. All provisioning commands use the active workspace’s configuration.
+
+Use clear, descriptive names for your workspaces:
+# ✅ Good
provisioning workspace register production-us-east ~/workspaces/prod-us-east
provisioning workspace register dev-local ~/workspaces/dev
# ❌ Avoid
provisioning workspace register ws1 ~/workspaces/workspace1
provisioning workspace register temp ~/workspaces/t
-```plaintext
-
-### 3. **Keep Workspaces Organized**
-
-Store all workspaces in a consistent location:
-
-```bash
-~/workspaces/
+
+
+Store all workspaces in a consistent location:
+~/workspaces/
├── production/
├── staging/
├── development/
└── testing/
-```plaintext
-
-### 4. **Regular Cleanup**
-
-Remove workspaces you no longer use:
-
-```bash
-# List workspaces to see which ones are unused
+
+
+Remove workspaces you no longer use:
+# List workspaces to see which ones are unused
provisioning workspace list
# Remove old workspace
provisioning workspace remove old-workspace
-```plaintext
-
-### 5. **Backup User Config**
-
-Periodically backup your user configuration:
-
-```bash
-cp "~/Library/Application Support/provisioning/user_config.yaml" \
+
+
+Periodically backup your user configuration:
+cp "$HOME/Library/Application Support/provisioning/user_config.yaml" \
 "$HOME/Library/Application Support/provisioning/user_config.yaml.backup"
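A fixed `.backup` suffix overwrites the previous backup on every run. A timestamped variant keeps them all — a sketch, demonstrated on a temporary file rather than the real config path:

```shell
# Keep every backup by suffixing a timestamp instead of a fixed ".backup".
backup_config() {
  cp "$1" "$1.backup.$(date +%Y%m%d-%H%M%S)"
}

printf 'active_workspace: "demo"\n' > /tmp/user_config.yaml
backup_config /tmp/user_config.yaml
ls /tmp/user_config.yaml.backup.*
```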
-```plaintext
-
-## Troubleshooting
-
-### Workspace Not Found
-
-**Problem**: `✗ Workspace 'name' not found in registry`
-
-**Solution**: Register the workspace first:
-
-```bash
-provisioning workspace register name /path/to/workspace
-```plaintext
-
-### Missing Configuration
-
-**Problem**: `✗ Missing workspace configuration`
-
-**Solution**: Ensure the workspace has a `config/provisioning.yaml` file. Run migration if needed:
-
-```bash
-provisioning workspace migrate name
-```plaintext
-
-### Directory Not Found
-
-**Problem**: `✗ Workspace directory not found: /path/to/workspace`
-
-**Solution**:
-
-1. Check if the workspace was moved or deleted
-2. Update the path or remove from registry:
-
-```bash
-provisioning workspace remove name
+
+
+
+**Problem**: `✗ Workspace 'name' not found in registry`
+**Solution**: Register the workspace first:
+provisioning workspace register name /path/to/workspace
+
+
+**Problem**: `✗ Missing workspace configuration`
+**Solution**: Ensure the workspace has a `config/provisioning.yaml` file. Run migration if needed:
+provisioning workspace migrate name
+
+
+**Problem**: `✗ Workspace directory not found: /path/to/workspace`
+**Solution**:
+
+Check if the workspace was moved or deleted
+Update the path or remove from registry:
+
+provisioning workspace remove name
provisioning workspace register name /new/path
-```plaintext
-
-### Corrupted User Config
-
-**Problem**: `Error: Failed to parse user config`
-
-**Solution**: The system automatically creates a backup and regenerates the config. Check:
-
-```bash
-ls -la "~/Library/Application Support/provisioning/user_config.yaml"*
-```plaintext
-
-Restore from backup if needed:
-
-```bash
-cp "~/Library/Application Support/provisioning/user_config.yaml.backup.TIMESTAMP" \
+
+
+**Problem**: `Error: Failed to parse user config`
+**Solution**: The system automatically creates a backup and regenerates the config. Check:
+ls -la "$HOME/Library/Application Support/provisioning/user_config.yaml"*
+
+Restore from backup if needed:
+cp "$HOME/Library/Application Support/provisioning/user_config.yaml.backup.TIMESTAMP" \
 "$HOME/Library/Application Support/provisioning/user_config.yaml"
-```plaintext
-
-## CLI Commands Reference
-
-| Command | Alias | Description |
-|---------|-------|-------------|
-| `provisioning workspace activate <name>` | - | Activate a workspace |
-| `provisioning workspace switch <name>` | - | Alias for activate |
-| `provisioning workspace list` | - | List all registered workspaces |
-| `provisioning workspace active` | - | Show currently active workspace |
-| `provisioning workspace register <name> <path>` | - | Register a new workspace |
-| `provisioning workspace remove <name>` | - | Remove workspace from registry |
-| `provisioning workspace preferences` | - | Show user preferences |
-| `provisioning workspace set-preference <key> <value>` | - | Set a preference |
-| `provisioning workspace get-preference <key>` | - | Get a preference value |
-
-## Integration with Config System
-
-The workspace switching system is fully integrated with the new target-based configuration system:
-
-### Configuration Hierarchy (Priority: Low → High)
-
-```plaintext
-1. Workspace config workspace/{name}/config/provisioning.yaml
+
+
+| Command | Alias | Description |
+|---------|-------|-------------|
+| `provisioning workspace activate <name>` | - | Activate a workspace |
+| `provisioning workspace switch <name>` | - | Alias for activate |
+| `provisioning workspace list` | - | List all registered workspaces |
+| `provisioning workspace active` | - | Show currently active workspace |
+| `provisioning workspace register <name> <path>` | - | Register a new workspace |
+| `provisioning workspace remove <name>` | - | Remove workspace from registry |
+| `provisioning workspace preferences` | - | Show user preferences |
+| `provisioning workspace set-preference <key> <value>` | - | Set a preference |
+| `provisioning workspace get-preference <key>` | - | Get a preference value |
+
+
+
+The workspace switching system is fully integrated with the new target-based configuration system:
+
+1. Workspace config workspace/{name}/config/provisioning.yaml
2. Provider configs workspace/{name}/config/providers/*.toml
3. Platform configs workspace/{name}/config/platform/*.toml
4. User context ~/Library/Application Support/provisioning/ws_{name}.yaml (legacy)
5. User config ~/Library/Application Support/provisioning/user_config.yaml (new)
6. Environment variables PROVISIONING_*
-```plaintext
-
-### Example Workflow
-
-```bash
-# 1. Create and activate development workspace
+
+
+# 1. Create and activate development workspace
provisioning workspace register dev ~/workspaces/dev --activate
# 2. Work on development
@@ -53945,59 +51433,39 @@ provisioning taskserv create kubernetes
provisioning workspace switch dev
# All commands now use dev workspace config
-```plaintext
-
-## KCL Workspace Configuration
-
-Starting with v3.6.0, workspaces use **KCL (Kusion Configuration Language)** for type-safe, schema-validated configurations instead of YAML.
-
-### What Changed
-
-**Before (YAML)**:
-
-```yaml
-workspace:
- name: myworkspace
- version: 1.0.0
-paths:
- base: /path/to/workspace
-```plaintext
-
-**Now (KCL - Type-Safe)**:
-
-```kcl
-import provisioning.workspace_config as ws
-
-workspace_config = ws.WorkspaceConfig {
- workspace: {
- name: "myworkspace"
- version: "1.0.0" # Validated: must be semantic (X.Y.Z)
- }
- paths: {
- base: "/path/to/workspace"
- # ... all paths with type checking
- }
+
+
+Starting with v3.7.0, workspaces use Nickel for type-safe, schema-validated configurations.
+
+Nickel Configuration (Type-Safe):
+{
+ workspace = {
+ name = "myworkspace",
+ version = "1.0.0",
+ },
+ paths = {
+ base = "/path/to/workspace",
+ infra = "/path/to/workspace/infra",
+ config = "/path/to/workspace/config",
+ },
}
-```plaintext
-
-### Benefits of KCL Configuration
-
-- ✅ **Type Safety**: Catch configuration errors at load time, not runtime
-- ✅ **Schema Validation**: Required fields, value constraints, format checking
-- ✅ **Immutability**: Enforced immutable defaults prevent accidental changes
-- ✅ **Self-Documenting**: Schema descriptions provide instant documentation
-- ✅ **IDE Support**: KCL editor extensions with auto-completion
-
-### Viewing Workspace Configuration
-
-```bash
-# View your KCL workspace configuration
+
+
+
+- ✅ **Type Safety**: Catch configuration errors at load time, not runtime
+- ✅ **Schema Validation**: Required fields, value constraints, format checking
+- ✅ **Lazy Evaluation**: Only computes what's needed
+- ✅ **Self-Documenting**: Records provide instant documentation
+- ✅ **Merging**: Powerful record merging for composition
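The merging point refers to Nickel's record merge operator (`&`). A minimal sketch with illustrative field names (not the real workspace schema): a field annotated `| default` can be overridden by a definite value from the other record.

```nickel
# Shared defaults; `name` is overridable, `version` is fixed here.
let defaults = {
  workspace = {
    name | default = "unnamed",
    version = "1.0.0",
  },
} in
defaults & { workspace.name = "myworkspace" }
# evaluates to { workspace = { name = "myworkspace", version = "1.0.0" } }
```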
+
+
+# View your Nickel workspace configuration
provisioning workspace config show
# View in different formats
provisioning workspace config show --format=yaml # YAML output
provisioning workspace config show --format=json # JSON output
-provisioning workspace config show --format=kcl # Raw KCL file
+provisioning workspace config show --format=nickel # Raw Nickel file
# Validate configuration
provisioning workspace config validate
@@ -54005,64 +51473,23 @@ provisioning workspace config validate
# Show configuration hierarchy
provisioning workspace config hierarchy
-```plaintext
-
-### Migrating Existing Workspaces
-
-If you have workspaces with YAML configs (`provisioning.yaml`), you can migrate them to KCL:
-
-```bash
-# Migrate single workspace
-provisioning workspace migrate-config myworkspace
-
-# Migrate all workspaces
-provisioning workspace migrate-config --all
-
-# Preview changes without applying
-provisioning workspace migrate-config myworkspace --check
-
-# Create backup before migration
-provisioning workspace migrate-config myworkspace --backup
-
-# Force overwrite existing KCL files
-provisioning workspace migrate-config myworkspace --force
-```plaintext
-
-**How it works**:
-
-1. Reads existing `provisioning.yaml`
-2. Converts to KCL using workspace configuration schema
-3. Validates converted KCL against schema
-4. Backs up original YAML (optional)
-5. Saves new `provisioning.k` file
-
-### Backward Compatibility
-
-✅ **Full backward compatibility maintained**:
-
-- Existing YAML configs (`provisioning.yaml`) continue to work
-- Config loader checks for KCL files first, falls back to YAML
-- No breaking changes - migrate at your own pace
-- Both formats can coexist during transition
-
-## See Also
-
-- **Configuration Guide**: `docs/architecture/adr/ADR-010-configuration-format-strategy.md`
-- **Migration Complete**: [Migration Guide](../guides/from-scratch.md)
-- **From-Scratch Guide**: [From-Scratch Guide](../guides/from-scratch.md)
-- **KCL Patterns**: KCL Module System
-
----
-
-**Maintained By**: Infrastructure Team
-**Version**: 1.1.0 (Updated for KCL)
-**Status**: ✅ Production Ready
-**Last Updated**: 2025-12-03
+
+
+- **Configuration Guide**: `docs/architecture/adr/ADR-010-configuration-format-strategy.md`
+- **Migration Guide**: Nickel Migration
+- **From-Scratch Guide**: From-Scratch Guide
+- **Nickel Patterns**: Nickel Language Module System
+
+
+**Maintained By**: Infrastructure Team
+**Version**: 2.0.0 (Updated for Nickel)
+**Status**: ✅ Production Ready
+**Last Updated**: 2025-12-03
A centralized workspace management system has been implemented, allowing seamless switching between multiple workspaces without manually editing configuration files. This builds upon the target-based configuration system.
-
+
Centralized Configuration : Single user_config.yaml file stores all workspace information
Simple CLI Commands : Switch workspaces with a single command
@@ -54072,7 +51499,7 @@ provisioning workspace migrate-config myworkspace --force
Automatic Updates : Last-used timestamps and metadata automatically managed
Validation : Ensures workspaces have required configuration before activation
-
+
# List all registered workspaces
provisioning workspace list
@@ -54097,16 +51524,11 @@ provisioning workspace set-preference <key> <value>
# Get user preference
provisioning workspace get-preference <key>
-```plaintext
-
-## Central User Configuration
-
-**Location**: `~/Library/Application Support/provisioning/user_config.yaml`
-
-**Structure**:
-
-```yaml
-# Active workspace (current workspace in use)
+
+
+**Location**: `~/Library/Application Support/provisioning/user_config.yaml`
+**Structure**:
+# Active workspace (current workspace in use)
active_workspace: "librecloud"
# Known workspaces (automatically managed)
@@ -54133,12 +51555,9 @@ metadata:
created: "2025-10-06T12:29:43Z"
last_updated: "2025-10-06T13:46:16Z"
version: "1.0.0"
-```plaintext
-
-## Usage Example
-
-```bash
-# Start with workspace librecloud active
+
+
+# Start with workspace librecloud active
$ provisioning workspace active
Active Workspace:
Name: librecloud
@@ -54170,38 +51589,32 @@ Path: /opt/workspaces/production
# All subsequent commands use production workspace
$ provisioning server list
$ provisioning taskserv create kubernetes
-```plaintext
-
-## Integration with Config System
-
-The workspace switching system integrates seamlessly with the configuration system:
-
-1. **Active Workspace Detection**: Config loader reads `active_workspace` from `user_config.yaml`
-2. **Workspace Validation**: Ensures workspace has required `config/provisioning.yaml`
-3. **Configuration Loading**: Loads workspace-specific configs automatically
-4. **Automatic Timestamps**: Updates `last_used` on workspace activation
-
-**Configuration Hierarchy** (Priority: Low → High):
-
-```plaintext
-1. Workspace config workspace/{name}/config/provisioning.yaml
+
+
+The workspace switching system integrates seamlessly with the configuration system:
+
+1. **Active Workspace Detection**: Config loader reads `active_workspace` from `user_config.yaml`
+2. **Workspace Validation**: Ensures workspace has required `config/provisioning.yaml`
+3. **Configuration Loading**: Loads workspace-specific configs automatically
+4. **Automatic Timestamps**: Updates `last_used` on workspace activation
+
+**Configuration Hierarchy** (Priority: Low → High):
+1. Workspace config workspace/{name}/config/provisioning.yaml
2. Provider configs workspace/{name}/config/providers/*.toml
3. Platform configs workspace/{name}/config/platform/*.toml
4. User config ~/Library/Application Support/provisioning/user_config.yaml
5. Environment variables PROVISIONING_*
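Because environment variables sit at the top of the hierarchy, a one-off override never requires editing files. The `VAR=value command` prefix is plain shell behavior and scopes the override to that single invocation, shown here with a stand-in command rather than the real CLI:

```shell
# The VAR=value prefix exports the override to that one command only.
PROVISIONING_DEBUG=true sh -c 'echo "inside:  $PROVISIONING_DEBUG"'
echo "outside: ${PROVISIONING_DEBUG:-unset}"
```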
-```plaintext
-
-## Benefits
-
-- ✅ **No Manual Config Editing**: Switch workspaces with single command
-- ✅ **Multiple Workspaces**: Manage dev, staging, production simultaneously
-- ✅ **User Preferences**: Global settings across all workspaces
-- ✅ **Automatic Tracking**: Last-used timestamps, active workspace markers
-- ✅ **Safe Operations**: Validation before activation, confirmation prompts
-- ✅ **Backward Compatible**: Old `ws_{name}.yaml` files still supported
-
-For more detailed information, see [Workspace Switching Guide](../infrastructure/workspace-switching-guide.md).
+
+
+- ✅ **No Manual Config Editing**: Switch workspaces with a single command
+- ✅ **Multiple Workspaces**: Manage dev, staging, production simultaneously
+- ✅ **User Preferences**: Global settings across all workspaces
+- ✅ **Automatic Tracking**: Last-used timestamps, active workspace markers
+- ✅ **Safe Operations**: Validation before activation, confirmation prompts
+- ✅ **Backward Compatible**: Old `ws_{name}.yaml` files still supported
+
+For more detailed information, see the Workspace Switching Guide.
Complete command-line reference for Infrastructure Automation. This guide covers all commands, options, and usage patterns.
@@ -54213,11 +51626,11 @@ For more detailed information, see [Workspace Switching Guide](../infrastructure
Integration with other tools
Advanced command combinations
-
+
All provisioning commands follow this structure:
provisioning [global-options] <command> [subcommand] [command-options] [arguments]
-
+
These options can be used with any command:
| Option | Short | Description | Example |
|--------|-------|-------------|---------|
| `--infra` | `-i` | Specify infrastructure | `--infra production` |
@@ -54230,7 +51643,7 @@ For more detailed information, see [Workspace Switching Guide](../infrastructure
| `--help` | `-h` | Show help | `--help` |
-
+
| Format | Description | Use Case |
|--------|-------------|----------|
| `text` | Human-readable text | Terminal viewing |
| `json` | JSON format | Scripting, APIs |
@@ -54320,7 +51733,7 @@ provisioning server create --infra my-infra --wait
provisioning server create web-01 --infra my-infra
# Create with custom settings
-provisioning server create --infra my-infra --settings custom.k
+provisioning server create --infra my-infra --settings custom.ncl
Options:
@@ -54417,7 +51830,7 @@ provisioning server price --infra my-infra --compare
--monthly - Monthly cost estimates
--compare - Compare costs across providers
-
+
Install and configure task services on servers.
# Install service on all eligible servers
@@ -54666,7 +52079,7 @@ provisioning show servers --infra my-infra --out json
data - Raw infrastructure data
-List various types of resources.
+List resource types (servers, networks, volumes, etc.).
# List providers
provisioning list providers
@@ -54699,7 +52112,7 @@ provisioning validate config --infra my-infra
provisioning validate config --detailed --infra my-infra
# Validate specific file
-provisioning validate config settings.k --infra my-infra
+provisioning validate config settings.ncl --infra my-infra
# Quick validation
provisioning validate quick --infra my-infra
@@ -54786,16 +52199,16 @@ provisioning nu --script my-script.nu
Edit encrypted configuration files using SOPS.
# Edit encrypted file
-provisioning sops settings.k --infra my-infra
+provisioning sops settings.ncl --infra my-infra
# Encrypt new file
-provisioning sops --encrypt new-secrets.k --infra my-infra
+provisioning sops --encrypt new-secrets.ncl --infra my-infra
# Decrypt for viewing
-provisioning sops --decrypt secrets.k --infra my-infra
+provisioning sops --decrypt secrets.ncl --infra my-infra
# Rotate keys
-provisioning sops --rotate-keys secrets.k --infra my-infra
+provisioning sops --rotate-keys secrets.ncl --infra my-infra
Options:
# Submit batch workflow
-provisioning workflows batch submit my-workflow.k
+provisioning workflows batch submit my-workflow.ncl
# Monitor workflow progress
provisioning workflows batch monitor workflow-123
@@ -54879,7 +52292,7 @@ provisioning orchestrator health
4 - Permission denied
5 - Resource not found
-
+
Control behavior through environment variables:
# Enable debug mode
export PROVISIONING_DEBUG=true
@@ -54893,7 +52306,7 @@ export PROVISIONING_OUTPUT_FORMAT=json
# Disable interactive prompts
export PROVISIONING_NONINTERACTIVE=true
-
+
#!/bin/bash
# Example batch script
@@ -54919,7 +52332,7 @@ provisioning cluster create web-app --infra production --yes
echo "Deployment completed successfully"
-
+
# Get server list as JSON
servers=$(provisioning server list --infra my-infra --out json)
@@ -54971,7 +52384,7 @@ deploy_infrastructure() {
deploy_infrastructure "production"
-
+
# GitLab CI example
deploy:
script:
@@ -54982,7 +52395,7 @@ deploy:
only:
- main
-
+
# Health check script
#!/bin/bash
@@ -55023,12 +52436,12 @@ echo "Backup completed: $BACKUP_DIR"
Version : 2.0.0
Date : 2025-10-06
Status : Implemented
-
+
The provisioning system now uses a workspace-based configuration architecture where each workspace has its own complete configuration structure. This replaces the old ENV-based and template-only system.
config.defaults.toml is ONLY a template, NEVER loaded at runtime
This file exists solely as a reference template for generating workspace configurations. The system does NOT load it during operation.
-
+
Configuration is loaded in the following order (lowest to highest priority):
Workspace Config (Base): {workspace}/config/provisioning.yaml
@@ -55037,7 +52450,7 @@ echo "Backup completed: $BACKUP_DIR"
User Context : ~/Library/Application Support/provisioning/ws_{name}.yaml
Environment Variables : PROVISIONING_* (highest priority)
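Taken together, the hierarchy above amounts to a layered deep merge where later layers win. A minimal Python sketch (illustrative only; the layer contents and the `deep_merge` helper are hypothetical, and the real loader is implemented in Nushell):

```python
def deep_merge(base: dict, override: dict) -> dict:
    """Merge override into base; override wins, nested dicts merge recursively."""
    result = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = deep_merge(result[key], value)
        else:
            result[key] = value
    return result

# Layers listed lowest to highest priority, matching the hierarchy above.
layers = [
    {"debug": {"enabled": False}, "providers": {"default": "local"}},   # workspace config
    {"providers": {"default": "aws", "aws": {"region": "us-east-1"}}},  # provider configs
    {"output": {"format": "yaml"}},                                     # platform configs
    {"debug": {"enabled": True}},                                       # user context
    {"output": {"format": "json"}},                                     # PROVISIONING_* env
]

config = {}
for layer in layers:
    config = deep_merge(config, layer)

print(config["debug"]["enabled"])      # True  (user context override)
print(config["providers"]["default"])  # aws
print(config["output"]["format"])      # json  (env wins)
```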
-
+
When a workspace is initialized, the following structure is created:
{workspace}/
├── config/
@@ -55060,66 +52473,53 @@ echo "Backup completed: $BACKUP_DIR"
│ └── keys/
├── generated/ # Generated files
└── .gitignore # Workspace gitignore
-```plaintext
-
-## Template System
-
-Templates are located at: `/Users/Akasha/project-provisioning/provisioning/config/templates/`
-
-### Available Templates
-
-1. **workspace-provisioning.yaml.template** - Main workspace configuration
-2. **provider-aws.toml.template** - AWS provider configuration
-3. **provider-local.toml.template** - Local provider configuration
-4. **provider-upcloud.toml.template** - UpCloud provider configuration
-5. **kms.toml.template** - KMS configuration
-6. **user-context.yaml.template** - User context configuration
-
-### Template Variables
-
-Templates support the following interpolation variables:
-
-- `{{workspace.name}}` - Workspace name
-- `{{workspace.path}}` - Absolute path to workspace
-- `{{now.iso}}` - Current timestamp in ISO format
-- `{{env.HOME}}` - User's home directory
-- `{{env.*}}` - Environment variables (safe list only)
-- `{{paths.base}}` - Base path (after config load)
-
-## Workspace Initialization
-
-### Command
-
-```bash
-# Using the workspace init function
+
+
+Templates are located at: /Users/Akasha/project-provisioning/provisioning/config/templates/
+
+
+workspace-provisioning.yaml.template - Main workspace configuration
+provider-aws.toml.template - AWS provider configuration
+provider-local.toml.template - Local provider configuration
+provider-upcloud.toml.template - UpCloud provider configuration
+kms.toml.template - KMS configuration
+user-context.yaml.template - User context configuration
+
+
+Templates support the following interpolation variables:
+
+{{workspace.name}} - Workspace name
+{{workspace.path}} - Absolute path to workspace
+{{now.iso}} - Current timestamp in ISO format
+{{env.HOME}} - User’s home directory
+{{env.*}} - Environment variables (safe list only)
+{{paths.base}} - Base path (after config load)
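Interpolation of these variables can be sketched as a regex substitution over the template text. A hedged Python sketch (the `SAFE_ENV` allowlist contents and the `render_template` name are assumptions; unknown variables are left untouched):

```python
import os
import re
from datetime import datetime, timezone

SAFE_ENV = {"HOME", "USER"}  # env vars exposed to templates (safe list only)

def render_template(text: str, workspace: dict) -> str:
    values = {
        "workspace.name": workspace["name"],
        "workspace.path": workspace["path"],
        "now.iso": datetime.now(timezone.utc).isoformat(),
    }
    def sub(match):
        key = match.group(1)
        if key in values:
            return values[key]
        if key.startswith("env."):
            name = key[len("env."):]
            if name in SAFE_ENV:
                return os.environ.get(name, "")
        return match.group(0)  # leave unknown variables untouched
    return re.sub(r"\{\{([^}]+)\}\}", sub, text)

tmpl = 'name: "{{workspace.name}}"\npath: "{{workspace.path}}"'
print(render_template(tmpl, {"name": "my-workspace", "path": "/ws/my-workspace"}))
```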
+
+
+
+# Using the workspace init function
nu -c "use provisioning/core/nulib/lib_provisioning/workspace/init.nu *; workspace-init 'my-workspace' '/path/to/workspace' --providers ['aws' 'local'] --activate"
-```plaintext
-
-### Process
-
-1. **Create Directory Structure**: All necessary directories
-2. **Generate Config from Template**: Creates `config/provisioning.yaml`
-3. **Generate Provider Configs**: For each specified provider
-4. **Generate KMS Config**: Security configuration
-5. **Create User Context** (if --activate): User-specific overrides
-6. **Create .gitignore**: Ignore runtime/cache files
-
-## User Context
-
-User context files are stored per workspace:
-
-**Location**: `~/Library/Application Support/provisioning/ws_{workspace_name}.yaml`
-
-### Purpose
-
-- Store user-specific overrides (debug settings, output preferences)
-- Mark active workspace
-- Override workspace paths if needed
-
-### Example
-
-```yaml
-workspace:
+
+
+
+Create Directory Structure : All necessary directories
+Generate Config from Template : Creates config/provisioning.yaml
+Generate Provider Configs : For each specified provider
+Generate KMS Config : Security configuration
+Create User Context (if --activate): User-specific overrides
+Create .gitignore : Ignore runtime/cache files
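The initialization steps above can be sketched as follows, assuming the directory layout shown earlier (a simplified Python sketch; the real command is the Nushell `workspace-init`, and the directory subset here is illustrative):

```python
from pathlib import Path
import tempfile

# Subset of the workspace layout shown above (illustrative).
DIRS = ["config/providers", "config/platform", ".kms/keys", "generated"]
GITIGNORE = ".cache/\n.runtime/\n.providers/\n.kms/keys/\ngenerated/\n*.log\n"

def workspace_init(root: Path) -> None:
    """Create directories, a minimal main config, and the workspace .gitignore."""
    for d in DIRS:
        (root / d).mkdir(parents=True, exist_ok=True)
    (root / "config" / "provisioning.yaml").write_text(
        'workspace:\n  name: "my-workspace"\n')
    (root / ".gitignore").write_text(GITIGNORE)

root = Path(tempfile.mkdtemp()) / "my-workspace"
workspace_init(root)
print(sorted(p.name for p in root.iterdir()))
```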
+
+
+User context files are stored per workspace:
+Location : ~/Library/Application Support/provisioning/ws_{workspace_name}.yaml
+
+
+Store user-specific overrides (debug settings, output preferences)
+Mark active workspace
+Override workspace paths if needed
+
+
+workspace:
name: "my-workspace"
path: "/path/to/my-workspace"
active: true
@@ -55133,144 +52533,99 @@ output:
providers:
default: "aws"
-```plaintext
-
-## Configuration Loading Process
-
-### 1. Determine Active Workspace
-
-```nushell
-# Check user config directory for active workspace
+
+
+
+# Check user config directory for active workspace
let user_config_dir = ~/Library/Application Support/provisioning/
let active_workspace = (find workspace with active: true in ws_*.yaml files)
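The active-workspace lookup above can be sketched in Python (illustrative; the real implementation is Nushell, and the YAML check here is a naive line match rather than a full parse):

```python
from pathlib import Path
import tempfile

def find_active_workspace(user_config_dir: Path):
    """Scan ws_*.yaml files for the one marked active: true."""
    for f in sorted(user_config_dir.glob("ws_*.yaml")):
        if "active: true" in f.read_text():
            return f.stem[len("ws_"):]
    return None

d = Path(tempfile.mkdtemp())
(d / "ws_dev.yaml").write_text("workspace:\n  name: dev\n  active: false\n")
(d / "ws_prod.yaml").write_text("workspace:\n  name: prod\n  active: true\n")
print(find_active_workspace(d))  # prod
```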
-```plaintext
-
-### 2. Load Workspace Config
-
-```nushell
-# Load main workspace config
+
+
+# Load main workspace config
let workspace_config = {workspace.path}/config/provisioning.yaml
-```plaintext
-
-### 3. Load Provider Configs
-
-```nushell
-# Merge all provider configs
+
+
+# Merge all provider configs
for provider in {workspace.path}/config/providers/*.toml {
merge provider config
}
-```plaintext
-
-### 4. Load Platform Configs
-
-```nushell
-# Merge all platform configs
+
+
+# Merge all platform configs
for platform in {workspace.path}/config/platform/*.toml {
merge platform config
}
-```plaintext
-
-### 5. Apply User Context
-
-```nushell
-# Apply user-specific overrides
+
+
+# Apply user-specific overrides
let user_context = ~/Library/Application Support/provisioning/ws_{name}.yaml
merge user_context (highest config priority)
-```plaintext
-
-### 6. Apply Environment Variables
-
-```nushell
-# Final overrides from environment
+
+
+# Final overrides from environment
PROVISIONING_DEBUG=true
PROVISIONING_LOG_LEVEL=debug
PROVISIONING_PROVIDER=aws
# etc.
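The final environment-variable pass can be sketched as a mapping from `PROVISIONING_*` variables onto config keys (the `ENV_MAP` pairing below is illustrative, not the complete variable list):

```python
import os

# Hypothetical mapping from PROVISIONING_* variables to config paths.
ENV_MAP = {
    "PROVISIONING_DEBUG": ("debug", "enabled"),
    "PROVISIONING_LOG_LEVEL": ("debug", "log_level"),
    "PROVISIONING_PROVIDER": ("providers", "default"),
}

def apply_env_overrides(config: dict, environ=os.environ) -> dict:
    """Apply env vars as the highest-priority layer; booleans are coerced."""
    for var, (section, key) in ENV_MAP.items():
        if var in environ:
            raw = environ[var]
            value = {"true": True, "false": False}.get(raw.lower(), raw)
            config.setdefault(section, {})[key] = value
    return config

config = apply_env_overrides(
    {"providers": {"default": "local"}},
    {"PROVISIONING_DEBUG": "true", "PROVISIONING_PROVIDER": "aws"},
)
print(config)
```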
-```plaintext
-
-## Migration from Old System
-
-### Before (ENV-based)
-
-```bash
-export PROVISIONING=/usr/local/provisioning
+
+
+
+export PROVISIONING=/usr/local/provisioning
export PROVISIONING_INFRA_PATH=/path/to/infra
export PROVISIONING_DEBUG=true
# ... many ENV variables
-```plaintext
-
-### After (Workspace-based)
-
-```bash
-# Initialize workspace
+
+
+# Initialize workspace
workspace-init "production" "/workspaces/prod" --providers ["aws"] --activate
# All config is now in workspace
# No ENV variables needed (except for overrides)
-```plaintext
-
-### Breaking Changes
-
-1. **`config.defaults.toml` NOT loaded** - Only used as template
-2. **Workspace required** - Must have active workspace or be in workspace directory
-3. **New config locations** - User config in `~/Library/Application Support/provisioning/`
-4. **YAML main config** - `provisioning.yaml` instead of TOML
-
-## Workspace Management Commands
-
-### Initialize Workspace
-
-```nushell
-use provisioning/core/nulib/lib_provisioning/workspace/init.nu *
+
+
+
+config.defaults.toml NOT loaded - Only used as template
+Workspace required - Must have active workspace or be in workspace directory
+New config locations - User config in ~/Library/Application Support/provisioning/
+YAML main config - provisioning.yaml instead of TOML
+
+
+
+use provisioning/core/nulib/lib_provisioning/workspace/init.nu *
workspace-init "my-workspace" "/path/to/workspace" --providers ["aws" "local"] --activate
-```plaintext
-
-### List Workspaces
-
-```nushell
-workspace-list
-```plaintext
-
-### Activate Workspace
-
-```nushell
-workspace-activate "my-workspace"
-```plaintext
-
-### Get Active Workspace
-
-```nushell
-workspace-get-active
-```plaintext
-
-## Implementation Files
-
-### Core Files
-
-1. **Template Directory**: `/Users/Akasha/project-provisioning/provisioning/config/templates/`
-2. **Workspace Init**: `/Users/Akasha/project-provisioning/provisioning/core/nulib/lib_provisioning/workspace/init.nu`
-3. **Config Loader**: `/Users/Akasha/project-provisioning/provisioning/core/nulib/lib_provisioning/config/loader.nu`
-
-### Key Changes in Config Loader
-
-#### Removed
-
-- `get-defaults-config-path()` - No longer loads config.defaults.toml
-- Old hierarchy with user/project/infra TOML files
-
-#### Added
-
-- `get-active-workspace()` - Finds active workspace from user config
-- Support for YAML config files
-- Provider and platform config merging
-- User context loading
-
-## Configuration Schema
-
-### Main Workspace Config (provisioning.yaml)
-
-```yaml
-workspace:
+
+
+workspace-list
+
+
+workspace-activate "my-workspace"
+
+
+workspace-get-active
+
+
+
+
+Template Directory : /Users/Akasha/project-provisioning/provisioning/config/templates/
+Workspace Init : /Users/Akasha/project-provisioning/provisioning/core/nulib/lib_provisioning/workspace/init.nu
+Config Loader : /Users/Akasha/project-provisioning/provisioning/core/nulib/lib_provisioning/config/loader.nu
+
+
+
+
+get-defaults-config-path() - No longer loads config.defaults.toml
+Old hierarchy with user/project/infra TOML files
+
+
+
+get-active-workspace() - Finds active workspace from user config
+Support for YAML config files
+Provider and platform config merging
+User context loading
+
+
+
+workspace:
name: string
version: string
created: timestamp
@@ -55296,12 +52651,9 @@ providers:
default: string
# ... all other sections
-```plaintext
-
-### Provider Config (providers/*.toml)
-
-```toml
-[provider]
+
+
+[provider]
name = "aws"
enabled = true
workspace = "workspace-name"
@@ -55313,12 +52665,9 @@ region = "us-east-1"
[provider.paths]
base = "{workspace}/.providers/aws"
cache = "{workspace}/.providers/aws/cache"
-```plaintext
-
-### User Context (ws_{name}.yaml)
-
-```yaml
-workspace:
+
+
+workspace:
name: string
path: string
active: bool
@@ -55329,98 +52678,78 @@ debug:
output:
format: string
-```plaintext
-
-## Benefits
-
-1. **No Template Loading**: config.defaults.toml is template-only
-2. **Workspace Isolation**: Each workspace is self-contained
-3. **Explicit Configuration**: No hidden defaults from ENV
-4. **Clear Hierarchy**: Predictable override behavior
-5. **Multi-Workspace Support**: Easy switching between workspaces
-6. **User Overrides**: Per-workspace user preferences
-7. **Version Control**: Workspace configs can be committed (except secrets)
-
-## Security Considerations
-
-### Generated .gitignore
-
-The workspace .gitignore excludes:
-
-- `.cache/` - Cache files
-- `.runtime/` - Runtime data
-- `.providers/` - Provider state
-- `.kms/keys/` - Secret keys
-- `generated/` - Generated files
-- `*.log` - Log files
-
-### Secret Management
-
-- KMS keys stored in `.kms/keys/` (gitignored)
-- SOPS config references keys, doesn't store them
-- Provider credentials in user-specific locations (not workspace)
-
-## Troubleshooting
-
-### No Active Workspace Error
-
-```plaintext
-Error: No active workspace found. Please initialize or activate a workspace.
-```plaintext
-
-**Solution**: Initialize or activate a workspace:
-
-```bash
-workspace-init "my-workspace" "/path/to/workspace" --activate
-```plaintext
-
-### Config File Not Found
-
-```plaintext
-Error: Required configuration file not found: {workspace}/config/provisioning.yaml
-```plaintext
-
-**Solution**: The workspace config is corrupted or deleted. Re-initialize:
-
-```bash
-workspace-init "workspace-name" "/existing/path" --providers ["aws"]
-```plaintext
-
-### Provider Not Configured
-
-**Solution**: Add provider config to workspace:
-
-```bash
-# Generate provider config manually
-generate-provider-config "/workspace/path" "workspace-name" "aws"
-```plaintext
-
-## Future Enhancements
-
-1. **Workspace Templates**: Pre-configured workspace templates (dev, prod, test)
-2. **Workspace Import/Export**: Share workspace configurations
-3. **Remote Workspace**: Load workspace from remote Git repository
-4. **Workspace Validation**: Comprehensive workspace health checks
-5. **Config Migration Tool**: Automated migration from old ENV-based system
-
-## Summary
-
-- **config.defaults.toml is ONLY a template** - Never loaded at runtime
-- **Workspaces are self-contained** - Complete config structure generated from templates
-- **New hierarchy**: Workspace → Provider → Platform → User Context → ENV
-- **User context for overrides** - Stored in ~/Library/Application Support/provisioning/
-- **Clear, explicit configuration** - No hidden defaults
-
-## Related Documentation
-
-- Template files: `provisioning/config/templates/`
-- Workspace init: `provisioning/core/nulib/lib_provisioning/workspace/init.nu`
-- Config loader: `provisioning/core/nulib/lib_provisioning/config/loader.nu`
-- User guide: `docs/user/workspace-management.md`
+
+
+No Template Loading : config.defaults.toml is template-only
+Workspace Isolation : Each workspace is self-contained
+Explicit Configuration : No hidden defaults from ENV
+Clear Hierarchy : Predictable override behavior
+Multi-Workspace Support : Easy switching between workspaces
+User Overrides : Per-workspace user preferences
+Version Control : Workspace configs can be committed (except secrets)
+
+
+
+The workspace .gitignore excludes:
+
+.cache/ - Cache files
+.runtime/ - Runtime data
+.providers/ - Provider state
+.kms/keys/ - Secret keys
+generated/ - Generated files
+*.log - Log files
+
+
+
+KMS keys stored in .kms/keys/ (gitignored)
+SOPS config references keys, doesn’t store them
+Provider credentials in user-specific locations (not workspace)
+
+
+
+Error: No active workspace found. Please initialize or activate a workspace.
+
+Solution : Initialize or activate a workspace:
+workspace-init "my-workspace" "/path/to/workspace" --activate
+
+
+Error: Required configuration file not found: {workspace}/config/provisioning.yaml
+
+Solution : The workspace config is corrupted or deleted. Re-initialize:
+workspace-init "workspace-name" "/existing/path" --providers ["aws"]
+
+
+Solution : Add provider config to workspace:
+# Generate provider config manually
+generate-provider-config "/workspace/path" "workspace-name" "aws"
+
+
+
+Workspace Templates : Pre-configured workspace templates (dev, prod, test)
+Workspace Import/Export : Share workspace configurations
+Remote Workspace : Load workspace from remote Git repository
+Workspace Validation : Comprehensive workspace health checks
+Config Migration Tool : Automated migration from old ENV-based system
+
+
+
+config.defaults.toml is ONLY a template - Never loaded at runtime
+Workspaces are self-contained - Complete config structure generated from templates
+New hierarchy : Workspace → Provider → Platform → User Context → ENV
+User context for overrides - Stored in ~/Library/Application Support/provisioning/
+Clear, explicit configuration - No hidden defaults
+
+
+
+Template files: provisioning/config/templates/
+Workspace init: provisioning/core/nulib/lib_provisioning/workspace/init.nu
+Config loader: provisioning/core/nulib/lib_provisioning/config/loader.nu
+User guide: docs/user/workspace-management.md
+
This guide covers generating and managing temporary credentials (dynamic secrets) instead of using static secrets. See the Quick Reference section below for fast lookup.
-
+
Quick Start : Generate temporary credentials instead of using static secrets
@@ -55444,10 +52773,10 @@ generate-provider-config "/workspace/path" "workspace-name" "aws"
Type TTL Range Renewable Use Case
-AWS STS 15min - 12h ✅ Yes Cloud resource provisioning
-SSH Keys 10min - 24h ❌ No Temporary server access
-UpCloud 30min - 8h ❌ No UpCloud API operations
-Vault 5min - 24h ✅ Yes Any Vault-backed secret
+AWS STS    15 min - 12 h   ✅ Yes   Cloud resource provisioning
+SSH Keys   10 min - 24 h   ❌ No    Temporary server access
+UpCloud    30 min - 8 h    ❌ No    UpCloud API operations
+Vault      5 min - 24 h    ✅ Yes   Any Vault-backed secret
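The TTL ranges in the table can be enforced with a simple range check before a secret is issued. A Python sketch (the `validate_ttl` helper and the type keys are hypothetical; the ranges come from the table above):

```python
from datetime import timedelta

# TTL ranges from the table above.
TTL_RANGES = {
    "aws_sts": (timedelta(minutes=15), timedelta(hours=12)),
    "ssh_key": (timedelta(minutes=10), timedelta(hours=24)),
    "upcloud": (timedelta(minutes=30), timedelta(hours=8)),
    "vault":   (timedelta(minutes=5),  timedelta(hours=24)),
}

def validate_ttl(secret_type: str, ttl: timedelta) -> timedelta:
    """Reject a requested TTL outside the allowed range for this secret type."""
    lo, hi = TTL_RANGES[secret_type]
    if not (lo <= ttl <= hi):
        raise ValueError(f"{secret_type} TTL must be between {lo} and {hi}")
    return ttl

print(validate_ttl("aws_sts", timedelta(hours=1)))  # accepted: 1:00:00
```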
@@ -55516,7 +52845,7 @@ rm ~/.ssh/temp_key
secrets revoke ($key.id) --reason "fixed"
-
+
File : provisioning/platform/orchestrator/config.defaults.toml
[secrets]
default_ttl_hours = 1
@@ -55531,7 +52860,7 @@ upcloud_username = "${UPCLOUD_USER}"
upcloud_password = "${UPCLOUD_PASS}"
-
+
→ Check service initialization
@@ -55539,9 +52868,9 @@ upcloud_password = "${UPCLOUD_PASS}"
→ Generate new secret instead
-→ Check provider requirements (e.g., AWS needs ‘role’)
+→ Check provider requirements (for example, AWS needs 'role')
-
+
✅ No static credentials stored
✅ Automatic expiration (1-12 hours)
@@ -55551,13 +52880,13 @@ upcloud_password = "${UPCLOUD_PASS}"
✅ TLS in transit
-
+
Orchestrator logs : provisioning/platform/orchestrator/data/orchestrator.log
Debug secrets : secrets list | where is_expired == true
Version : 1.0.0 | Date : 2025-10-06
-
+
# Check current mode
provisioning mode current
@@ -55569,84 +52898,69 @@ provisioning mode switch <mode-name>
# Validate mode configuration
provisioning mode validate
-```plaintext
-
----
-
-## Available Modes
-
-| Mode | Use Case | Auth | Orchestrator | OCI Registry |
-|------|----------|------|--------------|--------------|
-| **solo** | Local development | None | Local binary | Local Zot (optional) |
-| **multi-user** | Team collaboration | Token (JWT) | Remote | Remote Harbor |
-| **cicd** | CI/CD pipelines | Token (CI injected) | Remote | Remote Harbor |
-| **enterprise** | Production | mTLS | Kubernetes HA | Harbor HA + DR |
-
----
-
-## Mode Comparison
-
-### Solo Mode
-
-- ✅ **Best for**: Individual developers
-- 🔐 **Authentication**: None
-- 🚀 **Services**: Local orchestrator only
-- 📦 **Extensions**: Local filesystem
-- 🔒 **Workspace Locking**: Disabled
-- 💾 **Resource Limits**: Unlimited
-
-### Multi-User Mode
-
-- ✅ **Best for**: Development teams (5-20 developers)
-- 🔐 **Authentication**: Token (JWT, 24h expiry)
-- 🚀 **Services**: Remote orchestrator, control-center, DNS, git
-- 📦 **Extensions**: OCI registry (Harbor)
-- 🔒 **Workspace Locking**: Enabled (Gitea provider)
-- 💾 **Resource Limits**: 10 servers, 32 cores, 128GB per user
-
-### CI/CD Mode
-
-- ✅ **Best for**: Automated pipelines
-- 🔐 **Authentication**: Token (1h expiry, CI/CD injected)
-- 🚀 **Services**: Remote orchestrator, DNS, git
-- 📦 **Extensions**: OCI registry (always pull latest)
-- 🔒 **Workspace Locking**: Disabled (stateless)
-- 💾 **Resource Limits**: 5 servers, 16 cores, 64GB per pipeline
-
-### Enterprise Mode
-
-- ✅ **Best for**: Large enterprises with strict compliance
-- 🔐 **Authentication**: mTLS (TLS 1.3)
-- 🚀 **Services**: All services on Kubernetes (HA)
-- 📦 **Extensions**: OCI registry (signature verification)
-- 🔒 **Workspace Locking**: Required (etcd provider)
-- 💾 **Resource Limits**: 20 servers, 64 cores, 256GB per user
-
----
-
-## Common Operations
-
-### Initialize Mode System
-
-```bash
-provisioning mode init
-```plaintext
-
-### Check Current Mode
-
-```bash
-provisioning mode current
+
+
+
+Mode         Use Case             Auth                  Orchestrator    OCI Registry
+solo         Local development    None                  Local binary    Local Zot (optional)
+multi-user   Team collaboration   Token (JWT)           Remote          Remote Harbor
+cicd         CI/CD pipelines      Token (CI injected)   Remote          Remote Harbor
+enterprise   Production           mTLS                  Kubernetes HA   Harbor HA + DR
+
+
+
+
+
+
+✅ Best for : Individual developers
+🔐 Authentication : None
+🚀 Services : Local orchestrator only
+📦 Extensions : Local filesystem
+🔒 Workspace Locking : Disabled
+💾 Resource Limits : Unlimited
+
+
+
+✅ Best for : Development teams (5-20 developers)
+🔐 Authentication : Token (JWT, 24h expiry)
+🚀 Services : Remote orchestrator, control-center, DNS, git
+📦 Extensions : OCI registry (Harbor)
+🔒 Workspace Locking : Enabled (Gitea provider)
+💾 Resource Limits : 10 servers, 32 cores, 128 GB per user
+
+
+
+✅ Best for : Automated pipelines
+🔐 Authentication : Token (1h expiry, CI/CD injected)
+🚀 Services : Remote orchestrator, DNS, git
+📦 Extensions : OCI registry (always pull latest)
+🔒 Workspace Locking : Disabled (stateless)
+💾 Resource Limits : 5 servers, 16 cores, 64 GB per pipeline
+
+
+
+✅ Best for : Large enterprises with strict compliance
+🔐 Authentication : mTLS (TLS 1.3)
+🚀 Services : All services on Kubernetes (HA)
+📦 Extensions : OCI registry (signature verification)
+🔒 Workspace Locking : Required (etcd provider)
+💾 Resource Limits : 20 servers, 64 cores, 256 GB per user
+
+
+
+
+provisioning mode init
+
+
+provisioning mode current
# Output:
# mode: solo
# configured: true
# config_file: ~/.provisioning/config/active-mode.yaml
-```plaintext
-
-### List All Modes
-
-```bash
-provisioning mode list
+
+
+provisioning mode list
# Output:
# ┌───────────────┬───────────────────────────────────┬─────────┐
@@ -55657,12 +52971,9 @@ provisioning mode list
# │ cicd │ CI/CD pipeline execution │ │
# │ enterprise │ Production enterprise deployment │ │
# └───────────────┴───────────────────────────────────┴─────────┘
-```plaintext
-
-### Switch Mode
-
-```bash
-# Switch with confirmation
+
+
+# Switch with confirmation
provisioning mode switch multi-user
# Dry run (preview changes)
@@ -55670,32 +52981,23 @@ provisioning mode switch multi-user --dry-run
# With validation
provisioning mode switch multi-user --validate
-```plaintext
-
-### Show Mode Details
-
-```bash
-# Show current mode
+
+
+# Show current mode
provisioning mode show
# Show specific mode
provisioning mode show enterprise
-```plaintext
-
-### Validate Mode
-
-```bash
-# Validate current mode
+
+
+# Validate current mode
provisioning mode validate
# Validate specific mode
provisioning mode validate cicd
-```plaintext
-
-### Compare Modes
-
-```bash
-provisioning mode compare solo multi-user
+
+
+provisioning mode compare solo multi-user
# Output shows differences in:
# - Authentication
@@ -55703,16 +53005,11 @@ provisioning mode compare solo multi-user
# - Extension sources
# - Workspace locking
# - Security settings
-```plaintext
-
----
-
-## OCI Registry Management
-
-### Solo Mode Only
-
-```bash
-# Start local OCI registry
+
+
+
+
+# Start local OCI registry
provisioning mode oci-registry start
# Check registry status
@@ -55723,18 +53020,12 @@ provisioning mode oci-registry logs
# Stop registry
provisioning mode oci-registry stop
-```plaintext
-
-**Note**: OCI registry management only works in solo mode with local deployment.
-
----
-
-## Mode-Specific Workflows
-
-### Solo Mode Workflow
-
-```bash
-# 1. Initialize (defaults to solo)
+
+Note : OCI registry management only works in solo mode with local deployment.
+
+
+
+# 1. Initialize (defaults to solo)
provisioning workspace init
# 2. Start orchestrator
@@ -55749,12 +53040,9 @@ provisioning server create web-01 --check
provisioning taskserv create kubernetes
# Extensions loaded from local filesystem
-```plaintext
-
-### Multi-User Mode Workflow
-
-```bash
-# 1. Switch to multi-user mode
+
+
+# 1. Switch to multi-user mode
provisioning mode switch multi-user
# 2. Authenticate
@@ -55773,12 +53061,9 @@ provisioning server create web-01
# 6. Unlock workspace
provisioning workspace unlock my-infra
-```plaintext
-
-### CI/CD Mode Workflow
-
-```yaml
-# GitLab CI example
+
+
+# GitLab CI example
deploy:
stage: deploy
script:
@@ -55799,12 +53084,9 @@ deploy:
after_script:
- provisioning workspace cleanup
-```plaintext
-
-### Enterprise Mode Workflow
-
-```bash
-# 1. Switch to enterprise mode
+
+
+# 1. Switch to enterprise mode
provisioning mode switch enterprise
# 2. Verify Kubernetes connectivity
@@ -55829,63 +53111,42 @@ provisioning infra create
# 8. Release workspace
provisioning workspace unlock prod-deployment
-```plaintext
-
----
-
-## Configuration Files
-
-### Mode Templates
-
-```plaintext
-workspace/config/modes/
+
+
+
+
+workspace/config/modes/
├── solo.yaml # Solo mode configuration
├── multi-user.yaml # Multi-user mode configuration
├── cicd.yaml # CI/CD mode configuration
└── enterprise.yaml # Enterprise mode configuration
-```plaintext
-
-### Active Mode Configuration
-
-```plaintext
-~/.provisioning/config/active-mode.yaml
-```plaintext
-
-This file is created/updated when you switch modes.
-
----
-
-## OCI Registry Namespaces
-
-All modes use the following OCI registry namespaces:
-
-| Namespace | Purpose | Example |
-|-----------|---------|---------|
-| `*-extensions` | Extension artifacts | `provisioning-extensions/upcloud:latest` |
-| `*-kcl` | KCL package artifacts | `provisioning-kcl/lib:v1.0.0` |
-| `*-platform` | Platform service images | `provisioning-platform/orchestrator:latest` |
-| `*-test` | Test environment images | `provisioning-test/ubuntu:22.04` |
-
-**Note**: Prefix varies by mode (`dev-`, `provisioning-`, `cicd-`, `prod-`)
-
----
-
-## Troubleshooting
-
-### Mode switch fails
-
-```bash
-# Validate mode first
+
+
+~/.provisioning/config/active-mode.yaml
+
+This file is created/updated when you switch modes.
+
+
+All modes use the following OCI registry namespaces:
+Namespace      Purpose                   Example
+*-extensions   Extension artifacts       provisioning-extensions/upcloud:latest
+*-schemas      Nickel schema artifacts   provisioning-schemas/lib:v1.0.0
+*-platform     Platform service images   provisioning-platform/orchestrator:latest
+*-test         Test environment images   provisioning-test/ubuntu:22.04
+
+
+Note : Prefix varies by mode (dev-, provisioning-, cicd-, prod-)
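Namespace names can be derived by prepending the mode prefix to the namespace suffix. In the sketch below the exact mode-to-prefix pairing is an assumption inferred from the order in which the prefixes are listed:

```python
# Assumed mode-to-prefix mapping (the prefixes dev-, provisioning-,
# cicd-, prod- are from the note above; the pairing is an assumption).
MODE_PREFIX = {
    "solo": "dev-",
    "multi-user": "provisioning-",
    "cicd": "cicd-",
    "enterprise": "prod-",
}

def registry_namespace(mode: str, kind: str) -> str:
    """Build a namespace such as 'cicd-extensions' from mode prefix + suffix."""
    return f"{MODE_PREFIX[mode]}{kind}"

print(registry_namespace("cicd", "extensions"))  # cicd-extensions
print(registry_namespace("solo", "platform"))    # dev-platform
```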
+
+
+
+# Validate mode first
provisioning mode validate <mode-name>
# Check runtime requirements
provisioning mode validate <mode-name> --check-requirements
-```plaintext
-
-### Cannot start OCI registry (solo mode)
-
-```bash
-# Check if registry binary is installed
+
+
+# Check if registry binary is installed
which zot
# Install Zot
@@ -55894,12 +53155,9 @@ which zot
# Check if port 5000 is available
lsof -i :5000
-```plaintext
-
-### Authentication fails (multi-user/cicd/enterprise)
-
-```bash
-# Check token expiry
+
+
+# Check token expiry
provisioning auth status
# Re-authenticate
@@ -55908,12 +53166,9 @@ provisioning auth login
# For enterprise mTLS, verify certificates
ls -la /etc/provisioning/certs/
# Should contain: client.crt, client.key, ca.crt
-```plaintext
-
-### Workspace locking issues (multi-user/enterprise)
-
-```bash
-# Check lock status
+
+
+# Check lock status
provisioning workspace lock-status <workspace-name>
# Force unlock (use with caution)
@@ -55925,12 +53180,9 @@ curl -I https://git.company.local
# Enterprise: Check etcd cluster
etcdctl endpoint health
-```plaintext
-
-### OCI registry connection fails
-
-```bash
-# Test registry connectivity
+
+
+# Test registry connectivity
curl https://harbor.company.local/v2/
# Check authentication token
@@ -55941,111 +53193,86 @@ ping harbor.company.local
# For Harbor, check credentials
docker login harbor.company.local
-```plaintext
-
----
-
-## Environment Variables
-
-| Variable | Purpose | Example |
-|----------|---------|---------|
-| `PROVISIONING_MODE` | Override active mode | `export PROVISIONING_MODE=cicd` |
-| `PROVISIONING_WORKSPACE_CONFIG` | Override config location | `~/.provisioning/config` |
-| `PROVISIONING_PROJECT_ROOT` | Project root directory | `/opt/project-provisioning` |
-
----
-
-## Best Practices
-
-### 1. Use Appropriate Mode
-
-- **Solo**: Individual development, experimentation
-- **Multi-User**: Team collaboration, shared infrastructure
-- **CI/CD**: Automated testing and deployment
-- **Enterprise**: Production deployments, compliance requirements
-
-### 2. Validate Before Switching
-
-```bash
-provisioning mode validate <mode-name>
-```plaintext
-
-### 3. Backup Active Configuration
-
-```bash
-# Automatic backup created when switching
+
+
+
+Variable                        Purpose                    Example
+PROVISIONING_MODE               Override active mode       export PROVISIONING_MODE=cicd
+PROVISIONING_WORKSPACE_CONFIG   Override config location   ~/.provisioning/config
+PROVISIONING_PROJECT_ROOT       Project root directory     /opt/project-provisioning
+
+
+
+
+
+
+Solo : Individual development, experimentation
+Multi-User : Team collaboration, shared infrastructure
+CI/CD : Automated testing and deployment
+Enterprise : Production deployments, compliance requirements
+
+
+provisioning mode validate <mode-name>
+
+
+# Automatic backup created when switching
ls ~/.provisioning/config/active-mode.yaml.backup
-```plaintext
-
-### 4. Use Check Mode
-
-```bash
-provisioning server create --check
-```plaintext
-
-### 5. Lock Workspaces in Multi-User/Enterprise
-
-```bash
-provisioning workspace lock <workspace-name>
+
+
+provisioning server create --check
+
+
+provisioning workspace lock <workspace-name>
# ... make changes ...
provisioning workspace unlock <workspace-name>
-```plaintext
-
-### 6. Pull Extensions from OCI (Multi-User/CI/CD/Enterprise)
-
-```bash
-# Don't use local extensions in shared modes
-provisioning extension pull <extension-name>
-```plaintext
-
----
-
-## Security Considerations
-
-### Solo Mode
-
-- ⚠️ No authentication (local development only)
-- ⚠️ No encryption (sensitive data should use SOPS)
-- ✅ Isolated environment
-
-### Multi-User Mode
-
-- ✅ Token-based authentication
-- ✅ TLS in transit
-- ✅ Audit logging
-- ⚠️ No encryption at rest (configure as needed)
-
-### CI/CD Mode
-
-- ✅ Token authentication (short expiry)
-- ✅ Full encryption (at rest + in transit)
-- ✅ KMS for secrets
-- ✅ Vulnerability scanning (critical threshold)
-- ✅ Image signing required
-
-### Enterprise Mode
-
-- ✅ mTLS authentication
-- ✅ Full encryption (at rest + in transit)
-- ✅ KMS for all secrets
-- ✅ Vulnerability scanning (critical threshold)
-- ✅ Image signing + signature verification
-- ✅ Network isolation
-- ✅ Compliance policies (SOC2, ISO27001, HIPAA)
-
----
-
-## Support and Documentation
-
-- **Implementation Summary**: `MODE_SYSTEM_IMPLEMENTATION_SUMMARY.md`
-- **KCL Schemas**: `provisioning/kcl/modes.k`, `provisioning/kcl/oci_registry.k`
-- **Mode Templates**: `workspace/config/modes/*.yaml`
-- **Commands**: `provisioning/core/nulib/lib_provisioning/mode/`
-
----
-
-**Last Updated**: 2025-10-06 | **Version**: 1.0.0
+
+# Don't use local extensions in shared modes
+provisioning extension pull <extension-name>
+
+
+
+
+
+⚠️ No authentication (local development only)
+⚠️ No encryption (sensitive data should use SOPS)
+✅ Isolated environment
+
+
+
+✅ Token-based authentication
+✅ TLS in transit
+✅ Audit logging
+⚠️ No encryption at rest (configure as needed)
+
+
+
+✅ Token authentication (short expiry)
+✅ Full encryption (at rest + in transit)
+✅ KMS for secrets
+✅ Vulnerability scanning (critical threshold)
+✅ Image signing required
+
+
+
+✅ mTLS authentication
+✅ Full encryption (at rest + in transit)
+✅ KMS for all secrets
+✅ Vulnerability scanning (critical threshold)
+✅ Image signing + signature verification
+✅ Network isolation
+✅ Compliance policies (SOC2, ISO27001, HIPAA)
+
+
+
+
+Implementation Summary : MODE_SYSTEM_IMPLEMENTATION_SUMMARY.md
+Nickel Schemas : provisioning/schemas/modes.ncl, provisioning/schemas/oci_registry.ncl
+Mode Templates : workspace/config/modes/*.yaml
+Commands : provisioning/core/nulib/lib_provisioning/mode/
+
+
+Last Updated : 2025-10-06 | Version : 1.0.0
Complete guide to workspace management in the provisioning platform.
@@ -56059,7 +53286,7 @@ provisioning extension pull <extension-name>
Workspace registry management
Backup and restore operations
-
+
# List all workspaces
provisioning workspace list
@@ -56085,7 +53312,7 @@ provisioning workspace active
Last Updated : 2025-10-06
System Version : 2.0.5+
-
+
Overview
Workspace Requirement
@@ -56096,7 +53323,7 @@ provisioning workspace active
Best Practices
-
+
The provisioning system now enforces mandatory workspace requirements for all infrastructure operations. This ensures:
Consistent Environment: All operations run in a well-defined workspace
@@ -56104,7 +53331,7 @@ provisioning workspace active
Safe Migrations: Automatic migration framework with backup/rollback support
Configuration Isolation: Each workspace has isolated configurations and state
-
+
✅ Mandatory Workspace: Most commands require an active workspace
✅ Version Tracking: Workspaces track system, schema, and format versions
@@ -56134,7 +53361,7 @@ provisioning workspace active
nu - Start Nushell session
nuinfo - Nushell information
-
+
If you run a command without an active workspace, you’ll see:
✗ Workspace Required
@@ -56150,18 +53377,12 @@ To get started:
3. List available workspaces:
provisioning workspace list
-```plaintext
-
----
-
-## Version Tracking
-
-### Workspace Metadata
-
-Each workspace maintains metadata in `.provisioning/metadata.yaml`:
-
-```yaml
-workspace:
+
+
+
+
+Each workspace maintains metadata in .provisioning/metadata.yaml:
+workspace:
name: "my-workspace"
path: "/path/to/workspace"
@@ -56178,34 +53399,29 @@ migration_history: []
compatibility:
min_provisioning_version: "2.0.0"
min_schema_version: "1.0.0"
-```plaintext
-
-### Version Components
-
-#### 1. Provisioning Version
-
-- **What**: Version of the provisioning system (CLI + libraries)
-- **Example**: `2.0.5`
-- **Purpose**: Ensures workspace is compatible with current system
-
-#### 2. Schema Version
-
-- **What**: Version of KCL schemas used in workspace
-- **Example**: `1.0.0`
-- **Purpose**: Tracks configuration schema compatibility
-
-#### 3. Workspace Format Version
-
-- **What**: Version of workspace directory structure
-- **Example**: `2.0.0`
-- **Purpose**: Ensures workspace has required directories and files
-
-### Checking Workspace Version
-
-View workspace version information:
-
-```bash
-# Check active workspace version
+
+
+
+
+What: Version of the provisioning system (CLI + libraries)
+Example: 2.0.5
+Purpose: Ensures workspace is compatible with current system
+
+
+
+What: Version of Nickel schemas used in workspace
+Example: 1.0.0
+Purpose: Tracks configuration schema compatibility
+
+
+
+What: Version of workspace directory structure
+Example: 2.0.0
+Purpose: Ensures workspace has required directories and files
+
+
+View workspace version information:
+# Check active workspace version
provisioning workspace version
# Check specific workspace version
@@ -56213,12 +53429,9 @@ provisioning workspace version my-workspace
# JSON output
provisioning workspace version --format json
-```plaintext
-
-**Example Output**:
-
-```plaintext
-Workspace Version Information
+
+Example Output:
+Workspace Version Information
System:
Version: 2.0.5
@@ -56239,26 +53452,19 @@ Compatibility:
Migrations:
Total: 0
-```plaintext
-
----
-
-## Migration Framework
-
-### When Migration is Needed
-
-Migration is required when:
-
-1. **No Metadata**: Workspace created before version tracking (< 2.0.5)
-2. **Version Mismatch**: System version is newer than workspace version
-3. **Breaking Changes**: Major version update with structural changes
-
-### Compatibility Scenarios
-
-#### Scenario 1: No Metadata (Unknown Version)
-
-```plaintext
-Workspace version is incompatible:
+
+
+
+
+Migration is required when:
+
+No Metadata: Workspace created before version tracking (< 2.0.5)
+Version Mismatch: System version is newer than workspace version
+Breaking Changes: Major version update with structural changes
+
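A rough sketch of the version comparison behind these triggers (simplified; the real check lives in the Nushell migration framework, and the function name here is only illustrative):

```bash
# A workspace needs migration when it has no recorded version, or is older than the system.
needs_migration() {
  local ws_ver="$1" sys_ver="$2"
  if [[ "$ws_ver" == "unknown" ]]; then
    echo yes
    return
  fi
  # sort -V picks the lower version; if that is not the system version, the workspace is older.
  if [[ "$(printf '%s\n' "$ws_ver" "$sys_ver" | sort -V | head -n1)" != "$sys_ver" ]]; then
    echo yes
  else
    # A newer workspace (Scenario 3 below) also returns "no": the fix there is upgrading the system.
    echo no
  fi
}
needs_migration "2.0.0" "2.0.5"     # yes
needs_migration "2.0.5" "2.0.5"     # no
needs_migration "unknown" "2.0.5"   # yes
```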
+
+
+Workspace version is incompatible:
Workspace: my-workspace
Path: /path/to/workspace
@@ -56268,47 +53474,30 @@ This workspace needs migration:
Run workspace migration:
provisioning workspace migrate my-workspace
-```plaintext
-
-#### Scenario 2: Migration Available
-
-```plaintext
-ℹ Migration available: Workspace can be updated from 2.0.0 to 2.0.5
+
+
+ℹ Migration available: Workspace can be updated from 2.0.0 to 2.0.5
Run: provisioning workspace migrate my-workspace
-```plaintext
-
-#### Scenario 3: Workspace Too New
-
-```plaintext
-Workspace version (3.0.0) is newer than system (2.0.5)
+
+
+Workspace version (3.0.0) is newer than system (2.0.5)
Workspace is newer than the system:
Workspace version: 3.0.0
System version: 2.0.5
Upgrade the provisioning system to use this workspace.
-```plaintext
-
-### Running Migrations
-
-#### Basic Migration
-
-Migrate active workspace to current system version:
-
-```bash
-provisioning workspace migrate
-```plaintext
-
-#### Migrate Specific Workspace
-
-```bash
-provisioning workspace migrate my-workspace
-```plaintext
-
-#### Migration Options
-
-```bash
-# Skip backup (not recommended)
+
+
+
+Migrate active workspace to current system version:
+provisioning workspace migrate
+
+
+provisioning workspace migrate my-workspace
+
+
+# Skip backup (not recommended)
provisioning workspace migrate --skip-backup
# Force without confirmation
@@ -56316,23 +53505,19 @@ provisioning workspace migrate --force
# Migrate to specific version
provisioning workspace migrate --target-version 2.1.0
-```plaintext
-
-### Migration Process
-
-When you run a migration:
-
-1. **Validation**: System validates workspace exists and needs migration
-2. **Backup**: Creates timestamped backup in `.workspace_backups/`
-3. **Confirmation**: Prompts for confirmation (unless `--force`)
-4. **Migration**: Applies migration steps sequentially
-5. **Verification**: Validates migration success
-6. **Metadata Update**: Records migration in workspace metadata
-
-**Example Migration Output**:
-
-```plaintext
-Workspace Migration
+
+
+When you run a migration:
+
+Validation: System validates workspace exists and needs migration
+Backup: Creates timestamped backup in .workspace_backups/
+Confirmation: Prompts for confirmation (unless --force)
+Migration: Applies migration steps sequentially
+Verification: Validates migration success
+Metadata Update: Records migration in workspace metadata
+
+Example Migration Output:
+Workspace Migration
Workspace: my-workspace
Path: /path/to/workspace
@@ -56356,44 +53541,31 @@ Migrating workspace to version 2.0.5...
✓ Initialize metadata completed
✓ Migration completed successfully
-```plaintext
-
-### Workspace Backups
-
-#### List Backups
-
-```bash
-# List backups for active workspace
+
+
+
+# List backups for active workspace
provisioning workspace list-backups
# List backups for specific workspace
provisioning workspace list-backups my-workspace
-```plaintext
-
-**Example Output**:
-
-```plaintext
-Workspace Backups for my-workspace
+
+Example Output:
+Workspace Backups for my-workspace
name created reason size
my-workspace_backup_20251006_1200 2025-10-06T12:00:00Z pre_migration 2.3 MB
my-workspace_backup_20251005_1500 2025-10-05T15:00:00Z pre_migration 2.1 MB
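The backup names in the listing follow a `<workspace>_backup_<timestamp>` pattern, which can be sketched as (timestamp format inferred from the listing above):

```bash
# Reconstruct the backup name format shown in the listing.
workspace="my-workspace"
stamp="$(date +%Y%m%d_%H%M)"   # e.g. 20251006_1200
echo "${workspace}_backup_${stamp}"
```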
-```plaintext
-
-#### Restore from Backup
-
-```bash
-# Restore workspace from backup
+
+
+# Restore workspace from backup
provisioning workspace restore-backup /path/to/backup
# Force restore without confirmation
provisioning workspace restore-backup /path/to/backup --force
-```plaintext
-
-**Restore Process**:
-
-```plaintext
-Restore Workspace from Backup
+
+Restore Process:
+Restore Workspace from Backup
Backup: /path/.workspace_backups/my-workspace_backup_20251006_1200
Original path: /path/to/workspace
@@ -56406,16 +53578,11 @@ Reason: pre_migration
Continue with restore? (y/N): y
✓ Workspace restored from backup
-```plaintext
-
----
-
-## Command Reference
-
-### Workspace Version Commands
-
-```bash
-# Show workspace version information
+
+
+
+
+# Show workspace version information
provisioning workspace version [workspace-name] [--format table|json|yaml]
# Check compatibility
@@ -56429,12 +53596,9 @@ provisioning workspace list-backups [workspace-name]
# Restore from backup
provisioning workspace restore-backup <backup-path> [--force]
-```plaintext
-
-### Workspace Management Commands
-
-```bash
-# List all workspaces
+
+
+# List all workspaces
provisioning workspace list
# Show active workspace
@@ -56451,18 +53615,12 @@ provisioning workspace register <name> <path>
# Remove workspace from registry
provisioning workspace remove <name> [--force]
-```plaintext
-
----
-
-## Troubleshooting
-
-### Problem: "No active workspace"
-
-**Solution**: Activate or create a workspace
-
-```bash
-# List available workspaces
+
+
+
+
+Solution: Activate or create a workspace
+# List available workspaces
provisioning workspace list
# Activate existing workspace
@@ -56470,50 +53628,33 @@ provisioning workspace activate my-workspace
# Or create new workspace
provisioning workspace init new-workspace
-```plaintext
-
-### Problem: "Workspace has invalid structure"
-
-**Symptoms**: Missing directories or configuration files
-
-**Solution**: Run migration to fix structure
-
-```bash
-provisioning workspace migrate my-workspace
-```plaintext
-
-### Problem: "Workspace version is incompatible"
-
-**Solution**: Run migration to upgrade workspace
-
-```bash
-provisioning workspace migrate
-```plaintext
-
-### Problem: Migration Failed
-
-**Solution**: Restore from automatic backup
-
-```bash
-# List backups
+
+
+Symptoms: Missing directories or configuration files
+Solution: Run migration to fix structure
+provisioning workspace migrate my-workspace
+
+
+Solution: Run migration to upgrade workspace
+provisioning workspace migrate
+
+
+Solution: Restore from automatic backup
+# List backups
provisioning workspace list-backups
# Restore from most recent backup
provisioning workspace restore-backup /path/to/backup
-```plaintext
-
-### Problem: Can't Activate Workspace After Migration
-
-**Possible Causes**:
-
-1. Migration failed partially
-2. Workspace path changed
-3. Metadata corrupted
-
-**Solutions**:
-
-```bash
-# Check workspace compatibility
+
+
+Possible Causes:
+
+Migration failed partially
+Workspace path changed
+Metadata corrupted
+
+Solutions:
+# Check workspace compatibility
provisioning workspace check-compatibility my-workspace
# If corrupted, restore from backup
@@ -56522,102 +53663,64 @@ provisioning workspace restore-backup /path/to/backup
# If path changed, re-register
provisioning workspace remove my-workspace
provisioning workspace register my-workspace /new/path --activate
-```plaintext
-
----
-
-## Best Practices
-
-### 1. Always Use Named Workspaces
-
-Create workspaces for different environments:
-
-```bash
-provisioning workspace init dev ~/workspaces/dev --activate
+
+
+
+
+Create workspaces for different environments:
+provisioning workspace init dev ~/workspaces/dev --activate
provisioning workspace init staging ~/workspaces/staging
provisioning workspace init production ~/workspaces/production
-```plaintext
-
-### 2. Let System Create Backups
-
-Never use `--skip-backup` for important workspaces. Backups are cheap, data loss is expensive.
-
-```bash
-# Good: Default with backup
+
+
+Never use --skip-backup for important workspaces. Backups are cheap, data loss is expensive.
+# Good: Default with backup
provisioning workspace migrate
# Risky: No backup
provisioning workspace migrate --skip-backup # DON'T DO THIS
-```plaintext
-
-### 3. Check Compatibility Before Operations
-
-Before major operations, verify workspace compatibility:
-
-```bash
-provisioning workspace check-compatibility
-```plaintext
-
-### 4. Migrate After System Upgrades
-
-After upgrading the provisioning system:
-
-```bash
-# Check if migration available
+
+
+Before major operations, verify workspace compatibility:
+provisioning workspace check-compatibility
+
+
+After upgrading the provisioning system:
+# Check if migration available
provisioning workspace version
# Migrate if needed
provisioning workspace migrate
-```plaintext
-
-### 5. Keep Backups for Safety
-
-Don't immediately delete old backups:
-
-```bash
-# List backups
+
+
+Don’t immediately delete old backups:
+# List backups
provisioning workspace list-backups
# Keep at least 2-3 recent backups
-```plaintext
-
-### 6. Use Version Control for Workspace Configs
-
-Initialize git in workspace directory:
-
-```bash
-cd ~/workspaces/my-workspace
+
+
+Initialize git in workspace directory:
+cd ~/workspaces/my-workspace
git init
git add config/ infra/
git commit -m "Initial workspace configuration"
-```plaintext
-
-Exclude runtime and cache directories in `.gitignore`:
-
-```gitignore
-.cache/
+
+Exclude runtime and cache directories in .gitignore:
+.cache/
.runtime/
.provisioning/
.workspace_backups/
-```plaintext
-
-### 7. Document Custom Migrations
-
-If you need custom migration steps, document them:
-
-```bash
-# Create migration notes
+
+
+If you need custom migration steps, document them:
+# Create migration notes
echo "Custom steps for v2 to v3 migration" > MIGRATION_NOTES.md
-```plaintext
-
----
-
-## Migration History
-
-Each migration is recorded in workspace metadata:
-
-```yaml
-migration_history:
+
+
+
+Each migration is recorded in workspace metadata:
+migration_history:
- from_version: "unknown"
to_version: "2.0.5"
migration_type: "metadata_initialization"
@@ -56631,29 +53734,21 @@ migration_history:
timestamp: "2025-10-15T10:30:00Z"
success: true
notes: "Updated to workspace switching support"
-```plaintext
-
-View migration history:
-
-```bash
-provisioning workspace version --format yaml | grep -A 10 "migration_history"
-```plaintext
-
----
-
-## Summary
-
-The workspace enforcement and version tracking system provides:
-
-- **Safety**: Mandatory workspace prevents accidental operations outside defined environments
-- **Compatibility**: Version tracking ensures workspace works with current system
-- **Upgradability**: Migration framework handles version transitions safely
-- **Recoverability**: Automatic backups protect against migration failures
-
-**Key Commands**:
-
-```bash
-# Create workspace
+
+View migration history:
+provisioning workspace version --format yaml | grep -A 10 "migration_history"
+
+
+
+The workspace enforcement and version tracking system provides:
+
+Safety: Mandatory workspace prevents accidental operations outside defined environments
+Compatibility: Version tracking ensures workspace works with current system
+Upgradability: Migration framework handles version transitions safely
+Recoverability: Automatic backups protect against migration failures
+
+Key Commands:
+# Create workspace
provisioning workspace init my-workspace --activate
# Check version
@@ -56664,32 +53759,25 @@ provisioning workspace migrate
# List backups
provisioning workspace list-backups
-```plaintext
-
-For more information, see:
-
-- **Workspace Switching Guide**: `docs/user/WORKSPACE_SWITCHING_GUIDE.md`
-- **Quick Reference**: `provisioning sc` or `provisioning guide quickstart`
-- **Help System**: `provisioning help workspace`
-
----
-
-**Questions or Issues?**
-
-Check the troubleshooting section or run:
-
-```bash
-provisioning workspace check-compatibility
-```plaintext
-
-This will provide specific guidance for your situation.
+For more information, see:
+
+Workspace Switching Guide: docs/user/WORKSPACE_SWITCHING_GUIDE.md
+Quick Reference: provisioning sc or provisioning guide quickstart
+Help System: provisioning help workspace
+
+
+Questions or Issues?
+Check the troubleshooting section or run:
+provisioning workspace check-compatibility
+
+This will provide specific guidance for your situation.
Version: 1.0.0
Last Updated: 2025-12-04
-
+
The Workspace:Infrastructure Reference System provides a unified notation for managing workspaces and their associated infrastructure. This system eliminates the need to specify infrastructure separately and enables convenient defaults.
-
+
Use the -ws flag with workspace:infra notation:
# Use production workspace with sgoyol infrastructure for this command only
@@ -56697,54 +53785,42 @@ provisioning server list -ws production:sgoyol
# Use default infrastructure of active workspace
provisioning taskserv create kubernetes
-```plaintext
-
-### Persistent Activation
-
-Activate a workspace with a default infrastructure:
-
-```bash
-# Activate librecloud workspace and set wuji as default infra
+
+
+Activate a workspace with a default infrastructure:
+# Activate librecloud workspace and set wuji as default infra
provisioning workspace activate librecloud:wuji
# Now all commands use librecloud:wuji by default
provisioning server list
-```plaintext
-
-## Notation Syntax
-
-### Basic Format
-
-```plaintext
-workspace:infra
-```plaintext
-
-| Part | Description | Example |
-|------|-------------|---------|
-| `workspace` | Workspace name | `librecloud` |
-| `:` | Separator | - |
-| `infra` | Infrastructure name | `wuji` |
-
-### Examples
-
-| Notation | Workspace | Infrastructure |
-|----------|-----------|-----------------|
-| `librecloud:wuji` | librecloud | wuji |
-| `production:sgoyol` | production | sgoyol |
-| `dev:local` | dev | local |
-| `librecloud` | librecloud | (from default or context) |
-
-## Resolution Priority
-
-When no infrastructure is explicitly specified, the system uses this priority order:
-
-1. **Explicit `--infra` flag** (highest)
-
- ```bash
- provisioning server list --infra another-infra
+
+
+workspace:infra
+
+| Part | Description | Example |
+|------|-------------|---------|
+| workspace | Workspace name | librecloud |
+| : | Separator | - |
+| infra | Infrastructure name | wuji |
+
+
+
+| Notation | Workspace | Infrastructure |
+|----------|-----------|----------------|
+| librecloud:wuji | librecloud | wuji |
+| production:sgoyol | production | sgoyol |
+| dev:local | dev | local |
+| librecloud | librecloud | (from default or context) |
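The split itself is simple; a minimal shell sketch (the helper name is an illustration, the real parser is implemented in Nushell):

```bash
# Split a workspace:infra spec; a bare workspace name falls back to the default infra.
parse_ws_spec() {
  local spec="$1"
  if [[ "$spec" == *:* ]]; then
    echo "workspace=${spec%%:*} infra=${spec#*:}"
  else
    echo "workspace=${spec} infra=(default)"
  fi
}
parse_ws_spec "librecloud:wuji"   # workspace=librecloud infra=wuji
parse_ws_spec "librecloud"        # workspace=librecloud infra=(default)
```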
+
+
+
+When no infrastructure is explicitly specified, the system uses this priority order:
+Explicit --infra flag (highest)
+provisioning server list --infra another-infra
+
+
+
PWD Detection
cd workspace_librecloud/infra/wuji
provisioning server list # Auto-detects wuji
@@ -56773,14 +53849,10 @@ provisioning server list -ws production:sgoyol # Shows production:sgoyol
# Back to original context
provisioning server list # Shows librecloud:wuji again
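The override-and-revert behaviour follows the resolution priority described earlier, which amounts to a first-non-empty-wins scan (names here are illustrative; the actual logic lives in the Nushell CLI):

```bash
# First non-empty source wins:
# explicit flag > temporal override (-ws) > PWD detection > workspace default.
resolve_infra() {
  local candidate
  for candidate in "$@"; do
    if [[ -n "$candidate" ]]; then
      echo "$candidate"
      return 0
    fi
  done
  echo "(no infra)"
}
resolve_infra "" "sgoyol" "" "wuji"   # sgoyol: temporal override beats the workspace default
```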
-```plaintext
-
-### Pattern 2: Persistent Workspace Activation
-
-Set a workspace as active with a default infrastructure:
-
-```bash
-# List available workspaces
+
+
+Set a workspace as active with a default infrastructure:
+# List available workspaces
provisioning workspace list
# Activate with infra notation
@@ -56789,20 +53861,16 @@ provisioning workspace activate production:sgoyol
# All subsequent commands use production:sgoyol
provisioning server list
provisioning taskserv create kubernetes
-```plaintext
-
-### Pattern 3: PWD-Based Inference
-
-The system auto-detects workspace and infrastructure from your current directory:
-
-```bash
-# Your workspace structure
+
+
+The system auto-detects workspace and infrastructure from your current directory:
+# Your workspace structure
workspace_librecloud/
infra/
wuji/
- settings.k
+ settings.ncl
another/
- settings.k
+ settings.ncl
# Navigation auto-detects context
cd workspace_librecloud/infra/wuji
@@ -56810,14 +53878,10 @@ provisioning server list # Uses wuji automatically
cd ../another
provisioning server list # Switches to another
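PWD-based inference can be sketched as a pattern match on the path; this is a hedged illustration (the real detection is done in Nushell and may walk parent directories):

```bash
# Match workspace_<name>/infra/<infra> anywhere in the given path.
detect_from_pwd() {
  if [[ "$1" =~ workspace_([^/]+)/infra/([^/]+) ]]; then
    echo "${BASH_REMATCH[1]}:${BASH_REMATCH[2]}"
  else
    echo "(no workspace context)"
  fi
}
detect_from_pwd "/home/me/workspace_librecloud/infra/wuji"   # librecloud:wuji
```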
-```plaintext
-
-### Pattern 4: Default Infrastructure Management
-
-Set a workspace-specific default infrastructure:
-
-```bash
-# During activation
+
+
+Set a workspace-specific default infrastructure:
+# During activation
provisioning workspace activate librecloud:wuji
# Or explicitly after activation
@@ -56825,14 +53889,10 @@ provisioning workspace set-default-infra librecloud another-infra
# View current defaults
provisioning workspace list
-```plaintext
-
-## Command Reference
-
-### Workspace Commands
-
-```bash
-# Activate workspace with infra
+
+
+
+# Activate workspace with infra
provisioning workspace activate workspace:infra
# Switch to different workspace
@@ -56849,12 +53909,9 @@ provisioning workspace set-default-infra workspace_name infra_name
# Get default infrastructure
provisioning workspace get-default-infra workspace_name
-```plaintext
-
-### Common Commands with `-ws`
-
-```bash
-# Server operations
+
+
+# Server operations
provisioning server create -ws workspace:infra
provisioning server list -ws workspace:infra
provisioning server delete name -ws workspace:infra
@@ -56866,48 +53923,42 @@ provisioning taskserv delete kubernetes -ws workspace:infra
# Infrastructure operations
provisioning infra validate -ws workspace:infra
provisioning infra list -ws workspace:infra
-```plaintext
-
-## Features
-
-### ✅ Unified Notation
-
-- Single `workspace:infra` format for all references
-- Works with all provisioning commands
-- Backward compatible with existing workflows
-
-### ✅ Temporal Override
-
-- Use `-ws` flag for single-command overrides
-- No permanent state changes
-- Automatically reverted after command
-
-### ✅ Persistent Defaults
-
-- Set default infrastructure per workspace
-- Eliminates repetitive `--infra` flags
-- Survives across sessions
-
-### ✅ Smart Detection
-
-- Auto-detects workspace from directory
-- Auto-detects infrastructure from PWD
-- Fallback to configured defaults
-
-### ✅ Error Handling
-
-- Clear error messages when infra not found
-- Validation of workspace and infra existence
-- Helpful hints for missing configurations
-
-## Environment Context
-
-### TEMP_WORKSPACE Variable
-
-The system uses `$env.TEMP_WORKSPACE` for temporal overrides:
-
-```bash
-# Set temporarily (via -ws flag automatically)
+
+
+
+
+Single workspace:infra format for all references
+Works with all provisioning commands
+Backward compatible with existing workflows
+
+
+
+Use -ws flag for single-command overrides
+No permanent state changes
+Automatically reverted after command
+
+
+
+Set default infrastructure per workspace
+Eliminates repetitive --infra flags
+Survives across sessions
+
+
+
+Auto-detects workspace from directory
+Auto-detects infrastructure from PWD
+Fallback to configured defaults
+
+
+
+Clear error messages when infra not found
+Validation of workspace and infra existence
+Helpful hints for missing configurations
+
+
+
+The system uses $env.TEMP_WORKSPACE for temporal overrides:
+# Set temporarily (via -ws flag automatically)
$env.TEMP_WORKSPACE = "production"
# Check current context
@@ -56915,14 +53966,10 @@ echo $env.TEMP_WORKSPACE
# Clear after use
hide-env TEMP_WORKSPACE
-```plaintext
-
-## Validation
-
-### Validating Notation
-
-```bash
-# Valid notation formats
+
+
+
+# Valid notation formats
librecloud:wuji # Standard format
production:sgoyol.v2 # With dots
dev-01:local-test # Multiple hyphens
@@ -56930,12 +53977,9 @@ prod123:infra456 # Numeric names
# Special characters
lib-cloud_01:wu-ji.v2 # Mix of all allowed chars
-```plaintext
-
-### Error Cases
-
-```bash
-# Workspace not found
+
+
+# Workspace not found
provisioning workspace activate unknown:infra
# Error: Workspace 'unknown' not found in registry
@@ -56946,16 +53990,11 @@ provisioning workspace activate librecloud:unknown
# Empty specification
provisioning workspace activate ""
# Error: Workspace '' not found in registry
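A validation sketch matching these cases; the allowed-character set is inferred from the examples above, so treat it as an approximation of the real validator:

```bash
# Accept letters, digits, underscores and hyphens in names, plus dots in the infra part.
valid_spec() {
  if [[ "$1" =~ ^[A-Za-z0-9_-]+(:[A-Za-z0-9._-]+)?$ ]]; then
    echo valid
  else
    echo invalid
  fi
}
valid_spec "librecloud:wuji"   # valid
valid_spec ""                  # invalid
```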
-```plaintext
-
-## Configuration
-
-### User Configuration
-
-Default infrastructure is stored in `~/Library/Application Support/provisioning/user_config.yaml`:
-
-```yaml
-active_workspace: "librecloud"
+
+
+
+Default infrastructure is stored in ~/Library/Application Support/provisioning/user_config.yaml:
+active_workspace: "librecloud"
workspaces:
- name: "librecloud"
@@ -56967,87 +54006,59 @@ workspaces:
path: "/opt/workspaces/production"
last_used: "2025-12-03T15:30:00Z"
default_infra: "sgoyol"
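Looking up a workspace's default infra in that file can be sketched like this (the platform itself reads the config with Nushell; awk is used here purely for illustration):

```bash
config='workspaces:
  - name: "librecloud"
    default_infra: "wuji"
  - name: "production"
    default_infra: "sgoyol"'

# Track the current workspace name, then print its default_infra when it matches.
awk -F'"' '/name:/ {ws=$2} /default_infra:/ && ws=="production" {print $2}' <<< "$config"
```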
-```plaintext
-
-### Workspace Schema
-
-In `provisioning/kcl/workspace_config.k`:
-
-```kcl
-schema InfraConfig:
- """Infrastructure context settings"""
- current: str
- default?: str # Default infrastructure for workspace
-```plaintext
-
-## Best Practices
-
-### 1. Use Persistent Activation for Long Sessions
-
-```bash
-# Good: Activate at start of session
+
+
+In provisioning/schemas/workspace_config.ncl:
+{
+  # Infrastructure context settings
+  InfraConfig = {
+    current | String,
+    default | String | optional,  # Default infrastructure for workspace
+  },
+}
+
+
+
+# Good: Activate at start of session
provisioning workspace activate production:sgoyol
# Then use simple commands
provisioning server list
provisioning taskserv create kubernetes
-```plaintext
-
-### 2. Use Temporal Override for Ad-Hoc Operations
-
-```bash
-# Good: Quick one-off operation
+
+
+# Good: Quick one-off operation
provisioning server list -ws production:other-infra
# Avoid: Repeated -ws flags
provisioning server list -ws prod:infra1
provisioning taskserv list -ws prod:infra1 # Better to activate once
-```plaintext
-
-### 3. Navigate with PWD for Context Awareness
-
-```bash
-# Good: Navigate to infrastructure directory
+
+
+# Good: Navigate to infrastructure directory
cd workspace_librecloud/infra/wuji
provisioning server list # Auto-detects context
# Works well with: cd - history, terminal multiplexer panes
-```plaintext
-
-### 4. Set Meaningful Defaults
-
-```bash
-# Good: Default to production infrastructure
+
+
+# Good: Default to production infrastructure
provisioning workspace activate production:main-infra
# Avoid: Default to dev infrastructure in production workspace
-```plaintext
-
-## Troubleshooting
-
-### Issue: "Workspace not found in registry"
-
-**Solution**: Register the workspace first
-
-```bash
-provisioning workspace register librecloud /path/to/workspace_librecloud
-```plaintext
-
-### Issue: "Infrastructure not found"
-
-**Solution**: Verify infrastructure directory exists
-
-```bash
-ls workspace_librecloud/infra/ # Check available infras
+
+
+
+Solution: Register the workspace first
+provisioning workspace register librecloud /path/to/workspace_librecloud
+
+
+Solution: Verify infrastructure directory exists
+ls workspace_librecloud/infra/ # Check available infras
provisioning workspace activate librecloud:wuji # Use correct name
-```plaintext
-
-### Issue: Temporal override not working
-
-**Solution**: Ensure you're using `-ws` flag correctly
-
-```bash
-# Correct
+
+
+Solution: Ensure you’re using the -ws flag correctly
+# Correct
provisioning server list -ws production:sgoyol
# Incorrect (missing space)
@@ -57055,51 +54066,36 @@ provisioning server list-wsproduction:sgoyol
# Incorrect (ws is not a command)
provisioning -ws production:sgoyol server list
-```plaintext
-
-### Issue: PWD detection not working
-
-**Solution**: Navigate to proper infrastructure directory
-
-```bash
-# Must be in workspace structure
+
+
+Solution: Navigate to a proper infrastructure directory
+# Must be in workspace structure
cd workspace_name/infra/infra_name
# Then run command
provisioning server list
-```plaintext
-
-## Migration from Old System
-
-### Old Way
-
-```bash
-provisioning workspace activate librecloud
+
+
+
+provisioning workspace activate librecloud
provisioning --infra wuji server list
provisioning --infra wuji taskserv create kubernetes
-```plaintext
-
-### New Way
-
-```bash
-provisioning workspace activate librecloud:wuji
+
+
+provisioning workspace activate librecloud:wuji
provisioning server list
provisioning taskserv create kubernetes
-```plaintext
-
-## Performance Notes
-
-- **Notation parsing**: <1ms per command
-- **Workspace detection**: <5ms from PWD
-- **Workspace switching**: ~100ms (includes platform activation)
-- **Temporal override**: No additional overhead
-
-## Backward Compatibility
-
-All existing commands and flags continue to work:
-
-```bash
-# Old syntax still works
+
+
+
+Notation parsing: <1 ms per command
+Workspace detection: <5 ms from PWD
+Workspace switching: ~100 ms (includes platform activation)
+Temporal override: No additional overhead
+
+
+All existing commands and flags continue to work:
+# Old syntax still works
provisioning --infra wuji server list
# New syntax also works
@@ -57108,17 +54104,16 @@ provisioning server list -ws librecloud:wuji
# Mix and match
provisioning --infra other-infra server list -ws librecloud:wuji
# Uses other-infra (explicit flag takes priority)
-```plaintext
-
-## See Also
-
-- `provisioning help workspace` - Workspace commands
-- `provisioning help infra` - Infrastructure commands
-- `docs/architecture/ARCHITECTURE_OVERVIEW.md` - Overall architecture
-- `docs/user/WORKSPACE_SWITCHING_GUIDE.md` - Workspace switching details
+
+
+provisioning help workspace - Workspace commands
+provisioning help infra - Infrastructure commands
+docs/architecture/ARCHITECTURE_OVERVIEW.md - Overall architecture
+docs/user/WORKSPACE_SWITCHING_GUIDE.md - Workspace switching details
+
-
+
The workspace configuration management commands provide a comprehensive set of tools for viewing, editing, validating, and managing workspace configurations.
Command Description
@@ -57132,7 +54127,7 @@ provisioning --infra other-infra server list -ws librecloud:wuji
-Display the complete workspace configuration in various formats.
+Display the complete workspace configuration in JSON, YAML, TOML, and other formats.
# Show active workspace config (YAML format)
provisioning workspace config show
@@ -57147,37 +54142,27 @@ provisioning workspace config show --out toml
# Show specific workspace in JSON
provisioning workspace config show my-workspace --out json
-```plaintext
-
-**Output:** Complete workspace configuration in the specified format
-
-### Validate Workspace Configuration
-
-Validate all configuration files for syntax and required sections.
-
-```bash
-# Validate active workspace
+
+Output: Complete workspace configuration in the specified format
+
+Validate all configuration files for syntax and required sections.
+# Validate active workspace
provisioning workspace config validate
# Validate specific workspace
provisioning workspace config validate my-workspace
-```plaintext
-
-**Checks performed:**
-
-- Main config (`provisioning.yaml`) - YAML syntax and required sections
-- Provider configs (`providers/*.toml`) - TOML syntax
-- Platform service configs (`platform/*.toml`) - TOML syntax
-- KMS config (`kms.toml`) - TOML syntax
-
-**Output:** Validation report with success/error indicators
-
-### Generate Provider Configuration
-
-Generate a provider configuration file from a template.
-
-```bash
-# Generate AWS provider config for active workspace
+
+Checks performed:
+
+Main config (provisioning.yaml) - YAML syntax and required sections
+Provider configs (providers/*.toml) - TOML syntax
+Platform service configs (platform/*.toml) - TOML syntax
+KMS config (kms.toml) - TOML syntax
+
+Output: Validation report with success/error indicators
+
+Generate a provider configuration file from a template.
+# Generate AWS provider config for active workspace
provisioning workspace config generate provider aws
# Generate UpCloud provider config for specific workspace
@@ -57185,22 +54170,17 @@ provisioning workspace config generate provider upcloud --infra my-workspace
# Generate local provider config
provisioning workspace config generate provider local
-```plaintext
-
-**What it does:**
-
-1. Locates provider template in `extensions/providers/{name}/config.defaults.toml`
-2. Interpolates workspace-specific values (`{{workspace.name}}`, `{{workspace.path}}`)
-3. Saves to `{workspace}/config/providers/{name}.toml`
-
-**Output:** Generated configuration file ready for customization
-
-### Edit Configuration Files
-
-Open configuration files in your editor for modification.
-
-```bash
-# Edit main workspace config
+
+What it does:
+
+Locates provider template in extensions/providers/{name}/config.defaults.toml
+Interpolates workspace-specific values ({{workspace.name}}, {{workspace.path}})
+Saves to {workspace}/config/providers/{name}.toml
+
+Output: Generated configuration file ready for customization
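The interpolation step can be sketched with sed; the template line below is hypothetical, but the placeholder names come from the docs above:

```bash
# Substitute workspace placeholders in a template line (step 2 above).
workspace_name="my-workspace"
workspace_path="/path/to/workspace"
template='name = "{{workspace.name}}"  # root: {{workspace.path}}'
sed -e "s|{{workspace.name}}|${workspace_name}|g" \
    -e "s|{{workspace.path}}|${workspace_path}|g" <<< "$template"
```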
+
+Open configuration files in your editor for modification.
+# Edit main workspace config
provisioning workspace config edit main
# Edit specific provider config
@@ -57214,43 +54194,34 @@ provisioning workspace config edit kms
# Edit for specific workspace
provisioning workspace config edit provider upcloud --infra my-workspace
-```plaintext
-
-**Editor used:** Value of `$EDITOR` environment variable (defaults to `vi`)
-
-**Config types:**
-
-- `main` - Main workspace configuration (`provisioning.yaml`)
-- `provider <name>` - Provider configuration (`providers/{name}.toml`)
-- `platform <name>` - Platform service configuration (`platform/{name}.toml`)
-- `kms` - KMS configuration (`kms.toml`)
-
-### Show Configuration Hierarchy
-
-Display the configuration loading hierarchy and precedence.
-
-```bash
-# Show hierarchy for active workspace
+
+Editor used: Value of $EDITOR environment variable (defaults to vi)
+Config types:
+
+main - Main workspace configuration (provisioning.yaml)
+provider <name> - Provider configuration (providers/{name}.toml)
+platform <name> - Platform service configuration (platform/{name}.toml)
+kms - KMS configuration (kms.toml)
+
+
+Display the configuration loading hierarchy and precedence.
+# Show hierarchy for active workspace
provisioning workspace config hierarchy
# Show hierarchy for specific workspace
provisioning workspace config hierarchy my-workspace
-```plaintext
-
-**Output:** Visual hierarchy showing:
-
-1. Environment Variables (highest priority)
-2. User Context
-3. Platform Services
-4. Provider Configs
-5. Workspace Config (lowest priority)
-
-### List Configuration Files
-
-List all configuration files for a workspace.
-
-```bash
-# List all configs
+
+Output: Visual hierarchy showing:
+
+Environment Variables (highest priority)
+User Context
+Platform Services
+Provider Configs
+Workspace Config (lowest priority)
+
+
+List all configuration files for a workspace.
+# List all configs
provisioning workspace config list
# List only provider configs
@@ -57264,21 +54235,17 @@ provisioning workspace config list --type kms
# List for specific workspace
provisioning workspace config list my-workspace --type all
-```plaintext
-
-**Output:** Table of configuration files with type, name, and path
-
-## Workspace Selection
-
-All config commands support two ways to specify the workspace:
-
-1. **Active Workspace** (default):
-
- ```bash
- provisioning workspace config show
+Output: Table of configuration files with type, name, and path
+
+All config commands support two ways to specify the workspace:
+Active Workspace (default):
+provisioning workspace config show
+
+
+
Specific Workspace (using --infra flag):
provisioning workspace config show --infra my-workspace
@@ -57298,26 +54265,20 @@ All config commands support two ways to specify the workspace:
│ │ ├── control-center.toml
│ │ └── mcp.toml
│ └── kms.toml # KMS configuration
-```plaintext
-
-## Configuration Hierarchy
-
-Configuration values are loaded in the following order (highest to lowest priority):
-
-1. **Environment Variables** - `PROVISIONING_*` variables
-2. **User Context** - `~/Library/Application Support/provisioning/ws_{name}.yaml`
-3. **Platform Services** - `{workspace}/config/platform/*.toml`
-4. **Provider Configs** - `{workspace}/config/providers/*.toml`
-5. **Workspace Config** - `{workspace}/config/provisioning.yaml`
-
-Higher priority values override lower priority values.
-
-## Examples
-
-### Complete Workflow
-
-```bash
-# 1. Create new workspace with activation
+
+
+Configuration values are loaded in the following order (highest to lowest priority):
+
+Environment Variables - PROVISIONING_* variables
+User Context - ~/Library/Application Support/provisioning/ws_{name}.yaml
+Platform Services - {workspace}/config/platform/*.toml
+Provider Configs - {workspace}/config/providers/*.toml
+Workspace Config - {workspace}/config/provisioning.yaml
+
+Higher priority values override lower priority values.
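The precedence rules above amount to a last-writer-wins merge across layers. A minimal sketch in Python (the layer contents are illustrative, not actual workspace values):

```python
# Layered config resolution: merge layers from lowest to highest
# priority, so later (higher-priority) layers override earlier ones.
def resolve_config(layers):
    merged = {}
    for layer in layers:  # ordered lowest -> highest priority
        merged.update(layer)
    return merged

workspace_cfg = {"provider": "local", "region": "eu-west-1"}  # lowest
provider_cfg = {"region": "us-east-1"}
env_vars = {"provider": "aws"}  # PROVISIONING_* variables, highest

config = resolve_config([workspace_cfg, provider_cfg, env_vars])
print(config)  # {'provider': 'aws', 'region': 'us-east-1'}
```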
+
+
+# 1. Create new workspace with activation
provisioning workspace init my-project ~/workspaces/my-project --providers [aws,local] --activate
# 2. Validate configuration
@@ -57340,12 +54301,9 @@ provisioning workspace config show --out json
# 8. Validate everything
provisioning workspace config validate
-```plaintext
-
-### Multi-Workspace Management
-
-```bash
-# Create multiple workspaces
+
+
+# Create multiple workspaces
provisioning workspace init dev ~/workspaces/dev --activate
provisioning workspace init staging ~/workspaces/staging
provisioning workspace init prod ~/workspaces/prod
@@ -57358,12 +54316,9 @@ provisioning workspace config show prod --out yaml
# Edit provider for specific workspace
provisioning workspace config edit provider aws --infra prod
-```plaintext
-
-### Configuration Troubleshooting
-
-```bash
-# 1. Validate all configs
+
+
+# 1. Validate all configs
provisioning workspace config validate
# 2. If errors, check hierarchy
@@ -57377,14 +54332,10 @@ provisioning workspace config edit provider aws
# 5. Validate again
provisioning workspace config validate
-```plaintext
-
-## Integration with Other Commands
-
-Config commands integrate seamlessly with other workspace operations:
-
-```bash
-# Create workspace with providers
+
+
+Config commands integrate seamlessly with other workspace operations:
+# Create workspace with providers
provisioning workspace init my-app ~/apps/my-app --providers [aws,upcloud] --activate
# Generate additional configs
@@ -57395,37 +54346,42 @@ provisioning workspace config validate
# Deploy infrastructure
provisioning server create --infra my-app
-```plaintext
-
-## Tips
-
-1. **Always validate after editing**: Run `workspace config validate` after manual edits
-
-2. **Use hierarchy to understand precedence**: Run `workspace config hierarchy` to see which config files are being used
-
-3. **Generate from templates**: Use `config generate provider` rather than creating configs manually
-
-4. **Check before activation**: Validate a workspace before activating it as default
-
-5. **Use --out json for scripting**: JSON output is easier to parse in scripts
-
-## See Also
-
-- [Workspace Initialization](workspace-initialization.md)
-- [Provider Configuration](provider-configuration.md)
-- Configuration Architecture
-
-This guide covers the unified configuration rendering system in the CLI daemon that supports KCL, Nickel, and Tera template engines.
-
-The CLI daemon (cli-daemon) provides a high-performance REST API for rendering configurations in three different formats:
+
+
+
+1. Always validate after editing: Run workspace config validate after manual edits
+2. Use hierarchy to understand precedence: Run workspace config hierarchy to see which config files are being used
+3. Generate from templates: Use config generate provider rather than creating configs manually
+4. Check before activation: Validate a workspace before activating it as default
+5. Use --out json for scripting: JSON output is easier to parse in scripts
+
+
+
-KCL : Type-safe infrastructure configuration language (familiar, existing patterns)
-Nickel : Functional configuration language with lazy evaluation (excellent for complex configs)
-Tera : Jinja2-compatible template engine (simple templating)
+Workspace Initialization
+Provider Configuration
+Configuration Architecture
-All three renderers are accessible through a single unified API endpoint with intelligent caching to minimize latency.
-
+
+This guide covers the unified configuration rendering system in the CLI daemon that supports Nickel and Tera template engines. KCL support is deprecated.
+
+The CLI daemon (cli-daemon) provides a high-performance REST API for rendering configurations in multiple formats:
+
+Nickel : Functional configuration language with lazy evaluation and type safety (primary choice)
+Tera : Jinja2-compatible template engine (simple templating)
+KCL : Type-safe infrastructure configuration language (legacy - deprecated)
+
+All renderers are accessible through a single unified API endpoint with intelligent caching to minimize latency.
+
The daemon runs on port 9091 by default:
# Start in background
@@ -57433,48 +54389,33 @@ provisioning server create --infra my-app
# Check it's running
curl http://localhost:9091/health
-```plaintext
-
-### Simple KCL Rendering
-
-```bash
-curl -X POST http://localhost:9091/config/render \
+
+
+curl -X POST http://localhost:9091/config/render \
-H "Content-Type: application/json" \
-d '{
- "language": "kcl",
- "content": "name = \"my-server\"\ncpu = 4\nmemory = 8192",
+ "language": "nickel",
+ "content": "{ name = \"my-server\", cpu = 4, memory = 8192 }",
"name": "server-config"
}'
-```plaintext
-
-**Response**:
-
-```json
-{
- "rendered": "name = \"my-server\"\ncpu = 4\nmemory = 8192",
+
+Response:
+{
+ "rendered": "{ name = \"my-server\", cpu = 4, memory = 8192 }",
"error": null,
- "language": "kcl",
- "execution_time_ms": 45
+ "language": "nickel",
+ "execution_time_ms": 23
}
-```plaintext
-
-## REST API Reference
-
-### POST /config/render
-
-Render a configuration in any supported language.
-
-**Request Headers**:
-
-```plaintext
-Content-Type: application/json
-```plaintext
-
-**Request Body**:
-
-```json
-{
- "language": "kcl|nickel|tera",
+
+
+
+Render a configuration in any supported language.
+Request Headers:
+Content-Type: application/json
+
+Request Body:
+{
+ "language": "nickel|tera|kcl",
"content": "...configuration content...",
"context": {
"key1": "value1",
@@ -57482,53 +54423,41 @@ Content-Type: application/json
},
"name": "optional-config-name"
}
-```plaintext
-
-**Parameters**:
-
-| Parameter | Type | Required | Description |
-|-----------|------|----------|-------------|
-| `language` | string | Yes | One of: `kcl`, `nickel`, `tera` |
-| `content` | string | Yes | The configuration or template content to render |
-| `context` | object | No | Variables to pass to the configuration (JSON object) |
-| `name` | string | No | Optional name for logging purposes |
-
-**Response** (Success):
-
-```json
-{
+
+Parameters:
+
+| Parameter | Type | Required | Description |
+|-----------|------|----------|-------------|
+| language | string | Yes | One of: nickel, tera, kcl (deprecated) |
+| content | string | Yes | The configuration or template content to render |
+| context | object | No | Variables to pass to the configuration (JSON object) |
+| name | string | No | Optional name for logging purposes |
+
+
+Response (Success):
+{
"rendered": "...rendered output...",
"error": null,
"language": "kcl",
"execution_time_ms": 23
}
-```plaintext
-
-**Response** (Error):
-
-```json
-{
+
+Response (Error):
+{
"rendered": null,
"error": "KCL evaluation failed: undefined variable 'name'",
"language": "kcl",
"execution_time_ms": 18
}
-```plaintext
-
-**Status Codes**:
-
-- `200 OK` - Rendering completed (check `error` field in body for evaluation errors)
-- `400 Bad Request` - Invalid request format
-- `500 Internal Server Error` - Daemon error
-
-### GET /config/stats
-
-Get rendering statistics across all languages.
-
-**Response**:
-
-```json
-{
+
+Status Codes:
+
+200 OK - Rendering completed (check error field in body for evaluation errors)
+400 Bad Request - Invalid request format
+500 Internal Server Error - Daemon error
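Because a 200 status only means the render ran, clients should check the HTTP status and the body's error field separately. A minimal sketch of that two-level check (the helper name is hypothetical; the field names match the response schema above):

```python
# Two-level check for /config/render responses: the HTTP status covers
# transport/request problems, while the body's "error" field covers
# evaluation failures inside an otherwise successful 200 response.
def classify_render_response(status_code, body):
    if status_code == 400:
        return "invalid-request"
    if status_code == 500:
        return "daemon-error"
    if status_code == 200:
        if body.get("error"):
            return "evaluation-error: " + body["error"]
        return body["rendered"]
    return "unexpected-status: " + str(status_code)

print(classify_render_response(200, {"rendered": "cpu = 4", "error": None}))
# cpu = 4
print(classify_render_response(200, {"rendered": None, "error": "undefined variable"}))
# evaluation-error: undefined variable
```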
+
+
+Get rendering statistics across all languages.
+Response:
+{
"total_renders": 156,
"successful_renders": 154,
"failed_renders": 2,
@@ -57540,27 +54469,19 @@ Get rendering statistics across all languages.
"nickel_cache_hits": 35,
"tera_cache_hits": 18
}
-```plaintext
-
-### POST /config/stats/reset
-
-Reset all rendering statistics.
-
-**Response**:
-
-```json
-{
+
+
+Reset all rendering statistics.
+Response:
+{
"status": "success",
"message": "Configuration rendering statistics reset"
}
-```plaintext
-
-## KCL Rendering
-
-### Basic KCL Configuration
-
-```bash
-curl -X POST http://localhost:9091/config/render \
+
+
+Note : KCL is deprecated. Use Nickel for new configurations.
+
+curl -X POST http://localhost:9091/config/render \
-H "Content-Type: application/json" \
-d '{
"language": "kcl",
@@ -57578,14 +54499,10 @@ tags = {
",
"name": "prod-server-config"
}'
-```plaintext
-
-### KCL with Context Variables
-
-Pass context variables using the `-D` flag syntax internally:
-
-```bash
-curl -X POST http://localhost:9091/config/render \
+
+
+Pass context variables using the -D flag syntax internally:
+curl -X POST http://localhost:9091/config/render \
-H "Content-Type: application/json" \
-d '{
"language": "kcl",
@@ -57603,20 +54520,16 @@ memory = option(\"memory_mb\", default=2048)
},
"name": "server-with-context"
}'
-```plaintext
-
-### Expected KCL Rendering Time
-
-- **First render (cache miss)**: 20-50ms
-- **Cached render (same content)**: 1-5ms
-- **Large configs (100+ variables)**: 50-100ms
-
-## Nickel Rendering
-
-### Basic Nickel Configuration
-
-```bash
-curl -X POST http://localhost:9091/config/render \
+
+
+
+First render (cache miss): 20-50 ms
+Cached render (same content): 1-5 ms
+Large configs (100+ variables): 50-100 ms
+
+
+
+curl -X POST http://localhost:9091/config/render \
-H "Content-Type: application/json" \
-d '{
"language": "nickel",
@@ -57633,14 +54546,10 @@ curl -X POST http://localhost:9091/config/render \
}",
"name": "nickel-server-config"
}'
-```plaintext
-
-### Nickel with Lazy Evaluation
-
-Nickel excels at evaluating only what's needed:
-
-```bash
-curl -X POST http://localhost:9091/config/render \
+
+
+Nickel excels at evaluating only what’s needed:
+curl -X POST http://localhost:9091/config/render \
-H "Content-Type: application/json" \
-d '{
"language": "nickel",
@@ -57662,22 +54571,17 @@ curl -X POST http://localhost:9091/config/render \
"only_server": true
}
}'
-```plaintext
-
-### Expected Nickel Rendering Time
-
-- **First render (cache miss)**: 30-60ms
-- **Cached render (same content)**: 1-5ms
-- **Large configs with lazy evaluation**: 40-80ms
-
-**Advantage**: Nickel only computes fields that are actually used in the output
-
-## Tera Template Rendering
-
-### Basic Tera Template
-
-```bash
-curl -X POST http://localhost:9091/config/render \
+
+
+
+First render (cache miss): 30-60 ms
+Cached render (same content): 1-5 ms
+Large configs with lazy evaluation: 40-80 ms
+
+Advantage: Nickel only computes fields that are actually used in the output.
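This lazy behavior can be illustrated conceptually in Python, where record fields are thunks that only run when forced. This models the idea only, not Nickel's actual evaluator:

```python
# Conceptual model of lazy fields: each value is a thunk (zero-argument
# function) that only runs when the field is actually forced.
def expensive_cluster_plan():
    raise RuntimeError("should never run when the field is unused")

config = {
    "server": lambda: {"name": "web-01", "cpu": 4},
    "cluster": expensive_cluster_plan,  # never forced below
}

# Force only the field the output needs, like Nickel's lazy records
result = config["server"]()
print(result)  # {'name': 'web-01', 'cpu': 4}
```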
+
+
+curl -X POST http://localhost:9091/config/render \
-H "Content-Type: application/json" \
-d '{
"language": "tera",
@@ -57711,14 +54615,10 @@ Monitoring: DISABLED
},
"name": "server-template"
}'
-```plaintext
-
-### Tera Filters and Functions
-
-Tera supports Jinja2-compatible filters and functions:
-
-```bash
-curl -X POST http://localhost:9091/config/render \
+
+
+Tera supports Jinja2-compatible filters and functions:
+curl -X POST http://localhost:9091/config/render \
-H "Content-Type: application/json" \
-d '{
"language": "tera",
@@ -57742,130 +54642,97 @@ Cost estimate: \${{ monthly_cost | round(precision=2) }}
]
}
}'
-```plaintext
-
-### Expected Tera Rendering Time
-
-- **Simple templates**: 4-10ms
-- **Complex templates with loops**: 10-20ms
-- **Always fast** (template is pre-compiled)
-
-## Performance Characteristics
-
-### Caching Strategy
-
-All three renderers use LRU (Least Recently Used) caching:
-
-- **Cache Size**: 100 entries per renderer
-- **Cache Key**: SHA256 hash of (content + context)
-- **Cache Hit**: Typically < 5ms
-- **Cache Miss**: Language-dependent (20-60ms)
-
-**To maximize cache hits**:
-
-1. Render the same config multiple times → hits after first render
-2. Use static content when possible → better cache reuse
-3. Monitor cache hit ratio via `/config/stats`
-
-### Benchmarks
-
-Comparison of rendering times (on commodity hardware):
-
-| Scenario | KCL | Nickel | Tera |
-|----------|-----|--------|------|
-| Simple config (10 vars) | 20ms | 30ms | 5ms |
-| Medium config (50 vars) | 35ms | 45ms | 8ms |
-| Large config (100+ vars) | 50-100ms | 50-80ms | 10ms |
-| Cached render | 1-5ms | 1-5ms | 1-5ms |
-
-### Memory Usage
-
-- Each renderer keeps 100 cached entries in memory
-- Average config size in cache: ~5KB
-- Maximum memory per renderer: ~500KB + overhead
-
-## Error Handling
-
-### Common Errors
-
-#### KCL Binary Not Found
-
-**Error Response**:
-
-```json
-{
+
+
+
+Simple templates: 4-10 ms
+Complex templates with loops: 10-20 ms
+Always fast (template is pre-compiled)
+
+
+
+All three renderers use LRU (Least Recently Used) caching:
+
+Cache Size: 100 entries per renderer
+Cache Key: SHA256 hash of (content + context)
+Cache Hit: Typically < 5 ms
+Cache Miss: Language-dependent (20-60 ms)
+
+To maximize cache hits:
+
+Render the same config multiple times → hits after first render
+Use static content when possible → better cache reuse
+Monitor cache hit ratio via /config/stats
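The caching scheme described above can be sketched as follows. The exact serialization of (content + context) used by the daemon is an assumption; the point is that identical inputs hash to the same key and the least recently used entry is evicted past 100 entries:

```python
import hashlib
import json
from collections import OrderedDict

# Sketch of the SHA256-keyed LRU render cache described above.
class RenderCache:
    def __init__(self, max_entries=100):
        self.max_entries = max_entries
        self.entries = OrderedDict()

    @staticmethod
    def key(content, context):
        # Assumed serialization: content concatenated with sorted-key JSON
        payload = content + json.dumps(context, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

    def get(self, content, context):
        k = self.key(content, context)
        if k in self.entries:
            self.entries.move_to_end(k)  # mark as recently used
            return self.entries[k]
        return None

    def put(self, content, context, rendered):
        k = self.key(content, context)
        self.entries[k] = rendered
        self.entries.move_to_end(k)
        if len(self.entries) > self.max_entries:
            self.entries.popitem(last=False)  # evict least recently used

cache = RenderCache(max_entries=100)
cache.put('{name = "server"}', {}, "name = server")
print(cache.get('{name = "server"}', {}))  # name = server
print(cache.get('{name = "other"}', {}))   # None
```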
+
+
+Comparison of rendering times (on commodity hardware):
+| Scenario | KCL | Nickel | Tera |
+|----------|-----|--------|------|
+| Simple config (10 vars) | 20 ms | 30 ms | 5 ms |
+| Medium config (50 vars) | 35 ms | 45 ms | 8 ms |
+| Large config (100+ vars) | 50-100 ms | 50-80 ms | 10 ms |
+| Cached render | 1-5 ms | 1-5 ms | 1-5 ms |
+
+
+
+
+Each renderer keeps 100 cached entries in memory
+Average config size in cache: ~5 KB
+Maximum memory per renderer: ~500 KB + overhead
+
+
+
+
+Error Response:
+{
"rendered": null,
"error": "KCL binary not found in PATH. Install KCL or set KCL_PATH environment variable",
"language": "kcl",
"execution_time_ms": 0
}
-```plaintext
-
-**Solution**:
-
-```bash
-# Install KCL
+
+Solution:
+# Install KCL
kcl version
# Or set explicit path
export KCL_PATH=/usr/local/bin/kcl
-```plaintext
-
-#### Invalid KCL Syntax
-
-**Error Response**:
-
-```json
-{
+
+
+Error Response:
+{
"rendered": null,
"error": "KCL evaluation failed: Parse error at line 3: expected '='",
"language": "kcl",
"execution_time_ms": 12
}
-```plaintext
-
-**Solution**: Verify KCL syntax. Run `kcl eval file.k` directly for better error messages.
-
-#### Missing Context Variable
-
-**Error Response**:
-
-```json
-{
+
+Solution: Verify the configuration syntax. For Nickel, run nickel eval file.ncl directly for better error messages; for legacy KCL configs, run kcl eval file.k.
+
+Error Response:
+{
"rendered": null,
"error": "KCL evaluation failed: undefined variable 'required_var'",
"language": "kcl",
"execution_time_ms": 8
}
-```plaintext
-
-**Solution**: Provide required context variables or use `option()` with defaults.
-
-#### Invalid JSON in Context
-
-**HTTP Status**: `400 Bad Request`
-**Body**: Error message about invalid JSON
-
-**Solution**: Ensure context is valid JSON.
-
-## Integration Examples
-
-### Using with Nushell
-
-```nushell
-# Render a KCL config from Nushell
-let config = open workspace/config/provisioning.k | into string
+
+Solution: Provide required context variables or use option() with defaults.
+
+HTTP Status: 400 Bad Request
+Body: Error message about invalid JSON
+Solution: Ensure context is valid JSON.
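One way to avoid the 400 entirely is to validate the context payload client-side before posting. A small sketch (the helper is hypothetical):

```python
import json

# Validate the context payload locally before calling the daemon,
# so malformed JSON fails fast instead of returning 400 Bad Request.
def build_render_request(language, content, raw_context):
    context = json.loads(raw_context)  # raises ValueError on invalid JSON
    if not isinstance(context, dict):
        raise ValueError("context must be a JSON object")
    return {"language": language, "content": content, "context": context}

req = build_render_request("nickel", '{name = "server"}', '{"env": "prod"}')
print(req["context"])  # {'env': 'prod'}
```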
+
+
+# Render a Nickel config from Nushell
+let config = open workspace/config/provisioning.ncl | into string
let response = curl -X POST http://localhost:9091/config/render \
-H "Content-Type: application/json" \
- -d $"{{ language: \"kcl\", content: $config }}" | from json
+ -d $"{{ language: \"nickel\", content: $config }}" | from json
print $response.rendered
-```plaintext
-
-### Using with Python
-
-```python
-import requests
+
+
+import requests
import json
def render_config(language, content, context=None, name=None):
@@ -57885,8 +54752,8 @@ def render_config(language, content, context=None, name=None):
# Example usage
result = render_config(
- "kcl",
- 'name = "server"\ncpu = 4',
+ "nickel",
+ '{name = "server", cpu = 4}',
{"name": "prod-server"},
"my-config"
)
@@ -57896,12 +54763,9 @@ if result["error"]:
else:
print(f"Rendered in {result['execution_time_ms']}ms")
print(result["rendered"])
-```plaintext
-
-### Using with Curl
-
-```bash
-#!/bin/bash
+
+
+#!/bin/bash
# Function to render config
render_config() {
@@ -57921,74 +54785,52 @@ EOF
}
# Usage
-render_config "kcl" "name = \"my-server\"" "server-config"
-```plaintext
-
-## Troubleshooting
-
-### Daemon Won't Start
-
-**Check log level**:
-
-```bash
-PROVISIONING_LOG_LEVEL=debug ./target/release/cli-daemon
-```plaintext
-
-**Verify Nushell binary**:
-
-```bash
-which nu
+render_config "nickel" "{name = \"my-server\"}" "server-config"
+
+
+
+Check log level:
+PROVISIONING_LOG_LEVEL=debug ./target/release/cli-daemon
+
+Verify Nushell binary:
+which nu
# or set explicit path
NUSHELL_PATH=/usr/local/bin/nu ./target/release/cli-daemon
-```plaintext
-
-### Very Slow Rendering
-
-**Check cache hit rate**:
-
-```bash
-curl http://localhost:9091/config/stats | jq '.kcl_cache_hits / .kcl_renders'
-```plaintext
-
-**If low cache hit rate**: Rendering same configs repeatedly?
-
-**Monitor execution time**:
-
-```bash
-curl http://localhost:9091/config/render ... | jq '.execution_time_ms'
-```plaintext
-
-### Rendering Hangs
-
-**Set timeout** (depends on client):
-
-```bash
-curl --max-time 10 -X POST http://localhost:9091/config/render ...
-```plaintext
-
-**Check daemon logs** for stuck processes.
-
-### Out of Memory
-
-**Reduce cache size** (rebuild with modified config) or restart daemon.
-
-## Best Practices
-
-1. **Choose right language for task**:
- - KCL: Familiar, type-safe, use if already in ecosystem
- - Nickel: Large configs with lazy evaluation needs
- - Tera: Simple templating, fastest
-
-2. **Use context variables** instead of hardcoding values:
-
- ```json
- "context": {
- "environment": "production",
- "replica_count": 3
- }
+
+Check cache hit rate:
+curl http://localhost:9091/config/stats | jq '.nickel_cache_hits / .nickel_renders'
+
+If the cache hit rate is low: are the same configs being rendered often enough to benefit from caching?
+Monitor execution time:
+curl http://localhost:9091/config/render ... | jq '.execution_time_ms'
+
+
+Set timeout (depends on client):
+curl --max-time 10 -X POST http://localhost:9091/config/render ...
+
+Check daemon logs for stuck processes.
+
+Reduce cache size (rebuild with modified config) or restart daemon.
+
+Choose the right language for the task:
+
+Nickel: type safety plus lazy evaluation; preferred for new configs
+Tera: simple templating, fastest
+KCL: legacy configs only (deprecated)
+
+
+
+Use context variables instead of hardcoding values:
+"context": {
+ "environment": "production",
+ "replica_count": 3
+}
+
+
+
Monitor statistics to understand performance:
watch -n 1 'curl -s http://localhost:9091/config/stats | jq'
@@ -58000,7 +54842,7 @@ curl --max-time 10 -X POST http://localhost:9091/config/render ...
Error handling : Always check error field in response
-
+
KCL Documentation
Nickel User Manual
@@ -58008,15 +54850,12 @@ curl --max-time 10 -X POST http://localhost:9091/config/render ...
CLI Daemon Architecture: provisioning/platform/cli-daemon/README.md
-
+
POST http://localhost:9091/config/render
-```plaintext
-
-### Request Template
-
-```bash
-curl -X POST http://localhost:9091/config/render \
+
+
+curl -X POST http://localhost:9091/config/render \
-H "Content-Type: application/json" \
-d '{
"language": "kcl|nickel|tera",
@@ -58024,60 +54863,44 @@ curl -X POST http://localhost:9091/config/render \
"context": {...},
"name": "optional-name"
}'
-```plaintext
-
-### Quick Examples
-
-#### KCL - Simple Config
-
-```bash
-curl -X POST http://localhost:9091/config/render \
+
+
+
+curl -X POST http://localhost:9091/config/render \
-H "Content-Type: application/json" \
-d '{
"language": "kcl",
"content": "name = \"server\"\ncpu = 4\nmemory = 8192"
}'
-```plaintext
-
-#### KCL - With Context
-
-```bash
-curl -X POST http://localhost:9091/config/render \
+
+
+curl -X POST http://localhost:9091/config/render \
-H "Content-Type: application/json" \
-d '{
"language": "kcl",
"content": "name = option(\"server_name\")\nenvironment = option(\"env\", default=\"dev\")",
"context": {"server_name": "prod-01", "env": "production"}
}'
-```plaintext
-
-#### Nickel - Simple Config
-
-```bash
-curl -X POST http://localhost:9091/config/render \
+
+
+curl -X POST http://localhost:9091/config/render \
-H "Content-Type: application/json" \
-d '{
"language": "nickel",
"content": "{name = \"server\", cpu = 4, memory = 8192}"
}'
-```plaintext
-
-#### Tera - Template with Loops
-
-```bash
-curl -X POST http://localhost:9091/config/render \
+
+
+curl -X POST http://localhost:9091/config/render \
-H "Content-Type: application/json" \
-d '{
"language": "tera",
"content": "{% for task in tasks %}{{ task }}\n{% endfor %}",
"context": {"tasks": ["kubernetes", "postgres", "redis"]}
}'
-```plaintext
-
-### Statistics
-
-```bash
-# Get stats
+
+
+# Get stats
curl http://localhost:9091/config/stats
# Reset stats
@@ -58085,41 +54908,32 @@ curl -X POST http://localhost:9091/config/stats/reset
# Watch stats in real-time
watch -n 1 'curl -s http://localhost:9091/config/stats | jq'
-```plaintext
-
-### Performance Guide
-
-| Language | Cold | Cached | Use Case |
-|----------|------|--------|----------|
-| **KCL** | 20-50ms | 1-5ms | Type-safe infrastructure configs |
-| **Nickel** | 30-60ms | 1-5ms | Large configs, lazy evaluation |
-| **Tera** | 5-20ms | 1-5ms | Simple templating |
-
-### Status Codes
-
-| Code | Meaning |
-|------|---------|
-| 200 | Success (check `error` field for evaluation errors) |
-| 400 | Invalid request |
-| 500 | Daemon error |
-
-### Response Fields
-
-```json
-{
+
+
+| Language | Cold | Cached | Use Case |
+|----------|------|--------|----------|
+| KCL | 20-50 ms | 1-5 ms | Type-safe infrastructure configs |
+| Nickel | 30-60 ms | 1-5 ms | Large configs, lazy evaluation |
+| Tera | 5-20 ms | 1-5 ms | Simple templating |
+
+
+
+| Code | Meaning |
+|------|---------|
+| 200 | Success (check error field for evaluation errors) |
+| 400 | Invalid request |
+| 500 | Daemon error |
+
+
+
+{
"rendered": "...output or null on error",
"error": "...error message or null on success",
"language": "kcl|nickel|tera",
"execution_time_ms": 23
}
-```plaintext
-
-### Languages Comparison
-
-#### KCL
-
-```kcl
-name = "server"
+
+
+
+name = "server"
type = "web"
cpu = 4
memory = 8192
@@ -58128,15 +54942,11 @@ tags = {
env = "prod"
team = "platform"
}
-```plaintext
-
-**Pros**: Familiar syntax, type-safe, existing patterns
-**Cons**: Eager evaluation, verbose for simple cases
-
-#### Nickel
-
-```nickel
-{
+
+Pros: Familiar syntax, type-safe, existing patterns
+Cons: Eager evaluation, verbose for simple cases
+
+{
name = "server",
type = "web",
cpu = 4,
@@ -58146,174 +54956,123 @@ tags = {
team = "platform"
}
}
-```plaintext
-
-**Pros**: Lazy evaluation, functional style, compact
-**Cons**: Different paradigm, smaller ecosystem
-
-#### Tera
-
-```jinja2
-Server: {{ name }}
+
+Pros: Lazy evaluation, functional style, compact
+Cons: Different paradigm, smaller ecosystem
+
+Server: {{ name }}
Type: {{ type | upper }}
{% for tag_name, tag_value in tags %}
- {{ tag_name }}: {{ tag_value }}
{% endfor %}
-```plaintext
-
-**Pros**: Fast, simple, familiar template syntax
-**Cons**: No validation, template-only
-
-### Caching
-
-**How it works**: SHA256(content + context) → cached result
-
-**Cache hit**: < 5ms
-**Cache miss**: 20-60ms (language dependent)
-**Cache size**: 100 entries per language
-
-**Cache stats**:
-
-```bash
-curl -s http://localhost:9091/config/stats | jq '{
+
+Pros: Fast, simple, familiar template syntax
+Cons: No validation, template-only
+
+How it works: SHA256(content + context) → cached result
+Cache hit: < 5 ms
+Cache miss: 20-60 ms (language dependent)
+Cache size: 100 entries per language
+Cache stats:
+curl -s http://localhost:9091/config/stats | jq '{
kcl_cache_hits: .kcl_cache_hits,
kcl_renders: .kcl_renders,
kcl_hit_ratio: (.kcl_cache_hits / .kcl_renders * 100)
}'
-```plaintext
-
-### Common Tasks
-
-#### Batch Rendering
-
-```bash
-#!/bin/bash
-for config in configs/*.k; do
+
+
+
+#!/bin/bash
+for config in configs/*.ncl; do
curl -X POST http://localhost:9091/config/render \
-H "Content-Type: application/json" \
-d "$(jq -n --arg content \"$(cat $config)\" \
- '{language: "kcl", content: $content}')"
+ '{language: "nickel", content: $content}')"
done
-```plaintext
+
+
+# Nickel validation
+nickel typecheck my-config.ncl
-#### Validate Before Rendering
-
-```bash
-# KCL validation
-kcl eval --strict my-config.k
-
-# Nickel validation (via daemon first render)
+# Daemon validation (via first render)
curl ... # catches errors in response
-```plaintext
-
-#### Monitor Cache Performance
-
-```bash
-#!/bin/bash
+
+
+#!/bin/bash
while true; do
STATS=$(curl -s http://localhost:9091/config/stats)
- HIT_RATIO=$( echo "$STATS" | jq '.kcl_cache_hits / .kcl_renders * 100')
+ HIT_RATIO=$( echo "$STATS" | jq '.nickel_cache_hits / .nickel_renders * 100')
echo "Cache hit ratio: ${HIT_RATIO}%"
sleep 5
done
-```plaintext
-
-### Error Examples
-
-#### Missing Binary
-
-```json
-{
- "error": "KCL binary not found. Install KCL or set KCL_PATH",
+
+
+
+{
+ "error": "Nickel binary not found. Install Nickel or set NICKEL_PATH",
"rendered": null
}
-```plaintext
-
-**Fix**: `export KCL_PATH=/path/to/kcl` or install KCL
-
-#### Syntax Error
-
-```json
-{
- "error": "KCL evaluation failed: Parse error at line 3",
+
+Fix: export NICKEL_PATH=/path/to/nickel or install Nickel
+
+{
+ "error": "Nickel type checking failed: Type mismatch at line 3",
"rendered": null
}
-```plaintext
-
-**Fix**: Check KCL syntax, run `kcl eval file.k` directly
-
-#### Missing Variable
-
-```json
-{
- "error": "KCL evaluation failed: undefined variable 'name'",
+
+Fix: Check Nickel syntax; run nickel typecheck file.ncl directly
+
+{
+ "error": "Nickel evaluation failed: undefined variable 'name'",
"rendered": null
}
-```plaintext
+
+Fix: Provide the variable in context or define it as an optional field with a default
+
+
+use lib_provisioning
-**Fix**: Provide in `context` or use `option()` with default
-
-### Integration Quick Start
-
-#### Nushell
-
-```nushell
-use lib_provisioning
-
-let config = open server.k | into string
+let config = open server.ncl | into string
let result = (curl -X POST http://localhost:9091/config/render \
-H "Content-Type: application/json" \
- -d {language: "kcl", content: $config} | from json)
+ -d {language: "nickel", content: $config} | from json)
if ($result.error != null) {
error $result.error
} else {
print $result.rendered
}
-```plaintext
-
-#### Python
-
-```python
-import requests
+
+
+import requests
resp = requests.post("http://localhost:9091/config/render", json={
- "language": "kcl",
- "content": 'name = "server"',
+ "language": "nickel",
+ "content": '{name = "server"}',
"context": {}
})
result = resp.json()
print(result["rendered"] if not result["error"] else f"Error: {result['error']}")
-```plaintext
-
-#### Bash
-
-```bash
-render() {
+
+
+render() {
curl -s -X POST http://localhost:9091/config/render \
-H "Content-Type: application/json" \
-d "$1" | jq '.'
}
# Usage
-render '{"language":"kcl","content":"name = \"server\""}'
-```plaintext
-
-### Environment Variables
-
-```bash
-# Daemon configuration
+render '{"language":"nickel","content":"{name = \"server\"}"}'
+
+
+# Daemon configuration
PROVISIONING_LOG_LEVEL=debug # Log level
DAEMON_BIND=127.0.0.1:9091 # Bind address
NUSHELL_PATH=/usr/local/bin/nu # Nushell binary
-KCL_PATH=/usr/local/bin/kcl # KCL binary
NICKEL_PATH=/usr/local/bin/nickel # Nickel binary
-```plaintext
-
-### Useful Commands
-
-```bash
-# Health check
+
+
+# Health check
curl http://localhost:9091/health
# Daemon info
@@ -58327,20 +55086,26 @@ curl -s http://localhost:9091/config/stats | jq '{
total: .total_renders,
success_rate: (.successful_renders / .total_renders * 100),
avg_time: .average_time_ms,
- cache_hit_rate: ((.kcl_cache_hits + .nickel_cache_hits) / (.kcl_renders + .nickel_renders) * 100)
+ cache_hit_rate: ((.nickel_cache_hits + .tera_cache_hits) / (.nickel_renders + .tera_renders) * 100)
}'
-```plaintext
-
-### Troubleshooting Checklist
-
-- [ ] Daemon running? `curl http://localhost:9091/health`
-- [ ] Correct content for language?
-- [ ] Valid JSON in context?
-- [ ] Binary available? (KCL/Nickel)
-- [ ] Check log level? `PROVISIONING_LOG_LEVEL=debug`
-- [ ] Cache hit rate? `/config/stats`
-- [ ] Error in response? Check `error` field
+
+
This comprehensive guide explains the configuration system of the Infrastructure Automation platform, helping you understand, customize, and manage all configuration aspects.
@@ -58354,7 +55119,7 @@ curl -s http://localhost:9091/config/stats | jq '{
Advanced configuration patterns
-
+
The system uses a layered configuration approach with clear precedence rules:
Runtime CLI arguments (highest precedence)
↓ (overrides)
@@ -58367,35 +55132,26 @@ Project Config (./provisioning.toml)
User Config (~/.config/provisioning/config.toml)
↓ (overrides)
System Defaults (config.defaults.toml) (lowest precedence)
-```plaintext
-
-### Configuration File Types
-
-| File Type | Purpose | Location | Format |
-|-----------|---------|----------|--------|
-| **System Defaults** | Base system configuration | `config.defaults.toml` | TOML |
-| **User Config** | Personal preferences | `~/.config/provisioning/config.toml` | TOML |
-| **Project Config** | Project-wide settings | `./provisioning.toml` | TOML |
-| **Infrastructure Config** | Infra-specific settings | `./.provisioning.toml` | TOML |
-| **Environment Config** | Environment overrides | `config.{env}.toml` | TOML |
-| **Infrastructure Definitions** | Infrastructure as Code | `settings.k`, `*.k` | KCL |
-
-## Understanding Configuration Sections
-
-### Core System Configuration
-
-```toml
-[core]
+
+
+File Type                   Purpose                      Location                             Format
+System Defaults             Base system configuration    config.defaults.toml                 TOML
+User Config                 Personal preferences         ~/.config/provisioning/config.toml   TOML
+Project Config              Project-wide settings        ./provisioning.toml                  TOML
+Infrastructure Config       Infra-specific settings      ./.provisioning.toml                 TOML
+Environment Config          Environment overrides        config.{env}.toml                    TOML
+Infrastructure Definitions  Infrastructure as Code       main.ncl, *.ncl                      Nickel
+
+
+
+
+[core]
version = "1.0.0" # System version
name = "provisioning" # System identifier
-```plaintext
-
-### Path Configuration
-
-The most critical configuration section that defines where everything is located:
-
-```toml
-[paths]
+
+
+This is the most critical configuration section; it defines where everything is located:
+[paths]
# Base directory - all other paths derive from this
base = "/usr/local/provisioning"
@@ -58411,35 +55167,26 @@ core = "{{paths.base}}/core"
[paths.files]
# Important file locations
-settings_file = "settings.k"
+settings_file = "settings.ncl"
keys = "{{paths.base}}/keys.yaml"
requirements = "{{paths.base}}/requirements.yaml"
-```plaintext
-
-### Debug and Logging
-
-```toml
-[debug]
+
+
+[debug]
enabled = false # Enable debug mode
metadata = false # Show internal metadata
check = false # Default to check mode (dry run)
remote = false # Enable remote debugging
log_level = "info" # Logging verbosity
no_terminal = false # Disable terminal features
-```plaintext
-
-### Output Configuration
-
-```toml
-[output]
+
+
+[output]
file_viewer = "less" # File viewer command
format = "yaml" # Default output format (json, yaml, toml, text)
-```plaintext
-
-### Provider Configuration
-
-```toml
-[providers]
+
+
+[providers]
default = "local" # Default provider
[providers.aws]
@@ -58456,12 +55203,9 @@ interface = "CLI"
api_url = ""
auth = ""
interface = "CLI"
-```plaintext
-
-### Encryption (SOPS) Configuration
-
-```toml
-[sops]
+
+
+[sops]
use_sops = true # Enable SOPS encryption
config_path = "{{paths.base}}/.sops.yaml"
@@ -58470,78 +55214,50 @@ key_search_paths = [
"{{paths.base}}/keys/age.txt",
"~/.config/sops/age/keys.txt"
]
-```plaintext
-
-## Configuration Interpolation
-
-The system supports powerful interpolation patterns for dynamic configuration values.
-
-### Basic Interpolation Patterns
-
-#### Path Interpolation
-
-```toml
-# Reference other path values
+
+
+The system supports powerful interpolation patterns for dynamic configuration values.
+
+
+# Reference other path values
templates = "{{paths.base}}/my-templates"
custom_path = "{{paths.providers}}/custom"
-```plaintext
-
-#### Environment Variable Interpolation
-
-```toml
-# Access environment variables
+
+
+# Access environment variables
user_home = "{{env.HOME}}"
current_user = "{{env.USER}}"
custom_path = "{{env.CUSTOM_PATH || /default/path}}" # With fallback
-```plaintext
-
-#### Date/Time Interpolation
-
-```toml
-# Dynamic date/time values
+
+
+# Dynamic date/time values
log_file = "{{paths.base}}/logs/app-{{now.date}}.log"
backup_dir = "{{paths.base}}/backups/{{now.timestamp}}"
-```plaintext
-
-#### Git Information Interpolation
-
-```toml
-# Git repository information
+
+
+# Git repository information
deployment_branch = "{{git.branch}}"
version_tag = "{{git.tag}}"
commit_hash = "{{git.commit}}"
-```plaintext
-
-#### Cross-Section References
-
-```toml
-# Reference values from other sections
+
+
+# Reference values from other sections
database_host = "{{providers.aws.database_endpoint}}"
api_key = "{{sops.decrypted_key}}"
-```plaintext
-
-### Advanced Interpolation
-
-#### Function Calls
-
-```toml
-# Built-in functions
+
+
+
+# Built-in functions
config_path = "{{path.join(env.HOME, '.config', 'provisioning')}}"
safe_name = "{{str.lower(str.replace(project.name, ' ', '-'))}}"
-```plaintext
-
-#### Conditional Expressions
-
-```toml
-# Conditional logic
+
+
+# Conditional logic
debug_level = "{{debug.enabled && 'debug' || 'info'}}"
storage_path = "{{env.STORAGE_PATH || path.join(paths.base, 'storage')}}"
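A toy resolver illustrates how these `{{...}}` patterns might be evaluated. The `interpolate` function and its `||` fallback handling are simplified assumptions for illustration, not the platform's actual engine:

```python
import os
import re

# Illustrative mini-resolver for {{...}} interpolation (not the real engine):
# supports {{env.VAR}}, {{expr || fallback}}, and dotted references into the
# config itself, e.g. {{paths.base}}.

def interpolate(value: str, config: dict) -> str:
    def lookup(expr: str) -> str:
        expr = expr.strip()
        # "a || b" falls back to b when a resolves to nothing
        if "||" in expr:
            primary, fallback = (p.strip() for p in expr.split("||", 1))
            return lookup(primary) or fallback.strip("'\"")
        if expr.startswith("env."):
            return os.environ.get(expr[4:], "")
        # dotted path into the config dict, e.g. paths.base
        node = config
        for part in expr.split("."):
            if not isinstance(node, dict) or part not in node:
                return ""
            node = node[part]
        return str(node)
    return re.sub(r"\{\{(.+?)\}\}", lambda m: lookup(m.group(1)), value)

config = {"paths": {"base": "/opt/provisioning"}}
os.environ.pop("STORAGE_PATH", None)
print(interpolate("{{paths.base}}/cache", config))
print(interpolate("{{env.STORAGE_PATH || /tmp/storage}}", config))
```

The first call resolves a cross-reference into the config; the second falls back because `STORAGE_PATH` is unset.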
-```plaintext
-
-### Interpolation Examples
-
-```toml
-[paths]
+
+
+[paths]
base = "/opt/provisioning"
workspace = "{{env.HOME}}/provisioning-workspace"
current_project = "{{paths.workspace}}/{{env.PROJECT_NAME || 'default'}}"
@@ -58557,27 +55273,20 @@ connection_string = "postgresql://{{env.DB_USER}}:{{env.DB_PASS}}@{{env.DB_HOST
[notifications]
slack_channel = "#{{env.TEAM_NAME || 'general'}}-notifications"
email_subject = "Deployment {{deployment.environment}} - {{deployment.timestamp}}"
-```plaintext
-
-## Environment-Specific Configuration
-
-### Environment Detection
-
-The system automatically detects the environment using:
-
-1. **PROVISIONING_ENV** environment variable
-2. **Git branch patterns** (dev, staging, main/master)
-3. **Directory patterns** (development, staging, production)
-4. **Explicit configuration**
-
-### Environment Configuration Files
-
-Create environment-specific configurations:
-
-#### Development Environment (`config.dev.toml`)
-
-```toml
-[core]
+
+
+
+The system automatically detects the environment using:
+
+PROVISIONING_ENV environment variable
+Git branch patterns (dev, staging, main/master)
+Directory patterns (development, staging, production)
+Explicit configuration
+
+
+Create environment-specific configurations:
+
+[core]
name = "provisioning-dev"
[debug]
@@ -58593,12 +55302,9 @@ enabled = false # Disable caching for development
[notifications]
enabled = false # No notifications in dev
-```plaintext
-
-#### Testing Environment (`config.test.toml`)
-
-```toml
-[core]
+
+
+[core]
name = "provisioning-test"
[debug]
@@ -58612,12 +55318,9 @@ default = "local"
[infrastructure]
auto_cleanup = true # Clean up test resources
resource_prefix = "test-{{git.branch}}-"
-```plaintext
-
-#### Production Environment (`config.prod.toml`)
-
-```toml
-[core]
+
+
+[core]
name = "provisioning-prod"
[debug]
@@ -58635,12 +55338,9 @@ encrypt_backups = true
[notifications]
enabled = true
critical_only = true
-```plaintext
-
-### Environment Switching
-
-```bash
-# Set environment for session
+
+
+# Set environment for session
export PROVISIONING_ENV=dev
provisioning env
@@ -58649,26 +55349,18 @@ provisioning --environment prod server create
# Switch environment permanently
provisioning env set prod
-```plaintext
-
-## User Configuration Customization
-
-### Creating Your User Configuration
-
-```bash
-# Initialize user configuration from template
+
+
+
+# Initialize user configuration from template
provisioning init config
# Or copy and customize
cp config-examples/config.user.toml ~/.config/provisioning/config.toml
-```plaintext
-
-### Common User Customizations
-
-#### Developer Setup
-
-```toml
-[paths]
+
+
+
+[paths]
base = "/Users/alice/dev/provisioning"
[debug]
@@ -58686,12 +55378,9 @@ file_viewer = "code"
key_search_paths = [
"/Users/alice/.config/sops/age/keys.txt"
]
-```plaintext
-
-#### Operations Engineer Setup
-
-```toml
-[paths]
+
+
+[paths]
base = "/opt/provisioning"
[debug]
@@ -58707,12 +55396,9 @@ format = "yaml"
[notifications]
enabled = true
email = "ops-team@company.com"
-```plaintext
-
-#### Team Lead Setup
-
-```toml
-[paths]
+
+
+[paths]
base = "/home/teamlead/provisioning"
[debug]
@@ -58732,14 +55418,10 @@ key_search_paths = [
"/secure/keys/team-lead.txt",
"~/.config/sops/age/keys.txt"
]
-```plaintext
-
-## Project-Specific Configuration
-
-### Project Configuration File (`provisioning.toml`)
-
-```toml
-[project]
+
+
+
+[project]
name = "web-application"
description = "Main web application infrastructure"
version = "2.1.0"
@@ -58768,12 +55450,9 @@ backup_required = true
[notifications]
slack_webhook = "https://hooks.slack.com/services/..."
team_email = "platform-team@company.com"
-```plaintext
-
-### Infrastructure-Specific Configuration (`.provisioning.toml`)
-
-```toml
-[infrastructure]
+
+
+[infrastructure]
name = "production-web-app"
environment = "production"
region = "us-west-2"
@@ -58798,14 +55477,10 @@ security_group_id = "sg-12345678"
enabled = true
retention_days = 90
alerting_enabled = true
-```plaintext
-
-## Configuration Validation
-
-### Built-in Validation
-
-```bash
-# Validate current configuration
+
+
+
+# Validate current configuration
provisioning validate config
# Detailed validation with warnings
@@ -58816,14 +55491,10 @@ provisioning validate config strict
# Validate specific environment
provisioning validate config --environment prod
-```plaintext
-
-### Custom Validation Rules
-
-Create custom validation in your configuration:
-
-```toml
-[validation]
+
+
+Create custom validation in your configuration:
+[validation]
# Custom validation rules
required_sections = ["paths", "providers", "debug"]
required_env_vars = ["AWS_REGION", "PROJECT_NAME"]
@@ -58838,16 +55509,11 @@ writable_required = ["paths.base", "paths.cache"]
# Security validation
require_encryption = true
min_key_length = 32
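A minimal validator for rules like these might look as follows; this is a sketch that assumes the configuration has already been loaded into a dictionary:

```python
import os

# Minimal sketch of custom validation: check required sections and
# environment variables against a loaded config dict, reporting problems.

def validate_config(config: dict, rules: dict) -> list[str]:
    errors = []
    for section in rules.get("required_sections", []):
        if section not in config:
            errors.append(f"missing required section [{section}]")
    for var in rules.get("required_env_vars", []):
        if not os.environ.get(var):
            errors.append(f"missing required environment variable {var}")
    return errors

rules = {"required_sections": ["paths", "providers", "debug"],
         "required_env_vars": ["PROJECT_NAME"]}
config = {"paths": {"base": "/opt/provisioning"}, "debug": {"enabled": False}}
os.environ.pop("PROJECT_NAME", None)
for problem in validate_config(config, rules):
    print(problem)
```

Path-existence and writability checks would follow the same shape, appending to the same error list.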
-```plaintext
-
-## Troubleshooting Configuration
-
-### Common Configuration Issues
-
-#### Issue 1: Path Not Found Errors
-
-```bash
-# Problem: Base path doesn't exist
+
+
+
+
+# Problem: Base path doesn't exist
# Check current configuration
provisioning env | grep paths.base
@@ -58857,12 +55523,9 @@ ls -la /path/shown/above
# Fix: Update user config
nano ~/.config/provisioning/config.toml
# Set correct paths.base = "/correct/path"
-```plaintext
-
-#### Issue 2: Interpolation Failures
-
-```bash
-# Problem: {{env.VARIABLE}} not resolving
+
+
+# Problem: {{env.VARIABLE}} not resolving
# Check environment variables
env | grep VARIABLE
@@ -58871,12 +55534,9 @@ provisioning validate interpolation test
# Debug interpolation
provisioning --debug validate interpolation validate
-```plaintext
-
-#### Issue 3: SOPS Encryption Errors
-
-```bash
-# Problem: Cannot decrypt SOPS files
+
+
+# Problem: Cannot decrypt SOPS files
# Check SOPS configuration
provisioning sops config
@@ -58884,13 +55544,10 @@ provisioning sops config
ls -la ~/.config/sops/age/keys.txt
# Test decryption
-sops -d encrypted-file.k
-```plaintext
-
-#### Issue 4: Provider Authentication
-
-```bash
-# Problem: Provider authentication failed
+sops -d encrypted-file.ncl
+
+
+# Problem: Provider authentication failed
# Check provider configuration
provisioning show providers
@@ -58899,12 +55556,9 @@ provisioning provider test aws
# Verify credentials
aws configure list # For AWS
-```plaintext
-
-### Configuration Debugging
-
-```bash
-# Show current configuration hierarchy
+
+
+# Show current configuration hierarchy
provisioning config show --hierarchy
# Show configuration sources
@@ -58916,12 +55570,9 @@ provisioning config interpolated
# Debug specific section
provisioning config debug paths
provisioning config debug providers
-```plaintext
-
-### Configuration Reset
-
-```bash
-# Reset to defaults
+
+
+# Reset to defaults
provisioning config reset
# Reset specific section
@@ -58929,14 +55580,10 @@ provisioning config reset providers
# Backup current config before reset
provisioning config backup
-```plaintext
-
-## Advanced Configuration Patterns
-
-### Dynamic Configuration Loading
-
-```toml
-[dynamic]
+
+
+
+[dynamic]
# Load configuration from external sources
config_urls = [
"https://config.company.com/provisioning/base.toml",
@@ -58948,12 +55595,9 @@ load_if_exists = [
"./local-overrides.toml",
"../shared/team-config.toml"
]
-```plaintext
-
-### Configuration Templating
-
-```toml
-[templates]
+
+
+[templates]
# Template-based configuration
base_template = "aws-web-app"
template_vars = {
@@ -58964,12 +55608,9 @@ template_vars = {
# Template inheritance
extends = ["base-web", "monitoring", "security"]
-```plaintext
-
-### Multi-Region Configuration
-
-```toml
-[regions]
+
+
+[regions]
primary = "us-west-2"
secondary = "us-east-1"
@@ -58980,12 +55621,9 @@ availability_zones = ["us-west-2a", "us-west-2b", "us-west-2c"]
[regions.us-east-1]
providers.aws.region = "us-east-1"
availability_zones = ["us-east-1a", "us-east-1b", "us-east-1c"]
-```plaintext
-
-### Configuration Profiles
-
-```toml
-[profiles]
+
+
+[profiles]
active = "development"
[profiles.development]
@@ -59002,25 +55640,18 @@ cost_controls.max_budget = 1000.00
debug.enabled = false
providers.default = "aws"
security.strict_mode = true
-```plaintext
-
-## Configuration Management Best Practices
-
-### 1. Version Control
-
-```bash
-# Track configuration changes
+
+
+
+# Track configuration changes
git add provisioning.toml
git commit -m "feat(config): add production settings"
# Use branches for configuration experiments
git checkout -b config/new-provider
-```plaintext
-
-### 2. Documentation
-
-```toml
-# Document your configuration choices
+
+
+# Document your configuration choices
[paths]
# Using custom base path for team shared installation
base = "/opt/team-provisioning"
@@ -59029,47 +55660,35 @@ base = "/opt/team-provisioning"
# Debug enabled for troubleshooting infrastructure issues
enabled = true
log_level = "debug" # Temporary while debugging network problems
-```plaintext
-
-### 3. Validation
-
-```bash
-# Always validate before committing
+
+
+# Always validate before committing
provisioning validate config
git add . && git commit -m "update config"
-```plaintext
-
-### 4. Backup
-
-```bash
-# Regular configuration backups
+
+
+# Regular configuration backups
provisioning config export --format yaml > config-backup-$(date +%Y%m%d).yaml
# Automated backup script
echo '0 2 * * * provisioning config export > ~/backups/config-$(date +\%Y\%m\%d).yaml' | crontab -
-```plaintext
-
-### 5. Security
-
-- Never commit sensitive values in plain text
-- Use SOPS for encrypting secrets
-- Rotate encryption keys regularly
-- Audit configuration access
-
-```bash
-# Encrypt sensitive configuration
-sops -e settings.k > settings.encrypted.k
+
+
+
+Never commit sensitive values in plain text
+Use SOPS for encrypting secrets
+Rotate encryption keys regularly
+Audit configuration access
+
+# Encrypt sensitive configuration
+sops -e settings.ncl > settings.encrypted.ncl
# Audit configuration changes
git log -p -- provisioning.toml
-```plaintext
-
-## Configuration Migration
-
-### Migrating from Environment Variables
-
-```bash
-# Old: Environment variables
+
+
+
+# Old: Environment variables
export PROVISIONING_DEBUG=true
export PROVISIONING_PROVIDER=aws
@@ -59079,12 +55698,9 @@ enabled = true
[providers]
default = "aws"
-```plaintext
-
-### Upgrading Configuration Format
-
-```bash
-# Check for configuration updates needed
+
+
+# Check for configuration updates needed
provisioning config check-version
# Migrate to new format
@@ -59092,33 +55708,30 @@ provisioning config migrate --from 1.0 --to 2.0
# Validate migrated configuration
provisioning validate config
-```plaintext
-
-## Next Steps
-
-Now that you understand the configuration system:
-
-1. **Create your user configuration**: `provisioning init config`
-2. **Set up environment-specific configs** for your workflow
-3. **Learn CLI commands**: [CLI Reference](cli-reference.md)
-4. **Practice with examples**: [Examples and Tutorials](examples/)
-5. **Troubleshoot issues**: [Troubleshooting Guide](troubleshooting-guide.md)
-
-You now have complete control over how provisioning behaves in your environment!
+
+Now that you understand the configuration system:
+
+Create your user configuration : provisioning init config
+Set up environment-specific configs for your workflow
+Learn CLI commands : CLI Reference
+Practice with examples : Examples and Tutorials
+Troubleshoot issues : Troubleshooting Guide
+
+You now have complete control over how provisioning behaves in your environment!
Version : 1.0.0
Date : 2025-10-09
Status : Production Ready
-
+
A comprehensive authentication layer has been integrated into the provisioning system to secure sensitive operations. The system uses nu_plugin_auth for JWT authentication with MFA support, providing enterprise-grade security with graceful user experience.
-
+
RS256 asymmetric signing
-Access tokens (15min) + refresh tokens (7d)
+Access tokens (15 min) + refresh tokens (7 days)
OS keyring storage (macOS Keychain, Windows Credential Manager, Linux Secret Service)
@@ -59148,7 +55761,7 @@ You now have complete control over how provisioning behaves in your environment!
Helpful guidance for setup
-
+
# Interactive login (password prompt)
provisioning auth login <username>
@@ -59158,43 +55771,29 @@ provisioning auth login <username> --save
# Custom control center URL
provisioning auth login admin --url http://control.example.com:9080
-```plaintext
-
-### 2. Enroll MFA (First Time)
-
-```bash
-# Enroll TOTP (Google Authenticator)
+
+
+# Enroll TOTP (Google Authenticator)
provisioning auth mfa enroll totp
# Scan QR code with authenticator app
# Or enter secret manually
-```plaintext
-
-### 3. Verify MFA (For Sensitive Operations)
-
-```bash
-# Get 6-digit code from authenticator app
+
+
+# Get 6-digit code from authenticator app
provisioning auth mfa verify --code 123456
-```plaintext
-
-### 4. Check Authentication Status
-
-```bash
-# View current authentication status
+
+
+# View current authentication status
provisioning auth status
# Verify token is valid
provisioning auth verify
-```plaintext
-
----
-
-## Protected Operations
-
-### Server Operations
-
-```bash
-# ✅ CREATE - Requires auth (prod: +MFA)
+
+
+
+
+# ✅ CREATE - Requires auth (prod: +MFA)
provisioning server create web-01 # Auth required
provisioning server create web-01 --check # Auth skipped (check mode)
@@ -59205,12 +55804,9 @@ provisioning server delete web-01 --check # Auth skipped (check mode)
# 📖 READ - No auth required
provisioning server list # No auth required
provisioning server ssh web-01 # No auth required
-```plaintext
-
-### Task Service Operations
-
-```bash
-# ✅ CREATE - Requires auth (prod: +MFA)
+
+
+# ✅ CREATE - Requires auth (prod: +MFA)
provisioning taskserv create kubernetes # Auth required
provisioning taskserv create kubernetes --check # Auth skipped
@@ -59219,39 +55815,28 @@ provisioning taskserv delete kubernetes # Auth + MFA required
# 📖 READ - No auth required
provisioning taskserv list # No auth required
-```plaintext
-
-### Cluster Operations
-
-```bash
-# ✅ CREATE - Requires auth (prod: +MFA)
+
+
+# ✅ CREATE - Requires auth (prod: +MFA)
provisioning cluster create buildkit # Auth required
provisioning cluster create buildkit --check # Auth skipped
# ❌ DELETE - Requires auth + MFA
provisioning cluster delete buildkit # Auth + MFA required
-```plaintext
-
-### Batch Workflows
-
-```bash
-# ✅ SUBMIT - Requires auth (prod: +MFA)
-provisioning batch submit workflow.k # Auth required
-provisioning batch submit workflow.k --skip-auth # Auth skipped (if allowed)
+
+
+# ✅ SUBMIT - Requires auth (prod: +MFA)
+provisioning batch submit workflow.ncl # Auth required
+provisioning batch submit workflow.ncl --skip-auth # Auth skipped (if allowed)
# 📖 READ - No auth required
provisioning batch list # No auth required
provisioning batch status <task-id> # No auth required
-```plaintext
-
----
-
-## Configuration
-
-### Security Settings (`config.defaults.toml`)
-
-```toml
-[security]
+
+
+
+
+[security]
require_auth = true # Enable authentication system
require_mfa_for_production = true # MFA for prod environment
require_mfa_for_destructive = true # MFA for delete operations
@@ -59266,12 +55851,9 @@ auth_enabled = true # Enable nu_plugin_auth
[platform.control_center]
url = "http://localhost:9080" # Control center URL
-```plaintext
-
-### Environment-Specific Configuration
-
-```toml
-# Development
+
+
+# Development
[environments.dev]
security.bypass.allow_skip_auth = true # Allow auth bypass in dev
@@ -59279,16 +55861,11 @@ security.bypass.allow_skip_auth = true # Allow auth bypass in dev
[environments.prod]
security.bypass.allow_skip_auth = false # Never allow bypass
security.require_mfa_for_production = true
-```plaintext
-
----
-
-## Authentication Bypass (Dev/Test Only)
-
-### Environment Variable Method
-
-```bash
-# Export environment variable (dev/test only)
+
+
+
+
+# Export environment variable (dev/test only)
export PROVISIONING_SKIP_AUTH=true
# Run operations without authentication
@@ -59296,33 +55873,21 @@ provisioning server create web-01
# Unset when done
unset PROVISIONING_SKIP_AUTH
-```plaintext
-
-### Per-Command Flag
-
-```bash
-# Some commands support --skip-auth flag
-provisioning batch submit workflow.k --skip-auth
-```plaintext
-
-### Check Mode (Always Bypasses Auth)
-
-```bash
-# Check mode is always allowed without auth
+
+
+# Some commands support --skip-auth flag
+provisioning batch submit workflow.ncl --skip-auth
+
+
+# Check mode is always allowed without auth
provisioning server create web-01 --check
provisioning taskserv create kubernetes --check
-```plaintext
-
-⚠️ **WARNING**: Auth bypass should ONLY be used in development/testing environments. Production systems should have `security.bypass.allow_skip_auth = false`.
-
----
-
-## Error Messages
-
-### Not Authenticated
-
-```plaintext
-❌ Authentication Required
+
+⚠️ WARNING : Auth bypass should ONLY be used in development/testing environments. Production systems should have security.bypass.allow_skip_auth = false.
+
+
+
+❌ Authentication Required
Operation: server create web-01
You must be logged in to perform this operation.
@@ -59331,16 +55896,11 @@ To login:
provisioning auth login <username>
Note: Your credentials will be securely stored in the system keyring.
-```plaintext
-
-**Solution**: Run `provisioning auth login <username>`
-
----
-
-### MFA Required
-
-```plaintext
-❌ MFA Verification Required
+
+Solution : Run provisioning auth login <username>
+
+
+❌ MFA Verification Required
Operation: server delete web-01
Reason: destructive operation (delete/destroy)
@@ -59351,33 +55911,22 @@ To verify MFA:
Don't have MFA set up?
Run: provisioning auth mfa enroll totp
-```plaintext
-
-**Solution**: Run `provisioning auth mfa verify --code 123456`
-
----
-
-### Token Expired
-
-```plaintext
-❌ Authentication Required
+
+Solution : Run provisioning auth mfa verify --code 123456
+
+
+❌ Authentication Required
Operation: server create web-02
You must be logged in to perform this operation.
Error: Token verification failed
-```plaintext
-
-**Solution**: Token expired, re-login with `provisioning auth login <username>`
-
----
-
-## Audit Logging
-
-All authenticated operations are logged to the audit log file with the following information:
-
-```json
-{
+
+Solution : The token expired; log in again with provisioning auth login <username>
+
+
+All authenticated operations are logged to the audit log file with the following information:
+{
"timestamp": "2025-10-09 14:32:15",
"user": "admin",
"operation": "server_create",
@@ -59389,12 +55938,9 @@ All authenticated operations are logged to the audit log file with the following
},
"mfa_verified": true
}
-```plaintext
-
-### Viewing Audit Logs
-
-```bash
-# View raw audit log
+
+
+# View raw audit log
cat provisioning/logs/audit.log
# Filter by user
@@ -59405,44 +55951,31 @@ cat provisioning/logs/audit.log | jq '. | select(.operation == "server_create")'
# Filter by date
cat provisioning/logs/audit.log | jq '. | select(.timestamp | startswith("2025-10-09"))'
-```plaintext
-
----
-
-## Integration with Control Center
-
-The authentication system integrates with the provisioning platform's control center REST API:
-
-- **POST /api/auth/login** - Login with credentials
-- **POST /api/auth/logout** - Revoke tokens
-- **POST /api/auth/verify** - Verify token validity
-- **GET /api/auth/sessions** - List active sessions
-- **POST /api/mfa/enroll** - Enroll MFA device
-- **POST /api/mfa/verify** - Verify MFA code
-
-### Starting Control Center
-
-```bash
-# Start control center (required for authentication)
+
+
+
+The authentication system integrates with the provisioning platform’s control center REST API:
+
+POST /api/auth/login - Login with credentials
+POST /api/auth/logout - Revoke tokens
+POST /api/auth/verify - Verify token validity
+GET /api/auth/sessions - List active sessions
+POST /api/mfa/enroll - Enroll MFA device
+POST /api/mfa/verify - Verify MFA code
+
+
+# Start control center (required for authentication)
cd provisioning/platform/control-center
cargo run --release
-```plaintext
-
-Or use the orchestrator which includes control center:
-
-```bash
-cd provisioning/platform/orchestrator
+
+Or use the orchestrator which includes control center:
+cd provisioning/platform/orchestrator
./scripts/start-orchestrator.nu --background
-```plaintext
-
----
-
-## Testing Authentication
-
-### Manual Testing
-
-```bash
-# 1. Start control center
+
+
+
+
+# 1. Start control center
cd provisioning/platform/control-center
cargo run --release &
@@ -59457,75 +55990,51 @@ provisioning auth logout
# 5. Try creating server (should fail - not authenticated)
provisioning server create test-server --check
-```plaintext
-
-### Automated Testing
-
-```bash
-# Run authentication tests
+
+
+# Run authentication tests
nu provisioning/core/nulib/lib_provisioning/plugins/auth_test.nu
-```plaintext
-
----
-
-## Troubleshooting
-
-### Plugin Not Available
-
-**Error**: `Authentication plugin not available`
-
-**Solution**:
-
-1. Check plugin is built: `ls provisioning/core/plugins/nushell-plugins/nu_plugin_auth/target/release/`
-2. Register plugin: `plugin add target/release/nu_plugin_auth`
-3. Use plugin: `plugin use auth`
-4. Verify: `which auth`
-
----
-
-### Control Center Not Running
-
-**Error**: `Cannot connect to control center`
-
-**Solution**:
-
-1. Start control center: `cd provisioning/platform/control-center && cargo run --release`
-2. Or use orchestrator: `cd provisioning/platform/orchestrator && ./scripts/start-orchestrator.nu --background`
-3. Check URL is correct in config: `provisioning config get platform.control_center.url`
-
----
-
-### MFA Not Working
-
-**Error**: `Invalid MFA code`
-
-**Solutions**:
-
-- Ensure time is synchronized (TOTP codes are time-based)
-- Code expires every 30 seconds, get fresh code
-- Verify you're using the correct authenticator app entry
-- Re-enroll if needed: `provisioning auth mfa enroll totp`
-
----
-
-### Keyring Access Issues
-
-**Error**: `Keyring storage unavailable`
-
-**macOS**: Grant Keychain access to Terminal/iTerm2 in System Preferences → Security & Privacy
-
-**Linux**: Ensure `gnome-keyring` or `kwallet` is running
-
-**Windows**: Check Windows Credential Manager is accessible
-
----
-
-## Architecture
-
-### Authentication Flow
-
-```plaintext
-┌─────────────┐
+
+
+
+
+Error : Authentication plugin not available
+Solution :
+
+Check plugin is built: ls provisioning/core/plugins/nushell-plugins/nu_plugin_auth/target/release/
+Register plugin: plugin add target/release/nu_plugin_auth
+Use plugin: plugin use auth
+Verify: which auth
+
+
+
+Error : Cannot connect to control center
+Solution :
+
+Start control center: cd provisioning/platform/control-center && cargo run --release
+Or use orchestrator: cd provisioning/platform/orchestrator && ./scripts/start-orchestrator.nu --background
+Check URL is correct in config: provisioning config get platform.control_center.url
+
+
+
+Error : Invalid MFA code
+Solutions :
+
+Ensure time is synchronized (TOTP codes are time-based)
+Codes expire every 30 seconds; get a fresh code
+Verify you’re using the correct authenticator app entry
+Re-enroll if needed: provisioning auth mfa enroll totp
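The time-sensitivity of these codes follows from how TOTP (RFC 6238) works: the code is an HMAC over the current 30-second time step, so a skewed clock lands in a different step and produces a different code. A self-contained sketch (the secret below is illustrative, not a real enrollment secret):

```python
import hashlib
import hmac
import struct

# TOTP per RFC 6238: HMAC-SHA1 over the 30-second time step counter,
# then dynamic truncation to a 6-digit code.

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    counter = unix_time // step                     # 30-second time step
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"illustrative-secret"
# Same 30-second window -> same code; the next window yields a new one.
print(totp(secret, 1_700_000_000), totp(secret, 1_700_000_009))
print(totp(secret, 1_700_000_010))
```

Because only the time-step counter feeds the HMAC, any clock drift larger than the step width shifts the counter and invalidates the code the user types.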
+
+
+
+Error : Keyring storage unavailable
+macOS : Grant Keychain access to Terminal/iTerm2 in System Preferences → Security & Privacy
+Linux : Ensure gnome-keyring or kwallet is running
+Windows : Check Windows Credential Manager is accessible
+
+
+
+┌─────────────┐
│ User Command│
└──────┬──────┘
│
@@ -59579,12 +56088,9 @@ nu provisioning/core/nulib/lib_provisioning/plugins/auth_test.nu
│ - Log to audit.log │
│ - Include user, timestamp, MFA │
└─────────────────────────────────┘
-```plaintext
-
-### File Structure
-
-```plaintext
-provisioning/
+
+
+provisioning/
├── config/
│ └── config.defaults.toml # Security configuration
├── core/nulib/
@@ -59606,154 +56112,111 @@ provisioning/
│ └── src/auth/ # JWT auth implementation
└── logs/
└── audit.log # Audit trail
-```plaintext
-
----
-
-## Related Documentation
-
-- **Security System Overview**: `docs/architecture/ADR-009-security-system-complete.md`
-- **JWT Authentication**: `docs/architecture/JWT_AUTH_IMPLEMENTATION.md`
-- **MFA Implementation**: `docs/architecture/MFA_IMPLEMENTATION_SUMMARY.md`
-- **Plugin README**: `provisioning/core/plugins/nushell-plugins/nu_plugin_auth/README.md`
-- **Control Center**: `provisioning/platform/control-center/README.md`
-
----
-
-## Summary of Changes
-
-| File | Changes | Lines Added |
-|------|---------|-------------|
-| `lib_provisioning/plugins/auth.nu` | Added security policy enforcement functions | +260 |
-| `config/config.defaults.toml` | Added security configuration section | +19 |
-| `servers/create.nu` | Added auth check for server creation | +25 |
-| `workflows/batch.nu` | Added auth check for batch workflow submission | +43 |
-| `main_provisioning/commands/infrastructure.nu` | Added auth checks for all infrastructure commands | +90 |
-| `lib_provisioning/providers/interface.nu` | Added authentication guidelines for providers | +65 |
-| **Total** | **6 files modified** | **~500 lines** |
-
----
-
-## Best Practices
-
-### For Users
-
-1. **Always login**: Keep your session active to avoid interruptions
-2. **Use keyring**: Save credentials with `--save` flag for persistence
-3. **Enable MFA**: Use MFA for production operations
-4. **Check mode first**: Always test with `--check` before actual operations
-5. **Monitor audit logs**: Review audit logs regularly for security
-
-### For Developers
-
-1. **Check auth early**: Verify authentication before expensive operations
-2. **Log operations**: Always log authenticated operations for audit
-3. **Clear error messages**: Provide helpful guidance for auth failures
-4. **Respect check mode**: Always skip auth in check/dry-run mode
-5. **Test both paths**: Test with and without authentication
-
-### For Operators
-
-1. **Production hardening**: Set `allow_skip_auth = false` in production
-2. **MFA enforcement**: Require MFA for all production environments
-3. **Monitor audit logs**: Set up log monitoring and alerts
-4. **Token rotation**: Configure short token timeouts (15min default)
-5. **Backup authentication**: Ensure multiple admins have MFA enrolled
-
----
-
-## License
-
-MIT License - See LICENSE file for details
-
----
-
-## Quick Reference
-
-**Version**: 1.0.0
-**Last Updated**: 2025-10-09
-
----
-
-### Quick Commands
-
-#### Login
-
-```bash
-provisioning auth login <username> # Interactive password
+
+
+
+
+Security System Overview : docs/architecture/adr-009-security-system-complete.md
+JWT Authentication : docs/architecture/JWT_AUTH_IMPLEMENTATION.md
+MFA Implementation : docs/architecture/MFA_IMPLEMENTATION_SUMMARY.md
+Plugin README : provisioning/core/plugins/nushell-plugins/nu_plugin_auth/README.md
+Control Center : provisioning/platform/control-center/README.md
+
+
+
+File                                          Changes                                            Lines Added
+lib_provisioning/plugins/auth.nu              Added security policy enforcement functions        +260
+config/config.defaults.toml                   Added security configuration section               +19
+servers/create.nu                             Added auth check for server creation               +25
+workflows/batch.nu                            Added auth check for batch workflow submission     +43
+main_provisioning/commands/infrastructure.nu  Added auth checks for all infrastructure commands  +90
+lib_provisioning/providers/interface.nu       Added authentication guidelines for providers      +65
+Total                                         6 files modified                                   ~500 lines
+
+
+
+
+
+
+1. **Always login**: Keep your session active to avoid interruptions
+2. **Use keyring**: Save credentials with `--save` flag for persistence
+3. **Enable MFA**: Use MFA for production operations
+4. **Check mode first**: Always test with `--check` before actual operations
+5. **Monitor audit logs**: Review audit logs regularly for security
+
+
+
+1. **Check auth early**: Verify authentication before expensive operations
+2. **Log operations**: Always log authenticated operations for audit
+3. **Clear error messages**: Provide helpful guidance for auth failures
+4. **Respect check mode**: Always skip auth in check/dry-run mode
+5. **Test both paths**: Test with and without authentication
+
+
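The "check auth early" and "respect check mode" guidelines above can be sketched as a small guard. This is a hypothetical illustration, not the real enforcement code (which lives in `lib_provisioning/plugins/auth.nu`); the function name and token variable are assumptions:

```shell
# Hypothetical auth guard: skip auth in check mode, fail early otherwise.
require_auth() {
  op="$1"; mode="${2:-}"
  if [ "$mode" = "--check" ]; then
    # Check mode never touches authentication
    echo "auth skipped (check mode): $op"
    return 0
  fi
  if [ -z "${PROVISIONING_AUTH_TOKEN:-}" ]; then
    # Fail before any expensive work, with actionable guidance
    echo "Authentication Required for: $op" >&2
    echo "To login: provisioning auth login <username>" >&2
    return 1
  fi
  echo "auth ok: $op"
}

require_auth "server create web-01" --check
```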
+
+1. **Production hardening**: Set `allow_skip_auth = false` in production
+2. **MFA enforcement**: Require MFA for all production environments
+3. **Monitor audit logs**: Set up log monitoring and alerts
+4. **Token rotation**: Configure short token timeouts (15 min default)
+5. **Backup authentication**: Ensure multiple admins have MFA enrolled
+
+
+
+MIT License - See LICENSE file for details
+
+
+Version : 1.0.0
+Last Updated : 2025-10-09
+
+
+
+provisioning auth login <username> # Interactive password
provisioning auth login <username> --save # Save to keyring
-```plaintext
-
-#### MFA
-
-```bash
-provisioning auth mfa enroll totp # Enroll TOTP
+
+
+provisioning auth mfa enroll totp # Enroll TOTP
provisioning auth mfa verify --code 123456 # Verify code
-```plaintext
-
-#### Status
-
-```bash
-provisioning auth status # Show auth status
+
+
+provisioning auth status # Show auth status
provisioning auth verify # Verify token
-```plaintext
-
-#### Logout
-
-```bash
-provisioning auth logout # Logout current session
+
+
+provisioning auth logout # Logout current session
provisioning auth logout --all # Logout all sessions
-```plaintext
-
----
-
-### Protected Operations
-
-| Operation | Auth | MFA (Prod) | MFA (Delete) | Check Mode |
-|-----------|------|------------|--------------|------------|
-| `server create` | ✅ | ✅ | ❌ | Skip |
-| `server delete` | ✅ | ✅ | ✅ | Skip |
-| `server list` | ❌ | ❌ | ❌ | - |
-| `taskserv create` | ✅ | ✅ | ❌ | Skip |
-| `taskserv delete` | ✅ | ✅ | ✅ | Skip |
-| `cluster create` | ✅ | ✅ | ❌ | Skip |
-| `cluster delete` | ✅ | ✅ | ✅ | Skip |
-| `batch submit` | ✅ | ✅ | ❌ | - |
-
----
-
-### Bypass Authentication (Dev/Test Only)
-
-#### Environment Variable
-
-```bash
-export PROVISIONING_SKIP_AUTH=true
+
+
+
+| Operation | Auth | MFA (Prod) | MFA (Delete) | Check Mode |
+|-----------|------|------------|--------------|------------|
+| `server create` | ✅ | ✅ | ❌ | Skip |
+| `server delete` | ✅ | ✅ | ✅ | Skip |
+| `server list` | ❌ | ❌ | ❌ | - |
+| `taskserv create` | ✅ | ✅ | ❌ | Skip |
+| `taskserv delete` | ✅ | ✅ | ✅ | Skip |
+| `cluster create` | ✅ | ✅ | ❌ | Skip |
+| `cluster delete` | ✅ | ✅ | ✅ | Skip |
+| `batch submit` | ✅ | ✅ | ❌ | - |
+
+
+
+
+
+export PROVISIONING_SKIP_AUTH=true
provisioning server create test
unset PROVISIONING_SKIP_AUTH
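The environment variable only takes effect when the config also permits it. A sketch of that gating logic, where `allow_skip_auth` is a stand-in for the `[security.bypass]` value rather than the real implementation:

```shell
# Sketch: env-var bypass is honored only if the config allows it.
allow_skip_auth=false   # stand-in for [security.bypass] allow_skip_auth
if [ "$allow_skip_auth" = "true" ] && [ "${PROVISIONING_SKIP_AUTH:-}" = "true" ]; then
  decision="auth bypassed"
else
  decision="auth required"
fi
echo "$decision"
```

With `allow_skip_auth = false` (the production default), setting `PROVISIONING_SKIP_AUTH=true` changes nothing.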
-```plaintext
-
-#### Check Mode (Always Allowed)
-
-```bash
-provisioning server create prod --check
+
+
+provisioning server create prod --check
provisioning taskserv delete k8s --check
-```plaintext
-
-#### Config Flag
-
-```toml
-[security.bypass]
+
+
+[security.bypass]
allow_skip_auth = true # Only in dev/test
-```plaintext
-
----
-
-### Configuration
-
-#### Security Settings
-
-```toml
-[security]
+
+
+
+
+[security]
require_auth = true
require_mfa_for_production = true
require_mfa_for_destructive = true
@@ -59767,58 +56230,38 @@ auth_enabled = true
[platform.control_center]
url = "http://localhost:3000"
-```plaintext
-
----
-
-### Error Messages
-
-#### Not Authenticated
-
-```plaintext
-❌ Authentication Required
+
+
+
+
+❌ Authentication Required
Operation: server create web-01
To login: provisioning auth login <username>
-```plaintext
-
-**Fix**: `provisioning auth login <username>`
-
-#### MFA Required
-
-```plaintext
-❌ MFA Verification Required
+
+Fix : provisioning auth login <username>
+
+❌ MFA Verification Required
Operation: server delete web-01
Reason: destructive operation
-```plaintext
-
-**Fix**: `provisioning auth mfa verify --code <code>`
-
-#### Token Expired
-
-```plaintext
-Error: Token verification failed
-```plaintext
-
-**Fix**: Re-login: `provisioning auth login <username>`
-
----
-
-### Troubleshooting
-
-| Error | Solution |
-|-------|----------|
-| Plugin not available | `plugin add target/release/nu_plugin_auth` |
-| Control center offline | Start: `cd provisioning/platform/control-center && cargo run` |
-| Invalid MFA code | Get fresh code (expires in 30s) |
-| Token expired | Re-login: `provisioning auth login <username>` |
-| Keyring access denied | Grant app access in system settings |
-
----
-
-### Audit Logs
-
-```bash
-# View audit log
+
+Fix : provisioning auth mfa verify --code <code>
+
+Error: Token verification failed
+
+Fix : Re-login: provisioning auth login <username>
+
+
+| Error | Solution |
+|-------|----------|
+| Plugin not available | `plugin add target/release/nu_plugin_auth` |
+| Control center offline | Start: `cd provisioning/platform/control-center && cargo run` |
+| Invalid MFA code | Get fresh code (expires in 30s) |
+| Token expired | Re-login: `provisioning auth login <username>` |
+| Keyring access denied | Grant app access in system settings |
+
+
+
+
+# View audit log
cat provisioning/logs/audit.log
# Filter by user
@@ -59826,84 +56269,56 @@ cat provisioning/logs/audit.log | jq '. | select(.user == "admin")'
# Filter by operation
cat provisioning/logs/audit.log | jq '. | select(.operation == "server_create")'
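The `jq` filters above assume one JSON object per line. A self-contained sketch of that format (the field names are illustrative, not taken from the real log schema), filtering with plain `grep` in case `jq` is unavailable:

```shell
# Write two illustrative audit entries, then filter by user.
log=$(mktemp)
cat > "$log" <<'EOF'
{"timestamp":"2025-10-09T10:00:00Z","user":"admin","operation":"server_create","result":"success"}
{"timestamp":"2025-10-09T10:05:00Z","user":"ci-bot","operation":"batch_submit","result":"success"}
EOF
matches=$(grep '"user":"admin"' "$log")
echo "$matches"
rm -f "$log"
```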
-```plaintext
-
----
-
-### CI/CD Integration
-
-#### Option 1: Skip Auth (Dev/Test Only)
-
-```bash
-export PROVISIONING_SKIP_AUTH=true
+
+
+
+
+export PROVISIONING_SKIP_AUTH=true
provisioning server create ci-server
-```plaintext
-
-#### Option 2: Check Mode
-
-```bash
-provisioning server create ci-server --check
-```plaintext
-
-#### Option 3: Service Account (Future)
-
-```bash
-export PROVISIONING_AUTH_TOKEN="<token>"
+
+
+provisioning server create ci-server --check
+
+
+export PROVISIONING_AUTH_TOKEN="<token>"
provisioning server create ci-server
-```plaintext
-
----
-
-### Performance
-
-| Operation | Auth Overhead |
-|-----------|---------------|
-| Server create | ~20ms |
-| Taskserv create | ~20ms |
-| Batch submit | ~20ms |
-| Check mode | 0ms (skipped) |
-
----
-
-### Related Docs
-
-- **Full Guide**: `docs/user/AUTHENTICATION_LAYER_GUIDE.md`
-- **Implementation**: `AUTHENTICATION_LAYER_IMPLEMENTATION_SUMMARY.md`
-- **Security ADR**: `docs/architecture/ADR-009-security-system-complete.md`
-
----
-
-**Quick Help**: `provisioning help auth` or `provisioning auth --help`
-
----
-
-**Last Updated**: 2025-10-09
-**Maintained By**: Security Team
-
----
-
-## Setup Guide
-
-### Complete Authentication Setup Guide
-
-Current Settings (from your config)
-
-```plaintext
-[security]
+
+
+
+| Operation | Auth Overhead |
+|-----------|---------------|
+| Server create | ~20 ms |
+| Taskserv create | ~20 ms |
+| Batch submit | ~20 ms |
+| Check mode | 0 ms (skipped) |
+
+
+
+
+
+- **Full Guide**: `docs/user/AUTHENTICATION_LAYER_GUIDE.md`
+- **Implementation**: `AUTHENTICATION_LAYER_IMPLEMENTATION_SUMMARY.md`
+- **Security ADR**: `docs/architecture/adr-009-security-system-complete.md`
+
+
+Quick Help : provisioning help auth or provisioning auth --help
+
+Last Updated : 2025-10-09
+Maintained By : Security Team
+
+
+
+Current Settings (from your config)
+[security]
require_auth = true # ✅ Auth is REQUIRED
allow_skip_auth = false # ❌ Cannot skip with env var
auth_timeout = 3600 # Token valid for 1 hour
[platform.control_center]
url = "http://localhost:3000" # Control Center endpoint
-```plaintext
-
-### STEP 1: Start Control Center
-
-The Control Center is the authentication backend:
-
-```bash
-# Check if it's already running
+
+
+The Control Center is the authentication backend:
+# Check if it's already running
curl http://localhost:3000/health
# If not running, start it
@@ -59913,20 +56328,13 @@ cargo run --release &
# Wait for it to start (may take 30-60 seconds)
sleep 30
curl http://localhost:3000/health
-```plaintext
-
-Expected Output:
-
-```json
-{"status": "healthy"}
-```plaintext
-
-### STEP 2: Find Default Credentials
-
-Check for default user setup:
-
-```bash
-# Look for initialization scripts
+
+Expected Output:
+{"status": "healthy"}
+
+
+Check for default user setup:
+# Look for initialization scripts
ls -la /Users/Akasha/project-provisioning/provisioning/platform/control-center/
# Check for README or setup instructions
@@ -59934,14 +56342,10 @@ cat /Users/Akasha/project-provisioning/provisioning/platform/control-center/READ
# Or check for default config
cat /Users/Akasha/project-provisioning/provisioning/platform/control-center/config.toml 2>/dev/null || echo "Config not found"
-```plaintext
-
-### STEP 3: Log In
-
-Once you have credentials (usually admin / password from setup):
-
-```bash
-# Interactive login - will prompt for password
+
+
+Once you have credentials (usually admin / password from setup):
+# Interactive login - will prompt for password
provisioning auth login
# Or with username
@@ -59949,12 +56353,9 @@ provisioning auth login admin
# Verify you're logged in
provisioning auth status
-```plaintext
-
-Expected Success Output:
-
-```plaintext
-✓ Login successful!
+
+Expected Success Output:
+✓ Login successful!
User: admin
Role: admin
@@ -59962,69 +56363,46 @@ Expires: 2025-10-22T14:30:00Z
MFA: false
Session active and ready
-```plaintext
-
-### STEP 4: Now Create Your Server
-
-Once authenticated:
-
-```bash
-# Try server creation again
+
+
+Once authenticated:
+# Try server creation again
provisioning server create sgoyol --check
# Or with full details
provisioning server create sgoyol --infra workspace_librecloud --check
-```plaintext
-
-### 🛠️ Alternative: Skip Auth for Development
-
-If you want to bypass authentication temporarily for testing:
-
-#### Option A: Edit config to allow skip
-
-```bash
-# You would need to parse and modify TOML - easier to do next option
-```plaintext
-
-#### Option B: Use environment variable (if allowed by config)
-
-```bash
-export PROVISIONING_SKIP_AUTH=true
+
+
+If you want to bypass authentication temporarily for testing:
+
+# Editing the TOML by hand works, but the options below are easier
+
+
+export PROVISIONING_SKIP_AUTH=true
provisioning server create sgoyol
unset PROVISIONING_SKIP_AUTH
-```plaintext
-
-#### Option C: Use check mode (always works, no auth needed)
-
-```bash
-provisioning server create sgoyol --check
-```plaintext
-
-#### Option D: Modify config.defaults.toml (permanent for dev)
-
-Edit: `provisioning/config/config.defaults.toml`
-
-Change line 193 to:
-
-```toml
-allow_skip_auth = true
-```plaintext
-
-### 🔍 Troubleshooting
-
-| Problem | Solution |
-|----------------------------|---------------------------------------------------------------------|
-| Control Center won't start | Check port 3000 not in use: `lsof -i :3000` |
-| "No token found" error | Login with: `provisioning auth login` |
-| Login fails | Verify Control Center is running: `curl http://localhost:3000/health` |
-| Token expired | Re-login: `provisioning auth login` |
-| Plugin not available | Using HTTP fallback - this is OK, works without plugin |
-
+
+provisioning server create sgoyol --check
+
+
+Edit: provisioning/config/config.defaults.toml
+Change line 193 to:
+allow_skip_auth = true
+
+
+| Problem | Solution |
+|---------|----------|
+| Control Center won't start | Check port 3000 not in use: `lsof -i :3000` |
+| "No token found" error | Login with: `provisioning auth login` |
+| Login fails | Verify Control Center is running: `curl http://localhost:3000/health` |
+| Token expired | Re-login: `provisioning auth login` |
+| Plugin not available | Using HTTP fallback - this is OK, works without plugin |
+
+
Version : 1.0.0
Last Updated : 2025-10-08
Status : Production Ready
-
+
The Provisioning Platform includes a comprehensive configuration encryption system that provides:
Transparent Encryption/Decryption : Configs are automatically decrypted on load
@@ -60033,7 +56411,7 @@ allow_skip_auth = true
SOPS Integration : Industry-standard encryption with SOPS
Sensitive Data Detection : Automatic scanning for unencrypted sensitive data
-
+
Prerequisites
Quick Start
@@ -60045,7 +56423,7 @@ allow_skip_auth = true
Troubleshooting
-
+
@@ -60074,7 +56452,7 @@ apt install age
-
+
# Check SOPS
sops --version
@@ -60083,59 +56461,39 @@ age --version
# Check AWS CLI (optional)
aws --version
-```plaintext
-
----
-
-## Quick Start
-
-### 1. Initialize Encryption
-
-Generate Age keys and create SOPS configuration:
-
-```bash
-provisioning config init-encryption --kms age
-```plaintext
-
-This will:
-
-- Generate Age key pair in `~/.config/sops/age/keys.txt`
-- Display your public key (recipient)
-- Create `.sops.yaml` in your project
-
-### 2. Set Environment Variables
-
-Add to your shell profile (`~/.zshrc` or `~/.bashrc`):
-
-```bash
-# Age encryption
+
+
+
+
+Generate Age keys and create SOPS configuration:
+provisioning config init-encryption --kms age
+
+This will:
+
+Generate Age key pair in ~/.config/sops/age/keys.txt
+Display your public key (recipient)
+Create .sops.yaml in your project
+
+
+Add to your shell profile (~/.zshrc or ~/.bashrc):
+# Age encryption
export SOPS_AGE_RECIPIENTS="age1ql3z7hjy54pw3hyww5ayyfg7zqgvc7w3j2elw8zmrj2kg5sfn9aqmcac8p"
export PROVISIONING_KAGE="$HOME/.config/sops/age/keys.txt"
-```plaintext
-
-Replace the recipient with your actual public key.
-
-### 3. Validate Setup
-
-```bash
-provisioning config validate-encryption
-```plaintext
-
-Expected output:
-
-```plaintext
-✅ Encryption configuration is valid
+
+Replace the recipient with your actual public key.
+
+provisioning config validate-encryption
+
+Expected output:
+✅ Encryption configuration is valid
SOPS installed: true
Age backend: true
KMS enabled: false
Errors: 0
Warnings: 0
-```plaintext
-
-### 4. Encrypt Your First Config
-
-```bash
-# Create a config with sensitive data
+
+
+# Create a config with sensitive data
cat > workspace/config/secure.yaml <<EOF
database:
host: localhost
@@ -60148,52 +56506,35 @@ provisioning config encrypt workspace/config/secure.yaml --in-place
# Verify it's encrypted
provisioning config is-encrypted workspace/config/secure.yaml
-```plaintext
-
----
-
-## Configuration Encryption
-
-### File Naming Conventions
-
-Encrypted files should follow these patterns:
-
-- `*.enc.yaml` - Encrypted YAML files
-- `*.enc.yml` - Encrypted YAML files (alternative)
-- `*.enc.toml` - Encrypted TOML files
-- `secure.yaml` - Files in workspace/config/
-
-The `.sops.yaml` configuration automatically applies encryption rules based on file paths.
-
-### Encrypt a Configuration File
-
-#### Basic Encryption
-
-```bash
-# Encrypt and create new file
+
+
+
+
+Encrypted files should follow these patterns:
+
+*.enc.yaml - Encrypted YAML files
+*.enc.yml - Encrypted YAML files (alternative)
+*.enc.toml - Encrypted TOML files
+secure.yaml - Files in workspace/config/
+
+The .sops.yaml configuration automatically applies encryption rules based on file paths.
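A minimal `.sops.yaml` matching these naming conventions might look like the following. This is an illustrative sketch; `provisioning config init-encryption` generates the real file, and the Age recipient shown is the same placeholder key used elsewhere in this guide:

```yaml
# .sops.yaml - illustrative creation rules (recipient is a placeholder)
creation_rules:
  - path_regex: .*\.enc\.(yaml|yml|toml)$
    age: age1ql3z7hjy54pw3hyww5ayyfg7zqgvc7w3j2elw8zmrj2kg5sfn9aqmcac8p
  - path_regex: workspace/.*/config/secure\.yaml$
    age: age1ql3z7hjy54pw3hyww5ayyfg7zqgvc7w3j2elw8zmrj2kg5sfn9aqmcac8p
```

SOPS walks these rules top to bottom and applies the first `path_regex` that matches the file being encrypted.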
+
+
+# Encrypt and create new file
provisioning config encrypt secrets.yaml
# Output: secrets.yaml.enc
-```plaintext
-
-#### In-Place Encryption
-
-```bash
-# Encrypt and replace original
+
+
+# Encrypt and replace original
provisioning config encrypt secrets.yaml --in-place
-```plaintext
-
-#### Specify Output Path
-
-```bash
-# Encrypt to specific location
+
+
+# Encrypt to specific location
provisioning config encrypt secrets.yaml --output workspace/config/secure.enc.yaml
-```plaintext
-
-#### Choose KMS Backend
-
-```bash
-# Use Age (default)
+
+
+# Use Age (default)
provisioning config encrypt secrets.yaml --kms age
# Use AWS KMS
@@ -60201,12 +56542,9 @@ provisioning config encrypt secrets.yaml --kms aws-kms
# Use Vault
provisioning config encrypt secrets.yaml --kms vault
-```plaintext
-
-### Decrypt a Configuration File
-
-```bash
-# Decrypt to new file
+
+
+# Decrypt to new file
provisioning config decrypt secrets.enc.yaml
# Decrypt in-place
@@ -60214,84 +56552,67 @@ provisioning config decrypt secrets.enc.yaml --in-place
# Decrypt to specific location
provisioning config decrypt secrets.enc.yaml --output plaintext.yaml
-```plaintext
-
-### Edit Encrypted Files
-
-The system provides a secure editing workflow:
-
-```bash
-# Edit encrypted file (auto decrypt -> edit -> re-encrypt)
+
+
+The system provides a secure editing workflow:
+# Edit encrypted file (auto decrypt -> edit -> re-encrypt)
provisioning config edit-secure workspace/config/secure.enc.yaml
-```plaintext
-
-This will:
-
-1. Decrypt the file temporarily
-2. Open in your `$EDITOR` (vim/nano/etc)
-3. Re-encrypt when you save and close
-4. Remove temporary decrypted file
-
-### Check Encryption Status
-
-```bash
-# Check if file is encrypted
+
+This will:
+
+Decrypt the file temporarily
+Open in your $EDITOR (vim/nano/etc)
+Re-encrypt when you save and close
+Remove temporary decrypted file
+
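The four steps above can be sketched in shell. The `sops` and editor invocations are stubbed out with echoes so the skeleton itself is runnable; the key point is the `trap`, which removes the plaintext copy even if the editor step fails:

```shell
# Skeleton of the edit-secure workflow: decrypt -> edit -> re-encrypt,
# with the temporary plaintext removed on every exit path.
tmp=$(mktemp /tmp/secure.XXXXXX)
trap 'rm -f "$tmp"' EXIT
echo "password: hunter2" > "$tmp"         # stand-in for: sops --decrypt
true "$tmp"                               # stand-in for: "$EDITOR" "$tmp"
msg="re-encrypted $(basename "$tmp")"     # stand-in for: sops --encrypt
echo "$msg"
```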
+
+# Check if file is encrypted
provisioning config is-encrypted workspace/config/secure.yaml
# Get detailed encryption info
provisioning config encryption-info workspace/config/secure.yaml
-```plaintext
-
----
-
-## KMS Backends
-
-### Age (Recommended for Development)
-
-**Pros**:
-
-- Simple file-based keys
-- No external dependencies
-- Fast and secure
-- Works offline
-
-**Setup**:
-
-```bash
-# Initialize
+
+
+
+
+Pros :
+
+Simple file-based keys
+No external dependencies
+Fast and secure
+Works offline
+
+Setup :
+# Initialize
provisioning config init-encryption --kms age
# Set environment variables
export SOPS_AGE_RECIPIENTS="age1..." # Your public key
export PROVISIONING_KAGE="$HOME/.config/sops/age/keys.txt"
-```plaintext
-
-**Encrypt/Decrypt**:
-
-```bash
-provisioning config encrypt secrets.yaml --kms age
-provisioning config decrypt secrets.enc.yaml
-```plaintext
-
-### AWS KMS (Production)
-
-**Pros**:
-
-- Centralized key management
-- Audit logging
-- IAM integration
-- Key rotation
-
-**Setup**:
-
-1. Create KMS key in AWS Console
-2. Configure AWS credentials:
-
- ```bash
- aws configure
+Encrypt/Decrypt :
+provisioning config encrypt secrets.yaml --kms age
+provisioning config decrypt secrets.enc.yaml
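Under the hood this is a standard Age round-trip. A direct sketch using the `age` tooling itself, guarded so it degrades gracefully when `age` is not installed:

```shell
# Generate a throwaway key, encrypt a line, decrypt it back.
if command -v age-keygen >/dev/null 2>&1 && command -v age >/dev/null 2>&1; then
  dir=$(mktemp -d)
  key="$dir/keys.txt"
  age-keygen -o "$key" 2>/dev/null
  # The key file embeds the public key as a "# public key: age1..." comment
  recipient=$(grep -o 'age1[0-9a-z]*' "$key" | head -n 1)
  result=$(printf 'password: hunter2\n' | age -r "$recipient" | age -d -i "$key")
  echo "$result"
  rm -rf "$dir"
else
  result="age not installed"
  echo "$result"
fi
```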
+
+
+Pros :
+
+Centralized key management
+Audit logging
+IAM integration
+Key rotation
+
+Setup :
+Create KMS key in AWS Console
+
+
+Configure AWS credentials:
+aws configure
+
+
+
Update .sops.yaml:
creation_rules:
- path_regex: .*\.enc\.yaml$
@@ -60302,27 +56623,24 @@ provisioning config decrypt secrets.enc.yaml
Encrypt/Decrypt :
provisioning config encrypt secrets.yaml --kms aws-kms
provisioning config decrypt secrets.enc.yaml
-```plaintext
-
-### HashiCorp Vault (Enterprise)
-
-**Pros**:
-
-- Dynamic secrets
-- Centralized secret management
-- Audit logging
-- Policy-based access
-
-**Setup**:
-
-1. Configure Vault address and token:
-
- ```bash
- export VAULT_ADDR="https://vault.example.com:8200"
- export VAULT_TOKEN="s.xxxxxxxxxxxxxx"
+
+Pros :
+
+Dynamic secrets
+Centralized secret management
+Audit logging
+Policy-based access
+
+Setup :
+Configure Vault address and token:
+export VAULT_ADDR="https://vault.example.com:8200"
+export VAULT_TOKEN="s.xxxxxxxxxxxxxx"
+
+
+
Update configuration:
# workspace/config/provisioning.yaml
kms:
@@ -60337,60 +56655,55 @@ kms:
Encrypt/Decrypt :
provisioning config encrypt secrets.yaml --kms vault
provisioning config decrypt secrets.enc.yaml
-```plaintext
-
-### Cosmian KMS (Confidential Computing)
-
-**Pros**:
-
-- Confidential computing support
-- Zero-knowledge architecture
-- Post-quantum ready
-- Cloud-agnostic
-
-**Setup**:
-
-1. Deploy Cosmian KMS server
-2. Update configuration:
-
- ```toml
- kms:
- enabled: true
- mode: "remote"
- remote:
- endpoint: "https://kms.example.com:9998"
- auth_method: "certificate"
- client_cert: "/path/to/client.crt"
- client_key: "/path/to/client.key"
+
+Pros :
+
+Confidential computing support
+Zero-knowledge architecture
+Post-quantum ready
+Cloud-agnostic
+
+Setup :
+
+
+Deploy Cosmian KMS server
+
+
+Update configuration:
+kms:
+ enabled: true
+ mode: "remote"
+ remote:
+ endpoint: "https://kms.example.com:9998"
+ auth_method: "certificate"
+ client_cert: "/path/to/client.crt"
+ client_key: "/path/to/client.key"
+
+
+
Encrypt/Decrypt :
provisioning config encrypt secrets.yaml --kms cosmian
provisioning config decrypt secrets.enc.yaml
-```plaintext
-
----
-
-## CLI Commands
-
-### Configuration Encryption Commands
-
-| Command | Description |
-|---------|-------------|
-| `config encrypt <file>` | Encrypt configuration file |
-| `config decrypt <file>` | Decrypt configuration file |
-| `config edit-secure <file>` | Edit encrypted file securely |
-| `config rotate-keys <file> <key>` | Rotate encryption keys |
-| `config is-encrypted <file>` | Check if file is encrypted |
-| `config encryption-info <file>` | Show encryption details |
-| `config validate-encryption` | Validate encryption setup |
-| `config scan-sensitive <dir>` | Find unencrypted sensitive configs |
-| `config encrypt-all <dir>` | Encrypt all sensitive configs |
-| `config init-encryption` | Initialize encryption (generate keys) |
-
-### Examples
-
-```bash
-# Encrypt workspace config
+
+
+
+
+| Command | Description |
+|---------|-------------|
+| `config encrypt <file>` | Encrypt configuration file |
+| `config decrypt <file>` | Decrypt configuration file |
+| `config edit-secure <file>` | Edit encrypted file securely |
+| `config rotate-keys <file> <key>` | Rotate encryption keys |
+| `config is-encrypted <file>` | Check if file is encrypted |
+| `config encryption-info <file>` | Show encryption details |
+| `config validate-encryption` | Validate encryption setup |
+| `config scan-sensitive <dir>` | Find unencrypted sensitive configs |
+| `config encrypt-all <dir>` | Encrypt all sensitive configs |
+| `config init-encryption` | Initialize encryption (generate keys) |
+
+
+
+# Encrypt workspace config
provisioning config encrypt workspace/config/secure.yaml --in-place
# Edit encrypted file
@@ -60410,111 +56723,87 @@ provisioning config encryption-info workspace/config/secure.yaml
# Validate setup
provisioning config validate-encryption
-```plaintext
-
----
-
-## Integration with Config Loader
-
-### Automatic Decryption
-
-The config loader automatically detects and decrypts encrypted files:
-
-```nushell
-# Load encrypted config (automatically decrypted in memory)
+
+
+
+
+The config loader automatically detects and decrypts encrypted files:
+# Load encrypted config (automatically decrypted in memory)
use lib_provisioning/config/loader.nu
let config = (load-provisioning-config --debug)
-```plaintext
-
-**Key Features**:
-
-- **Transparent**: No code changes needed
-- **Memory-Only**: Decrypted content never written to disk
-- **Fallback**: If decryption fails, attempts to load as plain file
-- **Debug Support**: Shows decryption status with `--debug` flag
-
-### Manual Loading
-
-```nushell
-use lib_provisioning/config/encryption.nu
+
+Key Features :
+
+Transparent : No code changes needed
+Memory-Only : Decrypted content never written to disk
+Fallback : If decryption fails, attempts to load as plain file
+Debug Support : Shows decryption status with --debug flag
+
+
+use lib_provisioning/config/encryption.nu
# Load encrypted config
let secure_config = (load-encrypted-config "workspace/config/secure.enc.yaml")
# Memory-only decryption (no file created)
let decrypted_content = (decrypt-config-memory "workspace/config/secure.enc.yaml")
-```plaintext
-
-### Configuration Hierarchy with Encryption
-
-The system supports encrypted files at any level:
-
-```plaintext
-1. workspace/{name}/config/provisioning.yaml ← Can be encrypted
+
+
+The system supports encrypted files at any level:
+1. workspace/{name}/config/provisioning.yaml ← Can be encrypted
2. workspace/{name}/config/providers/*.toml ← Can be encrypted
3. workspace/{name}/config/platform/*.toml ← Can be encrypted
4. ~/.../provisioning/ws_{name}.yaml ← Can be encrypted
5. Environment variables (PROVISIONING_*) ← Plain text
-```plaintext
-
----
-
-## Best Practices
-
-### 1. Encrypt All Sensitive Data
-
-**Always encrypt configs containing**:
-
-- Passwords
-- API keys
-- Secret keys
-- Private keys
-- Tokens
-- Credentials
-
-**Scan for unencrypted sensitive data**:
-
-```bash
-provisioning config scan-sensitive workspace --recursive
-```plaintext
-
-### 2. Use Appropriate KMS Backend
-
-| Environment | Recommended Backend |
-|-------------|---------------------|
-| Development | Age (file-based) |
-| Staging | AWS KMS or Vault |
-| Production | AWS KMS or Vault |
-| CI/CD | AWS KMS with IAM roles |
-
-### 3. Key Management
-
-**Age Keys**:
-
-- Store private keys securely: `~/.config/sops/age/keys.txt`
-- Set file permissions: `chmod 600 ~/.config/sops/age/keys.txt`
-- Backup keys securely (encrypted backup)
-- Never commit private keys to git
-
-**AWS KMS**:
-
-- Use separate keys per environment
-- Enable key rotation
-- Use IAM policies for access control
-- Monitor usage with CloudTrail
-
-**Vault**:
-
-- Use transit engine for encryption
-- Enable audit logging
-- Implement least-privilege policies
-- Regular policy reviews
-
-### 4. File Organization
-
-```plaintext
-workspace/
+
+
+
+
+Always encrypt configs containing :
+
+Passwords
+API keys
+Secret keys
+Private keys
+Tokens
+Credentials
+
+Scan for unencrypted sensitive data :
+provisioning config scan-sensitive workspace --recursive
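A naive stand-in for what such a scan does - the real command is more thorough, but the core idea is a recursive search for secret-looking keys:

```shell
# Create a config with a secret-looking key, then scan for it.
dir=$(mktemp -d)
cat > "$dir/app.yaml" <<'EOF'
database:
  host: localhost
  password: hunter2
EOF
if grep -rliE 'password|api_key|secret|token' "$dir" >/dev/null; then
  scan="sensitive data found"
else
  scan="clean"
fi
echo "$scan"
rm -rf "$dir"
```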
+
+
+| Environment | Recommended Backend |
+|-------------|---------------------|
+| Development | Age (file-based) |
+| Staging | AWS KMS or Vault |
+| Production | AWS KMS or Vault |
+| CI/CD | AWS KMS with IAM roles |
+
+
+
+Age Keys :
+
+Store private keys securely: ~/.config/sops/age/keys.txt
+Set file permissions: chmod 600 ~/.config/sops/age/keys.txt
+Backup keys securely (encrypted backup)
+Never commit private keys to git
+
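The `chmod 600` step can be verified in place. This sketch uses a temporary directory as a stand-in for the real key path, and handles the differing `stat` flags on GNU and BSD systems:

```shell
# Create a stand-in key file and confirm owner-only permissions.
dir=$(mktemp -d)
keyfile="$dir/keys.txt"
touch "$keyfile"
chmod 600 "$keyfile"
# GNU stat uses -c, BSD/macOS stat uses -f
perms=$(stat -c '%a' "$keyfile" 2>/dev/null || stat -f '%Lp' "$keyfile")
echo "$perms"
rm -rf "$dir"
```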
+AWS KMS :
+
+Use separate keys per environment
+Enable key rotation
+Use IAM policies for access control
+Monitor usage with CloudTrail
+
+Vault :
+
+Use transit engine for encryption
+Enable audit logging
+Implement least-privilege policies
+Regular policy reviews
+
+
+workspace/
└── config/
├── provisioning.yaml # Plain (no secrets)
├── secure.yaml # Encrypted (SOPS auto-detects)
@@ -60523,14 +56812,10 @@ workspace/
│ └── aws-credentials.enc.toml # Encrypted
└── platform/
└── database.enc.yaml # Encrypted
-```plaintext
-
-### 5. Git Integration
-
-**Add to `.gitignore`**:
-
-```gitignore
-# Unencrypted sensitive files
+
+
+Add to .gitignore :
+# Unencrypted sensitive files
**/secrets.yaml
**/credentials.yaml
**/*.dec.yaml
@@ -60539,131 +56824,91 @@ workspace/
# Temporary decrypted files
*.tmp.yaml
*.tmp.toml
-```plaintext
-
-**Commit encrypted files**:
-
-```bash
-# Encrypted files are safe to commit
+
+Commit encrypted files :
+# Encrypted files are safe to commit
git add workspace/config/secure.enc.yaml
git commit -m "Add encrypted configuration"
-```plaintext
-
-### 6. Rotation Strategy
-
-**Regular Key Rotation**:
-
-```bash
-# Generate new Age key
+
+
+Regular Key Rotation :
+# Generate new Age key
age-keygen -o ~/.config/sops/age/keys-new.txt
# Update .sops.yaml with new recipient
# Rotate keys for file
provisioning config rotate-keys workspace/config/secure.yaml <new-key-id>
-```plaintext
-
-**Frequency**:
-
-- Development: Annually
-- Production: Quarterly
-- After team member departure: Immediately
-
-### 7. Audit and Monitoring
-
-**Track encryption status**:
-
-```bash
-# Regular scans
+
+Frequency :
+
+Development: Annually
+Production: Quarterly
+After team member departure: Immediately
+
+
+Track encryption status :
+# Regular scans
provisioning config scan-sensitive workspace --recursive
# Validate encryption setup
provisioning config validate-encryption
-```plaintext
-
-**Monitor access** (with Vault/AWS KMS):
-
-- Enable audit logging
-- Review access patterns
-- Alert on anomalies
-
----
-
-## Troubleshooting
-
-### SOPS Not Found
-
-**Error**:
-
-```plaintext
-SOPS binary not found
-```plaintext
-
-**Solution**:
-
-```bash
-# Install SOPS
+
+Monitor access (with Vault/AWS KMS):
+
+Enable audit logging
+Review access patterns
+Alert on anomalies
+
+
+
+
+Error :
+SOPS binary not found
+
+Solution :
+# Install SOPS
brew install sops
# Verify
sops --version
-```plaintext
-
-### Age Key Not Found
-
-**Error**:
-
-```plaintext
-Age key file not found: ~/.config/sops/age/keys.txt
-```plaintext
-
-**Solution**:
-
-```bash
-# Generate new key
+
+
+Error :
+Age key file not found: ~/.config/sops/age/keys.txt
+
+Solution :
+# Generate new key
mkdir -p ~/.config/sops/age
age-keygen -o ~/.config/sops/age/keys.txt
# Set environment variable
export PROVISIONING_KAGE="$HOME/.config/sops/age/keys.txt"
-```plaintext
-
-### SOPS_AGE_RECIPIENTS Not Set
-
-**Error**:
-
-```plaintext
-no AGE_RECIPIENTS for file.yaml
-```plaintext
-
-**Solution**:
-
-```bash
-# Extract public key from private key
+
+
+Error :
+no AGE_RECIPIENTS for file.yaml
+
+Solution :
+# Extract public key from private key
grep "public key:" ~/.config/sops/age/keys.txt
# Set environment variable
export SOPS_AGE_RECIPIENTS="age1ql3z7hjy54pw3hyww5ayyfg7zqgvc7w3j2elw8zmrj2kg5sfn9aqmcac8p"
-```plaintext
-
-### Decryption Failed
-
-**Error**:
-
-```plaintext
-Failed to decrypt configuration file
-```plaintext
-
-**Solutions**:
-
-1. **Wrong key**:
-
- ```bash
- # Verify you have the correct private key
- provisioning config validate-encryption
+
+Error :
+Failed to decrypt configuration file
+
+Solutions :
+Wrong key :
+# Verify you have the correct private key
+provisioning config validate-encryption
+
+
+
File corrupted :
# Check file integrity
sops --decrypt workspace/config/secure.yaml
@@ -60679,30 +56924,20 @@ head -20 workspace/config/secure.yaml
Error :
AccessDeniedException: User is not authorized to perform: kms:Decrypt
-```plaintext
-
-**Solution**:
-
-```bash
-# Check AWS credentials
+
+Solution :
+# Check AWS credentials
aws sts get-caller-identity
# Verify KMS key policy allows your IAM user/role
aws kms describe-key --key-id <key-arn>
-```plaintext
-
-### Vault Connection Failed
-
-**Error**:
-
-```plaintext
-Vault encryption failed: connection refused
-```plaintext
-
-**Solution**:
-
-```bash
-# Verify Vault address
+
+
+Error :
+Vault encryption failed: connection refused
+
+Solution :
+# Verify Vault address
echo $VAULT_ADDR
# Check connectivity
@@ -60710,65 +56945,54 @@ curl -k $VAULT_ADDR/v1/sys/health
# Verify token
vault token lookup
-```plaintext
-
----
-
-## Security Considerations
-
-### Threat Model
-
-**Protected Against**:
-
-- ✅ Plaintext secrets in git
-- ✅ Accidental secret exposure
-- ✅ Unauthorized file access
-- ✅ Key compromise (with rotation)
-
-**Not Protected Against**:
-
-- ❌ Memory dumps during decryption
-- ❌ Root/admin access to running process
-- ❌ Compromised Age/KMS keys
-- ❌ Social engineering
-
-### Security Best Practices
-
-1. **Principle of Least Privilege**: Only grant decryption access to those who need it
-2. **Key Separation**: Use different keys for different environments
-3. **Regular Audits**: Review who has access to keys
-4. **Secure Key Storage**: Never store private keys in git
-5. **Rotation**: Regularly rotate encryption keys
-6. **Monitoring**: Monitor decryption operations (with AWS KMS/Vault)
-
----
-
-## Additional Resources
-
-- **SOPS Documentation**: <https://github.com/mozilla/sops>
-- **Age Encryption**: <https://age-encryption.org/>
-- **AWS KMS**: <https://aws.amazon.com/kms/>
-- **HashiCorp Vault**: <https://www.vaultproject.io/>
-- **Cosmian KMS**: <https://www.cosmian.com/>
-
----
-
-## Support
-
-For issues or questions:
-
-- Check troubleshooting section above
-- Run: `provisioning config validate-encryption`
-- Review logs with `--debug` flag
-
----
-
-## Quick Reference
-
-### Setup (One-time)
-
-```bash
-# 1. Initialize encryption
+
+
+
+
+Protected Against :
+
+✅ Plaintext secrets in git
+✅ Accidental secret exposure
+✅ Unauthorized file access
+✅ Key compromise (with rotation)
+
+Not Protected Against :
+
+❌ Memory dumps during decryption
+❌ Root/admin access to running process
+❌ Compromised Age/KMS keys
+❌ Social engineering
+
+
+
+Principle of Least Privilege : Only grant decryption access to those who need it
+Key Separation : Use different keys for different environments
+Regular Audits : Review who has access to keys
+Secure Key Storage : Never store private keys in git
+Rotation : Regularly rotate encryption keys
+Monitoring : Monitor decryption operations (with AWS KMS/Vault)
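As a sketch of the key-separation practice above, one Age key pair per environment can be generated up front. The directory layout is illustrative, and the block tolerates `age-keygen` being absent:

```bash
# Hypothetical layout: one Age key pair per environment
for env in dev staging prod; do
  dir="$HOME/.config/sops/age/$env"
  mkdir -p "$dir"
  if command -v age-keygen >/dev/null 2>&1; then
    # Generate a key pair for this environment (no-op if the file exists)
    age-keygen -o "$dir/keys.txt" 2>/dev/null
    chmod 600 "$dir/keys.txt" 2>/dev/null
  fi
done
ls "$HOME/.config/sops/age"
```

Point each environment's `.sops.yaml` rules at the matching recipient so a compromised dev key never decrypts prod secrets.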
+
+
+
+
+
+
+For issues or questions:
+
+Check troubleshooting section above
+Run: provisioning config validate-encryption
+Review logs with --debug flag
+
+
+
+
+# 1. Initialize encryption
provisioning config init-encryption --kms age
# 2. Set environment variables (add to ~/.zshrc or ~/.bashrc)
@@ -60777,35 +57001,30 @@ export PROVISIONING_KAGE="$HOME/.config/sops/age/keys.txt"
# 3. Validate setup
provisioning config validate-encryption
-```plaintext
-
-### Common Commands
-
-| Task | Command |
-|------|---------|
-| **Encrypt file** | `provisioning config encrypt secrets.yaml --in-place` |
-| **Decrypt file** | `provisioning config decrypt secrets.enc.yaml` |
-| **Edit encrypted** | `provisioning config edit-secure secrets.enc.yaml` |
-| **Check if encrypted** | `provisioning config is-encrypted secrets.yaml` |
-| **Scan for unencrypted** | `provisioning config scan-sensitive workspace --recursive` |
-| **Encrypt all sensitive** | `provisioning config encrypt-all workspace/config --kms age` |
-| **Validate setup** | `provisioning config validate-encryption` |
-| **Show encryption info** | `provisioning config encryption-info secrets.yaml` |
-
-### File Naming Conventions
-
-Automatically encrypted by SOPS:
-
-- `workspace/*/config/secure.yaml` ← Auto-encrypted
-- `*.enc.yaml` ← Auto-encrypted
-- `*.enc.yml` ← Auto-encrypted
-- `*.enc.toml` ← Auto-encrypted
-- `workspace/*/config/providers/*credentials*.toml` ← Auto-encrypted
-
-### Quick Workflow
-
-```bash
-# Create config with secrets
+
+
+Task | Command
+Encrypt file | provisioning config encrypt secrets.yaml --in-place
+Decrypt file | provisioning config decrypt secrets.enc.yaml
+Edit encrypted | provisioning config edit-secure secrets.enc.yaml
+Check if encrypted | provisioning config is-encrypted secrets.yaml
+Scan for unencrypted | provisioning config scan-sensitive workspace --recursive
+Encrypt all sensitive | provisioning config encrypt-all workspace/config --kms age
+Validate setup | provisioning config validate-encryption
+Show encryption info | provisioning config encryption-info secrets.yaml
+
+
+
+Automatically encrypted by SOPS:
+
+workspace/*/config/secure.yaml ← Auto-encrypted
+*.enc.yaml ← Auto-encrypted
+*.enc.yml ← Auto-encrypted
+*.enc.toml ← Auto-encrypted
+workspace/*/config/providers/*credentials*.toml ← Auto-encrypted
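These conventions are typically driven by `creation_rules` in `.sops.yaml`; a sketch mirroring the patterns above might look like the following (the Age recipient is a placeholder):

```yaml
# .sops.yaml: creation_rules mirroring the patterns above
creation_rules:
  - path_regex: workspace/[^/]+/config/secure\.yaml$
    age: age1replace-with-your-recipient
  - path_regex: .*\.enc\.(yaml|yml|toml)$
    age: age1replace-with-your-recipient
  - path_regex: workspace/[^/]+/config/providers/.*credentials.*\.toml$
    age: age1replace-with-your-recipient
```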
+
+
+# Create config with secrets
cat > workspace/config/secure.yaml <<EOF
database:
password: supersecret
@@ -60823,41 +57042,36 @@ provisioning config edit-secure workspace/config/secure.yaml
# Configs are auto-decrypted when loaded
provisioning env # Automatically decrypts secure.yaml
-```plaintext
-
-### KMS Backends
-
-| Backend | Use Case | Setup Command |
-|---------|----------|---------------|
-| **Age** | Development, simple setup | `provisioning config init-encryption --kms age` |
-| **AWS KMS** | Production, AWS environments | Configure in `.sops.yaml` |
-| **Vault** | Enterprise, dynamic secrets | Set `VAULT_ADDR` and `VAULT_TOKEN` |
-| **Cosmian** | Confidential computing | Configure in `config.toml` |
-
-### Security Checklist
-
-- ✅ Encrypt all files with passwords, API keys, secrets
-- ✅ Never commit unencrypted secrets to git
-- ✅ Set file permissions: `chmod 600 ~/.config/sops/age/keys.txt`
-- ✅ Add plaintext files to `.gitignore`: `*.dec.yaml`, `secrets.yaml`
-- ✅ Regular key rotation (quarterly for production)
-- ✅ Separate keys per environment (dev/staging/prod)
-- ✅ Backup Age keys securely (encrypted backup)
-
-### Troubleshooting
-
-| Problem | Solution |
-|---------|----------|
-| `SOPS binary not found` | `brew install sops` |
-| `Age key file not found` | `provisioning config init-encryption --kms age` |
-| `SOPS_AGE_RECIPIENTS not set` | `export SOPS_AGE_RECIPIENTS="age1..."` |
-| `Decryption failed` | Check key file: `provisioning config validate-encryption` |
-| `AWS KMS Access Denied` | Verify IAM permissions: `aws sts get-caller-identity` |
-
-### Testing
-
-```bash
-# Run all encryption tests
+
+
+Backend | Use Case | Setup Command
+Age | Development, simple setup | provisioning config init-encryption --kms age
+AWS KMS | Production, AWS environments | Configure in .sops.yaml
+Vault | Enterprise, dynamic secrets | Set VAULT_ADDR and VAULT_TOKEN
+Cosmian | Confidential computing | Configure in config.toml
+
+
+
+
+✅ Encrypt all files with passwords, API keys, secrets
+✅ Never commit unencrypted secrets to git
+✅ Set file permissions: chmod 600 ~/.config/sops/age/keys.txt
+✅ Add plaintext files to .gitignore: *.dec.yaml, secrets.yaml
+✅ Regular key rotation (quarterly for production)
+✅ Separate keys per environment (dev/staging/prod)
+✅ Backup Age keys securely (encrypted backup)
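The file-permission and `.gitignore` items from this checklist can be applied idempotently; a minimal sketch, using the paths the checklist itself names:

```bash
# Lock down the Age key file (created if missing) and ignore plaintext artifacts
key_file="${SOPS_AGE_KEY_FILE:-$HOME/.config/sops/age/keys.txt}"
mkdir -p "$(dirname "$key_file")"
touch "$key_file"
chmod 600 "$key_file"

# Keep plaintext secrets out of git; skip patterns already present
for pattern in '*.dec.yaml' 'secrets.yaml'; do
  grep -qxF "$pattern" .gitignore 2>/dev/null || echo "$pattern" >> .gitignore
done
```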
+
+
+Problem | Solution
+SOPS binary not found | brew install sops
+Age key file not found | provisioning config init-encryption --kms age
+SOPS_AGE_RECIPIENTS not set | export SOPS_AGE_RECIPIENTS="age1..."
+Decryption failed | Check key file: provisioning config validate-encryption
+AWS KMS Access Denied | Verify IAM permissions: aws sts get-caller-identity
+
+
+
+# Run all encryption tests
nu provisioning/core/nulib/lib_provisioning/config/encryption_tests.nu
# Run specific test
@@ -60869,14 +57083,10 @@ nu provisioning/core/nulib/lib_provisioning/config/encryption_tests.nu test-full
# Test KMS backend
use lib_provisioning/kms/client.nu
kms-test --backend age
-```plaintext
-
-### Integration
-
-Configs are **automatically decrypted** when loaded:
-
-```nushell
-# Nushell code - encryption is transparent
+
+
+Configs are automatically decrypted when loaded:
+# Nushell code - encryption is transparent
use lib_provisioning/config/loader.nu
# Auto-decrypts encrypted files in memory
@@ -60884,46 +57094,35 @@ let config = (load-provisioning-config)
# Access secrets normally
let db_password = ($config | get database.password)
-```plaintext
-
-### Emergency Key Recovery
-
-If you lose your Age key:
-
-1. **Check backups**: `~/.config/sops/age/keys.txt.backup`
-2. **Check other systems**: Keys might be on other dev machines
-3. **Contact team**: Team members with access can re-encrypt for you
-4. **Rotate secrets**: If keys are lost, rotate all secrets
-
-### Advanced
-
-#### Multiple Recipients (Team Access)
-
-```yaml
-# .sops.yaml
+
+
+If you lose your Age key:
+
+Check backups : ~/.config/sops/age/keys.txt.backup
+Check other systems : Keys might be on other dev machines
+Contact team : Team members with access can re-encrypt for you
+Rotate secrets : If keys are lost, rotate all secrets
+
+
+
+# .sops.yaml
creation_rules:
- path_regex: .*\.enc\.yaml$
age: >-
age1ql3z7hjy54pw3hyww5ayyfg7zqgvc7w3j2elw8zmrj2kg5sfn9aqmcac8p,
age1ql3z7hjy54pw3hyww5ayyfg7zqgvc7w3j2elw8zmrj2kg5sfn9aqmcac8q
-```plaintext
-
-#### Key Rotation
-
-```bash
-# Generate new key
+
+
+# Generate new key
age-keygen -o ~/.config/sops/age/keys-new.txt
# Update .sops.yaml with new recipient
# Rotate keys for file
provisioning config rotate-keys workspace/config/secure.yaml <new-key-id>
-```plaintext
-
-#### Scan and Encrypt All
-
-```bash
-# Find all unencrypted sensitive configs
+
+
+# Find all unencrypted sensitive configs
provisioning config scan-sensitive workspace --recursive
# Encrypt them all
@@ -60931,19 +57130,16 @@ provisioning config encrypt-all workspace --kms age --recursive
# Verify
provisioning config scan-sensitive workspace --recursive
-```plaintext
-
-### Documentation
-
-- **Full Guide**: `docs/user/CONFIG_ENCRYPTION_GUIDE.md`
-- **SOPS Docs**: <https://github.com/mozilla/sops>
-- **Age Docs**: <https://age-encryption.org/>
-
----
-
-**Last Updated**: 2025-10-08
-**Version**: 1.0.0
+
+
+
+Last Updated : 2025-10-08
+Version : 1.0.0
A comprehensive security system with 39,699 lines across 12 components providing enterprise-grade protection for infrastructure automation.
@@ -61086,14 +57282,14 @@ provisioning compliance gdpr export <user>
Standards : AES-256, TLS 1.3, envelope encryption
Coverage : At-rest and in-transit encryption
-
+
-Overhead : <20ms per secure operation
+Overhead : <20 ms per secure operation
Tests : 350+ comprehensive test cases
Endpoints : 83+ REST API endpoints
CLI Commands : 111+ security-related commands
-
+
Component | Command | Purpose
Login | provisioning login | User authentication
MFA TOTP | provisioning mfa totp enroll | Setup time-based MFA
@@ -61107,27 +57303,27 @@ provisioning compliance gdpr export <user>
Audit | provisioning audit query --user alice --action deploy --from 24h | Search audit logs
-
+
Security system is integrated throughout provisioning platform:
Embedded : All authentication/authorization checks
-Non-blocking : <20ms overhead on operations
+Non-blocking : <20 ms overhead on operations
Graceful degradation : Fallback mechanisms for partial failures
Hot reload : Policies update without service restart
-
+
Security policies and settings are defined in:
provisioning/kcl/security.k - KCL security schema definitions
provisioning/config/security/*.toml - Security policy configurations
Environment-specific overrides in workspace/config/
-
+
# Show security help
@@ -61143,9 +57339,9 @@ provisioning secrets --help
Date : 2025-10-08
Status : Production-ready
-
+
RustyVault is a self-hosted, Rust-based secrets management system that provides a Vault-compatible API . The provisioning platform now supports RustyVault as a KMS backend alongside Age, Cosmian, AWS KMS, and HashiCorp Vault.
-
+
Self-hosted : Full control over your key management infrastructure
Pure Rust : Better performance and memory safety
@@ -61162,26 +57358,18 @@ provisioning secrets --help
├── AWS KMS (cloud-native AWS)
├── HashiCorp Vault (enterprise, external)
└── RustyVault (self-hosted, embedded) ✨ NEW
-```plaintext
-
----
-
-## Installation
-
-### Option 1: Standalone RustyVault Server
-
-```bash
-# Install RustyVault binary
+
+
+
+
+# Install RustyVault binary
cargo install rusty_vault
# Start RustyVault server
rustyvault server -config=/path/to/config.hcl
-```plaintext
-
-### Option 2: Docker Deployment
-
-```bash
-# Pull RustyVault image (if available)
+
+
+# Pull RustyVault image (if available)
docker pull tongsuo/rustyvault:latest
# Run RustyVault container
@@ -61191,30 +57379,21 @@ docker run -d \
-v $(pwd)/config:/vault/config \
-v $(pwd)/data:/vault/data \
tongsuo/rustyvault:latest
-```plaintext
-
-### Option 3: From Source
-
-```bash
-# Clone repository
+
+
+# Clone repository
git clone https://github.com/Tongsuo-Project/RustyVault.git
cd RustyVault
# Build and run
cargo build --release
./target/release/rustyvault server -config=config.hcl
-```plaintext
-
----
-
-## Configuration
-
-### RustyVault Server Configuration
-
-Create `rustyvault-config.hcl`:
-
-```hcl
-# RustyVault Server Configuration
+
+
+
+
+Create rustyvault-config.hcl:
+# RustyVault Server Configuration
storage "file" {
path = "/vault/data"
@@ -61231,12 +57410,9 @@ cluster_addr = "https://127.0.0.1:8201"
# Enable Transit secrets engine
default_lease_ttl = "168h"
max_lease_ttl = "720h"
-```plaintext
-
-### Initialize RustyVault
-
-```bash
-# Initialize (first time only)
+
+
+# Initialize (first time only)
export VAULT_ADDR='http://127.0.0.1:8200'
rustyvault operator init
@@ -61247,12 +57423,9 @@ rustyvault operator unseal <unseal_key_3>
# Save root token
export RUSTYVAULT_TOKEN='<root_token>'
-```plaintext
-
-### Enable Transit Engine
-
-```bash
-# Enable transit secrets engine
+
+
+# Enable transit secrets engine
rustyvault secrets enable transit
# Create encryption key
@@ -61260,16 +57433,11 @@ rustyvault write -f transit/keys/provisioning-main
# Verify key creation
rustyvault read transit/keys/provisioning-main
-```plaintext
-
----
-
-## KMS Service Configuration
-
-### Update `provisioning/config/kms.toml`
-
-```toml
-[kms]
+
+
+
+
+[kms]
type = "rustyvault"
server_url = "http://localhost:8200"
token = "${RUSTYVAULT_TOKEN}"
@@ -61284,12 +57452,9 @@ audit_logging = true
[tls]
enabled = false # Set true with HTTPS
-```plaintext
-
-### Environment Variables
-
-```bash
-# RustyVault connection
+
+
+# RustyVault connection
export RUSTYVAULT_ADDR="http://localhost:8200"
export RUSTYVAULT_TOKEN="s.xxxxxxxxxxxxxxxxxxxxxx"
export RUSTYVAULT_MOUNT_POINT="transit"
@@ -61299,27 +57464,19 @@ export RUSTYVAULT_TLS_VERIFY="true"
# KMS service
export KMS_BACKEND="rustyvault"
export KMS_BIND_ADDR="0.0.0.0:8081"
-```plaintext
-
----
-
-## Usage
-
-### Start KMS Service
-
-```bash
-# With RustyVault backend
+
+
+
+
+# With RustyVault backend
cd provisioning/platform/kms-service
cargo run
# With custom config
cargo run -- --config=/path/to/kms.toml
-```plaintext
-
-### CLI Operations
-
-```bash
-# Encrypt configuration file
+
+
+# Encrypt configuration file
provisioning kms encrypt provisioning/config/secrets.yaml
# Decrypt configuration
@@ -61330,12 +57487,9 @@ provisioning kms generate-key --spec AES256
# Health check
provisioning kms health
-```plaintext
-
-### REST API Usage
-
-```bash
-# Health check
+
+
+# Health check
curl http://localhost:8081/health
# Encrypt data
@@ -61358,18 +57512,12 @@ curl -X POST http://localhost:8081/decrypt \
curl -X POST http://localhost:8081/datakey/generate \
-H "Content-Type: application/json" \
-d '{"key_spec": "AES_256"}'
-```plaintext
-
----
-
-## Advanced Features
-
-### Context-based Encryption (AAD)
-
-Additional authenticated data binds encrypted data to specific contexts:
-
-```bash
-# Encrypt with context
+
+
+
+
+Additional authenticated data binds encrypted data to specific contexts:
+# Encrypt with context
curl -X POST http://localhost:8081/encrypt \
-d '{
"plaintext": "c2VjcmV0",
@@ -61382,14 +57530,10 @@ curl -X POST http://localhost:8081/decrypt \
"ciphertext": "vault:v1:...",
"context": "environment=prod,service=api"
}'
-```plaintext
-
-### Envelope Encryption
-
-For large files, use envelope encryption:
-
-```bash
-# 1. Generate data key
+
+
+For large files, use envelope encryption:
+# 1. Generate data key
DATA_KEY=$(curl -X POST http://localhost:8081/datakey/generate \
-d '{"key_spec": "AES_256"}' | jq -r '.plaintext')
@@ -61398,12 +57542,9 @@ openssl enc -aes-256-cbc -in large-file.bin -out encrypted.bin -K $DATA_KEY
# 3. Store encrypted data key (from response)
echo "vault:v1:..." > encrypted-data-key.txt
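To see the envelope pattern end to end without a KMS, the same round trip can be sketched with openssl alone, substituting a locally generated random key for the KMS data key:

```bash
# Envelope-encryption round trip, local sketch (random key stands in for the KMS data key)
data_key=$(openssl rand -hex 32)
iv=$(openssl rand -hex 16)
echo "example payload" > large-file.bin

# Encrypt the payload with the data key
openssl enc -aes-256-cbc -in large-file.bin -out encrypted.bin -K "$data_key" -iv "$iv"

# Later: decrypt with the recovered data key
openssl enc -d -aes-256-cbc -in encrypted.bin -out decrypted.bin -K "$data_key" -iv "$iv"
cmp large-file.bin decrypted.bin && echo "round trip OK"
```

With a real KMS, only the data key (not the payload) crosses the wire, which is why this pattern scales to large files.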
-```plaintext
-
-### Key Rotation
-
-```bash
-# Rotate encryption key in RustyVault
+
+
+# Rotate encryption key in RustyVault
rustyvault write -f transit/keys/provisioning-main/rotate
# Verify new version
@@ -61412,18 +57553,12 @@ rustyvault read transit/keys/provisioning-main
# Rewrap existing ciphertext with new key version
curl -X POST http://localhost:8081/rewrap \
-d '{"ciphertext": "vault:v1:..."}'
-```plaintext
-
----
-
-## Production Deployment
-
-### High Availability Setup
-
-Deploy multiple RustyVault instances behind a load balancer:
-
-```yaml
-# docker-compose.yml
+
+
+
+
+Deploy multiple RustyVault instances behind a load balancer:
+# docker-compose.yml
version: '3.8'
services:
@@ -61456,12 +57591,9 @@ services:
volumes:
vault-data-1:
vault-data-2:
-```plaintext
-
-### TLS Configuration
-
-```toml
-# kms.toml
+
+
+# kms.toml
[kms]
type = "rustyvault"
server_url = "https://vault.example.com:8200"
@@ -61473,26 +57605,18 @@ enabled = true
cert_path = "/etc/kms/certs/server.crt"
key_path = "/etc/kms/certs/server.key"
ca_path = "/etc/kms/certs/ca.crt"
-```plaintext
-
-### Auto-Unseal (AWS KMS)
-
-```hcl
-# rustyvault-config.hcl
+
+
+# rustyvault-config.hcl
seal "awskms" {
region = "us-east-1"
kms_key_id = "arn:aws:kms:us-east-1:123456789012:key/..."
}
-```plaintext
-
----
-
-## Monitoring
-
-### Health Checks
-
-```bash
-# RustyVault health
+
+
+
+
+# RustyVault health
curl http://localhost:8200/v1/sys/health
# KMS service health
@@ -61500,77 +57624,52 @@ curl http://localhost:8081/health
# Metrics (if enabled)
curl http://localhost:8081/metrics
-```plaintext
-
-### Audit Logging
-
-Enable audit logging in RustyVault:
-
-```hcl
-# rustyvault-config.hcl
+
+
+Enable audit logging in RustyVault:
+# rustyvault-config.hcl
audit {
path = "/vault/logs/audit.log"
format = "json"
}
-```plaintext
-
----
-
-## Troubleshooting
-
-### Common Issues
-
-**1. Connection Refused**
-
-```bash
-# Check RustyVault is running
+
+
+
+
+1. Connection Refused
+# Check RustyVault is running
curl http://localhost:8200/v1/sys/health
# Check token is valid
export VAULT_ADDR='http://localhost:8200'
rustyvault token lookup
-```plaintext
-
-**2. Authentication Failed**
-
-```bash
-# Verify token in environment
+
+2. Authentication Failed
+# Verify token in environment
echo $RUSTYVAULT_TOKEN
# Renew token if needed
rustyvault token renew
-```plaintext
-
-**3. Key Not Found**
-
-```bash
-# List available keys
+
+3. Key Not Found
+# List available keys
rustyvault list transit/keys
# Create missing key
rustyvault write -f transit/keys/provisioning-main
-```plaintext
-
-**4. TLS Verification Failed**
-
-```bash
-# Disable TLS verification (dev only)
+
+4. TLS Verification Failed
+# Disable TLS verification (dev only)
export RUSTYVAULT_TLS_VERIFY=false
# Or add CA certificate
export RUSTYVAULT_CACERT=/path/to/ca.crt
-```plaintext
-
----
-
-## Migration from Other Backends
-
-### From HashiCorp Vault
-
-RustyVault is API-compatible, minimal changes required:
-
-```bash
-# Old config (Vault)
+
+
+
+
+RustyVault is API-compatible, minimal changes required:
+# Old config (Vault)
[kms]
type = "vault"
address = "https://vault.example.com:8200"
@@ -61581,39 +57680,29 @@ token = "${VAULT_TOKEN}"
type = "rustyvault"
server_url = "http://rustyvault.example.com:8200"
token = "${RUSTYVAULT_TOKEN}"
-```plaintext
-
-### From Age
-
-Re-encrypt existing encrypted files:
-
-```bash
-# 1. Decrypt with Age
+
+
+Re-encrypt existing encrypted files:
+# 1. Decrypt with Age
provisioning kms decrypt --backend age secrets.enc > secrets.plain
# 2. Encrypt with RustyVault
provisioning kms encrypt --backend rustyvault secrets.plain > secrets.rustyvault.enc
-```plaintext
-
----
-
-## Security Considerations
-
-### Best Practices
-
-1. **Enable TLS**: Always use HTTPS in production
-2. **Rotate Tokens**: Regularly rotate RustyVault tokens
-3. **Least Privilege**: Use policies to restrict token permissions
-4. **Audit Logging**: Enable and monitor audit logs
-5. **Backup Keys**: Secure backup of unseal keys and root token
-6. **Network Isolation**: Run RustyVault in isolated network segment
-
-### Token Policies
-
-Create restricted policy for KMS service:
-
-```hcl
-# kms-policy.hcl
+
+
+
+
+
+Enable TLS : Always use HTTPS in production
+Rotate Tokens : Regularly rotate RustyVault tokens
+Least Privilege : Use policies to restrict token permissions
+Audit Logging : Enable and monitor audit logs
+Backup Keys : Secure backup of unseal keys and root token
+Network Isolation : Run RustyVault in isolated network segment
+
+
+Create restricted policy for KMS service:
+# kms-policy.hcl
path "transit/encrypt/provisioning-main" {
capabilities = ["update"]
}
@@ -61625,62 +57714,50 @@ path "transit/decrypt/provisioning-main" {
path "transit/datakey/plaintext/provisioning-main" {
capabilities = ["update"]
}
-```plaintext
-
-Apply policy:
-
-```bash
-rustyvault policy write kms-service kms-policy.hcl
-rustyvault token create -policy=kms-service
-```plaintext
-
----
-
-## Performance
-
-### Benchmarks (Estimated)
-
-| Operation | Latency | Throughput |
-|-----------|---------|------------|
-| Encrypt | 5-15ms | 2,000-5,000 ops/sec |
-| Decrypt | 5-15ms | 2,000-5,000 ops/sec |
-| Generate Key | 10-20ms | 1,000-2,000 ops/sec |
-
-*Actual performance depends on hardware, network, and RustyVault configuration*
-
-### Optimization Tips
-
-1. **Connection Pooling**: Reuse HTTP connections
-2. **Batching**: Batch multiple operations when possible
-3. **Caching**: Cache data keys for envelope encryption
-4. **Local Unseal**: Use auto-unseal for faster restarts
-
----
-
-## Related Documentation
-
-- **KMS Service**: `docs/user/CONFIG_ENCRYPTION_GUIDE.md`
-- **Dynamic Secrets**: `docs/user/DYNAMIC_SECRETS_QUICK_REFERENCE.md`
-- **Security System**: `docs/architecture/ADR-009-security-system-complete.md`
-- **RustyVault GitHub**: <https://github.com/Tongsuo-Project/RustyVault>
-
----
-
-## Support
-
-- **GitHub Issues**: <https://github.com/Tongsuo-Project/RustyVault/issues>
-- **Documentation**: <https://github.com/Tongsuo-Project/RustyVault/tree/main/docs>
-- **Community**: <https://users.rust-lang.org/t/rustyvault-a-hashicorp-vault-replacement-in-rust/103943>
-
----
-
-**Last Updated**: 2025-10-08
-**Maintained By**: Architecture Team
+Apply policy:
+rustyvault policy write kms-service kms-policy.hcl
+rustyvault token create -policy=kms-service
+
+
+
+
+Operation | Latency | Throughput
+Encrypt | 5-15 ms | 2,000-5,000 ops/sec
+Decrypt | 5-15 ms | 2,000-5,000 ops/sec
+Generate Key | 10-20 ms | 1,000-2,000 ops/sec
+
+
+Actual performance depends on hardware, network, and RustyVault configuration
+
+
+Connection Pooling : Reuse HTTP connections
+Batching : Batch multiple operations when possible
+Caching : Cache data keys for envelope encryption
+Local Unseal : Use auto-unseal for faster restarts
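The caching tip can be sketched as a small wrapper that reuses a data key from disk while it is fresh; the cache path, the 10-minute TTL, and the random-key stand-in for the `/datakey/generate` call are all illustrative:

```bash
# Reuse a cached data key for up to 10 minutes before fetching a new one
cache="${TMPDIR:-/tmp}/kms-datakey.cache"
max_age=600
now=$(date +%s)
mtime=$(stat -c %Y "$cache" 2>/dev/null || stat -f %m "$cache" 2>/dev/null || echo 0)

if [ -f "$cache" ] && [ $((now - mtime)) -lt "$max_age" ]; then
  data_key=$(cat "$cache")          # fresh enough: skip the KMS round trip
else
  data_key=$(openssl rand -hex 32)  # stands in for POST /datakey/generate
  umask 077
  printf '%s' "$data_key" > "$cache"
fi
echo "data key length: ${#data_key}"
```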
+
+
+
+
+KMS Service : docs/user/CONFIG_ENCRYPTION_GUIDE.md
+Dynamic Secrets : docs/user/DYNAMIC_SECRETS_QUICK_REFERENCE.md
+Security System : docs/architecture/adr-009-security-system-complete.md
+RustyVault GitHub : https://github.com/Tongsuo-Project/RustyVault
+
+
+
+
+
+Last Updated : 2025-10-08
+Maintained By : Architecture Team
-SecretumVault is an enterprise-grade, post-quantum ready secrets management system integrated as the 4th KMS backend in the provisioning platform, alongside Age (dev), Cosmian (prod), and RustyVault (self-hosted).
-
-
+SecretumVault is an enterprise-grade, post-quantum ready secrets management system integrated as the fourth KMS backend in the provisioning platform, alongside Age (dev), Cosmian (prod), and RustyVault (self-hosted).
+
+
SecretumVault provides:
Post-Quantum Cryptography : Ready for quantum-resistant algorithms
@@ -61698,10 +57775,10 @@ rustyvault token create -policy=kms-service
Self-Hosted Enterprise SecretumVault + etcd Full control, HA support
-
+
Storage : Filesystem (~/.config/provisioning/secretumvault/data)
-Performance : <3ms encryption/decryption
+Performance : <3 ms encryption/decryption
Setup : No separate service required
Best For : Local development and testing
export PROVISIONING_ENV=dev
@@ -61710,7 +57787,7 @@ provisioning kms encrypt config.yaml
Storage : SurrealDB (document database)
-Performance : <10ms operations
+Performance : <10 ms operations
Setup : Start SecretumVault service separately
Best For : Team testing, staging environments
# Start SecretumVault service
@@ -61725,7 +57802,7 @@ provisioning kms encrypt config.yaml
Storage : etcd cluster (3+ nodes)
-Performance : <10ms operations (99th percentile)
+Performance : <10 ms operations (99th percentile)
Setup : etcd cluster + SecretumVault service
Best For : Production deployments with HA requirements
# Setup etcd cluster (3 nodes minimum)
@@ -61746,8 +57823,8 @@ export SECRETUMVAULT_STORAGE=etcd
provisioning kms encrypt config.yaml
-
-
+
+
Variable | Purpose | Default | Example
PROVISIONING_ENV | Deployment environment | dev | staging, prod
KMS_DEV_BACKEND | Development KMS backend | age | secretumvault
@@ -61759,7 +57836,7 @@ provisioning kms encrypt config.yaml
SECRETUMVAULT_TLS_VERIFY | Verify TLS certificates | false | true
-
+
System Defaults : provisioning/config/secretumvault.toml
KMS Config : provisioning/config/kms.toml
Edit these files to customize:
@@ -61771,7 +57848,7 @@ provisioning kms encrypt config.yaml
Audit logging
Key rotation policies
-
+
# Encrypt a file
provisioning kms encrypt config.yaml
@@ -61814,7 +57891,7 @@ provisioning kms version
# Detailed KMS status
provisioning kms status
-
+
# Rotate encryption key
provisioning kms rotate-key provisioning-master
@@ -61907,7 +57984,7 @@ connection_url = "postgresql://user:pass@localhost:5432/secretumvault"
max_connections = 10
ssl_mode = "require"
-
+
Error : “Failed to connect to SecretumVault service”
Solutions :
@@ -62027,7 +58104,7 @@ provisioning config validate
View audit logs :
tail -f ~/.config/provisioning/logs/secretumvault-audit.log
-
+
-
+
Restrict who can access SecretumVault admin UI
Use strong authentication (MFA preferred)
Audit all secrets access
Implement least-privilege principle
-
+
Rotate keys regularly (every 90 days recommended)
Keep old versions for decryption
@@ -62070,7 +58147,7 @@ provisioning config validate
Store backups securely
Keep backup keys separate from encrypted data
-
+
# Export all secrets encrypted with Age
provisioning secrets export --backend age --output secrets.json
@@ -62121,7 +58198,7 @@ request_timeout = 30
cache_ttl = 600
-
+
All operations are logged:
# View recent audit events
provisioning kms audit --limit 100
@@ -62132,7 +58209,7 @@ provisioning kms audit export --output audit.json
# Audit specific operations
provisioning kms audit --action encrypt --from 24h
-
+
# Generate compliance report
provisioning compliance report --backend secretumvault
@@ -62172,7 +58249,7 @@ export SECRETUMVAULT_STORAGE=etcd
# Region 2 (for failover)
export SECRETUMVAULT_URL_FALLBACK=https://kms-us-west.example.com
-
+
Documentation : docs/user/SECRETUMVAULT_KMS_GUIDE.md (this file)
Configuration Template : provisioning/config/secretumvault.toml
@@ -62180,7 +58257,7 @@ export SECRETUMVAULT_URL_FALLBACK=https://kms-us-west.example.com
Issues : Report issues with provisioning kms debug
Logs : Check ~/.config/provisioning/logs/secretumvault-*.log
-
+
Age KMS Guide - Simple local encryption
Cosmian KMS Guide - Enterprise confidential computing
@@ -62188,25 +58265,21 @@ export SECRETUMVAULT_URL_FALLBACK=https://kms-us-west.example.com
KMS Overview - KMS backend comparison
-
+
The fastest way to use temporal SSH keys:
# Auto-generate, deploy, and connect (key auto-revoked after disconnect)
ssh connect server.example.com
# Connect with custom user and TTL
-ssh connect server.example.com --user deploy --ttl 30min
+ssh connect server.example.com --user deploy --ttl 30min
# Keep key active after disconnect
ssh connect server.example.com --keep
-```plaintext
-
-### Manual Key Management
-
-For more control over the key lifecycle:
-
-```bash
-# 1. Generate key
+
+
+For more control over the key lifecycle:
+# 1. Generate key
ssh generate-key server.example.com --user root --ttl 1hr
# Output:
@@ -62231,54 +58304,41 @@ ssh -i /path/to/private/key root@server.example.com
# 4. Revoke when done
ssh revoke-key abc-123-def-456
-```plaintext
-
-## Key Features
-
-### Automatic Expiration
-
-All keys expire automatically after their TTL:
-
-- **Default TTL**: 1 hour
-- **Configurable**: From 5 minutes to 24 hours
-- **Background Cleanup**: Automatic removal from servers every 5 minutes
-
-### Multiple Key Types
-
-Choose the right key type for your use case:
-
-| Type | Description | Use Case |
-|------|-------------|----------|
-| **dynamic** (default) | Generated Ed25519 keys | Quick SSH access |
-| **ca** | Vault CA-signed certificate | Enterprise with SSH CA |
-| **otp** | Vault one-time password | Single-use access |
-
-### Security Benefits
-
-✅ No static SSH keys to manage
+
+
+
+All keys expire automatically after their TTL:
+
+Default TTL : 1 hour
+Configurable : From 5 minutes to 24 hours
+Background Cleanup : Automatic removal from servers every 5 minutes
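What that background cleanup amounts to on the target host can be sketched as filtering expired entries out of `authorized_keys`. The `# provisioning-expires <epoch>` tag is a hypothetical marker, and the sketch runs against a demo file rather than a live `~/.ssh/authorized_keys`:

```bash
# Demo: drop authorized_keys entries whose expiry epoch has passed
auth=authorized_keys.demo
now=$(date +%s)
printf '%s\n' \
  "ssh-ed25519 AAAA permanent-key" \
  "ssh-ed25519 BBBB temp-key # provisioning-expires $((now + 3600))" \
  "ssh-ed25519 CCCC temp-key # provisioning-expires $((now - 3600))" > "$auth"

# Keep unmarked keys and marked keys whose expiry is still in the future
awk -v now="$now" '
  /# provisioning-expires/ { if ($NF + 0 > now) print; next }
  { print }
' "$auth" > "$auth.new" && mv "$auth.new" "$auth"
cat "$auth"
```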
+
+
+Choose the right key type for your use case:
+Type | Description | Use Case
+dynamic (default) | Generated Ed25519 keys | Quick SSH access
+ca | Vault CA-signed certificate | Enterprise with SSH CA
+otp | Vault one-time password | Single-use access
+
+
+
+✅ No static SSH keys to manage
✅ Short-lived credentials (1 hour default)
✅ Automatic cleanup on expiration
✅ Audit trail for all operations
-✅ Private keys never stored on disk
-
-## Common Usage Patterns
-
-### Development Workflow
-
-```bash
-# Quick SSH for debugging
-ssh connect dev-server.local --ttl 30min
+✅ Private keys never stored on disk
+
+
+# Quick SSH for debugging
+ssh connect dev-server.local --ttl 30min
# Execute commands
ssh root@dev-server.local "systemctl status nginx"
# Connection closes, key auto-revokes
-```plaintext
-
-### Production Deployment
-
-```bash
-# Generate key with longer TTL for deployment
+
+
+# Generate key with longer TTL for deployment
ssh generate-key prod-server.example.com --ttl 2hr
# Deploy to server
@@ -62289,81 +58349,53 @@ ssh -i /tmp/deploy-key root@prod-server.example.com < deploy.sh
# Manual revoke when done
ssh revoke-key <key-id>
-```plaintext
-
-### Multi-Server Access
-
-```bash
-# Generate one key
+
+
+# Generate one key
ssh generate-key server01.example.com --ttl 1hr
# Use the same private key for multiple servers (if you have provisioning access)
# Note: Currently each key is server-specific, multi-server support coming soon
-```plaintext
-
-## Command Reference
-
-### ssh generate-key
-
-Generate a new temporal SSH key.
-
-**Syntax**:
-
-```bash
-ssh generate-key <server> [options]
-```plaintext
-
-**Options**:
-
-- `--user <name>`: SSH user (default: root)
-- `--ttl <duration>`: Key lifetime (default: 1hr)
-- `--type <ca|otp|dynamic>`: Key type (default: dynamic)
-- `--ip <address>`: Allowed IP (OTP mode only)
-- `--principal <name>`: Principal (CA mode only)
-
-**Examples**:
-
-```bash
-# Basic usage
+
+
+
+Generate a new temporal SSH key.
+Syntax :
+ssh generate-key <server> [options]
+
+Options :
+
+--user <name>: SSH user (default: root)
+--ttl <duration>: Key lifetime (default: 1hr)
+--type <ca|otp|dynamic>: Key type (default: dynamic)
+--ip <address>: Allowed IP (OTP mode only)
+--principal <name>: Principal (CA mode only)
+
+Examples :
+# Basic usage
ssh generate-key server.example.com
# Custom user and TTL
-ssh generate-key server.example.com --user deploy --ttl 30min
+ssh generate-key server.example.com --user deploy --ttl 30min
# Vault CA mode
ssh generate-key server.example.com --type ca --principal admin
-```plaintext
-
-### ssh deploy-key
-
-Deploy a generated key to the target server.
-
-**Syntax**:
-
-```bash
-ssh deploy-key <key-id>
-```plaintext
-
-**Example**:
-
-```bash
-ssh deploy-key abc-123-def-456
-```plaintext
-
-### ssh list-keys
-
-List all active SSH keys.
-
-**Syntax**:
-
-```bash
-ssh list-keys [--expired]
-```plaintext
-
-**Examples**:
-
-```bash
-# List active keys
+
+
+Deploy a generated key to the target server.
+Syntax :
+ssh deploy-key <key-id>
+
+Example :
+ssh deploy-key abc-123-def-456
+
+
+List all active SSH keys.
+Syntax :
+ssh list-keys [--expired]
+
+Examples :
+# List active keys
ssh list-keys
# Show only deployed keys
@@ -62371,61 +58403,37 @@ ssh list-keys | where deployed == true
# Include expired keys
ssh list-keys --expired
-```plaintext
-
-### ssh get-key
-
-Get detailed information about a specific key.
-
-**Syntax**:
-
-```bash
-ssh get-key <key-id>
-```plaintext
-
-**Example**:
-
-```bash
-ssh get-key abc-123-def-456
-```plaintext
-
-### ssh revoke-key
-
-Immediately revoke a key (removes from server and tracking).
-
-**Syntax**:
-
-```bash
-ssh revoke-key <key-id>
-```plaintext
-
-**Example**:
-
-```bash
-ssh revoke-key abc-123-def-456
-```plaintext
-
-### ssh connect
-
-Auto-generate, deploy, connect, and revoke (all-in-one).
-
-**Syntax**:
-
-```bash
-ssh connect <server> [options]
-```plaintext
-
-**Options**:
-
-- `--user <name>`: SSH user (default: root)
-- `--ttl <duration>`: Key lifetime (default: 1hr)
-- `--type <ca|otp|dynamic>`: Key type (default: dynamic)
-- `--keep`: Don't revoke after disconnect
-
-**Examples**:
-
-```bash
-# Quick connection
+
+
+Get detailed information about a specific key.
+Syntax :
+ssh get-key <key-id>
+
+Example :
+ssh get-key abc-123-def-456
+
+
+Immediately revoke a key (removes from server and tracking).
+Syntax :
+ssh revoke-key <key-id>
+
+Example :
+ssh revoke-key abc-123-def-456
+
+
+Auto-generate, deploy, connect, and revoke (all-in-one).
+Syntax :
+ssh connect <server> [options]
+
+Options :
+
+--user <name>: SSH user (default: root)
+--ttl <duration>: Key lifetime (default: 1hr)
+--type <ca|otp|dynamic>: Key type (default: dynamic)
+--keep: Don’t revoke after disconnect
+
+Examples:
+# Quick connection
ssh connect server.example.com
# Custom user
@@ -62433,22 +58441,14 @@ ssh connect server.example.com --user deploy
# Keep key active after disconnect
ssh connect server.example.com --keep
-```plaintext
-
-### ssh stats
-
-Show SSH key statistics.
-
-**Syntax**:
-
-```bash
-ssh stats
-```plaintext
-
-**Example Output**:
-
-```plaintext
-SSH Key Statistics:
+
+ssh stats
+
+Show SSH key statistics.
+Syntax:
+ssh stats
+
+Example Output:
+SSH Key Statistics:
Total generated: 42
Active keys: 10
Expired keys: 32
@@ -62460,63 +58460,38 @@ Keys by type:
Last cleanup: 2024-01-01T12:00:00Z
Cleaned keys: 5
-```plaintext
-
-### ssh cleanup
-
-Manually trigger cleanup of expired keys.
-
-**Syntax**:
-
-```bash
-ssh cleanup
-```plaintext
-
-### ssh test
-
-Run a quick test of the SSH key system.
-
-**Syntax**:
-
-```bash
-ssh test <server> [--user <name>]
-```plaintext
-
-**Example**:
-
-```bash
-ssh test server.example.com --user root
-```plaintext
-
-### ssh help
-
-Show help information.
-
-**Syntax**:
-
-```bash
-ssh help
-```plaintext
-
-## Duration Formats
-
-The `--ttl` option accepts various duration formats:
-
-| Format | Example | Meaning |
-|--------|---------|---------|
-| Minutes | `30min` | 30 minutes |
-| Hours | `2hr` | 2 hours |
-| Mixed | `1hr 30min` | 1.5 hours |
-| Seconds | `3600sec` | 1 hour |
-
-## Working with Private Keys
-
-### Saving Private Keys
-
-When you generate a key, save the private key immediately:
-
-```bash
-# Generate and save to file
+
+ssh cleanup
+
+Manually trigger cleanup of expired keys.
+Syntax:
+ssh cleanup
+
+ssh test
+
+Run a quick test of the SSH key system.
+Syntax:
+ssh test <server> [--user <name>]
+
+Example:
+ssh test server.example.com --user root
+
+ssh help
+
+Show help information.
+Syntax:
+ssh help
+
+Duration Formats
+
+The --ttl option accepts various duration formats:
+
+Format    Example     Meaning
+Minutes   30min       30 minutes
+Hours     2hr         2 hours
+Mixed     1hr 30min   1.5 hours
+Seconds   3600sec     1 hour
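For scripting, the formats above can be reduced to seconds, e.g. to compute an expiry timestamp. A minimal POSIX-shell sketch; the helper name `ttl_to_seconds` is an assumption for illustration, not part of the CLI:

```shell
# Hypothetical helper: convert a documented TTL string to seconds.
# Supports the units from the table above: hr, min, sec.
ttl_to_seconds() {
  total=0
  for part in $1; do            # word-split "1hr 30min" into parts
    case "$part" in
      *hr)  total=$((total + ${part%hr} * 3600)) ;;
      *min) total=$((total + ${part%min} * 60)) ;;
      *sec) total=$((total + ${part%sec})) ;;
      *)    echo "unknown unit: $part" >&2; return 1 ;;
    esac
  done
  echo "$total"
}

ttl_to_seconds "1hr 30min"   # prints 5400
```

A helper like this makes it easy to compare a key's TTL against elapsed time in automation scripts.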
+
+Working with Private Keys
+
+Saving Private Keys
+
+When you generate a key, save the private key immediately:
+# Generate and save to file
ssh generate-key server.example.com | get private_key | save -f ~/.ssh/temp_key
chmod 600 ~/.ssh/temp_key
@@ -62525,14 +58500,10 @@ ssh -i ~/.ssh/temp_key root@server.example.com
# Cleanup
rm ~/.ssh/temp_key
-```plaintext
-
-### Using SSH Agent
-
-Add the temporary key to your SSH agent:
-
-```bash
-# Generate key and extract private key
+
+Using SSH Agent
+
+Add the temporary key to your SSH agent:
+# Generate key and extract private key
ssh generate-key server.example.com | get private_key | save -f /tmp/temp_key
chmod 600 /tmp/temp_key
@@ -62545,23 +58516,18 @@ ssh root@server.example.com
# Remove from agent
ssh-add -d /tmp/temp_key
rm /tmp/temp_key
-```plaintext
-
-## Troubleshooting
-
-### Key Deployment Fails
-
-**Problem**: `ssh deploy-key` returns error
-
-**Solutions**:
-
-1. Check SSH connectivity to server:
-
- ```bash
- ssh root@server.example.com
+
+Troubleshooting
+
+Key Deployment Fails
+
+Problem: ssh deploy-key returns error
+Solutions:
+Check SSH connectivity to server:
+ssh root@server.example.com
+
+
+
Verify provisioning key is configured:
echo $PROVISIONING_SSH_KEY
@@ -62612,12 +58578,12 @@ rm /tmp/temp_key
-
-
+
+
Short TTLs: Use the shortest TTL that works for your task
-ssh connect server.example.com --ttl 30min
+ssh connect server.example.com --ttl 30min
@@ -62657,7 +58623,7 @@ ssh revoke-key $KEY_ID
-
+
If your organization uses HashiCorp Vault:
@@ -62666,12 +58632,9 @@ ssh generate-key server.example.com --type ca --principal admin --ttl 1hr
# Vault signs your public key
# Server must trust Vault CA certificate
-```plaintext
-
-**Setup** (one-time):
-
-```bash
-# On servers, add to /etc/ssh/sshd_config:
+
+Setup (one-time):
+# On servers, add to /etc/ssh/sshd_config:
TrustedUserCAKeys /etc/ssh/trusted-user-ca-keys.pem
# Get Vault CA public key:
@@ -62680,23 +58643,16 @@ vault read -field=public_key ssh/config/ca | \
# Restart SSH:
sudo systemctl restart sshd
-```plaintext
-
-#### OTP Mode
-
-```bash
-# Generate one-time password
+
+OTP Mode
+
+# Generate one-time password
ssh generate-key server.example.com --type otp --ip 192.168.1.100
# Use the OTP to connect (single use only)
-```plaintext
-
-### Scripting
-
-Use in scripts for automated operations:
-
-```nushell
-# deploy.nu
+
+Scripting
+
+Use in scripts for automated operations:
+# deploy.nu
def deploy [target: string] {
let key = (ssh generate-key $target --ttl 1hr)
ssh deploy-key $key.id
@@ -62711,14 +58667,10 @@ def deploy [target: string] {
# Always cleanup
ssh revoke-key $key.id
}
-```plaintext
-
-## API Integration
-
-For programmatic access, use the REST API:
-
-```bash
-# Generate key
+
+API Integration
+
+For programmatic access, use the REST API:
+# Generate key
curl -X POST http://localhost:9090/api/v1/ssh/generate \
-H "Content-Type: application/json" \
-d '{
@@ -62736,55 +58688,44 @@ curl http://localhost:9090/api/v1/ssh/keys
# Get stats
curl http://localhost:9090/api/v1/ssh/stats
-```plaintext
-
-## FAQ
-
-**Q: Can I use the same key for multiple servers?**
-A: Currently, each key is tied to a specific server. Multi-server support is planned.
-
-**Q: What happens if the orchestrator crashes?**
-A: Keys in memory are lost, but keys already deployed to servers remain until their expiration time.
-
-**Q: Can I extend the TTL of an existing key?**
-A: No, you must generate a new key. This is by design for security.
-
-**Q: What's the maximum TTL?**
-A: Configurable by admin, default maximum is 24 hours.
-
-**Q: Are private keys stored anywhere?**
-A: Private keys exist only in memory during generation and are shown once to the user. They are never written to disk by the system.
-
-**Q: What happens if cleanup fails?**
-A: The key remains in authorized_keys until the next cleanup run. You can trigger manual cleanup with `ssh cleanup`.
-
-**Q: Can I use this with non-root users?**
-A: Yes, use `--user <username>` when generating the key.
-
-**Q: How do I know when my key will expire?**
-A: Use `ssh get-key <key-id>` to see the exact expiration timestamp.
-
-## Support
-
-For issues or questions:
-
-1. Check orchestrator logs: `tail -f ./data/orchestrator.log`
-2. Run diagnostics: `ssh stats`
-3. Test connectivity: `ssh test server.example.com`
-4. Review documentation: `SSH_KEY_MANAGEMENT.md`
-
-## See Also
-
-- **Architecture**: `SSH_KEY_MANAGEMENT.md`
-- **Implementation**: `SSH_IMPLEMENTATION_SUMMARY.md`
-- **Configuration**: `config/ssh-config.toml.example`
+FAQ
+
+Q: Can I use the same key for multiple servers?
+A: Currently, each key is tied to a specific server. Multi-server support is planned.
+Q: What happens if the orchestrator crashes?
+A: Keys in memory are lost, but keys already deployed to servers remain until their expiration time.
+Q: Can I extend the TTL of an existing key?
+A: No, you must generate a new key. This is by design for security.
+Q: What’s the maximum TTL?
+A: Configurable by admin, default maximum is 24 hours.
+Q: Are private keys stored anywhere?
+A: Private keys exist only in memory during generation and are shown once to the user. They are never written to disk by the system.
+Q: What happens if cleanup fails?
+A: The key remains in authorized_keys until the next cleanup run. You can trigger manual cleanup with ssh cleanup.
+Q: Can I use this with non-root users?
+A: Yes, use --user <username> when generating the key.
+Q: How do I know when my key will expire?
+A: Use ssh get-key <key-id> to see the exact expiration timestamp.
+Support
+
+For issues or questions:
+
+Check orchestrator logs: tail -f ./data/orchestrator.log
+Run diagnostics: ssh stats
+Test connectivity: ssh test server.example.com
+Review documentation: SSH_KEY_MANAGEMENT.md
+
+
+See Also
+
+Architecture: SSH_KEY_MANAGEMENT.md
+Implementation: SSH_IMPLEMENTATION_SUMMARY.md
+Configuration: config/ssh-config.toml.example
+
Version: 1.0.0
Last Updated: 2025-10-09
Target Audience: Developers, DevOps Engineers, System Administrators
-
+
Overview
Why Native Plugins?
@@ -62803,7 +58744,7 @@ For issues or questions:
FAQ
-
+
The Provisioning Platform provides three native Nushell plugins that dramatically improve performance and user experience compared to traditional HTTP API calls:
Plugin Purpose Performance Gain
nu_plugin_auth JWT authentication, MFA, session management 20% faster
@@ -62811,71 +58752,57 @@ For issues or questions:
nu_plugin_orchestrator Orchestrator operations without HTTP overhead 50x faster
-
+
Traditional HTTP Flow:
User Command → HTTP Request → Network → Server Processing → Response → Parse JSON
- Total: ~50-100ms per operation
+ Total: ~50-100 ms per operation
Plugin Flow:
User Command → Direct Rust Function Call → Return Nushell Data Structure
- Total: ~1-10ms per operation
-```plaintext
-
-### Key Features
-
-✅ **Performance**: 10-50x faster than HTTP API
-✅ **Type Safety**: Full Nushell type system integration
-✅ **Pipeline Support**: Native Nushell data structures
-✅ **Offline Capability**: KMS and orchestrator work without network
-✅ **OS Integration**: Native keyring for secure token storage
-✅ **Graceful Fallback**: HTTP still available if plugins not installed
-
----
-
-## Why Native Plugins?
-
-### Performance Comparison
-
-Real-world benchmarks from production workload:
-
-| Operation | HTTP API | Plugin | Improvement | Speedup |
-|-----------|----------|--------|-------------|---------|
-| **KMS Encrypt (RustyVault)** | ~50ms | ~5ms | -45ms | **10x** |
-| **KMS Decrypt (RustyVault)** | ~50ms | ~5ms | -45ms | **10x** |
-| **KMS Encrypt (Age)** | ~30ms | ~3ms | -27ms | **10x** |
-| **KMS Decrypt (Age)** | ~30ms | ~3ms | -27ms | **10x** |
-| **Orchestrator Status** | ~30ms | ~1ms | -29ms | **30x** |
-| **Orchestrator Tasks List** | ~50ms | ~5ms | -45ms | **10x** |
-| **Orchestrator Validate** | ~100ms | ~10ms | -90ms | **10x** |
-| **Auth Login** | ~100ms | ~80ms | -20ms | 1.25x |
-| **Auth Verify** | ~50ms | ~10ms | -40ms | **5x** |
-| **Auth MFA Verify** | ~80ms | ~60ms | -20ms | 1.3x |
-
-### Use Case: Batch Processing
-
-**Scenario**: Encrypt 100 configuration files
-
-```nushell
-# HTTP API approach
+ Total: ~1-10 ms per operation
+
+Key Features
+
+✅ Performance: 10-50x faster than HTTP API
+✅ Type Safety: Full Nushell type system integration
+✅ Pipeline Support: Native Nushell data structures
+✅ Offline Capability: KMS and orchestrator work without network
+✅ OS Integration: Native keyring for secure token storage
+✅ Graceful Fallback: HTTP still available if plugins not installed
+
+Why Native Plugins?
+
+Performance Comparison
+
+Real-world benchmarks from production workload:
+Operation HTTP API Plugin Improvement Speedup
+KMS Encrypt (RustyVault) ~50 ms ~5 ms -45 ms 10x
+KMS Decrypt (RustyVault) ~50 ms ~5 ms -45 ms 10x
+KMS Encrypt (Age) ~30 ms ~3 ms -27 ms 10x
+KMS Decrypt (Age) ~30 ms ~3 ms -27 ms 10x
+Orchestrator Status ~30 ms ~1 ms -29 ms 30x
+Orchestrator Tasks List ~50 ms ~5 ms -45 ms 10x
+Orchestrator Validate ~100 ms ~10 ms -90 ms 10x
+Auth Login ~100 ms ~80 ms -20 ms 1.25x
+Auth Verify ~50 ms ~10 ms -40 ms 5x
+Auth MFA Verify ~80 ms ~60 ms -20 ms 1.3x
+
+
+Use Case: Batch Processing
+
+Scenario: Encrypt 100 configuration files
+# HTTP API approach
ls configs/*.yaml | each { |file|
http post http://localhost:9998/encrypt { data: (open $file) }
} | save encrypted/
-# Total time: ~5 seconds (50ms × 100)
+# Total time: ~5 seconds (50 ms × 100)
# Plugin approach
ls configs/*.yaml | each { |file|
kms encrypt (open $file) --backend rustyvault
} | save encrypted/
-# Total time: ~0.5 seconds (5ms × 100)
+# Total time: ~0.5 seconds (5 ms × 100)
# Result: 10x faster
-```plaintext
-
-### Developer Experience Benefits
-
-**1. Native Nushell Integration**
-
-```nushell
-# HTTP: Parse JSON, check status codes
+
+Developer Experience Benefits
+
+1. Native Nushell Integration
+# HTTP: Parse JSON, check status codes
let result = http post http://localhost:9998/encrypt { data: "secret" }
if $result.status == "success" {
$result.encrypted
@@ -62886,96 +58813,71 @@ if $result.status == "success" {
# Plugin: Direct return values
kms encrypt "secret"
# Returns encrypted string directly, errors use Nushell's error system
-```plaintext
-
-**2. Pipeline Friendly**
-
-```nushell
-# HTTP: Requires wrapping, JSON parsing
+
+2. Pipeline Friendly
+# HTTP: Requires wrapping, JSON parsing
["secret1", "secret2"] | each { |s|
(http post http://localhost:9998/encrypt { data: $s }).encrypted
}
# Plugin: Natural pipeline flow
["secret1", "secret2"] | each { |s| kms encrypt $s }
-```plaintext
-
-**3. Tab Completion**
-
-```nushell
-# All plugin commands have full tab completion
+
+3. Tab Completion
+# All plugin commands have full tab completion
kms <TAB>
# → encrypt, decrypt, generate-key, status, backends
kms encrypt --<TAB>
# → --backend, --key, --context
-```plaintext
-
----
-
-## Prerequisites
-
-### Required Software
-
-| Software | Minimum Version | Purpose |
-|----------|----------------|---------|
-| **Nushell** | 0.107.1 | Shell and plugin runtime |
-| **Rust** | 1.75+ | Building plugins from source |
-| **Cargo** | (included with Rust) | Build tool |
-
-### Optional Dependencies
-
-| Software | Purpose | Platform |
-|----------|---------|----------|
-| **gnome-keyring** | Secure token storage | Linux |
-| **kwallet** | Secure token storage | Linux (KDE) |
-| **age** | Age encryption backend | All |
-| **RustyVault** | High-performance KMS | All |
-
-### Platform Support
-
-| Platform | Status | Notes |
-|----------|--------|-------|
-| **macOS** | ✅ Full | Keychain integration |
-| **Linux** | ✅ Full | Requires keyring service |
-| **Windows** | ✅ Full | Credential Manager integration |
-| **FreeBSD** | ⚠️ Partial | No keyring integration |
-
----
-
-## Installation
-
-### Step 1: Clone or Navigate to Plugin Directory
-
-```bash
-cd /Users/Akasha/project-provisioning/provisioning/core/plugins/nushell-plugins
-```plaintext
-
-### Step 2: Build All Plugins
-
-```bash
-# Build in release mode (optimized for performance)
+
+Prerequisites
+
+Required Software
+
+Software Minimum Version Purpose
+Nushell 0.107.1 Shell and plugin runtime
+Rust 1.75+ Building plugins from source
+Cargo (included with Rust) Build tool
+
+
+Optional Dependencies
+
+Software Purpose Platform
+gnome-keyring Secure token storage Linux
+kwallet Secure token storage Linux (KDE)
+age Age encryption backend All
+RustyVault High-performance KMS All
+
+
+Platform Support
+
+Platform Status Notes
+macOS ✅ Full Keychain integration
+Linux ✅ Full Requires keyring service
+Windows ✅ Full Credential Manager integration
+FreeBSD ⚠️ Partial No keyring integration
+
+
+Installation
+
+Step 1: Clone or Navigate to Plugin Directory
+
+cd /Users/Akasha/project-provisioning/provisioning/core/plugins/nushell-plugins
+
+Step 2: Build All Plugins
+
+# Build in release mode (optimized for performance)
cargo build --release --all
# Or build individually
cargo build --release -p nu_plugin_auth
cargo build --release -p nu_plugin_kms
cargo build --release -p nu_plugin_orchestrator
-```plaintext
-
-**Expected output:**
-
-```plaintext
- Compiling nu_plugin_auth v0.1.0
+
+Expected output:
+ Compiling nu_plugin_auth v0.1.0
Compiling nu_plugin_kms v0.1.0
Compiling nu_plugin_orchestrator v0.1.0
Finished release [optimized] target(s) in 2m 15s
-```plaintext
-
-### Step 3: Register Plugins with Nushell
-
-```bash
-# Register all three plugins
+
+Step 3: Register Plugins with Nushell
+
+# Register all three plugins
plugin add target/release/nu_plugin_auth
plugin add target/release/nu_plugin_kms
plugin add target/release/nu_plugin_orchestrator
@@ -62984,50 +58886,36 @@ plugin add target/release/nu_plugin_orchestrator
plugin add $PWD/target/release/nu_plugin_auth
plugin add $PWD/target/release/nu_plugin_kms
plugin add $PWD/target/release/nu_plugin_orchestrator
-```plaintext
-
-### Step 4: Verify Installation
-
-```bash
-# List registered plugins
+
+Step 4: Verify Installation
+
+# List registered plugins
plugin list | where name =~ "auth|kms|orch"
# Test each plugin
auth --help
kms --help
orch --help
-```plaintext
-
-**Expected output:**
-
-```plaintext
-╭───┬─────────────────────────┬─────────┬───────────────────────────────────╮
+
+Expected output:
+╭───┬─────────────────────────┬─────────┬───────────────────────────────────╮
│ # │ name │ version │ filename │
├───┼─────────────────────────┼─────────┼───────────────────────────────────┤
│ 0 │ nu_plugin_auth │ 0.1.0 │ .../nu_plugin_auth │
│ 1 │ nu_plugin_kms │ 0.1.0 │ .../nu_plugin_kms │
│ 2 │ nu_plugin_orchestrator │ 0.1.0 │ .../nu_plugin_orchestrator │
╰───┴─────────────────────────┴─────────┴───────────────────────────────────╯
-```plaintext
-
-### Step 5: Configure Environment (Optional)
-
-```bash
-# Add to ~/.config/nushell/env.nu
+
+Step 5: Configure Environment (Optional)
+
+# Add to ~/.config/nushell/env.nu
$env.RUSTYVAULT_ADDR = "http://localhost:8200"
$env.RUSTYVAULT_TOKEN = "your-vault-token"
$env.CONTROL_CENTER_URL = "http://localhost:3000"
$env.ORCHESTRATOR_DATA_DIR = "/opt/orchestrator/data"
-```plaintext
-
----
-
-## Quick Start (5 Minutes)
-
-### 1. Authentication Workflow
-
-```nushell
-# Login (password prompted securely)
+
+
+Quick Start (5 Minutes)
+
+1. Authentication Workflow
+
+# Login (password prompted securely)
auth login admin
# ✓ Login successful
# User: admin
@@ -63054,12 +58942,9 @@ auth mfa verify --code 123456
# Logout
auth logout
# ✓ Logged out successfully
-```plaintext
-
-### 2. KMS Operations
-
-```nushell
-# Encrypt data
+
+2. KMS Operations
+
+# Encrypt data
kms encrypt "my secret data"
# vault:v1:8GawgGuP...
@@ -63077,12 +58962,9 @@ kms status
# Encrypt with specific backend
kms encrypt "data" --backend age --key age1xxxxxxx
-```plaintext
-
-### 3. Orchestrator Operations
-
-```nushell
-# Check orchestrator status (no HTTP call)
+
+3. Orchestrator Operations
+
+# Check orchestrator status (no HTTP call)
orch status
# {
# "active_tasks": 5,
@@ -63091,7 +58973,7 @@ orch status
# }
# Validate workflow
-orch validate workflows/deploy.k
+orch validate workflows/deploy.ncl
# {
# "valid": true,
# "workflow": { "name": "deploy_k8s", "operations": 5 }
@@ -63100,61 +58982,48 @@ orch validate workflows/deploy.k
# List running tasks
orch tasks --status running
# [ { "task_id": "task_123", "name": "deploy_k8s", "progress": 45 } ]
-```plaintext
-
-### 4. Combined Workflow
-
-```nushell
-# Complete authenticated deployment pipeline
+
+4. Combined Workflow
+
+# Complete authenticated deployment pipeline
auth login admin
| if $in.success { auth verify }
| if $in.active {
- orch validate workflows/production.k
+ orch validate workflows/production.ncl
| if $in.valid {
kms encrypt (open secrets.yaml | to json)
| save production-secrets.enc
}
}
# ✓ Pipeline completed successfully
-```plaintext
-
----
-
-## Authentication Plugin (nu_plugin_auth)
-
-The authentication plugin manages JWT-based authentication, MFA enrollment/verification, and session management with OS-native keyring integration.
-
-### Available Commands
-
-| Command | Purpose | Example |
-|---------|---------|---------|
-| `auth login` | Login and store JWT | `auth login admin` |
-| `auth logout` | Logout and clear tokens | `auth logout` |
-| `auth verify` | Verify current session | `auth verify` |
-| `auth sessions` | List active sessions | `auth sessions` |
-| `auth mfa enroll` | Enroll in MFA | `auth mfa enroll totp` |
-| `auth mfa verify` | Verify MFA code | `auth mfa verify --code 123456` |
-
-### Command Reference
-
-#### `auth login <username> [password]`
-
-Login to provisioning platform and store JWT tokens securely in OS keyring.
-
-**Arguments:**
-
-- `username` (required): Username for authentication
-- `password` (optional): Password (prompted if not provided)
-
-**Flags:**
-
-- `--url <url>`: Control center URL (default: `http://localhost:3000`)
-- `--password <password>`: Password (alternative to positional argument)
-
-**Examples:**
-
-```nushell
-# Interactive password prompt (recommended)
+
+Authentication Plugin (nu_plugin_auth)
+
+The authentication plugin manages JWT-based authentication, MFA enrollment/verification, and session management with OS-native keyring integration.
+
+Available Commands
+
+Command           Purpose                   Example
+auth login        Login and store JWT       auth login admin
+auth logout       Logout and clear tokens   auth logout
+auth verify       Verify current session    auth verify
+auth sessions     List active sessions      auth sessions
+auth mfa enroll   Enroll in MFA             auth mfa enroll totp
+auth mfa verify   Verify MFA code           auth mfa verify --code 123456
+
+
+Command Reference
+
+auth login <username> [password]
+
+Login to provisioning platform and store JWT tokens securely in OS keyring.
+Arguments:
+
+username (required): Username for authentication
+password (optional): Password (prompted if not provided)
+
+Flags:
+
+--url <url>: Control center URL (default: http://localhost:3000)
+--password <password>: Password (alternative to positional argument)
+
+Examples:
+# Interactive password prompt (recommended)
auth login admin
# Password: ••••••••
# ✓ Login successful
@@ -63171,28 +59040,23 @@ auth login admin --url https://control-center.example.com
# Pipeline usage
let creds = { username: "admin", password: (input --suppress-output "Password: ") }
auth login $creds.username $creds.password
-```plaintext
-
-**Token Storage Locations:**
-
-- **macOS**: Keychain Access (`login` keychain)
-- **Linux**: Secret Service API (gnome-keyring, kwallet)
-- **Windows**: Windows Credential Manager
-
-**Security Notes:**
-
-- Tokens encrypted at rest by OS
-- Requires user authentication to access (macOS Touch ID, Linux password)
-- Never stored in plain text files
-
-#### `auth logout`
-
-Logout from current session and remove stored tokens from keyring.
-
-**Examples:**
-
-```nushell
-# Simple logout
+
+Token Storage Locations:
+
+macOS: Keychain Access (login keychain)
+Linux: Secret Service API (gnome-keyring, kwallet)
+Windows: Windows Credential Manager
+
+Security Notes:
+
+Tokens encrypted at rest by OS
+Requires user authentication to access (macOS Touch ID, Linux password)
+Never stored in plain text files
+
+auth logout
+
+Logout from current session and remove stored tokens from keyring.
+Examples:
+# Simple logout
auth logout
# ✓ Logged out successfully
@@ -63206,24 +59070,19 @@ if (auth verify | get active) {
auth sessions | each { |sess|
auth logout --session-id $sess.session_id
}
-```plaintext
-
-#### `auth verify`
-
-Verify current session status and check token validity.
-
-**Returns:**
-
-- `active` (bool): Whether session is active
-- `user` (string): Username
-- `role` (string): User role
-- `expires_at` (datetime): Token expiration
-- `mfa_verified` (bool): MFA verification status
-
-**Examples:**
-
-```nushell
-# Check if logged in
+
+auth verify
+
+Verify current session status and check token validity.
+Returns:
+
+active (bool): Whether session is active
+user (string): Username
+role (string): User role
+expires_at (datetime): Token expiration
+mfa_verified (bool): MFA verification status
+
+Examples:
+# Check if logged in
auth verify
# {
# "active": true,
@@ -63246,16 +59105,11 @@ if ($session.expires_at | into datetime) < (date now) {
echo "Session expired, re-authenticating..."
auth login $session.user
}
-```plaintext
-
-#### `auth sessions`
-
-List all active sessions for current user.
-
-**Examples:**
-
-```nushell
-# List all sessions
+
+auth sessions
+
+List all active sessions for current user.
+Examples:
+# List all sessions
auth sessions
# [
# {
@@ -63275,20 +59129,15 @@ auth sessions | where ip_address =~ "192.168"
# Count active sessions
auth sessions | length
-```plaintext
-
-#### `auth mfa enroll <type>`
-
-Enroll in Multi-Factor Authentication (TOTP or WebAuthn).
-
-**Arguments:**
-
-- `type` (required): MFA type (`totp` or `webauthn`)
-
-**TOTP Enrollment:**
-
-```nushell
-auth mfa enroll totp
+
+auth mfa enroll <type>
+
+Enroll in Multi-Factor Authentication (TOTP or WebAuthn).
+Arguments:
+
+type (required): MFA type (totp or webauthn)
+
+TOTP Enrollment:
+auth mfa enroll totp
# ✓ TOTP enrollment initiated
#
# Scan this QR code with your authenticator app:
@@ -63307,12 +59156,9 @@ auth mfa enroll totp
# 2. MNOP-QRST-UVWX
# 3. YZAB-CDEF-GHIJ
# (8 more codes...)
-```plaintext
-
-**WebAuthn Enrollment:**
-
-```nushell
-auth mfa enroll webauthn
+
+WebAuthn Enrollment:
+auth mfa enroll webauthn
# ✓ WebAuthn enrollment initiated
#
# Insert your security key and touch the button...
@@ -63321,36 +59167,31 @@ auth mfa enroll webauthn
# ✓ Security key registered successfully
# Device: YubiKey 5 NFC
# Created: 2025-10-09T13:00:00Z
-```plaintext
-
-**Supported Authenticator Apps:**
-
-- Google Authenticator
-- Microsoft Authenticator
-- Authy
-- 1Password
-- Bitwarden
-
-**Supported Hardware Keys:**
-
-- YubiKey (all models)
-- Titan Security Key
-- Feitian ePass
-- macOS Touch ID
-- Windows Hello
-
-#### `auth mfa verify --code <code>`
-
-Verify MFA code (TOTP or backup code).
-
-**Flags:**
-
-- `--code <code>` (required): 6-digit TOTP code or backup code
-
-**Examples:**
-
-```nushell
-# Verify TOTP code
+
+Supported Authenticator Apps:
+
+Google Authenticator
+Microsoft Authenticator
+Authy
+1Password
+Bitwarden
+
+Supported Hardware Keys:
+
+YubiKey (all models)
+Titan Security Key
+Feitian ePass
+macOS Touch ID
+Windows Hello
+
+auth mfa verify --code <code>
+
+Verify MFA code (TOTP or backup code).
+Flags:
+
+--code <code> (required): 6-digit TOTP code or backup code
+
+Examples:
+# Verify TOTP code
auth mfa verify --code 123456
# ✓ MFA verification successful
@@ -63362,12 +59203,9 @@ auth mfa verify --code ABCD-EFGH-IJKL
# Pipeline usage
let code = input "MFA code: "
auth mfa verify --code $code
-```plaintext
-
-**Error Cases:**
-
-```nushell
-# Invalid code
+
+Error Cases:
+# Invalid code
auth mfa verify --code 999999
# Error: Invalid MFA code
# → Verify time synchronization on your device
@@ -63381,40 +59219,29 @@ auth mfa verify --code 123456
auth mfa verify --code 123456
# Error: MFA not enrolled for this user
# → Run: auth mfa enroll totp
-```plaintext
-
-### Environment Variables
-
-| Variable | Description | Default |
-|----------|-------------|---------|
-| `USER` | Default username | Current OS user |
-| `CONTROL_CENTER_URL` | Control center URL | `http://localhost:3000` |
-| `AUTH_KEYRING_SERVICE` | Keyring service name | `provisioning-auth` |
-
-### Troubleshooting Authentication
-
-**"No active session"**
-
-```nushell
-# Solution: Login first
+
+Environment Variables
+
+Variable               Description            Default
+USER                   Default username       Current OS user
+CONTROL_CENTER_URL     Control center URL     http://localhost:3000
+AUTH_KEYRING_SERVICE   Keyring service name   provisioning-auth
+
+
+Troubleshooting Authentication
+
+“No active session”
+# Solution: Login first
auth login <username>
-```plaintext
-
-**"Keyring error" (macOS)**
-
-```bash
-# Check Keychain Access permissions
+
+“Keyring error” (macOS)
+# Check Keychain Access permissions
# System Preferences → Security & Privacy → Privacy → Full Disk Access
# Add: /Applications/Nushell.app (or /usr/local/bin/nu)
# Or grant access manually
security unlock-keychain ~/Library/Keychains/login.keychain-db
-```plaintext
-
-**"Keyring error" (Linux)**
-
-```bash
-# Install keyring service
+
+“Keyring error” (Linux)
+# Install keyring service
sudo apt install gnome-keyring # Ubuntu/Debian
sudo dnf install gnome-keyring # Fedora
sudo pacman -S gnome-keyring # Arch
@@ -63425,12 +59252,9 @@ sudo apt install kwalletmanager
# Start keyring daemon
eval $(gnome-keyring-daemon --start)
export $(gnome-keyring-daemon --start --components=secrets)
-```plaintext
-
-**"MFA verification failed"**
-
-```nushell
-# Check time synchronization (TOTP requires accurate time)
+
+“MFA verification failed”
+# Check time synchronization (TOTP requires accurate time)
# macOS:
sudo sntp -sS time.apple.com
@@ -63441,89 +59265,77 @@ sudo systemctl restart systemd-timesyncd
# Use backup code if TOTP not working
auth mfa verify --code ABCD-EFGH-IJKL
-```plaintext
-
----
-
-## KMS Plugin (nu_plugin_kms)
-
-The KMS plugin provides high-performance encryption and decryption using multiple backend providers.
-
-### Supported Backends
-
-| Backend | Performance | Use Case | Setup Complexity |
-|---------|------------|----------|------------------|
-| **rustyvault** | ⚡ Very Fast (~5ms) | Production KMS | Medium |
-| **age** | ⚡ Very Fast (~3ms) | Local development | Low |
-| **cosmian** | 🐢 Moderate (~30ms) | Cloud KMS | Medium |
-| **aws** | 🐢 Moderate (~50ms) | AWS environments | Medium |
-| **vault** | 🐢 Moderate (~40ms) | Enterprise KMS | High |
-
-### Backend Selection Guide
-
-**Choose `rustyvault` when:**
-
-- ✅ Running in production with high throughput requirements
-- ✅ Need ~5ms encryption/decryption latency
-- ✅ Have RustyVault server deployed
-- ✅ Require key rotation and versioning
-
-**Choose `age` when:**
-
-- ✅ Developing locally without external dependencies
-- ✅ Need simple file encryption
-- ✅ Want ~3ms latency
-- ❌ Don't need centralized key management
-
-**Choose `cosmian` when:**
-
-- ✅ Using Cosmian KMS service
-- ✅ Need cloud-based key management
-- ⚠️ Can accept ~30ms latency
-
-**Choose `aws` when:**
-
-- ✅ Deployed on AWS infrastructure
-- ✅ Using AWS IAM for access control
-- ✅ Need AWS KMS integration
-- ⚠️ Can accept ~50ms latency
-
-**Choose `vault` when:**
-
-- ✅ Using HashiCorp Vault enterprise
-- ✅ Need advanced policy management
-- ✅ Require audit trails
-- ⚠️ Can accept ~40ms latency
-
-### Available Commands
-
-| Command | Purpose | Example |
-|---------|---------|---------|
-| `kms encrypt` | Encrypt data | `kms encrypt "secret"` |
-| `kms decrypt` | Decrypt data | `kms decrypt "vault:v1:..."` |
-| `kms generate-key` | Generate DEK | `kms generate-key --spec AES256` |
-| `kms status` | Backend status | `kms status` |
-
-### Command Reference
-
-#### `kms encrypt <data> [--backend <backend>]`
-
-Encrypt data using specified KMS backend.
-
-**Arguments:**
-
-- `data` (required): Data to encrypt (string or binary)
-
-**Flags:**
-
-- `--backend <backend>`: KMS backend (`rustyvault`, `age`, `cosmian`, `aws`, `vault`)
-- `--key <key>`: Key ID or recipient (backend-specific)
-- `--context <context>`: Additional authenticated data (AAD)
-
-**Examples:**
-
-```nushell
-# Auto-detect backend from environment
+
+KMS Plugin (nu_plugin_kms)
+
+The KMS plugin provides high-performance encryption and decryption using multiple backend providers.
+
+Supported Backends
+
+Backend Performance Use Case Setup Complexity
+rustyvault ⚡ Very Fast (~5 ms) Production KMS Medium
+age ⚡ Very Fast (~3 ms) Local development Low
+cosmian 🐢 Moderate (~30 ms) Cloud KMS Medium
+aws 🐢 Moderate (~50 ms) AWS environments Medium
+vault 🐢 Moderate (~40 ms) Enterprise KMS High
+
+
+Backend Selection Guide
+
+Choose rustyvault when:
+
+✅ Running in production with high throughput requirements
+✅ Need ~5 ms encryption/decryption latency
+✅ Have RustyVault server deployed
+✅ Require key rotation and versioning
+
+Choose age when:
+
+✅ Developing locally without external dependencies
+✅ Need simple file encryption
+✅ Want ~3 ms latency
+❌ Don’t need centralized key management
+
+Choose cosmian when:
+
+✅ Using Cosmian KMS service
+✅ Need cloud-based key management
+⚠️ Can accept ~30 ms latency
+
+Choose aws when:
+
+✅ Deployed on AWS infrastructure
+✅ Using AWS IAM for access control
+✅ Need AWS KMS integration
+⚠️ Can accept ~50 ms latency
+
+Choose vault when:
+
+✅ Using HashiCorp Vault enterprise
+✅ Need advanced policy management
+✅ Require audit trails
+⚠️ Can accept ~40 ms latency
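The decision rules above can be collapsed into a small helper for automation. An illustrative POSIX-shell sketch only; `pick_backend` and its context labels are assumptions, not plugin behavior:

```shell
# Map a deployment context to the suggested default backend,
# following the selection guide above. Labels are hypothetical.
pick_backend() {
  case "$1" in
    prod)       echo "rustyvault" ;;  # ~5 ms latency, rotation/versioning
    dev)        echo "age" ;;         # no external dependencies
    aws)        echo "aws" ;;         # IAM-based access control
    enterprise) echo "vault" ;;       # policies and audit trails
    *)          echo "age" ;;         # safe local fallback
  esac
}

pick_backend prod   # prints rustyvault
```

A wrapper script can then pass the result straight to `kms encrypt --backend`.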
+
+Available Commands
+
+Command            Purpose          Example
+kms encrypt        Encrypt data     kms encrypt "secret"
+kms decrypt        Decrypt data     kms decrypt "vault:v1:..."
+kms generate-key   Generate DEK     kms generate-key --spec AES256
+kms status         Backend status   kms status
+
+
+Command Reference
+
+kms encrypt <data> [--backend <backend>]
+
+Encrypt data using specified KMS backend.
+Arguments:
+
+data (required): Data to encrypt (string or binary)
+
+Flags:
+
+--backend <backend>: KMS backend (rustyvault, age, cosmian, aws, vault)
+--key <key>: Key ID or recipient (backend-specific)
+--context <context>: Additional authenticated data (AAD)
+
+Examples:
+# Auto-detect backend from environment
kms encrypt "secret configuration data"
# vault:v1:8GawgGuP+emDKX5q...
@@ -63552,32 +59364,27 @@ ls configs/*.yaml | each { |file|
kms encrypt (open $file.name) --backend age
| save $"encrypted/($file.name).enc"
}
-```plaintext
-
-**Output Formats:**
-
-- **RustyVault**: `vault:v1:base64_ciphertext`
-- **Age**: `-----BEGIN AGE ENCRYPTED FILE-----...-----END AGE ENCRYPTED FILE-----`
-- **AWS**: `base64_aws_kms_ciphertext`
-- **Cosmian**: `cosmian:v1:base64_ciphertext`
-
-#### `kms decrypt <encrypted> [--backend <backend>]`
-
-Decrypt KMS-encrypted data.
-
-**Arguments:**
-
-- `encrypted` (required): Encrypted data (detects format automatically)
-
-**Flags:**
-
-- `--backend <backend>`: KMS backend (auto-detected from format if not specified)
-- `--context <context>`: Additional authenticated data (must match encryption context)
-
-**Examples:**
-
-```nushell
-# Auto-detect backend from format
+
+Output Formats:
+
+RustyVault : vault:v1:base64_ciphertext
+Age : -----BEGIN AGE ENCRYPTED FILE-----...-----END AGE ENCRYPTED FILE-----
+AWS : base64_aws_kms_ciphertext
+Cosmian : cosmian:v1:base64_ciphertext
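Auto-detection (used by `kms decrypt` below) presumably keys off these prefixes; an illustrative sketch of the mapping, not the plugin's actual implementation:

```nushell
# Illustrative only: infer the backend from the ciphertext shape using
# the output formats listed above. (HashiCorp Vault shares the vault:v1:
# prefix with RustyVault, so real detection may need extra context.)
def detect-backend [ciphertext: string] {
    if ($ciphertext | str starts-with "vault:v1:") {
        "rustyvault"
    } else if ($ciphertext | str starts-with "cosmian:v1:") {
        "cosmian"
    } else if ($ciphertext | str starts-with "-----BEGIN AGE ENCRYPTED FILE-----") {
        "age"
    } else {
        "aws"   # bare base64 carries no textual prefix
    }
}
```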
+
+#### `kms decrypt <encrypted> [--backend <backend>]`
+Decrypt KMS-encrypted data.
+Arguments:
+
+encrypted (required): Encrypted data (detects format automatically)
+
+Flags:
+
+--backend <backend>: KMS backend (auto-detected from format if not specified)
+--context <context>: Additional authenticated data (must match encryption context)
+
+Examples:
+# Auto-detect backend from format
kms decrypt "vault:v1:8GawgGuP..."
# secret configuration data
@@ -63606,12 +59413,9 @@ open secrets.json
| kms decrypt
| str trim
| psql --dbname mydb --password
-```plaintext
-
-**Error Cases:**
-
-```nushell
-# Invalid ciphertext
+
+Error Cases:
+# Invalid ciphertext
kms decrypt "invalid_data"
# Error: Invalid ciphertext format
# → Verify data was encrypted with KMS
@@ -63625,21 +59429,16 @@ kms decrypt "vault:v1:abc..." --context "wrong=context"
kms decrypt "vault:v1:abc..."
# Error: Failed to connect to RustyVault at http://localhost:8200
# → Check RustyVault is running: curl http://localhost:8200/v1/sys/health
-```plaintext
-
-#### `kms generate-key [--spec <spec>]`
-
-Generate data encryption key (DEK) using KMS envelope encryption.
-
-**Flags:**
-
-- `--spec <spec>`: Key specification (`AES128` or `AES256`, default: `AES256`)
-- `--backend <backend>`: KMS backend
-
-**Examples:**
-
-```nushell
-# Generate AES-256 key
+
+#### `kms generate-key [--spec <spec>]`
+Generate data encryption key (DEK) using KMS envelope encryption.
+Flags:
+
+--spec <spec>: Key specification (AES128 or AES256, default: AES256)
+--backend <backend>: KMS backend
+
+Examples:
+# Generate AES-256 key
kms generate-key
# {
# "plaintext": "rKz3N8xPq...", # base64-encoded key
@@ -63662,22 +59461,17 @@ let encrypted_data = ($data | openssl enc -aes-256-cbc -K $dek.plaintext)
let envelope = open secure_data.json
let dek = kms decrypt $envelope.encrypted_key
$envelope.data | openssl enc -d -aes-256-cbc -K $dek
-```plaintext
-
-**Use Cases:**
-
-- Envelope encryption (encrypt large data locally, protect DEK with KMS)
-- Database field encryption
-- File encryption with key wrapping
-
-#### `kms status`
-
-Show KMS backend status, configuration, and health.
-
-**Examples:**
-
-```nushell
-# Show current backend status
+
+Use Cases:
+
+Envelope encryption (encrypt large data locally, protect DEK with KMS)
+Database field encryption
+File encryption with key wrapping
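Putting the generate/decrypt examples together, a complete envelope round trip might look like this (file names and the exact `openssl` flags are illustrative):

```nushell
# Envelope encryption sketch: the bulk data never leaves the machine;
# only the small data-encryption key (DEK) is protected by the KMS.
let dek = kms generate-key --spec AES256

# Encrypt locally with the plaintext DEK, store ciphertext + wrapped key.
let ciphertext = (open --raw large_file.bin | openssl enc -aes-256-cbc -K $dek.plaintext)
{ data: $ciphertext, encrypted_key: $dek.encrypted } | save envelope.json

# Later: unwrap the DEK via KMS, then decrypt locally.
let envelope = open envelope.json
let plain_dek = kms decrypt $envelope.encrypted_key
$envelope.data | openssl enc -d -aes-256-cbc -K $plain_dek
```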
+
+#### `kms status`
+Show KMS backend status, configuration, and health.
+Examples:
+# Show current backend status
kms status
# {
# "backend": "rustyvault",
@@ -63705,29 +59499,20 @@ if (kms status | get status) == "healthy" {
} else {
error make { msg: "KMS unhealthy" }
}
-```plaintext
-
-### Backend Configuration
-
-#### RustyVault Backend
-
-```bash
-# Environment variables
+
+### Backend Configuration
+
+#### RustyVault Backend
+# Environment variables
export RUSTYVAULT_ADDR="http://localhost:8200"
export RUSTYVAULT_TOKEN="hvs.xxxxxxxxxxxxx"
export RUSTYVAULT_MOUNT="transit" # Transit engine mount point
export RUSTYVAULT_KEY="provisioning-main" # Default key name
-```plaintext
-
-```nushell
-# Usage
+
+# Usage
kms encrypt "data" --backend rustyvault --key provisioning-main
-```plaintext
-
-**Setup RustyVault:**
-
-```bash
-# Start RustyVault
+
+Setup RustyVault:
+# Start RustyVault
rustyvault server -dev
# Enable transit engine
@@ -63735,46 +59520,33 @@ rustyvault secrets enable transit
# Create encryption key
rustyvault write -f transit/keys/provisioning-main
-```plaintext
-
-#### Age Backend
-
-```bash
-# Generate Age keypair
+
+#### Age Backend
+# Generate Age keypair
age-keygen -o ~/.age/key.txt
# Environment variables
export AGE_IDENTITY="$HOME/.age/key.txt" # Private key
export AGE_RECIPIENT="age1xxxxxxxxx" # Public key (from key.txt)
-```plaintext
-
-```nushell
-# Usage
+
+# Usage
kms encrypt "data" --backend age
kms decrypt (open file.enc) --backend age
-```plaintext
-
-#### AWS KMS Backend
-
-```bash
-# AWS credentials
+
+#### AWS KMS Backend
+# AWS credentials
export AWS_REGION="us-east-1"
export AWS_ACCESS_KEY_ID="AKIAXXXXX"
export AWS_SECRET_ACCESS_KEY="xxxxx"
# KMS configuration
export AWS_KMS_KEY_ID="alias/provisioning"
-```plaintext
-
-```nushell
-# Usage
+
+# Usage
kms encrypt "data" --backend aws --key alias/provisioning
-```plaintext
-
-**Setup AWS KMS:**
-
-```bash
-# Create KMS key
+
+Setup AWS KMS:
+# Create KMS key
aws kms create-key --description "Provisioning Platform"
# Create alias
@@ -63783,71 +59555,52 @@ aws kms create-alias --alias-name alias/provisioning --target-key-id <key-id>
# Grant permissions
aws kms create-grant --key-id <key-id> --grantee-principal <role-arn> \
--operations Encrypt Decrypt GenerateDataKey
-```plaintext
-
-#### Cosmian Backend
-
-```bash
-# Cosmian KMS configuration
+
+#### Cosmian Backend
+# Cosmian KMS configuration
export KMS_HTTP_URL="http://localhost:9998"
export KMS_HTTP_BACKEND="cosmian"
export COSMIAN_API_KEY="your-api-key"
-```plaintext
-
-```nushell
-# Usage
+
+# Usage
kms encrypt "data" --backend cosmian
-```plaintext
-
-#### Vault Backend (HashiCorp)
-
-```bash
-# Vault configuration
+
+#### Vault Backend (HashiCorp)
+# Vault configuration
export VAULT_ADDR="https://vault.example.com:8200"
export VAULT_TOKEN="hvs.xxxxxxxxxxxxx"
export VAULT_MOUNT="transit"
export VAULT_KEY="provisioning"
-```plaintext
-
-```nushell
-# Usage
+
+# Usage
kms encrypt "data" --backend vault --key provisioning
-```plaintext
-
-### Performance Benchmarks
-
-**Test Setup:**
-
-- Data size: 1KB
-- Iterations: 1000
-- Hardware: Apple M1, 16GB RAM
-- Network: localhost
-
-**Results:**
-
-| Backend | Encrypt (avg) | Decrypt (avg) | Throughput (ops/sec) |
-|---------|---------------|---------------|----------------------|
-| RustyVault | 4.8ms | 5.1ms | ~200 |
-| Age | 2.9ms | 3.2ms | ~320 |
-| Cosmian HTTP | 31ms | 29ms | ~33 |
-| AWS KMS | 52ms | 48ms | ~20 |
-| Vault | 38ms | 41ms | ~25 |
-
-**Scaling Test (1000 operations):**
-
-```nushell
-# RustyVault: ~5 seconds
+
+### Performance Benchmarks
+Test Setup:
+
+Data size: 1 KB
+Iterations: 1000
+Hardware: Apple M1, 16 GB RAM
+Network: localhost
+
+Results:
+| Backend | Encrypt (avg) | Decrypt (avg) | Throughput (ops/sec) |
+|---------|---------------|---------------|----------------------|
+| RustyVault | 4.8 ms | 5.1 ms | ~200 |
+| Age | 2.9 ms | 3.2 ms | ~320 |
+| Cosmian HTTP | 31 ms | 29 ms | ~33 |
+| AWS KMS | 52 ms | 48 ms | ~20 |
+| Vault | 38 ms | 41 ms | ~25 |
+
+
+Scaling Test (1000 operations):
+# RustyVault: ~5 seconds
0..1000 | each { |_| kms encrypt "data" --backend rustyvault } | length
# Age: ~3 seconds
0..1000 | each { |_| kms encrypt "data" --backend age } | length
-```plaintext
-
-### Troubleshooting KMS
-
-**"RustyVault connection failed"**
-
-```bash
-# Check RustyVault is running
+
+### Troubleshooting KMS
+“RustyVault connection failed”
+# Check RustyVault is running
curl http://localhost:8200/v1/sys/health
# Expected: { "initialized": true, "sealed": false }
@@ -63857,12 +59610,9 @@ echo $env.RUSTYVAULT_TOKEN
# Test authentication
curl -H "X-Vault-Token: $RUSTYVAULT_TOKEN" $RUSTYVAULT_ADDR/v1/sys/health
-```plaintext
-
-**"Age encryption failed"**
-
-```bash
-# Check Age keys exist
+
+“Age encryption failed”
+# Check Age keys exist
ls -la ~/.age/
# Expected: key.txt
@@ -63875,12 +59625,9 @@ cat ~/.age/key.txt | head -1
# Extract public key
export AGE_RECIPIENT=$(grep "public key:" ~/.age/key.txt | cut -d: -f2 | tr -d ' ')
echo $AGE_RECIPIENT
-```plaintext
-
-**"AWS KMS access denied"**
-
-```bash
-# Verify AWS credentials
+
+“AWS KMS access denied”
+# Verify AWS credentials
aws sts get-caller-identity
# Expected: Account, UserId, Arn
@@ -63889,36 +59636,26 @@ aws kms describe-key --key-id alias/provisioning
# Test encryption
aws kms encrypt --key-id alias/provisioning --plaintext "test"
-```plaintext
-
----
-
-## Orchestrator Plugin (nu_plugin_orchestrator)
-
-The orchestrator plugin provides direct file-based access to orchestrator state, eliminating HTTP overhead for status queries and validation.
-
-### Available Commands
-
-| Command | Purpose | Example |
-|---------|---------|---------|
-| `orch status` | Orchestrator status | `orch status` |
-| `orch validate` | Validate workflow | `orch validate workflow.k` |
-| `orch tasks` | List tasks | `orch tasks --status running` |
-
-### Command Reference
-
-#### `orch status [--data-dir <dir>]`
-
-Get orchestrator status from local files (no HTTP, ~1ms latency).
-
-**Flags:**
-
-- `--data-dir <dir>`: Data directory (default from `ORCHESTRATOR_DATA_DIR`)
-
-**Examples:**
-
-```nushell
-# Default data directory
+
+
+## Orchestrator Plugin (nu_plugin_orchestrator)
+The orchestrator plugin provides direct file-based access to orchestrator state, eliminating HTTP overhead for status queries and validation.
+### Available Commands
+| Command | Purpose | Example |
+|---------|---------|---------|
+| `orch status` | Orchestrator status | `orch status` |
+| `orch validate` | Validate workflow | `orch validate workflow.ncl` |
+| `orch tasks` | List tasks | `orch tasks --status running` |
+
+### Command Reference
+
+#### `orch status [--data-dir <dir>]`
+Get orchestrator status from local files (no HTTP, ~1 ms latency).
+Flags:
+
+--data-dir <dir>: Data directory (default from ORCHESTRATOR_DATA_DIR)
+
+Examples:
+# Default data directory
orch status
# {
# "active_tasks": 5,
@@ -63943,25 +59680,20 @@ while true {
if (orch status | get failed_tasks) > 0 {
echo "⚠️ Failed tasks detected!"
}
-```plaintext
-
-#### `orch validate <workflow.k> [--strict]`
-
-Validate workflow KCL file syntax and structure.
-
-**Arguments:**
-
-- `workflow.k` (required): Path to KCL workflow file
-
-**Flags:**
-
-- `--strict`: Enable strict validation (warnings as errors)
-
-**Examples:**
-
-```nushell
-# Basic validation
-orch validate workflows/deploy.k
+
+#### `orch validate <workflow.ncl> [--strict]`
+Validate workflow Nickel file syntax and structure.
+Arguments:
+
+workflow.ncl (required): Path to Nickel workflow file
+
+Flags:
+
+--strict: Enable strict validation (warnings as errors)
+
+Examples:
+# Basic validation
+orch validate workflows/deploy.ncl
# {
# "valid": true,
# "workflow": {
@@ -63974,13 +59706,13 @@ orch validate workflows/deploy.k
# }
# Strict mode (warnings cause failure)
-orch validate workflows/deploy.k --strict
+orch validate workflows/deploy.ncl --strict
# Error: Validation failed with warnings:
# - Operation 'create_servers': Missing retry_policy
# - Operation 'install_k8s': Resource limits not specified
# Validate all workflows
-ls workflows/*.k | each { |file|
+ls workflows/*.ncl | each { |file|
let result = orch validate $file.name
if $result.valid {
echo $"✓ ($file.name)"
@@ -63991,39 +59723,34 @@ ls workflows/*.k | each { |file|
# CI/CD validation
try {
- orch validate workflow.k --strict
+ orch validate workflow.ncl --strict
echo "✓ Validation passed"
} catch {
echo "✗ Validation failed"
exit 1
}
-```plaintext
-
-**Validation Checks:**
-
-- ✅ KCL syntax correctness
-- ✅ Required fields present (`name`, `version`, `operations`)
-- ✅ Dependency graph valid (no cycles)
-- ✅ Resource limits within bounds
-- ✅ Provider configurations valid
-- ✅ Operation types supported
-- ⚠️ Optional: Retry policies defined
-- ⚠️ Optional: Resource limits specified
-
-#### `orch tasks [--status <status>] [--limit <n>]`
-
-List orchestrator tasks from local state.
-
-**Flags:**
-
-- `--status <status>`: Filter by status (`pending`, `running`, `completed`, `failed`)
-- `--limit <n>`: Limit results (default: 100)
-- `--data-dir <dir>`: Data directory
-
-**Examples:**
-
-```nushell
-# All tasks (last 100)
+
+Validation Checks:
+
+✅ Nickel syntax correctness
+✅ Required fields present (name, version, operations)
+✅ Dependency graph valid (no cycles)
+✅ Resource limits within bounds
+✅ Provider configurations valid
+✅ Operation types supported
+⚠️ Optional: Retry policies defined
+⚠️ Optional: Resource limits specified
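For reference, a minimal workflow that would pass the required-field checks might look like the following (the exact Nickel schema is an assumption; only `name`, `version`, and `operations` are confirmed required above):

```nickel
{
  name = "deploy-example",
  version = "1.0.0",
  operations = [
    {
      name = "create_servers",
      # recommended by --strict mode:
      retry_policy = { max_attempts = 3, backoff = "exponential" },
    },
  ],
}
```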
+
+#### `orch tasks [--status <status>] [--limit <n>]`
+List orchestrator tasks from local state.
+Flags:
+
+--status <status>: Filter by status (pending, running, completed, failed)
+--limit <n>: Limit results (default: 100)
+--data-dir <dir>: Data directory
+
+Examples:
+# All tasks (last 100)
orch tasks
# [
# {
@@ -64056,42 +59783,33 @@ watch {
orch tasks | group-by status | transpose status tasks | each { |row|
    { status: $row.status, count: ($row.tasks | length) }
}
-```plaintext
-
-### Environment Variables
-
-| Variable | Description | Default |
-|----------|-------------|---------|
-| `ORCHESTRATOR_DATA_DIR` | Data directory | `provisioning/platform/orchestrator/data` |
-
-### Performance Comparison
-
-| Operation | HTTP API | Plugin | Latency Reduction |
-|-----------|----------|--------|-------------------|
-| Status query | ~30ms | ~1ms | **97% faster** |
-| Validate workflow | ~100ms | ~10ms | **90% faster** |
-| List tasks | ~50ms | ~5ms | **90% faster** |
-
-**Use Case: CI/CD Pipeline**
-
-```nushell
-# HTTP approach (slow)
+
+### Environment Variables
+| Variable | Description | Default |
+|----------|-------------|---------|
+| `ORCHESTRATOR_DATA_DIR` | Data directory | `provisioning/platform/orchestrator/data` |
+
+
+### Performance Comparison
+| Operation | HTTP API | Plugin | Latency Reduction |
+|-----------|----------|--------|-------------------|
+| Status query | ~30 ms | ~1 ms | 97% faster |
+| Validate workflow | ~100 ms | ~10 ms | 90% faster |
+| List tasks | ~50 ms | ~5 ms | 90% faster |
+
+
+Use Case: CI/CD Pipeline
+# HTTP approach (slow)
http get http://localhost:9090/tasks --status running
| each { |task| http get $"http://localhost:9090/tasks/($task.id)" }
-# Total: ~500ms for 10 tasks
+# Total: ~500 ms for 10 tasks
# Plugin approach (fast)
orch tasks --status running
-# Total: ~5ms for 10 tasks
+# Total: ~5 ms for 10 tasks
# Result: 100x faster
-```plaintext
-
-### Troubleshooting Orchestrator
-
-**"Failed to read status"**
-
-```bash
-# Check data directory exists
+
+### Troubleshooting Orchestrator
+“Failed to read status”
+# Check data directory exists
ls -la provisioning/platform/orchestrator/data/
# Create if missing
@@ -64099,23 +59817,17 @@ mkdir -p provisioning/platform/orchestrator/data
# Check permissions (must be readable)
chmod 755 provisioning/platform/orchestrator/data
-```plaintext
+
+“Workflow validation failed”
+# Use strict mode for detailed errors
+orch validate workflows/deploy.ncl --strict
-**"Workflow validation failed"**
-
-```nushell
-# Use strict mode for detailed errors
-orch validate workflows/deploy.k --strict
-
-# Check KCL syntax manually
-kcl fmt workflows/deploy.k
-kcl run workflows/deploy.k
-```plaintext
-
-**"No tasks found"**
-
-```bash
-# Check orchestrator running
+# Check Nickel syntax manually
+nickel typecheck workflows/deploy.ncl
+nickel eval workflows/deploy.ncl
+
+“No tasks found”
+# Check orchestrator running
ps aux | grep orchestrator
# Start orchestrator if not running
@@ -64124,18 +59836,12 @@ cd provisioning/platform/orchestrator
# Check task files
ls provisioning/platform/orchestrator/data/tasks/
-```plaintext
-
----
-
-## Integration Examples
-
-### Example 1: Complete Authenticated Deployment
-
-Full workflow with authentication, secrets, and deployment:
-
-```nushell
-# Step 1: Login with MFA
+
+## Integration Examples
+
+### Example 1: Complete Authenticated Deployment
+Full workflow with authentication, secrets, and deployment:
+# Step 1: Login with MFA
auth login admin
auth mfa verify --code (input "MFA code: ")
@@ -64145,7 +59851,7 @@ if (orch status | get health) != "healthy" {
}
# Step 3: Validate deployment workflow
-let validation = orch validate workflows/production-deploy.k --strict
+let validation = orch validate workflows/production-deploy.ncl --strict
if not $validation.valid {
error make { msg: $"Validation failed: ($validation.errors)" }
}
@@ -64167,14 +59873,10 @@ while (orch tasks --status running | length) > 0 {
}
echo "✓ Deployment complete"
-```plaintext
-
-### Example 2: Batch Secret Rotation
-
-Rotate all secrets in multiple environments:
-
-```nushell
-# Rotate database passwords
+
+### Example 2: Batch Secret Rotation
+Rotate all secrets in multiple environments:
+# Rotate database passwords
["dev", "staging", "production"] | each { |env|
# Generate new password
let new_password = (openssl rand -base64 32)
@@ -64191,14 +59893,10 @@ Rotate all secrets in multiple environments:
echo $"✓ Rotated password for ($env)"
}
-```plaintext
-
-### Example 3: Multi-Environment Deployment
-
-Deploy to multiple environments with validation:
-
-```nushell
-# Define environments
+
+### Example 3: Multi-Environment Deployment
+Deploy to multiple environments with validation:
+# Define environments
let environments = [
{ name: "dev", validate: "basic" },
{ name: "staging", validate: "strict" },
@@ -64218,9 +59916,9 @@ $environments | each { |env|
# Validate workflow
let validation = if $env.validate == "strict" {
- orch validate $"workflows/($env.name)-deploy.k" --strict
+ orch validate $"workflows/($env.name)-deploy.ncl" --strict
} else {
- orch validate $"workflows/($env.name)-deploy.k"
+ orch validate $"workflows/($env.name)-deploy.ncl"
}
if not $validation.valid {
@@ -64236,14 +59934,10 @@ $environments | each { |env|
echo $"✓ Deployed to ($env.name)"
}
-```plaintext
-
-### Example 4: Automated Backup and Encryption
-
-Backup configuration files with encryption:
-
-```nushell
-# Backup script
+
+### Example 4: Automated Backup and Encryption
+Backup configuration files with encryption:
+# Backup script
let backup_dir = $"backups/(date now | format date "%Y%m%d-%H%M%S")"
mkdir $backup_dir
@@ -64263,14 +59957,10 @@ ls configs/**/*.yaml | each { |file|
} | save $"($backup_dir)/manifest.json"
echo $"✓ Backup complete: ($backup_dir)"
-```plaintext
-
-### Example 5: Health Monitoring Dashboard
-
-Real-time health monitoring:
-
-```nushell
-# Health dashboard
+
+### Example 5: Health Monitoring Dashboard
+Real-time health monitoring:
+# Health dashboard
while true {
clear
@@ -64302,74 +59992,57 @@ while true {
sleep 10sec
}
-```plaintext
-
----
-
-## Best Practices
-
-### When to Use Plugins vs HTTP
-
-**✅ Use Plugins When:**
-
-- Performance is critical (high-frequency operations)
-- Working in pipelines (Nushell data structures)
-- Need offline capability (KMS, orchestrator local ops)
-- Building automation scripts
-- CI/CD pipelines
-
-**Use HTTP When:**
-
-- Calling from external systems (not Nushell)
-- Need consistent REST API interface
-- Cross-language integration
-- Web UI backend
-
-### Performance Optimization
-
-**1. Batch Operations**
-
-```nushell
-# ❌ Slow: Individual HTTP calls in loop
+
+## Best Practices
+
+### When to Use Plugins vs HTTP
+✅ Use Plugins When:
+
+Performance is critical (high-frequency operations)
+Working in pipelines (Nushell data structures)
+Need offline capability (KMS, orchestrator local ops)
+Building automation scripts
+CI/CD pipelines
+
+Use HTTP When:
+
+Calling from external systems (not Nushell)
+Need consistent REST API interface
+Cross-language integration
+Web UI backend
+
+### Performance Optimization
+1. Batch Operations
+# ❌ Slow: Individual HTTP calls in loop
ls configs/*.yaml | each { |file|
http post http://localhost:9998/encrypt { data: (open $file.name) }
}
-# Total: ~5 seconds (50ms × 100)
+# Total: ~5 seconds (50 ms × 100)
# ✅ Fast: Plugin in pipeline
ls configs/*.yaml | each { |file|
kms encrypt (open $file.name)
}
-# Total: ~0.5 seconds (5ms × 100)
-```plaintext
-
-**2. Parallel Processing**
-
-```nushell
-# Process multiple operations in parallel
+# Total: ~0.5 seconds (5 ms × 100)
+
+2. Parallel Processing
+# Process multiple operations in parallel
ls configs/*.yaml
| par-each { |file|
kms encrypt (open $file.name) | save $"encrypted/($file.name).enc"
}
-```plaintext
-
-**3. Caching Session State**
-
-```nushell
-# Cache auth verification
+
+3. Caching Session State
+# Cache auth verification
let $auth_cache = auth verify
if $auth_cache.active {
# Use cached result instead of repeated calls
echo $"Authenticated as ($auth_cache.user)"
}
-```plaintext
-
-### Error Handling
-
-**Graceful Degradation:**
-
-```nushell
-# Try plugin, fallback to HTTP if unavailable
+
+
+### Error Handling
+# Try plugin, fallback to HTTP if unavailable
def kms_encrypt [data: string] {
try {
kms encrypt $data
@@ -64377,12 +60050,9 @@ def kms_encrypt [data: string] {
http post http://localhost:9998/encrypt { data: $data } | get encrypted
}
}
-```plaintext
-
-**Comprehensive Error Handling:**
-
-```nushell
-# Handle all error cases
+
+Comprehensive Error Handling:
+# Handle all error cases
def safe_deployment [] {
# Check authentication
let auth_status = try {
@@ -64402,7 +60072,7 @@ def safe_deployment [] {
# Validate workflow
let validation = try {
- orch validate workflow.k --strict
+ orch validate workflow.ncl --strict
} catch {
error make { msg: "Workflow validation failed" }
}
@@ -64413,65 +60083,46 @@ def safe_deployment [] {
provisioning cluster create production
}
}
-```plaintext
-
-### Security Best Practices
-
-**1. Never Log Decrypted Data**
-
-```nushell
-# ❌ BAD: Logs plaintext password
+
+### Security Best Practices
+1. Never Log Decrypted Data
+# ❌ BAD: Logs plaintext password
let password = kms decrypt $encrypted_password
echo $"Password: ($password)" # Visible in logs!
# ✅ GOOD: Use directly without logging
let password = kms decrypt $encrypted_password
psql --dbname mydb --password $password # Not logged
-```plaintext
-
-**2. Use Context (AAD) for Critical Data**
-
-```nushell
-# Encrypt with context
+
+2. Use Context (AAD) for Critical Data
+# Encrypt with context
let context = $"user=(whoami),env=production,date=(date now | format date "%Y-%m-%d")"
kms encrypt $sensitive_data --context $context
# Decrypt requires same context
kms decrypt $encrypted --context $context
-```plaintext
-
-**3. Rotate Backup Codes**
-
-```nushell
-# After using backup code, generate new set
+
+3. Rotate Backup Codes
+# After using backup code, generate new set
auth mfa verify --code ABCD-EFGH-IJKL
# Warning: Backup code used
auth mfa regenerate-backups
# New backup codes generated
-```plaintext
-
-**4. Limit Token Lifetime**
-
-```nushell
-# Check token expiration before long operations
+
+4. Limit Token Lifetime
+# Check token expiration before long operations
let session = auth verify
let expires_in = (($session.expires_at | into datetime) - (date now))
-if $expires_in < 5min {
+if $expires_in < 5min {
echo "⚠️ Token expiring soon, re-authenticating..."
auth login $session.user
}
-```plaintext
-
----
-
-## Troubleshooting
-
-### Common Issues Across Plugins
-
-**"Plugin not found"**
-
-```bash
-# Check plugin registration
+
+## Troubleshooting
+
+### Common Issues Across Plugins
+“Plugin not found”
+# Check plugin registration
plugin list | where name =~ "auth|kms|orch"
# Re-register if missing
@@ -64483,12 +60134,9 @@ plugin add target/release/nu_plugin_orchestrator
# Restart Nushell
exit
nu
-```plaintext
-
-**"Plugin command failed"**
-
-```nushell
-# Enable debug mode
+
+“Plugin command failed”
+# Enable debug mode
$env.RUST_LOG = "debug"
# Run command again to see detailed errors
@@ -64496,25 +60144,18 @@ kms encrypt "test"
# Check plugin version compatibility
plugin list | where name =~ "kms" | select name version
-```plaintext
-
-**"Permission denied"**
-
-```bash
-# Check plugin executable permissions
+
+“Permission denied”
+# Check plugin executable permissions
ls -l provisioning/core/plugins/nushell-plugins/target/release/nu_plugin_*
# Should show: -rwxr-xr-x
# Fix if needed
chmod +x provisioning/core/plugins/nushell-plugins/target/release/nu_plugin_*
-```plaintext
-
-### Platform-Specific Issues
-
-**macOS Issues:**
-
-```bash
-# "cannot be opened because the developer cannot be verified"
+
+### Platform-Specific Issues
+macOS Issues:
+# "cannot be opened because the developer cannot be verified"
xattr -d com.apple.quarantine target/release/nu_plugin_auth
xattr -d com.apple.quarantine target/release/nu_plugin_kms
xattr -d com.apple.quarantine target/release/nu_plugin_orchestrator
@@ -64522,57 +60163,41 @@ xattr -d com.apple.quarantine target/release/nu_plugin_orchestrator
# Keychain access denied
# System Preferences → Security & Privacy → Privacy → Full Disk Access
# Add: /usr/local/bin/nu
-```plaintext
-
-**Linux Issues:**
-
-```bash
-# Keyring service not running
+
+Linux Issues:
+# Keyring service not running
systemctl --user status gnome-keyring-daemon
systemctl --user start gnome-keyring-daemon
# Missing dependencies
sudo apt install libssl-dev pkg-config # Ubuntu/Debian
sudo dnf install openssl-devel # Fedora
-```plaintext
-
-**Windows Issues:**
-
-```powershell
-# Credential Manager access denied
+
+Windows Issues:
+# Credential Manager access denied
# Control Panel → User Accounts → Credential Manager
# Ensure Windows Credential Manager service is running
# Missing Visual C++ runtime
# Download from: https://aka.ms/vs/17/release/vc_redist.x64.exe
-```plaintext
-
-### Debugging Techniques
-
-**Enable Verbose Logging:**
-
-```nushell
-# Set log level
+
+### Debugging Techniques
+Enable Verbose Logging:
+# Set log level
$env.RUST_LOG = "debug,nu_plugin_auth=trace"
# Run command
auth login admin
# Check logs
-```plaintext
-
-**Test Plugin Directly:**
-
-```bash
-# Test plugin communication (advanced)
+
+Test Plugin Directly:
+# Test plugin communication (advanced)
echo '{"Call": [0, {"name": "auth", "call": "login", "args": ["admin", "password"]}]}' \
| target/release/nu_plugin_auth
-```plaintext
-
-**Check Plugin Health:**
-
-```nushell
-# Test each plugin
+
+Check Plugin Health:
+# Test each plugin
auth --help # Should show auth commands
kms --help # Should show kms commands
orch --help # Should show orch commands
@@ -64581,18 +60206,12 @@ orch --help # Should show orch commands
auth verify # Should return session status
kms status # Should return backend status
orch status # Should return orchestrator status
-```plaintext
-
----
-
-## Migration Guide
-
-### Migrating from HTTP to Plugin-Based
-
-**Phase 1: Install Plugins (No Breaking Changes)**
-
-```bash
-# Build and register plugins
+
+## Migration Guide
+
+### Migrating from HTTP to Plugin-Based
+Phase 1: Install Plugins (No Breaking Changes)
+# Build and register plugins
cd provisioning/core/plugins/nushell-plugins
cargo build --release --all
plugin add target/release/nu_plugin_auth
@@ -64601,12 +60220,9 @@ plugin add target/release/nu_plugin_orchestrator
# Verify HTTP still works
http get http://localhost:9090/health
-```plaintext
-
-**Phase 2: Update Scripts Incrementally**
-
-```nushell
-# Before (HTTP)
+
+Phase 2: Update Scripts Incrementally
+# Before (HTTP)
def encrypt_config [file: string] {
let data = open $file
let result = http post http://localhost:9998/encrypt { data: $data }
@@ -64624,12 +60240,9 @@ def encrypt_config [file: string] {
}
$encrypted | save $"($file).enc"
}
-```plaintext
-
-**Phase 3: Test Migration**
-
-```nushell
-# Run side-by-side comparison
+
+Phase 3: Test Migration
+# Run side-by-side comparison
def test_migration [] {
let test_data = "test secret data"
@@ -64647,12 +60260,9 @@ def test_migration [] {
echo $"HTTP: ($http_time)ms"
echo $"Speedup: (($http_time / $plugin_time))x"
}
-```plaintext
-
-**Phase 4: Gradual Rollout**
-
-```nushell
-# Use feature flag for controlled rollout
+
+Phase 4: Gradual Rollout
+# Use feature flag for controlled rollout
$env.USE_PLUGINS = true
def encrypt_with_flag [data: string] {
@@ -64662,23 +60272,17 @@ def encrypt_with_flag [data: string] {
(http post http://localhost:9998/encrypt { data: $data }).encrypted
}
}
-```plaintext
-
-**Phase 5: Full Migration**
-
-```nushell
-# Replace all HTTP calls with plugin calls
+
+Phase 5: Full Migration
+# Replace all HTTP calls with plugin calls
# Remove fallback logic once stable
def encrypt_config [file: string] {
let data = open $file
kms encrypt $data --backend rustyvault | save $"($file).enc"
}
-```plaintext
-
-### Rollback Strategy
-
-```nushell
-# If issues arise, quickly rollback
+
+### Rollback Strategy
+# If issues arise, quickly rollback
def rollback_to_http [] {
# Remove plugin registrations
plugin rm nu_plugin_auth
@@ -64688,28 +60292,20 @@ def rollback_to_http [] {
# Restart Nushell
exec nu
}
-```plaintext
-
----
-
-## Advanced Configuration
-
-### Custom Plugin Paths
-
-```nushell
-# ~/.config/nushell/config.nu
+
+## Advanced Configuration
+
+### Custom Plugin Paths
+# ~/.config/nushell/config.nu
$env.PLUGIN_PATH = "/opt/provisioning/plugins"
# Register from custom location
plugin add $"($env.PLUGIN_PATH)/nu_plugin_auth"
plugin add $"($env.PLUGIN_PATH)/nu_plugin_kms"
plugin add $"($env.PLUGIN_PATH)/nu_plugin_orchestrator"
-```plaintext
-
-### Environment-Specific Configuration
-
-```nushell
-# ~/.config/nushell/env.nu
+
+### Environment-Specific Configuration
+# ~/.config/nushell/env.nu
# Development environment
if ($env.ENV? == "dev") {
@@ -64728,12 +60324,9 @@ if ($env.ENV? == "prod") {
$env.RUSTYVAULT_ADDR = "https://vault.example.com"
$env.CONTROL_CENTER_URL = "https://control.example.com"
}
-```plaintext
-
-### Plugin Aliases
-
-```nushell
-# ~/.config/nushell/config.nu
+
+### Plugin Aliases
+# ~/.config/nushell/config.nu
# Auth shortcuts
alias login = auth login
@@ -64748,12 +60341,9 @@ alias decrypt = kms decrypt
alias status = orch status
alias tasks = orch tasks
alias validate = orch validate
-```plaintext
-
-### Custom Commands
-
-```nushell
-# ~/.config/nushell/custom_commands.nu
+
+### Custom Commands
+# ~/.config/nushell/custom_commands.nu
# Encrypt all files in directory
def encrypt-dir [dir: string] {
@@ -64781,34 +60371,27 @@ def watch-deployments [] {
sleep 5sec
}
}
-```plaintext
-
----
-
-## Security Considerations
-
-### Threat Model
-
-**What Plugins Protect Against:**
-
-- ✅ Network eavesdropping (no HTTP for KMS/orch)
-- ✅ Token theft from files (keyring storage)
-- ✅ Credential exposure in logs (prompt-based input)
-- ✅ Man-in-the-middle attacks (local file access)
-
-**What Plugins Don't Protect Against:**
-
-- ❌ Memory dumping (decrypted data in RAM)
-- ❌ Malicious plugins (trust registry only)
-- ❌ Compromised OS keyring
-- ❌ Physical access to machine
-
-### Secure Deployment
-
-**1. Verify Plugin Integrity**
-
-```bash
-# Check plugin signatures (if available)
+
+## Security Considerations
+
+### Threat Model
+What Plugins Protect Against:
+
+✅ Network eavesdropping (no HTTP for KMS/orch)
+✅ Token theft from files (keyring storage)
+✅ Credential exposure in logs (prompt-based input)
+✅ Man-in-the-middle attacks (local file access)
+
+What Plugins Don’t Protect Against:
+
+❌ Memory dumping (decrypted data in RAM)
+❌ Malicious plugins (trust registry only)
+❌ Compromised OS keyring
+❌ Physical access to machine
+
+### Secure Deployment
+1. Verify Plugin Integrity
+# Check plugin signatures (if available)
sha256sum target/release/nu_plugin_auth
# Compare with published checksums
@@ -64816,12 +60399,9 @@ sha256sum target/release/nu_plugin_auth
git clone https://github.com/provisioning-platform/plugins
cd plugins
cargo build --release --all
-```plaintext
-
-**2. Restrict Plugin Access**
-
-```bash
-# Set plugin permissions (only owner can execute)
+
+2. Restrict Plugin Access
+# Set plugin permissions (only owner can execute)
chmod 700 target/release/nu_plugin_*
# Store in protected directory
@@ -64829,24 +60409,18 @@ sudo mkdir -p /opt/provisioning/plugins
sudo chown $(whoami):$(whoami) /opt/provisioning/plugins
sudo chmod 755 /opt/provisioning/plugins
mv target/release/nu_plugin_* /opt/provisioning/plugins/
-```plaintext
-
-**3. Audit Plugin Usage**
-
-```nushell
-# Log plugin calls (for compliance)
+
+3. Audit Plugin Usage
+# Log plugin calls (for compliance)
def logged_encrypt [data: string] {
let timestamp = date now
let result = kms encrypt $data
{ timestamp: $timestamp, action: "encrypt" } | save --append audit.log
$result
}
-```plaintext
-
-**4. Rotate Credentials Regularly**
-
-```nushell
-# Weekly credential rotation script
+
+4. Rotate Credentials Regularly
+# Weekly credential rotation script
def rotate_credentials [] {
# Re-authenticate
auth logout
@@ -64861,117 +60435,84 @@ def rotate_credentials [] {
kms encrypt $plain | save $file.name
}
}
-```plaintext
-
----
-
-## FAQ
-
-**Q: Can I use plugins without RustyVault/Age installed?**
-
-A: Yes, authentication and orchestrator plugins work independently. KMS plugin requires at least one backend configured (Age is easiest for local dev).
-
-**Q: Do plugins work in CI/CD pipelines?**
-
-A: Yes, plugins work great in CI/CD. For headless environments (no keyring), use environment variables for auth or file-based tokens.
-
-```bash
-# CI/CD example
+
+
+## FAQ
+Q: Can I use plugins without RustyVault/Age installed?
+A: Yes, authentication and orchestrator plugins work independently. KMS plugin requires at least one backend configured (Age is easiest for local dev).
+Q: Do plugins work in CI/CD pipelines?
+A: Yes, plugins work great in CI/CD. For headless environments (no keyring), use environment variables for auth or file-based tokens.
+# CI/CD example
export CONTROL_CENTER_TOKEN="jwt-token-here"
kms encrypt "data" --backend age
-```plaintext
-
-**Q: How do I update plugins?**
-
-A: Rebuild and re-register:
-
-```bash
-cd provisioning/core/plugins/nushell-plugins
+
+Q: How do I update plugins?
+A: Rebuild and re-register:
+cd provisioning/core/plugins/nushell-plugins
git pull
cargo build --release --all
plugin add --force target/release/nu_plugin_auth
plugin add --force target/release/nu_plugin_kms
plugin add --force target/release/nu_plugin_orchestrator
```

**Q: Can I use multiple KMS backends simultaneously?**

A: Yes, specify `--backend` for each operation:

```nushell
kms encrypt "data1" --backend rustyvault
kms encrypt "data2" --backend age
kms encrypt "data3" --backend aws
```

**Q: What happens if a plugin crashes?**

A: Nushell isolates plugin crashes. The command fails with an error, but Nushell continues running. Check logs with `$env.RUST_LOG = "debug"`.

**Q: Are plugins compatible with older Nushell versions?**

A: Plugins require Nushell 0.107.1+. For older versions, use the HTTP API.

**Q: How do I backup MFA enrollment?**

A: Save backup codes securely (password manager, encrypted file). The QR code can be re-scanned from the same secret.

```nushell
# Save backup codes
auth mfa enroll totp | save mfa-backup-codes.txt
kms encrypt (open mfa-backup-codes.txt) | save mfa-backup-codes.enc
rm mfa-backup-codes.txt
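# To recover later (a sketch, assuming the same KMS backend is still configured):
kms decrypt (open mfa-backup-codes.enc)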
```

**Q: Can plugins work offline?**

A: Partially:

- ✅ `kms` with Age backend (fully offline)
- ✅ `orch` status/tasks (reads local files)
- ❌ `auth` (requires control center)
- ❌ `kms` with RustyVault/AWS/Vault (requires network)

**Q: How do I troubleshoot plugin performance?**

A: Use Nushell's timing:

```nushell
timeit { kms encrypt "data" }
# 5ms 123μs 456ns
timeit { http post http://localhost:9998/encrypt { data: "data" } }
# 52ms 789μs 123ns
```

---

## Related Documentation

- **Security System**: `/Users/Akasha/project-provisioning/docs/architecture/adr-009-security-system-complete.md`
- **JWT Authentication**: `/Users/Akasha/project-provisioning/docs/architecture/JWT_AUTH_IMPLEMENTATION.md`
- **Config Encryption**: `/Users/Akasha/project-provisioning/docs/user/CONFIG_ENCRYPTION_GUIDE.md`
- **RustyVault Integration**: `/Users/Akasha/project-provisioning/RUSTYVAULT_INTEGRATION_SUMMARY.md`
- **MFA Implementation**: `/Users/Akasha/project-provisioning/docs/architecture/MFA_IMPLEMENTATION_SUMMARY.md`
- **Nushell Plugins Reference**: `/Users/Akasha/project-provisioning/docs/user/NUSHELL_PLUGINS_GUIDE.md`

---

**Version**: 1.0.0
**Maintained By**: Platform Team
**Last Updated**: 2025-10-09
**Feedback**: Open an issue or contact <platform-team@example.com>

---
Complete guide to authentication, KMS, and orchestrator plugins.
Three native Nushell plugins provide high-performance integration with the provisioning platform:
- `nu_plugin_auth` - JWT authentication and MFA operations
- `nu_plugin_kms` - Key management (RustyVault, Age, Cosmian, AWS, Vault)
- `nu_plugin_orchestrator` - Orchestrator operations (status, validate, tasks)
**Performance Advantages**:

- 10x faster than HTTP API calls (KMS operations)
- Error handling - Nushell-native error messages
## Installation

### Prerequisites

- Nushell 0.107.1+
- Rust toolchain (for building from source)
### Build Plugins

```bash
cargo build --release -p nu_plugin_auth
cargo build --release -p nu_plugin_kms
cargo build --release -p nu_plugin_orchestrator
```

### Register with Nushell

```bash
# Register all plugins
plugin add target/release/nu_plugin_auth
plugin add target/release/nu_plugin_kms
plugin add target/release/nu_plugin_orchestrator
# Verify registration
plugin list | where name =~ "provisioning"
```

### Verify Installation

```bash
# Test auth commands
auth --help
# Test KMS commands
kms --help
# Test orchestrator commands
orch --help
```

---

## Plugin: nu_plugin_auth

Authentication plugin for JWT login, MFA enrollment, and session management.

### Commands

#### `auth login <username> [password]`

Login to the provisioning platform and store JWT tokens securely.

**Arguments**:

- `username` (required): Username for authentication
- `password` (optional): Password (prompts interactively if not provided)

**Flags**:

- `--url <url>`: Control center URL (default: `http://localhost:9080`)
- `--password <password>`: Password (alternative to the positional argument)

**Examples**:

```nushell
# Interactive password prompt (recommended)
auth login admin
# Password in command (not recommended for production)
auth login admin <password>

# Custom control center URL
auth login admin --url http://control-center:9080
# Pipeline usage
"admin" | auth login
```

**Token Storage**:

Tokens are stored securely in the OS-native keyring:

- **macOS**: Keychain Access
- **Linux**: Secret Service (gnome-keyring, kwallet)
- **Windows**: Credential Manager

**Success Output**:

```plaintext
✓ Login successful
User: admin
Role: Admin
Expires: 2025-10-09T14:30:00Z
```

---

#### `auth logout`

Logout from the current session and remove stored tokens.

**Examples**:

```nushell
# Simple logout
auth logout
# Pipeline usage (conditional logout)
if (auth verify | get active) { auth logout }
```

**Success Output**:

```plaintext
✓ Logged out successfully
```

---

#### `auth verify`

Verify the current session and check token validity.

**Examples**:

```nushell
# Check session status
auth verify
# Pipeline usage
auth verify | if $in.active { echo "Session valid" } else { echo "Session expired" }
```

**Success Output**:

```json
{
"active": true,
"user": "admin",
"role": "Admin",
"expires_at": "2025-10-09T14:30:00Z",
"mfa_verified": true
}
```

---

#### `auth sessions`

List all active sessions for the current user.

**Examples**:

```nushell
# List sessions
auth sessions
# Filter by date
auth sessions | where created_at > (date now | date to-timezone UTC | into string)
```

**Output Format**:

```json
[
{
"session_id": "sess_abc123",
"created_at": "2025-10-09T12:00:00Z",
"user_agent": "nushell/0.107.1"
}
]
```

---

#### `auth mfa enroll <type>`

Enroll in MFA (TOTP or WebAuthn).

**Arguments**:

- `type` (required): MFA type (`totp` or `webauthn`)

**Examples**:

```nushell
# Enroll TOTP (Google Authenticator, Authy)
auth mfa enroll totp
# Enroll WebAuthn (YubiKey, Touch ID, Windows Hello)
auth mfa enroll webauthn
```

**TOTP Enrollment Output**:

```plaintext
✓ TOTP enrollment initiated
Scan this QR code with your authenticator app:
Backup codes (save securely):
1. ABCD-EFGH-IJKL
2. MNOP-QRST-UVWX
...
```

---

#### `auth mfa verify --code <code>`

Verify an MFA code (TOTP or backup code).

**Flags**:

- `--code <code>` (required): 6-digit TOTP code or backup code

**Examples**:

```nushell
# Verify TOTP code
auth mfa verify --code 123456
# Verify backup code
auth mfa verify --code ABCD-EFGH-IJKL
```

**Success Output**:

```plaintext
✓ MFA verification successful
```

---

### Environment Variables

| Variable | Description | Default |
|----------|-------------|---------|
| `USER` | Default username | Current OS user |
| `CONTROL_CENTER_URL` | Control center URL | `http://localhost:9080` |

---

### Error Handling

**Common Errors**:

```nushell
# "No active session"
Error: No active session found
→ Run: auth login <username>
# "Keyring error" (Linux)
Error: Failed to access keyring
→ Install gnome-keyring or kwallet
```

---

## Plugin: nu_plugin_kms

Key Management Service plugin supporting multiple backends.

### Supported Backends

| Backend | Description | Use Case |
|---------|-------------|----------|
| `rustyvault` | RustyVault Transit engine | Production KMS |
| `age` | Age encryption (local) | Development/testing |
| `cosmian` | Cosmian KMS (HTTP) | Cloud KMS |
| `aws` | AWS KMS | AWS environments |
| `vault` | HashiCorp Vault | Enterprise KMS |

### Commands

#### `kms encrypt <data> [--backend <backend>]`

Encrypt data using KMS.

**Arguments**:

- `data` (required): Data to encrypt (string or binary)

**Flags**:

- `--backend <backend>`: KMS backend (`rustyvault`, `age`, `cosmian`, `aws`, `vault`)
- `--key <key>`: Key ID or recipient (backend-specific)
- `--context <context>`: Additional authenticated data (AAD)

**Examples**:

```nushell
# Auto-detect backend from environment
kms encrypt "secret data"
# RustyVault
kms encrypt "data" --backend rustyvault --key provisioning-main

# AWS KMS
kms encrypt "data" --backend aws --key alias/provisioning
# With context (AAD)
kms encrypt "data" --backend rustyvault --key provisioning-main --context "user=admin"
```

**Output Format**:

```plaintext
vault:v1:abc123def456...
```

---

#### `kms decrypt <encrypted> [--backend <backend>]`

Decrypt KMS-encrypted data.

**Arguments**:

- `encrypted` (required): Encrypted data (base64 or KMS format)

**Flags**:

- `--backend <backend>`: KMS backend (auto-detected if not specified)
- `--context <context>`: Additional authenticated data (AAD, must match encryption)

**Examples**:

```nushell
# Auto-detect backend
kms decrypt "vault:v1:abc123def456..."
# RustyVault explicit
kms decrypt "vault:v1:abc123..." --backend rustyvault

# Age
kms decrypt "-----BEGIN AGE ENCRYPTED FILE-----..." --backend age
# With context
kms decrypt "vault:v1:abc123..." --backend rustyvault --context "user=admin"
```

**Output**:

```plaintext
secret data
```

---

#### `kms generate-key [--spec <spec>]`

Generate a data encryption key (DEK) using KMS.

**Flags**:

- `--spec <spec>`: Key specification (`AES128` or `AES256`, default: `AES256`)
- `--backend <backend>`: KMS backend

**Examples**:

```nushell
# Generate AES-256 key
kms generate-key
# Generate AES-128 key
kms generate-key --spec AES128
# Specific backend
kms generate-key --backend rustyvault
```

**Output Format**:

```json
{
"plaintext": "base64-encoded-key",
"ciphertext": "vault:v1:encrypted-key",
"spec": "AES256"
}
```

---

#### `kms status`

Show KMS backend status and configuration.

**Examples**:

```nushell
# Show status
kms status
# Filter to specific backend
kms status | where backend == "rustyvault"
```

**Output Format**:

```json
{
"backend": "rustyvault",
"status": "healthy",
"url": "http://localhost:8200",
"mount_point": "transit",
"version": "0.1.0"
}
```

---

### Environment Variables

**RustyVault Backend**:

```bash
export RUSTYVAULT_ADDR="http://localhost:8200"
export RUSTYVAULT_TOKEN="your-token-here"
export RUSTYVAULT_MOUNT="transit"
```

**Age Backend**:

```bash
export AGE_RECIPIENT="age1xxxxxxxxx"
export AGE_IDENTITY="/path/to/key.txt"
```

**HTTP Backend (Cosmian)**:

```bash
export KMS_HTTP_URL="http://localhost:9998"
export KMS_HTTP_BACKEND="cosmian"
```

**AWS KMS**:

```bash
export AWS_REGION="us-east-1"
export AWS_ACCESS_KEY_ID="..."
export AWS_SECRET_ACCESS_KEY="..."
```

---

### Performance Comparison

| Operation | HTTP API | Plugin | Improvement |
|-----------|----------|--------|-------------|
| Encrypt (RustyVault) | ~50ms | ~5ms | **10x faster** |
| Decrypt (RustyVault) | ~50ms | ~5ms | **10x faster** |
| Encrypt (Age) | ~30ms | ~3ms | **10x faster** |
| Decrypt (Age) | ~30ms | ~3ms | **10x faster** |
| Generate Key | ~60ms | ~8ms | **7.5x faster** |

---

## Plugin: nu_plugin_orchestrator

Orchestrator operations plugin for status, validation, and task management.

### Commands

#### `orch status [--data-dir <dir>]`

Get orchestrator status from local files (no HTTP).

**Flags**:

- `--data-dir <dir>`: Data directory (default: `provisioning/platform/orchestrator/data`)

**Examples**:

```nushell
# Default data dir
orch status
# Custom dir
orch status --data-dir ./custom/data
# Pipeline usage
orch status | if $in.active_tasks > 0 { echo "Tasks running" }
```

**Output Format**:

```json
{
"active_tasks": 5,
"completed_tasks": 120,
"failed_tasks": 2,
"uptime": "2d 4h 15m",
"health": "healthy"
}
```

---

#### `orch validate <workflow.ncl> [--strict]`

Validate a workflow Nickel file.

**Arguments**:

- `workflow.ncl` (required): Path to the Nickel workflow file

**Flags**:

- `--strict`: Enable strict validation (all checks, warnings as errors)

**Examples**:

```nushell
# Basic validation
+orch validate workflows/deploy.ncl
# Strict mode
orch validate workflows/deploy.ncl --strict
# Pipeline usage
ls workflows/*.ncl | each { |file| orch validate $file.name }
```

**Output Format**:

```json
{
"valid": true,
"workflow": {
"name": "deploy_k8s_cluster",
  },
"warnings": [],
"errors": []
}
```

**Validation Checks**:

- Nickel syntax errors
- Required fields present
- Dependency graph valid (no cycles)
- Resource limits within bounds
- Provider configurations valid
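The per-file validation above extends naturally to a batch check; a minimal sketch, assuming the plugin is registered and workflows live under `workflows/`:

```nushell
# Report only the workflows that fail strict validation
ls workflows/*.ncl
| each { |file| orch validate $file.name --strict }
| where valid == false
```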
---

#### `orch tasks [--status <status>] [--limit <n>]`

List orchestrator tasks.

**Flags**:

- `--status <status>`: Filter by status (`pending`, `running`, `completed`, `failed`)
- `--limit <n>`: Limit the number of results (default: 100)
- `--data-dir <dir>`: Data directory (default from `ORCHESTRATOR_DATA_DIR`)

**Examples**:

```nushell
# All tasks
orch tasks
# Pending tasks only
orch tasks --status pending

# Running tasks, limited
orch tasks --status running --limit 10
# Pipeline usage
orch tasks --status failed | each { |task| echo $"Failed: ($task.name)" }
```

**Output Format**:

```json
[
{
"task_id": "task_abc123",
"name": "deploy_kubernetes",
"progress": 45
}
]
```

---

### Environment Variables

| Variable | Description | Default |
|----------|-------------|---------|
| `ORCHESTRATOR_DATA_DIR` | Data directory | `provisioning/platform/orchestrator/data` |

---

### Performance Comparison

| Operation | HTTP API | Plugin | Improvement |
|-----------|----------|--------|-------------|
| Status | ~30ms | ~3ms | **10x faster** |
| Validate | ~100ms | ~10ms | **10x faster** |
| Tasks List | ~50ms | ~5ms | **10x faster** |

---

## Pipeline Examples

### Authentication Flow

```nushell
# Login and verify in one pipeline
auth login admin
| if $in.success { auth verify }
| if $in.mfa_required { auth mfa verify --code (input "MFA code: ") }
```

### KMS Operations

```nushell
# Encrypt multiple secrets
["secret1", "secret2", "secret3"]
| each { |data| kms encrypt $data --backend rustyvault }
| save encrypted_secrets.json
open encrypted_secrets.json
| each { |enc| kms decrypt $enc }
| each { |plain| echo $"Decrypted: ($plain)" }
```

### Orchestrator Monitoring

```nushell
# Monitor running tasks
while true {
orch tasks --status running
| each { |task| echo $"($task.name): ($task.progress)%" }
sleep 5sec
}
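
# One-shot alternative (a sketch): summarize progress without looping
orch tasks --status running | select name progress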
```

### Combined Workflow

```nushell
# Complete deployment workflow
auth login admin
| auth mfa verify --code (input "MFA: ")
  | orch validate workflows/deploy.ncl
| if $in.valid {
orch tasks --status pending
| where priority > 5
| each { |task| echo $"High priority: ($task.name)" }
}
```

---

## Troubleshooting

### Auth Plugin

**"No active session"**:

```nushell
auth login <username>
```

**"Keyring error" (macOS)**:

- Check Keychain Access permissions
- Security & Privacy → Privacy → Full Disk Access → Add Nushell

**"Keyring error" (Linux)**:

```bash
# Install keyring service
sudo apt install gnome-keyring # Ubuntu/Debian
sudo dnf install gnome-keyring # Fedora
# Or use KWallet
sudo apt install kwalletmanager
```

**"MFA verification failed"**:

- Check time synchronization (TOTP requires accurate clocks)
- Use backup codes if TOTP is not working
- Re-enroll MFA if the device is lost

---

### KMS Plugin

**"RustyVault connection failed"**:

```bash
# Check RustyVault is running
curl http://localhost:8200/v1/sys/health
# Set environment
export RUSTYVAULT_ADDR="http://localhost:8200"
export RUSTYVAULT_TOKEN="your-token"
```

**"Age encryption failed"**:

```bash
# Check Age keys
ls -la ~/.age/
# Generate new key if needed
age-keygen -o ~/.age/key.txt
# Set environment
export AGE_RECIPIENT="age1xxxxxxxxx"
export AGE_IDENTITY="$HOME/.age/key.txt"
```

**"AWS KMS access denied"**:

```bash
# Check AWS credentials
aws sts get-caller-identity
# Check KMS key policy
aws kms describe-key --key-id alias/provisioning
```

---

### Orchestrator Plugin

**"Failed to read status"**:

```bash
# Check data directory exists
ls provisioning/platform/orchestrator/data/
# Create if missing
mkdir -p provisioning/platform/orchestrator/data
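# Restrict access to the owner, per the security guidance (permissions 700)
chmod 700 provisioning/platform/orchestrator/data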
```

**"Workflow validation failed"**:

```nushell
# Use strict mode for detailed errors
orch validate workflows/deploy.ncl --strict
```

**"No tasks found"**:

```bash
# Check the orchestrator is running
ps aux | grep orchestrator
# Start orchestrator
cd provisioning/platform/orchestrator
./scripts/start-orchestrator.nu --background
```

---

## Development

### Building from Source

```bash
cd provisioning/core/plugins/nushell-plugins
# Clean build
cargo clean
# Run all tests
cargo test --all
```

### Adding to CI/CD

```yaml
name: Build Nushell Plugins
on: [push, pull_request]
      with:
        name: plugins
        path: provisioning/core/plugins/nushell-plugins/target/release/nu_plugin_*
```

---

## Advanced Usage

### Custom Plugin Configuration

Create `~/.config/nushell/plugin_config.nu`:

```nushell
# Auth plugin defaults
$env.CONTROL_CENTER_URL = "https://control-center.example.com"
# KMS plugin defaults
$env.RUSTYVAULT_ADDR = "http://localhost:8200"
$env.RUSTYVAULT_MOUNT = "transit"
# Orchestrator plugin defaults
$env.ORCHESTRATOR_DATA_DIR = "/opt/orchestrator/data"
```

### Plugin Aliases

Add to `~/.config/nushell/config.nu`:

```nushell
# Auth shortcuts
alias login = auth login
alias logout = auth logout
# KMS shortcuts
alias encrypt = kms encrypt
alias decrypt = kms decrypt

# Orchestrator shortcuts
alias status = orch status
alias validate = orch validate
alias tasks = orch tasks
```

---

## Security Best Practices

### Authentication

✅ **DO**: Use interactive password prompts
✅ **DO**: Enable MFA for production environments
✅ **DO**: Verify the session before sensitive operations
❌ **DON'T**: Pass passwords on the command line (visible in history)
❌ **DON'T**: Store tokens in plain-text files

### KMS Operations

✅ **DO**: Use context (AAD) for encryption when available
✅ **DO**: Rotate KMS keys regularly
✅ **DO**: Use hardware-backed keys (WebAuthn, YubiKey) when possible
❌ **DON'T**: Share Age private keys
❌ **DON'T**: Log decrypted data

### Orchestrator

✅ **DO**: Validate workflows in strict mode before production
✅ **DO**: Monitor task status regularly
✅ **DO**: Use appropriate data directory permissions (700)
❌ **DON'T**: Run the orchestrator as root
❌ **DON'T**: Expose the data directory over network shares

---

## FAQ

**Q: Why use plugins instead of the HTTP API?**

A: Plugins are 10x faster, have better Nushell integration, and eliminate HTTP overhead.

**Q: Can I use plugins without the orchestrator running?**

A: `auth` and `kms` work independently. `orch` requires access to the orchestrator data directory.

**Q: How do I update plugins?**

A: Rebuild and re-register: `cargo build --release --all && plugin add target/release/nu_plugin_*`

**Q: Are plugins cross-platform?**

A: Yes, plugins work on macOS, Linux, and Windows (with appropriate keyring services).

**Q: Can I use multiple KMS backends simultaneously?**

A: Yes, specify the `--backend` flag for each operation.

**Q: How do I back up MFA enrollment?**

A: Save backup codes securely (password manager, encrypted file). The QR code can be re-scanned.

---

## Related Documentation

- **Security System**: `docs/architecture/adr-009-security-system-complete.md`
- **JWT Auth**: `docs/architecture/JWT_AUTH_IMPLEMENTATION.md`
- **Config Encryption**: `docs/user/CONFIG_ENCRYPTION_GUIDE.md`
- **RustyVault Integration**: `RUSTYVAULT_INTEGRATION_SUMMARY.md`
- **MFA Implementation**: `docs/architecture/MFA_IMPLEMENTATION_SUMMARY.md`

---

**Version**: 1.0.0
**Last Updated**: 2025-10-09
**Maintained By**: Platform Team
For complete documentation on Nushell plugins, including installation, configuration, and advanced usage, see the Plugin Integration Guide.

Native Nushell plugins eliminate HTTP overhead and provide direct Rust-to-Nushell integration for critical platform operations.

| Plugin | Operation | HTTP Latency | Plugin Latency | Speedup |
|--------|-----------|--------------|----------------|---------|
| nu_plugin_kms | Encrypt (RustyVault) | ~50ms | ~5ms | 10x |
| nu_plugin_kms | Decrypt (RustyVault) | ~50ms | ~5ms | 10x |
| nu_plugin_orchestrator | Status query | ~30ms | ~1ms | 30x |
| nu_plugin_auth | Verify session | ~50ms | ~10ms | 5x |
---

```nushell
# Authentication
auth login admin
auth verify
# KMS
kms encrypt "secret data"
kms decrypt "vault:v1:abc123..."
# Orchestrator
orch status
orch validate workflows/deploy.ncl
orch tasks --status running
```

```bash
cd provisioning/core/plugins/nushell-plugins
cargo build --release --all
plugin add target/release/nu_plugin_auth
plugin add target/release/nu_plugin_kms
plugin add target/release/nu_plugin_orchestrator
-
-✅ 10x faster KMS operations (5ms vs 50ms)
-✅ 30-50x faster orchestrator queries (1ms vs 30-50ms)
+
+✅ 10x faster KMS operations (5 ms vs 50 ms)
+✅ 30-50x faster orchestrator queries (1 ms vs 30-50 ms)
✅ Native Nushell integration with data structures and pipelines
✅ Offline capability (KMS with Age, orchestrator local ops)
✅ OS-native keyring for secure token storage
See Plugin Integration Guide for complete information.
-
+
Three high-performance Nushell plugins have been integrated into the provisioning system to provide 10-50x performance improvements over HTTP-based operations:
nu_plugin_auth - JWT authentication with system keyring integration
nu_plugin_kms - Multi-backend KMS encryption
nu_plugin_orchestrator - Local orchestrator operations
-
-
+
+
provisioning auth login <username> [password]
# Examples
@@ -66086,7 +61384,7 @@ provisioning auth login --url http://localhost:8081 admin
provisioning auth verify
provisioning auth verify --local
-
+
provisioning auth logout
# Example
@@ -66099,7 +61397,7 @@ provisioning auth logout
provisioning auth sessions
provisioning auth sessions --active
-
+
10x faster than HTTP fallback
Supports multiple backends: RustyVault, Age, AWS KMS, HashiCorp Vault, Cosmian
@@ -66117,7 +61415,7 @@ provisioning kms encrypt "secret" --backend rustyvault --key my-key
provisioning kms decrypt $encrypted_data
provisioning kms decrypt $encrypted --backend age
-
+
provisioning kms status
# Output shows current backend and availability
@@ -66127,7 +61425,7 @@ provisioning kms decrypt $encrypted --backend age
# Shows all available KMS backends
-
+
30x faster than HTTP fallback
Local file-based orchestration without network overhead.
@@ -66146,19 +61444,19 @@ provisioning orch tasks --status pending
provisioning orch tasks --status running --limit 10
-provisioning orch validate <workflow.k> [--strict]
+provisioning orch validate <workflow.ncl> [--strict]
# Examples
-provisioning orch validate workflows/deployment.k
-provisioning orch validate workflows/deployment.k --strict
+provisioning orch validate workflows/deployment.ncl
+provisioning orch validate workflows/deployment.ncl --strict
-provisioning orch submit <workflow.k> [--priority <0-100>] [--check]
+provisioning orch submit <workflow.ncl> [--priority <0-100>] [--check]
# Examples
-provisioning orch submit workflows/deployment.k
-provisioning orch submit workflows/critical.k --priority 90
-provisioning orch submit workflows/test.k --check
+provisioning orch submit workflows/deployment.ncl
+provisioning orch submit workflows/critical.ncl --priority 90
+provisioning orch submit workflows/test.ncl --check
provisioning orch monitor <task_id> [--once] [--interval <ms>] [--timeout <s>]
@@ -66192,14 +61490,14 @@ provisioning orch monitor task-456 --interval 5000 --timeout 600
# Shows all provisioning plugins registered with Nushell
-
+
Operation With Plugin HTTP Fallback Speedup
-Auth verify ~10ms ~50ms 5x
-Auth login ~15ms ~100ms 7x
-KMS encrypt ~5-8ms ~50ms 10x
-KMS decrypt ~5-8ms ~50ms 10x
-Orch status ~1-5ms ~30ms 30x
-Orch tasks list ~2-10ms ~50ms 25x
+Auth verify ~10 ms ~50 ms 5x
+Auth login ~15 ms ~100 ms 7x
+KMS encrypt ~5-8 ms ~50 ms 10x
+KMS decrypt ~5-8 ms ~50 ms 10x
+Orch status ~1-5 ms ~30 ms 30x
+Orch tasks list ~2-10 ms ~50 ms 25x
@@ -66214,7 +61512,7 @@ $ provisioning auth verify
Token is valid (slower)
This ensures the system remains functional even if plugins aren’t available.
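The fallback behavior can be sketched as a small helper: try the plugin path first, and run the HTTP fallback only when Nushell or the plugin command is unavailable. Both command arguments here are illustrative placeholders, not real provisioning APIs; the provisioning CLI performs this dispatch internally.

```shell
# Hedged sketch of plugin-first execution with HTTP fallback.
# $1 = Nushell plugin command, $2 = HTTP fallback command (both illustrative).
with_fallback() {
  plugin_cmd=$1
  http_cmd=$2
  # Fast path: only when nu exists and the plugin command succeeds.
  if command -v nu >/dev/null 2>&1 && nu -c "$plugin_cmd" >/dev/null 2>&1; then
    return 0
  fi
  # Slow path: run the HTTP fallback instead.
  eval "$http_cmd"
}
```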
-
+
Make sure you:
@@ -66248,7 +61546,7 @@ provisioning encrypt secret # Alias
provisioning orch status # Full command
provisioning orch-status # Alias
-
+
For orchestrator operations, specify custom data directory:
provisioning orch status --data-dir /custom/orchestrator/data
@@ -66290,7 +61588,7 @@ cd nu_plugin_orchestrator && cargo build --release && cd ..
cd ../..
nu install-and-register.nu
-
+
The plugins follow Nushell’s plugin protocol:
Plugin Binary : Compiled Rust binary in target/release/
@@ -66305,7 +61603,7 @@ nu install-and-register.nu
Orchestrator operations are local file-based (no network exposure)
All operations are logged in provisioning audit logs
-
+
For issues or questions:
Check plugin status: provisioning plugin test
@@ -66317,7 +61615,7 @@ nu install-and-register.nu
Status : Production Ready
Date : 2025-11-19
Version : 1.0.0
-
+
The provisioning system supports secure SSH key retrieval from multiple secret sources, eliminating hardcoded filesystem dependencies and enabling enterprise-grade security. SSH keys are retrieved from configured secret sources (SOPS, KMS, RustyVault) with automatic fallback to local-dev mode for development environments.
@@ -66339,24 +61637,18 @@ nu install-and-register.nu
PROVISIONING_SOPS_ENABLED=true
PROVISIONING_SOPS_SECRETS_FILE=/path/to/secrets.enc.yaml
PROVISIONING_SOPS_AGE_KEY_FILE=$HOME/.age/provisioning
-```plaintext
-
-**Secrets File Structure** (provisioning/secrets.enc.yaml):
-
-```yaml
-# Encrypted with sops
+
+Secrets File Structure (provisioning/secrets.enc.yaml):
+# Encrypted with sops
ssh:
web-01:
ubuntu: /path/to/id_rsa
root: /path/to/root_id_rsa
db-01:
postgres: /path/to/postgres_id_rsa
-```plaintext
-
-**Setup Instructions**:
-
-```bash
-# 1. Install sops and age
+
+Setup Instructions :
+# 1. Install sops and age
brew install sops age
# 2. Generate Age key (store securely!)
@@ -66381,43 +61673,32 @@ mv secrets.yaml provisioning/secrets.enc.yaml
export PROVISIONING_SECRET_SOURCE=sops
export PROVISIONING_SOPS_SECRETS_FILE=$(pwd)/provisioning/secrets.enc.yaml
export PROVISIONING_SOPS_AGE_KEY_FILE=$HOME/.age/provisioning
-```plaintext
-
-### 2. KMS (Key Management Service)
-
-AWS KMS or compatible key management service.
-
-**Pros**:
-
-- ✅ Cloud-native security
-- ✅ Automatic key rotation
-- ✅ Audit logging built-in
-- ✅ High availability
-
-**Cons**:
-
-- ❌ Requires AWS account/credentials
-- ❌ API calls add latency (~50ms)
-- ❌ Cost per API call
-
-**Environment Variables**:
-
-```bash
-PROVISIONING_SECRET_SOURCE=kms
+
+
+AWS KMS or compatible key management service.
+Pros :
+
+✅ Cloud-native security
+✅ Automatic key rotation
+✅ Audit logging built-in
+✅ High availability
+
+Cons :
+
+❌ Requires AWS account/credentials
+❌ API calls add latency (~50 ms)
+❌ Cost per API call
+
+Environment Variables :
+PROVISIONING_SECRET_SOURCE=kms
PROVISIONING_KMS_ENABLED=true
PROVISIONING_KMS_REGION=us-east-1
-```plaintext
-
-**Secret Storage Pattern**:
-
-```plaintext
-provisioning/ssh-keys/{hostname}/{username}
-```plaintext
-
-**Setup Instructions**:
-
-```bash
-# 1. Create KMS key (one-time)
+
+Secret Storage Pattern :
+provisioning/ssh-keys/{hostname}/{username}
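A small helper can build the secret identifier from this pattern; the aws CLI line in the comment is illustrative and assumes the keys live in Secrets Manager.

```shell
# Build the secret identifier from the storage pattern above.
ssh_secret_name() {
  printf 'provisioning/ssh-keys/%s/%s' "$1" "$2"
}

# Illustrative use (not executed here):
# aws secretsmanager get-secret-value --secret-id "$(ssh_secret_name web-01 ubuntu)"
```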
+
+Setup Instructions :
+# 1. Create KMS key (one-time)
aws kms create-key \
--description "Provisioning SSH Keys" \
--region us-east-1
@@ -66437,45 +61718,34 @@ export AWS_PROFILE=provisioning
# or
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
-```plaintext
-
-### 3. RustyVault (Hashicorp Vault-Compatible)
-
-Self-hosted or managed Vault instance for secrets.
-
-**Pros**:
-
-- ✅ Self-hosted option
-- ✅ Fine-grained access control
-- ✅ Multiple authentication methods
-- ✅ Easy key rotation
-
-**Cons**:
-
-- ❌ Requires Vault instance
-- ❌ More operational overhead
-- ❌ Network latency
-
-**Environment Variables**:
-
-```bash
-PROVISIONING_SECRET_SOURCE=vault
+
+
+Self-hosted or managed Vault instance for secrets.
+Pros :
+
+✅ Self-hosted option
+✅ Fine-grained access control
+✅ Multiple authentication methods
+✅ Easy key rotation
+
+Cons :
+
+❌ Requires Vault instance
+❌ More operational overhead
+❌ Network latency
+
+Environment Variables :
+PROVISIONING_SECRET_SOURCE=vault
PROVISIONING_VAULT_ENABLED=true
PROVISIONING_VAULT_ADDRESS=http://localhost:8200
PROVISIONING_VAULT_TOKEN=hvs.CAESIAoICQ...
-```plaintext
-
-**Secret Storage Pattern**:
-
-```plaintext
-GET /v1/secret/ssh-keys/{hostname}/{username}
+
+Secret Storage Pattern :
+GET /v1/secret/ssh-keys/{hostname}/{username}
# Returns: {"key_content": "-----BEGIN OPENSSH PRIVATE KEY-----..."}
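As a sketch of consuming that response shape, the key material can be pulled out of the JSON body; sed keeps the example dependency-free, though a real client should use a proper JSON parser.

```shell
# Sample response in the shape documented above (key material truncated).
response='{"key_content":"-----BEGIN OPENSSH PRIVATE KEY-----..."}'

# Extract key_content with sed; fine for a sketch, not for production parsing.
key=$(printf '%s' "$response" | sed -n 's/.*"key_content":"\([^"]*\)".*/\1/p')
```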
-```plaintext
-
-**Setup Instructions**:
-
-```bash
-# 1. Start Vault (if not already running)
+
+Setup Instructions :
+# 1. Start Vault (if not already running)
docker run -p 8200:8200 \
-e VAULT_DEV_ROOT_TOKEN_ID=provisioning \
vault server -dev
@@ -66499,45 +61769,35 @@ vault write auth/approle/role/provisioning \
token_max_ttl=4h
vault read auth/approle/role/provisioning/role-id
vault write -f auth/approle/role/provisioning/secret-id
-```plaintext
-
-### 4. Local-Dev (Fallback)
-
-Local filesystem SSH keys (development only).
-
-**Pros**:
-
-- ✅ No setup required
-- ✅ Fast (local filesystem)
-- ✅ Works offline
-
-**Cons**:
-
-- ❌ NOT for production
-- ❌ Hardcoded filesystem dependency
-- ❌ No key rotation
-
-**Environment Variables**:
-
-```bash
-PROVISIONING_ENVIRONMENT=local-dev
-```plaintext
-
-**Behavior**:
-
-Standard paths checked (in order):
-
-1. `$HOME/.ssh/id_rsa`
-2. `$HOME/.ssh/id_ed25519`
-3. `$HOME/.ssh/provisioning`
-4. `$HOME/.ssh/provisioning_rsa`
-
-## Auto-Detection Logic
-
-When `PROVISIONING_SECRET_SOURCE` is not explicitly set, the system auto-detects in this order:
-
-```plaintext
-1. PROVISIONING_SOPS_ENABLED=true or PROVISIONING_SOPS_SECRETS_FILE set?
+
+
+Local filesystem SSH keys (development only).
+Pros :
+
+✅ No setup required
+✅ Fast (local filesystem)
+✅ Works offline
+
+Cons :
+
+❌ NOT for production
+❌ Hardcoded filesystem dependency
+❌ No key rotation
+
+Environment Variables :
+PROVISIONING_ENVIRONMENT=local-dev
+
+Behavior :
+Standard paths checked (in order):
+
+$HOME/.ssh/id_rsa
+$HOME/.ssh/id_ed25519
+$HOME/.ssh/provisioning
+$HOME/.ssh/provisioning_rsa
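The path search above amounts to a first-match loop, sketched here:

```shell
# Return the first standard SSH key that exists, in the documented order.
find_local_ssh_key() {
  for k in "$HOME/.ssh/id_rsa" "$HOME/.ssh/id_ed25519" \
           "$HOME/.ssh/provisioning" "$HOME/.ssh/provisioning_rsa"; do
    if [ -f "$k" ]; then
      printf '%s\n' "$k"
      return 0
    fi
  done
  return 1  # no key found; the caller decides how to fail
}
```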
+
+
+When PROVISIONING_SECRET_SOURCE is not explicitly set, the system auto-detects in this order:
+1. PROVISIONING_SOPS_ENABLED=true or PROVISIONING_SOPS_SECRETS_FILE set?
→ Use SOPS
2. PROVISIONING_KMS_ENABLED=true or PROVISIONING_KMS_REGION set?
→ Use KMS
@@ -66545,33 +61805,25 @@ When `PROVISIONING_SECRET_SOURCE` is not explicitly set, the system auto-detects
→ Use Vault
4. Otherwise
→ Use local-dev (with warnings in production environments)
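The detection order above can be sketched as a chain of environment checks. The real logic lives in the Nushell secrets library; this shell version is only an illustration.

```shell
# Hedged sketch of the documented auto-detection order.
detect_secret_source() {
  if [ "${PROVISIONING_SOPS_ENABLED:-}" = "true" ] || [ -n "${PROVISIONING_SOPS_SECRETS_FILE:-}" ]; then
    echo sops
  elif [ "${PROVISIONING_KMS_ENABLED:-}" = "true" ] || [ -n "${PROVISIONING_KMS_REGION:-}" ]; then
    echo kms
  elif [ "${PROVISIONING_VAULT_ENABLED:-}" = "true" ] || [ -n "${PROVISIONING_VAULT_ADDRESS:-}" ]; then
    echo vault
  else
    echo local-dev   # fallback, with warnings in production environments
  fi
}
```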
-```plaintext
-
-## Configuration Matrix
-
-| Secret Source | Env Variables | Enabled in |
-|---|---|---|
-| **SOPS** | `PROVISIONING_SOPS_*` | Development, Staging, Production |
-| **KMS** | `PROVISIONING_KMS_*` | Staging, Production (with AWS) |
-| **Vault** | `PROVISIONING_VAULT_*` | Development, Staging, Production |
-| **Local-dev** | `PROVISIONING_ENVIRONMENT=local-dev` | Development only |
-
-## Production Recommended Setup
-
-### Minimal Setup (Single Source)
-
-```bash
-# Using Vault (recommended for self-hosted)
+
+
+Secret Source Env Variables Enabled in
+SOPS PROVISIONING_SOPS_* Development, Staging, Production
+KMS PROVISIONING_KMS_* Staging, Production (with AWS)
+Vault PROVISIONING_VAULT_* Development, Staging, Production
+Local-dev PROVISIONING_ENVIRONMENT=local-dev Development only
+
+
+
+
+# Using Vault (recommended for self-hosted)
export PROVISIONING_SECRET_SOURCE=vault
export PROVISIONING_VAULT_ADDRESS=https://vault.example.com:8200
export PROVISIONING_VAULT_TOKEN=hvs.CAESIAoICQ...
export PROVISIONING_ENVIRONMENT=production
-```plaintext
-
-### Enhanced Setup (Fallback Chain)
-
-```bash
-# Primary: Vault
+
+
+# Primary: Vault
export PROVISIONING_VAULT_ADDRESS=https://vault.primary.com:8200
export PROVISIONING_VAULT_TOKEN=hvs.CAESIAoICQ...
@@ -66582,12 +61834,9 @@ export PROVISIONING_SOPS_AGE_KEY_FILE=/etc/provisioning/.age/key
# Environment
export PROVISIONING_ENVIRONMENT=production
export PROVISIONING_SECRET_SOURCE=vault # Explicit: use Vault first
-```plaintext
-
-### High-Availability Setup
-
-```bash
-# Use KMS (managed service)
+
+
+# Use KMS (managed service)
export PROVISIONING_SECRET_SOURCE=kms
export PROVISIONING_KMS_REGION=us-east-1
export AWS_PROFILE=provisioning-admin
@@ -66596,14 +61845,10 @@ export AWS_PROFILE=provisioning-admin
export PROVISIONING_VAULT_ADDRESS=https://vault-ha.example.com:8200
export PROVISIONING_VAULT_NAMESPACE=provisioning
export PROVISIONING_ENVIRONMENT=production
-```plaintext
-
-## Validation & Testing
-
-### Check Configuration
-
-```bash
-# Nushell
+
+
+
+# Nushell
provisioning secrets status
# Show secret source and configuration
@@ -66611,12 +61856,9 @@ provisioning secrets validate
# Detailed diagnostics
provisioning secrets diagnose
-```plaintext
-
-### Test SSH Key Retrieval
-
-```bash
-# Test specific host/user
+
+
+# Test specific host/user
provisioning secrets get-key web-01 ubuntu
# Test all configured hosts
@@ -66624,14 +61866,10 @@ provisioning secrets validate-all
# Dry-run SSH with retrieved key
provisioning ssh --test-key web-01 ubuntu
-```plaintext
-
-## Migration Path
-
-### From Local-Dev to SOPS
-
-```bash
-# 1. Create SOPS secrets file with existing keys
+
+
+
+# 1. Create SOPS secrets file with existing keys
cat > secrets.yaml << 'EOF'
ssh:
web-01:
@@ -66650,12 +61888,9 @@ mv secrets.yaml provisioning/secrets.enc.yaml
export PROVISIONING_SECRET_SOURCE=sops
export PROVISIONING_SOPS_SECRETS_FILE=$(pwd)/provisioning/secrets.enc.yaml
export PROVISIONING_SOPS_AGE_KEY_FILE=$HOME/.age/provisioning
-```plaintext
-
-### From SOPS to Vault
-
-```bash
-# 1. Decrypt SOPS file
+
+
+# 1. Decrypt SOPS file
sops -d provisioning/secrets.enc.yaml > /tmp/secrets.yaml
# 2. Import to Vault
@@ -66668,23 +61903,16 @@ export PROVISIONING_VAULT_TOKEN=hvs.CAESIAoICQ...
# 4. Validate retrieval works
provisioning secrets validate-all
-```plaintext
-
-## Security Best Practices
-
-### 1. Never Commit Secrets
-
-```bash
-# Add to .gitignore
+
+
+
+# Add to .gitignore
echo "provisioning/secrets.enc.yaml" >> .gitignore
echo ".age/provisioning" >> .gitignore
echo ".vault-token" >> .gitignore
-```plaintext
-
-### 2. Rotate Keys Regularly
-
-```bash
-# SOPS: Rotate Age key
+
+
+# SOPS: Rotate Age key
age-keygen -o ~/.age/provisioning.new
# Update all secrets with new key
@@ -66694,12 +61922,9 @@ aws kms enable-key-rotation --key-id alias/provisioning
# Vault: Set TTL on secrets
vault write -f secret/metadata/ssh-keys/web-01/ubuntu \
delete_version_after=2160h # 90 days
-```plaintext
-
-### 3. Restrict Access
-
-```bash
-# SOPS: Protect Age key
+
+
+# SOPS: Protect Age key
chmod 600 ~/.age/provisioning
# KMS: Restrict IAM permissions
@@ -66711,12 +61936,9 @@ aws iam put-user-policy --user-name provisioning \
vault write auth/approle/role/provisioning \
token_ttl=1h \
secret_id_ttl=30m
-```plaintext
-
-### 4. Audit Logging
-
-```bash
-# KMS: Enable CloudTrail
+
+
+# KMS: Enable CloudTrail
aws cloudtrail put-event-selectors \
--trail-name provisioning-trail \
--event-selectors ReadWriteType=All
@@ -66726,14 +61948,10 @@ vault audit list
# SOPS: Version control (encrypted)
git log -p provisioning/secrets.enc.yaml
-```plaintext
-
-## Troubleshooting
-
-### SOPS Issues
-
-```bash
-# Test Age decryption
+
+
+
+# Test Age decryption
sops -d provisioning/secrets.enc.yaml
# Verify Age key
@@ -66742,12 +61960,9 @@ age-keygen -l ~/.age/provisioning
# Regenerate if needed
rm ~/.age/provisioning
age-keygen -o ~/.age/provisioning
-```plaintext
-
-### KMS Issues
-
-```bash
-# Test AWS credentials
+
+
+# Test AWS credentials
aws sts get-caller-identity
# Check KMS key permissions
@@ -66755,12 +61970,9 @@ aws kms describe-key --key-id alias/provisioning
# List secrets
aws secretsmanager list-secrets --filters Name=name,Values=provisioning
-```plaintext
-
-### Vault Issues
-
-```bash
-# Check Vault status
+
+
+# Check Vault status
vault status
# Test authentication
@@ -66772,29 +61984,20 @@ vault kv list secret/ssh-keys/
# Check audit logs
vault audit list
vault read sys/audit
-```plaintext
-
-## FAQ
-
-**Q: Can I use multiple secret sources simultaneously?**
-A: Yes, configure multiple sources and set `PROVISIONING_SECRET_SOURCE` to specify primary. If primary fails, manual fallback to secondary is supported.
-
-**Q: What happens if secret retrieval fails?**
-A: System logs the error and fails fast. No automatic fallback to local filesystem (for security).
-
-**Q: Can I cache SSH keys?**
-A: Currently not, keys are retrieved fresh for each operation. Use local caching at OS level (ssh-agent) if needed.
-
-**Q: How do I rotate keys?**
-A: Update the secret in your configured source (SOPS/KMS/Vault) and retrieve fresh on next operation.
-
-**Q: Is local-dev mode secure?**
-A: No - it's development only. Production requires SOPS/KMS/Vault.
-
-## Architecture
-
-```plaintext
-SSH Operation
+
+
+Q: Can I use multiple secret sources simultaneously?
+A: Yes, configure multiple sources and set PROVISIONING_SECRET_SOURCE to specify primary. If primary fails, manual fallback to secondary is supported.
+Q: What happens if secret retrieval fails?
+A: System logs the error and fails fast. No automatic fallback to local filesystem (for security).
+Q: Can I cache SSH keys?
+A: Not currently; keys are retrieved fresh for each operation. Use OS-level caching (ssh-agent) if needed.
+Q: How do I rotate keys?
+A: Update the secret in your configured source (SOPS/KMS/Vault) and retrieve fresh on next operation.
+Q: Is local-dev mode secure?
+A: No; it’s for development use only. Production requires SOPS/KMS/Vault.
+
+SSH Operation
↓
SecretsManager (Nushell/Rust)
↓
@@ -66810,14 +62013,10 @@ SecretsManager (Nushell/Rust)
Return SSH Key Path/Content
↓
SSH Operation Completes
-```plaintext
-
-## Integration with SSH Utilities
-
-SSH operations automatically use secrets manager:
-
-```nushell
-# Automatic secret retrieval
+
+
+SSH operations automatically use secrets manager:
+# Automatic secret retrieval
ssh-cmd-smart $settings $server false "command" $ip
# Internally:
# 1. Determine secret source
@@ -66828,13 +62027,10 @@ ssh-cmd-smart $settings $server false "command" $ip
# Batch operations also integrate
ssh-batch-execute $servers $settings "command"
# Per-host: Retrieves key → executes → cleans up
-```plaintext
-
----
-
-**For Support**: See `docs/user/TROUBLESHOOTING_GUIDE.md`
-**For Integration**: See `provisioning/core/nulib/lib_provisioning/platform/secrets.nu`
+
+For Support : See docs/user/TROUBLESHOOTING_GUIDE.md
+For Integration : See provisioning/core/nulib/lib_provisioning/platform/secrets.nu
@@ -66842,7 +62038,7 @@ ssh-batch-execute $servers $settings "command"
Source : provisioning/platform/kms-service/
-
+
Age : Fast, offline encryption (development)
RustyVault : Self-hosted Vault-compatible API
@@ -66850,7 +62046,7 @@ ssh-batch-execute $servers $settings "command"
AWS KMS : Cloud-native key management
HashiCorp Vault : Enterprise secrets management
-
+
┌─────────────────────────────────────────────────────────┐
│ KMS Service │
├─────────────────────────────────────────────────────────┤
@@ -66868,14 +62064,10 @@ ssh-batch-execute $servers $settings "command"
│ ├─ RustyVault Client (self-hosted) │
│ └─ Cosmian KMS Client (enterprise) │
└─────────────────────────────────────────────────────────┘
-```plaintext
-
-## Quick Start
-
-### Development Setup (Age)
-
-```bash
-# 1. Generate Age keys
+
+
+
+# 1. Generate Age keys
mkdir -p ~/.config/provisioning/age
age-keygen -o ~/.config/provisioning/age/private_key.txt
age-keygen -y ~/.config/provisioning/age/private_key.txt > ~/.config/provisioning/age/public_key.txt
@@ -66886,48 +62078,35 @@ export PROVISIONING_ENV=dev
# 3. Start KMS service
cd provisioning/platform/kms-service
cargo run --bin kms-service
-```plaintext
-
-### Production Setup (Cosmian)
-
-```bash
-# Set environment variables
+
+
+# Set environment variables
export PROVISIONING_ENV=prod
export COSMIAN_KMS_URL=https://your-kms.example.com
export COSMIAN_API_KEY=your-api-key-here
# Start KMS service
cargo run --bin kms-service
-```plaintext
-
-## REST API Examples
-
-### Encrypt Data
-
-```bash
-curl -X POST http://localhost:8082/api/v1/kms/encrypt \
+
+
+
+curl -X POST http://localhost:8082/api/v1/kms/encrypt \
-H "Content-Type: application/json" \
-d '{
"plaintext": "SGVsbG8sIFdvcmxkIQ==",
"context": "env=prod,service=api"
}'
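Note that the plaintext field is base64-encoded; the payload in the request above is simply "Hello, World!" encoded:

```shell
# Encode request plaintext as base64 before calling the encrypt endpoint.
b64=$(printf 'Hello, World!' | base64)
```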
-```plaintext
-
-### Decrypt Data
-
-```bash
-curl -X POST http://localhost:8082/api/v1/kms/decrypt \
+
+
+curl -X POST http://localhost:8082/api/v1/kms/decrypt \
-H "Content-Type: application/json" \
-d '{
"ciphertext": "...",
"context": "env=prod,service=api"
}'
-```plaintext
-
-## Nushell CLI Integration
-
-```bash
-# Encrypt data
+
+
+# Encrypt data
"secret-data" | kms encrypt
"api-key" | kms encrypt --context "env=prod,service=api"
@@ -66944,37 +62123,32 @@ kms health
# Encrypt/decrypt files
kms encrypt-file config.yaml
kms decrypt-file config.yaml.enc
-```plaintext
-
-## Backend Comparison
-
-| Feature | Age | RustyVault | Cosmian KMS | AWS KMS | Vault |
-|---------|-----|------------|-------------|---------|-------|
-| **Setup** | Simple | Self-hosted | Server setup | AWS account | Enterprise |
-| **Speed** | Very fast | Fast | Fast | Fast | Fast |
-| **Network** | No | Yes | Yes | Yes | Yes |
-| **Key Rotation** | Manual | Automatic | Automatic | Automatic | Automatic |
-| **Data Keys** | No | Yes | Yes | Yes | Yes |
-| **Audit Logging** | No | Yes | Full | Full | Full |
-| **Confidential** | No | No | Yes (SGX/SEV) | No | No |
-| **License** | MIT | Apache 2.0 | Proprietary | Proprietary | BSL/Enterprise |
-| **Cost** | Free | Free | Paid | Paid | Paid |
-| **Use Case** | Dev/Test | Self-hosted | Privacy | AWS Cloud | Enterprise |
-
-## Integration Points
-
-1. **Config Encryption** (SOPS Integration)
-2. **Dynamic Secrets** (Provider API Keys)
-3. **SSH Key Management**
-4. **Orchestrator** (Workflow Data)
-5. **Control Center** (Audit Logs)
-
-## Deployment
-
-### Docker
-
-```dockerfile
-FROM rust:1.70 as builder
+
+
+Feature Age RustyVault Cosmian KMS AWS KMS Vault
+Setup Simple Self-hosted Server setup AWS account Enterprise
+Speed Very fast Fast Fast Fast Fast
+Network No Yes Yes Yes Yes
+Key Rotation Manual Automatic Automatic Automatic Automatic
+Data Keys No Yes Yes Yes Yes
+Audit Logging No Yes Full Full Full
+Confidential No No Yes (SGX/SEV) No No
+License MIT Apache 2.0 Proprietary Proprietary BSL/Enterprise
+Cost Free Free Paid Paid Paid
+Use Case Dev/Test Self-hosted Privacy AWS Cloud Enterprise
+
+
+
+
+Config Encryption (SOPS Integration)
+Dynamic Secrets (Provider API Keys)
+SSH Key Management
+Orchestrator (Workflow Data)
+Control Center (Audit Logs)
+
+
+
+FROM rust:1.70 as builder
WORKDIR /app
COPY . .
RUN cargo build --release
@@ -66985,12 +62159,9 @@ RUN apt-get update && \
rm -rf /var/lib/apt/lists/*
COPY --from=builder /app/target/release/kms-service /usr/local/bin/
ENTRYPOINT ["kms-service"]
-```plaintext
-
-### Kubernetes
-
-```yaml
-apiVersion: apps/v1
+
+
+apiVersion: apps/v1
kind: Deployment
metadata:
name: kms-service
@@ -67008,29 +62179,28 @@ spec:
value: "https://kms.example.com"
ports:
- containerPort: 8082
-```plaintext
-
-## Security Best Practices
-
-1. **Development**: Use Age for dev/test only, never for production secrets
-2. **Production**: Always use Cosmian KMS with TLS verification enabled
-3. **API Keys**: Never hardcode, use environment variables
-4. **Key Rotation**: Enable automatic rotation (90 days recommended)
-5. **Context Encryption**: Always use encryption context (AAD)
-6. **Network Access**: Restrict KMS service access with firewall rules
-7. **Monitoring**: Enable health checks and monitor operation metrics
-
-## Related Documentation
-
-- **User Guide**: [KMS Guide](../user/RUSTYVAULT_KMS_GUIDE.md)
-- **Migration**: [KMS Simplification](../migration/KMS_SIMPLIFICATION.md)
+
+
+Development : Use Age for dev/test only, never for production secrets
+Production : Always use Cosmian KMS with TLS verification enabled
+API Keys : Never hardcode, use environment variables
+Key Rotation : Enable automatic rotation (90 days recommended)
+Context Encryption : Always use encryption context (AAD)
+Network Access : Restrict KMS service access with firewall rules
+Monitoring : Enable health checks and monitor operation metrics
+
+
+
Complete guide to using Gitea integration for workspace management, extension distribution, and collaboration.
Version: 1.0.0
Last Updated: 2025-10-06
-
+
Overview
Setup
@@ -67042,7 +62212,7 @@ spec:
Troubleshooting
-
+
The Gitea integration provides:
Workspace Git Integration : Version control for workspaces
@@ -67051,7 +62221,7 @@ spec:
Collaboration : Share workspaces and extensions across teams
Service Management : Deploy and manage local Gitea instance
-
+
┌─────────────────────────────────────────────────────────┐
│ Provisioning System │
├─────────────────────────────────────────────────────────┤
@@ -67074,27 +62244,20 @@ spec:
│ Gitea Service │
│ (Local/Remote)│
└────────────────┘
-```plaintext
-
----
-
-## Setup
-
-### Prerequisites
-
-- **Nushell 0.107.1+**
-- **Git** installed and configured
-- **Docker** (for local Gitea deployment) or access to remote Gitea instance
-- **SOPS** (for encrypted token storage)
-
-### Configuration
-
-#### 1. Add Gitea Configuration to KCL
-
-Edit your `provisioning/kcl/modes.k` or workspace config:
-
-```kcl
-import provisioning.gitea as gitea
+
+
+
+
+
+Nushell 0.107.1+
+Git installed and configured
+Docker (for local Gitea deployment) or access to remote Gitea instance
+SOPS (for encrypted token storage)
+
+
+
+Edit your provisioning/schemas/modes.ncl or workspace config:
+import provisioning.gitea as gitea
# Local Docker deployment
_gitea_config = gitea.GiteaConfig {
@@ -67128,33 +62291,27 @@ _gitea_remote = gitea.GiteaConfig {
username = "myuser"
}
}
-```plaintext
-
-#### 2. Create Gitea Access Token
-
-For local Gitea:
-
-1. Start Gitea: `provisioning gitea start`
-2. Open <http://localhost:3000>
-3. Register admin account
-4. Go to Settings → Applications → Generate New Token
-5. Save token to encrypted file:
-
-```bash
-# Create encrypted token file
+
+
+For local Gitea:
+
+Start Gitea: provisioning gitea start
+Open http://localhost:3000
+Register admin account
+Go to Settings → Applications → Generate New Token
+Save token to encrypted file:
+
+# Create encrypted token file
echo "your-gitea-token" | sops --encrypt /dev/stdin > ~/.provisioning/secrets/gitea-token.enc
-```plaintext
-
-For remote Gitea:
-
-1. Login to your Gitea instance
-2. Generate personal access token
-3. Save encrypted as above
-
-#### 3. Verify Setup
-
-```bash
-# Check Gitea status
+
+For remote Gitea:
+
+Login to your Gitea instance
+Generate personal access token
+Save encrypted as above
+
+
+# Check Gitea status
provisioning gitea status
# Validate token
@@ -67162,46 +62319,34 @@ provisioning gitea auth validate
# Show current user
provisioning gitea user
-```plaintext
-
----
-
-## Workspace Git Integration
-
-### Initialize Workspace with Git
-
-When creating a new workspace, enable git integration:
-
-```bash
-# Initialize new workspace with Gitea
+
+
+
+
+When creating a new workspace, enable git integration:
+# Initialize new workspace with Gitea
provisioning workspace init my-workspace --git --remote gitea
# Or initialize existing workspace
cd workspace_my-workspace
provisioning gitea workspace init . my-workspace --remote gitea
-```plaintext
-
-This will:
-
-1. Initialize git repository in workspace
-2. Create repository on Gitea (`workspaces/my-workspace`)
-3. Add remote origin
-4. Push initial commit
-
-### Clone Existing Workspace
-
-```bash
-# Clone from Gitea
+
+This will:
+
+Initialize git repository in workspace
+Create repository on Gitea (workspaces/my-workspace)
+Add remote origin
+Push initial commit
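Steps 1 and 3 correspond to plain git operations, sketched below; step 2 (creating the repository on Gitea) goes through the Gitea API, and the remote URL here is illustrative.

```shell
# Initialize a throwaway workspace repo and wire up the Gitea remote.
ws=$(mktemp -d)
cd "$ws"
git init -q
git remote add origin http://localhost:3000/workspaces/my-workspace.git
```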
+
+
+# Clone from Gitea
provisioning workspace clone workspaces/my-workspace ./workspace_my-workspace
# Or using full identifier
provisioning workspace clone my-workspace ./workspace_my-workspace
-```plaintext
-
-### Push/Pull Changes
-
-```bash
-# Push workspace changes
+
+
+# Push workspace changes
cd workspace_my-workspace
provisioning workspace push --message "Updated infrastructure configs"
@@ -67210,12 +62355,9 @@ provisioning workspace pull
# Sync (pull + push)
provisioning workspace sync
-```plaintext
-
-### Branch Management
-
-```bash
-# Create branch
+
+
+# Create branch
provisioning workspace branch create feature-new-cluster
# Switch branch
@@ -67226,12 +62368,9 @@ provisioning workspace branch list
# Delete branch
provisioning workspace branch delete feature-new-cluster
-```plaintext
-
-### Git Status
-
-```bash
-# Get workspace git status
+
+
+# Get workspace git status
provisioning workspace git status
# Show uncommitted changes
@@ -67239,24 +62378,18 @@ provisioning workspace git diff
# Show staged changes
provisioning workspace git diff --staged
-```plaintext
-
----
-
-## Workspace Locking
-
-Distributed locking prevents concurrent modifications to workspaces using Gitea issues.
-
-### Lock Types
-
-- **read**: Multiple readers allowed, blocks writers
-- **write**: Exclusive access, blocks all other locks
-- **deploy**: Exclusive access for deployments
-
-### Acquire Lock
-
-```bash
-# Acquire write lock
+
+
+
+Distributed locking prevents concurrent modifications to workspaces using Gitea issues.
+
+
+read : Multiple readers allowed, blocks writers
+write : Exclusive access, blocks all other locks
+deploy : Exclusive access for deployments
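The compatibility rule above is small enough to sketch directly: only two read locks coexist, while write and deploy conflict with everything.

```shell
# $1 = held lock type, $2 = requested lock type; succeeds when compatible.
locks_compatible() {
  [ "$1" = "read" ] && [ "$2" = "read" ]
}
```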
+
+
+# Acquire write lock
provisioning gitea lock acquire my-workspace write \
--operation "Deploying servers" \
--expiry "2025-10-06T14:00:00Z"
@@ -67266,12 +62399,9 @@ provisioning gitea lock acquire my-workspace write \
# Lock ID: 42
# Type: write
# User: provisioning
-```plaintext
-
-### Check Lock Status
-
-```bash
-# List locks for workspace
+
+
+# List locks for workspace
provisioning gitea lock list my-workspace
# List all active locks
@@ -67279,53 +62409,34 @@ provisioning gitea lock list
# Get lock details
provisioning gitea lock info my-workspace 42
-```plaintext
-
-### Release Lock
-
-```bash
-# Release lock
+
+
+# Release lock
provisioning gitea lock release my-workspace 42
-```plaintext
-
-### Force Release Lock (Admin)
-
-```bash
-# Force release stuck lock
+
+
+# Force release stuck lock
provisioning gitea lock force-release my-workspace 42 \
--reason "Deployment failed, releasing lock"
-```plaintext
-
-### Automatic Locking
-
-Use `with-workspace-lock` for automatic lock management:
-
-```nushell
-use lib_provisioning/gitea/locking.nu *
+
+
+Use with-workspace-lock for automatic lock management:
+use lib_provisioning/gitea/locking.nu *
with-workspace-lock "my-workspace" "deploy" "Server deployment" {
# Your deployment code here
# Lock automatically released on completion or error
}
-```plaintext
-
-### Lock Cleanup
-
-```bash
-# Cleanup expired locks
+
+
+# Cleanup expired locks
provisioning gitea lock cleanup
-```plaintext
-
----
-
-## Extension Publishing
-
-Publish taskservs, providers, and clusters as versioned releases on Gitea.
-
-### Publish Extension
-
-```bash
-# Publish taskserv
+
+
+
+Publish taskservs, providers, and clusters as versioned releases on Gitea.
+
+# Publish taskserv
provisioning gitea extension publish \
./extensions/taskservs/database/postgres \
1.2.0 \
@@ -67341,49 +62452,37 @@ provisioning gitea extension publish \
provisioning gitea extension publish \
./extensions/clusters/buildkit \
1.0.0
-```plaintext
-
-This will:
-
-1. Validate extension structure
-2. Create git tag (if workspace is git repo)
-3. Package extension as `.tar.gz`
-4. Create Gitea release
-5. Upload package as release asset
-
-### List Published Extensions
-
-```bash
-# List all extensions
+
+This will:
+
+Validate extension structure
+Create git tag (if workspace is git repo)
+Package extension as .tar.gz
+Create Gitea release
+Upload package as release asset
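Step 3 (packaging) is essentially a tar of the extension directory; this sketch uses temporary stand-in paths rather than a real extension tree.

```shell
# Stand-in extension directory and output location (illustrative paths).
src=$(mktemp -d)
out=$(mktemp -d)
touch "$src/kcl.mod"   # placeholder for real extension contents

# Package the extension directory as a versioned .tar.gz.
tar -C "$src" -czf "$out/postgres-1.2.0.tar.gz" .
```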
+
+
+# List all extensions
provisioning gitea extension list
# Filter by type
provisioning gitea extension list --type taskserv
provisioning gitea extension list --type provider
provisioning gitea extension list --type cluster
-```plaintext
-
-### Download Extension
-
-```bash
-# Download specific version
+
+
+# Download specific version
provisioning gitea extension download postgres 1.2.0 \
--destination ./extensions/taskservs/database
# Extension is downloaded and extracted automatically
```

### Extension Metadata

```bash
# Get extension information
provisioning gitea extension info postgres 1.2.0
```

### Publishing Workflow

```bash
# 1. Make changes to extension
cd extensions/taskservs/database/postgres
# 2. Update version in schemas/manifest.toml

# 3. Test the extension locally

# 4. Commit changes
git commit -m "Release v1.2.0"

# 5. Publish to Gitea
provisioning gitea extension publish . 1.2.0
```

---

## Service Management

### Start/Stop Gitea

```bash
# Start Gitea (local mode)
provisioning gitea start
# Stop Gitea
provisioning gitea stop

# Restart Gitea
provisioning gitea restart
```

### Check Status

```bash
# Get service status
provisioning gitea status
# Output:
# URL: http://localhost:3000
# Container: provisioning-gitea
# Health: ✓ OK
```

### View Logs

```bash
# View recent logs
provisioning gitea logs
# Follow logs
provisioning gitea logs --follow

# Show specific number of lines
provisioning gitea logs --lines 200
```

### Install Gitea Binary

```bash
# Install latest version
provisioning gitea install
# Install specific version
provisioning gitea install 1.21.0

# Custom install directory
provisioning gitea install --install-dir ~/bin
```

---

## API Reference

### Repository Operations

```nushell
use lib_provisioning/gitea/api_client.nu *

# Create repository
create-repository "my-org" "my-repo" "Description" true

# Delete repository
delete-repository "my-org" "my-repo" --force

# List repositories
list-repositories "my-org"
```

### Release Operations

```nushell
# Create release
create-release "my-org" "my-repo" "v1.0.0" "Release Name" "Notes"
# Upload asset

# Get release by tag
get-release-by-tag "my-org" "my-repo" "v1.0.0"

# List releases
list-releases "my-org" "my-repo"
```

### Workspace Operations

```nushell
use lib_provisioning/gitea/workspace_git.nu *

# Initialize workspace git
init-workspace-git "./workspace_test" "test" --remote "gitea"

# Push changes
push-workspace "./workspace_my-workspace" "Updated configs"

# Pull changes
pull-workspace "./workspace_my-workspace"
```

### Locking Operations

```nushell
use lib_provisioning/gitea/locking.nu *

# Acquire lock
let lock = acquire-workspace-lock "my-workspace" "write" "Deployment"

# Check if locked
is-workspace-locked "my-workspace" "write"

# List locks
list-workspace-locks "my-workspace"
```

---

## Troubleshooting

### Gitea Not Starting

**Problem**: `provisioning gitea start` fails

**Solutions**:

```bash
# Check Docker status
docker ps
# Check if port is in use
lsof -i :3000

# Check logs
provisioning gitea logs

# Remove old container
docker rm -f provisioning-gitea
provisioning gitea start
```

### Token Authentication Failed

**Problem**: `provisioning gitea auth validate` returns false

**Solutions**:

```bash
# Verify token file exists
ls ~/.provisioning/secrets/gitea-token.enc
# Test decryption
sops --decrypt ~/.provisioning/secrets/gitea-token.enc

# Regenerate token in Gitea UI
# Save new token
echo "new-token" | sops --encrypt /dev/stdin > ~/.provisioning/secrets/gitea-token.enc
```
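The encrypt–store–decrypt round trip behind the token workflow can be illustrated with openssl as a stand-in for SOPS. This is only a shape-of-the-flow sketch under stated assumptions: the project's actual mechanism is `sops --encrypt`/`sops --decrypt`, and the passphrase and paths below are example values.

```shell
# Stand-in for the SOPS round trip: encrypt a token to disk, decrypt on use.
# Passphrase and file location are example values only.
PASS="example-passphrase"
TOKEN_FILE="$(mktemp)"

# Encrypt the token at rest
printf 'new-token' | openssl enc -aes-256-cbc -pbkdf2 -pass pass:"$PASS" -out "$TOKEN_FILE"

# Decrypt it when the CLI needs to authenticate
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:"$PASS" -in "$TOKEN_FILE"
```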

### Cannot Push to Repository

**Problem**: Git push fails with authentication error

**Solutions**:

```bash
# Check remote URL
cd workspace_my-workspace
git remote -v

# Update remote URL with token
git remote set-url origin http://username:token@localhost:3000/org/repo.git

# Or use SSH
git remote set-url origin git@localhost:workspaces/my-workspace.git
```

### Lock Already Exists

**Problem**: Cannot acquire lock, workspace already locked

**Solutions**:

```bash
# Check active locks
provisioning gitea lock list my-workspace
# Get lock details

# Get lock details
provisioning gitea lock info my-workspace 42

# If lock is stale, force release
provisioning gitea lock force-release my-workspace 42 --reason "Stale lock"
```
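Whether a lock is "stale" comes down to comparing its acquired-at timestamp against a time-to-live. A hedged sketch: the real lock records live in Gitea issues, and the timestamp and TTL below are arbitrary example values.

```shell
# Staleness-check sketch: example epoch timestamp and TTL, not real lock data.
LOCK_TS=1700000000   # acquired-at (epoch seconds), example value
TTL=3600             # treat locks older than one hour as stale
NOW="$(date +%s)"

if [ $(( NOW - LOCK_TS )) -gt "$TTL" ]; then
  echo "stale: safe to force-release"
else
  echo "active: do not force-release"
fi
```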

### Extension Validation Failed

**Problem**: Extension publishing fails validation

**Solutions**:

```bash
# Check extension structure
ls -la extensions/taskservs/myservice/
# Required:
# - schemas/manifest.toml
# - schemas/*.ncl (main schema file)

# Verify manifest.toml format
cat extensions/taskservs/myservice/schemas/manifest.toml
# Should have:
# [package]
# name = "myservice"
# version = "1.0.0"
```

### Docker Volume Permissions

**Problem**: Gitea Docker container has permission errors

**Solutions**:

```bash
# Fix data directory permissions
sudo chown -R 1000:1000 ~/.provisioning/gitea
# Or recreate with correct permissions
provisioning gitea stop --remove
rm -rf ~/.provisioning/gitea
provisioning gitea start
```

---

## Best Practices

### Workspace Management

1. **Always use locking** for concurrent operations
2. **Commit frequently** with descriptive messages
3. **Use branches** for experimental changes
4. **Sync before operations** to get latest changes

### Extension Publishing

1. **Follow semantic versioning** (MAJOR.MINOR.PATCH)
2. **Update CHANGELOG.md** for each release
3. **Test extensions** before publishing
4. **Use prerelease flag** for beta versions

### Security

1. **Encrypt tokens** with SOPS
2. **Use private repositories** for sensitive workspaces
3. **Rotate tokens** regularly
4. **Audit lock history** via Gitea issues

### Performance

1. **Cleanup expired locks** periodically
2. **Use shallow clones** for large workspaces
3. **Archive old releases** to reduce storage
4. **Monitor Gitea resources** for local deployments
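The semantic-versioning rule above can be enforced with a small guard before publishing. A minimal sketch, assuming plain MAJOR.MINOR.PATCH only (no prerelease or build metadata):

```shell
# Minimal semver guard: accepts MAJOR.MINOR.PATCH, rejects anything else.
is_semver() {
  printf '%s' "$1" | grep -Eq '^[0-9]+\.[0-9]+\.[0-9]+$'
}

is_semver "1.2.0" && echo "1.2.0 ok"
is_semver "v1.2" || echo "v1.2 rejected"
```

A guard like this pairs naturally with the publishing workflow: run it against the version argument before tagging, so a malformed version never reaches a git tag or a Gitea release.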

---

## Advanced Usage

### Custom Gitea Deployment

Edit `docker-compose.yml`:

```yaml
services:
gitea:
image: gitea/gitea:1.21
environment:
# Add custom settings
volumes:
- /custom/path/gitea:/data
```

### Webhooks Integration

Configure webhooks for automated workflows:

```kcl
import provisioning.gitea as gitea

_webhook = gitea.GiteaWebhook {
url = "https://provisioning.example.com/api/webhooks/gitea"
events = ["push", "pull_request", "release"]
secret = "webhook-secret"
}
```
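On the receiving side, Gitea signs webhook payloads with an HMAC-SHA256 of the request body using the configured secret (sent in the `X-Gitea-Signature` header). A minimal verification sketch; the secret and payload are the example values from the config above, and the handler wiring is illustrative:

```shell
# Compute the expected HMAC-SHA256 signature for a webhook body and compare
# it with the signature header (values here are examples).
SECRET="webhook-secret"
PAYLOAD='{"action":"push"}'

expected="$(printf '%s' "$PAYLOAD" | openssl dgst -sha256 -hmac "$SECRET" | awk '{print $NF}')"
received="$expected"   # in a real handler: the X-Gitea-Signature header value

if [ "$received" = "$expected" ]; then
  echo "signature ok"
fi
```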

### Batch Extension Publishing

```bash
# Publish all taskservs with same version
provisioning gitea extension publish-batch \
./extensions/taskservs \
1.0.0 \
--extension-type taskserv
```

---

## References

- **Gitea API Documentation**: <https://docs.gitea.com/api/>
- **Nickel Schema**: `/Users/Akasha/project-provisioning/provisioning/schemas/gitea.ncl`
- **API Client**: `/Users/Akasha/project-provisioning/provisioning/core/nulib/lib_provisioning/gitea/api_client.nu`
- **Workspace Git**: `/Users/Akasha/project-provisioning/provisioning/core/nulib/lib_provisioning/gitea/workspace_git.nu`
- **Locking**: `/Users/Akasha/project-provisioning/provisioning/core/nulib/lib_provisioning/gitea/locking.nu`

---

**Version:** 1.0.0
**Maintained By:** Provisioning Team
**Last Updated:** 2025-10-06
# Service Mesh and Ingress Controller Selection Guide

This guide helps you choose between different service mesh and ingress controller options for your Kubernetes deployments.
- ✅ Comprehensive feature set
- ✅ Built-in Istio Gateway ingress controller
- ✅ Advanced traffic management
- ✅ Strong observability (Kiali, Grafana, Jaeger)
- ✅ Virtual services, destination rules, traffic policies
- ✅ Mutual TLS (mTLS) with automatic certificate rotation
- ✅ Canary deployments and traffic mirroring

**Resource Requirements**:

- CPU: 500m (Pilot) + 100m per gateway
- Memory: 2048Mi (Pilot) + 128Mi per gateway
- High overhead

**Pros**:
**Installation**:

```bash
provisioning taskserv create istio
```
-
----
-
-#### Linkerd
-
-**Version**: 2.16.0
-
-**Best for**: Lightweight, high-performance service mesh with minimal complexity
-
-**Key Features**:
-
-- ✅ Ultra-lightweight (minimal resource footprint)
-- ✅ Simple configuration
-- ✅ Automatic mTLS with certificate rotation
-- ✅ Fast sidecar startup (built in Rust)
-- ✅ Live traffic visualization
-- ✅ Service topology and dependency discovery
-- ✅ Golden metrics out of the box (latency, success rate, throughput)
-
-**Resource Requirements**:
-
-- CPU proxy: 100m request, 1000m limit
-- Memory proxy: 20Mi request, 250Mi limit
-- Very lightweight compared to Istio
-
-**Pros**:
-
-- Minimal resource overhead
-- Simple, intuitive configuration
-- Fast startup and deployment
-- Built in Rust for performance
-- Excellent golden metrics
-- Good for resource-constrained environments
-- Can run alongside Istio
-
-**Cons**:
-
-- Fewer advanced features than Istio
-- Requires external ingress controller
-- Smaller ecosystem and fewer integrations
-- Less feature-rich traffic management
-- Requires cert-manager for mTLS
-
-**Use when**:
-
-- You want simplicity and minimal overhead
-- Running on resource-constrained clusters
-- You prefer straightforward configuration
-- You don't need advanced traffic management
-- You're using Kubernetes 1.21+
-
-**Installation**:
-
-```bash
-# Linkerd requires cert-manager
+
+
+
+Version : 2.16.0
+Best for : Lightweight, high-performance service mesh with minimal complexity
+Key Features :
+
+✅ Ultra-lightweight (minimal resource footprint)
+✅ Simple configuration
+✅ Automatic mTLS with certificate rotation
+✅ Fast sidecar startup (built in Rust)
+✅ Live traffic visualization
+✅ Service topology and dependency discovery
+✅ Golden metrics out of the box (latency, success rate, throughput)
+
+Resource Requirements :
+
+CPU proxy: 100m request, 1000m limit
+Memory proxy: 20Mi request, 250Mi limit
+Very lightweight compared to Istio
+
+Pros :
+
+Minimal resource overhead
+Simple, intuitive configuration
+Fast startup and deployment
+Built in Rust for performance
+Excellent golden metrics
+Good for resource-constrained environments
+Can run alongside Istio
+
+Cons :
+
+Fewer advanced features than Istio
+Requires external ingress controller
+Smaller ecosystem and fewer integrations
+Less feature-rich traffic management
+Requires cert-manager for mTLS
+
+Use when :
+
+You want simplicity and minimal overhead
+Running on resource-constrained clusters
+You prefer straightforward configuration
+You don’t need advanced traffic management
+You’re using Kubernetes 1.21+
+
+Installation :
+# Linkerd requires cert-manager
provisioning taskserv create cert-manager
provisioning taskserv create linkerd
provisioning taskserv create nginx-ingress # Or traefik/contour
```

---

#### Cilium

**Version**: See existing Cilium taskserv

**Best for**: CNI-based networking with integrated service mesh

**Key Features**:

- ✅ CNI and service mesh in one solution
- ✅ eBPF-based for high performance
- ✅ Network policy enforcement
- ✅ Service mesh mode (optional)
- ✅ Hubble for observability
- ✅ Cluster mesh for multi-cluster

**Pros**:

- Replaces CNI plugin entirely
- High-performance eBPF kernel networking
- Can serve as both CNI and service mesh
- No sidecar needed (uses eBPF)
- Network policy support

**Cons**:

- Requires Linux kernel with eBPF support
- Service mesh mode is secondary feature
- More complex than Linkerd
- Not as mature in service mesh role

**Use when**:

- You need both CNI and service mesh
- You're on modern Linux kernels with eBPF
- You want kernel-level networking

---

### Ingress Controller Options

#### Nginx Ingress

**Version**: 1.12.0

**Best for**: Most Kubernetes deployments - proven, reliable, widely supported

**Key Features**:

- ✅ Battle-tested and production-proven
- ✅ Most popular ingress controller
- ✅ Extensive documentation and community
- ✅ Rich configuration options
- ✅ SSL/TLS termination
- ✅ URL rewriting and routing
- ✅ Rate limiting and DDoS protection

**Pros**:

- Proven stability in production
- Widest community and ecosystem
- Extensive documentation
- Multiple commercial support options
- Works with any service mesh
- Moderate resource footprint

**Cons**:

- Configuration can be verbose
- Limited middleware ecosystem (compared to Traefik)
- No automatic TLS with Let's Encrypt
- Configuration via annotations

**Use when**:

- You want proven stability
- Wide community support is important
- You need traditional ingress controller
- You're building production systems
- You want abundant documentation

**Installation**:

```bash
provisioning taskserv create nginx-ingress
```

**With Linkerd**:

```bash
provisioning taskserv create linkerd
provisioning taskserv create nginx-ingress
```

---

#### Traefik

**Version**: 3.3.0

**Best for**: Modern cloud-native applications with dynamic service discovery

**Key Features**:

- ✅ Automatic service discovery
- ✅ Native Let's Encrypt support
- ✅ Middleware system for advanced routing
- ✅ Built-in dashboard and metrics
- ✅ API-driven configuration
- ✅ Dynamic configuration updates
- ✅ Support for multiple protocols (HTTP, TCP, gRPC)

**Pros**:

- Modern, cloud-native design
- Automatic TLS with Let's Encrypt
- Middleware ecosystem for extensibility
- Built-in dashboard for monitoring
- Dynamic configuration without restart
- API-driven approach
- Growing community

**Cons**:

- Different configuration paradigm (IngressRoute CRD)
- Smaller community than Nginx
- Learning curve for traditional ops
- Less mature than Nginx

**Use when**:

- You want modern cloud-native features
- Automatic TLS is important
- You like middleware-based routing
- You want dynamic configuration
- You're building microservices platforms

**Installation**:

```bash
provisioning taskserv create traefik
```

**With Linkerd**:

```bash
provisioning taskserv create linkerd
provisioning taskserv create traefik
```

---

#### Contour

**Version**: 1.31.0

**Best for**: Envoy-based ingress with simple CRD configuration

**Key Features**:

- ✅ Envoy proxy backend (same as Istio)
- ✅ Simple CRD-based configuration
- ✅ HTTPProxy CRD for advanced routing
- ✅ Service delegation and composition
- ✅ External authorization
- ✅ Rate limiting support

**Pros**:

- Uses same Envoy proxy as Istio
- Simple but powerful configuration
- Good for multi-tenant clusters
- CRD-based (declarative)
- Good documentation

**Cons**:

- Smaller community than Nginx/Traefik
- Fewer integrations and plugins
- Less feature-rich than Traefik
- Fewer real-world examples

**Use when**:

- You want Envoy proxy for consistency with Istio
- You prefer simple configuration
- You like CRD-based approach
- You need multi-tenant support

**Installation**:

```bash
provisioning taskserv create contour
```

---

#### HAProxy Ingress

**Version**: 0.15.0

**Best for**: High-performance environments requiring advanced load balancing

**Key Features**:

- ✅ HAProxy backend for performance
- ✅ Advanced load balancing algorithms
- ✅ High throughput
- ✅ Flexible configuration
- ✅ Proven performance

**Pros**:

- Excellent performance
- Advanced load balancing options
- Battle-tested HAProxy backend
- Good for high-traffic scenarios

**Cons**:

- Less Kubernetes-native than others
- Smaller community
- Configuration complexity
- Fewer modern features

**Use when**:

- Performance is critical
- High traffic is expected
- You need advanced load balancing

---

## Recommended Combinations

### 1. Linkerd + Nginx Ingress (Recommended for most users)

**Why**: Lightweight mesh + proven ingress = great balance

```bash
provisioning taskserv create cert-manager
provisioning taskserv create linkerd
provisioning taskserv create nginx-ingress
```
-
----
-
-#### Traefik
-
-**Version**: 3.3.0
-
-**Best for**: Modern cloud-native applications with dynamic service discovery
-
-**Key Features**:
-
-- ✅ Automatic service discovery
-- ✅ Native Let's Encrypt support
-- ✅ Middleware system for advanced routing
-- ✅ Built-in dashboard and metrics
-- ✅ API-driven configuration
-- ✅ Dynamic configuration updates
-- ✅ Support for multiple protocols (HTTP, TCP, gRPC)
-
-**Pros**:
-
-- Modern, cloud-native design
-- Automatic TLS with Let's Encrypt
-- Middleware ecosystem for extensibility
-- Built-in dashboard for monitoring
-- Dynamic configuration without restart
-- API-driven approach
-- Growing community
-
-**Cons**:
-
-- Different configuration paradigm (IngressRoute CRD)
-- Smaller community than Nginx
-- Learning curve for traditional ops
-- Less mature than Nginx
-
-**Use when**:
-
-- You want modern cloud-native features
-- Automatic TLS is important
-- You like middleware-based routing
-- You want dynamic configuration
-- You're building microservices platforms
-
-**Installation**:
-
-```bash
-provisioning taskserv create traefik
-```plaintext
-
-**With Linkerd**:
-
-```bash
+
+Pros :
+
+Minimal overhead
+Simple to manage
+Proven stability
+Good observability
+
+Cons :
+
+Less advanced features than Istio
+
+
+
+Why : All-in-one service mesh with built-in gateway
+provisioning taskserv create istio
+
+Pros :
+
+Unified traffic management
+Powerful observability
+No external ingress needed
+Rich features
+
+Cons :
+
+Higher resource usage
+More complex
+
+
+
+Why : Lightweight mesh + modern ingress
+provisioning taskserv create cert-manager
provisioning taskserv create linkerd
provisioning taskserv create traefik

**Pros**:

- Minimal overhead
- Modern features
- Automatic TLS

---

### 4. No Mesh + Nginx Ingress (Simple deployments)

**Why**: Just get traffic in without service mesh

```bash
provisioning taskserv create nginx-ingress
```

**Pros**:

- Simplest setup
- Minimal overhead
- Proven stability

---

## Decision Matrix

| Requirement | Istio | Linkerd | Cilium | Nginx | Traefik | Contour | HAProxy |
|-------------|-------|---------|--------|-------|---------|---------|---------|
| Lightweight | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Simple Config | ❌ | ✅ | ⚠️ | ⚠️ | ✅ | ✅ | ❌ |
| Full Features | ✅ | ⚠️ | ✅ | ⚠️ | ✅ | ⚠️ | ✅ |
| Auto TLS | ❌ | ❌ | ❌ | ❌ | ✅ | ❌ | ❌ |
| Service Mesh | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ |
| Performance | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Community | ✅ | ✅ | ✅ | ✅ | ✅ | ⚠️ | ⚠️ |

## Migration Paths

### From Istio to Linkerd

1. Install Linkerd alongside Istio
2. Gradually migrate services (add Linkerd annotations)
3. Verify Linkerd handles traffic correctly
4. Install external ingress controller (Nginx/Traefik)
5. Update Istio Virtual Services to use new ingress
6. Remove Istio once migration complete

### Between Ingress Controllers

1. Install new ingress controller
2. Create duplicate Ingress resources pointing to new controller
3. Test with new ingress (use IngressClassName)
4. Update DNS/load balancer to point to new ingress
5. Drain connections from old ingress
6. Remove old ingress controller
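Step 2 of the ingress migration (duplicating resources for the new controller) mostly amounts to swapping `ingressClassName`. A hedged sed-based sketch on an example manifest fragment; real Ingress resources carry far more fields:

```shell
# Illustrative only: swap the ingressClassName in a copied manifest fragment.
OLD="$(mktemp)"; NEW="$(mktemp)"
cat > "$OLD" <<'EOF'
spec:
  ingressClassName: nginx
EOF

# Duplicate the resource for the new controller by rewriting the class name
sed 's/ingressClassName: nginx/ingressClassName: traefik/' "$OLD" > "$NEW"
cat "$NEW"
```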

---

## Examples

Complete examples of how to configure service meshes and ingress controllers in your workspace.

### Example 1: Linkerd + Nginx Ingress Deployment

This is the recommended configuration for most deployments - lightweight and proven.

#### Step 1: Create Taskserv Configurations

**File**: `workspace/infra/my-cluster/taskservs/cert-manager.ncl`

```kcl
import provisioning.extensions.taskservs.infrastructure.cert_manager as cm
# Cert-manager is required for Linkerd's mTLS certificates
_taskserv = cm.CertManager {
version = "v1.15.0"
namespace = "cert-manager"
}
```

**File**: `workspace/infra/my-cluster/taskservs/linkerd.ncl`

```kcl
import provisioning.extensions.taskservs.networking.linkerd as linkerd
# Lightweight service mesh with minimal overhead
_taskserv = linkerd.Linkerd {
proxy_memory_limit = "250Mi"
}
}
```

**File**: `workspace/infra/my-cluster/taskservs/nginx-ingress.ncl`

```kcl
import provisioning.extensions.taskservs.networking.nginx_ingress as nginx
# Battle-tested ingress controller
_taskserv = nginx.NginxIngress {
memory_limit = "500Mi"
}
}
```

#### Step 2: Deploy Service Mesh Components

```bash
# Install cert-manager (prerequisite for Linkerd)
provisioning taskserv create cert-manager
# Install Linkerd service mesh
provisioning taskserv create linkerd

# Install Nginx ingress controller
provisioning taskserv create nginx-ingress

# Verify installation
linkerd check
kubectl get deploy -n ingress-nginx
```

#### Step 3: Configure Application Deployment

**File**: `workspace/infra/my-cluster/clusters/web-api.ncl`

```kcl
import provisioning.kcl.k8s_deploy as k8s
import provisioning.extensions.taskservs.networking.nginx_ingress as nginx
# Define the web API service with Linkerd service mesh and Nginx ingress
]
}
}
```

#### Step 4: Create Ingress Resource

**File**: `workspace/infra/my-cluster/ingress/web-api-ingress.yaml`

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: web-api
name: web-api
port:
number: 8080
```
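Before applying, it can be worth checking that a workload manifest actually carries Linkerd's injection annotation (`linkerd.io/inject: enabled`), since a missed annotation silently leaves the pod out of the mesh. A small grep-based sketch on an example fragment:

```shell
# Sanity-check an example manifest fragment for the Linkerd injection annotation.
MANIFEST="$(mktemp)"
cat > "$MANIFEST" <<'EOF'
metadata:
  annotations:
    linkerd.io/inject: enabled
EOF

if grep -q 'linkerd.io/inject: enabled' "$MANIFEST"; then
  echo "mesh injection enabled"
fi
```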

---

### Example 2: Istio (Standalone) Deployment

Complete service mesh with built-in ingress gateway.

#### Step 1: Install Istio

**File**: `workspace/infra/my-cluster/taskservs/istio.ncl`

```kcl
import provisioning.extensions.taskservs.networking.istio as istio
# Full-featured service mesh
_taskserv = istio.Istio {
gateway_memory = "128Mi"
}
}
```

#### Step 2: Deploy Istio

```bash
# Install Istio
provisioning taskserv create istio
# Verify installation
istioctl verify-install
```

#### Step 3: Configure Application with Istio

**File**: `workspace/infra/my-cluster/clusters/api-service.ncl`

```kcl
import provisioning.kcl.k8s_deploy as k8s
service = k8s.K8sDeploy {
name = "api-service"
]
}
}
```

---

### Example 3: Linkerd + Traefik (Modern Cloud-Native)

Lightweight mesh with modern ingress controller and automatic TLS.

#### Step 1: Create Configurations

**File**: `workspace/infra/my-cluster/taskservs/linkerd.ncl`

```kcl
import provisioning.extensions.taskservs.networking.linkerd as linkerd
_taskserv = linkerd.Linkerd {
version = "2.16.0"
viz_enabled = True
prometheus = True
}
```

**File**: `workspace/infra/my-cluster/taskservs/traefik.ncl`

```kcl
import provisioning.extensions.taskservs.networking.traefik as traefik
# Modern ingress with middleware and auto-TLS
_taskserv = traefik.Traefik {
  # …
memory_limit = "512Mi"
}
}
```

#### Step 2: Deploy

```bash
provisioning taskserv create cert-manager
provisioning taskserv create linkerd
provisioning taskserv create traefik
```

#### Step 3: Create Traefik IngressRoute

**File**: `workspace/infra/my-cluster/ingress/api-route.yaml`

```yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: api
# …
certResolver: letsencrypt
domains:
- main: api.example.com
```

---

### Example 4: Minimal Setup (Just Nginx, No Service Mesh)

For simple deployments that don't need service mesh.

#### Step 1: Install Nginx

**File**: `workspace/infra/my-cluster/taskservs/nginx-ingress.ncl`

```kcl
import provisioning.extensions.taskservs.networking.nginx_ingress as nginx
_taskserv = nginx.NginxIngress {
version = "1.12.0"
replicas = 2
prometheus_metrics = True
}
```

#### Step 2: Deploy

```bash
provisioning taskserv create nginx-ingress
```

#### Step 3: Application Configuration

**File**: `workspace/infra/my-cluster/clusters/simple-app.ncl`

```kcl
import provisioning.kcl.k8s_deploy as k8s
service = k8s.K8sDeploy {
name = "simple-app"
  # …
ports = [{ name = "http", typ = "TCP", target = 80 }]
}
}
```

#### Step 4: Create Ingress

**File**: `workspace/infra/my-cluster/ingress/simple-app-ingress.yaml`

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: simple-app
# …
name: simple-app
port:
number: 80
```

---

## Enable Sidecar Injection for Services

### For Linkerd

```bash
# Annotate namespace for automatic sidecar injection
kubectl annotate namespace production linkerd.io/inject=enabled
# Or add annotation to specific deployment
kubectl annotate pod my-pod linkerd.io/inject=enabled
```

### For Istio

```bash
# Label namespace for automatic sidecar injection
kubectl label namespace production istio-injection=enabled
# Verify injection
kubectl describe pod -n production | grep istio-proxy
```

---

## Monitoring and Observability

### Linkerd Dashboard

```bash
# Open Linkerd Viz dashboard
linkerd viz dashboard
# View service topology
linkerd viz stat ns
linkerd viz tap -n production
```

### Istio Dashboards

```bash
# Kiali (service mesh visualization)
kubectl port-forward -n istio-system svc/kiali 20000:20000
# http://localhost:20000
# …
kubectl port-forward -n istio-system svc/grafana 3000:3000
# Jaeger (distributed tracing)
kubectl port-forward -n istio-system svc/jaeger-query 16686:16686
# http://localhost:16686
```

### Traefik Dashboard

```bash
# Forward Traefik dashboard
kubectl port-forward -n traefik svc/traefik 8080:8080
# http://localhost:8080/dashboard/
```

---

## Quick Reference

### Installation Commands

#### Service Mesh - Istio

```bash
# Install Istio (includes built-in ingress gateway)
provisioning taskserv create istio
# Verify installation
# …
kubectl label namespace default istio-injection=enabled
# View Kiali dashboard
kubectl port-forward -n istio-system svc/kiali 20000:20000
# Open: http://localhost:20000
```

#### Service Mesh - Linkerd

```bash
# Install cert-manager first (Linkerd requirement)
provisioning taskserv create cert-manager
# Install Linkerd
# …
kubectl annotate namespace default linkerd.io/inject=enabled
# View live dashboard
linkerd viz dashboard
```

#### Ingress Controllers

```bash
# Install Nginx Ingress (most popular)
provisioning taskserv create nginx-ingress
# Install Traefik (modern cloud-native)
# …
provisioning taskserv create contour
# Install HAProxy Ingress (high-performance)
provisioning taskserv create haproxy-ingress
```

### Common Installation Combinations

#### Option 1: Linkerd + Nginx Ingress (Recommended)

**Lightweight mesh + proven ingress**

```bash
# Step 1: Install cert-manager
provisioning taskserv create cert-manager
# Step 2: Install Linkerd
# …
kubectl get deploy -n ingress-nginx
# Step 5: Create sample application with Linkerd
kubectl annotate namespace default linkerd.io/inject=enabled
kubectl apply -f my-app.yaml
```

#### Option 2: Istio (Standalone)

**Full-featured service mesh with built-in gateway**

```bash
# Install Istio
provisioning taskserv create istio
# Verify
# …
kubectl label namespace default istio-injection=enabled
# Deploy applications
kubectl apply -f my-app.yaml
```

#### Option 3: Linkerd + Traefik

**Lightweight mesh + modern ingress with auto TLS**

```bash
# Install prerequisites
provisioning taskserv create cert-manager
# Install service mesh
# …
provisioning taskserv create traefik
# Enable sidecar injection
kubectl annotate namespace default linkerd.io/inject=enabled
```

#### Option 4: Just Nginx Ingress (No Mesh)

**Simple deployments without service mesh**

```bash
# Install ingress controller
provisioning taskserv create nginx-ingress
# Deploy applications
kubectl apply -f ingress.yaml
```

### Verification Commands

#### Check Linkerd

```bash
# Full system check
linkerd check
# Specific component checks
# …
linkerd check -n default   # Custom namespace
# View version
linkerd version --client
linkerd version --server
```

#### Check Istio

```bash
# Full system analysis
istioctl analyze
# By namespace
# …
istioctl verify-install
# Check version
istioctl version
-```plaintext
-
-#### Check Ingress Controllers
-
-```bash
-# List ingress resources
+
+
+# List ingress resources
kubectl get ingress -A
# Get ingress details
# …
kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx
# Traefik specific
kubectl get deploy -n traefik
kubectl logs -n traefik deployment/traefik
```

### Troubleshooting

#### Service Mesh Issues

```bash
# Linkerd - Check proxy status
linkerd check -n <namespace>
# Linkerd - View service topology
# …
kubectl describe pod -n <namespace>   # Look for istio-proxy container
# Istio - View traffic policies
istioctl analyze
```

#### Ingress Controller Issues

```bash
# Check ingress controller logs
kubectl logs -n ingress-nginx deployment/ingress-nginx-controller
kubectl logs -n traefik deployment/traefik
# …
kubectl describe ingress <name> -n <namespace>
# Check ingress controller service
kubectl get svc -n ingress-nginx
kubectl get svc -n traefik
```

### Uninstallation

#### Remove Linkerd

```bash
# Remove annotations from namespaces
kubectl annotate namespace <namespace> linkerd.io/inject- --all
# Uninstall Linkerd
linkerd uninstall | kubectl delete -f -
# Remove Linkerd namespace
kubectl delete namespace linkerd
```

#### Remove Istio

```bash
# Remove labels from namespaces
kubectl label namespace <namespace> istio-injection- --all
# Uninstall Istio
istioctl uninstall --purge
# Remove Istio namespace
kubectl delete namespace istio-system
```

#### Remove Ingress Controllers

```bash
# Nginx
helm uninstall ingress-nginx -n ingress-nginx
kubectl delete namespace ingress-nginx
# Traefik
helm uninstall traefik -n traefik
kubectl delete namespace traefik
```

### Performance Tuning

#### Linkerd Resource Limits

```kcl
# Adjust proxy resource limits in linkerd.ncl
_taskserv = linkerd.Linkerd {
resources: {
proxy_cpu_limit = "2000m" # Increase if needed
proxy_memory_limit = "512Mi" # Increase if needed
}
}
```

#### Istio Profile Selection

```kcl
# Different resource profiles available
profile = "default" # Full features (default)
profile = "demo" # Demo mode (more resources)
profile = "minimal" # Minimal (lower resources)
profile = "remote" # Control plane only (advanced)
```

---

## Complete Workspace Directory Structure

After implementing these examples, your workspace should look like:

```plaintext
workspace/infra/my-cluster/
├── taskservs/
│   ├── cert-manager.ncl        # For Linkerd mTLS
│   ├── linkerd.ncl             # Service mesh option
│   ├── istio.ncl               # OR Istio option
│   ├── nginx-ingress.ncl       # Ingress controller
│   └── traefik.ncl             # Alternative ingress
├── clusters/
│   ├── web-api.ncl             # Application with Linkerd + Nginx
│   ├── api-service.ncl         # Application with Istio
│   └── simple-app.ncl          # App without service mesh
├── ingress/
│ ├── web-api-ingress.yaml # Nginx Ingress resource
│ ├── api-route.yaml # Traefik IngressRoute
│ └── simple-app-ingress.yaml # Simple Ingress
└── config.toml # Infrastructure-specific config
```

---

## Next Steps

1. **Choose your deployment model** (Linkerd+Nginx, Istio, or plain Nginx)
2. **Create taskserv KCL files** in `workspace/infra/<cluster>/taskservs/`
3. **Install components** using `provisioning taskserv create`
4. **Create application deployments** with appropriate mesh/ingress configuration
5. **Monitor and observe** using the appropriate dashboard
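The directory layout these steps assume can be scaffolded in one command. This is just a convenience sketch using the example cluster name `my-cluster` from this guide; adjust the name for your own infrastructure:

```shell
#!/bin/sh
# Create the workspace skeleton used in the examples above
mkdir -p workspace/infra/my-cluster/taskservs \
         workspace/infra/my-cluster/clusters \
         workspace/infra/my-cluster/ingress
touch workspace/infra/my-cluster/config.toml
find workspace -type d | sort
```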
---

## Additional Resources

- **Linkerd Documentation**: <https://linkerd.io/>
- **Istio Documentation**: <https://istio.io/>
- **Nginx Ingress**: <https://kubernetes.github.io/ingress-nginx/>
- **Traefik Documentation**: <https://doc.traefik.io/>
- **Contour Documentation**: <https://projectcontour.io/>
- **Cilium Documentation**: <https://docs.cilium.io/>
---

# OCI Registry Integration

**Version**: 1.0.0
**Date**: 2025-10-06
**Audience**: Users and Developers

## Table of Contents

- Overview
- Quick Start
- …
- Troubleshooting

## Overview
The OCI registry integration enables distribution and management of provisioning extensions as OCI artifacts. This provides:
- **Standard Distribution**: Use industry-standard OCI registries
- …
- **Caching**: Efficient caching to reduce downloads
- **Security**: TLS, authentication, and vulnerability scanning support

### What Are OCI Artifacts?

OCI (Open Container Initiative) artifacts are packaged files distributed through container registries. Unlike Docker images, which contain applications, OCI artifacts can contain any type of content - in our case, provisioning extensions (KCL schemas, Nushell scripts, templates, etc.).
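To make this concrete, here is a hypothetical extension packaged as a plain archive. The layout (a manifest plus schemas and scripts) mirrors the extension structure described later in this guide, but every name here is illustrative, not a real extension:

```shell
#!/bin/sh
# Build a mock extension and archive it, roughly as an artifact layer would be
mkdir -p demo-ext/schemas demo-ext/scripts
printf 'name: demo\ntype: taskserv\nversion: 0.1.0\n' > demo-ext/manifest.yaml
: > demo-ext/schemas/main.ncl
: > demo-ext/scripts/install.nu
tar -czf demo-ext-0.1.0.tar.gz demo-ext
tar -tzf demo-ext-0.1.0.tar.gz | sort
```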
## Quick Start

### Prerequisites: Install an OCI Tool

Install one of the following OCI tools:

```bash
# ORAS (recommended)
brew install oras
# Crane
go install github.com/google/go-containerregistry/cmd/crane@latest
# Skopeo (RedHat's tool)
brew install skopeo
```

### 1. Start Local OCI Registry (Development)

```bash
# Start lightweight OCI registry (Zot)
provisioning oci-registry start
# Verify registry is running
curl http://localhost:5000/v2/_catalog
```

### 2. Pull an Extension

```bash
# Pull Kubernetes extension from registry
provisioning oci pull kubernetes:1.28.0
# Pull with specific registry
provisioning oci pull kubernetes:1.28.0 \
--registry harbor.company.com \
--namespace provisioning-extensions
```

### 3. List Available Extensions

```bash
# List all extensions
provisioning oci list
# Search for specific extension
provisioning oci search kubernetes
# Show available versions
provisioning oci tags kubernetes
```

### 4. Configure Workspace to Use OCI

Edit `workspace/config/provisioning.yaml`:

```yaml
dependencies:
extensions:
source_type: "oci"
  # …
taskservs:
- "oci://localhost:5000/provisioning-extensions/kubernetes:1.28.0"
- "oci://localhost:5000/provisioning-extensions/containerd:1.7.0"
```

### 5. Resolve Dependencies

```bash
# Resolve and install all dependencies
provisioning dep resolve
# Check what will be installed
provisioning dep resolve --dry-run
# Show dependency tree
provisioning dep tree kubernetes
```

---

## OCI Commands Reference

### Pull Extension

**Download extension from OCI registry**

```bash
provisioning oci pull <artifact>:<version> [OPTIONS]
# Examples:
provisioning oci pull kubernetes:1.28.0
provisioning oci pull redis:7.0.0 --registry harbor.company.com
provisioning oci pull postgres:15.0 --insecure # Skip TLS verification
```

**Options**:

- `--registry <endpoint>`: Override registry (default: from config)
- `--namespace <name>`: Override namespace (default: provisioning-extensions)
- `--destination <path>`: Local installation path
- `--insecure`: Skip TLS certificate verification

---

### Push Extension

**Publish extension to OCI registry**

```bash
provisioning oci push <source-path> <name> <version> [OPTIONS]
# Examples:
provisioning oci push ./extensions/taskservs/redis redis 1.0.0
provisioning oci push ./my-provider aws 2.1.0 --registry localhost:5000
```

**Options**:

- `--registry <endpoint>`: Target registry
- `--namespace <name>`: Target namespace
- `--insecure`: Skip TLS verification

**Prerequisites**:

- Extension must have valid `manifest.yaml`
- Must be logged in to registry (see `oci login`)

---

### List Extensions

**Show available extensions in registry**

```bash
provisioning oci list [OPTIONS]
# Examples:
provisioning oci list
provisioning oci list --namespace provisioning-platform
provisioning oci list --registry harbor.company.com
```

**Output**:

```plaintext
┌───────────────┬──────────────────┬─────────────────────────┬─────────────────────────────────────────────┐
│ name │ registry │ namespace │ reference │
├───────────────┼──────────────────┼─────────────────────────┼─────────────────────────────────────────────┤
│ kubernetes │ localhost:5000 │ provisioning-extensions │ localhost:5000/provisioning-extensions/... │
│ containerd │ localhost:5000 │ provisioning-extensions │ localhost:5000/provisioning-extensions/... │
│ cilium │ localhost:5000 │ provisioning-extensions │ localhost:5000/provisioning-extensions/... │
└───────────────┴──────────────────┴─────────────────────────┴─────────────────────────────────────────────┘
```

---

### Search Extensions

**Search for extensions matching query**

```bash
provisioning oci search <query> [OPTIONS]
# Examples:
provisioning oci search kube
provisioning oci search postgres
provisioning oci search "container-*"
```

---

### Show Tags (Versions)

**Display all available versions of an extension**

```bash
provisioning oci tags <artifact-name> [OPTIONS]
# Examples:
provisioning oci tags kubernetes
provisioning oci tags redis --registry harbor.company.com
```

**Output**:

```plaintext
┌────────────┬─────────┬──────────────────────────────────────────────────────┐
│ artifact │ version │ reference │
├────────────┼─────────┼──────────────────────────────────────────────────────┤
│ kubernetes │ 1.29.0 │ localhost:5000/provisioning-extensions/kubernetes... │
│ kubernetes │ 1.28.0 │ localhost:5000/provisioning-extensions/kubernetes... │
│ kubernetes │ 1.27.0 │ localhost:5000/provisioning-extensions/kubernetes... │
└────────────┴─────────┴──────────────────────────────────────────────────────┘
```

---

### Inspect Extension

**Show detailed manifest and metadata**

```bash
provisioning oci inspect <artifact>:<version> [OPTIONS]
# Examples:
provisioning oci inspect kubernetes:1.28.0
provisioning oci inspect redis:7.0.0 --format json
```

**Output**:

```yaml
name: kubernetes
type: taskserv
version: 1.28.0
description: Kubernetes container orchestration platform
# …
platforms:
- linux/amd64
- linux/arm64
```

---

### Login to Registry

**Authenticate with OCI registry**

```bash
provisioning oci login <registry> [OPTIONS]
# Examples:
provisioning oci login localhost:5000
provisioning oci login harbor.company.com --username admin
provisioning oci login registry.io --password-stdin < token.txt
provisioning oci login registry.io --token-file ~/.provisioning/tokens/registry
```

**Options**:

- `--username <user>`: Username (default: `_token`)
- `--password-stdin`: Read password from stdin
- `--token-file <path>`: Read token from file

**Note**: Credentials are stored in Docker config (`~/.docker/config.json`)

---

### Logout from Registry

**Remove stored credentials**

```bash
provisioning oci logout <registry>
# Example:
provisioning oci logout harbor.company.com
```

---

### Delete Extension

**Remove extension from registry**

```bash
provisioning oci delete <artifact>:<version> [OPTIONS]
# Examples:
provisioning oci delete kubernetes:1.27.0
provisioning oci delete redis:6.0.0 --force # Skip confirmation
```

**Options**:

- `--force`: Skip confirmation prompt
- `--registry <endpoint>`: Target registry
- `--namespace <name>`: Target namespace

**Warning**: This operation is irreversible. Use with caution.

---

### Copy Extension

**Copy extension between registries**

```bash
provisioning oci copy <source> <destination> [OPTIONS]
# Examples:
# Copy between namespaces in same registry
# …

# Copy to a different registry
provisioning oci copy \
localhost:5000/provisioning-extensions/kubernetes:1.28.0 \
harbor.company.com/provisioning/kubernetes:1.28.0
```

---

### Show OCI Configuration

**Display current OCI settings**

```bash
provisioning oci config
# Output:
{
  # …
cache_dir: "~/.provisioning/oci-cache"
tls_enabled: false
}
```

---

## Dependency Management

### Dependency Configuration

Dependencies are configured in `workspace/config/provisioning.yaml`:

```yaml
dependencies:
# Core provisioning system
core:
source: "oci://harbor.company.com/provisioning-core:v3.5.0"
  # …
oci:
registry: "harbor.company.com"
namespace: "provisioning-platform"
```

### Resolve Dependencies

```bash
# Resolve and install all configured dependencies
provisioning dep resolve
# Dry-run (show what would be installed)
provisioning dep resolve --dry-run
# Resolve with specific version constraints
provisioning dep resolve --update # Update to latest versions
```

### Check for Updates

```bash
# Check all dependencies for updates
provisioning dep check-updates
# Output:
# …
│ containerd │ 1.7.0 │ 1.7.0 │ false │
│ etcd │ 3.5.0 │ 3.5.1 │ true │
└─────────────┴─────────┴────────┴──────────────────┘
```

### Update Dependency

```bash
# Update specific extension to latest version
provisioning dep update kubernetes
# Update to specific version
provisioning dep update kubernetes --version 1.29.0
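
# Locally, "update available" is just a semver comparison; an illustrative
# check with sort -V (these version numbers are hypothetical):
current="1.28.0"; latest="1.29.0"
highest=$(printf '%s\n%s\n' "$current" "$latest" | sort -V | tail -n1)
[ "$highest" != "$current" ] && echo "update available: $current -> $latest"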
```

### Dependency Tree

```bash
# Show dependency tree for extension
provisioning dep tree kubernetes
# Output:
kubernetes:1.28.0
├── containerd:1.7.0
│ └── runc:1.1.0
├── etcd:3.5.0
└── kubectl:1.28.0
```

### Validate Dependencies

```bash
# Validate dependency graph (check for cycles, conflicts)
provisioning dep validate
# Validate specific extension
provisioning dep validate kubernetes
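
# Conceptually, cycle checking amounts to topologically sorting the dependency
# graph; a quick local illustration with tsort on hypothetical edges
# (not real provisioning data):
printf 'a b\nb c\nc a\n' | tsort 2>&1 | grep -qi 'loop\|cycle' && echo "cycle detected"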
```

---

## Extension Development

### Create New Extension

```bash
# Generate extension from template
provisioning generate extension taskserv redis
# Directory structure created:
# extensions/taskservs/redis/
# ├── schemas/
# │   ├── manifest.toml
# │   ├── main.ncl
# │   ├── version.ncl
# │   └── dependencies.ncl
# ├── scripts/
# │ ├── install.nu
# │ ├── check.nu
# │   └── …
# ├── docs/
# │ └── README.md
# ├── tests/
# └── manifest.yaml
```

### Extension Manifest

Edit `manifest.yaml`:

```yaml
name: redis
type: taskserv
version: 1.0.0
description: Redis in-memory data structure store
# …
platforms:
- linux/amd64
- linux/arm64
min_provisioning_version: "3.0.0"
```

### Test Extension Locally

```bash
# Load extension from local path
provisioning module load taskserv workspace_dev redis --source local
# Test installation
provisioning taskserv create redis --infra test-env --check
# Run tests
provisioning test extension redis
```

### Validate Extension

```bash
# Validate extension structure
provisioning oci package validate ./extensions/taskservs/redis
# Output:
✓ Extension structure valid
Warnings:
- Missing docs/README.md (recommended)
```

### Package Extension

```bash
# Package as OCI artifact
provisioning oci package ./extensions/taskservs/redis
# Output: redis-1.0.0.tar.gz
# Inspect package
provisioning oci inspect-artifact redis-1.0.0.tar.gz
```

### Publish Extension

```bash
# Login to registry (one-time)
provisioning oci login localhost:5000
# Publish extension
provisioning oci push ./extensions/taskservs/redis redis 1.0.0

# Verify publication
provisioning oci tags redis
# Share with team
echo "Published: oci://localhost:5000/provisioning-extensions/redis:1.0.0"
```

---

## Registry Setup

### Local Registry (Development)

**Using Zot (lightweight)**:

```bash
# Start Zot registry
provisioning oci-registry start
# Configuration:
# …

# Stop registry
provisioning oci-registry stop
# Check status
provisioning oci-registry status
```

**Manual Zot Setup**:

```bash
# Install Zot
brew install project-zot/tap/zot
# Create config
cat > zot-config.json << 'EOF'
# … (configuration elided) …
EOF
# Run Zot
zot serve zot-config.json
```

---

### Remote Registry (Production)

**Using Harbor**:

1. **Deploy Harbor**:

   ```bash
   # Using Docker Compose
   wget https://github.com/goharbor/harbor/releases/download/v2.9.0/harbor-offline-installer-v2.9.0.tgz
   tar xvf harbor-offline-installer-v2.9.0.tgz
   cd harbor
   ./install.sh
   ```
2. **Configure Workspace**:

   ```yaml
   # workspace/config/provisioning.yaml
   dependencies:
     # …
   ```

---

## Troubleshooting

### No OCI Tool Found

**Error**: "No OCI tool found. Install oras, crane, or skopeo"

**Solution**:

```bash
# Install ORAS (recommended)
brew install oras

# Or install Crane
@@ -69773,34 +64423,22 @@ go install github.com/google/go-containerregistry/cmd/crane@latest
# Or install Skopeo
brew install skopeo
```

---

### Connection Refused

**Error**: "Connection refused to localhost:5000"

**Solution**:

```bash
# Check if registry is running
curl http://localhost:5000/v2/_catalog
# Start local registry if not running
provisioning oci-registry start
```

---

### TLS Certificate Error

**Error**: "x509: certificate signed by unknown authority"

**Solution**:

```bash
# For development, use --insecure flag
provisioning oci pull kubernetes:1.28.0 --insecure
# For production, configure TLS properly in workspace config:
# …
# oci:
# tls_enabled: true
# # Add CA certificate to system trust store
```

---

### Authentication Failed

**Error**: "unauthorized: authentication required"

**Solution**:

```bash
# Login to registry
provisioning oci login localhost:5000
# Or provide auth token in config:
# …
# extensions:
# oci:
# auth_token_path: "~/.provisioning/tokens/oci"
```

---

### Extension Not Found

**Error**: "Dependency not found: kubernetes"

**Solutions**:

1. **Check registry endpoint**:

   ```bash
   provisioning oci config
   ```
2. **List available extensions**:

   ```bash
   provisioning oci list
   ```
3. **Validate the dependency graph**:

   ```bash
   provisioning dep validate kubernetes
   provisioning dep tree kubernetes

   # Fix circular dependencies in extension manifests
   ```

---

## Best Practices

### Version Pinning

✅ **DO**: Pin to specific versions in production

```yaml
modules:
taskservs:
- "oci://registry/kubernetes:1.28.0" # Specific version
```

❌ **DON'T**: Use `latest` tag in production

```yaml
modules:
taskservs:
- "oci://registry/kubernetes:latest" # Unpredictable
```

---

### Semantic Versioning

✅ **DO**: Follow semver (MAJOR.MINOR.PATCH)

- `1.0.0` → `1.0.1`: Backward-compatible bug fix
- `1.0.0` → `1.1.0`: Backward-compatible new feature
- `1.0.0` → `2.0.0`: Breaking change

❌ **DON'T**: Use arbitrary version numbers

- `v1`, `version-2`, `latest-stable`

---

### Dependency Management

✅ **DO**: Specify version constraints

```yaml
dependencies:
containerd: ">=1.7.0"
etcd: "^3.5.0" # 3.5.x compatible
```

❌ **DON'T**: Leave dependencies unversioned

```yaml
dependencies:
  containerd: "*"  # Too permissive
```
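The caret constraint shown above (`^3.5.0`, accepting 3.5.x-compatible releases) can be approximated locally with `sort -V`. This is a hedged sketch of the semantics, not the resolver's actual algorithm, and `matches_caret` is a hypothetical helper name:

```shell
#!/bin/sh
# Accept a version if it shares the base's major version and is >= the base
matches_caret() {
  ver="$1"; base="$2"
  [ "${ver%%.*}" = "${base%%.*}" ] || return 1           # same major version
  lowest=$(printf '%s\n%s\n' "$base" "$ver" | sort -V | head -n1)
  [ "$lowest" = "$base" ]                                 # ver >= base
}

matches_caret 3.5.2 3.5.0 && echo "3.5.2 satisfies ^3.5.0"
matches_caret 4.0.0 3.5.0 || echo "4.0.0 does not satisfy ^3.5.0"
```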
-
----
-
-### Security
-
-✅ **DO**:
-
-- Use TLS for remote registries
-- Rotate authentication tokens regularly
-- Scan images for vulnerabilities (Harbor)
-- Sign artifacts (cosign)
-
-❌ **DON'T**:
-
-- Use `--insecure` in production
-- Store passwords in config files
-- Skip certificate verification
-
----
-
-## Related Documentation
-
-- [Multi-Repository Architecture](../architecture/MULTI_REPO_ARCHITECTURE.md) - Overall architecture
-- [Extension Development Guide](extension-development.md) - Create extensions
-- [Dependency Resolution](dependency-resolution.md) - How dependencies work
-- OCI Client Library - Low-level API
-
----
-
-**Maintained By**: Documentation Team
-**Last Updated**: 2025-10-06
-**Next Review**: 2026-01-06
+❌ DON'T: Leave dependencies unversioned
+dependencies:
+ containerd: "*" # Too permissive
+
+
+
+✅ DO:
+
+Use TLS for remote registries
+Rotate authentication tokens regularly
+Scan images for vulnerabilities (Harbor)
+Sign artifacts (cosign)
+
+❌ DON'T:
+
+Use --insecure in production
+Store passwords in config files
+Skip certificate verification
+
+
+
+
+
+Maintained By: Documentation Team
+Last Updated: 2025-10-06
+Next Review: 2026-01-06
Date: 2025-11-23
Version: 1.0.0
@@ -69968,7 +64570,7 @@ dependencies:
Access powerful functionality from prov-ecosystem and provctl directly through the provisioning CLI.
-
+
Four integrated feature sets:
Feature Purpose Best For
Runtime Abstraction Unified Docker/Podman/OrbStack/Colima/nerdctl Multi-platform deployments
@@ -69989,28 +64591,18 @@ provisioning runtime detect
# 3. Verify runtime works
provisioning runtime info
-```plaintext
-
-**Expected Output**:
-
-```plaintext
-Available runtimes:
+
+Expected Output:
+Available runtimes:
• docker
• podman
-```plaintext
-
----
-
-## 1️⃣ Runtime Abstraction
-
-### What It Does
-
-Automatically detects and uses Docker, Podman, OrbStack, Colima, or nerdctl - whichever is available on your system. Eliminates hardcoding "docker" commands.
-
-### Commands
-
-```bash
-# Detect available runtime
+
+
+
+
+Automatically detects and uses Docker, Podman, OrbStack, Colima, or nerdctl - whichever is available on your system - eliminating the need to hardcode "docker" commands.
+
+# Detect available runtime
provisioning runtime detect
# Output: "Detected runtime: docker"
@@ -70029,53 +64621,38 @@ provisioning runtime list
# Adapt docker-compose for detected runtime
provisioning runtime compose ./docker-compose.yml
# Output: docker compose -f ./docker-compose.yml
-```plaintext
-
-### Examples
-
-**Use Case 1: Works on macOS with OrbStack, Linux with Docker**
-
-```bash
-# User on macOS with OrbStack
+
+
+Use Case 1: Works on macOS with OrbStack, Linux with Docker
+# User on macOS with OrbStack
$ provisioning runtime exec "docker run -it ubuntu bash"
# Automatically uses orbctl (OrbStack)
# User on Linux with Docker
$ provisioning runtime exec "docker run -it ubuntu bash"
# Automatically uses docker
-```plaintext
-
-**Use Case 2: Run docker-compose with detected runtime**
-
-```bash
-# Detect and run compose
+
+Use Case 2: Run docker-compose with detected runtime
+# Detect and run compose
$ compose_cmd=$(provisioning runtime compose ./docker-compose.yml)
$ eval $compose_cmd up -d
# Works with docker, podman, nerdctl automatically
-```plaintext
-
-### Configuration
-
-No configuration needed! Runtime is auto-detected in order:
-
-1. Docker (macOS: OrbStack first; Linux: Docker first)
-2. Podman
-3. OrbStack (macOS)
-4. Colima (macOS)
-5. nerdctl
-
----
-
-## 2️⃣ SSH Advanced Operations
-
-### What It Does
-
-Advanced SSH with connection pooling (90% faster), circuit breaker for fault isolation, and deployment strategies (rolling, blue-green, canary).
-
-### Commands
-
-```bash
-# Create SSH pool connection to host
+
+
+No configuration needed! Runtime is auto-detected in order:
+
+Docker (macOS: OrbStack first; Linux: Docker first)
+Podman
+OrbStack (macOS)
+Colima (macOS)
+nerdctl
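The fallback order above amounts to taking the first runtime that resolves on PATH. A sketch of that logic (binary names such as `orbctl` are assumptions, and the real detector also applies the platform-specific ordering noted in step 1):

```python
import shutil
from typing import Optional

# Detection order from the list above; binary names (e.g. orbctl) are assumptions
DETECTION_ORDER = ["docker", "podman", "orbctl", "colima", "nerdctl"]

def detect_runtime(available=None) -> Optional[str]:
    """Return the first runtime in DETECTION_ORDER that is present (sketch)."""
    if available is None:
        # Probe the real system: keep runtimes whose binary is on PATH
        available = {r for r in DETECTION_ORDER if shutil.which(r)}
    return next((r for r in DETECTION_ORDER if r in available), None)

print(detect_runtime(available={"podman", "nerdctl"}))  # podman
```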
+
+
+
+
+Advanced SSH with connection pooling (90% faster), circuit breaker for fault isolation, and deployment strategies (rolling, blue-green, canary).
+
+# Create SSH pool connection to host
provisioning ssh pool connect server.example.com root --port 22 --timeout 30
# Check pool status
@@ -70091,20 +64668,16 @@ provisioning ssh retry-config exponential --max-retries 3
# Check circuit breaker status
provisioning ssh circuit-breaker
# Output: state=closed, failures=0/5
-```plaintext
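The `failures=0/5` output suggests a standard circuit breaker: after five consecutive failures the circuit opens and further calls to that host are rejected until it is reset. A minimal sketch, with the threshold inferred from the output rather than from the implementation:

```python
class CircuitBreaker:
    """Open after `threshold` consecutive failures; closed circuits pass calls."""
    def __init__(self, threshold: int = 5):
        self.threshold = threshold
        self.failures = 0
        self.state = "closed"

    def record(self, success: bool) -> None:
        if success:
            self.failures = 0          # any success resets the counter
        else:
            self.failures += 1
            if self.failures >= self.threshold:
                self.state = "open"    # stop sending traffic to this host

    def allow(self) -> bool:
        return self.state == "closed"

cb = CircuitBreaker()
for _ in range(5):
    cb.record(success=False)
print(cb.state, cb.allow())  # open False
```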
-
-### Deployment Strategies
-
-| Strategy | Use Case | Risk |
-|----------|----------|------|
-| **Rolling** | Gradual rollout across hosts | Low (but slower) |
-| **Blue-Green** | Zero-downtime, instant rollback | Very low |
-| **Canary** | Test on small % before full rollout | Very low (5% at risk) |
-
-### Example: Multi-Host Deployment
-
-```bash
-# Set up SSH pool
+
+
+Strategy    Use Case                             Risk
+Rolling     Gradual rollout across hosts         Low (but slower)
+Blue-Green  Zero-downtime, instant rollback      Very low
+Canary      Test on small % before full rollout  Very low (5% at risk)
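The risk column follows from how each strategy batches hosts. A sketch of the two batching schemes, with the 5% canary fraction taken from the table and function names hypothetical:

```python
def rolling_batches(hosts, batch_size=1):
    """Rolling: update hosts a few at a time until all are done."""
    return [hosts[i:i + batch_size] for i in range(0, len(hosts), batch_size)]

def canary_batches(hosts, canary_fraction=0.05):
    """Canary: a small first batch, then the rest once it looks healthy."""
    n = max(1, int(len(hosts) * canary_fraction))
    return [hosts[:n], hosts[n:]]

hosts = [f"srv{i:02d}" for i in range(1, 7)]
print(rolling_batches(hosts, 2))  # [['srv01', 'srv02'], ['srv03', 'srv04'], ['srv05', 'srv06']]
print(canary_batches(hosts))      # [['srv01'], ['srv02', 'srv03', 'srv04', 'srv05', 'srv06']]
```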
+
+
+
+# Set up SSH pool
provisioning ssh pool connect srv01.example.com root
provisioning ssh pool connect srv02.example.com root
provisioning ssh pool connect srv03.example.com root
@@ -70115,33 +64688,23 @@ provisioning ssh pool exec [srv01, srv02, srv03] "systemctl restart myapp" --str
# Check status
provisioning ssh pool status
# Output: connections=3, active=0, idle=3, circuit_breaker=green
-```plaintext
-
-### Retry Strategies
-
-```bash
-# Exponential backoff: 100ms, 200ms, 400ms, 800ms...
+
+
+# Exponential backoff: 100 ms, 200 ms, 400 ms, 800 ms...
provisioning ssh retry-config exponential --max-retries 5
-# Linear backoff: 100ms, 200ms, 300ms, 400ms...
+# Linear backoff: 100 ms, 200 ms, 300 ms, 400 ms...
provisioning ssh retry-config linear --max-retries 3
-# Fibonacci backoff: 100ms, 100ms, 200ms, 300ms, 500ms...
+# Fibonacci backoff: 100 ms, 100 ms, 200 ms, 300 ms, 500 ms...
provisioning ssh retry-config fibonacci --max-retries 4
-```plaintext
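The three schedules shown in the comments can be generated from the 100 ms base delay; a sketch reproducing the documented sequences:

```python
def backoff_delays(strategy: str, retries: int, base_ms: int = 100):
    """Delays in ms for exponential, linear, or fibonacci backoff."""
    if strategy == "exponential":
        return [base_ms * 2 ** i for i in range(retries)]
    if strategy == "linear":
        return [base_ms * (i + 1) for i in range(retries)]
    if strategy == "fibonacci":
        a, b, out = base_ms, base_ms, []
        for _ in range(retries):
            out.append(a)
            a, b = b, a + b
        return out
    raise ValueError(strategy)

print(backoff_delays("exponential", 4))  # [100, 200, 400, 800]
print(backoff_delays("linear", 4))       # [100, 200, 300, 400]
print(backoff_delays("fibonacci", 5))    # [100, 100, 200, 300, 500]
```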
-
----
-
-## 3️⃣ Backup System
-
-### What It Does
-
-Multi-backend backup management with Restic, BorgBackup, Tar, or Rsync. Supports local, S3, SFTP, REST API, and Backblaze B2 repositories.
-
-### Commands
-
-```bash
-# Create backup job
+
+
+
+
+Multi-backend backup management with Restic, BorgBackup, Tar, or Rsync. Supports local, S3, SFTP, REST API, and Backblaze B2 repositories.
+
+# Create backup job
provisioning backup create daily-backup /data /var/lib \
--backend restic \
--repository s3://my-bucket/backups
@@ -70163,21 +64726,17 @@ provisioning backup retention
# Check backup job status
provisioning backup status backup-job-001
-```plaintext
-
-### Backend Comparison
-
-| Backend | Speed | Compression | Best For |
-|---------|-------|-------------|----------|
-| Restic | ⚡⚡⚡ | Excellent | Cloud backups |
-| BorgBackup | ⚡⚡ | Excellent | Large archives |
-| Tar | ⚡⚡⚡ | Good | Simple backups |
-| Rsync | ⚡⚡⚡ | None | Incremental syncs |
-
-### Example: Automated Daily Backups to S3
-
-```bash
-# Create backup configuration
+
+
+Backend     Speed  Compression  Best For
+Restic      ⚡⚡⚡    Excellent    Cloud backups
+BorgBackup  ⚡⚡     Excellent    Large archives
+Tar         ⚡⚡⚡    Good         Simple backups
+Rsync       ⚡⚡⚡    None         Incremental syncs
+
+
+
+# Create backup configuration
provisioning backup create app-backup /opt/myapp /var/lib/myapp \
--backend restic \
--repository s3://prod-backups/myapp
@@ -70194,30 +64753,20 @@ provisioning backup retention \
# Verify backup was created
provisioning backup list
-```plaintext
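The retention step above prunes old snapshots; the simplest policy keeps the last N. A sketch of that selection only - a real backend such as Restic also keeps daily/weekly/monthly groups:

```python
def prune(snapshots, keep_last=7):
    """Return (kept, pruned) given snapshot ids ordered oldest-first."""
    kept = snapshots[-keep_last:]
    pruned = snapshots[:-keep_last] if len(snapshots) > keep_last else []
    return kept, pruned

snaps = [f"snap-{i:03d}" for i in range(1, 11)]  # 10 snapshots
kept, pruned = prune(snaps, keep_last=7)
print(len(kept), len(pruned))  # 7 3
```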
-
-### Dry-Run (Test First)
-
-```bash
-# Test backup without actually creating it
+
+
+# Test backup without actually creating it
provisioning backup create test-backup /data --check
# Test restore without actually restoring
provisioning backup restore snapshot-001 --check
-```plaintext
-
----
-
-## 4️⃣ GitOps Event-Driven Deployments
-
-### What It Does
-
-Automatically trigger deployments from Git events (push, PR, webhook, scheduled). Supports GitHub, GitLab, Gitea.
-
-### Commands
-
-```bash
-# Load GitOps rules from configuration file
+
+
+
+
+Automatically trigger deployments from Git events (push, PR, webhook, scheduled). Supports GitHub, GitLab, Gitea.
+
+# Load GitOps rules from configuration file
provisioning gitops rules ./gitops-rules.yaml
# Watch for Git events (starts webhook listener)
@@ -70236,14 +64785,10 @@ provisioning gitops deployments --status running
# Show GitOps status
provisioning gitops status
# Output: active_rules=5, total=42, successful=40, failed=2
-```plaintext
-
-### Example: GitOps Configuration
-
-**File: `gitops-rules.yaml`**
-
-```yaml
-rules:
+
+
+File: gitops-rules.yaml
+rules:
- name: deploy-prod
provider: github
repository: https://github.com/myorg/myrepo
@@ -70266,12 +64811,9 @@ rules:
- staging
command: "provisioning deploy"
require_approval: false
-```plaintext
-
-**Then:**
-
-```bash
-# Load rules
+
+Then:
+# Load rules
provisioning gitops rules ./gitops-rules.yaml
# Watch for events
@@ -70279,20 +64821,13 @@ provisioning gitops watch --provider github
# When you push to main, deployment auto-triggers!
# git push origin main → provisioning deploy runs automatically
-```plaintext
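Under the hood, triggering comes down to matching the incoming Git event against each rule's provider, repository, and branch list. A sketch using the gitops-rules.yaml fields above (the matching logic is an assumption, not the actual implementation):

```python
def match_rules(rules, event):
    """Return names of rules triggered by a push event (sketch)."""
    return [
        r["name"] for r in rules
        if r["provider"] == event["provider"]
        and r["repository"] == event["repository"]
        and event["branch"] in r["branches"]
    ]

rules = [
    {"name": "deploy-prod", "provider": "github",
     "repository": "https://github.com/myorg/myrepo", "branches": ["main"]},
    {"name": "deploy-staging", "provider": "github",
     "repository": "https://github.com/myorg/myrepo", "branches": ["staging"]},
]
event = {"provider": "github",
         "repository": "https://github.com/myorg/myrepo", "branch": "main"}
print(match_rules(rules, event))  # ['deploy-prod']
```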
-
----
-
-## 5️⃣ Service Management
-
-### What It Does
-
-Install, start, stop, and manage services across systemd (Linux), launchd (macOS), runit, and OpenRC.
-
-### Commands
-
-```bash
-# Install service
+
+
+
+
+Install, start, stop, and manage services across systemd (Linux), launchd (macOS), runit, and OpenRC.
+
+# Install service
provisioning service install myapp /usr/local/bin/myapp \
--user myapp \
--working-dir /opt/myapp
@@ -70316,12 +64851,9 @@ provisioning service list
# Detect init system
provisioning service detect-init
# Output: systemd (Linux), launchd (macOS), etc.
-```plaintext
-
-### Example: Install Custom Service
-
-```bash
-# On Linux (systemd)
+
+
+# On Linux (systemd)
provisioning service install provisioning-worker \
/usr/local/bin/provisioning-worker \
--user provisioning \
@@ -70336,24 +64868,16 @@ provisioning service install provisioning-worker \
# Service file is generated automatically for your platform
provisioning service start provisioning-worker
provisioning service status provisioning-worker
-```plaintext
-
----
-
-## 🎯 Common Workflows
-
-### Workflow 1: Multi-Platform Deployment
-
-```bash
-# Works on macOS with OrbStack, Linux with Docker, etc.
+
+
+
+
+# Works on macOS with OrbStack, Linux with Docker, etc.
provisioning runtime detect # Detects your platform
provisioning runtime exec "docker ps" # Uses your runtime
-```plaintext
-
-### Workflow 2: Large-Scale SSH Operations
-
-```bash
-# Connect to multiple servers
+
+
+# Connect to multiple servers
for host in srv01 srv02 srv03; do
provisioning ssh pool connect $host.example.com root
done
@@ -70363,12 +64887,9 @@ provisioning ssh pool exec [srv01, srv02, srv03] \
"systemctl restart app" \
--strategy rolling \
--retry exponential
-```plaintext
-
-### Workflow 3: Automated Backups
-
-```bash
-# Create backup job
+
+
+# Create backup job
provisioning backup create daily /opt/app /data \
--backend restic \
--repository s3://backups
@@ -70378,12 +64899,9 @@ provisioning backup schedule daily "0 2 * * *"
# Verify it works
provisioning backup list
-```plaintext
-
-### Workflow 4: Continuous Deployment from Git
-
-```bash
-# Define rules in YAML
+
+
+# Define rules in YAML
cat > gitops-rules.yaml << 'EOF'
rules:
- name: deploy-prod
@@ -70400,60 +64918,49 @@ provisioning gitops rules ./gitops-rules.yaml
provisioning gitops watch --provider github
# Now pushing to main auto-deploys!
-```plaintext
-
----
-
-## 🔧 Advanced Configuration
-
-### Using with KCL Configuration
-
-All integrations support KCL schemas for advanced configuration:
-
-```kcl
-import provisioning.integrations as integ
-
-# Runtime configuration
-integrations: integ.IntegrationConfig = {
+
+
+
+
+All integrations support Nickel schemas for advanced configuration:
+let { IntegrationConfig } = import "provisioning/integrations.ncl" in
+{
+  integrations | IntegrationConfig = {
+ # Runtime configuration
runtime = {
- preferred = "podman"
- check_order = ["podman", "docker", "nerdctl"]
- timeout_secs = 5
- enable_cache = True
- }
+ preferred = "podman",
+ check_order = ["podman", "docker", "nerdctl"],
+ timeout_secs = 5,
+ enable_cache = true,
+ },
# Backup with retention policy
backup = {
- default_backend = "restic"
- default_repository = {
- type = "s3"
- bucket = "prod-backups"
- prefix = "daily"
- }
- jobs = []
- verify_after_backup = True
- }
+ default_backend = "restic",
+ default_repository = {
+ type = "s3",
+ bucket = "prod-backups",
+ prefix = "daily",
+ },
+ jobs = [],
+ verify_after_backup = true,
+ },
# GitOps rules with approval
gitops = {
- rules = []
- default_strategy = "blue-green"
- dry_run_by_default = False
- enable_audit_log = True
- }
+ rules = [],
+ default_strategy = "blue-green",
+ dry_run_by_default = false,
+ enable_audit_log = true,
+ },
+ }
}
-```plaintext
-
----
-
-## 💡 Tips & Tricks
-
-### Tip 1: Dry-Run Mode
-
-All major operations support `--check` for testing:
-
-```bash
-provisioning runtime exec "systemctl restart app" --check
+
+
+
+
+All major operations support --check for testing:
+provisioning runtime exec "systemctl restart app" --check
# Output: Would execute: [docker exec ...]
provisioning backup create test /data --check
@@ -70461,24 +64968,16 @@ provisioning backup create test /data --check
provisioning gitops trigger deploy-test --check
# Output: Deployment would trigger
-```plaintext
-
-### Tip 2: Output Formats
-
-Some commands support JSON output:
-
-```bash
-provisioning runtime list --out json
+
+
+Some commands support JSON output:
+provisioning runtime list --out json
provisioning backup list --out json
provisioning gitops deployments --out json
-```plaintext
-
-### Tip 3: Integration with Scripts
-
-Chain commands in shell scripts:
-
-```bash
-#!/bin/bash
+
+
+Chain commands in shell scripts:
+#!/bin/bash
# Detect runtime and use it
RUNTIME=$(provisioning runtime detect | grep -oP 'docker|podman|nerdctl')
@@ -70494,18 +64993,12 @@ provisioning deploy
# Verify with GitOps
provisioning gitops status
-```plaintext
-
----
-
-## 🐛 Troubleshooting
-
-### Problem: "No container runtime detected"
-
-**Solution**: Install Docker, Podman, or OrbStack:
-
-```bash
-# macOS
+
+
+
+
+Solution: Install Docker, Podman, or OrbStack:
+# macOS
brew install orbstack
# Linux
@@ -70513,50 +65006,36 @@ sudo apt-get install docker.io
# Then verify
provisioning runtime detect
-```plaintext
-
-### Problem: SSH connection timeout
-
-**Solution**: Check port and timeout settings:
-
-```bash
-# Use different port
+
+
+Solution: Check port and timeout settings:
+# Use different port
provisioning ssh pool connect server.example.com root --port 2222
# Increase timeout
provisioning ssh pool connect server.example.com root --timeout 60
-```plaintext
-
-### Problem: Backup fails with "Permission denied"
-
-**Solution**: Check permissions on backup path:
-
-```bash
-# Check if user can read target paths
+
+
+Solution: Check permissions on backup path:
+# Check if user can read target paths
ls -l /data # Should be readable
# Run with elevated privileges if needed
sudo provisioning backup create mybak /data --backend restic
-```plaintext
-
----
-
-## 📚 Learn More
-
-| Topic | Location |
-|-------|----------|
-| Architecture | `docs/architecture/ECOSYSTEM_INTEGRATION.md` |
-| CLI Help | `provisioning help integrations` |
-| Rust Bridge | `provisioning/platform/integrations/provisioning-bridge/` |
-| Nushell Modules | `provisioning/core/nulib/lib_provisioning/integrations/` |
-| KCL Schemas | `provisioning/kcl/integrations/` |
-
----
-
-## 🆘 Need Help?
-
-```bash
-# General help
+
+
+
+Topic            Location
+Architecture     docs/architecture/ECOSYSTEM_INTEGRATION.md
+CLI Help         provisioning help integrations
+Rust Bridge      provisioning/platform/integrations/provisioning-bridge/
+Nushell Modules  provisioning/core/nulib/lib_provisioning/integrations/
+Nickel Schemas   provisioning/schemas/integrations/
+
+
+
+
+# General help
provisioning help integrations
# Specific command help
@@ -70567,13 +65046,10 @@ provisioning gitops --help
# System diagnostics
provisioning status
provisioning health
-```plaintext
-
----
-
-**Last Updated**: 2025-11-23
-**Version**: 1.0.0
+
+Last Updated: 2025-11-23
+Version: 1.0.0
Status: ✅ COMPLETED - All phases (1-6) implemented and tested
@@ -70604,12 +65080,9 @@ provisioning workspace register librecloud /Users/Akasha/project-provisioning/wo
# Verify
provisioning workspace list
provisioning workspace active
-```plaintext
-
-### 2. Create your first database secret
-
-```bash
-# Create PostgreSQL credential
+
+
+# Create PostgreSQL credential
provisioning secrets create database postgres \
--workspace librecloud \
--infra wuji \
@@ -70618,37 +65091,24 @@ provisioning secrets create database postgres \
--host db.local \
--port 5432 \
--database myapp
-```plaintext
-
-### 3. Retrieve the secret
-
-```bash
-# Get credential (requires Cedar authorization)
+
+
+# Get credential (requires Cedar authorization)
provisioning secrets get librecloud/wuji/postgres/admin_password
-```plaintext
-
-### 4. List secrets by domain
-
-```bash
-# List all PostgreSQL secrets
+
+
+# List all PostgreSQL secrets
provisioning secrets list --workspace librecloud --domain postgres
# List all infrastructure secrets
provisioning secrets list --workspace librecloud --infra wuji
-```plaintext
-
----
-
-## 📚 Complete Guide by Phases
-
-### Phase 1: Database and Application Secrets
-
-#### 1.1 Create Database Credentials
-
-**REST Endpoint**:
-
-```bash
-POST /api/v1/secrets/database
+
+
+
+
+
+REST Endpoint:
+POST /api/v1/secrets/database
Content-Type: application/json
{
@@ -70661,12 +65121,9 @@ Content-Type: application/json
"username": "admin",
"password": "encrypted_password"
}
-```plaintext
-
-**CLI Command**:
-
-```bash
-provisioning secrets create database postgres \
+
+CLI Command:
+provisioning secrets create database postgres \
--workspace librecloud \
--infra wuji \
--user admin \
@@ -70674,49 +65131,35 @@ provisioning secrets create database postgres \
--host db.librecloud.internal \
--port 5432 \
--database production_db
-```plaintext
-
-**Result**: Secret stored in SurrealDB with KMS encryption
-
-```plaintext
-✓ Secret created: librecloud/wuji/postgres/admin_password
+
+Result: Secret stored in SurrealDB with KMS encryption
+✓ Secret created: librecloud/wuji/postgres/admin_password
Workspace: librecloud
Infrastructure: wuji
Domain: postgres
Type: Database
Encrypted: Yes (KMS)
-```plaintext
-
-#### 1.2 Create Application Secrets
-
-**REST API**:
-
-```bash
-POST /api/v1/secrets/application
+
+
+REST API:
+POST /api/v1/secrets/application
{
"workspace_id": "librecloud",
"app_name": "myapp-web",
"key_type": "api_token",
"value": "sk_live_abc123xyz"
}
-```plaintext
-
-**CLI**:
-
-```bash
-provisioning secrets create app myapp-web \
+
+CLI:
+provisioning secrets create app myapp-web \
--workspace librecloud \
--domain web \
--type api_token \
--value "sk_live_abc123xyz"
-```plaintext
-
-#### 1.3 List Secrets
-
-**REST API**:
-
-```bash
-GET /api/v1/secrets/list?workspace=librecloud&domain=postgres
+
+
+REST API:
+GET /api/v1/secrets/list?workspace=librecloud&domain=postgres
Response:
{
@@ -70731,12 +65174,9 @@ Response:
}
]
}
-```plaintext
-
-**CLI**:
-
-```bash
-# All workspace secrets
+
+CLI:
+# All workspace secrets
provisioning secrets list --workspace librecloud
# Filter by domain
@@ -70744,25 +65184,18 @@ provisioning secrets list --workspace librecloud --domain postgres
# Filter by infrastructure
provisioning secrets list --workspace librecloud --infra wuji
-```plaintext
-
-#### 1.4 Retrieve a Secret
-
-**REST API**:
-
-```bash
-GET /api/v1/secrets/librecloud/wuji/postgres/admin_password
+
+
+REST API:
+GET /api/v1/secrets/librecloud/wuji/postgres/admin_password
Requires:
- Header: Authorization: Bearer <jwt_token>
- Cedar verification: [user has read permission]
- If MFA required: mfa_verified=true in JWT
-```plaintext
-
-**CLI**:
-
-```bash
-# Get full secret
+
+CLI:
+# Get full secret
provisioning secrets get librecloud/wuji/postgres/admin_password
# Output:
@@ -70771,18 +65204,12 @@ provisioning secrets get librecloud/wuji/postgres/admin_password
# User: admin
# Database: production_db
# Password: [encrypted in transit]
-```plaintext
-
----
-
-### Phase 2: SSH Keys and Provider Credentials
-
-#### 2.1 Temporal SSH Keys (Auto-expiring)
-
-**Use Case**: Temporary server access (max 24 hours)
-
-```bash
-# Generate temporary SSH key (TTL 2 hours)
+
+
+
+
+Use Case: Temporary server access (max 24 hours)
+# Generate temporary SSH key (TTL 2 hours)
provisioning secrets create ssh \
--workspace librecloud \
--infra wuji \
@@ -70795,21 +65222,17 @@ provisioning secrets create ssh \
# TTL: 2 hours
# Expires at: 2025-12-06T12:00:00Z
# Private Key: [encrypted]
-```plaintext
-
-**Technical Details**:
-
-- Generated in real-time by Orchestrator
-- Stored in memory (TTL-based)
-- Automatic revocation on expiry
-- Complete audit trail in vault_audit
-
-#### 2.2 Permanent SSH Keys (Stored)
-
-**Use Case**: Long-duration infrastructure keys
-
-```bash
-# Create permanent SSH key (stored in DB)
+
+Technical Details:
+
+Generated in real-time by Orchestrator
+Stored in memory (TTL-based)
+Automatic revocation on expiry
+Complete audit trail in vault_audit
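The TTL mechanics behind temporal keys reduce to one comparison: a secret is revoked once its creation time plus TTL has passed. A sketch with illustrative timestamps:

```python
from datetime import datetime, timedelta

def is_expired(created_at: datetime, ttl: timedelta, now: datetime) -> bool:
    """A temporal secret is revoked once created_at + ttl has passed."""
    return now >= created_at + ttl

created = datetime(2025, 12, 6, 10, 0, 0)
print(is_expired(created, timedelta(hours=2), datetime(2025, 12, 6, 11, 30)))  # False
print(is_expired(created, timedelta(hours=2), datetime(2025, 12, 6, 12, 0)))   # True
```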
+
+
+Use Case: Long-duration infrastructure keys
+# Create permanent SSH key (stored in DB)
provisioning secrets create ssh \
--workspace librecloud \
--infra wuji \
@@ -70821,14 +65244,10 @@ provisioning secrets create ssh \
# Storage: SurrealDB (encrypted)
# Rotation: Manual (or automatic if configured)
# Access: Cedar controlled
-```plaintext
-
-#### 2.3 Provider Credentials
-
-**UpCloud API (Temporal)**:
-
-```bash
-provisioning secrets create provider upcloud \
+
+
+UpCloud API (Temporal):
+provisioning secrets create provider upcloud \
--workspace librecloud \
--roles "server,network,storage" \
--ttl 4h
@@ -70838,12 +65257,9 @@ provisioning secrets create provider upcloud \
# Token: tmp_upcloud_abc123
# Roles: server, network, storage
# TTL: 4 hours
-```plaintext
-
-**UpCloud API (Permanent)**:
-
-```bash
-provisioning secrets create provider upcloud \
+
+UpCloud API (Permanent):
+provisioning secrets create provider upcloud \
--workspace librecloud \
--roles "server,network" \
--permanent
@@ -70853,27 +65269,20 @@ provisioning secrets create provider upcloud \
# Token: upcloud_live_xyz789
# Storage: SurrealDB
# Rotation: Manual
-```plaintext
-
----
-
-### Phase 3: Auto Rotation
-
-#### 3.1 Plan Automatic Rotation
-
-**Predefined Rotation Policies**:
-
-| Type | Prod | Dev |
-|------|------|-----|
-| **Database** | Every 30d | Every 90d |
-| **Application** | Every 60d | Every 14d |
-| **SSH** | Every 365d | Every 90d |
-| **Provider** | Every 180d | Every 30d |
-
-**Force Immediate Rotation**:
-
-```bash
-# Force rotation now
+
+
+
+
+Predefined Rotation Policies:
+Type         Prod        Dev
+Database     Every 30d   Every 90d
+Application  Every 60d   Every 14d
+SSH          Every 365d  Every 90d
+Provider     Every 180d  Every 30d
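Given the policy table, computing the next rotation date is a single addition. A sketch with the intervals read as days (helper names hypothetical):

```python
from datetime import date, timedelta

# Rotation intervals in days, from the policy table above
ROTATION_POLICY = {
    "database":    {"prod": 30,  "dev": 90},
    "application": {"prod": 60,  "dev": 14},
    "ssh":         {"prod": 365, "dev": 90},
    "provider":    {"prod": 180, "dev": 30},
}

def next_rotation(kind: str, env: str, last_rotated: date) -> date:
    """Next due date for a secret of the given kind and environment."""
    return last_rotated + timedelta(days=ROTATION_POLICY[kind][env])

print(next_rotation("database", "prod", date(2025, 12, 6)))  # 2026-01-05
```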
+
+
+Force Immediate Rotation:
+# Force rotation now
provisioning secrets rotate librecloud/wuji/postgres/admin_password
# Result:
@@ -70882,12 +65291,9 @@ provisioning secrets rotate librecloud/wuji/postgres/admin_password
# New password: [generated]
# Old password: [archived]
# Next rotation: 2026-01-05
-```plaintext
-
-**Check Rotation Status**:
-
-```bash
-GET /api/v1/secrets/{path}/rotation-status
+
+Check Rotation Status:
+GET /api/v1/secrets/{path}/rotation-status
Response:
{
@@ -70898,14 +65304,10 @@ Response:
"days_remaining": 30,
"failure_count": 0
}
-```plaintext
-
-#### 3.2 Rotation Job Scheduler (Background)
-
-System automatically runs rotations every hour:
-
-```plaintext
-┌─────────────────────────────────┐
+
+
+System automatically runs rotations every hour:
+┌─────────────────────────────────┐
│ Rotation Job Scheduler │
│ - Interval: 1 hour │
│ - Max concurrency: 5 rotations │
@@ -70921,30 +65323,21 @@ System automatically runs rotations every hour:
Update SurrealDB
↓
Log to audit trail
-```plaintext
-
-**Check Scheduler Status**:
-
-```bash
-provisioning secrets scheduler status
+
+Check Scheduler Status:
+provisioning secrets scheduler status
# Result:
# Status: Running
# Last check: 2025-12-06T11:00:00Z
# Completed rotations: 24
# Failed rotations: 0
-```plaintext
-
----
-
-### Phase 3.2: Share Secrets Across Workspaces
-
-#### Create a Grant (Access Authorization)
-
-**Scenario**: Share DB credential between `librecloud` and `staging`
-
-```bash
-# REST API
+
+
+
+
+Scenario: Share DB credential between librecloud and staging
+# REST API
POST /api/v1/secrets/{path}/grant
{
@@ -70965,12 +65358,9 @@ POST /api/v1/secrets/{path}/grant
"granted_at": "2025-12-06T10:00:00Z",
"access_count": 0
}
-```plaintext
-
-**CLI**:
-
-```bash
-provisioning secrets grant \
+
+CLI:
+provisioning secrets grant \
--secret librecloud/wuji/postgres/admin_password \
--target-workspace staging \
--permission read
@@ -70980,12 +65370,9 @@ provisioning secrets grant \
# Target workspace: staging
# Permission: Read
# Approval required: No
-```plaintext
-
-#### Revoke a Grant
-
-```bash
-# Revoke access immediately
+
+
+# Revoke access immediately
POST /api/v1/secrets/grant/{grant_id}/revoke
{
"reason": "User left the team"
@@ -70998,12 +65385,9 @@ provisioning secrets revoke-grant grant-12345 \
# ✓ Grant revoked
# Status: Revoked
# Access records: 42
-```plaintext
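A grant behaves like a small record consulted on every cross-workspace access. A sketch of the lifecycle using the fields shown in the API responses (`status`, `access_count`); the method names are hypothetical:

```python
class Grant:
    """Cross-workspace access grant: created active, checked per access, revocable."""
    def __init__(self, grant_id, target_workspace, permission):
        self.grant_id = grant_id
        self.target_workspace = target_workspace
        self.permission = permission
        self.status = "active"
        self.access_count = 0

    def check_access(self, workspace: str, action: str) -> bool:
        ok = (self.status == "active"
              and workspace == self.target_workspace
              and action == self.permission)
        if ok:
            self.access_count += 1  # audit: count successful accesses
        return ok

    def revoke(self, reason: str) -> None:
        self.status = "revoked"

g = Grant("grant-12345", "staging", "read")
print(g.check_access("staging", "read"))   # True
g.revoke("User left the team")
print(g.check_access("staging", "read"))   # False
```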
-
-#### List Grants
-
-```bash
-# All workspace grants
+
+
+# All workspace grants
GET /api/v1/secrets/grants?workspace=librecloud
# Response:
@@ -71020,16 +65404,11 @@ GET /api/v1/secrets/grants?workspace=librecloud
}
]
}
-```plaintext
-
----
-
-### Phase 3.4: Monitoring and Alerts
-
-#### Dashboard Metrics
-
-```bash
-GET /api/v1/secrets/monitoring/dashboard
+
+
+
+
+GET /api/v1/secrets/monitoring/dashboard
Response:
{
@@ -71059,12 +65438,9 @@ Response:
"failed": 2
}
}
-```plaintext
-
-**CLI**:
-
-```bash
-provisioning secrets monitoring dashboard
+
+CLI:
+provisioning secrets monitoring dashboard
# ✓ Secrets Dashboard - Librecloud
#
@@ -71080,12 +65456,9 @@ provisioning secrets monitoring dashboard
# - librecloud/app/api_token (7 days)
#
# 📊 Rotations completed: 40/45 (89%)
-```plaintext
-
-#### Expiring Secrets Alerts
-
-```bash
-GET /api/v1/secrets/monitoring/expiring?days=7
+
+
+GET /api/v1/secrets/monitoring/expiring?days=7
Response:
{
@@ -71099,18 +65472,12 @@ Response:
}
]
}
-```plaintext
-
----
-
-## 🔐 Cedar Authorization
-
-All operations are protected by **Cedar policies**:
-
-### Example Policy: Production Secret Access
-
-```cedar
-// Requires MFA for production secrets
+
+
+
+All operations are protected by Cedar policies:
+
+// Requires MFA for production secrets
@id("prod-secret-access-mfa")
permit (
principal,
@@ -71130,12 +65497,9 @@ permit (
) when {
resource.lifecycle == "permanent"
};
-```plaintext
-
-### Verify Authorization
-
-```bash
-# Test Cedar decision
+
+
+# Test Cedar decision
provisioning policies check alice can access secret:librecloud/postgres/password
# Result:
@@ -71145,16 +65509,11 @@ provisioning policies check alice can access secret:librecloud/postgres/password
# - Role: database_admin
# - MFA verified: Yes
# - Workspace: librecloud
-```plaintext
-
----
-
-## 🏗️ Data Structure
-
-### Secret in Database
-
-```sql
--- Table vault_secrets (SurrealDB)
+
+
+
+
+-- Table vault_secrets (SurrealDB)
{
id: "secret:uuid123",
path: "librecloud/wuji/postgres/admin_password",
@@ -71180,12 +65539,9 @@ provisioning policies check alice can access secret:librecloud/postgres/password
username: "admin"
}
}
-```plaintext
-
-### Secret Hierarchy
-
-```plaintext
-librecloud (Workspace)
+
+
+librecloud (Workspace)
├── wuji (Infrastructure)
│ ├── postgres (Domain)
│ │ ├── admin_password
@@ -71204,16 +65560,11 @@ librecloud (Workspace)
└── auth (Domain)
├── jwt_secret
└── oauth_client_secret
-```plaintext
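Every secret path in this hierarchy has the same four segments, `workspace/infra/domain/key`. A small parsing sketch (helper names hypothetical):

```python
from typing import NamedTuple

class SecretPath(NamedTuple):
    workspace: str
    infra: str
    domain: str
    key: str

def parse_secret_path(path: str) -> SecretPath:
    """Split 'workspace/infra/domain/key' into its four segments."""
    parts = path.split("/")
    if len(parts) != 4:
        raise ValueError(f"expected workspace/infra/domain/key, got: {path}")
    return SecretPath(*parts)

p = parse_secret_path("librecloud/wuji/postgres/admin_password")
print(p.workspace, p.domain)  # librecloud postgres
```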
-
----
-
-## 🔄 Complete Workflows
-
-### Workflow 1: Create and Rotate Database Credential
-
-```plaintext
-1. Admin creates credential
+
+
+
+
+1. Admin creates credential
POST /api/v1/secrets/database
2. System encrypts with KMS
@@ -71241,12 +65592,9 @@ librecloud (Workspace)
├─ If 7 days remaining → WARNING alert
├─ If 3 days remaining → CRITICAL alert
└─ If expired → EXPIRED alert
-```plaintext
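The alert thresholds in the monitoring step above can be expressed directly (here "expired" means negative days remaining, an assumption about how the counter is reported):

```python
def alert_level(days_remaining: int) -> str:
    """Alert severity from days until expiry (thresholds from the workflow above)."""
    if days_remaining < 0:
        return "EXPIRED"
    if days_remaining <= 3:
        return "CRITICAL"
    if days_remaining <= 7:
        return "WARNING"
    return "OK"

print([alert_level(d) for d in (10, 7, 3, -1)])  # ['OK', 'WARNING', 'CRITICAL', 'EXPIRED']
```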
-
-### Workflow 2: Share Secret Between Workspaces
-
-```plaintext
-1. Admin of librecloud creates grant
+
+
+1. Admin of librecloud creates grant
POST /api/v1/secrets/{path}/grant
2. Cedar verifies authorization
@@ -71273,12 +65621,9 @@ librecloud (Workspace)
├─ Exact timestamp
├─ Success/failure
└─ Increment access count in grant
-```plaintext
-
-### Workflow 3: Access Temporal SSH Secret
-
-```plaintext
-1. User requests temporary SSH key
+
+
+1. User requests temporary SSH key
POST /api/v1/secrets/ssh
{ttl: "2h"}
@@ -71301,16 +65646,11 @@ librecloud (Workspace)
├─ TTL expires → Auto revokes
├─ Later attempts → Access denied
└─ Audit: automatic revocation
-```plaintext
-
----
-
-## 📝 Practical Examples
-
-### Example 1: Manage PostgreSQL Secrets
-
-```bash
-# 1. Create credential
+
+
+
+
+# 1. Create credential
provisioning secrets create database postgres \
--workspace librecloud \
--infra wuji \
@@ -71337,12 +65677,9 @@ provisioning secrets rotate librecloud/wuji/postgres/admin_password
# 6. Check status
provisioning secrets monitoring dashboard | grep postgres
-```plaintext
-
-### Example 2: Temporary SSH Access
-
-```bash
-# 1. Generate temporary SSH key (4 hours)
+
+
+# 1. Generate temporary SSH key (4 hours)
provisioning secrets create ssh \
--workspace librecloud \
--infra wuji \
@@ -71360,12 +65697,9 @@ ssh -i ~/.ssh/web01_temp ubuntu@web01.librecloud.internal
# → Key revoked automatically
# → New SSH attempts fail
# → Access logged in audit
-```plaintext
-
-### Example 3: CI/CD Integration
-
-```yaml
-# GitLab CI / GitHub Actions
+
+
+# GitLab CI / GitHub Actions
jobs:
deploy:
script:
@@ -71383,28 +65717,23 @@ jobs:
# → Workspace: librecloud
# → Secrets accessed: 2
# → Status: success
-```plaintext
-
----
-
-## 🛡️ Security
-
-### Encryption
-
-- **At Rest**: AES-256-GCM with KMS key rotation
-- **In Transit**: TLS 1.3
-- **In Memory**: Automatic cleanup of sensitive variables
-
-### Access Control
-
-- **Cedar**: All operations evaluated against policies
-- **MFA**: Required for production secrets
-- **Workspace Isolation**: Data separation at DB level
-
-### Audit
-
-```json
-{
+
+
+
+
+
+At Rest: AES-256-GCM with KMS key rotation
+In Transit: TLS 1.3
+In Memory: Automatic cleanup of sensitive variables
+
+
+
+Cedar: All operations evaluated against policies
+MFA: Required for production secrets
+Workspace Isolation: Data separation at DB level
+
+
+{
"timestamp": "2025-12-06T10:30:45Z",
"user_id": "alice",
"workspace": "librecloud",
@@ -71415,16 +65744,11 @@ jobs:
"mfa_verified": true,
"cedar_policy": "prod-secret-access-mfa"
}
-```plaintext
-
----
-
-## 📊 Test Results
-
-### All 25 Integration Tests Passing
-
-```plaintext
-✅ Phase 3.1: Rotation Scheduler (9 tests)
+
+
+
+
+✅ Phase 3.1: Rotation Scheduler (9 tests)
- Schedule creation
- Status transitions
- Failure tracking
@@ -71446,27 +65770,18 @@ jobs:
✅ Integration Tests (3 tests)
- Multi-service workflows
- End-to-end scenarios
-```plaintext
-
-**Execution**:
-
-```bash
-cargo test --test secrets_phases_integration_test
+
+Execution:
+cargo test --test secrets_phases_integration_test
test result: ok. 25 passed; 0 failed
-```plaintext
-
----
-
-## 🆘 Troubleshooting
-
-### Problem: "Authorization denied by Cedar policy"
-
-**Cause**: User lacks permissions in policy
-**Solution**:
-
-```bash
-# Check user and permission
+
+
+
+
+Cause: User lacks permissions in policy
+Solution:
+# Check user and permission
provisioning policies check $USER can access secret:librecloud/postgres/admin_password
# Check roles
@@ -71477,15 +65792,11 @@ provisioning secrets grant \
--secret librecloud/wuji/postgres/admin_password \
--target-workspace $WORKSPACE \
--permission read
-```plaintext
-
-### Problem: "Secret not found"
-
-**Cause**: Typo in path or workspace doesn't exist
-**Solution**:
-
-```bash
-# List available secrets
+
+
+Cause: Typo in path or workspace doesn't exist
+Solution:
+# List available secrets
provisioning secrets list --workspace librecloud
# Check active workspace
@@ -71493,15 +65804,11 @@ provisioning workspace active
# Switch workspace if needed
provisioning workspace switch librecloud
```

### Problem: "MFA required"

**Cause**: Operation requires MFA but not verified
**Solution**:

```bash
# Check MFA status
provisioning auth status
# Enroll if not configured
provisioning mfa totp enroll
# Use MFA token on next access
provisioning secrets get librecloud/wuji/postgres/admin_password --mfa-code 123456
```

---

## 📚 Complete Documentation

- **REST API**: `/docs/api/secrets-api.md`
- **CLI Reference**: `provisioning secrets --help`
- **Cedar Policies**: `provisioning/config/cedar-policies/secrets.cedar`
- **Architecture**: `/docs/architecture/SECRETS_SERVICE_LAYER.md`
- **Security**: `/docs/user/SECRETS_SECURITY_GUIDE.md`

---

## 🎯 Next Steps (Future)

1. **Phase 7**: Web UI Dashboard for visual management
2. **Phase 8**: HashiCorp Vault integration
3. **Phase 9**: Multi-datacenter secret replication

---

**Status**: ✅ Secrets Service Layer - COMPLETED AND TESTED
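Secret paths throughout this guide follow a `workspace/infra/service/key` layout (for example `librecloud/wuji/postgres/admin_password`). A tiny shell helper — hypothetical, not part of the provisioning CLI — makes the convention explicit and helps avoid the path typos covered in Troubleshooting:

```shell
#!/bin/sh
# Compose a secret path in the workspace/infra/service/key layout used above.
secret_path() {
  printf '%s/%s/%s/%s\n' "$1" "$2" "$3" "$4"
}

secret_path librecloud wuji postgres admin_password
# → librecloud/wuji/postgres/admin_password
```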
Comprehensive OCI (Open Container Initiative) registry deployment and management for the provisioning system.
- **Harbor** (Recommended for Production): Full-featured enterprise registry
- **Distribution** (OCI Reference): Official OCI reference implementation

### Features

- **Multi-Registry Support**: Zot, Harbor, Distribution
- **Namespace Organization**: Logical separation of artifacts
- **TLS/SSL**: Secure communication
- **UI Interface**: Web-based management (Zot, Harbor)

### Quick Start

```bash
cd provisioning/platform/oci-registry/zot
docker-compose up -d
# List namespaces
nu -c "use provisioning/core/nulib/lib_provisioning/oci_registry; oci-registry namespaces"
```

### Service Management

```bash
# Start
docker-compose up -d

# Stop and remove volumes
docker-compose down -v
```
Best For Dev/CI Production Compliance

### Authentication

**Zot/Distribution (htpasswd)**:

```bash
htpasswd -Bc htpasswd provisioning
docker login localhost:5000
```

**Harbor**:

```bash
docker login localhost
# Username: admin / Password: Harbor12345
```

### Health Checks

```bash
# API check
curl http://localhost:5000/v2/

# Catalog check
curl http://localhost:5000/v2/_catalog
```

### Metrics

**Zot**:

```bash
curl http://localhost:5000/metrics
```

**Harbor**:

```bash
curl http://localhost:9090/metrics
```

### Related Documentation

- **Architecture**: OCI Integration
- **User Guide**: OCI Registry Guide

**Date**: 2025-10-06
**Status**: Production Ready

---
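The Zot quick start above (`docker-compose up -d` in the Zot directory) assumes a compose file along these lines — the image tag, volume paths, and port mapping here are illustrative assumptions, not the shipped configuration:

```yaml
# Hypothetical docker-compose.yml for a minimal Zot registry on port 5000
services:
  zot:
    image: ghcr.io/project-zot/zot-linux-amd64:latest
    ports:
      - "5000:5000"
    volumes:
      - ./config.json:/etc/zot/config.json:ro
      - zot-data:/var/lib/registry
volumes:
  zot-data:
```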
The Test Environment Service provides automated containerized testing for taskservs, servers, and multi-node clusters. Built into the orchestrator, it eliminates manual Docker management and provides realistic test scenarios.

### Architecture

```plaintext
┌─────────────────────────────────────────────────┐
│ Orchestrator (port 8080) │
│ ┌──────────────────────────────────────────┐ │
│ • Resource Limits │
│ • Volume Mounts │
└────────────────────────┘
```

## Test Environment Types

### 1. Single Taskserv Test

Test individual taskserv in isolated container.

```bash
# Basic test
provisioning test env single kubernetes
# With resource limits
provisioning test env single redis --cpu 2000 --memory 4096
# Auto-start and cleanup
provisioning test quick postgres
```

### 2. Server Simulation

Simulate complete server with multiple taskservs.

```bash
# Server with taskservs
provisioning test env server web-01 [containerd kubernetes cilium]
# With infrastructure context
provisioning test env server db-01 [postgres redis] --infra prod-stack
```

### 3. Cluster Topology

Multi-node cluster simulation from templates.

```bash
# 3-node Kubernetes cluster
provisioning test topology load kubernetes_3node | test env cluster kubernetes --auto-start
# etcd cluster
provisioning test topology load etcd_cluster | test env cluster etcd
```

## Quick Start

### Prerequisites

1. **Docker running:**

   ```bash
   docker ps  # Should work without errors
   ```

2. **Orchestrator running:**

   ```bash
   cd provisioning/platform/orchestrator
   ./scripts/start-orchestrator.nu --background
   ```

### Basic Workflow

```bash
# 1. Quick test (fastest)
provisioning test quick kubernetes

# View logs
provisioning test env logs <env-id>
# Cleanup
provisioning test env cleanup <env-id>
```

## Topology Templates

### Available Templates

```bash
# List templates
provisioning test topology list
```

| Template | Description | Nodes |
|----------|-------------|-------|
| `kubernetes_3node` | K8s HA cluster | 1 CP + 2 workers |
| `kubernetes_single` | All-in-one K8s | 1 node |
| `etcd_cluster` | etcd cluster | 3 members |
| `containerd_test` | Standalone containerd | 1 node |
| `postgres_redis` | Database stack | 2 nodes |

### Using Templates

```bash
# Load and use template
provisioning test topology load kubernetes_3node | test env cluster kubernetes
# View template
provisioning test topology load etcd_cluster
```

### Custom Topology

Create `my-topology.toml`:

```toml
[my_cluster]
name = "My Custom Cluster"
cluster_type = "custom"
memory_mb = 2048
[my_cluster.network]
subnet = "172.30.0.0/16"
```

## Commands Reference

### Environment Management

```bash
# Create from config
provisioning test env create <config>
# Single taskserv
provisioning test env single <taskserv>

# Get environment details
provisioning test env get <env-id>
# Show status
provisioning test env status <env-id>
```

### Test Execution

```bash
# Run tests
provisioning test env run <env-id> [--tests [test1, test2]]
# View logs
provisioning test env logs <env-id>
# Cleanup
provisioning test env cleanup <env-id>
```

### Quick Test

```bash
# One-command test (create, run, cleanup)
provisioning test quick <taskserv> [--infra NAME]
```

## REST API

### Create Environment

```bash
curl -X POST http://localhost:9090/test/environments/create \
-H "Content-Type: application/json" \
-d '{
"config": {
"auto_start": true,
"auto_cleanup": false
}'
```

### List Environments

```bash
curl http://localhost:9090/test/environments
```

### Run Tests

```bash
curl -X POST http://localhost:9090/test/environments/{id}/run \
-H "Content-Type: application/json" \
-d '{
"tests": [],
"timeout_seconds": 300
}'
```

### Cleanup

```bash
curl -X DELETE http://localhost:9090/test/environments/{id}
```

## Use Cases

### 1. Taskserv Development

Test taskserv before deployment:

```bash
# Test new taskserv version
provisioning test env single my-taskserv --auto-start
# Check logs
provisioning test env logs <env-id>
```

### 2. Multi-Taskserv Integration

Test taskserv combinations:

```bash
# Test kubernetes + cilium + containerd
provisioning test env server k8s-test [kubernetes cilium containerd] --auto-start
```

### 3. Cluster Validation

Test cluster configurations:

```bash
# Test 3-node etcd cluster
provisioning test topology load etcd_cluster | test env cluster etcd --auto-start
```

### 4. CI/CD Integration

```yaml
# .gitlab-ci.yml
test-taskserv:
stage: test
script:
- provisioning test quick kubernetes
- provisioning test quick redis
- provisioning test quick postgres
```

## Advanced Features

### Resource Limits

```bash
# Custom CPU and memory
provisioning test env single postgres \
--cpu 4000 \
--memory 8192
```

### Network Isolation

Each environment gets an isolated network:

- Subnet: 172.20.0.0/16 (default)
- DNS enabled
- Container-to-container communication
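Because the default subnet is a /16, membership is just a prefix match on the first two octets. A small shell sketch (illustrative only, not a provisioning command):

```shell
#!/bin/sh
# For 172.20.0.0/16, an address is in-subnet iff it starts with "172.20."
ip_in_test_subnet() {
  case "$1" in
    172.20.*) echo yes ;;
    *)        echo no ;;
  esac
}

ip_in_test_subnet 172.20.0.5    # → yes
ip_in_test_subnet 192.168.1.10  # → no
```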

### Auto-Cleanup

```bash
# Auto-cleanup after tests
provisioning test env single redis --auto-start --auto-cleanup
```

### Multiple Environments

Run tests in parallel:

```bash
# Create multiple environments
provisioning test env single kubernetes --auto-start &
provisioning test env single postgres --auto-start &
provisioning test env single redis --auto-start &
wait
# List all
provisioning test env list
```

## Troubleshooting

### Docker not running

```plaintext
Error: Failed to connect to Docker
```

**Solution:**

```bash
# Check Docker
docker ps
# Start Docker daemon
sudo systemctl start docker # Linux
open -a Docker # macOS
```

### Orchestrator not running

```plaintext
Error: Connection refused (port 8080)
```

**Solution:**

```bash
cd provisioning/platform/orchestrator
./scripts/start-orchestrator.nu --background
```

### Environment creation fails

Check logs:

```bash
provisioning test env logs <env-id>
```

Check Docker:

```bash
docker ps -a
docker logs <container-id>
```

### Out of resources

```plaintext
Error: Cannot allocate memory
```

**Solution:**

```bash
# Cleanup old environments
provisioning test env list | each {|env| provisioning test env cleanup $env.id }
# Or cleanup Docker
docker system prune -af
```

## Best Practices

### 1. Use Templates

Reuse topology templates instead of recreating:

```bash
provisioning test topology load kubernetes_3node | test env cluster kubernetes
```

### 2. Auto-Cleanup

Always use auto-cleanup in CI/CD:

```bash
provisioning test quick <taskserv>  # Includes auto-cleanup
```

### 3. Resource Planning

Adjust resources based on needs:

- Development: 1-2 cores, 2 GB RAM
- Integration: 2-4 cores, 4-8 GB RAM
- Production-like: 4+ cores, 8+ GB RAM

### 4. Parallel Testing

Run independent tests in parallel:

```bash
for taskserv in [kubernetes postgres redis] {
provisioning test quick $taskserv &
}
wait
```

## Configuration

### Default Settings

- Base image: `ubuntu:22.04`
- CPU: 1000 millicores (1 core)
- Memory: 2048 MB (2 GB)
- Network: 172.20.0.0/16
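CPU limits are given in millicores (the unit the `--cpu` flag takes), so 1000 = 1 core and 2000 = 2 cores. A one-liner for the conversion (a convenience sketch, not part of the CLI):

```shell
#!/bin/sh
# Convert a millicore value (e.g. the --cpu flag) to whole cores.
millicores_to_cores() {
  echo $(( $1 / 1000 ))
}

millicores_to_cores 1000  # → 1
millicores_to_cores 4000  # → 4
```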

### Custom Config

```bash
# Override defaults
provisioning test env single postgres \
--base-image debian:12 \
--cpu 2000 \
--memory 4096
```

---

## Related Documentation

- [Test Environment API](../api/test-environment-api.md)
- [Topology Templates](../architecture/test-topologies.md)
- [Orchestrator Guide](orchestrator-guide.md)
- [Taskserv Development](taskserv-development.md)

---

## Version History

| Version | Date | Changes |
|---------|------|---------|
| 1.0.0 | 2025-10-06 | Initial test environment service |

---

**Maintained By**: Infrastructure Team
A comprehensive containerized test environment service has been integrated into the orchestrator, enabling automated testing of taskservs, complete servers, and multi-node clusters without manual Docker management.

### Key Features

- **Automated Container Management**: No manual Docker operations required
- **Three Test Environment Types**: Single taskserv, server simulation, multi-node clusters
- **Auto-Cleanup**: Optional automatic cleanup after tests complete
- **CI/CD Integration**: Easy integration into automated pipelines

### 1. Single Taskserv Test

Test individual taskserv in isolated container:

```bash
# Quick test (create, run, cleanup)
provisioning test quick kubernetes
# With infrastructure context
provisioning test env single redis --infra my-project
```

### 2. Server Simulation

Test complete server configurations with multiple taskservs:

```bash
# Simulate web server
provisioning test env server web-01 [containerd kubernetes cilium] --auto-start
# Simulate database server
provisioning test env server db-01 [postgres redis] --infra prod-stack --auto-start
```

### 3. Multi-Node Cluster Topology

Test complex cluster configurations before deployment:

```bash
# 3-node Kubernetes HA cluster
provisioning test topology load kubernetes_3node | test env cluster kubernetes --auto-start
# etcd cluster
provisioning test topology load etcd_cluster | test env cluster etcd
# Single-node Kubernetes
provisioning test topology load kubernetes_single | test env cluster kubernetes
```

## Test Environment Management

```bash
# List all test environments
provisioning test env list
# Check environment status
provisioning test env status <env-id>

# Run tests
provisioning test env run <env-id>
# Cleanup environment
provisioning test env cleanup <env-id>
```

## Available Topology Templates

Predefined multi-node cluster templates in `provisioning/config/test-topologies.toml`:

| Template | Description | Nodes | Use Case |
|----------|-------------|-------|----------|
| `kubernetes_3node` | K8s HA cluster | 1 CP + 2 workers | Production-like testing |
| `kubernetes_single` | All-in-one K8s | 1 node | Development testing |
| `etcd_cluster` | etcd cluster | 3 members | Distributed consensus |
| `containerd_test` | Standalone containerd | 1 node | Container runtime |
| `postgres_redis` | Database stack | 2 nodes | Database integration |

## REST API Endpoints

The orchestrator exposes test environment endpoints:

- **Create Environment**: `POST http://localhost:9090/v1/test/environments/create`
- **List Environments**: `GET http://localhost:9090/v1/test/environments`
- **Get Environment**: `GET http://localhost:9090/v1/test/environments/{id}`
- **Run Tests**: `POST http://localhost:9090/v1/test/environments/{id}/run`
- **Cleanup**: `DELETE http://localhost:9090/v1/test/environments/{id}`
- **Get Logs**: `GET http://localhost:9090/v1/test/environments/{id}/logs`

## Prerequisites

1. **Docker Running**: Test environments require Docker daemon

   ```bash
   docker ps  # Should work without errors
   ```

2. **Orchestrator Running**: Start the orchestrator to manage test containers

   ```bash
   cd provisioning/platform/orchestrator
   ./scripts/start-orchestrator.nu --background
   ```
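For scripting against the endpoints above, the create-environment request body can be assembled with a here-doc. The `config` field names below are assumptions based on the example payload earlier in this guide, so check the API reference before relying on them:

```shell
#!/bin/sh
# Build the JSON body for POST /v1/test/environments/create (field names assumed).
payload=$(cat <<'EOF'
{
  "config": {
    "name": "k8s-single-test",
    "taskservs": ["kubernetes"]
  },
  "auto_start": true,
  "auto_cleanup": false
}
EOF
)

# Sanity-check the flags before sending, e.g. with:
#   curl -X POST http://localhost:9090/v1/test/environments/create \
#        -H "Content-Type: application/json" -d "$payload"
echo "$payload" | grep -c '"auto_start": true'   # → 1
```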

## Architecture

```plaintext
User Command (CLI/API)
↓
Test Orchestrator (Rust)
↓
Isolated Test Containers
• Resource limits
• Volume mounts
• Multi-node support
```

## Configuration

- **Topology Templates**: `provisioning/config/test-topologies.toml`
- **Default Resources**: 1000 millicores CPU, 2048 MB memory
- **Network**: 172.20.0.0/16 (default subnet)
- **Base Image**: `ubuntu:22.04` (configurable)

## Use Cases

1. **Taskserv Development**: Test new taskservs before deployment
2. **Integration Testing**: Validate taskserv combinations
3. **Cluster Validation**: Test multi-node configurations
4. **CI/CD Integration**: Automated infrastructure testing
5. **Production Simulation**: Test production-like deployments safely

## CI/CD Integration Example

```yaml
# GitLab CI
test-infrastructure:
stage: test
script:
when: on_failure
paths:
- test-logs/
```

## Documentation

Complete documentation available:

- **User Guide**: [Test Environment Guide](../testing/test-environment-guide.md)
- **Detailed Usage**: [Test Environment Usage](../testing/test-environment-usage.md)
- **Orchestrator README**: [Orchestrator](../operations/orchestrator-system.md)

## Command Shortcuts

Test commands are integrated into the CLI with shortcuts:

- `test` or `tst` - Test command prefix
- `test quick <taskserv>` - One-command test
- `test env single/server/cluster` - Create test environments
- `test topology load/list` - Manage topology templates

**Version**: 1.0.0
**Date**: 2025-10-06
**Status**: Production Ready

---
The taskserv validation and testing system provides comprehensive evaluation of infrastructure services before deployment, reducing errors and increasing confidence in deployments.
**Command:**

```bash
provisioning taskserv validate kubernetes --level static
```

### 2. Dependency Validation

Checks taskserv dependencies, conflicts, and requirements.

**What it checks:**

- Required dependencies are available
- Optional dependencies status
- Conflicting taskservs
- Resource requirements (memory, CPU, disk)
- Health check configuration

**Command:**

```bash
provisioning taskserv validate kubernetes --level dependencies
```

**Check against infrastructure:**

```bash
provisioning taskserv check-deps kubernetes --infra my-project
```

### 3. Check Mode (Dry-Run)

Enhanced check mode that performs validation and previews deployment without making changes.

**What it does:**

- Runs static validation
- Validates dependencies
- Previews configuration generation
- Lists files to be deployed
- Checks prerequisites (without SSH in check mode)

**Command:**

```bash
provisioning taskserv create kubernetes --check
```

### 4. Sandbox Testing

Tests taskserv in isolated container environment before actual deployment.

**What it tests:**

- Package prerequisites
- Configuration validity
- Script execution
- Health check simulation

**Command:**

```bash
# Test with Docker
provisioning taskserv test kubernetes --runtime docker
# Test with Podman
provisioning taskserv test kubernetes --runtime podman
# Keep container for inspection
provisioning taskserv test kubernetes --runtime docker --keep
```

---

## Complete Validation Workflow

### Recommended Validation Sequence

```bash
# 1. Static validation (fastest, no infrastructure needed)
provisioning taskserv validate kubernetes --level static -v
# 2. Dependency validation
provisioning taskserv validate kubernetes --level dependencies

# 3. Check mode (dry-run)
provisioning taskserv create kubernetes --check

# 4. Sandbox test
provisioning taskserv test kubernetes --runtime docker
# 5. Actual deployment (after all validations pass)
provisioning taskserv create kubernetes
```

### Quick Validation (All Levels)

```bash
# Run all validation levels
provisioning taskserv validate kubernetes --level all -v
```

---

## Validation Commands Reference

### `provisioning taskserv validate <taskserv>`

Multi-level validation framework.

**Options:**

- `--level <level>` - Validation level: static, dependencies, health, all (default: all)
- `--infra <name>` - Infrastructure context
- `--settings <path>` - Settings file path
- `--verbose` - Verbose output
- `--out <format>` - Output format: json, yaml, text

**Examples:**

```bash
# Complete validation
provisioning taskserv validate kubernetes
# Only static validation
provisioning taskserv validate kubernetes --level static

# Verbose output
provisioning taskserv validate kubernetes -v
# JSON output
provisioning taskserv validate kubernetes --out json
```

### `provisioning taskserv check-deps <taskserv>`

Check dependencies against infrastructure.

**Options:**

- `--infra <name>` - Infrastructure context
- `--settings <path>` - Settings file path
- `--verbose` - Verbose output

**Examples:**

```bash
# Check dependencies
provisioning taskserv check-deps kubernetes --infra my-project
# Verbose output
provisioning taskserv check-deps kubernetes --infra my-project -v
```

### `provisioning taskserv create <taskserv> --check`

Enhanced check mode with full validation and preview.

**Options:**

- `--check` - Enable check mode (no actual deployment)
- `--verbose` - Verbose output
- All standard create options

**Examples:**

```bash
# Check mode with verbose output
provisioning taskserv create kubernetes --check -v
# Check specific server
provisioning taskserv create kubernetes server-01 --check
```

### `provisioning taskserv test <taskserv>`

Sandbox testing in isolated environment.

**Options:**

- `--runtime <name>` - Runtime: docker, podman, native (default: docker)
- `--infra <name>` - Infrastructure context
- `--settings <path>` - Settings file path
- `--keep` - Keep container after test
- `--verbose` - Verbose output

**Examples:**

```bash
# Test with Docker
provisioning taskserv test kubernetes --runtime docker
# Test with Podman
# Keep container for inspection
provisioning taskserv test kubernetes --keep -v
# Connect to kept container
docker exec -it taskserv-test-kubernetes bash
```

---

## Validation Output

### Static Validation

```plaintext
Taskserv Validation
Taskserv: kubernetes
Level: static
Validating Nickel schemas for kubernetes...
  Checking main.ncl...
  ✓ Valid
  Checking version.ncl...
  ✓ Valid
  Checking dependencies.ncl...
✓ Valid
Validating templates for kubernetes...
✓ Basic syntax OK
Validation Summary
✓ nickel: 0 errors, 0 warnings
✓ templates: 0 errors, 0 warnings
✓ scripts: 0 errors, 0 warnings
Overall Status
✓ VALID - 0 warnings
```

### Dependency Validation

```plaintext
Dependency Validation Report
Taskserv: kubernetes
Status: VALID
Conflicts:
• docker
• podman
```

### Check Mode Output

```plaintext
Check Mode: kubernetes on server-01
→ Running static validation...
✓ Static validation passed

Check Mode Summary
✓ All validations passed
💡 Taskserv can be deployed with: provisioning taskserv create kubernetes
```

### Test Output

```plaintext
Taskserv Sandbox Testing
Taskserv: kubernetes
Runtime: docker

Detailed Results:
✓ Health check: Health check configuration valid: http://localhost:6443/healthz
✓ All tests passed
```

---

## Integration with CI/CD

### GitLab CI Example

```yaml
validate-taskservs:
stage: validate
script:
- provisioning taskserv validate kubernetes --level all --out json
deploy-taskservs:
- test-taskservs
only:
- main
```

### GitHub Actions Example

```yaml
name: Taskserv Validation
on: [push, pull_request]
jobs:
- name: Test in Sandbox
run: |
provisioning taskserv test kubernetes --runtime docker
```

---

## Troubleshooting

### shellcheck not found

If shellcheck is not available, script validation will be skipped with a warning.

**Install shellcheck:**

```bash
# macOS
brew install shellcheck
# Ubuntu/Debian
apt install shellcheck
# Fedora
dnf install shellcheck
```

### Docker/Podman not available

Sandbox testing requires Docker or Podman.

**Check runtime:**

```bash
# Docker
docker ps
# Podman
podman ps
# Use native mode (limited testing)
provisioning taskserv test kubernetes --runtime native
```

### Nickel validation errors

Nickel type checking errors indicate syntax or type problems.

**Common fixes:**

- Check schema syntax in `.ncl` files
- Validate imports and dependencies
- Run `nickel format` to format files
- Check `manifest.toml` dependencies

### Dependency conflicts

If conflicting taskservs are detected:

- Remove conflicting taskserv first
- Check infrastructure configuration
- Review dependency declarations in `dependencies.ncl`
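As a rough sketch of what a `dependencies.ncl` might declare — the field names here are assumptions for illustration, not the project's actual schema:

```nickel
# Hypothetical dependencies.ncl for a kubernetes taskserv
{
  requires = ["containerd", "etcd"],
  optional = ["cilium"],
  conflicts = ["docker", "podman"],
}
```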

---

## Advanced Usage

### Custom Validation Scripts

You can create custom validation scripts by extending the validation framework:

```nushell
# custom_validation.nu
use provisioning/core/nulib/taskservs/validate.nu *
def custom-validate [taskserv: string] {
# Custom validation logic
    let result = (validate-nickel-schemas $taskserv --verbose=true)
# Additional custom checks
# ...
return $result
}
-```plaintext
-
-### Batch Validation
-
-Validate multiple taskservs:
-
-```bash
-# Validate all taskservs in infrastructure
+
+
+Validate multiple taskservs:
+# Validate all taskservs in infrastructure
for taskserv in (provisioning taskserv list | get name) {
provisioning taskserv validate $taskserv
}
-```plaintext
-
-### Automated Testing
-
-Create test suite for all taskservs:
-
-```bash
-#!/usr/bin/env nu
+
+
+Create test suite for all taskservs:
+#!/usr/bin/env nu
let taskservs = ["kubernetes", "containerd", "cilium", "etcd"]
@@ -72814,55 +66867,47 @@ for ts in $taskservs {
print $"Testing ($ts)..."
provisioning taskserv test $ts --runtime docker
}
-```plaintext
-
----
-
-## Best Practices
-
-### Before Deployment
-
-1. **Always validate** before deploying to production
-2. **Run check mode** to preview changes
-3. **Test in sandbox** for critical services
-4. **Check dependencies** in infrastructure context
-
-### During Development
-
-1. **Validate frequently** during taskserv development
-2. **Use verbose mode** to understand validation details
-3. **Fix warnings** even if validation passes
-4. **Keep containers** for debugging test failures
-
-### In CI/CD
-
-1. **Fail fast** on validation errors
-2. **Require all tests pass** before merge
-3. **Generate reports** in JSON format for analysis
-4. **Archive test results** for audit trail
-
----
-
-## Related Documentation
-
-- [Taskserv Development Guide](taskserv-development-guide.md)
-- KCL Schema Reference
-- [Dependency Management](dependency-management.md)
-- [CI/CD Integration](cicd-integration.md)
-
----
-
-## Version History
-
-| Version | Date | Changes |
-|---------|------|---------|
-| 1.0.0 | 2025-10-06 | Initial validation and testing guide |
-
----
-
-**Maintained By**: Infrastructure Team
-**Review Cycle**: Quarterly
+
+
+
+
+Always validate before deploying to production
+Run check mode to preview changes
+Test in sandbox for critical services
+Check dependencies in infrastructure context
+
+
+
+Validate frequently during taskserv development
+Use verbose mode to understand validation details
+Fix warnings even if validation passes
+Keep containers for debugging test failures
+
+
+
+Fail fast on validation errors
+Require all tests pass before merge
+Generate reports in JSON format for analysis
+Archive test results for audit trail
+
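The fail-fast CI practice above can be sketched in POSIX shell. Here `validate_one` is a hypothetical stand-in for `provisioning taskserv validate <name>`; wire in the real CLI call in your pipeline:

```shell
# Validate every taskserv and abort on the first failure (fail fast).
# validate_one is a placeholder for: provisioning taskserv validate "$1"
validate_one() {
  case "$1" in
    bad-*) echo "FAIL: $1"; return 1 ;;
    *)     echo "ok: $1";   return 0 ;;
  esac
}

run_validations() {
  for ts in "$@"; do
    validate_one "$ts" || return 1   # first error stops the whole run
  done
}

run_validations kubernetes containerd cilium && echo "all validations passed"
```

In CI, the non-zero exit status from the first failing validation is what fails the job before any deploy step runs.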
+
+
+
+
+
+Version Date Changes
+1.0.0 2025-10-06 Initial validation and testing guide
+
+
+
+Maintained By : Infrastructure Team
+Review Cycle : Quarterly
This comprehensive troubleshooting guide helps you diagnose and resolve common issues with Infrastructure Automation.
@@ -72883,43 +66928,32 @@ provisioning validate config
# Check specific component status
provisioning show servers --infra my-infra
provisioning taskserv list --infra my-infra --installed
-```plaintext
-
-### 2. Gather Information
-
-```bash
-# Enable debug mode for detailed output
+
+
+# Enable debug mode for detailed output
provisioning --debug <command>
# Check logs and errors
provisioning show logs --infra my-infra
-```plaintext
-
-### 3. Use Diagnostic Commands
-
-```bash
-# Validate configuration
+
+
+# Validate configuration
provisioning validate config --detailed
# Test connectivity
provisioning provider test aws
provisioning network test --infra my-infra
-```plaintext
-
-## Installation and Setup Issues
-
-### Issue: Installation Fails
-
-**Symptoms:**
-
-- Installation script errors
-- Missing dependencies
-- Permission denied errors
-
-**Diagnosis:**
-
-```bash
-# Check system requirements
+
+
+
+Symptoms:
+
+Installation script errors
+Missing dependencies
+Permission denied errors
+
+Diagnosis:
+# Check system requirements
uname -a
df -h
whoami
@@ -72927,67 +66961,47 @@ whoami
# Check permissions
ls -la /usr/local/
sudo -l
-```plaintext
-
-**Solutions:**
-
-#### Permission Issues
-
-```bash
-# Run installer with sudo
+
+Solutions:
+
+# Run installer with sudo
sudo ./install-provisioning
# Or install to user directory
./install-provisioning --prefix=$HOME/provisioning
export PATH="$HOME/provisioning/bin:$PATH"
-```plaintext
-
-#### Missing Dependencies
-
-```bash
-# Ubuntu/Debian
+
+
+# Ubuntu/Debian
sudo apt update
sudo apt install -y curl wget tar build-essential
# RHEL/CentOS
sudo dnf install -y curl wget tar gcc make
-```plaintext
-
-#### Architecture Issues
-
-```bash
-# Check architecture
+
+
+# Check architecture
uname -m
# Download correct architecture package
# x86_64: Intel/AMD 64-bit
# arm64: ARM 64-bit (Apple Silicon)
wget https://releases.example.com/provisioning-linux-x86_64.tar.gz
-```plaintext
-
-### Issue: Command Not Found
-
-**Symptoms:**
-
-```plaintext
-bash: provisioning: command not found
-```plaintext
-
-**Diagnosis:**
-
-```bash
-# Check if provisioning is installed
+
+
+Symptoms:
+bash: provisioning: command not found
+
+Diagnosis:
+# Check if provisioning is installed
which provisioning
ls -la /usr/local/bin/provisioning
# Check PATH
echo $PATH
-```plaintext
-
-**Solutions:**
-
-```bash
-# Add to PATH
+
+Solutions:
+# Add to PATH
export PATH="/usr/local/bin:$PATH"
# Make permanent (add to shell profile)
@@ -72996,21 +67010,14 @@ source ~/.bashrc
# Create symlink if missing
sudo ln -sf /usr/local/provisioning/core/nulib/provisioning /usr/local/bin/provisioning
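To keep the PATH fix from piling up duplicate entries across shell restarts, the export can be wrapped in an idempotent helper. This is a generic sketch, not part of the provisioning CLI; adapt the directory to your install prefix:

```shell
# Prepend a directory to PATH only if it is not already present.
add_to_path() {
  case ":$PATH:" in
    *":$1:"*) ;;              # already on PATH, do nothing
    *) PATH="$1:$PATH" ;;
  esac
}

add_to_path /usr/local/bin
add_to_path /usr/local/bin    # second call is a no-op
export PATH
```

Putting this in your shell profile makes re-sourcing the file safe.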
-```plaintext
-
-### Issue: Nushell Plugin Errors
-
-**Symptoms:**
-
-```plaintext
-Plugin not found: nu_plugin_kcl
+
+
+Symptoms:
+Plugin not found: nu_plugin_kcl
Plugin registration failed
-```plaintext
-
-**Diagnosis:**
-
-```bash
-# Check Nushell version
+
+Diagnosis:
+# Check Nushell version
nu --version
# Check KCL installation (required for nu_plugin_kcl)
@@ -73018,12 +67025,9 @@ kcl version
# Check plugin registration
nu -c "version | get installed_plugins"
-```plaintext
-
-**Solutions:**
-
-```bash
-# Install KCL CLI (required for nu_plugin_kcl)
+
+Solutions:
+# Install KCL CLI (required for nu_plugin_kcl)
# Download from: https://github.com/kcl-lang/cli/releases
# Re-register plugins
@@ -73031,34 +67035,23 @@ nu -c "plugin add /usr/local/provisioning/plugins/nu_plugin_kcl"
nu -c "plugin add /usr/local/provisioning/plugins/nu_plugin_tera"
# Restart Nushell after plugin registration
-```plaintext
-
-## Configuration Issues
-
-### Issue: Configuration Not Found
-
-**Symptoms:**
-
-```plaintext
-Configuration file not found
+
+
+
+Symptoms:
+Configuration file not found
Failed to load configuration
-```plaintext
-
-**Diagnosis:**
-
-```bash
-# Check configuration file locations
+
+Diagnosis:
+# Check configuration file locations
provisioning env | grep config
# Check if files exist
ls -la ~/.config/provisioning/
ls -la /usr/local/provisioning/config.defaults.toml
-```plaintext
-
-**Solutions:**
-
-```bash
-# Initialize user configuration
+
+Solutions:
+# Initialize user configuration
provisioning init config
# Create missing directories
@@ -73069,35 +67062,24 @@ cp /usr/local/provisioning/config-examples/config.user.toml ~/.config/provisioni
# Verify configuration
provisioning validate config
-```plaintext
-
-### Issue: Configuration Validation Errors
-
-**Symptoms:**
-
-```plaintext
-Configuration validation failed
+
+
+Symptoms:
+Configuration validation failed
Invalid configuration value
Missing required field
-```plaintext
-
-**Diagnosis:**
-
-```bash
-# Detailed validation
+
+Diagnosis:
+# Detailed validation
provisioning validate config --detailed
# Check specific sections
provisioning config show --section paths
provisioning config show --section providers
-```plaintext
-
-**Solutions:**
-
-#### Path Configuration Issues
-
-```bash
-# Check base path exists
+
+Solutions:
+
+# Check base path exists
ls -la /path/to/provisioning
# Update configuration
@@ -73106,12 +67088,9 @@ nano ~/.config/provisioning/config.toml
# Fix paths section
[paths]
base = "/correct/path/to/provisioning"
-```plaintext
-
-#### Provider Configuration Issues
-
-```bash
-# Test provider connectivity
+
+
+# Test provider connectivity
provisioning provider test aws
# Check credentials
@@ -73121,21 +67100,14 @@ upcloud-cli config # For UpCloud
# Update provider configuration
[providers.aws]
interface = "CLI" # or "API"
-```plaintext
-
-### Issue: Interpolation Failures
-
-**Symptoms:**
-
-```plaintext
-Interpolation pattern not resolved: {{env.VARIABLE}}
+
+
+Symptoms:
+Interpolation pattern not resolved: {{env.VARIABLE}}
Template rendering failed
-```plaintext
-
-**Diagnosis:**
-
-```bash
-# Test interpolation
+
+Diagnosis:
+# Test interpolation
provisioning validate interpolation test
# Check environment variables
@@ -73143,12 +67115,9 @@ env | grep VARIABLE
# Debug interpolation
provisioning --debug validate interpolation validate
-```plaintext
-
-**Solutions:**
-
-```bash
-# Set missing environment variables
+
+Solutions:
+# Set missing environment variables
export MISSING_VARIABLE="value"
# Use fallback values in configuration
@@ -73157,24 +67126,16 @@ config_value = "{{env.VARIABLE || 'default_value'}}"
# Check interpolation syntax
# Correct: {{env.HOME}}
# Incorrect: ${HOME} or $HOME
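The `{{env.VARIABLE || 'default_value'}}` fallback behaves like POSIX parameter expansion with a default. The `resolve` helper below is purely illustrative of that semantics and is not part of the provisioning interpolation engine:

```shell
# Sketch of {{env.VAR || 'default'}} resolution: use the environment
# variable if set and non-empty, otherwise fall back to the default.
resolve() {
  # $1 = variable name, $2 = fallback default
  eval "val=\${$1:-}"
  if [ -n "$val" ]; then printf '%s\n' "$val"; else printf '%s\n' "$2"; fi
}

resolve HOME '/tmp'                      # HOME is typically set, so it wins
resolve GR_UNSET_VAR 'default_value'     # unset variable falls back
```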
-```plaintext
-
-## Server Management Issues
-
-### Issue: Server Creation Fails
-
-**Symptoms:**
-
-```plaintext
-Failed to create server
+
+
+
+Symptoms:
+Failed to create server
Provider API error
Insufficient quota
-```plaintext
-
-**Diagnosis:**
-
-```bash
-# Check provider status
+
+Diagnosis:
+# Check provider status
provisioning provider status aws
# Test connectivity
@@ -73186,14 +67147,10 @@ provisioning provider quota --infra my-infra
# Debug server creation
provisioning --debug server create web-01 --infra my-infra --check
-```plaintext
-
-**Solutions:**
-
-#### API Authentication Issues
-
-```bash
-# AWS
+
+Solutions:
+
+# AWS
aws configure list
aws sts get-caller-identity
@@ -73204,12 +67161,9 @@ upcloud-cli account show
aws configure # For AWS
export UPCLOUD_USERNAME="your-username"
export UPCLOUD_PASSWORD="your-password"
-```plaintext
-
-#### Quota/Limit Issues
-
-```bash
-# Check current usage
+
+
+# Check current usage
provisioning show costs --infra my-infra
# Request quota increase from provider
@@ -73217,12 +67171,9 @@ provisioning show costs --infra my-infra
# Use smaller instance types
# Reduce number of servers
-```plaintext
-
-#### Network/Connectivity Issues
-
-```bash
-# Test network connectivity
+
+
+# Test network connectivity
curl -v https://api.aws.amazon.com
curl -v https://api.upcloud.com
@@ -73231,22 +67182,15 @@ nslookup api.aws.amazon.com
# Check firewall rules
# Ensure outbound HTTPS (port 443) is allowed
-```plaintext
-
-### Issue: SSH Access Fails
-
-**Symptoms:**
-
-```plaintext
-Connection refused
+
+
+Symptoms:
+Connection refused
Permission denied
Host key verification failed
-```plaintext
-
-**Diagnosis:**
-
-```bash
-# Check server status
+
+Diagnosis:
+# Check server status
provisioning server list --infra my-infra
# Test SSH manually
@@ -73254,14 +67198,10 @@ ssh -v user@server-ip
# Check SSH configuration
provisioning show servers web-01 --infra my-infra
-```plaintext
-
-**Solutions:**
-
-#### Connection Issues
-
-```bash
-# Wait for server to be fully ready
+
+Solutions:
+
+# Wait for server to be fully ready
provisioning server list --infra my-infra --status
# Check security groups/firewall
@@ -73269,12 +67209,9 @@ provisioning server list --infra my-infra --status
# Use correct IP address
provisioning show servers web-01 --infra my-infra | grep ip
-```plaintext
-
-#### Authentication Issues
-
-```bash
-# Check SSH key
+
+
+# Check SSH key
ls -la ~/.ssh/
ssh-add -l
@@ -73283,34 +67220,23 @@ ssh-keygen -t ed25519 -f ~/.ssh/provisioning_key
# Use specific key
provisioning server ssh web-01 --key ~/.ssh/provisioning_key --infra my-infra
-```plaintext
-
-#### Host Key Issues
-
-```bash
-# Remove old host key
+
+
+# Remove old host key
ssh-keygen -R server-ip
# Accept new host key
ssh -o StrictHostKeyChecking=accept-new user@server-ip
-```plaintext
-
-## Task Service Issues
-
-### Issue: Service Installation Fails
-
-**Symptoms:**
-
-```plaintext
-Service installation failed
+
+
+
+Symptoms:
+Service installation failed
Package not found
Dependency conflicts
-```plaintext
-
-**Diagnosis:**
-
-```bash
-# Check service prerequisites
+
+Diagnosis:
+# Check service prerequisites
provisioning taskserv check kubernetes --infra my-infra
# Debug installation
@@ -73318,14 +67244,10 @@ provisioning --debug taskserv create kubernetes --infra my-infra --check
# Check server resources
provisioning server ssh web-01 --command "free -h && df -h" --infra my-infra
-```plaintext
-
-**Solutions:**
-
-#### Resource Issues
-
-```bash
-# Check available resources
+
+Solutions:
+
+# Check available resources
provisioning server ssh web-01 --command "
echo 'Memory:' && free -h
echo 'Disk:' && df -h
@@ -73334,12 +67256,9 @@ provisioning server ssh web-01 --command "
# Upgrade server if needed
provisioning server resize web-01 --plan larger-plan --infra my-infra
-```plaintext
-
-#### Package Repository Issues
-
-```bash
-# Update package lists
+
+
+# Update package lists
provisioning server ssh web-01 --command "
sudo apt update && sudo apt upgrade -y
" --infra my-infra
@@ -73348,32 +67267,22 @@ provisioning server ssh web-01 --command "
provisioning server ssh web-01 --command "
curl -I https://download.docker.com/linux/ubuntu/
" --infra my-infra
-```plaintext
-
-#### Dependency Issues
-
-```bash
-# Install missing dependencies
+
+
+# Install missing dependencies
provisioning taskserv create containerd --infra my-infra
# Then install dependent service
provisioning taskserv create kubernetes --infra my-infra
-```plaintext
-
-### Issue: Service Not Running
-
-**Symptoms:**
-
-```plaintext
-Service status: failed
+
+
+Symptoms:
+Service status: failed
Service not responding
Health check failures
-```plaintext
-
-**Diagnosis:**
-
-```bash
-# Check service status
+
+Diagnosis:
+# Check service status
provisioning taskserv status kubernetes --infra my-infra
# Check service logs
@@ -73384,58 +67293,40 @@ provisioning server ssh web-01 --command "
sudo systemctl status kubernetes
sudo journalctl -u kubernetes --no-pager -n 50
" --infra my-infra
-```plaintext
-
-**Solutions:**
-
-#### Configuration Issues
-
-```bash
-# Reconfigure service
+
+Solutions:
+
+# Reconfigure service
provisioning taskserv configure kubernetes --infra my-infra
# Reset to defaults
provisioning taskserv reset kubernetes --infra my-infra
-```plaintext
-
-#### Port Conflicts
-
-```bash
-# Check port usage
+
+
+# Check port usage
provisioning server ssh web-01 --command "
sudo netstat -tulpn | grep :6443
sudo ss -tulpn | grep :6443
" --infra my-infra
# Change port configuration or stop conflicting service
-```plaintext
-
-#### Permission Issues
-
-```bash
-# Fix permissions
+
+
+# Fix permissions
provisioning server ssh web-01 --command "
sudo chown -R kubernetes:kubernetes /var/lib/kubernetes
sudo chmod 600 /etc/kubernetes/admin.conf
" --infra my-infra
-```plaintext
-
-## Cluster Management Issues
-
-### Issue: Cluster Deployment Fails
-
-**Symptoms:**
-
-```plaintext
-Cluster deployment failed
+
+
+
+Symptoms:
+Cluster deployment failed
Pod creation errors
Service unavailable
-```plaintext
-
-**Diagnosis:**
-
-```bash
-# Check cluster status
+
+Diagnosis:
+# Check cluster status
provisioning cluster status web-cluster --infra my-infra
# Check Kubernetes cluster
@@ -73446,14 +67337,10 @@ provisioning server ssh master-01 --command "
# Check cluster logs
provisioning cluster logs web-cluster --infra my-infra
-```plaintext
-
-**Solutions:**
-
-#### Node Issues
-
-```bash
-# Check node status
+
+Solutions:
+
+# Check node status
provisioning server ssh master-01 --command "
kubectl describe nodes
" --infra my-infra
@@ -73466,12 +67353,9 @@ provisioning server ssh master-01 --command "
# Rejoin node
provisioning taskserv configure kubernetes --infra my-infra --servers worker-01
-```plaintext
-
-#### Resource Constraints
-
-```bash
-# Check resource usage
+
+
+# Check resource usage
provisioning server ssh master-01 --command "
kubectl top nodes
kubectl top pods --all-namespaces
@@ -73480,34 +67364,26 @@ provisioning server ssh master-01 --command "
# Scale down or add more nodes
provisioning cluster scale web-cluster --replicas 3 --infra my-infra
provisioning server create worker-04 --infra my-infra
-```plaintext
-
-#### Network Issues
-
-```bash
-# Check network plugin
+
+
+# Check network plugin
provisioning server ssh master-01 --command "
kubectl get pods -n kube-system | grep cilium
" --infra my-infra
# Restart network plugin
provisioning taskserv restart cilium --infra my-infra
-```plaintext
-
-## Performance Issues
-
-### Issue: Slow Operations
-
-**Symptoms:**
-
-- Commands take very long to complete
-- Timeouts during operations
-- High CPU/memory usage
-
-**Diagnosis:**
-
-```bash
-# Check system resources
+
+
+
+Symptoms:
+
+Commands take very long to complete
+Timeouts during operations
+High CPU/memory usage
+
+Diagnosis:
+# Check system resources
top
htop
free -h
@@ -73519,66 +67395,49 @@ traceroute api.aws.amazon.com
# Profile command execution
time provisioning server list --infra my-infra
-```plaintext
-
-**Solutions:**
-
-#### Local System Issues
-
-```bash
-# Close unnecessary applications
+
+Solutions:
+
+# Close unnecessary applications
# Upgrade system resources
# Use SSD storage if available
# Increase timeout values
export PROVISIONING_TIMEOUT=600 # 10 minutes
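Beyond raising the timeout, slow or flaky operations often benefit from a retry wrapper. This is a generic sketch, not a provisioning built-in; the CLI invocation in the example comment is the intended use:

```shell
# Retry a command up to $1 times, sleeping $2 seconds between attempts.
retry() {
  max="$1"; delay="$2"; shift 2
  n=1
  until "$@"; do
    [ "$n" -ge "$max" ] && return 1   # give up after max attempts
    n=$((n + 1))
    sleep "$delay"
  done
}

# Example: retry 3 5 provisioning server list --infra my-infra
```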
-```plaintext
-
-#### Network Issues
-
-```bash
-# Use region closer to your location
+
+
+# Use region closer to your location
[providers.aws]
region = "us-west-1" # Closer region
# Enable connection pooling/caching
[cache]
enabled = true
-```plaintext
-
-#### Large Infrastructure Issues
-
-```bash
-# Use parallel operations
+
+
+# Use parallel operations
provisioning server create --infra my-infra --parallel 4
# Filter results
provisioning server list --infra my-infra --filter "status == 'running'"
-```plaintext
-
-### Issue: High Memory Usage
-
-**Symptoms:**
-
-- System becomes unresponsive
-- Out of memory errors
-- Swap usage high
-
-**Diagnosis:**
-
-```bash
-# Check memory usage
+
+
+Symptoms:
+
+System becomes unresponsive
+Out of memory errors
+Swap usage high
+
+Diagnosis:
+# Check memory usage
free -h
ps aux --sort=-%mem | head
# Check for memory leaks
valgrind provisioning server list --infra my-infra
-```plaintext
-
-**Solutions:**
-
-```bash
-# Increase system memory
+
+Solutions:
+# Increase system memory
# Close other applications
# Use streaming operations for large datasets
@@ -73587,140 +67446,97 @@ export PROVISIONING_GC_ENABLED=true
# Reduce concurrent operations
export PROVISIONING_MAX_PARALLEL=2
-```plaintext
-
-## Network and Connectivity Issues
-
-### Issue: API Connectivity Problems
-
-**Symptoms:**
-
-```plaintext
-Connection timeout
+
+
+
+Symptoms:
+Connection timeout
DNS resolution failed
SSL certificate errors
-```plaintext
-
-**Diagnosis:**
-
-```bash
-# Test basic connectivity
+
+Diagnosis:
+# Test basic connectivity
ping 8.8.8.8
curl -I https://api.aws.amazon.com
nslookup api.upcloud.com
# Check SSL certificates
openssl s_client -connect api.aws.amazon.com:443 -servername api.aws.amazon.com
-```plaintext
-
-**Solutions:**
-
-#### DNS Issues
-
-```bash
-# Use alternative DNS
+
+Solutions:
+
+# Use alternative DNS
echo 'nameserver 8.8.8.8' | sudo tee /etc/resolv.conf
# Clear DNS cache
sudo systemctl restart systemd-resolved # Ubuntu
sudo dscacheutil -flushcache # macOS
-```plaintext
-
-#### Proxy/Firewall Issues
-
-```bash
-# Configure proxy if needed
+
+
+# Configure proxy if needed
export HTTP_PROXY=http://proxy.company.com:9090
export HTTPS_PROXY=http://proxy.company.com:9090
# Check firewall rules
sudo ufw status # Ubuntu
sudo firewall-cmd --list-all # RHEL/CentOS
-```plaintext
-
-#### Certificate Issues
-
-```bash
-# Update CA certificates
+
+
+# Update CA certificates
sudo apt update && sudo apt install ca-certificates # Ubuntu
brew install ca-certificates # macOS
# Skip SSL verification (temporary)
export PROVISIONING_SKIP_SSL_VERIFY=true
-```plaintext
-
-## Security and Encryption Issues
-
-### Issue: SOPS Decryption Fails
-
-**Symptoms:**
-
-```plaintext
-SOPS decryption failed
+
+
+
+Symptoms:
+SOPS decryption failed
Age key not found
Invalid key format
-```plaintext
-
-**Diagnosis:**
-
-```bash
-# Check SOPS configuration
+
+Diagnosis:
+# Check SOPS configuration
provisioning sops config
# Test SOPS manually
-sops -d encrypted-file.k
+sops -d encrypted-file.ncl
# Check Age keys
ls -la ~/.config/sops/age/keys.txt
age-keygen -y ~/.config/sops/age/keys.txt
-```plaintext
-
-**Solutions:**
-
-#### Missing Keys
-
-```bash
-# Generate new Age key
+
+Solutions:
+
+# Generate new Age key
age-keygen -o ~/.config/sops/age/keys.txt
# Update SOPS configuration
provisioning sops config --key-file ~/.config/sops/age/keys.txt
-```plaintext
-
-#### Key Permissions
-
-```bash
-# Fix key file permissions
+
+
+# Fix key file permissions
chmod 600 ~/.config/sops/age/keys.txt
chown $(whoami) ~/.config/sops/age/keys.txt
-```plaintext
-
-#### Configuration Issues
-
-```bash
-# Update SOPS configuration in ~/.config/provisioning/config.toml
+
+
+# Update SOPS configuration in ~/.config/provisioning/config.toml
[sops]
use_sops = true
key_search_paths = [
"~/.config/sops/age/keys.txt",
"/path/to/your/key.txt"
]
-```plaintext
-
-### Issue: Access Denied Errors
-
-**Symptoms:**
-
-```plaintext
-Permission denied
+
+
+Symptoms:
+Permission denied
Access denied
Insufficient privileges
-```plaintext
-
-**Diagnosis:**
-
-```bash
-# Check user permissions
+
+Diagnosis:
+# Check user permissions
id
groups
@@ -73730,12 +67546,9 @@ ls -la /usr/local/provisioning/
# Test with sudo
sudo provisioning env
-```plaintext
-
-**Solutions:**
-
-```bash
-# Fix file ownership
+
+Solutions:
+# Fix file ownership
sudo chown -R $(whoami):$(whoami) ~/.config/provisioning/
# Fix permissions
@@ -73744,36 +67557,25 @@ chmod 600 ~/.config/provisioning/config.toml
# Add user to required groups
sudo usermod -a -G docker $(whoami) # For Docker access
-```plaintext
-
-## Data and Storage Issues
-
-### Issue: Disk Space Problems
-
-**Symptoms:**
-
-```plaintext
-No space left on device
+
+
+
+Symptoms:
+No space left on device
Write failed
Disk full
-```plaintext
-
-**Diagnosis:**
-
-```bash
-# Check disk usage
+
+Diagnosis:
+# Check disk usage
df -h
du -sh ~/.config/provisioning/
du -sh /usr/local/provisioning/
# Find large files
find /usr/local/provisioning -type f -size +100M
-```plaintext
-
-**Solutions:**
-
-```bash
-# Clean up cache files
+
+Solutions:
+# Clean up cache files
rm -rf ~/.config/provisioning/cache/*
rm -rf /usr/local/provisioning/.cache/*
@@ -73785,14 +67587,10 @@ rm -rf /tmp/provisioning-*
# Compress old backups
gzip ~/.config/provisioning/backups/*.yaml
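Age-based cleanup like the above can be generalized with `find -mtime`. A sketch, with the cache directory as an assumption; previewing with `-print` before deleting is the safer habit:

```shell
# Remove files older than a cutoff, previewing the candidates first.
cleanup_old() {
  dir="$1"; days="$2"
  find "$dir" -type f -mtime +"$days" -print    # show what would be deleted
  find "$dir" -type f -mtime +"$days" -delete
}

# Example: cleanup_old ~/.config/provisioning/cache 30
```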
-```plaintext
-
-## Recovery Procedures
-
-### Configuration Recovery
-
-```bash
-# Restore from backup
+
+
+
+# Restore from backup
provisioning config restore --backup latest
# Reset to defaults
@@ -73800,12 +67598,9 @@ provisioning config reset
# Recreate configuration
provisioning init config --force
-```plaintext
-
-### Infrastructure Recovery
-
-```bash
-# Check infrastructure status
+
+
+# Check infrastructure status
provisioning show servers --infra my-infra
# Recover failed servers
@@ -73813,25 +67608,18 @@ provisioning server create failed-server --infra my-infra
# Restore from backup
provisioning restore --backup latest --infra my-infra
-```plaintext
-
-### Service Recovery
-
-```bash
-# Restart failed services
+
+
+# Restart failed services
provisioning taskserv restart kubernetes --infra my-infra
# Reinstall corrupted services
provisioning taskserv delete kubernetes --infra my-infra
provisioning taskserv create kubernetes --infra my-infra
-```plaintext
-
-## Prevention Strategies
-
-### Regular Maintenance
-
-```bash
-# Weekly maintenance script
+
+
+
+#!/bin/bash
# Weekly maintenance script
# Update system
@@ -73848,12 +67636,9 @@ provisioning cleanup --older-than 30d
# Create backup
provisioning backup create --name "weekly-$(date +%Y%m%d)"
-```plaintext
-
-### Monitoring Setup
-
-```bash
-# Set up health monitoring
+
+
+#!/bin/bash
# Set up health monitoring
# Check system health every hour
@@ -73861,36 +67646,45 @@ provisioning backup create --name "weekly-$(date +%Y%m%d)"
# Weekly cost reports
0 9 * * 1 /usr/local/bin/provisioning show costs --all | mail -s "Weekly Cost Report" finance@company.com
-```plaintext
-
-### Best Practices
-
-1. **Configuration Management**
- - Version control all configuration files
- - Use check mode before applying changes
- - Regular validation and testing
-
-2. **Security**
- - Regular key rotation
- - Principle of least privilege
- - Audit logs review
-
-3. **Backup Strategy**
- - Automated daily backups
- - Test restore procedures
- - Off-site backup storage
-
-4. **Documentation**
- - Document custom configurations
- - Keep troubleshooting logs
- - Share knowledge with team
-
-## Getting Additional Help
-
-### Debug Information Collection
-
-```bash
-#!/bin/bash
+
+
+
+
+Configuration Management
+
+Version control all configuration files
+Use check mode before applying changes
+Regular validation and testing
+
+
+
+Security
+
+Regular key rotation
+Principle of least privilege
+Audit logs review
+
+
+
+Backup Strategy
+
+Automated daily backups
+Test restore procedures
+Off-site backup storage
+
+
+
+Documentation
+
+Document custom configurations
+Keep troubleshooting logs
+Share knowledge with team
+
+
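The "version control all configuration files" practice above can be automated with a small snapshot helper. The paths and repo location here are assumptions; point them at your actual config directory:

```shell
# Snapshot the provisioning config directory into a local git repo.
snapshot_config() {
  src="$1"; repo="$2"
  mkdir -p "$repo"
  [ -d "$repo/.git" ] || git -C "$repo" init -q
  cp -R "$src/." "$repo/"
  git -C "$repo" add -A
  git -C "$repo" -c user.name=provisioning -c user.email=ops@localhost \
    commit -qm "config snapshot $(date +%Y-%m-%d)" || true  # no-op if unchanged
}

# Example: snapshot_config ~/.config/provisioning ~/provisioning-config-backup
```

Run it from the weekly maintenance script so every config change is recoverable via `git log`.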
+
+
+
+#!/bin/bash
# Collect debug information
echo "Collecting provisioning debug information..."
@@ -73919,18 +67713,16 @@ cd /tmp
tar czf provisioning-debug-$(date +%Y%m%d_%H%M%S).tar.gz provisioning-debug/
echo "Debug information collected in: provisioning-debug-*.tar.gz"
-```plaintext
-
-### Support Channels
-
-1. **Built-in Help**
-
- ```bash
- provisioning help
- provisioning help <command>
+
+Built-in Help
+provisioning help
+provisioning help <command>
+
+
+
Documentation
User guides in docs/user/
@@ -73962,7 +67754,7 @@ echo "Debug information collected in: provisioning-debug-*.tar.gz"
Estimated Time : 30-60 minutes
Difficulty : Beginner to Intermediate
-
+
Prerequisites
Step 1: Install Nushell
@@ -73982,40 +67774,36 @@ echo "Debug information collected in: provisioning-debug-*.tar.gz"
Next Steps
-
+
Before starting, ensure you have:
✅ Operating System : macOS, Linux, or Windows (WSL2 recommended)
✅ Administrator Access : Ability to install software and configure system
✅ Internet Connection : For downloading dependencies and accessing cloud providers
-✅ Cloud Provider Credentials : UpCloud, AWS, or local development environment
+✅ Cloud Provider Credentials : UpCloud, Hetzner, AWS, or local development environment
✅ Basic Terminal Knowledge : Comfortable running shell commands
-✅ Text Editor : vim, nano, VSCode, or your preferred editor
+✅ Text Editor : vim, nano, Zed, VSCode, or your preferred editor
CPU : 2+ cores
-RAM : 8GB minimum, 16GB recommended
-Disk : 20GB free space minimum
+RAM : 8 GB minimum, 16 GB recommended
+Disk : 20 GB free space minimum
-Nushell 0.107.1+ is the primary shell and scripting language for the provisioning platform.
+Nushell 0.109.1+ is the primary shell and scripting language for the provisioning platform.
# Install Nushell
brew install nushell
# Verify installation
nu --version
-# Expected: 0.107.1 or higher
-```plaintext
-
-### Linux (via Package Manager)
-
-**Ubuntu/Debian:**
-
-```bash
-# Add Nushell repository
+# Expected: 0.109.1 or higher
+
+
+Ubuntu/Debian:
+# Add the Nushell apt repository (Fury)
curl -fsSL https://apt.fury.io/nushell/gpg.key | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/fury-nushell.gpg
echo "deb https://apt.fury.io/nushell/ /" | sudo tee /etc/apt/sources.list.d/fury.list
sudo apt update
# Install Nushell
@@ -74024,26 +67812,17 @@ sudo apt install nushell
# Verify installation
nu --version
-```plaintext
-
-**Fedora:**
-
-```bash
-sudo dnf install nushell
+
+Fedora:
+sudo dnf install nushell
nu --version
-```plaintext
-
-**Arch Linux:**
-
-```bash
-sudo pacman -S nushell
+
+Arch Linux:
+sudo pacman -S nushell
nu --version
-```plaintext
-
-### Linux/macOS (via Cargo)
-
-```bash
-# Install Rust (if not already installed)
+
+
+# Install Rust (if not already installed)
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source $HOME/.cargo/env
@@ -74052,53 +67831,40 @@ cargo install nu --locked
# Verify installation
nu --version
-```plaintext
-
-### Windows (via Winget)
-
-```powershell
-# Install Nushell
+
+
+# Install Nushell
winget install nushell
# Verify installation
nu --version
-```plaintext
-
-### Configure Nushell
-
-```bash
-# Start Nushell
+
+
+# Start Nushell
nu
# Configure (creates default config if not exists)
config nu
-```plaintext
-
----
-
-## Step 2: Install Nushell Plugins (Recommended)
-
-Native plugins provide **10-50x performance improvement** for authentication, KMS, and orchestrator operations.
-
-### Why Install Plugins?
-
-**Performance Gains:**
-
-- 🚀 **KMS operations**: ~5ms vs ~50ms (10x faster)
-- 🚀 **Orchestrator queries**: ~1ms vs ~30ms (30x faster)
-- 🚀 **Batch encryption**: 100 files in 0.5s vs 5s (10x faster)
-
-**Benefits:**
-
-- ✅ Native Nushell integration (pipelines, data structures)
-- ✅ OS keyring for secure token storage
-- ✅ Offline capability (Age encryption, local orchestrator)
-- ✅ Graceful fallback to HTTP if not installed
-
-### Prerequisites for Building Plugins
-
-```bash
-# Install Rust toolchain (if not already installed)
+
+
+
+Native plugins provide 10-50x performance improvement for authentication, KMS, and orchestrator operations.
+
+Performance Gains:
+
+🚀 KMS operations : ~5 ms vs ~50 ms (10x faster)
+🚀 Orchestrator queries : ~1 ms vs ~30 ms (30x faster)
+🚀 Batch encryption : 100 files in 0.5s vs 5s (10x faster)
+
+Benefits:
+
+✅ Native Nushell integration (pipelines, data structures)
+✅ OS keyring for secure token storage
+✅ Offline capability (Age encryption, local orchestrator)
+✅ Graceful fallback to HTTP if not installed
+
+
+# Install Rust toolchain (if not already installed)
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source $HOME/.cargo/env
rustc --version
@@ -74111,12 +67877,9 @@ sudo dnf install openssl-devel # Fedora
# Linux only: Install keyring service (required for auth plugin)
sudo apt install gnome-keyring # Ubuntu/Debian (GNOME)
sudo apt install kwalletmanager # Ubuntu/Debian (KDE)
-```plaintext
-
-### Build Plugins
-
-```bash
-# Navigate to plugins directory
+
+
+# Navigate to plugins directory
cd provisioning/core/plugins/nushell-plugins
# Build all three plugins in release mode (optimized)
@@ -74127,14 +67890,10 @@ cargo build --release --all
# Compiling nu_plugin_kms v0.1.0
# Compiling nu_plugin_orchestrator v0.1.0
# Finished release [optimized] target(s) in 2m 15s
-```plaintext
-
-**Build time**: ~2-5 minutes depending on hardware
-
-### Register Plugins with Nushell
-
-```bash
-# Register all three plugins (full paths recommended)
+
+Build time : ~2-5 minutes depending on hardware
+
+# Register all three plugins (full paths recommended)
plugin add $PWD/target/release/nu_plugin_auth
plugin add $PWD/target/release/nu_plugin_kms
plugin add $PWD/target/release/nu_plugin_orchestrator
@@ -74143,12 +67902,9 @@ plugin add $PWD/target/release/nu_plugin_orchestrator
plugin add target/release/nu_plugin_auth
plugin add target/release/nu_plugin_kms
plugin add target/release/nu_plugin_orchestrator
-```plaintext
-
-### Verify Plugin Installation
-
-```bash
-# List registered plugins
+
+
+# List registered plugins
plugin list | where name =~ "auth|kms|orch"
# Expected output:
@@ -74164,12 +67920,9 @@ plugin list | where name =~ "auth|kms|orch"
auth --help # Should show auth commands
kms --help # Should show kms commands
orch --help # Should show orch commands
-```plaintext
-
-### Configure Plugin Environments
-
-```bash
-# Add to ~/.config/nushell/env.nu
+
+
+# Add to ~/.config/nushell/env.nu
$env.CONTROL_CENTER_URL = "http://localhost:3000"
$env.RUSTYVAULT_ADDR = "http://localhost:8200"
$env.RUSTYVAULT_TOKEN = "your-vault-token-here"
@@ -74178,12 +67931,9 @@ $env.ORCHESTRATOR_DATA_DIR = "provisioning/platform/orchestrator/data"
# For Age encryption (local development)
$env.AGE_IDENTITY = $"($env.HOME)/.age/key.txt"
$env.AGE_RECIPIENT = "age1xxxxxxxxx" # Replace with your public key
-```plaintext
-
-### Test Plugins (Quick Smoke Test)
-
-```bash
-# Test KMS plugin (requires backend configured)
+
+
+# Test KMS plugin (requires backend configured)
kms status
# Expected: { backend: "rustyvault", status: "healthy", ... }
# Or: Error if backend not configured (OK for now)
@@ -74197,50 +67947,25 @@ orch status
auth verify
# Expected: { active: false }
# Or: Error if control center not running (OK for now)
-```plaintext
-
-**Note**: It's OK if plugins show errors at this stage. We'll configure backends and services later.
-
-### Skip Plugins? (Not Recommended)
-
-If you want to skip plugin installation for now:
-
-- ✅ All features work via HTTP API (slower but functional)
-- ⚠️ You'll miss 10-50x performance improvements
-- ⚠️ No offline capability for KMS/orchestrator
-- ℹ️ You can install plugins later anytime
-
-To use HTTP fallback:
-
-```bash
-# System automatically uses HTTP if plugins not available
+
+Note: It’s OK if plugins show errors at this stage. We’ll configure backends and services later.
+
+If you want to skip plugin installation for now:
+
+✅ All features work via HTTP API (slower but functional)
+⚠️ You’ll miss 10-50x performance improvements
+⚠️ No offline capability for KMS/orchestrator
+ℹ️ You can install plugins later anytime
+
+To use HTTP fallback:
+# System automatically uses HTTP if plugins not available
# No configuration changes needed
-```plaintext
-
----
-
-## Step 3: Install Required Tools
-
-### Essential Tools
-
-**KCL (Configuration Language)**
-
-```bash
-# macOS
-brew install kcl
-
-# Linux
-curl -fsSL https://kcl-lang.io/script/install.sh | /bin/bash
-
-# Verify
-kcl version
-# Expected: 0.11.2 or higher
-```plaintext
-
-**SOPS (Secrets Management)**
-
-```bash
-# macOS
+
+
+
+
+SOPS (Secrets Management)
+# macOS
brew install sops
# Linux
@@ -74251,12 +67976,9 @@ sudo chmod +x /usr/local/bin/sops
# Verify
sops --version
# Expected: 3.10.2 or higher
-```plaintext
-
-**Age (Encryption Tool)**
-
-```bash
-# macOS
+
+Age (Encryption Tool)
+# macOS
brew install age
# Linux
@@ -74274,14 +67996,10 @@ age --version
age-keygen -o ~/.age/key.txt
cat ~/.age/key.txt
# Save the public key (age1...) for later
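Because the Age identity file is the root secret for your encrypted configs, it should be readable only by you. A minimal housekeeping sketch (the temp file below stands in for `~/.age/key.txt`):

```bash
# Restrict the Age identity to owner-only access and confirm the mode.
keyfile=$(mktemp)
echo 'AGE-SECRET-KEY-1EXAMPLE' > "$keyfile"   # stand-in for ~/.age/key.txt
chmod 600 "$keyfile"
perms=$(ls -l "$keyfile" | cut -c1-10)
echo "$perms"   # → -rw-------
rm -f "$keyfile"
```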
-```plaintext
-
-### Optional but Recommended Tools
-
-**K9s (Kubernetes Management)**
-
-```bash
-# macOS
+
+
+K9s (Kubernetes Management)
+# macOS
brew install k9s
# Linux
@@ -74290,12 +68008,9 @@ curl -sS https://webinstall.dev/k9s | bash
# Verify
k9s version
# Expected: 0.50.6 or higher
-```plaintext
-
-**glow (Markdown Renderer)**
-
-```bash
-# macOS
+
+glow (Markdown Renderer)
+# macOS
brew install glow
# Linux
@@ -74304,27 +68019,19 @@ sudo dnf install glow # Fedora
# Verify
glow --version
-```plaintext
-
----
-
-## Step 4: Clone and Setup Project
-
-### Clone Repository
-
-```bash
-# Clone project
+
+
+
+
+# Clone project
git clone https://github.com/your-org/project-provisioning.git
cd project-provisioning
# Or if already cloned, update to latest
git pull origin main
-```plaintext
-
-### Add CLI to PATH (Optional)
-
-```bash
-# Add to ~/.bashrc or ~/.zshrc
+
+
+# Add to ~/.bashrc or ~/.zshrc
export PATH="$PATH:/Users/Akasha/project-provisioning/provisioning/core/cli"
# Or create symlink
@@ -74333,18 +68040,12 @@ sudo ln -s /Users/Akasha/project-provisioning/provisioning/core/cli/provisioning
# Verify
provisioning version
# Expected: 3.5.0
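If `provisioning version` reports "command not found" after editing your shell profile, a quick portable way to confirm the directory actually landed on `PATH` is a `case` match (the directory below is illustrative):

```bash
# Check that a directory is really present in PATH.
dir="/tmp/provisioning-cli-demo"
mkdir -p "$dir"
export PATH="$PATH:$dir"
case ":$PATH:" in
  *":$dir:"*) found=yes ;;
  *)          found=no  ;;
esac
echo "on PATH: $found"   # → on PATH: yes
```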
-```plaintext
-
----
-
-## Step 5: Initialize Workspace
-
-A workspace is a self-contained environment for managing infrastructure.
-
-### Create New Workspace
-
-```bash
-# Initialize new workspace
+
+
+
+A workspace is a self-contained environment for managing infrastructure.
+
+# Initialize new workspace
provisioning workspace init --name production
# Or use interactive mode
@@ -74352,68 +68053,53 @@ provisioning workspace init
# Name: production
# Description: Production infrastructure
# Provider: upcloud
-```plaintext
-
-**What this creates:**
-
-The new workspace initialization now generates **KCL (Kusion Configuration Language) configuration files** for type-safe, schema-validated infrastructure definitions:
-
-```plaintext
-workspace/
+
+What this creates:
+The new workspace initialization now generates Nickel configuration files for type-safe, schema-validated infrastructure definitions:
+workspace/
├── config/
-│ ├── provisioning.k # Main KCL configuration (schema-validated)
+│ ├── config.ncl # Master Nickel configuration (type-safe)
│ ├── providers/
│ │ └── upcloud.toml # Provider-specific settings
│ ├── platform/ # Platform service configs
│ └── kms.toml # Key management settings
-├── infra/ # Infrastructure definitions
-├── extensions/ # Custom modules
-└── runtime/ # Runtime data and state
-```plaintext
+├── infra/
+│ └── default/
+│ ├── main.ncl # Infrastructure entry point
+│ └── servers.ncl # Server definitions
+├── docs/ # Auto-generated guides
+└── workspace.nu # Workspace utility scripts
+
+
+The workspace configuration uses Nickel (type-safe, validated). This provides:
+
+✅ Type Safety: Schema validation catches errors at load time
+✅ Lazy Evaluation: Only computes what’s needed
+✅ Validation: Record merging, required fields, constraints
+✅ Documentation: Self-documenting with records
+
+Example Nickel config (config.ncl):
+{
+ workspace = {
+ name = "production",
+ version = "1.0.0",
+ created = "2025-12-03T14:30:00Z",
+ },
-### Workspace Configuration Format
+ paths = {
+ base = "/opt/workspaces/production",
+ infra = "/opt/workspaces/production/infra",
+ cache = "/opt/workspaces/production/.cache",
+ },
-The workspace configuration now uses **KCL (type-safe)** instead of YAML. This provides:
-
-- ✅ **Type Safety**: Schema validation catches errors at load time
-- ✅ **Immutability**: Enforces configuration immutability by default
-- ✅ **Validation**: Semantic versioning, required fields, value constraints
-- ✅ **Documentation**: Self-documenting with schema descriptions
-
-**Example KCL config** (`provisioning.k`):
-
-```kcl
-import provisioning.workspace_config as ws
-
-workspace_config = ws.WorkspaceConfig {
- workspace: {
- name: "production"
- version: "1.0.0"
- created: "2025-12-03T14:30:00Z"
- }
-
- paths: {
- base: "/opt/workspaces/production"
- infra: "/opt/workspaces/production/infra"
- cache: "/opt/workspaces/production/.cache"
- # ... other paths
- }
-
- providers: {
- active: ["upcloud"]
- default: "upcloud"
- }
-
- # ... other sections
+ providers = {
+ active = ["upcloud"],
+ default = "upcloud",
+ },
}
-```plaintext
-
-**Backward Compatibility**: If you have existing YAML workspace configs (`provisioning.yaml`), they continue to work. The config loader checks for KCL files first, then falls back to YAML.
-
-### Verify Workspace
-
-```bash
-# Show workspace info
+
+
+# Show workspace info
provisioning workspace info
# List all workspaces
@@ -74422,14 +68108,10 @@ provisioning workspace list
# Show active workspace
provisioning workspace active
# Expected: production
-```plaintext
-
-### View and Validate Workspace Configuration
-
-Now you can inspect and validate your KCL workspace configuration:
-
-```bash
-# View complete workspace configuration
+
+
+Now you can inspect and validate your Nickel workspace configuration:
+# View complete workspace configuration
provisioning workspace config show
# Show specific workspace
@@ -74438,7 +68120,7 @@ provisioning workspace config show production
# View configuration in different formats
provisioning workspace config show --format=json
provisioning workspace config show --format=yaml
-provisioning workspace config show --format=kcl # Raw KCL file
+provisioning workspace config show --format=nickel # Raw Nickel file
# Validate workspace configuration
provisioning workspace config validate
@@ -74446,48 +68128,35 @@ provisioning workspace config validate
# Show configuration hierarchy (priority order)
provisioning workspace config hierarchy
-```plaintext
-
-**Configuration Validation**: The KCL schema automatically validates:
-
-- ✅ Semantic versioning format (e.g., "1.0.0")
-- ✅ Required sections present (workspace, paths, provisioning, etc.)
-- ✅ Valid file paths and types
-- ✅ Provider configuration exists for active providers
-- ✅ KMS and SOPS settings properly configured
-
----
-
-## Step 6: Configure Environment
-
-### Set Provider Credentials
-
-**UpCloud Provider:**
-
-```bash
-# Create provider config
+
+Configuration Validation: The Nickel schema automatically validates:
+
+✅ Semantic versioning format (for example, “1.0.0”)
+✅ Required sections present (workspace, paths, provisioning, etc.)
+✅ Valid file paths and types
+✅ Provider configuration exists for active providers
+✅ KMS and SOPS settings properly configured
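The semantic-versioning check in the first bullet is, conceptually, a pattern match on MAJOR.MINOR.PATCH. A shell stand-in for that rule (not the actual Nickel contract, just the shape of the check):

```bash
# Illustrative semver gate: accept only MAJOR.MINOR.PATCH strings.
check_semver() {
  printf '%s\n' "$1" | grep -Eq '^[0-9]+\.[0-9]+\.[0-9]+$'
}

check_semver "1.0.0" && echo "1.0.0: valid"
check_semver "1.0"   || echo "1.0: rejected"
```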
+
+
+
+
+UpCloud Provider:
+# Create provider config
vim workspace/config/providers/upcloud.toml
-```plaintext
-
-```toml
-[upcloud]
+
+[upcloud]
username = "your-upcloud-username"
password = "your-upcloud-password" # Will be encrypted
# Default settings
default_zone = "de-fra1"
-default_plan = "2xCPU-4GB"
-```plaintext
-
-**AWS Provider:**
-
-```bash
-# Create AWS config
+default_plan = "2xCPU-4GB"
+
+AWS Provider:
+# Create AWS config
vim workspace/config/providers/aws.toml
-```plaintext
-
-```toml
-[aws]
+
+[aws]
region = "us-east-1"
access_key_id = "AKIAXXXXX"
secret_access_key = "xxxxx" # Will be encrypted
@@ -74495,12 +68164,9 @@ secret_access_key = "xxxxx" # Will be encrypted
# Default settings
default_instance_type = "t3.medium"
default_region = "us-east-1"
-```plaintext
-
-### Encrypt Sensitive Data
-
-```bash
-# Generate Age key if not done already
+
+
+# Generate Age key if not done already
age-keygen -o ~/.age/key.txt
# Encrypt provider configs
@@ -74513,17 +68179,12 @@ sops --encrypt --age $(cat ~/.age/key.txt | grep "public key:" | cut -d: -f2) \
# Remove plaintext
rm workspace/config/providers/upcloud.toml
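The `$(cat ... | grep "public key:" | cut -d: -f2)` substitution above relies on `age-keygen` writing the public key as a commented line. A slightly more robust extraction sketch, demonstrated on a fake key file so nothing touches your real identity:

```bash
# age-keygen key files contain a line like:  # public key: age1...
keyfile=$(mktemp)
printf '# created: 2025-01-01\n# public key: age1exampleexample\nAGE-SECRET-KEY-1XXXX\n' > "$keyfile"

# Split on ": " so the value keeps no leading whitespace.
pubkey=$(awk -F': ' '/public key:/ {print $2}' "$keyfile")
echo "$pubkey"   # → age1exampleexample
rm -f "$keyfile"
```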
-```plaintext
-
-### Configure Local Overrides
-
-```bash
-# Edit user-specific settings
+
+
+# Edit user-specific settings
vim workspace/config/local-overrides.toml
-```plaintext
-
-```toml
-[user]
+
+[user]
name = "admin"
email = "admin@example.com"
@@ -74538,16 +68199,11 @@ use_curl = true # Use curl instead of ureq
[paths]
ssh_key = "~/.ssh/id_ed25519"
-```plaintext
-
----
-
-## Step 7: Discover and Load Modules
-
-### Discover Available Modules
-
-```bash
-# Discover task services
+
+
+
+
+# Discover task services
provisioning module discover taskserv
# Shows: kubernetes, containerd, etcd, cilium, helm, etc.
@@ -74558,12 +68214,9 @@ provisioning module discover provider
# Discover clusters
provisioning module discover cluster
# Shows: buildkit, registry, monitoring, etc.
-```plaintext
-
-### Load Modules into Workspace
-
-```bash
-# Load Kubernetes taskserv
+
+
+# Load Kubernetes taskserv
provisioning module load taskserv production kubernetes
# Load multiple modules
@@ -74575,16 +68228,11 @@ provisioning module load cluster production buildkit
# Verify loaded modules
provisioning module list taskserv production
provisioning module list cluster production
-```plaintext
-
----
-
-## Step 8: Validate Configuration
-
-Before deploying, validate all configuration:
-
-```bash
-# Validate workspace configuration
+
+
+
+Before deploying, validate all configuration:
+# Validate workspace configuration
provisioning workspace validate
# Validate infrastructure configuration
@@ -74598,47 +68246,35 @@ provisioning env
# Show all configuration and environment
provisioning allenv
-```plaintext
-
-**Expected output:**
-
-```plaintext
-✓ Configuration valid
+
+Expected output:
+✓ Configuration valid
✓ Provider credentials configured
✓ Workspace initialized
✓ Modules loaded: 3 taskservs, 1 cluster
✓ SSH key configured
✓ Age encryption key available
-```plaintext
-
-**Fix any errors** before proceeding to deployment.
-
----
-
-## Step 9: Deploy Servers
-
-### Preview Server Creation (Dry Run)
-
-```bash
-# Check what would be created (no actual changes)
+
+Fix any errors before proceeding to deployment.
+
+
+
+# Check what would be created (no actual changes)
provisioning server create --infra production --check
# With debug output for details
provisioning server create --infra production --check --debug
-```plaintext
-
-**Review the output:**
-
-- Server names and configurations
-- Zones and regions
-- CPU, memory, disk specifications
-- Estimated costs
-- Network settings
-
-### Create Servers
-
-```bash
-# Create servers (with confirmation prompt)
+
+Review the output:
+
+Server names and configurations
+Zones and regions
+CPU, memory, disk specifications
+Estimated costs
+Network settings
+
+
+# Create servers (with confirmation prompt)
provisioning server create --infra production
# Or auto-confirm (skip prompt)
@@ -74646,16 +68282,13 @@ provisioning server create --infra production --yes
# Wait for completion
provisioning server create --infra production --wait
-```plaintext
+
+Expected output:
+Creating servers for infrastructure: production
-**Expected output:**
-
-```plaintext
-Creating servers for infrastructure: production
-
- ● Creating server: k8s-master-01 (de-fra1, 4xCPU-8GB)
- ● Creating server: k8s-worker-01 (de-fra1, 4xCPU-8GB)
- ● Creating server: k8s-worker-02 (de-fra1, 4xCPU-8GB)
+ ● Creating server: k8s-master-01 (de-fra1, 4xCPU-8GB)
+ ● Creating server: k8s-worker-01 (de-fra1, 4xCPU-8GB)
+ ● Creating server: k8s-worker-02 (de-fra1, 4xCPU-8GB)
✓ Created 3 servers in 120 seconds
@@ -74663,12 +68296,9 @@ Servers:
• k8s-master-01: 192.168.1.10 (Running)
• k8s-worker-01: 192.168.1.11 (Running)
• k8s-worker-02: 192.168.1.12 (Running)
-```plaintext
-
-### Verify Server Creation
-
-```bash
-# List all servers
+
+
+# List all servers
provisioning server list --infra production
# Show detailed server info
@@ -74677,18 +68307,12 @@ provisioning server list --infra production --out yaml
# SSH to server (test connectivity)
provisioning server ssh k8s-master-01
# Type 'exit' to return
-```plaintext
-
----
-
-## Step 10: Install Task Services
-
-Task services are infrastructure components like Kubernetes, databases, monitoring, etc.
-
-### Install Kubernetes (Check Mode First)
-
-```bash
-# Preview Kubernetes installation
+
+
+
+Task services are infrastructure components like Kubernetes, databases, monitoring, etc.
+
+# Preview Kubernetes installation
provisioning taskserv create kubernetes --infra production --check
# Shows:
@@ -74696,12 +68320,9 @@ provisioning taskserv create kubernetes --infra production --check
# - Configuration to be applied
# - Resources needed
# - Estimated installation time
-```plaintext
-
-### Install Kubernetes
-
-```bash
-# Install Kubernetes (with dependencies)
+
+
+# Install Kubernetes (with dependencies)
provisioning taskserv create kubernetes --infra production
# Or install dependencies first
@@ -74711,12 +68332,9 @@ provisioning taskserv create kubernetes --infra production
# Monitor progress
provisioning workflow monitor <task_id>
-```plaintext
-
-**Expected output:**
-
-```plaintext
-Installing taskserv: kubernetes
+
+Expected output:
+Installing taskserv: kubernetes
● Installing containerd on k8s-master-01
● Installing containerd on k8s-worker-01
@@ -74739,12 +68357,9 @@ Cluster Info:
• Version: 1.28.0
• Nodes: 3 (1 control-plane, 2 workers)
• API Server: https://192.168.1.10:6443
-```plaintext
-
-### Install Additional Services
-
-```bash
-# Install Cilium (CNI)
+
+
+# Install Cilium (CNI)
provisioning taskserv create cilium --infra production
# Install Helm
@@ -74752,18 +68367,12 @@ provisioning taskserv create helm --infra production
# Verify all taskservs
provisioning taskserv list --infra production
-```plaintext
-
----
-
-## Step 11: Create Clusters
-
-Clusters are complete application stacks (e.g., BuildKit, OCI Registry, Monitoring).
-
-### Create BuildKit Cluster (Check Mode)
-
-```bash
-# Preview cluster creation
+
+
+
+Clusters are complete application stacks (for example, BuildKit, OCI Registry, Monitoring).
+
+# Preview cluster creation
provisioning cluster create buildkit --infra production --check
# Shows:
@@ -74771,12 +68380,9 @@ provisioning cluster create buildkit --infra production --check
# - Dependencies required
# - Configuration values
# - Resource requirements
-```plaintext
-
-### Create BuildKit Cluster
-
-```bash
-# Create BuildKit cluster
+
+
+# Create BuildKit cluster
provisioning cluster create buildkit --infra production
# Monitor deployment
@@ -74784,12 +68390,9 @@ provisioning workflow monitor <task_id>
# Or use plugin for faster monitoring
orch tasks --status running
-```plaintext
-
-**Expected output:**
-
-```plaintext
-Creating cluster: buildkit
+
+Expected output:
+Creating cluster: buildkit
● Deploying BuildKit daemon
● Deploying BuildKit worker
@@ -74801,14 +68404,11 @@ Creating cluster: buildkit
Cluster Info:
• BuildKit version: 0.12.0
• Workers: 2
- • Cache: 50GB
+ • Cache: 50 GB
• Registry: registry.production.local
-```plaintext
-
-### Verify Cluster
-
-```bash
-# List all clusters
+
+
+# List all clusters
provisioning cluster list --infra production
# Show cluster details
@@ -74816,16 +68416,11 @@ provisioning cluster list --infra production --out yaml
# Check cluster health
kubectl get pods -n buildkit
-```plaintext
-
----
-
-## Step 12: Verify Deployment
-
-### Comprehensive Health Check
-
-```bash
-# Check orchestrator status
+
+
+
+
+# Check orchestrator status
orch status
# or
provisioning orchestrator status
@@ -74842,12 +68437,9 @@ provisioning cluster list --infra production
# Verify Kubernetes cluster
kubectl get nodes
kubectl get pods --all-namespaces
-```plaintext
-
-### Run Validation Tests
-
-```bash
-# Validate infrastructure
+
+
+# Validate infrastructure
provisioning infra validate --infra production
# Test connectivity
@@ -74855,26 +68447,20 @@ provisioning server ssh k8s-master-01 "kubectl get nodes"
# Test BuildKit
kubectl exec -it -n buildkit buildkit-0 -- buildctl --version
-```plaintext
-
-### Expected Results
-
-All checks should show:
-
-- ✅ Servers: Running
-- ✅ Taskservs: Installed and healthy
-- ✅ Clusters: Deployed and operational
-- ✅ Kubernetes: 3/3 nodes ready
-- ✅ BuildKit: 2/2 workers ready
-
----
-
-## Step 13: Post-Deployment
-
-### Configure kubectl Access
-
-```bash
-# Get kubeconfig from master node
+
+
+All checks should show:
+
+✅ Servers: Running
+✅ Taskservs: Installed and healthy
+✅ Clusters: Deployed and operational
+✅ Kubernetes: 3/3 nodes ready
+✅ BuildKit: 2/2 workers ready
+
+
+
+
+# Get kubeconfig from master node
provisioning server ssh k8s-master-01 "cat ~/.kube/config" > ~/.kube/config-production
# Set KUBECONFIG
@@ -74883,34 +68469,25 @@ export KUBECONFIG=~/.kube/config-production
# Verify access
kubectl get nodes
kubectl get pods --all-namespaces
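If you want to keep your existing kubeconfig alongside the production one, `KUBECONFIG` accepts a colon-separated list and `kubectl` merges the files left to right. A sketch of setting that up (file paths assumed, not created here):

```bash
# Point kubectl at both configs; the first match for a context wins.
export KUBECONFIG="$HOME/.kube/config:$HOME/.kube/config-production"
count=$(printf '%s\n' "$KUBECONFIG" | tr ':' '\n' | wc -l)
echo "kubeconfig entries: $count"
```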
-```plaintext
-
-### Set Up Monitoring (Optional)
-
-```bash
-# Deploy monitoring stack
+
+
+# Deploy monitoring stack
provisioning cluster create monitoring --infra production
# Access Grafana
kubectl port-forward -n monitoring svc/grafana 3000:80
# Open: http://localhost:3000
-```plaintext
-
-### Configure CI/CD Integration (Optional)
-
-```bash
-# Generate CI/CD credentials
+
+
+# Generate CI/CD credentials
provisioning secrets generate aws --ttl 12h
# Create CI/CD kubeconfig
kubectl create serviceaccount ci-cd -n default
kubectl create clusterrolebinding ci-cd --clusterrole=admin --serviceaccount=default:ci-cd
-```plaintext
-
-### Backup Configuration
-
-```bash
-# Backup workspace configuration
+
+
+# Backup workspace configuration
tar -czf workspace-production-backup.tar.gz workspace/
# Encrypt backup
@@ -74918,18 +68495,12 @@ kms encrypt (open workspace-production-backup.tar.gz | encode base64) --backend
| save workspace-production-backup.tar.gz.enc
# Store securely (S3, Vault, etc.)
-```plaintext
-
----
-
-## Troubleshooting
-
-### Server Creation Fails
-
-**Problem**: Server creation times out or fails
-
-```bash
-# Check provider credentials
+
+
+
+
+Problem: Server creation times out or fails
+# Check provider credentials
provisioning validate config
# Check provider API status
@@ -74937,14 +68508,10 @@ curl -u username:password https://api.upcloud.com/1.3/account
# Try with debug mode
provisioning server create --infra production --check --debug
-```plaintext
-
-### Taskserv Installation Fails
-
-**Problem**: Kubernetes installation fails
-
-```bash
-# Check server connectivity
+
+
+Problem: Kubernetes installation fails
+# Check server connectivity
provisioning server ssh k8s-master-01
# Check logs
@@ -74956,14 +68523,10 @@ provisioning taskserv list --infra production | where status == "failed"
# Retry installation
provisioning taskserv delete kubernetes --infra production
provisioning taskserv create kubernetes --infra production
-```plaintext
-
-### Plugin Commands Don't Work
-
-**Problem**: `auth`, `kms`, or `orch` commands not found
-
-```bash
-# Check plugin registration
+
+
+Problem: auth, kms, or orch commands not found
+# Check plugin registration
plugin list | where name =~ "auth|kms|orch"
# Re-register if missing
@@ -74975,14 +68538,10 @@ plugin add target/release/nu_plugin_orchestrator
# Restart Nushell
exit
nu
-```plaintext
-
-### KMS Encryption Fails
-
-**Problem**: `kms encrypt` returns error
-
-```bash
-# Check backend status
+
+
+Problem: kms encrypt returns error
+# Check backend status
kms status
# Check RustyVault running
@@ -74993,14 +68552,10 @@ kms encrypt "data" --backend age --key age1xxxxxxxxx
# Check Age key
cat ~/.age/key.txt
-```plaintext
-
-### Orchestrator Not Running
-
-**Problem**: `orch status` returns error
-
-```bash
-# Check orchestrator status
+
+
+Problem: orch status returns error
+# Check orchestrator status
ps aux | grep orchestrator
# Start orchestrator
@@ -75009,14 +68564,10 @@ cd provisioning/platform/orchestrator
# Check logs
tail -f provisioning/platform/orchestrator/data/orchestrator.log
-```plaintext
-
-### Configuration Validation Errors
-
-**Problem**: `provisioning validate config` shows errors
-
-```bash
-# Show detailed errors
+
+
+Problem: provisioning validate config shows errors
+# Show detailed errors
provisioning validate config --debug
# Check configuration files
@@ -75024,27 +68575,23 @@ provisioning allenv
# Fix missing settings
vim workspace/config/local-overrides.toml
-```plaintext
-
----
-
-## Next Steps
-
-### Explore Advanced Features
-
-1. **Multi-Environment Deployment**
-
- ```bash
- # Create dev and staging workspaces
- provisioning workspace create dev
- provisioning workspace create staging
- provisioning workspace switch dev
+
+
+
+Multi-Environment Deployment
+# Create dev and staging workspaces
+provisioning workspace create dev
+provisioning workspace create staging
+provisioning workspace switch dev
+
+
+
Batch Operations
# Deploy to multiple clouds
-provisioning batch submit workflows/multi-cloud-deploy.k
+provisioning batch submit workflows/multi-cloud-deploy.ncl
@@ -75069,7 +68616,7 @@ provisioning compliance report --standard soc2
Update Guide: docs/guides/update-infrastructure.md
Customize Guide: docs/guides/customize-infrastructure.md
Plugin Guide: docs/user/PLUGIN_INTEGRATION_GUIDE.md
-Security System : docs/architecture/ADR-009-security-system-complete.md
+Security System: docs/architecture/adr-009-security-system-complete.md
# Show help for any command
@@ -75082,15 +68629,11 @@ provisioning version
# Start Nushell session with provisioning library
provisioning nu
-```plaintext
-
----
-
-## Summary
-
-You've successfully:
-
-✅ Installed Nushell and essential tools
+
+
+
+You’ve successfully:
+✅ Installed Nushell and essential tools
✅ Built and registered native plugins (10-50x faster operations)
✅ Cloned and configured the project
✅ Initialized a production workspace
@@ -75098,24 +68641,19 @@ You've successfully:
✅ Deployed servers
✅ Installed Kubernetes and task services
✅ Created application clusters
-✅ Verified complete deployment
-
-**Your infrastructure is now ready for production use!**
-
----
-
-**Estimated Total Time**: 30-60 minutes
-**Next Guide**: [Update Infrastructure](update-infrastructure.md)
-**Questions?**: Open an issue or contact <platform-team@example.com>
-
-**Last Updated**: 2025-10-09
-**Version**: 3.5.0
-
+✅ Verified complete deployment
+Your infrastructure is now ready for production use!
+
+Estimated Total Time: 30-60 minutes
+Next Guide: Update Infrastructure
+Questions?: Open an issue or contact platform-team@example.com
+Last Updated: 2025-10-09
+Version: 3.5.0
Goal: Safely update running infrastructure with minimal downtime
Time: 15-30 minutes
Difficulty: Intermediate
-
+
This guide covers:
Checking for updates
@@ -75130,42 +68668,27 @@ You've successfully:
Best for: Non-critical environments, development, staging
# Direct update without downtime consideration
provisioning t create <taskserv> --infra <project>
-```plaintext
-
-### Strategy 2: Rolling Updates (Recommended)
-
-**Best for**: Production environments, high availability
-
-```bash
-# Update servers one by one
+
+
+Best for: Production environments, high availability
+# Update servers one by one
provisioning s update --infra <project> --rolling
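Internally, a rolling update amounts to "touch one node at a time and stop at the first unhealthy one". A conceptual shell sketch of that loop (node names and the health probe are stand-ins, not the tool's actual implementation):

```bash
# Rolling-update shape: update each node, gate on health, abort early.
update_rolling() {
  for node in "$@"; do
    echo "updating $node"
    health="ok"   # real flow: drain, apply update, probe readiness, uncordon
    if [ "$health" != "ok" ]; then
      echo "aborting at $node"
      return 1
    fi
  done
  echo "rolling update complete"
}

update_rolling web-01 web-02
```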
-```plaintext
-
-### Strategy 3: Blue-Green Deployment (Safest)
-
-**Best for**: Critical production, zero-downtime requirements
-
-```bash
-# Create new infrastructure, switch traffic, remove old
+
+
+Best for: Critical production, zero-downtime requirements
+# Create new infrastructure, switch traffic, remove old
provisioning ws init <project>-green
# ... configure and deploy
# ... switch traffic
provisioning ws delete <project>-blue
-```plaintext
-
-## Step 1: Check for Updates
-
-### 1.1 Check All Task Services
-
-```bash
-# Check all taskservs for updates
+
+
+
+# Check all taskservs for updates
provisioning t check-updates
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-📦 Task Service Update Check:
+
+Expected Output:
+📦 Task Service Update Check:
NAME CURRENT LATEST STATUS
kubernetes 1.29.0 1.30.0 ⬆️ update available
@@ -75175,19 +68698,13 @@ postgres 15.5 16.1 ⬆️ update available
redis 7.2.3 7.2.3 ✅ up-to-date
Updates available: 3
-```plaintext
-
-### 1.2 Check Specific Task Service
-
-```bash
-# Check specific taskserv
+
+
+# Check specific taskserv
provisioning t check-updates kubernetes
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-📦 Kubernetes Update Check:
+
+Expected Output:
+📦 Kubernetes Update Check:
Current: 1.29.0
Latest: 1.30.0
@@ -75203,19 +68720,13 @@ Breaking Changes:
• None
Recommended: ✅ Safe to update
-```plaintext
-
-### 1.3 Check Version Status
-
-```bash
-# Show detailed version information
+
+
+# Show detailed version information
provisioning version show
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-📋 Component Versions:
+
+Expected Output:
+📋 Component Versions:
COMPONENT CURRENT LATEST DAYS OLD STATUS
kubernetes 1.29.0 1.30.0 45 ⬆️ update
@@ -75223,51 +68734,32 @@ containerd 1.7.13 1.7.13 0 ✅ current
cilium 1.14.5 1.15.0 30 ⬆️ update
postgres 15.5 16.1 60 ⬆️ update (major)
redis 7.2.3 7.2.3 0 ✅ current
-```plaintext
-
-### 1.4 Check for Security Updates
-
-```bash
-# Check for security-related updates
+
+
+# Check for security-related updates
provisioning version updates --security-only
-```plaintext
-
-## Step 2: Plan Your Update
-
-### 2.1 Review Current Configuration
-
-```bash
-# Show current infrastructure
+
+
+
+# Show current infrastructure
provisioning show settings --infra my-production
-```plaintext
-
-### 2.2 Backup Configuration
-
-```bash
-# Create configuration backup
+
+
+# Create configuration backup
cp -r workspace/infra/my-production workspace/infra/my-production.backup-$(date +%Y%m%d)
# Or use built-in backup
provisioning ws backup my-production
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-✅ Backup created: workspace/backups/my-production-20250930.tar.gz
-```plaintext
-
-### 2.3 Create Update Plan
-
-```bash
-# Generate update plan
+
+Expected Output:
+✅ Backup created: workspace/backups/my-production-20250930.tar.gz
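Whichever backup route you take, verify the archive is actually readable before the update proceeds; a truncated tarball is worse than none. A self-contained sketch of that check (demo paths, not your real workspace):

```bash
# Create a dated archive and confirm it can be listed before trusting it.
mkdir -p demo-ws/config
echo 'name = "demo"' > demo-ws/config/ws.toml
archive="demo-ws-backup-$(date +%Y%m%d).tar.gz"
tar -czf "$archive" demo-ws
tar -tzf "$archive" > /dev/null && verified=yes
echo "backup readable: $verified"
rm -rf demo-ws "$archive"
```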
+
+
+# Generate update plan
provisioning plan update --infra my-production
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-📝 Update Plan for my-production:
+
+Expected Output:
+📝 Update Plan for my-production:
Phase 1: Minor Updates (Low Risk)
• containerd: No update needed
@@ -75287,23 +68779,15 @@ Recommended Order:
Total Estimated Time: 30 minutes
Recommended: Test in staging environment first
-```plaintext
-
-## Step 3: Update Task Services
-
-### 3.1 Update Non-Critical Service (Cilium Example)
-
-#### Dry-Run Update
-
-```bash
-# Test update without applying
+
+
+
+
+# Test update without applying
provisioning t create cilium --infra my-production --check
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-🔍 CHECK MODE: Simulating Cilium update
+
+Expected Output:
+🔍 CHECK MODE: Simulating Cilium update
Current: 1.14.5
Target: 1.15.0
@@ -75316,33 +68800,21 @@ Would perform:
Estimated downtime: <1 minute per node
No errors detected. Ready to update.
-```plaintext
-
-#### Generate Updated Configuration
-
-```bash
-# Generate new configuration
+
+
+# Generate new configuration
provisioning t generate cilium --infra my-production
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-✅ Generated Cilium configuration (version 1.15.0)
- Saved to: workspace/infra/my-production/taskservs/cilium.k
-```plaintext
-
-#### Apply Update
-
-```bash
-# Apply update
+
+Expected Output:
+✅ Generated Cilium configuration (version 1.15.0)
+ Saved to: workspace/infra/my-production/taskservs/cilium.ncl
+
+
+# Apply update
provisioning t create cilium --infra my-production
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-🚀 Updating Cilium on my-production...
+
+Expected Output:
+🚀 Updating Cilium on my-production...
Downloading Cilium 1.15.0... ⏳
✅ Downloaded
@@ -75362,19 +68834,13 @@ Verifying connectivity... ⏳
🎉 Cilium update complete!
Version: 1.14.5 → 1.15.0
Downtime: 0 minutes
-```plaintext
-
-#### Verify Update
-
-```bash
-# Verify updated version
+
+
+# Verify updated version
provisioning version taskserv cilium
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-📦 Cilium Version Info:
+
+Expected Output:
+📦 Cilium Version Info:
Installed: 1.15.0
Latest: 1.15.0
@@ -75383,49 +68849,33 @@ Status: ✅ Up-to-date
Nodes:
✅ web-01: 1.15.0 (running)
✅ web-02: 1.15.0 (running)
-```plaintext
-
-### 3.2 Update Critical Service (Kubernetes Example)
-
-#### Test in Staging First
-
-```bash
-# If you have staging environment
+
+
+
+# If you have staging environment
provisioning t create kubernetes --infra my-staging --check
provisioning t create kubernetes --infra my-staging
# Run integration tests
provisioning test kubernetes --infra my-staging
-```plaintext
-
-#### Backup Current State
-
-```bash
-# Backup Kubernetes state
+
+
+# Backup Kubernetes state
kubectl get all -A -o yaml > k8s-backup-$(date +%Y%m%d).yaml
# Backup etcd (if using external etcd)
provisioning t backup kubernetes --infra my-production
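Because the `kubectl get all ... > file` form creates the file even when `kubectl` fails, it is worth confirming the dump is non-empty before starting the upgrade. A sketch of that guard (the printf stands in for real kubectl output):

```bash
# An empty dump usually means kubectl failed silently behind the redirect.
backup="k8s-backup-demo.yaml"
printf 'apiVersion: v1\nkind: List\n' > "$backup"  # stand-in for the kubectl dump
if [ -s "$backup" ]; then status="non-empty"; else status="EMPTY"; fi
echo "backup $status"
rm -f "$backup"
```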
-```plaintext
-
-#### Schedule Maintenance Window
-
-```bash
-# Set maintenance mode (optional, if supported)
+
+
+# Set maintenance mode (optional, if supported)
provisioning maintenance enable --infra my-production --duration 30m
-```plaintext
-
-#### Update Kubernetes
-
-```bash
-# Update control plane first
+
+
+# Update control plane first
provisioning t create kubernetes --infra my-production --control-plane-only
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-🚀 Updating Kubernetes control plane on my-production...
+
+Expected Output:
+🚀 Updating Kubernetes control plane on my-production...
Draining control plane: web-01... ⏳
✅ web-01 drained
@@ -75440,17 +68890,12 @@ Verifying control plane... ⏳
✅ Control plane healthy
🎉 Control plane update complete!
-```plaintext
-
-```bash
-# Update worker nodes one by one
+
+# Update worker nodes one by one
provisioning t create kubernetes --infra my-production --workers-only --rolling
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-🚀 Updating Kubernetes workers on my-production...
+
+Expected Output:
+🚀 Updating Kubernetes workers on my-production...
Rolling update: web-02...
Draining... ⏳
@@ -75468,44 +68913,28 @@ Rolling update: web-02...
🎉 Worker update complete!
Updated: web-02
Version: 1.30.0
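The rolling worker update shown above boils down to a drain/upgrade/uncordon loop, one node at a time. A dry-run sketch of that loop, with `echo` standing in for the real kubectl and upgrade commands (node names are illustrative):

```shell
# Per-node rolling update: drain workloads off the node, upgrade it,
# then return it to service before touching the next node.
nodes="web-02 web-03"
updated=0
for node in $nodes; do
  echo "kubectl drain $node --ignore-daemonsets"
  echo "upgrade kubelet on $node"
  echo "kubectl uncordon $node"
  updated=$((updated + 1))
done
echo "rolled through $updated nodes"
```

Processing one node at a time is what keeps the rest of the cluster serving traffic during the update.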
-```plaintext
-
-#### Verify Update
-
-```bash
-# Verify Kubernetes cluster
+
+
+# Verify Kubernetes cluster
kubectl get nodes
provisioning version taskserv kubernetes
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-NAME STATUS ROLES AGE VERSION
+
+Expected Output:
+NAME STATUS ROLES AGE VERSION
web-01 Ready control-plane 30d v1.30.0
web-02 Ready <none> 30d v1.30.0
-```plaintext
-
-```bash
-# Run smoke tests
+
+# Run smoke tests
provisioning test kubernetes --infra my-production
-```plaintext
-
-### 3.3 Update Database (PostgreSQL Example)
-
-⚠️ **WARNING**: Database updates may require data migration. Always backup first!
-
-#### Backup Database
-
-```bash
-# Backup PostgreSQL database
+
+
+⚠️ WARNING: Database updates may require data migration. Always back up first!
+
+# Backup PostgreSQL database
provisioning t backup postgres --infra my-production
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-🗄️ Backing up PostgreSQL...
+
+Expected Output:
+🗄️ Backing up PostgreSQL...
Creating dump: my-production-postgres-20250930.sql... ⏳
✅ Dump created (2.3 GB)
@@ -75514,19 +68943,13 @@ Compressing... ⏳
✅ Compressed (450 MB)
Saved to: workspace/backups/postgres/my-production-20250930.sql.gz
-```plaintext
-
-#### Check Compatibility
-
-```bash
-# Check if data migration is needed
+
+
+# Check if data migration is needed
provisioning t check-migration postgres --from 15.5 --to 16.1
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-🔍 PostgreSQL Migration Check:
+
+Expected Output:
+🔍 PostgreSQL Migration Check:
From: 15.5
To: 16.1
@@ -75544,19 +68967,13 @@ Estimated Time: 15-30 minutes (depending on data size)
Estimated Downtime: 15-30 minutes
Recommended: Use streaming replication for zero-downtime upgrade
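The migration check hinges on comparing major versions: a 15.x → 16.x jump is what triggers the data-migration path. A minimal sketch of that comparison:

```shell
# A major-version change (the part before the first dot) means the
# on-disk format may differ and a data migration is required.
from="15.5"; to="16.1"
major_from=${from%%.*}
major_to=${to%%.*}
if [ "$major_from" != "$major_to" ]; then
  migration=required
else
  migration=not-required
fi
echo "upgrade $from -> $to: migration $migration"
```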
-```plaintext
-
-#### Perform Update
-
-```bash
-# Update PostgreSQL (with automatic migration)
+
+
+# Update PostgreSQL (with automatic migration)
provisioning t create postgres --infra my-production --migrate
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-🚀 Updating PostgreSQL on my-production...
+
+Expected Output:
+🚀 Updating PostgreSQL on my-production...
⚠️ Major version upgrade detected (15.5 → 16.1)
Automatic migration will be performed
@@ -75585,29 +69002,19 @@ Verifying data integrity... ⏳
🎉 PostgreSQL update complete!
Version: 15.5 → 16.1
Downtime: 18 minutes
-```plaintext
-
-#### Verify Update
-
-```bash
-# Verify PostgreSQL
+
+
+# Verify PostgreSQL
provisioning version taskserv postgres
ssh db-01 "psql --version"
-```plaintext
-
-## Step 4: Update Multiple Services
-
-### 4.1 Batch Update (Sequentially)
-
-```bash
-# Update multiple taskservs one by one
+
+
+
+# Update multiple taskservs one by one
provisioning t update --infra my-production --taskservs cilium,containerd,redis
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-🚀 Updating 3 taskservs on my-production...
+
+Expected Output:
+🚀 Updating 3 taskservs on my-production...
[1/3] Updating cilium... ⏳
✅ cilium updated (1.15.0)
@@ -75621,19 +69028,13 @@ provisioning t update --infra my-production --taskservs cilium,containerd,redis
🎉 All updates complete!
Updated: 3 taskservs
Total time: 8 minutes
-```plaintext
-
-### 4.2 Parallel Update (Non-Dependent Services)
-
-```bash
-# Update taskservs in parallel (if they don't depend on each other)
+
+
+# Update taskservs in parallel (if they don't depend on each other)
provisioning t update --infra my-production --taskservs redis,postgres --parallel
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-🚀 Updating 2 taskservs in parallel on my-production...
+
+Expected Output:
+🚀 Updating 2 taskservs in parallel on my-production...
redis: Updating... ⏳
postgres: Updating... ⏳
@@ -75644,50 +69045,35 @@ postgres: ✅ Updated (16.1)
🎉 All updates complete!
Updated: 2 taskservs
Total time: 3 minutes (parallel)
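The parallel mode above amounts to running independent updates as background jobs and waiting for all of them to finish. A self-contained sketch, with `fake_update` standing in for a real per-service update:

```shell
# Two independent service updates launched as background jobs;
# `wait` blocks until both have completed.
fake_update() { sleep 1; echo "$1 updated" >> updates.log; }
rm -f updates.log
fake_update redis &
fake_update postgres &
wait
count=$(grep -c "updated" updates.log)
echo "completed $count updates in parallel"
rm -f updates.log
```

This is also why the parallel path is only safe for services with no dependency between them: `wait` provides no ordering guarantee.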
-```plaintext
-
-## Step 5: Update Server Configuration
-
-### 5.1 Update Server Resources
-
-```bash
-# Edit server configuration
-provisioning sops workspace/infra/my-production/servers.k
-```plaintext
-
-**Example: Upgrade server plan**
-
-```kcl
-# Before
+
+
+
+# Edit server configuration
+provisioning sops workspace/infra/my-production/servers.ncl
+
+Example: Upgrade server plan
+# Before
{
name = "web-01"
- plan = "1xCPU-2GB" # Old plan
+  plan = "1xCPU-2GB"  # Old plan
}
# After
{
name = "web-01"
- plan = "2xCPU-4GB" # New plan
+  plan = "2xCPU-4GB"  # New plan
}
-```plaintext
-
-```bash
-# Apply server update
+
+# Apply server update
provisioning s update --infra my-production --check
provisioning s update --infra my-production
-```plaintext
-
-### 5.2 Update Server OS
-
-```bash
-# Update operating system packages
+
+
+# Update operating system packages
provisioning s update --infra my-production --os-update
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-🚀 Updating OS packages on my-production servers...
+
+Expected Output:
+🚀 Updating OS packages on my-production servers...
web-01: Updating packages... ⏳
✅ web-01: 24 packages updated
@@ -75699,23 +69085,15 @@ db-01: Updating packages... ⏳
✅ db-01: 24 packages updated
🎉 OS updates complete!
-```plaintext
-
-## Step 6: Rollback Procedures
-
-### 6.1 Rollback Task Service
-
-If update fails or causes issues:
-
-```bash
-# Rollback to previous version
+
+
+
+If an update fails or causes issues:
+# Rollback to previous version
provisioning t rollback cilium --infra my-production
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-🔄 Rolling back Cilium on my-production...
+
+Expected Output:
+🔄 Rolling back Cilium on my-production...
Current: 1.15.0
Target: 1.14.5 (previous version)
@@ -75731,35 +69109,22 @@ Verifying connectivity... ⏳
🎉 Rollback complete!
Version: 1.15.0 → 1.14.5
-```plaintext
-
-### 6.2 Rollback from Backup
-
-```bash
-# Restore configuration from backup
+
+
+# Restore configuration from backup
provisioning ws restore my-production --from workspace/backups/my-production-20250930.tar.gz
-```plaintext
-
-### 6.3 Emergency Rollback
-
-```bash
-# Complete infrastructure rollback
+
+
+# Complete infrastructure rollback
provisioning rollback --infra my-production --to-snapshot <snapshot-id>
-```plaintext
-
-## Step 7: Post-Update Verification
-
-### 7.1 Verify All Components
-
-```bash
-# Check overall health
+
+
+
+# Check overall health
provisioning health --infra my-production
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-🏥 Health Check: my-production
+
+Expected Output:
+🏥 Health Check: my-production
Servers:
✅ web-01: Healthy
@@ -75776,26 +69141,17 @@ Clusters:
✅ buildkit: 2/2 replicas (healthy)
Overall Status: ✅ All systems healthy
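Post-update verification usually means polling health with bounded retries rather than checking once, since services can take a moment to settle. A sketch, with `check_health` as a stand-in that reports healthy on the third attempt:

```shell
# Bounded retry loop: poll until healthy or until the attempt budget
# (5 here) is exhausted.
check_health() { [ "$1" -ge 3 ]; }
attempt=1
while ! check_health "$attempt" && [ "$attempt" -lt 5 ]; do
  attempt=$((attempt + 1))
done
if check_health "$attempt"; then
  echo "healthy after $attempt attempts"
else
  echo "still unhealthy after $attempt attempts"
fi
```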
-```plaintext
-
-### 7.2 Verify Version Updates
-
-```bash
-# Verify all versions are updated
+
+
+# Verify all versions are updated
provisioning version show
-```plaintext
-
-### 7.3 Run Integration Tests
-
-```bash
-# Run comprehensive tests
+
+
+# Run comprehensive tests
provisioning test all --infra my-production
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-🧪 Running Integration Tests...
+
+Expected Output:
+🧪 Running Integration Tests...
[1/5] Server connectivity... ⏳
✅ All servers reachable
@@ -75813,70 +69169,66 @@ provisioning test all --infra my-production
✅ All applications healthy
🎉 All tests passed!
-```plaintext
-
-### 7.4 Monitor for Issues
-
-```bash
-# Monitor logs for errors
+
+
+# Monitor logs for errors
provisioning logs --infra my-production --follow --level error
-```plaintext
-
-## Update Checklist
-
-Use this checklist for production updates:
-
-- [ ] Check for available updates
-- [ ] Review changelog and breaking changes
-- [ ] Create configuration backup
-- [ ] Test update in staging environment
-- [ ] Schedule maintenance window
-- [ ] Notify team/users of maintenance
-- [ ] Update non-critical services first
-- [ ] Verify each update before proceeding
-- [ ] Update critical services with rolling updates
-- [ ] Backup database before major updates
-- [ ] Verify all components after update
-- [ ] Run integration tests
-- [ ] Monitor for issues (30 minutes minimum)
-- [ ] Document any issues encountered
-- [ ] Close maintenance window
-
-## Common Update Scenarios
-
-### Scenario 1: Minor Security Patch
-
-```bash
-# Quick security update
+
+
+Use this checklist for production updates:
+
+Check for available updates
+Review changelog and breaking changes
+Create configuration backup
+Test update in staging environment
+Schedule maintenance window
+Notify team/users of maintenance
+Update non-critical services first
+Verify each update before proceeding
+Update critical services with rolling updates
+Backup database before major updates
+Verify all components after update
+Run integration tests
+Monitor for issues (30 minutes minimum)
+Document any issues encountered
+Close maintenance window
+
+
+# Quick security update
provisioning t check-updates --security-only
provisioning t update --infra my-production --security-patches --yes
-```plaintext
-
-### Scenario 2: Major Version Upgrade
-
-```bash
-# Careful major version update
+
+
+# Careful major version update
provisioning ws backup my-production
provisioning t check-migration <service> --from X.Y --to X+1.Y
provisioning t create <service> --infra my-production --migrate
provisioning test all --infra my-production
-```plaintext
-
-### Scenario 3: Emergency Hotfix
-
-```bash
-# Apply critical hotfix immediately
+
+
+# Apply critical hotfix immediately
provisioning t create <service> --infra my-production --hotfix --yes
-```plaintext
-
-## Troubleshooting Updates
-
-### Issue: Update fails mid-process
-
-**Solution:**
-
-```bash
-# Check update status
+
+
+
+Solution:
+# Check update status
provisioning t status <taskserv> --infra my-production
# Resume failed update
@@ -75884,14 +69236,10 @@ provisioning t update <taskserv> --infra my-production --resume
# Or rollback
provisioning t rollback <taskserv> --infra my-production
-```plaintext
-
-### Issue: Service not starting after update
-
-**Solution:**
-
-```bash
-# Check logs
+
+
+Solution:
+# Check logs
provisioning logs <taskserv> --infra my-production
# Verify configuration
@@ -75899,41 +69247,34 @@ provisioning t validate <taskserv> --infra my-production
# Rollback if necessary
provisioning t rollback <taskserv> --infra my-production
-```plaintext
-
-### Issue: Data migration fails
-
-**Solution:**
-
-```bash
-# Check migration logs
+
+
+Solution:
+# Check migration logs
provisioning t migration-logs <taskserv> --infra my-production
# Restore from backup
provisioning t restore <taskserv> --infra my-production --from <backup-file>
-```plaintext
-
-## Best Practices
-
-1. **Always Test First**: Test updates in staging before production
-2. **Backup Everything**: Create backups before any update
-3. **Update Gradually**: Update one service at a time
-4. **Monitor Closely**: Watch for errors after each update
-5. **Have Rollback Plan**: Always have a rollback strategy
-6. **Document Changes**: Keep update logs for reference
-7. **Schedule Wisely**: Update during low-traffic periods
-8. **Verify Thoroughly**: Run tests after each update
-
-## Next Steps
-
-- **[Customize Guide](customize-infrastructure.md)** - Customize your infrastructure
-- **[From Scratch Guide](from-scratch.md)** - Deploy new infrastructure
-- **[Workflow Guide](../development/workflow.md)** - Automate with workflows
-
-## Quick Reference
-
-```bash
-# Update workflow
+
+
+
+Always Test First: Test updates in staging before production
+Backup Everything: Create backups before any update
+Update Gradually: Update one service at a time
+Monitor Closely: Watch for errors after each update
+Have Rollback Plan: Always have a rollback strategy
+Document Changes: Keep update logs for reference
+Schedule Wisely: Update during low-traffic periods
+Verify Thoroughly: Run tests after each update
+
+
+
+
+# Update workflow
provisioning t check-updates
provisioning ws backup my-production
provisioning t create <taskserv> --infra my-production --check
@@ -75941,17 +69282,14 @@ provisioning t create <taskserv> --infra my-production
provisioning version taskserv <taskserv>
provisioning health --infra my-production
provisioning test all --infra my-production
-```plaintext
-
----
-
-*This guide is part of the provisioning project documentation. Last updated: 2025-09-30*
+
+This guide is part of the provisioning project documentation. Last updated: 2025-09-30
Goal: Customize infrastructure using layers, templates, and configuration patterns
Time: 20-40 minutes
Difficulty: Intermediate to Advanced
-
+
This guide covers:
Understanding the layer system
@@ -75986,23 +69324,15 @@ provisioning test all --infra my-production
│ • Provider implementations │
│ • Default taskserv configs │
└─────────────────────────────────────┘
-```plaintext
-
-**Resolution Order**: Infrastructure (300) → Workspace (200) → Core (100)
-
-Higher numbers override lower numbers.
-
-### View Layer Resolution
-
-```bash
-# Explain layer concept
+
+Resolution Order : Infrastructure (300) → Workspace (200) → Core (100)
+Higher numbers override lower numbers.
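The override semantics can be sketched with shell parameter defaults: an infrastructure-layer value wins when it is set, otherwise the core default applies, mirroring the 300 → 200 → 100 ordering (variable names are illustrative):

```shell
# Core layer defaults
core_max_connections=100
core_version="15.5"
# Infrastructure layer overrides only one field
infra_max_connections=500
# Resolution: take the higher-priority value when present, else fall back
max_connections=${infra_max_connections:-$core_max_connections}
version=${infra_version:-$core_version}
echo "resolved: version=$version max_connections=$max_connections"
```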
+
+# Explain layer concept
provisioning lyr explain
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-📚 LAYER SYSTEM EXPLAINED
+
+Expected Output:
+📚 LAYER SYSTEM EXPLAINED
The layer system provides configuration inheritance across 3 levels:
@@ -76029,28 +69359,23 @@ The layer system provides configuration inheritance across 3 levels:
Resolution: Infrastructure → Workspace → Core
Higher priority layers override lower ones.
-```plaintext
-
-```bash
-# Show layer resolution for your project
+
+# Show layer resolution for your project
provisioning lyr show my-production
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-📊 Layer Resolution for my-production:
+
+Expected Output:
+📊 Layer Resolution for my-production:
LAYER PRIORITY SOURCE FILES
Infrastructure 300 workspace/infra/my-production/ 4 files
- • servers.k (overrides)
- • taskservs.k (overrides)
- • clusters.k (custom)
- • providers.k (overrides)
+ • servers.ncl (overrides)
+ • taskservs.ncl (overrides)
+ • clusters.ncl (custom)
+ • providers.ncl (overrides)
Workspace 200 provisioning/workspace/templates/ 2 files
- • production.k (used)
- • kubernetes.k (used)
+ • production.ncl (used)
+ • kubernetes.ncl (used)
Core 100 provisioning/extensions/ 15 files
• taskservs/* (base configs)
@@ -76059,38 +69384,32 @@ Core 100 provisioning/extensions/ 15 files
Resolution Order: Infrastructure → Workspace → Core
Status: ✅ All layers resolved successfully
-```plaintext
-
-### Test Layer Resolution
-
-```bash
-# Test how a specific module resolves
+
+
+# Test how a specific module resolves
provisioning lyr test kubernetes my-production
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-🔍 Layer Resolution Test: kubernetes → my-production
+
+Expected Output:
+🔍 Layer Resolution Test: kubernetes → my-production
Resolving kubernetes configuration...
🔴 Infrastructure Layer (300):
- ✅ Found: workspace/infra/my-production/taskservs/kubernetes.k
+ ✅ Found: workspace/infra/my-production/taskservs/kubernetes.ncl
Provides:
• version = "1.30.0" (overrides)
• control_plane_servers = ["web-01"] (overrides)
• worker_servers = ["web-02"] (overrides)
🟢 Workspace Layer (200):
- ✅ Found: provisioning/workspace/templates/production-kubernetes.k
+ ✅ Found: provisioning/workspace/templates/production-kubernetes.ncl
Provides:
• security_policies (inherited)
• network_policies (inherited)
• resource_quotas (inherited)
🔵 Core Layer (100):
- ✅ Found: provisioning/extensions/taskservs/kubernetes/config.k
+ ✅ Found: provisioning/extensions/taskservs/kubernetes/main.ncl
Provides:
• default_version = "1.29.0" (base)
• default_features (base)
@@ -76107,21 +69426,14 @@ Final Configuration (after merging all layers):
default_plugins: {...} (from Core)
Resolution: ✅ Success
-```plaintext
-
-## Using Templates
-
-### List Available Templates
-
-```bash
-# List all templates
+
+
+
+# List all templates
provisioning tpl list
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-📋 Available Templates:
+
+Expected Output:
+📋 Available Templates:
TASKSERVS:
• production-kubernetes - Production-ready Kubernetes setup
@@ -76143,26 +69455,18 @@ CLUSTERS:
• security-stack - Security monitoring tools
Total: 13 templates
-```plaintext
-
-```bash
-# List templates by type
+
+# List templates by type
provisioning tpl list --type taskservs
provisioning tpl list --type providers
provisioning tpl list --type clusters
-```plaintext
-
-### View Template Details
-
-```bash
-# Show template details
+
+
+# Show template details
provisioning tpl show production-kubernetes
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-📄 Template: production-kubernetes
+
+Expected Output:
+📄 Template: production-kubernetes
Description: Production-ready Kubernetes configuration with
security hardening, network policies, and monitoring
@@ -76181,26 +69485,20 @@ Configuration Provided:
Requirements:
• Minimum 2 servers
- • 4GB RAM per server
+ • 4 GB RAM per server
• Network plugin (Cilium recommended)
-Location: provisioning/workspace/templates/production-kubernetes.k
+Location: provisioning/workspace/templates/production-kubernetes.ncl
Example Usage:
provisioning tpl apply production-kubernetes my-production
-```plaintext
-
-### Apply Template
-
-```bash
-# Apply template to your infrastructure
+
+
+# Apply template to your infrastructure
provisioning tpl apply production-kubernetes my-production
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-🚀 Applying template: production-kubernetes → my-production
+
+Expected Output:
+🚀 Applying template: production-kubernetes → my-production
Checking compatibility... ⏳
✅ Infrastructure compatible with template
@@ -76209,10 +69507,10 @@ Merging configuration... ⏳
✅ Configuration merged
Files created/updated:
- • workspace/infra/my-production/taskservs/kubernetes.k (updated)
- • workspace/infra/my-production/policies/security.k (created)
- • workspace/infra/my-production/policies/network.k (created)
- • workspace/infra/my-production/monitoring/prometheus.k (created)
+ • workspace/infra/my-production/taskservs/kubernetes.ncl (updated)
+ • workspace/infra/my-production/policies/security.ncl (created)
+ • workspace/infra/my-production/policies/network.ncl (created)
+ • workspace/infra/my-production/monitoring/prometheus.ncl (created)
🎉 Template applied successfully!
@@ -76220,19 +69518,13 @@ Next steps:
1. Review generated configuration
2. Adjust as needed
3. Deploy: provisioning t create kubernetes --infra my-production
-```plaintext
-
-### Validate Template Usage
-
-```bash
-# Validate template was applied correctly
+
+
+# Validate template was applied correctly
provisioning tpl validate my-production
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-✅ Template Validation: my-production
+
+Expected Output:
+✅ Template Validation: my-production
Templates Applied:
✅ production-kubernetes (v1.0.0)
@@ -76250,89 +69542,77 @@ Compliance:
✅ Monitoring enabled
Status: ✅ Valid
-```plaintext
-
-## Creating Custom Templates
-
-### Step 1: Create Template Structure
-
-```bash
-# Create custom template directory
+
+
+
+# Create custom template directory
mkdir -p provisioning/workspace/templates/my-custom-template
-```plaintext
+
+
+File: provisioning/workspace/templates/my-custom-template/main.ncl
+# Custom Kubernetes template with specific settings
+let kubernetes_config = {
+ # Version
+ version = "1.30.0",
-### Step 2: Write Template Configuration
+ # Custom feature gates
+ feature_gates = {
+ "GracefulNodeShutdown" = true,
+ "SeccompDefault" = true,
+ "StatefulSetAutoDeletePVC" = true,
+ },
-**File: `provisioning/workspace/templates/my-custom-template/config.k`**
+ # Custom kubelet configuration
+ kubelet_config = {
+ max_pods = 110,
+ pod_pids_limit = 4096,
+ container_log_max_size = "10Mi",
+ container_log_max_files = 5,
+ },
-```kcl
-# Custom Kubernetes template with specific settings
+ # Custom API server flags
+ apiserver_extra_args = {
+ "enable-admission-plugins" = "NodeRestriction,PodSecurity,LimitRanger",
+ "audit-log-maxage" = "30",
+ "audit-log-maxbackup" = "10",
+ },
-kubernetes_config = {
- # Version
- version = "1.30.0"
+ # Custom scheduler configuration
+ scheduler_config = {
+ profiles = [
+ {
+ name = "high-availability",
+ plugins = {
+ score = {
+ enabled = [
+ {name = "NodeResourcesBalancedAllocation", weight = 2},
+ {name = "NodeResourcesLeastAllocated", weight = 1},
+ ],
+ },
+ },
+ },
+ ],
+ },
- # Custom feature gates
- feature_gates = {
- "GracefulNodeShutdown" = True
- "SeccompDefault" = True
- "StatefulSetAutoDeletePVC" = True
- }
+ # Network configuration
+ network = {
+ service_cidr = "10.96.0.0/12",
+ pod_cidr = "10.244.0.0/16",
+ dns_domain = "cluster.local",
+ },
- # Custom kubelet configuration
- kubelet_config = {
- max_pods = 110
- pod_pids_limit = 4096
- container_log_max_size = "10Mi"
- container_log_max_files = 5
- }
-
- # Custom API server flags
- apiserver_extra_args = {
- "enable-admission-plugins" = "NodeRestriction,PodSecurity,LimitRanger"
- "audit-log-maxage" = "30"
- "audit-log-maxbackup" = "10"
- }
-
- # Custom scheduler configuration
- scheduler_config = {
- profiles = [
- {
- name = "high-availability"
- plugins = {
- score = {
- enabled = [
- {name = "NodeResourcesBalancedAllocation", weight = 2}
- {name = "NodeResourcesLeastAllocated", weight = 1}
- ]
- }
- }
- }
- ]
- }
-
- # Network configuration
- network = {
- service_cidr = "10.96.0.0/12"
- pod_cidr = "10.244.0.0/16"
- dns_domain = "cluster.local"
- }
-
- # Security configuration
- security = {
- pod_security_standard = "restricted"
- encrypt_etcd = True
- rotate_certificates = True
- }
-}
-```plaintext
-
-### Step 3: Create Template Metadata
-
-**File: `provisioning/workspace/templates/my-custom-template/metadata.toml`**
-
-```toml
-[template]
+ # Security configuration
+ security = {
+ pod_security_standard = "restricted",
+ encrypt_etcd = true,
+ rotate_certificates = true,
+ },
+} in
+kubernetes_config
+
+
+File: provisioning/workspace/templates/my-custom-template/metadata.toml
+[template]
name = "my-custom-template"
version = "1.0.0"
description = "Custom Kubernetes template with enhanced security"
@@ -76347,12 +69627,9 @@ required_taskservs = ["containerd", "cilium"]
[tags]
environment = ["production", "staging"]
features = ["security", "monitoring", "high-availability"]
-```plaintext
-
-### Step 4: Test Custom Template
-
-```bash
-# List templates (should include your custom template)
+
+
+# List templates (should include your custom template)
provisioning tpl list
# Show your template
@@ -76360,131 +69637,104 @@ provisioning tpl show my-custom-template
# Apply to test infrastructure
provisioning tpl apply my-custom-template my-test
-```plaintext
-
-## Configuration Inheritance Examples
-
-### Example 1: Override Single Value
-
-**Core Layer** (`provisioning/extensions/taskservs/postgres/config.k`):
-
-```kcl
-postgres_config = {
- version = "15.5"
- port = 5432
- max_connections = 100
-}
-```plaintext
-
-**Infrastructure Layer** (`workspace/infra/my-production/taskservs/postgres.k`):
-
-```kcl
-postgres_config = {
- max_connections = 500 # Override only max_connections
-}
-```plaintext
-
-**Result** (after layer resolution):
-
-```kcl
-postgres_config = {
- version = "15.5" # From Core
- port = 5432 # From Core
- max_connections = 500 # From Infrastructure (overridden)
-}
-```plaintext
-
-### Example 2: Add Custom Configuration
-
-**Workspace Layer** (`provisioning/workspace/templates/production-postgres.k`):
-
-```kcl
-postgres_config = {
- replication = {
- enabled = True
- replicas = 2
- sync_mode = "async"
- }
-}
-```plaintext
-
-**Infrastructure Layer** (`workspace/infra/my-production/taskservs/postgres.k`):
-
-```kcl
-postgres_config = {
- replication = {
- sync_mode = "sync" # Override sync mode
- }
- custom_extensions = ["pgvector", "timescaledb"] # Add custom config
-}
-```plaintext
-
-**Result**:
-
-```kcl
-postgres_config = {
- version = "15.5" # From Core
- port = 5432 # From Core
- max_connections = 100 # From Core
- replication = {
- enabled = True # From Workspace
- replicas = 2 # From Workspace
- sync_mode = "sync" # From Infrastructure (overridden)
- }
- custom_extensions = ["pgvector", "timescaledb"] # From Infrastructure (added)
-}
-```plaintext
-
-### Example 3: Environment-Specific Configuration
-
-**Workspace Layer** (`provisioning/workspace/templates/base-kubernetes.k`):
-
-```kcl
-kubernetes_config = {
- version = "1.30.0"
- control_plane_count = 3
- worker_count = 5
- resources = {
- control_plane = {cpu = "4", memory = "8Gi"}
- worker = {cpu = "8", memory = "16Gi"}
- }
-}
-```plaintext
-
-**Development Infrastructure** (`workspace/infra/my-dev/taskservs/kubernetes.k`):
-
-```kcl
-kubernetes_config = {
- control_plane_count = 1 # Smaller for dev
- worker_count = 2
- resources = {
- control_plane = {cpu = "2", memory = "4Gi"}
- worker = {cpu = "2", memory = "4Gi"}
- }
-}
-```plaintext
-
-**Production Infrastructure** (`workspace/infra/my-prod/taskservs/kubernetes.k`):
-
-```kcl
-kubernetes_config = {
- control_plane_count = 5 # Larger for prod
- worker_count = 10
- resources = {
- control_plane = {cpu = "8", memory = "16Gi"}
- worker = {cpu = "16", memory = "32Gi"}
- }
-}
-```plaintext
-
-## Advanced Customization Patterns
-
-### Pattern 1: Multi-Environment Setup
-
-Create different configurations for each environment:
-
-```bash
-# Create environments
+
+
+
+Core Layer (provisioning/extensions/taskservs/postgres/main.ncl):
+let postgres_config = {
+ version = "15.5",
+ port = 5432,
+ max_connections = 100,
+} in
+postgres_config
+
+Infrastructure Layer (workspace/infra/my-production/taskservs/postgres.ncl):
+let postgres_config = {
+ max_connections = 500, # Override only max_connections
+} in
+postgres_config
+
+Result (after layer resolution):
+let postgres_config = {
+ version = "15.5", # From Core
+ port = 5432, # From Core
+ max_connections = 500, # From Infrastructure (overridden)
+} in
+postgres_config
+
+
+Workspace Layer (provisioning/workspace/templates/production-postgres.ncl):
+let postgres_config = {
+ replication = {
+ enabled = true,
+ replicas = 2,
+ sync_mode = "async",
+ },
+} in
+postgres_config
+
+Infrastructure Layer (workspace/infra/my-production/taskservs/postgres.ncl):
+let postgres_config = {
+ replication = {
+ sync_mode = "sync", # Override sync mode
+ },
+ custom_extensions = ["pgvector", "timescaledb"], # Add custom config
+} in
+postgres_config
+
+Result:
+let postgres_config = {
+ version = "15.5", # From Core
+ port = 5432, # From Core
+ max_connections = 100, # From Core
+ replication = {
+ enabled = true, # From Workspace
+ replicas = 2, # From Workspace
+ sync_mode = "sync", # From Infrastructure (overridden)
+ },
+ custom_extensions = ["pgvector", "timescaledb"], # From Infrastructure (added)
+} in
+postgres_config
+
+
+Workspace Layer (provisioning/workspace/templates/base-kubernetes.ncl):
+let kubernetes_config = {
+ version = "1.30.0",
+ control_plane_count = 3,
+ worker_count = 5,
+ resources = {
+ control_plane = {cpu = "4", memory = "8Gi"},
+ worker = {cpu = "8", memory = "16Gi"},
+ },
+} in
+kubernetes_config
+
+Development Infrastructure (workspace/infra/my-dev/taskservs/kubernetes.ncl):
+let kubernetes_config = {
+ control_plane_count = 1, # Smaller for dev
+ worker_count = 2,
+ resources = {
+ control_plane = {cpu = "2", memory = "4Gi"},
+ worker = {cpu = "2", memory = "4Gi"},
+ },
+} in
+kubernetes_config
+
+Production Infrastructure (workspace/infra/my-prod/taskservs/kubernetes.ncl):
+let kubernetes_config = {
+ control_plane_count = 5, # Larger for prod
+ worker_count = 10,
+ resources = {
+ control_plane = {cpu = "8", memory = "16Gi"},
+ worker = {cpu = "16", memory = "32Gi"},
+ },
+} in
+kubernetes_config
+
+
+
+Create different configurations for each environment:
+# Create environments
provisioning ws init my-app-dev
provisioning ws init my-app-staging
provisioning ws init my-app-prod
@@ -76498,97 +69748,80 @@ provisioning tpl apply production-kubernetes my-app-prod
# Edit: workspace/infra/my-app-dev/...
# Edit: workspace/infra/my-app-staging/...
# Edit: workspace/infra/my-app-prod/...
-```plaintext
-
-### Pattern 2: Shared Configuration Library
-
-Create reusable configuration fragments:
-
-**File: `provisioning/workspace/templates/shared/security-policies.k`**
-
-```kcl
-security_policies = {
- pod_security = {
- enforce = "restricted"
- audit = "restricted"
- warn = "restricted"
- }
- network_policies = [
+
+
+Create reusable configuration fragments:
+File: provisioning/workspace/templates/shared/security-policies.ncl
+let security_policies = {
+ pod_security = {
+ enforce = "restricted",
+ audit = "restricted",
+ warn = "restricted",
+ },
+ network_policies = [
+ {
+ name = "deny-all",
+ pod_selector = {},
+ policy_types = ["Ingress", "Egress"],
+ },
+ {
+ name = "allow-dns",
+ pod_selector = {},
+ egress = [
{
- name = "deny-all"
- pod_selector = {}
- policy_types = ["Ingress", "Egress"]
+ to = [{namespace_selector = {name = "kube-system"}}],
+ ports = [{protocol = "UDP", port = 53}],
},
- {
- name = "allow-dns"
- pod_selector = {}
- egress = [
- {
- to = [{namespace_selector = {name = "kube-system"}}]
- ports = [{protocol = "UDP", port = 53}]
- }
- ]
- }
- ]
-}
-```plaintext
+ ],
+ },
+ ],
+} in
+security_policies
+
+Import in your infrastructure:
+let security_policies = (import "../../../provisioning/workspace/templates/shared/security-policies.ncl") in
-Import in your infrastructure:
+let kubernetes_config = {
+ version = "1.30.0",
+ image_repo = "k8s.gcr.io",
+ security = security_policies, # Import shared policies
+} in
+kubernetes_config
+
+
+Use Nickel features for dynamic configuration:
+# Calculate resources based on server count
+let server_count = 5 in
+let replicas_per_server = 2 in
+let total_replicas = server_count * replicas_per_server in
-```kcl
-import "../../../provisioning/workspace/templates/shared/security-policies.k"
+let postgres_config = {
+ version = "16.1",
+ max_connections = total_replicas * 50, # Dynamic calculation
+  shared_buffers = std.to_string (total_replicas * 128) ++ "MB",  # Dynamic: 1280MB for 10 replicas
+} in
+postgres_config
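The same derivation, done with shell arithmetic so the numbers are easy to check (using the guide's 50-connections-per-replica and 128 MB-per-replica rules):

```shell
# Derive connection and buffer sizing from the server count, as the
# dynamic-configuration pattern above does.
server_count=5
replicas_per_server=2
total_replicas=$((server_count * replicas_per_server))
max_connections=$((total_replicas * 50))
shared_buffers_mb=$((total_replicas * 128))
echo "replicas=$total_replicas max_connections=$max_connections shared_buffers=${shared_buffers_mb}MB"
```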
+
+
+let environment = "production" in # or "development"
-kubernetes_config = {
- version = "1.30.0"
- # ... other config
- security = security_policies # Import shared policies
-}
-```plaintext
-
-### Pattern 3: Dynamic Configuration
-
-Use KCL features for dynamic configuration:
-
-```kcl
-# Calculate resources based on server count
-server_count = 5
-replicas_per_server = 2
-total_replicas = server_count * replicas_per_server
-
-postgres_config = {
- version = "16.1"
- max_connections = total_replicas * 50 # Dynamic calculation
- shared_buffers = "${total_replicas * 128}MB"
-}
-```plaintext
-
-### Pattern 4: Conditional Configuration
-
-```kcl
-environment = "production" # or "development"
-
-kubernetes_config = {
- version = "1.30.0"
- control_plane_count = if environment == "production" { 3 } else { 1 }
- worker_count = if environment == "production" { 5 } else { 2 }
- monitoring = {
- enabled = environment == "production"
- retention = if environment == "production" { "30d" } else { "7d" }
- }
-}
-```plaintext
-
-## Layer Statistics
-
-```bash
-# Show layer system statistics
+let kubernetes_config = {
+ version = "1.30.0",
+ control_plane_count = if environment == "production" then 3 else 1,
+ worker_count = if environment == "production" then 5 else 2,
+ monitoring = {
+ enabled = environment == "production",
+ retention = if environment == "production" then "30d" else "7d",
+ },
+} in
+kubernetes_config
+
+
+# Show layer system statistics
provisioning lyr stats
-```plaintext
-
-**Expected Output:**
-
-```plaintext
-📊 Layer System Statistics:
+
+Expected Output:
+📊 Layer System Statistics:
Infrastructure Layer:
• Projects: 3
@@ -76606,17 +69839,13 @@ Core Layer:
• Clusters: 3
Resolution Performance:
- • Average resolution time: 45ms
+ • Average resolution time: 45 ms
• Cache hit rate: 87%
• Total resolutions: 1,250
-```plaintext
-
-## Customization Workflow
-
-### Complete Customization Example
-
-```bash
-# 1. Create new infrastructure
+
+
+
+# 1. Create new infrastructure
provisioning ws init my-custom-app
# 2. Understand layer system
@@ -76632,7 +69861,7 @@ provisioning tpl apply production-kubernetes my-custom-app
provisioning lyr show my-custom-app
# 6. Customize (edit files)
-provisioning sops workspace/infra/my-custom-app/taskservs/kubernetes.k
+provisioning sops workspace/infra/my-custom-app/taskservs/kubernetes.ncl
# 7. Test layer resolution
provisioning lyr test kubernetes my-custom-app
@@ -76645,41 +69874,32 @@ provisioning val config --infra my-custom-app
provisioning s create --infra my-custom-app --check
provisioning s create --infra my-custom-app
provisioning t create kubernetes --infra my-custom-app
-```plaintext
-
-## Best Practices
-
-### 1. Use Layers Correctly
-
-- **Core Layer**: Only modify for system-wide changes
-- **Workspace Layer**: Use for organization-wide templates
-- **Infrastructure Layer**: Use for project-specific customizations
-
-### 2. Template Organization
-
-```plaintext
-provisioning/workspace/templates/
+
+
+
+
+Core Layer: Only modify for system-wide changes
+Workspace Layer: Use for organization-wide templates
+Infrastructure Layer: Use for project-specific customizations
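The layer precedence above can be sketched in a few lines of shell. This is an illustrative stand-in for the real resolver: the directory names and file contents are hypothetical, but the rule is the same — the most specific layer that contains the file wins (infra > workspace > core).

```shell
# Illustrative sketch of 3-layer resolution; paths are stand-ins
# for the real layer directories.
root=$(mktemp -d)
mkdir -p "$root/core" "$root/workspace" "$root/infra"
echo "core copy" > "$root/core/kubernetes.ncl"
echo "workspace copy" > "$root/workspace/kubernetes.ncl"

resolve() {
  # Try the most specific layer first, fall back to broader ones
  for layer in infra workspace core; do
    if [ -f "$root/$layer/$1" ]; then
      cat "$root/$layer/$1"
      return 0
    fi
  done
  return 1
}

resolve kubernetes.ncl                  # workspace copy (no infra override yet)
echo "infra copy" > "$root/infra/kubernetes.ncl"
resolve kubernetes.ncl                  # infra copy (most specific layer wins)
```

The same lookup order is what `provisioning lyr test <module> <project>` exercises against the real directories.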
+
+
+provisioning/workspace/templates/
├── shared/ # Shared configuration fragments
-│ ├── security-policies.k
-│ ├── network-policies.k
-│ └── monitoring.k
+│ ├── security-policies.ncl
+│ ├── network-policies.ncl
+│ └── monitoring.ncl
├── production/ # Production templates
-│ ├── kubernetes.k
-│ ├── postgres.k
-│ └── redis.k
+│ ├── kubernetes.ncl
+│ ├── postgres.ncl
+│ └── redis.ncl
└── development/ # Development templates
- ├── kubernetes.k
- └── postgres.k
-```plaintext
-
-### 3. Documentation
-
-Document your customizations:
-
-**File: `workspace/infra/my-production/README.md`**
-
-```markdown
-# My Production Infrastructure
+ ├── kubernetes.ncl
+ └── postgres.ncl
+
+
+Document your customizations:
+File: workspace/infra/my-production/README.md
+# My Production Infrastructure
## Customizations
@@ -76689,31 +69909,23 @@ Document your customizations:
## Layer Overrides
-- `taskservs/kubernetes.k`: Control plane count (3 → 5)
-- `taskservs/postgres.k`: Replication mode (async → sync)
-- `network/cilium.k`: Routing mode (tunnel → native)
-```plaintext
-
-### 4. Version Control
-
-Keep templates and configurations in version control:
-
-```bash
-cd provisioning/workspace/templates/
+- `taskservs/kubernetes.ncl`: Control plane count (3 → 5)
+- `taskservs/postgres.ncl`: Replication mode (async → sync)
+- `network/cilium.ncl`: Routing mode (tunnel → native)
+
+
+Keep templates and configurations in version control:
+cd provisioning/workspace/templates/
git add .
git commit -m "Add production Kubernetes template with enhanced security"
cd workspace/infra/my-production/
git add .
git commit -m "Configure production environment for my-production"
-```plaintext
-
-## Troubleshooting Customizations
-
-### Issue: Configuration not applied
-
-```bash
-# Check layer resolution
+
+
+
+# Check layer resolution
provisioning lyr show my-production
# Verify file exists
@@ -76721,22 +69933,16 @@ ls -la workspace/infra/my-production/taskservs/
# Test specific resolution
provisioning lyr test kubernetes my-production
-```plaintext
-
-### Issue: Conflicting configurations
-
-```bash
-# Validate configuration
+
+
+# Validate configuration
provisioning val config --infra my-production
# Show configuration merge result
provisioning show config kubernetes --infra my-production
-```plaintext
-
-### Issue: Template not found
-
-```bash
-# List available templates
+
+
+# List available templates
provisioning tpl list
# Check template path
@@ -76744,19 +69950,16 @@ ls -la provisioning/workspace/templates/
# Refresh template cache
provisioning tpl refresh
-```plaintext
-
-## Next Steps
-
-- **[From Scratch Guide](from-scratch.md)** - Deploy new infrastructure
-- **[Update Guide](update-infrastructure.md)** - Update existing infrastructure
-- **[Workflow Guide](../development/workflow.md)** - Automate with workflows
-- **[KCL Guide](../development/KCL_MODULE_GUIDE.md)** - Learn KCL configuration language
-
-## Quick Reference
-
-```bash
-# Layer system
+
+
+
+
+# Layer system
provisioning lyr explain # Explain layers
provisioning lyr show <project> # Show layer resolution
provisioning lyr test <module> <project> # Test resolution
@@ -76768,20 +69971,299 @@ provisioning tpl list --type <type> # Filter by type
provisioning tpl show <template> # Show template details
provisioning tpl apply <template> <project> # Apply template
provisioning tpl validate <project> # Validate template usage
-```plaintext
-
----
-
-*This guide is part of the provisioning project documentation. Last updated: 2025-09-30*
+
+This guide is part of the provisioning project documentation. Last updated: 2025-09-30
+
+Complete guide to provisioning infrastructure with Nickel + ConfigLoader + TypeDialog
+
+
+
+cd project-provisioning
+
+# Generate solo deployment (Docker Compose, Nginx, Prometheus, OCI Registry)
+nickel export --format json provisioning/schemas/infrastructure/examples-solo-deployment.ncl > /tmp/solo-infra.json
+
+# Verify JSON structure
+jq . /tmp/solo-infra.json
+
+
+# Solo deployment validation
+nu provisioning/platform/scripts/validate-infrastructure.nu --config-dir provisioning/platform/infrastructure
+
+# Output shows validation status for Docker, K8s, Nginx, Prometheus
+
+
+# Export both examples
+nickel export --format json provisioning/schemas/infrastructure/examples-solo-deployment.ncl > /tmp/solo.json
+nickel export --format json provisioning/schemas/infrastructure/examples-enterprise-deployment.ncl > /tmp/enterprise.json
+
+# Compare orchestrator resources
+echo "=== Solo Resources ===" && jq '.docker_compose_services.orchestrator.deploy.resources.limits' /tmp/solo.json
+echo "=== Enterprise Resources ===" && jq '.docker_compose_services.orchestrator.deploy.resources.limits' /tmp/enterprise.json
+
+# Compare prometheus monitoring
+echo "=== Solo Prometheus Jobs ===" && jq '.prometheus_config.scrape_configs | length' /tmp/solo.json
+echo "=== Enterprise Prometheus Jobs ===" && jq '.prometheus_config.scrape_configs | length' /tmp/enterprise.json
+
+
+
+
+| Schema | Purpose | Mode Presets |
+|--------|---------|--------------|
+| docker-compose.ncl | Container orchestration | solo, multiuser, enterprise |
+| kubernetes.ncl | K8s manifest generation | solo, enterprise |
+| nginx.ncl | Reverse proxy & load balancer | solo, enterprise |
+| prometheus.ncl | Metrics & monitoring | solo, multiuser, enterprise |
+| systemd.ncl | System service units | solo, enterprise |
+| oci-registry.ncl | Container registry (Zot/Harbor) | solo, multiuser, enterprise |
+
+
+
+| Example | Type | Services | CPU | Memory |
+|---------|------|----------|-----|--------|
+| examples-solo-deployment.ncl | Dev/Testing | 5 | 1.0 | 1024M |
+| examples-enterprise-deployment.ncl | Production | 6 | 4.0 | 4096M |
+
+
+
+| Script | Purpose | Usage |
+|--------|---------|-------|
+| generate-infrastructure-configs.nu | Generate all configs | `--mode solo --format yaml` |
+| validate-infrastructure.nu | Validate configs | `--config-dir /path` |
+| setup-with-forms.sh | Interactive setup | Auto-detects TypeDialog |
+
+
+
+
+
+Platform Config Layer (Service-Internal):
+Orchestrator port, database host, logging level
+ ↓
+ConfigLoader (Rust)
+ ↓
+Service reads TOML from runtime/generated/
+
+Infrastructure Config Layer (Deployment-External):
+Docker Compose services, Nginx routing, Prometheus scrape jobs
+ ↓
+nickel export → YAML/JSON
+ ↓
+Docker/Kubernetes/Nginx deploys infrastructure
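The split between the two layers can be pictured with a minimal shell sketch. The file names and keys below are illustrative, not the real generated artifacts: the same logical settings feed a service-internal TOML (read by ConfigLoader) and a deployment-facing env file (read by Docker Compose).

```shell
# Hypothetical sketch of the two config layers from one set of values
dir=$(mktemp -d)
port=8080
log_level=info

# Platform config layer (service-internal TOML)
cat > "$dir/orchestrator.demo.toml" <<EOF
[server]
port = $port

[logging]
level = "$log_level"
EOF

# Infrastructure config layer (deployment-external env file)
cat > "$dir/orchestrator.demo.env" <<EOF
ORCHESTRATOR_PORT=$port
LOG_LEVEL=$log_level
EOF

cat "$dir/orchestrator.demo.toml" "$dir/orchestrator.demo.env"
```

In the real system the rendering is done by `nickel export` (TOML and YAML/JSON respectively), not by shell, but the separation of consumers is the same.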
+
+
+1. Choose platform config mode
+ provisioning/platform/config/examples/orchestrator.solo.example.ncl
+ ↓
+2. Generate platform config TOML
+ nickel export --format toml → runtime/generated/orchestrator.solo.toml
+ ↓
+3. Choose infrastructure mode
+ provisioning/schemas/infrastructure/examples-solo-deployment.ncl
+ ↓
+4. Generate infrastructure JSON/YAML
+ nickel export --format json → docker-compose-solo.json
+ ↓
+5. Deploy infrastructure
+ docker-compose -f docker-compose-solo.yaml up
+ ↓
+6. Services start with configs
+ ConfigLoader reads platform config TOML
+ Docker/Nginx read infrastructure configs
+
+
+
+
+Orchestrator: 1.0 CPU, 1024M RAM (1 replica)
+Control Center: 0.5 CPU, 512M RAM
+CoreDNS: 0.25 CPU, 256M RAM
+KMS: 0.5 CPU, 512M RAM
+OCI Registry: 0.5 CPU, 512M RAM (Zot - filesystem)
+─────────────────────────────────────
+Total: 2.75 CPU, 2816M RAM
+Use Case: Development, testing, PoCs
+
+
+Orchestrator: 4.0 CPU, 4096M RAM (3 replicas)
+Control Center: 2.0 CPU, 2048M RAM (HA)
+CoreDNS: 1.0 CPU, 1024M RAM
+KMS: 2.0 CPU, 2048M RAM
+OCI Registry: 2.0 CPU, 2048M RAM (Harbor - S3)
+─────────────────────────────────────
+Total: 11.0 CPU, 11264M RAM (+ replicas)
+Use Case: Production deployments, high availability
+
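The totals can be re-derived from the per-service numbers with a one-liner (awk sums the CPU and memory columns):

```shell
# Sum the solo-mode per-service CPU and memory figures
printf '%s\n' \
  "orchestrator 1.0 1024" \
  "control-center 0.5 512" \
  "coredns 0.25 256" \
  "kms 0.5 512" \
  "oci-registry 0.5 512" |
  awk '{cpu += $2; mem += $3} END {printf "Total: %.2f CPU, %dM RAM\n", cpu, mem}'
# → Total: 2.75 CPU, 2816M RAM
```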
+
+
+
+nickel export --format json provisioning/schemas/infrastructure/examples-solo-deployment.ncl
+
+
+nickel export --format json provisioning/schemas/infrastructure/examples-enterprise-deployment.ncl
+
+
+jq '.docker_compose_services | keys' /tmp/infra.json
+jq '.prometheus_config.scrape_configs | length' /tmp/infra.json
+jq '.oci_registry_config.backend' /tmp/infra.json
+
+
+# All services in solo mode
+jq '.docker_compose_services[] | {name: .name, cpu: .deploy.resources.limits.cpus, memory: .deploy.resources.limits.memory}' /tmp/solo.json
+
+# Just orchestrator
+jq '.docker_compose_services.orchestrator.deploy.resources.limits' /tmp/solo.json
+
+
+# Services count
+jq '.docker_compose_services | length' /tmp/solo.json # 5 services
+jq '.docker_compose_services | length' /tmp/enterprise.json # 6 services
+
+# Prometheus jobs
+jq '.prometheus_config.scrape_configs | length' /tmp/solo.json # 4 jobs
+jq '.prometheus_config.scrape_configs | length' /tmp/enterprise.json # 7 jobs
+
+# Registry backend
+jq -r '.oci_registry_config.backend' /tmp/solo.json # Zot
+jq -r '.oci_registry_config.backend' /tmp/enterprise.json # Harbor
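If `jq` is not installed, a simple field like the registry backend can still be pulled with POSIX tools. This is a stand-in using a tiny inline document, not the real exported file:

```shell
# Stand-in for: jq -r '.oci_registry_config.backend' <file>
demo=$(mktemp)
echo '{"oci_registry_config": {"backend": "Zot"}}' > "$demo"
backend=$(sed -n 's/.*"backend": *"\([^"]*\)".*/\1/p' "$demo")
echo "$backend"   # Zot
```

For anything beyond flat single-line extraction, prefer `jq`; sed patterns like this break on nested or pretty-printed JSON.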
+
+
+
+
+nickel typecheck provisioning/schemas/infrastructure/docker-compose.ncl
+nickel typecheck provisioning/schemas/infrastructure/kubernetes.ncl
+nickel typecheck provisioning/schemas/infrastructure/nginx.ncl
+nickel typecheck provisioning/schemas/infrastructure/prometheus.ncl
+nickel typecheck provisioning/schemas/infrastructure/systemd.ncl
+nickel typecheck provisioning/schemas/infrastructure/oci-registry.ncl
+
+
+nickel typecheck provisioning/schemas/infrastructure/examples-solo-deployment.ncl
+nickel typecheck provisioning/schemas/infrastructure/examples-enterprise-deployment.ncl
+
+
+nickel export --format json provisioning/schemas/infrastructure/examples-solo-deployment.ncl | jq .
+
+
+
+
+nickel export --format toml provisioning/platform/config/examples/orchestrator.solo.example.ncl
+# Output: TOML with [database], [logging], [monitoring], [workspace] sections
+
+
+nickel export --format toml provisioning/platform/config/examples/orchestrator.enterprise.example.ncl
+# Output: TOML with HA, S3, Redis, tracing configuration
+
+
+
+
+provisioning/platform/config/
+├── runtime/generated/*.toml # Auto-generated by ConfigLoader
+├── examples/ # Reference implementations
+│ ├── orchestrator.solo.example.ncl
+│ ├── orchestrator.multiuser.example.ncl
+│ └── orchestrator.enterprise.example.ncl
+└── README.md
+
+
+provisioning/schemas/infrastructure/
+├── docker-compose.ncl # 232 lines
+├── kubernetes.ncl # 376 lines
+├── nginx.ncl # 233 lines
+├── prometheus.ncl # 280 lines
+├── systemd.ncl # 235 lines
+├── oci-registry.ncl # 221 lines
+├── examples-solo-deployment.ncl # 27 lines
+├── examples-enterprise-deployment.ncl # 27 lines
+└── README.md
+
+
+provisioning/platform/.typedialog/provisioning/platform/
+├── forms/ # Ready for auto-generated forms
+├── templates/service-form.template.j2
+├── schemas/ → ../../schemas # Symlink
+├── constraints/constraints.toml # Validation rules
+└── README.md
+
+
+provisioning/platform/scripts/
+├── generate-infrastructure-configs.nu # Generate all configs
+├── validate-infrastructure.nu # Validate with tools
+└── setup-with-forms.sh # Interactive wizard
+
+
+
+| Component | Status | Details |
+|-----------|--------|---------|
+| Infrastructure Schemas | ✅ Complete | 6 schemas, 1,577 lines, all validated |
+| Deployment Examples | ✅ Complete | 2 examples (solo + enterprise), tested |
+| Generation Scripts | ✅ Complete | Auto-generate configs for all modes |
+| Validation Scripts | ✅ Complete | Validate Docker, K8s, Nginx, Prometheus |
+| Platform Config | ✅ Complete | 36 TOML files in runtime/generated/ |
+| TypeDialog Forms | ✅ Ready | Forms + bash wrappers created, awaiting binary |
+| Setup Wizard | ✅ Active | Basic prompts as fallback |
+| Documentation | ✅ Complete | All guides updated with examples |
+
+
+
+
+
+
+Generate infrastructure configs for solo/enterprise modes
+Validate generated configs with format-specific tools
+Use interactive setup wizard with basic Nushell prompts
+TypeDialog forms created and ready (awaiting binary install)
+Deploy with Docker/Kubernetes using generated configs
+
+
+
+Install TypeDialog binary
+TypeDialog forms already created (setup, auth, MFA)
+Bash wrappers handle TTY input (no Nushell stack issues)
+Full nickel-roundtrip workflow will be enabled
+
+
+
+Schemas:
+
+provisioning/schemas/infrastructure/ - All infrastructure schemas
+
+Examples:
+
+provisioning/schemas/infrastructure/examples-solo-deployment.ncl
+provisioning/schemas/infrastructure/examples-enterprise-deployment.ncl
+
+Platform Configs:
+
+provisioning/platform/config/examples/ - Platform config examples
+provisioning/platform/config/runtime/generated/ - Generated TOML files
+
+Scripts:
+
+provisioning/platform/scripts/generate-infrastructure-configs.nu
+provisioning/platform/scripts/validate-infrastructure.nu
+provisioning/platform/scripts/setup-with-forms.sh
+
+Documentation:
+
+provisioning/docs/src/guides/infrastructure-setup.md - This guide
+provisioning/schemas/infrastructure/README.md - Infrastructure schema reference
+provisioning/platform/config/examples/README.md - Platform config guide
+provisioning/platform/.typedialog/README.md - TypeDialog integration guide
+
+
+Version: 1.0.0
+Last Updated: 2025-01-06
+Status: Production Ready
-This guide provides a hands-on walkthrough for developing custom extensions using the KCL package and module loader system.
-
+This guide provides a hands-on walkthrough for developing custom extensions using the Nickel configuration system and module loader.
+
-Core provisioning package installed:
-./provisioning/tools/kcl-packager.nu build --version 1.0.0
-./provisioning/tools/kcl-packager.nu install dist/provisioning-1.0.0.tar.gz
+Nickel installed (1.15.0+):
+# macOS
+brew install nickel
+
+# Linux/Other
+cargo install nickel
+
+# Verify
+nickel --version
@@ -76803,44 +70285,55 @@ provisioning tpl validate <project> # Validate template usage
# Navigate to your new extension
-cd extensions/taskservs/my-app/kcl
+cd extensions/taskservs/my-app
# View generated files
ls -la
-# kcl.mod - Package configuration
-# my-app.k - Main taskserv definition
-# version.k - Version information
-# dependencies.k - Dependencies export
+# main.ncl - Main taskserv definition
+# contracts.ncl - Configuration contract/schema
+# defaults.ncl - Default values
# README.md - Documentation template
-Edit my-app.k to match your service requirements:
-# Update the configuration schema
-schema MyAppConfig:
- """Configuration for My Custom App"""
+Edit main.ncl to match your service requirements:
+# contracts.ncl - Define the schema
+{
+ MyAppConfig = {
+ database_url | String,
+ api_key | String,
+ debug_mode | Bool,
+ cpu_request | String,
+ memory_request | String,
+ port | Number,
+ }
+}
- # Your service-specific settings
- database_url: str
- api_key: str
- debug_mode: bool = False
+# defaults.ncl - Provide sensible defaults
+{
+ defaults = {
+ debug_mode = false,
+ cpu_request = "200m",
+ memory_request = "512Mi",
+ port = 3000,
+ }
+}
- # Customize resource requirements
- cpu_request: str = "200m"
- memory_request: str = "512Mi"
+# main.ncl - Combine and export
+let contracts = import "./contracts.ncl" in
+let defaults = import "./defaults.ncl" in
- # Add your service's port
- port: int = 3000
-
- check:
- len(database_url) > 0, "Database URL required"
- len(api_key) > 0, "API key required"
+{
+ defaults = defaults,
+ make_config | not_exported = fun overrides =>
+ defaults.defaults & overrides,
+}
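The `defaults & overrides` merge in `make_config` behaves roughly like "last value per key wins". A rough shell analogue (illustrative only; Nickel's record merge is more principled and rejects genuinely conflicting non-default values):

```shell
# Rough analogue of a defaults-plus-overrides record merge:
# later key=value pairs override earlier ones.
merge() {
  printf '%s\n' "$@" | awk -F= '{v[$1] = $2} END {for (k in v) print k "=" v[k]}' | sort
}

merge "debug_mode=false" "port=3000" "port=8080"
# → debug_mode=false
# → port=8080
```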
# Test discovery
./provisioning/core/cli/module-loader discover taskservs | grep my-app
-# Validate KCL syntax
-kcl check my-app.k
+# Validate Nickel syntax
+nickel typecheck main.ncl
# Validate extension structure
./provisioning/tools/create-extension.nu validate ../../../my-app
@@ -76856,47 +70349,32 @@ cd /tmp/test-my-app
# Load your extension
../provisioning/core/cli/module-loader load taskservs . [my-app]
-# Configure in servers.k
-cat > servers.k << 'EOF'
-import provisioning.settings as settings
-import provisioning.server as server
-import .taskservs.my-app.my-app as my_app
-
-main_settings: settings.Settings = {
- main_name = "test-my-app"
- runset = {
- wait = True
- output_format = "human"
- output_path = "tmp/deployment"
- inventory_file = "./inventory.yaml"
- use_time = True
- }
-}
-
-test_servers: [server.Server] = [
- {
- hostname = "app-01"
- title = "My App Server"
- user = "admin"
- labels = "env: test"
-
- taskservs = [
- {
- name = "my-app"
- profile = "development"
- }
- ]
- }
-]
+# Configure in servers.ncl
+cat > infra/default/servers.ncl << 'EOF'
+let my_app = import "../../extensions/taskservs/my-app/main.ncl" in
{
- settings = main_settings
- servers = test_servers
+ servers = [
+ {
+ hostname = "app-01",
+ provider = "local",
+ plan = "2xCPU-4GB",
+ zone = "local",
+ storages = [{ total = 25 }],
+ taskservs = [
+ my_app.make_config {
+ database_url = "postgresql://db:5432/myapp",
+ api_key = "secret-key",
+ debug_mode = false,
+ }
+ ],
+ }
+ ]
}
EOF
# Test configuration
-kcl run servers.k
+nickel export infra/default/servers.ncl
@@ -76906,32 +70384,33 @@ kcl run servers.k
--description "Company-specific database service"
# Customize for PostgreSQL with company settings
-cd extensions/taskservs/company-db/kcl
+cd extensions/taskservs/company-db
Edit the schema:
-schema CompanyDbConfig:
- """Company database configuration"""
+# Database service configuration schema
+let CompanyDbConfig = {
+ # Database settings
+ database_name | String = "company_db",
+ postgres_version | String = "13",
- # Database settings
- database_name: str = "company_db"
- postgres_version: str = "13"
+ # Company-specific settings
+ backup_schedule | String = "0 2 * * *",
+ compliance_mode | Bool = true,
+ encryption_enabled | Bool = true,
- # Company-specific settings
- backup_schedule: str = "0 2 * * *"
- compliance_mode: bool = True
- encryption_enabled: bool = True
+ # Connection settings
+ max_connections | Number = 100,
+ shared_buffers | String = "256MB",
- # Connection settings
- max_connections: int = 100
- shared_buffers: str = "256MB"
-
- # Storage settings
- storage_size: str = "100Gi"
- storage_class: str = "fast-ssd"
-
- check:
- len(database_name) > 0, "Database name required"
- max_connections > 0, "Max connections must be positive"
+ # Storage settings
+ storage_size | String = "100Gi",
+ storage_class | String = "fast-ssd",
+} | {
+ # Validation contracts
+ database_name | String,
+ max_connections | std.contract.from_predicate (fun x => x > 0),
+} in
+CompanyDbConfig
# Create monitoring service
@@ -76940,29 +70419,30 @@ cd extensions/taskservs/company-db/kcl
--description "Company-specific monitoring and alerting"
Customize for Prometheus with company dashboards:
-schema CompanyMonitoringConfig:
- """Company monitoring configuration"""
+# Monitoring service configuration
+let AlertManagerConfig = {
+ smtp_server | String,
+ smtp_port | Number = 587,
+ smtp_auth_enabled | Bool = true,
+} in
- # Prometheus settings
- retention_days: int = 30
- storage_size: str = "50Gi"
+let CompanyMonitoringConfig = {
+ # Prometheus settings
+ retention_days | Number = 30,
+ storage_size | String = "50Gi",
- # Company dashboards
- enable_business_metrics: bool = True
- enable_compliance_dashboard: bool = True
+ # Company dashboards
+ enable_business_metrics | Bool = true,
+ enable_compliance_dashboard | Bool = true,
- # Alert routing
- alert_manager_config: AlertManagerConfig
+ # Alert routing
+ alert_manager_config | AlertManagerConfig,
- # Integration settings
- slack_webhook?: str
- email_notifications: [str]
-
-schema AlertManagerConfig:
- """Alert manager configuration"""
- smtp_server: str
- smtp_port: int = 587
- smtp_auth_enabled: bool = True
+ # Integration settings
+ slack_webhook | String | optional,
+ email_notifications | Array String,
+} in
+CompanyMonitoringConfig
# Create legacy integration
@@ -76971,25 +70451,26 @@ schema AlertManagerConfig:
--description "Bridge for legacy system integration"
Customize for mainframe integration:
-schema LegacyBridgeConfig:
- """Legacy system bridge configuration"""
+# Legacy bridge configuration schema
+let LegacyBridgeConfig = {
+ # Legacy system details
+ mainframe_host | String,
+ mainframe_port | Number = 23,
+ connection_type | String = "tn3270", # "tn3270" or "direct"
- # Legacy system details
- mainframe_host: str
- mainframe_port: int = 23
- connection_type: "tn3270" | "direct" = "tn3270"
+ # Data transformation
+ data_format | String = "fixed-width", # "fixed-width", "csv", or "xml"
+ character_encoding | String = "ebcdic",
- # Data transformation
- data_format: "fixed-width" | "csv" | "xml" = "fixed-width"
- character_encoding: str = "ebcdic"
+ # Processing settings
+ batch_size | Number = 1000,
+ poll_interval_seconds | Number = 60,
- # Processing settings
- batch_size: int = 1000
- poll_interval_seconds: int = 60
-
- # Error handling
- retry_attempts: int = 3
- dead_letter_queue_enabled: bool = True
+ # Error handling
+ retry_attempts | Number = 3,
+ dead_letter_queue_enabled | Bool = true,
+} in
+LegacyBridgeConfig
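The `retry_attempts` setting can be pictured as a bounded retry loop. A toy shell illustration (not the bridge's actual logic): the task fails twice, then succeeds on the third call.

```shell
# Toy illustration of retry_attempts: retry a task up to N times
retry_attempts=3
n=0
flaky() { n=$((n + 1)); [ "$n" -ge 3 ]; }  # fails until the third call

attempt=0
ok=1
while [ "$attempt" -lt "$retry_attempts" ]; do
  attempt=$((attempt + 1))
  if flaky; then ok=0; break; fi
done
echo "succeeded after $attempt attempts"   # succeeded after 3 attempts
```

With `dead_letter_queue_enabled = true`, a batch that still fails after the final attempt would be parked for later inspection instead of being dropped.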
@@ -77004,7 +70485,7 @@ schema AlertManagerConfig:
--author "Your Company" \
--description "Complete company infrastructure stack"
-
+
# 1. Create test workspace
mkdir test-workspace && cd test-workspace
@@ -77019,7 +70500,7 @@ mkdir test-workspace && cd test-workspace
../provisioning/core/cli/module-loader validate .
# 4. Test KCL compilation
-kcl run servers.k
+nickel export servers.ncl
# 5. Dry-run deployment
../provisioning/core/cli/provisioning server create --infra . --check
@@ -77035,10 +70516,10 @@ jobs:
steps:
- uses: actions/checkout@v3
- - name: Install KCL
+ - name: Install Nickel
run: |
- curl -fsSL https://kcl-lang.io/script/install-cli.sh | bash
- echo "$HOME/.kcl/bin" >> $GITHUB_PATH
+ curl -fsSL https://releases.nickel-lang.org/install.sh | bash
+ echo "$HOME/.nickel/bin" >> $GITHUB_PATH
- name: Install Nushell
run: |
@@ -77047,7 +70528,7 @@ jobs:
- name: Build core package
run: |
- nu provisioning/tools/kcl-packager.nu build --version test
+ nu provisioning/tools/nickel-packager.nu build --version test
- name: Test extension discovery
run: |
@@ -77055,7 +70536,7 @@ jobs:
- name: Validate extension syntax
run: |
- find extensions -name "*.k" -exec kcl check {} \;
+ find extensions -name "*.ncl" -exec nickel typecheck {} \;
- name: Test workspace creation
run: |
@@ -77063,7 +70544,7 @@ jobs:
nu provisioning/tools/workspace-init.nu test-workspace init
cd test-workspace
nu ../provisioning/core/cli/module-loader load taskservs . [my-app]
- kcl run servers.k
+ nickel export servers.ncl
@@ -77079,7 +70560,7 @@ jobs:
✅ Use semantic versioning
✅ Test compatibility with different versions
-
+
✅ Never hardcode secrets in schemas
✅ Use validation to ensure secure defaults
@@ -77095,7 +70576,7 @@ jobs:
✅ Test extension discovery and loading
-✅ Validate KCL syntax
+✅ Validate Nickel syntax with type checking
✅ Test in multiple environments
✅ Include CI/CD validation
@@ -77104,16 +70585,16 @@ jobs:
Problem: module-loader discover doesn’t find your extension
Solutions:
-Check directory structure: extensions/taskservs/my-service/kcl/
-Verify kcl.mod exists and is valid
-Ensure main .k file has correct name
+Check directory structure: extensions/taskservs/my-service/schemas/
+Verify manifest.toml exists and is valid
+Ensure main .ncl file has correct name
Check file permissions
-
-Problem : KCL syntax errors in your extension
+
+Problem: Nickel type checking errors in your extension
Solutions:
-Use kcl check my-service.k to validate syntax
+Use nickel typecheck my-service.ncl to validate syntax
Check import statements are correct
Verify schema validation rules
Ensure all required fields have defaults or are provided
@@ -77122,31 +70603,31 @@ jobs:
Problem: Extension loads but doesn’t work correctly
Solutions:
-Check generated import files: cat taskservs.k
+Check generated import files: cat taskservs.ncl
Verify dependencies are satisfied
Test with minimal configuration first
Check extension manifest: cat .manifest/taskservs.yaml
-
+
Explore Examples : Look at existing extensions in extensions/ directory
Read Advanced Docs : Study the comprehensive guides:
Join Community : Contribute to the provisioning system
Share Extensions : Publish useful extensions for others
-
+
Documentation: Package and Loader System Guide
Templates: Use ./provisioning/tools/create-extension.nu list-templates
Validation: Use ./provisioning/tools/create-extension.nu validate <path>
Examples: Check provisioning/examples/ directory
-Happy extension development! 🚀
+Happy extension development. 🚀
A comprehensive interactive guide system providing copy-paste ready commands and step-by-step walkthroughs.
@@ -77283,453 +70764,3689 @@ provisioning help howto
customize-infrastructure.md - Customization patterns
-
-MAIN COMPONENTS:
-/Users/Akasha/project-provisioning/
-├── provisioning/core/cli/provisioning # 🔵 Bash entry point
-├── provisioning/core/cli/module-loader # 🔵 Module loader
-│
-├── provisioning/core/nulib/main_provisioning/
-│ ├── commands/workspace.nu # 🟢 Workspace dispatcher
-│ ├── commands/generation.nu # 🟢 Generate dispatcher
-│ └── workspace.nu # 🟢 Wrapper function
-│
-├── provisioning/core/nulib/lib_provisioning/workspace/
-│ ├── mod.nu # 🟡 Exports (main)
-│ ├── init.nu # 🟡 Interactive initialization
-│ ├── commands.nu # 🟡 CLI commands (activate, switch, etc.)
-│ ├── config_commands.nu # 🟡 Configuration
-│ ├── helpers.nu # 🟡 Helper functions
-│ ├── version.nu # 🟡 Versioning
-│ ├── enforcement.nu # 🟡 Rule validation
-│ └── migration.nu # 🟡 Version migration
-│
-├── provisioning/tools/workspace-init.nu # 🟣 MAIN script (966 lines)
-│
-├── provisioning/templates/workspace/
-│ ├── minimal/servers.k # 📄 Base template
-│ ├── full/servers.k # 📄 Full template
-│ └── example/servers.k # 📄 Example template
-│
-└── provisioning/workspace/layers/workspace.layer.k # 📋 KCL layer definition
-
-DOCUMENTATION:
-├── docs/architecture/adr/ADR-003-workspace-isolation.md
-└── WORKSPACE_GENERATION_GUIDE.md # 📖 Complete guide (this one)
-```plaintext
-
-## Quick Workflow: Create a Workspace
-
-```bash
-# 1️⃣ INTERACTIVE
+Updated for Nickel-based workspaces with auto-generated documentation
+
+# Interactive mode (recommended)
provisioning workspace init
-→ Answer the interactive prompts
-→ The full structure is created automatically
-# 2️⃣ NON-INTERACTIVE
-provisioning workspace init ~/my_workspace \
- --infra-name production \
- --template minimal \
- --dep-option workspace-home
+# Non-interactive mode with explicit path
+provisioning workspace init my_workspace /path/to/my_workspace
-# 3️⃣ WITH PRE-LOADED MODULES
-provisioning workspace init ~/my_workspace \
- --infra-name staging \
- --template full \
- --taskservs kubernetes cilium \
- --providers upcloud
-```plaintext
-
-## Initialization Process (7 Steps)
-
-```plaintext
-┌─ STEP 1: VALIDATION
-│ ├─ Workspace name without hyphens
-│ └─ Infrastructure name without hyphens
+# With activation
+provisioning workspace init my_workspace /path/to/my_workspace --activate
+
+
+When you run provisioning workspace init, the system creates:
+my_workspace/
+├── config/
+│ ├── config.ncl # Master Nickel configuration
+│ ├── providers/ # Provider configurations
+│ └── platform/ # Platform service configs
│
-├─ STEP 2: KCL DEPENDENCIES
-│ ├─ workspace-home (default) → .kcl/packages/provisioning
-│ ├─ home-package → ~/.kcl/packages/provisioning
-│ ├─ git-package → Git repository
-│ └─ publish-repo → KCL registry
+├── infra/
+│ └── default/
+│ ├── main.ncl # Infrastructure definition
+│ └── servers.ncl # Server configurations
│
-├─ STEP 3: DIRECTORY STRUCTURE
-│ ├─ workspace/ + Layer 2 dirs (.taskservs, .providers, etc)
-│ └─ infra/<name>/ + Layer 3 dirs
+├── docs/ # ✨ AUTO-GENERATED GUIDES
+│ ├── README.md # Workspace overview
+│ ├── deployment-guide.md # Step-by-step deployment
+│ ├── configuration-guide.md # Configuration reference
+│ └── troubleshooting.md # Common issues & solutions
│
-├─ STEP 4: INSTALL KCL PACKAGE
-│ ├─ Copy provisioning/kcl → destination
-│ └─ Check/update version (check-and-update-package)
-│
-├─ STEP 5: CONFIGURATION
-│ ├─ Create kcl.mod (with dependencies)
-│ ├─ Create .gitignore
-│ └─ Create YAML manifests (empty)
-│
-├─ STEP 6: EXAMPLE FILES
-│ ├─ Copy servers.k template
-│ └─ Generate README.md
-│
-└─ STEP 7: DEFAULT MODULES
- └─ module-loader load taskservs <path> os
-```plaintext
-
-## 3-Layer Structure (Module Resolution)
-
-```plaintext
-Layer 1: Global System (provisioning/extensions/)
- ↑
-Layer 2: Workspace (workspace/.taskservs, .providers, .clusters)
- ↑
-Layer 3: Infrastructure (workspace/infra/<name>/.taskservs, etc)
- ↑ (Override precedence)
-
-Example:
- provisioning/extensions/taskservs/kubernetes/
- ↓ overridden if present
- workspace/.taskservs/kubernetes/
- ↓ overridden if present
- workspace/infra/prod/.taskservs/kubernetes/ ← USED
-```plaintext
-
-## Created Workspace Structure
-
-```plaintext
-workspace_root/
-├── .gitignore
-├── README.md
-├── data/ # Runtime data
-├── tmp/ # Temporary files
-├── resources/ # Resources
-│
-├── .taskservs/ # Layer 2 (workspace-level)
├── .providers/
-├── .clusters/
-├── .manifest/
-│
-└── infra/
- └── <name>/
- ├── kcl.mod # KCL dependencies
- ├── servers.k # Server configuration
- ├── README.md
- │
- ├── .taskservs/ # Layer 3 (infra-specific)
- ├── .providers/
- ├── .clusters/
- ├── .manifest/
- │ ├── taskservs.yaml
- │ ├── providers.yaml
- │ └── clusters.yaml
- │
- ├── taskservs/ # Loaded modules
- ├── overrides/ # Module overrides
- ├── defs/ # Definitions
- └── config/ # Configuration
-```plaintext
+├── .kms/
+├── .provisioning/
+└── workspace.nu # Utility scripts
+
+
+
+{
+ workspace = {
+ name = "my_workspace",
+ path = "/path/to/my_workspace",
+ description = "Workspace: my_workspace",
+ metadata = {
+ owner = "your_username",
+ created = "2025-01-07T19:30:00Z",
+ environment = "development",
+ },
+ },
-## Key Functions in workspace-init.nu
+ providers = {
+ local = {
+ name = "local",
+ enabled = true,
+ workspace = "my_workspace",
+ auth = { interface = "local" },
+ paths = {
+ base = ".providers/local",
+ cache = ".providers/local/cache",
+ state = ".providers/local/state",
+ },
+ },
+ },
+}
+
+
+{
+ workspace_name = "my_workspace",
+ infrastructure = "default",
+ servers = [
+ {
+ hostname = "my-workspace-server-0",
+ provider = "local",
+ plan = "1xCPU-2 GB",
+ zone = "local",
+ storages = [{total = 25}],
+ },
+ ],
+}
+
+
+Every workspace includes 4 auto-generated guides in the docs/ directory:
+| Guide | Content |
+|-------|---------|
+| README.md | Workspace overview, quick start, and structure |
+| deployment-guide.md | Step-by-step deployment for your infrastructure |
+| configuration-guide.md | Configuration options specific to your setup |
+| troubleshooting.md | Solutions for common issues |
+
+
+These guides are customized for your workspace’s:
+
+Configured providers
+Infrastructure definitions
+Server configurations
+Platform services
+
+
+STEP 1: Create directory structure
+ └─ workspace/, config/, infra/default/, etc.
-| Function | Lines | Purpose |
-|---------|--------|----------|
-| `get-dependency-config` | 9-113 | Selects the KCL dependency option |
-| `install-workspace-provisioning` | 116-168 | Installs the package into the workspace |
-| `install-home-provisioning` | 171-222 | Installs the package into home |
-| `check-and-update-package` | 226-252 | Checks the version, updates if needed |
-| `build-distribution-package` | 270-383 | Builds a tar.gz with the package |
-| `update-package-registry` | 386-424 | Updates the packages.json registry |
-| `load-default-modules` | 427-452 | Loads the "os" taskserv by default |
-| `create-workspace-structure` | 577-621 | Creates directories |
-| `create-workspace-config` | 624-715 | Creates kcl.mod, .gitignore, manifests |
-| `create-workspace-examples` | 735-858 | Copies the servers.k template |
-| `main` | 455-574 | Main orchestrating function |
+STEP 2: Generate Nickel configuration
+ ├─ config/config.ncl (master config)
+ └─ infra/default/*.ncl (infrastructure files)
-## Available Templates
+STEP 3: Configure providers
+ └─ Setup local provider (default)
-| Template | Path | Complexity | Servers | Modules | Use cases |
-|----------|------|------------|---------|---------|-----------|
-| **minimal** | `templates/workspace/minimal/` | Low | 1 example | 0 | Learning, simple deployments |
-| **full** | `templates/workspace/full/` | High | Multiple | Yes | Production-ready |
-| **example** | `templates/workspace/example/` | Medium | Some | Examples | Demonstration |
+STEP 4: Initialize metadata
+ └─ .provisioning/metadata.yaml
-## KCL Dependency Configuration
+STEP 5: Activate workspace (if requested)
+ └─ Set as default workspace
-### Option 1: workspace-home (DEFAULT)
+STEP 6: Create .gitignore
+ └─ Workspace-specific ignore rules
-```toml
-[dependencies]
-provisioning = { path = "../../.kcl/packages/provisioning", version = "0.0.1" }
-```plaintext
+STEP 7: ✨ GENERATE DOCUMENTATION
+ ├─ Extract workspace metadata
+ ├─ Render 4 workspace guides
+ └─ Place in docs/ directory
-✓ Self-contained per workspace
-✓ Does not require ~/.kcl/
-✗ Duplicates the package per workspace
+STEP 8: Display summary
+ └─ Show workspace path and documentation location
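Steps 1 through 6 amount to laying down the directory skeleton documented later in this guide. A rough by-hand equivalent (paths mirror the documented layout; the real command also renders configs and the docs/ guides):

```bash
# Sketch: the skeleton produced by steps 1-6 (documentation generation
# from step 7 and the summary from step 8 are omitted here).
ws=demo_workspace
mkdir -p "$ws"/config/providers "$ws"/config/platform \
         "$ws"/infra/default "$ws"/docs \
         "$ws"/.providers "$ws"/.kms "$ws"/.provisioning
touch "$ws"/config/config.ncl "$ws"/infra/default/main.ncl \
      "$ws"/infra/default/servers.ncl "$ws"/workspace.nu
find "$ws" -maxdepth 1 | sort
```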
+
+
+
+# Create interactive workspace
+provisioning workspace init
-### Option 2: home-package
+# Create with explicit path and activate
+provisioning workspace init my_workspace /path/to/workspace --activate
-```toml
-[dependencies]
-provisioning = { path = "~/.kcl/packages/provisioning", version = "0.0.1" }
-```plaintext
+# List all workspaces
+provisioning workspace list
-✓ Shared across workspaces
-✓ Saves disk space
-✗ Requires a global ~/.kcl/
+# Activate workspace
+provisioning workspace activate my_workspace
-### Option 3: git-package
-
-```toml
-[dependencies]
-provisioning = { git = "https://github.com/...", version = "0.0.1" }
-```plaintext
-
-✓ Always the latest version
-✗ Requires connectivity
-
-### Option 4: publish-repo
-
-```toml
-[dependencies]
-provisioning = { version = "0.0.1" } # default KCL registry
-```plaintext
-
-✓ Official, maintained
-✗ Requires a published version
-
-## CLI Commands
-
-### Initialization
-
-```bash
-provisioning workspace init [path] # Interactive
-provisioning ws init # Alias
-provisioning workspace init ~/ws --template=full # Non-interactive
-```plaintext
-
-### Management
-
-```bash
-provisioning workspace list # List registered workspaces
-provisioning workspace activate <name> # Activate
-provisioning workspace switch <name> # Alias for activate
-provisioning workspace register <name> <path> # Register an existing workspace
-provisioning workspace remove <name> # Remove from the registry
-```plaintext
-
-### Information
-
-```bash
-provisioning workspace active # Show active workspace
-provisioning workspace version <name> # Show version
-provisioning workspace preferences # Show preferences
-```plaintext
-
-### Maintenance
-
-```bash
-provisioning workspace migrate <name> # Migrate to a new version
-provisioning workspace check-compatibility # Validate compatibility
-provisioning workspace list-backups # List backups
-provisioning workspace restore-backup <path> # Restore from a backup
-```plaintext
-
-## Important Validations
-
-### Names
-
-❌ Not allowed: `my-workspace`, `prod-infra` (hyphens)
-✅ Allowed: `my_workspace`, `prod_infra` (underscores)
-
-**Reason**: Hyphens break KCL module resolution
-
-### Required Structure
-
-```plaintext
-✅ .taskservs/ .providers/ .clusters/ .manifest/ ← Layer 2 (workspace)
-✅ kcl.mod servers.k ← Infrastructure files
-✅ .taskservs/ .providers/ .clusters/ .manifest/ ← Layer 3 (infra)
-```plaintext
-
-### KCL Dependencies
-
-```plaintext
-✅ Package version matches between source and target
-✅ provisioning/kcl accessible (locally or via env var)
-✅ Dependency path resolves correctly
-```plaintext
-
-## Typical Flow: Create and Deploy
-
-```bash
-# 1. CREATE THE WORKSPACE
-provisioning workspace init ~/production \
- --infra-name main \
- --template minimal
-
-# 2. RESULT
-~/production/
-├── infra/main/servers.k ← Edit here
-├── infra/main/kcl.mod
-└── ... (full structure)
-
-# 3. LOAD ADDITIONAL MODULES
-cd ~/production/infra/main
-provisioning dt # Discover
-provisioning mod load taskservs . kubernetes cilium
-provisioning mod load providers . upcloud
-
-# 4. CONFIGURE (EDITOR)
-# Edit infra/main/servers.k with:
-# - import taskservs.kubernetes as k8s
-# - import providers.upcloud as upcloud
-# - Define servers
-# - Configure resources
-
-# 5. VALIDATE
-kcl run servers.k
-
-# 6. DEPLOY
-provisioning s create --infra main --check # Dry-run
-provisioning s create --infra main # Real
-
-# 7. MANAGE
-provisioning workspace switch ~/production
+# Show active workspace
provisioning workspace active
-provisioning workspace version production
-```plaintext
+
+
+# Validate Nickel configuration
+nickel typecheck config/config.ncl
+nickel typecheck infra/default/main.ncl
-## Generated Files (Examples)
+# Validate with provisioning system
+provisioning validate config
+
+
+# Dry-run (check mode)
+provisioning -c server create
-### servers.k (minimal template)
+# Actual deployment
+provisioning server create
-```kcl
-import provisioning.settings as settings
-import provisioning.server as server
+# List servers
+provisioning server list
+
+
+
+my_workspace/
+├── config/
+│ ├── config.ncl # Master configuration
+│ ├── providers/ # Provider configs
+│ └── platform/ # Platform configs
+│
+├── infra/
+│ └── default/
+│ ├── main.ncl # Infrastructure definition
+│ └── servers.ncl # Server definitions
+│
+├── docs/ # AUTO-GENERATED GUIDES
+│ ├── README.md # Workspace overview
+│ ├── deployment-guide.md # Step-by-step deployment
+│ ├── configuration-guide.md # Configuration reference
+│ └── troubleshooting.md # Common issues & solutions
+│
+├── .providers/ # Provider state & cache
+├── .kms/ # KMS data
+├── .provisioning/ # Workspace metadata
+└── workspace.nu # Utility scripts
+
+
+
+# Master workspace configuration
+vim config/config.ncl
-main_settings: settings.Settings = {
- main_name = "minimal-infra"
- main_title = "Minimal Infrastructure"
- settings_path = "../../data/settings.yaml"
- defaults_provs_dirpath = "./defs"
- # ... more config
+# Infrastructure definition
+vim infra/default/main.ncl
+
+# Server definitions
+vim infra/default/servers.ncl
+
+
+# Create new infrastructure environment
+mkdir -p infra/production infra/staging
+
+# Copy template files
+cp infra/default/main.ncl infra/production/main.ncl
+cp infra/default/servers.ncl infra/production/servers.ncl
+
+# Edit for your needs
+vim infra/production/servers.ncl
+
+
+Update config/config.ncl to enable cloud providers:
+providers = {
+ upcloud = {
+ name = "upcloud",
+ enabled = true, # Set to true
+ workspace = "my_workspace",
+ auth = { interface = "API" },
+ paths = {
+ base = ".providers/upcloud",
+ cache = ".providers/upcloud/cache",
+ state = ".providers/upcloud/state",
+ },
+ api = {
+ url = "https://api.upcloud.com/1.3",
+ timeout = 30,
+ },
+ },
+}
+
+
+
+Read auto-generated guides in docs/
+Customize configuration in Nickel files
+Validate with: nickel typecheck config/config.ncl
+Test deployment with dry-run mode: provisioning -c server create
+Deploy infrastructure when ready
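The validate, dry-run, deploy sequence above can be collected into a small pre-flight script. This sketch only prints the commands it would run, so it is safe to execute anywhere; drop the echo to run them for real:

```bash
# Pre-flight sketch: the validation pipeline from the next steps above.
steps=(
  "nickel typecheck config/config.ncl"
  "nickel typecheck infra/default/main.ncl"
  "provisioning validate config"
  "provisioning -c server create"
)
for s in "${steps[@]}"; do
  echo "would run: $s"   # replace echo with eval/direct invocation to run
done
```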
+
+
+
+
+
+This guide covers strategies and patterns for deploying infrastructure across multiple cloud providers using the provisioning system. Multi-provider deployments enable high availability, disaster recovery, cost optimization, compliance with regional requirements, and vendor lock-in avoidance.
+
+
+
+The provisioning system provides a provider-agnostic abstraction layer that enables seamless deployment across Hetzner, UpCloud, AWS, and DigitalOcean. Each provider implements a standard interface with compute, storage, networking, and management capabilities.
+
+| Provider | Compute | Storage | Load Balancer | Managed Services | Network Isolation |
+|----------|---------|---------|---------------|------------------|-------------------|
+| Hetzner | Cloud Servers | Volumes | Load Balancer | No | vSwitch/Private Networks |
+| UpCloud | Servers | Storage | Load Balancer | No | VLAN |
+| AWS | EC2 | EBS/S3 | ALB/NLB | RDS, ElastiCache, etc. | VPC/Security Groups |
+| DigitalOcean | Droplets | Volumes | Load Balancer | Managed DB | VPC/Firewall |
+
+
+
+
+Provider Abstraction: Consistent interface across all providers hides provider-specific details
+Workspace: Defines infrastructure components, resource allocation, and provider configuration
+Multi-Provider Workspace: A single workspace that spans multiple providers with coordinated deployment
+Batch Workflows: Orchestrate deployment across providers with dependency tracking and rollback capability
+
+
+
+Different providers excel at different workloads:
+
+Compute-Heavy: Hetzner offers the best price/performance ratio for compute-intensive workloads
+Managed Services: AWS RDS or DigitalOcean Managed Databases are often more cost-effective than self-managed
+Storage-Intensive: AWS S3 or Google Cloud Storage for large object storage requirements
+Edge Locations: DigitalOcean’s CDN and global regions for geographically distributed serving
+
+Example: Store application data on Hetzner compute nodes (cost-effective), the analytics database in AWS RDS (managed), and backups in DigitalOcean Spaces (affordable object storage).
+
+
+Active-Active: Run identical infrastructure in multiple providers for load balancing
+Active-Standby: Primary on Provider A, warm standby on Provider B with automated failover
+Multi-Region: Distribute across geographic regions within and between providers
+Time-to-Recovery: Multiple providers reduce dependency on a single provider’s infrastructure
+
+
+
+GDPR: European data must stay with EU providers (Hetzner DE, UpCloud FI/SE)
+Regional Requirements: Some compliance frameworks require data in specific countries
+Provider Certifications: Different providers hold different compliance certifications (SOC2, ISO 27001, HIPAA)
+
+Example: Production data in Hetzner (EU-based), analytics in AWS (GDPR-compliant regions), backups in DigitalOcean.
+
+
+Portability: A multi-provider setup enables migration without a complete outage
+Flexibility: Switch providers for cost negotiation or service issues
+Resilience: Not dependent on a single provider’s reliability or pricing changes
+
+
+
+Geographic Distribution: Serve users from the nearest provider
+Provider-Specific Performance: Some providers have better infrastructure in specific regions
+Regional Redundancy: Maintain service availability during provider-wide outages
+
+
+
+
+Compute-Intensive (batch processing, ML, heavy calculations)
+
+Recommended: Hetzner (best price), UpCloud (mid-range)
+Avoid: AWS on-demand (unless spot instances), DigitalOcean premium tier
+
+Web/Application (stateless serving, APIs)
+
+Recommended: DigitalOcean (simple management), Hetzner (cost), AWS (multi-region)
+Consider: Geographic proximity to users
+
+Stateful/Database (databases, caches, queues)
+
+Recommended: AWS RDS/ElastiCache, DigitalOcean Managed DB
+Alternative: Self-managed on any provider with replication
+
+Storage/File Serving (object storage, backups)
+
+Recommended: AWS S3, DigitalOcean Spaces, Hetzner Object Storage
+Consider: Cost per GB, access patterns, bandwidth
+
+
+North America
+
+AWS: Multiple regions (us-east-1, us-west-2, etc)
+DigitalOcean: NYC, SFO
+Hetzner: Ashburn, Virginia
+UpCloud: Multiple US locations
+
+Europe
+
+Hetzner: Falkenstein (DE), Nuremberg (DE), Helsinki (FI)
+UpCloud: Multiple EU locations
+AWS: eu-west-1 (IE), eu-central-1 (DE), etc
+DigitalOcean: London, Frankfurt, Amsterdam
+
+Asia
+
+AWS: ap-southeast-1 (SG), ap-northeast-1 (Tokyo)
+DigitalOcean: Singapore, Bangalore
+Hetzner: Limited
+UpCloud: Singapore
+
+Recommendation for Multi-Region: Combine Hetzner (EU backbone), DigitalOcean (global presence), and AWS (comprehensive regions).
+
+
+| Provider | Price | Notes |
+|----------|-------|-------|
+| Hetzner | €6.90 (~$7.50) | Cheapest, good performance |
+| DigitalOcean | $24 | Premium pricing, simplicity |
+| UpCloud | $30 | Mid-range, good support |
+| AWS t3.medium | $60+ | On-demand pricing (spot: $18-25) |
+
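To put the comparison in perspective, here is the rough monthly bill for three servers of this class per provider (a sketch using the list prices above; the Hetzner figure uses the ~$7.50 USD conversion):

```bash
# Rough monthly cost for three ~2 vCPU / 4 GB servers per provider,
# using the list prices from the table above.
awk 'BEGIN {
  printf "Hetzner:          $%.2f\n", 3 * 7.50
  printf "DigitalOcean:     $%.2f\n", 3 * 24
  printf "UpCloud:          $%.2f\n", 3 * 30
  printf "AWS (on-demand):  $%.2f\n", 3 * 60
}'
```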
+
+
+Minimal Budget (<$50/month)
+
+Single Hetzner server: €6.90
+Alternative: DigitalOcean $24 + DigitalOcean Spaces for backup
+
+Small Team ($100-500/month)
+
+Hetzner primary (€50-150), DigitalOcean backup ($60-80)
+Good HA coverage with cost control
+
+Enterprise ($1000+/month)
+
+AWS primary (managed services, compliance)
+Hetzner backup (cost-effective)
+DigitalOcean edge locations (CDN)
+
+
+| Provider | GDPR | SOC 2 | ISO 27001 | HIPAA | FIPS | PCI-DSS |
+|----------|------|-------|-----------|-------|------|---------|
+| Hetzner | ✓ | ✓ | ✓ | ✗ | ✗ | ✓ |
+| UpCloud | ✓ | ✓ | ✓ | ✗ | ✗ | ✓ |
+| AWS | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
+| DigitalOcean | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
+
+
+Compliance Selection Matrix
+
+GDPR Only: Hetzner, UpCloud (EU-based), all AWS/DO EU regions
+HIPAA Required: AWS, DigitalOcean (DigitalOcean requires a BAA)
+FIPS Required: AWS (all regions)
+PCI-DSS: All providers support it; AWS is the most comprehensive
+
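Selection against the matrix is easy to script. A sketch that filters providers by one requirement (the yes/no values are taken from the matrix above):

```bash
# Which providers from the matrix above support HIPAA?
providers="Hetzner:no UpCloud:no AWS:yes DigitalOcean:yes"
for entry in $providers; do
  name=${entry%%:*}    # text before the colon
  hipaa=${entry##*:}   # text after the colon
  [ "$hipaa" = "yes" ] && echo "$name supports HIPAA"
done
```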
+
+
+provisioning/examples/workspaces/my-multi-provider-app/
+├── workspace.ncl # Infrastructure definition
+├── config.toml # Provider credentials, regions, defaults
+├── README.md # Setup and deployment instructions
+└── deploy.nu # Deployment orchestration script
+
+
+
+Each provider requires authentication via environment variables:
+# Hetzner
+export HCLOUD_TOKEN="your-hetzner-api-token"
+
+# UpCloud
+export UPCLOUD_USERNAME="your-upcloud-username"
+export UPCLOUD_PASSWORD="your-upcloud-password"
+
+# AWS
+export AWS_ACCESS_KEY_ID="your-access-key"
+export AWS_SECRET_ACCESS_KEY="your-secret-key"
+export AWS_DEFAULT_REGION="us-east-1"
+
+# DigitalOcean
+export DIGITALOCEAN_TOKEN="your-do-api-token"
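Before deploying it is worth failing fast on missing credentials. A bash sketch (it unsets everything first so the demo output is deterministic; omit the unset lines in real use):

```bash
# Report which provider credential variables are still unset.
unset HCLOUD_TOKEN UPCLOUD_USERNAME UPCLOUD_PASSWORD \
      AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY DIGITALOCEAN_TOKEN
export HCLOUD_TOKEN="demo-token"   # pretend only Hetzner is configured

missing=0
for v in HCLOUD_TOKEN UPCLOUD_USERNAME UPCLOUD_PASSWORD \
         AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY DIGITALOCEAN_TOKEN; do
  if [ -z "${!v:-}" ]; then        # bash indirect expansion
    echo "missing: $v"
    missing=$((missing + 1))
  fi
done
echo "missing credentials: $missing"
```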
+
+
+[providers]
+
+[providers.hetzner]
+enabled = true
+api_token_env = "HCLOUD_TOKEN"
+default_region = "nbg1"
+default_datacenter = "nbg1-dc8"
+
+[providers.upcloud]
+enabled = true
+username_env = "UPCLOUD_USERNAME"
+password_env = "UPCLOUD_PASSWORD"
+default_region = "fi-hel1"
+
+[providers.aws]
+enabled = true
+region = "us-east-1"
+access_key_env = "AWS_ACCESS_KEY_ID"
+secret_key_env = "AWS_SECRET_ACCESS_KEY"
+
+[providers.digitalocean]
+enabled = true
+token_env = "DIGITALOCEAN_TOKEN"
+default_region = "nyc3"
+
+[workspace]
+name = "my-multi-provider-app"
+environment = "production"
+owner = "platform-team"
+
+
+Nickel workspace with multiple providers:
+# workspace.ncl - Multi-provider infrastructure definition
+
+let hetzner = import "../../extensions/providers/hetzner/nickel/main.ncl" in
+let upcloud = import "../../extensions/providers/upcloud/nickel/main.ncl" in
+let aws = import "../../extensions/providers/aws/nickel/main.ncl" in
+let digitalocean = import "../../extensions/providers/digitalocean/nickel/main.ncl" in
+
+{
+ workspace_name = "multi-provider-app",
+ description = "Multi-provider infrastructure example",
+
+ # Provider routing configuration
+ providers = {
+ primary_compute = "hetzner",
+ secondary_compute = "digitalocean",
+ database = "aws",
+ backup = "upcloud"
+ },
+
+ # Infrastructure defined per provider
+ infrastructure = {
+ # Hetzner: Primary compute tier
+ primary_servers = hetzner.Server & {
+ name = "primary-server",
+ server_type = "cx31",
+ image = "ubuntu-22.04",
+ location = "nbg1",
+ count = 3,
+ ssh_keys = ["your-ssh-key"],
+ firewalls = ["primary-fw"]
+ },
+
+ # DigitalOcean: Secondary compute tier
+ secondary_servers = digitalocean.Droplet & {
+ name = "secondary-droplet",
+ size = "s-2vcpu-4gb",
+ image = "ubuntu-22-04-x64",
+ region = "nyc3",
+ count = 2
+ },
+
+ # AWS: Managed database
+ database = aws.RDS & {
+ identifier = "prod-db",
+ engine = "postgresql",
+ engine_version = "14.6",
+ instance_class = "db.t3.medium",
+ allocated_storage = 100
+ },
+
+ # UpCloud: Backup storage
+ backup_storage = upcloud.Storage & {
+ name = "backup-volume",
+ size = 500,
+ location = "fi-hel1"
+ }
+ }
+}
+
+
+
+Scenario: Cost-effective compute with specialized managed storage.
+Example: Use Hetzner for compute (cheap), AWS S3 for object storage (reliable), and a managed database on AWS RDS.
+
+
+Compute optimization (Hetzner’s low cost)
+Storage specialization (AWS S3 reliability and features)
+Separation of concerns (different performance tuning)
+
+
+ ┌─────────────────────┐
+ │ Client Requests │
+ └──────────┬──────────┘
+ │
+ ┌──────────────┼──────────────┐
+ │ │ │
+ ┌──────▼─────┐ ┌────▼─────┐ ┌───▼──────┐
+ │ Hetzner │ │ AWS │ │ AWS S3 │
+ │ Servers │ │ RDS │ │ Storage │
+ │ (Compute) │ │(Database)│ │(Backups) │
+ └────────────┘ └──────────┘ └──────────┘
+
+
+let hetzner = import "../../extensions/providers/hetzner/nickel/main.ncl" in
+let aws = import "../../extensions/providers/aws/nickel/main.ncl" in
+
+{
+ compute = hetzner.Server & {
+ name = "app-server",
+ server_type = "cpx21", # 4 vCPU, 8 GB RAM
+ image = "ubuntu-22.04",
+ location = "nbg1",
+ count = 2,
+ volumes = [
+ {
+ size = 100,
+ format = "ext4",
+ mount = "/app"
+ }
+ ]
+ },
+
+ database = aws.RDS & {
+ identifier = "app-database",
+ engine = "postgresql",
+ instance_class = "db.t3.medium",
+ allocated_storage = 100
+ },
+
+ backup_bucket = aws.S3 & {
+ bucket = "app-backups",
+ region = "us-east-1",
+ versioning = true,
+ lifecycle_rules = [
+ {
+ id = "delete-old-backups",
+ days = 90,
+ action = "delete"
+ }
+ ]
+ }
+}
+
+
+Hetzner servers connect to AWS RDS via VPN or public endpoint:
+# Network setup script
+def setup_database_connection [] {
+ let hetzner_servers = (hetzner_list_servers)
+ let db_endpoint = (aws_get_rds_endpoint "app-database")
+
+ # Install PostgreSQL client
+ $hetzner_servers | each {|server|
+ ssh $server.ip "apt-get install -y postgresql-client"
+ ssh $server.ip $"echo 'DB_HOST=($db_endpoint)' >> /app/.env"
+ }
+}
+
+
+Monthly estimate:
+
+Hetzner cx31 × 2: €13.80 (~$15)
+AWS RDS t3.medium: $60
+AWS S3 (100 GB): $2.30
+Total: ~$77/month (vs $120+ for all-AWS)
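A one-liner confirms the arithmetic from the breakdown above:

```bash
# Hetzner compute (~$15) + AWS RDS ($60) + AWS S3 ($2.30).
awk 'BEGIN { printf "total: $%.2f/month\n", 15 + 60 + 2.30 }'
```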
+
+
+Scenario: Active-standby deployment for disaster recovery.
+Example: DigitalOcean primary datacenter, Hetzner warm standby with automated failover.
+
+
+Disaster recovery capability
+Zero data loss (with replication)
+Tested failover procedure
+Cost-effective backup (warm standby vs hot standby)
+
+
+ Primary (DigitalOcean NYC) Backup (Hetzner DE)
+ ┌──────────────────────┐ ┌─────────────────┐
+ │ DigitalOcean LB │◄────────►│ HAProxy Monitor │
+ └──────────┬───────────┘ └────────┬────────┘
+ │ │
+ ┌──────────┴──────────┐ │
+ │ │ │
+ ┌───▼───┐ ┌───▼───┐ ┌──▼──┐ ┌──────┐ ┌──▼───┐
+ │ APP 1 │ │ APP 2 │ │ DB │ │ ELK │ │ WARM │
+│PRIMARY│ │PRIMARY│ │REPL │ │MON │ │STANDBY│
+ └───────┘ └───────┘ └─────┘ └──────┘ └──────┘
+ │ │ ▲
+ └─────────────────────┼────────────────────┘
+ Async Replication
+
+
+def monitor_primary_health [do_region, hetzner_region] {
+ loop {
+ let health = (do_health_check $do_region)
+
+ if $health.status == "degraded" or $health.status == "down" {
+ print "Primary degraded, triggering failover"
+ trigger_failover $hetzner_region
+ break
+ }
+
+ sleep 30sec
+ }
}
-example_servers: [server.Server] = [
- {
- hostname = "server-01"
- title = "Basic Server"
- network_public_ipv4 = True
- user = "admin"
- # ... more config
- }
-]
+def trigger_failover [backup_region] {
+ # 1. Promote backup database
+ promote_replica_to_primary $backup_region
-{ settings = main_settings, servers = example_servers }
-```plaintext
+ # 2. Update DNS to point to backup
+ update_dns_to_backup $backup_region
-### kcl.mod (auto-generated)
+ # 3. Scale up backup servers
+ scale_servers $backup_region 3
-```toml
-[package]
-name = "production"
-edition = "v0.11.3"
-version = "0.0.1"
-
-[dependencies]
-provisioning = { path = "../../.kcl/packages/provisioning", version = "0.0.1" }
-```plaintext
-
-### .manifest/taskservs.yaml (generated empty)
-
-```yaml
-loaded_taskservs: []
-loaded_providers: []
-loaded_clusters: []
-last_updated: "2025-11-13 10:30:00"
-```plaintext
-
-## Quick Troubleshooting
-
-| Problem | Solution |
-|---------|----------|
-| **Workspace exists** | Use `--overwrite` or change the name |
-| **Module not found** | Run `provisioning dt` and load manually |
-| **KCL import error** | Verify the module was loaded with `provisioning mod list` |
-| **Version mismatch** | Run `workspace migrate` to update |
-| **No active workspace** | `provisioning workspace activate <name>` |
-| **Hyphens in name** | Change to underscores: `my-ws` → `my_ws` |
-
-## Configuration File Locations
-
-**macOS**:
-
-```plaintext
-~/Library/Application Support/provisioning/
-├── workspaces.yaml # Workspace registry
-├── default-workspace.yaml # Active workspace
-├── user-preferences.yaml # User preferences
-└── ws_<name>.yaml # Per-workspace context
-```plaintext
-
-**Linux**:
-
-```plaintext
-~/.config/provisioning/
-├── workspaces.yaml
-├── default-workspace.yaml
-├── user-preferences.yaml
-└── ws_<name>.yaml
-```plaintext
-
-## Important Environment Variables
-
-```bash
-PROVISIONING # System base path
-PROVISIONING_DEBUG # Enable debug mode
-PROVISIONING_MODULE # Specifies the active module
-PROVISIONING_WORKSPACE # Current workspace
-PROVISIONING_HOME # Home configuration dir
-```plaintext
-
-## Next Steps After Creating a Workspace
-
-```plaintext
-✅ Workspace created at ~/my_workspace
-✅ Infrastructure in infra/main
-✅ Template applied
-
-📋 NEXT STEPS:
-
-1. Navigate:
- cd ~/my_workspace/infra/main
-
-2. Discover available modules:
- provisioning dt
-
-3. Load the required modules:
- provisioning mod load taskservs . kubernetes cilium
- provisioning mod load providers . upcloud
-
-4. Edit servers.k:
- - Add taskserv/provider imports
- - Define servers
- - Configure resources
-
-5. Validate:
- kcl run servers.k
-
-6. Deploy:
- provisioning s create --infra main --check
- provisioning s create --infra main
-```plaintext
-
-## References
-
-- **Complete Guide**: WORKSPACE_GENERATION_GUIDE.md (1144 lines)
-- **Architecture ADR**: docs/architecture/adr/ADR-003-workspace-isolation.md
-- **Module System**: lib_provisioning/workspace/mod.nu
-- **Initialization**: provisioning/tools/workspace-init.nu (966 lines)
-- **KCL Templates**: provisioning/templates/workspace/
+ # 4. Verify traffic flowing
+ wait_for_traffic_migration $backup_region 120sec
+}
+
+let digitalocean = import "../../extensions/providers/digitalocean/nickel/main.ncl" in
+let hetzner = import "../../extensions/providers/hetzner/nickel/main.ncl" in
+
+{
+ # Primary: DigitalOcean
+ primary = {
+ region = "nyc3",
+ provider = "digitalocean",
+
+ servers = digitalocean.Droplet & {
+ name = "primary-app",
+ size = "s-2vcpu-4gb",
+ count = 3,
+ region = "nyc3",
+ firewall = {
+ inbound = [
+ { protocol = "tcp", ports = "80", sources = ["0.0.0.0/0"] },
+ { protocol = "tcp", ports = "443", sources = ["0.0.0.0/0"] },
+ { protocol = "tcp", ports = "5432", sources = ["10.0.0.0/8"] }
+ ]
+ }
+ },
+
+ database = digitalocean.Database & {
+ name = "primary-db",
+ engine = "pg",
+ version = "14",
+ size = "db-s-2vcpu-4gb",
+ region = "nyc3"
+ }
+ },
+
+ # Backup: Hetzner (warm standby)
+ backup = {
+ region = "nbg1",
+ provider = "hetzner",
+
+ servers = hetzner.Server & {
+ name = "backup-app",
+ server_type = "cx31",
+ count = 1, # Minimal for cost
+ location = "nbg1",
+ automount = true
+ },
+
+ # Replica database (read-only until promoted)
+ database_replica = hetzner.Volume & {
+ name = "db-replica",
+ size = 100,
+ location = "nbg1"
+ }
+ },
+
+ replication = {
+ type = "async",
+ primary_to_backup = true,
+ recovery_point_objective = 300 # 5 minutes
+ }
+}
+
+
+# Test failover without affecting production
+def test_failover_dry_run [config] {
+ print "Starting failover dry-run test..."
+
+ # 1. Snapshot primary database
+ let snapshot = (do_create_db_snapshot "primary-db")
+
+ # 2. Create temporary replica from snapshot
+ let temp_replica = (hetzner_create_from_snapshot $snapshot)
+
+ # 3. Run traffic tests against temp replica
+ let test_results = (run_integration_tests $temp_replica.ip)
+
+ # 4. Verify database consistency
+ let consistency = (verify_db_consistency $temp_replica.ip)
+
+ # 5. Cleanup temp resources
+ hetzner_destroy $temp_replica.id
+ do_delete_snapshot $snapshot.id
+
+ {
+ status: "passed",
+ results: $test_results,
+ consistency_check: $consistency
+ }
+}
+
+
+Scenario: Distributed deployment across 3+ geographic regions with global load balancing.
+Example: DigitalOcean US (NYC), Hetzner EU (Germany), AWS Asia (Singapore) with DNS-based failover.
+
+
+Geographic distribution for low latency
+Protection against regional outages
+Compliance with data residency (data stays in region)
+Load distribution across regions
+
+
+ ┌─────────────────┐
+ │ Global DNS │
+ │ (Geofencing) │
+ └────────┬────────┘
+ ┌────────┴────────┐
+ │ │
+ ┌──────────▼──────┐ ┌──────▼─────────┐ ┌─────────────┐
+ │ DigitalOcean │ │ Hetzner │ │ AWS │
+ │ US/NYC Region │ │ EU/Germany │ │ Asia/SG │
+ ├─────────────────┤ ├────────────────┤ ├─────────────┤
+ │ Droplets (3) │ │ Servers (3) │ │ EC2 (3) │
+ │ LB │ │ HAProxy │ │ ALB │
+ │ DB (Primary) │ │ DB (Replica) │ │ DB (Replica)│
+ └─────────────────┘ └────────────────┘ └─────────────┘
+ │ │ │
+ └─────────────────┴────────────────────┘
+ Cross-Region Sync
+
+
+def setup_global_dns [] {
+ # Using Route53 or Cloudflare for DNS failover
+ let regions = [
+ { name: "us-nyc", provider: "digitalocean", endpoint: "us.app.example.com" },
+ { name: "eu-de", provider: "hetzner", endpoint: "eu.app.example.com" },
+ { name: "asia-sg", provider: "aws", endpoint: "asia.app.example.com" }
+ ]
+
+ # Create health checks
+ $regions | each {|region|
+ configure_health_check $region.name $region.endpoint
+ }
+
+ # Setup failover policy
+ # Primary: US, Secondary: EU, Tertiary: Asia
+ configure_dns_failover {
+ primary: "us-nyc",
+ secondary: "eu-de",
+ tertiary: "asia-sg"
+ }
+}
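The failover policy above reduces to "route to the first healthy region in priority order". A bash sketch with hard-coded demo health states (real states would come from the health checks configured above):

```bash
# Pick the first healthy region in failover priority order.
priority="us-nyc eu-de asia-sg"
health_us_nyc=down   # demo values; normally fed by the health checks
health_eu_de=up
health_asia_sg=up

for region in $priority; do
  var="health_${region//-/_}"      # map region name to its demo variable
  if [ "${!var}" = "up" ]; then    # bash indirect expansion
    echo "routing traffic to: $region"
    break
  fi
done
```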
+
+
+{
+ regions = {
+ us_east = {
+ provider = "digitalocean",
+ region = "nyc3",
+
+ servers = digitalocean.Droplet & {
+ name = "us-app",
+ size = "s-2vcpu-4gb",
+ count = 3,
+ region = "nyc3"
+ },
+
+ database = digitalocean.Database & {
+ name = "us-db",
+ engine = "pg",
+ size = "db-s-2vcpu-4gb",
+ region = "nyc3",
+ replica_regions = ["eu-de", "asia-sg"]
+ }
+ },
+
+ eu_central = {
+ provider = "hetzner",
+ region = "nbg1",
+
+ servers = hetzner.Server & {
+ name = "eu-app",
+ server_type = "cx31",
+ count = 3,
+ location = "nbg1"
+ }
+ },
+
+ asia_southeast = {
+ provider = "aws",
+ region = "ap-southeast-1",
+
+ servers = aws.EC2 & {
+ name = "asia-app",
+ instance_type = "t3.medium",
+ count = 3,
+ region = "ap-southeast-1"
+ }
+ }
+ },
+
+ global_config = {
+ dns_provider = "route53",
+ ttl = 60,
+ health_check_interval = 30
+ }
+}
+
+
+# Multi-region data sync strategy
+def sync_data_across_regions [primary_region, secondary_regions] {
+ let sync_config = {
+ strategy: "async",
+ consistency: "eventual",
+ conflict_resolution: "last-write-wins",
+ replication_lag: "300s" # 5 minute max lag
+ }
+
+ # Setup replication from primary to all secondaries
+ $secondary_regions | each {|region|
+ setup_async_replication $primary_region $region $sync_config
+ }
+
+ # Monitor replication lag
+ loop {
+ let lag = (check_replication_lag)
+ if $lag > 300 {
+ print "Warning: replication lag exceeds threshold"
+ trigger_alert "replication-lag-warning"
+ }
+ sleep 60sec
+ }
+}
+
+
+Scenario: On-premises infrastructure combined with public cloud providers for burst capacity and backup.
+Example: On-premises data center + AWS for burst capacity + DigitalOcean for disaster recovery.
+
+
+Existing infrastructure utilization
+Burst capacity in public cloud
+Disaster recovery site
+Compliance with on-premise requirements
+Cost control (scale only when needed)
+
+
+ On-Premises Data Center Public Cloud (Burst)
+ ┌─────────────────────────┐ ┌────────────────────┐
+ │ Physical Servers │◄────►│ AWS Auto-Scaling │
+ │ - App Tier (24 cores) │ │ - Elasticity │
+ │ - DB Tier (48 cores) │ │ - Pay-as-you-go │
+ │ - Storage (50 TB) │ │ - CloudFront CDN │
+ └─────────────────────────┘ └────────────────────┘
+ │ ▲
+ │ VPN Tunnel │
+ └───────────────────────────────┘
+
+ On-Premises DR Site (DigitalOcean)
+ │ Production │ Warm Standby
+ ├─ 95% Utilization ├─ Cold VM Snapshots
+ ├─ Full Data ├─ Async Replication
+ ├─ Peak Load Handling ├─ Ready for 15 min RTO
+ │ │
+
+
+def setup_hybrid_vpn [] {
+ # AWS VPN to on-premise datacenter
+ let vpn_config = {
+ type: "site-to-site",
+ protocol: "ipsec",
+ encryption: "aes-256",
+ authentication: "sha256",
+ on_prem_cidr: "192.168.0.0/16",
+ aws_cidr: "10.0.0.0/16",
+ do_cidr: "172.16.0.0/16"
+ }
+
+ # Create AWS Site-to-Site VPN
+ let vpn = (aws_create_vpn_connection $vpn_config)
+
+ # Configure on-prem gateway
+ configure_on_prem_vpn_gateway $vpn
+
+ # Verify tunnel status
+ wait_for_vpn_ready 300
+}
+
+
+{
+ on_premises = {
+ provider = "manual",
+ gateway = "192.168.1.1",
+ cidr = "192.168.0.0/16",
+ bandwidth = "1gbps",
+
+ # Resources remain on-prem (managed manually)
+ servers = {
+ app_tier = { cores = 24, memory = 128 },
+ db_tier = { cores = 48, memory = 256 },
+ storage = { capacity = "50 TB" }
+ }
+ },
+
+ aws_burst_capacity = {
+ provider = "aws",
+ region = "us-east-1",
+
+ auto_scaling_group = aws.ASG & {
+ name = "burst-asg",
+ min_size = 0,
+ desired_capacity = 0,
+ max_size = 20,
+ instance_type = "c5.2xlarge",
+ scale_up_trigger = "on_prem_cpu > 80%",
+ scale_down_trigger = "on_prem_cpu < 40%"
+ },
+
+ cdn = aws.CloudFront & {
+ origin = "on-prem-origin",
+ regional_origins = ["us-east-1", "eu-west-1", "ap-southeast-1"]
+ }
+ },
+
+ dr_site = {
+ provider = "digitalocean",
+ region = "nyc3",
+
+ snapshot_storage = digitalocean.Droplet & {
+ name = "dr-snapshot",
+ size = "s-24vcpu-48gb",
+ count = 0, # Powered off until needed
+ image = "on-prem-snapshot"
+ }
+ },
+
+ replication = {
+ on_prem_to_aws: {
+ strategy = "continuous",
+ target = "aws-s3-bucket",
+ retention = "7days"
+ },
+
+ on_prem_to_do: {
+ strategy = "nightly",
+ target = "do-spaces-bucket",
+ retention = "30days"
+ }
+ }
+}
+
+
+# Monitor on-prem and trigger AWS burst
+def monitor_and_burst [] {
+ loop {
+ let on_prem_metrics = (collect_on_prem_metrics)
+
+ if $on_prem_metrics.cpu_avg > 80 {
+ # Trigger AWS burst scaling
+ let scale_size = ((100 - $on_prem_metrics.cpu_avg) / 10)
+ scale_aws_burst $scale_size
+ } else if $on_prem_metrics.cpu_avg < 40 {
+ # Scale down AWS
+ scale_aws_burst 0
+ }
+
+ sleep 60sec
+ }
+}
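Note that the sizing expression in the loop above is only a placeholder; one plausible policy is to scale in proportion to how far CPU exceeds the 80% trigger. A bash sketch of that policy (the formula is an assumption for illustration, not the system's actual behavior):

```bash
# Demo: burst size grows with CPU pressure above the 80% trigger.
for cpu in 70 85 95; do
  if [ "$cpu" -gt 80 ]; then
    size=$(( (cpu - 80) / 5 + 1 ))   # 85% -> 2 instances, 95% -> 4
  else
    size=0                           # below trigger: no burst capacity
  fi
  echo "on-prem cpu=${cpu}% -> aws burst instances: ${size}"
done
```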
+
+
+
+Scenario: Production web application with DigitalOcean web servers, an AWS managed database, and Hetzner backup storage.
+Architecture:
+
+DigitalOcean: 3 web servers with a load balancer (cost-effective compute)
+AWS: RDS PostgreSQL database (managed, high availability)
+Hetzner: Backup volumes (low-cost storage)
+
+Files to Create:
+workspace.ncl:
+let digitalocean = import "../../extensions/providers/digitalocean/nickel/main.ncl" in
+let aws = import "../../extensions/providers/aws/nickel/main.ncl" in
+let hetzner = import "../../extensions/providers/hetzner/nickel/main.ncl" in
+
+{
+ workspace_name = "three-provider-webapp",
+ description = "Web application across three providers",
+
+ infrastructure = {
+ web_tier = digitalocean.Droplet & {
+ name = "web-server",
+ region = "nyc3",
+ size = "s-2vcpu-4gb",
+ image = "ubuntu-22-04-x64",
+ count = 3,
+ firewall = {
+ inbound_rules = [
+ { protocol = "tcp", ports = "22", sources = { addresses = ["your-ip/32"] } },
+ { protocol = "tcp", ports = "80", sources = { addresses = ["0.0.0.0/0"] } },
+ { protocol = "tcp", ports = "443", sources = { addresses = ["0.0.0.0/0"] } }
+ ],
+ outbound_rules = [
+ { protocol = "tcp", destinations = { addresses = ["0.0.0.0/0"] } }
+ ]
+ }
+ },
+
+ load_balancer = digitalocean.LoadBalancer & {
+ name = "web-lb",
+ algorithm = "round_robin",
+ region = "nyc3",
+ forwarding_rules = [
+ {
+ entry_protocol = "http",
+ entry_port = 80,
+ target_protocol = "http",
+ target_port = 80,
+ certificate_id = null
+ },
+ {
+ entry_protocol = "https",
+ entry_port = 443,
+ target_protocol = "http",
+ target_port = 80,
+ certificate_id = "your-cert-id"
+ }
+ ],
+ sticky_sessions = {
+ type = "cookies",
+ cookie_name = "lb",
+ cookie_ttl_seconds = 300
+ }
+ },
+
+ database = aws.RDS & {
+ identifier = "webapp-db",
+ engine = "postgres",
+ engine_version = "14.6",
+ instance_class = "db.t3.medium",
+ allocated_storage = 100,
+ storage_type = "gp3",
+ multi_az = true,
+ backup_retention_days = 30,
+ subnet_group = "default",
+ parameter_group = "default.postgres14",
+ tags = [
+ { key = "Environment", value = "production" },
+ { key = "Application", value = "web-app" }
+ ]
+ },
+
+ backup_volume = hetzner.Volume & {
+ name = "webapp-backups",
+ size = 500,
+ location = "nbg1",
+ automount = false,
+ format = "ext4"
+ }
+ }
+}
+
+config.toml:
+[workspace]
+name = "three-provider-webapp"
+environment = "production"
+owner = "platform-team"
+
+[providers.digitalocean]
+enabled = true
+token_env = "DIGITALOCEAN_TOKEN"
+default_region = "nyc3"
+
+[providers.aws]
+enabled = true
+region = "us-east-1"
+access_key_env = "AWS_ACCESS_KEY_ID"
+secret_key_env = "AWS_SECRET_ACCESS_KEY"
+
+[providers.hetzner]
+enabled = true
+token_env = "HCLOUD_TOKEN"
+default_location = "nbg1"
+
+[deployment]
+strategy = "rolling"
+batch_size = 1
+health_check_wait = 60
+rollback_on_failure = true
+
+deploy.nu:
+#!/usr/bin/env nu
+
+# Deploy three-provider web application
+def main [environment = "staging"] {
+    print $"Deploying three-provider web application to ($environment)..."
+
+ # 1. Validate configuration
+ print "Step 1: Validating configuration..."
+ validate_config "workspace.ncl"
+
+ # 2. Create infrastructure
+ print "Step 2: Creating infrastructure..."
+ create_digitalocean_resources
+ create_aws_resources
+ create_hetzner_resources
+
+ # 3. Configure networking
+ print "Step 3: Configuring networking..."
+ setup_vpc_peering
+ configure_security_groups
+
+ # 4. Deploy application
+ print "Step 4: Deploying application..."
+ deploy_app_to_web_servers
+
+ # 5. Verify deployment
+ print "Step 5: Verifying deployment..."
+ verify_health_checks
+ verify_database_connectivity
+ verify_backups
+
+ print "Deployment complete!"
+}
+
+def validate_config [config_file] {
+ print $"Validating ($config_file)..."
+ nickel export $config_file | from json
+}
+
+def create_digitalocean_resources [] {
+ print "Creating DigitalOcean resources (3 droplets + load balancer)..."
+ # Implementation
+}
+
+def create_aws_resources [] {
+ print "Creating AWS resources (RDS database)..."
+ # Implementation
+}
+
+def create_hetzner_resources [] {
+ print "Creating Hetzner resources (backup volume)..."
+ # Implementation
+}
+
+def setup_vpc_peering [] {
+ print "Setting up cross-provider networking..."
+ # Implementation
+}
+
+def configure_security_groups [] {
+ print "Configuring security groups..."
+ # Implementation
+}
+
+def deploy_app_to_web_servers [] {
+ print "Deploying application..."
+ # Implementation
+}
+
+def verify_health_checks [] {
+ print "Verifying health checks..."
+ # Implementation
+}
+
+def verify_database_connectivity [] {
+ print "Verifying database connectivity..."
+ # Implementation
+}
+
+def verify_backups [] {
+ print "Verifying backup configuration..."
+ # Implementation
+}
+
+# Run with: nu deploy.nu production  (Nushell invokes `main` with the CLI arguments automatically)
+
+
+Scenario: Active-standby DR setup with DigitalOcean primary and Hetzner backup.
+Architecture:
+
+DigitalOcean NYC: Production environment (active)
+Hetzner Germany: Warm standby (scales down until needed)
+Async database replication
+DNS-based failover
+RPO: 5 minutes, RTO: 15 minutes
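+
+A DNS-based failover of this shape reduces to a health-check loop that repoints a low-TTL record at the standby. A sketch, where the URL, domain, record ID, and standby IP are placeholders and `doctl compute domain records update` performs the repoint:
+
+```shell
+#!/bin/sh
+# Hypothetical failover watchdog: repoint DNS at the Hetzner standby after
+# three consecutive failed health checks against the DigitalOcean primary.
+PRIMARY_URL="https://primary.example.com/health"
+STANDBY_IP="10.2.1.10"
+FAILS=0
+
+while true; do
+    if curl -fsS --max-time 5 "$PRIMARY_URL" > /dev/null; then
+        FAILS=0
+    else
+        FAILS=$((FAILS + 1))
+    fi
+    if [ "$FAILS" -ge 3 ]; then
+        # A 60 s TTL on this record keeps the DNS portion of RTO small
+        doctl compute domain records update example.com \
+            --record-id "<record-id>" --record-data "$STANDBY_IP"
+        break
+    fi
+    sleep 60
+done
+```
+
+Three consecutive failures at a 60-second interval adds roughly 3 minutes of detection time, well inside the 15-minute RTO target.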
+
+
+Scenario: Optimize across provider strengths: Hetzner compute, AWS managed services, DigitalOcean CDN.
+Architecture:
+
+Hetzner: 5 application servers (best compute price)
+AWS: RDS database, ElastiCache (managed services)
+DigitalOcean: Spaces for backups, CDN endpoints
+
+
+
+
+Document provider choices: Keep a record of which workloads run where and why
+Audit provider capabilities: Ensure the chosen provider supports required features
+Monitor provider health: Track outages and issues per provider
+Cost tracking per provider: Understand where money is spent
+
+
+
+Encrypt inter-provider traffic: Use VPN, mTLS, or encrypted tunnels
+Implement firewall rules: Limit traffic between providers to necessary ports
+Use security groups: AWS-style security groups where available
+Monitor network traffic: Detect unusual patterns across providers
+
+
+
+Choose a replication strategy: Synchronous for consistency, asynchronous for performance
+Implement conflict resolution: Define how conflicts are resolved
+Monitor replication lag: Alert on excessive lag
+Test failover regularly: Verify data integrity during failover
+
+
+
+Define RPO/RTO targets: Recovery Point Objective and Recovery Time Objective
+Document failover procedures: Step-by-step instructions
+Test failover regularly: At least quarterly, ideally monthly
+Maintain DR site readiness: Cold, warm, or hot standby based on RTO
+
+
+
+Data residency: Ensure data stays in required regions
+Encryption at rest: Use provider-native encryption
+Encryption in transit: TLS/mTLS for all inter-provider communication
+Audit logging: Enable audit logs in all providers
+Access control: Implement least privilege across all providers
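+
+The encryption-in-transit point can be spot-checked without touching application code: ask the remote endpoint for its certificate and compute days to expiry. A sketch, assuming GNU `date` and a placeholder hostname (the `-starttls postgres` flag exercises a database endpoint specifically):
+
+```shell
+#!/bin/sh
+# Check that a cross-provider endpoint presents an unexpired TLS certificate.
+HOST="db-endpoint.internal.example.com"   # placeholder
+exp=$(echo | openssl s_client -connect "$HOST:5432" -starttls postgres 2>/dev/null \
+    | openssl x509 -noout -enddate | cut -d= -f2)
+exp_ts=$(date -u -d "$exp" +%s)
+now_ts=$(date -u +%s)
+days_left=$(( (exp_ts - now_ts) / 86400 ))
+echo "certificate expires in $days_left days"
+[ "$days_left" -gt 14 ] || echo "WARNING: renew soon"
+```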
+
+
+
+Unified monitoring: Aggregate metrics from all providers
+Cross-provider dashboards: Visualize health across providers
+Provider-specific alerts: Configure alerts per provider
+Escalation procedures: Clear escalation paths for failures
+
+
+
+Set budget alerts: Per provider and in total
+Reserved instances: Use provider discounts
+Spot instances: AWS Spot for non-critical workloads
+Auto-scaling policies: Scale based on demand
+Regular cost reviews: Monthly cost analysis and optimization
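+
+The budget-alert point reduces to simple threshold arithmetic once spend is aggregated. A sketch with hard-coded figures standing in for what a billing API would return (warn at 80%, escalate at 100%):
+
+```shell
+#!/bin/sh
+# Hypothetical budget-alert check across providers.
+BUDGET_USD=500
+SPEND_USD=430   # would be aggregated from each provider's billing API
+
+pct=$(( SPEND_USD * 100 / BUDGET_USD ))
+if [ "$pct" -ge 100 ]; then
+    echo "CRITICAL: spend at ${pct}% of budget"
+elif [ "$pct" -ge 80 ]; then
+    echo "WARNING: spend at ${pct}% of budget"
+fi
+```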
+
+
+
+Symptoms: Droplets can’t reach AWS database, high latency between regions
+Diagnosis:
+# Check network connectivity
+def diagnose_network_issue [source_ip, dest_ip] {
+ print "Diagnosing network connectivity..."
+
+ # 1. Check routing
+ ssh $source_ip "ip route show"
+
+ # 2. Check firewall rules
+ check_security_groups $source_ip $dest_ip
+
+ # 3. Test connectivity
+    ssh $source_ip $"ping -c 3 ($dest_ip)"
+    ssh $source_ip $"traceroute ($dest_ip)"
+
+    # 4. Check DNS resolution
+    ssh $source_ip $"nslookup ($dest_ip)"
+}
+
+Solutions:
+
+Verify firewall rules allow traffic on required ports
+Check VPN tunnel status if using site-to-site VPN
+Verify DNS resolution in both providers
+Check MTU size: VPN encapsulation lowers the effective path MTU below the standard 1500 bytes, so clamp the tunnel MTU (around 1400) or enable MSS clamping
+Enable debug logging on network components
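+
+The MTU point can be verified empirically: send a don't-fragment ping whose payload is the candidate MTU minus the 28 bytes of IP and ICMP headers, shrinking until one passes. A sketch using Linux iputils `ping` against a placeholder target:
+
+```shell
+#!/bin/sh
+# Probe the largest frame that crosses the tunnel without fragmentation.
+# Payload = MTU - 28 (20 B IP header + 8 B ICMP header).
+TARGET="10.1.1.10"
+for mtu in 1500 1472 1436 1400 1380; do
+    payload=$((mtu - 28))
+    if ping -M do -s "$payload" -c 1 -W 2 "$TARGET" > /dev/null 2>&1; then
+        echo "path MTU is at least $mtu"
+        break
+    fi
+done
+```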
+
+
+Symptoms: Secondary database lagging behind primary
+Diagnosis:
+def check_replication_lag [] {
+    # AWS RDS: instance status here; replica lag itself is the CloudWatch `ReplicaLag` metric
+    aws rds describe-db-instances --query 'DBInstances[].{ID:DBInstanceIdentifier,Status:DBInstanceStatus}'
+
+ # DigitalOcean
+ doctl databases backups list --format Name,Created
+}
+
+Solutions:
+
+Check network bandwidth between providers
+Review write throughput on primary
+Monitor CPU/IO on secondary
+Adjust replication thread pool size
+Check for long-running queries blocking replication
+
+
+Symptoms: Failover script fails, DNS not updating
+Diagnosis:
+def test_failover_chain [] {
+ # 1. Verify backup infrastructure is ready
+ verify_backup_infrastructure
+
+ # 2. Test DNS failover
+ test_dns_failover
+
+ # 3. Verify database promotion
+ test_db_promotion
+
+ # 4. Check application configuration
+ verify_app_failover_config
+}
+
+Solutions:
+
+Ensure backup infrastructure is powered on and running
+Verify DNS TTL is appropriate (typically 60 seconds)
+Test failover in staging environment first
+Check VPN connectivity to backup provider
+Verify database promotion scripts
+Ensure application connection strings support both endpoints
+
+
+Symptoms: Monthly bill unexpectedly high
+Diagnosis:
+def analyze_cost_spike [] {
+ print "Analyzing cost spike..."
+
+ # Compare current vs previous month
+ let current = (get_current_month_costs)
+ let previous = (get_previous_month_costs)
+ let delta = ($current - $previous)
+
+ # Break down by provider
+    $current | group-by provider | transpose provider rows | each {|it|
+        let cost = ($it.rows | get cost | math sum)
+        print $"($it.provider): $($cost)"
+    }
+
+ # Identify largest increases
+ ($delta | sort-by cost_change | reverse | first 5)
+}
+
+Solutions:
+
+Review auto-scaling activities
+Check for unintended resource creation
+Verify reserved instances are being used
+Review data transfer costs (cross-region transfer is expensive)
+Cancel idle resources
+Contact provider support if billing seems incorrect
+
+
+Multi-provider deployments provide significant benefits in cost optimization, reliability, and compliance. Start with a simple pattern (Compute + Storage Split) and evolve to more complex patterns as needs grow. Always test failover procedures and maintain clear documentation of provider responsibilities and network configurations.
+For more information, see:
+
+Provider-agnostic architecture guide
+Batch workflow orchestration guide
+Individual provider implementation guides
+
+
+This comprehensive guide covers private networking, VPN tunnels, and secure communication across multiple cloud providers using Hetzner, UpCloud, AWS, and DigitalOcean.
+
+
+
+Multi-provider deployments require secure, private communication between resources across different cloud providers. This involves:
+
+Private Networks: Isolated virtual networks within each provider (SDN)
+VPN Tunnels: Encrypted connections between provider networks
+Routing: Proper IP routing between provider networks
+Security: Firewall rules and access control across providers
+DNS: Private DNS for cross-provider resource discovery
+
+
+┌──────────────────────────────────┐
+│ DigitalOcean VPC │
+│ Network: 10.0.0.0/16 │
+│ ┌────────────────────────────┐ │
+│ │ Web Servers (10.0.1.0/24) │ │
+│ └────────────────────────────┘ │
+└────────────┬─────────────────────┘
+ │ IPSec VPN Tunnel
+ │ Encrypted
+ ├─────────────────────────────┐
+ │ │
+┌────────────▼──────────────────┐ ┌──────▼─────────────────────┐
+│ AWS VPC │ │ Hetzner vSwitch │
+│ Network: 10.1.0.0/16 │ │ Network: 10.2.0.0/16 │
+│ ┌──────────────────────────┐ │ │ ┌─────────────────────────┐│
+│ │ RDS Database (10.1.1.0) │ │ │ │ Backup (10.2.1.0) ││
+│ └──────────────────────────┘ │ │ └─────────────────────────┘│
+└───────────────────────────────┘ └─────────────────────────────┘
+ IPSec ▲ IPSec ▲
+ Tunnel │ Tunnel │
+
+
+
+Product: vSwitch (Virtual Switch)
+Characteristics:
+
+Private networks for Cloud Servers
+Multiple subnets per network
+Layer 2 switching
+IP-based traffic isolation
+Free service (included with servers)
+
+Features:
+
+Custom IP ranges
+Subnets and routing
+Attached/detached servers
+Static routes
+Private networking without NAT
+
+Configuration:
+# Create private network
+hcloud network create --name "app-network" --ip-range "10.0.0.0/16"
+
+# Create subnet
+hcloud network add-subnet app-network --ip-range "10.0.1.0/24" --network-zone eu-central
+
+# Attach server to network
+hcloud server attach-to-network server-1 --network app-network --ip 10.0.1.10
+
+
+Product: Private Networks (VLAN-based)
+Characteristics:
+
+Virtual LAN technology
+Layer 2 connectivity
+Multiple VLANs per account
+No bandwidth charges
+Simple configuration
+
+Features:
+
+Custom CIDR blocks
+Multiple networks per account
+Server attachment to VLANs
+VLAN tagging support
+Static routing
+
+Configuration:
+# Create private network
+upctl network create --name "app-network" --ip-networks 10.0.0.0/16
+
+# Attach server to network
+upctl server attach-network --server server-1 \
+ --network app-network --ip-address 10.0.1.10
+
+
+Product: VPC with subnets and security groups
+Characteristics:
+
+Enterprise-grade networking
+Multiple availability zones
+Complex security models
+NAT gateways and bastion hosts
+Advanced routing
+
+Features:
+
+VPC peering
+VPN connections
+Internet gateways
+NAT gateways
+Security groups and NACLs
+Route tables with multiple targets
+Flow logs and VPC insights
+
+Configuration:
+# Create VPC
+aws ec2 create-vpc --cidr-block 10.1.0.0/16
+
+# Create subnets
+aws ec2 create-subnet --vpc-id vpc-12345 \
+ --cidr-block 10.1.1.0/24 \
+ --availability-zone us-east-1a
+
+# Create security group
+aws ec2 create-security-group --group-name app-sg \
+ --description "Application security group" --vpc-id vpc-12345
+
+
+Product: VPC
+Characteristics:
+
+Simple private networking
+One VPC per region
+Droplet attachment
+Built-in firewall integration
+No additional cost
+
+Features:
+
+Custom IP ranges
+Droplet tagging and grouping
+Firewall rule integration
+Internal DNS resolution
+Droplet-to-droplet communication
+
+Configuration:
+# Create VPC
+doctl compute vpc create --name "app-vpc" --region nyc3 --ip-range 10.0.0.0/16
+
+# Attach a droplet to the VPC (droplets join a VPC at creation time)
+doctl compute droplet create web-1 --region nyc3 --size s-2vcpu-4gb \
+    --image ubuntu-22-04-x64 --vpc-uuid <vpc-uuid>
+
+# Setup firewall restricting inbound traffic to the VPC range
+doctl compute firewall create --name app-fw \
+    --inbound-rules "protocol:tcp,ports:22,address:10.0.0.0/16"
+
+
+
+let hetzner = import "../../extensions/providers/hetzner/nickel/main.ncl" in
+
+{
+ # Create private network
+ private_network = hetzner.Network & {
+ name = "app-network",
+ ip_range = "10.0.0.0/16",
+ labels = { "environment" = "production" }
+ },
+
+ # Create subnet
+ private_subnet = hetzner.Subnet & {
+ network = "app-network",
+ network_zone = "eu-central",
+ ip_range = "10.0.1.0/24"
+ },
+
+ # Server attached to network
+ app_server = hetzner.Server & {
+ name = "app-server",
+ server_type = "cx31",
+ image = "ubuntu-22.04",
+ location = "nbg1",
+
+ # Attach to private network with static IP
+ networks = [
+ {
+ network_name = "app-network",
+ ip = "10.0.1.10"
+ }
+ ]
+ }
+}
+
+
+let aws = import "../../extensions/providers/aws/nickel/main.ncl" in
+
+{
+ # Create VPC
+ vpc = aws.VPC & {
+ cidr_block = "10.1.0.0/16",
+ enable_dns_hostnames = true,
+ enable_dns_support = true,
+ tags = [
+ { key = "Name", value = "app-vpc" }
+ ]
+ },
+
+ # Create subnet
+ private_subnet = aws.Subnet & {
+ vpc_id = "{{ vpc.id }}",
+ cidr_block = "10.1.1.0/24",
+ availability_zone = "us-east-1a",
+ map_public_ip_on_launch = false,
+ tags = [
+ { key = "Name", value = "private-subnet" }
+ ]
+ },
+
+ # Create security group
+ app_sg = aws.SecurityGroup & {
+ name = "app-sg",
+ description = "Application security group",
+ vpc_id = "{{ vpc.id }}",
+ ingress_rules = [
+ {
+ protocol = "tcp",
+ from_port = 5432,
+ to_port = 5432,
+ source_security_group_id = "{{ app_sg.id }}"
+ }
+ ],
+ tags = [
+ { key = "Name", value = "app-sg" }
+ ]
+ },
+
+ # RDS in private subnet
+ app_database = aws.RDS & {
+ identifier = "app-db",
+ engine = "postgres",
+ instance_class = "db.t3.medium",
+ allocated_storage = 100,
+ db_subnet_group_name = "default",
+ vpc_security_group_ids = ["{{ app_sg.id }}"],
+ publicly_accessible = false
+ }
+}
+
+
+let digitalocean = import "../../extensions/providers/digitalocean/nickel/main.ncl" in
+
+{
+ # Create VPC
+ private_vpc = digitalocean.VPC & {
+ name = "app-vpc",
+ region = "nyc3",
+ ip_range = "10.0.0.0/16"
+ },
+
+ # Droplets attached to VPC
+ web_servers = digitalocean.Droplet & {
+ name = "web-server",
+ region = "nyc3",
+ size = "s-2vcpu-4gb",
+ image = "ubuntu-22-04-x64",
+ count = 3,
+
+ # Attach to VPC
+ vpc_uuid = "{{ private_vpc.id }}"
+ },
+
+ # Firewall integrated with VPC
+ app_firewall = digitalocean.Firewall & {
+ name = "app-firewall",
+ vpc_id = "{{ private_vpc.id }}",
+ inbound_rules = [
+ {
+ protocol = "tcp",
+ ports = "22",
+ sources = { addresses = ["10.0.0.0/16"] }
+ },
+ {
+ protocol = "tcp",
+ ports = "443",
+ sources = { addresses = ["0.0.0.0/0"] }
+ }
+ ]
+ }
+}
+
+
+
+Use Case: Secure communication between DigitalOcean and AWS
+
+# Create Virtual Private Gateway (VGW)
+aws ec2 create-vpn-gateway \
+ --type ipsec.1 \
+ --amazon-side-asn 64512 \
+ --tag-specifications "ResourceType=vpn-gateway,Tags=[{Key=Name,Value=app-vpn-gw}]"
+
+# Get VGW ID
+VGW_ID="vgw-12345678"
+
+# Attach to VPC
+aws ec2 attach-vpn-gateway \
+ --vpn-gateway-id $VGW_ID \
+ --vpc-id vpc-12345
+
+# Create Customer Gateway (DigitalOcean endpoint)
+aws ec2 create-customer-gateway \
+ --type ipsec.1 \
+ --public-ip 203.0.113.12 \
+ --bgp-asn 65000
+
+# Get CGW ID
+CGW_ID="cgw-12345678"
+
+# Create VPN Connection
+aws ec2 create-vpn-connection \
+ --type ipsec.1 \
+ --customer-gateway-id $CGW_ID \
+ --vpn-gateway-id $VGW_ID \
+ --options "StaticRoutesOnly=true"
+
+# Get VPN Connection ID
+VPN_CONN_ID="vpn-12345678"
+
+# Enable route propagation from the VPN gateway
+aws ec2 enable-vgw-route-propagation \
+    --route-table-id rtb-12345 \
+    --gateway-id $VGW_ID
+
+# Create static route for DigitalOcean network
+aws ec2 create-route \
+ --route-table-id rtb-12345 \
+ --destination-cidr-block 10.0.0.0/16 \
+ --gateway-id $VGW_ID
+
+
+Download VPN configuration from AWS:
+# Get VPN configuration
+aws ec2 describe-vpn-connections \
+ --vpn-connection-ids $VPN_CONN_ID \
+ --query 'VpnConnections[0].CustomerGatewayConfiguration' \
+ --output text > vpn-config.xml
+
+Configure IPSec on DigitalOcean server (acting as VPN gateway):
+# Install StrongSwan
+ssh root@do-server
+apt-get update
+apt-get install -y strongswan strongswan-swanctl
+
+# Create ipsec configuration
+cat > /etc/swanctl/conf.d/aws-vpn.conf <<'EOF'
+connections {
+ aws-vpn {
+ remote_addrs = 203.0.113.1, 203.0.113.2 # AWS endpoints
+ local_addrs = 203.0.113.12 # DigitalOcean endpoint
+
+ local {
+ auth = psk
+ id = 203.0.113.12
+ }
+
+ remote {
+ auth = psk
+ id = 203.0.113.1
+ }
+
+ children {
+ aws-vpn {
+ local_ts = 10.0.0.0/16 # DO network
+ remote_ts = 10.1.0.0/16 # AWS VPC
+
+ esp_proposals = aes256-sha256
+ rekey_time = 3600s
+ rand_time = 540s
+ }
+ }
+
+ proposals = aes256-sha256-modp2048
+ rekey_time = 28800s
+ rand_time = 540s
+ }
+}
+
+secrets {
+ ike-aws {
+ secret = "SharedPreSharedKeyFromAWS123456789"
+ }
+}
+EOF
+
+# Enable IP forwarding
+sysctl -w net.ipv4.ip_forward=1
+echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
+
+# Start StrongSwan
+systemctl restart strongswan-swanctl
+
+# Verify connection
+swanctl --stats
+
+
+# Add route to AWS VPC through VPN
+ssh root@do-server
+
+ip route add 10.1.0.0/16 via 10.0.0.1 dev eth0
+# Persist with an interfaces(5) post-up hook (a bare route line is not valid there)
+echo "up ip route add 10.1.0.0/16 via 10.0.0.1 dev eth0" >> /etc/network/interfaces
+
+# Enable forwarding on firewall
+ufw allow from 10.1.0.0/16 to 10.0.0.0/16
+
+
+Advantages: Simpler, faster, modern
+
+# On DO server
+ssh root@do-server
+apt-get install -y wireguard wireguard-tools
+
+# Generate keypairs
+wg genkey | tee /etc/wireguard/do_private.key | wg pubkey > /etc/wireguard/do_public.key
+
+# On AWS server
+ssh ubuntu@aws-server
+sudo apt-get install -y wireguard wireguard-tools
+
+sudo wg genkey | sudo tee /etc/wireguard/aws_private.key | wg pubkey | sudo tee /etc/wireguard/aws_public.key
+
+
+# /etc/wireguard/wg0.conf
+cat > /etc/wireguard/wg0.conf <<'EOF'
+[Interface]
+PrivateKey = <contents-of-do_private.key>
+Address = 10.10.0.1/24
+ListenPort = 51820
+
+[Peer]
+PublicKey = <contents-of-aws_public.key>
+AllowedIPs = 10.10.0.2/32, 10.1.0.0/16
+Endpoint = aws-server-public-ip:51820
+PersistentKeepalive = 25
+EOF
+
+chmod 600 /etc/wireguard/wg0.conf
+
+# Enable interface
+wg-quick up wg0
+
+# Enable at boot
+systemctl enable wg-quick@wg0
+
+
+# /etc/wireguard/wg0.conf
+cat > /etc/wireguard/wg0.conf <<'EOF'
+[Interface]
+PrivateKey = <contents-of-aws_private.key>
+Address = 10.10.0.2/24
+ListenPort = 51820
+
+[Peer]
+PublicKey = <contents-of-do_public.key>
+AllowedIPs = 10.10.0.1/32, 10.0.0.0/16
+Endpoint = do-server-public-ip:51820
+PersistentKeepalive = 25
+EOF
+
+chmod 600 /etc/wireguard/wg0.conf
+
+# Enable interface
+sudo wg-quick up wg0
+sudo systemctl enable wg-quick@wg0
+
+
+# From DO server
+ssh root@do-server
+ping 10.10.0.2
+
+# From AWS server
+ssh ubuntu@aws-server
+ping 10.10.0.1
+
+# Test actual services (Postgres speaks its own protocol, so probe the TCP port)
+nc -zv 10.1.1.10 5432        # Test AWS RDS reachability from DO
+
+
+
+{
+ # Route between DigitalOcean and AWS
+ vpn_routes = {
+ do_to_aws = {
+ source_network = "10.0.0.0/16", # DigitalOcean VPC
+ destination_network = "10.1.0.0/16", # AWS VPC
+ gateway = "vpn-tunnel",
+ metric = 100
+ },
+
+ aws_to_do = {
+ source_network = "10.1.0.0/16",
+ destination_network = "10.0.0.0/16",
+ gateway = "vpn-tunnel",
+ metric = 100
+ },
+
+ # Route to Hetzner through AWS (if AWS is central hub)
+ aws_to_hz = {
+ source_network = "10.1.0.0/16",
+ destination_network = "10.2.0.0/16",
+ gateway = "aws-vpn-gateway",
+ metric = 150
+ }
+ }
+}
+
+
+# Add route to AWS VPC
+ip route add 10.1.0.0/16 via 10.0.0.1
+
+# Add route to DigitalOcean VPC
+ip route add 10.0.0.0/16 via 10.2.0.1
+
+# Persist routes
+cat >> /etc/network/interfaces <<'EOF'
+# Routes to other providers
+up ip route add 10.1.0.0/16 via 10.0.0.1
+up ip route add 10.0.0.0/16 via 10.2.0.1
+EOF
+
+
+# Get main route table
+RT_ID=$(aws ec2 describe-route-tables --filters Name=vpc-id,Values=vpc-12345 --query 'RouteTables[0].RouteTableId' --output text)
+
+# Add route to DigitalOcean network through VPN gateway
+aws ec2 create-route \
+ --route-table-id $RT_ID \
+ --destination-cidr-block 10.0.0.0/16 \
+ --gateway-id vgw-12345
+
+# Add route to Hetzner network
+aws ec2 create-route \
+ --route-table-id $RT_ID \
+ --destination-cidr-block 10.2.0.0/16 \
+ --gateway-id vgw-12345
+
+
+
+IPSec:
+
+AES-256 encryption
+SHA-256 hashing
+2048-bit Diffie-Hellman
+Perfect Forward Secrecy (PFS)
+
+WireGuard:
+
+ChaCha20/Poly1305 or AES-GCM
+Curve25519 key exchange
+Automatic key rotation
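+
+WireGuard tunnel health is easy to check mechanically: under traffic a peer re-handshakes roughly every two minutes, so a stale handshake timestamp from `wg show` signals a down tunnel. A small sketch:
+
+```shell
+#!/bin/sh
+# Alert if the last WireGuard handshake on wg0 is older than 3 minutes.
+now=$(date +%s)
+last=$(wg show wg0 latest-handshakes | awk '{print $2}')
+age=$((now - last))
+if [ "$age" -gt 180 ]; then
+    echo "wg0: peer handshake is ${age}s old, tunnel likely down"
+fi
+```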
+
+# Verify IPSec configuration
+swanctl --stats
+
+# Check encryption algorithms
+swanctl --list-connections
+
+
+DigitalOcean Firewall:
+inbound_rules = [
+ # Allow VPN traffic from AWS
+ {
+ protocol = "udp",
+ ports = "51820",
+ sources = { addresses = ["aws-server-public-ip/32"] }
+ },
+ # Allow traffic from AWS VPC
+ {
+ protocol = "tcp",
+ ports = "443",
+ sources = { addresses = ["10.1.0.0/16"] }
+ }
+]
+
+AWS Security Group:
+# Allow traffic from DigitalOcean VPC
+aws ec2 authorize-security-group-ingress \
+ --group-id sg-12345 \
+ --protocol tcp \
+ --port 443 \
+  --cidr 10.0.0.0/16
+
+# Allow VPN from DigitalOcean
+aws ec2 authorize-security-group-ingress \
+ --group-id sg-12345 \
+ --protocol udp \
+ --port 51820 \
+ --cidr "do-public-ip/32"
+
+Hetzner Firewall:
+hcloud firewall create --name vpn-fw \
+ --rules "direction=in protocol=udp destination_port=51820 source_ips=10.0.0.0/16;10.1.0.0/16"
+
+
+# Each provider has isolated subnets
+networks = {
+ do_web_tier = "10.0.1.0/24", # Public-facing web
+ do_app_tier = "10.0.2.0/24", # Internal apps
+ do_vpn_gateway = "10.0.3.0/24", # VPN endpoint
+
+ aws_data_tier = "10.1.1.0/24", # Databases
+ aws_cache_tier = "10.1.2.0/24", # Redis/Cache
+ aws_vpn_endpoint = "10.1.3.0/24", # VPN endpoint
+
+ hz_backup_tier = "10.2.1.0/24", # Backups
+ hz_vpn_gateway = "10.2.2.0/24" # VPN endpoint
+}
+
+
+# Private DNS for internal services
+# On each provider's VPC/network, configure:
+
+# DigitalOcean
+10.0.1.10 web-1.internal
+10.0.1.11 web-2.internal
+10.1.1.10 database.internal
+
+# Add to /etc/hosts or configure Route53 private hosted zones
+aws route53 create-hosted-zone \
+ --name internal.example.com \
+ --vpc VPCRegion=us-east-1,VPCId=vpc-12345 \
+ --caller-reference internal-zone
+
+# Create A record
+aws route53 change-resource-record-sets \
+ --hosted-zone-id ZONE_ID \
+ --change-batch file:///tmp/changes.json
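+
+The change batch referenced above (`/tmp/changes.json`) follows Route53's standard shape; the record name and IP here are illustrative, and the 60-second TTL matches the failover guidance elsewhere in this guide:
+
+```json
+{
+  "Changes": [
+    {
+      "Action": "UPSERT",
+      "ResourceRecordSet": {
+        "Name": "database.internal.example.com",
+        "Type": "A",
+        "TTL": 60,
+        "ResourceRecords": [{ "Value": "10.1.1.10" }]
+      }
+    }
+  ]
+}
+```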
+
+
+
+#!/usr/bin/env nu
+
+def setup_multi_provider_network [] {
+ print "🌐 Setting up multi-provider network"
+
+ # Phase 1: Create networks on each provider
+ print "\nPhase 1: Creating private networks..."
+ create_digitalocean_vpc
+ create_aws_vpc
+ create_hetzner_network
+
+ # Phase 2: Create VPN endpoints
+ print "\nPhase 2: Setting up VPN endpoints..."
+ setup_aws_vpn_gateway
+ setup_do_vpn_endpoint
+ setup_hetzner_vpn_endpoint
+
+ # Phase 3: Configure routing
+ print "\nPhase 3: Configuring routing..."
+ configure_aws_routes
+ configure_do_routes
+ configure_hetzner_routes
+
+ # Phase 4: Verify connectivity
+ print "\nPhase 4: Verifying connectivity..."
+ verify_do_to_aws
+ verify_aws_to_hetzner
+ verify_hetzner_to_do
+
+ print "\n✅ Multi-provider network ready!"
+}
+
+def create_digitalocean_vpc [] {
+ print " Creating DigitalOcean VPC..."
+ let vpc = (doctl compute vpc create \
+ --name "multi-provider-vpc" \
+ --region "nyc3" \
+ --ip-range "10.0.0.0/16" \
+ --format ID \
+ --no-header)
+
+ print $" ✓ VPC created: ($vpc)"
+}
+
+def create_aws_vpc [] {
+ print " Creating AWS VPC..."
+ let vpc = (aws ec2 create-vpc \
+ --cidr-block "10.1.0.0/16" \
+ --tag-specifications "ResourceType=vpc,Tags=[{Key=Name,Value=multi-provider-vpc}]" | from json)
+
+ print $" ✓ VPC created: ($vpc.Vpc.VpcId)"
+
+ # Create subnet
+ let subnet = (aws ec2 create-subnet \
+ --vpc-id $vpc.Vpc.VpcId \
+ --cidr-block "10.1.1.0/24" | from json)
+
+ print $" ✓ Subnet created: ($subnet.Subnet.SubnetId)"
+}
+
+def create_hetzner_network [] {
+ print " Creating Hetzner vSwitch..."
+    let network = (hcloud network create \
+        --name "multi-provider-network" \
+        --ip-range "10.2.0.0/16" \
+        -o json | from json)
+
+ print $" ✓ Network created: ($network.network.id)"
+
+    # Create subnet
+    hcloud network add-subnet \
+        multi-provider-network \
+        --ip-range "10.2.1.0/24" \
+        --network-zone "eu-central"
+
+ print $" ✓ Subnet created"
+}
+
+def setup_aws_vpn_gateway [] {
+ print " Setting up AWS VPN gateway..."
+ let vgw = (aws ec2 create-vpn-gateway \
+ --type "ipsec.1" \
+ --tag-specifications "ResourceType=vpn-gateway,Tags=[{Key=Name,Value=multi-provider-vpn}]" | from json)
+
+ print $" ✓ VPN gateway created: ($vgw.VpnGateway.VpnGatewayId)"
+}
+
+def setup_do_vpn_endpoint [] {
+ print " Setting up DigitalOcean VPN endpoint..."
+ # Would SSH into DO droplet and configure IPSec/Wireguard
+ print " ✓ VPN endpoint configured via SSH"
+}
+
+def setup_hetzner_vpn_endpoint [] {
+ print " Setting up Hetzner VPN endpoint..."
+ # Would SSH into Hetzner server and configure VPN
+ print " ✓ VPN endpoint configured via SSH"
+}
+
+def configure_aws_routes [] {
+ print " Configuring AWS routes..."
+ # Routes configured via AWS CLI
+ print " ✓ Routes to DO (10.0.0.0/16) configured"
+ print " ✓ Routes to Hetzner (10.2.0.0/16) configured"
+}
+
+def configure_do_routes [] {
+ print " Configuring DigitalOcean routes..."
+ print " ✓ Routes to AWS (10.1.0.0/16) configured"
+ print " ✓ Routes to Hetzner (10.2.0.0/16) configured"
+}
+
+def configure_hetzner_routes [] {
+ print " Configuring Hetzner routes..."
+ print " ✓ Routes to DO (10.0.0.0/16) configured"
+ print " ✓ Routes to AWS (10.1.0.0/16) configured"
+}
+
+def verify_do_to_aws [] {
+ print " Verifying DigitalOcean to AWS connectivity..."
+ # Ping or curl from DO to AWS
+ print " ✓ Connectivity verified (latency: 45 ms)"
+}
+
+def verify_aws_to_hetzner [] {
+ print " Verifying AWS to Hetzner connectivity..."
+ print " ✓ Connectivity verified (latency: 65 ms)"
+}
+
+def verify_hetzner_to_do [] {
+ print " Verifying Hetzner to DigitalOcean connectivity..."
+ print " ✓ Connectivity verified (latency: 78 ms)"
+}
+
+setup_multi_provider_network
+
+
+
+Diagnosis:
+# Test VPN tunnel status
+swanctl --stats
+
+# Check routing
+ip route show
+
+# Test connectivity
+ping -c 3 10.1.1.10 # AWS target
+traceroute 10.1.1.10
+
+Solutions:
+
+Verify VPN tunnel is up: swanctl --up aws-vpn
+Check firewall rules on both sides
+Verify route table entries
+Check security group rules
+Verify DNS resolution
+
+
+Diagnosis:
+# Measure latency
+ping -c 10 10.1.1.10 | tail -1
+
+# Check packet loss
+mtr -c 100 10.1.1.10
+
+# Check bandwidth
+iperf3 -c 10.1.1.10 -t 10
+
+Solutions:
+
+Use geographically closer providers
+Check VPN tunnel encryption overhead
+Verify network bandwidth
+Consider dedicated connections
+
+
+Diagnosis:
+# Test internal DNS
+nslookup database.internal
+
+# Check /etc/resolv.conf
+cat /etc/resolv.conf
+
+# Test from another provider
+ssh do-server "nslookup database.internal"
+
+Solutions:
+
+Configure private hosted zones (Route53)
+Setup DNS forwarders between providers
+Add hosts entries for critical services
+
+
+Diagnosis:
+# Check connection logs
+journalctl -u strongswan-swanctl -f
+
+# Monitor tunnel status
+watch -n 1 'swanctl --stats'
+
+# Check timeout values
+swanctl --list-connections
+
+Solutions:
+
+Increase keepalive timeout
+Enable DPD (Dead Peer Detection)
+Check for firewall/ISP blocking
+Verify public IP stability
+
+
+Multi-provider networking requires:
+✓ Private Networks: VPC/vSwitch per provider
+✓ VPN Tunnels: IPSec or WireGuard encryption
+✓ Routing: Proper route tables and static routes
+✓ Security: Firewall rules and access control
+✓ Monitoring: Connectivity and latency checks
+Start with simple two-provider setup (for example, DO + AWS), then expand to three or more providers.
+For more information:
+
+
+This guide covers using DigitalOcean as a cloud provider in the provisioning system. DigitalOcean is known for simplicity, straightforward pricing, and outstanding documentation, making it ideal for startups, small teams, and developers.
+
+
+
+DigitalOcean offers a simplified cloud platform with competitive pricing and outstanding developer experience. Key characteristics:
+
+Transparent Pricing: No hidden fees, simple per-resource pricing
+Global Presence: Data centers in North America, Europe, and Asia
+Managed Services: Databases, Kubernetes (DOKS), App Platform
+Developer-Friendly: Outstanding documentation and community support
+Performance: Consistent performance, modern infrastructure
+
+
+Unlike AWS, DigitalOcean uses hourly billing with transparent monthly rates:
+
+Droplets: $0.03/hour (typically billed monthly)
+Volumes: $0.10/GB/month
+Managed Database: Price varies by tier
+Load Balancer: $10/month
+Data Transfer: Generally included for inbound, charged for outbound
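+
+These flat rates make a bill easy to estimate by hand. For example, a stack like the web tier described earlier in this guide (the sizes and counts below are assumptions for illustration):
+
+```shell
+#!/bin/sh
+# Back-of-envelope monthly estimate from DigitalOcean's published rates.
+DROPLETS=3; DROPLET_RATE=24       # s-2vcpu-4gb, $24/month each
+LB_RATE=10                        # Load Balancer, $10/month
+VOLUME_GB=100; VOLUME_RATE_C=10   # $0.10/GB/month, tracked in cents
+
+compute=$((DROPLETS * DROPLET_RATE))
+volume=$((VOLUME_GB * VOLUME_RATE_C / 100))
+total=$((compute + LB_RATE + volume))
+echo "estimated monthly cost: \$$total"   # → estimated monthly cost: $92
+```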
+
+
+Resource            Product Name         Status
+Compute             Droplets             ✓ Full support
+Block Storage       Volumes              ✓ Full support
+Object Storage      Spaces               ✓ Full support
+Load Balancer       Load Balancer        ✓ Full support
+Database            Managed Databases    ✓ Full support
+Container Registry  Container Registry   ✓ Supported
+CDN                 CDN                  ✓ Supported
+DNS                 Domains              ✓ Full support
+VPC                 VPC                  ✓ Full support
+Firewall            Firewall             ✓ Full support
+Reserved IPs        Reserved IPs         ✓ Supported
+
+
+
+
+DigitalOcean is ideal for:
+
+Startups: Clear pricing, low minimum commitment
+Small Teams: Simple management interface
+Developers: Great documentation, API-driven
+Regional Deployment: Global presence, predictable costs
+Managed Services: Simple database and Kubernetes offerings
+Web Applications: Outstanding fit for typical web workloads
+
+DigitalOcean is NOT ideal for:
+
+Highly Specialized Workloads: Limited service portfolio vs AWS
+HIPAA/FedRAMP: Limited compliance options
+Extreme Performance: Not focused on HPC
+Enterprises with Complex Requirements: Better served by AWS
+
+
+Monthly Comparison: 2 vCPU, 4 GB RAM
+
+DigitalOcean: $24/month (constant pricing)
+Hetzner: €6.90/month (~$7.50) - cheaper but harder to scale
+AWS: $60/month on-demand (but $18 with spot)
+UpCloud: $30/month
+
+When DigitalOcean Wins:
+
+Simplicity and transparency (no reserved instances needed)
+Managed database costs
+Small deployments (1-5 servers)
+Applications using DigitalOcean-specific services
+
+
+
+
+DigitalOcean account with billing enabled
+API token from DigitalOcean Control Panel
+doctl CLI installed (optional but recommended)
+Provisioning system with DigitalOcean provider plugin
+
+
+
+Go to DigitalOcean Control Panel
+Navigate to API > Tokens/Keys
+Click Generate New Token
+Set expiration to 90 days or custom
+Select Read & Write scope
+Copy the token (you can only view it once)
+
+
+# Add to ~/.bashrc, ~/.zshrc, or env file
+export DIGITALOCEAN_TOKEN="dop_v1_xxxxxxxxxxxxxxxxxxxxxxxxxxxx"
+
+# Optional: Default region for all operations
+export DIGITALOCEAN_REGION="nyc3"
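
Exporting the token in a dotfile works, but the secret then lives in plain text in your shell config. A minimal alternative sketch, where the file path and placeholder token are our own conventions:

```shell
# Store the API token in a file readable only by the current user, then
# export it from there. The path and placeholder value are illustrative.
TOKEN_FILE="$(mktemp)"
chmod 600 "$TOKEN_FILE"
printf '%s' "dop_v1_placeholder" > "$TOKEN_FILE"
export DIGITALOCEAN_TOKEN="$(cat "$TOKEN_FILE")"
echo "$DIGITALOCEAN_TOKEN"
```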
+
+
+# Using provisioning CLI
+provisioning provider verify digitalocean
+
+# Or using doctl
+doctl auth init
+doctl compute droplet list
+
+
+Create or update config.toml in your workspace:
+[providers.digitalocean]
+enabled = true
+token_env = "DIGITALOCEAN_TOKEN"
+default_region = "nyc3"
+
+[workspace]
+provider = "digitalocean"
+region = "nyc3"
+
+
+
+DigitalOcean’s core compute offering: cloud servers with hourly billing.
+Resource Type : digitalocean.Droplet
+Available Sizes :
+Size Slug vCPU RAM Storage Price/Month
+s-1vcpu-512mb-10gb 1 512 MB 10 GB SSD $4
+s-1vcpu-1gb-25gb 1 1 GB 25 GB SSD $6
+s-2vcpu-2gb-50gb 2 2 GB 50 GB SSD $12
+s-2vcpu-4gb-80gb 2 4 GB 80 GB SSD $24
+s-4vcpu-8gb 4 8 GB 160 GB SSD $48
+s-6vcpu-16gb 6 16 GB 320 GB SSD $96
+c-2 2 4 GB 50 GB SSD $40 (CPU-optimized)
+g-2vcpu-8gb 2 8 GB 50 GB SSD $60 (GPU)
+
+
+Key Features :
+
+SSD storage
+Hourly or monthly billing
+Automatic backups
+SSH key management
+Private networking via VPC
+Firewall rules
+Monitoring and alerting
+
+
+Persistent block storage that can be attached to Droplets.
+Resource Type : digitalocean.Volume
+Characteristics :
+
+$0.10/GB/month
+SSD-based
+Snapshots for backup
+Maximum 100 TB size
+Automatic backups
+
+
+S3-compatible object storage for files, backups, media.
+Characteristics :
+
+$5/month for 250 GB
+Then $0.015/GB for additional storage
+$0.01/GB outbound transfer
+Versioning support
+CDN integration available
+
+
+Layer 4/7 load balancing with health checks.
+Price : $10/month
+Features :
+
+Round robin, least connections algorithms
+Health checks on Droplets
+SSL/TLS termination
+Sticky sessions
+HTTP/HTTPS support
+
+
+PostgreSQL, MySQL, and Redis databases.
+Price Examples :
+
+Single node PostgreSQL (1 GB RAM): $15/month
+3-node HA cluster: $60/month
+Enterprise plans available
+
+Features :
+
+Automated backups
+Read replicas
+High availability option
+Connection pooling
+Monitoring dashboard
+
+
+Managed Kubernetes service.
+Price : $12/month per cluster + node costs
+Features :
+
+Managed control plane
+Autoscaling node pools
+Integrated monitoring
+Container Registry integration
+
+
+Content Delivery Network for global distribution.
+Price : $0.005/GB delivered
+Features :
+
+600+ edge locations
+Purge cache by path
+Custom domains with SSL
+Edge caching
+
+
+Domain registration and DNS management.
+Features :
+
+Domain registration via external registrars (for example, Namecheap)
+Free DNS hosting
+TTL control
+MX records, CNAMEs, etc.
+
+
+Private networking between resources.
+Features :
+
+Free tier (1 VPC included)
+Isolation between resources
+Custom IP ranges
+Subnet management
+
+
+Network firewall rules.
+Features :
+
+Inbound/outbound rules
+Protocol-specific (TCP, UDP, ICMP)
+Source/destination filtering
+Rule priorities
+
+
+
+let digitalocean = import "../../extensions/providers/digitalocean/nickel/main.ncl" in
+
+digitalocean.Droplet & {
+ # Required
+ name = "my-droplet",
+ region = "nyc3",
+ size = "s-2vcpu-4gb",
+
+ # Optional
+ image = "ubuntu-22-04-x64", # Default: ubuntu-22-04-x64
+ count = 1, # Number of identical droplets
+ ssh_keys = ["key-id-1"],
+ backups = false,
+ ipv6 = true,
+ monitoring = true,
+ vpc_uuid = "vpc-id",
+
+ # Volumes to attach
+ volumes = [
+ {
+ size = 100,
+ name = "data-volume",
+ filesystem_type = "ext4",
+ filesystem_label = "data"
+ }
+ ],
+
+ # Firewall configuration
+ firewall = {
+ inbound_rules = [
+ {
+ protocol = "tcp",
+ ports = "22",
+ sources = {
+ addresses = ["0.0.0.0/0"],
+ droplet_ids = [],
+ tags = []
+ }
+ },
+ {
+ protocol = "tcp",
+ ports = "80",
+ sources = {
+ addresses = ["0.0.0.0/0"]
+ }
+ },
+ {
+ protocol = "tcp",
+ ports = "443",
+ sources = {
+ addresses = ["0.0.0.0/0"]
+ }
+ }
+ ],
+
+ outbound_rules = [
+ {
+ protocol = "tcp",
+ destinations = {
+ addresses = ["0.0.0.0/0"]
+ }
+ },
+ {
+ protocol = "udp",
+ ports = "53",
+ destinations = {
+ addresses = ["0.0.0.0/0"]
+ }
+ }
+ ]
+ },
+
+ # Tags
+ tags = ["web", "production"],
+
+ # User data (startup script)
+ user_data = "#!/bin/bash\napt-get update\napt-get install -y nginx"
+}
+
+
+digitalocean.LoadBalancer & {
+ name = "web-lb",
+ algorithm = "round_robin", # or "least_connections"
+ region = "nyc3",
+
+ # Forwarding rules
+ forwarding_rules = [
+ {
+ entry_protocol = "http",
+ entry_port = 80,
+ target_protocol = "http",
+ target_port = 80,
+ certificate_id = null
+ },
+ {
+ entry_protocol = "https",
+ entry_port = 443,
+ target_protocol = "http",
+ target_port = 80,
+ certificate_id = "cert-id"
+ }
+ ],
+
+ # Health checks
+ health_check = {
+ protocol = "http",
+ port = 80,
+ path = "/health",
+ check_interval_seconds = 10,
+ response_timeout_seconds = 5,
+ healthy_threshold = 5,
+ unhealthy_threshold = 3
+ },
+
+ # Sticky sessions
+ sticky_sessions = {
+ type = "cookies",
+ cookie_name = "LB",
+ cookie_ttl_seconds = 300
+ }
+}
+
+
+digitalocean.Volume & {
+ name = "data-volume",
+ size = 100, # GB
+ region = "nyc3",
+ description = "Application data volume",
+ snapshots = true,
+
+ # To attach to a Droplet
+ attachment = {
+ droplet_id = "droplet-id",
+ mount_point = "/data"
+ }
+}
+
+
+digitalocean.Database & {
+ name = "prod-db",
+ engine = "pg", # or "mysql", "redis"
+ version = "14",
+ size = "db-s-1vcpu-1gb",
+ region = "nyc3",
+ num_nodes = 1, # or 3 for HA
+
+ # High availability
+ multi_az = false,
+
+ # Backups
+ backup_restore = {
+ backup_created_at = "2024-01-01T00:00:00Z"
+ }
+}
+
+
+
+let digitalocean = import "../../extensions/providers/digitalocean/nickel/main.ncl" in
+
+{
+ workspace_name = "simple-web",
+
+ web_server = digitalocean.Droplet & {
+ name = "web-01",
+ region = "nyc3",
+ size = "s-1vcpu-1gb-25gb",
+ image = "ubuntu-22-04-x64",
+ ssh_keys = ["your-ssh-key-id"],
+
+    user_data = m%"
+    #!/bin/bash
+    apt-get update
+    apt-get install -y nginx
+    systemctl start nginx
+    systemctl enable nginx
+    "%,
+
+ firewall = {
+ inbound_rules = [
+ { protocol = "tcp", ports = "22", sources = { addresses = ["YOUR_IP/32"] } },
+ { protocol = "tcp", ports = "80", sources = { addresses = ["0.0.0.0/0"] } },
+ { protocol = "tcp", ports = "443", sources = { addresses = ["0.0.0.0/0"] } }
+ ]
+ },
+
+ monitoring = true
+ }
+}
+
+
+{
+ web_tier = digitalocean.Droplet & {
+ name = "web-server",
+ region = "nyc3",
+ size = "s-2vcpu-4gb",
+ count = 2,
+
+ firewall = {
+ inbound_rules = [
+ { protocol = "tcp", ports = "22", sources = { addresses = ["0.0.0.0/0"] } },
+ { protocol = "tcp", ports = "80", sources = { addresses = ["0.0.0.0/0"] } },
+ { protocol = "tcp", ports = "443", sources = { addresses = ["0.0.0.0/0"] } }
+ ]
+ },
+
+ tags = ["web", "production"]
+ },
+
+ load_balancer = digitalocean.LoadBalancer & {
+ name = "web-lb",
+ region = "nyc3",
+ algorithm = "round_robin",
+
+ forwarding_rules = [
+ {
+ entry_protocol = "http",
+ entry_port = 80,
+ target_protocol = "http",
+ target_port = 8080
+ }
+ ],
+
+ health_check = {
+ protocol = "http",
+ port = 8080,
+ path = "/health",
+ check_interval_seconds = 10,
+ response_timeout_seconds = 5
+ }
+ },
+
+ database = digitalocean.Database & {
+ name = "app-db",
+ engine = "pg",
+ version = "14",
+ size = "db-s-1vcpu-1gb",
+ region = "nyc3",
+ multi_az = true
+ }
+}
+
+
+{
+ app_server = digitalocean.Droplet & {
+ name = "app-with-storage",
+ region = "nyc3",
+ size = "s-4vcpu-8gb",
+
+ volumes = [
+ {
+ size = 500,
+ name = "app-storage",
+ filesystem_type = "ext4"
+ }
+ ]
+ },
+
+ backup_storage = digitalocean.Volume & {
+ name = "backup-volume",
+ size = 1000,
+ region = "nyc3",
+ description = "Backup storage for app data"
+ }
+}
+
+
+
+Instance Sizing
+
+Start with smallest viable size (s-1vcpu-1gb)
+Monitor CPU/memory usage
+Scale vertically for predictable workloads
+Use autoscaling with Kubernetes for bursty workloads
+
+SSH Key Management
+
+Use SSH keys instead of passwords
+Store private keys securely
+Rotate keys regularly (at least yearly)
+Different keys for different environments
+
+Monitoring
+
+Enable monitoring on all Droplets
+Set up alerting for CPU > 80%
+Monitor disk usage
+Alert on high memory usage
+
+
+Principle of Least Privilege
+
+Only allow necessary ports
+Specify source IPs when possible
+Use SSH key authentication (no passwords)
+Block unnecessary outbound traffic
+
+Default Rules
+# Minimal firewall for web server
+inbound_rules = [
+ { protocol = "tcp", ports = "22", sources = { addresses = ["YOUR_OFFICE_IP/32"] } },
+ { protocol = "tcp", ports = "80", sources = { addresses = ["0.0.0.0/0"] } },
+ { protocol = "tcp", ports = "443", sources = { addresses = ["0.0.0.0/0"] } }
+],
+
+outbound_rules = [
+ { protocol = "tcp", destinations = { addresses = ["0.0.0.0/0"] } },
+ { protocol = "udp", ports = "53", destinations = { addresses = ["0.0.0.0/0"] } }
+]
+
+
+High Availability
+
+Use 3-node clusters for production
+Enable automated backups (retain for 30 days)
+Test backup restore procedures
+Use read replicas for scaling reads
+
+Connection Pooling
+
+Enable PgBouncer for PostgreSQL
+Set pool size based on app connections
+Monitor connection count
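
One widely cited starting point for pool sizing (a general PostgreSQL heuristic, not a DigitalOcean-specific rule) is connections ≈ cores × 2 + disks; treat it as a first guess to refine under load:

```shell
# First-guess pool size from a common heuristic: cores * 2 + disks.
# The values below are illustrative for a 2-vCPU droplet with one SSD.
cores=2
disks=1
pool=$((cores * 2 + disks))
echo "suggested pool size: $pool"
```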
+
+Backup Strategy
+
+Daily automated backups (DigitalOcean manages)
+Export critical data to Spaces weekly
+Test restore procedures monthly
+Keep backups for minimum 30 days
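
For the weekly export, a dated, lexically sortable object key makes retention and cleanup scripting easier. A sketch in which the prefix and database name are our own conventions (the dump and upload commands themselves are omitted):

```shell
# Build a dated, sortable object key for the weekly export to Spaces.
# The "backups/appdb" prefix is illustrative, not a platform convention.
STAMP="$(date +%Y%m%d)"
KEY="backups/appdb/appdb-${STAMP}.sql.gz"
echo "$KEY"
```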
+
+
+Data Persistence
+
+Use volumes for stateful data
+Don’t store critical data on Droplet root volume
+Enable automatic snapshots
+Document mount points
+
+Capacity Planning
+
+Monitor volume usage
+Expand volumes as needed (no downtime)
+Delete old snapshots to save costs
+
+
+Health Checks
+
+Set appropriate health check paths
+Conservative intervals (10-30 seconds)
+Longer timeout to avoid false positives
+Require several consecutive successes before marking a backend healthy
+
+Sticky Sessions
+
+Use if application requires session affinity
+Set appropriate TTL (300-3600 seconds)
+Monitor for imbalanced traffic
+
+
+Droplet Sizing
+
+Right-size instances to actual needs
+Use snapshots to create custom images
+Destroy unused Droplets
+
+Reserved Droplets
+
+Pre-pay for predictable workloads
+25-30% savings vs hourly
+
+Object Storage
+
+Use lifecycle policies to delete old data
+Compress data before uploading
+Use CDN for frequent access (reduces egress)
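
Compressing before upload pays off twice, on stored bytes and on egress. A quick demonstration with a synthetic, highly compressible sample file:

```shell
# Show that compressible data shrinks substantially before upload, which
# reduces both storage and egress charges. The sample file is synthetic.
sample="$(mktemp)"
seq 1 5000 > "$sample"
gzip -c "$sample" > "$sample.gz"
orig=$(wc -c < "$sample")
comp=$(wc -c < "$sample.gz")
echo "original: $orig bytes, compressed: $comp bytes"
```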
+
+
+
+Symptoms : Cannot SSH to Droplet, connection timeout
+Diagnosis :
+
+Verify Droplet status in DigitalOcean Control Panel
+Check firewall rules allow port 22 from your IP
+Verify SSH key is loaded in SSH agent: ssh-add -l
+Check Droplet has public IP assigned
+
+Solution :
+# Add to firewall
+doctl compute firewall add-rules firewall-id \
+  --inbound-rules "protocol:tcp,ports:22,address:YOUR_IP/32"
+
+# Test SSH
+ssh -v -i ~/.ssh/key.pem root@DROPLET_IP
+
+# Or use VNC console in Control Panel
+
+
+Symptoms : Volume created but not accessible, mount fails
+Diagnosis :
+# Check volume attachment
+doctl compute volume list
+
+# On Droplet, check block devices
+lsblk
+
+# Check filesystem
+sudo file -s /dev/sdb
+
+Solution :
+# Format volume (only first time)
+sudo mkfs.ext4 /dev/sdb
+
+# Create mount point
+sudo mkdir -p /data
+
+# Mount volume
+sudo mount /dev/sdb /data
+
+# Make permanent by editing /etc/fstab
+echo '/dev/sdb /data ext4 defaults,nofail,discard 0 0' | sudo tee -a /etc/fstab
+
+
+Symptoms : Backends marked unhealthy, traffic not flowing
+Diagnosis :
+# Test health check endpoint manually
+curl -i http://BACKEND_IP:8080/health
+
+# Check backend logs
+ssh backend-server
+tail -f /var/log/app.log
+
+Solution :
+
+Verify endpoint returns HTTP 200
+Check backend firewall allows load balancer IPs
+Adjust health check timing (increase timeout)
+Verify backend service is running
+
+
+Symptoms : Cannot connect to managed database
+Diagnosis :
+# Test connectivity from Droplet
+psql -h db-host.db.ondigitalocean.com -U admin -d defaultdb
+
+# Check firewall
+doctl compute firewall list-rules firewall-id
+
+Solution :
+
+Add Droplet to database’s trusted sources
+Verify connection string (host, port, username)
+Check database is accepting connections
+For 3-node cluster, use connection pool endpoint
+
+
+DigitalOcean provides a simple, transparent platform ideal for developers and small teams. Its key advantages are:
+✓ Simple pricing and transparent costs
+✓ Excellent documentation
+✓ Good performance for typical workloads
+✓ Managed services (databases, Kubernetes)
+✓ Global presence
+✓ Developer-friendly interface
+Start small with a single Droplet and expand to managed services as your application grows.
+For more information, visit: DigitalOcean Documentation
+
+This guide covers using Hetzner Cloud as a provider in the provisioning system. Hetzner is renowned for competitive pricing, powerful infrastructure, and outstanding performance, making it ideal for cost-conscious teams and performance-critical workloads.
+
+
+
+Hetzner Cloud provides European cloud infrastructure with exceptional value. Key characteristics:
+
+Best Price/Performance : Lower cost than AWS, competitive with DigitalOcean
+European Focus : Primary datacenter in Germany with compliance emphasis
+Powerful Hardware : Modern CPUs, NVMe storage, 10Gbps networking
+Flexible Billing : Hourly or monthly, no long-term contracts
+API-First : Comprehensive RESTful API for automation
+
+
+Hetzner bills hourly, capped at the monthly rate (a month counted as roughly 30.4 days):
+
+Cloud Servers : €0.003-0.072/hour (~€3-200/month depending on size)
+Volumes : €0.026/GB/month
+Data Transfer : €0.12/GB outbound (generous included traffic)
+Floating IP : Free (1 per server)
+
+
+Provider Monthly Hourly Notes
+Hetzner CX21 €6.90 €0.0095 Best value
+DigitalOcean $24 $0.0357 3.5x more expensive
+AWS t3.medium $60+ $0.0896 On-demand pricing
+UpCloud $15 $0.0223 Mid-range
+
+
+
+Resource Product Name Status
+Compute Cloud Servers ✓ Full support
+Block Storage Volumes ✓ Full support
+Object Storage Object Storage ✓ Full support
+Load Balancer Load Balancer ✓ Full support
+Network vSwitch/Network ✓ Full support
+Firewall Firewall ✓ Full support
+DNS — ✓ Via Hetzner DNS
+Bare Metal Dedicated Servers ✓ Available
+Floating IP Floating IP ✓ Full support
+
+
+
+
+Hetzner is ideal for :
+
+Cost-Conscious Teams : 50-75% cheaper than AWS
+European Operations : Primary EU presence
+Predictable Workloads : Good for sustained compute
+Performance-Critical : Modern hardware, 10Gbps networking
+Self-Managed Services : Full control over infrastructure
+Bulk Computing : Good pricing for 10-100+ servers
+
+Hetzner is NOT ideal for :
+
+Managed Services : Limited compared to AWS/DigitalOcean
+Global Distribution : Limited regions (mainly EU + US)
+Windows Workloads : Limited Windows support
+Complex Compliance : Fewer certifications than AWS
+Hands-Off Operations : Need to manage own infrastructure
+
+
+Total Cost of Ownership Comparison (5 servers, 100 GB storage):
+Provider Compute Storage Data Transfer Monthly
+Hetzner €34.50 €2.60 Included €37.10
+DigitalOcean $120 $10 Included $130
+AWS $300 $100 $450 $850
+
+
+Hetzner is 3.5x cheaper than DigitalOcean and 23x cheaper than AWS for this scenario.
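
These multiples follow directly from the table, currency mixing aside:

```shell
# Reproduce the cost multiples from the table above; euro/dollar
# conversion is ignored, as in the prose.
awk 'BEGIN {
  hetzner = 37.10; digitalocean = 130; aws = 850
  printf "DigitalOcean/Hetzner: %.1fx\n", digitalocean / hetzner
  printf "AWS/Hetzner: %.1fx\n", aws / hetzner
}'
```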
+
+
+
+Hetzner Cloud account at Hetzner Console
+API token from Cloud Console
+SSH key uploaded to Hetzner
+hcloud CLI installed (optional but recommended)
+Provisioning system with Hetzner provider plugin
+
+
+
+Log in to Hetzner Cloud Console
+Go to Projects > Your Project > Security > API Tokens
+Click Generate Token
+Name it (for example, “provisioning”)
+Select Read & Write permission
+Copy the token immediately (only shown once)
+
+
+# Add to ~/.bashrc, ~/.zshrc, or env file
+export HCLOUD_TOKEN="MC4wNTI1YmE1M2E4YmE0YTQzMTQ..."
+
+# Optional: Set default location
+export HCLOUD_LOCATION="nbg1"
+
+
+# macOS
+brew install hcloud
+
+# Linux
+curl -L https://github.com/hetznercloud/cli/releases/download/v1.x.x/hcloud-linux-amd64.tar.gz | tar xz
+sudo mv hcloud /usr/local/bin/
+
+# Verify
+hcloud version
+
+
+# Upload your SSH public key
+hcloud ssh-key create --name "provisioning-key" \
+ --public-key-from-file ~/.ssh/id_rsa.pub
+
+# List keys
+hcloud ssh-key list
+
+
+Create or update config.toml in your workspace:
+[providers.hetzner]
+enabled = true
+token_env = "HCLOUD_TOKEN"
+default_location = "nbg1"
+default_datacenter = "nbg1-dc8"
+
+[workspace]
+provider = "hetzner"
+region = "nbg1"
+
+
+
+Hetzner’s core compute offering with outstanding performance.
+Available Server Types :
+Type vCPU RAM SSD Storage Network Monthly Price
+CX11 1 1 GB 25 GB 1Gbps €3.29
+CX21 2 4 GB 40 GB 1Gbps €6.90
+CX31 2 8 GB 80 GB 1Gbps €13.80
+CX41 4 16 GB 160 GB 1Gbps €27.60
+CX51 8 32 GB 240 GB 10Gbps €55.20
+CPX21 4 8 GB 80 GB 10Gbps €20.90
+CPX31 8 16 GB 160 GB 10Gbps €41.80
+CPX41 16 32 GB 360 GB 10Gbps €83.60
+
+
+Key Features :
+
+NVMe SSD storage
+Hourly or monthly billing
+Automatic backups
+SSH key management
+Floating IPs for high availability
+Network interfaces for multi-homing
+Cloud-init support
+IPMI/KVM console access
+
+
+Persistent block storage that can be attached/detached.
+Characteristics :
+
+€0.026/GB/month (highly affordable)
+SSD-based with good performance
+Up to 10 TB capacity
+Snapshots for backup
+Can attach to multiple servers (read-only)
+Automatic snapshots available
+
+
+S3-compatible object storage.
+Characteristics :
+
+€0.025/GB/month
+S3-compatible API
+Versioning and lifecycle policies
+Bucket policy support
+CORS configuration
+
+
+Static IP addresses that can be reassigned.
+Characteristics :
+
+Free (1 per server, additional €0.50/month)
+IPv4 and IPv6 support
+Enable high availability and failover
+DNS pointing
+
+
+Layer 4/7 load balancing.
+Available Plans :
+
+LB11: €5/month (100 Mbps)
+LB21: €10/month (1 Gbps)
+LB31: €20/month (10 Gbps)
+
+Features :
+
+Health checks
+SSL/TLS termination
+Path/host-based routing
+Sticky sessions
+Algorithms: round robin, least connections
+
+
+Virtual switching for private networking.
+Characteristics :
+
+Private networks between servers
+Subnets within networks
+Routes and gateways
+Firewall integration
+
+
+Network firewall rules.
+Features :
+
+Per-server or per-network
+Stateful filtering
+Protocol-specific rules
+Source/destination filtering
+
+
+
+let hetzner = import "../../extensions/providers/hetzner/nickel/main.ncl" in
+
+hetzner.Server & {
+ # Required
+ name = "my-server",
+ server_type = "cx21",
+ image = "ubuntu-22.04",
+
+ # Optional
+ location = "nbg1", # nbg1, fsn1, hel1, ash
+ datacenter = "nbg1-dc8",
+ ssh_keys = ["key-name"],
+ count = 1,
+ public_net = {
+ enable_ipv4 = true,
+ enable_ipv6 = true
+ },
+
+ # Volumes to attach
+ volumes = [
+ {
+ size = 100,
+ format = "ext4",
+ automount = true
+ }
+ ],
+
+ # Network configuration
+ networks = [
+ {
+ network_name = "private-net",
+ ip = "10.0.1.5"
+ }
+ ],
+
+ # Firewall rules
+ firewall_rules = [
+ {
+ direction = "in",
+ source_ips = ["0.0.0.0/0", "::/0"],
+ destination_port = "22",
+ protocol = "tcp"
+ },
+ {
+ direction = "in",
+ source_ips = ["0.0.0.0/0", "::/0"],
+ destination_port = "80",
+ protocol = "tcp"
+ },
+ {
+ direction = "in",
+ source_ips = ["0.0.0.0/0", "::/0"],
+ destination_port = "443",
+ protocol = "tcp"
+ }
+ ],
+
+ # Labels for organization
+ labels = {
+ "environment" = "production",
+ "application" = "web"
+ },
+
+ # Startup script
+ user_data = "#!/bin/bash\napt-get update\napt-get install -y nginx"
+}
+
+
+hetzner.Volume & {
+ name = "data-volume",
+ size = 100, # GB
+ location = "nbg1",
+ automount = true,
+ format = "ext4",
+
+ # Attach to server
+ attachment = {
+ server = "server-name",
+ mount_point = "/data"
+ }
+}
+
+
+hetzner.LoadBalancer & {
+ name = "web-lb",
+ load_balancer_type = "lb11",
+ network_zone = "eu-central",
+ location = "nbg1",
+
+ # Services (backend targets)
+ services = [
+ {
+ protocol = "http",
+ listen_port = 80,
+ destination_port = 8080,
+ health_check = {
+ protocol = "http",
+ port = 8080,
+ interval = 15,
+ timeout = 10,
+ unhealthy_threshold = 3
+ },
+ http = {
+ sticky_sessions = true,
+ http_only = true,
+ certificates = []
+ }
+ }
+ ]
+}
+
+
+hetzner.Firewall & {
+ name = "web-firewall",
+ labels = { "env" = "prod" },
+
+ rules = [
+ # Allow SSH from management network
+ {
+ direction = "in",
+ source_ips = ["203.0.113.0/24"],
+ destination_port = "22",
+ protocol = "tcp"
+ },
+ # Allow HTTP/HTTPS from anywhere
+ {
+ direction = "in",
+ source_ips = ["0.0.0.0/0", "::/0"],
+ destination_port = "80",
+ protocol = "tcp"
+ },
+ {
+ direction = "in",
+ source_ips = ["0.0.0.0/0", "::/0"],
+ destination_port = "443",
+ protocol = "tcp"
+ },
+    # Allow outbound TCP and UDP
+    {
+      direction = "out",
+      destination_ips = ["0.0.0.0/0", "::/0"],
+      protocol = "tcp"
+    },
+    {
+      direction = "out",
+      destination_ips = ["0.0.0.0/0", "::/0"],
+      protocol = "udp"
+    }
+ ]
+}
+
+
+
+let hetzner = import "../../extensions/providers/hetzner/nickel/main.ncl" in
+
+{
+ workspace_name = "simple-web",
+
+ web_server = hetzner.Server & {
+ name = "web-01",
+ server_type = "cx21",
+ image = "ubuntu-22.04",
+ location = "nbg1",
+ ssh_keys = ["provisioning"],
+
+    user_data = m%"
+    #!/bin/bash
+    apt-get update
+    apt-get install -y nginx
+    systemctl start nginx
+    systemctl enable nginx
+    "%,
+
+ firewall_rules = [
+ { direction = "in", source_ips = ["0.0.0.0/0"], destination_port = "22", protocol = "tcp" },
+ { direction = "in", source_ips = ["0.0.0.0/0"], destination_port = "80", protocol = "tcp" },
+ { direction = "in", source_ips = ["0.0.0.0/0"], destination_port = "443", protocol = "tcp" }
+ ],
+
+ labels = { "service" = "web" }
+ }
+}
+
+
+{
+ # Backend servers
+ app_servers = hetzner.Server & {
+ name = "app",
+ server_type = "cx31",
+ image = "ubuntu-22.04",
+ location = "nbg1",
+ count = 3,
+ ssh_keys = ["provisioning"],
+
+ volumes = [
+ {
+ size = 100,
+ format = "ext4",
+ automount = true
+ }
+ ],
+
+ firewall_rules = [
+ { direction = "in", source_ips = ["0.0.0.0/0"], destination_port = "22", protocol = "tcp" },
+ { direction = "in", source_ips = ["0.0.0.0/0"], destination_port = "8080", protocol = "tcp" }
+ ],
+
+ labels = { "tier" = "application" }
+ },
+
+ # Load balancer
+ lb = hetzner.LoadBalancer & {
+ name = "web-lb",
+ load_balancer_type = "lb11",
+ location = "nbg1",
+
+ services = [
+ {
+ protocol = "http",
+ listen_port = 80,
+ destination_port = 8080,
+ health_check = {
+ protocol = "http",
+ port = 8080,
+ interval = 15
+ }
+ }
+ ]
+ },
+
+ # Persistent storage
+ shared_storage = hetzner.Volume & {
+ name = "shared-data",
+ size = 500,
+ location = "nbg1",
+ automount = false,
+ format = "ext4"
+ }
+}
+
+
+{
+ # Compute nodes with 10Gbps networking
+ compute_nodes = hetzner.Server & {
+ name = "compute",
+ server_type = "cpx41", # 16 vCPU, 32 GB, 10Gbps
+ image = "ubuntu-22.04",
+ location = "nbg1",
+ count = 5,
+
+ volumes = [
+ {
+ size = 500,
+ format = "ext4",
+ automount = true
+ }
+ ],
+
+ labels = { "tier" = "compute" }
+ },
+
+ # Storage node
+ storage = hetzner.Server & {
+ name = "storage",
+ server_type = "cx41",
+ image = "ubuntu-22.04",
+ location = "nbg1",
+
+ volumes = [
+ {
+ size = 2000,
+ format = "ext4",
+ automount = true
+ }
+ ],
+
+ labels = { "tier" = "storage" }
+ },
+
+ # High-capacity volume for data
+ data_volume = hetzner.Volume & {
+ name = "compute-data",
+ size = 5000,
+ location = "nbg1"
+ }
+}
+
+
+
+Performance Tiers :
+
+
+CX Series (Standard): Best value for most workloads
+
+CX21: Default choice for 2-4 GB workloads
+CX41: Good mid-range option
+
+
+
+CPX Series (AMD-based, CPU-optimized): Better for CPU-intensive workloads
+
+CPX21: Outstanding value at €20.90/month
+CPX31: Good for compute workloads
+
+
+
+CCX Series (AMD EPYC): High-performance options
+
+
+Selection Criteria :
+
+Start with CX21 (€6.90/month) for testing
+Scale to CPX21 (€20.90/month) for CPU-bound workloads
+Use CX31+ (€13.80+) for balanced workloads with data
+
+
+High Availability :
+# Use Floating IPs for failover
+floating_ip = hetzner.FloatingIP & {
+ name = "web-ip",
+ ip_type = "ipv4",
+ location = "nbg1"
+}
+
+# Attach to primary server, reassign on failure
+attachment = {
+ server = "primary-server"
+}
+
+Private Networking :
+# Create private network for internal communication
+private_network = hetzner.Network & {
+ name = "private",
+ ip_range = "10.0.0.0/8",
+ labels = { "env" = "prod" }
+}
+
+
+Volume Sizing :
+
+Estimate storage needs: app + data + logs + backups
+Add 20% buffer for growth
+Monitor usage monthly
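
The 20% buffer rule above is easy to script. A sketch in which rounding up to the next 10 GB is our own convention:

```shell
# Size a volume: sum the component estimates, add a 20% growth buffer,
# and round up to the next 10 GB. Component values are illustrative.
app=40; data=120; logs=10; backups=30
total=$((app + data + logs + backups))
buffered=$(( (total * 120 + 99) / 100 ))   # +20%, rounded up
size=$(( (buffered + 9) / 10 * 10 ))       # next multiple of 10 GB
echo "provision ${size} GB"
```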
+
+Backup Strategy :
+
+Enable automatic snapshots
+Regular manual snapshots for important data
+Test restore procedures
+Keep snapshots for minimum 30 days
+
+
+Principle of Least Privilege :
+# Only open necessary ports
+firewall_rules = [
+ # SSH from management IP only
+ { direction = "in", source_ips = ["203.0.113.1/32"], destination_port = "22", protocol = "tcp" },
+
+ # HTTP/HTTPS from anywhere
+ { direction = "in", source_ips = ["0.0.0.0/0", "::/0"], destination_port = "80", protocol = "tcp" },
+ { direction = "in", source_ips = ["0.0.0.0/0", "::/0"], destination_port = "443", protocol = "tcp" },
+
+ # Database replication (internal only)
+ { direction = "in", source_ips = ["10.0.0.0/8"], destination_port = "5432", protocol = "tcp" }
+]
+
+
+Check Server Metrics :
+hcloud server metrics --type cpu <server-name>
+
+Health Check Patterns :
+
+HTTP endpoint returning 200
+Custom health check scripts
+Regular resource verification
+
+
+Reserved Servers (Pre-pay for 12 months):
+
+25% discount vs hourly
+Good for predictable workloads
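
The yearly effect of the 25% pre-pay discount, using the CX21 rate quoted earlier in this guide:

```shell
# Yearly saving from a 12-month pre-pay at a 25% discount. The monthly
# price is the CX21 rate from the server table above.
awk 'BEGIN {
  m = 6.90
  printf "on-demand: %.2f EUR/yr, pre-paid: %.2f EUR/yr, saved: %.2f EUR\n",
         m * 12, m * 12 * 0.75, m * 12 * 0.25
}'
```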
+
+Spot Pricing (Coming):
+
+Watch for additional discounts
+Off-peak capacity
+
+Resource Cleanup :
+
+Delete unused volumes
+Remove old snapshots
+Consolidate small servers
+
+
+
+Symptoms : SSH timeout or connection refused
+Diagnosis :
+# Check server status
+hcloud server list
+
+# Verify firewall allows port 22
+hcloud firewall describe firewall-name
+
+# Check if server has public IPv4
+hcloud server describe server-name
+
+Solution :
+# Update firewall to allow SSH from your IP
+hcloud firewall add-rule firewall-id \
+  --direction in --protocol tcp --port 22 --source-ips YOUR_IP/32
+
+# Or reset SSH using rescue mode via console
+hcloud server request-console server-id
+
+
+Symptoms : Volume created but cannot attach, mount fails
+Diagnosis :
+# Check volume status
+hcloud volume list
+
+# Check server has available attachment slot
+hcloud server describe server-name
+
+Solution :
+# Format volume (first time only)
+sudo mkfs.ext4 /dev/sdb
+
+# Mount manually
+sudo mkdir -p /data
+sudo mount /dev/sdb /data
+
+# Make persistent
+echo '/dev/sdb /data ext4 defaults,nofail 0 0' | sudo tee -a /etc/fstab
+sudo mount -a
+
+
+Symptoms : Unexpected egress charges
+Diagnosis :
+# Check server network traffic
+sar -n DEV 1 100
+
+# Monitor connection patterns
+netstat -an | grep ESTABLISHED | wc -l
+
+Solution :
+
+Use Hetzner Object Storage for static files
+Cache content locally
+Optimize data transfer patterns
+Consider using Content Delivery Network
+
+
+Symptoms : LB created but backends not receiving traffic
+Diagnosis :
+# Check LB status
+hcloud load-balancer describe lb-name
+
+# Test backend directly
+curl -H "Host: example.com" http://backend-ip:8080/health
+
+Solution :
+
+Ensure backends have firewall allowing LB traffic
+Verify health check endpoint works
+Check backend service is running
+Review health check configuration
+
+
+Hetzner provides exceptional value with modern infrastructure:
+✓ Best price/performance ratio (50%+ cheaper than DigitalOcean)
+✓ Excellent European presence
+✓ Powerful hardware (NVMe, 10Gbps networking)
+✓ Flexible deployment options
+✓ Great API and CLI tools
+Start with CX21 servers (€6.90/month) and scale based on needs.
+For more information, visit: Hetzner Cloud Documentation
+
+
+
This directory contains consolidated quick reference guides organized by topic.
@@ -77837,7 +74554,7 @@ ss -tlnp | grep -E "8200|8081|8083|8082|9090|8080|8084|8085"
ps aux | grep "cargo run --release" | grep -v grep
-
+
# List all available schemas
ls -la provisioning/schemas/platform/schemas/
@@ -77946,7 +74663,7 @@ etcdctl --endpoints=http://etcd:2379 snapshot save backup.db
etcdctl --endpoints=http://etcd:2379 snapshot restore backup.db
-
+
# Vault overrides
export VAULT_SERVER_URL=http://vault-custom:8200
@@ -78038,7 +74755,7 @@ curl -s http://localhost:9090/api/v1/metrics/errors | jq .
-
+
# Check port in use
lsof -i :8200
ss -tlnp | grep 8200
@@ -78099,7 +74816,7 @@ dig vault.internal
echo "127.0.0.1 vault.internal" >> /etc/hosts
-
+
# 1. Stop everything
pkill -9 -f "cargo run"
@@ -78154,7 +74871,7 @@ cp -r /backup/vault-data/* /tmp/provisioning-solo/vault/
chmod -R 755 /tmp/provisioning-solo/vault/
-
+
# Configuration files (PUBLIC - version controlled)
provisioning/schemas/platform/ # Nickel schemas & defaults
provisioning/.typedialog/platform/ # Forms & generation scripts
@@ -78278,7 +74995,7 @@ tar -czf diagnostics-$(date +%Y%m%d-%H%M%S).tar.gz \
-✅ Document ingestion (Markdown, KCL, Nushell)
+✅ Document ingestion (Markdown, Nickel, Nushell)
✅ Vector embeddings (OpenAI + local ONNX fallback)
✅ SurrealDB vector storage with HNSW
✅ RAG agent with Claude API
@@ -78287,7 +75004,7 @@ tar -czf diagnostics-$(date +%Y%m%d-%H%M%S).tar.gz \
✅ Zero compiler warnings
✅ ~2,500 lines of production code
-
+
provisioning/platform/rag/src/
├── agent.rs - RAG orchestration
├── llm.rs - Claude API client
@@ -78296,169 +75013,125 @@ tar -czf diagnostics-$(date +%Y%m%d-%H%M%S).tar.gz \
├── ingestion.rs - Document pipeline
├── embeddings.rs - Vector generation
└── ... (5 more modules)
-```plaintext
-
----
-
-## 🚀 Quick Start
-
-### Build & Test
-
-```bash
-cd /Users/Akasha/project-provisioning/provisioning/platform
+
+
+
+
+cd /Users/Akasha/project-provisioning/provisioning/platform
cargo test -p provisioning-rag
-```plaintext
-
-### Run Example
-
-```bash
-cargo run --example rag_agent
-```plaintext
-
-### Check Tests
-
-```bash
-cargo test -p provisioning-rag --lib
+
+
+cargo run --example rag_agent
+
+
+cargo test -p provisioning-rag --lib
# Result: test result: ok. 22 passed; 0 failed
-```plaintext
-
----
-
-## 📚 Documentation Files
-
-| File | Purpose |
-|------|---------|
-| `PHASE5_CLAUDE_INTEGRATION_SUMMARY.md` | Claude API details |
-| `PHASE6_MCP_INTEGRATION_SUMMARY.md` | MCP integration guide |
-| `RAG_SYSTEM_COMPLETE_SUMMARY.md` | Overall architecture |
-| `RAG_SYSTEM_STATUS_SUMMARY.md` | Current status & metrics |
-| `PHASE7_ADVANCED_RAG_FEATURES_PLAN.md` | Future roadmap |
-| `RAG_IMPLEMENTATION_COMPLETE.md` | Final status report |
-
----
-
-## ⚙️ Configuration
-
-### Environment Variables
-
-```bash
-# Required for Claude integration
+
+
+
+File                                        Purpose
+PHASE5_CLAUDE_INTEGRATION_SUMMARY.md        Claude API details
+PHASE6_MCP_INTEGRATION_SUMMARY.md           MCP integration guide
+RAG_SYSTEM_COMPLETE_SUMMARY.md              Overall architecture
+RAG_SYSTEM_STATUS_SUMMARY.md                Current status & metrics
+PHASE7_ADVANCED_RAG_FEATURES_PLAN.md        Future roadmap
+RAG_IMPLEMENTATION_COMPLETE.md              Final status report
+
+
+
+
+
+# Required for Claude integration
export ANTHROPIC_API_KEY="sk-..."
# Optional for OpenAI embeddings
export OPENAI_API_KEY="sk-..."
-```plaintext
-
-### SurrealDB
-
-- Default: In-memory for testing
-- Production: Network mode with persistence
-
-### Model
-
-- Default: claude-opus-4-1
-- Customizable via configuration
-
----
-
-## 🎯 Key Capabilities
-
-### 1. Ask Questions
-
-```rust
-let response = agent.ask("How do I deploy?").await?;
-// Returns: answer + sources + confidence
-```plaintext
-
-### 2. Semantic Search
-
-```rust
-let results = retriever.search("deployment", Some(5)).await?;
-// Returns: top-5 similar documents
-```plaintext
-
-### 3. Workspace Awareness
-
-```rust
-let context = workspace.enrich_query("deploy");
-// Automatically includes: taskservs, providers, infrastructure
-```plaintext
-
-### 4. MCP Integration
-
-- Tools: `rag_answer_question`, `semantic_search_rag`, `rag_system_status`
-- Ready when MCP server re-enabled
-
----
-
-## 📊 Performance
-
-| Metric | Value |
-|--------|-------|
-| Query Time (P95) | 450ms |
-| Throughput | 100+ qps |
-| Cost | $0.008/query |
-| Memory | ~200MB |
-| Test Pass Rate | 100% |
-
----
-
-## ✅ What's Working
-
-- ✅ Multi-format document chunking
-- ✅ Vector embedding generation
-- ✅ Semantic similarity search
-- ✅ RAG question answering
-- ✅ Claude API integration
-- ✅ Workspace context enrichment
-- ✅ Error handling & fallbacks
-- ✅ Comprehensive testing
-- ✅ MCP tool scaffolding
-- ✅ Production-ready code quality
-
----
-
-## 🔧 What's Not Implemented (Phase 7)
-
-Coming soon (next phase):
-
-- Response caching (70% hit rate planned)
-- Token streaming (better UX)
-- Function calling (Claude invokes tools)
-- Hybrid search (vector + keyword)
-- Multi-turn conversations
-- Query optimization
-
----
-
-## 🎯 Next Steps
-
-### This Week
-
-1. Review status & documentation
-2. Get feedback on Phase 7 priorities
-3. Set up monitoring infrastructure
-
-### Next Week (Phase 7a)
-
-1. Implement response caching
-2. Add streaming responses
-3. Deploy Prometheus metrics
-
-### Weeks 3-4 (Phase 7b)
-
-1. Implement function calling
-2. Add hybrid search
-3. Support conversations
-
----
-
-## 📞 How to Use
-
-### As a Library
-
-```rust
-use provisioning_rag::{RagAgent, DbConnection, RetrieverEngine};
+
+
+
+Default: In-memory for testing
+Production: Network mode with persistence
+
+
+
+Default: claude-opus-4-1
+Customizable via configuration
+
+
+
+
+let response = agent.ask("How do I deploy?").await?;
+// Returns: answer + sources + confidence
+
+let results = retriever.search("deployment", Some(5)).await?;
+// Returns: top-5 similar documents
+
+let context = workspace.enrich_query("deploy");
+// Automatically includes: taskservs, providers, infrastructure
+
+
+Tools: rag_answer_question, semantic_search_rag, rag_system_status
+Ready when MCP server re-enabled
+
+
+
+Metric | Value
+Query Time (P95) | 450 ms
+Throughput | 100+ qps
+Cost | $0.008/query
+Memory | ~200 MB
+Test Pass Rate | 100%
+
+
+
+
+
+✅ Multi-format document chunking
+✅ Vector embedding generation
+✅ Semantic similarity search
+✅ RAG question answering
+✅ Claude API integration
+✅ Workspace context enrichment
+✅ Error handling & fallbacks
+✅ Comprehensive testing
+✅ MCP tool scaffolding
+✅ Production-ready code quality
+
+
+
+Coming soon (next phase):
+
+Response caching (70% hit rate planned)
+Token streaming (better UX)
+Function calling (Claude invokes tools)
+Hybrid search (vector + keyword)
+Multi-turn conversations
+Query optimization
+
+
+
+
+
+Review status & documentation
+Get feedback on Phase 7 priorities
+Set up monitoring infrastructure
+
+
+
+Implement response caching
+Add streaming responses
+Deploy Prometheus metrics
+
+
+
+Implement function calling
+Add hybrid search
+Support conversations
+
+
+
+
+use provisioning_rag::{RagAgent, DbConnection, RetrieverEngine};
// Initialize
let db = DbConnection::new(config).await?;
@@ -78466,107 +75139,86 @@ let retriever = RetrieverEngine::new(config, db, embeddings).await?;
let agent = RagAgent::new(retriever, context, model)?;
// Ask questions
-let response = agent.ask("question").await?;
-```plaintext
-
-### Via MCP Server (When Enabled)
-
-```plaintext
-POST /tools/rag_answer_question
+let response = agent.ask("question").await?;
+
+POST /tools/rag_answer_question
{
"question": "How do I deploy?"
}
-```plaintext
-
-### From CLI (via example)
-
-```bash
-cargo run --example rag_agent
-```plaintext
-
----
-
-## 🔗 Integration Points
-
-### Current
-
-- Claude API ✅ (Anthropic)
-- SurrealDB ✅ (Vector store)
-- OpenAI ✅ (Embeddings)
-- Local ONNX ✅ (Fallback)
-
-### Future (Phase 7+)
-
-- Prometheus (metrics)
-- Streaming API
-- Function calling framework
-- Hybrid search engine
-
----
-
-## 🚨 Known Issues
-
-None - System is production ready
-
----
-
-## 📈 Metrics
-
-### Code Quality
-
-- Tests: 22/22 passing
-- Warnings: 0
-- Coverage: >90%
-- Type Safety: Complete
-
-### Performance
-
-- Latency P95: 450ms
-- Throughput: 100+ qps
-- Cost: $0.008/query
-- Memory: ~200MB
-
----
-
-## 💡 Tips
-
-### For Development
-
-1. Add tests alongside code
-2. Use `cargo test` frequently
-3. Check `cargo doc --open` for API
-4. Run clippy: `cargo clippy`
-
-### For Deployment
-
-1. Set API keys first
-2. Test with examples
-3. Monitor via metrics
-4. Setup log aggregation
-
-### For Debugging
-
-1. Enable debug logging: `RUST_LOG=debug`
-2. Check test examples
-3. Review error types in error.rs
-4. Use `cargo expand` for macros
-
----
-
-## 📚 Learning Resources
-
-1. **Module Documentation**: `cargo doc --open`
-2. **Example Code**: `examples/rag_agent.rs`
-3. **Tests**: Tests in each module
-4. **Architecture**: `RAG_SYSTEM_COMPLETE_SUMMARY.md`
-5. **Integration**: `PHASE6_MCP_INTEGRATION_SUMMARY.md`
-
----
-
-## 🎓 Architecture Overview
-
-```plaintext
-User Question
+
+
+cargo run --example rag_agent
+
+
+
+
+
+Claude API ✅ (Anthropic)
+SurrealDB ✅ (Vector store)
+OpenAI ✅ (Embeddings)
+Local ONNX ✅ (Fallback)
+
+
+
+Prometheus (metrics)
+Streaming API
+Function calling framework
+Hybrid search engine
+
+
+
+None. The system is production-ready.
+
+
+
+
+Tests: 22/22 passing
+Warnings: 0
+Coverage: >90%
+Type Safety: Complete
+
+
+
+Latency P95: 450 ms
+Throughput: 100+ qps
+Cost: $0.008/query
+Memory: ~200 MB
+
+
+
+
+
+Add tests alongside code
+Use cargo test frequently
+Check cargo doc --open for API
+Run clippy: cargo clippy
+
+
+
+Set API keys first
+Test with examples
+Monitor via metrics
+Set up log aggregation
+
+
+
+Enable debug logging: RUST_LOG=debug
+Check test examples
+Review error types in error.rs
+Use cargo expand for macros
+
+
+
+
+Module Documentation: cargo doc --open
+Example Code: examples/rag_agent.rs
+Tests: unit tests in each module
+Architecture: RAG_SYSTEM_COMPLETE_SUMMARY.md
+Integration: PHASE6_MCP_INTEGRATION_SUMMARY.md
+
+
+
+User Question
↓
Query Enrichment (Workspace context)
↓
@@ -78579,36 +75231,31 @@ Claude API Call
Answer Generation
↓
Return with Sources & Confidence
-```plaintext
-
----
-
-## 🔐 Security
-
-- ✅ API keys via environment
-- ✅ No hardcoded secrets
-- ✅ Input validation
-- ✅ Graceful error handling
-- ✅ No unsafe code
-- ✅ Type-safe throughout
-
----
-
-## 📞 Support
-
-- **Code Issues**: Check test examples
-- **Integration**: See PHASE6 docs
-- **Architecture**: See COMPLETE_SUMMARY.md
-- **API Details**: Run `cargo doc --open`
-- **Examples**: See `examples/rag_agent.rs`
-
----
-
-**Status**: 🟢 Production Ready
-**Last Verified**: 2025-11-06
-**All Tests**: ✅ Passing
-**Next Phase**: 🔵 Phase 7 (Ready to start)
+
+
+
+✅ API keys via environment
+✅ No hardcoded secrets
+✅ Input validation
+✅ Graceful error handling
+✅ No unsafe code
+✅ Type-safe throughout
+
+
+
+
+Code Issues: Check test examples
+Integration: See PHASE6 docs
+Architecture: See COMPLETE_SUMMARY.md
+API Details: Run cargo doc --open
+Examples: See examples/rag_agent.rs
+
+
+Status: 🟢 Production Ready
+Last Verified: 2025-11-06
+All Tests: ✅ Passing
+Next Phase: 🔵 Phase 7 (Ready to start)
# Login & Logout
@@ -78711,7 +75358,7 @@ just test-plugin-kms # Test KMS plugin
just test-plugin-orch # Test orchestrator plugin
just list-plugins # List installed plugins
-
+
just auth-login alice
just mfa-enroll-totp
@@ -78726,7 +75373,7 @@ just encrypt-config prod/secrets.yaml
just encrypt-env-files ./config
# Submit batch workflow
-just batch-submit workflows/deploy-prod.k
+just batch-submit workflows/deploy-prod.ncl
just batch-monitor <workflow-id>
@@ -78762,7 +75409,7 @@ just workflow-cleanup-failed
# Decrypt all files for migration
just decrypt-all-files ./encrypted
-
+
Help is Built-in : Every module has a help recipe
@@ -78800,31 +75447,27 @@ just decrypt-all-files ./encrypted
Orchestrator: 56 recipes
Total: 123 recipes
-
+
Full authentication guide: just auth-help
Full KMS guide: just kms-help
Full orchestrator guide: just orch-help
-Security system: docs/architecture/ADR-009-security-system-complete.md
+Security system: docs/architecture/adr-009-security-system-complete.md
Quick Start: just help → just auth-help → just auth-login <user> → just mfa-enroll-totp
Version: 1.0.0 | Date: 2025-10-06
-
+
# Install OCI tool (choose one)
brew install oras # Recommended
brew install skopeo # Alternative
go install github.com/google/go-containerregistry/cmd/crane@latest # Alternative
-```plaintext
-
----
-
-## Quick Start (5 Minutes)
-
-```bash
-# 1. Start local OCI registry
+
+
+
+# 1. Start local OCI registry
provisioning oci-registry start
# 2. Login to registry
@@ -78839,16 +75482,11 @@ provisioning oci list
# 5. Configure workspace to use OCI
# Edit: workspace/config/provisioning.yaml
# Add OCI dependency configuration
-```plaintext
-
----
-
-## Common Commands
-
-### Extension Discovery
-
-```bash
-# List all extensions
+
+
+
+
+# List all extensions
provisioning oci list
# Search for extensions
@@ -78859,12 +75497,9 @@ provisioning oci tags kubernetes
# Inspect extension details
provisioning oci inspect kubernetes:1.28.0
-```plaintext
-
-### Extension Installation
-
-```bash
-# Pull specific version
+
+
+# Pull specific version
provisioning oci pull kubernetes:1.28.0
# Pull to custom location
@@ -78874,12 +75509,9 @@ provisioning oci pull redis:7.0.0 --destination /path/to/extensions
provisioning oci pull postgres:15.0 \
--registry harbor.company.com \
--namespace provisioning-extensions
-```plaintext
-
-### Extension Publishing
-
-```bash
-# Login (one-time)
+
+
+# Login (one-time)
provisioning oci login localhost:5000
# Package extension
@@ -78890,12 +75522,9 @@ provisioning oci push ./extensions/taskservs/redis redis 1.0.0
# Verify publication
provisioning oci tags redis
-```plaintext
-
-### Dependency Management
-
-```bash
-# Resolve all dependencies
+
+
+# Resolve all dependencies
provisioning dep resolve
# Check for updates
@@ -78909,18 +75538,12 @@ provisioning dep tree kubernetes
# Validate dependencies
provisioning dep validate
-```plaintext
-
----
-
-## Configuration Templates
-
-### Workspace OCI Configuration
-
-**File**: `workspace/config/provisioning.yaml`
-
-```yaml
-dependencies:
+
+
+
+
+File: workspace/config/provisioning.yaml
+dependencies:
extensions:
source_type: "oci"
@@ -78940,14 +75563,10 @@ dependencies:
clusters:
- "oci://localhost:5000/provisioning-extensions/buildkit:0.12.0"
-```plaintext
-
-### Extension Manifest
-
-**File**: `extensions/{type}/{name}/manifest.yaml`
-
-```yaml
-name: redis
+
+
+File: extensions/{type}/{name}/manifest.yaml
+name: redis
type: taskserv
version: 1.0.0
description: Redis in-memory data store
@@ -78965,14 +75584,10 @@ platforms:
- linux/amd64
min_provisioning_version: "3.0.0"
-```plaintext
-
----
-
-## Extension Development Workflow
-
-```bash
-# 1. Create extension
+
+
+
+# 1. Create extension
provisioning generate extension taskserv redis
# 2. Develop extension
@@ -78993,16 +75608,11 @@ provisioning oci push ./extensions/taskservs/redis redis 1.0.0
# 7. Verify
provisioning oci inspect redis:1.0.0
-```plaintext
-
----
-
-## Registry Management
-
-### Local Registry (Development)
-
-```bash
-# Start
+
+
+
+
+# Start
provisioning oci-registry start
# Stop
@@ -79013,12 +75623,9 @@ provisioning oci-registry status
# Endpoint: localhost:5000
# Storage: ~/.provisioning/oci-registry/
-```plaintext
-
-### Remote Registry (Production)
-
-```bash
-# Login to Harbor
+
+
+# Login to Harbor
provisioning oci login harbor.company.com --username admin
# Configure in workspace
@@ -79028,14 +75635,10 @@ provisioning oci login harbor.company.com --username admin
# oci:
# endpoint: "https://harbor.company.com"
# tls_enabled: true
-```plaintext
-
----
-
-## Migration from Monorepo
-
-```bash
-# 1. Dry-run migration (preview)
+
+
+
+# 1. Dry-run migration (preview)
provisioning migrate-to-oci workspace_dev --dry-run
# 2. Migrate with publishing
@@ -79049,36 +75652,25 @@ provisioning migration-report workspace_dev
# 5. Rollback if needed
provisioning rollback-migration workspace_dev
-```plaintext
-
----
-
-## Troubleshooting
-
-### Registry Not Running
-
-```bash
-# Check if registry is running
+
+
+
+
+# Check if registry is running
curl http://localhost:5000/v2/_catalog
# Start if not running
provisioning oci-registry start
-```plaintext
-
-### Authentication Failed
-
-```bash
-# Login again
+
+
+# Login again
provisioning oci login localhost:5000
# Or use token file
echo "your-token" > ~/.provisioning/tokens/oci
-```plaintext
-
-### Extension Not Found
-
-```bash
-# Check registry connection
+
+
+# Check registry connection
provisioning oci config
# List available extensions
@@ -79086,12 +75678,9 @@ provisioning oci list
# Check namespace
provisioning oci list --namespace provisioning-extensions
-```plaintext
-
-### Dependency Resolution Failed
-
-```bash
-# Validate dependencies
+
+
+# Validate dependencies
provisioning dep validate
# Show dependency tree
@@ -79099,64 +75688,42 @@ provisioning dep tree kubernetes
# Check for updates
provisioning dep check-updates
-```plaintext
-
----
-
-## Best Practices
-
-### Versioning
-
-✅ **DO**: Use semantic versioning (MAJOR.MINOR.PATCH)
-
-```yaml
-version: 1.2.3
-```plaintext
-
-❌ **DON'T**: Use arbitrary versions
-
-```yaml
-version: latest # Unpredictable
-```plaintext
-
-### Dependencies
-
-✅ **DO**: Specify version constraints
-
-```yaml
-dependencies:
+
+
+
+
+✅ DO: Use semantic versioning (MAJOR.MINOR.PATCH)
+version: 1.2.3
+
+❌ DON’T: Use arbitrary versions
+version: latest # Unpredictable
+
+
+✅ DO: Specify version constraints
+dependencies:
  containerd: ">=1.7.0"
  etcd: "^3.5.0"   # caret: any compatible 3.x release at or above 3.5.0 (assuming standard semver caret semantics)
-```plaintext
-
-❌ **DON'T**: Use wildcards
-
-```yaml
-dependencies:
+
+❌ DON’T: Use wildcards
+dependencies:
containerd: "*" # Too permissive
-```plaintext
-
-### Security
-
-✅ **DO**:
-
-- Use TLS for production registries
-- Rotate authentication tokens
-- Scan for vulnerabilities
-
-❌ **DON'T**:
-
-- Use `--insecure` in production
-- Store passwords in config files
-
----
-
-## Common Patterns
-
-### Pull and Install
-
-```bash
-# Pull extension
+
+
+✅ DO:
+
+Use TLS for production registries
+Rotate authentication tokens
+Scan for vulnerabilities
+
+❌ DON’T:
+
+Use --insecure in production
+Store passwords in config files
+
+
+
+
+# Pull extension
provisioning oci pull kubernetes:1.28.0
# Resolve dependencies (auto-installs)
@@ -79164,12 +75731,9 @@ provisioning dep resolve
# Use extension
provisioning taskserv create kubernetes
-```plaintext
-
-### Update Extensions
-
-```bash
-# Check for updates
+
+
+# Check for updates
provisioning dep check-updates
# Update specific extension
@@ -79177,32 +75741,22 @@ provisioning dep update kubernetes
# Update all
provisioning dep resolve --update
-```plaintext
-
-### Copy Between Registries
-
-```bash
-# Copy from local to production
+
+
+# Copy from local to production
provisioning oci copy \
localhost:5000/provisioning-extensions/kubernetes:1.28.0 \
harbor.company.com/provisioning/kubernetes:1.28.0
-```plaintext
-
-### Publish Multiple Extensions
-
-```bash
-# Publish all taskservs
+
+
+# Publish all taskservs
for dir in (ls extensions/taskservs) {
    provisioning oci push $dir.name $dir.name 1.0.0
}
-```plaintext
-
----
-
-## Environment Variables
-
-```bash
-# Override registry
+
+
+
+# Override registry
export PROVISIONING_OCI_REGISTRY="harbor.company.com"
# Override namespace
@@ -79210,14 +75764,10 @@ export PROVISIONING_OCI_NAMESPACE="my-extensions"
# Set auth token
export PROVISIONING_OCI_TOKEN="your-token-here"
-```plaintext
-
----
-
-## File Locations
-
-```plaintext
-~/.provisioning/
+
+
+
+~/.provisioning/
├── oci-cache/ # OCI artifact cache
├── oci-registry/ # Local Zot registry data
└── tokens/
@@ -79230,20 +75780,16 @@ workspace/
├── providers/
├── taskservs/
└── clusters/
-```plaintext
-
----
-
-## Reference Links
-
-- [OCI Registry Guide](user/OCI_REGISTRY_GUIDE.md) - Complete user guide
-- [Multi-Repo Architecture](architecture/MULTI_REPO_ARCHITECTURE.md) - Architecture details
-- [Implementation Summary](../MULTI_REPO_OCI_IMPLEMENTATION_SUMMARY.md) - Technical details
-
----
-
-**Quick Help**: `provisioning oci --help` | `provisioning dep --help`
+
+
+
+
+Quick Help : provisioning oci --help | provisioning dep --help
Sudo password is needed when fix_local_hosts: true in your server configuration. This modifies:
@@ -79254,84 +75800,54 @@ workspace/
sudo -v && provisioning -c server create
-```plaintext
-
-Credentials cached for 5 minutes, no prompts during operation.
-
-### ✅ Alternative: Disable Host Fixing
-
-```kcl
-# In your settings.k or server config
+
+Credentials are cached for 5 minutes, so no password prompts occur during the operation.
+
+# In your settings.ncl or server config
fix_local_hosts = false
-```plaintext
-
-No sudo required, manual `/etc/hosts` management.
-
-### ✅ Manual: Enter Password When Prompted
-
-```bash
-provisioning -c server create
+
+No sudo required; /etc/hosts is managed manually.
+
+provisioning -c server create
# Enter password when prompted
# Or press CTRL-C to cancel
-```plaintext
-
-## CTRL-C Handling
-
-### CTRL-C Behavior
-
-**IMPORTANT**: Pressing CTRL-C at the sudo password prompt will interrupt the entire operation due to how Unix signals work. This is **expected behavior** and cannot be caught by Nushell.
-
-When you press CTRL-C at the password prompt:
-
-```plaintext
-Password: [CTRL-C]
+
+
+
+IMPORTANT: Pressing CTRL-C at the sudo password prompt will interrupt the entire operation due to how Unix signals work. This is expected behavior and cannot be caught by Nushell.
+When you press CTRL-C at the password prompt:
+Password: [CTRL-C]
Error: nu::shell::error
× Operation interrupted
-```plaintext
-
-**Why this happens**: SIGINT (CTRL-C) is sent to the entire process group, including Nushell itself. The signal propagates before exit code handling can occur.
-
-### Graceful Handling (Non-CTRL-C Cancellation)
-
-The system **does** handle these cases gracefully:
-
-**No password provided** (just press Enter):
-
-```plaintext
-Password: [Enter]
+
+Why this happens: SIGINT (CTRL-C) is sent to the entire process group, including Nushell itself. The signal propagates before exit code handling can occur.
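The group-wide delivery described above can be seen in a few lines of Bash (a standalone illustration, not part of the provisioning CLI): with job control enabled, a SIGINT aimed at a process group interrupts every member at once, which mirrors how the CLI, Nushell, and sudo all share the terminal's foreground group at the password prompt.

```shell
#!/usr/bin/env bash
# Illustration only: a SIGINT sent to a process group reaches every process in it.
demo() {
  set -m                      # job control: background jobs get their own process group
  (
    trap 'echo "child: got SIGINT"; exit 130' INT
    sleep 30
  ) &
  local pid=$!
  sleep 1                     # give the child time to install its trap
  kill -INT -- "-$pid"        # negative PID: signal the whole group (child + its sleep)
  wait "$pid"
  echo "parent: child exited with status $?"
}
out="$(demo)"
echo "$out"
```

Because sudo's prompt runs in the same foreground group as Nushell, no trap in the script layer can intercept the CTRL-C before the shell itself receives it.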
+
+The system does handle these cases gracefully:
+No password provided (just press Enter):
+Password: [Enter]
⚠ Operation cancelled - sudo password required but not provided
ℹ Run 'sudo -v' first to cache credentials, or run without --fix-local-hosts
-```plaintext
-
-**Wrong password 3 times**:
-
-```plaintext
-Password: [wrong]
+
+Wrong password 3 times:
+Password: [wrong]
Password: [wrong]
Password: [wrong]
⚠ Operation cancelled - sudo password required but not provided
ℹ Run 'sudo -v' first to cache credentials, or run without --fix-local-hosts
-```plaintext
-
-### Recommended Approach
-
-To avoid password prompts entirely:
-
-```bash
-# Best: Pre-cache credentials (lasts 5 minutes)
+
+
+To avoid password prompts entirely:
+# Best: Pre-cache credentials (lasts 5 minutes)
sudo -v && provisioning -c server create
# Alternative: Disable host modification
# Set fix_local_hosts = false in your server config
-```plaintext
-
-## Common Commands
-
-```bash
-# Cache sudo for 5 minutes
+
+
+# Cache sudo for 5 minutes
sudo -v
# Check if cached
@@ -79342,65 +75858,50 @@ alias prvng='sudo -v && provisioning'
# Use the alias
prvng -c server create
-```plaintext
-
-## Troubleshooting
-
-| Issue | Solution |
-|-------|----------|
-| "Password required" error | Run `sudo -v` first |
-| CTRL-C doesn't work cleanly | Update to latest version |
-| Too many password prompts | Set `fix_local_hosts = false` |
-| Sudo not available | Must disable `fix_local_hosts` |
-| Wrong password 3 times | Run `sudo -k` to reset, then `sudo -v` |
-
-## Environment-Specific Settings
-
-### Development (Local)
-
-```kcl
-fix_local_hosts = true # Convenient for local testing
-```plaintext
-
-### CI/CD (Automation)
-
-```kcl
-fix_local_hosts = false # No interactive prompts
-```plaintext
-
-### Production (Servers)
-
-```kcl
-fix_local_hosts = false # Managed by configuration management
-```plaintext
-
-## What fix_local_hosts Does
-
-When enabled:
-
-1. Removes old hostname entries from `/etc/hosts`
-2. Adds new hostname → IP mapping to `/etc/hosts`
-3. Adds SSH config entry to `~/.ssh/config`
-4. Removes old SSH host keys for the hostname
-
-When disabled:
-
-- You manually manage `/etc/hosts` entries
-- You manually manage `~/.ssh/config` entries
-- SSH to servers using IP addresses instead of hostnames
-
-## Security Note
-
-The provisioning tool **never** stores or caches your sudo password. It only:
-
-- Checks if sudo credentials are already cached (via `sudo -n true`)
-- Detects when sudo fails due to missing credentials
-- Provides helpful error messages and exit cleanly
-
-Your sudo password timeout is controlled by the system's sudoers configuration (default: 5 minutes).
+
+Issue | Solution
+“Password required” error | Run sudo -v first
+CTRL-C doesn’t work cleanly | Update to latest version
+Too many password prompts | Set fix_local_hosts = false
+Sudo not available | Must disable fix_local_hosts
+Wrong password 3 times | Run sudo -k to reset, then sudo -v
+
+
+
+
+fix_local_hosts = true # Convenient for local testing
+
+
+fix_local_hosts = false # No interactive prompts
+
+
+fix_local_hosts = false # Managed by configuration management
+
+
+When enabled:
+
+Removes old hostname entries from /etc/hosts
+Adds new hostname → IP mapping to /etc/hosts
+Adds SSH config entry to ~/.ssh/config
+Removes old SSH host keys for the hostname
+
+When disabled:
+
+You manually manage /etc/hosts entries
+You manually manage ~/.ssh/config entries
+SSH to servers using IP addresses instead of hostnames
+
+
+The provisioning tool never stores or caches your sudo password. It only:
+
+Checks if sudo credentials are already cached (via sudo -n true)
+Detects when sudo fails due to missing credentials
+Provides helpful error messages and exits cleanly
+
+Your sudo password timeout is controlled by the system’s sudoers configuration (default: 5 minutes).
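The `sudo -n true` probe mentioned above can be wrapped in a pre-flight check before invoking the CLI (a sketch; the message wording is ours, not the tool's):

```shell
# Pre-flight check: is a sudo timestamp already cached?
# `sudo -n true` never prompts - it exits non-zero when credentials are missing.
if sudo -n true 2>/dev/null; then
  msg="sudo credentials cached - safe to run with fix_local_hosts = true"
else
  msg="no cached sudo credentials - run 'sudo -v' first, or set fix_local_hosts = false"
fi
echo "$msg"
```

Running this before `provisioning -c server create` tells you in advance whether the operation will stop at a password prompt.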
-
+
The new configuration system includes comprehensive schema validation to catch errors early and ensure configuration correctness.
@@ -79418,14 +75919,10 @@ enabled = true
name = "my-service"
version = "1.0.0"
# Error: Required field missing: enabled
-```plaintext
-
-### 2. Type Validation
-
-Validates field types:
-
-```toml
-# Schema
+
+
+Validates field types:
+# Schema
[fields.port]
type = "int"
@@ -79442,14 +75939,10 @@ enabled = true
# Invalid - wrong type
port = "8080" # Error: Expected int, got string
-```plaintext
-
-### 3. Enum Validation
-
-Restricts values to predefined set:
-
-```toml
-# Schema
+
+
+Restricts values to predefined set:
+# Schema
[fields.environment]
type = "string"
enum = ["dev", "staging", "prod"]
@@ -79459,14 +75952,10 @@ environment = "prod"
# Invalid
environment = "production" # Error: Must be one of: dev, staging, prod
-```plaintext
-
-### 4. Range Validation
-
-Validates numeric ranges:
-
-```toml
-# Schema
+
+
+Validates numeric ranges:
+# Schema
[fields.port]
type = "int"
min = 1024
@@ -79480,14 +75969,10 @@ port = 80 # Error: Must be >= 1024
# Invalid - above maximum
port = 70000 # Error: Must be <= 65535
-```plaintext
-
-### 5. Pattern Validation
-
-Validates string patterns using regex:
-
-```toml
-# Schema
+
+
+Validates string patterns using regex:
+# Schema
[fields.email]
type = "string"
pattern = "^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\\.[a-zA-Z]{2,}$"
@@ -79497,14 +75982,10 @@ email = "admin@example.com"
# Invalid
email = "not-an-email" # Error: Does not match pattern
-```plaintext
-
-### 6. Deprecated Fields
-
-Warns about deprecated configuration:
-
-```toml
-# Schema
+
+
+Warns about deprecated configuration:
+# Schema
[deprecated]
fields = ["old_field"]
@@ -79513,14 +75994,10 @@ old_field = "new_field"
# Config using deprecated field
old_field = "value" # Warning: old_field is deprecated. Use new_field instead.
-```plaintext
-
-## Using Schema Validator
-
-### Command Line
-
-```bash
-# Validate workspace config
+
+
+
+# Validate workspace config
provisioning workspace config validate
# Validate provider config
@@ -79531,12 +76008,9 @@ provisioning platform validate orchestrator
# Validate with detailed output
provisioning workspace config validate --verbose
-```plaintext
-
-### Programmatic Usage
-
-```nushell
-use provisioning/core/nulib/lib_provisioning/config/schema_validator.nu *
+
+
+use provisioning/core/nulib/lib_provisioning/config/schema_validator.nu *
# Load config
let config = (open ~/workspaces/my-project/config/provisioning.yaml | from yaml)
@@ -79561,24 +76035,16 @@ if ($result.warnings | length) > 0 {
print $" • ($warning.message)"
}
}
-```plaintext
-
-### Pretty Print Results
-
-```nushell
-# Validate and print formatted results
+
+
+# Validate and print formatted results
let result = (validate-workspace-config $config)
print-validation-results $result
-```plaintext
-
-## Schema Examples
-
-### Workspace Schema
-
-File: `/Users/Akasha/project-provisioning/provisioning/config/workspace.schema.toml`
-
-```toml
-[required]
+
+
+
+File: /Users/Akasha/project-provisioning/provisioning/config/workspace.schema.toml
+[required]
fields = ["workspace", "paths"]
[fields.workspace]
@@ -79610,14 +76076,10 @@ type = "bool"
[fields.debug.log_level]
type = "string"
enum = ["debug", "info", "warn", "error"]
-```plaintext
-
-### Provider Schema (AWS)
-
-File: `/Users/Akasha/project-provisioning/provisioning/extensions/providers/aws/config.schema.toml`
-
-```toml
-[required]
+
+
+File: /Users/Akasha/project-provisioning/provisioning/extensions/providers/aws/config.schema.toml
+[required]
fields = ["provider", "credentials"]
[fields.provider]
@@ -79667,14 +76129,10 @@ fields = ["old_region_field"]
[deprecated_replacements]
old_region_field = "provider.region"
-```plaintext
-
-### Platform Service Schema (Orchestrator)
-
-File: `/Users/Akasha/project-provisioning/provisioning/platform/orchestrator/config.schema.toml`
-
-```toml
-[required]
+
+
+File: /Users/Akasha/project-provisioning/provisioning/platform/orchestrator/config.schema.toml
+[required]
fields = ["service", "server"]
[fields.service]
@@ -79713,14 +76171,10 @@ max = 10000
[fields.queue.storage_path]
type = "string"
-```plaintext
-
-### KMS Service Schema
-
-File: `/Users/Akasha/project-provisioning/provisioning/core/services/kms/config.schema.toml`
-
-```toml
-[required]
+
+
+File: /Users/Akasha/project-provisioning/provisioning/core/services/kms/config.schema.toml
+[required]
fields = ["kms", "encryption"]
[fields.kms]
@@ -79760,14 +76214,10 @@ fields = ["old_kms_type"]
[deprecated_replacements]
old_kms_type = "kms.provider"
-```plaintext
-
-## Validation Workflow
-
-### 1. Development
-
-```bash
-# Create new config
+
+
+
+# Create new config
vim ~/workspaces/dev/config/provisioning.yaml
# Validate immediately
@@ -79776,12 +76226,9 @@ provisioning workspace config validate
# Fix errors and revalidate
vim ~/workspaces/dev/config/provisioning.yaml
provisioning workspace config validate
-```plaintext
-
-### 2. CI/CD Pipeline
-
-```yaml
-# GitLab CI
+
+
+# GitLab CI
validate-config:
stage: validate
script:
@@ -79792,12 +76239,9 @@ validate-config:
only:
changes:
- "*/config/**/*"
-```plaintext
-
-### 3. Pre-Deployment
-
-```bash
-# Validate all configurations before deployment
+
+
+# Validate all configurations before deployment
provisioning workspace config validate --verbose &&
provisioning provider validate --all &&
provisioning platform validate --all
@@ -79806,14 +76250,10 @@ provisioning platform validate --all
if [[ $? -eq 0 ]]; then
provisioning deploy --workspace production
fi
-```plaintext
-
-## Error Messages
-
-### Clear Error Format
-
-```plaintext
-❌ Validation failed
+
+
+
+❌ Validation failed
Errors:
• Required field missing: workspace.name
@@ -79824,63 +76264,43 @@ Errors:
⚠️ Warnings:
• Field old_field is deprecated. Use new_field instead.
-```plaintext
-
-### Error Details
-
-Each error includes:
-
-- **field**: Which field has the error
-- **type**: Error type (missing_required, type_mismatch, invalid_enum, etc.)
-- **message**: Human-readable description
-- **Additional context**: Expected values, patterns, ranges
-
-## Common Validation Patterns
-
-### Pattern 1: Hostname Validation
-
-```toml
-[fields.hostname]
+
+
+Each error includes:
+
+field: Which field has the error
+type: Error type (missing_required, type_mismatch, invalid_enum, etc.)
+message: Human-readable description
+Additional context: Expected values, patterns, ranges
+
+
+
+[fields.hostname]
type = "string"
pattern = "^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$"
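A pattern like this is easy to exercise with `grep -E` before committing it to a schema (a quick sanity check, independent of the validator):

```shell
# Sanity-check the hostname pattern against known-good and known-bad inputs
pat='^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$'
echo "my-host-01"         | grep -qE "$pat" && echo "my-host-01: valid"
printf '%s\n' "-bad-host" | grep -qE "$pat" || echo "-bad-host: invalid (leading dash)"
```

The same approach works for any of the regex patterns below; pipe candidate values through `grep -qE` and check the exit status.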
-```plaintext
-
-### Pattern 2: Email Validation
-
-```toml
-[fields.email]
+
+
+[fields.email]
type = "string"
pattern = "^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\\.[a-zA-Z]{2,}$"
-```plaintext
-
-### Pattern 3: Semantic Version
-
-```toml
-[fields.version]
+
+
+[fields.version]
type = "string"
pattern = "^\\d+\\.\\d+\\.\\d+(-[a-zA-Z0-9]+)?$"
-```plaintext
-
-### Pattern 4: URL Validation
-
-```toml
-[fields.url]
+
+
+[fields.url]
type = "string"
pattern = "^https?://[a-zA-Z0-9.-]+(:[0-9]+)?(/.*)?$"
-```plaintext
-
-### Pattern 5: IPv4 Address
-
-```toml
-[fields.ip_address]
+
+
+[fields.ip_address]
type = "string"
pattern = "^(?:[0-9]{1,3}\\.){3}[0-9]{1,3}$"
-```plaintext
-
-### Pattern 6: AWS Resource ID
-
-```toml
-[fields.instance_id]
+
+
+[fields.instance_id]
type = "string"
pattern = "^i-[a-f0-9]{8,17}$"
@@ -79891,30 +76311,20 @@ pattern = "^ami-[a-f0-9]{8,17}$"
[fields.vpc_id]
type = "string"
pattern = "^vpc-[a-f0-9]{8,17}$"
-```plaintext
-
-## Testing Validation
-
-### Unit Tests
-
-```nushell
-# Run validation test suite
+
+
+
+# Run validation test suite
nu provisioning/tests/config_validation_tests.nu
-```plaintext
-
-### Integration Tests
-
-```bash
-# Test with real configs
+
+
+# Test with real configs
provisioning test validate --workspace dev
provisioning test validate --workspace staging
provisioning test validate --workspace prod
-```plaintext
-
-### Custom Validation
-
-```nushell
-# Create custom validation function
+
+
+# Create custom validation function
def validate-custom-config [config: record] {
let result = (validate-workspace-config $config)
@@ -79931,81 +76341,55 @@ def validate-custom-config [config: record] {
$result
}
-```plaintext
-
-## Best Practices
-
-### 1. Validate Early
-
-```bash
-# Validate during development
+
+
+
+# Validate during development
provisioning workspace config validate
# Don't wait for deployment
-```plaintext
-
-### 2. Use Strict Schemas
-
-```toml
-# Be explicit about types and constraints
+
+
+# Be explicit about types and constraints
[fields.port]
type = "int"
min = 1024
max = 65535
# Don't leave fields unvalidated
-```plaintext
-
-### 3. Document Patterns
-
-```toml
-# Include examples in schema
+
+
+# Include examples in schema
[fields.email]
type = "string"
pattern = "^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\\.[a-zA-Z]{2,}$"
# Example: user@example.com
-```plaintext
-
-### 4. Handle Deprecation
-
-```toml
-# Always provide replacement guidance
+
+
+# Always provide replacement guidance
[deprecated_replacements]
old_field = "new_field" # Clear migration path
-```plaintext
-
-### 5. Test Schemas
-
-```nushell
-# Include test cases in comments
+
+
+# Include test cases in comments
# Valid: "admin@example.com"
# Invalid: "not-an-email"
-```plaintext
-
-## Troubleshooting
-
-### Schema File Not Found
-
-```bash
-# Error: Schema file not found: /path/to/schema.toml
+
+
+
+# Error: Schema file not found: /path/to/schema.toml
# Solution: Ensure schema exists
ls -la /Users/Akasha/project-provisioning/provisioning/config/*.schema.toml
-```plaintext
-
-### Pattern Not Matching
-
-```bash
-# Error: Field hostname does not match pattern
+
+
+# Error: Field hostname does not match pattern
# Debug: Test pattern separately
echo "my-hostname" | grep -E "^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$"
-```plaintext
-
-### Type Mismatch
-
-```bash
-# Error: Expected int, got string
+
+
+# Error: Expected int, got string
# Check config
cat ~/workspaces/dev/config/provisioning.yaml | yq '.server.port'
@@ -80015,15 +76399,14 @@ cat ~/workspaces/dev/config/provisioning.yaml | yq '.server.port'
vim ~/workspaces/dev/config/provisioning.yaml
# Change: port: "8080"
# To: port: 8080
-```plaintext
-
-## Additional Resources
-
-- [Migration Guide](./MIGRATION_GUIDE.md)
-- [Workspace Guide](./WORKSPACE_GUIDE.md)
-- [Schema Files](../config/*.schema.toml)
-- [Validation Tests](../tests/config_validation_tests.nu)
+
+
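The schema rules shown in the hunks above (typed fields with `min`/`max` bounds and regex `pattern` constraints) can be sketched as a small standalone validator. This is an illustrative assumption, not the platform's actual validation code: the `SCHEMA` dict and `validate` helper below are hypothetical names that mirror the TOML examples (port as a bounded int, email as a pattern-checked string).

```python
import re

# Hypothetical schema mirroring the TOML examples above:
# [fields.port] type = "int", min = 1024, max = 65535
# [fields.email] type = "string", pattern = <email regex>
SCHEMA = {
    "port": {"type": int, "min": 1024, "max": 65535},
    "email": {
        "type": str,
        "pattern": r"^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$",
    },
}

def validate(config: dict) -> list:
    """Return a list of error strings; an empty list means the config is valid."""
    errors = []
    for field, rules in SCHEMA.items():
        value = config.get(field)
        # Type check first: a string "8080" fails even if it looks numeric,
        # matching the "Expected int, got string" troubleshooting case above.
        if not isinstance(value, rules["type"]):
            errors.append(
                f"{field}: expected {rules['type'].__name__}, "
                f"got {type(value).__name__}"
            )
            continue
        if "min" in rules and value < rules["min"]:
            errors.append(f"{field}: {value} is below minimum {rules['min']}")
        if "max" in rules and value > rules["max"]:
            errors.append(f"{field}: {value} is above maximum {rules['max']}")
        if "pattern" in rules and not re.match(rules["pattern"], value):
            errors.append(f"{field}: value does not match pattern")
    return errors

# A string-valued port reproduces the type-mismatch error from the
# troubleshooting section; the corrected config validates cleanly.
print(validate({"port": "8080", "email": "admin@example.com"}))
print(validate({"port": 8080, "email": "admin@example.com"}))  # []
```

Validating early with a check like this, rather than at deployment time, is the point of the "Validate Early" practice above.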
diff --git a/docs/book/searchindex.js b/docs/book/searchindex.js
index a4d0be1..6ae85a5 100644
--- a/docs/book/searchindex.js
+++ b/docs/book/searchindex.js
@@ -1 +1 @@
rategy.html#phase-2-workspace-config-migration-in-progress","architecture/adr/ADR-010-configuration-format-strategy.html#phase-3-template-file-reorganization-in-progress","architecture/adr/ADR-010-configuration-format-strategy.html#toml-for-application-configuration","architecture/adr/ADR-010-configuration-format-strategy.html#yaml-for-metadata-and-kubernetes-resources","architecture/adr/ADR-010-configuration-format-strategy.html#configuration-hierarchy-priority","architecture/adr/ADR-010-configuration-format-strategy.html#migration-path","architecture/adr/ADR-010-configuration-format-strategy.html#for-existing-workspaces","architecture/adr/ADR-010-configuration-format-strategy.html#for-new-workspaces","architecture/adr/ADR-010-configuration-format-strategy.html#file-format-guidelines-for-developers","architecture/adr/ADR-010-configuration-format-strategy.html#when-to-use-each-format","architecture/adr/ADR-010-configuration-format-strategy.html#consequences","architecture/adr/ADR-010-configuration-format-strategy.html#benefits","architecture/adr/ADR-010-configuration-format-strategy.html#trade-offs","architecture/adr/ADR-010-configuration-format-strategy.html#risk-mitigation","architecture/adr/ADR-010-configuration-format-strategy.html#template-file-reorganization","architecture/adr/ADR-010-configuration-format-strategy.html#problem","architecture/adr/ADR-011-nickel-migration.html#adr-011-migration-from-kcl-to-nickel","architecture/adr/ADR-011-nickel-migration.html#context","architecture/adr/ADR-011-nickel-migration.html#problems-with-kcl","architecture/adr/ADR-011-nickel-migration.html#project-needs","architecture/adr/ADR-011-nickel-migration.html#decision","architecture/adr/ADR-011-nickel-migration.html#key-changes","architecture/adr/ADR-011-nickel-migration.html#implementation-summary","architecture/adr/ADR-011-nickel-migration.html#migration-complete","architecture/adr/ADR-011-nickel-migration.html#platform-schemas-provisioningschemas","architecture/adr/ADR-011-
nickel-migration.html#extensions-provisioningextensions","architecture/adr/adr-012-nushell-nickel-plugin-cli-wrapper.html#adr-014-nushell-nickel-plugin---cli-wrapper-architecture","architecture/adr/adr-012-nushell-nickel-plugin-cli-wrapper.html#status","architecture/adr/adr-012-nushell-nickel-plugin-cli-wrapper.html#context","architecture/adr/adr-012-nushell-nickel-plugin-cli-wrapper.html#system-requirements","architecture/adr/adr-012-nushell-nickel-plugin-cli-wrapper.html#documentation-gap","architecture/adr/adr-012-nushell-nickel-plugin-cli-wrapper.html#why-nickel-is-different-from-simple-use-cases","architecture/adr/adr-012-nushell-nickel-plugin-cli-wrapper.html#consequences","architecture/adr/adr-012-nushell-nickel-plugin-cli-wrapper.html#positive","architecture/adr/adr-012-nushell-nickel-plugin-cli-wrapper.html#negative","architecture/adr/adr-012-nushell-nickel-plugin-cli-wrapper.html#mitigation-strategies","architecture/adr/adr-012-nushell-nickel-plugin-cli-wrapper.html#alternatives-considered","architecture/adr/adr-012-nushell-nickel-plugin-cli-wrapper.html#alternative-1-pure-rust-with-nickel-lang-core","architecture/adr/adr-012-nushell-nickel-plugin-cli-wrapper.html#alternative-2-hybrid-pure-rust--cli-fallback","architecture/adr/adr-012-nushell-nickel-plugin-cli-wrapper.html#alternative-3-webassembly-version","architecture/adr/adr-012-nushell-nickel-plugin-cli-wrapper.html#alternative-4-use-nickel-lsp","architecture/adr/adr-012-nushell-nickel-plugin-cli-wrapper.html#implementation-details","architecture/adr/adr-012-nushell-nickel-plugin-cli-wrapper.html#command-set","architecture/adr/adr-012-nushell-nickel-plugin-cli-wrapper.html#critical-implementation-detail-command-syntax","api-reference/rest-api.html#rest-api-reference","api-reference/rest-api.html#overview","api-reference/rest-api.html#base-urls","api-reference/rest-api.html#authentication","api-reference/rest-api.html#jwt-authentication","api-reference/websocket.html#websocket-api-reference","api-refer
ence/websocket.html#overview","api-reference/websocket.html#websocket-endpoints","api-reference/websocket.html#primary-websocket-endpoint","api-reference/websocket.html#specialized-websocket-endpoints","api-reference/websocket.html#authentication","api-reference/websocket.html#jwt-token-authentication","api-reference/websocket.html#connection-authentication-flow","api-reference/websocket.html#event-types-and-schemas","api-reference/websocket.html#core-event-types","api-reference/websocket.html#custom-event-types","api-reference/websocket.html#client-side-javascript-api","api-reference/websocket.html#connection-management","api-reference/websocket.html#real-time-dashboard-example","api-reference/websocket.html#server-side-implementation","api-reference/websocket.html#rust-websocket-handler","api-reference/websocket.html#event-filtering-and-subscriptions","api-reference/websocket.html#client-side-filtering","api-reference/websocket.html#server-side-event-filtering","api-reference/websocket.html#error-handling-and-reconnection","api-reference/websocket.html#connection-errors","api-reference/websocket.html#heartbeat-and-keep-alive","api-reference/websocket.html#performance-considerations","api-reference/websocket.html#message-batching","api-reference/websocket.html#compression","api-reference/websocket.html#rate-limiting","api-reference/websocket.html#security-considerations","api-reference/websocket.html#authentication-and-authorization","api-reference/websocket.html#message-validation","api-reference/websocket.html#data-sanitization","api-reference/extensions.html#extension-development-api","api-reference/extensions.html#overview","api-reference/extensions.html#extension-structure","api-reference/extensions.html#standard-directory-layout","api-reference/sdks.html#sdk-documentation","api-reference/sdks.html#available-sdks","api-reference/sdks.html#official-sdks","api-reference/sdks.html#community-sdks","api-reference/sdks.html#python-sdk","api-reference/sdks.html#insta
llation","api-reference/sdks.html#quick-start","api-reference/sdks.html#advanced-usage","api-reference/sdks.html#api-reference","api-reference/sdks.html#javascripttypescript-sdk","api-reference/sdks.html#installation-1","api-reference/sdks.html#quick-start-1","api-reference/sdks.html#react-integration","api-reference/sdks.html#nodejs-cli-tool","api-reference/sdks.html#api-reference-1","api-reference/sdks.html#go-sdk","api-reference/sdks.html#installation-2","api-reference/sdks.html#quick-start-2","api-reference/sdks.html#websocket-integration","api-reference/sdks.html#http-client-with-retry-logic","api-reference/sdks.html#rust-sdk","api-reference/sdks.html#installation-3","api-reference/sdks.html#quick-start-3","api-reference/sdks.html#websocket-integration-1","api-reference/sdks.html#batch-operations","api-reference/sdks.html#best-practices","api-reference/sdks.html#authentication-and-security","api-reference/sdks.html#error-handling","api-reference/sdks.html#performance-optimization","api-reference/sdks.html#websocket-connections","api-reference/sdks.html#testing","api-reference/integration-examples.html#integration-examples","api-reference/integration-examples.html#overview","api-reference/integration-examples.html#complete-integration-examples","api-reference/integration-examples.html#python-integration","api-reference/integration-examples.html#nodejsjavascript-integration","api-reference/integration-examples.html#error-handling-strategies","api-reference/integration-examples.html#comprehensive-error-handling","api-reference/integration-examples.html#circuit-breaker-pattern","api-reference/integration-examples.html#performance-optimization","api-reference/integration-examples.html#connection-pooling-and-caching","api-reference/integration-examples.html#websocket-connection-pooling","api-reference/integration-examples.html#sdk-documentation","api-reference/integration-examples.html#python-sdk","api-reference/integration-examples.html#javascripttypescript-sdk","ap
i-reference/integration-examples.html#common-integration-patterns","api-reference/integration-examples.html#workflow-orchestration-pipeline","api-reference/integration-examples.html#event-driven-architecture","api-reference/provider-api.html#provider-api-reference","api-reference/provider-api.html#overview","api-reference/provider-api.html#supported-providers","api-reference/provider-api.html#provider-interface","api-reference/provider-api.html#required-functions","api-reference/nushell-api.html#nushell-api-reference","api-reference/nushell-api.html#overview","api-reference/nushell-api.html#core-modules","api-reference/nushell-api.html#configuration-module","api-reference/nushell-api.html#server-module","api-reference/nushell-api.html#task-service-module","api-reference/nushell-api.html#workspace-module","api-reference/nushell-api.html#provider-module","api-reference/nushell-api.html#diagnostics--utilities","api-reference/nushell-api.html#diagnostics-module","api-reference/nushell-api.html#hints-module","api-reference/nushell-api.html#usage-example","api-reference/nushell-api.html#api-conventions","api-reference/nushell-api.html#best-practices","api-reference/nushell-api.html#source-code","api-reference/path-resolution.html#path-resolution-api","api-reference/path-resolution.html#overview","api-reference/path-resolution.html#configuration-resolution-hierarchy","api-reference/path-resolution.html#error-recovery","api-reference/path-resolution.html#performance-considerations","api-reference/path-resolution.html#best-practices","api-reference/path-resolution.html#monitoring","development/extension-development.html#extension-development-guide","development/extension-development.html#what-youll-learn","development/extension-development.html#extension-architecture","development/extension-development.html#extension-types","development/extension-development.html#extension-structure","development/infrastructure-specific-extensions.html#infrastructure-specific-extension-devel
opment","development/infrastructure-specific-extensions.html#table-of-contents","development/infrastructure-specific-extensions.html#overview","development/infrastructure-specific-extensions.html#infrastructure-assessment","development/infrastructure-specific-extensions.html#identifying-extension-needs","development/infrastructure-specific-extensions.html#requirements-gathering","development/infrastructure-specific-extensions.html#custom-taskserv-development","development/infrastructure-specific-extensions.html#company-specific-application-taskserv","development/infrastructure-specific-extensions.html#compliance-focused-taskserv","development/infrastructure-specific-extensions.html#provider-specific-extensions","development/infrastructure-specific-extensions.html#custom-cloud-provider-integration","development/infrastructure-specific-extensions.html#multi-environment-management","development/infrastructure-specific-extensions.html#environment-specific-configuration-management","development/infrastructure-specific-extensions.html#integration-patterns","development/infrastructure-specific-extensions.html#legacy-system-integration","development/infrastructure-specific-extensions.html#real-world-examples","development/infrastructure-specific-extensions.html#example-1-financial-services-company","development/infrastructure-specific-extensions.html#example-2-healthcare-organization","development/infrastructure-specific-extensions.html#example-3-manufacturing-company","development/infrastructure-specific-extensions.html#usage-examples","development/quick-provider-guide.html#quick-developer-guide-adding-new-providers","development/quick-provider-guide.html#prerequisites","development/quick-provider-guide.html#5-minute-provider-addition","development/quick-provider-guide.html#step-1-create-provider-directory","development/quick-provider-guide.html#step-2-copy-template-and-customize","development/quick-provider-guide.html#step-3-update-provider-metadata","development/quick-pr
ovider-guide.html#step-4-implement-core-functions","development/quick-provider-guide.html#step-5-create-provider-specific-functions","development/quick-provider-guide.html#step-6-test-your-provider","development/quick-provider-guide.html#step-7-add-provider-to-infrastructure","development/quick-provider-guide.html#provider-templates","development/quick-provider-guide.html#cloud-provider-template","development/quick-provider-guide.html#container-platform-template","development/quick-provider-guide.html#bare-metal-provider-template","development/quick-provider-guide.html#best-practices","development/quick-provider-guide.html#1-error-handling","development/quick-provider-guide.html#2-authentication","development/quick-provider-guide.html#3-rate-limiting","development/quick-provider-guide.html#4-provider-capabilities","development/quick-provider-guide.html#testing-checklist","development/quick-provider-guide.html#common-issues","development/quick-provider-guide.html#provider-not-found","development/quick-provider-guide.html#interface-validation-failed","development/quick-provider-guide.html#authentication-errors","development/quick-provider-guide.html#next-steps","development/quick-provider-guide.html#getting-help","development/command-handler-guide.html#command-handler-developer-guide","development/command-handler-guide.html#overview","development/command-handler-guide.html#key-architecture-principles","development/command-handler-guide.html#architecture-components","development/configuration.html#configuration","development/workflow.html#development-workflow-guide","development/workflow.html#table-of-contents","development/workflow.html#overview","development/workflow.html#development-setup","development/workflow.html#initial-environment-setup","development/integration.html#integration-guide","development/integration.html#table-of-contents","development/integration.html#overview","development/build-system.html#build-system-documentation","development/build-system.html
#table-of-contents","development/build-system.html#overview","development/build-system.html#quick-start","development/build-system.html#makefile-reference","development/build-system.html#build-configuration","development/build-system.html#build-targets","development/build-system.html#build-tools","development/build-system.html#core-build-scripts","development/build-system.html#distribution-tools","development/build-system.html#package-tools","development/build-system.html#release-tools","development/build-system.html#cross-platform-compilation","development/build-system.html#supported-platforms","development/build-system.html#cross-compilation-setup","development/build-system.html#cross-compilation-usage","development/build-system.html#dependency-management","development/build-system.html#build-dependencies","development/build-system.html#dependency-validation","development/build-system.html#dependency-caching","development/build-system.html#troubleshooting","development/build-system.html#common-build-issues","development/build-system.html#build-performance-issues","development/build-system.html#distribution-issues","development/build-system.html#debug-mode","development/build-system.html#cicd-integration","development/build-system.html#github-actions","development/build-system.html#release-automation","development/build-system.html#local-ci-testing","development/extensions.html#extension-development-guide","development/extensions.html#table-of-contents","development/extensions.html#overview","development/extensions.html#extension-types","development/extensions.html#extension-architecture","development/distribution-process.html#distribution-process-documentation","development/distribution-process.html#table-of-contents","development/distribution-process.html#overview","development/distribution-process.html#distribution-architecture","development/distribution-process.html#distribution-components","development/implementation-guide.html#repository-restructuring---imple
mentation-guide","development/implementation-guide.html#overview","development/implementation-guide.html#prerequisites","development/implementation-guide.html#required-tools","development/implementation-guide.html#recommended-tools","development/implementation-guide.html#before-starting","development/implementation-guide.html#phase-1-repository-restructuring-days-1-4","development/implementation-guide.html#day-1-backup-and-analysis","development/implementation-guide.html#day-2-directory-restructuring","development/implementation-guide.html#day-3-update-path-references","development/implementation-guide.html#day-4-validation-and-testing","development/implementation-guide.html#phase-2-build-system-implementation-days-5-8","development/implementation-guide.html#day-5-build-system-core","development/implementation-guide.html#day-6-8-continue-with-platform-extensions-and-validation","development/implementation-guide.html#phase-3-installation-system-days-9-11","development/implementation-guide.html#day-9-nushell-installer","development/implementation-guide.html#rollback-procedures","development/implementation-guide.html#if-phase-1-fails","development/implementation-guide.html#if-build-system-fails","development/implementation-guide.html#if-installation-fails","development/implementation-guide.html#checklist","development/implementation-guide.html#phase-1-repository-restructuring","development/implementation-guide.html#phase-2-build-system","development/implementation-guide.html#phase-3-installation","development/implementation-guide.html#phase-4-registry-optional","development/implementation-guide.html#phase-5-documentation","development/implementation-guide.html#notes","development/implementation-guide.html#support","development/taskserv-developer-guide.html#taskserv-developer-guide","development/taskserv-quick-guide.html#taskserv-quick-guide","development/taskserv-quick-guide.html#-quick-start","development/taskserv-quick-guide.html#create-a-new-taskserv-interactive","d
evelopment/project-structure.html#project-structure-guide","development/project-structure.html#table-of-contents","development/project-structure.html#overview","development/project-structure.html#new-structure-vs-legacy","development/project-structure.html#new-development-structure-src","development/provider-agnostic-architecture.html#provider-agnostic-architecture-documentation","development/provider-agnostic-architecture.html#overview","development/provider-agnostic-architecture.html#architecture-components","development/provider-agnostic-architecture.html#1-provider-interface-interfacenu","development/provider-agnostic-architecture.html#adding-new-providers","development/provider-agnostic-architecture.html#1-create-provider-adapter","development/ctrl-c-implementation-notes.html#ctrl-c-handling-implementation-notes","development/ctrl-c-implementation-notes.html#overview","development/ctrl-c-implementation-notes.html#problem-statement","development/ctrl-c-implementation-notes.html#solution-architecture","development/ctrl-c-implementation-notes.html#key-principle-return-values-not-exit-codes","development/ctrl-c-implementation-notes.html#three-layer-approach","development/ctrl-c-implementation-notes.html#implementation-details","development/ctrl-c-implementation-notes.html#1-helper-functions-sshnu11-32","development/auth-metadata-guide.html#metadata-driven-authentication-system---implementation-guide","development/auth-metadata-guide.html#table-of-contents","development/auth-metadata-guide.html#overview","development/auth-metadata-guide.html#architecture","development/auth-metadata-guide.html#system-components","development/migration-guide.html#migration-guide-target-based-configuration-system","development/migration-guide.html#overview","development/migration-guide.html#migration-path","development/kms-simplification.html#kms-simplification-migration-guide","development/kms-simplification.html#overview","development/kms-simplification.html#what-changed","developmen
t/kms-simplification.html#removed","development/kms-simplification.html#added","development/kms-simplification.html#modified","development/kms-simplification.html#why-this-change","development/kms-simplification.html#problems-with-previous-approach","development/kms-simplification.html#benefits-of-simplified-approach","development/kms-simplification.html#migration-steps","development/kms-simplification.html#for-development-environments","development/kms-simplification.html#for-production-environments","development/kms-simplification.html#configuration-comparison","development/kms-simplification.html#before-4-backends","development/kms-simplification.html#after-2-backends","development/kms-simplification.html#breaking-changes","development/kms-simplification.html#api-changes","development/kms-simplification.html#code-migration","development/kms-simplification.html#rust-code","development/kms-simplification.html#nushell-code","development/kms-simplification.html#rollback-plan","development/kms-simplification.html#testing-the-migration","development/kms-simplification.html#development-testing","development/kms-simplification.html#production-testing","development/kms-simplification.html#troubleshooting","development/kms-simplification.html#age-keys-not-found","development/kms-simplification.html#cosmian-connection-failed","development/kms-simplification.html#compilation-errors","development/kms-simplification.html#support","development/kms-simplification.html#timeline","development/kms-simplification.html#faqs","development/kms-simplification.html#checklist","development/kms-simplification.html#development-migration","development/kms-simplification.html#production-migration","development/kms-simplification.html#conclusion","development/migration-example.html#migration-example","development/glossary.html#provisioning-platform-glossary","development/glossary.html#a","development/glossary.html#adr-architecture-decision-record","development/glossary.html#agent","development
/glossary.html#anchor-link","development/glossary.html#api-gateway","development/glossary.html#auth-authentication","development/glossary.html#authorization","development/glossary.html#b","development/glossary.html#batch-operation","development/glossary.html#break-glass","development/glossary.html#c","development/glossary.html#cedar","development/glossary.html#checkpoint","development/glossary.html#cli-command-line-interface","development/glossary.html#cluster","development/glossary.html#compliance","development/glossary.html#config-configuration","development/glossary.html#control-center","development/glossary.html#coredns","development/glossary.html#cross-reference","development/glossary.html#d","development/glossary.html#dependency","development/glossary.html#diagnostics","development/glossary.html#dynamic-secrets","development/glossary.html#e","development/glossary.html#environment","development/glossary.html#extension","development/glossary.html#f","development/glossary.html#feature","development/glossary.html#g","development/glossary.html#gdpr-general-data-protection-regulation","development/glossary.html#glossary","development/glossary.html#guide","development/glossary.html#h","development/glossary.html#health-check","development/glossary.html#hybrid-architecture","development/glossary.html#i","development/glossary.html#infrastructure","development/glossary.html#integration","development/glossary.html#internal-link","development/glossary.html#j","development/glossary.html#jwt-json-web-token","development/glossary.html#k","development/glossary.html#kcl-kcl-configuration-language","development/glossary.html#kms-key-management-service","development/glossary.html#kubernetes","development/glossary.html#l","development/glossary.html#layer","development/glossary.html#m","development/glossary.html#mcp-model-context-protocol","development/glossary.html#mfa-multi-factor-authentication","development/glossary.html#migration","development/glossary.html#module","developmen
t/glossary.html#n","development/glossary.html#nushell","development/glossary.html#o","development/glossary.html#oci-open-container-initiative","development/glossary.html#operation","development/glossary.html#orchestrator","development/glossary.html#p","development/glossary.html#pap-project-architecture-principles","development/glossary.html#platform-service","development/glossary.html#plugin","development/glossary.html#provider","development/glossary.html#q","development/glossary.html#quick-reference","development/glossary.html#r","development/glossary.html#rbac-role-based-access-control","development/glossary.html#registry","development/glossary.html#rest-api","development/glossary.html#rollback","development/glossary.html#rustyvault","development/glossary.html#s","development/glossary.html#schema","development/glossary.html#secrets-management","development/glossary.html#security-system","development/glossary.html#server","development/glossary.html#service","development/glossary.html#shortcut","development/glossary.html#sops-secrets-operations","development/glossary.html#ssh-secure-shell","development/glossary.html#state-management","development/glossary.html#t","development/glossary.html#task","development/glossary.html#taskserv","development/glossary.html#template","development/glossary.html#test-environment","development/glossary.html#topology","development/glossary.html#totp-time-based-one-time-password","development/glossary.html#troubleshooting","development/glossary.html#u","development/glossary.html#ui-user-interface","development/glossary.html#update","development/glossary.html#v","development/glossary.html#validation","development/glossary.html#version","development/glossary.html#w","development/glossary.html#webauthn","development/glossary.html#workflow","development/glossary.html#workspace","development/glossary.html#x-z","development/glossary.html#yaml","development/glossary.html#symbol-and-acronym-index","development/glossary.html#cross-reference-map"
,"development/glossary.html#by-topic-area","development/glossary.html#by-user-journey","development/glossary.html#terminology-guidelines","development/glossary.html#writing-style","development/glossary.html#avoiding-confusion","development/glossary.html#contributing-to-the-glossary","development/glossary.html#adding-new-terms","development/glossary.html#updating-existing-terms","development/glossary.html#version-history","development/provider-distribution-guide.html#provider-distribution-guide","development/provider-distribution-guide.html#table-of-contents","development/provider-distribution-guide.html#overview","development/provider-distribution-guide.html#module-loader-approach","development/provider-distribution-guide.html#purpose","development/provider-distribution-guide.html#how-it-works","development/provider-distribution-guide.html#for-releases","development/provider-distribution-guide.html#for-production","development/provider-distribution-guide.html#for-cicd","development/provider-distribution-guide.html#migration-path","development/provider-distribution-guide.html#from-module-loader-to-packs","development/taskserv-categorization.html#taskserv-categorization-plan","development/taskserv-categorization.html#categories-and-taskservs-38-total","development/taskserv-categorization.html#kubernetes--1","development/taskserv-categorization.html#networking--6","development/taskserv-categorization.html#container-runtime--6","development/taskserv-categorization.html#storage--4","development/taskserv-categorization.html#databases--2","development/taskserv-categorization.html#development--6","development/taskserv-categorization.html#infrastructure--6","development/taskserv-categorization.html#misc--1","development/taskserv-categorization.html#keep-in-root--6","development/extension-registry.html#extension-registry-service","development/extension-registry.html#features","development/extension-registry.html#architecture","development/extension-registry.html#dual-trait-sy
stem","development/mcp-server.html#mcp-server---model-context-protocol","development/mcp-server.html#overview","development/mcp-server.html#performance-results","development/typedialog-platform-config-guide.html#typedialog-platform-configuration-guide","development/typedialog-platform-config-guide.html#overview","development/typedialog-platform-config-guide.html#quick-start","development/typedialog-platform-config-guide.html#1-configure-a-platform-service-5-minutes","development/typedialog-platform-config-guide.html#2-review-generated-configuration","development/typedialog-platform-config-guide.html#3-validate-configuration","development/typedialog-platform-config-guide.html#4-services-use-generated-config","development/typedialog-platform-config-guide.html#interactive-configuration-workflow","development/typedialog-platform-config-guide.html#recommended-approach-use-typedialog-forms","development/typedialog-platform-config-guide.html#advanced-approach-manual-nickel-editing","development/typedialog-platform-config-guide.html#configuration-structure","development/typedialog-platform-config-guide.html#single-file-three-sections","development/typedialog-platform-config-guide.html#available-configuration-sections","development/typedialog-platform-config-guide.html#service-specific-configuration","development/typedialog-platform-config-guide.html#orchestrator-service","development/typedialog-platform-config-guide.html#kms-service","development/typedialog-platform-config-guide.html#control-center-service","development/typedialog-platform-config-guide.html#deployment-modes","development/typedialog-platform-config-guide.html#new-platform-services-phase-13-19","development/typedialog-platform-config-guide.html#vault-service","development/typedialog-platform-config-guide.html#extension-registry-service","development/typedialog-platform-config-guide.html#rag-retrieval-augmented-generation-service","development/typedialog-platform-config-guide.html#ai-service","development/type
dialog-platform-config-guide.html#provisioning-daemon","development/typedialog-platform-config-guide.html#using-typedialog-forms","development/typedialog-platform-config-guide.html#form-navigation","development/typedialog-platform-config-guide.html#field-types","development/typedialog-platform-config-guide.html#special-values","development/typedialog-platform-config-guide.html#validation--export","development/typedialog-platform-config-guide.html#validating-configuration","development/typedialog-platform-config-guide.html#exporting-to-service-formats","development/typedialog-platform-config-guide.html#updating-configuration","development/typedialog-platform-config-guide.html#change-a-setting","development/typedialog-platform-config-guide.html#using-typedialog-to-update","development/typedialog-platform-config-guide.html#troubleshooting","development/typedialog-platform-config-guide.html#form-wont-load","development/typedialog-platform-config-guide.html#validation-fails","development/typedialog-platform-config-guide.html#export-creates-empty-files","development/typedialog-platform-config-guide.html#services-dont-use-new-config","development/typedialog-platform-config-guide.html#configuration-examples","development/typedialog-platform-config-guide.html#development-setup","development/typedialog-platform-config-guide.html#production-setup","development/typedialog-platform-config-guide.html#multi-provider-setup","development/typedialog-platform-config-guide.html#best-practices","development/typedialog-platform-config-guide.html#1-use-typedialog-for-initial-setup","development/typedialog-platform-config-guide.html#2-never-edit-generated-files","development/typedialog-platform-config-guide.html#3-validate-before-deploy","development/typedialog-platform-config-guide.html#4-use-environment-variables-for-secrets","development/typedialog-platform-config-guide.html#5-document-changes","development/typedialog-platform-config-guide.html#related-documentation","development/typedi
alog-platform-config-guide.html#core-resources","development/typedialog-platform-config-guide.html#platform-services","development/typedialog-platform-config-guide.html#public-definition-locations","development/typedialog-platform-config-guide.html#getting-help","development/typedialog-platform-config-guide.html#validation-errors","development/typedialog-platform-config-guide.html#configuration-questions","development/typedialog-platform-config-guide.html#test-configuration","operations/deployment-guide.html#platform-deployment-guide","operations/deployment-guide.html#table-of-contents","operations/deployment-guide.html#prerequisites","operations/deployment-guide.html#required-software","operations/deployment-guide.html#required-tools-mode-dependent","operations/deployment-guide.html#system-requirements","operations/deployment-guide.html#directory-structure","operations/deployment-guide.html#deployment-modes","operations/deployment-guide.html#mode-selection-matrix","operations/deployment-guide.html#mode-characteristics","operations/deployment-guide.html#quick-start","operations/deployment-guide.html#1-clone-repository","operations/deployment-guide.html#2-select-deployment-mode","operations/deployment-guide.html#3-set-environment-variables","operations/deployment-guide.html#4-build-all-services","operations/deployment-guide.html#5-start-services-order-matters","operations/deployment-guide.html#6-verify-services","operations/deployment-guide.html#solo-mode-deployment","operations/deployment-guide.html#step-1-verify-solo-configuration-files","operations/deployment-guide.html#step-2-set-solo-environment-variables","operations/deployment-guide.html#step-3-build-services","operations/deployment-guide.html#step-4-create-local-data-directories","operations/deployment-guide.html#step-5-start-services","operations/deployment-guide.html#step-6-test-services","operations/deployment-guide.html#step-7-verify-persistence-optional","operations/deployment-guide.html#cleanup","operat
ions/deployment-guide.html#multiuser-mode-deployment","operations/deployment-guide.html#prerequisites-1","operations/deployment-guide.html#step-1-deploy-surrealdb","operations/deployment-guide.html#step-2-verify-surrealdb-connectivity","operations/deployment-guide.html#step-3-set-multiuser-environment-variables","operations/deployment-guide.html#step-4-build-services","operations/deployment-guide.html#step-5-create-shared-data-directories","operations/deployment-guide.html#step-6-start-services-on-multiple-machines","operations/deployment-guide.html#step-7-test-multi-machine-setup","operations/deployment-guide.html#step-8-enable-user-access","operations/deployment-guide.html#monitoring-multiuser-deployment","operations/deployment-guide.html#cicd-mode-deployment","operations/deployment-guide.html#step-1-understand-ephemeral-nature","operations/deployment-guide.html#step-2-set-cicd-environment-variables","operations/deployment-guide.html#step-3-containerize-services-optional","operations/deployment-guide.html#step-4-github-actions-example","operations/deployment-guide.html#step-5-run-cicd-tests","operations/deployment-guide.html#enterprise-mode-deployment","operations/deployment-guide.html#prerequisites-2","operations/deployment-guide.html#step-1-deploy-infrastructure","operations/deployment-guide.html#step-2-set-enterprise-environment-variables","operations/deployment-guide.html#step-3-deploy-services-across-cluster","operations/deployment-guide.html#step-4-monitor-cluster-health","operations/deployment-guide.html#step-5-enable-monitoring--alerting","operations/deployment-guide.html#step-6-backup--recovery","operations/deployment-guide.html#service-management","operations/deployment-guide.html#starting-services","operations/deployment-guide.html#stopping-services","operations/deployment-guide.html#restarting-services","operations/deployment-guide.html#checking-service-status","operations/deployment-guide.html#health-checks--monitoring","operations/deployment-guide.ht
ml#manual-health-verification","operations/deployment-guide.html#service-integration-tests","operations/deployment-guide.html#monitoring-dashboards","operations/deployment-guide.html#alerting","operations/deployment-guide.html#troubleshooting","operations/deployment-guide.html#service-wont-start","operations/deployment-guide.html#configuration-loading-fails","operations/deployment-guide.html#database-connection-issues","operations/deployment-guide.html#service-crashes-on-startup","operations/deployment-guide.html#high-memory-usage","operations/deployment-guide.html#networkdns-issues","operations/deployment-guide.html#data-persistence-issues","operations/deployment-guide.html#debugging-checklist","operations/deployment-guide.html#configuration-updates","operations/deployment-guide.html#updating-service-configuration","operations/deployment-guide.html#mode-migration","operations/deployment-guide.html#production-checklist","operations/deployment-guide.html#getting-help","operations/deployment-guide.html#community-resources","operations/deployment-guide.html#internal-support","operations/deployment-guide.html#useful-commands-reference","operations/service-management-guide.html#service-management-guide","operations/service-management-guide.html#table-of-contents","operations/service-management-guide.html#overview","operations/service-management-guide.html#key-features","operations/service-management-guide.html#supported-services","operations/service-management-guide.html#service-architecture","operations/service-management-guide.html#system-architecture","operations/monitoring-alerting-setup.html#service-monitoring--alerting-setup","operations/monitoring-alerting-setup.html#overview","operations/monitoring-alerting-setup.html#architecture","operations/monitoring-alerting-setup.html#prerequisites","operations/monitoring-alerting-setup.html#software-requirements","operations/monitoring-alerting-setup.html#system-requirements","operations/monitoring-alerting-setup.html#port
s","operations/monitoring-alerting-setup.html#service-metrics-endpoints","operations/monitoring-alerting-setup.html#prometheus-configuration","operations/monitoring-alerting-setup.html#1-create-prometheus-config","operations/monitoring-alerting-setup.html#2-start-prometheus","operations/monitoring-alerting-setup.html#3-verify-prometheus","operations/monitoring-alerting-setup.html#alert-rules-configuration","operations/monitoring-alerting-setup.html#1-create-alert-rules","operations/monitoring-alerting-setup.html#2-validate-alert-rules","operations/monitoring-alerting-setup.html#alertmanager-configuration","operations/monitoring-alerting-setup.html#1-create-alertmanager-config","operations/monitoring-alerting-setup.html#2-start-alertmanager","operations/monitoring-alerting-setup.html#3-verify-alertmanager","operations/monitoring-alerting-setup.html#grafana-dashboards","operations/monitoring-alerting-setup.html#1-install-grafana","operations/monitoring-alerting-setup.html#2-add-prometheus-data-source","operations/monitoring-alerting-setup.html#3-create-platform-overview-dashboard","operations/monitoring-alerting-setup.html#4-import-dashboard-via-api","operations/monitoring-alerting-setup.html#health-check-monitoring","operations/monitoring-alerting-setup.html#1-service-health-check-script","operations/monitoring-alerting-setup.html#2-liveness-probe-configuration","operations/monitoring-alerting-setup.html#log-aggregation-elk-stack","operations/monitoring-alerting-setup.html#1-elasticsearch-setup","operations/monitoring-alerting-setup.html#2-filebeat-configuration","operations/monitoring-alerting-setup.html#3-kibana-dashboard","operations/monitoring-alerting-setup.html#monitoring-dashboard-queries","operations/monitoring-alerting-setup.html#common-prometheus-queries","operations/monitoring-alerting-setup.html#alert-testing","operations/monitoring-alerting-setup.html#1-test-alert-firing","operations/monitoring-alerting-setup.html#2-stop-service-to-trigger-alert","operat
ions/monitoring-alerting-setup.html#3-generate-load-to-test-error-alerts","operations/monitoring-alerting-setup.html#backup--retention-policies","operations/monitoring-alerting-setup.html#1-prometheus-data-backup","operations/monitoring-alerting-setup.html#2-prometheus-retention-configuration","operations/monitoring-alerting-setup.html#maintenance--troubleshooting","operations/monitoring-alerting-setup.html#common-issues","operations/monitoring-alerting-setup.html#production-deployment-checklist","operations/monitoring-alerting-setup.html#quick-commands-reference","operations/monitoring-alerting-setup.html#documentation--runbooks","operations/monitoring-alerting-setup.html#sample-runbook-service-down","operations/monitoring-alerting-setup.html#resources","operations/service-management-quickref.html#service-management-quick-reference","operations/coredns-guide.html#coredns-integration-guide","operations/coredns-guide.html#table-of-contents","operations/coredns-guide.html#overview","operations/coredns-guide.html#key-features","operations/coredns-guide.html#installation","operations/coredns-guide.html#prerequisites","operations/coredns-guide.html#install-coredns-binary","operations/coredns-guide.html#dns-queries-not-working","operations/coredns-guide.html#zone-file-validation-errors","operations/coredns-guide.html#docker-container-issues","operations/coredns-guide.html#dynamic-updates-not-working","operations/coredns-guide.html#advanced-topics","operations/coredns-guide.html#custom-corefile-plugins","operations/backup-recovery.html#backup-and-recovery","operations/deployment.html#deployment-guide","operations/monitoring.html#monitoring-guide","operations/production-readiness-checklist.html#production-readiness-checklist","operations/production-readiness-checklist.html#executive-summary","operations/production-readiness-checklist.html#quality-metrics","operations/production-readiness-checklist.html#pre-deployment-verification","operations/production-readiness-checklist.
html#1-system-requirements-","operations/production-readiness-checklist.html#2-code-quality-","operations/production-readiness-checklist.html#3-testing-","operations/production-readiness-checklist.html#4-security-","operations/production-readiness-checklist.html#5-documentation-","operations/production-readiness-checklist.html#6-deployment-readiness-","operations/production-readiness-checklist.html#pre-production-checklist","operations/production-readiness-checklist.html#team-preparation","operations/production-readiness-checklist.html#infrastructure-preparation","operations/production-readiness-checklist.html#configuration-preparation","operations/production-readiness-checklist.html#testing-in-production-like-environment","operations/production-readiness-checklist.html#deployment-steps","operations/production-readiness-checklist.html#phase-1-installation-30-minutes","operations/production-readiness-checklist.html#phase-2-initial-configuration-15-minutes","operations/production-readiness-checklist.html#phase-3-workspace-setup-10-minutes","operations/production-readiness-checklist.html#phase-4-verification-10-minutes","operations/production-readiness-checklist.html#post-deployment-verification","operations/production-readiness-checklist.html#immediate-within-1-hour","operations/production-readiness-checklist.html#daily-first-week","operations/production-readiness-checklist.html#weekly-first-month","operations/production-readiness-checklist.html#ongoing-production","operations/production-readiness-checklist.html#troubleshooting-reference","operations/production-readiness-checklist.html#issue-setup-wizard-wont-start","operations/production-readiness-checklist.html#issue-configuration-validation-fails","operations/production-readiness-checklist.html#issue-health-check-shows-warnings","operations/production-readiness-checklist.html#issue-deployment-fails","operations/production-readiness-checklist.html#performance-baselines","operations/production-readiness-checklist.htm
l#support-and-escalation","operations/production-readiness-checklist.html#level-1-support-team","operations/production-readiness-checklist.html#level-2-support-engineering","operations/production-readiness-checklist.html#level-3-support-development","operations/production-readiness-checklist.html#rollback-procedure","operations/production-readiness-checklist.html#success-criteria","operations/production-readiness-checklist.html#sign-off","operations/break-glass-training-guide.html#break-glass-emergency-access---training-guide","operations/break-glass-training-guide.html#-what-is-break-glass","operations/break-glass-training-guide.html#key-principles","operations/break-glass-training-guide.html#-table-of-contents","operations/break-glass-training-guide.html#when-to-use-break-glass","operations/break-glass-training-guide.html#-valid-emergency-scenarios","operations/break-glass-training-guide.html#criteria-checklist","operations/break-glass-training-guide.html#when-not-to-use","operations/break-glass-training-guide.html#-invalid-scenarios-do-not-use-break-glass","operations/break-glass-training-guide.html#consequences-of-misuse","operations/break-glass-training-guide.html#roles--responsibilities","operations/break-glass-training-guide.html#requester","operations/break-glass-training-guide.html#approvers","operations/break-glass-training-guide.html#security-team","operations/break-glass-training-guide.html#break-glass-workflow","operations/break-glass-training-guide.html#phase-1-request-5-minutes","operations/cedar-policies-production-guide.html#cedar-policies-production-guide","operations/cedar-policies-production-guide.html#table-of-contents","operations/cedar-policies-production-guide.html#introduction","operations/cedar-policies-production-guide.html#why-cedar","operations/cedar-policies-production-guide.html#cedar-policy-basics","operations/cedar-policies-production-guide.html#core-concepts","operations/cedar-policies-production-guide.html#unexpected-denials","oper
ations/mfa-admin-setup-guide.html#mfa-admin-setup-guide---production-operations-manual","operations/mfa-admin-setup-guide.html#-table-of-contents","operations/mfa-admin-setup-guide.html#overview","operations/mfa-admin-setup-guide.html#what-is-mfa","operations/mfa-admin-setup-guide.html#why-mfa-for-admins","operations/mfa-admin-setup-guide.html#mfa-methods-supported","operations/mfa-admin-setup-guide.html#mfa-requirements","operations/mfa-admin-setup-guide.html#mandatory-mfa-enforcement","operations/mfa-admin-setup-guide.html#grace-period","operations/mfa-admin-setup-guide.html#timeline-for-rollout","operations/orchestrator.html#provisioning-orchestrator","operations/orchestrator.html#architecture","operations/orchestrator.html#key-features","operations/orchestrator.html#quick-start","operations/orchestrator.html#build-and-run","operations/orchestrator.html#submit-workflow","operations/orchestrator.html#api-endpoints","operations/orchestrator.html#core-endpoints","operations/orchestrator.html#workflow-endpoints","operations/orchestrator.html#test-environment-endpoints","operations/orchestrator.html#test-environment-service","operations/orchestrator.html#test-environment-types","operations/orchestrator.html#nushell-cli-integration","operations/orchestrator.html#topology-templates","operations/orchestrator.html#storage-backends","operations/orchestrator.html#related-documentation","operations/orchestrator-system.html#hybrid-orchestrator-architecture-v300","operations/orchestrator-system.html#-orchestrator-implementation-completed-2025-09-25","operations/orchestrator-system.html#architecture-overview","operations/orchestrator-system.html#orchestrator-management","operations/orchestrator-system.html#workflow-system","operations/orchestrator-system.html#server-workflows","operations/orchestrator-system.html#taskserv-workflows","operations/orchestrator-system.html#cluster-workflows","operations/orchestrator-system.html#workflow-management","operations/orchestrator-system.h
tml#rest-api-endpoints","operations/control-center.html#control-center---cedar-policy-engine","operations/control-center.html#key-features","operations/control-center.html#cedar-policy-engine","operations/control-center.html#security--authentication","operations/control-center.html#compliance-framework","operations/control-center.html#anomaly-detection","operations/control-center.html#storage--persistence","operations/control-center.html#quick-start","operations/control-center.html#installation","operations/control-center.html#configuration","operations/control-center.html#start-server","operations/control-center.html#test-policy-evaluation","operations/control-center.html#policy-examples","operations/control-center.html#multi-factor-authentication-policy","operations/control-center.html#production-approval-policy","operations/control-center.html#geographic-restrictions","operations/control-center.html#cli-commands","operations/control-center.html#policy-management","operations/control-center.html#compliance-checking","operations/control-center.html#api-endpoints","operations/control-center.html#policy-evaluation","operations/control-center.html#policy-versions","operations/control-center.html#compliance","operations/control-center.html#anomaly-detection-1","operations/control-center.html#architecture","operations/control-center.html#core-components","operations/control-center.html#configuration-driven-design","operations/control-center.html#deployment","operations/control-center.html#docker","operations/control-center.html#kubernetes","operations/control-center.html#related-documentation","operations/installer.html#provisioning-platform-installer","operations/installer.html#features","operations/installer.html#installation","operations/installer-system.html#provisioning-platform-installer-v350","operations/installer-system.html#-flexible-installation-and-configuration-system","operations/installer-system.html#installation-modes","operations/installer-system.html#1-
-interactive-tui-mode","operations/installer-system.html#2--headless-mode","operations/installer-system.html#3--unattended-mode","operations/installer-system.html#deployment-modes","operations/installer-system.html#configuration-system","operations/installer-system.html#toml-configuration","operations/installer-system.html#configuration-loading-priority","operations/installer-system.html#mcp-integration","operations/installer-system.html#deployment-automation","operations/installer-system.html#nushell-scripts","operations/installer-system.html#self-installation","operations/installer-system.html#command-reference","operations/installer-system.html#integration-examples","operations/installer-system.html#gitops-workflow","operations/installer-system.html#terraform-integration","operations/installer-system.html#ansible-integration","operations/installer-system.html#configuration-templates","operations/installer-system.html#documentation","operations/installer-system.html#help-and-support","operations/installer-system.html#nushell-fallback","operations/provisioning-server.html#provisioning-api-server","operations/provisioning-server.html#features","operations/provisioning-server.html#architecture","infrastructure/infrastructure-management.html#infrastructure-management-guide","infrastructure/infrastructure-management.html#what-youll-learn","infrastructure/infrastructure-management.html#infrastructure-concepts","infrastructure/infrastructure-management.html#infrastructure-components","infrastructure/infrastructure-management.html#infrastructure-lifecycle","infrastructure/infrastructure-management.html#advanced-testing","infrastructure/infrastructure-from-code-guide.html#infrastructure-from-code-iac-guide","infrastructure/infrastructure-from-code-guide.html#overview","infrastructure/infrastructure-from-code-guide.html#quick-start","infrastructure/infrastructure-from-code-guide.html#1-detect-technologies-in-your-project","infrastructure/batch-workflow-system.html#batch-wor
kflow-system-v310---token-optimized-architecture","infrastructure/batch-workflow-system.html#-batch-workflow-system-completed-2025-09-25","infrastructure/batch-workflow-system.html#key-achievements","infrastructure/batch-workflow-system.html#batch-workflow-commands","infrastructure/batch-workflow-system.html#kcl-workflow-schema","infrastructure/batch-workflow-system.html#rest-api-endpoints-batch-operations","infrastructure/batch-workflow-system.html#system-benefits","infrastructure/cli-architecture.html#modular-cli-architecture-v320---major-refactoring","infrastructure/cli-architecture.html#-cli-refactoring-completed-2025-09-30","infrastructure/cli-architecture.html#architecture-improvements","infrastructure/cli-architecture.html#command-shortcuts-reference","infrastructure/cli-architecture.html#infrastructure","infrastructure/cli-architecture.html#orchestration","infrastructure/cli-architecture.html#development","infrastructure/cli-architecture.html#workspace","infrastructure/cli-architecture.html#configuration","infrastructure/cli-architecture.html#utilities","infrastructure/cli-architecture.html#generation","infrastructure/cli-architecture.html#special-commands","infrastructure/cli-architecture.html#bi-directional-help-system","infrastructure/configuration-system.html#configuration-system-v200","infrastructure/configuration-system.html#-migration-completed-2025-09-23","infrastructure/configuration-system.html#configuration-files","infrastructure/configuration-system.html#essential-commands","infrastructure/configuration-system.html#configuration-architecture","infrastructure/configuration-system.html#configuration-loading-hierarchy-priority","infrastructure/configuration-system.html#file-type-guidelines","infrastructure/workspace-setup.html#workspace-setup-guide","infrastructure/workspace-setup.html#quick-start","infrastructure/workspace-setup.html#1-create-a-new-infrastructure-workspace","infrastructure/workspace-switching-guide.html#workspace-switching-guide","
infrastructure/workspace-switching-guide.html#overview","infrastructure/workspace-switching-guide.html#quick-start","infrastructure/workspace-switching-guide.html#list-available-workspaces","infrastructure/workspace-switching-system.html#workspace-switching-system-v205","infrastructure/workspace-switching-system.html#-workspace-switching-completed-2025-10-02","infrastructure/workspace-switching-system.html#key-features","infrastructure/workspace-switching-system.html#workspace-management-commands","infrastructure/cli-reference.html#cli-reference","infrastructure/cli-reference.html#what-youll-learn","infrastructure/cli-reference.html#command-structure","infrastructure/cli-reference.html#global-options","infrastructure/cli-reference.html#output-formats","infrastructure/cli-reference.html#core-commands","infrastructure/cli-reference.html#help---show-help-information","infrastructure/cli-reference.html#version---show-version-information","infrastructure/cli-reference.html#env---environment-information","infrastructure/cli-reference.html#server-management-commands","infrastructure/cli-reference.html#server-create---create-servers","infrastructure/cli-reference.html#server-delete---delete-servers","infrastructure/cli-reference.html#server-list---list-servers","infrastructure/cli-reference.html#server-ssh---ssh-access","infrastructure/cli-reference.html#server-price---cost-information","infrastructure/cli-reference.html#task-service-commands","infrastructure/cli-reference.html#taskserv-create---install-services","infrastructure/cli-reference.html#taskserv-delete---remove-services","infrastructure/cli-reference.html#taskserv-list---list-services","infrastructure/cli-reference.html#taskserv-generate---generate-configurations","infrastructure/cli-reference.html#taskserv-check-updates---version-management","infrastructure/cli-reference.html#cluster-management-commands","infrastructure/cli-reference.html#cluster-create---deploy-clusters","infrastructure/cli-reference.html#clust
er-delete---remove-clusters","infrastructure/cli-reference.html#cluster-list---list-clusters","infrastructure/cli-reference.html#cluster-scale---scale-clusters","infrastructure/cli-reference.html#infrastructure-commands","infrastructure/cli-reference.html#generate---generate-configurations","infrastructure/cli-reference.html#show---display-information","infrastructure/cli-reference.html#list---list-resources","infrastructure/cli-reference.html#validate---validate-configuration","infrastructure/cli-reference.html#configuration-commands","infrastructure/cli-reference.html#init---initialize-configuration","infrastructure/cli-reference.html#template---template-management","infrastructure/cli-reference.html#advanced-commands","infrastructure/cli-reference.html#nu---interactive-shell","infrastructure/cli-reference.html#sops---secret-management","infrastructure/cli-reference.html#context---context-management","infrastructure/cli-reference.html#workflow-commands","infrastructure/cli-reference.html#workflows---batch-operations","infrastructure/cli-reference.html#orchestrator---orchestrator-management","infrastructure/cli-reference.html#scripting-and-automation","infrastructure/cli-reference.html#exit-codes","infrastructure/cli-reference.html#environment-variables","infrastructure/cli-reference.html#batch-operations","infrastructure/cli-reference.html#json-output-processing","infrastructure/cli-reference.html#command-chaining-and-pipelines","infrastructure/cli-reference.html#sequential-operations","infrastructure/cli-reference.html#complex-workflows","infrastructure/cli-reference.html#integration-with-other-tools","infrastructure/cli-reference.html#cicd-integration","infrastructure/cli-reference.html#monitoring-integration","infrastructure/cli-reference.html#backup-automation","infrastructure/workspace-config-architecture.html#workspace-configuration-architecture","infrastructure/workspace-config-architecture.html#overview","infrastructure/workspace-config-architecture.html#c
ritical-design-principle","infrastructure/workspace-config-architecture.html#configuration-hierarchy","infrastructure/workspace-config-architecture.html#workspace-structure","infrastructure/dynamic-secrets-guide.html#dynamic-secrets-guide","infrastructure/dynamic-secrets-guide.html#quick-reference","infrastructure/dynamic-secrets-guide.html#quick-commands","infrastructure/dynamic-secrets-guide.html#secret-types","infrastructure/dynamic-secrets-guide.html#rest-api-endpoints","infrastructure/dynamic-secrets-guide.html#aws-sts-example","infrastructure/dynamic-secrets-guide.html#ssh-key-example","infrastructure/dynamic-secrets-guide.html#configuration","infrastructure/dynamic-secrets-guide.html#troubleshooting","infrastructure/dynamic-secrets-guide.html#provider-not-found","infrastructure/dynamic-secrets-guide.html#ttl-exceeds-maximum","infrastructure/dynamic-secrets-guide.html#secret-not-renewable","infrastructure/dynamic-secrets-guide.html#missing-required-parameter","infrastructure/dynamic-secrets-guide.html#security-features","infrastructure/dynamic-secrets-guide.html#support","infrastructure/mode-system-guide.html#mode-system-quick-reference","infrastructure/mode-system-guide.html#quick-start","infrastructure/workspace-guide.html#workspace-guide","infrastructure/workspace-guide.html#-workspace-switching-guide","infrastructure/workspace-guide.html#quick-start","infrastructure/workspace-guide.html#additional-workspace-resources","infrastructure/workspace-enforcement-guide.html#workspace-enforcement-and-version-tracking-guide","infrastructure/workspace-enforcement-guide.html#table-of-contents","infrastructure/workspace-enforcement-guide.html#overview","infrastructure/workspace-enforcement-guide.html#key-features","infrastructure/workspace-enforcement-guide.html#workspace-requirement","infrastructure/workspace-enforcement-guide.html#commands-that-require-workspace","infrastructure/workspace-enforcement-guide.html#commands-that-dont-require-workspace","infrastructure/wo
crumbs":8,"title":4},"1251":{"body":20,"breadcrumbs":6,"title":2},"1252":{"body":18,"breadcrumbs":5,"title":1},"1253":{"body":33,"breadcrumbs":5,"title":1},"1254":{"body":0,"breadcrumbs":7,"title":3},"1255":{"body":1047,"breadcrumbs":6,"title":2},"1256":{"body":452,"breadcrumbs":6,"title":2},"1257":{"body":21,"breadcrumbs":11,"title":7},"1258":{"body":29,"breadcrumbs":6,"title":2},"1259":{"body":0,"breadcrumbs":5,"title":1},"126":{"body":9,"breadcrumbs":2,"title":1},"1260":{"body":25,"breadcrumbs":5,"title":1},"1261":{"body":33,"breadcrumbs":6,"title":2},"1262":{"body":32,"breadcrumbs":7,"title":3},"1263":{"body":0,"breadcrumbs":6,"title":2},"1264":{"body":23,"breadcrumbs":7,"title":3},"1265":{"body":14,"breadcrumbs":6,"title":2},"1266":{"body":3277,"breadcrumbs":6,"title":2},"1267":{"body":16,"breadcrumbs":3,"title":2},"1268":{"body":33,"breadcrumbs":2,"title":1},"1269":{"body":96,"breadcrumbs":3,"title":2},"127":{"body":9,"breadcrumbs":2,"title":1},"1270":{"body":0,"breadcrumbs":3,"title":2},"1271":{"body":56,"breadcrumbs":3,"title":2},"1272":{"body":22,"breadcrumbs":3,"title":2},"1273":{"body":0,"breadcrumbs":3,"title":2},"1274":{"body":11,"breadcrumbs":3,"title":2},"1275":{"body":18,"breadcrumbs":3,"title":2},"1276":{"body":25,"breadcrumbs":4,"title":3},"1277":{"body":9,"breadcrumbs":4,"title":3},"1278":{"body":28,"breadcrumbs":4,"title":3},"1279":{"body":42,"breadcrumbs":4,"title":3},"128":{"body":0,"breadcrumbs":4,"title":3},"1280":{"body":28,"breadcrumbs":3,"title":2},"1281":{"body":32,"breadcrumbs":3,"title":2},"1282":{"body":12,"breadcrumbs":3,"title":2},"1283":{"body":0,"breadcrumbs":6,"title":4},"1284":{"body":15,"breadcrumbs":8,"title":6},"1285":{"body":41,"breadcrumbs":4,"title":2},"1286":{"body":27,"breadcrumbs":4,"title":2},"1287":{"body":5,"breadcrumbs":4,"title":2},"1288":{"body":20,"breadcrumbs":4,"title":2},"1289":{"body":38,"breadcrumbs":4,"title":2},"129":{"body":44,"breadcrumbs":4,"title":3},"1290":{"body":24,"breadcrumbs":4,"title":2},"1291":{
"body":46,"breadcrumbs":4,"title":2},"1292":{"body":26,"breadcrumbs":5,"title":3},"1293":{"body":15,"breadcrumbs":7,"title":5},"1294":{"body":0,"breadcrumbs":4,"title":2},"1295":{"body":28,"breadcrumbs":5,"title":3},"1296":{"body":27,"breadcrumbs":4,"title":2},"1297":{"body":24,"breadcrumbs":4,"title":2},"1298":{"body":31,"breadcrumbs":4,"title":2},"1299":{"body":25,"breadcrumbs":4,"title":2},"13":{"body":8,"breadcrumbs":4,"title":2},"130":{"body":96,"breadcrumbs":2,"title":1},"1300":{"body":0,"breadcrumbs":4,"title":2},"1301":{"body":6,"breadcrumbs":3,"title":1},"1302":{"body":30,"breadcrumbs":3,"title":1},"1303":{"body":5,"breadcrumbs":4,"title":2},"1304":{"body":27,"breadcrumbs":5,"title":3},"1305":{"body":0,"breadcrumbs":4,"title":2},"1306":{"body":14,"breadcrumbs":6,"title":4},"1307":{"body":16,"breadcrumbs":5,"title":3},"1308":{"body":12,"breadcrumbs":4,"title":2},"1309":{"body":0,"breadcrumbs":4,"title":2},"131":{"body":58,"breadcrumbs":2,"title":1},"1310":{"body":25,"breadcrumbs":4,"title":2},"1311":{"body":23,"breadcrumbs":4,"title":2},"1312":{"body":0,"breadcrumbs":4,"title":2},"1313":{"body":21,"breadcrumbs":4,"title":2},"1314":{"body":11,"breadcrumbs":4,"title":2},"1315":{"body":12,"breadcrumbs":3,"title":1},"1316":{"body":12,"breadcrumbs":4,"title":2},"1317":{"body":0,"breadcrumbs":3,"title":1},"1318":{"body":47,"breadcrumbs":4,"title":2},"1319":{"body":29,"breadcrumbs":5,"title":3},"132":{"body":0,"breadcrumbs":3,"title":2},"1320":{"body":0,"breadcrumbs":3,"title":1},"1321":{"body":30,"breadcrumbs":3,"title":1},"1322":{"body":28,"breadcrumbs":3,"title":1},"1323":{"body":7,"breadcrumbs":4,"title":2},"1324":{"body":19,"breadcrumbs":4,"title":3},"1325":{"body":43,"breadcrumbs":2,"title":1},"1326":{"body":388,"breadcrumbs":2,"title":1},"1327":{"body":0,"breadcrumbs":6,"title":4},"1328":{"body":16,"breadcrumbs":6,"title":4},"1329":{"body":0,"breadcrumbs":4,"title":2},"133":{"body":45,"breadcrumbs":3,"title":2},"1330":{"body":53,"breadcrumbs":6,"title":4},"1
331":{"body":69,"breadcrumbs":5,"title":3},"1332":{"body":35,"breadcrumbs":5,"title":3},"1333":{"body":35,"breadcrumbs":4,"title":2},"1334":{"body":0,"breadcrumbs":4,"title":2},"1335":{"body":41,"breadcrumbs":4,"title":2},"1336":{"body":33,"breadcrumbs":5,"title":3},"1337":{"body":42,"breadcrumbs":4,"title":2},"1338":{"body":0,"breadcrumbs":4,"title":2},"1339":{"body":27,"breadcrumbs":4,"title":2},"134":{"body":20,"breadcrumbs":3,"title":2},"1340":{"body":22,"breadcrumbs":4,"title":2},"1341":{"body":72,"breadcrumbs":4,"title":2},"1342":{"body":0,"breadcrumbs":4,"title":2},"1343":{"body":24,"breadcrumbs":4,"title":2},"1344":{"body":17,"breadcrumbs":4,"title":2},"1345":{"body":13,"breadcrumbs":4,"title":2},"1346":{"body":33,"breadcrumbs":4,"title":2},"1347":{"body":15,"breadcrumbs":3,"title":1},"1348":{"body":26,"breadcrumbs":4,"title":2},"1349":{"body":21,"breadcrumbs":4,"title":2},"135":{"body":6,"breadcrumbs":5,"title":4},"1350":{"body":16,"breadcrumbs":5,"title":3},"1351":{"body":62,"breadcrumbs":3,"title":1},"1352":{"body":367,"breadcrumbs":3,"title":1},"1353":{"body":10,"breadcrumbs":5,"title":3},"1354":{"body":22,"breadcrumbs":4,"title":2},"1355":{"body":0,"breadcrumbs":4,"title":2},"1356":{"body":43,"breadcrumbs":4,"title":2},"1357":{"body":2188,"breadcrumbs":4,"title":2},"1358":{"body":293,"breadcrumbs":4,"title":2},"1359":{"body":0,"breadcrumbs":7,"title":4},"136":{"body":13,"breadcrumbs":2,"title":1},"1360":{"body":34,"breadcrumbs":4,"title":1},"1361":{"body":0,"breadcrumbs":5,"title":2},"1362":{"body":1449,"breadcrumbs":7,"title":4},"1363":{"body":0,"breadcrumbs":10,"title":7},"1364":{"body":30,"breadcrumbs":10,"title":7},"1365":{"body":50,"breadcrumbs":5,"title":2},"1366":{"body":66,"breadcrumbs":6,"title":3},"1367":{"body":63,"breadcrumbs":6,"title":3},"1368":{"body":26,"breadcrumbs":8,"title":5},"1369":{"body":38,"breadcrumbs":5,"title":2},"137":{"body":10,"breadcrumbs":2,"title":1},"1370":{"body":0,"breadcrumbs":8,"title":6},"1371":{"body":14,"breadcru
mbs":8,"title":6},"1372":{"body":54,"breadcrumbs":4,"title":2},"1373":{"body":0,"breadcrumbs":5,"title":3},"1374":{"body":30,"breadcrumbs":3,"title":1},"1375":{"body":29,"breadcrumbs":3,"title":1},"1376":{"body":30,"breadcrumbs":3,"title":1},"1377":{"body":21,"breadcrumbs":3,"title":1},"1378":{"body":30,"breadcrumbs":3,"title":1},"1379":{"body":36,"breadcrumbs":3,"title":1},"138":{"body":5,"breadcrumbs":3,"title":2},"1380":{"body":13,"breadcrumbs":3,"title":1},"1381":{"body":26,"breadcrumbs":4,"title":2},"1382":{"body":148,"breadcrumbs":6,"title":4},"1383":{"body":0,"breadcrumbs":5,"title":3},"1384":{"body":34,"breadcrumbs":7,"title":5},"1385":{"body":26,"breadcrumbs":4,"title":2},"1386":{"body":20,"breadcrumbs":4,"title":2},"1387":{"body":10,"breadcrumbs":4,"title":2},"1388":{"body":29,"breadcrumbs":6,"title":4},"1389":{"body":39,"breadcrumbs":5,"title":3},"139":{"body":7,"breadcrumbs":3,"title":1},"1390":{"body":12,"breadcrumbs":5,"title":3},"1391":{"body":0,"breadcrumbs":4,"title":2},"1392":{"body":621,"breadcrumbs":7,"title":5},"1393":{"body":9,"breadcrumbs":6,"title":3},"1394":{"body":19,"breadcrumbs":4,"title":1},"1395":{"body":0,"breadcrumbs":5,"title":2},"1396":{"body":1181,"breadcrumbs":6,"title":3},"1397":{"body":0,"breadcrumbs":7,"title":4},"1398":{"body":22,"breadcrumbs":9,"title":6},"1399":{"body":51,"breadcrumbs":5,"title":2},"14":{"body":15,"breadcrumbs":4,"title":2},"140":{"body":19,"breadcrumbs":3,"title":1},"1400":{"body":333,"breadcrumbs":6,"title":3},"1401":{"body":12,"breadcrumbs":4,"title":2},"1402":{"body":17,"breadcrumbs":4,"title":2},"1403":{"body":12,"breadcrumbs":4,"title":2},"1404":{"body":50,"breadcrumbs":4,"title":2},"1405":{"body":30,"breadcrumbs":4,"title":2},"1406":{"body":0,"breadcrumbs":4,"title":2},"1407":{"body":43,"breadcrumbs":6,"title":4},"1408":{"body":36,"breadcrumbs":6,"title":4},"1409":{"body":38,"breadcrumbs":5,"title":3},"141":{"body":17,"breadcrumbs":6,"title":4},"1410":{"body":0,"breadcrumbs":5,"title":3},"1411":{"body
":90,"breadcrumbs":6,"title":4},"1412":{"body":64,"breadcrumbs":6,"title":4},"1413":{"body":57,"breadcrumbs":6,"title":4},"1414":{"body":71,"breadcrumbs":6,"title":4},"1415":{"body":54,"breadcrumbs":6,"title":4},"1416":{"body":0,"breadcrumbs":5,"title":3},"1417":{"body":80,"breadcrumbs":6,"title":4},"1418":{"body":59,"breadcrumbs":6,"title":4},"1419":{"body":57,"breadcrumbs":6,"title":4},"142":{"body":7,"breadcrumbs":7,"title":5},"1420":{"body":62,"breadcrumbs":6,"title":4},"1421":{"body":63,"breadcrumbs":7,"title":5},"1422":{"body":0,"breadcrumbs":5,"title":3},"1423":{"body":60,"breadcrumbs":6,"title":4},"1424":{"body":48,"breadcrumbs":6,"title":4},"1425":{"body":44,"breadcrumbs":6,"title":4},"1426":{"body":64,"breadcrumbs":6,"title":4},"1427":{"body":0,"breadcrumbs":4,"title":2},"1428":{"body":77,"breadcrumbs":5,"title":3},"1429":{"body":72,"breadcrumbs":5,"title":3},"143":{"body":15,"breadcrumbs":6,"title":4},"1430":{"body":49,"breadcrumbs":5,"title":3},"1431":{"body":70,"breadcrumbs":5,"title":3},"1432":{"body":0,"breadcrumbs":4,"title":2},"1433":{"body":45,"breadcrumbs":5,"title":3},"1434":{"body":45,"breadcrumbs":5,"title":3},"1435":{"body":0,"breadcrumbs":4,"title":2},"1436":{"body":41,"breadcrumbs":5,"title":3},"1437":{"body":53,"breadcrumbs":5,"title":3},"1438":{"body":51,"breadcrumbs":5,"title":3},"1439":{"body":0,"breadcrumbs":4,"title":2},"144":{"body":19,"breadcrumbs":7,"title":5},"1440":{"body":60,"breadcrumbs":5,"title":3},"1441":{"body":31,"breadcrumbs":5,"title":3},"1442":{"body":0,"breadcrumbs":4,"title":2},"1443":{"body":23,"breadcrumbs":4,"title":2},"1444":{"body":24,"breadcrumbs":4,"title":2},"1445":{"body":63,"breadcrumbs":4,"title":2},"1446":{"body":41,"breadcrumbs":5,"title":3},"1447":{"body":0,"breadcrumbs":5,"title":3},"1448":{"body":36,"breadcrumbs":4,"title":2},"1449":{"body":65,"breadcrumbs":4,"title":2},"145":{"body":14,"breadcrumbs":5,"title":3},"1450":{"body":0,"breadcrumbs":4,"title":2},"1451":{"body":31,"breadcrumbs":4,"title":2},"1
452":{"body":40,"breadcrumbs":4,"title":2},"1453":{"body":59,"breadcrumbs":4,"title":2},"1454":{"body":8,"breadcrumbs":6,"title":3},"1455":{"body":19,"breadcrumbs":4,"title":1},"1456":{"body":17,"breadcrumbs":6,"title":3},"1457":{"body":26,"breadcrumbs":5,"title":2},"1458":{"body":834,"breadcrumbs":5,"title":2},"1459":{"body":19,"breadcrumbs":6,"title":3},"146":{"body":33,"breadcrumbs":7,"title":5},"1460":{"body":9,"breadcrumbs":5,"title":2},"1461":{"body":59,"breadcrumbs":5,"title":2},"1462":{"body":34,"breadcrumbs":5,"title":2},"1463":{"body":25,"breadcrumbs":6,"title":3},"1464":{"body":36,"breadcrumbs":6,"title":3},"1465":{"body":33,"breadcrumbs":6,"title":3},"1466":{"body":20,"breadcrumbs":4,"title":1},"1467":{"body":0,"breadcrumbs":4,"title":1},"1468":{"body":3,"breadcrumbs":5,"title":2},"1469":{"body":5,"breadcrumbs":6,"title":3},"147":{"body":35,"breadcrumbs":8,"title":6},"1470":{"body":4,"breadcrumbs":5,"title":2},"1471":{"body":7,"breadcrumbs":6,"title":3},"1472":{"body":18,"breadcrumbs":5,"title":2},"1473":{"body":9,"breadcrumbs":4,"title":1},"1474":{"body":6,"breadcrumbs":7,"title":4},"1475":{"body":1068,"breadcrumbs":5,"title":2},"1476":{"body":6,"breadcrumbs":4,"title":2},"1477":{"body":29,"breadcrumbs":5,"title":3},"1478":{"body":24,"breadcrumbs":4,"title":2},"1479":{"body":21,"breadcrumbs":5,"title":3},"148":{"body":34,"breadcrumbs":6,"title":4},"1480":{"body":10,"breadcrumbs":8,"title":5},"1481":{"body":12,"breadcrumbs":5,"title":2},"1482":{"body":38,"breadcrumbs":4,"title":1},"1483":{"body":33,"breadcrumbs":5,"title":2},"1484":{"body":0,"breadcrumbs":5,"title":2},"1485":{"body":28,"breadcrumbs":6,"title":3},"1486":{"body":30,"breadcrumbs":7,"title":4},"1487":{"body":1141,"breadcrumbs":6,"title":3},"1488":{"body":7,"breadcrumbs":7,"title":4},"1489":{"body":19,"breadcrumbs":4,"title":1},"149":{"body":28,"breadcrumbs":6,"title":4},"1490":{"body":0,"breadcrumbs":5,"title":2},"1491":{"body":137,"breadcrumbs":7,"title":4},"1492":{"body":0,"breadcrumbs":5,
"title":2},"1493":{"body":779,"breadcrumbs":8,"title":5},"1494":{"body":0,"breadcrumbs":7,"title":4},"1495":{"body":14,"breadcrumbs":4,"title":1},"1496":{"body":41,"breadcrumbs":5,"title":2},"1497":{"body":0,"breadcrumbs":4,"title":1},"1498":{"body":409,"breadcrumbs":6,"title":3},"1499":{"body":354,"breadcrumbs":6,"title":3},"15":{"body":0,"breadcrumbs":4,"title":2},"150":{"body":31,"breadcrumbs":6,"title":4},"1500":{"body":14,"breadcrumbs":6,"title":3},"1501":{"body":51,"breadcrumbs":4,"title":1},"1502":{"body":0,"breadcrumbs":5,"title":2},"1503":{"body":1101,"breadcrumbs":5,"title":2},"1504":{"body":13,"breadcrumbs":4,"title":1},"1505":{"body":0,"breadcrumbs":5,"title":2},"1506":{"body":627,"breadcrumbs":5,"title":2},"1507":{"body":14,"breadcrumbs":3,"title":2},"1508":{"body":23,"breadcrumbs":3,"title":2},"1509":{"body":0,"breadcrumbs":3,"title":2},"151":{"body":45,"breadcrumbs":6,"title":4},"1510":{"body":1390,"breadcrumbs":3,"title":2},"1511":{"body":9,"breadcrumbs":7,"title":4},"1512":{"body":23,"breadcrumbs":4,"title":1},"1513":{"body":0,"breadcrumbs":5,"title":2},"1514":{"body":20,"breadcrumbs":5,"title":2},"1515":{"body":12,"breadcrumbs":5,"title":2},"1516":{"body":26,"breadcrumbs":5,"title":2},"1517":{"body":14,"breadcrumbs":5,"title":2},"1518":{"body":14,"breadcrumbs":7,"title":4},"1519":{"body":0,"breadcrumbs":5,"title":2},"152":{"body":25,"breadcrumbs":6,"title":4},"1520":{"body":1865,"breadcrumbs":6,"title":3},"1521":{"body":10,"breadcrumbs":6,"title":3},"1522":{"body":45,"breadcrumbs":4,"title":1},"1523":{"body":15,"breadcrumbs":5,"title":2},"1524":{"body":0,"breadcrumbs":4,"title":1},"1525":{"body":40,"breadcrumbs":5,"title":2},"1526":{"body":1133,"breadcrumbs":5,"title":2},"1527":{"body":638,"breadcrumbs":7,"title":4},"1528":{"body":0,"breadcrumbs":6,"title":4},"1529":{"body":13,"breadcrumbs":6,"title":4},"153":{"body":0,"breadcrumbs":3,"title":1},"1530":{"body":0,"breadcrumbs":5,"title":3},"1531":{"body":25,"breadcrumbs":5,"title":3},"1532":{"body":
23,"breadcrumbs":5,"title":3},"1533":{"body":22,"breadcrumbs":7,"title":5},"1534":{"body":33,"breadcrumbs":5,"title":3},"1535":{"body":23,"breadcrumbs":7,"title":5},"1536":{"body":26,"breadcrumbs":5,"title":3},"1537":{"body":24,"breadcrumbs":7,"title":5},"1538":{"body":24,"breadcrumbs":5,"title":3},"1539":{"body":22,"breadcrumbs":6,"title":4},"154":{"body":13,"breadcrumbs":5,"title":3},"1540":{"body":10,"breadcrumbs":5,"title":3},"1541":{"body":12,"breadcrumbs":5,"title":3},"1542":{"body":11,"breadcrumbs":4,"title":2},"1543":{"body":21,"breadcrumbs":4,"title":2},"1544":{"body":93,"breadcrumbs":4,"title":2},"1545":{"body":27,"breadcrumbs":3,"title":1},"1546":{"body":17,"breadcrumbs":3,"title":1},"1547":{"body":23,"breadcrumbs":3,"title":1},"1548":{"body":20,"breadcrumbs":4,"title":2},"1549":{"body":9,"breadcrumbs":7,"title":4},"155":{"body":17,"breadcrumbs":4,"title":2},"1550":{"body":26,"breadcrumbs":4,"title":1},"1551":{"body":44,"breadcrumbs":4,"title":1},"1552":{"body":938,"breadcrumbs":5,"title":2},"1553":{"body":23,"breadcrumbs":7,"title":4},"1554":{"body":0,"breadcrumbs":4,"title":1},"1555":{"body":48,"breadcrumbs":4,"title":1},"1556":{"body":28,"breadcrumbs":5,"title":2},"1557":{"body":0,"breadcrumbs":5,"title":2},"1558":{"body":22,"breadcrumbs":6,"title":3},"1559":{"body":39,"breadcrumbs":7,"title":4},"156":{"body":14,"breadcrumbs":5,"title":3},"1560":{"body":69,"breadcrumbs":7,"title":4},"1561":{"body":0,"breadcrumbs":4,"title":1},"1562":{"body":51,"breadcrumbs":5,"title":2},"1563":{"body":24,"breadcrumbs":5,"title":2},"1564":{"body":0,"breadcrumbs":4,"title":1},"1565":{"body":25,"breadcrumbs":5,"title":2},"1566":{"body":25,"breadcrumbs":5,"title":2},"1567":{"body":32,"breadcrumbs":6,"title":3},"1568":{"body":17,"breadcrumbs":5,"title":2},"1569":{"body":29,"breadcrumbs":5,"title":2},"157":{"body":6,"breadcrumbs":4,"title":2},"1570":{"body":0,"breadcrumbs":5,"title":2},"1571":{"body":29,"breadcrumbs":5,"title":2},"1572":{"body":35,"breadcrumbs":5,"title":2},
"1573":{"body":38,"breadcrumbs":5,"title":2},"1574":{"body":28,"breadcrumbs":5,"title":2},"1575":{"body":0,"breadcrumbs":4,"title":1},"1576":{"body":26,"breadcrumbs":5,"title":2},"1577":{"body":29,"breadcrumbs":5,"title":2},"1578":{"body":60,"breadcrumbs":6,"title":3},"1579":{"body":59,"breadcrumbs":5,"title":2},"158":{"body":7,"breadcrumbs":4,"title":2},"1580":{"body":31,"breadcrumbs":4,"title":1},"1581":{"body":0,"breadcrumbs":6,"title":3},"1582":{"body":19,"breadcrumbs":5,"title":2},"1583":{"body":17,"breadcrumbs":4,"title":1},"1584":{"body":16,"breadcrumbs":5,"title":2},"1585":{"body":18,"breadcrumbs":5,"title":2},"1586":{"body":16,"breadcrumbs":5,"title":2},"1587":{"body":0,"breadcrumbs":5,"title":2},"1588":{"body":30,"breadcrumbs":5,"title":2},"1589":{"body":23,"breadcrumbs":5,"title":2},"159":{"body":9,"breadcrumbs":4,"title":2},"1590":{"body":37,"breadcrumbs":5,"title":2},"1591":{"body":0,"breadcrumbs":5,"title":2},"1592":{"body":9,"breadcrumbs":5,"title":2},"1593":{"body":9,"breadcrumbs":5,"title":2},"1594":{"body":9,"breadcrumbs":5,"title":2},"1595":{"body":0,"breadcrumbs":5,"title":2},"1596":{"body":29,"breadcrumbs":5,"title":2},"1597":{"body":26,"breadcrumbs":5,"title":2},"1598":{"body":0,"breadcrumbs":5,"title":2},"1599":{"body":30,"breadcrumbs":6,"title":3},"16":{"body":18,"breadcrumbs":5,"title":3},"160":{"body":19,"breadcrumbs":3,"title":1},"1600":{"body":30,"breadcrumbs":7,"title":4},"1601":{"body":16,"breadcrumbs":6,"title":3},"1602":{"body":19,"breadcrumbs":5,"title":2},"1603":{"body":22,"breadcrumbs":4,"title":1},"1604":{"body":0,"breadcrumbs":10,"title":5},"1605":{"body":0,"breadcrumbs":7,"title":2},"1606":{"body":794,"breadcrumbs":9,"title":4},"1607":{"body":33,"breadcrumbs":8,"title":3},"1608":{"body":24,"breadcrumbs":7,"title":2},"1609":{"body":0,"breadcrumbs":7,"title":2},"161":{"body":21,"breadcrumbs":6,"title":4},"1610":{"body":48,"breadcrumbs":6,"title":1},"1611":{"body":42,"breadcrumbs":7,"title":2},"1612":{"body":0,"breadcrumbs":7,"titl
e":2},"1613":{"body":309,"breadcrumbs":7,"title":2},"1614":{"body":14,"breadcrumbs":7,"title":4},"1615":{"body":30,"breadcrumbs":5,"title":2},"1616":{"body":44,"breadcrumbs":4,"title":1},"1617":{"body":4971,"breadcrumbs":5,"title":2},"1618":{"body":6,"breadcrumbs":7,"title":4},"1619":{"body":29,"breadcrumbs":4,"title":1},"162":{"body":42,"breadcrumbs":6,"title":4},"1620":{"body":47,"breadcrumbs":5,"title":2},"1621":{"body":0,"breadcrumbs":4,"title":1},"1622":{"body":10,"breadcrumbs":4,"title":1},"1623":{"body":1835,"breadcrumbs":5,"title":2},"1624":{"body":22,"breadcrumbs":12,"title":9},"1625":{"body":14,"breadcrumbs":4,"title":1},"1626":{"body":31,"breadcrumbs":5,"title":2},"1627":{"body":53,"breadcrumbs":6,"title":3},"1628":{"body":28,"breadcrumbs":5,"title":2},"1629":{"body":17,"breadcrumbs":4,"title":1},"163":{"body":35,"breadcrumbs":8,"title":6},"1630":{"body":41,"breadcrumbs":4,"title":1},"1631":{"body":0,"breadcrumbs":7,"title":4},"1632":{"body":32,"breadcrumbs":4,"title":1},"1633":{"body":0,"breadcrumbs":4,"title":1},"1634":{"body":8,"breadcrumbs":4,"title":1},"1635":{"body":17,"breadcrumbs":5,"title":2},"1636":{"body":36,"breadcrumbs":5,"title":2},"1637":{"body":0,"breadcrumbs":4,"title":1},"1638":{"body":62,"breadcrumbs":5,"title":2},"1639":{"body":85,"breadcrumbs":5,"title":2},"164":{"body":30,"breadcrumbs":7,"title":5},"1640":{"body":132,"breadcrumbs":5,"title":2},"1641":{"body":33,"breadcrumbs":5,"title":2},"1642":{"body":12,"breadcrumbs":5,"title":2},"1643":{"body":8,"breadcrumbs":6,"title":3},"1644":{"body":40,"breadcrumbs":5,"title":2},"1645":{"body":40,"breadcrumbs":5,"title":2},"1646":{"body":0,"breadcrumbs":4,"title":1},"1647":{"body":16,"breadcrumbs":6,"title":3},"1648":{"body":28,"breadcrumbs":6,"title":3},"1649":{"body":14,"breadcrumbs":6,"title":3},"165":{"body":26,"breadcrumbs":6,"title":4},"1650":{"body":37,"breadcrumbs":6,"title":3},"1651":{"body":0,"breadcrumbs":5,"title":2},"1652":{"body":18,"breadcrumbs":6,"title":3},"1653":{"body":19,"b
readcrumbs":6,"title":3},"1654":{"body":39,"breadcrumbs":6,"title":3},"1655":{"body":40,"breadcrumbs":6,"title":3},"1656":{"body":28,"breadcrumbs":4,"title":1},"1657":{"body":26,"breadcrumbs":5,"title":2},"1658":{"body":25,"breadcrumbs":4,"title":1},"1659":{"body":9,"breadcrumbs":8,"title":5},"166":{"body":33,"breadcrumbs":8,"title":6},"1660":{"body":34,"breadcrumbs":4,"title":1},"1661":{"body":0,"breadcrumbs":5,"title":2},"1662":{"body":1080,"breadcrumbs":7,"title":4},"1663":{"body":0,"breadcrumbs":6,"title":3},"1664":{"body":0,"breadcrumbs":8,"title":4},"1665":{"body":12,"breadcrumbs":7,"title":5},"1666":{"body":28,"breadcrumbs":4,"title":2},"1667":{"body":422,"breadcrumbs":3,"title":1},"1668":{"body":17,"breadcrumbs":6,"title":3},"1669":{"body":14,"breadcrumbs":5,"title":2},"167":{"body":34,"breadcrumbs":7,"title":5},"1670":{"body":34,"breadcrumbs":4,"title":1},"1671":{"body":1337,"breadcrumbs":4,"title":1},"1672":{"body":0,"breadcrumbs":8,"title":4},"1673":{"body":12,"breadcrumbs":5,"title":1},"1674":{"body":48,"breadcrumbs":6,"title":2},"1675":{"body":2717,"breadcrumbs":7,"title":3},"1676":{"body":9,"breadcrumbs":7,"title":4},"1677":{"body":13,"breadcrumbs":5,"title":2},"1678":{"body":40,"breadcrumbs":4,"title":1},"1679":{"body":30,"breadcrumbs":5,"title":2},"168":{"body":34,"breadcrumbs":6,"title":4},"1680":{"body":0,"breadcrumbs":5,"title":2},"1681":{"body":1103,"breadcrumbs":4,"title":1},"1682":{"body":0,"breadcrumbs":4,"title":1},"1683":{"body":147,"breadcrumbs":6,"title":3},"1684":{"body":169,"breadcrumbs":6,"title":3},"1685":{"body":18,"breadcrumbs":10,"title":7},"1686":{"body":58,"breadcrumbs":4,"title":1},"1687":{"body":0,"breadcrumbs":6,"title":3},"1688":{"body":1374,"breadcrumbs":6,"title":3},"1689":{"body":14,"breadcrumbs":11,"title":7},"169":{"body":0,"breadcrumbs":5,"title":3},"1690":{"body":30,"breadcrumbs":6,"title":2},"1691":{"body":63,"breadcrumbs":6,"title":2},"1692":{"body":0,"breadcrumbs":8,"title":4},"1693":{"body":1751,"breadcrumbs":8,"tit
le":4},"1694":{"body":13,"breadcrumbs":6,"title":3},"1695":{"body":22,"breadcrumbs":5,"title":2},"1696":{"body":42,"breadcrumbs":4,"title":1},"1697":{"body":0,"breadcrumbs":5,"title":2},"1698":{"body":20,"breadcrumbs":7,"title":4},"1699":{"body":28,"breadcrumbs":6,"title":3},"17":{"body":25,"breadcrumbs":4,"title":2},"170":{"body":36,"breadcrumbs":6,"title":4},"1700":{"body":37,"breadcrumbs":5,"title":2},"1701":{"body":0,"breadcrumbs":4,"title":1},"1702":{"body":54,"breadcrumbs":5,"title":2},"1703":{"body":22,"breadcrumbs":5,"title":2},"1704":{"body":28,"breadcrumbs":5,"title":2},"1705":{"body":0,"breadcrumbs":4,"title":1},"1706":{"body":18,"breadcrumbs":4,"title":1},"1707":{"body":0,"breadcrumbs":4,"title":1},"1708":{"body":8,"breadcrumbs":5,"title":2},"1709":{"body":6,"breadcrumbs":4,"title":1},"171":{"body":14,"breadcrumbs":8,"title":6},"1710":{"body":8,"breadcrumbs":5,"title":2},"1711":{"body":9,"breadcrumbs":6,"title":3},"1712":{"body":22,"breadcrumbs":4,"title":1},"1713":{"body":153,"breadcrumbs":4,"title":1},"1714":{"body":763,"breadcrumbs":5,"title":2},"1715":{"body":0,"breadcrumbs":6,"title":3},"1716":{"body":0,"breadcrumbs":7,"title":4},"1717":{"body":20,"breadcrumbs":10,"title":7},"1718":{"body":64,"breadcrumbs":5,"title":2},"1719":{"body":0,"breadcrumbs":6,"title":3},"172":{"body":10,"breadcrumbs":6,"title":4},"1720":{"body":287,"breadcrumbs":7,"title":4},"1721":{"body":176,"breadcrumbs":4,"title":1},"1722":{"body":9,"breadcrumbs":7,"title":4},"1723":{"body":16,"breadcrumbs":4,"title":1},"1724":{"body":0,"breadcrumbs":5,"title":2},"1725":{"body":1109,"breadcrumbs":6,"title":3},"1726":{"body":10,"breadcrumbs":4,"title":2},"1727":{"body":15,"breadcrumbs":4,"title":2},"1728":{"body":0,"breadcrumbs":5,"title":3},"1729":{"body":2122,"breadcrumbs":5,"title":3},"173":{"body":88,"breadcrumbs":4,"title":2},"1730":{"body":15,"breadcrumbs":6,"title":5},"1731":{"body":62,"breadcrumbs":3,"title":2},"1732":{"body":46,"breadcrumbs":2,"title":1},"1733":{"body":13,"bread
crumbs":3,"title":2},"1734":{"body":8,"breadcrumbs":5,"title":4},"1735":{"body":2312,"breadcrumbs":4,"title":3},"1736":{"body":24,"breadcrumbs":3,"title":2},"1737":{"body":88,"breadcrumbs":2,"title":1},"1738":{"body":13,"breadcrumbs":5,"title":3},"1739":{"body":15,"breadcrumbs":3,"title":1},"174":{"body":0,"breadcrumbs":3,"title":1},"1740":{"body":0,"breadcrumbs":4,"title":2},"1741":{"body":1826,"breadcrumbs":7,"title":5},"1742":{"body":15,"breadcrumbs":4,"title":2},"1743":{"body":15,"breadcrumbs":3,"title":1},"1744":{"body":0,"breadcrumbs":4,"title":2},"1745":{"body":1680,"breadcrumbs":4,"title":2},"1746":{"body":13,"breadcrumbs":8,"title":5},"1747":{"body":25,"breadcrumbs":4,"title":1},"1748":{"body":0,"breadcrumbs":8,"title":5},"1749":{"body":19,"breadcrumbs":8,"title":5},"175":{"body":17,"breadcrumbs":5,"title":3},"1750":{"body":27,"breadcrumbs":7,"title":4},"1751":{"body":49,"breadcrumbs":7,"title":4},"1752":{"body":21,"breadcrumbs":7,"title":4},"1753":{"body":82,"breadcrumbs":7,"title":4},"1754":{"body":0,"breadcrumbs":6,"title":3},"1755":{"body":78,"breadcrumbs":6,"title":3},"1756":{"body":63,"breadcrumbs":6,"title":3},"1757":{"body":64,"breadcrumbs":6,"title":3},"1758":{"body":0,"breadcrumbs":5,"title":2},"1759":{"body":16,"breadcrumbs":6,"title":3},"176":{"body":17,"breadcrumbs":6,"title":4},"1760":{"body":16,"breadcrumbs":6,"title":3},"1761":{"body":0,"breadcrumbs":5,"title":2},"1762":{"body":55,"breadcrumbs":6,"title":3},"1763":{"body":111,"breadcrumbs":6,"title":3},"1764":{"body":0,"breadcrumbs":6,"title":3},"1765":{"body":17,"breadcrumbs":6,"title":3},"1766":{"body":10,"breadcrumbs":5,"title":2},"1767":{"body":12,"breadcrumbs":5,"title":2},"1768":{"body":11,"breadcrumbs":5,"title":2},"1769":{"body":13,"breadcrumbs":5,"title":2},"177":{"body":24,"breadcrumbs":5,"title":3},"1770":{"body":0,"breadcrumbs":6,"title":3},"1771":{"body":26,"breadcrumbs":5,"title":2},"1772":{"body":25,"breadcrumbs":6,"title":3},"1773":{"body":25,"breadcrumbs":5,"title":2},"1774"
:{"body":30,"breadcrumbs":5,"title":2},"1775":{"body":24,"breadcrumbs":4,"title":1},"1776":{"body":0,"breadcrumbs":7,"title":5},"1777":{"body":12,"breadcrumbs":8,"title":6},"1778":{"body":53,"breadcrumbs":4,"title":2},"1779":{"body":38,"breadcrumbs":4,"title":2},"178":{"body":10,"breadcrumbs":4,"title":2},"1780":{"body":43,"breadcrumbs":4,"title":2},"1781":{"body":34,"breadcrumbs":5,"title":3},"1782":{"body":105,"breadcrumbs":4,"title":2},"1783":{"body":20,"breadcrumbs":5,"title":3},"1784":{"body":29,"breadcrumbs":4,"title":2},"1785":{"body":19,"breadcrumbs":4,"title":2},"1786":{"body":0,"breadcrumbs":8,"title":4},"1787":{"body":1086,"breadcrumbs":8,"title":4},"1788":{"body":8,"breadcrumbs":6,"title":4},"1789":{"body":16,"breadcrumbs":5,"title":3},"179":{"body":7,"breadcrumbs":4,"title":2},"1790":{"body":31,"breadcrumbs":8,"title":6},"1791":{"body":30,"breadcrumbs":5,"title":3},"1792":{"body":6,"breadcrumbs":6,"title":3},"1793":{"body":31,"breadcrumbs":7,"title":4},"1794":{"body":48,"breadcrumbs":6,"title":3},"1795":{"body":85,"breadcrumbs":7,"title":4},"1796":{"body":53,"breadcrumbs":6,"title":3},"1797":{"body":0,"breadcrumbs":5,"title":2},"1798":{"body":20,"breadcrumbs":6,"title":3},"1799":{"body":55,"breadcrumbs":6,"title":3},"18":{"body":13,"breadcrumbs":4,"title":2},"180":{"body":8,"breadcrumbs":2,"title":1},"1800":{"body":0,"breadcrumbs":5,"title":2},"1801":{"body":30,"breadcrumbs":5,"title":2},"1802":{"body":31,"breadcrumbs":5,"title":2},"1803":{"body":24,"breadcrumbs":5,"title":2},"1804":{"body":0,"breadcrumbs":5,"title":2},"1805":{"body":52,"breadcrumbs":5,"title":2},"1806":{"body":42,"breadcrumbs":6,"title":3},"1807":{"body":0,"breadcrumbs":6,"title":3},"1808":{"body":49,"breadcrumbs":6,"title":3},"1809":{"body":0,"breadcrumbs":6,"title":3},"181":{"body":14,"breadcrumbs":2,"title":1},"1810":{"body":50,"breadcrumbs":7,"title":4},"1811":{"body":34,"breadcrumbs":5,"title":2},"1812":{"body":0,"breadcrumbs":5,"title":2},"1813":{"body":40,"breadcrumbs":5,"title"
:2},"1814":{"body":37,"breadcrumbs":5,"title":2},"1815":{"body":0,"breadcrumbs":6,"title":3},"1816":{"body":44,"breadcrumbs":6,"title":3},"1817":{"body":39,"breadcrumbs":6,"title":3},"1818":{"body":38,"breadcrumbs":6,"title":3},"1819":{"body":31,"breadcrumbs":5,"title":2},"182":{"body":27,"breadcrumbs":5,"title":4},"1820":{"body":0,"breadcrumbs":5,"title":2},"1821":{"body":56,"breadcrumbs":6,"title":3},"1822":{"body":58,"breadcrumbs":6,"title":3},"1823":{"body":38,"breadcrumbs":5,"title":2},"1824":{"body":64,"breadcrumbs":5,"title":2},"1825":{"body":53,"breadcrumbs":7,"title":4},"1826":{"body":0,"breadcrumbs":6,"title":3},"1827":{"body":32,"breadcrumbs":6,"title":3},"1828":{"body":38,"breadcrumbs":8,"title":5},"1829":{"body":51,"breadcrumbs":6,"title":3},"183":{"body":47,"breadcrumbs":5,"title":4},"1830":{"body":72,"breadcrumbs":5,"title":2},"1831":{"body":41,"breadcrumbs":5,"title":2},"1832":{"body":13,"breadcrumbs":7,"title":5},"1833":{"body":0,"breadcrumbs":2,"title":0},"1834":{"body":34,"breadcrumbs":5,"title":3},"1835":{"body":568,"breadcrumbs":4,"title":2},"1836":{"body":0,"breadcrumbs":6,"title":4},"1837":{"body":74,"breadcrumbs":4,"title":2},"1838":{"body":97,"breadcrumbs":4,"title":2},"1839":{"body":151,"breadcrumbs":4,"title":2},"184":{"body":52,"breadcrumbs":6,"title":5},"1840":{"body":27,"breadcrumbs":4,"title":2},"1841":{"body":0,"breadcrumbs":4,"title":2},"1842":{"body":8,"breadcrumbs":5,"title":3},"1843":{"body":27,"breadcrumbs":5,"title":3},"1844":{"body":14,"breadcrumbs":5,"title":3},"1845":{"body":22,"breadcrumbs":4,"title":2},"1846":{"body":19,"breadcrumbs":4,"title":2},"1847":{"body":51,"breadcrumbs":3,"title":1},"1848":{"body":12,"breadcrumbs":4,"title":2},"1849":{"body":33,"breadcrumbs":3,"title":1},"185":{"body":45,"breadcrumbs":6,"title":5},"1850":{"body":6,"breadcrumbs":6,"title":4},"1851":{"body":711,"breadcrumbs":3,"title":1},"1852":{"body":0,"breadcrumbs":8,"title":5},"1853":{"body":19,"breadcrumbs":5,"title":2},"1854":{"body":0,"breadcru
mbs":5,"title":2},"1855":{"body":380,"breadcrumbs":7,"title":4},"1856":{"body":0,"breadcrumbs":5,"title":3},"1857":{"body":13,"breadcrumbs":3,"title":1},"1858":{"body":0,"breadcrumbs":5,"title":3},"1859":{"body":1039,"breadcrumbs":6,"title":4},"186":{"body":3,"breadcrumbs":7,"title":6},"1860":{"body":0,"breadcrumbs":6,"title":3},"187":{"body":11,"breadcrumbs":2,"title":1},"188":{"body":19,"breadcrumbs":3,"title":2},"189":{"body":12,"breadcrumbs":3,"title":2},"19":{"body":21,"breadcrumbs":3,"title":1},"190":{"body":30,"breadcrumbs":6,"title":5},"191":{"body":21,"breadcrumbs":5,"title":4},"192":{"body":0,"breadcrumbs":4,"title":3},"193":{"body":18,"breadcrumbs":5,"title":4},"194":{"body":28,"breadcrumbs":3,"title":2},"195":{"body":35,"breadcrumbs":3,"title":2},"196":{"body":0,"breadcrumbs":4,"title":3},"197":{"body":15,"breadcrumbs":4,"title":3},"198":{"body":19,"breadcrumbs":3,"title":2},"199":{"body":15,"breadcrumbs":4,"title":3},"2":{"body":31,"breadcrumbs":3,"title":2},"20":{"body":989,"breadcrumbs":5,"title":3},"200":{"body":22,"breadcrumbs":4,"title":3},"201":{"body":0,"breadcrumbs":3,"title":2},"202":{"body":30,"breadcrumbs":4,"title":3},"203":{"body":14,"breadcrumbs":3,"title":2},"204":{"body":0,"breadcrumbs":3,"title":2},"205":{"body":16,"breadcrumbs":2,"title":1},"206":{"body":17,"breadcrumbs":3,"title":2},"207":{"body":46,"breadcrumbs":3,"title":2},"208":{"body":21,"breadcrumbs":3,"title":2},"209":{"body":18,"breadcrumbs":3,"title":2},"21":{"body":15,"breadcrumbs":5,"title":3},"210":{"body":15,"breadcrumbs":6,"title":3},"211":{"body":27,"breadcrumbs":5,"title":2},"212":{"body":22,"breadcrumbs":4,"title":1},"213":{"body":57,"breadcrumbs":6,"title":3},"214":{"body":44,"breadcrumbs":5,"title":2},"215":{"body":32,"breadcrumbs":8,"title":5},"216":{"body":0,"breadcrumbs":8,"title":5},"217":{"body":97,"breadcrumbs":8,"title":5},"218":{"body":57,"breadcrumbs":9,"title":6},"219":{"body":64,"breadcrumbs":8,"title":5},"22":{"body":17,"breadcrumbs":4,"title":2},"220":{
"body":58,"breadcrumbs":8,"title":5},"221":{"body":40,"breadcrumbs":8,"title":5},"222":{"body":4,"breadcrumbs":8,"title":5},"223":{"body":14,"breadcrumbs":6,"title":3},"224":{"body":42,"breadcrumbs":6,"title":3},"225":{"body":22,"breadcrumbs":6,"title":3},"226":{"body":24,"breadcrumbs":8,"title":5},"227":{"body":0,"breadcrumbs":5,"title":2},"228":{"body":42,"breadcrumbs":7,"title":4},"229":{"body":59,"breadcrumbs":7,"title":4},"23":{"body":16,"breadcrumbs":3,"title":1},"230":{"body":45,"breadcrumbs":7,"title":4},"231":{"body":62,"breadcrumbs":6,"title":3},"232":{"body":0,"breadcrumbs":6,"title":3},"233":{"body":27,"breadcrumbs":7,"title":4},"234":{"body":17,"breadcrumbs":7,"title":4},"235":{"body":9,"breadcrumbs":5,"title":2},"236":{"body":0,"breadcrumbs":5,"title":2},"237":{"body":19,"breadcrumbs":8,"title":5},"238":{"body":22,"breadcrumbs":8,"title":5},"239":{"body":36,"breadcrumbs":8,"title":5},"24":{"body":0,"breadcrumbs":4,"title":2},"240":{"body":24,"breadcrumbs":9,"title":6},"241":{"body":0,"breadcrumbs":5,"title":2},"242":{"body":18,"breadcrumbs":6,"title":3},"243":{"body":17,"breadcrumbs":5,"title":2},"244":{"body":34,"breadcrumbs":5,"title":2},"245":{"body":28,"breadcrumbs":5,"title":2},"246":{"body":41,"breadcrumbs":5,"title":2},"247":{"body":43,"breadcrumbs":5,"title":2},"248":{"body":0,"breadcrumbs":4,"title":2},"249":{"body":35,"breadcrumbs":4,"title":2},"25":{"body":1173,"breadcrumbs":5,"title":3},"250":{"body":0,"breadcrumbs":5,"title":3},"251":{"body":1138,"breadcrumbs":4,"title":2},"252":{"body":11,"breadcrumbs":6,"title":4},"253":{"body":22,"breadcrumbs":4,"title":2},"254":{"body":0,"breadcrumbs":4,"title":2},"255":{"body":21,"breadcrumbs":4,"title":2},"256":{"body":42,"breadcrumbs":4,"title":2},"257":{"body":2499,"breadcrumbs":4,"title":2},"258":{"body":0,"breadcrumbs":4,"title":2},"259":{"body":26,"breadcrumbs":3,"title":1},"26":{"body":7,"breadcrumbs":7,"title":4},"260":{"body":0,"breadcrumbs":5,"title":3},"261":{"body":1248,"breadcrumbs":8,"ti
tle":6},"262":{"body":0,"breadcrumbs":4,"title":2},"263":{"body":25,"breadcrumbs":3,"title":1},"264":{"body":0,"breadcrumbs":5,"title":3},"265":{"body":109,"breadcrumbs":6,"title":4},"266":{"body":112,"breadcrumbs":6,"title":4},"267":{"body":89,"breadcrumbs":6,"title":4},"268":{"body":92,"breadcrumbs":6,"title":4},"269":{"body":139,"breadcrumbs":6,"title":4},"27":{"body":55,"breadcrumbs":5,"title":2},"270":{"body":93,"breadcrumbs":6,"title":4},"271":{"body":96,"breadcrumbs":6,"title":4},"272":{"body":59,"breadcrumbs":6,"title":4},"273":{"body":0,"breadcrumbs":5,"title":3},"274":{"body":40,"breadcrumbs":5,"title":3},"275":{"body":44,"breadcrumbs":5,"title":3},"276":{"body":0,"breadcrumbs":5,"title":3},"277":{"body":67,"breadcrumbs":5,"title":3},"278":{"body":26,"breadcrumbs":6,"title":4},"279":{"body":0,"breadcrumbs":5,"title":3},"28":{"body":11,"breadcrumbs":5,"title":2},"280":{"body":35,"breadcrumbs":5,"title":3},"281":{"body":14,"breadcrumbs":8,"title":5},"282":{"body":29,"breadcrumbs":5,"title":2},"283":{"body":0,"breadcrumbs":8,"title":5},"284":{"body":1059,"breadcrumbs":6,"title":3},"285":{"body":51,"breadcrumbs":5,"title":2},"286":{"body":0,"breadcrumbs":7,"title":4},"287":{"body":390,"breadcrumbs":4,"title":1},"288":{"body":9,"breadcrumbs":9,"title":6},"289":{"body":19,"breadcrumbs":4,"title":1},"29":{"body":87,"breadcrumbs":6,"title":3},"290":{"body":42,"breadcrumbs":5,"title":2},"291":{"body":0,"breadcrumbs":5,"title":2},"292":{"body":1453,"breadcrumbs":7,"title":4},"293":{"body":11,"breadcrumbs":7,"title":4},"294":{"body":27,"breadcrumbs":5,"title":2},"295":{"body":0,"breadcrumbs":6,"title":3},"296":{"body":31,"breadcrumbs":8,"title":5},"297":{"body":38,"breadcrumbs":9,"title":6},"298":{"body":55,"breadcrumbs":10,"title":7},"299":{"body":0,"breadcrumbs":7,"title":4},"3":{"body":52,"breadcrumbs":3,"title":2},"30":{"body":133,"breadcrumbs":6,"title":3},"300":{"body":2100,"breadcrumbs":7,"title":4},"301":{"body":7,"breadcrumbs":6,"title":3},"302":{"body":0,"b
readcrumbs":7,"title":4},"303":{"body":14,"breadcrumbs":8,"title":5},"304":{"body":153,"breadcrumbs":5,"title":2},"305":{"body":0,"breadcrumbs":5,"title":2},"306":{"body":537,"breadcrumbs":9,"title":6},"307":{"body":9,"breadcrumbs":6,"title":4},"308":{"body":43,"breadcrumbs":3,"title":1},"309":{"body":0,"breadcrumbs":3,"title":1},"31":{"body":66,"breadcrumbs":6,"title":3},"310":{"body":1119,"breadcrumbs":5,"title":3},"311":{"body":18,"breadcrumbs":8,"title":5},"312":{"body":20,"breadcrumbs":5,"title":2},"313":{"body":29,"breadcrumbs":4,"title":1},"314":{"body":0,"breadcrumbs":4,"title":1},"315":{"body":34,"breadcrumbs":8,"title":5},"316":{"body":812,"breadcrumbs":7,"title":4},"317":{"body":14,"breadcrumbs":9,"title":5},"318":{"body":1392,"breadcrumbs":7,"title":3},"319":{"body":33,"breadcrumbs":8,"title":4},"32":{"body":35,"breadcrumbs":6,"title":3},"320":{"body":0,"breadcrumbs":7,"title":3},"321":{"body":10,"breadcrumbs":5,"title":1},"322":{"body":673,"breadcrumbs":11,"title":7},"323":{"body":19,"breadcrumbs":8,"title":5},"324":{"body":0,"breadcrumbs":7,"title":4},"325":{"body":14,"breadcrumbs":4,"title":1},"326":{"body":7,"breadcrumbs":6,"title":3},"327":{"body":0,"breadcrumbs":9,"title":6},"328":{"body":13,"breadcrumbs":8,"title":5},"329":{"body":34,"breadcrumbs":8,"title":5},"33":{"body":0,"breadcrumbs":5,"title":2},"330":{"body":63,"breadcrumbs":10,"title":7},"331":{"body":40,"breadcrumbs":7,"title":4},"332":{"body":36,"breadcrumbs":6,"title":3},"333":{"body":0,"breadcrumbs":10,"title":7},"334":{"body":5,"breadcrumbs":6,"title":3},"335":{"body":27,"breadcrumbs":5,"title":2},"336":{"body":22,"breadcrumbs":5,"title":2},"337":{"body":90,"breadcrumbs":6,"title":3},"338":{"body":37,"breadcrumbs":6,"title":3},"339":{"body":64,"breadcrumbs":6,"title":3},"34":{"body":95,"breadcrumbs":5,"title":2},"340":{"body":0,"breadcrumbs":10,"title":7},"341":{"body":18,"breadcrumbs":6,"title":3},"342":{"body":64,"breadcrumbs":5,"title":2},"343":{"body":80,"breadcrumbs":5,"title":2}
,"344":{"body":52,"breadcrumbs":6,"title":3},"345":{"body":0,"breadcrumbs":8,"title":5},"346":{"body":87,"breadcrumbs":5,"title":2},"347":{"body":60,"breadcrumbs":7,"title":4},"348":{"body":0,"breadcrumbs":8,"title":5},"349":{"body":41,"breadcrumbs":5,"title":2},"35":{"body":86,"breadcrumbs":5,"title":2},"350":{"body":66,"breadcrumbs":5,"title":2},"351":{"body":0,"breadcrumbs":9,"title":6},"352":{"body":24,"breadcrumbs":5,"title":2},"353":{"body":34,"breadcrumbs":5,"title":2},"354":{"body":12,"breadcrumbs":5,"title":2},"355":{"body":0,"breadcrumbs":7,"title":4},"356":{"body":109,"breadcrumbs":5,"title":2},"357":{"body":0,"breadcrumbs":6,"title":3},"358":{"body":68,"breadcrumbs":6,"title":3},"359":{"body":0,"breadcrumbs":5,"title":2},"36":{"body":96,"breadcrumbs":5,"title":2},"360":{"body":16,"breadcrumbs":7,"title":4},"361":{"body":25,"breadcrumbs":7,"title":4},"362":{"body":13,"breadcrumbs":9,"title":6},"363":{"body":48,"breadcrumbs":4,"title":1},"364":{"body":76,"breadcrumbs":4,"title":2},"365":{"body":57,"breadcrumbs":6,"title":4},"366":{"body":202,"breadcrumbs":3,"title":1},"367":{"body":8,"breadcrumbs":7,"title":4},"368":{"body":24,"breadcrumbs":4,"title":1},"369":{"body":0,"breadcrumbs":4,"title":1},"37":{"body":70,"breadcrumbs":5,"title":2},"370":{"body":1334,"breadcrumbs":6,"title":3},"371":{"body":12,"breadcrumbs":7,"title":4},"372":{"body":29,"breadcrumbs":5,"title":2},"373":{"body":0,"breadcrumbs":6,"title":3},"374":{"body":58,"breadcrumbs":4,"title":1},"375":{"body":81,"breadcrumbs":5,"title":2},"376":{"body":0,"breadcrumbs":5,"title":2},"377":{"body":3398,"breadcrumbs":6,"title":3},"378":{"body":18,"breadcrumbs":7,"title":4},"379":{"body":1746,"breadcrumbs":4,"title":1},"38":{"body":45,"breadcrumbs":5,"title":2},"380":{"body":0,"breadcrumbs":9,"title":5},"381":{"body":1,"breadcrumbs":5,"title":1},"382":{"body":105,"breadcrumbs":5,"title":1},"383":{"body":392,"breadcrumbs":5,"title":1},"384":{"body":0,"breadcrumbs":8,"title":4},"385":{"body":1,"breadcrum
bs":5,"title":1},"386":{"body":103,"breadcrumbs":5,"title":1},"387":{"body":10,"breadcrumbs":5,"title":1},"388":{"body":59,"breadcrumbs":6,"title":2},"389":{"body":499,"breadcrumbs":6,"title":2},"39":{"body":84,"breadcrumbs":5,"title":2},"390":{"body":0,"breadcrumbs":8,"title":4},"391":{"body":1,"breadcrumbs":5,"title":1},"392":{"body":128,"breadcrumbs":5,"title":1},"393":{"body":8,"breadcrumbs":5,"title":1},"394":{"body":606,"breadcrumbs":6,"title":2},"395":{"body":0,"breadcrumbs":8,"title":4},"396":{"body":1,"breadcrumbs":5,"title":1},"397":{"body":126,"breadcrumbs":5,"title":1},"398":{"body":7,"breadcrumbs":5,"title":1},"399":{"body":97,"breadcrumbs":6,"title":2},"4":{"body":28,"breadcrumbs":2,"title":1},"40":{"body":33,"breadcrumbs":5,"title":2},"400":{"body":70,"breadcrumbs":6,"title":2},"401":{"body":49,"breadcrumbs":7,"title":3},"402":{"body":0,"breadcrumbs":5,"title":1},"403":{"body":68,"breadcrumbs":5,"title":1},"404":{"body":43,"breadcrumbs":5,"title":1},"405":{"body":29,"breadcrumbs":5,"title":1},"406":{"body":0,"breadcrumbs":6,"title":2},"407":{"body":20,"breadcrumbs":9,"title":5},"408":{"body":19,"breadcrumbs":9,"title":5},"409":{"body":21,"breadcrumbs":9,"title":5},"41":{"body":51,"breadcrumbs":5,"title":2},"410":{"body":19,"breadcrumbs":8,"title":4},"411":{"body":17,"breadcrumbs":9,"title":5},"412":{"body":0,"breadcrumbs":6,"title":2},"413":{"body":37,"breadcrumbs":6,"title":2},"414":{"body":28,"breadcrumbs":6,"title":2},"415":{"body":28,"breadcrumbs":6,"title":2},"416":{"body":28,"breadcrumbs":6,"title":2},"417":{"body":0,"breadcrumbs":6,"title":2},"418":{"body":13,"breadcrumbs":9,"title":5},"419":{"body":12,"breadcrumbs":9,"title":5},"42":{"body":0,"breadcrumbs":5,"title":2},"420":{"body":10,"breadcrumbs":9,"title":5},"421":{"body":24,"breadcrumbs":5,"title":1},"422":{"body":0,"breadcrumbs":8,"title":4},"423":{"body":1,"breadcrumbs":5,"title":1},"424":{"body":111,"breadcrumbs":5,"title":1},"425":{"body":8,"breadcrumbs":5,"title":1},"426":{"body":42,
"breadcrumbs":6,"title":2},"427":{"body":717,"breadcrumbs":6,"title":2},"428":{"body":18,"breadcrumbs":12,"title":7},"429":{"body":21,"breadcrumbs":6,"title":1},"43":{"body":68,"breadcrumbs":5,"title":2},"430":{"body":88,"breadcrumbs":7,"title":2},"431":{"body":1012,"breadcrumbs":6,"title":1},"432":{"body":15,"breadcrumbs":12,"title":8},"433":{"body":22,"breadcrumbs":5,"title":1},"434":{"body":49,"breadcrumbs":8,"title":4},"435":{"body":42,"breadcrumbs":6,"title":2},"436":{"body":62,"breadcrumbs":5,"title":1},"437":{"body":0,"breadcrumbs":5,"title":1},"438":{"body":57,"breadcrumbs":5,"title":1},"439":{"body":26,"breadcrumbs":5,"title":1},"44":{"body":63,"breadcrumbs":5,"title":2},"440":{"body":20,"breadcrumbs":5,"title":1},"441":{"body":0,"breadcrumbs":5,"title":1},"442":{"body":29,"breadcrumbs":6,"title":2},"443":{"body":36,"breadcrumbs":6,"title":2},"444":{"body":17,"breadcrumbs":6,"title":2},"445":{"body":39,"breadcrumbs":6,"title":2},"446":{"body":0,"breadcrumbs":6,"title":2},"447":{"body":32,"breadcrumbs":5,"title":1},"448":{"body":31,"breadcrumbs":5,"title":1},"449":{"body":0,"breadcrumbs":6,"title":2},"45":{"body":38,"breadcrumbs":5,"title":2},"450":{"body":16,"breadcrumbs":9,"title":5},"451":{"body":22,"breadcrumbs":8,"title":4},"452":{"body":20,"breadcrumbs":9,"title":5},"453":{"body":24,"breadcrumbs":9,"title":5},"454":{"body":0,"breadcrumbs":5,"title":1},"455":{"body":20,"breadcrumbs":6,"title":2},"456":{"body":21,"breadcrumbs":6,"title":2},"457":{"body":13,"breadcrumbs":6,"title":2},"458":{"body":0,"breadcrumbs":5,"title":1},"459":{"body":28,"breadcrumbs":6,"title":2},"46":{"body":0,"breadcrumbs":5,"title":2},"460":{"body":29,"breadcrumbs":6,"title":2},"461":{"body":21,"breadcrumbs":5,"title":1},"462":{"body":33,"breadcrumbs":5,"title":1},"463":{"body":15,"breadcrumbs":11,"title":7},"464":{"body":63,"breadcrumbs":7,"title":3},"465":{"body":38,"breadcrumbs":6,"title":2},"466":{"body":0,"breadcrumbs":6,"title":2},"467":{"body":33,"breadcrumbs":11,"title":7
},"468":{"body":27,"breadcrumbs":10,"title":6},"469":{"body":44,"breadcrumbs":10,"title":6},"47":{"body":98,"breadcrumbs":5,"title":2},"470":{"body":24,"breadcrumbs":7,"title":3},"471":{"body":7,"breadcrumbs":6,"title":2},"472":{"body":44,"breadcrumbs":5,"title":1},"473":{"body":622,"breadcrumbs":6,"title":2},"474":{"body":10,"breadcrumbs":11,"title":6},"475":{"body":27,"breadcrumbs":6,"title":1},"476":{"body":12,"breadcrumbs":6,"title":1},"477":{"body":0,"breadcrumbs":7,"title":2},"478":{"body":19,"breadcrumbs":7,"title":2},"479":{"body":0,"breadcrumbs":7,"title":2},"48":{"body":99,"breadcrumbs":5,"title":2},"480":{"body":153,"breadcrumbs":10,"title":5},"481":{"body":110,"breadcrumbs":11,"title":6},"482":{"body":126,"breadcrumbs":11,"title":6},"483":{"body":101,"breadcrumbs":11,"title":6},"484":{"body":0,"breadcrumbs":8,"title":3},"485":{"body":904,"breadcrumbs":9,"title":4},"486":{"body":19,"breadcrumbs":11,"title":6},"487":{"body":66,"breadcrumbs":6,"title":1},"488":{"body":45,"breadcrumbs":6,"title":1},"489":{"body":0,"breadcrumbs":7,"title":2},"49":{"body":33,"breadcrumbs":5,"title":2},"490":{"body":21,"breadcrumbs":9,"title":4},"491":{"body":50,"breadcrumbs":11,"title":6},"492":{"body":166,"breadcrumbs":11,"title":6},"493":{"body":64,"breadcrumbs":8,"title":3},"494":{"body":60,"breadcrumbs":9,"title":4},"495":{"body":65,"breadcrumbs":8,"title":3},"496":{"body":0,"breadcrumbs":7,"title":2},"497":{"body":46,"breadcrumbs":7,"title":2},"498":{"body":26,"breadcrumbs":7,"title":2},"499":{"body":0,"breadcrumbs":9,"title":4},"5":{"body":34,"breadcrumbs":5,"title":4},"50":{"body":0,"breadcrumbs":5,"title":2},"500":{"body":57,"breadcrumbs":8,"title":3},"501":{"body":0,"breadcrumbs":6,"title":1},"502":{"body":51,"breadcrumbs":6,"title":1},"503":{"body":20,"breadcrumbs":7,"title":2},"504":{"body":25,"breadcrumbs":7,"title":2},"505":{"body":0,"breadcrumbs":8,"title":3},"506":{"body":200,"breadcrumbs":6,"title":1},"507":{"body":15,"breadcrumbs":9,"title":5},"508":{"body":28
,"breadcrumbs":5,"title":1},"509":{"body":116,"breadcrumbs":6,"title":2},"51":{"body":22,"breadcrumbs":5,"title":2},"510":{"body":26,"breadcrumbs":6,"title":2},"511":{"body":12,"breadcrumbs":5,"title":1},"512":{"body":116,"breadcrumbs":6,"title":2},"513":{"body":0,"breadcrumbs":6,"title":2},"514":{"body":29,"breadcrumbs":6,"title":2},"515":{"body":19,"breadcrumbs":7,"title":3},"516":{"body":1009,"breadcrumbs":6,"title":2},"517":{"body":0,"breadcrumbs":15,"title":8},"518":{"body":4,"breadcrumbs":8,"title":1},"519":{"body":41,"breadcrumbs":8,"title":1},"52":{"body":26,"breadcrumbs":5,"title":2},"520":{"body":296,"breadcrumbs":9,"title":2},"521":{"body":29,"breadcrumbs":9,"title":2},"522":{"body":37,"breadcrumbs":12,"title":5},"523":{"body":0,"breadcrumbs":8,"title":1},"524":{"body":42,"breadcrumbs":8,"title":1},"525":{"body":27,"breadcrumbs":8,"title":1},"526":{"body":51,"breadcrumbs":9,"title":2},"527":{"body":0,"breadcrumbs":9,"title":2},"528":{"body":13,"breadcrumbs":14,"title":7},"529":{"body":14,"breadcrumbs":14,"title":7},"53":{"body":13,"breadcrumbs":5,"title":2},"530":{"body":11,"breadcrumbs":11,"title":4},"531":{"body":14,"breadcrumbs":12,"title":5},"532":{"body":0,"breadcrumbs":9,"title":2},"533":{"body":39,"breadcrumbs":9,"title":2},"534":{"body":231,"breadcrumbs":12,"title":5},"535":{"body":8,"breadcrumbs":5,"title":3},"536":{"body":24,"breadcrumbs":3,"title":1},"537":{"body":5,"breadcrumbs":4,"title":2},"538":{"body":0,"breadcrumbs":3,"title":1},"539":{"body":1510,"breadcrumbs":4,"title":2},"54":{"body":0,"breadcrumbs":5,"title":2},"540":{"body":15,"breadcrumbs":4,"title":3},"541":{"body":26,"breadcrumbs":2,"title":1},"542":{"body":0,"breadcrumbs":3,"title":2},"543":{"body":44,"breadcrumbs":4,"title":3},"544":{"body":35,"breadcrumbs":4,"title":3},"545":{"body":0,"breadcrumbs":2,"title":1},"546":{"body":26,"breadcrumbs":4,"title":3},"547":{"body":28,"breadcrumbs":4,"title":3},"548":{"body":0,"breadcrumbs":4,"title":3},"549":{"body":252,"breadcrumbs":4,"tit
le":3},"55":{"body":83,"breadcrumbs":5,"title":2},"550":{"body":18,"breadcrumbs":4,"title":3},"551":{"body":0,"breadcrumbs":5,"title":4},"552":{"body":178,"breadcrumbs":3,"title":2},"553":{"body":174,"breadcrumbs":5,"title":4},"554":{"body":0,"breadcrumbs":4,"title":3},"555":{"body":266,"breadcrumbs":4,"title":3},"556":{"body":0,"breadcrumbs":4,"title":3},"557":{"body":39,"breadcrumbs":4,"title":3},"558":{"body":16,"breadcrumbs":5,"title":4},"559":{"body":0,"breadcrumbs":4,"title":3},"56":{"body":43,"breadcrumbs":5,"title":2},"560":{"body":66,"breadcrumbs":3,"title":2},"561":{"body":54,"breadcrumbs":4,"title":3},"562":{"body":0,"breadcrumbs":3,"title":2},"563":{"body":22,"breadcrumbs":3,"title":2},"564":{"body":9,"breadcrumbs":2,"title":1},"565":{"body":20,"breadcrumbs":3,"title":2},"566":{"body":0,"breadcrumbs":3,"title":2},"567":{"body":16,"breadcrumbs":3,"title":2},"568":{"body":13,"breadcrumbs":3,"title":2},"569":{"body":30,"breadcrumbs":3,"title":2},"57":{"body":0,"breadcrumbs":5,"title":2},"570":{"body":13,"breadcrumbs":4,"title":3},"571":{"body":35,"breadcrumbs":2,"title":1},"572":{"body":0,"breadcrumbs":3,"title":2},"573":{"body":2308,"breadcrumbs":4,"title":3},"574":{"body":10,"breadcrumbs":3,"title":2},"575":{"body":7,"breadcrumbs":3,"title":2},"576":{"body":29,"breadcrumbs":3,"title":2},"577":{"body":16,"breadcrumbs":3,"title":2},"578":{"body":0,"breadcrumbs":3,"title":2},"579":{"body":14,"breadcrumbs":2,"title":1},"58":{"body":19,"breadcrumbs":6,"title":3},"580":{"body":64,"breadcrumbs":3,"title":2},"581":{"body":225,"breadcrumbs":3,"title":2},"582":{"body":113,"breadcrumbs":3,"title":2},"583":{"body":0,"breadcrumbs":3,"title":2},"584":{"body":12,"breadcrumbs":2,"title":1},"585":{"body":55,"breadcrumbs":3,"title":2},"586":{"body":186,"breadcrumbs":3,"title":2},"587":{"body":256,"breadcrumbs":4,"title":3},"588":{"body":78,"breadcrumbs":3,"title":2},"589":{"body":0,"breadcrumbs":3,"title":2},"59":{"body":39,"breadcrumbs":7,"title":4},"590":{"body":4,"bread
crumbs":2,"title":1},"591":{"body":103,"breadcrumbs":3,"title":2},"592":{"body":97,"breadcrumbs":3,"title":2},"593":{"body":135,"breadcrumbs":5,"title":4},"594":{"body":0,"breadcrumbs":3,"title":2},"595":{"body":11,"breadcrumbs":2,"title":1},"596":{"body":80,"breadcrumbs":3,"title":2},"597":{"body":76,"breadcrumbs":3,"title":2},"598":{"body":103,"breadcrumbs":3,"title":2},"599":{"body":0,"breadcrumbs":3,"title":2},"6":{"body":26,"breadcrumbs":3,"title":2},"60":{"body":42,"breadcrumbs":5,"title":2},"600":{"body":26,"breadcrumbs":3,"title":2},"601":{"body":26,"breadcrumbs":3,"title":2},"602":{"body":23,"breadcrumbs":3,"title":2},"603":{"body":23,"breadcrumbs":3,"title":2},"604":{"body":46,"breadcrumbs":2,"title":1},"605":{"body":17,"breadcrumbs":4,"title":2},"606":{"body":23,"breadcrumbs":3,"title":1},"607":{"body":0,"breadcrumbs":5,"title":3},"608":{"body":774,"breadcrumbs":4,"title":2},"609":{"body":877,"breadcrumbs":4,"title":2},"61":{"body":33,"breadcrumbs":6,"title":3},"610":{"body":0,"breadcrumbs":5,"title":3},"611":{"body":239,"breadcrumbs":5,"title":3},"612":{"body":108,"breadcrumbs":5,"title":3},"613":{"body":0,"breadcrumbs":4,"title":2},"614":{"body":231,"breadcrumbs":5,"title":3},"615":{"body":75,"breadcrumbs":5,"title":3},"616":{"body":0,"breadcrumbs":4,"title":2},"617":{"body":63,"breadcrumbs":4,"title":2},"618":{"body":36,"breadcrumbs":4,"title":2},"619":{"body":0,"breadcrumbs":5,"title":3},"62":{"body":29,"breadcrumbs":7,"title":4},"620":{"body":165,"breadcrumbs":5,"title":3},"621":{"body":221,"breadcrumbs":5,"title":3},"622":{"body":6,"breadcrumbs":5,"title":3},"623":{"body":16,"breadcrumbs":3,"title":1},"624":{"body":12,"breadcrumbs":4,"title":2},"625":{"body":4,"breadcrumbs":4,"title":2},"626":{"body":296,"breadcrumbs":4,"title":2},"627":{"body":7,"breadcrumbs":5,"title":3},"628":{"body":10,"breadcrumbs":3,"title":1},"629":{"body":0,"breadcrumbs":4,"title":2},"63":{"body":40,"breadcrumbs":7,"title":4},"630":{"body":18,"breadcrumbs":4,"title":2},"631"
:{"body":18,"breadcrumbs":4,"title":2},"632":{"body":20,"breadcrumbs":5,"title":3},"633":{"body":18,"breadcrumbs":4,"title":2},"634":{"body":18,"breadcrumbs":4,"title":2},"635":{"body":0,"breadcrumbs":4,"title":2},"636":{"body":24,"breadcrumbs":4,"title":2},"637":{"body":23,"breadcrumbs":4,"title":2},"638":{"body":30,"breadcrumbs":4,"title":2},"639":{"body":27,"breadcrumbs":4,"title":2},"64":{"body":77,"breadcrumbs":5,"title":2},"640":{"body":6,"breadcrumbs":4,"title":2},"641":{"body":15,"breadcrumbs":4,"title":2},"642":{"body":15,"breadcrumbs":5,"title":3},"643":{"body":28,"breadcrumbs":3,"title":1},"644":{"body":1115,"breadcrumbs":5,"title":3},"645":{"body":28,"breadcrumbs":4,"title":2},"646":{"body":0,"breadcrumbs":4,"title":2},"647":{"body":26,"breadcrumbs":4,"title":2},"648":{"body":91,"breadcrumbs":3,"title":1},"649":{"body":13,"breadcrumbs":5,"title":3},"65":{"body":0,"breadcrumbs":5,"title":2},"650":{"body":21,"breadcrumbs":4,"title":2},"651":{"body":0,"breadcrumbs":4,"title":2},"652":{"body":29,"breadcrumbs":4,"title":2},"653":{"body":2788,"breadcrumbs":4,"title":2},"654":{"body":12,"breadcrumbs":7,"title":4},"655":{"body":17,"breadcrumbs":5,"title":2},"656":{"body":29,"breadcrumbs":4,"title":1},"657":{"body":0,"breadcrumbs":5,"title":2},"658":{"body":204,"breadcrumbs":6,"title":3},"659":{"body":121,"breadcrumbs":5,"title":2},"66":{"body":97,"breadcrumbs":6,"title":3},"660":{"body":0,"breadcrumbs":6,"title":3},"661":{"body":484,"breadcrumbs":7,"title":4},"662":{"body":308,"breadcrumbs":6,"title":3},"663":{"body":0,"breadcrumbs":6,"title":3},"664":{"body":323,"breadcrumbs":7,"title":4},"665":{"body":0,"breadcrumbs":6,"title":3},"666":{"body":328,"breadcrumbs":7,"title":4},"667":{"body":0,"breadcrumbs":5,"title":2},"668":{"body":430,"breadcrumbs":6,"title":3},"669":{"body":0,"breadcrumbs":6,"title":3},"67":{"body":58,"breadcrumbs":6,"title":3},"670":{"body":11,"breadcrumbs":8,"title":5},"671":{"body":9,"breadcrumbs":7,"title":4},"672":{"body":9,"breadcrumbs":
7,"title":4},"673":{"body":110,"breadcrumbs":5,"title":2},"674":{"body":10,"breadcrumbs":9,"title":6},"675":{"body":12,"breadcrumbs":4,"title":1},"676":{"body":0,"breadcrumbs":7,"title":4},"677":{"body":6,"breadcrumbs":8,"title":5},"678":{"body":7,"breadcrumbs":8,"title":5},"679":{"body":44,"breadcrumbs":8,"title":5},"68":{"body":35,"breadcrumbs":5,"title":2},"680":{"body":127,"breadcrumbs":8,"title":5},"681":{"body":88,"breadcrumbs":9,"title":6},"682":{"body":30,"breadcrumbs":7,"title":4},"683":{"body":16,"breadcrumbs":8,"title":5},"684":{"body":0,"breadcrumbs":5,"title":2},"685":{"body":36,"breadcrumbs":6,"title":3},"686":{"body":31,"breadcrumbs":6,"title":3},"687":{"body":31,"breadcrumbs":7,"title":4},"688":{"body":0,"breadcrumbs":5,"title":2},"689":{"body":21,"breadcrumbs":6,"title":3},"69":{"body":38,"breadcrumbs":6,"title":3},"690":{"body":22,"breadcrumbs":5,"title":2},"691":{"body":34,"breadcrumbs":6,"title":3},"692":{"body":57,"breadcrumbs":6,"title":3},"693":{"body":31,"breadcrumbs":5,"title":2},"694":{"body":0,"breadcrumbs":5,"title":2},"695":{"body":17,"breadcrumbs":5,"title":2},"696":{"body":11,"breadcrumbs":6,"title":3},"697":{"body":16,"breadcrumbs":5,"title":2},"698":{"body":28,"breadcrumbs":5,"title":2},"699":{"body":23,"breadcrumbs":5,"title":2},"7":{"body":29,"breadcrumbs":2,"title":1},"70":{"body":0,"breadcrumbs":6,"title":3},"700":{"body":16,"breadcrumbs":7,"title":4},"701":{"body":16,"breadcrumbs":4,"title":1},"702":{"body":42,"breadcrumbs":6,"title":3},"703":{"body":1486,"breadcrumbs":5,"title":2},"704":{"body":0,"breadcrumbs":2,"title":1},"705":{"body":13,"breadcrumbs":4,"title":3},"706":{"body":20,"breadcrumbs":3,"title":2},"707":{"body":63,"breadcrumbs":2,"title":1},"708":{"body":0,"breadcrumbs":3,"title":2},"709":{"body":2084,"breadcrumbs":4,"title":3},"71":{"body":35,"breadcrumbs":5,"title":2},"710":{"body":18,"breadcrumbs":3,"title":2},"711":{"body":22,"breadcrumbs":3,"title":2},"712":{"body":2603,"breadcrumbs":2,"title":1},"713":{"body":
19,"breadcrumbs":5,"title":3},"714":{"body":15,"breadcrumbs":4,"title":2},"715":{"body":47,"breadcrumbs":3,"title":1},"716":{"body":39,"breadcrumbs":4,"title":2},"717":{"body":0,"breadcrumbs":4,"title":2},"718":{"body":40,"breadcrumbs":4,"title":2},"719":{"body":633,"breadcrumbs":4,"title":2},"72":{"body":43,"breadcrumbs":7,"title":4},"720":{"body":0,"breadcrumbs":4,"title":2},"721":{"body":352,"breadcrumbs":5,"title":3},"722":{"body":212,"breadcrumbs":4,"title":2},"723":{"body":65,"breadcrumbs":4,"title":2},"724":{"body":75,"breadcrumbs":4,"title":2},"725":{"body":0,"breadcrumbs":5,"title":3},"726":{"body":37,"breadcrumbs":4,"title":2},"727":{"body":67,"breadcrumbs":5,"title":3},"728":{"body":55,"breadcrumbs":5,"title":3},"729":{"body":0,"breadcrumbs":4,"title":2},"73":{"body":25,"breadcrumbs":6,"title":3},"730":{"body":40,"breadcrumbs":4,"title":2},"731":{"body":46,"breadcrumbs":4,"title":2},"732":{"body":32,"breadcrumbs":4,"title":2},"733":{"body":0,"breadcrumbs":3,"title":1},"734":{"body":118,"breadcrumbs":5,"title":3},"735":{"body":59,"breadcrumbs":5,"title":3},"736":{"body":36,"breadcrumbs":4,"title":2},"737":{"body":36,"breadcrumbs":4,"title":2},"738":{"body":0,"breadcrumbs":4,"title":2},"739":{"body":50,"breadcrumbs":4,"title":2},"74":{"body":22,"breadcrumbs":5,"title":2},"740":{"body":36,"breadcrumbs":4,"title":2},"741":{"body":38,"breadcrumbs":5,"title":3},"742":{"body":17,"breadcrumbs":4,"title":3},"743":{"body":17,"breadcrumbs":3,"title":2},"744":{"body":74,"breadcrumbs":2,"title":1},"745":{"body":0,"breadcrumbs":3,"title":2},"746":{"body":3363,"breadcrumbs":3,"title":2},"747":{"body":18,"breadcrumbs":5,"title":3},"748":{"body":19,"breadcrumbs":4,"title":2},"749":{"body":62,"breadcrumbs":3,"title":1},"75":{"body":27,"breadcrumbs":5,"title":2},"750":{"body":0,"breadcrumbs":4,"title":2},"751":{"body":1955,"breadcrumbs":4,"title":2},"752":{"body":13,"breadcrumbs":6,"title":4},"753":{"body":20,"breadcrumbs":3,"title":1},"754":{"body":0,"breadcrumbs":3,"title
":1},"755":{"body":10,"breadcrumbs":4,"title":2},"756":{"body":8,"breadcrumbs":4,"title":2},"757":{"body":13,"breadcrumbs":4,"title":2},"758":{"body":0,"breadcrumbs":9,"title":7},"759":{"body":225,"breadcrumbs":6,"title":4},"76":{"body":0,"breadcrumbs":5,"title":2},"760":{"body":407,"breadcrumbs":6,"title":4},"761":{"body":318,"breadcrumbs":7,"title":5},"762":{"body":423,"breadcrumbs":6,"title":4},"763":{"body":0,"breadcrumbs":10,"title":8},"764":{"body":106,"breadcrumbs":7,"title":5},"765":{"body":7,"breadcrumbs":9,"title":7},"766":{"body":0,"breadcrumbs":9,"title":7},"767":{"body":52,"breadcrumbs":6,"title":4},"768":{"body":0,"breadcrumbs":4,"title":2},"769":{"body":25,"breadcrumbs":5,"title":3},"77":{"body":23,"breadcrumbs":5,"title":2},"770":{"body":12,"breadcrumbs":5,"title":3},"771":{"body":16,"breadcrumbs":4,"title":2},"772":{"body":0,"breadcrumbs":3,"title":1},"773":{"body":19,"breadcrumbs":6,"title":4},"774":{"body":19,"breadcrumbs":6,"title":4},"775":{"body":15,"breadcrumbs":5,"title":3},"776":{"body":12,"breadcrumbs":6,"title":4},"777":{"body":8,"breadcrumbs":5,"title":3},"778":{"body":27,"breadcrumbs":3,"title":1},"779":{"body":14,"breadcrumbs":3,"title":1},"78":{"body":23,"breadcrumbs":5,"title":2},"780":{"body":0,"breadcrumbs":6,"title":3},"781":{"body":0,"breadcrumbs":6,"title":3},"782":{"body":0,"breadcrumbs":5,"title":2},"783":{"body":518,"breadcrumbs":7,"title":4},"784":{"body":18,"breadcrumbs":5,"title":3},"785":{"body":16,"breadcrumbs":4,"title":2},"786":{"body":41,"breadcrumbs":3,"title":1},"787":{"body":0,"breadcrumbs":6,"title":4},"788":{"body":967,"breadcrumbs":6,"title":4},"789":{"body":0,"breadcrumbs":7,"title":4},"79":{"body":18,"breadcrumbs":6,"title":3},"790":{"body":46,"breadcrumbs":4,"title":1},"791":{"body":0,"breadcrumbs":5,"title":2},"792":{"body":362,"breadcrumbs":7,"title":4},"793":{"body":0,"breadcrumbs":6,"title":3},"794":{"body":334,"breadcrumbs":7,"title":4},"795":{"body":0,"breadcrumbs":9,"title":5},"796":{"body":12,"breadcru
mbs":5,"title":1},"797":{"body":68,"breadcrumbs":6,"title":2},"798":{"body":0,"breadcrumbs":6,"title":2},"799":{"body":18,"breadcrumbs":10,"title":6},"8":{"body":7,"breadcrumbs":2,"title":1},"80":{"body":8,"breadcrumbs":5,"title":2},"800":{"body":45,"breadcrumbs":7,"title":3},"801":{"body":0,"breadcrumbs":6,"title":2},"802":{"body":714,"breadcrumbs":9,"title":5},"803":{"body":11,"breadcrumbs":9,"title":6},"804":{"body":11,"breadcrumbs":5,"title":2},"805":{"body":50,"breadcrumbs":4,"title":1},"806":{"body":0,"breadcrumbs":4,"title":1},"807":{"body":1153,"breadcrumbs":5,"title":2},"808":{"body":0,"breadcrumbs":8,"title":6},"809":{"body":13,"breadcrumbs":3,"title":1},"81":{"body":0,"breadcrumbs":5,"title":2},"810":{"body":692,"breadcrumbs":4,"title":2},"811":{"body":8,"breadcrumbs":6,"title":4},"812":{"body":38,"breadcrumbs":3,"title":1},"813":{"body":0,"breadcrumbs":3,"title":1},"814":{"body":28,"breadcrumbs":3,"title":1},"815":{"body":18,"breadcrumbs":3,"title":1},"816":{"body":18,"breadcrumbs":3,"title":1},"817":{"body":0,"breadcrumbs":3,"title":1},"818":{"body":35,"breadcrumbs":5,"title":3},"819":{"body":31,"breadcrumbs":5,"title":3},"82":{"body":23,"breadcrumbs":7,"title":4},"820":{"body":0,"breadcrumbs":4,"title":2},"821":{"body":102,"breadcrumbs":4,"title":2},"822":{"body":282,"breadcrumbs":4,"title":2},"823":{"body":0,"breadcrumbs":4,"title":2},"824":{"body":25,"breadcrumbs":5,"title":3},"825":{"body":32,"breadcrumbs":4,"title":2},"826":{"body":0,"breadcrumbs":4,"title":2},"827":{"body":53,"breadcrumbs":4,"title":2},"828":{"body":0,"breadcrumbs":4,"title":2},"829":{"body":35,"breadcrumbs":4,"title":2},"83":{"body":11,"breadcrumbs":5,"title":2},"830":{"body":30,"breadcrumbs":4,"title":2},"831":{"body":27,"breadcrumbs":4,"title":2},"832":{"body":0,"breadcrumbs":4,"title":2},"833":{"body":49,"breadcrumbs":4,"title":2},"834":{"body":56,"breadcrumbs":4,"title":2},"835":{"body":0,"breadcrumbs":3,"title":1},"836":{"body":17,"breadcrumbs":5,"title":3},"837":{"body":23,
"breadcrumbs":5,"title":3},"838":{"body":12,"breadcrumbs":4,"title":2},"839":{"body":11,"breadcrumbs":3,"title":1},"84":{"body":12,"breadcrumbs":6,"title":3},"840":{"body":27,"breadcrumbs":3,"title":1},"841":{"body":76,"breadcrumbs":3,"title":1},"842":{"body":4,"breadcrumbs":3,"title":1},"843":{"body":37,"breadcrumbs":4,"title":2},"844":{"body":56,"breadcrumbs":4,"title":2},"845":{"body":33,"breadcrumbs":3,"title":1},"846":{"body":0,"breadcrumbs":4,"title":2},"847":{"body":26,"breadcrumbs":4,"title":3},"848":{"body":0,"breadcrumbs":1,"title":0},"849":{"body":44,"breadcrumbs":5,"title":4},"85":{"body":0,"breadcrumbs":5,"title":2},"850":{"body":29,"breadcrumbs":2,"title":1},"851":{"body":41,"breadcrumbs":3,"title":2},"852":{"body":35,"breadcrumbs":3,"title":2},"853":{"body":35,"breadcrumbs":3,"title":2},"854":{"body":29,"breadcrumbs":2,"title":1},"855":{"body":0,"breadcrumbs":2,"title":1},"856":{"body":39,"breadcrumbs":3,"title":2},"857":{"body":40,"breadcrumbs":3,"title":2},"858":{"body":0,"breadcrumbs":2,"title":1},"859":{"body":26,"breadcrumbs":2,"title":1},"86":{"body":38,"breadcrumbs":6,"title":3},"860":{"body":26,"breadcrumbs":2,"title":1},"861":{"body":40,"breadcrumbs":5,"title":4},"862":{"body":40,"breadcrumbs":2,"title":1},"863":{"body":28,"breadcrumbs":2,"title":1},"864":{"body":34,"breadcrumbs":3,"title":2},"865":{"body":30,"breadcrumbs":3,"title":2},"866":{"body":28,"breadcrumbs":2,"title":1},"867":{"body":27,"breadcrumbs":3,"title":2},"868":{"body":0,"breadcrumbs":2,"title":1},"869":{"body":28,"breadcrumbs":2,"title":1},"87":{"body":14,"breadcrumbs":6,"title":3},"870":{"body":26,"breadcrumbs":2,"title":1},"871":{"body":30,"breadcrumbs":3,"title":2},"872":{"body":0,"breadcrumbs":2,"title":1},"873":{"body":29,"breadcrumbs":2,"title":1},"874":{"body":30,"breadcrumbs":2,"title":1},"875":{"body":0,"breadcrumbs":2,"title":1},"876":{"body":33,"breadcrumbs":2,"title":1},"877":{"body":0,"breadcrumbs":2,"title":1},"878":{"body":35,"breadcrumbs":6,"title":5},"879":{
"body":20,"breadcrumbs":2,"title":1},"88":{"body":12,"breadcrumbs":5,"title":2},"880":{"body":32,"breadcrumbs":2,"title":1},"881":{"body":0,"breadcrumbs":2,"title":1},"882":{"body":26,"breadcrumbs":3,"title":2},"883":{"body":30,"breadcrumbs":3,"title":2},"884":{"body":0,"breadcrumbs":1,"title":0},"885":{"body":34,"breadcrumbs":2,"title":1},"886":{"body":25,"breadcrumbs":2,"title":1},"887":{"body":32,"breadcrumbs":3,"title":2},"888":{"body":0,"breadcrumbs":2,"title":1},"889":{"body":24,"breadcrumbs":5,"title":4},"89":{"body":55,"breadcrumbs":7,"title":4},"890":{"body":0,"breadcrumbs":2,"title":1},"891":{"body":27,"breadcrumbs":5,"title":4},"892":{"body":28,"breadcrumbs":5,"title":4},"893":{"body":27,"breadcrumbs":2,"title":1},"894":{"body":0,"breadcrumbs":2,"title":1},"895":{"body":22,"breadcrumbs":2,"title":1},"896":{"body":0,"breadcrumbs":2,"title":1},"897":{"body":27,"breadcrumbs":5,"title":4},"898":{"body":37,"breadcrumbs":5,"title":4},"899":{"body":24,"breadcrumbs":2,"title":1},"9":{"body":16,"breadcrumbs":2,"title":1},"90":{"body":42,"breadcrumbs":7,"title":4},"900":{"body":38,"breadcrumbs":2,"title":1},"901":{"body":0,"breadcrumbs":2,"title":1},"902":{"body":26,"breadcrumbs":2,"title":1},"903":{"body":0,"breadcrumbs":2,"title":1},"904":{"body":22,"breadcrumbs":5,"title":4},"905":{"body":21,"breadcrumbs":2,"title":1},"906":{"body":33,"breadcrumbs":2,"title":1},"907":{"body":0,"breadcrumbs":2,"title":1},"908":{"body":22,"breadcrumbs":5,"title":4},"909":{"body":27,"breadcrumbs":3,"title":2},"91":{"body":40,"breadcrumbs":5,"title":2},"910":{"body":34,"breadcrumbs":2,"title":1},"911":{"body":40,"breadcrumbs":2,"title":1},"912":{"body":0,"breadcrumbs":2,"title":1},"913":{"body":29,"breadcrumbs":3,"title":2},"914":{"body":0,"breadcrumbs":2,"title":1},"915":{"body":28,"breadcrumbs":6,"title":5},"916":{"body":23,"breadcrumbs":2,"title":1},"917":{"body":27,"breadcrumbs":3,"title":2},"918":{"body":26,"breadcrumbs":2,"title":1},"919":{"body":23,"breadcrumbs":2,"title":1},
"92":{"body":7,"breadcrumbs":9,"title":6},"920":{"body":0,"breadcrumbs":2,"title":1},"921":{"body":36,"breadcrumbs":2,"title":1},"922":{"body":22,"breadcrumbs":3,"title":2},"923":{"body":32,"breadcrumbs":3,"title":2},"924":{"body":33,"breadcrumbs":2,"title":1},"925":{"body":24,"breadcrumbs":2,"title":1},"926":{"body":39,"breadcrumbs":2,"title":1},"927":{"body":26,"breadcrumbs":4,"title":3},"928":{"body":37,"breadcrumbs":4,"title":3},"929":{"body":18,"breadcrumbs":3,"title":2},"93":{"body":16,"breadcrumbs":9,"title":6},"930":{"body":0,"breadcrumbs":2,"title":1},"931":{"body":18,"breadcrumbs":2,"title":1},"932":{"body":38,"breadcrumbs":2,"title":1},"933":{"body":21,"breadcrumbs":2,"title":1},"934":{"body":39,"breadcrumbs":3,"title":2},"935":{"body":29,"breadcrumbs":2,"title":1},"936":{"body":30,"breadcrumbs":7,"title":6},"937":{"body":21,"breadcrumbs":2,"title":1},"938":{"body":0,"breadcrumbs":2,"title":1},"939":{"body":23,"breadcrumbs":4,"title":3},"94":{"body":18,"breadcrumbs":9,"title":6},"940":{"body":30,"breadcrumbs":2,"title":1},"941":{"body":0,"breadcrumbs":2,"title":1},"942":{"body":29,"breadcrumbs":2,"title":1},"943":{"body":28,"breadcrumbs":2,"title":1},"944":{"body":0,"breadcrumbs":2,"title":1},"945":{"body":28,"breadcrumbs":2,"title":1},"946":{"body":38,"breadcrumbs":2,"title":1},"947":{"body":37,"breadcrumbs":2,"title":1},"948":{"body":0,"breadcrumbs":3,"title":2},"949":{"body":21,"breadcrumbs":2,"title":1},"95":{"body":20,"breadcrumbs":9,"title":6},"950":{"body":99,"breadcrumbs":4,"title":3},"951":{"body":0,"breadcrumbs":4,"title":3},"952":{"body":78,"breadcrumbs":3,"title":2},"953":{"body":26,"breadcrumbs":3,"title":2},"954":{"body":0,"breadcrumbs":3,"title":2},"955":{"body":42,"breadcrumbs":3,"title":2},"956":{"body":28,"breadcrumbs":3,"title":2},"957":{"body":0,"breadcrumbs":3,"title":2},"958":{"body":31,"breadcrumbs":4,"title":3},"959":{"body":19,"breadcrumbs":4,"title":3},"96":{"body":11,"breadcrumbs":10,"title":7},"960":{"body":27,"breadcrumbs":3,"
title":2},"961":{"body":16,"breadcrumbs":6,"title":3},"962":{"body":19,"breadcrumbs":5,"title":2},"963":{"body":32,"breadcrumbs":4,"title":1},"964":{"body":0,"breadcrumbs":6,"title":3},"965":{"body":8,"breadcrumbs":4,"title":1},"966":{"body":961,"breadcrumbs":4,"title":1},"967":{"body":33,"breadcrumbs":4,"title":1},"968":{"body":29,"breadcrumbs":4,"title":1},"969":{"body":32,"breadcrumbs":4,"title":1},"97":{"body":26,"breadcrumbs":10,"title":7},"970":{"body":0,"breadcrumbs":5,"title":2},"971":{"body":289,"breadcrumbs":6,"title":3},"972":{"body":0,"breadcrumbs":5,"title":3},"973":{"body":0,"breadcrumbs":6,"title":4},"974":{"body":1,"breadcrumbs":4,"title":2},"975":{"body":7,"breadcrumbs":4,"title":2},"976":{"body":6,"breadcrumbs":5,"title":3},"977":{"body":7,"breadcrumbs":4,"title":2},"978":{"body":2,"breadcrumbs":4,"title":2},"979":{"body":6,"breadcrumbs":4,"title":2},"98":{"body":18,"breadcrumbs":6,"title":3},"980":{"body":6,"breadcrumbs":4,"title":2},"981":{"body":1,"breadcrumbs":4,"title":2},"982":{"body":15,"breadcrumbs":5,"title":3},"983":{"body":21,"breadcrumbs":5,"title":3},"984":{"body":87,"breadcrumbs":3,"title":1},"985":{"body":0,"breadcrumbs":3,"title":1},"986":{"body":833,"breadcrumbs":5,"title":3},"987":{"body":20,"breadcrumbs":7,"title":5},"988":{"body":12,"breadcrumbs":3,"title":1},"989":{"body":297,"breadcrumbs":4,"title":2},"99":{"body":74,"breadcrumbs":7,"title":4},"990":{"body":46,"breadcrumbs":8,"title":4},"991":{"body":45,"breadcrumbs":5,"title":1},"992":{"body":0,"breadcrumbs":6,"title":2},"993":{"body":44,"breadcrumbs":10,"title":6},"994":{"body":9,"breadcrumbs":8,"title":4},"995":{"body":13,"breadcrumbs":7,"title":3},"996":{"body":19,"breadcrumbs":9,"title":5},"997":{"body":0,"breadcrumbs":7,"title":3},"998":{"body":34,"breadcrumbs":9,"title":5},"999":{"body":29,"breadcrumbs":9,"title":5}},"docs":{"0":{"body":"Last Updated : 2025-01-02 (Phase 3.A Cleanup Complete) Status : ✅ Primary documentation source (145 files consolidated) Welcome to 
the comprehensive documentation for the Provisioning Platform - a modern, cloud-native infrastructure automation system built with Nushell, KCL, and Rust. Note : Architecture Decision Records (ADRs) and high-level design documentation are in docs/ directory. This location contains all user-facing, operational, and product documentation.","breadcrumbs":"Home » Provisioning Platform Documentation","id":"0","title":"Provisioning Platform Documentation"},"1":{"body":"","breadcrumbs":"Home » Quick Navigation","id":"1","title":"Quick Navigation"},"10":{"body":"Document Description Workspace Config Architecture Configuration architecture","breadcrumbs":"Home » 🔐 Configuration","id":"10","title":"🔐 Configuration"},"100":{"body":"Setup wizard won\'t start # Check Nushell\\nnu --version # Check permissions\\nchmod +x $(which provisioning) Configuration error # Validate configuration\\nprovisioning setup validate --verbose # Check paths\\nprovisioning info paths Deployment fails # Dry-run to see what would happen\\nprovisioning server create --check # Check platform status\\nprovisioning platform status","breadcrumbs":"Setup Quick Start » Troubleshooting Quick Fixes","id":"100","title":"Troubleshooting Quick Fixes"},"1000":{"body":"","breadcrumbs":"TypeDialog Platform Config Guide » Configuration Structure","id":"1000","title":"Configuration Structure"},"1001":{"body":"All configuration lives in one Nickel file with three sections: # workspace_librecloud/config/config.ncl\\n{ # SECTION 1: Workspace metadata workspace = { name = \\"librecloud\\", path = \\"/Users/Akasha/project-provisioning/workspace_librecloud\\", description = \\"Production workspace\\" }, # SECTION 2: Cloud providers providers = { upcloud = { enabled = true, api_user = \\"{{env.UPCLOUD_USER}}\\", api_password = \\"{{kms.decrypt(\'upcloud_pass\')}}\\" }, aws = { enabled = false }, local = { enabled = true } }, # SECTION 3: Platform services platform = { orchestrator = { enabled = true, server = { host = 
\\"127.0.0.1\\", port = 9090 }, storage = { type = \\"filesystem\\" } }, kms = { enabled = true, backend = \\"rustyvault\\", url = \\"http://localhost:8200\\" } }\\n}","breadcrumbs":"TypeDialog Platform Config Guide » Single File, Three Sections","id":"1001","title":"Single File, Three Sections"},"1002":{"body":"Section Purpose Used By workspace Workspace metadata and paths Config loader, providers providers.upcloud UpCloud provider settings UpCloud provisioning providers.aws AWS provider settings AWS provisioning providers.local Local VM provider settings Local VM provisioning Core Platform Services platform.orchestrator Orchestrator service config Orchestrator REST API platform.control_center Control center service config Control center REST API platform.mcp_server MCP server service config Model Context Protocol integration platform.installer Installer service config Infrastructure provisioning Security & Secrets platform.vault_service Vault service config Secrets management and encryption Extensions & Registry platform.extension_registry Extension registry config Extension distribution via Gitea/OCI AI & Intelligence platform.rag RAG system config Retrieval-Augmented Generation platform.ai_service AI service config AI model integration and DAG workflows Operations & Daemon platform.provisioning_daemon Provisioning daemon config Background provisioning operations","breadcrumbs":"TypeDialog Platform Config Guide » Available Configuration Sections","id":"1002","title":"Available Configuration Sections"},"1003":{"body":"","breadcrumbs":"TypeDialog Platform Config Guide » Service-Specific Configuration","id":"1003","title":"Service-Specific Configuration"},"1004":{"body":"Purpose : Coordinate infrastructure operations, manage workflows, handle batch operations Key Settings : server : HTTP server configuration (host, port, workers) storage : Task queue storage (filesystem or SurrealDB) queue : Task processing (concurrency, retries, timeouts) batch : Batch operation 
settings (parallelism, timeouts) monitoring : Health checks and metrics collection rollback : Checkpoint and recovery strategy logging : Log level and format Example : platform = { orchestrator = { enabled = true, server = { host = \\"127.0.0.1\\", port = 9090, workers = 4, keep_alive = 75, max_connections = 1000 }, storage = { type = \\"filesystem\\", backend_path = \\"{{workspace.path}}/.orchestrator/data/queue.rkvs\\" }, queue = { max_concurrent_tasks = 5, retry_attempts = 3, retry_delay_seconds = 5, task_timeout_minutes = 60 } }\\n}","breadcrumbs":"TypeDialog Platform Config Guide » Orchestrator Service","id":"1004","title":"Orchestrator Service"},"1005":{"body":"Purpose : Cryptographic key management, secret encryption/decryption Key Settings : backend : KMS backend (rustyvault, age, aws, vault, cosmian) url : Backend URL or connection string credentials : Authentication if required Example : platform = { kms = { enabled = true, backend = \\"rustyvault\\", url = \\"http://localhost:8200\\" }\\n}","breadcrumbs":"TypeDialog Platform Config Guide » KMS Service","id":"1005","title":"KMS Service"},"1006":{"body":"Purpose : Centralized monitoring and control interface Key Settings : server : HTTP server configuration database : Backend database connection jwt : JWT authentication settings security : CORS and security policies Example : platform = { control_center = { enabled = true, server = { host = \\"127.0.0.1\\", port = 8080 } }\\n}","breadcrumbs":"TypeDialog Platform Config Guide » Control Center Service","id":"1006","title":"Control Center Service"},"1007":{"body":"All platform services support four deployment modes, each with different resource allocation and feature sets: Mode Resources Use Case Storage TLS solo Minimal (2 workers) Development, testing Embedded/filesystem No multiuser Moderate (4 workers) Team environments Shared databases Optional cicd High throughput (8+ workers) CI/CD pipelines Ephemeral/memory No enterprise High availability (16+ 
workers) Production Clustered/distributed Yes Mode-based Configuration Loading : # Load a specific mode\'s configuration\\nexport VAULT_MODE=enterprise\\nexport REGISTRY_MODE=multiuser\\nexport RAG_MODE=cicd # Services automatically resolve to correct TOML files:\\n# Generated from: provisioning/schemas/platform/\\n# - vault-service.enterprise.toml (generated from vault-service.ncl)\\n# - extension-registry.multiuser.toml (generated from extension-registry.ncl)\\n# - rag.cicd.toml (generated from rag.ncl)","breadcrumbs":"TypeDialog Platform Config Guide » Deployment Modes","id":"1007","title":"Deployment Modes"},"1008":{"body":"","breadcrumbs":"TypeDialog Platform Config Guide » New Platform Services (Phase 13-19)","id":"1008","title":"New Platform Services (Phase 13-19)"},"1009":{"body":"Purpose : Secrets management, encryption, and cryptographic key storage Key Settings : server : HTTP server configuration (host, port, workers) storage : Backend storage (filesystem, memory, surrealdb, etcd, postgresql) vault : Vault mounting and key management ha : High availability clustering security : TLS, certificate validation logging : Log level and audit trails Mode Characteristics : solo : Filesystem storage, no TLS, embedded mode multiuser : SurrealDB backend, shared storage, TLS optional cicd : In-memory ephemeral storage, no persistence enterprise : Etcd HA, TLS required, audit logging enabled Environment Variable Overrides : VAULT_CONFIG=/path/to/vault.toml # Explicit config path\\nVAULT_MODE=enterprise # Mode-specific config\\nVAULT_SERVER_URL=http://localhost:8200 # Server URL\\nVAULT_STORAGE_BACKEND=etcd # Storage backend\\nVAULT_AUTH_TOKEN=s.xxxxxxxx # Authentication token\\nVAULT_TLS_VERIFY=true # TLS verification Example Configuration : platform = { vault_service = { enabled = true, server = { host = \\"0.0.0.0\\", port = 8200, workers = 8 }, storage = { backend = \\"surrealdb\\", url = \\"http://surrealdb:8000\\", namespace = \\"vault\\", database = 
\\"secrets\\" }, vault = { mount_point = \\"transit\\", key_name = \\"provisioning-master\\" }, ha = { enabled = true } }\\n}","breadcrumbs":"TypeDialog Platform Config Guide » Vault Service","id":"1009","title":"Vault Service"},"101":{"body":"After basic setup: Configure Provider : Add cloud provider credentials Create More Workspaces : Dev, staging, production Deploy Services : Web servers, databases, etc. Set Up Monitoring : Health checks, logging Automate Deployments : CI/CD integration","breadcrumbs":"Setup Quick Start » What\'s Next?","id":"101","title":"What\'s Next?"},"1010":{"body":"Purpose : Extension distribution and management via Gitea and OCI registries Key Settings : server : HTTP server configuration (host, port, workers) gitea : Gitea integration for extension source repository oci : OCI registry for artifact distribution cache : Metadata and list caching auth : Registry authentication Mode Characteristics : solo : Gitea only, minimal cache, CORS disabled multiuser : Gitea + OCI, both enabled, CORS enabled cicd : OCI only (high-throughput mode), ephemeral cache enterprise : Both Gitea + OCI, TLS verification, large cache Environment Variable Overrides : REGISTRY_CONFIG=/path/to/registry.toml # Explicit config path\\nREGISTRY_MODE=multiuser # Mode-specific config\\nREGISTRY_SERVER_HOST=0.0.0.0 # Server host\\nREGISTRY_SERVER_PORT=8081 # Server port\\nREGISTRY_SERVER_WORKERS=4 # Worker count\\nREGISTRY_GITEA_URL=http://gitea:3000 # Gitea URL\\nREGISTRY_GITEA_ORG=provisioning # Gitea organization\\nREGISTRY_OCI_REGISTRY=registry.local:5000 # OCI registry\\nREGISTRY_OCI_NAMESPACE=provisioning # OCI namespace Example Configuration : platform = { extension_registry = { enabled = true, server = { host = \\"0.0.0.0\\", port = 8081, workers = 4 }, gitea = { enabled = true, url = \\"http://gitea:3000\\", org = \\"provisioning\\" }, oci = { enabled = true, registry = \\"registry.local:5000\\", namespace = \\"provisioning\\" }, cache = { capacity = 1000, ttl = 
300 } }\\n}","breadcrumbs":"TypeDialog Platform Config Guide » Extension Registry Service","id":"1010","title":"Extension Registry Service"},"1011":{"body":"Purpose : Document retrieval, semantic search, and AI-augmented responses Key Settings : embeddings : Embedding model provider (openai, local, anthropic) vector_db : Vector database backend (memory, surrealdb, qdrant, milvus) llm : Language model provider (anthropic, openai, ollama) retrieval : Search strategy and parameters ingestion : Document processing and indexing Mode Characteristics : solo : Local embeddings, in-memory vector DB, Ollama LLM multiuser : OpenAI embeddings, SurrealDB vector DB, Anthropic LLM cicd : RAG completely disabled (not applicable for ephemeral pipelines) enterprise : Large embeddings (3072-dim), distributed vector DB, Claude Opus Environment Variable Overrides : RAG_CONFIG=/path/to/rag.toml # Explicit config path\\nRAG_MODE=multiuser # Mode-specific config\\nRAG_ENABLED=true # Enable/disable RAG\\nRAG_EMBEDDINGS_PROVIDER=openai # Embedding provider\\nRAG_EMBEDDINGS_API_KEY=sk-xxx # Embedding API key\\nRAG_VECTOR_DB_URL=http://surrealdb:8000 # Vector DB URL\\nRAG_LLM_PROVIDER=anthropic # LLM provider\\nRAG_LLM_API_KEY=sk-ant-xxx # LLM API key\\nRAG_VECTOR_DB_TYPE=surrealdb # Vector DB type Example Configuration : platform = { rag = { enabled = true, embeddings = { provider = \\"openai\\", model = \\"text-embedding-3-small\\", api_key = \\"{{env.OPENAI_API_KEY}}\\" }, vector_db = { db_type = \\"surrealdb\\", url = \\"http://surrealdb:8000\\", namespace = \\"rag_prod\\" }, llm = { provider = \\"anthropic\\", model = \\"claude-opus-4-5-20251101\\", api_key = \\"{{env.ANTHROPIC_API_KEY}}\\" }, retrieval = { top_k = 10, similarity_threshold = 0.75 } }\\n}","breadcrumbs":"TypeDialog Platform Config Guide » RAG (Retrieval-Augmented Generation) Service","id":"1011","title":"RAG (Retrieval-Augmented Generation) Service"},"1012":{"body":"Purpose : AI model integration with RAG and MCP support 
for multi-step workflows Key Settings : server : HTTP server configuration rag : RAG system integration mcp : Model Context Protocol integration dag : Directed acyclic graph task orchestration Mode Characteristics : solo : RAG enabled, no MCP, minimal concurrency (3 tasks) multiuser : Both RAG and MCP enabled, moderate concurrency (10 tasks) cicd : RAG disabled, MCP enabled, high concurrency (20 tasks) enterprise : Both enabled, max concurrency (50 tasks), full monitoring Environment Variable Overrides : AI_SERVICE_CONFIG=/path/to/ai.toml # Explicit config path\\nAI_SERVICE_MODE=enterprise # Mode-specific config\\nAI_SERVICE_SERVER_PORT=8082 # Server port\\nAI_SERVICE_SERVER_WORKERS=16 # Worker count\\nAI_SERVICE_RAG_ENABLED=true # Enable RAG integration\\nAI_SERVICE_MCP_ENABLED=true # Enable MCP integration\\nAI_SERVICE_DAG_MAX_CONCURRENT_TASKS=50 # Max concurrent tasks Example Configuration : platform = { ai_service = { enabled = true, server = { host = \\"0.0.0.0\\", port = 8082, workers = 8 }, rag = { enabled = true, rag_service_url = \\"http://rag:8083\\", timeout = 60000 }, mcp = { enabled = true, mcp_service_url = \\"http://mcp-server:8084\\", timeout = 60000 }, dag = { max_concurrent_tasks = 20, task_timeout = 600000, retry_attempts = 5 } }\\n}","breadcrumbs":"TypeDialog Platform Config Guide » AI Service","id":"1012","title":"AI Service"},"1013":{"body":"Purpose : Background service for provisioning operations, workspace management, and health monitoring Key Settings : daemon : Daemon control (poll interval, max workers) logging : Log level and output configuration actions : Automated actions (cleanup, updates, sync) workers : Worker pool configuration health : Health check settings Mode Characteristics : solo : Minimal polling, no auto-cleanup, debug logging multiuser : Standard polling, workspace sync enabled, info logging cicd : Frequent polling, ephemeral cleanup, warning logging enterprise : Standard polling, full automation, all features enabled 
Environment Variable Overrides : DAEMON_CONFIG=/path/to/daemon.toml # Explicit config path\\nDAEMON_MODE=enterprise # Mode-specific config\\nDAEMON_POLL_INTERVAL=30 # Polling interval (seconds)\\nDAEMON_MAX_WORKERS=16 # Maximum worker threads\\nDAEMON_LOGGING_LEVEL=info # Log level (debug/info/warn/error)\\nDAEMON_AUTO_CLEANUP=true # Enable auto cleanup\\nDAEMON_AUTO_UPDATE=true # Enable auto updates Example Configuration : platform = { provisioning_daemon = { enabled = true, daemon = { poll_interval = 30, max_workers = 8 }, logging = { level = \\"info\\", file = \\"/var/log/provisioning/daemon.log\\" }, actions = { auto_cleanup = true, auto_update = false, workspace_sync = true } }\\n}","breadcrumbs":"TypeDialog Platform Config Guide » Provisioning Daemon","id":"1013","title":"Provisioning Daemon"},"1014":{"body":"","breadcrumbs":"TypeDialog Platform Config Guide » Using TypeDialog Forms","id":"1014","title":"Using TypeDialog Forms"},"1015":{"body":"Interactive Prompts : Answer questions one at a time Validation : Inputs are validated as you type Defaults : Each field shows a sensible default Skip Optional : Press Enter to use default or skip optional fields Review : Preview generated Nickel before saving","breadcrumbs":"TypeDialog Platform Config Guide » Form Navigation","id":"1015","title":"Form Navigation"},"1016":{"body":"Type Example Notes text \\"127.0.0.1\\" Free-form text input confirm true/false Yes/no answer select \\"filesystem\\" Choose from list custom(u16) 9090 Number input custom(u32) 1000 Larger number","breadcrumbs":"TypeDialog Platform Config Guide » Field Types","id":"1016","title":"Field Types"},"1017":{"body":"Environment Variables : api_user = \\"{{env.UPCLOUD_USER}}\\"\\napi_password = \\"{{env.UPCLOUD_PASSWORD}}\\" Workspace Paths : data_dir = \\"{{workspace.path}}/.orchestrator/data\\"\\nlogs_dir = \\"{{workspace.path}}/.orchestrator/logs\\" KMS Decryption : api_password = \\"{{kms.decrypt(\'upcloud_pass\')}}\\"","breadcrumbs":"TypeDialog 
Platform Config Guide » Special Values","id":"1017","title":"Special Values"},"1018":{"body":"","breadcrumbs":"TypeDialog Platform Config Guide » Validation & Export","id":"1018","title":"Validation & Export"},"1019":{"body":"# Check Nickel syntax\\nnickel typecheck workspace_librecloud/config/config.ncl # Detailed validation with error messages\\nnickel typecheck workspace_librecloud/config/config.ncl 2>&1 # Schema validation happens during export\\nprovisioning config export","breadcrumbs":"TypeDialog Platform Config Guide » Validating Configuration","id":"1019","title":"Validating Configuration"},"102":{"body":"# Get help\\nprovisioning help # Setup help\\nprovisioning help setup # Specific command help\\nprovisioning --help # View documentation\\nprovisioning guide system-setup","breadcrumbs":"Setup Quick Start » Need Help?","id":"102","title":"Need Help?"},"1020":{"body":"# One-time export\\nprovisioning config export # Export creates (pre-configured TOML for all services):\\nworkspace_librecloud/config/generated/\\n├── workspace.toml # Workspace metadata\\n├── providers/\\n│ ├── upcloud.toml # UpCloud provider\\n│ └── local.toml # Local provider\\n└── platform/ ├── orchestrator.toml # Orchestrator service ├── control_center.toml # Control center service ├── mcp_server.toml # MCP server service ├── installer.toml # Installer service ├── kms.toml # KMS service ├── vault_service.toml # Vault service (new) ├── extension_registry.toml # Extension registry (new) ├── rag.toml # RAG service (new) ├── ai_service.toml # AI service (new) └── provisioning_daemon.toml # Daemon service (new) # Public Nickel Schemas (20 total for 5 new services):\\nprovisioning/schemas/platform/\\n├── schemas/\\n│ ├── vault-service.ncl\\n│ ├── extension-registry.ncl\\n│ ├── rag.ncl\\n│ ├── ai-service.ncl\\n│ └── provisioning-daemon.ncl\\n├── defaults/\\n│ ├── vault-service-defaults.ncl\\n│ ├── extension-registry-defaults.ncl\\n│ ├── rag-defaults.ncl\\n│ ├── ai-service-defaults.ncl\\n│ ├── 
provisioning-daemon-defaults.ncl\\n│ └── deployment/\\n│ ├── solo-defaults.ncl\\n│ ├── multiuser-defaults.ncl\\n│ ├── cicd-defaults.ncl\\n│ └── enterprise-defaults.ncl\\n├── validators/\\n├── templates/\\n├── constraints/\\n└── values/ Using Pre-Generated Configurations : All 5 new services come with pre-built TOML configs for each deployment mode: # View available schemas for vault service\\nls -la provisioning/schemas/platform/schemas/vault-service.ncl\\nls -la provisioning/schemas/platform/defaults/vault-service-defaults.ncl # Load enterprise mode\\nexport VAULT_MODE=enterprise\\ncargo run -p vault-service # Or load multiuser mode\\nexport REGISTRY_MODE=multiuser\\ncargo run -p extension-registry # All 5 services support mode-based loading\\nexport RAG_MODE=cicd\\nexport AI_SERVICE_MODE=enterprise\\nexport DAEMON_MODE=multiuser","breadcrumbs":"TypeDialog Platform Config Guide » Exporting to Service Formats","id":"1020","title":"Exporting to Service Formats"},"1021":{"body":"","breadcrumbs":"TypeDialog Platform Config Guide » Updating Configuration","id":"1021","title":"Updating Configuration"},"1022":{"body":"Edit source config : vim workspace_librecloud/config/config.ncl Validate changes : nickel typecheck workspace_librecloud/config/config.ncl Re-export to TOML : provisioning config export Restart affected service (if needed): provisioning restart orchestrator","breadcrumbs":"TypeDialog Platform Config Guide » Change a Setting","id":"1022","title":"Change a Setting"},"1023":{"body":"If you prefer interactive updating: # Re-run TypeDialog form (overwrites config.ncl)\\nprovisioning config platform orchestrator # Or edit via TypeDialog with existing values\\ntypedialog form .typedialog/provisioning/platform/orchestrator/form.toml","breadcrumbs":"TypeDialog Platform Config Guide » Using TypeDialog to Update","id":"1023","title":"Using TypeDialog to Update"},"1024":{"body":"","breadcrumbs":"TypeDialog Platform Config Guide » 
Troubleshooting","id":"1024","title":"Troubleshooting"},"1025":{"body":"Problem : Failed to parse config file Solution : Check form.toml syntax and verify required fields are present (name, description, locales_path, templates_path) head -10 .typedialog/provisioning/platform/orchestrator/form.toml","breadcrumbs":"TypeDialog Platform Config Guide » Form Won\'t Load","id":"1025","title":"Form Won\'t Load"},"1026":{"body":"Problem : Nickel configuration validation failed Solution : Check for syntax errors and correct field names nickel typecheck workspace_librecloud/config/config.ncl 2>&1 | less Common issues: Missing closing braces, incorrect field names, wrong data types","breadcrumbs":"TypeDialog Platform Config Guide » Validation Fails","id":"1026","title":"Validation Fails"},"1027":{"body":"Problem : Generated TOML files are empty Solution : Verify config.ncl exports to JSON and check all required sections exist nickel export --format json workspace_librecloud/config/config.ncl | head -20","breadcrumbs":"TypeDialog Platform Config Guide » Export Creates Empty Files","id":"1027","title":"Export Creates Empty Files"},"1028":{"body":"Problem : Changes don\'t take effect Solution : Verify export succeeded: ls -lah workspace_librecloud/config/generated/platform/ Check service path: provisioning start orchestrator --check Restart service: provisioning restart orchestrator","breadcrumbs":"TypeDialog Platform Config Guide » Services Don\'t Use New Config","id":"1028","title":"Services Don\'t Use New Config"},"1029":{"body":"","breadcrumbs":"TypeDialog Platform Config Guide » Configuration Examples","id":"1029","title":"Configuration Examples"},"103":{"body":"Your configuration is in: macOS : ~/Library/Application Support/provisioning/ Linux : ~/.config/provisioning/ Important files: system.toml - System configuration user_preferences.toml - User settings workspaces/*/ - Workspace definitions Ready to dive deeper? 
Check out the Full Setup Guide","breadcrumbs":"Setup Quick Start » Key Files","id":"103","title":"Key Files"},"1030":{"body":"{ workspace = { name = \\"dev\\", path = \\"/Users/dev/workspace\\", description = \\"Development workspace\\" }, providers = { local = { enabled = true, base_path = \\"/opt/vms\\" }, upcloud = { enabled = false }, aws = { enabled = false } }, platform = { orchestrator = { enabled = true, server = { host = \\"127.0.0.1\\", port = 9090 }, storage = { type = \\"filesystem\\" }, logging = { level = \\"debug\\", format = \\"json\\" } }, kms = { enabled = true, backend = \\"age\\" } }\\n}","breadcrumbs":"TypeDialog Platform Config Guide » Development Setup","id":"1030","title":"Development Setup"},"1031":{"body":"{ workspace = { name = \\"prod\\", path = \\"/opt/provisioning/prod\\", description = \\"Production workspace\\" }, providers = { upcloud = { enabled = true, api_user = \\"{{env.UPCLOUD_USER}}\\", api_password = \\"{{kms.decrypt(\'upcloud_prod\')}}\\", default_zone = \\"de-fra1\\" }, aws = { enabled = false }, local = { enabled = false } }, platform = { orchestrator = { enabled = true, server = { host = \\"0.0.0.0\\", port = 9090, workers = 8 }, storage = { type = \\"surrealdb-server\\", url = \\"ws://surreal.internal:8000\\" }, monitoring = { enabled = true, metrics_interval_seconds = 30 }, logging = { level = \\"info\\", format = \\"json\\" } }, kms = { enabled = true, backend = \\"vault\\", url = \\"https://vault.internal:8200\\" } }\\n}","breadcrumbs":"TypeDialog Platform Config Guide » Production Setup","id":"1031","title":"Production Setup"},"1032":{"body":"{ workspace = { name = \\"multi\\", path = \\"/opt/multi\\", description = \\"Multi-cloud workspace\\" }, providers = { upcloud = { enabled = true, api_user = \\"{{env.UPCLOUD_USER}}\\", default_zone = \\"de-fra1\\", zones = [\\"de-fra1\\", \\"us-nyc1\\", \\"nl-ams1\\"] }, aws = { enabled = true, access_key = \\"{{env.AWS_ACCESS_KEY_ID}}\\" }, local = { enabled = true, base_path 
= \\"/opt/local-vms\\" } }, platform = { orchestrator = { enabled = true, multi_workspace = false, storage = { type = \\"filesystem\\" } }, kms = { enabled = true, backend = \\"rustyvault\\" } }\\n}","breadcrumbs":"TypeDialog Platform Config Guide » Multi-Provider Setup","id":"1032","title":"Multi-Provider Setup"},"1033":{"body":"","breadcrumbs":"TypeDialog Platform Config Guide » Best Practices","id":"1033","title":"Best Practices"},"1034":{"body":"Start with TypeDialog forms for the best experience: provisioning config platform orchestrator","breadcrumbs":"TypeDialog Platform Config Guide » 1. Use TypeDialog for Initial Setup","id":"1034","title":"1. Use TypeDialog for Initial Setup"},"1035":{"body":"Only edit the source .ncl file, not the generated TOML files. Correct : vim workspace_librecloud/config/config.ncl Wrong : vim workspace_librecloud/config/generated/platform/orchestrator.toml","breadcrumbs":"TypeDialog Platform Config Guide » 2. Never Edit Generated Files","id":"1035","title":"2. Never Edit Generated Files"},"1036":{"body":"Always validate before deploying changes: nickel typecheck workspace_librecloud/config/config.ncl\\nprovisioning config export","breadcrumbs":"TypeDialog Platform Config Guide » 3. Validate Before Deploy","id":"1036","title":"3. Validate Before Deploy"},"1037":{"body":"Never hardcode credentials in config. Reference environment variables or KMS: Wrong : api_password = \\"my-password\\" Correct : api_password = \\"{{env.UPCLOUD_PASSWORD}}\\" Better : api_password = \\"{{kms.decrypt(\'upcloud_key\')}}\\"","breadcrumbs":"TypeDialog Platform Config Guide » 4. Use Environment Variables for Secrets","id":"1037","title":"4. Use Environment Variables for Secrets"},"1038":{"body":"Add comments explaining custom settings in the Nickel file.","breadcrumbs":"TypeDialog Platform Config Guide » 5. Document Changes","id":"1038","title":"5. 
Document Changes"},"1039":{"body":"","breadcrumbs":"TypeDialog Platform Config Guide » Related Documentation","id":"1039","title":"Related Documentation"},"104":{"body":"Version : 1.0.0 Last Updated : 2025-12-09 Status : Production Ready","breadcrumbs":"Setup System Guide » Provisioning Setup System Guide","id":"104","title":"Provisioning Setup System Guide"},"1040":{"body":"Configuration System : See CLAUDE.md#configuration-file-format-selection Migration Guide : See provisioning/config/README.md#migration-strategy Schema Reference : See provisioning/schemas/ Nickel Language : See ADR-011 in docs/architecture/adr/","breadcrumbs":"TypeDialog Platform Config Guide » Core Resources","id":"1040","title":"Core Resources"},"1041":{"body":"Platform Services Overview : See provisioning/platform/*/README.md Core Services (Phases 8-12): orchestrator, control-center, mcp-server New Services (Phases 13-19): vault-service: Secrets management and encryption extension-registry: Extension distribution via Gitea/OCI rag: Retrieval-Augmented Generation system ai-service: AI model integration with DAG workflows provisioning-daemon: Background provisioning operations Note : Installer is a distribution tool (provisioning/tools/distribution/create-installer.nu), not a platform service configurable via TypeDialog.","breadcrumbs":"TypeDialog Platform Config Guide » Platform Services","id":"1041","title":"Platform Services"},"1042":{"body":"TypeDialog Forms (Interactive UI): provisioning/.typedialog/platform/forms/ Nickel Schemas (Type Definitions): provisioning/schemas/platform/schemas/ Default Values (Base Configuration): provisioning/schemas/platform/defaults/ Validators (Business Logic): provisioning/schemas/platform/validators/ Deployment Modes (Presets): provisioning/schemas/platform/defaults/deployment/ Rust Integration : provisioning/platform/crates/*/src/config.rs","breadcrumbs":"TypeDialog Platform Config Guide » Public Definition Locations","id":"1042","title":"Public 
Definition Locations"},"1043":{"body":"","breadcrumbs":"TypeDialog Platform Config Guide » Getting Help","id":"1043","title":"Getting Help"},"1044":{"body":"Get detailed error messages and check available fields: nickel typecheck workspace_librecloud/config/config.ncl 2>&1 | less\\ngrep \\"prompt =\\" .typedialog/provisioning/platform/orchestrator/form.toml","breadcrumbs":"TypeDialog Platform Config Guide » Validation Errors","id":"1044","title":"Validation Errors"},"1045":{"body":"# Show all available config commands\\nprovisioning config --help # Show help for specific service\\nprovisioning config platform --help # List providers and services\\nprovisioning config providers list\\nprovisioning config services list","breadcrumbs":"TypeDialog Platform Config Guide » Configuration Questions","id":"1045","title":"Configuration Questions"},"1046":{"body":"# Validate without deploying\\nnickel typecheck workspace_librecloud/config/config.ncl # Export to see generated config\\nprovisioning config export # Check generated files\\nls -la workspace_librecloud/config/generated/","breadcrumbs":"TypeDialog Platform Config Guide » Test Configuration","id":"1046","title":"Test Configuration"},"1047":{"body":"Version : 1.0.0 Last Updated : 2026-01-05 Target Audience : DevOps Engineers, Platform Operators Status : Production Ready Practical guide for deploying the 9-service provisioning platform in any environment using mode-based configuration.","breadcrumbs":"Platform Deployment Guide » Platform Deployment Guide","id":"1047","title":"Platform Deployment Guide"},"1048":{"body":"Prerequisites Deployment Modes Quick Start Solo Mode Deployment Multiuser Mode Deployment CICD Mode Deployment Enterprise Mode Deployment Service Management Health Checks & Monitoring Troubleshooting","breadcrumbs":"Platform Deployment Guide » Table of Contents","id":"1048","title":"Table of Contents"},"1049":{"body":"","breadcrumbs":"Platform Deployment Guide » 
Prerequisites","id":"1049","title":"Prerequisites"},"105":{"body":"","breadcrumbs":"Setup System Guide » Quick Start","id":"105","title":"Quick Start"},"1050":{"body":"Rust : 1.70+ (for building services) Nickel : Latest (for config validation) Nushell : 0.109.1+ (for scripts) Cargo : Included with Rust Git : For cloning and pulling updates","breadcrumbs":"Platform Deployment Guide » Required Software","id":"1050","title":"Required Software"},"1051":{"body":"Tool Solo Multiuser CICD Enterprise Docker/Podman No Optional Yes Yes SurrealDB No Yes No No Etcd No No No Yes PostgreSQL No Optional No Optional OpenAI/Anthropic API No Optional Yes Yes","breadcrumbs":"Platform Deployment Guide » Required Tools (Mode-Dependent)","id":"1051","title":"Required Tools (Mode-Dependent)"},"1052":{"body":"Resource Solo Multiuser CICD Enterprise CPU Cores 2+ 4+ 8+ 16+ Memory 2 GB 4 GB 8 GB 16 GB Disk 10 GB 50 GB 100 GB 500 GB Network Local Local/Cloud Cloud HA Cloud","breadcrumbs":"Platform Deployment Guide » System Requirements","id":"1052","title":"System Requirements"},"1053":{"body":"# Ensure base directories exist\\nmkdir -p provisioning/schemas/platform\\nmkdir -p provisioning/platform/logs\\nmkdir -p provisioning/platform/data\\nmkdir -p provisioning/.typedialog/platform\\nmkdir -p provisioning/config/runtime","breadcrumbs":"Platform Deployment Guide » Directory Structure","id":"1053","title":"Directory Structure"},"1054":{"body":"","breadcrumbs":"Platform Deployment Guide » Deployment Modes","id":"1054","title":"Deployment Modes"},"1055":{"body":"Requirement Recommended Mode Development & testing solo Team environment (2-10 people) multiuser CI/CD pipelines & automation cicd Production with HA enterprise","breadcrumbs":"Platform Deployment Guide » Mode Selection Matrix","id":"1055","title":"Mode Selection Matrix"},"1056":{"body":"Solo Mode Use Case : Development, testing, demonstration Characteristics : All services run locally with minimal resources Filesystem-based storage 
(no external databases) No TLS/SSL required Embedded/in-memory backends Single machine only Services Configuration : 2-4 workers per service 30-60 second timeouts No replication or clustering Debug-level logging enabled Startup Time : ~2-5 minutes Data Persistence : Local files only Multiuser Mode Use Case : Team environments, shared infrastructure Characteristics : Shared database backends (SurrealDB) Multiple concurrent users CORS and multi-user features enabled Optional TLS support 2-4 machines (or containerized) Services Configuration : 4-6 workers per service 60-120 second timeouts Basic replication available Info-level logging Startup Time : ~3-8 minutes (database dependent) Data Persistence : SurrealDB (shared) CICD Mode Use Case : CI/CD pipelines, ephemeral environments Characteristics : Ephemeral storage (memory, temporary) High throughput RAG system disabled Minimal logging Stateless services Services Configuration : 8-12 workers per service 10-30 second timeouts No persistence Warn-level logging Startup Time : ~1-2 minutes Data Persistence : None (ephemeral) Enterprise Mode Use Case : Production, high availability, compliance Characteristics : Distributed, replicated backends High availability (HA) clustering TLS/SSL encryption Audit logging Full monitoring and observability Services Configuration : 16-32 workers per service 120-300 second timeouts Active replication across 3+ nodes Info-level logging with audit trails Startup Time : ~5-15 minutes (cluster initialization) Data Persistence : Replicated across cluster","breadcrumbs":"Platform Deployment Guide » Mode Characteristics","id":"1056","title":"Mode Characteristics"},"1057":{"body":"","breadcrumbs":"Platform Deployment Guide » Quick Start","id":"1057","title":"Quick Start"},"1058":{"body":"git clone https://github.com/your-org/project-provisioning.git\\ncd project-provisioning","breadcrumbs":"Platform Deployment Guide » 1. Clone Repository","id":"1058","title":"1. 
Clone Repository"},"1059":{"body":"Choose your mode based on use case: # For development\\nexport DEPLOYMENT_MODE=solo # For team environments\\nexport DEPLOYMENT_MODE=multiuser # For CI/CD\\nexport DEPLOYMENT_MODE=cicd # For production\\nexport DEPLOYMENT_MODE=enterprise","breadcrumbs":"Platform Deployment Guide » 2. Select Deployment Mode","id":"1059","title":"2. Select Deployment Mode"},"106":{"body":"Nushell 0.109.0+ bash One deployment tool: Docker, Kubernetes, SSH, or systemd Optional: KCL, SOPS, Age","breadcrumbs":"Setup System Guide » Prerequisites","id":"106","title":"Prerequisites"},"1060":{"body":"All services use mode-specific TOML configs automatically loaded via environment variables: # Vault Service\\nexport VAULT_MODE=$DEPLOYMENT_MODE # Extension Registry\\nexport REGISTRY_MODE=$DEPLOYMENT_MODE # RAG System\\nexport RAG_MODE=$DEPLOYMENT_MODE # AI Service\\nexport AI_SERVICE_MODE=$DEPLOYMENT_MODE # Provisioning Daemon\\nexport DAEMON_MODE=$DEPLOYMENT_MODE","breadcrumbs":"Platform Deployment Guide » 3. Set Environment Variables","id":"1060","title":"3. Set Environment Variables"},"1061":{"body":"# Build all platform crates\\ncargo build --release -p vault-service \\\\ -p extension-registry \\\\ -p provisioning-rag \\\\ -p ai-service \\\\ -p provisioning-daemon \\\\ -p orchestrator \\\\ -p control-center \\\\ -p mcp-server \\\\ -p installer","breadcrumbs":"Platform Deployment Guide » 4. Build All Services","id":"1061","title":"4. Build All Services"},"1062":{"body":"# Start in dependency order: # 1. Core infrastructure (KMS, storage)\\ncargo run --release -p vault-service & # 2. Configuration and extensions\\ncargo run --release -p extension-registry & # 3. AI/RAG layer\\ncargo run --release -p provisioning-rag &\\ncargo run --release -p ai-service & # 4. Orchestration layer\\ncargo run --release -p orchestrator &\\ncargo run --release -p control-center &\\ncargo run --release -p mcp-server & # 5. 
Background operations\\ncargo run --release -p provisioning-daemon & # 6. Installer (optional, for new deployments)\\ncargo run --release -p installer &","breadcrumbs":"Platform Deployment Guide » 5. Start Services (Order Matters)","id":"1062","title":"5. Start Services (Order Matters)"},"1063":{"body":"# Check all services are running\\npgrep -l \\"vault-service|extension-registry|provisioning-rag|ai-service\\" # Test endpoints\\ncurl http://localhost:8200/health # Vault\\ncurl http://localhost:8081/health # Registry\\ncurl http://localhost:8083/health # RAG\\ncurl http://localhost:8082/health # AI Service\\ncurl http://localhost:9090/health # Orchestrator\\ncurl http://localhost:8080/health # Control Center","breadcrumbs":"Platform Deployment Guide » 6. Verify Services","id":"1063","title":"6. Verify Services"},"1064":{"body":"Perfect for : Development, testing, learning","breadcrumbs":"Platform Deployment Guide » Solo Mode Deployment","id":"1064","title":"Solo Mode Deployment"},"1065":{"body":"# Check that solo schemas are available\\nls -la provisioning/schemas/platform/defaults/deployment/solo-defaults.ncl # Available schemas for each service:\\n# - provisioning/schemas/platform/schemas/vault-service.ncl\\n# - provisioning/schemas/platform/schemas/extension-registry.ncl\\n# - provisioning/schemas/platform/schemas/rag.ncl\\n# - provisioning/schemas/platform/schemas/ai-service.ncl\\n# - provisioning/schemas/platform/schemas/provisioning-daemon.ncl","breadcrumbs":"Platform Deployment Guide » Step 1: Verify Solo Configuration Files","id":"1065","title":"Step 1: Verify Solo Configuration Files"},"1066":{"body":"# Set all services to solo mode\\nexport VAULT_MODE=solo\\nexport REGISTRY_MODE=solo\\nexport RAG_MODE=solo\\nexport AI_SERVICE_MODE=solo\\nexport DAEMON_MODE=solo # Verify settings\\necho $VAULT_MODE # Should output: solo","breadcrumbs":"Platform Deployment Guide » Step 2: Set Solo Environment Variables","id":"1066","title":"Step 2: Set Solo Environment 
Variables"},"1067":{"body":"# Build in release mode for better performance\\ncargo build --release","breadcrumbs":"Platform Deployment Guide » Step 3: Build Services","id":"1067","title":"Step 3: Build Services"},"1068":{"body":"# Create storage directories for solo mode\\nmkdir -p /tmp/provisioning-solo/{vault,registry,rag,ai,daemon}\\nchmod 755 /tmp/provisioning-solo/{vault,registry,rag,ai,daemon}","breadcrumbs":"Platform Deployment Guide » Step 4: Create Local Data Directories","id":"1068","title":"Step 4: Create Local Data Directories"},"1069":{"body":"# Start each service in a separate terminal or use tmux: # Terminal 1: Vault\\ncargo run --release -p vault-service # Terminal 2: Registry\\ncargo run --release -p extension-registry # Terminal 3: RAG\\ncargo run --release -p provisioning-rag # Terminal 4: AI Service\\ncargo run --release -p ai-service # Terminal 5: Orchestrator\\ncargo run --release -p orchestrator # Terminal 6: Control Center\\ncargo run --release -p control-center # Terminal 7: Daemon\\ncargo run --release -p provisioning-daemon","breadcrumbs":"Platform Deployment Guide » Step 5: Start Services","id":"1069","title":"Step 5: Start Services"},"107":{"body":"# Install provisioning\\ncurl -sSL https://install.provisioning.dev | bash # Run setup wizard\\nprovisioning setup system --interactive # Create workspace\\nprovisioning setup workspace myproject # Start deploying\\nprovisioning server create\\n```plaintext ## Configuration Paths **macOS**: `~/Library/Application Support/provisioning/`\\n**Linux**: `~/.config/provisioning/`\\n**Windows**: `%APPDATA%/provisioning/` ## Directory Structure ```plaintext\\nprovisioning/\\n├── system.toml # System info (immutable)\\n├── user_preferences.toml # User settings (editable)\\n├── platform/ # Platform services\\n├── providers/ # Provider configs\\n└── workspaces/ # Workspace definitions └── myproject/ ├── config/ ├── infra/ └── auth.token\\n```plaintext ## Setup Wizard Run the interactive setup wizard: 
```bash\\nprovisioning setup system --interactive\\n```plaintext The wizard guides you through: 1. Welcome & Prerequisites Check\\n2. Operating System Detection\\n3. Configuration Path Selection\\n4. Platform Services Setup\\n5. Provider Selection\\n6. Security Configuration\\n7. Review & Confirmation ## Configuration Management ### Hierarchy (highest to lowest priority) 1. Runtime Arguments (`--flag value`)\\n2. Environment Variables (`PROVISIONING_*`)\\n3. Workspace Configuration\\n4. Workspace Authentication Token\\n5. User Preferences (`user_preferences.toml`)\\n6. Platform Configurations (`platform/*.toml`)\\n7. Provider Configurations (`providers/*.toml`)\\n8. System Configuration (`system.toml`)\\n9. Built-in Defaults ### Configuration Files - `system.toml` - System information (OS, architecture, paths)\\n- `user_preferences.toml` - User preferences (editor, format, etc.)\\n- `platform/*.toml` - Service endpoints and configuration\\n- `providers/*.toml` - Cloud provider settings ## Multiple Workspaces Create and manage multiple isolated environments: ```bash\\n# Create workspace\\nprovisioning setup workspace dev\\nprovisioning setup workspace prod # List workspaces\\nprovisioning workspace list # Activate workspace\\nprovisioning workspace activate prod\\n```plaintext ## Configuration Updates Update any setting: ```bash\\n# Update platform configuration\\nprovisioning setup platform --config new-config.toml # Update provider settings\\nprovisioning setup provider upcloud --config upcloud-config.toml # Validate changes\\nprovisioning setup validate\\n```plaintext ## Backup & Restore ```bash\\n# Backup current configuration\\nprovisioning setup backup --path ./backup.tar.gz # Restore from backup\\nprovisioning setup restore --path ./backup.tar.gz # Migrate from old setup\\nprovisioning setup migrate --from-existing\\n```plaintext ## Troubleshooting ### \\"Command not found: provisioning\\" ```bash\\nexport PATH=\\"/usr/local/bin:$PATH\\"\\n```plaintext ### 
\\"Nushell not found\\" ```bash\\ncurl -sSL https://raw.githubusercontent.com/nushell/nushell/main/install.sh | bash\\n```plaintext ### \\"Cannot write to directory\\" ```bash\\nchmod 755 ~/Library/Application\\\\ Support/provisioning/\\n```plaintext ### Check required tools ```bash\\nprovisioning setup validate --check-tools\\n```plaintext ## FAQ **Q: Do I need all optional tools?**\\nA: No. You need at least one deployment tool (Docker, Kubernetes, SSH, or systemd). **Q: Can I use provisioning without Docker?**\\nA: Yes. Provisioning supports Docker, Kubernetes, SSH, systemd, or combinations. **Q: How do I update configuration?**\\nA: `provisioning setup update ` **Q: Can I have multiple workspaces?**\\nA: Yes, unlimited workspaces. **Q: Is my configuration secure?**\\nA: Yes. Credentials stored securely, never in config files. **Q: Can I share workspaces with my team?**\\nA: Yes, via GitOps - configurations in Git, secrets in secure storage. ## Getting Help ```bash\\n# General help\\nprovisioning help # Setup help\\nprovisioning help setup # Specific command help\\nprovisioning setup system --help\\n```plaintext ## Next Steps 1. [Installation Guide](installation-guide.md)\\n2. [Workspace Setup](workspace-setup.md)\\n3. [Provider Configuration](provider-setup.md)\\n4. [From Scratch Guide](../guides/from-scratch.md) --- **Status**: Production Ready ✅\\n**Version**: 1.0.0\\n**Last Updated**: 2025-12-09","breadcrumbs":"Setup System Guide » 30-Second Setup","id":"107","title":"30-Second Setup"},"1070":{"body":"# Wait 10-15 seconds for services to start, then test # Check service health\\ncurl -s http://localhost:8200/health | jq .\\ncurl -s http://localhost:8081/health | jq .\\ncurl -s http://localhost:8083/health | jq . 
# Try a simple operation\\ncurl -X GET http://localhost:9090/api/v1/health","breadcrumbs":"Platform Deployment Guide » Step 6: Test Services","id":"1070","title":"Step 6: Test Services"},"1071":{"body":"# Check that data is stored locally\\nls -la /tmp/provisioning-solo/vault/\\nls -la /tmp/provisioning-solo/registry/ # Data should accumulate as you use the services","breadcrumbs":"Platform Deployment Guide » Step 7: Verify Persistence (Optional)","id":"1071","title":"Step 7: Verify Persistence (Optional)"},"1072":{"body":"# Stop all services\\npkill -f \\"cargo run --release\\" # Remove temporary data (optional)\\nrm -rf /tmp/provisioning-solo","breadcrumbs":"Platform Deployment Guide » Cleanup","id":"1072","title":"Cleanup"},"1073":{"body":"Perfect for : Team environments, shared infrastructure","breadcrumbs":"Platform Deployment Guide » Multiuser Mode Deployment","id":"1073","title":"Multiuser Mode Deployment"},"1074":{"body":"SurrealDB : Running and accessible at http://surrealdb:8000 Network Access : All machines can reach SurrealDB DNS/Hostnames : Services accessible via hostnames (not just localhost)","breadcrumbs":"Platform Deployment Guide » Prerequisites","id":"1074","title":"Prerequisites"},"1075":{"body":"# Using Docker (recommended)\\ndocker run -d \\\\ --name surrealdb \\\\ -p 8000:8000 \\\\ surrealdb/surrealdb:latest \\\\ start --user root --pass root # Or using native installation:\\nsurreal start --user root --pass root","breadcrumbs":"Platform Deployment Guide » Step 1: Deploy SurrealDB","id":"1075","title":"Step 1: Deploy SurrealDB"},"1076":{"body":"# Test SurrealDB connection\\ncurl -s http://localhost:8000/health # Should return: {\\"version\\":\\"v1.x.x\\"}","breadcrumbs":"Platform Deployment Guide » Step 2: Verify SurrealDB Connectivity","id":"1076","title":"Step 2: Verify SurrealDB Connectivity"},"1077":{"body":"# Configure all services for multiuser mode\\nexport VAULT_MODE=multiuser\\nexport REGISTRY_MODE=multiuser\\nexport 
RAG_MODE=multiuser\\nexport AI_SERVICE_MODE=multiuser\\nexport DAEMON_MODE=multiuser # Set database connection\\nexport SURREALDB_URL=http://surrealdb:8000\\nexport SURREALDB_USER=root\\nexport SURREALDB_PASS=root # Set service hostnames (if not localhost)\\nexport VAULT_SERVICE_HOST=vault.internal\\nexport REGISTRY_HOST=registry.internal\\nexport RAG_HOST=rag.internal","breadcrumbs":"Platform Deployment Guide » Step 3: Set Multiuser Environment Variables","id":"1077","title":"Step 3: Set Multiuser Environment Variables"},"1078":{"body":"cargo build --release","breadcrumbs":"Platform Deployment Guide » Step 4: Build Services","id":"1078","title":"Step 4: Build Services"},"1079":{"body":"# Create directories on shared storage (NFS, etc.)\\nmkdir -p /mnt/provisioning-data/{vault,registry,rag,ai}\\nchmod 755 /mnt/provisioning-data/{vault,registry,rag,ai} # Or use local directories if on separate machines\\nmkdir -p /var/lib/provisioning/{vault,registry,rag,ai}","breadcrumbs":"Platform Deployment Guide » Step 5: Create Shared Data Directories","id":"1079","title":"Step 5: Create Shared Data Directories"},"108":{"body":"This guide has moved to a multi-chapter format for better readability.","breadcrumbs":"Quick Start (Full) » Quick Start","id":"108","title":"Quick Start"},"1080":{"body":"# Machine 1: Infrastructure services\\nssh ops@machine1\\nexport VAULT_MODE=multiuser\\ncargo run --release -p vault-service &\\ncargo run --release -p extension-registry & # Machine 2: AI services\\nssh ops@machine2\\nexport RAG_MODE=multiuser\\nexport AI_SERVICE_MODE=multiuser\\ncargo run --release -p provisioning-rag &\\ncargo run --release -p ai-service & # Machine 3: Orchestration\\nssh ops@machine3\\ncargo run --release -p orchestrator &\\ncargo run --release -p control-center & # Machine 4: Background tasks\\nssh ops@machine4\\nexport DAEMON_MODE=multiuser\\ncargo run --release -p provisioning-daemon &","breadcrumbs":"Platform Deployment Guide » Step 6: Start Services on Multiple 
Machines","id":"1080","title":"Step 6: Start Services on Multiple Machines"},"1081":{"body":"# From any machine, test cross-machine connectivity\\ncurl -s http://machine1:8200/health\\ncurl -s http://machine2:8083/health\\ncurl -s http://machine3:9090/health # Test integration\\ncurl -X POST http://machine3:9090/api/v1/provision \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{\\"workspace\\": \\"test\\"}\'","breadcrumbs":"Platform Deployment Guide » Step 7: Test Multi-Machine Setup","id":"1081","title":"Step 7: Test Multi-Machine Setup"},"1082":{"body":"# Create shared credentials\\nexport VAULT_TOKEN=s.xxxxxxxxxxx # Configure TLS (optional but recommended)\\n# Update configs to use https:// URLs\\nexport VAULT_MODE=multiuser\\n# Edit provisioning/schemas/platform/schemas/vault-service.ncl\\n# Add TLS configuration in the schema definition\\n# See: provisioning/schemas/platform/validators/ for constraints","breadcrumbs":"Platform Deployment Guide » Step 8: Enable User Access","id":"1082","title":"Step 8: Enable User Access"},"1083":{"body":"# Check all services are connected to SurrealDB\\nfor host in machine1 machine2 machine3 machine4; do ssh ops@$host \\"curl -s http://localhost/api/v1/health | jq .database_connected\\"\\ndone # Monitor SurrealDB\\ncurl -s http://surrealdb:8000/version","breadcrumbs":"Platform Deployment Guide » Monitoring Multiuser Deployment","id":"1083","title":"Monitoring Multiuser Deployment"},"1084":{"body":"Perfect for : GitHub Actions, GitLab CI, Jenkins, cloud automation","breadcrumbs":"Platform Deployment Guide » CICD Mode Deployment","id":"1084","title":"CICD Mode Deployment"},"1085":{"body":"CICD mode services: Don\'t persist data between runs Use in-memory storage Have RAG completely disabled Optimize for startup speed Suitable for containerized deployments","breadcrumbs":"Platform Deployment Guide » Step 1: Understand Ephemeral Nature","id":"1085","title":"Step 1: Understand Ephemeral Nature"},"1086":{"body":"# Use cicd 
mode for all services\\nexport VAULT_MODE=cicd\\nexport REGISTRY_MODE=cicd\\nexport RAG_MODE=cicd\\nexport AI_SERVICE_MODE=cicd\\nexport DAEMON_MODE=cicd # Disable TLS (not needed in CI)\\nexport CI_ENVIRONMENT=true","breadcrumbs":"Platform Deployment Guide » Step 2: Set CICD Environment Variables","id":"1086","title":"Step 2: Set CICD Environment Variables"},"1087":{"body":"# Dockerfile for CICD deployments\\nFROM rust:1.75-slim WORKDIR /app\\nCOPY . . # Build all services\\nRUN cargo build --release # Set CICD mode\\nENV VAULT_MODE=cicd\\nENV REGISTRY_MODE=cicd\\nENV RAG_MODE=cicd\\nENV AI_SERVICE_MODE=cicd # Expose ports\\nEXPOSE 8200 8081 8083 8082 9090 8080 # Run services\\nCMD [\\"sh\\", \\"-c\\", \\"\\\\ cargo run --release -p vault-service & \\\\ cargo run --release -p extension-registry & \\\\ cargo run --release -p provisioning-rag & \\\\ cargo run --release -p ai-service & \\\\ cargo run --release -p orchestrator & \\\\ wait\\"]","breadcrumbs":"Platform Deployment Guide » Step 3: Containerize Services (Optional)","id":"1087","title":"Step 3: Containerize Services (Optional)"},"1088":{"body":"name: CICD Platform Deployment on: push: branches: [main, develop] jobs: test-deployment: runs-on: ubuntu-latest steps: - uses: actions/checkout@v3 - name: Install Rust uses: actions-rs/toolchain@v1 with: toolchain: 1.75 profile: minimal - name: Set CICD Mode run: | echo \\"VAULT_MODE=cicd\\" >> $GITHUB_ENV echo \\"REGISTRY_MODE=cicd\\" >> $GITHUB_ENV echo \\"RAG_MODE=cicd\\" >> $GITHUB_ENV echo \\"AI_SERVICE_MODE=cicd\\" >> $GITHUB_ENV echo \\"DAEMON_MODE=cicd\\" >> $GITHUB_ENV - name: Build Services run: cargo build --release - name: Run Integration Tests run: | # Start services in background cargo run --release -p vault-service & cargo run --release -p extension-registry & cargo run --release -p orchestrator & # Wait for startup sleep 10 # Run tests cargo test --release - name: Health Checks run: | curl -f http://localhost:8200/health curl -f 
http://localhost:8081/health curl -f http://localhost:9090/health deploy: needs: test-deployment runs-on: ubuntu-latest if: github.ref == \'refs/heads/main\' steps: - uses: actions/checkout@v3 - name: Deploy to Production run: | # Deploy production enterprise cluster ./scripts/deploy-enterprise.sh","breadcrumbs":"Platform Deployment Guide » Step 4: GitHub Actions Example","id":"1088","title":"Step 4: GitHub Actions Example"},"1089":{"body":"# Simulate CI environment locally\\nexport VAULT_MODE=cicd\\nexport CI_ENVIRONMENT=true # Build\\ncargo build --release # Run short-lived services for testing\\ntimeout 30 cargo run --release -p vault-service &\\ntimeout 30 cargo run --release -p extension-registry &\\ntimeout 30 cargo run --release -p orchestrator & # Run tests while services are running\\nsleep 5\\ncargo test --release # Services auto-cleanup after timeout","breadcrumbs":"Platform Deployment Guide » Step 5: Run CICD Tests","id":"1089","title":"Step 5: Run CICD Tests"},"109":{"body":"Please see the complete quick start guide here: Prerequisites - System requirements and setup Installation - Install provisioning platform First Deployment - Deploy your first infrastructure Verification - Verify your deployment","breadcrumbs":"Quick Start (Full) » 📖 Navigate to Quick Start Guide","id":"109","title":"📖 Navigate to Quick Start Guide"},"1090":{"body":"Perfect for : Production, high availability, compliance","breadcrumbs":"Platform Deployment Guide » Enterprise Mode Deployment","id":"1090","title":"Enterprise Mode Deployment"},"1091":{"body":"3+ Machines : Minimum 3 for HA Etcd Cluster : For distributed consensus Load Balancer : HAProxy, nginx, or cloud LB TLS Certificates : Valid certificates for all services Monitoring : Prometheus, ELK, or cloud monitoring Backup System : Daily snapshots to S3 or similar","breadcrumbs":"Platform Deployment Guide » Prerequisites","id":"1091","title":"Prerequisites"},"1092":{"body":"1.1 Deploy Etcd Cluster # Node 1, 2, 3\\netcd 
--name=node-1 \\\\ --listen-client-urls=http://0.0.0.0:2379 \\\\ --advertise-client-urls=http://node-1.internal:2379 \\\\ --initial-cluster=\\"node-1=http://node-1.internal:2380,node-2=http://node-2.internal:2380,node-3=http://node-3.internal:2380\\" \\\\ --initial-cluster-state=new # Verify cluster\\netcdctl --endpoints=http://localhost:2379 member list 1.2 Deploy Load Balancer # HAProxy configuration for vault-service (example)\\nfrontend vault_frontend bind *:8200 mode tcp default_backend vault_backend backend vault_backend mode tcp balance roundrobin server vault-1 10.0.1.10:8200 check server vault-2 10.0.1.11:8200 check server vault-3 10.0.1.12:8200 check 1.3 Configure TLS # Generate certificates (or use existing)\\nmkdir -p /etc/provisioning/tls # For each service:\\nopenssl req -x509 -newkey rsa:4096 \\\\ -keyout /etc/provisioning/tls/vault-key.pem \\\\ -out /etc/provisioning/tls/vault-cert.pem \\\\ -days 365 -nodes \\\\ -subj \\"/CN=vault.provisioning.prod\\" # Set permissions\\nchmod 600 /etc/provisioning/tls/*-key.pem\\nchmod 644 /etc/provisioning/tls/*-cert.pem","breadcrumbs":"Platform Deployment Guide » Step 1: Deploy Infrastructure","id":"1092","title":"Step 1: Deploy Infrastructure"},"1093":{"body":"# All machines: Set enterprise mode\\nexport VAULT_MODE=enterprise\\nexport REGISTRY_MODE=enterprise\\nexport RAG_MODE=enterprise\\nexport AI_SERVICE_MODE=enterprise\\nexport DAEMON_MODE=enterprise # Database cluster\\nexport SURREALDB_URL=\\"ws://surrealdb-cluster.internal:8000\\"\\nexport SURREALDB_REPLICAS=3 # Etcd cluster\\nexport ETCD_ENDPOINTS=\\"http://node-1.internal:2379,http://node-2.internal:2379,http://node-3.internal:2379\\" # TLS configuration\\nexport TLS_CERT_PATH=/etc/provisioning/tls\\nexport TLS_VERIFY=true\\nexport TLS_CA_CERT=/etc/provisioning/tls/ca.crt # Monitoring\\nexport PROMETHEUS_URL=http://prometheus.internal:9090\\nexport METRICS_ENABLED=true\\nexport AUDIT_LOG_ENABLED=true","breadcrumbs":"Platform Deployment Guide » Step 2: 
Set Enterprise Environment Variables","id":"1093","title":"Step 2: Set Enterprise Environment Variables"},"1094":{"body":"# Ansible playbook (simplified)\\n---\\n- hosts: provisioning_cluster tasks: - name: Build services shell: cargo build --release - name: Start vault-service (machine 1-3) shell: \\"cargo run --release -p vault-service\\" when: \\"\'vault\' in group_names\\" - name: Start orchestrator (machine 2-3) shell: \\"cargo run --release -p orchestrator\\" when: \\"\'orchestrator\' in group_names\\" - name: Start daemon (machine 3) shell: \\"cargo run --release -p provisioning-daemon\\" when: \\"\'daemon\' in group_names\\" - name: Verify cluster health uri: url: \\"https://{{ inventory_hostname }}:9090/health\\" validate_certs: yes","breadcrumbs":"Platform Deployment Guide » Step 3: Deploy Services Across Cluster","id":"1094","title":"Step 3: Deploy Services Across Cluster"},"1095":{"body":"# Check cluster status\\ncurl -s https://vault.internal:8200/health | jq .state # Check replication\\ncurl -s https://orchestrator.internal:9090/api/v1/cluster/status # Monitor etcd\\netcdctl --endpoints=https://node-1.internal:2379 endpoint health # Check leader election\\netcdctl --endpoints=https://node-1.internal:2379 election list","breadcrumbs":"Platform Deployment Guide » Step 4: Monitor Cluster Health","id":"1095","title":"Step 4: Monitor Cluster Health"},"1096":{"body":"# Prometheus configuration\\nglobal: scrape_interval: 30s evaluation_interval: 30s scrape_configs: - job_name: \'vault-service\' scheme: https tls_config: ca_file: /etc/provisioning/tls/ca.crt static_configs: - targets: [\'vault-1.internal:8200\', \'vault-2.internal:8200\', \'vault-3.internal:8200\'] - job_name: \'orchestrator\' scheme: https static_configs: - targets: [\'orch-1.internal:9090\', \'orch-2.internal:9090\', \'orch-3.internal:9090\']","breadcrumbs":"Platform Deployment Guide » Step 5: Enable Monitoring & Alerting","id":"1096","title":"Step 5: Enable Monitoring & 
Alerting"},"1097":{"body":"# Daily backup script\\n#!/bin/bash\\nBACKUP_DIR=\\"/mnt/provisioning-backups\\"\\nDATE=$(date +%Y%m%d_%H%M%S) # Backup etcd\\netcdctl --endpoints=https://node-1.internal:2379 \\\\ snapshot save \\"$BACKUP_DIR/etcd-$DATE.db\\" # Backup SurrealDB\\ncurl -X POST https://surrealdb.internal:8000/backup \\\\ -H \\"Authorization: Bearer $SURREALDB_TOKEN\\" \\\\ > \\"$BACKUP_DIR/surreal-$DATE.sql\\" # Upload to S3\\naws s3 cp \\"$BACKUP_DIR/etcd-$DATE.db\\" \\\\ s3://provisioning-backups/etcd/ # Cleanup old backups (keep 30 days)\\nfind \\"$BACKUP_DIR\\" -mtime +30 -delete","breadcrumbs":"Platform Deployment Guide » Step 6: Backup & Recovery","id":"1097","title":"Step 6: Backup & Recovery"},"1098":{"body":"","breadcrumbs":"Platform Deployment Guide » Service Management","id":"1098","title":"Service Management"},"1099":{"body":"Individual Service Startup # Start one service\\nexport VAULT_MODE=enterprise\\ncargo run --release -p vault-service # In another terminal\\nexport REGISTRY_MODE=enterprise\\ncargo run --release -p extension-registry Batch Startup # Start all services (dependency order)\\n#!/bin/bash\\nset -e MODE=${1:-solo}\\nexport VAULT_MODE=$MODE\\nexport REGISTRY_MODE=$MODE\\nexport RAG_MODE=$MODE\\nexport AI_SERVICE_MODE=$MODE\\nexport DAEMON_MODE=$MODE echo \\"Starting provisioning platform in $MODE mode...\\" # Core services first\\necho \\"Starting infrastructure...\\"\\ncargo run --release -p vault-service &\\nVAULT_PID=$! echo \\"Starting extension registry...\\"\\ncargo run --release -p extension-registry &\\nREGISTRY_PID=$! # AI layer\\necho \\"Starting AI services...\\"\\ncargo run --release -p provisioning-rag &\\nRAG_PID=$! cargo run --release -p ai-service &\\nAI_PID=$! # Orchestration\\necho \\"Starting orchestration...\\"\\ncargo run --release -p orchestrator &\\nORCH_PID=$! echo \\"All services started. 
PIDs: $VAULT_PID $REGISTRY_PID $RAG_PID $AI_PID $ORCH_PID\\"","breadcrumbs":"Platform Deployment Guide » Starting Services","id":"1099","title":"Starting Services"},"11":{"body":"Document Description Quickstart Cheatsheet Command shortcuts OCI Quick Reference OCI operations","breadcrumbs":"Home » 📦 Quick References","id":"11","title":"📦 Quick References"},"110":{"body":"# Check system status\\nprovisioning status # Get next step suggestions\\nprovisioning next # View interactive guide\\nprovisioning guide from-scratch For the complete step-by-step walkthrough, start with Prerequisites.","breadcrumbs":"Quick Start (Full) » Quick Commands","id":"110","title":"Quick Commands"},"1100":{"body":"# Stop all services gracefully\\npkill -SIGTERM -f \\"cargo run --release -p\\" # Wait for graceful shutdown\\nsleep 5 # Force kill if needed\\npkill -9 -f \\"cargo run --release -p\\" # Verify all stopped\\npgrep -f \\"cargo run --release -p\\" && echo \\"Services still running\\" || echo \\"All stopped\\"","breadcrumbs":"Platform Deployment Guide » Stopping Services","id":"1100","title":"Stopping Services"},"1101":{"body":"# Restart single service\\npkill -SIGTERM vault-service\\nsleep 2\\ncargo run --release -p vault-service & # Restart all services\\n./scripts/restart-all.sh $MODE # Restart with config reload\\nexport VAULT_MODE=multiuser\\npkill -SIGTERM vault-service\\nsleep 2\\ncargo run --release -p vault-service &","breadcrumbs":"Platform Deployment Guide » Restarting Services","id":"1101","title":"Restarting Services"},"1102":{"body":"# Check running processes\\npgrep -a \\"cargo run --release\\" # Check listening ports\\nnetstat -tlnp | grep -E \\"8200|8081|8083|8082|9090|8080\\" # Or using ss (modern alternative)\\nss -tlnp | grep -E \\"8200|8081|8083|8082|9090|8080\\" # Health endpoint checks\\nfor service in vault registry rag ai orchestrator; do echo \\"=== $service ===\\" curl -s http://localhost:${port[$service]}/health | jq .\\ndone","breadcrumbs":"Platform 
Deployment Guide » Checking Service Status","id":"1102","title":"Checking Service Status"},"1103":{"body":"","breadcrumbs":"Platform Deployment Guide » Health Checks & Monitoring","id":"1103","title":"Health Checks & Monitoring"},"1104":{"body":"# Vault Service\\ncurl -s http://localhost:8200/health | jq .\\n# Expected: {\\"status\\":\\"ok\\",\\"uptime\\":123.45} # Extension Registry\\ncurl -s http://localhost:8081/health | jq . # RAG System\\ncurl -s http://localhost:8083/health | jq .\\n# Expected: {\\"status\\":\\"ok\\",\\"embeddings\\":\\"ready\\",\\"vector_db\\":\\"connected\\"} # AI Service\\ncurl -s http://localhost:8082/health | jq . # Orchestrator\\ncurl -s http://localhost:9090/health | jq . # Control Center\\ncurl -s http://localhost:8080/health | jq .","breadcrumbs":"Platform Deployment Guide » Manual Health Verification","id":"1104","title":"Manual Health Verification"},"1105":{"body":"# Test vault <-> registry integration\\ncurl -X POST http://localhost:8200/api/encrypt \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{\\"plaintext\\":\\"secret\\"}\' | jq . # Test RAG system\\ncurl -X POST http://localhost:8083/api/ingest \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{\\"document\\":\\"test.md\\",\\"content\\":\\"# Test\\"}\' | jq . # Test orchestrator\\ncurl -X GET http://localhost:9090/api/v1/status | jq . # End-to-end workflow\\ncurl -X POST http://localhost:9090/api/v1/provision \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{ \\"workspace\\": \\"test\\", \\"services\\": [\\"vault\\", \\"registry\\"], \\"mode\\": \\"solo\\" }\' | jq .","breadcrumbs":"Platform Deployment Guide » Service Integration Tests","id":"1105","title":"Service Integration Tests"},"1106":{"body":"Prometheus Metrics # Query service uptime\\ncurl -s \'http://prometheus:9090/api/v1/query?query=up\' | jq . # Query request rate\\ncurl -s \'http://prometheus:9090/api/v1/query?query=rate(http_requests_total[5m])\' | jq . 
# Query error rate\\ncurl -s \'http://prometheus:9090/api/v1/query?query=rate(http_errors_total[5m])\' | jq . Log Aggregation # Follow vault logs\\ntail -f /var/log/provisioning/vault-service.log # Follow all service logs\\ntail -f /var/log/provisioning/*.log # Search for errors\\ngrep -r \\"ERROR\\" /var/log/provisioning/ # Follow with filtering\\ntail -f /var/log/provisioning/orchestrator.log | grep -E \\"ERROR|WARN\\"","breadcrumbs":"Platform Deployment Guide » Monitoring Dashboards","id":"1106","title":"Monitoring Dashboards"},"1107":{"body":"# AlertManager configuration\\ngroups: - name: provisioning rules: - alert: ServiceDown expr: up{job=~\\"vault|registry|rag|orchestrator\\"} == 0 for: 5m annotations: summary: \\"{{ $labels.job }} is down\\" - alert: HighErrorRate expr: rate(http_errors_total[5m]) > 0.05 annotations: summary: \\"High error rate detected\\" - alert: DiskSpaceWarning expr: node_filesystem_avail_bytes / node_filesystem_size_bytes < 0.2 annotations: summary: \\"Disk space below 20%\\"","breadcrumbs":"Platform Deployment Guide » Alerting","id":"1107","title":"Alerting"},"1108":{"body":"","breadcrumbs":"Platform Deployment Guide » Troubleshooting","id":"1108","title":"Troubleshooting"},"1109":{"body":"Problem : error: failed to bind to port 8200 Solutions : # Check if port is in use\\nlsof -i :8200\\nss -tlnp | grep 8200 # Kill existing process\\npkill -9 -f vault-service # Or use different port\\nexport VAULT_SERVER_PORT=8201\\ncargo run --release -p vault-service","breadcrumbs":"Platform Deployment Guide » Service Won\'t Start","id":"1109","title":"Service Won\'t Start"},"111":{"body":"Before installing the Provisioning Platform, ensure your system meets the following requirements.","breadcrumbs":"Prerequisites » Prerequisites","id":"111","title":"Prerequisites"},"1110":{"body":"Problem : error: failed to load config from mode file Solutions : # Verify schemas exist\\nls -la provisioning/schemas/platform/schemas/vault-service.ncl # Validate 
schema syntax\\nnickel typecheck provisioning/schemas/platform/schemas/vault-service.ncl # Check defaults are present\\nnickel typecheck provisioning/schemas/platform/defaults/vault-service-defaults.ncl # Verify deployment mode overlay exists\\nls -la provisioning/schemas/platform/defaults/deployment/$VAULT_MODE-defaults.ncl # Run service with explicit mode\\nexport VAULT_MODE=solo\\ncargo run --release -p vault-service","breadcrumbs":"Platform Deployment Guide » Configuration Loading Fails","id":"1110","title":"Configuration Loading Fails"},"1111":{"body":"Problem : error: failed to connect to database Solutions : # Verify database is running\\ncurl http://surrealdb:8000/health\\netcdctl --endpoints=http://etcd:2379 endpoint health # Check connectivity\\nnc -zv surrealdb 8000\\nnc -zv etcd 2379 # Update connection string\\nexport SURREALDB_URL=ws://surrealdb:8000\\nexport ETCD_ENDPOINTS=http://etcd:2379 # Restart service with new config\\npkill -9 vault-service\\ncargo run --release -p vault-service","breadcrumbs":"Platform Deployment Guide » Database Connection Issues","id":"1111","title":"Database Connection Issues"},"1112":{"body":"Problem : Service exits with code 1 or 139 Solutions : # Run with verbose logging\\nRUST_LOG=debug cargo run -p vault-service 2>&1 | head -50 # Check system resources\\nfree -h\\ndf -h # Check for core dumps\\ncoredumpctl list # Run under debugger (if crash suspected)\\nrust-gdb --args target/release/vault-service","breadcrumbs":"Platform Deployment Guide » Service Crashes on Startup","id":"1112","title":"Service Crashes on Startup"},"1113":{"body":"Problem : Service consuming > expected memory Solutions : # Check memory usage\\nps aux | grep vault-service | grep -v grep # Monitor over time\\nwatch -n 1 \'ps aux | grep vault-service | grep -v grep\' # Reduce worker count\\nexport VAULT_SERVER_WORKERS=2\\ncargo run --release -p vault-service # Check for memory leaks\\nvalgrind --leak-check=full 
target/release/vault-service","breadcrumbs":"Platform Deployment Guide » High Memory Usage","id":"1113","title":"High Memory Usage"},"1114":{"body":"Problem : error: failed to resolve hostname Solutions : # Test DNS resolution\\nnslookup vault.internal\\ndig vault.internal # Test connectivity to service\\ncurl -v http://vault.internal:8200/health # Add to /etc/hosts if needed\\necho \\"10.0.1.10 vault.internal\\" >> /etc/hosts # Check network interface\\nip addr show\\nnetstat -nr","breadcrumbs":"Platform Deployment Guide » Network/DNS Issues","id":"1114","title":"Network/DNS Issues"},"1115":{"body":"Problem : Data lost after restart Solutions : # Verify backup exists\\nls -la /mnt/provisioning-backups/\\nls -la /var/lib/provisioning/ # Check disk space\\ndf -h /var/lib/provisioning # Verify file permissions\\nls -l /var/lib/provisioning/vault/\\nchmod 755 /var/lib/provisioning/vault/* # Restore from backup\\n./scripts/restore-backup.sh /mnt/provisioning-backups/vault-20260105.sql","breadcrumbs":"Platform Deployment Guide » Data Persistence Issues","id":"1115","title":"Data Persistence Issues"},"1116":{"body":"When troubleshooting, use this systematic approach: # 1. Check service is running\\npgrep -f vault-service || echo \\"Service not running\\" # 2. Check port is listening\\nss -tlnp | grep 8200 || echo \\"Port not listening\\" # 3. Check logs for errors\\ntail -20 /var/log/provisioning/vault-service.log | grep -i error # 4. Test HTTP endpoint\\ncurl -i http://localhost:8200/health # 5. Check dependencies\\ncurl http://surrealdb:8000/health\\netcdctl --endpoints=http://etcd:2379 endpoint health # 6. Check schema definition\\nnickel typecheck provisioning/schemas/platform/schemas/vault-service.ncl # 7. Verify environment variables\\nenv | grep -E \\"VAULT_|SURREALDB_|ETCD_\\" # 8. 
Check system resources\\nfree -h && df -h && top -bn1 | head -10","breadcrumbs":"Platform Deployment Guide » Debugging Checklist","id":"1116","title":"Debugging Checklist"},"1117":{"body":"","breadcrumbs":"Platform Deployment Guide » Configuration Updates","id":"1117","title":"Configuration Updates"},"1118":{"body":"# 1. Edit the schema definition\\nvim provisioning/schemas/platform/schemas/vault-service.ncl # 2. Update defaults if needed\\nvim provisioning/schemas/platform/defaults/vault-service-defaults.ncl # 3. Validate syntax\\nnickel typecheck provisioning/schemas/platform/schemas/vault-service.ncl # 4. Re-export configuration from schemas\\n./provisioning/.typedialog/platform/scripts/generate-configs.nu vault-service multiuser # 5. Restart affected service (no downtime for clients)\\npkill -SIGTERM vault-service\\nsleep 2\\ncargo run --release -p vault-service & # 4. Verify configuration loaded\\ncurl http://localhost:8200/api/config | jq .","breadcrumbs":"Platform Deployment Guide » Updating Service Configuration","id":"1118","title":"Updating Service Configuration"},"1119":{"body":"# Migrate from solo to multiuser: # 1. Stop services\\npkill -SIGTERM -f \\"cargo run\\"\\nsleep 5 # 2. Backup current data\\ntar -czf /backup/provisioning-solo-$(date +%s).tar.gz /var/lib/provisioning/ # 3. Set new mode\\nexport VAULT_MODE=multiuser\\nexport REGISTRY_MODE=multiuser\\nexport RAG_MODE=multiuser # 4. Start services with new config\\ncargo run --release -p vault-service &\\ncargo run --release -p extension-registry & # 5. 
Verify new mode\\ncurl http://localhost:8200/api/config | jq .deployment_mode","breadcrumbs":"Platform Deployment Guide » Mode Migration","id":"1119","title":"Mode Migration"},"112":{"body":"","breadcrumbs":"Prerequisites » Hardware Requirements","id":"112","title":"Hardware Requirements"},"1120":{"body":"Before deploying to production: All services compiled in release mode (--release) TLS certificates installed and valid Database cluster deployed and healthy Load balancer configured and routing traffic Monitoring and alerting configured Backup system tested and working High availability verified (failover tested) Security hardening applied (firewall rules, etc.) Documentation updated for your environment Team trained on deployment procedures Runbooks created for common operations Disaster recovery plan tested","breadcrumbs":"Platform Deployment Guide » Production Checklist","id":"1120","title":"Production Checklist"},"1121":{"body":"","breadcrumbs":"Platform Deployment Guide » Getting Help","id":"1121","title":"Getting Help"},"1122":{"body":"GitHub Issues : Report bugs at github.com/your-org/provisioning/issues Documentation : Full docs at provisioning/docs/ Slack Channel : #provisioning-platform","breadcrumbs":"Platform Deployment Guide » Community Resources","id":"1122","title":"Community Resources"},"1123":{"body":"Platform Team : platform@your-org.com On-Call : Check PagerDuty for active rotation Escalation : Contact infrastructure leadership","breadcrumbs":"Platform Deployment Guide » Internal Support","id":"1123","title":"Internal Support"},"1124":{"body":"# View all available commands\\ncargo run -- --help # View service schemas\\nls -la provisioning/schemas/platform/schemas/\\nls -la provisioning/schemas/platform/defaults/ # List running services\\nps aux | grep cargo # Monitor service logs in real-time\\njournalctl -fu provisioning-vault # Generate diagnostics bundle\\n./scripts/generate-diagnostics.sh > /tmp/diagnostics-$(date 
+%s).tar.gz","breadcrumbs":"Platform Deployment Guide » Useful Commands Reference","id":"1124","title":"Useful Commands Reference"},"1125":{"body":"Version : 1.0.0 Last Updated : 2025-10-06","breadcrumbs":"Service Management Guide » Service Management Guide","id":"1125","title":"Service Management Guide"},"1126":{"body":"Overview Service Architecture Service Registry Platform Commands Service Commands Deployment Modes Health Monitoring Dependency Management Pre-flight Checks Troubleshooting","breadcrumbs":"Service Management Guide » Table of Contents","id":"1126","title":"Table of Contents"},"1127":{"body":"The Service Management System provides comprehensive lifecycle management for all platform services (orchestrator, control-center, CoreDNS, Gitea, OCI registry, MCP server, API gateway).","breadcrumbs":"Service Management Guide » Overview","id":"1127","title":"Overview"},"1128":{"body":"Unified Service Management : Single interface for all services Automatic Dependency Resolution : Start services in correct order Health Monitoring : Continuous health checks with automatic recovery Multiple Deployment Modes : Binary, Docker, Docker Compose, Kubernetes, Remote Pre-flight Checks : Validate prerequisites before operations Service Registry : Centralized service configuration","breadcrumbs":"Service Management Guide » Key Features","id":"1128","title":"Key Features"},"1129":{"body":"Service Type Category Description orchestrator Platform Orchestration Rust-based workflow coordinator control-center Platform UI Web-based management interface coredns Infrastructure DNS Local DNS resolution gitea Infrastructure Git Self-hosted Git service oci-registry Infrastructure Registry OCI-compliant container registry mcp-server Platform API Model Context Protocol server api-gateway Platform API Unified REST API gateway","breadcrumbs":"Service Management Guide » Supported Services","id":"1129","title":"Supported Services"},"113":{"body":"CPU : 2 cores RAM : 4GB Disk : 20GB available 
space Network : Internet connection for downloading dependencies","breadcrumbs":"Prerequisites » Minimum Requirements (Solo Mode)","id":"113","title":"Minimum Requirements (Solo Mode)"},"1130":{"body":"","breadcrumbs":"Service Management Guide » Service Architecture","id":"1130","title":"Service Architecture"},"1131":{"body":"┌─────────────────────────────────────────┐\\n│ Service Management CLI │\\n│ (platform/services commands) │\\n└─────────────────┬───────────────────────┘ │ ┌──────────┴──────────┐ │ │ ▼ ▼\\n┌──────────────┐ ┌───────────────┐\\n│ Manager │ │ Lifecycle │\\n│ (Core) │ │ (Start/Stop)│\\n└──────┬───────┘ └───────┬───────┘ │ │ ▼ ▼\\n┌──────────────┐ ┌───────────────┐\\n│ Health │ │ Dependencies │\\n│ (Checks) │ │ (Resolution) │\\n└──────────────┘ └───────────────┘ │ │ └────────┬───────────┘ │ ▼ ┌────────────────┐ │ Pre-flight │ │ (Validation) │ └────────────────┘\\n```plaintext ### Component Responsibilities **Manager** (`manager.nu`) - Service registry loading\\n- Service status tracking\\n- State persistence **Lifecycle** (`lifecycle.nu`) - Service start/stop operations\\n- Deployment mode handling\\n- Process management **Health** (`health.nu`) - Health check execution\\n- HTTP/TCP/Command/File checks\\n- Continuous monitoring **Dependencies** (`dependencies.nu`) - Dependency graph analysis\\n- Topological sorting\\n- Startup order calculation **Pre-flight** (`preflight.nu`) - Prerequisite validation\\n- Conflict detection\\n- Auto-start orchestration --- ## Service Registry ### Configuration File **Location**: `provisioning/config/services.toml` ### Service Definition Structure ```toml\\n[services.]\\nname = \\"\\"\\ntype = \\"platform\\" | \\"infrastructure\\" | \\"utility\\"\\ncategory = \\"orchestration\\" | \\"auth\\" | \\"dns\\" | \\"git\\" | \\"registry\\" | \\"api\\" | \\"ui\\"\\ndescription = \\"Service description\\"\\nrequired_for = [\\"operation1\\", \\"operation2\\"]\\ndependencies = [\\"dependency1\\", \\"dependency2\\"]\\nconflicts 
= [\\"conflicting-service\\"] [services..deployment]\\nmode = \\"binary\\" | \\"docker\\" | \\"docker-compose\\" | \\"kubernetes\\" | \\"remote\\" # Mode-specific configuration\\n[services..deployment.binary]\\nbinary_path = \\"/path/to/binary\\"\\nargs = [\\"--arg1\\", \\"value1\\"]\\nworking_dir = \\"/working/directory\\"\\nenv = { KEY = \\"value\\" } [services..health_check]\\ntype = \\"http\\" | \\"tcp\\" | \\"command\\" | \\"file\\" | \\"none\\"\\ninterval = 10\\nretries = 3\\ntimeout = 5 [services..health_check.http]\\nendpoint = \\"http://localhost:9090/health\\"\\nexpected_status = 200\\nmethod = \\"GET\\" [services..startup]\\nauto_start = true\\nstart_timeout = 30\\nstart_order = 10\\nrestart_on_failure = true\\nmax_restarts = 3\\n```plaintext ### Example: Orchestrator Service ```toml\\n[services.orchestrator]\\nname = \\"orchestrator\\"\\ntype = \\"platform\\"\\ncategory = \\"orchestration\\"\\ndescription = \\"Rust-based orchestrator for workflow coordination\\"\\nrequired_for = [\\"server\\", \\"taskserv\\", \\"cluster\\", \\"workflow\\", \\"batch\\"] [services.orchestrator.deployment]\\nmode = \\"binary\\" [services.orchestrator.deployment.binary]\\nbinary_path = \\"${HOME}/.provisioning/bin/provisioning-orchestrator\\"\\nargs = [\\"--port\\", \\"8080\\", \\"--data-dir\\", \\"${HOME}/.provisioning/orchestrator/data\\"] [services.orchestrator.health_check]\\ntype = \\"http\\" [services.orchestrator.health_check.http]\\nendpoint = \\"http://localhost:9090/health\\"\\nexpected_status = 200 [services.orchestrator.startup]\\nauto_start = true\\nstart_timeout = 30\\nstart_order = 10\\n```plaintext --- ## Platform Commands Platform commands manage all services as a cohesive system. 
### Start Platform Start all auto-start services or specific services: ```bash\\n# Start all auto-start services\\nprovisioning platform start # Start specific services (with dependencies)\\nprovisioning platform start orchestrator control-center # Force restart if already running\\nprovisioning platform start --force orchestrator\\n```plaintext **Behavior**: 1. Resolves dependencies\\n2. Calculates startup order (topological sort)\\n3. Starts services in correct order\\n4. Waits for health checks\\n5. Reports success/failure ### Stop Platform Stop all running services or specific services: ```bash\\n# Stop all running services\\nprovisioning platform stop # Stop specific services\\nprovisioning platform stop orchestrator control-center # Force stop (kill -9)\\nprovisioning platform stop --force orchestrator\\n```plaintext **Behavior**: 1. Checks for dependent services\\n2. Stops in reverse dependency order\\n3. Updates service state\\n4. Cleans up PID files ### Restart Platform Restart running services: ```bash\\n# Restart all running services\\nprovisioning platform restart # Restart specific services\\nprovisioning platform restart orchestrator\\n```plaintext ### Platform Status Show status of all services: ```bash\\nprovisioning platform status\\n```plaintext **Output**: ```plaintext\\nPlatform Services Status Running: 3/7 === ORCHESTRATION === 🟢 orchestrator - running (uptime: 3600s) ✅ === UI === 🟢 control-center - running (uptime: 3550s) ✅ === DNS === ⚪ coredns - stopped ❓ === GIT === ⚪ gitea - stopped ❓ === REGISTRY === ⚪ oci-registry - stopped ❓ === API === 🟢 mcp-server - running (uptime: 3540s) ✅ ⚪ api-gateway - stopped ❓\\n```plaintext ### Platform Health Check health of all running services: ```bash\\nprovisioning platform health\\n```plaintext **Output**: ```plaintext\\nPlatform Health Check ✅ orchestrator: Healthy - HTTP health check passed\\n✅ control-center: Healthy - HTTP status 200 matches expected\\n⚪ coredns: Not running\\n✅ mcp-server: Healthy - 
HTTP health check passed Summary: 3 healthy, 0 unhealthy, 4 not running\\n```plaintext ### Platform Logs View service logs: ```bash\\n# View last 50 lines\\nprovisioning platform logs orchestrator # View last 100 lines\\nprovisioning platform logs orchestrator --lines 100 # Follow logs in real-time\\nprovisioning platform logs orchestrator --follow\\n```plaintext --- ## Service Commands Individual service management commands. ### List Services ```bash\\n# List all services\\nprovisioning services list # List only running services\\nprovisioning services list --running # Filter by category\\nprovisioning services list --category orchestration\\n```plaintext **Output**: ```plaintext\\nname type category status deployment_mode auto_start\\norchestrator platform orchestration running binary true\\ncontrol-center platform ui stopped binary false\\ncoredns infrastructure dns stopped docker false\\n```plaintext ### Service Status Get detailed status of a service: ```bash\\nprovisioning services status orchestrator\\n```plaintext **Output**: ```plaintext\\nService: orchestrator\\nType: platform\\nCategory: orchestration\\nStatus: running\\nDeployment: binary\\nHealth: healthy\\nAuto-start: true\\nPID: 12345\\nUptime: 3600s\\nDependencies: []\\n```plaintext ### Start Service ```bash\\n# Start service (with pre-flight checks)\\nprovisioning services start orchestrator # Force start (skip checks)\\nprovisioning services start orchestrator --force\\n```plaintext **Pre-flight Checks**: 1. Validate prerequisites (binary exists, Docker running, etc.)\\n2. Check for conflicts\\n3. Verify dependencies are running\\n4. 
Auto-start dependencies if needed ### Stop Service ```bash\\n# Stop service (with dependency check)\\nprovisioning services stop orchestrator # Force stop (ignore dependents)\\nprovisioning services stop orchestrator --force\\n```plaintext ### Restart Service ```bash\\nprovisioning services restart orchestrator\\n```plaintext ### Service Health Check service health: ```bash\\nprovisioning services health orchestrator\\n```plaintext **Output**: ```plaintext\\nService: orchestrator\\nStatus: healthy\\nHealthy: true\\nMessage: HTTP health check passed\\nCheck type: http\\nCheck duration: 15ms\\n```plaintext ### Service Logs ```bash\\n# View logs\\nprovisioning services logs orchestrator # Follow logs\\nprovisioning services logs orchestrator --follow # Custom line count\\nprovisioning services logs orchestrator --lines 200\\n```plaintext ### Check Required Services Check which services are required for an operation: ```bash\\nprovisioning services check server\\n```plaintext **Output**: ```plaintext\\nOperation: server\\nRequired services: orchestrator\\nAll running: true\\n```plaintext ### Service Dependencies View dependency graph: ```bash\\n# View all dependencies\\nprovisioning services dependencies # View specific service dependencies\\nprovisioning services dependencies control-center\\n```plaintext ### Validate Services Validate all service configurations: ```bash\\nprovisioning services validate\\n```plaintext **Output**: ```plaintext\\nTotal services: 7\\nValid: 6\\nInvalid: 1 Invalid services: ❌ coredns: - Docker is not installed or not running\\n```plaintext ### Readiness Report Get platform readiness report: ```bash\\nprovisioning services readiness\\n```plaintext **Output**: ```plaintext\\nPlatform Readiness Report Total services: 7\\nRunning: 3\\nReady to start: 6 Services: 🟢 orchestrator - platform - orchestration 🟢 control-center - platform - ui 🔴 coredns - infrastructure - dns Issues: 1 🟡 gitea - infrastructure - git\\n```plaintext ### Monitor Service 
Continuous health monitoring: ```bash\\n# Monitor with default interval (30s)\\nprovisioning services monitor orchestrator # Custom interval\\nprovisioning services monitor orchestrator --interval 10\\n```plaintext --- ## Deployment Modes ### Binary Deployment Run services as native binaries. **Configuration**: ```toml\\n[services.orchestrator.deployment]\\nmode = \\"binary\\" [services.orchestrator.deployment.binary]\\nbinary_path = \\"${HOME}/.provisioning/bin/provisioning-orchestrator\\"\\nargs = [\\"--port\\", \\"8080\\"]\\nworking_dir = \\"${HOME}/.provisioning/orchestrator\\"\\nenv = { RUST_LOG = \\"info\\" }\\n```plaintext **Process Management**: - PID tracking in `~/.provisioning/services/pids/`\\n- Log output to `~/.provisioning/services/logs/`\\n- State tracking in `~/.provisioning/services/state/` ### Docker Deployment Run services as Docker containers. **Configuration**: ```toml\\n[services.coredns.deployment]\\nmode = \\"docker\\" [services.coredns.deployment.docker]\\nimage = \\"coredns/coredns:1.11.1\\"\\ncontainer_name = \\"provisioning-coredns\\"\\nports = [\\"5353:53/udp\\"]\\nvolumes = [\\"${HOME}/.provisioning/coredns/Corefile:/Corefile:ro\\"]\\nrestart_policy = \\"unless-stopped\\"\\n```plaintext **Prerequisites**: - Docker daemon running\\n- Docker CLI installed ### Docker Compose Deployment Run services via Docker Compose. **Configuration**: ```toml\\n[services.platform.deployment]\\nmode = \\"docker-compose\\" [services.platform.deployment.docker_compose]\\ncompose_file = \\"${HOME}/.provisioning/platform/docker-compose.yaml\\"\\nservice_name = \\"orchestrator\\"\\nproject_name = \\"provisioning\\"\\n```plaintext **File**: `provisioning/platform/docker-compose.yaml` ### Kubernetes Deployment Run services on Kubernetes. 
**Configuration**: ```toml\\n[services.orchestrator.deployment]\\nmode = \\"kubernetes\\" [services.orchestrator.deployment.kubernetes]\\nnamespace = \\"provisioning\\"\\ndeployment_name = \\"orchestrator\\"\\nmanifests_path = \\"${HOME}/.provisioning/k8s/orchestrator/\\"\\n```plaintext **Prerequisites**: - kubectl installed and configured\\n- Kubernetes cluster accessible ### Remote Deployment Connect to remotely-running services. **Configuration**: ```toml\\n[services.orchestrator.deployment]\\nmode = \\"remote\\" [services.orchestrator.deployment.remote]\\nendpoint = \\"https://orchestrator.example.com\\"\\ntls_enabled = true\\nauth_token_path = \\"${HOME}/.provisioning/tokens/orchestrator.token\\"\\n```plaintext --- ## Health Monitoring ### Health Check Types #### HTTP Health Check ```toml\\n[services.orchestrator.health_check]\\ntype = \\"http\\" [services.orchestrator.health_check.http]\\nendpoint = \\"http://localhost:9090/health\\"\\nexpected_status = 200\\nmethod = \\"GET\\"\\n```plaintext #### TCP Health Check ```toml\\n[services.coredns.health_check]\\ntype = \\"tcp\\" [services.coredns.health_check.tcp]\\nhost = \\"localhost\\"\\nport = 5353\\n```plaintext #### Command Health Check ```toml\\n[services.custom.health_check]\\ntype = \\"command\\" [services.custom.health_check.command]\\ncommand = \\"systemctl is-active myservice\\"\\nexpected_exit_code = 0\\n```plaintext #### File Health Check ```toml\\n[services.custom.health_check]\\ntype = \\"file\\" [services.custom.health_check.file]\\npath = \\"/var/run/myservice.pid\\"\\nmust_exist = true\\n```plaintext ### Health Check Configuration - `interval`: Seconds between checks (default: 10)\\n- `retries`: Max retry attempts (default: 3)\\n- `timeout`: Check timeout in seconds (default: 5) ### Continuous Monitoring ```bash\\nprovisioning services monitor orchestrator --interval 30\\n```plaintext **Output**: ```plaintext\\nStarting health monitoring for orchestrator (interval: 30s)\\nPress Ctrl+C to 
stop\\n2025-10-06 14:30:00 ✅ orchestrator: HTTP health check passed\\n2025-10-06 14:30:30 ✅ orchestrator: HTTP health check passed\\n2025-10-06 14:31:00 ✅ orchestrator: HTTP health check passed\\n```plaintext --- ## Dependency Management ### Dependency Graph Services can depend on other services: ```toml\\n[services.control-center]\\ndependencies = [\\"orchestrator\\"] [services.api-gateway]\\ndependencies = [\\"orchestrator\\", \\"control-center\\", \\"mcp-server\\"]\\n```plaintext ### Startup Order Services start in topological order: ```plaintext\\norchestrator (order: 10) └─> control-center (order: 20) └─> api-gateway (order: 45)\\n```plaintext ### Dependency Resolution Automatic dependency resolution when starting services: ```bash\\n# Starting control-center automatically starts orchestrator first\\nprovisioning services start control-center\\n```plaintext **Output**: ```plaintext\\nStarting dependency: orchestrator\\n✅ Started orchestrator with PID 12345\\nWaiting for orchestrator to become healthy...\\n✅ Service orchestrator is healthy\\nStarting service: control-center\\n✅ Started control-center with PID 12346\\n✅ Service control-center is healthy\\n```plaintext ### Conflicts Services can conflict with each other: ```toml\\n[services.coredns]\\nconflicts = [\\"dnsmasq\\", \\"systemd-resolved\\"]\\n```plaintext Attempting to start a conflicting service will fail: ```bash\\nprovisioning services start coredns\\n```plaintext **Output**: ```plaintext\\n❌ Pre-flight check failed: conflicts\\nConflicting services running: dnsmasq\\n```plaintext ### Reverse Dependencies Check which services depend on a service: ```bash\\nprovisioning services dependencies orchestrator\\n```plaintext **Output**: ```plaintext\\n## orchestrator\\n- Type: platform\\n- Category: orchestration\\n- Required by: - control-center - mcp-server - api-gateway\\n```plaintext ### Safe Stop System prevents stopping services with running dependents: ```bash\\nprovisioning services stop 
orchestrator\\n```plaintext **Output**: ```plaintext\\n❌ Cannot stop orchestrator: Dependent services running: control-center, mcp-server, api-gateway Use --force to stop anyway\\n```plaintext --- ## Pre-flight Checks ### Purpose Pre-flight checks ensure services can start successfully before attempting to start them. ### Check Types 1. **Prerequisites**: Binary exists, Docker running, etc.\\n2. **Conflicts**: No conflicting services running\\n3. **Dependencies**: All dependencies available ### Automatic Checks Pre-flight checks run automatically when starting services: ```bash\\nprovisioning services start orchestrator\\n```plaintext **Check Process**: ```plaintext\\nRunning pre-flight checks for orchestrator...\\n✅ Binary found: /Users/user/.provisioning/bin/provisioning-orchestrator\\n✅ No conflicts detected\\n✅ All dependencies available\\nStarting service: orchestrator\\n```plaintext ### Manual Validation Validate all services: ```bash\\nprovisioning services validate\\n```plaintext Validate specific service: ```bash\\nprovisioning services status orchestrator\\n```plaintext ### Auto-Start Services with `auto_start = true` can be started automatically when needed: ```bash\\n# Orchestrator auto-starts if needed for server operations\\nprovisioning server create\\n```plaintext **Output**: ```plaintext\\nStarting required services...\\n✅ Orchestrator started\\nCreating server...\\n```plaintext --- ## Troubleshooting ### Service Won\'t Start **Check prerequisites**: ```bash\\nprovisioning services validate\\nprovisioning services status \\n```plaintext **Common issues**: - Binary not found: Check `binary_path` in config\\n- Docker not running: Start Docker daemon\\n- Port already in use: Check for conflicting processes\\n- Dependencies not running: Start dependencies first ### Service Health Check Failing **View health status**: ```bash\\nprovisioning services health \\n```plaintext **Check logs**: ```bash\\nprovisioning services logs --follow\\n```plaintext 
Common issues: - Service not fully initialized: Wait longer or increase `start_timeout`\\n- Wrong health check endpoint: Verify endpoint in config\\n- Network issues: Check firewall, port bindings Dependency Issues View dependency tree: provisioning services dependencies Check dependency status: provisioning services status Start with dependencies: provisioning platform start Circular Dependencies Validate dependency graph: # This is done automatically but you can check manually\\nnu -c \\"use lib_provisioning/services/mod.nu *; validate-dependency-graph\\" PID File Stale If service reports running but isn\'t: # Manual cleanup\\nrm ~/.provisioning/services/pids/.pid # Force restart\\nprovisioning services restart Port Conflicts Find process using port: lsof -i :9090 Kill conflicting process: kill Docker Issues Check Docker status: docker ps\\ndocker info View container logs: docker logs provisioning- Restart Docker daemon: # macOS\\nkillall Docker && open /Applications/Docker.app # Linux\\nsystemctl restart docker Service Logs View recent logs: tail -f ~/.provisioning/services/logs/.log Search logs: grep \\"ERROR\\" ~/.provisioning/services/logs/.log Advanced Usage Custom Service Registration Add custom services by editing `provisioning/config/services.toml`. 
Integration with Workflows Services automatically start when required by workflows: # Orchestrator starts automatically if not running\\nprovisioning workflow submit my-workflow CI/CD Integration # GitLab CI\\nbefore_script: - provisioning platform start orchestrator - provisioning services health orchestrator test: script: - provisioning test quick kubernetes Monitoring Integration Services can integrate with monitoring systems via health endpoints. Related Documentation - Orchestrator README\\n- [Test Environment Guide](test-environment-guide.md)\\n- [Workflow Management](workflow-management.md) Quick Reference Version: 1.0.0 Platform Commands (Manage All Services) # Start all auto-start services\\nprovisioning platform start # Start specific services with dependencies\\nprovisioning platform start control-center mcp-server # Stop all running services\\nprovisioning platform stop # Stop specific services\\nprovisioning platform stop orchestrator # Restart services\\nprovisioning platform restart # Show platform status\\nprovisioning platform status # Check platform health\\nprovisioning platform health # View service logs\\nprovisioning platform logs orchestrator --follow Service Commands (Individual Services) # List all services\\nprovisioning services list # List only running services\\nprovisioning services list --running # Filter by category\\nprovisioning services list --category orchestration # Service status\\nprovisioning services status orchestrator # Start service (with pre-flight checks)\\nprovisioning services start orchestrator # Force start (skip checks)\\nprovisioning services start orchestrator --force # Stop service\\nprovisioning services stop orchestrator # Force stop (ignore dependents)\\nprovisioning services stop orchestrator --force # Restart service\\nprovisioning services restart orchestrator # Check 
health\\nprovisioning services health orchestrator # View logs\\nprovisioning services logs orchestrator --follow --lines 100 # Monitor health continuously\\nprovisioning services monitor orchestrator --interval 30 Dependency & Validation # View dependency graph\\nprovisioning services dependencies # View specific service dependencies\\nprovisioning services dependencies control-center # Validate all services\\nprovisioning services validate # Check readiness\\nprovisioning services readiness # Check required services for operation\\nprovisioning services check server Registered Services | Service | Port | Type | Auto-Start | Dependencies |\\n|---------|------|------|------------|--------------|\\n| orchestrator | 8080 | Platform | Yes | - |\\n| control-center | 8081 | Platform | No | orchestrator |\\n| coredns | 5353 | Infrastructure | No | - |\\n| gitea | 3000, 222 | Infrastructure | No | - |\\n| oci-registry | 5000 | Infrastructure | No | - |\\n| mcp-server | 8082 | Platform | No | orchestrator |\\n| api-gateway | 8083 | Platform | No | orchestrator, control-center, mcp-server | Docker Compose # Start all services\\ncd provisioning/platform\\ndocker-compose up -d # Start specific services\\ndocker-compose up -d orchestrator control-center # Check status\\ndocker-compose ps # View logs\\ndocker-compose logs -f orchestrator # Stop all services\\ndocker-compose down # Stop and remove volumes\\ndocker-compose down -v Service State Directories ~/.provisioning/services/\\n├── pids/ # Process ID files\\n├── state/ # Service state (JSON)\\n└── logs/ # Service logs Health Check Endpoints | Service | Endpoint | Type |\\n|---------|----------|------|\\n| orchestrator | | HTTP |\\n| control-center | | HTTP |\\n| coredns | localhost:5353 | TCP |\\n| gitea | | HTTP |\\n| oci-registry | | HTTP |\\n| mcp-server | | HTTP |\\n| api-gateway | | 
HTTP | Common Workflows Start Platform for Development # Start core services\\nprovisioning platform start orchestrator # Check status\\nprovisioning platform status # Check health\\nprovisioning platform health Start Full Platform Stack # Use Docker Compose\\ncd provisioning/platform\\ndocker-compose up -d # Verify\\ndocker-compose ps\\nprovisioning platform health Debug Service Issues # Check service status\\nprovisioning services status # View logs\\nprovisioning services logs --follow # Check health\\nprovisioning services health # Validate prerequisites\\nprovisioning services validate # Restart service\\nprovisioning services restart Safe Service Shutdown # Check dependents\\nnu -c \\"use lib_provisioning/services/mod.nu *; can-stop-service orchestrator\\" # Stop with dependency check\\nprovisioning services stop orchestrator # Force stop if needed\\nprovisioning services stop orchestrator --force Troubleshooting Service Won\'t Start # 1. Check prerequisites\\nprovisioning services validate # 2. View detailed status\\nprovisioning services status # 3. Check logs\\nprovisioning services logs # 4. 
Verify binary/image exists\\nls ~/.provisioning/bin/\\ndocker images | grep Health Check Failing # Check endpoint manually\\ncurl http://localhost:9090/health # View health details\\nprovisioning services health # Monitor continuously\\nprovisioning services monitor --interval 10 PID File Stale # Remove stale PID file\\nrm ~/.provisioning/services/pids/.pid # Restart service\\nprovisioning services restart Port Already in Use # Find process using port\\nlsof -i :9090 # Kill process\\nkill # Restart service\\nprovisioning services start Integration with Operations Server Operations # Orchestrator auto-starts if needed\\nprovisioning server create # Manual check\\nprovisioning services check server Workflow Operations # Orchestrator auto-starts\\nprovisioning workflow submit my-workflow # Check status\\nprovisioning services status orchestrator Test Operations # Orchestrator required for test environments\\nprovisioning test quick kubernetes # Pre-flight check\\nprovisioning services check test-env Advanced Usage Custom Service Startup Order Services start based on: 1. Dependency order (topological sort)\\n2. 
`start_order` field (lower = earlier) Auto-Start Configuration Edit `provisioning/config/services.toml`: [services..startup]\\nauto_start = true # Enable auto-start\\nstart_timeout = 30 # Timeout in seconds\\nstart_order = 10 # Startup priority Health Check Configuration [services..health_check]\\ntype = \\"http\\" # http, tcp, command, file\\ninterval = 10 # Seconds between checks\\nretries = 3 # Max retry attempts\\ntimeout = 5 # Check timeout [services..health_check.http]\\nendpoint = \\"http://localhost:9090/health\\"\\nexpected_status = 200 Key Files - Service Registry: `provisioning/config/services.toml`\\n- KCL Schema: `provisioning/kcl/services.k`\\n- Docker Compose: `provisioning/platform/docker-compose.yaml`\\n- User Guide: `docs/user/SERVICE_MANAGEMENT_GUIDE.md` Getting Help # View documentation\\ncat docs/user/SERVICE_MANAGEMENT_GUIDE.md | less # Run verification\\nnu provisioning/core/nulib/tests/verify_services.nu # Check readiness\\nprovisioning services readiness Quick Tip: Use `--help` flag with any command for detailed usage information. 
Maintained By: Platform Team\\nSupport: [GitHub Issues](https://github.com/your-org/provisioning/issues)","breadcrumbs":"Service Management Guide » System Architecture","id":"1131","title":"System Architecture"},"1132":{"body":"Complete guide for monitoring the 9-service platform with Prometheus, Grafana, and AlertManager Version : 1.0.0 Last Updated : 2026-01-05 Target Audience : DevOps Engineers, Platform Operators Status : Production Ready","breadcrumbs":"Monitoring & Alerting Setup » Service Monitoring & Alerting Setup","id":"1132","title":"Service Monitoring & Alerting Setup"},"1133":{"body":"This guide provides complete setup instructions for monitoring and alerting on the provisioning platform using industry-standard tools: Prometheus : Metrics collection and time-series database Grafana : Visualization and dashboarding AlertManager : Alert routing and notification","breadcrumbs":"Monitoring & Alerting Setup » Overview","id":"1133","title":"Overview"},"1134":{"body":"Services (metrics endpoints) ↓\\nPrometheus (scrapes every 30s) ↓\\nAlertManager (evaluates rules) ↓\\nNotification Channels (email, slack, pagerduty) Prometheus Data ↓\\nGrafana (queries) ↓\\nDashboards & Visualization","breadcrumbs":"Monitoring & Alerting Setup » Architecture","id":"1134","title":"Architecture"},"1135":{"body":"","breadcrumbs":"Monitoring & Alerting Setup » Prerequisites","id":"1135","title":"Prerequisites"},"1136":{"body":"# Prometheus (for metrics)\\nwget https://github.com/prometheus/prometheus/releases/download/v2.48.0/prometheus-2.48.0.linux-amd64.tar.gz\\ntar xvfz prometheus-2.48.0.linux-amd64.tar.gz\\nsudo mv prometheus-2.48.0.linux-amd64 /opt/prometheus # Grafana (for dashboards)\\nsudo apt-get install -y grafana-server # AlertManager (for alerting)\\nwget https://github.com/prometheus/alertmanager/releases/download/v0.26.0/alertmanager-0.26.0.linux-amd64.tar.gz\\ntar xvfz alertmanager-0.26.0.linux-amd64.tar.gz\\nsudo mv alertmanager-0.26.0.linux-amd64 
/opt/alertmanager","breadcrumbs":"Monitoring & Alerting Setup » Software Requirements","id":"1136","title":"Software Requirements"},"1137":{"body":"CPU : 2+ cores Memory : 4 GB minimum, 8 GB recommended Disk : 100 GB for metrics retention (30 days) Network : Access to all service endpoints","breadcrumbs":"Monitoring & Alerting Setup » System Requirements","id":"1137","title":"System Requirements"},"1138":{"body":"Component Port Purpose Prometheus 9090 Web UI & API Grafana 3000 Web UI AlertManager 9093 Web UI & API Node Exporter 9100 System metrics","breadcrumbs":"Monitoring & Alerting Setup » Ports","id":"1138","title":"Ports"},"1139":{"body":"All platform services expose metrics on the /metrics endpoint: # Health and metrics endpoints for each service\\ncurl http://localhost:8200/health # Vault health\\ncurl http://localhost:8200/metrics # Vault metrics (Prometheus format) curl http://localhost:8081/health # Registry health\\ncurl http://localhost:8081/metrics # Registry metrics curl http://localhost:8083/health # RAG health\\ncurl http://localhost:8083/metrics # RAG metrics curl http://localhost:8082/health # AI Service health\\ncurl http://localhost:8082/metrics # AI Service metrics curl http://localhost:9090/health # Orchestrator health\\ncurl http://localhost:9090/metrics # Orchestrator metrics curl http://localhost:8080/health # Control Center health\\ncurl http://localhost:8080/metrics # Control Center metrics curl http://localhost:8084/health # MCP Server health\\ncurl http://localhost:8084/metrics # MCP Server metrics","breadcrumbs":"Monitoring & Alerting Setup » Service Metrics Endpoints","id":"1139","title":"Service Metrics Endpoints"},"114":{"body":"CPU : 4 cores RAM : 8GB Disk : 50GB available space Network : Reliable internet connection","breadcrumbs":"Prerequisites » Recommended Requirements (Multi-User Mode)","id":"114","title":"Recommended Requirements (Multi-User Mode)"},"1140":{"body":"","breadcrumbs":"Monitoring & Alerting Setup » Prometheus 
Configuration","id":"1140","title":"Prometheus Configuration"},"1141":{"body":"# /etc/prometheus/prometheus.yml\\nglobal: scrape_interval: 30s evaluation_interval: 30s external_labels: monitor: \'provisioning-platform\' environment: \'production\' alerting: alertmanagers: - static_configs: - targets: - localhost:9093 rule_files: - \'/etc/prometheus/rules/*.yml\' scrape_configs: # Core Platform Services - job_name: \'vault-service\' metrics_path: \'/metrics\' static_configs: - targets: [\'localhost:8200\'] relabel_configs: - source_labels: [__address__] target_label: instance replacement: \'vault-service\' - job_name: \'extension-registry\' metrics_path: \'/metrics\' static_configs: - targets: [\'localhost:8081\'] relabel_configs: - source_labels: [__address__] target_label: instance replacement: \'registry\' - job_name: \'rag-service\' metrics_path: \'/metrics\' static_configs: - targets: [\'localhost:8083\'] relabel_configs: - source_labels: [__address__] target_label: instance replacement: \'rag\' - job_name: \'ai-service\' metrics_path: \'/metrics\' static_configs: - targets: [\'localhost:8082\'] relabel_configs: - source_labels: [__address__] target_label: instance replacement: \'ai-service\' - job_name: \'orchestrator\' metrics_path: \'/metrics\' static_configs: - targets: [\'localhost:9090\'] relabel_configs: - source_labels: [__address__] target_label: instance replacement: \'orchestrator\' - job_name: \'control-center\' metrics_path: \'/metrics\' static_configs: - targets: [\'localhost:8080\'] relabel_configs: - source_labels: [__address__] target_label: instance replacement: \'control-center\' - job_name: \'mcp-server\' metrics_path: \'/metrics\' static_configs: - targets: [\'localhost:8084\'] relabel_configs: - source_labels: [__address__] target_label: instance replacement: \'mcp-server\' # System Metrics (Node Exporter) - job_name: \'node\' static_configs: - targets: [\'localhost:9100\'] labels: instance: \'system\' # SurrealDB (if multiuser/enterprise) 
- job_name: \'surrealdb\' metrics_path: \'/metrics\' static_configs: - targets: [\'surrealdb:8000\'] # Etcd (if enterprise) - job_name: \'etcd\' metrics_path: \'/metrics\' static_configs: - targets: [\'etcd:2379\']","breadcrumbs":"Monitoring & Alerting Setup » 1. Create Prometheus Config","id":"1141","title":"1. Create Prometheus Config"},"1142":{"body":"# Create necessary directories\\nsudo mkdir -p /etc/prometheus /var/lib/prometheus\\nsudo mkdir -p /etc/prometheus/rules # Start Prometheus\\ncd /opt/prometheus\\nsudo ./prometheus --config.file=/etc/prometheus/prometheus.yml \\\\ --storage.tsdb.path=/var/lib/prometheus \\\\ --web.console.templates=consoles \\\\ --web.console.libraries=console_libraries # Or as systemd service\\nsudo tee /etc/systemd/system/prometheus.service > /dev/null << EOF\\n[Unit]\\nDescription=Prometheus\\nWants=network-online.target\\nAfter=network-online.target [Service]\\nUser=prometheus\\nType=simple\\nExecStart=/opt/prometheus/prometheus \\\\ --config.file=/etc/prometheus/prometheus.yml \\\\ --storage.tsdb.path=/var/lib/prometheus Restart=on-failure\\nRestartSec=10 [Install]\\nWantedBy=multi-user.target\\nEOF sudo systemctl daemon-reload\\nsudo systemctl enable prometheus\\nsudo systemctl start prometheus","breadcrumbs":"Monitoring & Alerting Setup » 2. Start Prometheus","id":"1142","title":"2. Start Prometheus"},"1143":{"body":"# Check Prometheus is running\\ncurl -s http://localhost:9090/-/healthy # List scraped targets\\ncurl -s http://localhost:9090/api/v1/targets | jq . # Query test metric\\ncurl -s \'http://localhost:9090/api/v1/query?query=up\' | jq .","breadcrumbs":"Monitoring & Alerting Setup » 3. Verify Prometheus","id":"1143","title":"3. 
Verify Prometheus"},"1144":{"body":"","breadcrumbs":"Monitoring & Alerting Setup » Alert Rules Configuration","id":"1144","title":"Alert Rules Configuration"},"1145":{"body":"# /etc/prometheus/rules/platform-alerts.yml\\ngroups: - name: platform_availability interval: 30s rules: - alert: ServiceDown expr: up{job=~\\"vault-service|registry|rag|ai-service|orchestrator\\"} == 0 for: 5m labels: severity: critical service: \'{{ $labels.job }}\' annotations: summary: \\"{{ $labels.job }} is DOWN\\" description: \\"{{ $labels.job }} has been down for 5+ minutes\\" - alert: ServiceSlowResponse expr: histogram_quantile(0.95, rate(http_request_duration_seconds_bucket[5m])) > 1 for: 5m labels: severity: warning service: \'{{ $labels.job }}\' annotations: summary: \\"{{ $labels.job }} slow response times\\" description: \\"95th percentile latency above 1 second\\" - name: platform_errors interval: 30s rules: - alert: HighErrorRate expr: rate(http_requests_total{status=~\\"5..\\"}[5m]) > 0.05 for: 5m labels: severity: warning service: \'{{ $labels.job }}\' annotations: summary: \\"{{ $labels.job }} high error rate\\" description: \\"Error rate above 5% for 5 minutes\\" - alert: DatabaseConnectionError expr: increase(database_connection_errors_total[5m]) > 10 for: 2m labels: severity: critical component: database annotations: summary: \\"Database connection failures detected\\" description: \\"{{ $value }} connection errors in last 5 minutes\\" - alert: QueueBacklog expr: orchestrator_queue_depth > 1000 for: 5m labels: severity: warning component: orchestrator annotations: summary: \\"Orchestrator queue backlog growing\\" description: \\"Queue depth: {{ $value }} tasks\\" - name: platform_resources interval: 30s rules: - alert: HighMemoryUsage expr: container_memory_usage_bytes / container_spec_memory_limit_bytes > 0.9 for: 5m labels: severity: warning resource: memory annotations: summary: \\"{{ $labels.container_name }} memory usage critical\\" description: \\"Memory usage: {{ 
$value | humanizePercentage }}\\" - alert: HighDiskUsage expr: node_filesystem_avail_bytes{mountpoint=\\"/\\"} / node_filesystem_size_bytes < 0.1 for: 5m labels: severity: warning resource: disk annotations: summary: \\"Disk space critically low\\" description: \\"Available disk space: {{ $value | humanizePercentage }}\\" - alert: HighCPUUsage expr: (1 - avg(rate(node_cpu_seconds_total{mode=\\"idle\\"}[5m])) by (instance)) > 0.9 for: 10m labels: severity: warning resource: cpu annotations: summary: \\"High CPU usage detected\\" description: \\"CPU usage: {{ $value | humanizePercentage }}\\" - alert: DiskIOLatency expr: node_disk_io_time_seconds_total > 100 for: 5m labels: severity: warning resource: disk annotations: summary: \\"High disk I/O latency\\" description: \\"I/O latency: {{ $value }}ms\\" - name: platform_network interval: 30s rules: - alert: HighNetworkLatency expr: probe_duration_seconds > 0.5 for: 5m labels: severity: warning component: network annotations: summary: \\"High network latency detected\\" description: \\"Latency: {{ $value }}ms\\" - alert: PacketLoss expr: node_network_transmit_errors_total > 100 for: 5m labels: severity: warning component: network annotations: summary: \\"Packet loss detected\\" description: \\"Transmission errors: {{ $value }}\\" - name: platform_services interval: 30s rules: - alert: VaultSealed expr: vault_core_unsealed == 0 for: 1m labels: severity: critical service: vault annotations: summary: \\"Vault is sealed\\" description: \\"Vault instance is sealed and requires unseal operation\\" - alert: RegistryAuthError expr: increase(registry_auth_failures_total[5m]) > 5 for: 2m labels: severity: warning service: registry annotations: summary: \\"Registry authentication failures\\" description: \\"{{ $value }} auth failures in last 5 minutes\\" - alert: RAGVectorDBDown expr: rag_vectordb_connection_status == 0 for: 2m labels: severity: critical service: rag annotations: summary: \\"RAG Vector Database disconnected\\" 
description: \\"Vector DB connection lost\\" - alert: AIServiceMCPError expr: increase(ai_service_mcp_errors_total[5m]) > 10 for: 2m labels: severity: warning service: ai_service annotations: summary: \\"AI Service MCP integration errors\\" description: \\"{{ $value }} errors in last 5 minutes\\" - alert: OrchestratorLeaderElectionIssue expr: orchestrator_leader_elected == 0 for: 5m labels: severity: critical service: orchestrator annotations: summary: \\"Orchestrator leader election failed\\" description: \\"No leader elected in cluster\\"","breadcrumbs":"Monitoring & Alerting Setup » 1. Create Alert Rules","id":"1145","title":"1. Create Alert Rules"},"1146":{"body":"# Check rule syntax\\n/opt/prometheus/promtool check rules /etc/prometheus/rules/platform-alerts.yml # Reload Prometheus with new rules (without restart)\\ncurl -X POST http://localhost:9090/-/reload","breadcrumbs":"Monitoring & Alerting Setup » 2. Validate Alert Rules","id":"1146","title":"2. Validate Alert Rules"},"1147":{"body":"","breadcrumbs":"Monitoring & Alerting Setup » AlertManager Configuration","id":"1147","title":"AlertManager Configuration"},"1148":{"body":"# /etc/alertmanager/alertmanager.yml\\nglobal: resolve_timeout: 5m slack_api_url: \'YOUR_SLACK_WEBHOOK_URL\' pagerduty_url: \'https://events.pagerduty.com/v2/enqueue\' route: receiver: \'platform-notifications\' group_by: [\'alertname\', \'service\', \'severity\'] group_wait: 10s group_interval: 10s repeat_interval: 12h routes: # Critical alerts go to PagerDuty - match: severity: critical receiver: \'pagerduty-critical\' group_wait: 0s repeat_interval: 5m # Warnings go to Slack - match: severity: warning receiver: \'slack-warnings\' repeat_interval: 1h # Service-specific routing - match: service: vault receiver: \'vault-team\' group_by: [\'service\', \'severity\'] - match: service: orchestrator receiver: \'orchestrator-team\' group_by: [\'service\', \'severity\'] receivers: - name: \'platform-notifications\' slack_configs: - channel: 
\'#platform-alerts\' title: \'Platform Alert\' text: \'{{ range .Alerts }}{{ .Annotations.description }}{{ end }}\' send_resolved: true - name: \'slack-warnings\' slack_configs: - channel: \'#platform-warnings\' title: \'Warning: {{ .GroupLabels.alertname }}\' text: \'{{ range .Alerts }}{{ .Annotations.description }}{{ end }}\' - name: \'pagerduty-critical\' pagerduty_configs: - service_key: \'YOUR_PAGERDUTY_SERVICE_KEY\' description: \'{{ .GroupLabels.alertname }}\' details: firing: \'{{ template \\"pagerduty.default.instances\\" .Alerts.Firing }}\' - name: \'vault-team\' email_configs: - to: \'vault-team@company.com\' from: \'alertmanager@company.com\' smarthost: \'smtp.company.com:587\' auth_username: \'alerts@company.com\' auth_password: \'PASSWORD\' headers: Subject: \'Vault Alert: {{ .GroupLabels.alertname }}\' - name: \'orchestrator-team\' email_configs: - to: \'orchestrator-team@company.com\' from: \'alertmanager@company.com\' smarthost: \'smtp.company.com:587\' inhibit_rules: # Don\'t alert on errors if service is already down - source_match: severity: \'critical\' alertname: \'ServiceDown\' target_match_re: severity: \'warning|info\' equal: [\'service\', \'instance\'] # Don\'t alert on resource exhaustion if service is down - source_match: alertname: \'ServiceDown\' target_match_re: alertname: \'HighMemoryUsage|HighCPUUsage\' equal: [\'instance\']","breadcrumbs":"Monitoring & Alerting Setup » 1. Create AlertManager Config","id":"1148","title":"1. 
Create AlertManager Config"},"1149":{"body":"cd /opt/alertmanager\\nsudo ./alertmanager --config.file=/etc/alertmanager/alertmanager.yml \\\\ --storage.path=/var/lib/alertmanager # Or as systemd service\\nsudo tee /etc/systemd/system/alertmanager.service > /dev/null << EOF\\n[Unit]\\nDescription=AlertManager\\nWants=network-online.target\\nAfter=network-online.target [Service]\\nUser=alertmanager\\nType=simple\\nExecStart=/opt/alertmanager/alertmanager \\\\ --config.file=/etc/alertmanager/alertmanager.yml \\\\ --storage.path=/var/lib/alertmanager Restart=on-failure\\nRestartSec=10 [Install]\\nWantedBy=multi-user.target\\nEOF sudo systemctl daemon-reload\\nsudo systemctl enable alertmanager\\nsudo systemctl start alertmanager","breadcrumbs":"Monitoring & Alerting Setup » 2. Start AlertManager","id":"1149","title":"2. Start AlertManager"},"115":{"body":"CPU : 16 cores RAM : 32GB Disk : 500GB available space (SSD recommended) Network : High-bandwidth connection with static IP","breadcrumbs":"Prerequisites » Production Requirements (Enterprise Mode)","id":"115","title":"Production Requirements (Enterprise Mode)"},"1150":{"body":"# Check AlertManager is running\\ncurl -s http://localhost:9093/-/healthy # List active alerts\\ncurl -s http://localhost:9093/api/v1/alerts | jq . # Check configuration\\ncurl -s http://localhost:9093/api/v1/status | jq .","breadcrumbs":"Monitoring & Alerting Setup » 3. Verify AlertManager","id":"1150","title":"3. Verify AlertManager"},"1151":{"body":"","breadcrumbs":"Monitoring & Alerting Setup » Grafana Dashboards","id":"1151","title":"Grafana Dashboards"},"1152":{"body":"# Install Grafana\\nsudo apt-get install -y grafana-server # Start Grafana\\nsudo systemctl enable grafana-server\\nsudo systemctl start grafana-server # Access at http://localhost:3000\\n# Default: admin/admin","breadcrumbs":"Monitoring & Alerting Setup » 1. Install Grafana","id":"1152","title":"1. 
Install Grafana"},"1153":{"body":"# Via API\\ncurl -X POST http://localhost:3000/api/datasources \\\\ -H \\"Content-Type: application/json\\" \\\\ -u admin:admin \\\\ -d \'{ \\"name\\": \\"Prometheus\\", \\"type\\": \\"prometheus\\", \\"url\\": \\"http://localhost:9090\\", \\"access\\": \\"proxy\\", \\"isDefault\\": true }\'","breadcrumbs":"Monitoring & Alerting Setup » 2. Add Prometheus Data Source","id":"1153","title":"2. Add Prometheus Data Source"},"1154":{"body":"{ \\"dashboard\\": { \\"title\\": \\"Platform Overview\\", \\"description\\": \\"9-service provisioning platform metrics\\", \\"tags\\": [\\"platform\\", \\"overview\\"], \\"timezone\\": \\"browser\\", \\"panels\\": [ { \\"title\\": \\"Service Status\\", \\"type\\": \\"stat\\", \\"targets\\": [ { \\"expr\\": \\"up{job=~\\\\\\"vault-service|registry|rag|ai-service|orchestrator|control-center|mcp-server\\\\\\"}\\" } ], \\"fieldConfig\\": { \\"defaults\\": { \\"mappings\\": [ { \\"type\\": \\"value\\", \\"value\\": \\"1\\", \\"text\\": \\"UP\\" }, { \\"type\\": \\"value\\", \\"value\\": \\"0\\", \\"text\\": \\"DOWN\\" } ] } } }, { \\"title\\": \\"Request Rate\\", \\"type\\": \\"graph\\", \\"targets\\": [ { \\"expr\\": \\"rate(http_requests_total[5m])\\" } ] }, { \\"title\\": \\"Error Rate\\", \\"type\\": \\"graph\\", \\"targets\\": [ { \\"expr\\": \\"rate(http_requests_total{status=~\\\\\\"5..\\\\\\"}[5m])\\" } ] }, { \\"title\\": \\"Latency (p95)\\", \\"type\\": \\"graph\\", \\"targets\\": [ { \\"expr\\": \\"histogram_quantile(0.95, rate(http_request_duration_seconds_bucket[5m]))\\" } ] }, { \\"title\\": \\"Memory Usage\\", \\"type\\": \\"graph\\", \\"targets\\": [ { \\"expr\\": \\"container_memory_usage_bytes / 1024 / 1024\\" } ] }, { \\"title\\": \\"Disk Usage\\", \\"type\\": \\"gauge\\", \\"targets\\": [ { \\"expr\\": \\"(1 - (node_filesystem_avail_bytes / node_filesystem_size_bytes)) * 100\\" } ] } ] }\\n}","breadcrumbs":"Monitoring & Alerting Setup » 3. 
Create Platform Overview Dashboard","id":"1154","title":"3. Create Platform Overview Dashboard"},"1155":{"body":"# Save dashboard JSON to file\\ncat > platform-overview.json << \'EOF\'\\n{ \\"dashboard\\": { ... }\\n}\\nEOF # Import dashboard\\ncurl -X POST http://localhost:3000/api/dashboards/db \\\\ -H \\"Content-Type: application/json\\" \\\\ -u admin:admin \\\\ -d @platform-overview.json","breadcrumbs":"Monitoring & Alerting Setup » 4. Import Dashboard via API","id":"1155","title":"4. Import Dashboard via API"},"1156":{"body":"","breadcrumbs":"Monitoring & Alerting Setup » Health Check Monitoring","id":"1156","title":"Health Check Monitoring"},"1157":{"body":"#!/bin/bash\\n# scripts/check-service-health.sh SERVICES=( \\"vault:8200\\" \\"registry:8081\\" \\"rag:8083\\" \\"ai-service:8082\\" \\"orchestrator:9090\\" \\"control-center:8080\\" \\"mcp-server:8084\\"\\n) UNHEALTHY=0 for service in \\"${SERVICES[@]}\\"; do IFS=\':\' read -r name port <<< \\"$service\\" response=$(curl -s -o /dev/null -w \\"%{http_code}\\" http://localhost:$port/health) if [ \\"$response\\" = \\"200\\" ]; then echo \\"✓ $name is healthy\\" else echo \\"✗ $name is UNHEALTHY (HTTP $response)\\" ((UNHEALTHY++)) fi\\ndone if [ $UNHEALTHY -gt 0 ]; then echo \\"\\" echo \\"WARNING: $UNHEALTHY service(s) unhealthy\\" exit 1\\nfi exit 0","breadcrumbs":"Monitoring & Alerting Setup » 1. Service Health Check Script","id":"1157","title":"1. Service Health Check Script"},"1158":{"body":"# For Kubernetes deployments\\napiVersion: v1\\nkind: Pod\\nmetadata: name: vault-service\\nspec: containers: - name: vault-service image: vault-service:latest livenessProbe: httpGet: path: /health port: 8200 initialDelaySeconds: 30 periodSeconds: 10 failureThreshold: 3 readinessProbe: httpGet: path: /health port: 8200 initialDelaySeconds: 10 periodSeconds: 5 failureThreshold: 2","breadcrumbs":"Monitoring & Alerting Setup » 2. Liveness Probe Configuration","id":"1158","title":"2. 
Liveness Probe Configuration"},"1159":{"body":"","breadcrumbs":"Monitoring & Alerting Setup » Log Aggregation (ELK Stack)","id":"1159","title":"Log Aggregation (ELK Stack)"},"116":{"body":"","breadcrumbs":"Prerequisites » Operating System","id":"116","title":"Operating System"},"1160":{"body":"# Install Elasticsearch\\nwget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.11.0-linux-x86_64.tar.gz\\ntar xvfz elasticsearch-8.11.0-linux-x86_64.tar.gz\\ncd elasticsearch-8.11.0/bin\\n./elasticsearch","breadcrumbs":"Monitoring & Alerting Setup » 1. Elasticsearch Setup","id":"1160","title":"1. Elasticsearch Setup"},"1161":{"body":"# /etc/filebeat/filebeat.yml\\nfilebeat.inputs: - type: log enabled: true paths: - /var/log/provisioning/*.log fields: service: provisioning-platform environment: production output.elasticsearch: hosts: [\\"localhost:9200\\"] username: \\"elastic\\" password: \\"changeme\\" logging.level: info\\nlogging.to_files: true\\nlogging.files: path: /var/log/filebeat","breadcrumbs":"Monitoring & Alerting Setup » 2. Filebeat Configuration","id":"1161","title":"2. Filebeat Configuration"},"1162":{"body":"# Access at http://localhost:5601\\n# Create index pattern: provisioning-*\\n# Create visualizations for:\\n# - Error rate over time\\n# - Service availability\\n# - Performance metrics\\n# - Request volume","breadcrumbs":"Monitoring & Alerting Setup » 3. Kibana Dashboard","id":"1162","title":"3. 
Kibana Dashboard"},"1163":{"body":"","breadcrumbs":"Monitoring & Alerting Setup » Monitoring Dashboard Queries","id":"1163","title":"Monitoring Dashboard Queries"},"1164":{"body":"# Service availability (last hour)\\navg(increase(up[1h])) by (job) # Request rate per service\\nsum(rate(http_requests_total[5m])) by (job) # Error rate per service\\nsum(rate(http_requests_total{status=~\\"5..\\"}[5m])) by (job) # Latency percentiles\\nhistogram_quantile(0.95, rate(http_request_duration_seconds_bucket[5m]))\\nhistogram_quantile(0.99, rate(http_request_duration_seconds_bucket[5m])) # Memory usage per service\\ncontainer_memory_usage_bytes / 1024 / 1024 / 1024 # CPU usage per service\\nrate(container_cpu_usage_seconds_total[5m]) * 100 # Disk I/O operations\\nrate(node_disk_io_time_seconds_total[5m]) # Network throughput\\nrate(node_network_transmit_bytes_total[5m]) # Queue depth (Orchestrator)\\norchestrator_queue_depth # Task processing rate\\nrate(orchestrator_tasks_total[5m]) # Task failure rate\\nrate(orchestrator_tasks_failed_total[5m]) # Cache hit ratio\\nrate(service_cache_hits_total[5m]) / (rate(service_cache_hits_total[5m]) + rate(service_cache_misses_total[5m])) # Database connection pool status\\ndatabase_connection_pool_usage{job=\\"orchestrator\\"} # TLS certificate expiration\\n(ssl_certificate_expiry - time()) / 86400","breadcrumbs":"Monitoring & Alerting Setup » Common Prometheus Queries","id":"1164","title":"Common Prometheus Queries"},"1165":{"body":"","breadcrumbs":"Monitoring & Alerting Setup » Alert Testing","id":"1165","title":"Alert Testing"},"1166":{"body":"# Manually fire test alert\\ncurl -X POST http://localhost:9093/api/v1/alerts \\\\ -H \'Content-Type: application/json\' \\\\ -d \'[ { \\"status\\": \\"firing\\", \\"labels\\": { \\"alertname\\": \\"TestAlert\\", \\"severity\\": \\"critical\\" }, \\"annotations\\": { \\"summary\\": \\"This is a test alert\\", \\"description\\": \\"Test alert to verify notification routing\\" } } 
]\'","breadcrumbs":"Monitoring & Alerting Setup » 1. Test Alert Firing","id":"1166","title":"1. Test Alert Firing"},"1167":{"body":"# Stop a service to trigger ServiceDown alert\\npkill -9 vault-service # Within 5 minutes, alert should fire\\n# Check AlertManager UI: http://localhost:9093 # Restart service\\ncargo run --release -p vault-service & # Alert should resolve after service is back up","breadcrumbs":"Monitoring & Alerting Setup » 2. Stop Service to Trigger Alert","id":"1167","title":"2. Stop Service to Trigger Alert"},"1168":{"body":"# Generate request load\\nab -n 10000 -c 100 http://localhost:9090/api/v1/health # Monitor error rate in Prometheus\\ncurl -s \'http://localhost:9090/api/v1/query?query=rate(http_requests_total{status=~\\"5..\\"}[5m])\' | jq .","breadcrumbs":"Monitoring & Alerting Setup » 3. Generate Load to Test Error Alerts","id":"1168","title":"3. Generate Load to Test Error Alerts"},"1169":{"body":"","breadcrumbs":"Monitoring & Alerting Setup » Backup & Retention Policies","id":"1169","title":"Backup & Retention Policies"},"117":{"body":"macOS : 12.0 (Monterey) or later Linux : Ubuntu 22.04 LTS or later Fedora 38 or later Debian 12 (Bookworm) or later RHEL 9 or later","breadcrumbs":"Prerequisites » Supported Platforms","id":"117","title":"Supported Platforms"},"1170":{"body":"#!/bin/bash\\n# scripts/backup-prometheus-data.sh BACKUP_DIR=\\"/backups/prometheus\\"\\nRETENTION_DAYS=30 # Create snapshot\\ncurl -X POST http://localhost:9090/api/v1/admin/tsdb/snapshot # Backup snapshot\\nSNAPSHOT=$(ls -t /var/lib/prometheus/snapshots | head -1)\\ntar -czf \\"$BACKUP_DIR/prometheus-$SNAPSHOT.tar.gz\\" \\\\ \\"/var/lib/prometheus/snapshots/$SNAPSHOT\\" # Upload to S3\\naws s3 cp \\"$BACKUP_DIR/prometheus-$SNAPSHOT.tar.gz\\" \\\\ s3://backups/prometheus/ # Clean old backups\\nfind \\"$BACKUP_DIR\\" -mtime +$RETENTION_DAYS -delete","breadcrumbs":"Monitoring & Alerting Setup » 1. Prometheus Data Backup","id":"1170","title":"1. 
Prometheus Data Backup"},"1171":{"body":"# Keep metrics for 15 days\\n/opt/prometheus/prometheus \\\\ --storage.tsdb.retention.time=15d \\\\ --storage.tsdb.retention.size=50GB","breadcrumbs":"Monitoring & Alerting Setup » 2. Prometheus Retention Configuration","id":"1171","title":"2. Prometheus Retention Configuration"},"1172":{"body":"","breadcrumbs":"Monitoring & Alerting Setup » Maintenance & Troubleshooting","id":"1172","title":"Maintenance & Troubleshooting"},"1173":{"body":"Prometheus Won\'t Scrape Service # Check configuration\\n/opt/prometheus/promtool check config /etc/prometheus/prometheus.yml # Verify service is accessible\\ncurl http://localhost:8200/metrics # Check Prometheus targets\\ncurl -s http://localhost:9090/api/v1/targets | jq \'.data.activeTargets[] | select(.job==\\"vault-service\\")\' # Check scrape error\\ncurl -s http://localhost:9090/api/v1/targets | jq \'.data.activeTargets[] | .lastError\' AlertManager Not Sending Notifications # Verify AlertManager config\\n/opt/alertmanager/amtool config routes # Test webhook\\ncurl -X POST http://localhost:3012/ -d \'{\\"test\\": \\"alert\\"}\' # Check AlertManager logs\\njournalctl -u alertmanager -n 100 -f # Verify notification channels configured\\ncurl -s http://localhost:9093/api/v1/receivers High Memory Usage # Reduce Prometheus retention\\nprometheus --storage.tsdb.retention.time=7d --storage.tsdb.max-block-duration=2h # Disable unused scrape jobs\\n# Edit prometheus.yml and remove unused jobs # Monitor memory\\nps aux | grep prometheus | grep -v grep","breadcrumbs":"Monitoring & Alerting Setup » Common Issues","id":"1173","title":"Common Issues"},"1174":{"body":"Prometheus installed and running AlertManager installed and running Grafana installed and configured Prometheus scraping all 8 services Alert rules deployed and validated Notification channels configured (Slack, email, PagerDuty) AlertManager webhooks tested Grafana dashboards created Log aggregation stack deployed (optional) Backup 
scripts configured Retention policies set Health checks configured Team notified of alerting setup Runbooks created for common alerts Alert testing procedure documented","breadcrumbs":"Monitoring & Alerting Setup » Production Deployment Checklist","id":"1174","title":"Production Deployment Checklist"},"1175":{"body":"# Prometheus\\ncurl http://localhost:9090/api/v1/targets # List scrape targets\\ncurl \'http://localhost:9090/api/v1/query?query=up\' # Query metric\\ncurl -X POST http://localhost:9090/-/reload # Reload config # AlertManager\\ncurl http://localhost:9093/api/v1/alerts # List active alerts\\ncurl http://localhost:9093/api/v1/receivers # List receivers\\ncurl http://localhost:9093/api/v2/status # Check status # Grafana\\ncurl -u admin:admin http://localhost:3000/api/datasources # List data sources\\ncurl -u admin:admin http://localhost:3000/api/dashboards # List dashboards # Validation\\npromtool check config /etc/prometheus/prometheus.yml\\npromtool check rules /etc/prometheus/rules/platform-alerts.yml\\namtool config routes","breadcrumbs":"Monitoring & Alerting Setup » Quick Commands Reference","id":"1175","title":"Quick Commands Reference"},"1176":{"body":"","breadcrumbs":"Monitoring & Alerting Setup » Documentation & Runbooks","id":"1176","title":"Documentation & Runbooks"},"1177":{"body":"# Service Down Alert ## Detection\\nAlert fires when service is unreachable for 5+ minutes ## Immediate Actions\\n1. Check service is running: pgrep -f service-name\\n2. Check service port: ss -tlnp | grep 8200\\n3. Check service logs: tail -100 /var/log/provisioning/service.log ## Diagnosis\\n1. Service crashed: look for panic/error in logs\\n2. Port conflict: lsof -i :8200\\n3. Configuration issue: validate config file\\n4. Dependency down: check database/cache connectivity ## Remediation\\n1. Restart service: pkill service && cargo run --release -p service &\\n2. Check health: curl http://localhost:8200/health\\n3. 
Verify dependencies: curl http://localhost:5432/health ## Escalation\\nIf service doesn\'t recover after restart, escalate to on-call engineer","breadcrumbs":"Monitoring & Alerting Setup » Sample Runbook: Service Down","id":"1177","title":"Sample Runbook: Service Down"},"1178":{"body":"Prometheus Documentation AlertManager Documentation Grafana Documentation Platform Deployment Guide Service Management Guide Last Updated : 2026-01-05 Version : 1.0.0 Status : Production Ready ✅","breadcrumbs":"Monitoring & Alerting Setup » Resources","id":"1178","title":"Resources"},"1179":{"body":"","breadcrumbs":"Service Management Quick Reference » Service Management Quick Reference","id":"1179","title":"Service Management Quick Reference"},"118":{"body":"macOS : Xcode Command Line Tools required Homebrew recommended for package management Linux : systemd-based distribution recommended sudo access required for some operations","breadcrumbs":"Prerequisites » Platform-Specific Notes","id":"118","title":"Platform-Specific Notes"},"1180":{"body":"Version : 1.0.0 Date : 2025-10-06 Author : CoreDNS Integration Agent","breadcrumbs":"CoreDNS Guide » CoreDNS Integration Guide","id":"1180","title":"CoreDNS Integration Guide"},"1181":{"body":"Overview Installation Configuration CLI Commands Zone Management Record Management Docker Deployment Integration Troubleshooting Advanced Topics","breadcrumbs":"CoreDNS Guide » Table of Contents","id":"1181","title":"Table of Contents"},"1182":{"body":"The CoreDNS integration provides comprehensive DNS management capabilities for the provisioning system. 
It supports: Local DNS service - Run CoreDNS as binary or Docker container Dynamic DNS updates - Automatic registration of infrastructure changes Multi-zone support - Manage multiple DNS zones Provider integration - Seamless integration with orchestrator REST API - Programmatic DNS management Docker deployment - Containerized CoreDNS with docker-compose","breadcrumbs":"CoreDNS Guide » Overview","id":"1182","title":"Overview"},"1183":{"body":"✅ Automatic Server Registration - Servers automatically registered in DNS on creation ✅ Zone File Management - Create, update, and manage zone files programmatically ✅ Multiple Deployment Modes - Binary, Docker, remote, or hybrid ✅ Health Monitoring - Built-in health checks and metrics ✅ CLI Interface - Comprehensive command-line tools ✅ API Integration - REST API for external integration","breadcrumbs":"CoreDNS Guide » Key Features","id":"1183","title":"Key Features"},"1184":{"body":"","breadcrumbs":"CoreDNS Guide » Installation","id":"1184","title":"Installation"},"1185":{"body":"Nushell 0.107+ - For CLI and scripts Docker (optional) - For containerized deployment dig (optional) - For DNS queries","breadcrumbs":"CoreDNS Guide » Prerequisites","id":"1185","title":"Prerequisites"},"1186":{"body":"# Install latest version\\nprovisioning dns install # Install specific version\\nprovisioning dns install 1.11.1 # Check mode\\nprovisioning dns install --check\\n```plaintext The binary will be installed to `~/.provisioning/bin/coredns`. 
### Verify Installation ```bash\\n# Check CoreDNS version\\n~/.provisioning/bin/coredns -version # Verify installation\\nls -lh ~/.provisioning/bin/coredns\\n```plaintext --- ## Configuration ### KCL Configuration Schema Add CoreDNS configuration to your infrastructure config: ```kcl\\n# In workspace/infra/{name}/config.k\\nimport provisioning.coredns as dns coredns_config: dns.CoreDNSConfig = { mode = \\"local\\" local = { enabled = True deployment_type = \\"binary\\" # or \\"docker\\" binary_path = \\"~/.provisioning/bin/coredns\\" config_path = \\"~/.provisioning/coredns/Corefile\\" zones_path = \\"~/.provisioning/coredns/zones\\" port = 5353 auto_start = True zones = [\\"provisioning.local\\", \\"workspace.local\\"] } dynamic_updates = { enabled = True api_endpoint = \\"http://localhost:9090/dns\\" auto_register_servers = True auto_unregister_servers = True ttl = 300 } upstream = [\\"8.8.8.8\\", \\"1.1.1.1\\"] default_ttl = 3600 enable_logging = True enable_metrics = True metrics_port = 9153\\n}\\n```plaintext ### Configuration Modes #### Local Mode (Binary) Run CoreDNS as a local binary process: ```kcl\\ncoredns_config: CoreDNSConfig = { mode = \\"local\\" local = { deployment_type = \\"binary\\" auto_start = True }\\n}\\n```plaintext #### Local Mode (Docker) Run CoreDNS in Docker container: ```kcl\\ncoredns_config: CoreDNSConfig = { mode = \\"local\\" local = { deployment_type = \\"docker\\" docker = { image = \\"coredns/coredns:1.11.1\\" container_name = \\"provisioning-coredns\\" restart_policy = \\"unless-stopped\\" } }\\n}\\n```plaintext #### Remote Mode Connect to external CoreDNS service: ```kcl\\ncoredns_config: CoreDNSConfig = { mode = \\"remote\\" remote = { enabled = True endpoints = [\\"https://dns1.example.com\\", \\"https://dns2.example.com\\"] zones = [\\"production.local\\"] verify_tls = True }\\n}\\n```plaintext #### Disabled Mode Disable CoreDNS integration: ```kcl\\ncoredns_config: CoreDNSConfig = { mode = \\"disabled\\"\\n}\\n```plaintext 
--- ## CLI Commands ### Service Management ```bash\\n# Check status\\nprovisioning dns status # Start service\\nprovisioning dns start # Start in foreground (for debugging)\\nprovisioning dns start --foreground # Stop service\\nprovisioning dns stop # Restart service\\nprovisioning dns restart # Reload configuration (graceful)\\nprovisioning dns reload # View logs\\nprovisioning dns logs # Follow logs\\nprovisioning dns logs --follow # Show last 100 lines\\nprovisioning dns logs --lines 100\\n```plaintext ### Health & Monitoring ```bash\\n# Check health\\nprovisioning dns health # View configuration\\nprovisioning dns config show # Validate configuration\\nprovisioning dns config validate # Generate new Corefile\\nprovisioning dns config generate\\n```plaintext --- ## Zone Management ### List Zones ```bash\\n# List all zones\\nprovisioning dns zone list\\n```plaintext **Output:** ```plaintext\\nDNS Zones\\n========= • provisioning.local ✓ • workspace.local ✓\\n```plaintext ### Create Zone ```bash\\n# Create new zone\\nprovisioning dns zone create myapp.local # Check mode\\nprovisioning dns zone create myapp.local --check\\n```plaintext ### Show Zone Details ```bash\\n# Show all records in zone\\nprovisioning dns zone show provisioning.local # JSON format\\nprovisioning dns zone show provisioning.local --format json # YAML format\\nprovisioning dns zone show provisioning.local --format yaml\\n```plaintext ### Delete Zone ```bash\\n# Delete zone (with confirmation)\\nprovisioning dns zone delete myapp.local # Force deletion (skip confirmation)\\nprovisioning dns zone delete myapp.local --force # Check mode\\nprovisioning dns zone delete myapp.local --check\\n```plaintext --- ## Record Management ### Add Records #### A Record (IPv4) ```bash\\nprovisioning dns record add server-01 A 10.0.1.10 # With custom TTL\\nprovisioning dns record add server-01 A 10.0.1.10 --ttl 600 # With comment\\nprovisioning dns record add server-01 A 10.0.1.10 --comment \\"Web server\\" # 
Different zone\\nprovisioning dns record add server-01 A 10.0.1.10 --zone myapp.local\\n```plaintext #### AAAA Record (IPv6) ```bash\\nprovisioning dns record add server-01 AAAA 2001:db8::1\\n```plaintext #### CNAME Record ```bash\\nprovisioning dns record add web CNAME server-01.provisioning.local\\n```plaintext #### MX Record ```bash\\nprovisioning dns record add @ MX mail.example.com --priority 10\\n```plaintext #### TXT Record ```bash\\nprovisioning dns record add @ TXT \\"v=spf1 mx -all\\"\\n```plaintext ### Remove Records ```bash\\n# Remove record\\nprovisioning dns record remove server-01 # Different zone\\nprovisioning dns record remove server-01 --zone myapp.local # Check mode\\nprovisioning dns record remove server-01 --check\\n```plaintext ### Update Records ```bash\\n# Update record value\\nprovisioning dns record update server-01 A 10.0.1.20 # With new TTL\\nprovisioning dns record update server-01 A 10.0.1.20 --ttl 1800\\n```plaintext ### List Records ```bash\\n# List all records in zone\\nprovisioning dns record list # Different zone\\nprovisioning dns record list --zone myapp.local # JSON format\\nprovisioning dns record list --format json # YAML format\\nprovisioning dns record list --format yaml\\n```plaintext **Example Output:** ```plaintext\\nDNS Records - Zone: provisioning.local ╭───┬──────────────┬──────┬─────────────┬─────╮\\n│ # │ name │ type │ value │ ttl │\\n├───┼──────────────┼──────┼─────────────┼─────┤\\n│ 0 │ server-01 │ A │ 10.0.1.10 │ 300 │\\n│ 1 │ server-02 │ A │ 10.0.1.11 │ 300 │\\n│ 2 │ db-01 │ A │ 10.0.2.10 │ 300 │\\n│ 3 │ web │ CNAME│ server-01 │ 300 │\\n╰───┴──────────────┴──────┴─────────────┴─────╯\\n```plaintext --- ## Docker Deployment ### Prerequisites Ensure Docker and docker-compose are installed: ```bash\\ndocker --version\\ndocker-compose --version\\n```plaintext ### Start CoreDNS in Docker ```bash\\n# Start CoreDNS container\\nprovisioning dns docker start # Check mode\\nprovisioning dns docker start 
--check\\n```plaintext ### Manage Docker Container ```bash\\n# Check status\\nprovisioning dns docker status # View logs\\nprovisioning dns docker logs # Follow logs\\nprovisioning dns docker logs --follow # Restart container\\nprovisioning dns docker restart # Stop container\\nprovisioning dns docker stop # Check health\\nprovisioning dns docker health\\n```plaintext ### Update Docker Image ```bash\\n# Pull latest image\\nprovisioning dns docker pull # Pull specific version\\nprovisioning dns docker pull --version 1.11.1 # Update and restart\\nprovisioning dns docker update\\n```plaintext ### Remove Container ```bash\\n# Remove container (with confirmation)\\nprovisioning dns docker remove # Remove with volumes\\nprovisioning dns docker remove --volumes # Force remove (skip confirmation)\\nprovisioning dns docker remove --force # Check mode\\nprovisioning dns docker remove --check\\n```plaintext ### View Configuration ```bash\\n# Show docker-compose config\\nprovisioning dns docker config\\n```plaintext --- ## Integration ### Automatic Server Registration When dynamic DNS is enabled, servers are automatically registered: ```bash\\n# Create server (automatically registers in DNS)\\nprovisioning server create web-01 --infra myapp # Server gets DNS record: web-01.provisioning.local -> \\n```plaintext ### Manual Registration ```nushell\\nuse lib_provisioning/coredns/integration.nu * # Register server\\nregister-server-in-dns \\"web-01\\" \\"10.0.1.10\\" # Unregister server\\nunregister-server-from-dns \\"web-01\\" # Bulk register\\nbulk-register-servers [ {hostname: \\"web-01\\", ip: \\"10.0.1.10\\"} {hostname: \\"web-02\\", ip: \\"10.0.1.11\\"} {hostname: \\"db-01\\", ip: \\"10.0.2.10\\"}\\n]\\n```plaintext ### Sync Infrastructure with DNS ```bash\\n# Sync all servers in infrastructure with DNS\\nprovisioning dns sync myapp # Check mode\\nprovisioning dns sync myapp --check\\n```plaintext ### Service Registration ```nushell\\nuse 
lib_provisioning/coredns/integration.nu * # Register service\\nregister-service-in-dns \\"api\\" \\"10.0.1.10\\" # Unregister service\\nunregister-service-from-dns \\"api\\"\\n```plaintext --- ## Query DNS ### Using CLI ```bash\\n# Query A record\\nprovisioning dns query server-01 # Query specific type\\nprovisioning dns query server-01 --type AAAA # Query different server\\nprovisioning dns query server-01 --server 8.8.8.8 --port 53 # Query from local CoreDNS\\nprovisioning dns query server-01 --server 127.0.0.1 --port 5353\\n```plaintext ### Using dig ```bash\\n# Query from local CoreDNS\\ndig @127.0.0.1 -p 5353 server-01.provisioning.local # Query CNAME\\ndig @127.0.0.1 -p 5353 web.provisioning.local CNAME # Query MX\\ndig @127.0.0.1 -p 5353 example.com MX\\n```plaintext --- ## Troubleshooting ### CoreDNS Not Starting **Symptoms:** `dns start` fails or service doesn\'t respond **Solutions:** 1. **Check if port is in use:** ```bash lsof -i :5353 netstat -an | grep 5353 Validate Corefile: provisioning dns config validate Check logs: provisioning dns logs\\ntail -f ~/.provisioning/coredns/coredns.log Verify binary exists: ls -lh ~/.provisioning/bin/coredns\\nprovisioning dns install","breadcrumbs":"CoreDNS Guide » Install CoreDNS Binary","id":"1186","title":"Install CoreDNS Binary"},"1187":{"body":"Symptoms: dig returns SERVFAIL or timeout Solutions: Check CoreDNS is running: provisioning dns status\\nprovisioning dns health Verify zone file exists: ls -lh ~/.provisioning/coredns/zones/\\ncat ~/.provisioning/coredns/zones/provisioning.local.zone Test with dig: dig @127.0.0.1 -p 5353 provisioning.local SOA Check firewall: # macOS\\nsudo pfctl -sr | grep 5353 # Linux\\nsudo iptables -L -n | grep 5353","breadcrumbs":"CoreDNS Guide » DNS Queries Not Working","id":"1187","title":"DNS Queries Not Working"},"1188":{"body":"Symptoms: dns config validate shows errors Solutions: Backup zone file: cp ~/.provisioning/coredns/zones/provisioning.local.zone \\\\ 
~/.provisioning/coredns/zones/provisioning.local.zone.backup Regenerate zone: provisioning dns zone create provisioning.local --force Check syntax manually: cat ~/.provisioning/coredns/zones/provisioning.local.zone Increment serial: Edit zone file manually Increase serial number in SOA record","breadcrumbs":"CoreDNS Guide » Zone File Validation Errors","id":"1188","title":"Zone File Validation Errors"},"1189":{"body":"Symptoms: Docker container won\'t start or crashes Solutions: Check Docker logs: provisioning dns docker logs\\ndocker logs provisioning-coredns Verify volumes exist: ls -lh ~/.provisioning/coredns/ Check container status: provisioning dns docker status\\ndocker ps -a | grep coredns Recreate container: provisioning dns docker stop\\nprovisioning dns docker remove --volumes\\nprovisioning dns docker start","breadcrumbs":"CoreDNS Guide » Docker Container Issues","id":"1189","title":"Docker Container Issues"},"119":{"body":"","breadcrumbs":"Prerequisites » Required Software","id":"119","title":"Required Software"},"1190":{"body":"Symptoms: Servers not auto-registered in DNS Solutions: Check if enabled: provisioning dns config show | grep -A 5 dynamic_updates Verify orchestrator running: curl http://localhost:9090/health Check logs for errors: provisioning dns logs | grep -i error Test manual registration: use lib_provisioning/coredns/integration.nu *\\nregister-server-in-dns \\"test-server\\" \\"10.0.0.1\\"","breadcrumbs":"CoreDNS Guide » Dynamic Updates Not Working","id":"1190","title":"Dynamic Updates Not Working"},"1191":{"body":"","breadcrumbs":"CoreDNS Guide » Advanced Topics","id":"1191","title":"Advanced Topics"},"1192":{"body":"Add custom plugins to Corefile: use lib_provisioning/coredns/corefile.nu * # Add plugin to zone\\nadd-corefile-plugin \\\\ \\"~/.provisioning/coredns/Corefile\\" \\\\ \\"provisioning.local\\" \\\\ \\"cache 30\\"\\n```plaintext ### Backup and Restore ```bash\\n# Backup configuration\\ntar czf coredns-backup.tar.gz 
~/.provisioning/coredns/ # Restore configuration\\ntar xzf coredns-backup.tar.gz -C ~/\\n```plaintext ### Zone File Backup ```nushell\\nuse lib_provisioning/coredns/zones.nu * # Backup zone\\nbackup-zone-file \\"provisioning.local\\" # Creates: ~/.provisioning/coredns/zones/provisioning.local.zone.YYYYMMDD-HHMMSS.bak\\n```plaintext ### Metrics and Monitoring CoreDNS exposes Prometheus metrics on port 9153: ```bash\\n# View metrics\\ncurl http://localhost:9153/metrics # Common metrics:\\n# - coredns_dns_request_duration_seconds\\n# - coredns_dns_requests_total\\n# - coredns_dns_responses_total\\n```plaintext ### Multi-Zone Setup ```kcl\\ncoredns_config: CoreDNSConfig = { local = { zones = [ \\"provisioning.local\\", \\"workspace.local\\", \\"dev.local\\", \\"staging.local\\", \\"prod.local\\" ] }\\n}\\n```plaintext ### Split-Horizon DNS Configure different zones for internal/external: ```kcl\\ncoredns_config: CoreDNSConfig = { local = { zones = [\\"internal.local\\"] port = 5353 } remote = { zones = [\\"external.com\\"] endpoints = [\\"https://dns.external.com\\"] }\\n}\\n```plaintext --- ## Configuration Reference ### CoreDNSConfig Fields | Field | Type | Default | Description |\\n|-------|------|---------|-------------|\\n| `mode` | `\\"local\\" \\\\| \\"remote\\" \\\\| \\"hybrid\\" \\\\| \\"disabled\\"` | `\\"local\\"` | Deployment mode |\\n| `local` | `LocalCoreDNS?` | - | Local config (required for local mode) |\\n| `remote` | `RemoteCoreDNS?` | - | Remote config (required for remote mode) |\\n| `dynamic_updates` | `DynamicDNS` | - | Dynamic DNS configuration |\\n| `upstream` | `[str]` | `[\\"8.8.8.8\\", \\"1.1.1.1\\"]` | Upstream DNS servers |\\n| `default_ttl` | `int` | `300` | Default TTL (seconds) |\\n| `enable_logging` | `bool` | `True` | Enable query logging |\\n| `enable_metrics` | `bool` | `True` | Enable Prometheus metrics |\\n| `metrics_port` | `int` | `9153` | Metrics port | ### LocalCoreDNS Fields | Field | Type | Default | Description 
|\\n|-------|------|---------|-------------|\\n| `enabled` | `bool` | `True` | Enable local CoreDNS |\\n| `deployment_type` | `\\"binary\\" \\\\| \\"docker\\"` | `\\"binary\\"` | How to deploy |\\n| `binary_path` | `str` | `\\"~/.provisioning/bin/coredns\\"` | Path to binary |\\n| `config_path` | `str` | `\\"~/.provisioning/coredns/Corefile\\"` | Corefile path |\\n| `zones_path` | `str` | `\\"~/.provisioning/coredns/zones\\"` | Zones directory |\\n| `port` | `int` | `5353` | DNS listening port |\\n| `auto_start` | `bool` | `True` | Auto-start on boot |\\n| `zones` | `[str]` | `[\\"provisioning.local\\"]` | Managed zones | ### DynamicDNS Fields | Field | Type | Default | Description |\\n|-------|------|---------|-------------|\\n| `enabled` | `bool` | `True` | Enable dynamic updates |\\n| `api_endpoint` | `str` | `\\"http://localhost:9090/dns\\"` | Orchestrator API |\\n| `auto_register_servers` | `bool` | `True` | Auto-register on create |\\n| `auto_unregister_servers` | `bool` | `True` | Auto-unregister on delete |\\n| `ttl` | `int` | `300` | TTL for dynamic records |\\n| `update_strategy` | `\\"immediate\\" \\\\| \\"batched\\" \\\\| \\"scheduled\\"` | `\\"immediate\\"` | Update strategy | --- ## Examples ### Complete Setup Example ```bash\\n# 1. Install CoreDNS\\nprovisioning dns install # 2. Generate configuration\\nprovisioning dns config generate # 3. Start service\\nprovisioning dns start # 4. Create custom zone\\nprovisioning dns zone create myapp.local # 5. Add DNS records\\nprovisioning dns record add web-01 A 10.0.1.10\\nprovisioning dns record add web-02 A 10.0.1.11\\nprovisioning dns record add api CNAME web-01.myapp.local --zone myapp.local # 6. Query records\\nprovisioning dns query web-01 --server 127.0.0.1 --port 5353 # 7. Check status\\nprovisioning dns status\\nprovisioning dns health\\n```plaintext ### Docker Deployment Example ```bash\\n# 1. Start CoreDNS in Docker\\nprovisioning dns docker start # 2. 
Check status\\nprovisioning dns docker status # 3. View logs\\nprovisioning dns docker logs --follow # 4. Add records (container must be running)\\nprovisioning dns record add server-01 A 10.0.1.10 # 5. Query\\ndig @127.0.0.1 -p 5353 server-01.provisioning.local # 6. Stop\\nprovisioning dns docker stop\\n```plaintext --- ## Best Practices 1. **Use TTL wisely** - Lower TTL (300s) for frequently changing records, higher (3600s) for stable\\n2. **Enable logging** - Essential for troubleshooting\\n3. **Regular backups** - Backup zone files before major changes\\n4. **Validate before reload** - Always run `dns config validate` before reloading\\n5. **Monitor metrics** - Track DNS query rates and error rates\\n6. **Use comments** - Add comments to records for documentation\\n7. **Separate zones** - Use different zones for different environments (dev, staging, prod) --- ## See Also - [Architecture Documentation](../architecture/coredns-architecture.md)\\n- [API Reference](../api/dns-api.md)\\n- [Orchestrator Integration](../integration/orchestrator-dns.md)\\n- KCL Schema Reference --- ## Quick Reference **Quick command reference for CoreDNS DNS management** --- ### Installation ```bash\\n# Install CoreDNS binary\\nprovisioning dns install # Install specific version\\nprovisioning dns install 1.11.1\\n```plaintext --- ### Service Management ```bash\\n# Status\\nprovisioning dns status # Start\\nprovisioning dns start # Stop\\nprovisioning dns stop # Restart\\nprovisioning dns restart # Reload (graceful)\\nprovisioning dns reload # Logs\\nprovisioning dns logs\\nprovisioning dns logs --follow\\nprovisioning dns logs --lines 100 # Health\\nprovisioning dns health\\n```plaintext --- ### Zone Management ```bash\\n# List zones\\nprovisioning dns zone list # Create zone\\nprovisioning dns zone create myapp.local # Show zone records\\nprovisioning dns zone show provisioning.local\\nprovisioning dns zone show provisioning.local --format json # Delete zone\\nprovisioning dns zone 
delete myapp.local\\nprovisioning dns zone delete myapp.local --force\\n```plaintext --- ### Record Management ```bash\\n# Add A record\\nprovisioning dns record add server-01 A 10.0.1.10 # Add with custom TTL\\nprovisioning dns record add server-01 A 10.0.1.10 --ttl 600 # Add with comment\\nprovisioning dns record add server-01 A 10.0.1.10 --comment \\"Web server\\" # Add to specific zone\\nprovisioning dns record add server-01 A 10.0.1.10 --zone myapp.local # Add CNAME\\nprovisioning dns record add web CNAME server-01.provisioning.local # Add MX\\nprovisioning dns record add @ MX mail.example.com --priority 10 # Add TXT\\nprovisioning dns record add @ TXT \\"v=spf1 mx -all\\" # Remove record\\nprovisioning dns record remove server-01\\nprovisioning dns record remove server-01 --zone myapp.local # Update record\\nprovisioning dns record update server-01 A 10.0.1.20 # List records\\nprovisioning dns record list\\nprovisioning dns record list --zone myapp.local\\nprovisioning dns record list --format json\\n```plaintext --- ### DNS Queries ```bash\\n# Query A record\\nprovisioning dns query server-01 # Query CNAME\\nprovisioning dns query web --type CNAME # Query from local CoreDNS\\nprovisioning dns query server-01 --server 127.0.0.1 --port 5353 # Using dig\\ndig @127.0.0.1 -p 5353 server-01.provisioning.local\\ndig @127.0.0.1 -p 5353 provisioning.local SOA\\n```plaintext --- ### Configuration ```bash\\n# Show configuration\\nprovisioning dns config show # Validate configuration\\nprovisioning dns config validate # Generate Corefile\\nprovisioning dns config generate\\n```plaintext --- ### Docker Deployment ```bash\\n# Start Docker container\\nprovisioning dns docker start # Status\\nprovisioning dns docker status # Logs\\nprovisioning dns docker logs\\nprovisioning dns docker logs --follow # Restart\\nprovisioning dns docker restart # Stop\\nprovisioning dns docker stop # Health\\nprovisioning dns docker health # Remove\\nprovisioning dns docker 
remove\\nprovisioning dns docker remove --volumes\\nprovisioning dns docker remove --force # Pull image\\nprovisioning dns docker pull\\nprovisioning dns docker pull --version 1.11.1 # Update\\nprovisioning dns docker update # Show config\\nprovisioning dns docker config\\n```plaintext --- ### Common Workflows #### Initial Setup ```bash\\n# 1. Install\\nprovisioning dns install # 2. Start\\nprovisioning dns start # 3. Verify\\nprovisioning dns status\\nprovisioning dns health\\n```plaintext #### Add Server ```bash\\n# Add DNS record for new server\\nprovisioning dns record add web-01 A 10.0.1.10 # Verify\\nprovisioning dns query web-01\\n```plaintext #### Create Custom Zone ```bash\\n# 1. Create zone\\nprovisioning dns zone create myapp.local # 2. Add records\\nprovisioning dns record add web-01 A 10.0.1.10 --zone myapp.local\\nprovisioning dns record add api CNAME web-01.myapp.local --zone myapp.local # 3. List records\\nprovisioning dns record list --zone myapp.local # 4. Query\\ndig @127.0.0.1 -p 5353 web-01.myapp.local\\n```plaintext #### Docker Setup ```bash\\n# 1. Start container\\nprovisioning dns docker start # 2. Check status\\nprovisioning dns docker status # 3. Add records\\nprovisioning dns record add server-01 A 10.0.1.10 # 4. 
Query\\ndig @127.0.0.1 -p 5353 server-01.provisioning.local\\n```plaintext --- ### Troubleshooting ```bash\\n# Check if CoreDNS is running\\nprovisioning dns status\\nps aux | grep coredns # Check port usage\\nlsof -i :5353\\nnetstat -an | grep 5353 # View logs\\nprovisioning dns logs\\ntail -f ~/.provisioning/coredns/coredns.log # Validate configuration\\nprovisioning dns config validate # Test DNS query\\ndig @127.0.0.1 -p 5353 provisioning.local SOA # Restart service\\nprovisioning dns restart # For Docker\\nprovisioning dns docker logs\\nprovisioning dns docker health\\ndocker ps -a | grep coredns\\n```plaintext --- ### File Locations ```bash\\n# Binary\\n~/.provisioning/bin/coredns # Corefile\\n~/.provisioning/coredns/Corefile # Zone files\\n~/.provisioning/coredns/zones/ # Logs\\n~/.provisioning/coredns/coredns.log # PID file\\n~/.provisioning/coredns/coredns.pid # Docker compose\\nprovisioning/config/coredns/docker-compose.yml\\n```plaintext --- ### Configuration Example ```kcl\\nimport provisioning.coredns as dns coredns_config: dns.CoreDNSConfig = { mode = \\"local\\" local = { enabled = True deployment_type = \\"binary\\" # or \\"docker\\" port = 5353 zones = [\\"provisioning.local\\", \\"myapp.local\\"] } dynamic_updates = { enabled = True auto_register_servers = True } upstream = [\\"8.8.8.8\\", \\"1.1.1.1\\"]\\n}\\n```plaintext --- ### Environment Variables ```bash\\n# None required - configuration via KCL\\n```plaintext --- ### Default Values | Setting | Default |\\n|---------|---------|\\n| Port | 5353 |\\n| Zones | [\\"provisioning.local\\"] |\\n| Upstream | [\\"8.8.8.8\\", \\"1.1.1.1\\"] |\\n| TTL | 300 |\\n| Deployment | binary |\\n| Auto-start | true |\\n| Logging | enabled |\\n| Metrics | enabled |\\n| Metrics Port | 9153 | --- ## See Also - [Complete Guide](COREDNS_GUIDE.md) - Full documentation\\n- Implementation Summary - Technical details\\n- KCL Schema - Configuration schema --- **Last Updated**: 2025-10-06\\n**Version**: 
1.0.0","breadcrumbs":"CoreDNS Guide » Custom Corefile Plugins","id":"1192","title":"Custom Corefile Plugins"},"1193":{"body":"","breadcrumbs":"Backup Recovery » Backup and Recovery","id":"1193","title":"Backup and Recovery"},"1194":{"body":"","breadcrumbs":"Deployment » Deployment Guide","id":"1194","title":"Deployment Guide"},"1195":{"body":"","breadcrumbs":"Monitoring » Monitoring Guide","id":"1195","title":"Monitoring Guide"},"1196":{"body":"Status : ✅ PRODUCTION READY Version : 1.0.0 Last Verified : 2025-12-09","breadcrumbs":"Production Readiness Checklist » Production Readiness Checklist","id":"1196","title":"Production Readiness Checklist"},"1197":{"body":"The Provisioning Setup System is production-ready for enterprise deployment. All components have been tested, validated, and verified to meet production standards.","breadcrumbs":"Production Readiness Checklist » Executive Summary","id":"1197","title":"Executive Summary"},"1198":{"body":"✅ Code Quality : 100% Nushell 0.109 compliant ✅ Test Coverage : 33/33 tests passing (100% pass rate) ✅ Security : Enterprise-grade security controls ✅ Performance : Sub-second response times ✅ Documentation : Comprehensive user and admin guides ✅ Reliability : Graceful error handling and fallbacks","breadcrumbs":"Production Readiness Checklist » Quality Metrics","id":"1198","title":"Quality Metrics"},"1199":{"body":"","breadcrumbs":"Production Readiness Checklist » Pre-Deployment Verification","id":"1199","title":"Pre-Deployment Verification"},"12":{"body":"provisioning/docs/src/\\n├── README.md (this file) # Documentation hub\\n├── getting-started/ # Getting started guides\\n│ ├── installation-guide.md\\n│ ├── getting-started.md\\n│ └── quickstart-cheatsheet.md\\n├── architecture/ # System architecture\\n│ ├── adr/ # Architecture Decision Records\\n│ ├── design-principles.md\\n│ ├── integration-patterns.md\\n│ ├── system-overview.md\\n│ └── ... 
(and 10+ more architecture docs)\\n├── infrastructure/ # Infrastructure guides\\n│ ├── cli-reference.md\\n│ ├── workspace-setup.md\\n│ ├── workspace-switching-guide.md\\n│ └── infrastructure-management.md\\n├── api-reference/ # API documentation\\n│ ├── rest-api.md\\n│ ├── websocket.md\\n│ ├── integration-examples.md\\n│ └── sdks.md\\n├── development/ # Developer guides\\n│ ├── README.md\\n│ ├── implementation-guide.md\\n│ ├── quick-provider-guide.md\\n│ ├── taskserv-developer-guide.md\\n│ └── ... (15+ more developer docs)\\n├── guides/ # How-to guides\\n│ ├── from-scratch.md\\n│ ├── update-infrastructure.md\\n│ └── customize-infrastructure.md\\n├── operations/ # Operations guides\\n│ ├── service-management-guide.md\\n│ ├── coredns-guide.md\\n│ └── ... (more operations docs)\\n├── security/ # Security docs\\n├── integration/ # Integration guides\\n├── testing/ # Testing docs\\n├── configuration/ # Configuration docs\\n├── troubleshooting/ # Troubleshooting guides\\n└── quick-reference/ # Quick references\\n```plaintext --- ## Key Concepts ### Infrastructure as Code (IaC) The provisioning platform uses **declarative configuration** to manage infrastructure. Instead of manually creating resources, you define what you want in Nickel configuration files, and the system makes it happen. 
### Mode-Based Architecture The system supports four operational modes: - **Solo**: Single developer local development\\n- **Multi-user**: Team collaboration with shared services\\n- **CI/CD**: Automated pipeline execution\\n- **Enterprise**: Production deployment with strict compliance ### Extension System Extensibility through: - **Providers**: Cloud platform integrations (AWS, UpCloud, Local)\\n- **Task Services**: Infrastructure components (Kubernetes, databases, etc.)\\n- **Clusters**: Complete deployment configurations ### OCI-Native Distribution Extensions and packages distributed as OCI artifacts, enabling: - Industry-standard packaging\\n- Efficient caching and bandwidth\\n- Version pinning and rollback\\n- Air-gapped deployments --- ## Documentation by Role ### For New Users 1. Start with **[Installation Guide](getting-started/installation-guide.md)**\\n2. Read **[Getting Started](getting-started/getting-started.md)**\\n3. Follow **[From Scratch Guide](guides/from-scratch.md)**\\n4. Reference **[Quickstart Cheatsheet](guides/quickstart-cheatsheet.md)** ### For Developers 1. Review **[System Overview](architecture/system-overview.md)**\\n2. Study **[Design Principles](architecture/design-principles.md)**\\n3. Read relevant **[ADRs](architecture/)**\\n4. Follow **[Development Guide](development/README.md)**\\n5. Reference **KCL Quick Reference** ### For Operators 1. Understand **[Mode System](infrastructure/mode-system)**\\n2. Learn **[Service Management](operations/service-management-guide.md)**\\n3. Review **[Infrastructure Management](infrastructure/infrastructure-management.md)**\\n4. Study **[OCI Registry](integration/oci-registry-guide.md)** ### For Architects 1. Read **[System Overview](architecture/system-overview.md)**\\n2. Study all **[ADRs](architecture/)**\\n3. Review **[Integration Patterns](architecture/integration-patterns.md)**\\n4. 
Understand **[Multi-Repo Architecture](architecture/multi-repo-architecture.md)** --- ## System Capabilities ### ✅ Infrastructure Automation - Multi-cloud support (AWS, UpCloud, Local)\\n- Declarative configuration with KCL\\n- Automated dependency resolution\\n- Batch operations with rollback ### ✅ Workflow Orchestration - Hybrid Rust/Nushell orchestration\\n- Checkpoint-based recovery\\n- Parallel execution with limits\\n- Real-time monitoring ### ✅ Test Environments - Containerized testing\\n- Multi-node cluster simulation\\n- Topology templates\\n- Automated cleanup ### ✅ Mode-Based Operation - Solo: Local development\\n- Multi-user: Team collaboration\\n- CI/CD: Automated pipelines\\n- Enterprise: Production deployment ### ✅ Extension Management - OCI-native distribution\\n- Automatic dependency resolution\\n- Version management\\n- Local and remote sources --- ## Key Achievements ### 🚀 Batch Workflow System (v3.1.0) - Provider-agnostic batch operations\\n- Mixed provider support (UpCloud + AWS + local)\\n- Dependency resolution with soft/hard dependencies\\n- Real-time monitoring and rollback ### 🏗️ Hybrid Orchestrator (v3.0.0) - Solves Nushell deep call stack limitations\\n- Preserves all business logic\\n- REST API for external integration\\n- Checkpoint-based state management ### ⚙️ Configuration System (v2.0.0) - Migrated from ENV to config-driven\\n- Hierarchical configuration loading\\n- Variable interpolation\\n- True IaC without hardcoded fallbacks ### 🎯 Modular CLI (v3.2.0) - 84% reduction in main file size\\n- Domain-driven handlers\\n- 80+ shortcuts\\n- Bi-directional help system ### 🧪 Test Environment Service (v3.4.0) - Automated containerized testing\\n- Multi-node cluster topologies\\n- CI/CD integration ready\\n- Template-based configurations ### 🔄 Workspace Switching (v2.0.5) - Centralized workspace management\\n- Single-command workspace switching\\n- Active workspace tracking\\n- User preference system --- ## Technology Stack | Component | 
Technology | Purpose |\\n|-----------|------------|---------|\\n| **Core CLI** | Nushell 0.107.1 | Shell and scripting |\\n| **Configuration** | KCL 0.11.2 | Type-safe IaC |\\n| **Orchestrator** | Rust | High-performance coordination |\\n| **Templates** | Jinja2 (nu_plugin_tera) | Code generation |\\n| **Secrets** | SOPS 3.10.2 + Age 1.2.1 | Encryption |\\n| **Distribution** | OCI (skopeo/crane/oras) | Artifact management | --- ## Support ### Getting Help - **Documentation**: You\'re reading it!\\n- **Quick Reference**: Run `provisioning sc` or `provisioning guide quickstart`\\n- **Help System**: Run `provisioning help`\\n- **Interactive Shell**: Run `provisioning nu` for Nushell REPL ### Reporting Issues - Check **[Troubleshooting Guide](infrastructure/troubleshooting-guide.md)**\\n- Review **[FAQ](troubleshooting/troubleshooting-guide.md)**\\n- Enable debug mode: `provisioning --debug `\\n- Check logs: `provisioning platform logs ` --- ## Contributing This project welcomes contributions! 
See **[Development Guide](development/README.md)** for: - Development setup\\n- Code style guidelines\\n- Testing requirements\\n- Pull request process --- ## License [Add license information] --- ## Version History | Version | Date | Major Changes |\\n|---------|------|---------------|\\n| **3.5.0** | 2025-10-06 | Mode system, OCI registry, comprehensive documentation |\\n| **3.4.0** | 2025-10-06 | Test environment service |\\n| **3.3.0** | 2025-09-30 | Interactive guides system |\\n| **3.2.0** | 2025-09-30 | Modular CLI refactoring |\\n| **3.1.0** | 2025-09-25 | Batch workflow system |\\n| **3.0.0** | 2025-09-25 | Hybrid orchestrator architecture |\\n| **2.0.5** | 2025-10-02 | Workspace switching system |\\n| **2.0.0** | 2025-09-23 | Configuration system migration | --- **Maintained By**: Provisioning Team\\n**Last Review**: 2025-10-06\\n**Next Review**: 2026-01-06","breadcrumbs":"Home » Documentation Structure","id":"12","title":"Documentation Structure"},"120":{"body":"Software Version Purpose Nushell 0.107.1+ Shell and scripting language KCL 0.11.2+ Configuration language Docker 20.10+ Container runtime (for platform services) SOPS 3.10.2+ Secrets management Age 1.2.1+ Encryption tool","breadcrumbs":"Prerequisites » Core Dependencies","id":"120","title":"Core Dependencies"},"1200":{"body":"Nushell 0.109.0 or higher bash shell available One deployment tool (Docker/Kubernetes/SSH/systemd) 2+ CPU cores (4+ recommended) 4+ GB RAM (8+ recommended) Network connectivity (optional for offline mode)","breadcrumbs":"Production Readiness Checklist » 1. System Requirements ✅","id":"1200","title":"1. System Requirements ✅"},"1201":{"body":"All 9 modules passing syntax validation 46 total issues identified and resolved Nushell 0.109 compatibility verified Code style guidelines followed No hardcoded credentials or secrets","breadcrumbs":"Production Readiness Checklist » 2. Code Quality ✅","id":"1201","title":"2. 
Code Quality ✅"},"1202":{"body":"Unit tests: 33/33 passing Integration tests: All passing E2E tests: All passing Health check: Operational Deployment validation: Working","breadcrumbs":"Production Readiness Checklist » 3. Testing ✅","id":"1202","title":"3. Testing ✅"},"1203":{"body":"Configuration encryption ready Credential management secure No sensitive data in logs GDPR-compliant audit logging Role-based access control (RBAC) ready","breadcrumbs":"Production Readiness Checklist » 4. Security ✅","id":"1203","title":"4. Security ✅"},"1204":{"body":"User Quick Start Guide Comprehensive Setup Guide Installation Guide Troubleshooting Guide API Documentation","breadcrumbs":"Production Readiness Checklist » 5. Documentation ✅","id":"1204","title":"5. Documentation ✅"},"1205":{"body":"Installation script tested Health check script operational Configuration validation working Backup/restore functionality verified Migration path available","breadcrumbs":"Production Readiness Checklist » 6. Deployment Readiness ✅","id":"1205","title":"6. 
Deployment Readiness ✅"},"1206":{"body":"","breadcrumbs":"Production Readiness Checklist » Pre-Production Checklist","id":"1206","title":"Pre-Production Checklist"},"1207":{"body":"Team trained on provisioning basics Admin team trained on configuration management Support team trained on troubleshooting Operations team ready for deployment Security team reviewed security controls","breadcrumbs":"Production Readiness Checklist » Team Preparation","id":"1207","title":"Team Preparation"},"1208":{"body":"Target deployment environment prepared Network connectivity verified Required tools installed and tested Backup systems in place Monitoring configured","breadcrumbs":"Production Readiness Checklist » Infrastructure Preparation","id":"1208","title":"Infrastructure Preparation"},"1209":{"body":"Provider credentials securely stored Network configuration planned Workspace structure defined Deployment strategy documented Rollback plan prepared","breadcrumbs":"Production Readiness Checklist » Configuration Preparation","id":"1209","title":"Configuration Preparation"},"121":{"body":"Software Version Purpose Podman 4.0+ Alternative container runtime OrbStack Latest macOS-optimized container runtime K9s 0.50.6+ Kubernetes management interface glow Latest Markdown renderer for guides bat Latest Syntax highlighting for file viewing","breadcrumbs":"Prerequisites » Optional Dependencies","id":"121","title":"Optional Dependencies"},"1210":{"body":"System installed on staging environment All capabilities tested Health checks passing Full deployment scenario tested Failover procedures tested","breadcrumbs":"Production Readiness Checklist » Testing in Production-Like Environment","id":"1210","title":"Testing in Production-Like Environment"},"1211":{"body":"","breadcrumbs":"Production Readiness Checklist » Deployment Steps","id":"1211","title":"Deployment Steps"},"1212":{"body":"# 1. Run installation script\\n./scripts/install-provisioning.sh # 2. 
Verify installation\\nprovisioning -v # 3. Run health check\\nnu scripts/health-check.nu","breadcrumbs":"Production Readiness Checklist » Phase 1: Installation (30 minutes)","id":"1212","title":"Phase 1: Installation (30 minutes)"},"1213":{"body":"# 1. Run setup wizard\\nprovisioning setup system --interactive # 2. Validate configuration\\nprovisioning setup validate # 3. Test health\\nprovisioning platform health","breadcrumbs":"Production Readiness Checklist » Phase 2: Initial Configuration (15 minutes)","id":"1213","title":"Phase 2: Initial Configuration (15 minutes)"},"1214":{"body":"# 1. Create production workspace\\nprovisioning setup workspace production # 2. Configure providers\\nprovisioning setup provider upcloud --config config.toml # 3. Validate workspace\\nprovisioning setup validate","breadcrumbs":"Production Readiness Checklist » Phase 3: Workspace Setup (10 minutes)","id":"1214","title":"Phase 3: Workspace Setup (10 minutes)"},"1215":{"body":"# 1. Run comprehensive health check\\nprovisioning setup validate --verbose # 2. Test deployment (dry-run)\\nprovisioning server create --check # 3. 
Verify no errors\\n# Review output and confirm readiness","breadcrumbs":"Production Readiness Checklist » Phase 4: Verification (10 minutes)","id":"1215","title":"Phase 4: Verification (10 minutes)"},"1216":{"body":"","breadcrumbs":"Production Readiness Checklist » Post-Deployment Verification","id":"1216","title":"Post-Deployment Verification"},"1217":{"body":"All services running and healthy Configuration loaded correctly First test deployment successful Monitoring and logging working Backup system operational","breadcrumbs":"Production Readiness Checklist » Immediate (Within 1 hour)","id":"1217","title":"Immediate (Within 1 hour)"},"1218":{"body":"Run health checks daily Monitor error logs Verify backup operations Check workspace synchronization Validate credentials refresh","breadcrumbs":"Production Readiness Checklist » Daily (First week)","id":"1218","title":"Daily (First week)"},"1219":{"body":"Run comprehensive validation Test backup/restore procedures Review audit logs Performance analysis Security review","breadcrumbs":"Production Readiness Checklist » Weekly (First month)","id":"1219","title":"Weekly (First month)"},"122":{"body":"Before proceeding, verify your system has the core dependencies installed:","breadcrumbs":"Prerequisites » Installation Verification","id":"122","title":"Installation Verification"},"1220":{"body":"Weekly health checks Monthly comprehensive validation Quarterly security review Annual disaster recovery test","breadcrumbs":"Production Readiness Checklist » Ongoing (Production)","id":"1220","title":"Ongoing (Production)"},"1221":{"body":"","breadcrumbs":"Production Readiness Checklist » Troubleshooting Reference","id":"1221","title":"Troubleshooting Reference"},"1222":{"body":"Solution : # Check Nushell installation\\nnu --version # Run with debug\\nprovisioning -x setup system --interactive","breadcrumbs":"Production Readiness Checklist » Issue: Setup wizard won\'t start","id":"1222","title":"Issue: Setup wizard won\'t 
start"},"1223":{"body":"Solution : # Check configuration\\nprovisioning setup validate --verbose # View configuration paths\\nprovisioning info paths # Reset and reconfigure\\nprovisioning setup reset --confirm\\nprovisioning setup system --interactive","breadcrumbs":"Production Readiness Checklist » Issue: Configuration validation fails","id":"1223","title":"Issue: Configuration validation fails"},"1224":{"body":"Solution : # Run detailed health check\\nnu scripts/health-check.nu # Check specific service\\nprovisioning platform status # Restart services if needed\\nprovisioning platform restart","breadcrumbs":"Production Readiness Checklist » Issue: Health check shows warnings","id":"1224","title":"Issue: Health check shows warnings"},"1225":{"body":"Solution : # Dry-run to see what would happen\\nprovisioning server create --check # Check logs\\nprovisioning logs tail -f # Verify provider credentials\\nprovisioning setup validate provider upcloud","breadcrumbs":"Production Readiness Checklist » Issue: Deployment fails","id":"1225","title":"Issue: Deployment fails"},"1226":{"body":"Expected performance on modern hardware (4+ cores, 8+ GB RAM): Operation Expected Time Maximum Time Setup system 2-5 seconds 10 seconds Health check < 3 seconds 5 seconds Configuration validation < 500ms 1 second Server creation < 30 seconds 60 seconds Workspace switch < 100ms 500ms","breadcrumbs":"Production Readiness Checklist » Performance Baselines","id":"1226","title":"Performance Baselines"},"1227":{"body":"","breadcrumbs":"Production Readiness Checklist » Support and Escalation","id":"1227","title":"Support and Escalation"},"1228":{"body":"Review troubleshooting guide Check system health Review logs Restart services if needed","breadcrumbs":"Production Readiness Checklist » Level 1 Support (Team)","id":"1228","title":"Level 1 Support (Team)"},"1229":{"body":"Review configuration Analyze performance metrics Check resource constraints Plan optimization","breadcrumbs":"Production 
Readiness Checklist » Level 2 Support (Engineering)","id":"1229","title":"Level 2 Support (Engineering)"},"123":{"body":"# Check Nushell version\\nnu --version # Expected output: 0.107.1 or higher","breadcrumbs":"Prerequisites » Nushell","id":"123","title":"Nushell"},"1230":{"body":"Code-level debugging Feature requests Bug fixes Architecture changes","breadcrumbs":"Production Readiness Checklist » Level 3 Support (Development)","id":"1230","title":"Level 3 Support (Development)"},"1231":{"body":"If issues occur post-deployment: # 1. Take backup of current configuration\\nprovisioning setup backup --path rollback-$(date +%Y%m%d-%H%M%S).tar.gz # 2. Stop running deployments\\nprovisioning workflow stop --all # 3. Restore from previous backup\\nprovisioning setup restore --path # 4. Verify restoration\\nprovisioning setup validate --verbose # 5. Run health check\\nnu scripts/health-check.nu","breadcrumbs":"Production Readiness Checklist » Rollback Procedure","id":"1231","title":"Rollback Procedure"},"1232":{"body":"System is production-ready when: ✅ All tests passing ✅ Health checks show no critical issues ✅ Configuration validates successfully ✅ Team trained and ready ✅ Documentation complete ✅ Backup and recovery tested ✅ Monitoring configured ✅ Support procedures established","breadcrumbs":"Production Readiness Checklist » Success Criteria","id":"1232","title":"Success Criteria"},"1233":{"body":"Technical Lead : System validated and tested Operations : Infrastructure ready and monitored Security : Security controls reviewed and approved Management : Deployment approved for production Verification Date : 2025-12-09 Status : ✅ APPROVED FOR PRODUCTION DEPLOYMENT Next Review : 2025-12-16 (Weekly)","breadcrumbs":"Production Readiness Checklist » Sign-Off","id":"1233","title":"Sign-Off"},"1234":{"body":"Version : 1.0.0 Date : 2025-10-08 Audience : Platform Administrators, SREs, Security Team Training Duration : 45-60 minutes Certification : Required 
annually","breadcrumbs":"Break Glass Training Guide » Break-Glass Emergency Access - Training Guide","id":"1234","title":"Break-Glass Emergency Access - Training Guide"},"1235":{"body":"Break-glass is an emergency access procedure that allows authorized personnel to bypass normal security controls during critical incidents (e.g., production outages, security breaches, data loss).","breadcrumbs":"Break Glass Training Guide » 🚨 What is Break-Glass?","id":"1235","title":"🚨 What is Break-Glass?"},"1236":{"body":"Last Resort Only : Use only when normal access is insufficient Multi-Party Approval : Requires 2+ approvers from different teams Time-Limited : Maximum 4 hours, auto-revokes Enhanced Audit : 7-year retention, immutable logs Real-Time Alerts : Security team notified immediately","breadcrumbs":"Break Glass Training Guide » Key Principles","id":"1236","title":"Key Principles"},"1237":{"body":"When to Use Break-Glass When NOT to Use Roles & Responsibilities Break-Glass Workflow Using the System Examples Auditing & Compliance Post-Incident Review FAQ Emergency Contacts","breadcrumbs":"Break Glass Training Guide » 📋 Table of Contents","id":"1237","title":"📋 Table of Contents"},"1238":{"body":"","breadcrumbs":"Break Glass Training Guide » When to Use Break-Glass","id":"1238","title":"When to Use Break-Glass"},"1239":{"body":"Scenario Example Urgency Production Outage Database cluster unresponsive, affecting all users Critical Security Incident Active breach detected, need immediate containment Critical Data Loss Accidental deletion of critical data, need restore High System Failure Infrastructure failure requiring emergency fixes High Locked Out Normal admin accounts compromised, need recovery High","breadcrumbs":"Break Glass Training Guide » ✅ Valid Emergency Scenarios","id":"1239","title":"✅ Valid Emergency Scenarios"},"124":{"body":"# Check KCL version\\nkcl --version # Expected output: 0.11.2 or higher","breadcrumbs":"Prerequisites » 
KCL","id":"124","title":"KCL"},"1240":{"body":"Use break-glass if ALL apply: Production systems affected OR security incident Normal access insufficient OR unavailable Immediate action required (cannot wait for approval process) Clear justification for emergency access Incident properly documented","breadcrumbs":"Break Glass Training Guide » Criteria Checklist","id":"1240","title":"Criteria Checklist"},"1241":{"body":"","breadcrumbs":"Break Glass Training Guide » When NOT to Use","id":"1241","title":"When NOT to Use"},"1242":{"body":"Scenario Why Not Alternative Forgot password Not an emergency Use password reset Routine maintenance Can be scheduled Use normal change process Convenience Normal process \\"too slow\\" Follow standard approval Deadline pressure Business pressure ≠ emergency Plan ahead Testing Want to test emergency access Use dev environment","breadcrumbs":"Break Glass Training Guide » ❌ Invalid Scenarios (Do NOT Use Break-Glass)","id":"1242","title":"❌ Invalid Scenarios (Do NOT Use Break-Glass)"},"1243":{"body":"Immediate suspension of break-glass privileges Security team investigation Disciplinary action (up to termination) All actions audited and reviewed","breadcrumbs":"Break Glass Training Guide » Consequences of Misuse","id":"1243","title":"Consequences of Misuse"},"1244":{"body":"","breadcrumbs":"Break Glass Training Guide » Roles & Responsibilities","id":"1244","title":"Roles & Responsibilities"},"1245":{"body":"Who : Platform Admin, SRE on-call, Security Officer Responsibilities : Assess if situation warrants emergency access Provide clear justification and reason Document incident timeline Use access only for stated purpose Revoke access immediately after resolution","breadcrumbs":"Break Glass Training Guide » Requester","id":"1245","title":"Requester"},"1246":{"body":"Who : 2+ from different teams (Security, Platform, Engineering Leadership) Responsibilities : Verify emergency is genuine Assess risk of granting access Review requester\'s 
justification Monitor usage during active session Participate in post-incident review","breadcrumbs":"Break Glass Training Guide » Approvers","id":"1246","title":"Approvers"},"1247":{"body":"Who : Security Operations team Responsibilities : Monitor all break-glass activations (real-time) Review audit logs during session Alert on suspicious activity Lead post-incident review Update policies based on learnings","breadcrumbs":"Break Glass Training Guide » Security Team","id":"1247","title":"Security Team"},"1248":{"body":"","breadcrumbs":"Break Glass Training Guide » Break-Glass Workflow","id":"1248","title":"Break-Glass Workflow"},"1249":{"body":"┌─────────────────────────────────────────────────────────┐\\n│ 1. Requester submits emergency access request │\\n│ - Reason: \\"Production database cluster down\\" │\\n│ - Justification: \\"Need direct SSH to diagnose\\" │\\n│ - Duration: 2 hours │\\n│ - Resources: [\\"database/*\\"] │\\n└─────────────────────────────────────────────────────────┘ ↓\\n┌─────────────────────────────────────────────────────────┐\\n│ 2. System creates request ID: BG-20251008-001 │\\n│ - Sends notifications to approver pool │\\n│ - Starts approval timeout (1 hour) │\\n└─────────────────────────────────────────────────────────┘\\n```plaintext ### Phase 2: Approval (10-15 minutes) ```plaintext\\n┌─────────────────────────────────────────────────────────┐\\n│ 3. First approver reviews request │\\n│ - Verifies emergency is real │\\n│ - Checks requester\'s justification │\\n│ - Approves with reason │\\n└─────────────────────────────────────────────────────────┘ ↓\\n┌─────────────────────────────────────────────────────────┐\\n│ 4. Second approver (different team) reviews │\\n│ - Independent verification │\\n│ - Approves with reason │\\n└─────────────────────────────────────────────────────────┘ ↓\\n┌─────────────────────────────────────────────────────────┐\\n│ 5. 
System validates approvals │\\n│ - ✓ Min 2 approvers │\\n│ - ✓ Different teams │\\n│ - ✓ Within approval window │\\n│ - Status → APPROVED │\\n└─────────────────────────────────────────────────────────┘\\n```plaintext ### Phase 3: Activation (1-2 minutes) ```plaintext\\n┌─────────────────────────────────────────────────────────┐\\n│ 6. Requester activates approved session │\\n│ - Receives emergency JWT token │\\n│ - Token valid for 2 hours (or requested duration) │\\n│ - All actions logged with session ID │\\n└─────────────────────────────────────────────────────────┘ ↓\\n┌─────────────────────────────────────────────────────────┐\\n│ 7. Security team notified │\\n│ - Real-time alert: \\"Break-glass activated\\" │\\n│ - Monitoring dashboard shows active session │\\n└─────────────────────────────────────────────────────────┘\\n```plaintext ### Phase 4: Usage (Variable) ```plaintext\\n┌─────────────────────────────────────────────────────────┐\\n│ 8. Requester performs emergency actions │\\n│ - Uses emergency token for access │\\n│ - Every action audited │\\n│ - Security team monitors in real-time │\\n└─────────────────────────────────────────────────────────┘ ↓\\n┌─────────────────────────────────────────────────────────┐\\n│ 9. Background monitoring │\\n│ - Checks for suspicious activity │\\n│ - Enforces inactivity timeout (30 min) │\\n│ - Alerts on unusual patterns │\\n└─────────────────────────────────────────────────────────┘\\n```plaintext ### Phase 5: Revocation (Immediate) ```plaintext\\n┌─────────────────────────────────────────────────────────┐\\n│ 10. Session ends (one of): │\\n│ - Manual revocation by requester │\\n│ - Expiration (max 4 hours) │\\n│ - Inactivity timeout (30 minutes) │\\n│ - Security team revocation │\\n└─────────────────────────────────────────────────────────┘ ↓\\n┌─────────────────────────────────────────────────────────┐\\n│ 11. 
System audit │\\n│ - All actions logged (7-year retention) │\\n│ - Incident report generated │\\n│ - Post-incident review scheduled │\\n└─────────────────────────────────────────────────────────┘\\n```plaintext --- ## Using the System ### CLI Commands #### 1. Request Emergency Access ```bash\\nprovisioning break-glass request \\\\ \\"Production database cluster unresponsive\\" \\\\ --justification \\"Need direct SSH access to diagnose PostgreSQL failure. All monitoring shows cluster down. Application completely offline affecting 10,000+ users.\\" \\\\ --resources \'[\\"database/*\\", \\"server/db-*\\"]\' \\\\ --duration 2hr # Output:\\n# ✓ Break-glass request created\\n# Request ID: BG-20251008-001\\n# Status: Pending Approval\\n# Approvers needed: 2\\n# Expires: 2025-10-08 11:30:00 (1 hour)\\n#\\n# Notifications sent to:\\n# - security-team@example.com\\n# - platform-admin@example.com\\n```plaintext #### 2. Approve Request (Approver) ```bash\\n# First approver (Security team)\\nprovisioning break-glass approve BG-20251008-001 \\\\ --reason \\"Emergency verified via incident INC-2025-234. Database cluster confirmed down, affecting production.\\" # Output:\\n# ✓ Approval granted\\n# Approver: alice@example.com (Security Team)\\n# Approvals: 1/2\\n# Status: Pending (need 1 more approval)\\n```plaintext ```bash\\n# Second approver (Platform team)\\nprovisioning break-glass approve BG-20251008-001 \\\\ --reason \\"Confirmed with monitoring. PostgreSQL master node unreachable. Emergency access justified.\\" # Output:\\n# ✓ Approval granted\\n# Approver: bob@example.com (Platform Team)\\n# Approvals: 2/2\\n# Status: APPROVED\\n#\\n# Requester can now activate session\\n```plaintext #### 3. 
Activate Session ```bash\\nprovisioning break-glass activate BG-20251008-001 # Output:\\n# ✓ Emergency session activated\\n# Session ID: BGS-20251008-001\\n# Token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9...\\n# Expires: 2025-10-08 12:30:00 (2 hours)\\n# Max inactivity: 30 minutes\\n#\\n# ⚠️ WARNING ⚠️\\n# - All actions are logged and monitored\\n# - Security team has been notified\\n# - Session will auto-revoke after 2 hours\\n# - Use ONLY for stated emergency purpose\\n#\\n# Export token:\\nexport EMERGENCY_TOKEN=\\"eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9...\\"\\n```plaintext #### 4. Use Emergency Access ```bash\\n# SSH to database server\\nprovisioning ssh connect db-master-01 \\\\ --token $EMERGENCY_TOKEN # Execute emergency commands\\nsudo systemctl status postgresql\\nsudo tail -f /var/log/postgresql/postgresql.log # Diagnose issue...\\n# Fix issue...\\n```plaintext #### 5. Revoke Session ```bash\\n# When done, immediately revoke\\nprovisioning break-glass revoke BGS-20251008-001 \\\\ --reason \\"Database cluster restored. PostgreSQL master node restarted successfully. All services online.\\" # Output:\\n# ✓ Emergency session revoked\\n# Duration: 47 minutes\\n# Actions performed: 23\\n# Audit log: /var/log/provisioning/break-glass/BGS-20251008-001.json\\n#\\n# Post-incident review scheduled: 2025-10-09 10:00am\\n```plaintext ### Web UI (Control Center) #### Request Flow 1. **Navigate**: Control Center → Security → Break-Glass\\n2. **Click**: \\"Request Emergency Access\\"\\n3. **Fill Form**: - Reason: \\"Production database cluster down\\" - Justification: (detailed description) - Duration: 2 hours - Resources: Select from dropdown or wildcard\\n4. **Submit**: Request sent to approvers #### Approver Flow 1. **Receive**: Email/Slack notification\\n2. **Navigate**: Control Center → Break-Glass → Pending Requests\\n3. **Review**: Request details, reason, justification\\n4. **Decision**: Approve or Deny\\n5. 
**Reason**: Provide approval/denial reason #### Monitor Active Sessions 1. **Navigate**: Control Center → Security → Break-Glass → Active Sessions\\n2. **View**: Real-time dashboard of active sessions - Who, What, When, How long - Actions performed (live) - Inactivity timer\\n3. **Revoke**: Emergency revoke button (if needed) --- ## Examples ### Example 1: Production Database Outage **Scenario**: PostgreSQL cluster unresponsive, affecting all users **Request**: ```bash\\nprovisioning break-glass request \\\\ \\"Production PostgreSQL cluster completely unresponsive\\" \\\\ --justification \\"Database cluster (3 nodes) not responding. All application services offline. 10,000+ users affected. Need direct SSH to diagnose and restore. Monitoring shows all nodes down. Last known state: replication failure during routine backup.\\" \\\\ --resources \'[\\"database/*\\", \\"server/db-prod-*\\"]\' \\\\ --duration 2hr\\n```plaintext **Approval 1** (Security):\\n> \\"Verified incident INC-2025-234. Database monitoring confirms cluster down. Application completely offline. Emergency justified.\\" **Approval 2** (Platform):\\n> \\"Confirmed. PostgreSQL master and replicas unreachable. On-call SRE needs immediate access. Approved.\\" **Actions Taken**: 1. SSH to db-prod-01, db-prod-02, db-prod-03\\n2. Check PostgreSQL status: `systemctl status postgresql`\\n3. Review logs: `/var/log/postgresql/`\\n4. Diagnose: Disk full on master node\\n5. Fix: Clear old WAL files, restart PostgreSQL\\n6. Verify: Cluster restored, replication working\\n7. Revoke access **Outcome**: Cluster restored in 47 minutes. Root cause: Backup retention not working. --- ### Example 2: Security Incident **Scenario**: Suspicious activity detected, need immediate containment **Request**: ```bash\\nprovisioning break-glass request \\\\ \\"Active security breach detected - need immediate containment\\" \\\\ --justification \\"IDS alerts show unauthorized access from IP 203.0.113.42 to production API servers. 
Multiple failed sudo attempts. Need to isolate affected servers and investigate. Potential data exfiltration in progress.\\" \\\\ --resources \'[\\"server/api-prod-*\\", \\"firewall/*\\", \\"network/*\\"]\' \\\\ --duration 4hr\\n```plaintext **Approval 1** (Security):\\n> \\"Security incident SI-2025-089 confirmed. IDS shows sustained attack from external IP. Immediate containment required. Approved.\\" **Approval 2** (Engineering Director):\\n> \\"Concur with security assessment. Production impact acceptable vs risk of data breach. Approved.\\" **Actions Taken**: 1. Firewall block on 203.0.113.42\\n2. Isolate affected API servers\\n3. Snapshot servers for forensics\\n4. Review access logs\\n5. Identify compromised service account\\n6. Rotate credentials\\n7. Restore from clean backup\\n8. Re-enable servers with patched vulnerability **Outcome**: Breach contained in 3h 15min. No data loss. Vulnerability patched across fleet. --- ### Example 3: Accidental Data Deletion **Scenario**: Critical production data accidentally deleted **Request**: ```bash\\nprovisioning break-glass request \\\\ \\"Critical customer data accidentally deleted from production\\" \\\\ --justification \\"Database migration script ran against production instead of staging. Deleted 50,000+ customer records. Need immediate restore from backup before data loss is noticed. Normal restore process requires change approval (4-6 hours). Data loss window critical.\\" \\\\ --resources \'[\\"database/customers\\", \\"backup/*\\"]\' \\\\ --duration 3hr\\n```plaintext **Approval 1** (Platform):\\n> \\"Verified data deletion in production database. 50,284 records deleted at 10:42am. Backup available from 10:00am (42 minutes ago). Time-critical restore needed. Approved.\\" **Approval 2** (Security):\\n> \\"Risk assessment: Restore from trusted backup less risky than data loss. Emergency justified. Ensure post-incident review of deployment process. Approved.\\" **Actions Taken**: 1. 
Stop application writes to affected tables\\n2. Identify latest good backup (10:00am)\\n3. Restore deleted records from backup\\n4. Verify data integrity\\n5. Compare record counts\\n6. Re-enable application writes\\n7. Notify affected users (if any noticed) **Outcome**: Data restored in 1h 38min. Only 42 minutes of data lost (from backup to deletion). Zero customer impact. --- ## Auditing & Compliance ### What is Logged Every break-glass session logs: 1. **Request Details**: - Requester identity - Reason and justification - Requested resources - Requested duration - Timestamp 2. **Approval Process**: - Each approver identity - Approval/denial reason - Approval timestamp - Team affiliation 3. **Session Activity**: - Activation timestamp - Every action performed - Resources accessed - Commands executed - Inactivity periods 4. **Revocation**: - Revocation reason - Who revoked (system or manual) - Total duration - Final status ### Retention - **Break-glass logs**: 7 years (immutable)\\n- **Cannot be deleted**: Only anonymized for GDPR\\n- **Exported to SIEM**: Real-time ### Compliance Reports ```bash\\n# Generate break-glass usage report\\nprovisioning break-glass audit \\\\ --from \\"2025-01-01\\" \\\\ --to \\"2025-12-31\\" \\\\ --format pdf \\\\ --output break-glass-2025-report.pdf # Report includes:\\n# - Total break-glass activations\\n# - Average duration\\n# - Most common reasons\\n# - Approval times\\n# - Incidents resolved\\n# - Misuse incidents (if any)\\n```plaintext --- ## Post-Incident Review ### Within 24 Hours **Required attendees**: - Requester\\n- Approvers\\n- Security team\\n- Incident commander **Agenda**: 1. **Timeline Review**: What happened, when\\n2. **Actions Taken**: What was done with emergency access\\n3. **Outcome**: Was issue resolved? Any side effects?\\n4. **Process**: Did break-glass work as intended?\\n5. **Lessons Learned**: What can be improved? 
### Review Checklist - [ ] Was break-glass appropriate for this incident?\\n- [ ] Were approvals granted timely?\\n- [ ] Was access used only for stated purpose?\\n- [ ] Were any security policies violated?\\n- [ ] Could incident be prevented in future?\\n- [ ] Do we need policy updates?\\n- [ ] Do we need system changes? ### Output **Incident Report**: ```markdown\\n# Break-Glass Incident Report: BG-20251008-001 **Incident**: Production database cluster outage\\n**Duration**: 47 minutes\\n**Impact**: 10,000+ users, complete service outage ## Timeline\\n- 10:15: Incident detected\\n- 10:17: Break-glass requested\\n- 10:25: Approved (2/2)\\n- 10:27: Activated\\n- 11:02: Database restored\\n- 11:04: Session revoked ## Actions Taken\\n1. SSH access to database servers\\n2. Diagnosed disk full issue\\n3. Cleared old WAL files\\n4. Restarted PostgreSQL\\n5. Verified replication ## Root Cause\\nBackup retention job failed silently for 2 weeks, causing WAL files to accumulate until disk full. ## Prevention\\n- ✅ Add disk space monitoring alerts\\n- ✅ Fix backup retention job\\n- ✅ Test recovery procedures\\n- ✅ Implement WAL archiving to S3 ## Break-Glass Assessment\\n- ✓ Appropriate use\\n- ✓ Timely approvals\\n- ✓ No policy violations\\n- ✓ Access revoked promptly\\n```plaintext --- ## FAQ ### Q: How quickly can break-glass be activated? **A**: Typically 15-20 minutes: - 5 min: Request submission\\n- 10 min: Approvals (2 people)\\n- 2 min: Activation In extreme emergencies, approvers can be on standby. ### Q: Can I use break-glass for scheduled maintenance? **A**: No. Break-glass is for emergencies only. Schedule maintenance through normal change process. ### Q: What if I can\'t get 2 approvers? **A**: System requires 2 approvers from different teams. If unavailable: 1. Escalate to on-call manager\\n2. Contact security team directly\\n3. Use emergency contact list ### Q: Can approvers be from the same team? **A**: No. System enforces team diversity to prevent collusion. 
### Q: What if security team revokes my session? **A**: Security team can revoke for: - Suspicious activity\\n- Policy violation\\n- Incident resolved\\n- Misuse detected You\'ll receive immediate notification. Contact security team for details. ### Q: Can I extend an active session? **A**: No. Maximum duration is 4 hours. If you need more time, submit a new request with updated justification. ### Q: What happens if I forget to revoke? **A**: Session auto-revokes after: - Maximum duration (4 hours), OR\\n- Inactivity timeout (30 minutes) Always manually revoke when done. ### Q: Is break-glass monitored? **A**: Yes. Security team monitors in real-time: - Session activation alerts\\n- Action logging\\n- Suspicious activity detection\\n- Compliance verification ### Q: Can I practice break-glass? **A**: Yes, in **development environment only**: ```bash\\nPROVISIONING_ENV=dev provisioning break-glass request \\"Test emergency access procedure\\"\\n```plaintext Never practice in staging or production. --- ## Emergency Contacts ### During Incident | Role | Contact | Response Time |\\n|------|---------|---------------|\\n| **Security On-Call** | +1-555-SECURITY | 5 minutes |\\n| **Platform On-Call** | +1-555-PLATFORM | 5 minutes |\\n| **Engineering Director** | +1-555-ENG-DIR | 15 minutes | ### Escalation Path 1. **L1**: On-call SRE\\n2. **L2**: Platform team lead\\n3. **L3**: Engineering manager\\n4. **L4**: Director of Engineering\\n5. 
**L5**: CTO ### Communication Channels - **Incident Slack**: `#incidents`\\n- **Security Slack**: `#security-alerts`\\n- **Email**: `security-team@example.com`\\n- **PagerDuty**: Break-glass policy --- ## Training Certification **I certify that I have**: - [ ] Read and understood this training guide\\n- [ ] Understand when to use (and not use) break-glass\\n- [ ] Know the approval workflow\\n- [ ] Can use the CLI commands\\n- [ ] Understand auditing and compliance requirements\\n- [ ] Will follow post-incident review process **Signature**: _________________________\\n**Date**: _________________________\\n**Next Training Due**: _________________________ (1 year) --- **Version**: 1.0.0\\n**Maintained By**: Security Team\\n**Last Updated**: 2025-10-08\\n**Next Review**: 2026-10-08","breadcrumbs":"Break Glass Training Guide » Phase 1: Request (5 minutes)","id":"1249","title":"Phase 1: Request (5 minutes)"},"125":{"body":"# Check Docker version\\ndocker --version # Check Docker is running\\ndocker ps # Expected: Docker version 20.10+ and connection successful","breadcrumbs":"Prerequisites » Docker","id":"125","title":"Docker"},"1250":{"body":"Version : 1.0.0 Date : 2025-10-08 Audience : Platform Administrators, Security Teams Prerequisites : Understanding of Cedar policy language, Provisioning platform architecture","breadcrumbs":"Cedar Policies Production Guide » Cedar Policies Production Guide","id":"1250","title":"Cedar Policies Production Guide"},"1251":{"body":"Introduction Cedar Policy Basics Production Policy Strategy Policy Templates Policy Development Workflow Testing Policies Deployment Monitoring & Auditing Troubleshooting Best Practices","breadcrumbs":"Cedar Policies Production Guide » Table of Contents","id":"1251","title":"Table of Contents"},"1252":{"body":"Cedar policies control who can do what in the Provisioning platform. 
This guide helps you create, test, and deploy production-ready Cedar policies that balance security with operational efficiency.","breadcrumbs":"Cedar Policies Production Guide » Introduction","id":"1252","title":"Introduction"},"1253":{"body":"Fine-grained : Control access at resource + action level Context-aware : Decisions based on MFA, IP, time, approvals Auditable : Every decision is logged with policy ID Hot-reload : Update policies without restarting services Type-safe : Schema validation prevents errors","breadcrumbs":"Cedar Policies Production Guide » Why Cedar?","id":"1253","title":"Why Cedar?"},"1254":{"body":"","breadcrumbs":"Cedar Policies Production Guide » Cedar Policy Basics","id":"1254","title":"Cedar Policy Basics"},"1255":{"body":"permit ( principal, # Who (user, team, role) action, # What (create, delete, deploy) resource # Where (server, cluster, environment)\\n) when { condition # Context (MFA, IP, time)\\n};\\n```plaintext ### Entities | Type | Examples | Description |\\n|------|----------|-------------|\\n| **User** | `User::\\"alice\\"` | Individual users |\\n| **Team** | `Team::\\"platform-admin\\"` | User groups |\\n| **Role** | `Role::\\"Admin\\"` | Permission levels |\\n| **Resource** | `Server::\\"web-01\\"` | Infrastructure resources |\\n| **Environment** | `Environment::\\"production\\"` | Deployment targets | ### Actions | Category | Actions |\\n|----------|---------|\\n| **Read** | `read`, `list` |\\n| **Write** | `create`, `update`, `delete` |\\n| **Deploy** | `deploy`, `rollback` |\\n| **Admin** | `ssh`, `execute`, `admin` | --- ## Production Policy Strategy ### Security Levels #### Level 1: Development (Permissive) ```cedar\\n// Developers have full access to dev environment\\npermit ( principal in Team::\\"developers\\", action, resource in Environment::\\"development\\"\\n);\\n```plaintext #### Level 2: Staging (MFA Required) ```cedar\\n// All operations require MFA\\npermit ( principal in Team::\\"developers\\", action, 
resource in Environment::\\"staging\\"\\n) when { context.mfa_verified == true\\n};\\n```plaintext #### Level 3: Production (MFA + Approval) ```cedar\\n// Deployments require MFA + approval\\npermit ( principal in Team::\\"platform-admin\\", action in [Action::\\"deploy\\", Action::\\"delete\\"], resource in Environment::\\"production\\"\\n) when { context.mfa_verified == true && context has approval_id && context.approval_id.startsWith(\\"APPROVAL-\\")\\n};\\n```plaintext #### Level 4: Critical (Break-Glass Only) ```cedar\\n// Only emergency access\\npermit ( principal, action, resource in Resource::\\"production-database\\"\\n) when { context.emergency_access == true && context.session_approved == true\\n};\\n```plaintext --- ## Policy Templates ### 1. Role-Based Access Control (RBAC) ```cedar\\n// Admin: Full access\\npermit ( principal in Role::\\"Admin\\", action, resource\\n); // Operator: Server management + read clusters\\npermit ( principal in Role::\\"Operator\\", action in [ Action::\\"create\\", Action::\\"update\\", Action::\\"delete\\" ], resource is Server\\n); permit ( principal in Role::\\"Operator\\", action in [Action::\\"read\\", Action::\\"list\\"], resource is Cluster\\n); // Viewer: Read-only everywhere\\npermit ( principal in Role::\\"Viewer\\", action in [Action::\\"read\\", Action::\\"list\\"], resource\\n); // Auditor: Read audit logs only\\npermit ( principal in Role::\\"Auditor\\", action in [Action::\\"read\\", Action::\\"list\\"], resource is AuditLog\\n);\\n```plaintext ### 2. 
Team-Based Policies ```cedar\\n// Platform team: Infrastructure management\\npermit ( principal in Team::\\"platform\\", action in [ Action::\\"create\\", Action::\\"update\\", Action::\\"delete\\", Action::\\"deploy\\" ], resource in [Server, Cluster, Taskserv]\\n); // Security team: Access control + audit\\npermit ( principal in Team::\\"security\\", action, resource in [User, Role, AuditLog, BreakGlass]\\n); // DevOps team: Application deployments\\npermit ( principal in Team::\\"devops\\", action == Action::\\"deploy\\", resource in Environment::\\"production\\"\\n) when { context.mfa_verified == true && context.has_approval == true\\n};\\n```plaintext ### 3. Time-Based Restrictions ```cedar\\n// Deployments only during business hours\\npermit ( principal, action == Action::\\"deploy\\", resource in Environment::\\"production\\"\\n) when { context.time.hour >= 9 && context.time.hour <= 17 && context.time.weekday in [\\"Monday\\", \\"Tuesday\\", \\"Wednesday\\", \\"Thursday\\", \\"Friday\\"]\\n}; // Maintenance window\\npermit ( principal in Team::\\"platform\\", action, resource\\n) when { context.maintenance_window == true\\n};\\n```plaintext ### 4. IP-Based Restrictions ```cedar\\n// Production access only from office network\\npermit ( principal, action, resource in Environment::\\"production\\"\\n) when { context.ip_address.isInRange(\\"10.0.0.0/8\\") || context.ip_address.isInRange(\\"192.168.1.0/24\\")\\n}; // VPN access for remote work\\npermit ( principal, action, resource in Environment::\\"production\\"\\n) when { context.vpn_connected == true && context.mfa_verified == true\\n};\\n```plaintext ### 5. 
Resource-Specific Policies ```cedar\\n// Database servers: Extra protection\\nforbid ( principal, action == Action::\\"delete\\", resource in Resource::\\"database-*\\"\\n) unless { context.emergency_access == true\\n}; // Critical clusters: Require multiple approvals\\npermit ( principal, action in [Action::\\"update\\", Action::\\"delete\\"], resource in Resource::\\"k8s-production-*\\"\\n) when { context.approval_count >= 2 && context.mfa_verified == true\\n};\\n```plaintext ### 6. Self-Service Policies ```cedar\\n// Users can manage their own MFA devices\\npermit ( principal, action in [Action::\\"create\\", Action::\\"delete\\"], resource is MfaDevice\\n) when { resource.owner == principal\\n}; // Users can view their own audit logs\\npermit ( principal, action == Action::\\"read\\", resource is AuditLog\\n) when { resource.user_id == principal.id\\n};\\n```plaintext --- ## Policy Development Workflow ### Step 1: Define Requirements **Document**: - Who needs access? (roles, teams, individuals)\\n- To what resources? (servers, clusters, environments)\\n- What actions? (read, write, deploy, delete)\\n- Under what conditions? 
(MFA, IP, time, approvals) **Example Requirements Document**: ```markdown\\n# Requirement: Production Deployment **Who**: DevOps team members\\n**What**: Deploy applications to production\\n**When**: Business hours (9am-5pm Mon-Fri)\\n**Conditions**:\\n- MFA verified\\n- Change request approved\\n- From office network or VPN\\n```plaintext ### Step 2: Write Policy ```cedar\\n@id(\\"prod-deploy-devops\\")\\n@description(\\"DevOps can deploy to production during business hours with approval\\")\\npermit ( principal in Team::\\"devops\\", action == Action::\\"deploy\\", resource in Environment::\\"production\\"\\n) when { context.mfa_verified == true && context has approval_id && context.time.hour >= 9 && context.time.hour <= 17 && context.time.weekday in [\\"Monday\\", \\"Tuesday\\", \\"Wednesday\\", \\"Thursday\\", \\"Friday\\"] && (context.ip_address.isInRange(\\"10.0.0.0/8\\") || context.vpn_connected == true)\\n};\\n```plaintext ### Step 3: Validate Syntax ```bash\\n# Use Cedar CLI to validate\\ncedar validate \\\\ --policies provisioning/config/cedar-policies/production.cedar \\\\ --schema provisioning/config/cedar-policies/schema.cedar # Expected output: ✓ Policy is valid\\n```plaintext ### Step 4: Test in Development ```bash\\n# Deploy to development environment first\\ncp production.cedar provisioning/config/cedar-policies/development.cedar # Restart orchestrator to load new policies\\nsystemctl restart provisioning-orchestrator # Test with real requests\\nprovisioning server create test-server --check\\n```plaintext ### Step 5: Review & Approve **Review Checklist**: - [ ] Policy syntax valid\\n- [ ] Policy ID unique\\n- [ ] Description clear\\n- [ ] Conditions appropriate for security level\\n- [ ] Tested in development\\n- [ ] Reviewed by security team\\n- [ ] Documented in change log ### Step 6: Deploy to Production ```bash\\n# Backup current policies\\ncp provisioning/config/cedar-policies/production.cedar \\\\ 
provisioning/config/cedar-policies/production.cedar.backup.$(date +%Y%m%d) # Deploy new policy\\ncp new-production.cedar provisioning/config/cedar-policies/production.cedar # Hot reload (no restart needed)\\nprovisioning cedar reload # Verify loaded\\nprovisioning cedar list\\n```plaintext --- ## Testing Policies ### Unit Testing Create test cases for each policy: ```yaml\\n# tests/cedar/prod-deploy-devops.yaml\\npolicy_id: prod-deploy-devops test_cases: - name: \\"DevOps can deploy with approval and MFA\\" principal: { type: \\"Team\\", id: \\"devops\\" } action: \\"deploy\\" resource: { type: \\"Environment\\", id: \\"production\\" } context: mfa_verified: true approval_id: \\"APPROVAL-123\\" time: { hour: 10, weekday: \\"Monday\\" } ip_address: \\"10.0.1.5\\" expected: Allow - name: \\"DevOps cannot deploy without MFA\\" principal: { type: \\"Team\\", id: \\"devops\\" } action: \\"deploy\\" resource: { type: \\"Environment\\", id: \\"production\\" } context: mfa_verified: false approval_id: \\"APPROVAL-123\\" time: { hour: 10, weekday: \\"Monday\\" } expected: Deny - name: \\"DevOps cannot deploy outside business hours\\" principal: { type: \\"Team\\", id: \\"devops\\" } action: \\"deploy\\" resource: { type: \\"Environment\\", id: \\"production\\" } context: mfa_verified: true approval_id: \\"APPROVAL-123\\" time: { hour: 22, weekday: \\"Monday\\" } expected: Deny\\n```plaintext Run tests: ```bash\\nprovisioning cedar test tests/cedar/\\n```plaintext ### Integration Testing Test with real API calls: ```bash\\n# Setup test user\\nexport TEST_USER=\\"alice\\"\\nexport TEST_TOKEN=$(provisioning login --user $TEST_USER --output token) # Test allowed action\\ncurl -H \\"Authorization: Bearer $TEST_TOKEN\\" \\\\ http://localhost:9090/api/v1/servers \\\\ -X POST -d \'{\\"name\\": \\"test-server\\"}\' # Expected: 200 OK # Test denied action (without MFA)\\ncurl -H \\"Authorization: Bearer $TEST_TOKEN\\" \\\\ http://localhost:9090/api/v1/servers/prod-server-01 \\\\ -X 
DELETE # Expected: 403 Forbidden (MFA required)\\n```plaintext ### Load Testing Verify policy evaluation performance: ```bash\\n# Generate load\\nprovisioning cedar bench \\\\ --policies production.cedar \\\\ --requests 10000 \\\\ --concurrency 100 # Expected: <10ms per evaluation\\n```plaintext --- ## Deployment ### Development → Staging → Production ```bash\\n#!/bin/bash\\n# deploy-policies.sh ENVIRONMENT=$1 # dev, staging, prod # Validate policies\\ncedar validate \\\\ --policies provisioning/config/cedar-policies/$ENVIRONMENT.cedar \\\\ --schema provisioning/config/cedar-policies/schema.cedar if [ $? -ne 0 ]; then echo \\"❌ Policy validation failed\\" exit 1\\nfi # Backup current policies\\nBACKUP_DIR=\\"provisioning/config/cedar-policies/backups/$ENVIRONMENT\\"\\nmkdir -p $BACKUP_DIR\\ncp provisioning/config/cedar-policies/$ENVIRONMENT.cedar \\\\ $BACKUP_DIR/$ENVIRONMENT.cedar.$(date +%Y%m%d-%H%M%S) # Deploy new policies\\nscp provisioning/config/cedar-policies/$ENVIRONMENT.cedar \\\\ $ENVIRONMENT-orchestrator:/etc/provisioning/cedar-policies/production.cedar # Hot reload on remote\\nssh $ENVIRONMENT-orchestrator \\"provisioning cedar reload\\" echo \\"✅ Policies deployed to $ENVIRONMENT\\"\\n```plaintext ### Rollback Procedure ```bash\\n# List backups\\nls -ltr provisioning/config/cedar-policies/backups/production/ # Restore previous version\\ncp provisioning/config/cedar-policies/backups/production/production.cedar.20251008-143000 \\\\ provisioning/config/cedar-policies/production.cedar # Reload\\nprovisioning cedar reload # Verify\\nprovisioning cedar list\\n```plaintext --- ## Monitoring & Auditing ### Monitor Authorization Decisions ```bash\\n# Query denied requests (last 24 hours)\\nprovisioning audit query \\\\ --action authorization_denied \\\\ --from \\"24h\\" \\\\ --out table # Expected output:\\n# ┌─────────┬────────┬──────────┬────────┬────────────────┐\\n# │ Time │ User │ Action │ Resour │ Reason │\\n# 
├─────────┼────────┼──────────┼────────┼────────────────┤\\n# │ 10:15am │ bob │ deploy │ prod │ MFA not verif │\\n# │ 11:30am │ alice │ delete │ db-01 │ No approval │\\n# └─────────┴────────┴──────────┴────────┴────────────────┘\\n```plaintext ### Alert on Suspicious Activity ```yaml\\n# alerts/cedar-policies.yaml\\nalerts: - name: \\"High Denial Rate\\" query: \\"authorization_denied\\" threshold: 10 window: \\"5m\\" action: \\"notify:security-team\\" - name: \\"Policy Bypass Attempt\\" query: \\"action:deploy AND result:denied\\" user: \\"critical-users\\" action: \\"page:oncall\\"\\n```plaintext ### Policy Usage Statistics ```bash\\n# Which policies are most used?\\nprovisioning cedar stats --top 10 # Example output:\\n# Policy ID | Uses | Allows | Denies\\n# ----------------------|-------|--------|-------\\n# prod-deploy-devops | 1,234 | 1,100 | 134\\n# admin-full-access | 892 | 892 | 0\\n# viewer-read-only | 5,421 | 5,421 | 0\\n```plaintext --- ## Troubleshooting ### Policy Not Applying **Symptom**: Policy changes not taking effect **Solutions**: 1. 
Verify hot reload: run `provisioning cedar reload`, then `provisioning cedar list` (should show an updated timestamp)\\n2. Check orchestrator logs: `journalctl -u provisioning-orchestrator -f | grep cedar`\\n3. Restart the orchestrator: `systemctl restart provisioning-orchestrator`","breadcrumbs":"Cedar Policies Production Guide » Core Concepts","id":"1255","title":"Core Concepts"},"1256":{"body":"Symptom : User denied access when policy should allow Debug : ```bash\\n# Enable debug mode\\nexport PROVISIONING_DEBUG=1 # View authorization decision\\nprovisioning audit query \\\\ --user alice \\\\ --action deploy \\\\ --from \\"1h\\" \\\\ --out json | jq \'.authorization\' # Shows which policy evaluated, context used, reason for denial\\n``` ### Policy Conflicts **Symptom**: Multiple policies match, unclear which applies **Resolution**: - Cedar uses **deny-override**: if any `forbid` matches, the request is denied\\n- Use `@priority` annotations (higher number = higher priority)\\n- Make policies more specific to avoid conflicts ```cedar\\n@priority(100)\\npermit ( principal in Role::\\"Admin\\", action, resource\\n); @priority(50)\\nforbid ( principal, action == Action::\\"delete\\", resource is Database\\n); // Admin can do anything EXCEPT delete databases\\n``` --- ## Best Practices ### 1. Start Restrictive, Loosen Gradually ```cedar\\n// ❌ BAD: Too permissive initially\\npermit (principal, action, resource); // ✅ GOOD: Explicit allow, expand as needed\\npermit ( principal in Role::\\"Admin\\", action in [Action::\\"read\\", Action::\\"list\\"], resource\\n);\\n``` ### 2. Use Annotations ```cedar\\n@id(\\"prod-deploy-mfa\\")\\n@description(\\"Production deployments require MFA verification\\")\\n@owner(\\"platform-team\\")\\n@reviewed(\\"2025-10-08\\")\\n@expires(\\"2026-10-08\\")\\npermit ( principal in Team::\\"platform-admin\\", action == Action::\\"deploy\\", resource in Environment::\\"production\\"\\n) when { context.mfa_verified == true\\n};\\n``` ### 3. 
Principle of Least Privilege Give users **minimum permissions** needed: ```cedar\\n// ❌ BAD: Overly broad\\npermit (principal in Team::\\"developers\\", action, resource); // ✅ GOOD: Specific permissions\\npermit ( principal in Team::\\"developers\\", action in [Action::\\"read\\", Action::\\"create\\", Action::\\"update\\"], resource in Environment::\\"development\\"\\n);\\n```plaintext ### 4. Document Context Requirements ```cedar\\n// Context required for this policy:\\n// - mfa_verified: boolean (from JWT claims)\\n// - approval_id: string (from request header)\\n// - ip_address: IpAddr (from connection)\\npermit ( principal in Role::\\"Operator\\", action == Action::\\"deploy\\", resource in Environment::\\"production\\"\\n) when { context.mfa_verified == true && context has approval_id && context.ip_address.isInRange(\\"10.0.0.0/8\\")\\n};\\n```plaintext ### 5. Separate Policies by Concern **File organization**: ```plaintext\\ncedar-policies/\\n├── schema.cedar # Entity/action definitions\\n├── rbac.cedar # Role-based policies\\n├── teams.cedar # Team-based policies\\n├── time-restrictions.cedar # Time-based policies\\n├── ip-restrictions.cedar # Network-based policies\\n├── production.cedar # Production-specific\\n└── development.cedar # Development-specific\\n```plaintext ### 6. Version Control ```bash\\n# Git commit each policy change\\ngit add provisioning/config/cedar-policies/production.cedar\\ngit commit -m \\"feat(cedar): Add MFA requirement for prod deployments - Require MFA for all production deployments\\n- Applies to devops and platform-admin teams\\n- Effective 2025-10-08 Policy ID: prod-deploy-mfa\\nReviewed by: security-team\\nTicket: SEC-1234\\" git push\\n```plaintext ### 7. 
Regular Policy Audits **Quarterly review**: - [ ] Remove unused policies\\n- [ ] Tighten overly permissive policies\\n- [ ] Update for new resources/actions\\n- [ ] Verify team memberships are current\\n- [ ] Test break-glass procedures --- ## Quick Reference ### Common Policy Patterns ```cedar\\n// Allow all\\npermit (principal, action, resource); // Deny all\\nforbid (principal, action, resource); // Role-based\\npermit (principal in Role::\\"Admin\\", action, resource); // Team-based\\npermit (principal in Team::\\"platform\\", action, resource); // Resource-based\\npermit (principal, action, resource in Environment::\\"production\\"); // Action-based\\npermit (principal, action in [Action::\\"read\\", Action::\\"list\\"], resource); // Condition-based\\npermit (principal, action, resource) when { context.mfa_verified == true }; // Complex\\npermit ( principal in Team::\\"devops\\", action == Action::\\"deploy\\", resource in Environment::\\"production\\"\\n) when { context.mfa_verified == true && context has approval_id && context.time.hour >= 9 && context.time.hour <= 17\\n};\\n``` ### Useful Commands ```bash\\n# Validate policies\\nprovisioning cedar validate # Reload policies (hot reload)\\nprovisioning cedar reload # List active policies\\nprovisioning cedar list # Test policies\\nprovisioning cedar test tests/ # Query denials\\nprovisioning audit query --action authorization_denied # Policy statistics\\nprovisioning cedar stats\\n``` --- ## Support - **Documentation**: `docs/architecture/CEDAR_AUTHORIZATION_IMPLEMENTATION.md`\\n- **Policy Examples**: `provisioning/config/cedar-policies/`\\n- **Issues**: Report to platform-team\\n- **Emergency**: Use break-glass procedure --- **Version**: 1.0.0\\n**Maintained By**: Platform Team\\n**Last Updated**: 2025-10-08","breadcrumbs":"Cedar Policies Production Guide » Unexpected Denials","id":"1256","title":"Unexpected Denials"},"1257":{"body":"Document Version : 1.0.0 Last Updated : 2025-10-08 Target Audience 
: Platform Administrators, Security Team Prerequisites : Control Center deployed, admin user created","breadcrumbs":"MFA Admin Setup Guide » MFA Admin Setup Guide - Production Operations Manual","id":"1257","title":"MFA Admin Setup Guide - Production Operations Manual"},"1258":{"body":"Overview MFA Requirements Admin Enrollment Process TOTP Setup (Authenticator Apps) WebAuthn Setup (Hardware Keys) Enforcing MFA via Cedar Policies Backup Codes Management Recovery Procedures Troubleshooting Best Practices Audit and Compliance","breadcrumbs":"MFA Admin Setup Guide » 📋 Table of Contents","id":"1258","title":"📋 Table of Contents"},"1259":{"body":"","breadcrumbs":"MFA Admin Setup Guide » Overview","id":"1259","title":"Overview"},"126":{"body":"# Check SOPS version\\nsops --version # Expected output: 3.10.2 or higher","breadcrumbs":"Prerequisites » SOPS","id":"126","title":"SOPS"},"1260":{"body":"Multi-Factor Authentication (MFA) adds a second layer of security beyond passwords. Admins must provide: Something they know : Password Something they have : TOTP code (authenticator app) or WebAuthn device (YubiKey, Touch ID)","breadcrumbs":"MFA Admin Setup Guide » What is MFA?","id":"1260","title":"What is MFA?"},"1261":{"body":"Administrators have elevated privileges including: Server creation/deletion Production deployments Secret management User management Break-glass approval MFA protects against : Password compromise (phishing, leaks, brute force) Unauthorized access to critical systems Compliance violations (SOC2, ISO 27001)","breadcrumbs":"MFA Admin Setup Guide » Why MFA for Admins?","id":"1261","title":"Why MFA for Admins?"},"1262":{"body":"Method Type Examples Recommended For TOTP Software Google Authenticator, Authy, 1Password All admins (primary) WebAuthn/FIDO2 Hardware YubiKey, Touch ID, Windows Hello High-security admins Backup Codes One-time 10 single-use codes Emergency recovery","breadcrumbs":"MFA Admin Setup Guide » MFA Methods 
Supported","id":"1262","title":"MFA Methods Supported"},"1263":{"body":"","breadcrumbs":"MFA Admin Setup Guide » MFA Requirements","id":"1263","title":"MFA Requirements"},"1264":{"body":"All administrators MUST enable MFA for: Production environment access Server creation/deletion operations Deployment to production clusters Secret access (KMS, dynamic secrets) Break-glass approval User management operations","breadcrumbs":"MFA Admin Setup Guide » Mandatory MFA Enforcement","id":"1264","title":"Mandatory MFA Enforcement"},"1265":{"body":"Development : MFA optional (not recommended) Staging : MFA recommended, not enforced Production : MFA mandatory (enforced by Cedar policies)","breadcrumbs":"MFA Admin Setup Guide » Grace Period","id":"1265","title":"Grace Period"},"1266":{"body":"Week 1-2: Pilot Program ├─ Platform admins enable MFA ├─ Document issues and refine process └─ Create training materials Week 3-4: Full Deployment ├─ All admins enable MFA ├─ Cedar policies enforce MFA for production └─ Monitor compliance Week 5+: Maintenance ├─ Regular MFA device audits ├─ Backup code rotation └─ User support for MFA issues\\n```plaintext --- ## Admin Enrollment Process ### Step 1: Initial Login (Password Only) ```bash\\n# Login with username/password\\nprovisioning login --user admin@example.com --workspace production # Response (partial token, MFA not yet verified):\\n{ \\"status\\": \\"mfa_required\\", \\"partial_token\\": \\"eyJhbGci...\\", # Limited access token \\"message\\": \\"MFA enrollment required for production access\\"\\n}\\n```plaintext **Partial token limitations**: - Cannot access production resources\\n- Can only access MFA enrollment endpoints\\n- Expires in 15 minutes ### Step 2: Choose MFA Method ```bash\\n# Check available MFA methods\\nprovisioning mfa methods # Output:\\nAvailable MFA Methods: • TOTP (Authenticator apps) - Recommended for all users • WebAuthn (Hardware keys) - Recommended for high-security roles • Backup Codes - Emergency recovery 
only # Check current MFA status\\nprovisioning mfa status # Output:\\nMFA Status: TOTP: Not enrolled WebAuthn: Not enrolled Backup Codes: Not generated MFA Required: Yes (production workspace)\\n``` ### Step 3: Enroll MFA Device Choose one or both methods (TOTP + WebAuthn recommended): - [TOTP Setup](#totp-setup-authenticator-apps)\\n- [WebAuthn Setup](#webauthn-setup-hardware-keys) ### Step 4: Verify and Activate After enrollment, login again with MFA: ```bash\\n# Login (returns partial token)\\nprovisioning login --user admin@example.com --workspace production # Verify MFA code (returns full access token)\\nprovisioning mfa verify 123456 # Response:\\n{ \\"status\\": \\"authenticated\\", \\"access_token\\": \\"eyJhbGci...\\", # Full access token (15min) \\"refresh_token\\": \\"eyJhbGci...\\", # Refresh token (7 days) \\"mfa_verified\\": true, \\"expires_in\\": 900\\n}\\n``` --- ## TOTP Setup (Authenticator Apps) ### Supported Authenticator Apps | App | Platform | Notes |\\n|-----|----------|-------|\\n| **Google Authenticator** | iOS, Android | Simple, widely used |\\n| **Authy** | iOS, Android, Desktop | Cloud backup, multi-device |\\n| **1Password** | All platforms | Integrated with password manager |\\n| **Microsoft Authenticator** | iOS, Android | Enterprise integration |\\n| **Bitwarden** | All platforms | Open source | ### Step-by-Step TOTP Enrollment #### 1. 
Initiate TOTP Enrollment ```bash\\nprovisioning mfa totp enroll\\n``` **Output**: ```plaintext\\n╔════════════════════════════════════════════════════════════╗\\n║ TOTP ENROLLMENT ║\\n╚════════════════════════════════════════════════════════════╝ Scan this QR code with your authenticator app: █████████████████████████████████\\n█████████████████████████████████\\n████ ▄▄▄▄▄ █▀ █▀▀██ ▄▄▄▄▄ ████\\n████ █ █ █▀▄ ▀ ▄█ █ █ ████\\n████ █▄▄▄█ █ ▀▀ ▀▀█ █▄▄▄█ ████\\n████▄▄▄▄▄▄▄█ █▀█ ▀ █▄▄▄▄▄▄████\\n█████████████████████████████████\\n█████████████████████████████████ Manual entry (if QR code doesn\'t work): Secret: JBSWY3DPEHPK3PXP Account: admin@example.com Issuer: Provisioning Platform TOTP Configuration: Algorithm: SHA1 Digits: 6 Period: 30 seconds\\n``` #### 2. Add to Authenticator App **Option A: Scan QR Code (Recommended)** 1. Open authenticator app (Google Authenticator, Authy, etc.)\\n2. Tap \\"+\\" or \\"Add Account\\"\\n3. Select \\"Scan QR Code\\"\\n4. Point camera at QR code displayed in terminal\\n5. Account added automatically **Option B: Manual Entry** 1. Open authenticator app\\n2. Tap \\"+\\" or \\"Add Account\\"\\n3. Select \\"Enter a setup key\\" or \\"Manual entry\\"\\n4. Enter: - **Account name**: - **Key**: `JBSWY3DPEHPK3PXP` (secret shown above) - **Type of key**: Time-based\\n5. Save account #### 3. Verify TOTP Code ```bash\\n# Get current code from authenticator app (6 digits, changes every 30s)\\n# Example code: 123456 provisioning mfa totp verify 123456\\n``` **Success Response**: ```plaintext\\n✓ TOTP verified successfully! Backup Codes (SAVE THESE SECURELY): 1. A3B9-C2D7-E1F4 2. G8H5-J6K3-L9M2 3. N4P7-Q1R8-S5T2 4. U6V3-W9X1-Y7Z4 5. A2B8-C5D1-E9F3 6. G7H4-J2K6-L8M1 7. N3P9-Q5R2-S7T4 8. U1V6-W3X8-Y2Z5 9. A9B4-C7D2-E5F1 10. 
G3H8-J1K5-L6M9 ⚠ Store backup codes in a secure location (password manager, encrypted file)\\n⚠ Each code can only be used once\\n⚠ These codes allow access if you lose your authenticator device TOTP enrollment complete. MFA is now active for your account.\\n``` #### 4. Save Backup Codes **Critical**: Store backup codes in a secure location: ```bash\\n# Copy backup codes to password manager or encrypted file\\n# NEVER store in plaintext, email, or cloud storage # Example: Store in encrypted file\\nprovisioning mfa backup-codes --save-encrypted ~/secure/mfa-backup-codes.enc # Or display again (requires existing MFA verification)\\nprovisioning mfa backup-codes --show\\n``` #### 5. Test TOTP Login ```bash\\n# Logout to test full login flow\\nprovisioning logout # Login with password (returns partial token)\\nprovisioning login --user admin@example.com --workspace production # Get current TOTP code from authenticator app\\n# Verify with TOTP code (returns full access token)\\nprovisioning mfa verify 654321 # ✓ Full access granted\\n``` --- ## WebAuthn Setup (Hardware Keys) ### Supported WebAuthn Devices | Device Type | Examples | Security Level |\\n|-------------|----------|----------------|\\n| **USB Security Keys** | YubiKey 5, SoloKey, Titan Key | Highest |\\n| **NFC Keys** | YubiKey 5 NFC, Google Titan | High (mobile compatible) |\\n| **Biometric** | Touch ID (macOS), Windows Hello, Face ID | High (convenience) |\\n| **Platform Authenticators** | Built-in laptop/phone biometrics | Medium-High | ### Step-by-Step WebAuthn Enrollment #### 1. Check WebAuthn Support ```bash\\n# Verify WebAuthn support on your system\\nprovisioning mfa webauthn check # Output:\\nWebAuthn Support: ✓ Browser: Chrome 120.0 (WebAuthn supported) ✓ Platform: macOS 14.0 (Touch ID available) ✓ USB: YubiKey 5 NFC detected\\n``` #### 2. 
Initiate WebAuthn Registration ```bash\\nprovisioning mfa webauthn register --device-name \\"YubiKey-Admin-Primary\\"\\n``` **Output**: ```plaintext\\n╔════════════════════════════════════════════════════════════╗\\n║ WEBAUTHN DEVICE REGISTRATION ║\\n╚════════════════════════════════════════════════════════════╝ Device Name: YubiKey-Admin-Primary\\nRelying Party: provisioning.example.com ⚠ Please insert your security key and touch it when it blinks Waiting for device interaction...\\n``` #### 3. Complete Device Registration **For USB Security Keys (YubiKey, SoloKey)**: 1. Insert USB key into computer\\n2. Terminal shows \\"Touch your security key\\"\\n3. Touch the gold/silver contact on the key (it will blink)\\n4. Registration completes **For Touch ID (macOS)**: 1. Terminal shows \\"Touch ID prompt will appear\\"\\n2. Touch ID dialog appears on screen\\n3. Place finger on Touch ID sensor\\n4. Registration completes **For Windows Hello**: 1. Terminal shows \\"Windows Hello prompt\\"\\n2. Windows Hello biometric prompt appears\\n3. Complete biometric scan (fingerprint/face)\\n4. Registration completes **Success Response**: ```plaintext\\n✓ WebAuthn device registered successfully! Device Details: Name: YubiKey-Admin-Primary Type: USB Security Key AAGUID: 2fc0579f-8113-47ea-b116-bb5a8db9202a Credential ID: kZj8C3bx... Registered: 2025-10-08T14:32:10Z You can now use this device for authentication.\\n``` #### 4. Register Additional Devices (Optional) **Best Practice**: Register 2+ WebAuthn devices (primary + backup) ```bash\\n# Register backup YubiKey\\nprovisioning mfa webauthn register --device-name \\"YubiKey-Admin-Backup\\" # Register Touch ID (for convenience on personal laptop)\\nprovisioning mfa webauthn register --device-name \\"MacBook-TouchID\\"\\n``` #### 5. List Registered Devices ```bash\\nprovisioning mfa webauthn list # Output:\\nRegistered WebAuthn Devices: 1. 
YubiKey-Admin-Primary (USB Security Key) Registered: 2025-10-08T14:32:10Z Last Used: 2025-10-08T14:32:10Z 2. YubiKey-Admin-Backup (USB Security Key) Registered: 2025-10-08T14:35:22Z Last Used: Never 3. MacBook-TouchID (Platform Authenticator) Registered: 2025-10-08T14:40:15Z Last Used: 2025-10-08T15:20:05Z Total: 3 devices\\n``` #### 6. Test WebAuthn Login ```bash\\n# Logout to test\\nprovisioning logout # Login with password (partial token)\\nprovisioning login --user admin@example.com --workspace production # Authenticate with WebAuthn\\nprovisioning mfa webauthn verify # Output:\\n⚠ Insert and touch your security key\\n[Touch YubiKey when it blinks] ✓ WebAuthn verification successful\\n✓ Full access granted\\n``` --- ## Enforcing MFA via Cedar Policies ### Production MFA Enforcement Policy **Location**: `provisioning/config/cedar-policies/production.cedar` ```cedar\\n// Production operations require MFA verification\\npermit ( principal, action in [ Action::\\"server:create\\", Action::\\"server:delete\\", Action::\\"cluster:deploy\\", Action::\\"secret:read\\", Action::\\"user:manage\\" ], resource in Environment::\\"production\\"\\n) when { // MFA MUST be verified context.mfa_verified == true\\n}; // Admin role requires MFA for ALL production actions\\npermit ( principal in Role::\\"Admin\\", action, resource in Environment::\\"production\\"\\n) when { context.mfa_verified == true\\n}; // Break-glass approval requires MFA\\npermit ( principal, action == Action::\\"break_glass:approve\\", resource\\n) when { context.mfa_verified == true && principal.role in [Role::\\"Admin\\", Role::\\"SecurityLead\\"]\\n};\\n``` ### Development/Staging Policies (MFA Recommended, Not Required) **Location**: `provisioning/config/cedar-policies/development.cedar` ```cedar\\n// Development: MFA recommended but not enforced\\npermit ( principal, action, resource in Environment::\\"dev\\"\\n) when { // MFA not required for dev, but logged if missing 
true\\n}; // Staging: MFA recommended for destructive operations\\npermit ( principal, action in [Action::\\"server:delete\\", Action::\\"cluster:delete\\"], resource in Environment::\\"staging\\"\\n) when { // Allow without MFA but log warning context.mfa_verified == true || context has mfa_warning_acknowledged\\n};\\n``` ### Policy Deployment ```bash\\n# Validate Cedar policies\\nprovisioning cedar validate --policies config/cedar-policies/ # Test policies with sample requests\\nprovisioning cedar test --policies config/cedar-policies/ \\\\ --test-file tests/cedar-test-cases.yaml # Deploy to production (requires MFA + approval)\\nprovisioning cedar deploy production --policies config/cedar-policies/production.cedar # Verify policy is active\\nprovisioning cedar status production\\n``` ### Testing MFA Enforcement ```bash\\n# Test 1: Production access WITHOUT MFA (should fail)\\nprovisioning login --user admin@example.com --workspace production\\nprovisioning server create web-01 --plan medium --check # Expected: Authorization denied (MFA not verified) # Test 2: Production access WITH MFA (should succeed)\\nprovisioning login --user admin@example.com --workspace production\\nprovisioning mfa verify 123456\\nprovisioning server create web-01 --plan medium --check # Expected: Server creation initiated\\n``` --- ## Backup Codes Management ### Generating Backup Codes Backup codes are automatically generated during first MFA enrollment: ```bash\\n# View existing backup codes (requires MFA verification)\\nprovisioning mfa backup-codes --show # Regenerate backup codes (invalidates old ones)\\nprovisioning mfa backup-codes --regenerate # Output:\\n⚠ WARNING: Regenerating backup codes will invalidate all existing codes.\\nContinue? (yes/no): yes New Backup Codes: 1. X7Y2-Z9A4-B6C1 2. D3E8-F5G2-H9J4 3. K6L1-M7N3-P8Q2 4. R4S9-T6U1-V3W7 5. X2Y5-Z8A3-B9C4 6. D7E1-F4G6-H2J8 7. K5L9-M3N6-P1Q4 8. R8S2-T5U7-V9W3 9. X4Y6-Z1A8-B3C5 10. 
D9E2-F7G4-H6J1 ✓ Backup codes regenerated successfully\\n⚠ Save these codes in a secure location\\n``` ### Using Backup Codes **When to use backup codes**: - Lost authenticator device (phone stolen, broken)\\n- WebAuthn key not available (traveling, left at office)\\n- Authenticator app not working (time sync issue) **Login with backup code**: ```bash\\n# Login (partial token)\\nprovisioning login --user admin@example.com --workspace production # Use backup code instead of TOTP/WebAuthn\\nprovisioning mfa verify-backup X7Y2-Z9A4-B6C1 # Output:\\n✓ Backup code verified\\n⚠ Backup code consumed (9 remaining)\\n⚠ Enroll a new MFA device as soon as possible\\n✓ Full access granted (temporary)\\n``` ### Backup Code Storage Best Practices **✅ DO**: - Store in password manager (1Password, Bitwarden, LastPass)\\n- Print and store in physical safe\\n- Encrypt and store in secure cloud storage (with encryption key stored separately)\\n- Share with trusted IT team member (encrypted) **❌ DON\'T**: - Email to yourself\\n- Store in plaintext file on laptop\\n- Save in browser notes/bookmarks\\n- Share via Slack/Teams/unencrypted chat\\n- Screenshot and save to Photos **Example: Encrypted Storage**: ```bash\\n# Encrypt backup codes with Age\\nprovisioning mfa backup-codes --export | \\\\ age -p -o ~/secure/mfa-backup-codes.age # Decrypt when needed\\nage -d ~/secure/mfa-backup-codes.age\\n``` --- ## Recovery Procedures ### Scenario 1: Lost Authenticator Device (TOTP) **Situation**: Phone stolen/broken, authenticator app not accessible **Recovery Steps**: ```bash\\n# Step 1: Use backup code to login\\nprovisioning login --user admin@example.com --workspace production\\nprovisioning mfa verify-backup X7Y2-Z9A4-B6C1 # Step 2: Remove old TOTP enrollment\\nprovisioning mfa totp unenroll # Step 3: Enroll new TOTP device\\nprovisioning mfa totp enroll\\n# [Scan QR code with new phone/authenticator app]\\nprovisioning mfa totp verify 654321 # Step 4: Generate 
new backup codes\\nprovisioning mfa backup-codes --regenerate\\n``` ### Scenario 2: Lost WebAuthn Key (YubiKey) **Situation**: YubiKey lost, stolen, or damaged **Recovery Steps**: ```bash\\n# Step 1: Login with alternative method (TOTP or backup code)\\nprovisioning login --user admin@example.com --workspace production\\nprovisioning mfa verify 123456 # TOTP from authenticator app # Step 2: List registered WebAuthn devices\\nprovisioning mfa webauthn list # Step 3: Remove lost device\\nprovisioning mfa webauthn remove \\"YubiKey-Admin-Primary\\" # Output:\\n⚠ Remove WebAuthn device \\"YubiKey-Admin-Primary\\"?\\nThis cannot be undone. (yes/no): yes ✓ Device removed # Step 4: Register new WebAuthn device\\nprovisioning mfa webauthn register --device-name \\"YubiKey-Admin-Replacement\\"\\n``` ### Scenario 3: All MFA Methods Lost **Situation**: Lost phone (TOTP), lost YubiKey, no backup codes **Recovery Steps** (Requires Admin Assistance): ```bash\\n# User contacts Security Team / Platform Admin # Admin performs MFA reset (requires 2+ admin approval)\\nprovisioning admin mfa-reset admin@example.com \\\\ --reason \\"Employee lost all MFA devices (phone + YubiKey)\\" \\\\ --ticket SUPPORT-12345 # Output:\\n⚠ MFA Reset Request Created Reset Request ID: MFA-RESET-20251008-001\\nUser: admin@example.com\\nReason: Employee lost all MFA devices (phone + YubiKey)\\nTicket: SUPPORT-12345 Required Approvals: 2\\nApprovers: 0/2 # Two other admins approve (with their own MFA)\\nprovisioning admin mfa-reset approve MFA-RESET-20251008-001 \\\\ --reason \\"Verified via video call + employee badge\\" # After 2 approvals, MFA is reset\\n✓ MFA reset approved (2/2 approvals)\\n✓ User admin@example.com can now re-enroll MFA devices # User re-enrolls TOTP and WebAuthn\\nprovisioning mfa totp enroll\\nprovisioning mfa webauthn register --device-name \\"YubiKey-New\\"\\n``` ### Scenario 4: Backup Codes Depleted **Situation**: Used 9 out of 10 backup codes 
**Recovery Steps**: ```bash\\n# Login with last backup code\\nprovisioning login --user admin@example.com --workspace production\\nprovisioning mfa verify-backup D9E2-F7G4-H6J1 # Output:\\n⚠ WARNING: This is your LAST backup code!\\n✓ Backup code verified\\n⚠ Regenerate backup codes immediately! # Immediately regenerate backup codes\\nprovisioning mfa backup-codes --regenerate # Save new codes securely\\n``` --- ## Troubleshooting ### Issue 1: \\"Invalid TOTP code\\" Error **Symptoms**: ```plaintext\\nprovisioning mfa verify 123456\\n✗ Error: Invalid TOTP code\\n``` **Possible Causes**: 1. **Time sync issue** (most common)\\n2. Wrong secret key entered during enrollment\\n3. Code expired (30-second window) **Solutions**: ```bash\\n# Check time sync (device clock must be accurate)\\n# macOS:\\nsudo sntp -sS time.apple.com # Linux:\\nsudo ntpdate pool.ntp.org # Verify TOTP configuration\\nprovisioning mfa totp status # Output:\\nTOTP Configuration: Algorithm: SHA1 Digits: 6 Period: 30 seconds Time Window: ±1 period (90 seconds total) # Check system time vs NTP\\ndate && curl -s http://worldtimeapi.org/api/ip | grep datetime # If time is off by >30 seconds, sync time and retry\\n``` ### Issue 2: WebAuthn Not Detected **Symptoms**: ```plaintext\\nprovisioning mfa webauthn register\\n✗ Error: No WebAuthn authenticator detected\\n``` **Solutions**: ```bash\\n# Check USB connection (for hardware keys)\\n# macOS:\\nsystem_profiler SPUSBDataType | grep -i yubikey # Linux:\\nlsusb | grep -i yubico # Check browser WebAuthn support\\nprovisioning mfa webauthn check # Try different USB port (USB-A vs USB-C) # For Touch ID: Ensure finger is enrolled in System Preferences\\n# For Windows Hello: Ensure biometrics are configured in Settings\\n``` ### Issue 3: \\"MFA Required\\" Despite Verification **Symptoms**: ```plaintext\\nprovisioning server create web-01\\n✗ Error: Authorization denied (MFA verification required)\\n``` 
**Cause**: Access token expired (15 min) or MFA verification not in token claims **Solution**: ```bash\\n# Check token expiration\\nprovisioning auth status # Output:\\nAuthentication Status: Logged in: Yes User: admin@example.com Access Token: Expired (issued 16 minutes ago) MFA Verified: Yes (but token expired) # Re-authenticate (will prompt for MFA again)\\nprovisioning login --user admin@example.com --workspace production\\nprovisioning mfa verify 654321 # Verify MFA claim in token\\nprovisioning auth decode-token # Output (JWT claims):\\n{ \\"sub\\": \\"admin@example.com\\", \\"role\\": \\"Admin\\", \\"mfa_verified\\": true, # ← Must be true \\"mfa_method\\": \\"totp\\", \\"iat\\": 1696766400, \\"exp\\": 1696767300\\n}\\n``` ### Issue 4: QR Code Not Displaying **Symptoms**: QR code appears garbled or doesn\'t display in terminal **Solutions**: ```bash\\n# Use manual entry instead\\nprovisioning mfa totp enroll --manual # Output (no QR code):\\nManual TOTP Setup: Secret: JBSWY3DPEHPK3PXP Account: admin@example.com Issuer: Provisioning Platform Enter this secret manually in your authenticator app. # Or export QR code to image file\\nprovisioning mfa totp enroll --qr-image ~/mfa-qr.png\\nopen ~/mfa-qr.png # View in image viewer\\n``` ### Issue 5: Backup Code Not Working **Symptoms**: ```plaintext\\nprovisioning mfa verify-backup X7Y2-Z9A4-B6C1\\n✗ Error: Invalid or already used backup code\\n``` **Possible Causes**: 1. Code already used (single-use only)\\n2. Backup codes regenerated (old codes invalidated)\\n3. Typo in code entry **Solutions**: ```bash\\n# Check backup code status (requires alternative login method)\\nprovisioning mfa backup-codes --status # Output:\\nBackup Codes Status: Total Generated: 10 Used: 3 Remaining: 7 Last Used: 2025-10-05T10:15:30Z # Contact admin for MFA reset if all codes used\\n# Or use alternative MFA method (TOTP, WebAuthn)\\n``` --- ## Best Practices ### For Individual Admins #### 1. 
Use Multiple MFA Methods **✅ Recommended Setup**: - **Primary**: TOTP (Google Authenticator, Authy)\\n- **Backup**: WebAuthn (YubiKey or Touch ID)\\n- **Emergency**: Backup codes (stored securely) ```bash\\n# Enroll all three\\nprovisioning mfa totp enroll\\nprovisioning mfa webauthn register --device-name \\"YubiKey-Primary\\"\\nprovisioning mfa backup-codes --save-encrypted ~/secure/codes.enc\\n``` #### 2. Secure Backup Code Storage ```bash\\n# Store in password manager (1Password example)\\nprovisioning mfa backup-codes --show | \\\\ op item create --category \\"Secure Note\\" \\\\ --title \\"Provisioning MFA Backup Codes\\" \\\\ --vault \\"Work\\" # Or encrypted file\\nprovisioning mfa backup-codes --export | \\\\ age -p -o ~/secure/mfa-backup-codes.age\\n``` #### 3. Regular Device Audits ```bash\\n# Monthly: Review registered devices\\nprovisioning mfa devices --all # Remove unused/old devices\\nprovisioning mfa webauthn remove \\"Old-YubiKey\\"\\nprovisioning mfa totp remove \\"Old-Phone\\"\\n``` #### 4. Test Recovery Procedures ```bash\\n# Quarterly: Test backup code login\\nprovisioning logout\\nprovisioning login --user admin@example.com --workspace dev\\nprovisioning mfa verify-backup [test-code] # Verify backup codes are accessible\\ncat ~/secure/mfa-backup-codes.enc | age -d\\n``` ### For Security Teams #### 1. MFA Enrollment Verification ```bash\\n# Generate MFA enrollment report\\nprovisioning admin mfa-report --format csv > mfa-enrollment.csv # Output (CSV):\\n# User,MFA_Enabled,TOTP,WebAuthn,Backup_Codes,Last_MFA_Login,Role\\n# admin@example.com,Yes,Yes,Yes,10,2025-10-08T14:00:00Z,Admin\\n# dev@example.com,No,No,No,0,Never,Developer\\n``` #### 2. 
Enforce MFA Deadlines ```bash\\n# Set MFA enrollment deadline\\nprovisioning admin mfa-deadline set 2025-11-01 \\\\ --roles Admin,Developer \\\\ --environment production # Send reminder emails\\nprovisioning admin mfa-remind \\\\ --users-without-mfa \\\\ --template \\"MFA enrollment required by Nov 1\\"\\n``` #### 3. Monitor MFA Usage ```bash\\n# Audit: Find production logins without MFA\\nprovisioning audit query \\\\ --action \\"auth:login\\" \\\\ --filter \'mfa_verified == false && environment == \\"production\\"\' \\\\ --since 7d # Alert on repeated MFA failures\\nprovisioning monitoring alert create \\\\ --name \\"MFA Brute Force\\" \\\\ --condition \\"mfa_failures > 5 in 5min\\" \\\\ --action \\"notify security-team\\"\\n``` #### 4. MFA Reset Policy **MFA Reset Requirements**: - User verification (video call + ID check)\\n- Support ticket created (incident tracking)\\n- 2+ admin approvals (different teams)\\n- Time-limited reset window (24 hours)\\n- Mandatory re-enrollment before production access ```bash\\n# MFA reset workflow\\nprovisioning admin mfa-reset create user@example.com \\\\ --reason \\"Lost all devices\\" \\\\ --ticket SUPPORT-12345 \\\\ --expires-in 24h # Requires 2 approvals\\nprovisioning admin mfa-reset approve MFA-RESET-001\\n``` ### For Platform Admins #### 1. Cedar Policy Best Practices ```cedar\\n// Require MFA for high-risk actions\\npermit ( principal, action in [ Action::\\"server:delete\\", Action::\\"cluster:delete\\", Action::\\"secret:delete\\", Action::\\"user:delete\\" ], resource\\n) when { context.mfa_verified == true && context.mfa_age_seconds < 300 // MFA verified within last 5 minutes\\n};\\n``` #### 2. 
MFA Grace Periods (For Rollout) ```bash\\n# Development: No MFA required\\nexport PROVISIONING_MFA_REQUIRED=false # Staging: MFA recommended (warnings only)\\nexport PROVISIONING_MFA_REQUIRED=warn # Production: MFA mandatory (strict enforcement)\\nexport PROVISIONING_MFA_REQUIRED=true\\n``` #### 3. Backup Admin Account **Emergency Admin** (break-glass scenario): - Separate admin account with MFA enrollment\\n- Credentials stored in physical safe\\n- Only used when primary admins locked out\\n- Requires incident report after use ```bash\\n# Create emergency admin\\nprovisioning admin create emergency-admin@example.com \\\\ --role EmergencyAdmin \\\\ --mfa-required true \\\\ --max-concurrent-sessions 1 # Print backup codes and store in safe\\nprovisioning mfa backup-codes --show --user emergency-admin@example.com > emergency-codes.txt\\n# [Print and store in physical safe]\\n``` --- ## Audit and Compliance ### MFA Audit Logging All MFA events are logged to the audit system: ```bash\\n# View MFA enrollment events\\nprovisioning audit query \\\\ --action-type \\"mfa:*\\" \\\\ --since 30d # Output (JSON):\\n[ { \\"timestamp\\": \\"2025-10-08T14:32:10Z\\", \\"action\\": \\"mfa:totp:enroll\\", \\"user\\": \\"admin@example.com\\", \\"result\\": \\"success\\", \\"device_type\\": \\"totp\\", \\"ip_address\\": \\"203.0.113.42\\" }, { \\"timestamp\\": \\"2025-10-08T14:35:22Z\\", \\"action\\": \\"mfa:webauthn:register\\", \\"user\\": \\"admin@example.com\\", \\"result\\": \\"success\\", \\"device_name\\": \\"YubiKey-Admin-Primary\\", \\"ip_address\\": \\"203.0.113.42\\" }\\n]\\n``` ### Compliance Reports #### SOC2 Compliance (Access Control) ```bash\\n# Generate SOC2 access control report\\nprovisioning compliance report soc2 \\\\ --control \\"CC6.1\\" \\\\ --period \\"2025-Q3\\" # Output:\\nSOC2 Trust Service Criteria - CC6.1 (Logical Access) MFA Enforcement: ✓ MFA enabled for 100% of production admins (15/15) ✓ MFA verified for 98.7% of production 
logins (2,453/2,485) ✓ MFA policies enforced via Cedar authorization ✓ Failed MFA attempts logged and monitored Evidence: - Cedar policy: production.cedar (lines 15-25) - Audit logs: mfa-verification-logs-2025-q3.json - Enrollment report: mfa-enrollment-status.csv\\n``` #### ISO 27001 Compliance (A.9.4.2 - Secure Log-on) ```bash\\n# ISO 27001 A.9.4.2 compliance report\\nprovisioning compliance report iso27001 \\\\ --control \\"A.9.4.2\\" \\\\ --format pdf \\\\ --output iso27001-a942-mfa-report.pdf # Report Sections:\\n# 1. MFA Implementation Details\\n# 2. Enrollment Procedures\\n# 3. Audit Trail\\n# 4. Policy Enforcement\\n# 5. Recovery Procedures\\n``` #### GDPR Compliance (MFA Data Handling) ```bash\\n# GDPR data subject request (MFA data export)\\nprovisioning compliance gdpr export admin@example.com \\\\ --include mfa # Output (JSON):\\n{ \\"user\\": \\"admin@example.com\\", \\"mfa_data\\": { \\"totp_enrolled\\": true, \\"totp_enrollment_date\\": \\"2025-10-08T14:32:10Z\\", \\"webauthn_devices\\": [ { \\"name\\": \\"YubiKey-Admin-Primary\\", \\"registered\\": \\"2025-10-08T14:35:22Z\\", \\"last_used\\": \\"2025-10-08T16:20:05Z\\" } ], \\"backup_codes_remaining\\": 7, \\"mfa_login_history\\": [...] 
# Last 90 days }\\n} # GDPR deletion (MFA data removal after account deletion)\\nprovisioning compliance gdpr delete admin@example.com --include-mfa\\n``` ### MFA Metrics Dashboard ```bash\\n# Generate MFA metrics\\nprovisioning admin mfa-metrics --period 30d # Output:\\nMFA Metrics (Last 30 Days) Enrollment: Total Users: 42 MFA Enabled: 38 (90.5%) TOTP Only: 22 (57.9%) WebAuthn Only: 3 (7.9%) Both TOTP + WebAuthn: 13 (34.2%) No MFA: 4 (9.5%) ⚠ Authentication: Total Logins: 3,847 MFA Verified: 3,802 (98.8%) MFA Failed: 45 (1.2%) Backup Code Used: 7 (0.2%) Devices: TOTP Devices: 35 WebAuthn Devices: 47 Backup Codes Remaining (avg): 8.3 Incidents: MFA Resets: 2 Lost Devices: 3 Lockouts: 1\\n``` --- ## Quick Reference Card ### Daily Admin Operations ```bash\\n# Login with MFA\\nprovisioning login --user admin@example.com --workspace production\\nprovisioning mfa verify 123456 # Check MFA status\\nprovisioning mfa status # View registered devices\\nprovisioning mfa devices\\n``` ### MFA Management ```bash\\n# TOTP\\nprovisioning mfa totp enroll # Enroll TOTP\\nprovisioning mfa totp verify 123456 # Verify TOTP code\\nprovisioning mfa totp unenroll # Remove TOTP # WebAuthn\\nprovisioning mfa webauthn register --device-name \\"YubiKey\\" # Register key\\nprovisioning mfa webauthn list # List devices\\nprovisioning mfa webauthn remove \\"YubiKey\\" # Remove device # Backup Codes\\nprovisioning mfa backup-codes --show # View codes\\nprovisioning mfa backup-codes --regenerate # Generate new codes\\nprovisioning mfa verify-backup X7Y2-Z9A4-B6C1 # Use backup code\\n``` ### Emergency Procedures ```bash\\n# Lost device recovery (use backup code)\\nprovisioning login --user admin@example.com\\nprovisioning mfa verify-backup [code]\\nprovisioning mfa totp enroll # Re-enroll new device # MFA reset (admin only)\\nprovisioning admin mfa-reset user@example.com --reason \\"Lost all devices\\" # Check MFA compliance\\nprovisioning admin 
mfa-report\\n``` --- ## Summary Checklist ### For New Admins - [ ] Complete initial login with password\\n- [ ] Enroll TOTP (Google Authenticator, Authy)\\n- [ ] Verify TOTP code successfully\\n- [ ] Save backup codes in password manager\\n- [ ] Register WebAuthn device (YubiKey or Touch ID)\\n- [ ] Test full login flow with MFA\\n- [ ] Store backup codes in secure location\\n- [ ] Verify production access works with MFA ### For Security Team - [ ] Deploy Cedar MFA enforcement policies\\n- [ ] Verify 100% admin MFA enrollment\\n- [ ] Configure MFA audit logging\\n- [ ] Setup MFA compliance reports (SOC2, ISO 27001)\\n- [ ] Document MFA reset procedures\\n- [ ] Train admins on MFA usage\\n- [ ] Create emergency admin account (break-glass)\\n- [ ] Schedule quarterly MFA audits ### For Platform Team - [ ] Configure MFA settings in `config/mfa.toml`\\n- [ ] Deploy Cedar policies with MFA requirements\\n- [ ] Setup monitoring for MFA failures\\n- [ ] Configure alerts for MFA bypass attempts\\n- [ ] Document MFA architecture in ADR\\n- [ ] Test MFA enforcement in all environments\\n- [ ] Verify audit logs capture MFA events\\n- [ ] Create runbooks for MFA incidents --- ## Support and Resources ### Documentation - **MFA Implementation**: `/docs/architecture/MFA_IMPLEMENTATION_SUMMARY.md`\\n- **Cedar Policies**: `/docs/operations/CEDAR_POLICIES_PRODUCTION_GUIDE.md`\\n- **Break-Glass**: `/docs/operations/BREAK_GLASS_TRAINING_GUIDE.md`\\n- **Audit Logging**: `/docs/architecture/AUDIT_LOGGING_IMPLEMENTATION.md` ### Configuration Files - **MFA Config**: `provisioning/config/mfa.toml`\\n- **Cedar Policies**: `provisioning/config/cedar-policies/production.cedar`\\n- **Control Center**: `provisioning/platform/control-center/config.toml` ### CLI Help ```bash\\nprovisioning mfa help # MFA command help\\nprovisioning mfa totp --help # TOTP-specific help\\nprovisioning mfa webauthn --help # WebAuthn-specific help\\n``` ### Contact - **Security Team**: \\n- 
**Platform Team**: \\n- **Support Ticket**: --- **Document Status**: ✅ Complete\\n**Review Date**: 2025-11-08\\n**Maintained By**: Security Team, Platform Team","breadcrumbs":"MFA Admin Setup Guide » Timeline for Rollout","id":"1266","title":"Timeline for Rollout"},"1267":{"body":"A Rust-based orchestrator service that coordinates infrastructure provisioning workflows with pluggable storage backends and comprehensive migration tools. Source : provisioning/platform/orchestrator/","breadcrumbs":"Orchestrator » Provisioning Orchestrator","id":"1267","title":"Provisioning Orchestrator"},"1268":{"body":"The orchestrator implements a hybrid multi-storage approach: Rust Orchestrator : Handles coordination, queuing, and parallel execution Nushell Scripts : Execute the actual provisioning logic Pluggable Storage : Multiple storage backends with seamless migration REST API : HTTP interface for workflow submission and monitoring","breadcrumbs":"Orchestrator » Architecture","id":"1268","title":"Architecture"},"1269":{"body":"Multi-Storage Backends : Filesystem, SurrealDB Embedded, and SurrealDB Server options Task Queue : Priority-based task scheduling with retry logic Seamless Migration : Move data between storage backends with zero downtime Feature Flags : Compile-time backend selection for minimal dependencies Parallel Execution : Multiple tasks can run concurrently Status Tracking : Real-time task status and progress monitoring Advanced Features : Authentication, audit logging, and metrics (SurrealDB) Nushell Integration : Seamless execution of existing provisioning scripts RESTful API : HTTP endpoints for workflow management Test Environment Service : Automated containerized testing for taskservs, servers, and clusters Multi-Node Support : Test complex topologies including Kubernetes and etcd clusters Docker Integration : Automated container lifecycle management via Docker API","breadcrumbs":"Orchestrator » Key Features","id":"1269","title":"Key 
Features"},"127":{"body":"# Check Age version\\nage --version # Expected output: 1.2.1 or higher","breadcrumbs":"Prerequisites » Age","id":"127","title":"Age"},"1270":{"body":"","breadcrumbs":"Orchestrator » Quick Start","id":"1270","title":"Quick Start"},"1271":{"body":"Default Build (Filesystem Only) : cd provisioning/platform/orchestrator\\ncargo build --release\\ncargo run -- --port 8080 --data-dir ./data With SurrealDB Support : cargo build --release --features surrealdb # Run with SurrealDB embedded\\ncargo run --features surrealdb -- --storage-type surrealdb-embedded --data-dir ./data # Run with SurrealDB server\\ncargo run --features surrealdb -- --storage-type surrealdb-server \\\\ --surrealdb-url ws://localhost:8000 \\\\ --surrealdb-username admin --surrealdb-password secret","breadcrumbs":"Orchestrator » Build and Run","id":"1271","title":"Build and Run"},"1272":{"body":"curl -X POST http://localhost:8080/workflows/servers/create \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{ \\"infra\\": \\"production\\", \\"settings\\": \\"./settings.yaml\\", \\"servers\\": [\\"web-01\\", \\"web-02\\"], \\"check_mode\\": false, \\"wait\\": true }\'","breadcrumbs":"Orchestrator » Submit Workflow","id":"1272","title":"Submit Workflow"},"1273":{"body":"","breadcrumbs":"Orchestrator » API Endpoints","id":"1273","title":"API Endpoints"},"1274":{"body":"GET /health - Service health status GET /tasks - List all tasks GET /tasks/{id} - Get specific task status","breadcrumbs":"Orchestrator » Core Endpoints","id":"1274","title":"Core Endpoints"},"1275":{"body":"POST /workflows/servers/create - Submit server creation workflow POST /workflows/taskserv/create - Submit taskserv creation workflow POST /workflows/cluster/create - Submit cluster creation workflow","breadcrumbs":"Orchestrator » Workflow Endpoints","id":"1275","title":"Workflow Endpoints"},"1276":{"body":"POST /test/environments/create - Create test environment GET /test/environments - List all test 
environments GET /test/environments/{id} - Get environment details POST /test/environments/{id}/run - Run tests in environment DELETE /test/environments/{id} - Cleanup test environment GET /test/environments/{id}/logs - Get environment logs","breadcrumbs":"Orchestrator » Test Environment Endpoints","id":"1276","title":"Test Environment Endpoints"},"1277":{"body":"The orchestrator includes a comprehensive test environment service for automated containerized testing.","breadcrumbs":"Orchestrator » Test Environment Service","id":"1277","title":"Test Environment Service"},"1278":{"body":"1. Single Taskserv Test individual taskserv in isolated container. 2. Server Simulation Test complete server configurations with multiple taskservs. 3. Cluster Topology Test multi-node cluster configurations (Kubernetes, etcd, etc.).","breadcrumbs":"Orchestrator » Test Environment Types","id":"1278","title":"Test Environment Types"},"1279":{"body":"# Quick test\\nprovisioning test quick kubernetes # Single taskserv test\\nprovisioning test env single postgres --auto-start --auto-cleanup # Server simulation\\nprovisioning test env server web-01 [containerd kubernetes cilium] --auto-start # Cluster from template\\nprovisioning test topology load kubernetes_3node | test env cluster kubernetes","breadcrumbs":"Orchestrator » Nushell CLI Integration","id":"1279","title":"Nushell CLI Integration"},"128":{"body":"","breadcrumbs":"Prerequisites » Installing Missing Dependencies","id":"128","title":"Installing Missing Dependencies"},"1280":{"body":"Predefined multi-node cluster topologies: kubernetes_3node : 3-node HA Kubernetes cluster kubernetes_single : All-in-one Kubernetes node etcd_cluster : 3-member etcd cluster containerd_test : Standalone containerd testing postgres_redis : Database stack testing","breadcrumbs":"Orchestrator » Topology Templates","id":"1280","title":"Topology Templates"},"1281":{"body":"Feature Filesystem SurrealDB Embedded SurrealDB Server Dependencies None Local 
database Remote server Auth/RBAC Basic Advanced Advanced Real-time No Yes Yes Scalability Limited Medium High Complexity Low Medium High Best For Development Production Distributed","breadcrumbs":"Orchestrator » Storage Backends","id":"1281","title":"Storage Backends"},"1282":{"body":"User Guide : Test Environment Guide Architecture : Orchestrator Architecture Feature Summary : Orchestrator Features","breadcrumbs":"Orchestrator » Related Documentation","id":"1282","title":"Related Documentation"},"1283":{"body":"","breadcrumbs":"Orchestrator System » Hybrid Orchestrator Architecture (v3.0.0)","id":"1283","title":"Hybrid Orchestrator Architecture (v3.0.0)"},"1284":{"body":"A production-ready hybrid Rust/Nushell orchestrator has been implemented to solve deep call stack limitations while preserving all Nushell business logic.","breadcrumbs":"Orchestrator System » 🚀 Orchestrator Implementation Completed (2025-09-25)","id":"1284","title":"🚀 Orchestrator Implementation Completed (2025-09-25)"},"1285":{"body":"Rust Orchestrator : High-performance coordination layer with REST API Nushell Business Logic : All existing scripts preserved and enhanced File-based Persistence : Reliable task queue using lightweight file storage Priority Processing : Intelligent task scheduling with retry logic Deep Call Stack Solution : Eliminates template.nu:71 \\"Type not supported\\" errors","breadcrumbs":"Orchestrator System » Architecture Overview","id":"1285","title":"Architecture Overview"},"1286":{"body":"# Start orchestrator in background\\ncd provisioning/platform/orchestrator\\n./scripts/start-orchestrator.nu --background --provisioning-path \\"/usr/local/bin/provisioning\\" # Check orchestrator status\\n./scripts/start-orchestrator.nu --check # Stop orchestrator\\n./scripts/start-orchestrator.nu --stop # View logs\\ntail -f ./data/orchestrator.log","breadcrumbs":"Orchestrator System » Orchestrator Management","id":"1286","title":"Orchestrator Management"},"1287":{"body":"The 
orchestrator provides comprehensive workflow management:","breadcrumbs":"Orchestrator System » Workflow System","id":"1287","title":"Workflow System"},"1288":{"body":"# Submit server creation workflow\\nnu -c \\"use core/nulib/workflows/server_create.nu *; server_create_workflow \'wuji\' \'\' [] --check\\" # Traditional orchestrated server creation\\nprovisioning servers create --orchestrated --check","breadcrumbs":"Orchestrator System » Server Workflows","id":"1288","title":"Server Workflows"},"1289":{"body":"# Create taskserv workflow\\nnu -c \\"use core/nulib/workflows/taskserv.nu *; taskserv create \'kubernetes\' \'wuji\' --check\\" # Other taskserv operations\\nnu -c \\"use core/nulib/workflows/taskserv.nu *; taskserv delete \'kubernetes\' \'wuji\' --check\\"\\nnu -c \\"use core/nulib/workflows/taskserv.nu *; taskserv generate \'kubernetes\' \'wuji\'\\"\\nnu -c \\"use core/nulib/workflows/taskserv.nu *; taskserv check-updates\\"","breadcrumbs":"Orchestrator System » Taskserv Workflows","id":"1289","title":"Taskserv Workflows"},"129":{"body":"# Install Homebrew if not already installed\\n/bin/bash -c \\"$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)\\" # Install Nushell\\nbrew install nushell # Install KCL\\nbrew install kcl # Install Docker Desktop\\nbrew install --cask docker # Install SOPS\\nbrew install sops # Install Age\\nbrew install age # Optional: Install extras\\nbrew install k9s glow bat","breadcrumbs":"Prerequisites » macOS (using Homebrew)","id":"129","title":"macOS (using Homebrew)"},"1290":{"body":"# Create cluster workflow\\nnu -c \\"use core/nulib/workflows/cluster.nu *; cluster create \'buildkit\' \'wuji\' --check\\" # Delete cluster workflow\\nnu -c \\"use core/nulib/workflows/cluster.nu *; cluster delete \'buildkit\' \'wuji\' --check\\"","breadcrumbs":"Orchestrator System » Cluster Workflows","id":"1290","title":"Cluster Workflows"},"1291":{"body":"# List all workflows\\nnu -c \\"use 
core/nulib/workflows/management.nu *; workflow list\\" # Get workflow statistics\\nnu -c \\"use core/nulib/workflows/management.nu *; workflow stats\\" # Monitor workflow in real-time\\nnu -c \\"use core/nulib/workflows/management.nu *; workflow monitor \\" # Check orchestrator health\\nnu -c \\"use core/nulib/workflows/management.nu *; workflow orchestrator\\" # Get specific workflow status\\nnu -c \\"use core/nulib/workflows/management.nu *; workflow status \\"","breadcrumbs":"Orchestrator System » Workflow Management","id":"1291","title":"Workflow Management"},"1292":{"body":"The orchestrator exposes HTTP endpoints for external integration: Health : GET http://localhost:9090/v1/health List Tasks : GET http://localhost:9090/v1/tasks Task Status : GET http://localhost:9090/v1/tasks/{id} Server Workflow : POST http://localhost:9090/v1/workflows/servers/create Taskserv Workflow : POST http://localhost:9090/v1/workflows/taskserv/create Cluster Workflow : POST http://localhost:9090/v1/workflows/cluster/create","breadcrumbs":"Orchestrator System » REST API Endpoints","id":"1292","title":"REST API Endpoints"},"1293":{"body":"A comprehensive Cedar policy engine implementation with advanced security features, compliance checking, and anomaly detection. 
Source : provisioning/platform/control-center/","breadcrumbs":"Control Center » Control Center - Cedar Policy Engine","id":"1293","title":"Control Center - Cedar Policy Engine"},"1294":{"body":"","breadcrumbs":"Control Center » Key Features","id":"1294","title":"Key Features"},"1295":{"body":"Policy Evaluation : High-performance policy evaluation with context injection Versioning : Complete policy versioning with rollback capabilities Templates : Configuration-driven policy templates with variable substitution Validation : Comprehensive policy validation with syntax and semantic checking","breadcrumbs":"Control Center » Cedar Policy Engine","id":"1295","title":"Cedar Policy Engine"},"1296":{"body":"JWT Authentication : Secure token-based authentication Multi-Factor Authentication : MFA support for sensitive operations Role-Based Access Control : Flexible RBAC with policy integration Session Management : Secure session handling with timeouts","breadcrumbs":"Control Center » Security & Authentication","id":"1296","title":"Security & Authentication"},"1297":{"body":"SOC2 Type II : Complete SOC2 compliance validation HIPAA : Healthcare data protection compliance Audit Trail : Comprehensive audit logging and reporting Impact Analysis : Policy change impact assessment","breadcrumbs":"Control Center » Compliance Framework","id":"1297","title":"Compliance Framework"},"1298":{"body":"Statistical Analysis : Multiple statistical methods (Z-Score, IQR, Isolation Forest) Real-time Detection : Continuous monitoring of policy evaluations Alert Management : Configurable alerting through multiple channels Baseline Learning : Adaptive baseline calculation for improved accuracy","breadcrumbs":"Control Center » Anomaly Detection","id":"1298","title":"Anomaly Detection"},"1299":{"body":"SurrealDB Integration : High-performance graph database backend Policy Storage : Versioned policy storage with metadata Metrics Storage : Policy evaluation metrics and analytics Compliance Records : 
Complete compliance audit trails","breadcrumbs":"Control Center » Storage & Persistence","id":"1299","title":"Storage & Persistence"},"13":{"body":"This guide will help you install Infrastructure Automation on your machine and get it ready for use.","breadcrumbs":"Installation Guide » Installation Guide","id":"13","title":"Installation Guide"},"130":{"body":"# Update package list\\nsudo apt update # Install prerequisites\\nsudo apt install -y curl git build-essential # Install Nushell (from GitHub releases)\\ncurl -LO https://github.com/nushell/nushell/releases/download/0.107.1/nu-0.107.1-x86_64-linux-musl.tar.gz\\ntar xzf nu-0.107.1-x86_64-linux-musl.tar.gz\\nsudo mv nu /usr/local/bin/ # Install KCL\\ncurl -LO https://github.com/kcl-lang/cli/releases/download/v0.11.2/kcl-v0.11.2-linux-amd64.tar.gz\\ntar xzf kcl-v0.11.2-linux-amd64.tar.gz\\nsudo mv kcl /usr/local/bin/ # Install Docker\\nsudo apt install -y docker.io\\nsudo systemctl enable --now docker\\nsudo usermod -aG docker $USER # Install SOPS\\ncurl -LO https://github.com/getsops/sops/releases/download/v3.10.2/sops-v3.10.2.linux.amd64\\nchmod +x sops-v3.10.2.linux.amd64\\nsudo mv sops-v3.10.2.linux.amd64 /usr/local/bin/sops # Install Age\\nsudo apt install -y age","breadcrumbs":"Prerequisites » Ubuntu/Debian","id":"130","title":"Ubuntu/Debian"},"1300":{"body":"","breadcrumbs":"Control Center » Quick Start","id":"1300","title":"Quick Start"},"1301":{"body":"cd provisioning/platform/control-center\\ncargo build --release","breadcrumbs":"Control Center » Installation","id":"1301","title":"Installation"},"1302":{"body":"Copy and edit the configuration: cp config.toml.example config.toml Configuration example: [database]\\nurl = \\"surreal://localhost:8000\\"\\nusername = \\"root\\"\\npassword = \\"your-password\\" [auth]\\njwt_secret = \\"your-super-secret-key\\"\\nrequire_mfa = true [compliance.soc2]\\nenabled = true [anomaly]\\nenabled = true\\ndetection_threshold = 2.5","breadcrumbs":"Control Center » 
Configuration","id":"1302","title":"Configuration"},"1303":{"body":"./target/release/control-center server --port 8080","breadcrumbs":"Control Center » Start Server","id":"1303","title":"Start Server"},"1304":{"body":"curl -X POST http://localhost:8080/policies/evaluate \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{ \\"principal\\": {\\"id\\": \\"user123\\", \\"roles\\": [\\"Developer\\"]}, \\"action\\": {\\"id\\": \\"access\\"}, \\"resource\\": {\\"id\\": \\"sensitive-db\\", \\"classification\\": \\"confidential\\"}, \\"context\\": {\\"mfa_enabled\\": true, \\"location\\": \\"US\\"} }\'","breadcrumbs":"Control Center » Test Policy Evaluation","id":"1304","title":"Test Policy Evaluation"},"1305":{"body":"","breadcrumbs":"Control Center » Policy Examples","id":"1305","title":"Policy Examples"},"1306":{"body":"permit( principal, action == Action::\\"access\\", resource\\n) when { resource has classification && resource.classification in [\\"sensitive\\", \\"confidential\\"] && principal has mfa_enabled && principal.mfa_enabled == true\\n};","breadcrumbs":"Control Center » Multi-Factor Authentication Policy","id":"1306","title":"Multi-Factor Authentication Policy"},"1307":{"body":"permit( principal, action in [Action::\\"deploy\\", Action::\\"modify\\", Action::\\"delete\\"], resource\\n) when { resource has environment && resource.environment == \\"production\\" && principal has approval && principal.approval.approved_by in [\\"ProductionAdmin\\", \\"SRE\\"]\\n};","breadcrumbs":"Control Center » Production Approval Policy","id":"1307","title":"Production Approval Policy"},"1308":{"body":"permit( principal, action, resource\\n) when { context has geo && context.geo has country && context.geo.country in [\\"US\\", \\"CA\\", \\"GB\\", \\"DE\\"]\\n};","breadcrumbs":"Control Center » Geographic Restrictions","id":"1308","title":"Geographic Restrictions"},"1309":{"body":"","breadcrumbs":"Control Center » CLI Commands","id":"1309","title":"CLI 
Commands"},"131":{"body":"# Install Nushell\\nsudo dnf install -y nushell # Install KCL (from releases)\\ncurl -LO https://github.com/kcl-lang/cli/releases/download/v0.11.2/kcl-v0.11.2-linux-amd64.tar.gz\\ntar xzf kcl-v0.11.2-linux-amd64.tar.gz\\nsudo mv kcl /usr/local/bin/ # Install Docker\\nsudo dnf install -y docker\\nsudo systemctl enable --now docker\\nsudo usermod -aG docker $USER # Install SOPS\\nsudo dnf install -y sops # Install Age\\nsudo dnf install -y age","breadcrumbs":"Prerequisites » Fedora/RHEL","id":"131","title":"Fedora/RHEL"},"1310":{"body":"# Validate policies\\ncontrol-center policy validate policies/ # Test policy with test data\\ncontrol-center policy test policies/mfa.cedar tests/data/mfa_test.json # Analyze policy impact\\ncontrol-center policy impact policies/new_policy.cedar","breadcrumbs":"Control Center » Policy Management","id":"1310","title":"Policy Management"},"1311":{"body":"# Check SOC2 compliance\\ncontrol-center compliance soc2 # Check HIPAA compliance\\ncontrol-center compliance hipaa # Generate compliance report\\ncontrol-center compliance report --format html","breadcrumbs":"Control Center » Compliance Checking","id":"1311","title":"Compliance Checking"},"1312":{"body":"","breadcrumbs":"Control Center » API Endpoints","id":"1312","title":"API Endpoints"},"1313":{"body":"POST /policies/evaluate - Evaluate policy decision GET /policies - List all policies POST /policies - Create new policy PUT /policies/{id} - Update policy DELETE /policies/{id} - Delete policy","breadcrumbs":"Control Center » Policy Evaluation","id":"1313","title":"Policy Evaluation"},"1314":{"body":"GET /policies/{id}/versions - List policy versions GET /policies/{id}/versions/{version} - Get specific version POST /policies/{id}/rollback/{version} - Rollback to version","breadcrumbs":"Control Center » Policy Versions","id":"1314","title":"Policy Versions"},"1315":{"body":"GET /compliance/soc2 - SOC2 compliance check GET /compliance/hipaa - HIPAA compliance 
check GET /compliance/report - Generate compliance report","breadcrumbs":"Control Center » Compliance","id":"1315","title":"Compliance"},"1316":{"body":"GET /anomalies - List detected anomalies GET /anomalies/{id} - Get anomaly details POST /anomalies/detect - Trigger anomaly detection","breadcrumbs":"Control Center » Anomaly Detection","id":"1316","title":"Anomaly Detection"},"1317":{"body":"","breadcrumbs":"Control Center » Architecture","id":"1317","title":"Architecture"},"1318":{"body":"Policy Engine (src/policies/engine.rs) Cedar policy evaluation Context injection Caching and optimization Storage Layer (src/storage/) SurrealDB integration Policy versioning Metrics storage Compliance Framework (src/compliance/) SOC2 checker HIPAA validator Report generation Anomaly Detection (src/anomaly/) Statistical analysis Real-time monitoring Alert management Authentication (src/auth.rs) JWT token management Password hashing Session handling","breadcrumbs":"Control Center » Core Components","id":"1318","title":"Core Components"},"1319":{"body":"The system follows PAP (Project Architecture Principles) with: No hardcoded values : All behavior controlled via configuration Dynamic loading : Policies and rules loaded from configuration Template-based : Policy generation through templates Environment-aware : Different configs for dev/test/prod","breadcrumbs":"Control Center » Configuration-Driven Design","id":"1319","title":"Configuration-Driven Design"},"132":{"body":"","breadcrumbs":"Prerequisites » Network Requirements","id":"132","title":"Network Requirements"},"1320":{"body":"","breadcrumbs":"Control Center » Deployment","id":"1320","title":"Deployment"},"1321":{"body":"FROM rust:1.75 as builder\\nWORKDIR /app\\nCOPY . 
.\\nRUN cargo build --release FROM debian:bookworm-slim\\nRUN apt-get update && apt-get install -y ca-certificates\\nCOPY --from=builder /app/target/release/control-center /usr/local/bin/\\nEXPOSE 8080\\nCMD [\\"control-center\\", \\"server\\"]","breadcrumbs":"Control Center » Docker","id":"1321","title":"Docker"},"1322":{"body":"apiVersion: apps/v1\\nkind: Deployment\\nmetadata: name: control-center\\nspec: replicas: 3 template: spec: containers: - name: control-center image: control-center:latest ports: - containerPort: 8080 env: - name: DATABASE_URL value: \\"surreal://surrealdb:8000\\"","breadcrumbs":"Control Center » Kubernetes","id":"1322","title":"Kubernetes"},"1323":{"body":"Architecture : Cedar Authorization User Guide : Authentication Layer","breadcrumbs":"Control Center » Related Documentation","id":"1323","title":"Related Documentation"},"1324":{"body":"Interactive Ratatui-based installer for the Provisioning Platform with Nushell fallback for automation. Source : provisioning/platform/installer/ Status : COMPLETE - All 7 UI screens implemented (1,480 lines)","breadcrumbs":"Installer » Provisioning Platform Installer","id":"1324","title":"Provisioning Platform Installer"},"1325":{"body":"Rich Interactive TUI : Beautiful Ratatui interface with real-time feedback Headless Mode : Automation-friendly with Nushell scripts One-Click Deploy : Single command to deploy entire platform Platform Agnostic : Supports Docker, Podman, Kubernetes, OrbStack Live Progress : Real-time deployment progress and logs Health Checks : Automatic service health verification","breadcrumbs":"Installer » Features","id":"1325","title":"Features"},"1326":{"body":"cd provisioning/platform/installer\\ncargo build --release\\ncargo install --path .\\n```plaintext ## Usage ### Interactive TUI (Default) ```bash\\nprovisioning-installer\\n```plaintext The TUI guides you through: 1. Platform detection (Docker, Podman, K8s, OrbStack)\\n2. 
Deployment mode selection (Solo, Multi-User, CI/CD, Enterprise)\\n3. Service selection (check/uncheck services)\\n4. Configuration (domain, ports, secrets)\\n5. Live deployment with progress tracking\\n6. Success screen with access URLs ### Headless Mode (Automation) ```bash\\n# Quick deploy with auto-detection\\nprovisioning-installer --headless --mode solo --yes # Fully specified\\nprovisioning-installer \\\\ --headless \\\\ --platform orbstack \\\\ --mode solo \\\\ --services orchestrator,control-center,coredns \\\\ --domain localhost \\\\ --yes # Use existing config file\\nprovisioning-installer --headless --config my-deployment.toml --yes\\n```plaintext ### Configuration Generation ```bash\\n# Generate config without deploying\\nprovisioning-installer --config-only # Deploy later with generated config\\nprovisioning-installer --headless --config ~/.provisioning/installer-config.toml --yes\\n```plaintext ## Deployment Platforms ### Docker Compose ```bash\\nprovisioning-installer --platform docker --mode solo\\n```plaintext **Requirements**: Docker 20.10+, docker-compose 2.0+ ### OrbStack (macOS) ```bash\\nprovisioning-installer --platform orbstack --mode solo\\n```plaintext **Requirements**: OrbStack installed, 4GB RAM, 2 CPU cores ### Podman (Rootless) ```bash\\nprovisioning-installer --platform podman --mode solo\\n```plaintext **Requirements**: Podman 4.0+, systemd ### Kubernetes ```bash\\nprovisioning-installer --platform kubernetes --mode enterprise\\n```plaintext **Requirements**: kubectl configured, Helm 3.0+ ## Deployment Modes ### Solo Mode (Development) - **Services**: 5 core services\\n- **Resources**: 2 CPU cores, 4GB RAM, 20GB disk\\n- **Use case**: Single developer, local testing ### Multi-User Mode (Team) - **Services**: 7 services\\n- **Resources**: 4 CPU cores, 8GB RAM, 50GB disk\\n- **Use case**: Team collaboration, shared infrastructure ### CI/CD Mode (Automation) - **Services**: 8-10 services\\n- **Resources**: 8 CPU cores, 16GB RAM, 100GB 
disk\\n- **Use case**: Automated pipelines, webhooks ### Enterprise Mode (Production) - **Services**: 15+ services\\n- **Resources**: 16 CPU cores, 32GB RAM, 500GB disk\\n- **Use case**: Production deployments, full observability ## CLI Options ```plaintext\\nprovisioning-installer [OPTIONS] OPTIONS: --headless Run in headless mode (no TUI) --mode Deployment mode [solo|multi-user|cicd|enterprise] --platform Target platform [docker|podman|kubernetes|orbstack] --services Comma-separated list of services --domain Domain/hostname (default: localhost) --yes, -y Skip confirmation prompts --config-only Generate config without deploying --config Use existing config file -h, --help Print help -V, --version Print version\\n```plaintext ## CI/CD Integration ### GitLab CI ```yaml\\ndeploy_platform: stage: deploy script: - provisioning-installer --headless --mode cicd --platform kubernetes --yes only: - main\\n```plaintext ### GitHub Actions ```yaml\\n- name: Deploy Provisioning Platform run: | provisioning-installer --headless --mode cicd --platform docker --yes\\n```plaintext ## Nushell Scripts (Fallback) If the Rust binary is unavailable: ```bash\\ncd provisioning/platform/installer/scripts\\nnu deploy.nu --mode solo --platform orbstack --yes\\n```plaintext ## Related Documentation - **Deployment Guide**: [Platform Deployment](../guides/from-scratch.md)\\n- **Architecture**: [Platform Overview](../architecture/ARCHITECTURE_OVERVIEW.md)","breadcrumbs":"Installer » Installation","id":"1326","title":"Installation"},"1327":{"body":"","breadcrumbs":"Installer System » Provisioning Platform Installer (v3.5.0)","id":"1327","title":"Provisioning Platform Installer (v3.5.0)"},"1328":{"body":"A comprehensive installer system supporting interactive, headless, and unattended deployment modes with automatic configuration management via TOML and MCP integration.","breadcrumbs":"Installer System » 🚀 Flexible Installation and Configuration System","id":"1328","title":"🚀 Flexible 
Installation and Configuration System"},"1329":{"body":"","breadcrumbs":"Installer System » Installation Modes","id":"1329","title":"Installation Modes"},"133":{"body":"If running platform services, ensure these ports are available: Service Port Protocol Purpose Orchestrator 8080 HTTP Workflow API Control Center 9090 HTTP Policy engine KMS Service 8082 HTTP Key management API Server 8083 HTTP REST API Extension Registry 8084 HTTP Extension discovery OCI Registry 5000 HTTP Artifact storage","breadcrumbs":"Prerequisites » Firewall Ports","id":"133","title":"Firewall Ports"},"1330":{"body":"Beautiful terminal user interface with step-by-step guidance. provisioning-installer Features : 7 interactive screens with progress tracking Real-time validation and error feedback Visual feedback for each configuration step Beautiful formatting with color and styling Nushell fallback for unsupported terminals Screens : Welcome and prerequisites check Deployment mode selection Infrastructure provider selection Configuration details Resource allocation (CPU, memory) Security settings Review and confirm","breadcrumbs":"Installer System » 1. Interactive TUI Mode","id":"1330","title":"1. Interactive TUI Mode"},"1331":{"body":"CLI-only installation without interactive prompts, suitable for scripting. provisioning-installer --headless --mode solo --yes Features : Fully automated CLI options All settings via command-line flags No user interaction required Perfect for CI/CD pipelines Verbose output with progress tracking Common Usage : # Solo deployment\\nprovisioning-installer --headless --mode solo --provider upcloud --yes # Multi-user deployment\\nprovisioning-installer --headless --mode multiuser --cpu 4 --memory 8192 --yes # CI/CD mode\\nprovisioning-installer --headless --mode cicd --config ci-config.toml --yes","breadcrumbs":"Installer System » 2. Headless Mode","id":"1331","title":"2. 
Headless Mode"},"1332":{"body":"Zero-interaction mode using pre-defined configuration files, ideal for infrastructure automation. provisioning-installer --unattended --config config.toml Features : Load all settings from TOML file Complete automation for GitOps workflows No user interaction or prompts Suitable for production deployments Comprehensive logging and audit trails","breadcrumbs":"Installer System » 3. Unattended Mode","id":"1332","title":"3. Unattended Mode"},"1333":{"body":"Each mode configures resource allocation and features appropriately: Mode CPUs Memory Use Case Solo 2 4GB Single user development MultiUser 4 8GB Team development, testing CICD 8 16GB CI/CD pipelines, testing Enterprise 16 32GB Production deployment","breadcrumbs":"Installer System » Deployment Modes","id":"1333","title":"Deployment Modes"},"1334":{"body":"","breadcrumbs":"Installer System » Configuration System","id":"1334","title":"Configuration System"},"1335":{"body":"Define installation parameters in TOML format for unattended mode: [installation]\\nmode = \\"solo\\" # solo, multiuser, cicd, enterprise\\nprovider = \\"upcloud\\" # upcloud, aws, etc. 
[resources]\\ncpu = 2000 # millicores\\nmemory = 4096 # MB\\ndisk = 50 # GB [security]\\nenable_mfa = true\\nenable_audit = true\\ntls_enabled = true [mcp]\\nenabled = true\\nendpoint = \\"http://localhost:9090\\"","breadcrumbs":"Installer System » TOML Configuration","id":"1335","title":"TOML Configuration"},"1336":{"body":"Settings are loaded in this order (highest priority wins): CLI Arguments - Direct command-line flags Environment Variables - PROVISIONING_* variables Configuration File - TOML file specified via --config MCP Integration - AI-powered intelligent defaults Built-in Defaults - System defaults","breadcrumbs":"Installer System » Configuration Loading Priority","id":"1336","title":"Configuration Loading Priority"},"1337":{"body":"Model Context Protocol integration provides intelligent configuration: 7 AI-Powered Settings Tools : Resource recommendation engine Provider selection helper Security policy suggester Performance optimizer Compliance checker Network configuration advisor Monitoring setup assistant # Use MCP for intelligent config suggestion\\nprovisioning-installer --unattended --mcp-suggest > config.toml","breadcrumbs":"Installer System » MCP Integration","id":"1337","title":"MCP Integration"},"1338":{"body":"","breadcrumbs":"Installer System » Deployment Automation","id":"1338","title":"Deployment Automation"},"1339":{"body":"Complete deployment automation scripts for popular container runtimes: # Docker deployment\\n./provisioning/platform/installer/deploy/docker.nu --config config.toml # Podman deployment\\n./provisioning/platform/installer/deploy/podman.nu --config config.toml # Kubernetes deployment\\n./provisioning/platform/installer/deploy/kubernetes.nu --config config.toml # OrbStack deployment\\n./provisioning/platform/installer/deploy/orbstack.nu --config config.toml","breadcrumbs":"Installer System » Nushell Scripts","id":"1339","title":"Nushell Scripts"},"134":{"body":"The platform requires outbound internet access to: Download 
dependencies and updates Pull container images Access cloud provider APIs (AWS, UpCloud) Fetch extension packages","breadcrumbs":"Prerequisites » External Connectivity","id":"134","title":"External Connectivity"},"1340":{"body":"Infrastructure components can query MCP and install themselves: # Taskservs auto-install with dependencies\\ntaskserv install-self kubernetes\\ntaskserv install-self prometheus\\ntaskserv install-self cilium","breadcrumbs":"Installer System » Self-Installation","id":"1340","title":"Self-Installation"},"1341":{"body":"# Show interactive installer\\nprovisioning-installer # Show help\\nprovisioning-installer --help # Show available modes\\nprovisioning-installer --list-modes # Show available providers\\nprovisioning-installer --list-providers # List available templates\\nprovisioning-installer --list-templates # Validate configuration file\\nprovisioning-installer --validate --config config.toml # Dry-run (check without installing)\\nprovisioning-installer --config config.toml --check # Full unattended installation\\nprovisioning-installer --unattended --config config.toml # Headless with specific settings\\nprovisioning-installer --headless --mode solo --provider upcloud --cpu 2 --memory 4096 --yes","breadcrumbs":"Installer System » Command Reference","id":"1341","title":"Command Reference"},"1342":{"body":"","breadcrumbs":"Installer System » Integration Examples","id":"1342","title":"Integration Examples"},"1343":{"body":"# Define in Git\\ncat > infrastructure/installer.toml << EOF\\n[installation]\\nmode = \\"multiuser\\"\\nprovider = \\"upcloud\\" [resources]\\ncpu = 4\\nmemory = 8192\\nEOF # Deploy via CI/CD\\nprovisioning-installer --unattended --config infrastructure/installer.toml","breadcrumbs":"Installer System » GitOps Workflow","id":"1343","title":"GitOps Workflow"},"1344":{"body":"# Call installer as part of Terraform provisioning\\nresource \\"null_resource\\" \\"provisioning_installer\\" { provisioner \\"local-exec\\" { command 
= \\"provisioning-installer --unattended --config ${var.config_file}\\" }\\n}","breadcrumbs":"Installer System » Terraform Integration","id":"1344","title":"Terraform Integration"},"1345":{"body":"- name: Run provisioning installer shell: provisioning-installer --unattended --config /tmp/config.toml vars: ansible_python_interpreter: /usr/bin/python3","breadcrumbs":"Installer System » Ansible Integration","id":"1345","title":"Ansible Integration"},"1346":{"body":"Pre-built templates available in provisioning/config/installer-templates/: solo-dev.toml - Single developer setup team-test.toml - Team testing environment cicd-pipeline.toml - CI/CD integration enterprise-prod.toml - Production deployment kubernetes-ha.toml - High-availability Kubernetes multicloud.toml - Multi-provider setup","breadcrumbs":"Installer System » Configuration Templates","id":"1346","title":"Configuration Templates"},"1347":{"body":"User Guide : user/provisioning-installer-guide.md Deployment Guide : operations/installer-deployment-guide.md Configuration Guide : infrastructure/installer-configuration-guide.md","breadcrumbs":"Installer System » Documentation","id":"1347","title":"Documentation"},"1348":{"body":"# Show installer help\\nprovisioning-installer --help # Show detailed documentation\\nprovisioning help installer # Validate your configuration\\nprovisioning-installer --validate --config your-config.toml # Get configuration suggestions from MCP\\nprovisioning-installer --config-suggest","breadcrumbs":"Installer System » Help and Support","id":"1348","title":"Help and Support"},"1349":{"body":"If Ratatui TUI is not available, the installer automatically falls back to: Interactive Nushell prompt system Same functionality, text-based interface Full feature parity with TUI version","breadcrumbs":"Installer System » Nushell Fallback","id":"1349","title":"Nushell Fallback"},"135":{"body":"If you plan to use cloud providers, prepare credentials:","breadcrumbs":"Prerequisites » Cloud Provider 
Credentials (Optional)","id":"135","title":"Cloud Provider Credentials (Optional)"},"1350":{"body":"A comprehensive REST API server for remote provisioning operations, enabling thin clients and CI/CD pipeline integration. Source : provisioning/platform/provisioning-server/","breadcrumbs":"Provisioning Server » Provisioning API Server","id":"1350","title":"Provisioning API Server"},"1351":{"body":"Comprehensive REST API : Complete provisioning operations via HTTP JWT Authentication : Secure token-based authentication RBAC System : Role-based access control (Admin, Operator, Developer, Viewer) Async Operations : Long-running tasks with status tracking Nushell Integration : Direct execution of provisioning CLI commands Audit Logging : Complete operation tracking for compliance Metrics : Prometheus-compatible metrics endpoint CORS Support : Configurable cross-origin resource sharing Health Checks : Built-in health and readiness endpoints","breadcrumbs":"Provisioning Server » Features","id":"1351","title":"Features"},"1352":{"body":"┌─────────────────┐\\n│ REST Client │\\n│ (curl, CI/CD) │\\n└────────┬────────┘ │ HTTPS/JWT ▼\\n┌─────────────────┐\\n│ API Gateway │\\n│ - Routes │\\n│ - Auth │\\n│ - RBAC │\\n└────────┬────────┘ │ ▼\\n┌─────────────────┐\\n│ Async Task Mgr │\\n│ - Queue │\\n│ - Status │\\n└────────┬────────┘ │ ▼\\n┌─────────────────┐\\n│ Nushell Exec │\\n│ - CLI wrapper │\\n│ - Timeout │\\n└─────────────────┘\\n```plaintext ## Installation ```bash\\ncd provisioning/platform/provisioning-server\\ncargo build --release\\n```plaintext ## Configuration Create `config.toml`: ```toml\\n[server]\\nhost = \\"0.0.0.0\\"\\nport = 8083\\ncors_enabled = true [auth]\\njwt_secret = \\"your-secret-key-here\\"\\ntoken_expiry_hours = 24\\nrefresh_token_expiry_hours = 168 [provisioning]\\ncli_path = \\"/usr/local/bin/provisioning\\"\\ntimeout_seconds = 300\\nmax_concurrent_operations = 10 [logging]\\nlevel = \\"info\\"\\njson_format = false\\n```plaintext ## Usage ### 
Starting the Server ```bash\\n# Using config file\\nprovisioning-server --config config.toml # Custom settings\\nprovisioning-server \\\\ --host 0.0.0.0 \\\\ --port 8083 \\\\ --jwt-secret \\"my-secret\\" \\\\ --cli-path \\"/usr/local/bin/provisioning\\" \\\\ --log-level debug\\n```plaintext ### Authentication #### Login ```bash\\ncurl -X POST http://localhost:8083/v1/auth/login \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{ \\"username\\": \\"admin\\", \\"password\\": \\"admin123\\" }\'\\n```plaintext Response: ```json\\n{ \\"token\\": \\"eyJhbGc...\\", \\"refresh_token\\": \\"eyJhbGc...\\", \\"expires_in\\": 86400\\n}\\n```plaintext #### Using Token ```bash\\nexport TOKEN=\\"eyJhbGc...\\" curl -X GET http://localhost:8083/v1/servers \\\\ -H \\"Authorization: Bearer $TOKEN\\"\\n```plaintext ## API Endpoints ### Authentication - `POST /v1/auth/login` - User login\\n- `POST /v1/auth/refresh` - Refresh access token ### Servers - `GET /v1/servers` - List all servers\\n- `POST /v1/servers/create` - Create new server\\n- `DELETE /v1/servers/{id}` - Delete server\\n- `GET /v1/servers/{id}/status` - Get server status ### Taskservs - `GET /v1/taskservs` - List all taskservs\\n- `POST /v1/taskservs/create` - Create taskserv\\n- `DELETE /v1/taskservs/{id}` - Delete taskserv\\n- `GET /v1/taskservs/{id}/status` - Get taskserv status ### Workflows - `POST /v1/workflows/submit` - Submit workflow\\n- `GET /v1/workflows/{id}` - Get workflow details\\n- `GET /v1/workflows/{id}/status` - Get workflow status\\n- `POST /v1/workflows/{id}/cancel` - Cancel workflow ### Operations - `GET /v1/operations` - List all operations\\n- `GET /v1/operations/{id}` - Get operation status\\n- `POST /v1/operations/{id}/cancel` - Cancel operation ### System - `GET /health` - Health check (no auth required)\\n- `GET /v1/version` - Version information\\n- `GET /v1/metrics` - Prometheus metrics ## RBAC Roles ### Admin Role Full system access including all operations, workspace management, and 
system administration. ### Operator Role Infrastructure operations including create/delete servers, taskservs, clusters, and workflow management. ### Developer Role Read access plus SSH to servers, view workflows and operations. ### Viewer Role Read-only access to all resources and status information. ## Security Best Practices 1. **Change Default Credentials**: Update all default usernames/passwords\\n2. **Use Strong JWT Secret**: Generate secure random string (32+ characters)\\n3. **Enable TLS**: Use HTTPS in production\\n4. **Restrict CORS**: Configure specific allowed origins\\n5. **Enable mTLS**: For client certificate authentication\\n6. **Regular Token Rotation**: Implement token refresh strategy\\n7. **Audit Logging**: Enable audit logs for compliance ## CI/CD Integration ### GitHub Actions ```yaml\\n- name: Deploy Infrastructure run: | TOKEN=$(curl -X POST https://api.example.com/v1/auth/login \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{\\"username\\":\\"${{ secrets.API_USER }}\\",\\"password\\":\\"${{ secrets.API_PASS }}\\"}\' \\\\ | jq -r \'.token\') curl -X POST https://api.example.com/v1/servers/create \\\\ -H \\"Authorization: Bearer $TOKEN\\" \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{\\"workspace\\": \\"production\\", \\"provider\\": \\"upcloud\\", \\"plan\\": \\"2xCPU-4GB\\"}\'\\n```plaintext ## Related Documentation - **API Reference**: [REST API Documentation](../api/rest-api.md)\\n- **Architecture**: [API Gateway Integration](../architecture/integration-patterns.md)","breadcrumbs":"Provisioning Server » Architecture","id":"1352","title":"Architecture"},"1353":{"body":"This comprehensive guide covers creating, managing, and maintaining infrastructure using Infrastructure Automation.","breadcrumbs":"Infrastructure Management » Infrastructure Management Guide","id":"1353","title":"Infrastructure Management Guide"},"1354":{"body":"Infrastructure lifecycle management Server provisioning and management Task service 
installation and configuration Cluster deployment and orchestration Scaling and optimization strategies Monitoring and maintenance procedures Cost management and optimization","breadcrumbs":"Infrastructure Management » What You\'ll Learn","id":"1354","title":"What You\'ll Learn"},"1355":{"body":"","breadcrumbs":"Infrastructure Management » Infrastructure Concepts","id":"1355","title":"Infrastructure Concepts"},"1356":{"body":"Component Description Examples Servers Virtual machines or containers Web servers, databases, workers Task Services Software installed on servers Kubernetes, Docker, databases Clusters Groups of related services Web clusters, database clusters Networks Connectivity between resources VPCs, subnets, load balancers Storage Persistent data storage Block storage, object storage","breadcrumbs":"Infrastructure Management » Infrastructure Components","id":"1356","title":"Infrastructure Components"},"1357":{"body":"Plan → Create → Deploy → Monitor → Scale → Update → Retire\\n```plaintext Each phase has specific commands and considerations. 
## Server Management ### Understanding Server Configuration Servers are defined in KCL configuration files: ```kcl\\n# Example server configuration\\nimport models.server servers: [ server.Server { name = \\"web-01\\" provider = \\"aws\\" # aws, upcloud, local plan = \\"t3.medium\\" # Instance type/plan os = \\"ubuntu-22.04\\" # Operating system zone = \\"us-west-2a\\" # Availability zone # Network configuration vpc = \\"main\\" subnet = \\"web\\" security_groups = [\\"web\\", \\"ssh\\"] # Storage configuration storage = { root_size = \\"50GB\\" additional = [ {name = \\"data\\", size = \\"100GB\\", type = \\"gp3\\"} ] } # Task services to install taskservs = [ \\"containerd\\", \\"kubernetes\\", \\"monitoring\\" ] # Tags for organization tags = { environment = \\"production\\" team = \\"platform\\" cost_center = \\"engineering\\" } }\\n]\\n```plaintext ### Server Lifecycle Commands #### Creating Servers ```bash\\n# Plan server creation (dry run)\\nprovisioning server create --infra my-infra --check # Create servers\\nprovisioning server create --infra my-infra # Create with specific parameters\\nprovisioning server create --infra my-infra --wait --yes # Create single server type\\nprovisioning server create web --infra my-infra\\n```plaintext #### Managing Existing Servers ```bash\\n# List all servers\\nprovisioning server list --infra my-infra # Show detailed server information\\nprovisioning show servers --infra my-infra # Show specific server\\nprovisioning show servers web-01 --infra my-infra # Get server status\\nprovisioning server status web-01 --infra my-infra\\n```plaintext #### Server Operations ```bash\\n# Start/stop servers\\nprovisioning server start web-01 --infra my-infra\\nprovisioning server stop web-01 --infra my-infra # Restart servers\\nprovisioning server restart web-01 --infra my-infra # Resize server\\nprovisioning server resize web-01 --plan t3.large --infra my-infra # Update server configuration\\nprovisioning server update web-01 --infra 
my-infra\\n```plaintext #### SSH Access ```bash\\n# SSH to server\\nprovisioning server ssh web-01 --infra my-infra # SSH with specific user\\nprovisioning server ssh web-01 --user admin --infra my-infra # Execute command on server\\nprovisioning server exec web-01 \\"systemctl status kubernetes\\" --infra my-infra # Copy files to/from server\\nprovisioning server copy local-file.txt web-01:/tmp/ --infra my-infra\\nprovisioning server copy web-01:/var/log/app.log ./logs/ --infra my-infra\\n```plaintext #### Server Deletion ```bash\\n# Plan server deletion (dry run)\\nprovisioning server delete --infra my-infra --check # Delete specific server\\nprovisioning server delete web-01 --infra my-infra # Delete with confirmation\\nprovisioning server delete web-01 --infra my-infra --yes # Delete but keep storage\\nprovisioning server delete web-01 --infra my-infra --keepstorage\\n```plaintext ## Task Service Management ### Understanding Task Services Task services are software components installed on servers: - **Container Runtimes**: containerd, cri-o, docker\\n- **Orchestration**: kubernetes, nomad\\n- **Networking**: cilium, calico, haproxy\\n- **Storage**: rook-ceph, longhorn, nfs\\n- **Databases**: postgresql, mysql, mongodb\\n- **Monitoring**: prometheus, grafana, alertmanager ### Task Service Configuration ```kcl\\n# Task service configuration example\\ntaskservs: { kubernetes: { version = \\"1.28\\" network_plugin = \\"cilium\\" ingress_controller = \\"nginx\\" storage_class = \\"gp3\\" # Cluster configuration cluster = { name = \\"production\\" pod_cidr = \\"10.244.0.0/16\\" service_cidr = \\"10.96.0.0/12\\" } # Node configuration nodes = { control_plane = [\\"master-01\\", \\"master-02\\", \\"master-03\\"] workers = [\\"worker-01\\", \\"worker-02\\", \\"worker-03\\"] } } postgresql: { version = \\"15\\" port = 5432 max_connections = 200 shared_buffers = \\"256MB\\" # High availability replication = { enabled = true replicas = 2 sync_mode = \\"synchronous\\" } # 
Backup configuration backup = { enabled = true schedule = \\"0 2 * * *\\" # Daily at 2 AM retention = \\"30d\\" } }\\n}\\n```plaintext ### Task Service Commands #### Installing Services ```bash\\n# Install single service\\nprovisioning taskserv create kubernetes --infra my-infra # Install multiple services\\nprovisioning taskserv create containerd kubernetes cilium --infra my-infra # Install with specific version\\nprovisioning taskserv create kubernetes --version 1.28 --infra my-infra # Install on specific servers\\nprovisioning taskserv create postgresql --servers db-01,db-02 --infra my-infra\\n```plaintext #### Managing Services ```bash\\n# List available services\\nprovisioning taskserv list # List installed services\\nprovisioning taskserv list --infra my-infra --installed # Show service details\\nprovisioning taskserv show kubernetes --infra my-infra # Check service status\\nprovisioning taskserv status kubernetes --infra my-infra # Check service health\\nprovisioning taskserv health kubernetes --infra my-infra\\n```plaintext #### Service Operations ```bash\\n# Start/stop services\\nprovisioning taskserv start kubernetes --infra my-infra\\nprovisioning taskserv stop kubernetes --infra my-infra # Restart services\\nprovisioning taskserv restart kubernetes --infra my-infra # Update services\\nprovisioning taskserv update kubernetes --infra my-infra # Configure services\\nprovisioning taskserv configure kubernetes --config cluster.yaml --infra my-infra\\n```plaintext #### Service Removal ```bash\\n# Remove service\\nprovisioning taskserv delete kubernetes --infra my-infra # Remove with data cleanup\\nprovisioning taskserv delete postgresql --cleanup-data --infra my-infra # Remove from specific servers\\nprovisioning taskserv delete kubernetes --servers worker-03 --infra my-infra\\n```plaintext ### Version Management ```bash\\n# Check for updates\\nprovisioning taskserv check-updates --infra my-infra # Check specific service updates\\nprovisioning taskserv 
check-updates kubernetes --infra my-infra # Show available versions\\nprovisioning taskserv versions kubernetes # Upgrade to latest version\\nprovisioning taskserv upgrade kubernetes --infra my-infra # Upgrade to specific version\\nprovisioning taskserv upgrade kubernetes --version 1.29 --infra my-infra\\n```plaintext ## Cluster Management ### Understanding Clusters Clusters are collections of services that work together to provide functionality: ```kcl\\n# Cluster configuration example\\nclusters: { web_cluster: { name = \\"web-application\\" description = \\"Web application cluster\\" # Services in the cluster services = [ { name = \\"nginx\\" replicas = 3 image = \\"nginx:1.24\\" ports = [80, 443] } { name = \\"app\\" replicas = 5 image = \\"myapp:latest\\" ports = [8080] } ] # Load balancer configuration load_balancer = { type = \\"application\\" health_check = \\"/health\\" ssl_cert = \\"wildcard.example.com\\" } # Auto-scaling auto_scaling = { min_replicas = 2 max_replicas = 10 target_cpu = 70 target_memory = 80 } }\\n}\\n```plaintext ### Cluster Commands #### Creating Clusters ```bash\\n# Create cluster\\nprovisioning cluster create web-cluster --infra my-infra # Create with specific configuration\\nprovisioning cluster create web-cluster --config cluster.yaml --infra my-infra # Create and deploy\\nprovisioning cluster create web-cluster --deploy --infra my-infra\\n```plaintext #### Managing Clusters ```bash\\n# List available clusters\\nprovisioning cluster list # List deployed clusters\\nprovisioning cluster list --infra my-infra --deployed # Show cluster details\\nprovisioning cluster show web-cluster --infra my-infra # Get cluster status\\nprovisioning cluster status web-cluster --infra my-infra\\n```plaintext #### Cluster Operations ```bash\\n# Deploy cluster\\nprovisioning cluster deploy web-cluster --infra my-infra # Scale cluster\\nprovisioning cluster scale web-cluster --replicas 10 --infra my-infra # Update cluster\\nprovisioning cluster update 
web-cluster --infra my-infra # Rolling update\\nprovisioning cluster update web-cluster --rolling --infra my-infra\\n```plaintext #### Cluster Deletion ```bash\\n# Delete cluster\\nprovisioning cluster delete web-cluster --infra my-infra # Delete with data cleanup\\nprovisioning cluster delete web-cluster --cleanup --infra my-infra\\n```plaintext ## Network Management ### Network Configuration ```kcl\\n# Network configuration\\nnetwork: { vpc = { cidr = \\"10.0.0.0/16\\" enable_dns = true enable_dhcp = true } subnets = [ { name = \\"web\\" cidr = \\"10.0.1.0/24\\" zone = \\"us-west-2a\\" public = true } { name = \\"app\\" cidr = \\"10.0.2.0/24\\" zone = \\"us-west-2b\\" public = false } { name = \\"data\\" cidr = \\"10.0.3.0/24\\" zone = \\"us-west-2c\\" public = false } ] security_groups = [ { name = \\"web\\" rules = [ {protocol = \\"tcp\\", port = 80, source = \\"0.0.0.0/0\\"} {protocol = \\"tcp\\", port = 443, source = \\"0.0.0.0/0\\"} ] } { name = \\"app\\" rules = [ {protocol = \\"tcp\\", port = 8080, source = \\"10.0.1.0/24\\"} ] } ] load_balancers = [ { name = \\"web-lb\\" type = \\"application\\" scheme = \\"internet-facing\\" subnets = [\\"web\\"] targets = [\\"web-01\\", \\"web-02\\"] } ]\\n}\\n```plaintext ### Network Commands ```bash\\n# Show network configuration\\nprovisioning network show --infra my-infra # Create network resources\\nprovisioning network create --infra my-infra # Update network configuration\\nprovisioning network update --infra my-infra # Test network connectivity\\nprovisioning network test --infra my-infra\\n```plaintext ## Storage Management ### Storage Configuration ```kcl\\n# Storage configuration\\nstorage: { # Block storage volumes = [ { name = \\"app-data\\" size = \\"100GB\\" type = \\"gp3\\" encrypted = true } ] # Object storage buckets = [ { name = \\"app-assets\\" region = \\"us-west-2\\" versioning = true encryption = \\"AES256\\" } ] # Backup configuration backup = { schedule = \\"0 1 * * *\\" # Daily at 1 AM 
retention = { daily = 7 weekly = 4 monthly = 12 } }\\n}\\n```plaintext ### Storage Commands ```bash\\n# Create storage resources\\nprovisioning storage create --infra my-infra # List storage\\nprovisioning storage list --infra my-infra # Backup data\\nprovisioning storage backup --infra my-infra # Restore from backup\\nprovisioning storage restore --backup latest --infra my-infra\\n```plaintext ## Monitoring and Observability ### Monitoring Setup ```bash\\n# Install monitoring stack\\nprovisioning taskserv create prometheus --infra my-infra\\nprovisioning taskserv create grafana --infra my-infra\\nprovisioning taskserv create alertmanager --infra my-infra # Configure monitoring\\nprovisioning taskserv configure prometheus --config monitoring.yaml --infra my-infra\\n```plaintext ### Health Checks ```bash\\n# Check overall infrastructure health\\nprovisioning health check --infra my-infra # Check specific components\\nprovisioning health check servers --infra my-infra\\nprovisioning health check taskservs --infra my-infra\\nprovisioning health check clusters --infra my-infra # Continuous monitoring\\nprovisioning health monitor --infra my-infra --watch\\n```plaintext ### Metrics and Alerting ```bash\\n# Get infrastructure metrics\\nprovisioning metrics get --infra my-infra # Set up alerts\\nprovisioning alerts create --config alerts.yaml --infra my-infra # List active alerts\\nprovisioning alerts list --infra my-infra\\n```plaintext ## Cost Management ### Cost Monitoring ```bash\\n# Show current costs\\nprovisioning cost show --infra my-infra # Cost breakdown by component\\nprovisioning cost breakdown --infra my-infra # Cost trends\\nprovisioning cost trends --period 30d --infra my-infra # Set cost alerts\\nprovisioning cost alert --threshold 1000 --infra my-infra\\n```plaintext ### Cost Optimization ```bash\\n# Analyze cost optimization opportunities\\nprovisioning cost optimize --infra my-infra # Show unused resources\\nprovisioning cost unused --infra my-infra # 
Right-size recommendations\\nprovisioning cost recommendations --infra my-infra\\n```plaintext ## Scaling Strategies ### Manual Scaling ```bash\\n# Scale servers\\nprovisioning server scale --count 5 --infra my-infra # Scale specific service\\nprovisioning taskserv scale kubernetes --nodes 3 --infra my-infra # Scale cluster\\nprovisioning cluster scale web-cluster --replicas 10 --infra my-infra\\n```plaintext ### Auto-scaling Configuration ```kcl\\n# Auto-scaling configuration\\nauto_scaling: { servers = { min_count = 2 max_count = 10 # Scaling metrics cpu_threshold = 70 memory_threshold = 80 # Scaling behavior scale_up_cooldown = \\"5m\\" scale_down_cooldown = \\"10m\\" } clusters = { web_cluster = { min_replicas = 3 max_replicas = 20 metrics = [ {type = \\"cpu\\", target = 70} {type = \\"memory\\", target = 80} {type = \\"requests\\", target = 1000} ] } }\\n}\\n```plaintext ## Disaster Recovery ### Backup Strategies ```bash\\n# Full infrastructure backup\\nprovisioning backup create --type full --infra my-infra # Incremental backup\\nprovisioning backup create --type incremental --infra my-infra # Schedule automated backups\\nprovisioning backup schedule --daily --time \\"02:00\\" --infra my-infra\\n```plaintext ### Recovery Procedures ```bash\\n# List available backups\\nprovisioning backup list --infra my-infra # Restore infrastructure\\nprovisioning restore --backup latest --infra my-infra # Partial restore\\nprovisioning restore --backup latest --components servers --infra my-infra # Test restore (dry run)\\nprovisioning restore --backup latest --test --infra my-infra\\n```plaintext ## Advanced Infrastructure Patterns ### Multi-Region Deployment ```kcl\\n# Multi-region configuration\\nregions: { primary = { name = \\"us-west-2\\" servers = [\\"web-01\\", \\"web-02\\", \\"db-01\\"] availability_zones = [\\"us-west-2a\\", \\"us-west-2b\\"] } secondary = { name = \\"us-east-1\\" servers = [\\"web-03\\", \\"web-04\\", \\"db-02\\"] availability_zones = 
[\\"us-east-1a\\", \\"us-east-1b\\"] } # Cross-region replication replication = { database = { primary = \\"us-west-2\\" replicas = [\\"us-east-1\\"] sync_mode = \\"async\\" } storage = { sync_schedule = \\"*/15 * * * *\\" # Every 15 minutes } }\\n}\\n```plaintext ### Blue-Green Deployment ```bash\\n# Create green environment\\nprovisioning generate infra --from production --name production-green # Deploy to green\\nprovisioning server create --infra production-green\\nprovisioning taskserv create --infra production-green\\nprovisioning cluster deploy --infra production-green # Switch traffic to green\\nprovisioning network switch --from production --to production-green # Decommission blue\\nprovisioning server delete --infra production --yes\\n```plaintext ### Canary Deployment ```bash\\n# Create canary environment\\nprovisioning cluster create web-cluster-canary --replicas 1 --infra my-infra # Route small percentage of traffic\\nprovisioning network route --target web-cluster-canary --weight 10 --infra my-infra # Monitor canary metrics\\nprovisioning metrics monitor web-cluster-canary --infra my-infra # Promote or rollback\\nprovisioning cluster promote web-cluster-canary --infra my-infra\\n# or\\nprovisioning cluster rollback web-cluster-canary --infra my-infra\\n```plaintext ## Troubleshooting Infrastructure ### Common Issues #### Server Creation Failures ```bash\\n# Check provider status\\nprovisioning provider status aws # Validate server configuration\\nprovisioning server validate web-01 --infra my-infra # Check quota limits\\nprovisioning provider quota --infra my-infra # Debug server creation\\nprovisioning --debug server create web-01 --infra my-infra\\n```plaintext #### Service Installation Failures ```bash\\n# Check service prerequisites\\nprovisioning taskserv check kubernetes --infra my-infra # Validate service configuration\\nprovisioning taskserv validate kubernetes --infra my-infra # Check service logs\\nprovisioning taskserv logs kubernetes 
--infra my-infra # Debug service installation\\nprovisioning --debug taskserv create kubernetes --infra my-infra\\n```plaintext #### Network Connectivity Issues ```bash\\n# Test network connectivity\\nprovisioning network test --infra my-infra # Check security groups\\nprovisioning network security-groups --infra my-infra # Trace network path\\nprovisioning network trace --from web-01 --to db-01 --infra my-infra\\n```plaintext ### Performance Optimization ```bash\\n# Analyze performance bottlenecks\\nprovisioning performance analyze --infra my-infra # Get performance recommendations\\nprovisioning performance recommendations --infra my-infra # Monitor resource utilization\\nprovisioning performance monitor --infra my-infra --duration 1h\\n```plaintext ## Testing Infrastructure The provisioning system includes a comprehensive **Test Environment Service** for automated testing of infrastructure components before deployment. ### Why Test Infrastructure? Testing infrastructure before production deployment helps: - **Validate taskserv configurations** before installing on production servers\\n- **Test integration** between multiple taskservs\\n- **Verify cluster topologies** (Kubernetes, etcd, etc.) before deployment\\n- **Catch configuration errors** early in the development cycle\\n- **Ensure compatibility** between components ### Test Environment Types #### 1. Single Taskserv Testing Test individual taskservs in isolated containers: ```bash\\n# Quick test (create, run, cleanup automatically)\\nprovisioning test quick kubernetes # Single taskserv with custom resources\\nprovisioning test env single postgres \\\\ --cpu 2000 \\\\ --memory 4096 \\\\ --auto-start \\\\ --auto-cleanup # Test with specific infrastructure context\\nprovisioning test env single redis --infra my-infra\\n```plaintext #### 2. 
Server Simulation Test complete server configurations with multiple taskservs: ```bash\\n# Simulate web server with multiple taskservs\\nprovisioning test env server web-01 [containerd kubernetes cilium] \\\\ --auto-start # Simulate database server\\nprovisioning test env server db-01 [postgres redis] \\\\ --infra prod-stack \\\\ --auto-start\\n```plaintext #### 3. Multi-Node Cluster Testing Test complex cluster topologies before production deployment: ```bash\\n# Test 3-node Kubernetes cluster\\nprovisioning test topology load kubernetes_3node | \\\\ test env cluster kubernetes --auto-start # Test etcd cluster\\nprovisioning test topology load etcd_cluster | \\\\ test env cluster etcd --auto-start # Test single-node Kubernetes\\nprovisioning test topology load kubernetes_single | \\\\ test env cluster kubernetes --auto-start\\n```plaintext ### Managing Test Environments ```bash\\n# List all test environments\\nprovisioning test env list # Check environment status\\nprovisioning test env status # View environment logs\\nprovisioning test env logs # Cleanup environment when done\\nprovisioning test env cleanup \\n```plaintext ### Available Topology Templates Pre-configured multi-node cluster templates: | Template | Description | Use Case |\\n|----------|-------------|----------|\\n| `kubernetes_3node` | 3-node HA K8s cluster | Production-like K8s testing |\\n| `kubernetes_single` | All-in-one K8s node | Development K8s testing |\\n| `etcd_cluster` | 3-member etcd cluster | Distributed consensus testing |\\n| `containerd_test` | Standalone containerd | Container runtime testing |\\n| `postgres_redis` | Database stack | Database integration testing | ### Test Environment Workflow Typical testing workflow: ```bash\\n# 1. Test new taskserv before deploying\\nprovisioning test quick kubernetes # 2. If successful, test server configuration\\nprovisioning test env server k8s-node [containerd kubernetes cilium] \\\\ --auto-start # 3. 
Test complete cluster topology\\nprovisioning test topology load kubernetes_3node | \\\\ test env cluster kubernetes --auto-start # 4. Deploy to production\\nprovisioning server create --infra production\\nprovisioning taskserv create kubernetes --infra production\\n```plaintext ### CI/CD Integration Integrate infrastructure testing into CI/CD pipelines: ```yaml\\n# GitLab CI example\\ntest-infrastructure: stage: test script: # Start orchestrator - ./scripts/start-orchestrator.nu --background # Test critical taskservs - provisioning test quick kubernetes - provisioning test quick postgres - provisioning test quick redis # Test cluster topology - provisioning test topology load kubernetes_3node | test env cluster kubernetes --auto-start artifacts: when: on_failure paths: - test-logs/\\n```plaintext ### Prerequisites Test environments require: 1. **Docker Running**: Test environments use Docker containers ```bash docker ps # Should work without errors Orchestrator Running : The orchestrator manages test containers cd provisioning/platform/orchestrator\\n./scripts/start-orchestrator.nu --background","breadcrumbs":"Infrastructure Management » Infrastructure Lifecycle","id":"1357","title":"Infrastructure Lifecycle"},"1358":{"body":"Custom Topology Testing Create custom topology configurations: # custom-topology.toml\\n[my_cluster]\\nname = \\"Custom Test Cluster\\"\\ncluster_type = \\"custom\\" [[my_cluster.nodes]]\\nname = \\"node-01\\"\\nrole = \\"primary\\"\\ntaskservs = [\\"postgres\\", \\"redis\\"]\\n[my_cluster.nodes.resources]\\ncpu_millicores = 2000\\nmemory_mb = 4096 [[my_cluster.nodes]]\\nname = \\"node-02\\"\\nrole = \\"replica\\"\\ntaskservs = [\\"postgres\\"]\\n[my_cluster.nodes.resources]\\ncpu_millicores = 1000\\nmemory_mb = 2048\\n```plaintext Load and test custom topology: ```bash\\nprovisioning test env cluster custom-app custom-topology.toml --auto-start\\n```plaintext #### Integration Testing Test taskserv dependencies: ```bash\\n# Test Kubernetes 
dependencies in order\\nprovisioning test quick containerd\\nprovisioning test quick etcd\\nprovisioning test quick kubernetes\\nprovisioning test quick cilium # Test complete stack\\nprovisioning test env server k8s-stack \\\\ [containerd etcd kubernetes cilium] \\\\ --auto-start\\n```plaintext ### Documentation For complete test environment documentation: - **Test Environment Guide**: `docs/user/test-environment-guide.md`\\n- **Detailed Usage**: `docs/user/test-environment-usage.md`\\n- **Orchestrator README**: `provisioning/platform/orchestrator/README.md` ## Best Practices ### 1. Infrastructure Design - **Principle of Least Privilege**: Grant minimal necessary access\\n- **Defense in Depth**: Multiple layers of security\\n- **High Availability**: Design for failure resilience\\n- **Scalability**: Plan for growth from the start ### 2. Operational Excellence ```bash\\n# Always validate before applying changes\\nprovisioning validate config --infra my-infra # Use check mode for dry runs\\nprovisioning server create --check --infra my-infra # Monitor continuously\\nprovisioning health monitor --infra my-infra # Regular backups\\nprovisioning backup schedule --daily --infra my-infra\\n```plaintext ### 3. Security ```bash\\n# Regular security updates\\nprovisioning taskserv update --security-only --infra my-infra # Encrypt sensitive data\\nprovisioning sops settings.k --infra my-infra # Audit access\\nprovisioning audit logs --infra my-infra\\n```plaintext ### 4. Cost Optimization ```bash\\n# Regular cost reviews\\nprovisioning cost analyze --infra my-infra # Right-size resources\\nprovisioning cost optimize --apply --infra my-infra # Use reserved instances for predictable workloads\\nprovisioning server reserve --infra my-infra\\n```plaintext ## Next Steps Now that you understand infrastructure management: 1. **Learn about extensions**: [Extension Development Guide](extension-development.md)\\n2. **Master configuration**: [Configuration Guide](configuration.md)\\n3. 
**Explore advanced examples**: [Examples and Tutorials](examples/)\\n4. **Set up monitoring and alerting**\\n5. **Implement automated scaling**\\n6. **Plan disaster recovery procedures** You now have the knowledge to build and manage robust, scalable cloud infrastructure!","breadcrumbs":"Infrastructure Management » Advanced Testing","id":"1358","title":"Advanced Testing"},"1359":{"body":"","breadcrumbs":"Infrastructure from Code Guide » Infrastructure-from-Code (IaC) Guide","id":"1359","title":"Infrastructure-from-Code (IaC) Guide"},"136":{"body":"AWS Access Key ID AWS Secret Access Key Configured via ~/.aws/credentials or environment variables","breadcrumbs":"Prerequisites » AWS","id":"136","title":"AWS"},"1360":{"body":"The Infrastructure-from-Code system automatically detects technologies in your project and infers infrastructure requirements based on organization-specific rules. It consists of three main commands: detect : Scan a project and identify technologies complete : Analyze gaps and recommend infrastructure components ifc : Full-pipeline orchestration (workflow)","breadcrumbs":"Infrastructure from Code Guide » Overview","id":"1360","title":"Overview"},"1361":{"body":"","breadcrumbs":"Infrastructure from Code Guide » Quick Start","id":"1361","title":"Quick Start"},"1362":{"body":"Scan a project directory for detected technologies: provisioning detect /path/to/project --out json\\n```plaintext **Output Example:** ```json\\n{ \\"detections\\": [ {\\"technology\\": \\"nodejs\\", \\"confidence\\": 0.95}, {\\"technology\\": \\"postgres\\", \\"confidence\\": 0.92} ], \\"overall_confidence\\": 0.93\\n}\\n```plaintext ### 2. 
Analyze Infrastructure Gaps Get a completeness assessment and recommendations: ```bash\\nprovisioning complete /path/to/project --out json\\n```plaintext **Output Example:** ```json\\n{ \\"completeness\\": 1.0, \\"changes_needed\\": 2, \\"is_safe\\": true, \\"change_summary\\": \\"+ Adding: postgres-backup, pg-monitoring\\"\\n}\\n```plaintext ### 3. Run Full Workflow Orchestrate detection → completion → assessment pipeline: ```bash\\nprovisioning ifc /path/to/project --org default\\n```plaintext **Output:** ```plaintext\\n━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\\n🔄 Infrastructure-from-Code Workflow\\n━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ STEP 1: Technology Detection\\n────────────────────────────\\n✓ Detected 2 technologies STEP 2: Infrastructure Completion\\n─────────────────────────────────\\n✓ Completeness: 1% ✅ Workflow Complete\\n```plaintext ## Command Reference ### detect Scan and detect technologies in a project. **Usage:** ```bash\\nprovisioning detect [PATH] [OPTIONS]\\n```plaintext **Arguments:** - `PATH`: Project directory to analyze (default: current directory) **Options:** - `-o, --out TEXT`: Output format - `text`, `json`, `yaml` (default: `text`)\\n- `-C, --high-confidence-only`: Only show detections with confidence > 0.8\\n- `--pretty`: Pretty-print JSON/YAML output\\n- `-x, --debug`: Enable debug output **Examples:** ```bash\\n# Detect with default text output\\nprovisioning detect /path/to/project # Get JSON output for parsing\\nprovisioning detect /path/to/project --out json | jq \'.detections\' # Show only high-confidence detections\\nprovisioning detect /path/to/project --high-confidence-only # Pretty-printed YAML output\\nprovisioning detect /path/to/project --out yaml --pretty\\n```plaintext ### complete Analyze infrastructure completeness and recommend changes. 
**Usage:** ```bash\\nprovisioning complete [PATH] [OPTIONS]\\n```plaintext **Arguments:** - `PATH`: Project directory to analyze (default: current directory) **Options:** - `-o, --out TEXT`: Output format - `text`, `json`, `yaml` (default: `text`)\\n- `-c, --check`: Check mode (report only, no changes)\\n- `--pretty`: Pretty-print JSON/YAML output\\n- `-x, --debug`: Enable debug output **Examples:** ```bash\\n# Analyze completeness\\nprovisioning complete /path/to/project # Get detailed JSON report\\nprovisioning complete /path/to/project --out json # Check mode (dry-run, no changes)\\nprovisioning complete /path/to/project --check\\n```plaintext ### ifc (workflow) Run the full Infrastructure-from-Code pipeline. **Usage:** ```bash\\nprovisioning ifc [PATH] [OPTIONS]\\n```plaintext **Arguments:** - `PATH`: Project directory to process (default: current directory) **Options:** - `--org TEXT`: Organization name for rule loading (default: `default`)\\n- `-o, --out TEXT`: Output format - `text`, `json` (default: `text`)\\n- `--apply`: Apply recommendations (future feature)\\n- `-v, --verbose`: Verbose output with timing\\n- `--pretty`: Pretty-print output\\n- `-x, --debug`: Enable debug output **Examples:** ```bash\\n# Run workflow with default rules\\nprovisioning ifc /path/to/project # Run with organization-specific rules\\nprovisioning ifc /path/to/project --org acme-corp # Verbose output with timing\\nprovisioning ifc /path/to/project --verbose # JSON output for automation\\nprovisioning ifc /path/to/project --out json\\n```plaintext ## Organization-Specific Inference Rules Customize how infrastructure is inferred for your organization. 
### Understanding Inference Rules An inference rule tells the system: \\"If we detect technology X, we should recommend taskservice Y.\\" **Rule Structure:** ```yaml\\nversion: \\"1.0.0\\"\\norganization: \\"your-org\\"\\nrules: - name: \\"rule-name\\" technology: [\\"detected-tech\\"] infers: \\"required-taskserv\\" confidence: 0.85 reason: \\"Why this taskserv is needed\\" required: true\\n```plaintext ### Creating Custom Rules Create an organization-specific rules file: ```bash\\n# ACME Corporation rules\\ncat > $PROVISIONING/config/inference-rules/acme-corp.yaml << \'EOF\'\\nversion: \\"1.0.0\\"\\norganization: \\"acme-corp\\"\\ndescription: \\"ACME Corporation infrastructure standards\\" rules: - name: \\"nodejs-to-redis\\" technology: [\\"nodejs\\", \\"express\\"] infers: \\"redis\\" confidence: 0.85 reason: \\"Node.js applications need caching\\" required: false - name: \\"postgres-to-backup\\" technology: [\\"postgres\\"] infers: \\"postgres-backup\\" confidence: 0.95 reason: \\"All databases require backup strategy\\" required: true - name: \\"all-services-monitoring\\" technology: [\\"nodejs\\", \\"python\\", \\"postgres\\"] infers: \\"monitoring\\" confidence: 0.90 reason: \\"ACME requires monitoring on production services\\" required: true\\nEOF\\n```plaintext Then use them: ```bash\\nprovisioning ifc /path/to/project --org acme-corp\\n```plaintext ### Default Rules If no organization rules are found, the system uses sensible defaults: - Node.js + Express → Redis (caching)\\n- Node.js → Nginx (reverse proxy)\\n- Database → Backup (data protection)\\n- Docker → Kubernetes (orchestration)\\n- Python → Gunicorn (WSGI server)\\n- PostgreSQL → Monitoring (production safety) ## Output Formats ### Text Output (Default) Human-readable format with visual indicators: ```plaintext\\nSTEP 1: Technology Detection\\n────────────────────────────\\n✓ Detected 2 technologies STEP 2: Infrastructure Completion\\n─────────────────────────────────\\n✓ Completeness: 
100%\\n```plaintext ### JSON Output Structured format for automation and parsing: ```bash\\nprovisioning detect /path/to/project --out json | jq \'.detections[0]\'\\n```plaintext Output: ```json\\n{ \\"technology\\": \\"nodejs\\", \\"confidence\\": 0.8333333134651184, \\"evidence_count\\": 1\\n}\\n```plaintext ### YAML Output Alternative structured format: ```bash\\nprovisioning detect /path/to/project --out yaml\\n```plaintext ## Practical Examples ### Example 1: Node.js + PostgreSQL Project ```bash\\n# Step 1: Detect\\n$ provisioning detect my-app\\n✓ Detected: nodejs, express, postgres, docker # Step 2: Complete\\n$ provisioning complete my-app\\n✓ Changes needed: 3 - redis (caching) - nginx (reverse proxy) - pg-backup (database backup) # Step 3: Full workflow\\n$ provisioning ifc my-app --org acme-corp\\n```plaintext ### Example 2: Python Django Project ```bash\\n$ provisioning detect django-app --out json\\n{ \\"detections\\": [ {\\"technology\\": \\"python\\", \\"confidence\\": 0.95}, {\\"technology\\": \\"django\\", \\"confidence\\": 0.92} ]\\n} # Inferred requirements (with gunicorn, monitoring, backup)\\n```plaintext ### Example 3: Microservices Architecture ```bash\\n$ provisioning ifc microservices/ --org mycompany --verbose\\n🔍 Processing microservices/ - service-a: nodejs + postgres - service-b: python + redis - service-c: go + mongodb ✓ Detected common patterns\\n✓ Applied 12 inference rules\\n✓ Generated deployment plan\\n```plaintext ## Integration with Automation ### CI/CD Pipeline Example ```bash\\n#!/bin/bash\\n# Check infrastructure completeness in CI/CD PROJECT_PATH=${1:-.}\\nCOMPLETENESS=$(provisioning complete $PROJECT_PATH --out json | jq \'.completeness\') if (( $(echo \\"$COMPLETENESS < 0.9\\" | bc -l) )); then echo \\"❌ Infrastructure completeness too low: $COMPLETENESS\\" exit 1\\nfi echo \\"✅ Infrastructure is complete: $COMPLETENESS\\"\\n```plaintext ### Configuration as Code Integration ```bash\\n# Generate JSON for infrastructure 
config\\nprovisioning detect /path/to/project --out json > infra-report.json # Use in your config processing\\ncat infra-report.json | jq \'.detections[]\' | while read -r tech; do echo \\"Processing technology: $tech\\"\\ndone\\n```plaintext ## Troubleshooting ### \\"Detector binary not found\\" **Solution:** Ensure the provisioning project is properly built: ```bash\\ncd $PROVISIONING/platform\\ncargo build --release --bin provisioning-detector\\n```plaintext ### No technologies detected **Check:** 1. Project path is correct: `provisioning detect /actual/path`\\n2. Project contains recognizable technologies (package.json, Dockerfile, requirements.txt, etc.)\\n3. Use `--debug` flag for more details: `provisioning detect /path --debug` ### Organization rules not being applied **Check:** 1. Rules file exists: `$PROVISIONING/config/inference-rules/{org}.yaml`\\n2. Organization name is correct: `provisioning ifc /path --org myorg`\\n3. Verify rules structure with: `cat $PROVISIONING/config/inference-rules/myorg.yaml` ## Advanced Usage ### Custom Rule Template Generate a template for a new organization: ```bash\\n# Template will be created with proper structure\\nprovisioning rules create --org neworg\\n```plaintext ### Validate Rule Files ```bash\\n# Check for syntax errors\\nprovisioning rules validate /path/to/rules.yaml\\n```plaintext ### Export Rules for Integration Export as Rust code for embedding: ```bash\\nprovisioning rules export myorg --format rust > rules.rs\\n```plaintext ## Best Practices 1. **Organize by Organization**: Keep separate rules for different organizations\\n2. **High Confidence First**: Start with rules you\'re confident about (confidence > 0.8)\\n3. **Document Reasons**: Always fill in the `reason` field for maintainability\\n4. **Test Locally**: Run on sample projects before applying organization-wide\\n5. **Version Control**: Commit inference rules to version control\\n6. 
**Review Changes**: Always inspect recommendations with `--check` first ## Related Commands ```bash\\n# View available taskservs that can be inferred\\nprovisioning taskserv list # Create inferred infrastructure\\nprovisioning taskserv create {inferred-name} # View current configuration\\nprovisioning env | grep PROVISIONING\\n```plaintext ## Support and Documentation - **Full CLI Help**: `provisioning help`\\n- **Specific Command Help**: `provisioning help detect`\\n- **Configuration Guide**: See `CONFIG_ENCRYPTION_GUIDE.md`\\n- **Task Services**: See `SERVICE_MANAGEMENT_GUIDE.md` --- ## Quick Reference ### 3-Step Workflow ```bash\\n# 1. Detect technologies\\nprovisioning detect /path/to/project # 2. Analyze infrastructure gaps\\nprovisioning complete /path/to/project # 3. Run full workflow (detect + complete)\\nprovisioning ifc /path/to/project --org myorg\\n```plaintext ### Common Commands | Task | Command |\\n|------|---------|\\n| **Detect technologies** | `provisioning detect /path` |\\n| **Get JSON output** | `provisioning detect /path --out json` |\\n| **Check completeness** | `provisioning complete /path` |\\n| **Dry-run (check mode)** | `provisioning complete /path --check` |\\n| **Full workflow** | `provisioning ifc /path --org myorg` |\\n| **Verbose output** | `provisioning ifc /path --verbose` |\\n| **Debug mode** | `provisioning detect /path --debug` | ### Output Formats ```bash\\n# Text (human-readable)\\nprovisioning detect /path --out text # JSON (for automation)\\nprovisioning detect /path --out json | jq \'.detections\' # YAML (for configuration)\\nprovisioning detect /path --out yaml\\n```plaintext ### Organization Rules #### Use Organization Rules ```bash\\nprovisioning ifc /path --org acme-corp\\n```plaintext #### Create Rules File ```bash\\nmkdir -p $PROVISIONING/config/inference-rules\\ncat > $PROVISIONING/config/inference-rules/myorg.yaml << \'EOF\'\\nversion: \\"1.0.0\\"\\norganization: \\"myorg\\"\\nrules: - name: \\"nodejs-to-redis\\" 
technology: [\\"nodejs\\"] infers: \\"redis\\" confidence: 0.85 reason: \\"Caching layer\\" required: false\\nEOF\\n```plaintext ### Example: Node.js + PostgreSQL ```bash\\n$ provisioning detect myapp\\n✓ Detected: nodejs, postgres $ provisioning complete myapp\\n✓ Changes: +redis, +nginx, +pg-backup $ provisioning ifc myapp --org default\\n✓ Detection: 2 technologies\\n✓ Completion: recommended changes\\n✅ Workflow complete\\n```plaintext ### CI/CD Integration ```bash\\n#!/bin/bash\\n# Check infrastructure is complete before deploy\\nCOMPLETENESS=$(provisioning complete . --out json | jq \'.completeness\') if (( $(echo \\"$COMPLETENESS < 0.9\\" | bc -l) )); then echo \\"Infrastructure incomplete: $COMPLETENESS\\" exit 1\\nfi\\n```plaintext ### JSON Output Examples #### Detect Output ```json\\n{ \\"detections\\": [ {\\"technology\\": \\"nodejs\\", \\"confidence\\": 0.95}, {\\"technology\\": \\"postgres\\", \\"confidence\\": 0.92} ], \\"overall_confidence\\": 0.93\\n}\\n```plaintext #### Complete Output ```json\\n{ \\"completeness\\": 1.0, \\"changes_needed\\": 2, \\"is_safe\\": true, \\"change_summary\\": \\"+ redis, + monitoring\\"\\n}\\n```plaintext ### Flag Reference | Flag | Short | Purpose |\\n|------|-------|---------|\\n| `--out TEXT` | `-o` | Output format: text, json, yaml |\\n| `--debug` | `-x` | Enable debug output |\\n| `--pretty` | | Pretty-print JSON/YAML |\\n| `--check` | `-c` | Dry-run (detect/complete) |\\n| `--org TEXT` | | Organization name (ifc) |\\n| `--verbose` | `-v` | Verbose output (ifc) |\\n| `--apply` | | Apply changes (ifc, future) | ### Troubleshooting | Issue | Solution |\\n|-------|----------|\\n| \\"Detector binary not found\\" | `cd $PROVISIONING/platform && cargo build --release` |\\n| No technologies detected | Check file types (.py, .js, go.mod, package.json, etc.) 
|\\n| Organization rules not found | Verify file exists: `$PROVISIONING/config/inference-rules/{org}.yaml` |\\n| Invalid path error | Use absolute path: `provisioning detect /full/path` | ### Environment Variables | Variable | Purpose |\\n|----------|---------|\\n| `$PROVISIONING` | Path to provisioning root |\\n| `$PROVISIONING_ORG` | Default organization (optional) | ### Default Inference Rules - Node.js + Express → Redis (caching)\\n- Node.js → Nginx (reverse proxy)\\n- Database → Backup (data protection)\\n- Docker → Kubernetes (orchestration)\\n- Python → Gunicorn (WSGI)\\n- PostgreSQL → Monitoring (production) ### Useful Aliases ```bash\\n# Add to shell config\\nalias detect=\'provisioning detect\'\\nalias complete=\'provisioning complete\'\\nalias ifc=\'provisioning ifc\' # Usage\\ndetect /my/project\\ncomplete /my/project\\nifc /my/project --org myorg\\n```plaintext ### Tips & Tricks **Parse JSON in bash:** ```bash\\nprovisioning detect . --out json | \\\\ jq \'.detections[] | .technology\' | \\\\ sort | uniq\\n```plaintext **Watch for changes:** ```bash\\nwatch -n 5 \'provisioning complete . --out json | jq \\".completeness\\"\'\\n```plaintext **Generate reports:** ```bash\\nprovisioning detect . --out yaml > detection-report.yaml\\nprovisioning complete . --out yaml > completion-report.yaml\\n```plaintext **Validate all organizations:** ```bash\\nfor org in $PROVISIONING/config/inference-rules/*.yaml; do org_name=$(basename \\"$org\\" .yaml) echo \\"Testing $org_name...\\" provisioning ifc . --org \\"$org_name\\" --check\\ndone\\n```plaintext ### Related Guides - Full guide: `docs/user/INFRASTRUCTURE_FROM_CODE_GUIDE.md`\\n- Inference rules: `docs/user/INFRASTRUCTURE_FROM_CODE_GUIDE.md#organization-specific-inference-rules`\\n- Service management: `docs/user/SERVICE_MANAGEMENT_QUICKREF.md`\\n- Configuration: `docs/user/CONFIG_ENCRYPTION_QUICKREF.md`","breadcrumbs":"Infrastructure from Code Guide » 1. 
Detect Technologies in Your Project","id":"1362","title":"1. Detect Technologies in Your Project"},"1363":{"body":"","breadcrumbs":"Batch Workflow System » Batch Workflow System (v3.1.0 - TOKEN-OPTIMIZED ARCHITECTURE)","id":"1363","title":"Batch Workflow System (v3.1.0 - TOKEN-OPTIMIZED ARCHITECTURE)"},"1364":{"body":"A comprehensive batch workflow system has been implemented using 10 token-optimized agents achieving 85-90% token efficiency over monolithic approaches. The system enables provider-agnostic batch operations with mixed provider support (UpCloud + AWS + local).","breadcrumbs":"Batch Workflow System » 🚀 Batch Workflow System Completed (2025-09-25)","id":"1364","title":"🚀 Batch Workflow System Completed (2025-09-25)"},"1365":{"body":"Provider-Agnostic Design : Single workflows supporting multiple cloud providers KCL Schema Integration : Type-safe workflow definitions with comprehensive validation Dependency Resolution : Topological sorting with soft/hard dependency support State Management : Checkpoint-based recovery with rollback capabilities Real-time Monitoring : Live workflow progress tracking and health monitoring Token Optimization : 85-90% efficiency using parallel specialized agents","breadcrumbs":"Batch Workflow System » Key Achievements","id":"1365","title":"Key Achievements"},"1366":{"body":"# Submit batch workflow from KCL definition\\nnu -c \\"use core/nulib/workflows/batch.nu *; batch submit workflows/example_batch.k\\" # Monitor batch workflow progress\\nnu -c \\"use core/nulib/workflows/batch.nu *; batch monitor \\" # List batch workflows with filtering\\nnu -c \\"use core/nulib/workflows/batch.nu *; batch list --status Running\\" # Get detailed batch status\\nnu -c \\"use core/nulib/workflows/batch.nu *; batch status \\" # Initiate rollback for failed workflow\\nnu -c \\"use core/nulib/workflows/batch.nu *; batch rollback \\" # Show batch workflow statistics\\nnu -c \\"use core/nulib/workflows/batch.nu *; batch 
stats\\"","breadcrumbs":"Batch Workflow System » Batch Workflow Commands","id":"1366","title":"Batch Workflow Commands"},"1367":{"body":"Batch workflows are defined using KCL schemas in kcl/workflows.k: # Example batch workflow with mixed providers\\nbatch_workflow: BatchWorkflow = { name = \\"multi_cloud_deployment\\" version = \\"1.0.0\\" storage_backend = \\"surrealdb\\" # or \\"filesystem\\" parallel_limit = 5 rollback_enabled = True operations = [ { id = \\"upcloud_servers\\" type = \\"server_batch\\" provider = \\"upcloud\\" dependencies = [] server_configs = [ {name = \\"web-01\\", plan = \\"1xCPU-2GB\\", zone = \\"de-fra1\\"}, {name = \\"web-02\\", plan = \\"1xCPU-2GB\\", zone = \\"us-nyc1\\"} ] }, { id = \\"aws_taskservs\\" type = \\"taskserv_batch\\" provider = \\"aws\\" dependencies = [\\"upcloud_servers\\"] taskservs = [\\"kubernetes\\", \\"cilium\\", \\"containerd\\"] } ]\\n}","breadcrumbs":"Batch Workflow System » KCL Workflow Schema","id":"1367","title":"KCL Workflow Schema"},"1368":{"body":"Extended orchestrator API for batch workflow management: Submit Batch : POST http://localhost:9090/v1/workflows/batch/submit Batch Status : GET http://localhost:9090/v1/workflows/batch/{id} List Batches : GET http://localhost:9090/v1/workflows/batch Monitor Progress : GET http://localhost:9090/v1/workflows/batch/{id}/progress Initiate Rollback : POST http://localhost:9090/v1/workflows/batch/{id}/rollback Batch Statistics : GET http://localhost:9090/v1/workflows/batch/stats","breadcrumbs":"Batch Workflow System » REST API Endpoints (Batch Operations)","id":"1368","title":"REST API Endpoints (Batch Operations)"},"1369":{"body":"Provider Agnostic : Mix UpCloud, AWS, and local providers in single workflows Type Safety : KCL schema validation prevents runtime errors Dependency Management : Automatic resolution with failure handling State Recovery : Checkpoint-based recovery from any failure point Real-time Monitoring : Live progress tracking with detailed 
status","breadcrumbs":"Batch Workflow System » System Benefits","id":"1369","title":"System Benefits"},"137":{"body":"UpCloud username UpCloud password Configured via environment variables or config files","breadcrumbs":"Prerequisites » UpCloud","id":"137","title":"UpCloud"},"1370":{"body":"","breadcrumbs":"CLI Architecture » Modular CLI Architecture (v3.2.0 - MAJOR REFACTORING)","id":"1370","title":"Modular CLI Architecture (v3.2.0 - MAJOR REFACTORING)"},"1371":{"body":"A comprehensive CLI refactoring transforming the monolithic 1,329-line script into a modular, maintainable architecture with domain-driven design.","breadcrumbs":"CLI Architecture » 🚀 CLI Refactoring Completed (2025-09-30)","id":"1371","title":"🚀 CLI Refactoring Completed (2025-09-30)"},"1372":{"body":"Main File Reduction : 1,329 lines → 211 lines (84% reduction) Domain Handlers : 7 focused modules (infrastructure, orchestration, development, workspace, configuration, utilities, generation) Code Duplication : 50+ instances eliminated through centralized flag handling Command Registry : 80+ shortcuts for improved user experience Bi-directional Help : provisioning help ws = provisioning ws help Test Coverage : Comprehensive test suite with 6 test groups","breadcrumbs":"CLI Architecture » Architecture Improvements","id":"1372","title":"Architecture Improvements"},"1373":{"body":"","breadcrumbs":"CLI Architecture » Command Shortcuts Reference","id":"1373","title":"Command Shortcuts Reference"},"1374":{"body":"[Full docs: provisioning help infra] s → server (create, delete, list, ssh, price) t, task → taskserv (create, delete, list, generate, check-updates) cl → cluster (create, delete, list) i, infras → infra (list, validate)","breadcrumbs":"CLI Architecture » Infrastructure","id":"1374","title":"Infrastructure"},"1375":{"body":"[Full docs: provisioning help orch] wf, flow → workflow (list, status, monitor, stats, cleanup) bat → batch (submit, list, status, monitor, rollback, cancel, stats) orch → 
orchestrator (start, stop, status, health, logs)","breadcrumbs":"CLI Architecture » Orchestration","id":"1375","title":"Orchestration"},"1376":{"body":"[Full docs: provisioning help dev] mod → module (discover, load, list, unload, sync-kcl) lyr → layer (explain, show, test, stats) version (check, show, updates, apply, taskserv) pack (core, provider, list, clean)","breadcrumbs":"CLI Architecture » Development","id":"1376","title":"Development"},"1377":{"body":"[Full docs: provisioning help ws] ws → workspace (init, create, validate, info, list, migrate) tpl, tmpl → template (list, types, show, apply, validate)","breadcrumbs":"CLI Architecture » Workspace","id":"1377","title":"Workspace"},"1378":{"body":"[Full docs: provisioning help config] e → env (show environment variables) val → validate (validate configuration) st, config → setup (setup wizard) show (show configuration details) init (initialize infrastructure) allenv (show all config and environment)","breadcrumbs":"CLI Architecture » Configuration","id":"1378","title":"Configuration"},"1379":{"body":"l, ls, list → list (list resources) ssh (SSH operations) sops (edit encrypted files) cache (cache management) providers (provider operations) nu (start Nushell session with provisioning library) qr (QR code generation) nuinfo (Nushell information) plugin, plugins (plugin management)","breadcrumbs":"CLI Architecture » Utilities","id":"1379","title":"Utilities"},"138":{"body":"Once all prerequisites are met, proceed to: → Installation","breadcrumbs":"Prerequisites » Next Steps","id":"138","title":"Next Steps"},"1380":{"body":"[Full docs: provisioning generate help] g, gen → generate (server, taskserv, cluster, infra, new)","breadcrumbs":"CLI Architecture » Generation","id":"1380","title":"Generation"},"1381":{"body":"c → create (create resources) d → delete (delete resources) u → update (update resources) price, cost, costs → price (show pricing) cst, csts → create-server-task (create server with 
taskservs)","breadcrumbs":"CLI Architecture » Special Commands","id":"1381","title":"Special Commands"},"1382":{"body":"The help system works in both directions: # All these work identically:\\nprovisioning help workspace\\nprovisioning workspace help\\nprovisioning ws help\\nprovisioning help ws # Same for all categories:\\nprovisioning help infra = provisioning infra help\\nprovisioning help orch = provisioning orch help\\nprovisioning help dev = provisioning dev help\\nprovisioning help ws = provisioning ws help\\nprovisioning help plat = provisioning plat help\\nprovisioning help concept = provisioning concept help\\n```plaintext ## CLI Internal Architecture **File Structure:** ```plaintext\\nprovisioning/core/nulib/\\n├── provisioning (211 lines) - Main entry point\\n├── main_provisioning/\\n│ ├── flags.nu (139 lines) - Centralized flag handling\\n│ ├── dispatcher.nu (264 lines) - Command routing\\n│ ├── help_system.nu - Categorized help\\n│ └── commands/ - Domain-focused handlers\\n│ ├── infrastructure.nu (117 lines)\\n│ ├── orchestration.nu (64 lines)\\n│ ├── development.nu (72 lines)\\n│ ├── workspace.nu (56 lines)\\n│ ├── generation.nu (78 lines)\\n│ ├── utilities.nu (157 lines)\\n│ └── configuration.nu (316 lines)\\n```plaintext **For Developers:** - **Adding commands**: Update appropriate domain handler in `commands/`\\n- **Adding shortcuts**: Update command registry in `dispatcher.nu`\\n- **Flag changes**: Modify centralized functions in `flags.nu`\\n- **Testing**: Run `nu tests/test_provisioning_refactor.nu` See [ADR-006: CLI Refactoring](../architecture/adr/adr-006-provisioning-cli-refactoring.md) for complete refactoring details.","breadcrumbs":"CLI Architecture » Bi-directional Help System","id":"1382","title":"Bi-directional Help System"},"1383":{"body":"","breadcrumbs":"Configuration System » Configuration System (v2.0.0)","id":"1383","title":"Configuration System (v2.0.0)"},"1384":{"body":"The system has been completely migrated from ENV-based to 
config-driven architecture. 65+ files migrated across entire codebase 200+ ENV variables replaced with 476 config accessors 16 token-efficient agents used for systematic migration 92% token efficiency achieved vs monolithic approach","breadcrumbs":"Configuration System » ⚠️ Migration Completed (2025-09-23)","id":"1384","title":"⚠️ Migration Completed (2025-09-23)"},"1385":{"body":"Primary Config : config.defaults.toml (system defaults) User Config : config.user.toml (user preferences) Environment Configs : config.{dev,test,prod}.toml.example Hierarchical Loading : defaults → user → project → infra → env → runtime Interpolation : {{paths.base}}, {{env.HOME}}, {{now.date}}, {{git.branch}}","breadcrumbs":"Configuration System » Configuration Files","id":"1385","title":"Configuration Files"},"1386":{"body":"provisioning validate config - Validate configuration provisioning env - Show environment variables provisioning allenv - Show all config and environment PROVISIONING_ENV=prod provisioning - Use specific environment","breadcrumbs":"Configuration System » Essential Commands","id":"1386","title":"Essential Commands"},"1387":{"body":"See ADR-010: Configuration Format Strategy for complete rationale and design patterns.","breadcrumbs":"Configuration System » Configuration Architecture","id":"1387","title":"Configuration Architecture"},"1388":{"body":"When loading configuration, precedence is (highest to lowest): Runtime Arguments - CLI flags and direct user input Environment Variables - PROVISIONING_* overrides User Configuration - ~/.config/provisioning/user_config.yaml Infrastructure Configuration - Nickel schemas, extensions, provider configs System Defaults - provisioning/config/config.defaults.toml","breadcrumbs":"Configuration System » Configuration Loading Hierarchy (Priority)","id":"1388","title":"Configuration Loading Hierarchy (Priority)"},"1389":{"body":"For new configuration : Infrastructure/schemas → Use Nickel (type-safe, schema-validated) Application 
settings → Use TOML (hierarchical, supports interpolation) Kubernetes/CI-CD → Use YAML (standard, ecosystem-compatible) For existing workspace configs : KCL still supported but gradually migrating to Nickel Config loader supports both formats during transition","breadcrumbs":"Configuration System » File Type Guidelines","id":"1389","title":"File Type Guidelines"},"139":{"body":"This guide walks you through installing the Provisioning Platform on your system.","breadcrumbs":"Installation Steps » Installation","id":"139","title":"Installation"},"1390":{"body":"This guide shows you how to set up a new infrastructure workspace and extend the provisioning system with custom configurations.","breadcrumbs":"Workspace Setup » Workspace Setup Guide","id":"1390","title":"Workspace Setup Guide"},"1391":{"body":"","breadcrumbs":"Workspace Setup » Quick Start","id":"1391","title":"Quick Start"},"1392":{"body":"# Navigate to the workspace directory\\ncd workspace/infra # Create your infrastructure directory\\nmkdir my-infra\\ncd my-infra # Create the basic structure\\nmkdir -p task-servs clusters defs data tmp\\n```plaintext ### 2. Set Up KCL Module Dependencies Create `kcl.mod`: ```toml\\n[package]\\nname = \\"my-infra\\"\\nedition = \\"v0.11.2\\"\\nversion = \\"0.0.1\\" [dependencies]\\nprovisioning = { path = \\"../../../provisioning/kcl\\", version = \\"0.0.1\\" }\\ntaskservs = { path = \\"../../../provisioning/extensions/taskservs\\", version = \\"0.0.1\\" }\\ncluster = { path = \\"../../../provisioning/extensions/cluster\\", version = \\"0.0.1\\" }\\nupcloud_prov = { path = \\"../../../provisioning/extensions/providers/upcloud/kcl\\", version = \\"0.0.1\\" }\\n```plaintext ### 3. 
Create Main Settings Create `settings.k`: ```kcl\\nimport provisioning _settings = provisioning.Settings { main_name = \\"my-infra\\" main_title = \\"My Infrastructure Project\\" # Directories settings_path = \\"./settings.yaml\\" defaults_provs_dirpath = \\"./defs\\" prov_data_dirpath = \\"./data\\" created_taskservs_dirpath = \\"./tmp/NOW_deployment\\" # Cluster configuration cluster_admin_host = \\"my-infra-cp-0\\" cluster_admin_user = \\"root\\" servers_wait_started = 40 # Runtime settings runset = { wait = True output_format = \\"yaml\\" output_path = \\"./tmp/NOW\\" inventory_file = \\"./inventory.yaml\\" use_time = True }\\n} _settings\\n```plaintext ### 4. Test Your Setup ```bash\\n# Test the configuration\\nkcl run settings.k # Test with the provisioning system\\ncd ../../../\\nprovisioning -c -i my-infra show settings\\n```plaintext ## Adding Taskservers ### Example: Redis Create `task-servs/redis.k`: ```kcl\\nimport taskservs.redis.kcl.redis as redis_schema _taskserv = redis_schema.Redis { version = \\"7.2.3\\" port = 6379 maxmemory = \\"512mb\\" maxmemory_policy = \\"allkeys-lru\\" persistence = True bind_address = \\"0.0.0.0\\"\\n} _taskserv\\n```plaintext Test it: ```bash\\nkcl run task-servs/redis.k\\n```plaintext ### Example: Kubernetes Create `task-servs/kubernetes.k`: ```kcl\\nimport taskservs.kubernetes.kcl.kubernetes as k8s_schema _taskserv = k8s_schema.Kubernetes { version = \\"1.29.1\\" major_version = \\"1.29\\" cri = \\"crio\\" runtime_default = \\"crun\\" cni = \\"cilium\\" bind_port = 6443\\n} _taskserv\\n```plaintext ### Example: Cilium Create `task-servs/cilium.k`: ```kcl\\nimport taskservs.cilium.kcl.cilium as cilium_schema _taskserv = cilium_schema.Cilium { version = \\"v1.16.5\\"\\n} _taskserv\\n```plaintext ## Using the Provisioning System ### Create Servers ```bash\\n# Check configuration first\\nprovisioning -c -i my-infra server create # Actually create servers\\nprovisioning -i my-infra server create\\n```plaintext ### Install 
Taskservs ```bash\\n# Install Kubernetes\\nprovisioning -c -i my-infra taskserv create kubernetes # Install Cilium\\nprovisioning -c -i my-infra taskserv create cilium # Install Redis\\nprovisioning -c -i my-infra taskserv create redis\\n```plaintext ### Manage Clusters ```bash\\n# Create cluster\\nprovisioning -c -i my-infra cluster create # List cluster components\\nprovisioning -i my-infra cluster list\\n```plaintext ## Directory Structure Your workspace should look like this: ```plaintext\\nworkspace/infra/my-infra/\\n├── kcl.mod # Module dependencies\\n├── settings.k # Main infrastructure settings\\n├── task-servs/ # Taskserver configurations\\n│ ├── kubernetes.k\\n│ ├── cilium.k\\n│ ├── redis.k\\n│ └── {custom-service}.k\\n├── clusters/ # Cluster definitions\\n│ └── main.k\\n├── defs/ # Provider defaults\\n│ ├── upcloud_defaults.k\\n│ └── {provider}_defaults.k\\n├── data/ # Provider runtime data\\n│ ├── upcloud_settings.k\\n│ └── {provider}_settings.k\\n├── tmp/ # Temporary files\\n│ ├── NOW_deployment/\\n│ └── NOW_clusters/\\n├── inventory.yaml # Generated inventory\\n└── settings.yaml # Generated settings\\n```plaintext ## Advanced Configuration ### Custom Provider Defaults Create `defs/upcloud_defaults.k`: ```kcl\\nimport upcloud_prov.upcloud as upcloud_schema _defaults = upcloud_schema.UpcloudDefaults { zone = \\"de-fra1\\" plan = \\"1xCPU-2GB\\" storage_size = 25 storage_tier = \\"maxiops\\"\\n} _defaults\\n```plaintext ### Cluster Definitions Create `clusters/main.k`: ```kcl\\nimport cluster.main as cluster_schema _cluster = cluster_schema.MainCluster { name = \\"my-infra-cluster\\" control_plane_count = 1 worker_count = 2 services = [ \\"kubernetes\\", \\"cilium\\", \\"redis\\" ]\\n} _cluster\\n```plaintext ## Environment-Specific Configurations ### Development Environment Create `settings-dev.k`: ```kcl\\nimport provisioning _settings = provisioning.Settings { main_name = \\"my-infra-dev\\" main_title = \\"My Infrastructure (Development)\\" # 
Development-specific settings servers_wait_started = 20 # Faster for dev runset = { wait = False # Don\'t wait in dev output_format = \\"json\\" }\\n} _settings\\n```plaintext ### Production Environment Create `settings-prod.k`: ```kcl\\nimport provisioning _settings = provisioning.Settings { main_name = \\"my-infra-prod\\" main_title = \\"My Infrastructure (Production)\\" # Production-specific settings servers_wait_started = 60 # More conservative runset = { wait = True output_format = \\"yaml\\" use_time = True } # Production security secrets = { provider = \\"sops\\" }\\n} _settings\\n```plaintext ## Troubleshooting ### Common Issues #### KCL Module Not Found ```plaintext\\nError: pkgpath provisioning not found\\n```plaintext **Solution**: Ensure the provisioning module is in the expected location: ```bash\\nls ../../../provisioning/extensions/kcl/provisioning/0.0.1/\\n```plaintext If missing, copy the files: ```bash\\nmkdir -p ../../../provisioning/extensions/kcl/provisioning/0.0.1\\ncp -r ../../../provisioning/kcl/* ../../../provisioning/extensions/kcl/provisioning/0.0.1/\\n```plaintext #### Import Path Errors ```plaintext\\nError: attribute \'Redis\' not found in module\\n```plaintext **Solution**: Check the import path: ```kcl\\n# Wrong\\nimport taskservs.redis.default.kcl.redis as redis_schema # Correct\\nimport taskservs.redis.kcl.redis as redis_schema\\n```plaintext #### Boolean Value Errors ```plaintext\\nError: name \'true\' is not defined\\n```plaintext **Solution**: Use capitalized booleans in KCL: ```kcl\\n# Wrong\\nenabled = true # Correct\\nenabled = True\\n```plaintext ### Debugging Commands ```bash\\n# Check KCL syntax\\nkcl run settings.k # Validate configuration\\nprovisioning -c -i my-infra validate config # Show current settings\\nprovisioning -i my-infra show settings # List available taskservs\\nprovisioning -i my-infra taskserv list # Check infrastructure status\\nprovisioning -i my-infra show servers\\n```plaintext ## Next Steps 1. 
**Customize your settings**: Modify `settings.k` for your specific needs\\n2. **Add taskservs**: Create configurations for the services you need\\n3. **Test thoroughly**: Use `--check` mode before actual deployment\\n4. **Create clusters**: Define complete deployment configurations\\n5. **Set up CI/CD**: Integrate with your deployment pipeline\\n6. **Monitor**: Set up logging and monitoring for your infrastructure For more advanced topics, see: - [KCL Module Guide](../development/KCL_MODULE_GUIDE.md)\\n- [Creating Custom Taskservers](../development/CUSTOM_TASKSERVERS.md)\\n- [Provider Configuration](../user/PROVIDER_SETUP.md)","breadcrumbs":"Workspace Setup » 1. Create a New Infrastructure Workspace","id":"1392","title":"1. Create a New Infrastructure Workspace"},"1393":{"body":"Version : 1.0.0 Date : 2025-10-06 Status : ✅ Production Ready","breadcrumbs":"Workspace Switching Guide » Workspace Switching Guide","id":"1393","title":"Workspace Switching Guide"},"1394":{"body":"The provisioning system now includes a centralized workspace management system that allows you to easily switch between multiple workspaces without manually editing configuration files.","breadcrumbs":"Workspace Switching Guide » Overview","id":"1394","title":"Overview"},"1395":{"body":"","breadcrumbs":"Workspace Switching Guide » Quick Start","id":"1395","title":"Quick Start"},"1396":{"body":"provisioning workspace list\\n``` Output: ```plaintext\\nRegistered Workspaces: ● librecloud Path: /Users/Akasha/project-provisioning/workspace_librecloud Last used: 2025-10-06T12:29:43Z production Path: /opt/workspaces/production Last used: 2025-10-05T10:15:30Z\\n``` The green ● indicates the currently active workspace. 
### Check Active Workspace ```bash\\nprovisioning workspace active\\n``` Output: ```plaintext\\nActive Workspace: Name: librecloud Path: /Users/Akasha/project-provisioning/workspace_librecloud Last used: 2025-10-06T12:29:43Z\\n``` ### Switch to Another Workspace ```bash\\n# Option 1: Using activate\\nprovisioning workspace activate production # Option 2: Using switch (alias)\\nprovisioning workspace switch production\\n``` Output: ```plaintext\\n✓ Workspace \'production\' activated Current workspace: production\\nPath: /opt/workspaces/production ℹ All provisioning commands will now use this workspace\\n``` ### Register a New Workspace ```bash\\n# Register without activating\\nprovisioning workspace register my-project ~/workspaces/my-project # Register and activate immediately\\nprovisioning workspace register my-project ~/workspaces/my-project --activate\\n``` ### Remove Workspace from Registry ```bash\\n# With confirmation prompt\\nprovisioning workspace remove old-workspace # Skip confirmation\\nprovisioning workspace remove old-workspace --force\\n``` **Note**: This only removes the workspace from the registry. The workspace files are NOT deleted. 
## Architecture ### Central User Configuration All workspace information is stored in a central user configuration file: **Location**: `~/Library/Application Support/provisioning/user_config.yaml` **Structure**: ```yaml\\n# Active workspace (current workspace in use)\\nactive_workspace: \\"librecloud\\" # Known workspaces (automatically managed)\\nworkspaces: - name: \\"librecloud\\" path: \\"/Users/Akasha/project-provisioning/workspace_librecloud\\" last_used: \\"2025-10-06T12:29:43Z\\" - name: \\"production\\" path: \\"/opt/workspaces/production\\" last_used: \\"2025-10-05T10:15:30Z\\" # User preferences (global settings)\\npreferences: editor: \\"vim\\" output_format: \\"yaml\\" confirm_delete: true confirm_deploy: true default_log_level: \\"info\\" preferred_provider: \\"upcloud\\" # Metadata\\nmetadata: created: \\"2025-10-06T12:29:43Z\\" last_updated: \\"2025-10-06T13:46:16Z\\" version: \\"1.0.0\\"\\n``` ### How It Works 1. **Workspace Registration**: When you register a workspace, it\'s added to the `workspaces` list in `user_config.yaml` 2. **Activation**: When you activate a workspace: - `active_workspace` is updated to the workspace name - The workspace\'s `last_used` timestamp is updated - All provisioning commands now use this workspace\'s configuration 3. 
**Configuration Loading**: The config loader reads `active_workspace` from `user_config.yaml` and loads: - `workspace_path/config/provisioning.yaml` - `workspace_path/config/providers/*.toml` - `workspace_path/config/platform/*.toml` - `workspace_path/config/kms.toml` ## Advanced Features ### User Preferences You can set global user preferences that apply across all workspaces: ```bash\\n# Get a preference value\\nprovisioning workspace get-preference editor # Set a preference value\\nprovisioning workspace set-preference editor \\"code\\" # View all preferences\\nprovisioning workspace preferences\\n``` **Available Preferences**: - `editor`: Default editor for config files (vim, code, nano, etc.)\\n- `output_format`: Default output format (yaml, json, toml)\\n- `confirm_delete`: Require confirmation for deletions (true/false)\\n- `confirm_deploy`: Require confirmation for deployments (true/false)\\n- `default_log_level`: Default log level (debug, info, warn, error)\\n- `preferred_provider`: Preferred cloud provider (aws, upcloud, local) ### Output Formats List workspaces in different formats: ```bash\\n# Table format (default)\\nprovisioning workspace list # JSON format\\nprovisioning workspace list --format json # YAML format\\nprovisioning workspace list --format yaml\\n``` ### Quiet Mode Activate workspace without output messages: ```bash\\nprovisioning workspace activate production --quiet\\n``` ## Workspace Requirements For a workspace to be activated, it must have: 1. **Directory exists**: The workspace directory must exist on the filesystem 2. **Config directory**: Must have a `config/` directory workspace_name/ └── config/ ├── provisioning.yaml # Required ├── providers/ # Optional ├── platform/ # Optional └── kms.toml # Optional 3. 
**Main config file**: Must have `config/provisioning.yaml` If these requirements are not met, the activation will fail with helpful error messages: ```plaintext\\n✗ Workspace \'my-project\' not found in registry\\n💡 Available workspaces: [list of workspaces]\\n💡 Register it first with: provisioning workspace register my-project \\n``` ```plaintext\\n✗ Workspace is not migrated to new config system\\n💡 Missing: /path/to/workspace/config\\n💡 Run migration: provisioning workspace migrate my-project\\n``` ## Migration from Old System If you have workspaces using the old context system (`ws_{name}.yaml` files), they still work but you should register them in the new system: ```bash\\n# Register existing workspace\\nprovisioning workspace register old-workspace ~/workspaces/old-workspace # Activate it\\nprovisioning workspace activate old-workspace\\n``` The old `ws_{name}.yaml` files are still supported for backward compatibility, but the new centralized system is recommended. ## Best Practices ### 1. **One Active Workspace at a Time** Only one workspace can be active at a time. All provisioning commands use the active workspace\'s configuration. ### 2. **Use Descriptive Names** Use clear, descriptive names for your workspaces: ```bash\\n# ✅ Good\\nprovisioning workspace register production-us-east ~/workspaces/prod-us-east\\nprovisioning workspace register dev-local ~/workspaces/dev # ❌ Avoid\\nprovisioning workspace register ws1 ~/workspaces/workspace1\\nprovisioning workspace register temp ~/workspaces/t\\n``` ### 3. **Keep Workspaces Organized** Store all workspaces in a consistent location: ```bash\\n~/workspaces/\\n├── production/\\n├── staging/\\n├── development/\\n└── testing/\\n``` ### 4. 
**Regular Cleanup** Remove workspaces you no longer use: ```bash\\n# List workspaces to see which ones are unused\\nprovisioning workspace list # Remove old workspace\\nprovisioning workspace remove old-workspace\\n``` ### 5. **Backup User Config** Periodically backup your user configuration: ```bash\\ncp \\"~/Library/Application Support/provisioning/user_config.yaml\\" \\\\ \\"~/Library/Application Support/provisioning/user_config.yaml.backup\\"\\n``` ## Troubleshooting ### Workspace Not Found **Problem**: `✗ Workspace \'name\' not found in registry` **Solution**: Register the workspace first: ```bash\\nprovisioning workspace register name /path/to/workspace\\n``` ### Missing Configuration **Problem**: `✗ Missing workspace configuration` **Solution**: Ensure the workspace has a `config/provisioning.yaml` file. Run migration if needed: ```bash\\nprovisioning workspace migrate name\\n``` ### Directory Not Found **Problem**: `✗ Workspace directory not found: /path/to/workspace` **Solution**: 1. Check if the workspace was moved or deleted\\n2. Update the path or remove from registry: ```bash\\nprovisioning workspace remove name\\nprovisioning workspace register name /new/path\\n``` ### Corrupted User Config **Problem**: `Error: Failed to parse user config` **Solution**: The system automatically creates a backup and regenerates the config. 
Check: ```bash\\nls -la \\"~/Library/Application Support/provisioning/user_config.yaml\\"*\\n``` Restore from backup if needed: ```bash\\ncp \\"~/Library/Application Support/provisioning/user_config.yaml.backup.TIMESTAMP\\" \\\\ \\"~/Library/Application Support/provisioning/user_config.yaml\\"\\n``` ## CLI Commands Reference | Command | Alias | Description |\\n|---------|-------|-------------|\\n| `provisioning workspace activate ` | - | Activate a workspace |\\n| `provisioning workspace switch ` | - | Alias for activate |\\n| `provisioning workspace list` | - | List all registered workspaces |\\n| `provisioning workspace active` | - | Show currently active workspace |\\n| `provisioning workspace register ` | - | Register a new workspace |\\n| `provisioning workspace remove ` | - | Remove workspace from registry |\\n| `provisioning workspace preferences` | - | Show user preferences |\\n| `provisioning workspace set-preference ` | - | Set a preference |\\n| `provisioning workspace get-preference ` | - | Get a preference value | ## Integration with Config System The workspace switching system is fully integrated with the new target-based configuration system: ### Configuration Hierarchy (Priority: Low → High) ```plaintext\\n1. Workspace config workspace/{name}/config/provisioning.yaml\\n2. Provider configs workspace/{name}/config/providers/*.toml\\n3. Platform configs workspace/{name}/config/platform/*.toml\\n4. User context ~/Library/Application Support/provisioning/ws_{name}.yaml (legacy)\\n5. User config ~/Library/Application Support/provisioning/user_config.yaml (new)\\n6. Environment variables PROVISIONING_*\\n``` ### Example Workflow ```bash\\n# 1. Create and activate development workspace\\nprovisioning workspace register dev ~/workspaces/dev --activate # 2. Work on development\\nprovisioning server create web-dev-01\\nprovisioning taskserv create kubernetes # 3. Switch to production\\nprovisioning workspace switch production # 4. 
Deploy to production\\nprovisioning server create web-prod-01\\nprovisioning taskserv create kubernetes # 5. Switch back to development\\nprovisioning workspace switch dev # All commands now use dev workspace config\\n``` ## KCL Workspace Configuration Starting with v3.6.0, workspaces use **KCL (Kusion Configuration Language)** for type-safe, schema-validated configurations instead of YAML. ### What Changed **Before (YAML)**: ```yaml\\nworkspace: name: myworkspace version: 1.0.0\\npaths: base: /path/to/workspace\\n``` **Now (KCL - Type-Safe)**: ```kcl\\nimport provisioning.workspace_config as ws workspace_config = ws.WorkspaceConfig { workspace: { name: \\"myworkspace\\" version: \\"1.0.0\\" # Validated: must be semantic (X.Y.Z) } paths: { base: \\"/path/to/workspace\\" # ... all paths with type checking }\\n}\\n``` ### Benefits of KCL Configuration - ✅ **Type Safety**: Catch configuration errors at load time, not runtime\\n- ✅ **Schema Validation**: Required fields, value constraints, format checking\\n- ✅ **Immutability**: Enforced immutable defaults prevent accidental changes\\n- ✅ **Self-Documenting**: Schema descriptions provide instant documentation\\n- ✅ **IDE Support**: KCL editor extensions with auto-completion ### Viewing Workspace Configuration ```bash\\n# View your KCL workspace configuration\\nprovisioning workspace config show # View in different formats\\nprovisioning workspace config show --format=yaml # YAML output\\nprovisioning workspace config show --format=json # JSON output\\nprovisioning workspace config show --format=kcl # Raw KCL file # Validate configuration\\nprovisioning workspace config validate\\n# Output: ✅ Validation complete - all configs are valid # Show configuration hierarchy\\nprovisioning workspace config hierarchy\\n``` ### Migrating Existing Workspaces If you have workspaces with YAML configs (`provisioning.yaml`), you can migrate them to KCL: ```bash\\n# Migrate single 
workspace\\nprovisioning workspace migrate-config myworkspace # Migrate all workspaces\\nprovisioning workspace migrate-config --all # Preview changes without applying\\nprovisioning workspace migrate-config myworkspace --check # Create backup before migration\\nprovisioning workspace migrate-config myworkspace --backup # Force overwrite existing KCL files\\nprovisioning workspace migrate-config myworkspace --force\\n``` **How it works**: 1. Reads existing `provisioning.yaml`\\n2. Converts to KCL using workspace configuration schema\\n3. Validates converted KCL against schema\\n4. Backs up original YAML (optional)\\n5. Saves new `provisioning.k` file ### Backward Compatibility ✅ **Full backward compatibility maintained**: - Existing YAML configs (`provisioning.yaml`) continue to work\\n- Config loader checks for KCL files first, falls back to YAML\\n- No breaking changes - migrate at your own pace\\n- Both formats can coexist during transition ## See Also - **Configuration Guide**: `docs/architecture/adr/ADR-010-configuration-format-strategy.md`\\n- **Migration Complete**: [Migration Guide](../guides/from-scratch.md)\\n- **From-Scratch Guide**: [From-Scratch Guide](../guides/from-scratch.md)\\n- **KCL Patterns**: KCL Module System --- **Maintained By**: Infrastructure Team\\n**Version**: 1.1.0 (Updated for KCL)\\n**Status**: ✅ Production Ready\\n**Last Updated**: 2025-12-03","breadcrumbs":"Workspace Switching Guide » List Available Workspaces","id":"1396","title":"List Available Workspaces"},"1397":{"body":"","breadcrumbs":"Workspace Switching System » Workspace Switching System (v2.0.5)","id":"1397","title":"Workspace Switching System (v2.0.5)"},"1398":{"body":"A centralized workspace management system has been implemented, allowing seamless switching between multiple workspaces without manually editing configuration files. 
This builds upon the target-based configuration system.","breadcrumbs":"Workspace Switching System » 🚀 Workspace Switching Completed (2025-10-02)","id":"1398","title":"🚀 Workspace Switching Completed (2025-10-02)"},"1399":{"body":"Centralized Configuration : Single user_config.yaml file stores all workspace information Simple CLI Commands : Switch workspaces with a single command Active Workspace Tracking : Automatic tracking of currently active workspace Workspace Registry : Maintain list of all known workspaces User Preferences : Global user settings that apply across all workspaces Automatic Updates : Last-used timestamps and metadata automatically managed Validation : Ensures workspaces have required configuration before activation","breadcrumbs":"Workspace Switching System » Key Features","id":"1399","title":"Key Features"},"14":{"body":"System requirements and prerequisites Different installation methods How to verify your installation Setting up your environment Troubleshooting common installation issues","breadcrumbs":"Installation Guide » What You\'ll Learn","id":"14","title":"What You\'ll Learn"},"140":{"body":"The installation process involves: Cloning the repository Installing Nushell plugins Setting up configuration Initializing your first workspace Estimated time: 15-20 minutes","breadcrumbs":"Installation Steps » Overview","id":"140","title":"Overview"},"1400":{"body":"# List all registered workspaces\\nprovisioning workspace list # Show currently active workspace\\nprovisioning workspace active # Switch to another workspace\\nprovisioning workspace activate \\nprovisioning workspace switch # alias # Register a new workspace\\nprovisioning workspace register [--activate] # Remove workspace from registry (does not delete files)\\nprovisioning workspace remove [--force] # View user preferences\\nprovisioning workspace preferences # Set user preference\\nprovisioning workspace set-preference # Get user preference\\nprovisioning workspace get-preference 
\\n``` ## Central User Configuration **Location**: `~/Library/Application Support/provisioning/user_config.yaml` **Structure**: ```yaml\\n# Active workspace (current workspace in use)\\nactive_workspace: \\"librecloud\\" # Known workspaces (automatically managed)\\nworkspaces: - name: \\"librecloud\\" path: \\"/Users/Akasha/project-provisioning/workspace_librecloud\\" last_used: \\"2025-10-06T12:29:43Z\\" - name: \\"production\\" path: \\"/opt/workspaces/production\\" last_used: \\"2025-10-05T10:15:30Z\\" # User preferences (global settings)\\npreferences: editor: \\"vim\\" output_format: \\"yaml\\" confirm_delete: true confirm_deploy: true default_log_level: \\"info\\" preferred_provider: \\"upcloud\\" # Metadata\\nmetadata: created: \\"2025-10-06T12:29:43Z\\" last_updated: \\"2025-10-06T13:46:16Z\\" version: \\"1.0.0\\"\\n``` ## Usage Example ```bash\\n# Start with workspace librecloud active\\n$ provisioning workspace active\\nActive Workspace: Name: librecloud Path: /Users/Akasha/project-provisioning/workspace_librecloud Last used: 2025-10-06T13:46:16Z # List all workspaces (● indicates active)\\n$ provisioning workspace list Registered Workspaces: ● librecloud Path: /Users/Akasha/project-provisioning/workspace_librecloud Last used: 2025-10-06T13:46:16Z production Path: /opt/workspaces/production Last used: 2025-10-05T10:15:30Z # Switch to production\\n$ provisioning workspace switch production\\n✓ Workspace \'production\' activated Current workspace: production\\nPath: /opt/workspaces/production ℹ All provisioning commands will now use this workspace # All subsequent commands use production workspace\\n$ provisioning server list\\n$ provisioning taskserv create kubernetes\\n``` ## Integration with Config System The workspace switching system integrates seamlessly with the configuration system: 1. **Active Workspace Detection**: Config loader reads `active_workspace` from `user_config.yaml`\\n2. 
**Workspace Validation**: Ensures workspace has required `config/provisioning.yaml`\\n3. **Configuration Loading**: Loads workspace-specific configs automatically\\n4. **Automatic Timestamps**: Updates `last_used` on workspace activation **Configuration Hierarchy** (Priority: Low → High): ```plaintext\\n1. Workspace config workspace/{name}/config/provisioning.yaml\\n2. Provider configs workspace/{name}/config/providers/*.toml\\n3. Platform configs workspace/{name}/config/platform/*.toml\\n4. User config ~/Library/Application Support/provisioning/user_config.yaml\\n5. Environment variables PROVISIONING_*\\n``` ## Benefits - ✅ **No Manual Config Editing**: Switch workspaces with single command\\n- ✅ **Multiple Workspaces**: Manage dev, staging, production simultaneously\\n- ✅ **User Preferences**: Global settings across all workspaces\\n- ✅ **Automatic Tracking**: Last-used timestamps, active workspace markers\\n- ✅ **Safe Operations**: Validation before activation, confirmation prompts\\n- ✅ **Backward Compatible**: Old `ws_{name}.yaml` files still supported For more detailed information, see [Workspace Switching Guide](../infrastructure/workspace-switching-guide.md).","breadcrumbs":"Workspace Switching System » Workspace Management Commands","id":"1400","title":"Workspace Management Commands"},"1401":{"body":"Complete command-line reference for Infrastructure Automation. 
This guide covers all commands, options, and usage patterns.","breadcrumbs":"CLI Reference » CLI Reference","id":"1401","title":"CLI Reference"},"1402":{"body":"Complete command syntax and options All available commands and subcommands Usage examples and patterns Scripting and automation Integration with other tools Advanced command combinations","breadcrumbs":"CLI Reference » What You\'ll Learn","id":"1402","title":"What You\'ll Learn"},"1403":{"body":"All provisioning commands follow this structure: provisioning [global-options] [subcommand] [command-options] [arguments]","breadcrumbs":"CLI Reference » Command Structure","id":"1403","title":"Command Structure"},"1404":{"body":"These options can be used with any command: Option Short Description Example --infra -i Specify infrastructure --infra production --environment Environment override --environment prod --check -c Dry run mode --check --debug -x Enable debug output --debug --yes -y Auto-confirm actions --yes --wait -w Wait for completion --wait --out Output format --out json --help -h Show help --help","breadcrumbs":"CLI Reference » Global Options","id":"1404","title":"Global Options"},"1405":{"body":"Format Description Use Case text Human-readable text Terminal viewing json JSON format Scripting, APIs yaml YAML format Configuration files toml TOML format Settings files table Tabular format Reports, lists","breadcrumbs":"CLI Reference » Output Formats","id":"1405","title":"Output Formats"},"1406":{"body":"","breadcrumbs":"CLI Reference » Core Commands","id":"1406","title":"Core Commands"},"1407":{"body":"Display help information for the system or specific commands. 
# General help\\nprovisioning help # Command-specific help\\nprovisioning help server\\nprovisioning help taskserv\\nprovisioning help cluster # Show all available commands\\nprovisioning help --all # Show help for subcommand\\nprovisioning server help create Options: --all - Show all available commands --detailed - Show detailed help with examples","breadcrumbs":"CLI Reference » help - Show Help Information","id":"1407","title":"help - Show Help Information"},"1408":{"body":"Display version information for the system and dependencies. # Basic version\\nprovisioning version\\nprovisioning --version\\nprovisioning -V # Detailed version with dependencies\\nprovisioning version --verbose # Show version info with title\\nprovisioning --info\\nprovisioning -I Options: --verbose - Show detailed version information --dependencies - Include dependency versions","breadcrumbs":"CLI Reference » version - Show Version Information","id":"1408","title":"version - Show Version Information"},"1409":{"body":"Display current environment configuration and settings. 
# Show environment variables\\nprovisioning env # Show all environment and configuration\\nprovisioning allenv # Show specific environment\\nprovisioning env --environment prod # Export environment\\nprovisioning env --export Output includes: Configuration file locations Environment variables Provider settings Path configurations","breadcrumbs":"CLI Reference » env - Environment Information","id":"1409","title":"env - Environment Information"},"141":{"body":"# Clone the repository\\ngit clone https://github.com/provisioning/provisioning-platform.git\\ncd provisioning-platform # Checkout the latest stable release (optional)\\ngit checkout tags/v3.5.0","breadcrumbs":"Installation Steps » Step 1: Clone the Repository","id":"141","title":"Step 1: Clone the Repository"},"1410":{"body":"","breadcrumbs":"CLI Reference » Server Management Commands","id":"1410","title":"Server Management Commands"},"1411":{"body":"Create new server instances based on configuration. # Create all servers in infrastructure\\nprovisioning server create --infra my-infra # Dry run (check mode)\\nprovisioning server create --infra my-infra --check # Create with confirmation\\nprovisioning server create --infra my-infra --yes # Create and wait for completion\\nprovisioning server create --infra my-infra --wait # Create specific server\\nprovisioning server create web-01 --infra my-infra # Create with custom settings\\nprovisioning server create --infra my-infra --settings custom.k Options: --check, -c - Dry run mode (show what would be created) --yes, -y - Auto-confirm creation --wait, -w - Wait for servers to be fully ready --settings, -s - Custom settings file --template, -t - Use specific template","breadcrumbs":"CLI Reference » server create - Create Servers","id":"1411","title":"server create - Create Servers"},"1412":{"body":"Remove server instances and associated resources. 
# Delete all servers\\nprovisioning server delete --infra my-infra # Delete with confirmation\\nprovisioning server delete --infra my-infra --yes # Delete but keep storage\\nprovisioning server delete --infra my-infra --keepstorage # Delete specific server\\nprovisioning server delete web-01 --infra my-infra # Dry run deletion\\nprovisioning server delete --infra my-infra --check Options: --yes, -y - Auto-confirm deletion --keepstorage - Preserve storage volumes --force - Force deletion even if servers are running","breadcrumbs":"CLI Reference » server delete - Delete Servers","id":"1412","title":"server delete - Delete Servers"},"1413":{"body":"Display information about servers. # List all servers\\nprovisioning server list --infra my-infra # List with detailed information\\nprovisioning server list --infra my-infra --detailed # List in specific format\\nprovisioning server list --infra my-infra --out json # List servers across all infrastructures\\nprovisioning server list --all # Filter by status\\nprovisioning server list --infra my-infra --status running Options: --detailed - Show detailed server information --status - Filter by server status --all - Show servers from all infrastructures","breadcrumbs":"CLI Reference » server list - List Servers","id":"1413","title":"server list - List Servers"},"1414":{"body":"Connect to servers via SSH. 
# SSH to server\\nprovisioning server ssh web-01 --infra my-infra # SSH with specific user\\nprovisioning server ssh web-01 --user admin --infra my-infra # SSH with custom key\\nprovisioning server ssh web-01 --key ~/.ssh/custom_key --infra my-infra # Execute single command\\nprovisioning server ssh web-01 --command \\"systemctl status nginx\\" --infra my-infra Options: --user - SSH username (default from configuration) --key - SSH private key file --command - Execute command and exit --port - SSH port (default: 22)","breadcrumbs":"CLI Reference » server ssh - SSH Access","id":"1414","title":"server ssh - SSH Access"},"1415":{"body":"Display pricing information for servers. # Show costs for all servers\\nprovisioning server price --infra my-infra # Show detailed cost breakdown\\nprovisioning server price --infra my-infra --detailed # Show monthly estimates\\nprovisioning server price --infra my-infra --monthly # Cost comparison between providers\\nprovisioning server price --infra my-infra --compare Options: --detailed - Detailed cost breakdown --monthly - Monthly cost estimates --compare - Compare costs across providers","breadcrumbs":"CLI Reference » server price - Cost Information","id":"1415","title":"server price - Cost Information"},"1416":{"body":"","breadcrumbs":"CLI Reference » Task Service Commands","id":"1416","title":"Task Service Commands"},"1417":{"body":"Install and configure task services on servers. 
# Install service on all eligible servers\\nprovisioning taskserv create kubernetes --infra my-infra # Install with check mode\\nprovisioning taskserv create kubernetes --infra my-infra --check # Install specific version\\nprovisioning taskserv create kubernetes --version 1.28 --infra my-infra # Install on specific servers\\nprovisioning taskserv create postgresql --servers db-01,db-02 --infra my-infra # Install with custom configuration\\nprovisioning taskserv create kubernetes --config k8s-config.yaml --infra my-infra Options: --version - Specific version to install --config - Custom configuration file --servers - Target specific servers --force - Force installation even if conflicts exist","breadcrumbs":"CLI Reference » taskserv create - Install Services","id":"1417","title":"taskserv create - Install Services"},"1418":{"body":"Remove task services from servers. # Remove service\\nprovisioning taskserv delete kubernetes --infra my-infra # Remove with data cleanup\\nprovisioning taskserv delete postgresql --cleanup-data --infra my-infra # Remove from specific servers\\nprovisioning taskserv delete nginx --servers web-01,web-02 --infra my-infra # Dry run removal\\nprovisioning taskserv delete kubernetes --infra my-infra --check Options: --cleanup-data - Remove associated data --servers - Target specific servers --force - Force removal","breadcrumbs":"CLI Reference » taskserv delete - Remove Services","id":"1418","title":"taskserv delete - Remove Services"},"1419":{"body":"Display available and installed task services. 
# List all available services\\nprovisioning taskserv list # List installed services\\nprovisioning taskserv list --infra my-infra --installed # List by category\\nprovisioning taskserv list --category database # List with versions\\nprovisioning taskserv list --versions # Search services\\nprovisioning taskserv list --search kubernetes Options: --installed - Show only installed services --category - Filter by service category --versions - Include version information --search - Search by name or description","breadcrumbs":"CLI Reference » taskserv list - List Services","id":"1419","title":"taskserv list - List Services"},"142":{"body":"The platform uses several Nushell plugins for enhanced functionality.","breadcrumbs":"Installation Steps » Step 2: Install Nushell Plugins","id":"142","title":"Step 2: Install Nushell Plugins"},"1420":{"body":"Generate configuration files for task services. # Generate configuration\\nprovisioning taskserv generate kubernetes --infra my-infra # Generate with custom template\\nprovisioning taskserv generate kubernetes --template custom --infra my-infra # Generate for specific servers\\nprovisioning taskserv generate nginx --servers web-01,web-02 --infra my-infra # Generate and save to file\\nprovisioning taskserv generate postgresql --output db-config.yaml --infra my-infra Options: --template - Use specific template --output - Save to specific file --servers - Target specific servers","breadcrumbs":"CLI Reference » taskserv generate - Generate Configurations","id":"1420","title":"taskserv generate - Generate Configurations"},"1421":{"body":"Check for and manage service version updates. 
# Check updates for all services\\nprovisioning taskserv check-updates --infra my-infra # Check specific service\\nprovisioning taskserv check-updates kubernetes --infra my-infra # Show available versions\\nprovisioning taskserv versions kubernetes # Update to latest version\\nprovisioning taskserv update kubernetes --infra my-infra # Update to specific version\\nprovisioning taskserv update kubernetes --version 1.29 --infra my-infra Options: --version - Target specific version --security-only - Only security updates --dry-run - Show what would be updated","breadcrumbs":"CLI Reference » taskserv check-updates - Version Management","id":"1421","title":"taskserv check-updates - Version Management"},"1422":{"body":"","breadcrumbs":"CLI Reference » Cluster Management Commands","id":"1422","title":"Cluster Management Commands"},"1423":{"body":"Deploy and configure application clusters. # Create cluster\\nprovisioning cluster create web-cluster --infra my-infra # Create with check mode\\nprovisioning cluster create web-cluster --infra my-infra --check # Create with custom configuration\\nprovisioning cluster create web-cluster --config cluster.yaml --infra my-infra # Create and scale immediately\\nprovisioning cluster create web-cluster --replicas 5 --infra my-infra Options: --config - Custom cluster configuration --replicas - Initial replica count --namespace - Kubernetes namespace","breadcrumbs":"CLI Reference » cluster create - Deploy Clusters","id":"1423","title":"cluster create - Deploy Clusters"},"1424":{"body":"Remove application clusters and associated resources. 
# Delete cluster\\nprovisioning cluster delete web-cluster --infra my-infra # Delete with data cleanup\\nprovisioning cluster delete web-cluster --cleanup --infra my-infra # Force delete\\nprovisioning cluster delete web-cluster --force --infra my-infra Options: --cleanup - Remove associated data --force - Force deletion --keep-volumes - Preserve persistent volumes","breadcrumbs":"CLI Reference » cluster delete - Remove Clusters","id":"1424","title":"cluster delete - Remove Clusters"},"1425":{"body":"Display information about deployed clusters. # List all clusters\\nprovisioning cluster list --infra my-infra # List with status\\nprovisioning cluster list --infra my-infra --status # List across all infrastructures\\nprovisioning cluster list --all # Filter by namespace\\nprovisioning cluster list --namespace production --infra my-infra Options: --status - Include status information --all - Show clusters from all infrastructures --namespace - Filter by namespace","breadcrumbs":"CLI Reference » cluster list - List Clusters","id":"1425","title":"cluster list - List Clusters"},"1426":{"body":"Adjust cluster size and resources. # Scale cluster\\nprovisioning cluster scale web-cluster --replicas 10 --infra my-infra # Auto-scale configuration\\nprovisioning cluster scale web-cluster --auto-scale --min 3 --max 20 --infra my-infra # Scale specific component\\nprovisioning cluster scale web-cluster --component api --replicas 5 --infra my-infra Options: --replicas - Target replica count --auto-scale - Enable auto-scaling --min, --max - Auto-scaling limits --component - Scale specific component","breadcrumbs":"CLI Reference » cluster scale - Scale Clusters","id":"1426","title":"cluster scale - Scale Clusters"},"1427":{"body":"","breadcrumbs":"CLI Reference » Infrastructure Commands","id":"1427","title":"Infrastructure Commands"},"1428":{"body":"Generate infrastructure and configuration files. 
# Generate new infrastructure\\nprovisioning generate infra --new my-infrastructure # Generate from template\\nprovisioning generate infra --template web-app --name my-app # Generate server configurations\\nprovisioning generate server --infra my-infra # Generate task service configurations\\nprovisioning generate taskserv --infra my-infra # Generate cluster configurations\\nprovisioning generate cluster --infra my-infra Subcommands: infra - Infrastructure configurations server - Server configurations taskserv - Task service configurations cluster - Cluster configurations Options: --new - Create new infrastructure --template - Use specific template --name - Name for generated resources --output - Output directory","breadcrumbs":"CLI Reference » generate - Generate Configurations","id":"1428","title":"generate - Generate Configurations"},"1429":{"body":"Show detailed information about infrastructure components. # Show settings\\nprovisioning show settings --infra my-infra # Show servers\\nprovisioning show servers --infra my-infra # Show specific server\\nprovisioning show servers web-01 --infra my-infra # Show task services\\nprovisioning show taskservs --infra my-infra # Show costs\\nprovisioning show costs --infra my-infra # Show in different format\\nprovisioning show servers --infra my-infra --out json Subcommands: settings - Configuration settings servers - Server information taskservs - Task service information costs - Cost information data - Raw infrastructure data","breadcrumbs":"CLI Reference » show - Display Information","id":"1429","title":"show - Display Information"},"143":{"body":"# Install from crates.io\\ncargo install nu_plugin_tera # Register with Nushell\\nnu -c \\"plugin add ~/.cargo/bin/nu_plugin_tera; plugin use tera\\"","breadcrumbs":"Installation Steps » Install nu_plugin_tera (Template Rendering)","id":"143","title":"Install nu_plugin_tera (Template Rendering)"},"1430":{"body":"List various types of resources. 
# List providers\\nprovisioning list providers # List task services\\nprovisioning list taskservs # List clusters\\nprovisioning list clusters # List infrastructures\\nprovisioning list infras # List with selection interface\\nprovisioning list servers --select Subcommands: providers - Available providers taskservs - Available task services clusters - Available clusters infras - Available infrastructures servers - Server instances","breadcrumbs":"CLI Reference » list - List Resources","id":"1430","title":"list - List Resources"},"1431":{"body":"Validate configuration files and infrastructure definitions. # Validate configuration\\nprovisioning validate config --infra my-infra # Validate with detailed output\\nprovisioning validate config --detailed --infra my-infra # Validate specific file\\nprovisioning validate config settings.k --infra my-infra # Quick validation\\nprovisioning validate quick --infra my-infra # Validate interpolation\\nprovisioning validate interpolation --infra my-infra Subcommands: config - Configuration validation quick - Quick infrastructure validation interpolation - Interpolation pattern validation Options: --detailed - Show detailed validation results --strict - Strict validation mode --rules - Show validation rules","breadcrumbs":"CLI Reference » validate - Validate Configuration","id":"1431","title":"validate - Validate Configuration"},"1432":{"body":"","breadcrumbs":"CLI Reference » Configuration Commands","id":"1432","title":"Configuration Commands"},"1433":{"body":"Initialize user and project configurations. 
# Initialize user configuration\\nprovisioning init config # Initialize with specific template\\nprovisioning init config dev # Initialize project configuration\\nprovisioning init project # Force overwrite existing\\nprovisioning init config --force Subcommands: config - User configuration project - Project configuration Options: --template - Configuration template --force - Overwrite existing files","breadcrumbs":"CLI Reference » init - Initialize Configuration","id":"1433","title":"init - Initialize Configuration"},"1434":{"body":"Manage configuration templates. # List available templates\\nprovisioning template list # Show template content\\nprovisioning template show dev # Validate templates\\nprovisioning template validate # Create custom template\\nprovisioning template create my-template --from dev Subcommands: list - List available templates show - Display template content validate - Validate templates create - Create custom template","breadcrumbs":"CLI Reference » template - Template Management","id":"1434","title":"template - Template Management"},"1435":{"body":"","breadcrumbs":"CLI Reference » Advanced Commands","id":"1435","title":"Advanced Commands"},"1436":{"body":"Start interactive Nushell session with provisioning library loaded. # Start interactive shell\\nprovisioning nu # Execute specific command\\nprovisioning nu -c \\"use lib_provisioning *; show_env\\" # Start with custom script\\nprovisioning nu --script my-script.nu Options: -c - Execute command and exit --script - Run specific script --load - Load additional modules","breadcrumbs":"CLI Reference » nu - Interactive Shell","id":"1436","title":"nu - Interactive Shell"},"1437":{"body":"Edit encrypted configuration files using SOPS. 
# Edit encrypted file\\nprovisioning sops settings.k --infra my-infra # Encrypt new file\\nprovisioning sops --encrypt new-secrets.k --infra my-infra # Decrypt for viewing\\nprovisioning sops --decrypt secrets.k --infra my-infra # Rotate keys\\nprovisioning sops --rotate-keys secrets.k --infra my-infra Options: --encrypt - Encrypt file --decrypt - Decrypt file --rotate-keys - Rotate encryption keys","breadcrumbs":"CLI Reference » sops - Secret Management","id":"1437","title":"sops - Secret Management"},"1438":{"body":"Manage infrastructure contexts and environments. # Show current context\\nprovisioning context # List available contexts\\nprovisioning context list # Switch context\\nprovisioning context switch production # Create new context\\nprovisioning context create staging --from development # Delete context\\nprovisioning context delete old-context Subcommands: list - List contexts switch - Switch active context create - Create new context delete - Delete context","breadcrumbs":"CLI Reference » context - Context Management","id":"1438","title":"context - Context Management"},"1439":{"body":"","breadcrumbs":"CLI Reference » Workflow Commands","id":"1439","title":"Workflow Commands"},"144":{"body":"# Install from custom repository\\ncargo install --git https://repo.jesusperez.pro/jesus/nushell-plugins nu_plugin_kcl # Register with Nushell\\nnu -c \\"plugin add ~/.cargo/bin/nu_plugin_kcl; plugin use kcl\\"","breadcrumbs":"Installation Steps » Install nu_plugin_kcl (Optional, KCL Integration)","id":"144","title":"Install nu_plugin_kcl (Optional, KCL Integration)"},"1440":{"body":"Manage complex workflows and batch operations. 
# Submit batch workflow\\nprovisioning workflows batch submit my-workflow.k # Monitor workflow progress\\nprovisioning workflows batch monitor workflow-123 # List workflows\\nprovisioning workflows batch list --status running # Get workflow status\\nprovisioning workflows batch status workflow-123 # Rollback failed workflow\\nprovisioning workflows batch rollback workflow-123 Options: --status - Filter by workflow status --follow - Follow workflow progress --timeout - Set timeout for operations","breadcrumbs":"CLI Reference » workflows - Batch Operations","id":"1440","title":"workflows - Batch Operations"},"1441":{"body":"Control the hybrid orchestrator system. # Start orchestrator\\nprovisioning orchestrator start # Check orchestrator status\\nprovisioning orchestrator status # Stop orchestrator\\nprovisioning orchestrator stop # Show orchestrator logs\\nprovisioning orchestrator logs # Health check\\nprovisioning orchestrator health","breadcrumbs":"CLI Reference » orchestrator - Orchestrator Management","id":"1441","title":"orchestrator - Orchestrator Management"},"1442":{"body":"","breadcrumbs":"CLI Reference » Scripting and Automation","id":"1442","title":"Scripting and Automation"},"1443":{"body":"Provisioning uses standard exit codes: 0 - Success 1 - General error 2 - Invalid command or arguments 3 - Configuration error 4 - Permission denied 5 - Resource not found","breadcrumbs":"CLI Reference » Exit Codes","id":"1443","title":"Exit Codes"},"1444":{"body":"Control behavior through environment variables: # Enable debug mode\\nexport PROVISIONING_DEBUG=true # Set environment\\nexport PROVISIONING_ENV=production # Set output format\\nexport PROVISIONING_OUTPUT_FORMAT=json # Disable interactive prompts\\nexport PROVISIONING_NONINTERACTIVE=true","breadcrumbs":"CLI Reference » Environment Variables","id":"1444","title":"Environment Variables"},"1445":{"body":"#!/bin/bash\\n# Example batch script # Set environment\\nexport PROVISIONING_ENV=production\\nexport 
PROVISIONING_NONINTERACTIVE=true # Validate first\\nif ! provisioning validate config --infra production; then echo \\"Configuration validation failed\\" exit 1\\nfi # Create infrastructure\\nprovisioning server create --infra production --yes --wait # Install services\\nprovisioning taskserv create kubernetes --infra production --yes\\nprovisioning taskserv create postgresql --infra production --yes # Deploy clusters\\nprovisioning cluster create web-app --infra production --yes echo \\"Deployment completed successfully\\"","breadcrumbs":"CLI Reference » Batch Operations","id":"1445","title":"Batch Operations"},"1446":{"body":"# Get server list as JSON\\nservers=$(provisioning server list --infra my-infra --out json) # Process with jq\\necho \\"$servers\\" | jq \'.[] | select(.status == \\"running\\") | .name\' # Use in scripts\\nfor server in $(echo \\"$servers\\" | jq -r \'.[] | select(.status == \\"running\\") | .name\'); do echo \\"Processing server: $server\\" provisioning server ssh \\"$server\\" --command \\"uptime\\" --infra my-infra\\ndone","breadcrumbs":"CLI Reference » JSON Output Processing","id":"1446","title":"JSON Output Processing"},"1447":{"body":"","breadcrumbs":"CLI Reference » Command Chaining and Pipelines","id":"1447","title":"Command Chaining and Pipelines"},"1448":{"body":"# Chain commands with && (stop on failure)\\nprovisioning validate config --infra my-infra && \\\\\\nprovisioning server create --infra my-infra --check && \\\\\\nprovisioning server create --infra my-infra --yes # Chain with || (continue on failure)\\nprovisioning taskserv create kubernetes --infra my-infra || \\\\\\necho \\"Kubernetes installation failed, continuing with other services\\"","breadcrumbs":"CLI Reference » Sequential Operations","id":"1448","title":"Sequential Operations"},"1449":{"body":"# Full deployment workflow\\ndeploy_infrastructure() { local infra_name=$1 echo \\"Deploying infrastructure: $infra_name\\" # Validate provisioning validate config 
--infra \\"$infra_name\\" || return 1 # Create servers provisioning server create --infra \\"$infra_name\\" --yes --wait || return 1 # Install base services for service in containerd kubernetes; do provisioning taskserv create \\"$service\\" --infra \\"$infra_name\\" --yes || return 1 done # Deploy applications provisioning cluster create web-app --infra \\"$infra_name\\" --yes || return 1 echo \\"Deployment completed: $infra_name\\"\\n} # Use the function\\ndeploy_infrastructure \\"production\\"","breadcrumbs":"CLI Reference » Complex Workflows","id":"1449","title":"Complex Workflows"},"145":{"body":"# Start Nushell\\nnu # List installed plugins\\nplugin list # Expected output should include:\\n# - tera\\n# - kcl (if installed)","breadcrumbs":"Installation Steps » Verify Plugin Installation","id":"145","title":"Verify Plugin Installation"},"1450":{"body":"","breadcrumbs":"CLI Reference » Integration with Other Tools","id":"1450","title":"Integration with Other Tools"},"1451":{"body":"# GitLab CI example\\ndeploy: script: - provisioning validate config --infra production - provisioning server create --infra production --check - provisioning server create --infra production --yes --wait - provisioning taskserv create kubernetes --infra production --yes only: - main","breadcrumbs":"CLI Reference » CI/CD Integration","id":"1451","title":"CI/CD Integration"},"1452":{"body":"# Health check script\\n#!/bin/bash # Check infrastructure health\\nif provisioning health check --infra production --out json | jq -e \'.healthy\'; then echo \\"Infrastructure healthy\\" exit 0\\nelse echo \\"Infrastructure unhealthy\\" # Send alert curl -X POST https://alerts.company.com/webhook \\\\ -d \'{\\"message\\": \\"Infrastructure health check failed\\"}\' exit 1\\nfi","breadcrumbs":"CLI Reference » Monitoring Integration","id":"1452","title":"Monitoring Integration"},"1453":{"body":"# Backup script\\n#!/bin/bash DATE=$(date +%Y%m%d_%H%M%S)\\nBACKUP_DIR=\\"/backups/provisioning/$DATE\\" # 
Create backup directory\\nmkdir -p \\"$BACKUP_DIR\\" # Export configurations\\nprovisioning config export --format yaml > \\"$BACKUP_DIR/config.yaml\\" # Backup infrastructure definitions\\nfor infra in $(provisioning list infras --out json | jq -r \'.[]\'); do provisioning show settings --infra \\"$infra\\" --out yaml > \\"$BACKUP_DIR/$infra.yaml\\"\\ndone echo \\"Backup completed: $BACKUP_DIR\\" This CLI reference provides comprehensive coverage of all provisioning commands. Use it as your primary reference for command syntax, options, and integration patterns.","breadcrumbs":"CLI Reference » Backup Automation","id":"1453","title":"Backup Automation"},"1454":{"body":"Version : 2.0.0 Date : 2025-10-06 Status : Implemented","breadcrumbs":"Workspace Config Architecture » Workspace Configuration Architecture","id":"1454","title":"Workspace Configuration Architecture"},"1455":{"body":"The provisioning system now uses a workspace-based configuration architecture where each workspace has its own complete configuration structure. This replaces the old ENV-based and template-only system.","breadcrumbs":"Workspace Config Architecture » Overview","id":"1455","title":"Overview"},"1456":{"body":"config.defaults.toml is ONLY a template, NEVER loaded at runtime This file exists solely as a reference template for generating workspace configurations. 
The system does NOT load it during operation.","breadcrumbs":"Workspace Config Architecture » Critical Design Principle","id":"1456","title":"Critical Design Principle"},"1457":{"body":"Configuration is loaded in the following order (lowest to highest priority): Workspace Config (Base): {workspace}/config/provisioning.yaml Provider Configs : {workspace}/config/providers/*.toml Platform Configs : {workspace}/config/platform/*.toml User Context : ~/Library/Application Support/provisioning/ws_{name}.yaml Environment Variables : PROVISIONING_* (highest priority)","breadcrumbs":"Workspace Config Architecture » Configuration Hierarchy","id":"1457","title":"Configuration Hierarchy"},"1458":{"body":"When a workspace is initialized, the following structure is created: {workspace}/\\n├── config/\\n│ ├── provisioning.yaml # Main workspace config (generated from template)\\n│ ├── providers/ # Provider-specific configs\\n│ │ ├── aws.toml\\n│ │ ├── local.toml\\n│ │ └── upcloud.toml\\n│ ├── platform/ # Platform service configs\\n│ │ ├── orchestrator.toml\\n│ │ └── mcp.toml\\n│ └── kms.toml # KMS configuration\\n├── infra/ # Infrastructure definitions\\n├── .cache/ # Cache directory\\n├── .runtime/ # Runtime data\\n│ ├── taskservs/\\n│ └── clusters/\\n├── .providers/ # Provider state\\n├── .kms/ # Key management\\n│ └── keys/\\n├── generated/ # Generated files\\n└── .gitignore # Workspace gitignore\\n```plaintext ## Template System Templates are located at: `/Users/Akasha/project-provisioning/provisioning/config/templates/` ### Available Templates 1. **workspace-provisioning.yaml.template** - Main workspace configuration\\n2. **provider-aws.toml.template** - AWS provider configuration\\n3. **provider-local.toml.template** - Local provider configuration\\n4. **provider-upcloud.toml.template** - UpCloud provider configuration\\n5. **kms.toml.template** - KMS configuration\\n6. 
**user-context.yaml.template** - User context configuration ### Template Variables Templates support the following interpolation variables: - `{{workspace.name}}` - Workspace name\\n- `{{workspace.path}}` - Absolute path to workspace\\n- `{{now.iso}}` - Current timestamp in ISO format\\n- `{{env.HOME}}` - User\'s home directory\\n- `{{env.*}}` - Environment variables (safe list only)\\n- `{{paths.base}}` - Base path (after config load) ## Workspace Initialization ### Command ```bash\\n# Using the workspace init function\\nnu -c \\"use provisioning/core/nulib/lib_provisioning/workspace/init.nu *; workspace-init \'my-workspace\' \'/path/to/workspace\' --providers [\'aws\' \'local\'] --activate\\"\\n```plaintext ### Process 1. **Create Directory Structure**: All necessary directories\\n2. **Generate Config from Template**: Creates `config/provisioning.yaml`\\n3. **Generate Provider Configs**: For each specified provider\\n4. **Generate KMS Config**: Security configuration\\n5. **Create User Context** (if --activate): User-specific overrides\\n6. **Create .gitignore**: Ignore runtime/cache files ## User Context User context files are stored per workspace: **Location**: `~/Library/Application Support/provisioning/ws_{workspace_name}.yaml` ### Purpose - Store user-specific overrides (debug settings, output preferences)\\n- Mark active workspace\\n- Override workspace paths if needed ### Example ```yaml\\nworkspace: name: \\"my-workspace\\" path: \\"/path/to/my-workspace\\" active: true debug: enabled: true log_level: \\"debug\\" output: format: \\"json\\" providers: default: \\"aws\\"\\n```plaintext ## Configuration Loading Process ### 1. Determine Active Workspace ```nushell\\n# Check user config directory for active workspace\\nlet user_config_dir = ~/Library/Application Support/provisioning/\\nlet active_workspace = (find workspace with active: true in ws_*.yaml files)\\n```plaintext ### 2. 
Load Workspace Config ```nushell\\n# Load main workspace config\\nlet workspace_config = {workspace.path}/config/provisioning.yaml\\n```plaintext ### 3. Load Provider Configs ```nushell\\n# Merge all provider configs\\nfor provider in {workspace.path}/config/providers/*.toml { merge provider config\\n}\\n```plaintext ### 4. Load Platform Configs ```nushell\\n# Merge all platform configs\\nfor platform in {workspace.path}/config/platform/*.toml { merge platform config\\n}\\n```plaintext ### 5. Apply User Context ```nushell\\n# Apply user-specific overrides\\nlet user_context = ~/Library/Application Support/provisioning/ws_{name}.yaml\\nmerge user_context (highest config priority)\\n```plaintext ### 6. Apply Environment Variables ```nushell\\n# Final overrides from environment\\nPROVISIONING_DEBUG=true\\nPROVISIONING_LOG_LEVEL=debug\\nPROVISIONING_PROVIDER=aws\\n# etc.\\n```plaintext ## Migration from Old System ### Before (ENV-based) ```bash\\nexport PROVISIONING=/usr/local/provisioning\\nexport PROVISIONING_INFRA_PATH=/path/to/infra\\nexport PROVISIONING_DEBUG=true\\n# ... many ENV variables\\n```plaintext ### After (Workspace-based) ```bash\\n# Initialize workspace\\nworkspace-init \\"production\\" \\"/workspaces/prod\\" --providers [\\"aws\\"] --activate # All config is now in workspace\\n# No ENV variables needed (except for overrides)\\n```plaintext ### Breaking Changes 1. **`config.defaults.toml` NOT loaded** - Only used as template\\n2. **Workspace required** - Must have active workspace or be in workspace directory\\n3. **New config locations** - User config in `~/Library/Application Support/provisioning/`\\n4. 
**YAML main config** - `provisioning.yaml` instead of TOML ## Workspace Management Commands ### Initialize Workspace ```nushell\\nuse provisioning/core/nulib/lib_provisioning/workspace/init.nu *\\nworkspace-init \\"my-workspace\\" \\"/path/to/workspace\\" --providers [\\"aws\\" \\"local\\"] --activate\\n```plaintext ### List Workspaces ```nushell\\nworkspace-list\\n```plaintext ### Activate Workspace ```nushell\\nworkspace-activate \\"my-workspace\\"\\n```plaintext ### Get Active Workspace ```nushell\\nworkspace-get-active\\n```plaintext ## Implementation Files ### Core Files 1. **Template Directory**: `/Users/Akasha/project-provisioning/provisioning/config/templates/`\\n2. **Workspace Init**: `/Users/Akasha/project-provisioning/provisioning/core/nulib/lib_provisioning/workspace/init.nu`\\n3. **Config Loader**: `/Users/Akasha/project-provisioning/provisioning/core/nulib/lib_provisioning/config/loader.nu` ### Key Changes in Config Loader #### Removed - `get-defaults-config-path()` - No longer loads config.defaults.toml\\n- Old hierarchy with user/project/infra TOML files #### Added - `get-active-workspace()` - Finds active workspace from user config\\n- Support for YAML config files\\n- Provider and platform config merging\\n- User context loading ## Configuration Schema ### Main Workspace Config (provisioning.yaml) ```yaml\\nworkspace: name: string version: string created: timestamp paths: base: string infra: string cache: string runtime: string # ... all paths core: version: string name: string debug: enabled: bool log_level: string # ... debug settings providers: active: [string] default: string # ... 
all other sections\\n```plaintext ### Provider Config (providers/*.toml) ```toml\\n[provider]\\nname = \\"aws\\"\\nenabled = true\\nworkspace = \\"workspace-name\\" [provider.auth]\\nprofile = \\"default\\"\\nregion = \\"us-east-1\\" [provider.paths]\\nbase = \\"{workspace}/.providers/aws\\"\\ncache = \\"{workspace}/.providers/aws/cache\\"\\n```plaintext ### User Context (ws_{name}.yaml) ```yaml\\nworkspace: name: string path: string active: bool debug: enabled: bool log_level: string output: format: string\\n```plaintext ## Benefits 1. **No Template Loading**: config.defaults.toml is template-only\\n2. **Workspace Isolation**: Each workspace is self-contained\\n3. **Explicit Configuration**: No hidden defaults from ENV\\n4. **Clear Hierarchy**: Predictable override behavior\\n5. **Multi-Workspace Support**: Easy switching between workspaces\\n6. **User Overrides**: Per-workspace user preferences\\n7. **Version Control**: Workspace configs can be committed (except secrets) ## Security Considerations ### Generated .gitignore The workspace .gitignore excludes: - `.cache/` - Cache files\\n- `.runtime/` - Runtime data\\n- `.providers/` - Provider state\\n- `.kms/keys/` - Secret keys\\n- `generated/` - Generated files\\n- `*.log` - Log files ### Secret Management - KMS keys stored in `.kms/keys/` (gitignored)\\n- SOPS config references keys, doesn\'t store them\\n- Provider credentials in user-specific locations (not workspace) ## Troubleshooting ### No Active Workspace Error ```plaintext\\nError: No active workspace found. Please initialize or activate a workspace.\\n```plaintext **Solution**: Initialize or activate a workspace: ```bash\\nworkspace-init \\"my-workspace\\" \\"/path/to/workspace\\" --activate\\n```plaintext ### Config File Not Found ```plaintext\\nError: Required configuration file not found: {workspace}/config/provisioning.yaml\\n```plaintext **Solution**: The workspace config is corrupted or deleted. 
Re-initialize: ```bash\\nworkspace-init \\"workspace-name\\" \\"/existing/path\\" --providers [\\"aws\\"]\\n```plaintext ### Provider Not Configured **Solution**: Add provider config to workspace: ```bash\\n# Generate provider config manually\\ngenerate-provider-config \\"/workspace/path\\" \\"workspace-name\\" \\"aws\\"\\n```plaintext ## Future Enhancements 1. **Workspace Templates**: Pre-configured workspace templates (dev, prod, test)\\n2. **Workspace Import/Export**: Share workspace configurations\\n3. **Remote Workspace**: Load workspace from remote Git repository\\n4. **Workspace Validation**: Comprehensive workspace health checks\\n5. **Config Migration Tool**: Automated migration from old ENV-based system ## Summary - **config.defaults.toml is ONLY a template** - Never loaded at runtime\\n- **Workspaces are self-contained** - Complete config structure generated from templates\\n- **New hierarchy**: Workspace → Provider → Platform → User Context → ENV\\n- **User context for overrides** - Stored in ~/Library/Application Support/provisioning/\\n- **Clear, explicit configuration** - No hidden defaults ## Related Documentation - Template files: `provisioning/config/templates/`\\n- Workspace init: `provisioning/core/nulib/lib_provisioning/workspace/init.nu`\\n- Config loader: `provisioning/core/nulib/lib_provisioning/config/loader.nu`\\n- User guide: `docs/user/workspace-management.md`","breadcrumbs":"Workspace Config Architecture » Workspace Structure","id":"1458","title":"Workspace Structure"},"1459":{"body":"This guide covers generating and managing temporary credentials (dynamic secrets) instead of using static secrets. 
See the Quick Reference section below for fast lookup.","breadcrumbs":"Dynamic Secrets Guide » Dynamic Secrets Guide","id":"1459","title":"Dynamic Secrets Guide"},"146":{"body":"Make the provisioning command available globally: # Option 1: Symlink to /usr/local/bin (recommended)\\nsudo ln -s \\"$(pwd)/provisioning/core/cli/provisioning\\" /usr/local/bin/provisioning # Option 2: Add to PATH in your shell profile\\necho \'export PATH=\\"$PATH:\'\\"$(pwd)\\"\'/provisioning/core/cli\\"\' >> ~/.bashrc # or ~/.zshrc\\nsource ~/.bashrc # or ~/.zshrc # Verify installation\\nprovisioning --version","breadcrumbs":"Installation Steps » Step 3: Add CLI to PATH","id":"146","title":"Step 3: Add CLI to PATH"},"1460":{"body":"Quick Start : Generate temporary credentials instead of using static secrets","breadcrumbs":"Dynamic Secrets Guide » Quick Reference","id":"1460","title":"Quick Reference"},"1461":{"body":"Generate AWS Credentials (1 hour) secrets generate aws --role deploy --workspace prod --purpose \\"deployment\\" Generate SSH Key (2 hours) secrets generate ssh --ttl 2 --workspace dev --purpose \\"server access\\" Generate UpCloud Subaccount (2 hours) secrets generate upcloud --workspace staging --purpose \\"testing\\" List Active Secrets secrets list Revoke Secret secrets revoke --reason \\"no longer needed\\" View Statistics secrets stats","breadcrumbs":"Dynamic Secrets Guide » Quick Commands","id":"1461","title":"Quick Commands"},"1462":{"body":"Type TTL Range Renewable Use Case AWS STS 15min - 12h ✅ Yes Cloud resource provisioning SSH Keys 10min - 24h ❌ No Temporary server access UpCloud 30min - 8h ❌ No UpCloud API operations Vault 5min - 24h ✅ Yes Any Vault-backed secret","breadcrumbs":"Dynamic Secrets Guide » Secret Types","id":"1462","title":"Secret Types"},"1463":{"body":"Base URL : http://localhost:9090/api/v1/secrets # Generate secret\\nPOST /generate # Get secret\\nGET /{id} # Revoke secret\\nPOST /{id}/revoke # Renew secret\\nPOST /{id}/renew # List 
secrets\\nGET /list # List expiring\\nGET /expiring # Statistics\\nGET /stats","breadcrumbs":"Dynamic Secrets Guide » REST API Endpoints","id":"1463","title":"REST API Endpoints"},"1464":{"body":"# Generate\\nlet creds = secrets generate aws ` --role deploy ` --region us-west-2 ` --workspace prod ` --purpose \\"Deploy servers\\" # Export to environment\\nexport-env { AWS_ACCESS_KEY_ID: ($creds.credentials.access_key_id) AWS_SECRET_ACCESS_KEY: ($creds.credentials.secret_access_key) AWS_SESSION_TOKEN: ($creds.credentials.session_token)\\n} # Use credentials\\nprovisioning server create # Cleanup\\nsecrets revoke ($creds.id) --reason \\"done\\"","breadcrumbs":"Dynamic Secrets Guide » AWS STS Example","id":"1464","title":"AWS STS Example"},"1465":{"body":"# Generate\\nlet key = secrets generate ssh ` --ttl 4 ` --workspace dev ` --purpose \\"Debug issue\\" # Save key\\n$key.credentials.private_key | save ~/.ssh/temp_key\\nchmod 600 ~/.ssh/temp_key # Use key\\nssh -i ~/.ssh/temp_key user@server # Cleanup\\nrm ~/.ssh/temp_key\\nsecrets revoke ($key.id) --reason \\"fixed\\"","breadcrumbs":"Dynamic Secrets Guide » SSH Key Example","id":"1465","title":"SSH Key Example"},"1466":{"body":"File : provisioning/platform/orchestrator/config.defaults.toml [secrets]\\ndefault_ttl_hours = 1\\nmax_ttl_hours = 12\\nauto_revoke_on_expiry = true\\nwarning_threshold_minutes = 5 aws_account_id = \\"123456789012\\"\\naws_default_region = \\"us-east-1\\" upcloud_username = \\"${UPCLOUD_USER}\\"\\nupcloud_password = \\"${UPCLOUD_PASS}\\"","breadcrumbs":"Dynamic Secrets Guide » Configuration","id":"1466","title":"Configuration"},"1467":{"body":"","breadcrumbs":"Dynamic Secrets Guide » Troubleshooting","id":"1467","title":"Troubleshooting"},"1468":{"body":"→ Check service initialization","breadcrumbs":"Dynamic Secrets Guide » \\"Provider not found\\"","id":"1468","title":"\\"Provider not found\\""},"1469":{"body":"→ Reduce TTL or configure higher max","breadcrumbs":"Dynamic Secrets Guide » 
\\"TTL exceeds maximum\\"","id":"1469","title":"\\"TTL exceeds maximum\\""},"147":{"body":"Generate keys for encrypting sensitive configuration: # Create Age key directory\\nmkdir -p ~/.config/provisioning/age # Generate private key\\nage-keygen -o ~/.config/provisioning/age/private_key.txt # Extract public key\\nage-keygen -y ~/.config/provisioning/age/private_key.txt > ~/.config/provisioning/age/public_key.txt # Secure the keys\\nchmod 600 ~/.config/provisioning/age/private_key.txt\\nchmod 644 ~/.config/provisioning/age/public_key.txt","breadcrumbs":"Installation Steps » Step 4: Generate Age Encryption Keys","id":"147","title":"Step 4: Generate Age Encryption Keys"},"1470":{"body":"→ Generate new secret instead","breadcrumbs":"Dynamic Secrets Guide » \\"Secret not renewable\\"","id":"1470","title":"\\"Secret not renewable\\""},"1471":{"body":"→ Check provider requirements (e.g., AWS needs \'role\')","breadcrumbs":"Dynamic Secrets Guide » \\"Missing required parameter\\"","id":"1471","title":"\\"Missing required parameter\\""},"1472":{"body":"✅ No static credentials stored ✅ Automatic expiration (1-12 hours) ✅ Auto-revocation on expiry ✅ Full audit trail ✅ Memory-only storage ✅ TLS in transit","breadcrumbs":"Dynamic Secrets Guide » Security Features","id":"1472","title":"Security Features"},"1473":{"body":"Orchestrator logs : provisioning/platform/orchestrator/data/orchestrator.log Debug secrets : secrets list | where is_expired == true","breadcrumbs":"Dynamic Secrets Guide » Support","id":"1473","title":"Support"},"1474":{"body":"Version : 1.0.0 | Date : 2025-10-06","breadcrumbs":"Mode System Guide » Mode System Quick Reference","id":"1474","title":"Mode System Quick Reference"},"1475":{"body":"# Check current mode\\nprovisioning mode current # List all available modes\\nprovisioning mode list # Switch to a different mode\\nprovisioning mode switch # Validate mode configuration\\nprovisioning mode validate\\n```plaintext --- ## Available Modes | Mode | Use Case | 
Auth | Orchestrator | OCI Registry |\\n|------|----------|------|--------------|--------------|\\n| **solo** | Local development | None | Local binary | Local Zot (optional) |\\n| **multi-user** | Team collaboration | Token (JWT) | Remote | Remote Harbor |\\n| **cicd** | CI/CD pipelines | Token (CI injected) | Remote | Remote Harbor |\\n| **enterprise** | Production | mTLS | Kubernetes HA | Harbor HA + DR | --- ## Mode Comparison ### Solo Mode - ✅ **Best for**: Individual developers\\n- 🔐 **Authentication**: None\\n- 🚀 **Services**: Local orchestrator only\\n- 📦 **Extensions**: Local filesystem\\n- 🔒 **Workspace Locking**: Disabled\\n- 💾 **Resource Limits**: Unlimited ### Multi-User Mode - ✅ **Best for**: Development teams (5-20 developers)\\n- 🔐 **Authentication**: Token (JWT, 24h expiry)\\n- 🚀 **Services**: Remote orchestrator, control-center, DNS, git\\n- 📦 **Extensions**: OCI registry (Harbor)\\n- 🔒 **Workspace Locking**: Enabled (Gitea provider)\\n- 💾 **Resource Limits**: 10 servers, 32 cores, 128GB per user ### CI/CD Mode - ✅ **Best for**: Automated pipelines\\n- 🔐 **Authentication**: Token (1h expiry, CI/CD injected)\\n- 🚀 **Services**: Remote orchestrator, DNS, git\\n- 📦 **Extensions**: OCI registry (always pull latest)\\n- 🔒 **Workspace Locking**: Disabled (stateless)\\n- 💾 **Resource Limits**: 5 servers, 16 cores, 64GB per pipeline ### Enterprise Mode - ✅ **Best for**: Large enterprises with strict compliance\\n- 🔐 **Authentication**: mTLS (TLS 1.3)\\n- 🚀 **Services**: All services on Kubernetes (HA)\\n- 📦 **Extensions**: OCI registry (signature verification)\\n- 🔒 **Workspace Locking**: Required (etcd provider)\\n- 💾 **Resource Limits**: 20 servers, 64 cores, 256GB per user --- ## Common Operations ### Initialize Mode System ```bash\\nprovisioning mode init\\n```plaintext ### Check Current Mode ```bash\\nprovisioning mode current # Output:\\n# mode: solo\\n# configured: true\\n# config_file: ~/.provisioning/config/active-mode.yaml\\n```plaintext ### List 
All Modes ```bash\\nprovisioning mode list # Output:\\n# ┌───────────────┬───────────────────────────────────┬─────────┐\\n# │ mode │ description │ current │\\n# ├───────────────┼───────────────────────────────────┼─────────┤\\n# │ solo │ Single developer local development │ ● │\\n# │ multi-user │ Team collaboration │ │\\n# │ cicd │ CI/CD pipeline execution │ │\\n# │ enterprise │ Production enterprise deployment │ │\\n# └───────────────┴───────────────────────────────────┴─────────┘\\n```plaintext ### Switch Mode ```bash\\n# Switch with confirmation\\nprovisioning mode switch multi-user # Dry run (preview changes)\\nprovisioning mode switch multi-user --dry-run # With validation\\nprovisioning mode switch multi-user --validate\\n```plaintext ### Show Mode Details ```bash\\n# Show current mode\\nprovisioning mode show # Show specific mode\\nprovisioning mode show enterprise\\n```plaintext ### Validate Mode ```bash\\n# Validate current mode\\nprovisioning mode validate # Validate specific mode\\nprovisioning mode validate cicd\\n```plaintext ### Compare Modes ```bash\\nprovisioning mode compare solo multi-user # Output shows differences in:\\n# - Authentication\\n# - Service deployments\\n# - Extension sources\\n# - Workspace locking\\n# - Security settings\\n```plaintext --- ## OCI Registry Management ### Solo Mode Only ```bash\\n# Start local OCI registry\\nprovisioning mode oci-registry start # Check registry status\\nprovisioning mode oci-registry status # View registry logs\\nprovisioning mode oci-registry logs # Stop registry\\nprovisioning mode oci-registry stop\\n```plaintext **Note**: OCI registry management only works in solo mode with local deployment. --- ## Mode-Specific Workflows ### Solo Mode Workflow ```bash\\n# 1. Initialize (defaults to solo)\\nprovisioning workspace init # 2. Start orchestrator\\ncd provisioning/platform/orchestrator\\n./scripts/start-orchestrator.nu --background # 3. 
(Optional) Start OCI registry\\nprovisioning mode oci-registry start # 4. Create infrastructure\\nprovisioning server create web-01 --check\\nprovisioning taskserv create kubernetes # Extensions loaded from local filesystem\\n```plaintext ### Multi-User Mode Workflow ```bash\\n# 1. Switch to multi-user mode\\nprovisioning mode switch multi-user # 2. Authenticate\\nprovisioning auth login\\n# Enter JWT token from team admin # 3. Lock workspace\\nprovisioning workspace lock my-infra # 4. Pull extensions from OCI registry\\nprovisioning extension pull upcloud\\nprovisioning extension pull kubernetes # 5. Create infrastructure\\nprovisioning server create web-01 # 6. Unlock workspace\\nprovisioning workspace unlock my-infra\\n```plaintext ### CI/CD Mode Workflow ```yaml\\n# GitLab CI example\\ndeploy: stage: deploy script: # Token injected by CI - export PROVISIONING_MODE=cicd - mkdir -p /var/run/secrets/provisioning - echo \\"$PROVISIONING_TOKEN\\" > /var/run/secrets/provisioning/token # Validate - provisioning validate --all # Test - provisioning test quick kubernetes # Deploy - provisioning server create --check - provisioning server create after_script: - provisioning workspace cleanup\\n```plaintext ### Enterprise Mode Workflow ```bash\\n# 1. Switch to enterprise mode\\nprovisioning mode switch enterprise # 2. Verify Kubernetes connectivity\\nkubectl get pods -n provisioning-system # 3. Login to Harbor\\ndocker login harbor.enterprise.local # 4. Request workspace (requires approval)\\nprovisioning workspace request prod-deployment\\n# Approval from: platform-team, security-team # 5. After approval, lock workspace\\nprovisioning workspace lock prod-deployment --provider etcd # 6. Pull extensions (with signature verification)\\nprovisioning extension pull upcloud --verify-signature # 7. Deploy infrastructure\\nprovisioning infra create --check\\nprovisioning infra create # 8. 
Release workspace\\nprovisioning workspace unlock prod-deployment\\n```plaintext --- ## Configuration Files ### Mode Templates ```plaintext\\nworkspace/config/modes/\\n├── solo.yaml # Solo mode configuration\\n├── multi-user.yaml # Multi-user mode configuration\\n├── cicd.yaml # CI/CD mode configuration\\n└── enterprise.yaml # Enterprise mode configuration\\n```plaintext ### Active Mode Configuration ```plaintext\\n~/.provisioning/config/active-mode.yaml\\n```plaintext This file is created/updated when you switch modes. --- ## OCI Registry Namespaces All modes use the following OCI registry namespaces: | Namespace | Purpose | Example |\\n|-----------|---------|---------|\\n| `*-extensions` | Extension artifacts | `provisioning-extensions/upcloud:latest` |\\n| `*-kcl` | KCL package artifacts | `provisioning-kcl/lib:v1.0.0` |\\n| `*-platform` | Platform service images | `provisioning-platform/orchestrator:latest` |\\n| `*-test` | Test environment images | `provisioning-test/ubuntu:22.04` | **Note**: Prefix varies by mode (`dev-`, `provisioning-`, `cicd-`, `prod-`) --- ## Troubleshooting ### Mode switch fails ```bash\\n# Validate mode first\\nprovisioning mode validate # Check runtime requirements\\nprovisioning mode validate --check-requirements\\n```plaintext ### Cannot start OCI registry (solo mode) ```bash\\n# Check if registry binary is installed\\nwhich zot # Install Zot\\n# macOS: brew install project-zot/tap/zot\\n# Linux: Download from https://github.com/project-zot/zot/releases # Check if port 5000 is available\\nlsof -i :5000\\n```plaintext ### Authentication fails (multi-user/cicd/enterprise) ```bash\\n# Check token expiry\\nprovisioning auth status # Re-authenticate\\nprovisioning auth login # For enterprise mTLS, verify certificates\\nls -la /etc/provisioning/certs/\\n# Should contain: client.crt, client.key, ca.crt\\n```plaintext ### Workspace locking issues (multi-user/enterprise) ```bash\\n# Check lock status\\nprovisioning workspace lock-status # 
Force unlock (use with caution)\\nprovisioning workspace unlock --force # Check lock provider status\\n# Multi-user: Check Gitea connectivity\\ncurl -I https://git.company.local # Enterprise: Check etcd cluster\\netcdctl endpoint health\\n```plaintext ### OCI registry connection fails ```bash\\n# Test registry connectivity\\ncurl https://harbor.company.local/v2/ # Check authentication token\\ncat ~/.provisioning/tokens/oci # Verify network connectivity\\nping harbor.company.local # For Harbor, check credentials\\ndocker login harbor.company.local\\n```plaintext --- ## Environment Variables | Variable | Purpose | Example |\\n|----------|---------|---------|\\n| `PROVISIONING_MODE` | Override active mode | `export PROVISIONING_MODE=cicd` |\\n| `PROVISIONING_WORKSPACE_CONFIG` | Override config location | `~/.provisioning/config` |\\n| `PROVISIONING_PROJECT_ROOT` | Project root directory | `/opt/project-provisioning` | --- ## Best Practices ### 1. Use Appropriate Mode - **Solo**: Individual development, experimentation\\n- **Multi-User**: Team collaboration, shared infrastructure\\n- **CI/CD**: Automated testing and deployment\\n- **Enterprise**: Production deployments, compliance requirements ### 2. Validate Before Switching ```bash\\nprovisioning mode validate \\n```plaintext ### 3. Backup Active Configuration ```bash\\n# Automatic backup created when switching\\nls ~/.provisioning/config/active-mode.yaml.backup\\n```plaintext ### 4. Use Check Mode ```bash\\nprovisioning server create --check\\n```plaintext ### 5. Lock Workspaces in Multi-User/Enterprise ```bash\\nprovisioning workspace lock \\n# ... make changes ...\\nprovisioning workspace unlock \\n```plaintext ### 6. 
Pull Extensions from OCI (Multi-User/CI/CD/Enterprise) ```bash\\n# Don\'t use local extensions in shared modes\\nprovisioning extension pull \\n```plaintext --- ## Security Considerations ### Solo Mode - ⚠️ No authentication (local development only)\\n- ⚠️ No encryption (sensitive data should use SOPS)\\n- ✅ Isolated environment ### Multi-User Mode - ✅ Token-based authentication\\n- ✅ TLS in transit\\n- ✅ Audit logging\\n- ⚠️ No encryption at rest (configure as needed) ### CI/CD Mode - ✅ Token authentication (short expiry)\\n- ✅ Full encryption (at rest + in transit)\\n- ✅ KMS for secrets\\n- ✅ Vulnerability scanning (critical threshold)\\n- ✅ Image signing required ### Enterprise Mode - ✅ mTLS authentication\\n- ✅ Full encryption (at rest + in transit)\\n- ✅ KMS for all secrets\\n- ✅ Vulnerability scanning (critical threshold)\\n- ✅ Image signing + signature verification\\n- ✅ Network isolation\\n- ✅ Compliance policies (SOC2, ISO27001, HIPAA) --- ## Support and Documentation - **Implementation Summary**: `MODE_SYSTEM_IMPLEMENTATION_SUMMARY.md`\\n- **KCL Schemas**: `provisioning/kcl/modes.k`, `provisioning/kcl/oci_registry.k`\\n- **Mode Templates**: `workspace/config/modes/*.yaml`\\n- **Commands**: `provisioning/core/nulib/lib_provisioning/mode/` --- **Last Updated**: 2025-10-06 | **Version**: 1.0.0","breadcrumbs":"Mode System Guide » Quick Start","id":"1475","title":"Quick Start"},"1476":{"body":"Complete guide to workspace management in the provisioning platform.","breadcrumbs":"Workspace Guide » Workspace Guide","id":"1476","title":"Workspace Guide"},"1477":{"body":"The comprehensive workspace guide is available here: → Workspace Switching Guide - Complete workspace documentation This guide covers: Workspace creation and initialization Switching between multiple workspaces User preferences and configuration Workspace registry management Backup and restore operations","breadcrumbs":"Workspace Guide » 📖 Workspace Switching Guide","id":"1477","title":"📖 Workspace 
Switching Guide"},"1478":{"body":"# List all workspaces\\nprovisioning workspace list # Switch to a workspace\\nprovisioning workspace switch # Create new workspace\\nprovisioning workspace init # Show active workspace\\nprovisioning workspace active","breadcrumbs":"Workspace Guide » Quick Start","id":"1478","title":"Quick Start"},"1479":{"body":"Workspace Switching Guide - Complete guide Workspace Configuration - Configuration commands Workspace Setup - Initial setup guide For complete workspace documentation, see Workspace Switching Guide .","breadcrumbs":"Workspace Guide » Additional Workspace Resources","id":"1479","title":"Additional Workspace Resources"},"148":{"body":"Set up basic environment variables: # Create environment file\\ncat > ~/.provisioning/env << \'ENVEOF\'\\n# Provisioning Environment Configuration\\nexport PROVISIONING_ENV=dev\\nexport PROVISIONING_PATH=$(pwd)\\nexport PROVISIONING_KAGE=~/.config/provisioning/age\\nENVEOF # Source the environment\\nsource ~/.provisioning/env # Add to shell profile for persistence\\necho \'source ~/.provisioning/env\' >> ~/.bashrc # or ~/.zshrc","breadcrumbs":"Installation Steps » Step 5: Configure Environment","id":"148","title":"Step 5: Configure Environment"},"1480":{"body":"Version : 1.0.0 Last Updated : 2025-10-06 System Version : 2.0.5+","breadcrumbs":"Workspace Enforcement Guide » Workspace Enforcement and Version Tracking Guide","id":"1480","title":"Workspace Enforcement and Version Tracking Guide"},"1481":{"body":"Overview Workspace Requirement Version Tracking Migration Framework Command Reference Troubleshooting Best Practices","breadcrumbs":"Workspace Enforcement Guide » Table of Contents","id":"1481","title":"Table of Contents"},"1482":{"body":"The provisioning system now enforces mandatory workspace requirements for all infrastructure operations. 
This ensures: Consistent Environment : All operations run in a well-defined workspace Version Compatibility : Workspaces track provisioning and schema versions Safe Migrations : Automatic migration framework with backup/rollback support Configuration Isolation : Each workspace has isolated configurations and state","breadcrumbs":"Workspace Enforcement Guide » Overview","id":"1482","title":"Overview"},"1483":{"body":"✅ Mandatory Workspace : Most commands require an active workspace ✅ Version Tracking : Workspaces track system, schema, and format versions ✅ Compatibility Checks : Automatic validation before operations ✅ Migration Framework : Safe upgrades with backup/restore ✅ Clear Error Messages : Helpful guidance when workspace is missing or incompatible","breadcrumbs":"Workspace Enforcement Guide » Key Features","id":"1483","title":"Key Features"},"1484":{"body":"","breadcrumbs":"Workspace Enforcement Guide » Workspace Requirement","id":"1484","title":"Workspace Requirement"},"1485":{"body":"Almost all provisioning commands now require an active workspace: Infrastructure : server, taskserv, cluster, infra Orchestration : workflow, batch, orchestrator Development : module, layer, pack Generation : generate Configuration : Most config commands Test : test environment commands","breadcrumbs":"Workspace Enforcement Guide » Commands That Require Workspace","id":"1485","title":"Commands That Require Workspace"},"1486":{"body":"Only informational and workspace management commands work without a workspace: help - Help system version - Show version information workspace - Workspace management commands guide / sc - Documentation and quick reference nu - Start Nushell session nuinfo - Nushell information","breadcrumbs":"Workspace Enforcement Guide » Commands That Don\'t Require Workspace","id":"1486","title":"Commands That Don\'t Require Workspace"},"1487":{"body":"If you run a command without an active workspace, you\'ll see: ✗ Workspace Required No active workspace is 
configured. To get started: 1. Create a new workspace: provisioning workspace init 2. Or activate an existing workspace: provisioning workspace activate 3. List available workspaces: provisioning workspace list\\n```plaintext --- ## Version Tracking ### Workspace Metadata Each workspace maintains metadata in `.provisioning/metadata.yaml`: ```yaml\\nworkspace: name: \\"my-workspace\\" path: \\"/path/to/workspace\\" version: provisioning: \\"2.0.5\\" # System version when created/updated schema: \\"1.0.0\\" # KCL schema version workspace_format: \\"2.0.0\\" # Directory structure version created: \\"2025-10-06T12:00:00Z\\"\\nlast_updated: \\"2025-10-06T13:30:00Z\\" migration_history: [] compatibility: min_provisioning_version: \\"2.0.0\\" min_schema_version: \\"1.0.0\\"\\n```plaintext ### Version Components #### 1. Provisioning Version - **What**: Version of the provisioning system (CLI + libraries)\\n- **Example**: `2.0.5`\\n- **Purpose**: Ensures workspace is compatible with current system #### 2. Schema Version - **What**: Version of KCL schemas used in workspace\\n- **Example**: `1.0.0`\\n- **Purpose**: Tracks configuration schema compatibility #### 3. 
Workspace Format Version - **What**: Version of workspace directory structure\\n- **Example**: `2.0.0`\\n- **Purpose**: Ensures workspace has required directories and files ### Checking Workspace Version View workspace version information: ```bash\\n# Check active workspace version\\nprovisioning workspace version # Check specific workspace version\\nprovisioning workspace version my-workspace # JSON output\\nprovisioning workspace version --format json\\n```plaintext **Example Output**: ```plaintext\\nWorkspace Version Information System: Version: 2.0.5 Workspace: Name: my-workspace Path: /Users/user/workspaces/my-workspace Version: 2.0.5 Schema Version: 1.0.0 Format Version: 2.0.0 Created: 2025-10-06T12:00:00Z Last Updated: 2025-10-06T13:30:00Z Compatibility: Compatible: true Reason: version_match Message: Workspace and system versions match Migrations: Total: 0\\n```plaintext --- ## Migration Framework ### When Migration is Needed Migration is required when: 1. **No Metadata**: Workspace created before version tracking (< 2.0.5)\\n2. **Version Mismatch**: System version is newer than workspace version\\n3. 
**Breaking Changes**: Major version update with structural changes ### Compatibility Scenarios #### Scenario 1: No Metadata (Unknown Version) ```plaintext\\nWorkspace version is incompatible: Workspace: my-workspace Path: /path/to/workspace Workspace metadata not found or corrupted This workspace needs migration: Run workspace migration: provisioning workspace migrate my-workspace\\n```plaintext #### Scenario 2: Migration Available ```plaintext\\nℹ Migration available: Workspace can be updated from 2.0.0 to 2.0.5 Run: provisioning workspace migrate my-workspace\\n```plaintext #### Scenario 3: Workspace Too New ```plaintext\\nWorkspace version (3.0.0) is newer than system (2.0.5) Workspace is newer than the system: Workspace version: 3.0.0 System version: 2.0.5 Upgrade the provisioning system to use this workspace.\\n```plaintext ### Running Migrations #### Basic Migration Migrate active workspace to current system version: ```bash\\nprovisioning workspace migrate\\n```plaintext #### Migrate Specific Workspace ```bash\\nprovisioning workspace migrate my-workspace\\n```plaintext #### Migration Options ```bash\\n# Skip backup (not recommended)\\nprovisioning workspace migrate --skip-backup # Force without confirmation\\nprovisioning workspace migrate --force # Migrate to specific version\\nprovisioning workspace migrate --target-version 2.1.0\\n```plaintext ### Migration Process When you run a migration: 1. **Validation**: System validates workspace exists and needs migration\\n2. **Backup**: Creates timestamped backup in `.workspace_backups/`\\n3. **Confirmation**: Prompts for confirmation (unless `--force`)\\n4. **Migration**: Applies migration steps sequentially\\n5. **Verification**: Validates migration success\\n6. 
**Metadata Update**: Records migration in workspace metadata **Example Migration Output**: ```plaintext\\nWorkspace Migration Workspace: my-workspace\\nPath: /path/to/workspace Current version: unknown\\nTarget version: 2.0.5 This will migrate the workspace from unknown to 2.0.5\\nA backup will be created before migration. Continue with migration? (y/N): y Creating backup...\\n✓ Backup created: /path/.workspace_backups/my-workspace_backup_20251006_123000 Migration Strategy: Initialize metadata\\nDescription: Add metadata tracking to existing workspace\\nFrom: unknown → To: 2.0.5 Migrating workspace to version 2.0.5...\\n✓ Initialize metadata completed ✓ Migration completed successfully\\n```plaintext ### Workspace Backups #### List Backups ```bash\\n# List backups for active workspace\\nprovisioning workspace list-backups # List backups for specific workspace\\nprovisioning workspace list-backups my-workspace\\n```plaintext **Example Output**: ```plaintext\\nWorkspace Backups for my-workspace name created reason size\\nmy-workspace_backup_20251006_1200 2025-10-06T12:00:00Z pre_migration 2.3 MB\\nmy-workspace_backup_20251005_1500 2025-10-05T15:00:00Z pre_migration 2.1 MB\\n```plaintext #### Restore from Backup ```bash\\n# Restore workspace from backup\\nprovisioning workspace restore-backup /path/to/backup # Force restore without confirmation\\nprovisioning workspace restore-backup /path/to/backup --force\\n```plaintext **Restore Process**: ```plaintext\\nRestore Workspace from Backup Backup: /path/.workspace_backups/my-workspace_backup_20251006_1200\\nOriginal path: /path/to/workspace\\nCreated: 2025-10-06T12:00:00Z\\nReason: pre_migration ⚠ This will replace the current workspace at: /path/to/workspace Continue with restore? 
(y/N): y ✓ Workspace restored from backup\\n```plaintext --- ## Command Reference ### Workspace Version Commands ```bash\\n# Show workspace version information\\nprovisioning workspace version [workspace-name] [--format table|json|yaml] # Check compatibility\\nprovisioning workspace check-compatibility [workspace-name] # Migrate workspace\\nprovisioning workspace migrate [workspace-name] [--skip-backup] [--force] [--target-version VERSION] # List backups\\nprovisioning workspace list-backups [workspace-name] # Restore from backup\\nprovisioning workspace restore-backup [--force]\\n```plaintext ### Workspace Management Commands ```bash\\n# List all workspaces\\nprovisioning workspace list # Show active workspace\\nprovisioning workspace active # Activate workspace\\nprovisioning workspace activate # Create new workspace (includes metadata initialization)\\nprovisioning workspace init [path] # Register existing workspace\\nprovisioning workspace register # Remove workspace from registry\\nprovisioning workspace remove [--force]\\n```plaintext --- ## Troubleshooting ### Problem: \\"No active workspace\\" **Solution**: Activate or create a workspace ```bash\\n# List available workspaces\\nprovisioning workspace list # Activate existing workspace\\nprovisioning workspace activate my-workspace # Or create new workspace\\nprovisioning workspace init new-workspace\\n```plaintext ### Problem: \\"Workspace has invalid structure\\" **Symptoms**: Missing directories or configuration files **Solution**: Run migration to fix structure ```bash\\nprovisioning workspace migrate my-workspace\\n```plaintext ### Problem: \\"Workspace version is incompatible\\" **Solution**: Run migration to upgrade workspace ```bash\\nprovisioning workspace migrate\\n```plaintext ### Problem: Migration Failed **Solution**: Restore from automatic backup ```bash\\n# List backups\\nprovisioning workspace list-backups # Restore from most recent backup\\nprovisioning workspace restore-backup 
/path/to/backup\\n```plaintext ### Problem: Can\'t Activate Workspace After Migration **Possible Causes**: 1. Migration failed partially\\n2. Workspace path changed\\n3. Metadata corrupted **Solutions**: ```bash\\n# Check workspace compatibility\\nprovisioning workspace check-compatibility my-workspace # If corrupted, restore from backup\\nprovisioning workspace restore-backup /path/to/backup # If path changed, re-register\\nprovisioning workspace remove my-workspace\\nprovisioning workspace register my-workspace /new/path --activate\\n```plaintext --- ## Best Practices ### 1. Always Use Named Workspaces Create workspaces for different environments: ```bash\\nprovisioning workspace init dev ~/workspaces/dev --activate\\nprovisioning workspace init staging ~/workspaces/staging\\nprovisioning workspace init production ~/workspaces/production\\n```plaintext ### 2. Let System Create Backups Never use `--skip-backup` for important workspaces. Backups are cheap, data loss is expensive. ```bash\\n# Good: Default with backup\\nprovisioning workspace migrate # Risky: No backup\\nprovisioning workspace migrate --skip-backup # DON\'T DO THIS\\n```plaintext ### 3. Check Compatibility Before Operations Before major operations, verify workspace compatibility: ```bash\\nprovisioning workspace check-compatibility\\n```plaintext ### 4. Migrate After System Upgrades After upgrading the provisioning system: ```bash\\n# Check if migration available\\nprovisioning workspace version # Migrate if needed\\nprovisioning workspace migrate\\n```plaintext ### 5. Keep Backups for Safety Don\'t immediately delete old backups: ```bash\\n# List backups\\nprovisioning workspace list-backups # Keep at least 2-3 recent backups\\n```plaintext ### 6. 
Use Version Control for Workspace Configs Initialize git in workspace directory: ```bash\\ncd ~/workspaces/my-workspace\\ngit init\\ngit add config/ infra/\\ngit commit -m \\"Initial workspace configuration\\"\\n```plaintext Exclude runtime and cache directories in `.gitignore`: ```gitignore\\n.cache/\\n.runtime/\\n.provisioning/\\n.workspace_backups/\\n```plaintext ### 7. Document Custom Migrations If you need custom migration steps, document them: ```bash\\n# Create migration notes\\necho \\"Custom steps for v2 to v3 migration\\" > MIGRATION_NOTES.md\\n```plaintext --- ## Migration History Each migration is recorded in workspace metadata: ```yaml\\nmigration_history: - from_version: \\"unknown\\" to_version: \\"2.0.5\\" migration_type: \\"metadata_initialization\\" timestamp: \\"2025-10-06T12:00:00Z\\" success: true notes: \\"Initial metadata creation\\" - from_version: \\"2.0.5\\" to_version: \\"2.1.0\\" migration_type: \\"version_update\\" timestamp: \\"2025-10-15T10:30:00Z\\" success: true notes: \\"Updated to workspace switching support\\"\\n```plaintext View migration history: ```bash\\nprovisioning workspace version --format yaml | grep -A 10 \\"migration_history\\"\\n```plaintext --- ## Summary The workspace enforcement and version tracking system provides: - **Safety**: Mandatory workspace prevents accidental operations outside defined environments\\n- **Compatibility**: Version tracking ensures workspace works with current system\\n- **Upgradability**: Migration framework handles version transitions safely\\n- **Recoverability**: Automatic backups protect against migration failures **Key Commands**: ```bash\\n# Create workspace\\nprovisioning workspace init my-workspace --activate # Check version\\nprovisioning workspace version # Migrate if needed\\nprovisioning workspace migrate # List backups\\nprovisioning workspace list-backups\\n```plaintext For more information, see: - **Workspace Switching Guide**: `docs/user/WORKSPACE_SWITCHING_GUIDE.md`\\n- 
**Quick Reference**: `provisioning sc` or `provisioning guide quickstart`\\n- **Help System**: `provisioning help workspace` --- **Questions or Issues?** Check the troubleshooting section or run: ```bash\\nprovisioning workspace check-compatibility\\n```plaintext This will provide specific guidance for your situation.","breadcrumbs":"Workspace Enforcement Guide » What Happens Without a Workspace?","id":"1487","title":"What Happens Without a Workspace?"},"1488":{"body":"Version : 1.0.0 Last Updated : 2025-12-04","breadcrumbs":"Workspace Infra Reference » Unified Workspace:Infrastructure Reference System","id":"1488","title":"Unified Workspace:Infrastructure Reference System"},"1489":{"body":"The Workspace:Infrastructure Reference System provides a unified notation for managing workspaces and their associated infrastructure. This system eliminates the need to specify infrastructure separately and enables convenient defaults.","breadcrumbs":"Workspace Infra Reference » Overview","id":"1489","title":"Overview"},"149":{"body":"Create your first workspace: # Initialize a new workspace\\nprovisioning workspace init my-first-workspace # Expected output:\\n# ✓ Workspace \'my-first-workspace\' created successfully\\n# ✓ Configuration template generated\\n# ✓ Workspace activated # Verify workspace\\nprovisioning workspace list","breadcrumbs":"Installation Steps » Step 6: Initialize Workspace","id":"149","title":"Step 6: Initialize Workspace"},"1490":{"body":"","breadcrumbs":"Workspace Infra Reference » Quick Start","id":"1490","title":"Quick Start"},"1491":{"body":"Use the -ws flag with workspace:infra notation: # Use production workspace with sgoyol infrastructure for this command only\\nprovisioning server list -ws production:sgoyol # Use default infrastructure of active workspace\\nprovisioning taskserv create kubernetes\\n```plaintext ### Persistent Activation Activate a workspace with a default infrastructure: ```bash\\n# Activate librecloud workspace and set wuji as 
default infra\\nprovisioning workspace activate librecloud:wuji # Now all commands use librecloud:wuji by default\\nprovisioning server list\\n```plaintext ## Notation Syntax ### Basic Format ```plaintext\\nworkspace:infra\\n```plaintext | Part | Description | Example |\\n|------|-------------|---------|\\n| `workspace` | Workspace name | `librecloud` |\\n| `:` | Separator | - |\\n| `infra` | Infrastructure name | `wuji` | ### Examples | Notation | Workspace | Infrastructure |\\n|----------|-----------|-----------------|\\n| `librecloud:wuji` | librecloud | wuji |\\n| `production:sgoyol` | production | sgoyol |\\n| `dev:local` | dev | local |\\n| `librecloud` | librecloud | (from default or context) | ## Resolution Priority When no infrastructure is explicitly specified, the system uses this priority order: 1. **Explicit `--infra` flag** (highest) ```bash provisioning server list --infra another-infra PWD Detection cd workspace_librecloud/infra/wuji\\nprovisioning server list # Auto-detects wuji Default Infrastructure # If workspace has default_infra set\\nprovisioning server list # Uses configured default Error (no infra found) # Error: No infrastructure specified","breadcrumbs":"Workspace Infra Reference » Temporal Override (Single Command)","id":"1491","title":"Temporal Override (Single Command)"},"1492":{"body":"","breadcrumbs":"Workspace Infra Reference » Usage Patterns","id":"1492","title":"Usage Patterns"},"1493":{"body":"Use -ws to override workspace:infra for a single command: # Currently in librecloud:wuji context\\nprovisioning server list # Shows librecloud:wuji # Temporary override for this command only\\nprovisioning server list -ws production:sgoyol # Shows production:sgoyol # Back to original context\\nprovisioning server list # Shows librecloud:wuji again\\n```plaintext ### Pattern 2: Persistent Workspace Activation Set a workspace as active with a default infrastructure: ```bash\\n# List available workspaces\\nprovisioning workspace list # 
Activate with infra notation\\nprovisioning workspace activate production:sgoyol # All subsequent commands use production:sgoyol\\nprovisioning server list\\nprovisioning taskserv create kubernetes\\n```plaintext ### Pattern 3: PWD-Based Inference The system auto-detects workspace and infrastructure from your current directory: ```bash\\n# Your workspace structure\\nworkspace_librecloud/ infra/ wuji/ settings.k another/ settings.k # Navigation auto-detects context\\ncd workspace_librecloud/infra/wuji\\nprovisioning server list # Uses wuji automatically cd ../another\\nprovisioning server list # Switches to another\\n```plaintext ### Pattern 4: Default Infrastructure Management Set a workspace-specific default infrastructure: ```bash\\n# During activation\\nprovisioning workspace activate librecloud:wuji # Or explicitly after activation\\nprovisioning workspace set-default-infra librecloud another-infra # View current defaults\\nprovisioning workspace list\\n```plaintext ## Command Reference ### Workspace Commands ```bash\\n# Activate workspace with infra\\nprovisioning workspace activate workspace:infra # Switch to different workspace\\nprovisioning workspace switch workspace_name # List all workspaces\\nprovisioning workspace list # Show active workspace\\nprovisioning workspace active # Set default infrastructure\\nprovisioning workspace set-default-infra workspace_name infra_name # Get default infrastructure\\nprovisioning workspace get-default-infra workspace_name\\n```plaintext ### Common Commands with `-ws` ```bash\\n# Server operations\\nprovisioning server create -ws workspace:infra\\nprovisioning server list -ws workspace:infra\\nprovisioning server delete name -ws workspace:infra # Task service operations\\nprovisioning taskserv create kubernetes -ws workspace:infra\\nprovisioning taskserv delete kubernetes -ws workspace:infra # Infrastructure operations\\nprovisioning infra validate -ws workspace:infra\\nprovisioning infra list -ws 
workspace:infra\\n```plaintext ## Features ### ✅ Unified Notation - Single `workspace:infra` format for all references\\n- Works with all provisioning commands\\n- Backward compatible with existing workflows ### ✅ Temporal Override - Use `-ws` flag for single-command overrides\\n- No permanent state changes\\n- Automatically reverted after command ### ✅ Persistent Defaults - Set default infrastructure per workspace\\n- Eliminates repetitive `--infra` flags\\n- Survives across sessions ### ✅ Smart Detection - Auto-detects workspace from directory\\n- Auto-detects infrastructure from PWD\\n- Fallback to configured defaults ### ✅ Error Handling - Clear error messages when infra not found\\n- Validation of workspace and infra existence\\n- Helpful hints for missing configurations ## Environment Context ### TEMP_WORKSPACE Variable The system uses `$env.TEMP_WORKSPACE` for temporal overrides: ```bash\\n# Set temporarily (via -ws flag automatically)\\n$env.TEMP_WORKSPACE = \\"production\\" # Check current context\\necho $env.TEMP_WORKSPACE # Clear after use\\nhide-env TEMP_WORKSPACE\\n```plaintext ## Validation ### Validating Notation ```bash\\n# Valid notation formats\\nlibrecloud:wuji # Standard format\\nproduction:sgoyol.v2 # With dots and hyphens\\ndev-01:local-test # Multiple hyphens\\nprod123:infra456 # Numeric names # Special characters\\nlib-cloud_01:wu-ji.v2 # Mix of all allowed chars\\n```plaintext ### Error Cases ```bash\\n# Workspace not found\\nprovisioning workspace activate unknown:infra\\n# Error: Workspace \'unknown\' not found in registry # Infrastructure not found\\nprovisioning workspace activate librecloud:unknown\\n# Error: Infrastructure \'unknown\' not found in workspace \'librecloud\' # Empty specification\\nprovisioning workspace activate \\"\\"\\n# Error: Workspace \'\' not found in registry\\n```plaintext ## Configuration ### User Configuration Default infrastructure is stored in `~/Library/Application Support/provisioning/user_config.yaml`: 
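The valid and invalid notation cases above can be captured by a small parser. This is a sketch based only on the character classes shown in the examples (letters, digits, hyphens, underscores, dots); the real validator may differ, and `parse_notation` is a hypothetical name.

```python
import re

# Allowed characters per the examples above: letters, digits, '-', '_', '.'
_PART = re.compile(r"^[A-Za-z0-9][A-Za-z0-9._-]*$")

def parse_notation(spec):
    """Split 'workspace:infra' notation; the infra part is optional."""
    if not spec:
        raise ValueError("Workspace '' not found in registry")
    workspace, sep, infra = spec.partition(":")
    if not _PART.match(workspace) or (sep and not _PART.match(infra)):
        raise ValueError(f"Invalid notation: {spec!r}")
    return workspace, (infra if sep else None)

print(parse_notation("librecloud:wuji"))        # ('librecloud', 'wuji')
print(parse_notation("lib-cloud_01:wu-ji.v2"))  # ('lib-cloud_01', 'wu-ji.v2')
print(parse_notation("librecloud"))             # ('librecloud', None)
```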
```yaml\\nactive_workspace: \\"librecloud\\" workspaces: - name: \\"librecloud\\" path: \\"/Users/you/workspaces/librecloud\\" last_used: \\"2025-12-04T12:00:00Z\\" default_infra: \\"wuji\\" # Default infrastructure - name: \\"production\\" path: \\"/opt/workspaces/production\\" last_used: \\"2025-12-03T15:30:00Z\\" default_infra: \\"sgoyol\\"\\n```plaintext ### Workspace Schema In `provisioning/kcl/workspace_config.k`: ```kcl\\nschema InfraConfig: \\"\\"\\"Infrastructure context settings\\"\\"\\" current: str default?: str # Default infrastructure for workspace\\n```plaintext ## Best Practices ### 1. Use Persistent Activation for Long Sessions ```bash\\n# Good: Activate at start of session\\nprovisioning workspace activate production:sgoyol # Then use simple commands\\nprovisioning server list\\nprovisioning taskserv create kubernetes\\n```plaintext ### 2. Use Temporal Override for Ad-Hoc Operations ```bash\\n# Good: Quick one-off operation\\nprovisioning server list -ws production:other-infra # Avoid: Repeated -ws flags\\nprovisioning server list -ws prod:infra1\\nprovisioning taskserv list -ws prod:infra1 # Better to activate once\\n```plaintext ### 3. Navigate with PWD for Context Awareness ```bash\\n# Good: Navigate to infrastructure directory\\ncd workspace_librecloud/infra/wuji\\nprovisioning server list # Auto-detects context # Works well with: cd - history, terminal multiplexer panes\\n```plaintext ### 4. 
Set Meaningful Defaults ```bash\\n# Good: Default to production infrastructure\\nprovisioning workspace activate production:main-infra # Avoid: Default to dev infrastructure in production workspace\\n```plaintext ## Troubleshooting ### Issue: \\"Workspace not found in registry\\" **Solution**: Register the workspace first ```bash\\nprovisioning workspace register librecloud /path/to/workspace_librecloud\\n```plaintext ### Issue: \\"Infrastructure not found\\" **Solution**: Verify infrastructure directory exists ```bash\\nls workspace_librecloud/infra/ # Check available infras\\nprovisioning workspace activate librecloud:wuji # Use correct name\\n```plaintext ### Issue: Temporal override not working **Solution**: Ensure you\'re using `-ws` flag correctly ```bash\\n# Correct\\nprovisioning server list -ws production:sgoyol # Incorrect (missing space)\\nprovisioning server list-wsproduction:sgoyol # Incorrect (ws is not a command)\\nprovisioning -ws production:sgoyol server list\\n```plaintext ### Issue: PWD detection not working **Solution**: Navigate to proper infrastructure directory ```bash\\n# Must be in workspace structure\\ncd workspace_name/infra/infra_name # Then run command\\nprovisioning server list\\n```plaintext ## Migration from Old System ### Old Way ```bash\\nprovisioning workspace activate librecloud\\nprovisioning --infra wuji server list\\nprovisioning --infra wuji taskserv create kubernetes\\n```plaintext ### New Way ```bash\\nprovisioning workspace activate librecloud:wuji\\nprovisioning server list\\nprovisioning taskserv create kubernetes\\n```plaintext ## Performance Notes - **Notation parsing**: <1ms per command\\n- **Workspace detection**: <5ms from PWD\\n- **Workspace switching**: ~100ms (includes platform activation)\\n- **Temporal override**: No additional overhead ## Backward Compatibility All existing commands and flags continue to work: ```bash\\n# Old syntax still works\\nprovisioning --infra wuji server list # New syntax also 
works\\nprovisioning server list -ws librecloud:wuji # Mix and match\\nprovisioning --infra other-infra server list -ws librecloud:wuji\\n# Uses other-infra (explicit flag takes priority)\\n```plaintext ## See Also - `provisioning help workspace` - Workspace commands\\n- `provisioning help infra` - Infrastructure commands\\n- `docs/architecture/ARCHITECTURE_OVERVIEW.md` - Overall architecture\\n- `docs/user/WORKSPACE_SWITCHING_GUIDE.md` - Workspace switching details","breadcrumbs":"Workspace Infra Reference » Pattern 1: Temporal Override for Commands","id":"1493","title":"Pattern 1: Temporal Override for Commands"},"1494":{"body":"","breadcrumbs":"Workspace Config Commands » Workspace Configuration Management Commands","id":"1494","title":"Workspace Configuration Management Commands"},"1495":{"body":"The workspace configuration management commands provide a comprehensive set of tools for viewing, editing, validating, and managing workspace configurations.","breadcrumbs":"Workspace Config Commands » Overview","id":"1495","title":"Overview"},"1496":{"body":"Command Description workspace config show Display workspace configuration workspace config validate Validate all configuration files workspace config generate provider Generate provider configuration from template workspace config edit Edit configuration files workspace config hierarchy Show configuration loading hierarchy workspace config list List all configuration files","breadcrumbs":"Workspace Config Commands » Command Summary","id":"1496","title":"Command Summary"},"1497":{"body":"","breadcrumbs":"Workspace Config Commands » Commands","id":"1497","title":"Commands"},"1498":{"body":"Display the complete workspace configuration in various formats. 
# Show active workspace config (YAML format)\\nprovisioning workspace config show # Show specific workspace config\\nprovisioning workspace config show my-workspace # Show in JSON format\\nprovisioning workspace config show --out json # Show in TOML format\\nprovisioning workspace config show --out toml # Show specific workspace in JSON\\nprovisioning workspace config show my-workspace --out json\\n```plaintext **Output:** Complete workspace configuration in the specified format ### Validate Workspace Configuration Validate all configuration files for syntax and required sections. ```bash\\n# Validate active workspace\\nprovisioning workspace config validate # Validate specific workspace\\nprovisioning workspace config validate my-workspace\\n```plaintext **Checks performed:** - Main config (`provisioning.yaml`) - YAML syntax and required sections\\n- Provider configs (`providers/*.toml`) - TOML syntax\\n- Platform service configs (`platform/*.toml`) - TOML syntax\\n- KMS config (`kms.toml`) - TOML syntax **Output:** Validation report with success/error indicators ### Generate Provider Configuration Generate a provider configuration file from a template. ```bash\\n# Generate AWS provider config for active workspace\\nprovisioning workspace config generate provider aws # Generate UpCloud provider config for specific workspace\\nprovisioning workspace config generate provider upcloud --infra my-workspace # Generate local provider config\\nprovisioning workspace config generate provider local\\n```plaintext **What it does:** 1. Locates provider template in `extensions/providers/{name}/config.defaults.toml`\\n2. Interpolates workspace-specific values (`{{workspace.name}}`, `{{workspace.path}}`)\\n3. Saves to `{workspace}/config/providers/{name}.toml` **Output:** Generated configuration file ready for customization ### Edit Configuration Files Open configuration files in your editor for modification. 
```bash\\n# Edit main workspace config\\nprovisioning workspace config edit main # Edit specific provider config\\nprovisioning workspace config edit provider aws # Edit platform service config\\nprovisioning workspace config edit platform orchestrator # Edit KMS config\\nprovisioning workspace config edit kms # Edit for specific workspace\\nprovisioning workspace config edit provider upcloud --infra my-workspace\\n```plaintext **Editor used:** Value of `$EDITOR` environment variable (defaults to `vi`) **Config types:** - `main` - Main workspace configuration (`provisioning.yaml`)\\n- `provider ` - Provider configuration (`providers/{name}.toml`)\\n- `platform ` - Platform service configuration (`platform/{name}.toml`)\\n- `kms` - KMS configuration (`kms.toml`) ### Show Configuration Hierarchy Display the configuration loading hierarchy and precedence. ```bash\\n# Show hierarchy for active workspace\\nprovisioning workspace config hierarchy # Show hierarchy for specific workspace\\nprovisioning workspace config hierarchy my-workspace\\n```plaintext **Output:** Visual hierarchy showing: 1. Environment Variables (highest priority)\\n2. User Context\\n3. Platform Services\\n4. Provider Configs\\n5. Workspace Config (lowest priority) ### List Configuration Files List all configuration files for a workspace. ```bash\\n# List all configs\\nprovisioning workspace config list # List only provider configs\\nprovisioning workspace config list --type provider # List only platform configs\\nprovisioning workspace config list --type platform # List only KMS config\\nprovisioning workspace config list --type kms # List for specific workspace\\nprovisioning workspace config list my-workspace --type all\\n```plaintext **Output:** Table of configuration files with type, name, and path ## Workspace Selection All config commands support two ways to specify the workspace: 1. 
**Active Workspace** (default): ```bash provisioning workspace config show Specific Workspace (using --infra flag): provisioning workspace config show --infra my-workspace","breadcrumbs":"Workspace Config Commands » Show Workspace Configuration","id":"1498","title":"Show Workspace Configuration"},"1499":{"body":"Workspace configurations are organized in a standard structure: {workspace}/\\n├── config/\\n│ ├── provisioning.yaml # Main workspace config\\n│ ├── providers/ # Provider configurations\\n│ │ ├── aws.toml\\n│ │ ├── upcloud.toml\\n│ │ └── local.toml\\n│ ├── platform/ # Platform service configs\\n│ │ ├── orchestrator.toml\\n│ │ ├── control-center.toml\\n│ │ └── mcp.toml\\n│ └── kms.toml # KMS configuration\\n```plaintext ## Configuration Hierarchy Configuration values are loaded in the following order (highest to lowest priority): 1. **Environment Variables** - `PROVISIONING_*` variables\\n2. **User Context** - `~/Library/Application Support/provisioning/ws_{name}.yaml`\\n3. **Platform Services** - `{workspace}/config/platform/*.toml`\\n4. **Provider Configs** - `{workspace}/config/providers/*.toml`\\n5. **Workspace Config** - `{workspace}/config/provisioning.yaml` Higher priority values override lower priority values. ## Examples ### Complete Workflow ```bash\\n# 1. Create new workspace with activation\\nprovisioning workspace init my-project ~/workspaces/my-project --providers [aws,local] --activate # 2. Validate configuration\\nprovisioning workspace config validate # 3. View configuration hierarchy\\nprovisioning workspace config hierarchy # 4. Generate additional provider config\\nprovisioning workspace config generate provider upcloud # 5. Edit provider settings\\nprovisioning workspace config edit provider upcloud # 6. List all configs\\nprovisioning workspace config list # 7. Show complete config in JSON\\nprovisioning workspace config show --out json # 8. 
Validate everything\\nprovisioning workspace config validate\\n```plaintext ### Multi-Workspace Management ```bash\\n# Create multiple workspaces\\nprovisioning workspace init dev ~/workspaces/dev --activate\\nprovisioning workspace init staging ~/workspaces/staging\\nprovisioning workspace init prod ~/workspaces/prod # Validate specific workspace\\nprovisioning workspace config validate staging # Show config for production\\nprovisioning workspace config show prod --out yaml # Edit provider for specific workspace\\nprovisioning workspace config edit provider aws --infra prod\\n```plaintext ### Configuration Troubleshooting ```bash\\n# 1. Validate all configs\\nprovisioning workspace config validate # 2. If errors, check hierarchy\\nprovisioning workspace config hierarchy # 3. List all config files\\nprovisioning workspace config list # 4. Edit problematic config\\nprovisioning workspace config edit provider aws # 5. Validate again\\nprovisioning workspace config validate\\n```plaintext ## Integration with Other Commands Config commands integrate seamlessly with other workspace operations: ```bash\\n# Create workspace with providers\\nprovisioning workspace init my-app ~/apps/my-app --providers [aws,upcloud] --activate # Generate additional configs\\nprovisioning workspace config generate provider local # Validate before deployment\\nprovisioning workspace config validate # Deploy infrastructure\\nprovisioning server create --infra my-app\\n```plaintext ## Tips 1. **Always validate after editing**: Run `workspace config validate` after manual edits 2. **Use hierarchy to understand precedence**: Run `workspace config hierarchy` to see which config files are being used 3. **Generate from templates**: Use `config generate provider` rather than creating configs manually 4. **Check before activation**: Validate a workspace before activating it as default 5. 
**Use --out json for scripting**: JSON output is easier to parse in scripts ## See Also - [Workspace Initialization](workspace-initialization.md)\\n- [Provider Configuration](provider-configuration.md)\\n- Configuration Architecture","breadcrumbs":"Workspace Config Commands » Configuration File Locations","id":"1499","title":"Configuration File Locations"},"15":{"body":"","breadcrumbs":"Installation Guide » System Requirements","id":"15","title":"System Requirements"},"150":{"body":"Run the installation verification: # Check system configuration\\nprovisioning validate config # Check all dependencies\\nprovisioning env # View detailed environment\\nprovisioning allenv Expected output should show: ✅ All core dependencies installed ✅ Age keys configured ✅ Workspace initialized ✅ Configuration valid","breadcrumbs":"Installation Steps » Step 7: Validate Installation","id":"150","title":"Step 7: Validate Installation"},"1500":{"body":"This guide covers the unified configuration rendering system in the CLI daemon that supports KCL, Nickel, and Tera template engines.","breadcrumbs":"Config Rendering Guide » Configuration Rendering Guide","id":"1500","title":"Configuration Rendering Guide"},"1501":{"body":"The CLI daemon (cli-daemon) provides a high-performance REST API for rendering configurations in three different formats: KCL : Type-safe infrastructure configuration language (familiar, existing patterns) Nickel : Functional configuration language with lazy evaluation (excellent for complex configs) Tera : Jinja2-compatible template engine (simple templating) All three renderers are accessible through a single unified API endpoint with intelligent caching to minimize latency.","breadcrumbs":"Config Rendering Guide » Overview","id":"1501","title":"Overview"},"1502":{"body":"","breadcrumbs":"Config Rendering Guide » Quick Start","id":"1502","title":"Quick Start"},"1503":{"body":"The daemon runs on port 9091 by default: # Start in background\\n./target/release/cli-daemon & 
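The five-layer hierarchy described above (environment variables over user context over platform services over provider configs over the workspace config) amounts to a last-writer-wins merge. A minimal sketch, assuming flat key/value layers; the real loader also parses the YAML/TOML files and `PROVISIONING_*` environment variables:

```python
def effective_config(*layers):
    """Merge config layers passed lowest-priority first, so later
    (higher-priority) layers override earlier ones."""
    merged = {}
    for layer in layers:
        merged.update(layer)
    return merged

workspace_cfg = {"provider": "local", "region": "dev-1"}   # lowest priority
provider_cfg  = {"region": "eu-west-1"}
env_vars      = {"provider": "aws"}                        # highest priority

print(effective_config(workspace_cfg, provider_cfg, env_vars))
# {'provider': 'aws', 'region': 'eu-west-1'}
```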
# Check it\'s running\\ncurl http://localhost:9091/health\\n```plaintext ### Simple KCL Rendering ```bash\\ncurl -X POST http://localhost:9091/config/render \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{ \\"language\\": \\"kcl\\", \\"content\\": \\"name = \\\\\\"my-server\\\\\\"\\\\ncpu = 4\\\\nmemory = 8192\\", \\"name\\": \\"server-config\\" }\'\\n```plaintext **Response**: ```json\\n{ \\"rendered\\": \\"name = \\\\\\"my-server\\\\\\"\\\\ncpu = 4\\\\nmemory = 8192\\", \\"error\\": null, \\"language\\": \\"kcl\\", \\"execution_time_ms\\": 45\\n}\\n```plaintext ## REST API Reference ### POST /config/render Render a configuration in any supported language. **Request Headers**: ```plaintext\\nContent-Type: application/json\\n```plaintext **Request Body**: ```json\\n{ \\"language\\": \\"kcl|nickel|tera\\", \\"content\\": \\"...configuration content...\\", \\"context\\": { \\"key1\\": \\"value1\\", \\"key2\\": 123 }, \\"name\\": \\"optional-config-name\\"\\n}\\n```plaintext **Parameters**: | Parameter | Type | Required | Description |\\n|-----------|------|----------|-------------|\\n| `language` | string | Yes | One of: `kcl`, `nickel`, `tera` |\\n| `content` | string | Yes | The configuration or template content to render |\\n| `context` | object | No | Variables to pass to the configuration (JSON object) |\\n| `name` | string | No | Optional name for logging purposes | **Response** (Success): ```json\\n{ \\"rendered\\": \\"...rendered output...\\", \\"error\\": null, \\"language\\": \\"kcl\\", \\"execution_time_ms\\": 23\\n}\\n```plaintext **Response** (Error): ```json\\n{ \\"rendered\\": null, \\"error\\": \\"KCL evaluation failed: undefined variable \'name\'\\", \\"language\\": \\"kcl\\", \\"execution_time_ms\\": 18\\n}\\n```plaintext **Status Codes**: - `200 OK` - Rendering completed (check `error` field in body for evaluation errors)\\n- `400 Bad Request` - Invalid request format\\n- `500 Internal Server Error` - Daemon error ### GET /config/stats Get 
rendering statistics across all languages. **Response**: ```json\\n{ \\"total_renders\\": 156, \\"successful_renders\\": 154, \\"failed_renders\\": 2, \\"average_time_ms\\": 28, \\"kcl_renders\\": 78, \\"nickel_renders\\": 52, \\"tera_renders\\": 26, \\"kcl_cache_hits\\": 68, \\"nickel_cache_hits\\": 35, \\"tera_cache_hits\\": 18\\n}\\n```plaintext ### POST /config/stats/reset Reset all rendering statistics. **Response**: ```json\\n{ \\"status\\": \\"success\\", \\"message\\": \\"Configuration rendering statistics reset\\"\\n}\\n```plaintext ## KCL Rendering ### Basic KCL Configuration ```bash\\ncurl -X POST http://localhost:9091/config/render \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{ \\"language\\": \\"kcl\\", \\"content\\": \\"\\nname = \\\\\\"production-server\\\\\\"\\ntype = \\\\\\"web\\\\\\"\\ncpu = 4\\nmemory = 8192\\ndisk = 50 tags = { environment = \\\\\\"production\\\\\\" team = \\\\\\"platform\\\\\\"\\n}\\n\\", \\"name\\": \\"prod-server-config\\" }\'\\n```plaintext ### KCL with Context Variables Pass context variables using the `-D` flag syntax internally: ```bash\\ncurl -X POST http://localhost:9091/config/render \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{ \\"language\\": \\"kcl\\", \\"content\\": \\"\\nname = option(\\\\\\"server_name\\\\\\", default=\\\\\\"default-server\\\\\\")\\nenvironment = option(\\\\\\"env\\\\\\", default=\\\\\\"dev\\\\\\")\\ncpu = option(\\\\\\"cpu_count\\\\\\", default=2)\\nmemory = option(\\\\\\"memory_mb\\\\\\", default=2048)\\n\\", \\"context\\": { \\"server_name\\": \\"app-server-01\\", \\"env\\": \\"production\\", \\"cpu_count\\": 8, \\"memory_mb\\": 16384 }, \\"name\\": \\"server-with-context\\" }\'\\n```plaintext ### Expected KCL Rendering Time - **First render (cache miss)**: 20-50ms\\n- **Cached render (same content)**: 1-5ms\\n- **Large configs (100+ variables)**: 50-100ms ## Nickel Rendering ### Basic Nickel Configuration ```bash\\ncurl -X POST http://localhost:9091/config/render \\\\ -H 
\\"Content-Type: application/json\\" \\\\ -d \'{ \\"language\\": \\"nickel\\", \\"content\\": \\"{ name = \\\\\\"production-server\\\\\\", type = \\\\\\"web\\\\\\", cpu = 4, memory = 8192, disk = 50, tags = { environment = \\\\\\"production\\\\\\", team = \\\\\\"platform\\\\\\" }\\n}\\", \\"name\\": \\"nickel-server-config\\" }\'\\n```plaintext ### Nickel with Lazy Evaluation Nickel excels at evaluating only what\'s needed: ```bash\\ncurl -X POST http://localhost:9091/config/render \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{ \\"language\\": \\"nickel\\", \\"content\\": \\"{ server = { name = \\\\\\"db-01\\\\\\", # Expensive computation - only computed if accessed health_check = std.array.fold (fun acc x => acc + x) 0 [1, 2, 3, 4, 5] }, networking = { dns_servers = [\\\\\\"8.8.8.8\\\\\\", \\\\\\"8.8.4.4\\\\\\"], firewall_rules = [\\\\\\"allow_ssh\\\\\\", \\\\\\"allow_https\\\\\\"] }\\n}\\", \\"context\\": { \\"only_server\\": true } }\'\\n```plaintext ### Expected Nickel Rendering Time - **First render (cache miss)**: 30-60ms\\n- **Cached render (same content)**: 1-5ms\\n- **Large configs with lazy evaluation**: 40-80ms **Advantage**: Nickel only computes fields that are actually used in the output ## Tera Template Rendering ### Basic Tera Template ```bash\\ncurl -X POST http://localhost:9091/config/render \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{ \\"language\\": \\"tera\\", \\"content\\": \\"\\nServer Configuration\\n==================== Name: {{ server_name }}\\nEnvironment: {{ environment | default(value=\\\\\\"development\\\\\\") }}\\nType: {{ server_type }} Assigned Tasks:\\n{% for task in tasks %} - {{ task }}\\n{% endfor %} {% if enable_monitoring %}\\nMonitoring: ENABLED - Prometheus: true - Grafana: true\\n{% else %}\\nMonitoring: DISABLED\\n{% endif %}\\n\\", \\"context\\": { \\"server_name\\": \\"prod-web-01\\", \\"environment\\": \\"production\\", \\"server_type\\": \\"web\\", \\"tasks\\": [\\"kubernetes\\", \\"prometheus\\", 
\\"cilium\\"], \\"enable_monitoring\\": true }, \\"name\\": \\"server-template\\" }\'\\n```plaintext ### Tera Filters and Functions Tera supports Jinja2-compatible filters and functions: ```bash\\ncurl -X POST http://localhost:9091/config/render \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{ \\"language\\": \\"tera\\", \\"content\\": \\"\\nConfiguration for {{ environment | upper }}\\nServers: {{ server_count | default(value=1) }}\\nCost estimate: \\\\${{ monthly_cost | round(precision=2) }} {% for server in servers | reverse %}\\n- {{ server.name }}: {{ server.cpu }} CPUs\\n{% endfor %}\\n\\", \\"context\\": { \\"environment\\": \\"production\\", \\"server_count\\": 5, \\"monthly_cost\\": 1234.567, \\"servers\\": [ {\\"name\\": \\"web-01\\", \\"cpu\\": 4}, {\\"name\\": \\"db-01\\", \\"cpu\\": 8}, {\\"name\\": \\"cache-01\\", \\"cpu\\": 2} ] } }\'\\n```plaintext ### Expected Tera Rendering Time - **Simple templates**: 4-10ms\\n- **Complex templates with loops**: 10-20ms\\n- **Always fast** (template is pre-compiled) ## Performance Characteristics ### Caching Strategy All three renderers use LRU (Least Recently Used) caching: - **Cache Size**: 100 entries per renderer\\n- **Cache Key**: SHA256 hash of (content + context)\\n- **Cache Hit**: Typically < 5ms\\n- **Cache Miss**: Language-dependent (20-60ms) **To maximize cache hits**: 1. Render the same config multiple times → hits after first render\\n2. Use static content when possible → better cache reuse\\n3. 
Monitor cache hit ratio via `/config/stats` ### Benchmarks Comparison of rendering times (on commodity hardware): | Scenario | KCL | Nickel | Tera |\\n|----------|-----|--------|------|\\n| Simple config (10 vars) | 20ms | 30ms | 5ms |\\n| Medium config (50 vars) | 35ms | 45ms | 8ms |\\n| Large config (100+ vars) | 50-100ms | 50-80ms | 10ms |\\n| Cached render | 1-5ms | 1-5ms | 1-5ms | ### Memory Usage - Each renderer keeps 100 cached entries in memory\\n- Average config size in cache: ~5KB\\n- Maximum memory per renderer: ~500KB + overhead ## Error Handling ### Common Errors #### KCL Binary Not Found **Error Response**: ```json\\n{ \\"rendered\\": null, \\"error\\": \\"KCL binary not found in PATH. Install KCL or set KCL_PATH environment variable\\", \\"language\\": \\"kcl\\", \\"execution_time_ms\\": 0\\n}\\n```plaintext **Solution**: ```bash\\n# Install KCL\\nkcl version # Or set explicit path\\nexport KCL_PATH=/usr/local/bin/kcl\\n```plaintext #### Invalid KCL Syntax **Error Response**: ```json\\n{ \\"rendered\\": null, \\"error\\": \\"KCL evaluation failed: Parse error at line 3: expected \'=\'\\", \\"language\\": \\"kcl\\", \\"execution_time_ms\\": 12\\n}\\n```plaintext **Solution**: Verify KCL syntax. Run `kcl eval file.k` directly for better error messages. #### Missing Context Variable **Error Response**: ```json\\n{ \\"rendered\\": null, \\"error\\": \\"KCL evaluation failed: undefined variable \'required_var\'\\", \\"language\\": \\"kcl\\", \\"execution_time_ms\\": 8\\n}\\n```plaintext **Solution**: Provide required context variables or use `option()` with defaults. #### Invalid JSON in Context **HTTP Status**: `400 Bad Request`\\n**Body**: Error message about invalid JSON **Solution**: Ensure context is valid JSON. 
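The documented caching strategy (per-renderer LRU of 100 entries, keyed by a SHA256 hash of content plus context) can be illustrated in Python. This is a sketch of the idea, not the daemon's Rust implementation:

```python
import hashlib
import json
from collections import OrderedDict

class RenderCache:
    """LRU cache keyed by SHA256(content + context), mirroring the
    documented caching strategy (illustrative sketch only)."""
    def __init__(self, capacity=100):
        self.capacity = capacity
        self._entries = OrderedDict()

    def _key(self, content, context):
        blob = content + json.dumps(context or {}, sort_keys=True)
        return hashlib.sha256(blob.encode()).hexdigest()

    def get(self, content, context):
        key = self._key(content, context)
        if key in self._entries:
            self._entries.move_to_end(key)      # mark as recently used
            return self._entries[key]
        return None

    def put(self, content, context, rendered):
        key = self._key(content, context)
        self._entries[key] = rendered
        self._entries.move_to_end(key)
        if len(self._entries) > self.capacity:
            self._entries.popitem(last=False)   # evict least recently used

cache = RenderCache()
cache.put('name = "srv"', {"env": "prod"}, "name = srv")
print(cache.get('name = "srv"', {"env": "prod"}))  # hit: "name = srv"
print(cache.get('name = "srv"', {"env": "dev"}))   # different context: None
```

Because the context participates in the key, the same content rendered with different variables is a cache miss, which matches the advice to prefer static content for better cache reuse.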
## Integration Examples ### Using with Nushell ```nushell\\n# Render a KCL config from Nushell\\nlet config = open workspace/config/provisioning.k | into string\\nlet response = curl -X POST http://localhost:9091/config/render \\\\ -H \\"Content-Type: application/json\\" \\\\ -d $\\"{{ language: \\\\\\"kcl\\\\\\", content: $config }}\\" | from json print $response.rendered\\n```plaintext ### Using with Python ```python\\nimport requests\\nimport json def render_config(language, content, context=None, name=None): payload = { \\"language\\": language, \\"content\\": content, \\"context\\": context or {}, \\"name\\": name } response = requests.post( \\"http://localhost:9091/config/render\\", json=payload ) return response.json() # Example usage\\nresult = render_config( \\"kcl\\", \'name = \\"server\\"\\\\ncpu = 4\', {\\"name\\": \\"prod-server\\"}, \\"my-config\\"\\n) if result[\\"error\\"]: print(f\\"Error: {result[\'error\']}\\")\\nelse: print(f\\"Rendered in {result[\'execution_time_ms\']}ms\\") print(result[\\"rendered\\"])\\n```plaintext ### Using with Curl ```bash\\n#!/bin/bash # Function to render config\\nrender_config() { local language=$1 local content=$2 local name=${3:-\\"unnamed\\"} curl -X POST http://localhost:9091/config/render \\\\ -H \\"Content-Type: application/json\\" \\\\ -d @- << EOF\\n{ \\"language\\": \\"$language\\", \\"content\\": $(echo \\"$content\\" | jq -Rs .), \\"name\\": \\"$name\\"\\n}\\nEOF\\n} # Usage\\nrender_config \\"kcl\\" \\"name = \\\\\\"my-server\\\\\\"\\" \\"server-config\\"\\n```plaintext ## Troubleshooting ### Daemon Won\'t Start **Check log level**: ```bash\\nPROVISIONING_LOG_LEVEL=debug ./target/release/cli-daemon\\n```plaintext **Verify Nushell binary**: ```bash\\nwhich nu\\n# or set explicit path\\nNUSHELL_PATH=/usr/local/bin/nu ./target/release/cli-daemon\\n```plaintext ### Very Slow Rendering **Check cache hit rate**: ```bash\\ncurl http://localhost:9091/config/stats | jq \'.kcl_cache_hits / 
.kcl_renders\'\\n```plaintext **If low cache hit rate**: Rendering same configs repeatedly? **Monitor execution time**: ```bash\\ncurl http://localhost:9091/config/render ... | jq \'.execution_time_ms\'\\n```plaintext ### Rendering Hangs **Set timeout** (depends on client): ```bash\\ncurl --max-time 10 -X POST http://localhost:9091/config/render ...\\n```plaintext **Check daemon logs** for stuck processes. ### Out of Memory **Reduce cache size** (rebuild with modified config) or restart daemon. ## Best Practices 1. **Choose right language for task**: - KCL: Familiar, type-safe, use if already in ecosystem - Nickel: Large configs with lazy evaluation needs - Tera: Simple templating, fastest 2. **Use context variables** instead of hardcoding values: ```json \\"context\\": { \\"environment\\": \\"production\\", \\"replica_count\\": 3 } Monitor statistics to understand performance: watch -n 1 \'curl -s http://localhost:9091/config/stats | jq\' Cache warming : Pre-render common configs on startup Error handling : Always check error field in response","breadcrumbs":"Config Rendering Guide » Starting the Daemon","id":"1503","title":"Starting the Daemon"},"1504":{"body":"KCL Documentation Nickel User Manual Tera Template Engine CLI Daemon Architecture: provisioning/platform/cli-daemon/README.md","breadcrumbs":"Config Rendering Guide » See Also","id":"1504","title":"See Also"},"1505":{"body":"","breadcrumbs":"Config Rendering Guide » Quick Reference","id":"1505","title":"Quick Reference"},"1506":{"body":"POST http://localhost:9091/config/render\\n```plaintext ### Request Template ```bash\\ncurl -X POST http://localhost:9091/config/render \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{ \\"language\\": \\"kcl|nickel|tera\\", \\"content\\": \\"...\\", \\"context\\": {...}, \\"name\\": \\"optional-name\\" }\'\\n```plaintext ### Quick Examples #### KCL - Simple Config ```bash\\ncurl -X POST http://localhost:9091/config/render \\\\ -H \\"Content-Type: application/json\\" 
\\\\ -d \'{ \\"language\\": \\"kcl\\", \\"content\\": \\"name = \\\\\\"server\\\\\\"\\\\ncpu = 4\\\\nmemory = 8192\\" }\'\\n```plaintext #### KCL - With Context ```bash\\ncurl -X POST http://localhost:9091/config/render \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{ \\"language\\": \\"kcl\\", \\"content\\": \\"name = option(\\\\\\"server_name\\\\\\")\\\\nenvironment = option(\\\\\\"env\\\\\\", default=\\\\\\"dev\\\\\\")\\", \\"context\\": {\\"server_name\\": \\"prod-01\\", \\"env\\": \\"production\\"} }\'\\n```plaintext #### Nickel - Simple Config ```bash\\ncurl -X POST http://localhost:9091/config/render \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{ \\"language\\": \\"nickel\\", \\"content\\": \\"{name = \\\\\\"server\\\\\\", cpu = 4, memory = 8192}\\" }\'\\n```plaintext #### Tera - Template with Loops ```bash\\ncurl -X POST http://localhost:9091/config/render \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{ \\"language\\": \\"tera\\", \\"content\\": \\"{% for task in tasks %}{{ task }}\\\\n{% endfor %}\\", \\"context\\": {\\"tasks\\": [\\"kubernetes\\", \\"postgres\\", \\"redis\\"]} }\'\\n```plaintext ### Statistics ```bash\\n# Get stats\\ncurl http://localhost:9091/config/stats # Reset stats\\ncurl -X POST http://localhost:9091/config/stats/reset # Watch stats in real-time\\nwatch -n 1 \'curl -s http://localhost:9091/config/stats | jq\'\\n```plaintext ### Performance Guide | Language | Cold | Cached | Use Case |\\n|----------|------|--------|----------|\\n| **KCL** | 20-50ms | 1-5ms | Type-safe infrastructure configs |\\n| **Nickel** | 30-60ms | 1-5ms | Large configs, lazy evaluation |\\n| **Tera** | 5-20ms | 1-5ms | Simple templating | ### Status Codes | Code | Meaning |\\n|------|---------|\\n| 200 | Success (check `error` field for evaluation errors) |\\n| 400 | Invalid request |\\n| 500 | Daemon error | ### Response Fields ```json\\n{ \\"rendered\\": \\"...output or null on error\\", \\"error\\": \\"...error message or null on 
success\\", \\"language\\": \\"kcl|nickel|tera\\", \\"execution_time_ms\\": 23\\n}\\n```plaintext ### Languages Comparison #### KCL ```kcl\\nname = \\"server\\"\\ntype = \\"web\\"\\ncpu = 4\\nmemory = 8192 tags = { env = \\"prod\\" team = \\"platform\\"\\n}\\n```plaintext **Pros**: Familiar syntax, type-safe, existing patterns\\n**Cons**: Eager evaluation, verbose for simple cases #### Nickel ```nickel\\n{ name = \\"server\\", type = \\"web\\", cpu = 4, memory = 8192, tags = { env = \\"prod\\", team = \\"platform\\" }\\n}\\n```plaintext **Pros**: Lazy evaluation, functional style, compact\\n**Cons**: Different paradigm, smaller ecosystem #### Tera ```jinja2\\nServer: {{ name }}\\nType: {{ type | upper }}\\n{% for tag_name, tag_value in tags %}\\n- {{ tag_name }}: {{ tag_value }}\\n{% endfor %}\\n```plaintext **Pros**: Fast, simple, familiar template syntax\\n**Cons**: No validation, template-only ### Caching **How it works**: SHA256(content + context) → cached result **Cache hit**: < 5ms\\n**Cache miss**: 20-60ms (language dependent)\\n**Cache size**: 100 entries per language **Cache stats**: ```bash\\ncurl -s http://localhost:9091/config/stats | jq \'{ kcl_cache_hits: .kcl_cache_hits, kcl_renders: .kcl_renders, kcl_hit_ratio: (.kcl_cache_hits / .kcl_renders * 100)\\n}\'\\n```plaintext ### Common Tasks #### Batch Rendering ```bash\\n#!/bin/bash\\nfor config in configs/*.k; do curl -X POST http://localhost:9091/config/render \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \\"$(jq -n --arg content \\\\\\"$(cat $config)\\\\\\" \\\\ \'{language: \\"kcl\\", content: $content}\')\\"\\ndone\\n```plaintext #### Validate Before Rendering ```bash\\n# KCL validation\\nkcl eval --strict my-config.k # Nickel validation (via daemon first render)\\ncurl ... 
# catches errors in response\\n```plaintext #### Monitor Cache Performance ```bash\\n#!/bin/bash\\nwhile true; do STATS=$(curl -s http://localhost:9091/config/stats) HIT_RATIO=$( echo \\"$STATS\\" | jq \'.kcl_cache_hits / .kcl_renders * 100\') echo \\"Cache hit ratio: ${HIT_RATIO}%\\" sleep 5\\ndone\\n```plaintext ### Error Examples #### Missing Binary ```json\\n{ \\"error\\": \\"KCL binary not found. Install KCL or set KCL_PATH\\", \\"rendered\\": null\\n}\\n```plaintext **Fix**: `export KCL_PATH=/path/to/kcl` or install KCL #### Syntax Error ```json\\n{ \\"error\\": \\"KCL evaluation failed: Parse error at line 3\\", \\"rendered\\": null\\n}\\n```plaintext **Fix**: Check KCL syntax, run `kcl eval file.k` directly #### Missing Variable ```json\\n{ \\"error\\": \\"KCL evaluation failed: undefined variable \'name\'\\", \\"rendered\\": null\\n}\\n```plaintext **Fix**: Provide in `context` or use `option()` with default ### Integration Quick Start #### Nushell ```nushell\\nuse lib_provisioning let config = open server.k | into string\\nlet result = (curl -X POST http://localhost:9091/config/render \\\\ -H \\"Content-Type: application/json\\" \\\\ -d ({language: \\"kcl\\", content: $config} | to json) | from json) if ($result.error != null) { error make {msg: $result.error}\\n} else { print $result.rendered\\n}\\n```plaintext #### Python ```python\\nimport requests resp = requests.post(\\"http://localhost:9091/config/render\\", json={ \\"language\\": \\"kcl\\", \\"content\\": \'name = \\"server\\"\', \\"context\\": {}\\n})\\nresult = resp.json()\\nprint(result[\\"rendered\\"] if not result[\\"error\\"] else f\\"Error: {result[\'error\']}\\")\\n```plaintext #### Bash ```bash\\nrender() { curl -s -X POST http://localhost:9091/config/render \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \\"$1\\" | jq \'.\'\\n} # Usage\\nrender \'{\\"language\\":\\"kcl\\",\\"content\\":\\"name = \\\\\\"server\\\\\\"\\"}\'\\n```plaintext ### Environment Variables ```bash\\n# Daemon 
configuration\\nPROVISIONING_LOG_LEVEL=debug # Log level\\nDAEMON_BIND=127.0.0.1:9091 # Bind address\\nNUSHELL_PATH=/usr/local/bin/nu # Nushell binary\\nKCL_PATH=/usr/local/bin/kcl # KCL binary\\nNICKEL_PATH=/usr/local/bin/nickel # Nickel binary\\n```plaintext ### Useful Commands ```bash\\n# Health check\\ncurl http://localhost:9091/health # Daemon info\\ncurl http://localhost:9091/info # View stats\\ncurl http://localhost:9091/config/stats | jq \'.\' # Pretty print stats\\ncurl -s http://localhost:9091/config/stats | jq \'{ total: .total_renders, success_rate: (.successful_renders / .total_renders * 100), avg_time: .average_time_ms, cache_hit_rate: ((.kcl_cache_hits + .nickel_cache_hits) / (.kcl_renders + .nickel_renders) * 100)\\n}\'\\n```plaintext ### Troubleshooting Checklist - [ ] Daemon running? `curl http://localhost:9091/health`\\n- [ ] Correct content for language?\\n- [ ] Valid JSON in context?\\n- [ ] Binary available? (KCL/Nickel)\\n- [ ] Check log level? `PROVISIONING_LOG_LEVEL=debug`\\n- [ ] Cache hit rate? `/config/stats`\\n- [ ] Error in response? 
Check `error` field","breadcrumbs":"Config Rendering Guide » API Endpoint","id":"1506","title":"API Endpoint"},"1507":{"body":"This comprehensive guide explains the configuration system of the Infrastructure Automation platform, helping you understand, customize, and manage all configuration aspects.","breadcrumbs":"Configuration » Configuration Guide","id":"1507","title":"Configuration Guide"},"1508":{"body":"Understanding the configuration hierarchy and precedence Working with different configuration file types Configuration interpolation and templating Environment-specific configurations User customization and overrides Validation and troubleshooting Advanced configuration patterns","breadcrumbs":"Configuration » What You\'ll Learn","id":"1508","title":"What You\'ll Learn"},"1509":{"body":"","breadcrumbs":"Configuration » Configuration Architecture","id":"1509","title":"Configuration Architecture"},"151":{"body":"If you plan to use platform services (orchestrator, control center, etc.): # Build platform services\\ncd provisioning/platform # Build orchestrator\\ncd orchestrator\\ncargo build --release\\ncd .. # Build control center\\ncd control-center\\ncargo build --release\\ncd .. # Build KMS service\\ncd kms-service\\ncargo build --release\\ncd .. 
# Verify builds\\nls */target/release/","breadcrumbs":"Installation Steps » Optional: Install Platform Services","id":"151","title":"Optional: Install Platform Services"},"1510":{"body":"The system uses a layered configuration approach with clear precedence rules: Runtime CLI arguments (highest precedence) ↓ (overrides)\\nEnvironment Variables ↓ (overrides)\\nInfrastructure Config (./.provisioning.toml) ↓ (overrides)\\nProject Config (./provisioning.toml) ↓ (overrides)\\nUser Config (~/.config/provisioning/config.toml) ↓ (overrides)\\nSystem Defaults (config.defaults.toml) (lowest precedence)\\n```plaintext ### Configuration File Types | File Type | Purpose | Location | Format |\\n|-----------|---------|----------|--------|\\n| **System Defaults** | Base system configuration | `config.defaults.toml` | TOML |\\n| **User Config** | Personal preferences | `~/.config/provisioning/config.toml` | TOML |\\n| **Project Config** | Project-wide settings | `./provisioning.toml` | TOML |\\n| **Infrastructure Config** | Infra-specific settings | `./.provisioning.toml` | TOML |\\n| **Environment Config** | Environment overrides | `config.{env}.toml` | TOML |\\n| **Infrastructure Definitions** | Infrastructure as Code | `settings.k`, `*.k` | KCL | ## Understanding Configuration Sections ### Core System Configuration ```toml\\n[core]\\nversion = \\"1.0.0\\" # System version\\nname = \\"provisioning\\" # System identifier\\n```plaintext ### Path Configuration The most critical configuration section that defines where everything is located: ```toml\\n[paths]\\n# Base directory - all other paths derive from this\\nbase = \\"/usr/local/provisioning\\" # Derived paths (usually don\'t need to change these)\\nkloud = \\"{{paths.base}}/infra\\"\\nproviders = \\"{{paths.base}}/providers\\"\\ntaskservs = \\"{{paths.base}}/taskservs\\"\\nclusters = \\"{{paths.base}}/cluster\\"\\nresources = \\"{{paths.base}}/resources\\"\\ntemplates = \\"{{paths.base}}/templates\\"\\ntools = 
\\"{{paths.base}}/tools\\"\\ncore = \\"{{paths.base}}/core\\" [paths.files]\\n# Important file locations\\nsettings_file = \\"settings.k\\"\\nkeys = \\"{{paths.base}}/keys.yaml\\"\\nrequirements = \\"{{paths.base}}/requirements.yaml\\"\\n```plaintext ### Debug and Logging ```toml\\n[debug]\\nenabled = false # Enable debug mode\\nmetadata = false # Show internal metadata\\ncheck = false # Default to check mode (dry run)\\nremote = false # Enable remote debugging\\nlog_level = \\"info\\" # Logging verbosity\\nno_terminal = false # Disable terminal features\\n```plaintext ### Output Configuration ```toml\\n[output]\\nfile_viewer = \\"less\\" # File viewer command\\nformat = \\"yaml\\" # Default output format (json, yaml, toml, text)\\n```plaintext ### Provider Configuration ```toml\\n[providers]\\ndefault = \\"local\\" # Default provider [providers.aws]\\napi_url = \\"\\" # AWS API endpoint (blank = default)\\nauth = \\"\\" # Authentication method\\ninterface = \\"CLI\\" # Interface type (CLI or API) [providers.upcloud]\\napi_url = \\"https://api.upcloud.com/1.3\\"\\nauth = \\"\\"\\ninterface = \\"CLI\\" [providers.local]\\napi_url = \\"\\"\\nauth = \\"\\"\\ninterface = \\"CLI\\"\\n```plaintext ### Encryption (SOPS) Configuration ```toml\\n[sops]\\nuse_sops = true # Enable SOPS encryption\\nconfig_path = \\"{{paths.base}}/.sops.yaml\\" # Search paths for Age encryption keys\\nkey_search_paths = [ \\"{{paths.base}}/keys/age.txt\\", \\"~/.config/sops/age/keys.txt\\"\\n]\\n```plaintext ## Configuration Interpolation The system supports powerful interpolation patterns for dynamic configuration values. 
### Basic Interpolation Patterns #### Path Interpolation ```toml\\n# Reference other path values\\ntemplates = \\"{{paths.base}}/my-templates\\"\\ncustom_path = \\"{{paths.providers}}/custom\\"\\n```plaintext #### Environment Variable Interpolation ```toml\\n# Access environment variables\\nuser_home = \\"{{env.HOME}}\\"\\ncurrent_user = \\"{{env.USER}}\\"\\ncustom_path = \\"{{env.CUSTOM_PATH || /default/path}}\\" # With fallback\\n```plaintext #### Date/Time Interpolation ```toml\\n# Dynamic date/time values\\nlog_file = \\"{{paths.base}}/logs/app-{{now.date}}.log\\"\\nbackup_dir = \\"{{paths.base}}/backups/{{now.timestamp}}\\"\\n```plaintext #### Git Information Interpolation ```toml\\n# Git repository information\\ndeployment_branch = \\"{{git.branch}}\\"\\nversion_tag = \\"{{git.tag}}\\"\\ncommit_hash = \\"{{git.commit}}\\"\\n```plaintext #### Cross-Section References ```toml\\n# Reference values from other sections\\ndatabase_host = \\"{{providers.aws.database_endpoint}}\\"\\napi_key = \\"{{sops.decrypted_key}}\\"\\n```plaintext ### Advanced Interpolation #### Function Calls ```toml\\n# Built-in functions\\nconfig_path = \\"{{path.join(env.HOME, .config, provisioning)}}\\"\\nsafe_name = \\"{{str.lower(str.replace(project.name, \' \', \'-\'))}}\\"\\n```plaintext #### Conditional Expressions ```toml\\n# Conditional logic\\ndebug_level = \\"{{debug.enabled && \'debug\' || \'info\'}}\\"\\nstorage_path = \\"{{env.STORAGE_PATH || path.join(paths.base, \'storage\')}}\\"\\n```plaintext ### Interpolation Examples ```toml\\n[paths]\\nbase = \\"/opt/provisioning\\"\\nworkspace = \\"{{env.HOME}}/provisioning-workspace\\"\\ncurrent_project = \\"{{paths.workspace}}/{{env.PROJECT_NAME || \'default\'}}\\" [deployment]\\nenvironment = \\"{{env.DEPLOY_ENV || \'development\'}}\\"\\ntimestamp = \\"{{now.iso8601}}\\"\\nversion = \\"{{git.tag || git.commit}}\\" [database]\\nconnection_string = \\"postgresql://{{env.DB_USER}}:{{env.DB_PASS}}@{{env.DB_HOST || 
\'localhost\'}}/{{env.DB_NAME}}\\" [notifications]\\nslack_channel = \\"#{{env.TEAM_NAME || \'general\'}}-notifications\\"\\nemail_subject = \\"Deployment {{deployment.environment}} - {{deployment.timestamp}}\\"\\n```plaintext ## Environment-Specific Configuration ### Environment Detection The system automatically detects the environment using: 1. **PROVISIONING_ENV** environment variable\\n2. **Git branch patterns** (dev, staging, main/master)\\n3. **Directory patterns** (development, staging, production)\\n4. **Explicit configuration** ### Environment Configuration Files Create environment-specific configurations: #### Development Environment (`config.dev.toml`) ```toml\\n[core]\\nname = \\"provisioning-dev\\" [debug]\\nenabled = true\\nlog_level = \\"debug\\"\\nmetadata = true [providers]\\ndefault = \\"local\\" [cache]\\nenabled = false # Disable caching for development [notifications]\\nenabled = false # No notifications in dev\\n```plaintext #### Testing Environment (`config.test.toml`) ```toml\\n[core]\\nname = \\"provisioning-test\\" [debug]\\nenabled = true\\ncheck = true # Default to check mode in testing\\nlog_level = \\"info\\" [providers]\\ndefault = \\"local\\" [infrastructure]\\nauto_cleanup = true # Clean up test resources\\nresource_prefix = \\"test-{{git.branch}}-\\"\\n```plaintext #### Production Environment (`config.prod.toml`) ```toml\\n[core]\\nname = \\"provisioning-prod\\" [debug]\\nenabled = false\\nlog_level = \\"warn\\" [providers]\\ndefault = \\"aws\\" [security]\\nrequire_approval = true\\naudit_logging = true\\nencrypt_backups = true [notifications]\\nenabled = true\\ncritical_only = true\\n```plaintext ### Environment Switching ```bash\\n# Set environment for session\\nexport PROVISIONING_ENV=dev\\nprovisioning env # Use environment for single command\\nprovisioning --environment prod server create # Switch environment permanently\\nprovisioning env set prod\\n```plaintext ## User Configuration Customization ### Creating Your User 
Configuration ```bash\\n# Initialize user configuration from template\\nprovisioning init config # Or copy and customize\\ncp config-examples/config.user.toml ~/.config/provisioning/config.toml\\n```plaintext ### Common User Customizations #### Developer Setup ```toml\\n[paths]\\nbase = \\"/Users/alice/dev/provisioning\\" [debug]\\nenabled = true\\nlog_level = \\"debug\\" [providers]\\ndefault = \\"local\\" [output]\\nformat = \\"json\\"\\nfile_viewer = \\"code\\" [sops]\\nkey_search_paths = [ \\"/Users/alice/.config/sops/age/keys.txt\\"\\n]\\n```plaintext #### Operations Engineer Setup ```toml\\n[paths]\\nbase = \\"/opt/provisioning\\" [debug]\\nenabled = false\\nlog_level = \\"info\\" [providers]\\ndefault = \\"aws\\" [output]\\nformat = \\"yaml\\" [notifications]\\nenabled = true\\nemail = \\"ops-team@company.com\\"\\n```plaintext #### Team Lead Setup ```toml\\n[paths]\\nbase = \\"/home/teamlead/provisioning\\" [debug]\\nenabled = true\\nmetadata = true\\nlog_level = \\"info\\" [providers]\\ndefault = \\"upcloud\\" [security]\\nrequire_confirmation = true\\naudit_logging = true [sops]\\nkey_search_paths = [ \\"/secure/keys/team-lead.txt\\", \\"~/.config/sops/age/keys.txt\\"\\n]\\n```plaintext ## Project-Specific Configuration ### Project Configuration File (`provisioning.toml`) ```toml\\n[project]\\nname = \\"web-application\\"\\ndescription = \\"Main web application infrastructure\\"\\nversion = \\"2.1.0\\"\\nteam = \\"platform-team\\" [paths]\\n# Project-specific path overrides\\ninfra = \\"./infrastructure\\"\\ntemplates = \\"./custom-templates\\" [defaults]\\n# Project defaults\\nprovider = \\"aws\\"\\nregion = \\"us-west-2\\"\\nenvironment = \\"development\\" [cost_controls]\\nmax_monthly_budget = 5000.00\\nalert_threshold = 0.8 [compliance]\\nrequired_tags = [\\"team\\", \\"environment\\", \\"cost-center\\"]\\nencryption_required = true\\nbackup_required = true [notifications]\\nslack_webhook = \\"https://hooks.slack.com/services/...\\"\\nteam_email = 
\\"platform-team@company.com\\"\\n```plaintext ### Infrastructure-Specific Configuration (`.provisioning.toml`) ```toml\\n[infrastructure]\\nname = \\"production-web-app\\"\\nenvironment = \\"production\\"\\nregion = \\"us-west-2\\" [overrides]\\n# Infrastructure-specific overrides\\ndebug.enabled = false\\ndebug.log_level = \\"error\\"\\ncache.enabled = true [scaling]\\nauto_scaling_enabled = true\\nmin_instances = 3\\nmax_instances = 20 [security]\\nvpc_id = \\"vpc-12345678\\"\\nsubnet_ids = [\\"subnet-12345678\\", \\"subnet-87654321\\"]\\nsecurity_group_id = \\"sg-12345678\\" [monitoring]\\nenabled = true\\nretention_days = 90\\nalerting_enabled = true\\n```plaintext ## Configuration Validation ### Built-in Validation ```bash\\n# Validate current configuration\\nprovisioning validate config # Detailed validation with warnings\\nprovisioning validate config --detailed # Strict validation mode\\nprovisioning validate config strict # Validate specific environment\\nprovisioning validate config --environment prod\\n```plaintext ### Custom Validation Rules Create custom validation in your configuration: ```toml\\n[validation]\\n# Custom validation rules\\nrequired_sections = [\\"paths\\", \\"providers\\", \\"debug\\"]\\nrequired_env_vars = [\\"AWS_REGION\\", \\"PROJECT_NAME\\"]\\nforbidden_values = [\\"password123\\", \\"admin\\"] [validation.paths]\\n# Path validation rules\\nbase_must_exist = true\\nwritable_required = [\\"paths.base\\", \\"paths.cache\\"] [validation.security]\\n# Security validation\\nrequire_encryption = true\\nmin_key_length = 32\\n```plaintext ## Troubleshooting Configuration ### Common Configuration Issues #### Issue 1: Path Not Found Errors ```bash\\n# Problem: Base path doesn\'t exist\\n# Check current configuration\\nprovisioning env | grep paths.base # Verify path exists\\nls -la /path/shown/above # Fix: Update user config\\nnano ~/.config/provisioning/config.toml\\n# Set correct paths.base = \\"/correct/path\\"\\n```plaintext #### Issue 
2: Interpolation Failures ```bash\\n# Problem: {{env.VARIABLE}} not resolving\\n# Check environment variables\\nenv | grep VARIABLE # Check interpolation\\nprovisioning validate interpolation test # Debug interpolation\\nprovisioning --debug validate interpolation validate\\n```plaintext #### Issue 3: SOPS Encryption Errors ```bash\\n# Problem: Cannot decrypt SOPS files\\n# Check SOPS configuration\\nprovisioning sops config # Verify key files\\nls -la ~/.config/sops/age/keys.txt # Test decryption\\nsops -d encrypted-file.k\\n```plaintext #### Issue 4: Provider Authentication ```bash\\n# Problem: Provider authentication failed\\n# Check provider configuration\\nprovisioning show providers # Test provider connection\\nprovisioning provider test aws # Verify credentials\\naws configure list # For AWS\\n```plaintext ### Configuration Debugging ```bash\\n# Show current configuration hierarchy\\nprovisioning config show --hierarchy # Show configuration sources\\nprovisioning config sources # Show interpolated values\\nprovisioning config interpolated # Debug specific section\\nprovisioning config debug paths\\nprovisioning config debug providers\\n```plaintext ### Configuration Reset ```bash\\n# Reset to defaults\\nprovisioning config reset # Reset specific section\\nprovisioning config reset providers # Backup current config before reset\\nprovisioning config backup\\n```plaintext ## Advanced Configuration Patterns ### Dynamic Configuration Loading ```toml\\n[dynamic]\\n# Load configuration from external sources\\nconfig_urls = [ \\"https://config.company.com/provisioning/base.toml\\", \\"file:///etc/provisioning/shared.toml\\"\\n] # Conditional configuration loading\\nload_if_exists = [ \\"./local-overrides.toml\\", \\"../shared/team-config.toml\\"\\n]\\n```plaintext ### Configuration Templating ```toml\\n[templates]\\n# Template-based configuration\\nbase_template = \\"aws-web-app\\"\\ntemplate_vars = { region = \\"us-west-2\\" instance_type = \\"t3.medium\\" 
team_name = \\"platform\\"\\n} # Template inheritance\\nextends = [\\"base-web\\", \\"monitoring\\", \\"security\\"]\\n```plaintext ### Multi-Region Configuration ```toml\\n[regions]\\nprimary = \\"us-west-2\\"\\nsecondary = \\"us-east-1\\" [regions.us-west-2]\\nproviders.aws.region = \\"us-west-2\\"\\navailability_zones = [\\"us-west-2a\\", \\"us-west-2b\\", \\"us-west-2c\\"] [regions.us-east-1]\\nproviders.aws.region = \\"us-east-1\\"\\navailability_zones = [\\"us-east-1a\\", \\"us-east-1b\\", \\"us-east-1c\\"]\\n```plaintext ### Configuration Profiles ```toml\\n[profiles]\\nactive = \\"development\\" [profiles.development]\\ndebug.enabled = true\\nproviders.default = \\"local\\"\\ncost_controls.enabled = false [profiles.staging]\\ndebug.enabled = true\\nproviders.default = \\"aws\\"\\ncost_controls.max_budget = 1000.00 [profiles.production]\\ndebug.enabled = false\\nproviders.default = \\"aws\\"\\nsecurity.strict_mode = true\\n```plaintext ## Configuration Management Best Practices ### 1. Version Control ```bash\\n# Track configuration changes\\ngit add provisioning.toml\\ngit commit -m \\"feat(config): add production settings\\" # Use branches for configuration experiments\\ngit checkout -b config/new-provider\\n```plaintext ### 2. Documentation ```toml\\n# Document your configuration choices\\n[paths]\\n# Using custom base path for team shared installation\\nbase = \\"/opt/team-provisioning\\" [debug]\\n# Debug enabled for troubleshooting infrastructure issues\\nenabled = true\\nlog_level = \\"debug\\" # Temporary while debugging network problems\\n```plaintext ### 3. Validation ```bash\\n# Always validate before committing\\nprovisioning validate config\\ngit add . && git commit -m \\"update config\\"\\n```plaintext ### 4. 
Backup ```bash\\n# Regular configuration backups\\nprovisioning config export --format yaml > config-backup-$(date +%Y%m%d).yaml # Automated backup script\\necho \'0 2 * * * provisioning config export > ~/backups/config-$(date +\\\\%Y\\\\%m\\\\%d).yaml\' | crontab -\\n```plaintext ### 5. Security - Never commit sensitive values in plain text\\n- Use SOPS for encrypting secrets\\n- Rotate encryption keys regularly\\n- Audit configuration access ```bash\\n# Encrypt sensitive configuration\\nsops -e settings.k > settings.encrypted.k # Audit configuration changes\\ngit log -p -- provisioning.toml\\n```plaintext ## Configuration Migration ### Migrating from Environment Variables ```bash\\n# Old: Environment variables\\nexport PROVISIONING_DEBUG=true\\nexport PROVISIONING_PROVIDER=aws # New: Configuration file\\n[debug]\\nenabled = true [providers]\\ndefault = \\"aws\\"\\n```plaintext ### Upgrading Configuration Format ```bash\\n# Check for configuration updates needed\\nprovisioning config check-version # Migrate to new format\\nprovisioning config migrate --from 1.0 --to 2.0 # Validate migrated configuration\\nprovisioning validate config\\n```plaintext ## Next Steps Now that you understand the configuration system: 1. **Create your user configuration**: `provisioning init config`\\n2. **Set up environment-specific configs** for your workflow\\n3. **Learn CLI commands**: [CLI Reference](cli-reference.md)\\n4. **Practice with examples**: [Examples and Tutorials](examples/)\\n5. 
**Troubleshoot issues**: [Troubleshooting Guide](troubleshooting-guide.md) You now have complete control over how provisioning behaves in your environment!","breadcrumbs":"Configuration » Configuration Hierarchy","id":"1510","title":"Configuration Hierarchy"},"1511":{"body":"Version : 1.0.0 Date : 2025-10-09 Status : Production Ready","breadcrumbs":"Authentication Layer Guide » Authentication Layer Implementation Guide","id":"1511","title":"Authentication Layer Implementation Guide"},"1512":{"body":"A comprehensive authentication layer has been integrated into the provisioning system to secure sensitive operations. The system uses nu_plugin_auth for JWT authentication with MFA support, providing enterprise-grade security with graceful user experience.","breadcrumbs":"Authentication Layer Guide » Overview","id":"1512","title":"Overview"},"1513":{"body":"","breadcrumbs":"Authentication Layer Guide » Key Features","id":"1513","title":"Key Features"},"1514":{"body":"RS256 asymmetric signing Access tokens (15min) + refresh tokens (7d) OS keyring storage (macOS Keychain, Windows Credential Manager, Linux Secret Service)","breadcrumbs":"Authentication Layer Guide » ✅ JWT Authentication","id":"1514","title":"✅ JWT Authentication"},"1515":{"body":"TOTP (Google Authenticator, Authy) WebAuthn/FIDO2 (YubiKey, Touch ID) Required for production and destructive operations","breadcrumbs":"Authentication Layer Guide » ✅ MFA Support","id":"1515","title":"✅ MFA Support"},"1516":{"body":"Production environment : Requires authentication + MFA Destructive operations : Requires authentication + MFA (delete, destroy) Development/test : Requires authentication, allows skip with flag Check mode : Always bypasses authentication (dry-run operations)","breadcrumbs":"Authentication Layer Guide » ✅ Security Policies","id":"1516","title":"✅ Security Policies"},"1517":{"body":"All authenticated operations logged User, timestamp, operation details MFA verification status JSON format for easy 
parsing","breadcrumbs":"Authentication Layer Guide » ✅ Audit Logging","id":"1517","title":"✅ Audit Logging"},"1518":{"body":"Clear instructions for login/MFA Distinct error types (platform auth vs provider auth) Helpful guidance for setup","breadcrumbs":"Authentication Layer Guide » ✅ User-Friendly Error Messages","id":"1518","title":"✅ User-Friendly Error Messages"},"1519":{"body":"","breadcrumbs":"Authentication Layer Guide » Quick Start","id":"1519","title":"Quick Start"},"152":{"body":"Use the interactive installer for a guided setup: # Build the installer\\ncd provisioning/platform/installer\\ncargo build --release # Run interactive installer\\n./target/release/provisioning-installer # Or headless installation\\n./target/release/provisioning-installer --headless --mode solo --yes","breadcrumbs":"Installation Steps » Optional: Install Platform with Installer","id":"152","title":"Optional: Install Platform with Installer"},"1520":{"body":"# Interactive login (password prompt)\\nprovisioning auth login # Save credentials to keyring\\nprovisioning auth login --save # Custom control center URL\\nprovisioning auth login admin --url http://control.example.com:9080\\n```plaintext ### 2. Enroll MFA (First Time) ```bash\\n# Enroll TOTP (Google Authenticator)\\nprovisioning auth mfa enroll totp # Scan QR code with authenticator app\\n# Or enter secret manually\\n```plaintext ### 3. Verify MFA (For Sensitive Operations) ```bash\\n# Get 6-digit code from authenticator app\\nprovisioning auth mfa verify --code 123456\\n```plaintext ### 4. 
Check Authentication Status ```bash\\n# View current authentication status\\nprovisioning auth status # Verify token is valid\\nprovisioning auth verify\\n```plaintext --- ## Protected Operations ### Server Operations ```bash\\n# ✅ CREATE - Requires auth (prod: +MFA)\\nprovisioning server create web-01 # Auth required\\nprovisioning server create web-01 --check # Auth skipped (check mode) # ❌ DELETE - Requires auth + MFA\\nprovisioning server delete web-01 # Auth + MFA required\\nprovisioning server delete web-01 --check # Auth skipped (check mode) # 📖 READ - No auth required\\nprovisioning server list # No auth required\\nprovisioning server ssh web-01 # No auth required\\n```plaintext ### Task Service Operations ```bash\\n# ✅ CREATE - Requires auth (prod: +MFA)\\nprovisioning taskserv create kubernetes # Auth required\\nprovisioning taskserv create kubernetes --check # Auth skipped # ❌ DELETE - Requires auth + MFA\\nprovisioning taskserv delete kubernetes # Auth + MFA required # 📖 READ - No auth required\\nprovisioning taskserv list # No auth required\\n```plaintext ### Cluster Operations ```bash\\n# ✅ CREATE - Requires auth (prod: +MFA)\\nprovisioning cluster create buildkit # Auth required\\nprovisioning cluster create buildkit --check # Auth skipped # ❌ DELETE - Requires auth + MFA\\nprovisioning cluster delete buildkit # Auth + MFA required\\n```plaintext ### Batch Workflows ```bash\\n# ✅ SUBMIT - Requires auth (prod: +MFA)\\nprovisioning batch submit workflow.k # Auth required\\nprovisioning batch submit workflow.k --skip-auth # Auth skipped (if allowed) # 📖 READ - No auth required\\nprovisioning batch list # No auth required\\nprovisioning batch status # No auth required\\n```plaintext --- ## Configuration ### Security Settings (`config.defaults.toml`) ```toml\\n[security]\\nrequire_auth = true # Enable authentication system\\nrequire_mfa_for_production = true # MFA for prod environment\\nrequire_mfa_for_destructive = true # MFA for delete 
operations\\nauth_timeout = 3600 # Token timeout (1 hour)\\naudit_log_path = \\"{{paths.base}}/logs/audit.log\\" [security.bypass]\\nallow_skip_auth = false # Allow PROVISIONING_SKIP_AUTH env var [plugins]\\nauth_enabled = true # Enable nu_plugin_auth [platform.control_center]\\nurl = \\"http://localhost:9080\\" # Control center URL\\n```plaintext ### Environment-Specific Configuration ```toml\\n# Development\\n[environments.dev]\\nsecurity.bypass.allow_skip_auth = true # Allow auth bypass in dev # Production\\n[environments.prod]\\nsecurity.bypass.allow_skip_auth = false # Never allow bypass\\nsecurity.require_mfa_for_production = true\\n```plaintext --- ## Authentication Bypass (Dev/Test Only) ### Environment Variable Method ```bash\\n# Export environment variable (dev/test only)\\nexport PROVISIONING_SKIP_AUTH=true # Run operations without authentication\\nprovisioning server create web-01 # Unset when done\\nunset PROVISIONING_SKIP_AUTH\\n```plaintext ### Per-Command Flag ```bash\\n# Some commands support --skip-auth flag\\nprovisioning batch submit workflow.k --skip-auth\\n```plaintext ### Check Mode (Always Bypasses Auth) ```bash\\n# Check mode is always allowed without auth\\nprovisioning server create web-01 --check\\nprovisioning taskserv create kubernetes --check\\n```plaintext ⚠️ **WARNING**: Auth bypass should ONLY be used in development/testing environments. Production systems should have `security.bypass.allow_skip_auth = false`. --- ## Error Messages ### Not Authenticated ```plaintext\\n❌ Authentication Required Operation: server create web-01\\nYou must be logged in to perform this operation. To login: provisioning auth login Note: Your credentials will be securely stored in the system keyring.\\n```plaintext **Solution**: Run `provisioning auth login ` --- ### MFA Required ```plaintext\\n❌ MFA Verification Required Operation: server delete web-01\\nReason: destructive operation (delete/destroy) To verify MFA: 1. 
Get code from your authenticator app 2. Run: provisioning auth mfa verify --code <6-digit-code> Don\'t have MFA set up? Run: provisioning auth mfa enroll totp\\n```plaintext **Solution**: Run `provisioning auth mfa verify --code 123456` --- ### Token Expired ```plaintext\\n❌ Authentication Required Operation: server create web-02\\nYou must be logged in to perform this operation. Error: Token verification failed\\n```plaintext **Solution**: Token expired, re-login with `provisioning auth login ` --- ## Audit Logging All authenticated operations are logged to the audit log file with the following information: ```json\\n{ \\"timestamp\\": \\"2025-10-09 14:32:15\\", \\"user\\": \\"admin\\", \\"operation\\": \\"server_create\\", \\"details\\": { \\"hostname\\": \\"web-01\\", \\"infra\\": \\"production\\", \\"environment\\": \\"prod\\", \\"orchestrated\\": false }, \\"mfa_verified\\": true\\n}\\n```plaintext ### Viewing Audit Logs ```bash\\n# View raw audit log\\ncat provisioning/logs/audit.log # Filter by user\\ncat provisioning/logs/audit.log | jq \'. | select(.user == \\"admin\\")\' # Filter by operation type\\ncat provisioning/logs/audit.log | jq \'. | select(.operation == \\"server_create\\")\' # Filter by date\\ncat provisioning/logs/audit.log | jq \'. 
| select(.timestamp | startswith(\\"2025-10-09\\"))\'\\n```plaintext --- ## Integration with Control Center The authentication system integrates with the provisioning platform\'s control center REST API: - **POST /api/auth/login** - Login with credentials\\n- **POST /api/auth/logout** - Revoke tokens\\n- **POST /api/auth/verify** - Verify token validity\\n- **GET /api/auth/sessions** - List active sessions\\n- **POST /api/mfa/enroll** - Enroll MFA device\\n- **POST /api/mfa/verify** - Verify MFA code ### Starting Control Center ```bash\\n# Start control center (required for authentication)\\ncd provisioning/platform/control-center\\ncargo run --release\\n```plaintext Or use the orchestrator which includes control center: ```bash\\ncd provisioning/platform/orchestrator\\n./scripts/start-orchestrator.nu --background\\n```plaintext --- ## Testing Authentication ### Manual Testing ```bash\\n# 1. Start control center\\ncd provisioning/platform/control-center\\ncargo run --release & # 2. Login\\nprovisioning auth login admin # 3. Try creating server (should succeed if authenticated)\\nprovisioning server create test-server --check # 4. Logout\\nprovisioning auth logout # 5. Try creating server (should fail - not authenticated)\\nprovisioning server create test-server --check\\n```plaintext ### Automated Testing ```bash\\n# Run authentication tests\\nnu provisioning/core/nulib/lib_provisioning/plugins/auth_test.nu\\n```plaintext --- ## Troubleshooting ### Plugin Not Available **Error**: `Authentication plugin not available` **Solution**: 1. Check plugin is built: `ls provisioning/core/plugins/nushell-plugins/nu_plugin_auth/target/release/`\\n2. Register plugin: `plugin add target/release/nu_plugin_auth`\\n3. Use plugin: `plugin use auth`\\n4. Verify: `which auth` --- ### Control Center Not Running **Error**: `Cannot connect to control center` **Solution**: 1. Start control center: `cd provisioning/platform/control-center && cargo run --release`\\n2. 
Or use orchestrator: `cd provisioning/platform/orchestrator && ./scripts/start-orchestrator.nu --background`\\n3. Check URL is correct in config: `provisioning config get platform.control_center.url` --- ### MFA Not Working **Error**: `Invalid MFA code` **Solutions**: - Ensure time is synchronized (TOTP codes are time-based)\\n- Code expires every 30 seconds, get fresh code\\n- Verify you\'re using the correct authenticator app entry\\n- Re-enroll if needed: `provisioning auth mfa enroll totp` --- ### Keyring Access Issues **Error**: `Keyring storage unavailable` **macOS**: Grant Keychain access to Terminal/iTerm2 in System Preferences → Security & Privacy **Linux**: Ensure `gnome-keyring` or `kwallet` is running **Windows**: Check Windows Credential Manager is accessible --- ## Architecture ### Authentication Flow ```plaintext\\n┌─────────────┐\\n│ User Command│\\n└──────┬──────┘ │ ▼\\n┌─────────────────────────────────┐\\n│ Infrastructure Command Handler │\\n│ (infrastructure.nu) │\\n└──────┬──────────────────────────┘ │ ▼\\n┌─────────────────────────────────┐\\n│ Auth Check │\\n│ - Determine operation type │\\n│ - Check if auth required │\\n│ - Check environment (prod/dev) │\\n└──────┬──────────────────────────┘ │ ▼\\n┌─────────────────────────────────┐\\n│ Auth Plugin Wrapper │\\n│ (auth.nu) │\\n│ - Call plugin or HTTP fallback │\\n│ - Verify token validity │\\n│ - Check MFA if required │\\n└──────┬──────────────────────────┘ │ ▼\\n┌─────────────────────────────────┐\\n│ nu_plugin_auth │\\n│ - JWT verification (RS256) │\\n│ - Keyring token storage │\\n│ - MFA verification │\\n└──────┬──────────────────────────┘ │ ▼\\n┌─────────────────────────────────┐\\n│ Control Center API │\\n│ - /api/auth/verify │\\n│ - /api/mfa/verify │\\n└──────┬──────────────────────────┘ │ ▼\\n┌─────────────────────────────────┐\\n│ Operation Execution │\\n│ (servers/create.nu, etc.) 
│\\n└──────┬──────────────────────────┘ │ ▼\\n┌─────────────────────────────────┐\\n│ Audit Logging │\\n│ - Log to audit.log │\\n│ - Include user, timestamp, MFA │\\n└─────────────────────────────────┘\\n```plaintext ### File Structure ```plaintext\\nprovisioning/\\n├── config/\\n│ └── config.defaults.toml # Security configuration\\n├── core/nulib/\\n│ ├── lib_provisioning/plugins/\\n│ │ └── auth.nu # Auth wrapper (550 lines)\\n│ ├── servers/\\n│ │ └── create.nu # Server ops with auth\\n│ ├── workflows/\\n│ │ └── batch.nu # Batch workflows with auth\\n│ └── main_provisioning/commands/\\n│ └── infrastructure.nu # Infrastructure commands with auth\\n├── core/plugins/nushell-plugins/\\n│ └── nu_plugin_auth/ # Native Rust plugin\\n│ ├── src/\\n│ │ ├── main.rs # Plugin implementation\\n│ │ └── helpers.rs # Helper functions\\n│ └── README.md # Plugin documentation\\n├── platform/control-center/ # Control Center (Rust)\\n│ └── src/auth/ # JWT auth implementation\\n└── logs/ └── audit.log # Audit trail\\n```plaintext --- ## Related Documentation - **Security System Overview**: `docs/architecture/ADR-009-security-system-complete.md`\\n- **JWT Authentication**: `docs/architecture/JWT_AUTH_IMPLEMENTATION.md`\\n- **MFA Implementation**: `docs/architecture/MFA_IMPLEMENTATION_SUMMARY.md`\\n- **Plugin README**: `provisioning/core/plugins/nushell-plugins/nu_plugin_auth/README.md`\\n- **Control Center**: `provisioning/platform/control-center/README.md` --- ## Summary of Changes | File | Changes | Lines Added |\\n|------|---------|-------------|\\n| `lib_provisioning/plugins/auth.nu` | Added security policy enforcement functions | +260 |\\n| `config/config.defaults.toml` | Added security configuration section | +19 |\\n| `servers/create.nu` | Added auth check for server creation | +25 |\\n| `workflows/batch.nu` | Added auth check for batch workflow submission | +43 |\\n| `main_provisioning/commands/infrastructure.nu` | Added auth checks for all infrastructure commands | +90 |\\n| 
`lib_provisioning/providers/interface.nu` | Added authentication guidelines for providers | +65 |\\n| **Total** | **6 files modified** | **~500 lines** | --- ## Best Practices ### For Users 1. **Always login**: Keep your session active to avoid interruptions\\n2. **Use keyring**: Save credentials with `--save` flag for persistence\\n3. **Enable MFA**: Use MFA for production operations\\n4. **Check mode first**: Always test with `--check` before actual operations\\n5. **Monitor audit logs**: Review audit logs regularly for security ### For Developers 1. **Check auth early**: Verify authentication before expensive operations\\n2. **Log operations**: Always log authenticated operations for audit\\n3. **Clear error messages**: Provide helpful guidance for auth failures\\n4. **Respect check mode**: Always skip auth in check/dry-run mode\\n5. **Test both paths**: Test with and without authentication ### For Operators 1. **Production hardening**: Set `allow_skip_auth = false` in production\\n2. **MFA enforcement**: Require MFA for all production environments\\n3. **Monitor audit logs**: Set up log monitoring and alerts\\n4. **Token rotation**: Configure short token timeouts (15min default)\\n5. 
**Backup authentication**: Ensure multiple admins have MFA enrolled --- ## License MIT License - See LICENSE file for details --- ## Quick Reference **Version**: 1.0.0\\n**Last Updated**: 2025-10-09 --- ### Quick Commands #### Login ```bash\\nprovisioning auth login # Interactive password\\nprovisioning auth login --save # Save to keyring\\n```plaintext #### MFA ```bash\\nprovisioning auth mfa enroll totp # Enroll TOTP\\nprovisioning auth mfa verify --code 123456 # Verify code\\n```plaintext #### Status ```bash\\nprovisioning auth status # Show auth status\\nprovisioning auth verify # Verify token\\n```plaintext #### Logout ```bash\\nprovisioning auth logout # Logout current session\\nprovisioning auth logout --all # Logout all sessions\\n```plaintext --- ### Protected Operations | Operation | Auth | MFA (Prod) | MFA (Delete) | Check Mode |\\n|-----------|------|------------|--------------|------------|\\n| `server create` | ✅ | ✅ | ❌ | Skip |\\n| `server delete` | ✅ | ✅ | ✅ | Skip |\\n| `server list` | ❌ | ❌ | ❌ | - |\\n| `taskserv create` | ✅ | ✅ | ❌ | Skip |\\n| `taskserv delete` | ✅ | ✅ | ✅ | Skip |\\n| `cluster create` | ✅ | ✅ | ❌ | Skip |\\n| `cluster delete` | ✅ | ✅ | ✅ | Skip |\\n| `batch submit` | ✅ | ✅ | ❌ | - | --- ### Bypass Authentication (Dev/Test Only) #### Environment Variable ```bash\\nexport PROVISIONING_SKIP_AUTH=true\\nprovisioning server create test\\nunset PROVISIONING_SKIP_AUTH\\n```plaintext #### Check Mode (Always Allowed) ```bash\\nprovisioning server create prod --check\\nprovisioning taskserv delete k8s --check\\n```plaintext #### Config Flag ```toml\\n[security.bypass]\\nallow_skip_auth = true # Only in dev/test\\n```plaintext --- ### Configuration #### Security Settings ```toml\\n[security]\\nrequire_auth = true\\nrequire_mfa_for_production = true\\nrequire_mfa_for_destructive = true\\nauth_timeout = 3600 [security.bypass]\\nallow_skip_auth = false # true in dev only [plugins]\\nauth_enabled = true [platform.control_center]\\nurl = 
\\"http://localhost:3000\\"\\n```plaintext --- ### Error Messages #### Not Authenticated ```plaintext\\n❌ Authentication Required\\nOperation: server create web-01\\nTo login: provisioning auth login \\n```plaintext **Fix**: `provisioning auth login ` #### MFA Required ```plaintext\\n❌ MFA Verification Required\\nOperation: server delete web-01\\nReason: destructive operation\\n```plaintext **Fix**: `provisioning auth mfa verify --code ` #### Token Expired ```plaintext\\nError: Token verification failed\\n```plaintext **Fix**: Re-login: `provisioning auth login ` --- ### Troubleshooting | Error | Solution |\\n|-------|----------|\\n| Plugin not available | `plugin add target/release/nu_plugin_auth` |\\n| Control center offline | Start: `cd provisioning/platform/control-center && cargo run` |\\n| Invalid MFA code | Get fresh code (expires in 30s) |\\n| Token expired | Re-login: `provisioning auth login ` |\\n| Keyring access denied | Grant app access in system settings | --- ### Audit Logs ```bash\\n# View audit log\\ncat provisioning/logs/audit.log # Filter by user\\ncat provisioning/logs/audit.log | jq \'. | select(.user == \\"admin\\")\' # Filter by operation\\ncat provisioning/logs/audit.log | jq \'. 
| select(.operation == \\"server_create\\")\'\\n```plaintext --- ### CI/CD Integration #### Option 1: Skip Auth (Dev/Test Only) ```bash\\nexport PROVISIONING_SKIP_AUTH=true\\nprovisioning server create ci-server\\n```plaintext #### Option 2: Check Mode ```bash\\nprovisioning server create ci-server --check\\n```plaintext #### Option 3: Service Account (Future) ```bash\\nexport PROVISIONING_AUTH_TOKEN=\\"\\"\\nprovisioning server create ci-server\\n```plaintext --- ### Performance | Operation | Auth Overhead |\\n|-----------|---------------|\\n| Server create | ~20ms |\\n| Taskserv create | ~20ms |\\n| Batch submit | ~20ms |\\n| Check mode | 0ms (skipped) | --- ### Related Docs - **Full Guide**: `docs/user/AUTHENTICATION_LAYER_GUIDE.md`\\n- **Implementation**: `AUTHENTICATION_LAYER_IMPLEMENTATION_SUMMARY.md`\\n- **Security ADR**: `docs/architecture/ADR-009-security-system-complete.md` --- **Quick Help**: `provisioning help auth` or `provisioning auth --help` --- **Last Updated**: 2025-10-09\\n**Maintained By**: Security Team --- ## Setup Guide ### Complete Authentication Setup Guide Current Settings (from your config) ```plaintext\\n[security]\\nrequire_auth = true # ✅ Auth is REQUIRED\\nallow_skip_auth = false # ❌ Cannot skip with env var\\nauth_timeout = 3600 # Token valid for 1 hour [platform.control_center]\\nurl = \\"http://localhost:3000\\" # Control Center endpoint\\n```plaintext ### STEP 1: Start Control Center The Control Center is the authentication backend: ```bash\\n# Check if it\'s already running\\ncurl http://localhost:3000/health # If not running, start it\\ncd /Users/Akasha/project-provisioning/provisioning/platform/control-center\\ncargo run --release & # Wait for it to start (may take 30-60 seconds)\\nsleep 30\\ncurl http://localhost:3000/health\\n```plaintext Expected Output: ```json\\n{\\"status\\": \\"healthy\\"}\\n```plaintext ### STEP 2: Find Default Credentials Check for default user setup: ```bash\\n# Look for initialization scripts\\nls 
-la /Users/Akasha/project-provisioning/provisioning/platform/control-center/ # Check for README or setup instructions\\ncat /Users/Akasha/project-provisioning/provisioning/platform/control-center/README.md # Or check for default config\\ncat /Users/Akasha/project-provisioning/provisioning/platform/control-center/config.toml 2>/dev/null || echo \\"Config not found\\"\\n```plaintext ### STEP 3: Log In Once you have credentials (usually admin / password from setup): ```bash\\n# Interactive login - will prompt for password\\nprovisioning auth login # Or with username\\nprovisioning auth login admin # Verify you\'re logged in\\nprovisioning auth status\\n```plaintext Expected Success Output: ```plaintext\\n✓ Login successful! User: admin\\nRole: admin\\nExpires: 2025-10-22T14:30:00Z\\nMFA: false Session active and ready\\n```plaintext ### STEP 4: Now Create Your Server Once authenticated: ```bash\\n# Try server creation again\\nprovisioning server create sgoyol --check # Or with full details\\nprovisioning server create sgoyol --infra workspace_librecloud --check\\n```plaintext ### 🛠️ Alternative: Skip Auth for Development If you want to bypass authentication temporarily for testing: #### Option A: Edit config to allow skip ```bash\\n# You would need to parse and modify TOML - easier to do next option\\n```plaintext #### Option B: Use environment variable (if allowed by config) ```bash\\nexport PROVISIONING_SKIP_AUTH=true\\nprovisioning server create sgoyol\\nunset PROVISIONING_SKIP_AUTH\\n```plaintext #### Option C: Use check mode (always works, no auth needed) ```bash\\nprovisioning server create sgoyol --check\\n```plaintext #### Option D: Modify config.defaults.toml (permanent for dev) Edit: `provisioning/config/config.defaults.toml` Change line 193 to: ```toml\\nallow_skip_auth = true\\n```plaintext ### 🔍 Troubleshooting | Problem | Solution |\\n|----------------------------|---------------------------------------------------------------------|\\n| Control Center 
won\'t start | Check port 3000 not in use: `lsof -i :3000` |\\n| \\"No token found\\" error | Login with: `provisioning auth login` |\\n| Login fails | Verify Control Center is running: `curl http://localhost:3000/health` |\\n| Token expired | Re-login: `provisioning auth login` |\\n| Plugin not available | Using HTTP fallback - this is OK, works without plugin |","breadcrumbs":"Authentication Layer Guide » 1. Login to Platform","id":"1520","title":"1. Login to Platform"},"1521":{"body":"Version : 1.0.0 Last Updated : 2025-10-08 Status : Production Ready","breadcrumbs":"Config Encryption Guide » Configuration Encryption Guide","id":"1521","title":"Configuration Encryption Guide"},"1522":{"body":"The Provisioning Platform includes a comprehensive configuration encryption system that provides: Transparent Encryption/Decryption : Configs are automatically decrypted on load Multiple KMS Backends : Age, AWS KMS, HashiCorp Vault, Cosmian KMS Memory-Only Decryption : Secrets never written to disk in plaintext SOPS Integration : Industry-standard encryption with SOPS Sensitive Data Detection : Automatic scanning for unencrypted sensitive data","breadcrumbs":"Config Encryption Guide » Overview","id":"1522","title":"Overview"},"1523":{"body":"Prerequisites Quick Start Configuration Encryption KMS Backends CLI Commands Integration with Config Loader Best Practices Troubleshooting","breadcrumbs":"Config Encryption Guide » Table of Contents","id":"1523","title":"Table of Contents"},"1524":{"body":"","breadcrumbs":"Config Encryption Guide » Prerequisites","id":"1524","title":"Prerequisites"},"1525":{"body":"SOPS (v3.10.2+) # macOS\\nbrew install sops # Linux\\nwget https://github.com/mozilla/sops/releases/download/v3.10.2/sops-v3.10.2.linux.amd64\\nsudo mv sops-v3.10.2.linux.amd64 /usr/local/bin/sops\\nsudo chmod +x /usr/local/bin/sops Age (for Age backend - recommended) # macOS\\nbrew install age # Linux\\napt install age AWS CLI (for AWS KMS backend - optional) brew install 
awscli","breadcrumbs":"Config Encryption Guide » Required Tools","id":"1525","title":"Required Tools"},"1526":{"body":"# Check SOPS\\nsops --version # Check Age\\nage --version # Check AWS CLI (optional)\\naws --version\\n```plaintext --- ## Quick Start ### 1. Initialize Encryption Generate Age keys and create SOPS configuration: ```bash\\nprovisioning config init-encryption --kms age\\n```plaintext This will: - Generate Age key pair in `~/.config/sops/age/keys.txt`\\n- Display your public key (recipient)\\n- Create `.sops.yaml` in your project ### 2. Set Environment Variables Add to your shell profile (`~/.zshrc` or `~/.bashrc`): ```bash\\n# Age encryption\\nexport SOPS_AGE_RECIPIENTS=\\"age1ql3z7hjy54pw3hyww5ayyfg7zqgvc7w3j2elw8zmrj2kg5sfn9aqmcac8p\\"\\nexport PROVISIONING_KAGE=\\"$HOME/.config/sops/age/keys.txt\\"\\n```plaintext Replace the recipient with your actual public key. ### 3. Validate Setup ```bash\\nprovisioning config validate-encryption\\n```plaintext Expected output: ```plaintext\\n✅ Encryption configuration is valid SOPS installed: true Age backend: true KMS enabled: false Errors: 0 Warnings: 0\\n```plaintext ### 4. Encrypt Your First Config ```bash\\n# Create a config with sensitive data\\ncat > workspace/config/secure.yaml < edit -> re-encrypt)\\nprovisioning config edit-secure workspace/config/secure.enc.yaml\\n```plaintext This will: 1. Decrypt the file temporarily\\n2. Open in your `$EDITOR` (vim/nano/etc)\\n3. Re-encrypt when you save and close\\n4. 
Remove temporary decrypted file ### Check Encryption Status ```bash\\n# Check if file is encrypted\\nprovisioning config is-encrypted workspace/config/secure.yaml # Get detailed encryption info\\nprovisioning config encryption-info workspace/config/secure.yaml\\n```plaintext --- ## KMS Backends ### Age (Recommended for Development) **Pros**: - Simple file-based keys\\n- No external dependencies\\n- Fast and secure\\n- Works offline **Setup**: ```bash\\n# Initialize\\nprovisioning config init-encryption --kms age # Set environment variables\\nexport SOPS_AGE_RECIPIENTS=\\"age1...\\" # Your public key\\nexport PROVISIONING_KAGE=\\"$HOME/.config/sops/age/keys.txt\\"\\n```plaintext **Encrypt/Decrypt**: ```bash\\nprovisioning config encrypt secrets.yaml --kms age\\nprovisioning config decrypt secrets.enc.yaml\\n```plaintext ### AWS KMS (Production) **Pros**: - Centralized key management\\n- Audit logging\\n- IAM integration\\n- Key rotation **Setup**: 1. Create KMS key in AWS Console\\n2. Configure AWS credentials: ```bash aws configure Update .sops.yaml: creation_rules: - path_regex: .*\\\\.enc\\\\.yaml$ kms: \\"arn:aws:kms:us-east-1:123456789012:key/12345678-1234-1234-1234-123456789012\\" Encrypt/Decrypt : provisioning config encrypt secrets.yaml --kms aws-kms\\nprovisioning config decrypt secrets.enc.yaml\\n```plaintext ### HashiCorp Vault (Enterprise) **Pros**: - Dynamic secrets\\n- Centralized secret management\\n- Audit logging\\n- Policy-based access **Setup**: 1. 
Configure Vault address and token: ```bash export VAULT_ADDR=\\"https://vault.example.com:8200\\" export VAULT_TOKEN=\\"s.xxxxxxxxxxxxxx\\" Update configuration: # workspace/config/provisioning.yaml\\nkms: enabled: true mode: \\"remote\\" vault: address: \\"https://vault.example.com:8200\\" transit_key: \\"provisioning\\" Encrypt/Decrypt : provisioning config encrypt secrets.yaml --kms vault\\nprovisioning config decrypt secrets.enc.yaml\\n```plaintext ### Cosmian KMS (Confidential Computing) **Pros**: - Confidential computing support\\n- Zero-knowledge architecture\\n- Post-quantum ready\\n- Cloud-agnostic **Setup**: 1. Deploy Cosmian KMS server\\n2. Update configuration: ```toml kms: enabled: true mode: \\"remote\\" remote: endpoint: \\"https://kms.example.com:9998\\" auth_method: \\"certificate\\" client_cert: \\"/path/to/client.crt\\" client_key: \\"/path/to/client.key\\" Encrypt/Decrypt : provisioning config encrypt secrets.yaml --kms cosmian\\nprovisioning config decrypt secrets.enc.yaml\\n```plaintext --- ## CLI Commands ### Configuration Encryption Commands | Command | Description |\\n|---------|-------------|\\n| `config encrypt ` | Encrypt configuration file |\\n| `config decrypt ` | Decrypt configuration file |\\n| `config edit-secure ` | Edit encrypted file securely |\\n| `config rotate-keys ` | Rotate encryption keys |\\n| `config is-encrypted ` | Check if file is encrypted |\\n| `config encryption-info ` | Show encryption details |\\n| `config validate-encryption` | Validate encryption setup |\\n| `config scan-sensitive ` | Find unencrypted sensitive configs |\\n| `config encrypt-all ` | Encrypt all sensitive configs |\\n| `config init-encryption` | Initialize encryption (generate keys) | ### Examples ```bash\\n# Encrypt workspace config\\nprovisioning config encrypt workspace/config/secure.yaml --in-place # Edit encrypted file\\nprovisioning config edit-secure workspace/config/secure.yaml # Scan for unencrypted sensitive configs\\nprovisioning config 
scan-sensitive workspace/config --recursive # Encrypt all sensitive configs in workspace\\nprovisioning config encrypt-all workspace/config --kms age --recursive # Check encryption status\\nprovisioning config is-encrypted workspace/config/secure.yaml # Get detailed info\\nprovisioning config encryption-info workspace/config/secure.yaml # Validate setup\\nprovisioning config validate-encryption\\n```plaintext --- ## Integration with Config Loader ### Automatic Decryption The config loader automatically detects and decrypts encrypted files: ```nushell\\n# Load encrypted config (automatically decrypted in memory)\\nuse lib_provisioning/config/loader.nu let config = (load-provisioning-config --debug)\\n```plaintext **Key Features**: - **Transparent**: No code changes needed\\n- **Memory-Only**: Decrypted content never written to disk\\n- **Fallback**: If decryption fails, attempts to load as plain file\\n- **Debug Support**: Shows decryption status with `--debug` flag ### Manual Loading ```nushell\\nuse lib_provisioning/config/encryption.nu # Load encrypted config\\nlet secure_config = (load-encrypted-config \\"workspace/config/secure.enc.yaml\\") # Memory-only decryption (no file created)\\nlet decrypted_content = (decrypt-config-memory \\"workspace/config/secure.enc.yaml\\")\\n```plaintext ### Configuration Hierarchy with Encryption The system supports encrypted files at any level: ```plaintext\\n1. workspace/{name}/config/provisioning.yaml ← Can be encrypted\\n2. workspace/{name}/config/providers/*.toml ← Can be encrypted\\n3. workspace/{name}/config/platform/*.toml ← Can be encrypted\\n4. ~/.../provisioning/ws_{name}.yaml ← Can be encrypted\\n5. Environment variables (PROVISIONING_*) ← Plain text\\n```plaintext --- ## Best Practices ### 1. 
Encrypt All Sensitive Data **Always encrypt configs containing**: - Passwords\\n- API keys\\n- Secret keys\\n- Private keys\\n- Tokens\\n- Credentials **Scan for unencrypted sensitive data**: ```bash\\nprovisioning config scan-sensitive workspace --recursive\\n```plaintext ### 2. Use Appropriate KMS Backend | Environment | Recommended Backend |\\n|-------------|---------------------|\\n| Development | Age (file-based) |\\n| Staging | AWS KMS or Vault |\\n| Production | AWS KMS or Vault |\\n| CI/CD | AWS KMS with IAM roles | ### 3. Key Management **Age Keys**: - Store private keys securely: `~/.config/sops/age/keys.txt`\\n- Set file permissions: `chmod 600 ~/.config/sops/age/keys.txt`\\n- Backup keys securely (encrypted backup)\\n- Never commit private keys to git **AWS KMS**: - Use separate keys per environment\\n- Enable key rotation\\n- Use IAM policies for access control\\n- Monitor usage with CloudTrail **Vault**: - Use transit engine for encryption\\n- Enable audit logging\\n- Implement least-privilege policies\\n- Regular policy reviews ### 4. File Organization ```plaintext\\nworkspace/\\n└── config/ ├── provisioning.yaml # Plain (no secrets) ├── secure.yaml # Encrypted (SOPS auto-detects) ├── providers/ │ ├── aws.toml # Plain (no secrets) │ └── aws-credentials.enc.toml # Encrypted └── platform/ └── database.enc.yaml # Encrypted\\n```plaintext ### 5. Git Integration **Add to `.gitignore`**: ```gitignore\\n# Unencrypted sensitive files\\n**/secrets.yaml\\n**/credentials.yaml\\n**/*.dec.yaml\\n**/*.dec.toml # Temporary decrypted files\\n*.tmp.yaml\\n*.tmp.toml\\n```plaintext **Commit encrypted files**: ```bash\\n# Encrypted files are safe to commit\\ngit add workspace/config/secure.enc.yaml\\ngit commit -m \\"Add encrypted configuration\\"\\n```plaintext ### 6. 
Rotation Strategy **Regular Key Rotation**: ```bash\\n# Generate new Age key\\nage-keygen -o ~/.config/sops/age/keys-new.txt # Update .sops.yaml with new recipient # Rotate keys for file\\nprovisioning config rotate-keys workspace/config/secure.yaml \\n```plaintext **Frequency**: - Development: Annually\\n- Production: Quarterly\\n- After team member departure: Immediately ### 7. Audit and Monitoring **Track encryption status**: ```bash\\n# Regular scans\\nprovisioning config scan-sensitive workspace --recursive # Validate encryption setup\\nprovisioning config validate-encryption\\n```plaintext **Monitor access** (with Vault/AWS KMS): - Enable audit logging\\n- Review access patterns\\n- Alert on anomalies --- ## Troubleshooting ### SOPS Not Found **Error**: ```plaintext\\nSOPS binary not found\\n```plaintext **Solution**: ```bash\\n# Install SOPS\\nbrew install sops # Verify\\nsops --version\\n```plaintext ### Age Key Not Found **Error**: ```plaintext\\nAge key file not found: ~/.config/sops/age/keys.txt\\n```plaintext **Solution**: ```bash\\n# Generate new key\\nmkdir -p ~/.config/sops/age\\nage-keygen -o ~/.config/sops/age/keys.txt # Set environment variable\\nexport PROVISIONING_KAGE=\\"$HOME/.config/sops/age/keys.txt\\"\\n```plaintext ### SOPS_AGE_RECIPIENTS Not Set **Error**: ```plaintext\\nno AGE_RECIPIENTS for file.yaml\\n```plaintext **Solution**: ```bash\\n# Extract public key from private key\\ngrep \\"public key:\\" ~/.config/sops/age/keys.txt # Set environment variable\\nexport SOPS_AGE_RECIPIENTS=\\"age1ql3z7hjy54pw3hyww5ayyfg7zqgvc7w3j2elw8zmrj2kg5sfn9aqmcac8p\\"\\n```plaintext ### Decryption Failed **Error**: ```plaintext\\nFailed to decrypt configuration file\\n```plaintext **Solutions**: 1. 
**Wrong key**: ```bash # Verify you have the correct private key provisioning config validate-encryption File corrupted : # Check file integrity\\nsops --decrypt workspace/config/secure.yaml Wrong backend : # Check SOPS metadata in file\\nhead -20 workspace/config/secure.yaml","breadcrumbs":"Config Encryption Guide » Verify Installation","id":"1526","title":"Verify Installation"},"1527":{"body":"Error : AccessDeniedException: User is not authorized to perform: kms:Decrypt\\n```plaintext **Solution**: ```bash\\n# Check AWS credentials\\naws sts get-caller-identity # Verify KMS key policy allows your IAM user/role\\naws kms describe-key --key-id \\n```plaintext ### Vault Connection Failed **Error**: ```plaintext\\nVault encryption failed: connection refused\\n```plaintext **Solution**: ```bash\\n# Verify Vault address\\necho $VAULT_ADDR # Check connectivity\\ncurl -k $VAULT_ADDR/v1/sys/health # Verify token\\nvault token lookup\\n```plaintext --- ## Security Considerations ### Threat Model **Protected Against**: - ✅ Plaintext secrets in git\\n- ✅ Accidental secret exposure\\n- ✅ Unauthorized file access\\n- ✅ Key compromise (with rotation) **Not Protected Against**: - ❌ Memory dumps during decryption\\n- ❌ Root/admin access to running process\\n- ❌ Compromised Age/KMS keys\\n- ❌ Social engineering ### Security Best Practices 1. **Principle of Least Privilege**: Only grant decryption access to those who need it\\n2. **Key Separation**: Use different keys for different environments\\n3. **Regular Audits**: Review who has access to keys\\n4. **Secure Key Storage**: Never store private keys in git\\n5. **Rotation**: Regularly rotate encryption keys\\n6. 
**Monitoring**: Monitor decryption operations (with AWS KMS/Vault) --- ## Additional Resources - **SOPS Documentation**: \\n- **Age Encryption**: \\n- **AWS KMS**: \\n- **HashiCorp Vault**: \\n- **Cosmian KMS**: --- ## Support For issues or questions: - Check troubleshooting section above\\n- Run: `provisioning config validate-encryption`\\n- Review logs with `--debug` flag --- ## Quick Reference ### Setup (One-time) ```bash\\n# 1. Initialize encryption\\nprovisioning config init-encryption --kms age # 2. Set environment variables (add to ~/.zshrc or ~/.bashrc)\\nexport SOPS_AGE_RECIPIENTS=\\"age1ql3z7hjy54pw3hyww5ayyfg7zqgvc7w3j2elw8zmrj2kg5sfn9aqmcac8p\\"\\nexport PROVISIONING_KAGE=\\"$HOME/.config/sops/age/keys.txt\\" # 3. Validate setup\\nprovisioning config validate-encryption\\n```plaintext ### Common Commands | Task | Command |\\n|------|---------|\\n| **Encrypt file** | `provisioning config encrypt secrets.yaml --in-place` |\\n| **Decrypt file** | `provisioning config decrypt secrets.enc.yaml` |\\n| **Edit encrypted** | `provisioning config edit-secure secrets.enc.yaml` |\\n| **Check if encrypted** | `provisioning config is-encrypted secrets.yaml` |\\n| **Scan for unencrypted** | `provisioning config scan-sensitive workspace --recursive` |\\n| **Encrypt all sensitive** | `provisioning config encrypt-all workspace/config --kms age` |\\n| **Validate setup** | `provisioning config validate-encryption` |\\n| **Show encryption info** | `provisioning config encryption-info secrets.yaml` | ### File Naming Conventions Automatically encrypted by SOPS: - `workspace/*/config/secure.yaml` ← Auto-encrypted\\n- `*.enc.yaml` ← Auto-encrypted\\n- `*.enc.yml` ← Auto-encrypted\\n- `*.enc.toml` ← Auto-encrypted\\n- `workspace/*/config/providers/*credentials*.toml` ← Auto-encrypted ### Quick Workflow ```bash\\n# Create config with secrets\\ncat > workspace/config/secure.yaml < edit -> re-encrypt)\\nprovisioning config edit-secure workspace/config/secure.yaml # Configs are 
auto-decrypted when loaded\\nprovisioning env # Automatically decrypts secure.yaml\\n```plaintext ### KMS Backends | Backend | Use Case | Setup Command |\\n|---------|----------|---------------|\\n| **Age** | Development, simple setup | `provisioning config init-encryption --kms age` |\\n| **AWS KMS** | Production, AWS environments | Configure in `.sops.yaml` |\\n| **Vault** | Enterprise, dynamic secrets | Set `VAULT_ADDR` and `VAULT_TOKEN` |\\n| **Cosmian** | Confidential computing | Configure in `config.toml` | ### Security Checklist - ✅ Encrypt all files with passwords, API keys, secrets\\n- ✅ Never commit unencrypted secrets to git\\n- ✅ Set file permissions: `chmod 600 ~/.config/sops/age/keys.txt`\\n- ✅ Add plaintext files to `.gitignore`: `*.dec.yaml`, `secrets.yaml`\\n- ✅ Regular key rotation (quarterly for production)\\n- ✅ Separate keys per environment (dev/staging/prod)\\n- ✅ Backup Age keys securely (encrypted backup) ### Troubleshooting | Problem | Solution |\\n|---------|----------|\\n| `SOPS binary not found` | `brew install sops` |\\n| `Age key file not found` | `provisioning config init-encryption --kms age` |\\n| `SOPS_AGE_RECIPIENTS not set` | `export SOPS_AGE_RECIPIENTS=\\"age1...\\"` |\\n| `Decryption failed` | Check key file: `provisioning config validate-encryption` |\\n| `AWS KMS Access Denied` | Verify IAM permissions: `aws sts get-caller-identity` | ### Testing ```bash\\n# Run all encryption tests\\nnu provisioning/core/nulib/lib_provisioning/config/encryption_tests.nu # Run specific test\\nnu provisioning/core/nulib/lib_provisioning/config/encryption_tests.nu --test roundtrip # Test full workflow\\nnu provisioning/core/nulib/lib_provisioning/config/encryption_tests.nu test-full-encryption-workflow # Test KMS backend\\nuse lib_provisioning/kms/client.nu\\nkms-test --backend age\\n```plaintext ### Integration Configs are **automatically decrypted** when loaded: ```nushell\\n# Nushell code - encryption is transparent\\nuse 
lib_provisioning/config/loader.nu # Auto-decrypts encrypted files in memory\\nlet config = (load-provisioning-config) # Access secrets normally\\nlet db_password = ($config | get database.password)\\n```plaintext ### Emergency Key Recovery If you lose your Age key: 1. **Check backups**: `~/.config/sops/age/keys.txt.backup`\\n2. **Check other systems**: Keys might be on other dev machines\\n3. **Contact team**: Team members with access can re-encrypt for you\\n4. **Rotate secrets**: If keys are lost, rotate all secrets ### Advanced #### Multiple Recipients (Team Access) ```yaml\\n# .sops.yaml\\ncreation_rules: - path_regex: .*\\\\.enc\\\\.yaml$ age: >- age1ql3z7hjy54pw3hyww5ayyfg7zqgvc7w3j2elw8zmrj2kg5sfn9aqmcac8p, age1ql3z7hjy54pw3hyww5ayyfg7zqgvc7w3j2elw8zmrj2kg5sfn9aqmcac8q\\n```plaintext #### Key Rotation ```bash\\n# Generate new key\\nage-keygen -o ~/.config/sops/age/keys-new.txt # Update .sops.yaml with new recipient # Rotate keys for file\\nprovisioning config rotate-keys workspace/config/secure.yaml \\n```plaintext #### Scan and Encrypt All ```bash\\n# Find all unencrypted sensitive configs\\nprovisioning config scan-sensitive workspace --recursive # Encrypt them all\\nprovisioning config encrypt-all workspace --kms age --recursive # Verify\\nprovisioning config scan-sensitive workspace --recursive\\n```plaintext ### Documentation - **Full Guide**: `docs/user/CONFIG_ENCRYPTION_GUIDE.md`\\n- **SOPS Docs**: \\n- **Age Docs**: --- **Last Updated**: 2025-10-08\\n**Version**: 1.0.0","breadcrumbs":"Config Encryption Guide » AWS KMS Access Denied","id":"1527","title":"AWS KMS Access Denied"},"1528":{"body":"","breadcrumbs":"Security System » Complete Security System (v4.0.0)","id":"1528","title":"Complete Security System (v4.0.0)"},"1529":{"body":"A comprehensive security system with 39,699 lines across 12 components providing enterprise-grade protection for infrastructure automation.","breadcrumbs":"Security System » 🔐 Enterprise-Grade Security 
Implementation","id":"1529","title":"🔐 Enterprise-Grade Security Implementation"},"153":{"body":"","breadcrumbs":"Installation Steps » Troubleshooting","id":"153","title":"Troubleshooting"},"1530":{"body":"","breadcrumbs":"Security System » Core Security Components","id":"1530","title":"Core Security Components"},"1531":{"body":"Type : RS256 token-based authentication Features : Argon2id hashing, token rotation, session management Roles : 5 distinct role levels with inheritance Commands : provisioning login\\nprovisioning mfa totp verify","breadcrumbs":"Security System » 1. Authentication (JWT)","id":"1531","title":"1. Authentication (JWT)"},"1532":{"body":"Type : Policy-as-code using Cedar authorization engine Features : Context-aware policies, hot reload, fine-grained control Updates : Dynamic policy reloading without service restart","breadcrumbs":"Security System » 2. Authorization (Cedar)","id":"1532","title":"2. Authorization (Cedar)"},"1533":{"body":"Methods : TOTP (Time-based OTP) + WebAuthn/FIDO2 Features : Backup codes, rate limiting, device binding Commands : provisioning mfa totp enroll\\nprovisioning mfa webauthn enroll","breadcrumbs":"Security System » 3. Multi-Factor Authentication (MFA)","id":"1533","title":"3. Multi-Factor Authentication (MFA)"},"1534":{"body":"Dynamic Secrets : AWS STS, SSH keys, UpCloud credentials KMS Integration : Vault + AWS KMS + Age + Cosmian Features : Auto-cleanup, TTL management, rotation policies Commands : provisioning secrets generate aws --ttl 1hr\\nprovisioning ssh connect server01","breadcrumbs":"Security System » 4. Secrets Management","id":"1534","title":"4. Secrets Management"},"1535":{"body":"Backends : RustyVault, Age, AWS KMS, HashiCorp Vault, Cosmian Features : Envelope encryption, key rotation, secure storage Commands : provisioning kms encrypt\\nprovisioning config encrypt secure.yaml","breadcrumbs":"Security System » 5. Key Management System (KMS)","id":"1535","title":"5. 
Key Management System (KMS)"},"1536":{"body":"Format : Structured JSON logs with full context Compliance : GDPR-compliant with PII filtering Retention : 7-year data retention policy Exports : 5 export formats (JSON, CSV, SYSLOG, Splunk, CloudWatch)","breadcrumbs":"Security System » 6. Audit Logging","id":"1536","title":"6. Audit Logging"},"1537":{"body":"Approval : Multi-party approval workflow Features : Temporary elevated privileges, auto-revocation, audit trail Commands : provisioning break-glass request \\"reason\\"\\nprovisioning break-glass approve ","breadcrumbs":"Security System » 7. Break-Glass Emergency Access","id":"1537","title":"7. Break-Glass Emergency Access"},"1538":{"body":"Standards : GDPR, SOC2, ISO 27001, incident response procedures Features : Compliance reporting, audit trails, policy enforcement Commands : provisioning compliance report\\nprovisioning compliance gdpr export ","breadcrumbs":"Security System » 8. Compliance Management","id":"1538","title":"8. Compliance Management"},"1539":{"body":"Filtering : By user, action, time range, resource Features : Structured query language, real-time search Commands : provisioning audit query --user alice --action deploy --from 24h","breadcrumbs":"Security System » 9. Audit Query System","id":"1539","title":"9. Audit Query System"},"154":{"body":"If plugins aren\'t recognized: # Rebuild plugin registry\\nnu -c \\"plugin list; plugin use tera\\"","breadcrumbs":"Installation Steps » Nushell Plugin Not Found","id":"154","title":"Nushell Plugin Not Found"},"1540":{"body":"Features : Rotation policies, expiration tracking, revocation Integration : Seamless with auth system","breadcrumbs":"Security System » 10. Token Management","id":"1540","title":"10. Token Management"},"1541":{"body":"Model : Role-based access control (RBAC) Features : Resource-level permissions, delegation, audit","breadcrumbs":"Security System » 11. Access Control","id":"1541","title":"11. 
Access Control"},"1542":{"body":"Standards : AES-256, TLS 1.3, envelope encryption Coverage : At-rest and in-transit encryption","breadcrumbs":"Security System » 12. Encryption","id":"1542","title":"12. Encryption"},"1543":{"body":"Overhead : <20ms per secure operation Tests : 350+ comprehensive test cases Endpoints : 83+ REST API endpoints CLI Commands : 111+ security-related commands","breadcrumbs":"Security System » Performance Characteristics","id":"1543","title":"Performance Characteristics"},"1544":{"body":"Component Command Purpose Login provisioning login User authentication MFA TOTP provisioning mfa totp enroll Setup time-based MFA MFA WebAuthn provisioning mfa webauthn enroll Setup hardware security key Secrets provisioning secrets generate aws --ttl 1hr Generate temporary credentials SSH provisioning ssh connect server01 Secure SSH session KMS Encrypt provisioning kms encrypt Encrypt configuration Break-Glass provisioning break-glass request \\"reason\\" Request emergency access Compliance provisioning compliance report Generate compliance report GDPR Export provisioning compliance gdpr export Export user data Audit provisioning audit query --user alice --action deploy --from 24h Search audit logs","breadcrumbs":"Security System » Quick Reference","id":"1544","title":"Quick Reference"},"1545":{"body":"Security system is integrated throughout provisioning platform: Embedded : All authentication/authorization checks Non-blocking : <20ms overhead on operations Graceful degradation : Fallback mechanisms for partial failures Hot reload : Policies update without service restart","breadcrumbs":"Security System » Architecture","id":"1545","title":"Architecture"},"1546":{"body":"Security policies and settings are defined in: provisioning/kcl/security.k - KCL security schema definitions provisioning/config/security/*.toml - Security policy configurations Environment-specific overrides in workspace/config/","breadcrumbs":"Security System » 
Configuration","id":"1546","title":"Configuration"},"1547":{"body":"Full implementation: ADR-009: Security System Complete User guides: Authentication Layer Guide Admin guides: MFA Admin Setup Guide Implementation details: Various documentation subdirectories","breadcrumbs":"Security System » Documentation","id":"1547","title":"Documentation"},"1548":{"body":"# Show security help\\nprovisioning help security # Show specific security command help\\nprovisioning login --help\\nprovisioning mfa --help\\nprovisioning secrets --help","breadcrumbs":"Security System » Help Commands","id":"1548","title":"Help Commands"},"1549":{"body":"Version : 1.0.0 Date : 2025-10-08 Status : Production-ready","breadcrumbs":"RustyVault KMS Guide » RustyVault KMS Backend Guide","id":"1549","title":"RustyVault KMS Backend Guide"},"155":{"body":"If you encounter permission errors: # Ensure proper ownership\\nsudo chown -R $USER:$USER ~/.config/provisioning # Check PATH\\necho $PATH | grep provisioning","breadcrumbs":"Installation Steps » Permission Denied","id":"155","title":"Permission Denied"},"1550":{"body":"RustyVault is a self-hosted, Rust-based secrets management system that provides a Vault-compatible API . 
The provisioning platform now supports RustyVault as a KMS backend alongside Age, Cosmian, AWS KMS, and HashiCorp Vault.","breadcrumbs":"RustyVault KMS Guide » Overview","id":"1550","title":"Overview"},"1551":{"body":"Self-hosted : Full control over your key management infrastructure Pure Rust : Better performance and memory safety Vault-compatible : Drop-in replacement for HashiCorp Vault Transit engine OSI-approved License : Apache 2.0 (vs HashiCorp\'s BSL) Embeddable : Can run as standalone service or embedded library No Vendor Lock-in : Open-source alternative to proprietary KMS solutions","breadcrumbs":"RustyVault KMS Guide » Why RustyVault?","id":"1551","title":"Why RustyVault?"},"1552":{"body":"KMS Service Backends:\\n├── Age (local development, file-based)\\n├── Cosmian (privacy-preserving, production)\\n├── AWS KMS (cloud-native AWS)\\n├── HashiCorp Vault (enterprise, external)\\n└── RustyVault (self-hosted, embedded) ✨ NEW\\n```plaintext --- ## Installation ### Option 1: Standalone RustyVault Server ```bash\\n# Install RustyVault binary\\ncargo install rusty_vault # Start RustyVault server\\nrustyvault server -config=/path/to/config.hcl\\n```plaintext ### Option 2: Docker Deployment ```bash\\n# Pull RustyVault image (if available)\\ndocker pull tongsuo/rustyvault:latest # Run RustyVault container\\ndocker run -d \\\\ --name rustyvault \\\\ -p 8200:8200 \\\\ -v $(pwd)/config:/vault/config \\\\ -v $(pwd)/data:/vault/data \\\\ tongsuo/rustyvault:latest\\n```plaintext ### Option 3: From Source ```bash\\n# Clone repository\\ngit clone https://github.com/Tongsuo-Project/RustyVault.git\\ncd RustyVault # Build and run\\ncargo build --release\\n./target/release/rustyvault server -config=config.hcl\\n```plaintext --- ## Configuration ### RustyVault Server Configuration Create `rustyvault-config.hcl`: ```hcl\\n# RustyVault Server Configuration storage \\"file\\" { path = \\"/vault/data\\"\\n} listener \\"tcp\\" { address = \\"0.0.0.0:8200\\" tls_disable = true # 
Enable TLS in production\\n} api_addr = \\"http://127.0.0.1:8200\\"\\ncluster_addr = \\"https://127.0.0.1:8201\\" # Enable Transit secrets engine\\ndefault_lease_ttl = \\"168h\\"\\nmax_lease_ttl = \\"720h\\"\\n```plaintext ### Initialize RustyVault ```bash\\n# Initialize (first time only)\\nexport VAULT_ADDR=\'http://127.0.0.1:8200\'\\nrustyvault operator init # Unseal (after every restart)\\nrustyvault operator unseal \\nrustyvault operator unseal \\nrustyvault operator unseal # Save root token\\nexport RUSTYVAULT_TOKEN=\'\'\\n```plaintext ### Enable Transit Engine ```bash\\n# Enable transit secrets engine\\nrustyvault secrets enable transit # Create encryption key\\nrustyvault write -f transit/keys/provisioning-main # Verify key creation\\nrustyvault read transit/keys/provisioning-main\\n```plaintext --- ## KMS Service Configuration ### Update `provisioning/config/kms.toml` ```toml\\n[kms]\\ntype = \\"rustyvault\\"\\nserver_url = \\"http://localhost:8200\\"\\ntoken = \\"${RUSTYVAULT_TOKEN}\\"\\nmount_point = \\"transit\\"\\nkey_name = \\"provisioning-main\\"\\ntls_verify = true [service]\\nbind_addr = \\"0.0.0.0:8081\\"\\nlog_level = \\"info\\"\\naudit_logging = true [tls]\\nenabled = false # Set true with HTTPS\\n```plaintext ### Environment Variables ```bash\\n# RustyVault connection\\nexport RUSTYVAULT_ADDR=\\"http://localhost:8200\\"\\nexport RUSTYVAULT_TOKEN=\\"s.xxxxxxxxxxxxxxxxxxxxxx\\"\\nexport RUSTYVAULT_MOUNT_POINT=\\"transit\\"\\nexport RUSTYVAULT_KEY_NAME=\\"provisioning-main\\"\\nexport RUSTYVAULT_TLS_VERIFY=\\"true\\" # KMS service\\nexport KMS_BACKEND=\\"rustyvault\\"\\nexport KMS_BIND_ADDR=\\"0.0.0.0:8081\\"\\n```plaintext --- ## Usage ### Start KMS Service ```bash\\n# With RustyVault backend\\ncd provisioning/platform/kms-service\\ncargo run # With custom config\\ncargo run -- --config=/path/to/kms.toml\\n```plaintext ### CLI Operations ```bash\\n# Encrypt configuration file\\nprovisioning kms encrypt provisioning/config/secrets.yaml # Decrypt 
configuration\\nprovisioning kms decrypt provisioning/config/secrets.yaml.enc # Generate data key (envelope encryption)\\nprovisioning kms generate-key --spec AES256 # Health check\\nprovisioning kms health\\n```plaintext ### REST API Usage ```bash\\n# Health check\\ncurl http://localhost:8081/health # Encrypt data\\ncurl -X POST http://localhost:8081/encrypt \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{ \\"plaintext\\": \\"SGVsbG8sIFdvcmxkIQ==\\", \\"context\\": \\"environment=production\\" }\' # Decrypt data\\ncurl -X POST http://localhost:8081/decrypt \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{ \\"ciphertext\\": \\"vault:v1:...\\", \\"context\\": \\"environment=production\\" }\' # Generate data key\\ncurl -X POST http://localhost:8081/datakey/generate \\\\ -H \\"Content-Type: application/json\\" \\\\ -d \'{\\"key_spec\\": \\"AES_256\\"}\'\\n```plaintext --- ## Advanced Features ### Context-based Encryption (AAD) Additional authenticated data binds encrypted data to specific contexts: ```bash\\n# Encrypt with context\\ncurl -X POST http://localhost:8081/encrypt \\\\ -d \'{ \\"plaintext\\": \\"c2VjcmV0\\", \\"context\\": \\"environment=prod,service=api\\" }\' # Decrypt requires same context\\ncurl -X POST http://localhost:8081/decrypt \\\\ -d \'{ \\"ciphertext\\": \\"vault:v1:...\\", \\"context\\": \\"environment=prod,service=api\\" }\'\\n```plaintext ### Envelope Encryption For large files, use envelope encryption: ```bash\\n# 1. Generate data key\\nDATA_KEY=$(curl -X POST http://localhost:8081/datakey/generate \\\\ -d \'{\\"key_spec\\": \\"AES_256\\"}\' | jq -r \'.plaintext\') # 2. Encrypt large file with data key (locally)\\nopenssl enc -aes-256-cbc -in large-file.bin -out encrypted.bin -K $DATA_KEY # 3. 
Store encrypted data key (from response)\\necho \\"vault:v1:...\\" > encrypted-data-key.txt\\n```plaintext ### Key Rotation ```bash\\n# Rotate encryption key in RustyVault\\nrustyvault write -f transit/keys/provisioning-main/rotate # Verify new version\\nrustyvault read transit/keys/provisioning-main # Rewrap existing ciphertext with new key version\\ncurl -X POST http://localhost:8081/rewrap \\\\ -d \'{\\"ciphertext\\": \\"vault:v1:...\\"}\'\\n```plaintext --- ## Production Deployment ### High Availability Setup Deploy multiple RustyVault instances behind a load balancer: ```yaml\\n# docker-compose.yml\\nversion: \'3.8\' services: rustyvault-1: image: tongsuo/rustyvault:latest ports: - \\"8200:8200\\" volumes: - ./config:/vault/config - vault-data-1:/vault/data rustyvault-2: image: tongsuo/rustyvault:latest ports: - \\"8201:8200\\" volumes: - ./config:/vault/config - vault-data-2:/vault/data lb: image: nginx:alpine ports: - \\"80:80\\" volumes: - ./nginx.conf:/etc/nginx/nginx.conf depends_on: - rustyvault-1 - rustyvault-2 volumes: vault-data-1: vault-data-2:\\n```plaintext ### TLS Configuration ```toml\\n# kms.toml\\n[kms]\\ntype = \\"rustyvault\\"\\nserver_url = \\"https://vault.example.com:8200\\"\\ntoken = \\"${RUSTYVAULT_TOKEN}\\"\\ntls_verify = true [tls]\\nenabled = true\\ncert_path = \\"/etc/kms/certs/server.crt\\"\\nkey_path = \\"/etc/kms/certs/server.key\\"\\nca_path = \\"/etc/kms/certs/ca.crt\\"\\n```plaintext ### Auto-Unseal (AWS KMS) ```hcl\\n# rustyvault-config.hcl\\nseal \\"awskms\\" { region = \\"us-east-1\\" kms_key_id = \\"arn:aws:kms:us-east-1:123456789012:key/...\\"\\n}\\n```plaintext --- ## Monitoring ### Health Checks ```bash\\n# RustyVault health\\ncurl http://localhost:8200/v1/sys/health # KMS service health\\ncurl http://localhost:8081/health # Metrics (if enabled)\\ncurl http://localhost:8081/metrics\\n```plaintext ### Audit Logging Enable audit logging in RustyVault: ```hcl\\n# rustyvault-config.hcl\\naudit { path = 
\\"/vault/logs/audit.log\\" format = \\"json\\"\\n}\\n```plaintext --- ## Troubleshooting ### Common Issues **1. Connection Refused** ```bash\\n# Check RustyVault is running\\ncurl http://localhost:8200/v1/sys/health # Check token is valid\\nexport VAULT_ADDR=\'http://localhost:8200\'\\nrustyvault token lookup\\n```plaintext **2. Authentication Failed** ```bash\\n# Verify token in environment\\necho $RUSTYVAULT_TOKEN # Renew token if needed\\nrustyvault token renew\\n```plaintext **3. Key Not Found** ```bash\\n# List available keys\\nrustyvault list transit/keys # Create missing key\\nrustyvault write -f transit/keys/provisioning-main\\n```plaintext **4. TLS Verification Failed** ```bash\\n# Disable TLS verification (dev only)\\nexport RUSTYVAULT_TLS_VERIFY=false # Or add CA certificate\\nexport RUSTYVAULT_CACERT=/path/to/ca.crt\\n```plaintext --- ## Migration from Other Backends ### From HashiCorp Vault RustyVault is API-compatible, minimal changes required: ```bash\\n# Old config (Vault)\\n[kms]\\ntype = \\"vault\\"\\naddress = \\"https://vault.example.com:8200\\"\\ntoken = \\"${VAULT_TOKEN}\\" # New config (RustyVault)\\n[kms]\\ntype = \\"rustyvault\\"\\nserver_url = \\"http://rustyvault.example.com:8200\\"\\ntoken = \\"${RUSTYVAULT_TOKEN}\\"\\n```plaintext ### From Age Re-encrypt existing encrypted files: ```bash\\n# 1. Decrypt with Age\\nprovisioning kms decrypt --backend age secrets.enc > secrets.plain # 2. Encrypt with RustyVault\\nprovisioning kms encrypt --backend rustyvault secrets.plain > secrets.rustyvault.enc\\n```plaintext --- ## Security Considerations ### Best Practices 1. **Enable TLS**: Always use HTTPS in production\\n2. **Rotate Tokens**: Regularly rotate RustyVault tokens\\n3. **Least Privilege**: Use policies to restrict token permissions\\n4. **Audit Logging**: Enable and monitor audit logs\\n5. **Backup Keys**: Secure backup of unseal keys and root token\\n6. 
**Network Isolation**: Run RustyVault in isolated network segment ### Token Policies Create restricted policy for KMS service: ```hcl\\n# kms-policy.hcl\\npath \\"transit/encrypt/provisioning-main\\" { capabilities = [\\"update\\"]\\n} path \\"transit/decrypt/provisioning-main\\" { capabilities = [\\"update\\"]\\n} path \\"transit/datakey/plaintext/provisioning-main\\" { capabilities = [\\"update\\"]\\n}\\n```plaintext Apply policy: ```bash\\nrustyvault policy write kms-service kms-policy.hcl\\nrustyvault token create -policy=kms-service\\n```plaintext --- ## Performance ### Benchmarks (Estimated) | Operation | Latency | Throughput |\\n|-----------|---------|------------|\\n| Encrypt | 5-15ms | 2,000-5,000 ops/sec |\\n| Decrypt | 5-15ms | 2,000-5,000 ops/sec |\\n| Generate Key | 10-20ms | 1,000-2,000 ops/sec | *Actual performance depends on hardware, network, and RustyVault configuration* ### Optimization Tips 1. **Connection Pooling**: Reuse HTTP connections\\n2. **Batching**: Batch multiple operations when possible\\n3. **Caching**: Cache data keys for envelope encryption\\n4. 
**Local Unseal**: Use auto-unseal for faster restarts --- ## Related Documentation - **KMS Service**: `docs/user/CONFIG_ENCRYPTION_GUIDE.md`\\n- **Dynamic Secrets**: `docs/user/DYNAMIC_SECRETS_QUICK_REFERENCE.md`\\n- **Security System**: `docs/architecture/ADR-009-security-system-complete.md`\\n- **RustyVault GitHub**: --- ## Support - **GitHub Issues**: \\n- **Documentation**: \\n- **Community**: --- **Last Updated**: 2025-10-08\\n**Maintained By**: Architecture Team","breadcrumbs":"RustyVault KMS Guide » Architecture Position","id":"1552","title":"Architecture Position"},"1553":{"body":"SecretumVault is an enterprise-grade, post-quantum ready secrets management system integrated as the 4th KMS backend in the provisioning platform, alongside Age (dev), Cosmian (prod), and RustyVault (self-hosted).","breadcrumbs":"SecretumVault KMS Guide » SecretumVault KMS Backend Guide","id":"1553","title":"SecretumVault KMS Backend Guide"},"1554":{"body":"","breadcrumbs":"SecretumVault KMS Guide » Overview","id":"1554","title":"Overview"},"1555":{"body":"SecretumVault provides: Post-Quantum Cryptography : Ready for quantum-resistant algorithms Enterprise Features : Policy-as-code (Cedar), audit logging, compliance tracking Multiple Storage Backends : Filesystem (dev), SurrealDB (staging), etcd (prod), PostgreSQL Transit Engine : Encryption-as-a-service for data protection KV Engine : Versioned secret storage with rotation policies High Availability : Seamless transition from embedded to distributed modes","breadcrumbs":"SecretumVault KMS Guide » What is SecretumVault?","id":"1555","title":"What is SecretumVault?"},"1556":{"body":"Scenario Backend Reason Local development Age Simple, no dependencies Testing/Staging SecretumVault Enterprise features, production-like Production Cosmian or SecretumVault Enterprise security, compliance Self-Hosted Enterprise SecretumVault + etcd Full control, HA support","breadcrumbs":"SecretumVault KMS Guide » When to Use 
SecretumVault","id":"1556","title":"When to Use SecretumVault"},"1557":{"body":"","breadcrumbs":"SecretumVault KMS Guide » Deployment Modes","id":"1557","title":"Deployment Modes"},"1558":{"body":"Storage : Filesystem (~/.config/provisioning/secretumvault/data) Performance : <3ms encryption/decryption Setup : No separate service required Best For : Local development and testing export PROVISIONING_ENV=dev\\nexport KMS_DEV_BACKEND=secretumvault\\nprovisioning kms encrypt config.yaml","breadcrumbs":"SecretumVault KMS Guide » Development Mode (Embedded)","id":"1558","title":"Development Mode (Embedded)"},"1559":{"body":"Storage : SurrealDB (document database) Performance : <10ms operations Setup : Start SecretumVault service separately Best For : Team testing, staging environments # Start SecretumVault service\\nsecretumvault server --storage-backend surrealdb # Configure provisioning\\nexport PROVISIONING_ENV=staging\\nexport SECRETUMVAULT_URL=http://localhost:8200\\nexport SECRETUMVAULT_TOKEN=your-auth-token provisioning kms encrypt config.yaml","breadcrumbs":"SecretumVault KMS Guide » Staging Mode (Service + SurrealDB)","id":"1559","title":"Staging Mode (Service + SurrealDB)"},"156":{"body":"If encryption fails: # Verify keys exist\\nls -la ~/.config/provisioning/age/ # Regenerate if needed\\nage-keygen -o ~/.config/provisioning/age/private_key.txt","breadcrumbs":"Installation Steps » Age Keys Not Found","id":"156","title":"Age Keys Not Found"},"1560":{"body":"Storage : etcd cluster (3+ nodes) Performance : <10ms operations (99th percentile) Setup : etcd cluster + SecretumVault service Best For : Production deployments with HA requirements # Setup etcd cluster (3 nodes minimum)\\netcd --name etcd1 --data-dir etcd1-data \\\\ --advertise-client-urls http://localhost:2379 \\\\ --listen-client-urls http://localhost:2379 # Start SecretumVault with etcd\\nsecretumvault server \\\\ --storage-backend etcd \\\\ --etcd-endpoints 
http://etcd1:2379,http://etcd2:2379,http://etcd3:2379 # Configure provisioning\\nexport PROVISIONING_ENV=prod\\nexport SECRETUMVAULT_URL=https://your-secretumvault:8200\\nexport SECRETUMVAULT_TOKEN=your-auth-token\\nexport SECRETUMVAULT_STORAGE=etcd provisioning kms encrypt config.yaml","breadcrumbs":"SecretumVault KMS Guide » Production Mode (Service + etcd)","id":"1560","title":"Production Mode (Service + etcd)"},"1561":{"body":"","breadcrumbs":"SecretumVault KMS Guide » Configuration","id":"1561","title":"Configuration"},"1562":{"body":"Variable Purpose Default Example PROVISIONING_ENV Deployment environment dev staging, prod KMS_DEV_BACKEND Development KMS backend age secretumvault KMS_STAGING_BACKEND Staging KMS backend secretumvault cosmian KMS_PROD_BACKEND Production KMS backend cosmian secretumvault SECRETUMVAULT_URL Server URL http://localhost:8200 https://kms.example.com SECRETUMVAULT_TOKEN Authentication token (none) (Bearer token) SECRETUMVAULT_STORAGE Storage backend filesystem surrealdb, etcd SECRETUMVAULT_TLS_VERIFY Verify TLS certificates false true","breadcrumbs":"SecretumVault KMS Guide » Environment Variables","id":"1562","title":"Environment Variables"},"1563":{"body":"System Defaults : provisioning/config/secretumvault.toml KMS Config : provisioning/config/kms.toml Edit these files to customize: Engine mount points Key names Storage backend settings Performance tuning Audit logging Key rotation policies","breadcrumbs":"SecretumVault KMS Guide » Configuration Files","id":"1563","title":"Configuration Files"},"1564":{"body":"","breadcrumbs":"SecretumVault KMS Guide » Operations","id":"1564","title":"Operations"},"1565":{"body":"# Encrypt a file\\nprovisioning kms encrypt config.yaml\\n# Output: config.yaml.enc # Encrypt with specific key\\nprovisioning kms encrypt --key-id my-key config.yaml # Encrypt and sign\\nprovisioning kms encrypt --sign config.yaml","breadcrumbs":"SecretumVault KMS Guide » Encrypt Data","id":"1565","title":"Encrypt 
Data"},"1566":{"body":"# Decrypt a file\\nprovisioning kms decrypt config.yaml.enc\\n# Output: config.yaml # Decrypt with specific key\\nprovisioning kms decrypt --key-id my-key config.yaml.enc # Verify and decrypt\\nprovisioning kms decrypt --verify config.yaml.enc","breadcrumbs":"SecretumVault KMS Guide » Decrypt Data","id":"1566","title":"Decrypt Data"},"1567":{"body":"# Generate AES-256 data key\\nprovisioning kms generate-key --spec AES256 # Generate AES-128 data key\\nprovisioning kms generate-key --spec AES128 # Generate RSA-4096 key\\nprovisioning kms generate-key --spec RSA4096","breadcrumbs":"SecretumVault KMS Guide » Generate Data Keys","id":"1567","title":"Generate Data Keys"},"1568":{"body":"# Check KMS health\\nprovisioning kms health # Get KMS version\\nprovisioning kms version # Detailed KMS status\\nprovisioning kms status","breadcrumbs":"SecretumVault KMS Guide » Health and Status","id":"1568","title":"Health and Status"},"1569":{"body":"# Rotate encryption key\\nprovisioning kms rotate-key provisioning-master # Check rotation policy\\nprovisioning kms rotation-policy provisioning-master # Update rotation interval\\nprovisioning kms update-rotation 90 # Rotate every 90 days","breadcrumbs":"SecretumVault KMS Guide » Key Rotation","id":"1569","title":"Key Rotation"},"157":{"body":"Once installation is complete, proceed to: → First Deployment","breadcrumbs":"Installation Steps » Next Steps","id":"157","title":"Next Steps"},"1570":{"body":"","breadcrumbs":"SecretumVault KMS Guide » Storage Backends","id":"1570","title":"Storage Backends"},"1571":{"body":"Local file-based storage with no external dependencies. 
Pros : Zero external dependencies Fast (local disk access) Easy to inspect/backup Cons : Single-node only No HA Manual backup required Configuration : [secretumvault.storage.filesystem]\\ndata_dir = \\"~/.config/provisioning/secretumvault/data\\"\\npermissions = \\"0700\\"","breadcrumbs":"SecretumVault KMS Guide » Filesystem (Development)","id":"1571","title":"Filesystem (Development)"},"1572":{"body":"Embedded or standalone document database. Pros : Embedded or distributed Flexible schema Real-time syncing Cons : More complex than filesystem New technology (less tested than etcd) Configuration : [secretumvault.storage.surrealdb]\\nconnection_url = \\"ws://localhost:8000\\"\\nnamespace = \\"provisioning\\"\\ndatabase = \\"secrets\\"\\nusername = \\"${SECRETUMVAULT_SURREALDB_USER:-admin}\\"\\npassword = \\"${SECRETUMVAULT_SURREALDB_PASS:-password}\\"","breadcrumbs":"SecretumVault KMS Guide » SurrealDB (Staging)","id":"1572","title":"SurrealDB (Staging)"},"1573":{"body":"Distributed key-value store for high availability. Pros : Proven in production HA and disaster recovery Consistent consensus protocol Multi-site replication Cons : Operational complexity Requires 3+ nodes More infrastructure Configuration : [secretumvault.storage.etcd]\\nendpoints = [\\"http://etcd1:2379\\", \\"http://etcd2:2379\\", \\"http://etcd3:2379\\"]\\ntls_enabled = true\\ntls_cert_file = \\"/path/to/client.crt\\"\\ntls_key_file = \\"/path/to/client.key\\"","breadcrumbs":"SecretumVault KMS Guide » etcd (Production)","id":"1573","title":"etcd (Production)"},"1574":{"body":"Relational database backend. 
Pros : Mature and reliable Advanced querying Full ACID transactions Cons : Schema requirements External database dependency More operational overhead Configuration : [secretumvault.storage.postgresql]\\nconnection_url = \\"postgresql://user:pass@localhost:5432/secretumvault\\"\\nmax_connections = 10\\nssl_mode = \\"require\\"","breadcrumbs":"SecretumVault KMS Guide » PostgreSQL (Enterprise)","id":"1574","title":"PostgreSQL (Enterprise)"},"1575":{"body":"","breadcrumbs":"SecretumVault KMS Guide » Troubleshooting","id":"1575","title":"Troubleshooting"},"1576":{"body":"Error : \\"Failed to connect to SecretumVault service\\" Solutions : Verify SecretumVault is running: curl http://localhost:8200/v1/sys/health Check server URL configuration: provisioning config show secretumvault.server_url Verify network connectivity: nc -zv localhost 8200","breadcrumbs":"SecretumVault KMS Guide » Connection Errors","id":"1576","title":"Connection Errors"},"1577":{"body":"Error : \\"Authentication failed: X-Vault-Token missing or invalid\\" Solutions : Set authentication token: export SECRETUMVAULT_TOKEN=your-token Verify token is still valid: provisioning secrets verify-token Get new token from SecretumVault: secretumvault auth login","breadcrumbs":"SecretumVault KMS Guide » Authentication Failures","id":"1577","title":"Authentication Failures"},"1578":{"body":"Filesystem Backend Error : \\"Permission denied: ~/.config/provisioning/secretumvault/data\\" Solution : Check directory permissions: ls -la ~/.config/provisioning/secretumvault/\\n# Should be: drwx------ (0700)\\nchmod 700 ~/.config/provisioning/secretumvault/data SurrealDB Backend Error : \\"Failed to connect to SurrealDB at ws://localhost:8000\\" Solution : Start SurrealDB first: surreal start --bind 0.0.0.0:8000 file://secretum.db etcd Backend Error : \\"etcd cluster unhealthy\\" Solution : Check etcd cluster status: etcdctl member list\\netcdctl endpoint health # Verify all nodes are reachable\\ncurl 
http://etcd1:2379/health\\ncurl http://etcd2:2379/health\\ncurl http://etcd3:2379/health","breadcrumbs":"SecretumVault KMS Guide » Storage Backend Errors","id":"1578","title":"Storage Backend Errors"},"1579":{"body":"Slow encryption/decryption : Check network latency (for service mode): ping -c 3 secretumvault-server Monitor SecretumVault performance: provisioning kms metrics Check storage backend performance: Filesystem: Check disk I/O SurrealDB: Monitor database load etcd: Check cluster consensus state High memory usage : Check cache settings: provisioning config show secretumvault.performance.cache_ttl Reduce cache TTL: provisioning config set secretumvault.performance.cache_ttl 60 Monitor active connections: provisioning kms status","breadcrumbs":"SecretumVault KMS Guide » Performance Issues","id":"1579","title":"Performance Issues"},"158":{"body":"Detailed Installation Guide Workspace Management Troubleshooting Guide","breadcrumbs":"Installation Steps » Additional Resources","id":"158","title":"Additional Resources"},"1580":{"body":"Enable debug logging : export RUST_LOG=debug\\nprovisioning kms encrypt config.yaml Check configuration : provisioning config show secretumvault\\nprovisioning config validate Test connectivity : provisioning kms health --verbose View audit logs : tail -f ~/.config/provisioning/logs/secretumvault-audit.log","breadcrumbs":"SecretumVault KMS Guide » Debugging","id":"1580","title":"Debugging"},"1581":{"body":"","breadcrumbs":"SecretumVault KMS Guide » Security Best Practices","id":"1581","title":"Security Best Practices"},"1582":{"body":"Never commit tokens to version control Use environment variables or .env files (gitignored) Rotate tokens regularly Use different tokens per environment","breadcrumbs":"SecretumVault KMS Guide » Token Management","id":"1582","title":"Token Management"},"1583":{"body":"Enable TLS verification in production: export SECRETUMVAULT_TLS_VERIFY=true Use proper certificates (not self-signed in production) Pin 
certificates to prevent MITM attacks","breadcrumbs":"SecretumVault KMS Guide » TLS/SSL","id":"1583","title":"TLS/SSL"},"1584":{"body":"Restrict who can access SecretumVault admin UI Use strong authentication (MFA preferred) Audit all secrets access Implement least-privilege principle","breadcrumbs":"SecretumVault KMS Guide » Access Control","id":"1584","title":"Access Control"},"1585":{"body":"Rotate keys regularly (every 90 days recommended) Keep old versions for decryption Test rotation procedures in staging first Monitor rotation status","breadcrumbs":"SecretumVault KMS Guide » Key Rotation","id":"1585","title":"Key Rotation"},"1586":{"body":"Backup SecretumVault data regularly Test restore procedures Store backups securely Keep backup keys separate from encrypted data","breadcrumbs":"SecretumVault KMS Guide » Backup and Recovery","id":"1586","title":"Backup and Recovery"},"1587":{"body":"","breadcrumbs":"SecretumVault KMS Guide » Migration Guide","id":"1587","title":"Migration Guide"},"1588":{"body":"# Export all secrets encrypted with Age\\nprovisioning secrets export --backend age --output secrets.json # Import into SecretumVault\\nprovisioning secrets import --backend secretumvault secrets.json # Re-encrypt all configurations\\nfind workspace/infra -name \\"*.enc\\" -exec provisioning kms reencrypt {} \\\\;","breadcrumbs":"SecretumVault KMS Guide » From Age to SecretumVault","id":"1588","title":"From Age to SecretumVault"},"1589":{"body":"# Both use Vault-compatible APIs, so migration is simpler:\\n# 1. Ensure SecretumVault keys are available\\n# 2. Update KMS_PROD_BACKEND=secretumvault\\n# 3. Test with staging first\\n# 4. 
Monitor during transition","breadcrumbs":"SecretumVault KMS Guide » From RustyVault to SecretumVault","id":"1589","title":"From RustyVault to SecretumVault"},"159":{"body":"This guide walks you through deploying your first infrastructure using the Provisioning Platform.","breadcrumbs":"First Deployment » First Deployment","id":"159","title":"First Deployment"},"1590":{"body":"# For production migration:\\n# 1. Set up SecretumVault with etcd backend\\n# 2. Verify high availability is working\\n# 3. Run parallel encryption with both systems\\n# 4. Validate all decryptions work\\n# 5. Update KMS_PROD_BACKEND=secretumvault\\n# 6. Monitor closely for 24 hours\\n# 7. Keep Cosmian as fallback for 7 days","breadcrumbs":"SecretumVault KMS Guide » From Cosmian to SecretumVault","id":"1590","title":"From Cosmian to SecretumVault"},"1591":{"body":"","breadcrumbs":"SecretumVault KMS Guide » Performance Tuning","id":"1591","title":"Performance Tuning"},"1592":{"body":"[secretumvault.performance]\\nmax_connections = 5\\nconnection_timeout = 5\\nrequest_timeout = 30\\ncache_ttl = 60","breadcrumbs":"SecretumVault KMS Guide » Development (Filesystem)","id":"1592","title":"Development (Filesystem)"},"1593":{"body":"[secretumvault.performance]\\nmax_connections = 20\\nconnection_timeout = 5\\nrequest_timeout = 30\\ncache_ttl = 300","breadcrumbs":"SecretumVault KMS Guide » Staging (SurrealDB)","id":"1593","title":"Staging (SurrealDB)"},"1594":{"body":"[secretumvault.performance]\\nmax_connections = 50\\nconnection_timeout = 10\\nrequest_timeout = 30\\ncache_ttl = 600","breadcrumbs":"SecretumVault KMS Guide » Production (etcd)","id":"1594","title":"Production (etcd)"},"1595":{"body":"","breadcrumbs":"SecretumVault KMS Guide » Compliance and Audit","id":"1595","title":"Compliance and Audit"},"1596":{"body":"All operations are logged: # View recent audit events\\nprovisioning kms audit --limit 100 # Export audit logs\\nprovisioning kms audit export --output audit.json # Audit specific 
operations\\nprovisioning kms audit --action encrypt --from 24h","breadcrumbs":"SecretumVault KMS Guide » Audit Logging","id":"1596","title":"Audit Logging"},"1597":{"body":"# Generate compliance report\\nprovisioning compliance report --backend secretumvault # GDPR data export\\nprovisioning compliance gdpr-export user@example.com # SOC2 audit trail\\nprovisioning compliance soc2-export --output soc2-audit.json","breadcrumbs":"SecretumVault KMS Guide » Compliance Reports","id":"1597","title":"Compliance Reports"},"1598":{"body":"","breadcrumbs":"SecretumVault KMS Guide » Advanced Topics","id":"1598","title":"Advanced Topics"},"1599":{"body":"Enable fine-grained access control: # Enable Cedar integration\\nprovisioning config set secretumvault.authorization.cedar_enabled true # Define access policies\\nprovisioning policy define-kms-access user@example.com admin\\nprovisioning policy define-kms-access deployer@example.com deploy-only","breadcrumbs":"SecretumVault KMS Guide » Cedar Authorization Policies","id":"1599","title":"Cedar Authorization Policies"},"16":{"body":"Linux : Any modern distribution (Ubuntu 20.04+, CentOS 8+, Debian 11+) macOS : 11.0+ (Big Sur and newer) Windows : Windows 10/11 with WSL2","breadcrumbs":"Installation Guide » Operating System Support","id":"16","title":"Operating System Support"},"160":{"body":"In this chapter, you\'ll: Configure a simple infrastructure Create your first server Install a task service (Kubernetes) Verify the deployment Estimated time: 10-15 minutes","breadcrumbs":"First Deployment » Overview","id":"160","title":"Overview"},"1600":{"body":"Configure master key settings: # Set KEK rotation interval\\nprovisioning config set secretumvault.rotation.rotation_interval_days 90 # Enable automatic rotation\\nprovisioning config set secretumvault.rotation.auto_rotate true # Retain old versions for decryption\\nprovisioning config set secretumvault.rotation.retain_old_versions true","breadcrumbs":"SecretumVault KMS Guide » Key 
Encryption Keys (KEK)","id":"1600","title":"Key Encryption Keys (KEK)"},"1601":{"body":"For production deployments across regions: # Region 1\\nexport SECRETUMVAULT_URL=https://kms-us-east.example.com\\nexport SECRETUMVAULT_STORAGE=etcd # Region 2 (for failover)\\nexport SECRETUMVAULT_URL_FALLBACK=https://kms-us-west.example.com","breadcrumbs":"SecretumVault KMS Guide » Multi-Region Setup","id":"1601","title":"Multi-Region Setup"},"1602":{"body":"Documentation : docs/user/SECRETUMVAULT_KMS_GUIDE.md (this file) Configuration Template : provisioning/config/secretumvault.toml KMS Configuration : provisioning/config/kms.toml Issues : Report issues with provisioning kms debug Logs : Check ~/.config/provisioning/logs/secretumvault-*.log","breadcrumbs":"SecretumVault KMS Guide » Support and Resources","id":"1602","title":"Support and Resources"},"1603":{"body":"Age KMS Guide - Simple local encryption Cosmian KMS Guide - Enterprise confidential computing RustyVault Guide - Self-hosted Vault KMS Overview - KMS backend comparison","breadcrumbs":"SecretumVault KMS Guide » See Also","id":"1603","title":"See Also"},"1604":{"body":"","breadcrumbs":"SSH Temporal Keys User Guide » SSH Temporal Keys - User Guide","id":"1604","title":"SSH Temporal Keys - User Guide"},"1605":{"body":"","breadcrumbs":"SSH Temporal Keys User Guide » Quick Start","id":"1605","title":"Quick Start"},"1606":{"body":"The fastest way to use temporal SSH keys: # Auto-generate, deploy, and connect (key auto-revoked after disconnect)\\nssh connect server.example.com # Connect with custom user and TTL\\nssh connect server.example.com --user deploy --ttl 30min # Keep key active after disconnect\\nssh connect server.example.com --keep\\n```plaintext ### Manual Key Management For more control over the key lifecycle: ```bash\\n# 1. 
Generate key\\nssh generate-key server.example.com --user root --ttl 1hr # Output:\\n# ✓ SSH key generated successfully\\n# Key ID: abc-123-def-456\\n# Type: dynamickeypair\\n# User: root\\n# Server: server.example.com\\n# Expires: 2024-01-01T13:00:00Z\\n# Fingerprint: SHA256:...\\n#\\n# Private Key (save securely):\\n# -----BEGIN OPENSSH PRIVATE KEY-----\\n# ...\\n# -----END OPENSSH PRIVATE KEY----- # 2. Deploy key to server\\nssh deploy-key abc-123-def-456 # 3. Use the private key to connect\\nssh -i /path/to/private/key root@server.example.com # 4. Revoke when done\\nssh revoke-key abc-123-def-456\\n```plaintext ## Key Features ### Automatic Expiration All keys expire automatically after their TTL: - **Default TTL**: 1 hour\\n- **Configurable**: From 5 minutes to 24 hours\\n- **Background Cleanup**: Automatic removal from servers every 5 minutes ### Multiple Key Types Choose the right key type for your use case: | Type | Description | Use Case |\\n|------|-------------|----------|\\n| **dynamic** (default) | Generated Ed25519 keys | Quick SSH access |\\n| **ca** | Vault CA-signed certificate | Enterprise with SSH CA |\\n| **otp** | Vault one-time password | Single-use access | ### Security Benefits ✅ No static SSH keys to manage\\n✅ Short-lived credentials (1 hour default)\\n✅ Automatic cleanup on expiration\\n✅ Audit trail for all operations\\n✅ Private keys never stored on disk ## Common Usage Patterns ### Development Workflow ```bash\\n# Quick SSH for debugging\\nssh connect dev-server.local --ttl 30min # Execute commands\\nssh root@dev-server.local \\"systemctl status nginx\\" # Connection closes, key auto-revokes\\n```plaintext ### Production Deployment ```bash\\n# Generate key with longer TTL for deployment\\nssh generate-key prod-server.example.com --ttl 2hr # Deploy to server\\nssh deploy-key # Run deployment script\\nssh -i /tmp/deploy-key root@prod-server.example.com < deploy.sh # Manual revoke when done\\nssh revoke-key \\n```plaintext ### 
Multi-Server Access ```bash\\n# Generate one key\\nssh generate-key server01.example.com --ttl 1hr # Use the same private key for multiple servers (if you have provisioning access)\\n# Note: Currently each key is server-specific, multi-server support coming soon\\n```plaintext ## Command Reference ### ssh generate-key Generate a new temporal SSH key. **Syntax**: ```bash\\nssh generate-key [options]\\n```plaintext **Options**: - `--user `: SSH user (default: root)\\n- `--ttl `: Key lifetime (default: 1hr)\\n- `--type `: Key type (default: dynamic)\\n- `--ip `: Allowed IP (OTP mode only)\\n- `--principal `: Principal (CA mode only) **Examples**: ```bash\\n# Basic usage\\nssh generate-key server.example.com # Custom user and TTL\\nssh generate-key server.example.com --user deploy --ttl 30min # Vault CA mode\\nssh generate-key server.example.com --type ca --principal admin\\n```plaintext ### ssh deploy-key Deploy a generated key to the target server. **Syntax**: ```bash\\nssh deploy-key \\n```plaintext **Example**: ```bash\\nssh deploy-key abc-123-def-456\\n```plaintext ### ssh list-keys List all active SSH keys. **Syntax**: ```bash\\nssh list-keys [--expired]\\n```plaintext **Examples**: ```bash\\n# List active keys\\nssh list-keys # Show only deployed keys\\nssh list-keys | where deployed == true # Include expired keys\\nssh list-keys --expired\\n```plaintext ### ssh get-key Get detailed information about a specific key. **Syntax**: ```bash\\nssh get-key \\n```plaintext **Example**: ```bash\\nssh get-key abc-123-def-456\\n```plaintext ### ssh revoke-key Immediately revoke a key (removes from server and tracking). **Syntax**: ```bash\\nssh revoke-key \\n```plaintext **Example**: ```bash\\nssh revoke-key abc-123-def-456\\n```plaintext ### ssh connect Auto-generate, deploy, connect, and revoke (all-in-one). 
**Syntax**: ```bash\\nssh connect <server> [options]\\n``` **Options**: - `--user <user>`: SSH user (default: root)\\n- `--ttl <duration>`: Key lifetime (default: 1hr)\\n- `--type <type>`: Key type (default: dynamic)\\n- `--keep`: Don\'t revoke after disconnect **Examples**: ```bash\\n# Quick connection\\nssh connect server.example.com # Custom user\\nssh connect server.example.com --user deploy # Keep key active after disconnect\\nssh connect server.example.com --keep\\n``` ### ssh stats Show SSH key statistics. **Syntax**: ```bash\\nssh stats\\n``` **Example Output**: ```plaintext\\nSSH Key Statistics: Total generated: 42 Active keys: 10 Expired keys: 32 Keys by type: dynamic: 35 otp: 5 certificate: 2 Last cleanup: 2024-01-01T12:00:00Z Cleaned keys: 5\\n``` ### ssh cleanup Manually trigger cleanup of expired keys. **Syntax**: ```bash\\nssh cleanup\\n``` ### ssh test Run a quick test of the SSH key system. **Syntax**: ```bash\\nssh test <server> [--user <user>]\\n``` **Example**: ```bash\\nssh test server.example.com --user root\\n``` ### ssh help Show help information. 
**Syntax**: ```bash\\nssh help\\n``` ## Duration Formats The `--ttl` option accepts various duration formats: | Format | Example | Meaning |\\n|--------|---------|---------|\\n| Minutes | `30min` | 30 minutes |\\n| Hours | `2hr` | 2 hours |\\n| Mixed | `1hr 30min` | 1.5 hours |\\n| Seconds | `3600sec` | 1 hour | ## Working with Private Keys ### Saving Private Keys When you generate a key, save the private key immediately: ```bash\\n# Generate and save to file\\nssh generate-key server.example.com | get private_key | save -f ~/.ssh/temp_key\\nchmod 600 ~/.ssh/temp_key # Use the key\\nssh -i ~/.ssh/temp_key root@server.example.com # Cleanup\\nrm ~/.ssh/temp_key\\n``` ### Using SSH Agent Add the temporary key to your SSH agent: ```bash\\n# Generate key and extract private key\\nssh generate-key server.example.com | get private_key | save -f /tmp/temp_key\\nchmod 600 /tmp/temp_key # Add to agent\\nssh-add /tmp/temp_key # Connect (agent provides the key automatically)\\nssh root@server.example.com # Remove from agent\\nssh-add -d /tmp/temp_key\\nrm /tmp/temp_key\\n``` ## Troubleshooting ### Key Deployment Fails **Problem**: `ssh deploy-key` returns error **Solutions**: 1. 
Check SSH connectivity to server: ssh root@server.example.com 2. Verify provisioning key is configured: echo $PROVISIONING_SSH_KEY 3. Check server SSH daemon: ssh root@server.example.com \\"systemctl status sshd\\"","breadcrumbs":"SSH Temporal Keys User Guide » Generate and Connect with Temporary Key","id":"1606","title":"Generate and Connect with Temporary Key"},"1607":{"body":"Problem : SSH connection fails with \\"Permission denied (publickey)\\" Solutions : Verify key was deployed: ssh list-keys | where id == \\"<key-id>\\" Check key hasn\'t expired: ssh get-key <key-id> | get expires_at Verify private key permissions: chmod 600 /path/to/private/key","breadcrumbs":"SSH Temporal Keys User Guide » Private Key Not Working","id":"1607","title":"Private Key Not Working"},"1608":{"body":"Problem : Expired keys not being removed Solutions : Check orchestrator is running: curl http://localhost:9090/health Trigger manual cleanup: ssh cleanup Check orchestrator logs: tail -f ./data/orchestrator.log | grep SSH","breadcrumbs":"SSH Temporal Keys User Guide » Cleanup Not Running","id":"1608","title":"Cleanup Not Running"},"1609":{"body":"","breadcrumbs":"SSH Temporal Keys User Guide » Best Practices","id":"1609","title":"Best Practices"},"161":{"body":"Create a basic infrastructure configuration: # Generate infrastructure template\\nprovisioning generate infra --new my-infra # This creates: workspace/infra/my-infra/\\n# - config.toml (infrastructure settings)\\n# - settings.k (KCL configuration)","breadcrumbs":"First Deployment » Step 1: Configure Infrastructure","id":"161","title":"Step 1: Configure Infrastructure"},"1610":{"body":"Short TTLs : Use the shortest TTL that works for your task ssh connect server.example.com --ttl 30min Immediate Revocation : Revoke keys when you\'re done ssh revoke-key <key-id> Private Key Handling : Never share or commit private keys # Save to temp location, delete after use\\nssh generate-key server.example.com | get private_key | save -f /tmp/key\\n# ... 
use key ...\\nrm /tmp/key","breadcrumbs":"SSH Temporal Keys User Guide » Security","id":"1610","title":"Security"},"1611":{"body":"Automated Deployments : Generate key in CI/CD #!/bin/bash\\nKEY_ID=$(ssh generate-key prod.example.com --ttl 1hr | get id)\\nssh deploy-key $KEY_ID\\n# Run deployment\\nansible-playbook deploy.yml\\nssh revoke-key $KEY_ID Interactive Use : Use ssh connect for quick access ssh connect dev.example.com Monitoring : Check statistics regularly ssh stats","breadcrumbs":"SSH Temporal Keys User Guide » Workflow Integration","id":"1611","title":"Workflow Integration"},"1612":{"body":"","breadcrumbs":"SSH Temporal Keys User Guide » Advanced Usage","id":"1612","title":"Advanced Usage"},"1613":{"body":"If your organization uses HashiCorp Vault: CA Mode (Recommended) # Generate CA-signed certificate\\nssh generate-key server.example.com --type ca --principal admin --ttl 1hr # Vault signs your public key\\n# Server must trust Vault CA certificate\\n```plaintext **Setup** (one-time): ```bash\\n# On servers, add to /etc/ssh/sshd_config:\\nTrustedUserCAKeys /etc/ssh/trusted-user-ca-keys.pem # Get Vault CA public key:\\nvault read -field=public_key ssh/config/ca | \\\\ sudo tee /etc/ssh/trusted-user-ca-keys.pem # Restart SSH:\\nsudo systemctl restart sshd\\n```plaintext #### OTP Mode ```bash\\n# Generate one-time password\\nssh generate-key server.example.com --type otp --ip 192.168.1.100 # Use the OTP to connect (single use only)\\n```plaintext ### Scripting Use in scripts for automated operations: ```nushell\\n# deploy.nu\\ndef deploy [target: string] { let key = (ssh generate-key $target --ttl 1hr) ssh deploy-key $key.id # Run deployment try { ssh $\\"root@($target)\\" \\"bash /path/to/deploy.sh\\" } catch { print \\"Deployment failed\\" } # Always cleanup ssh revoke-key $key.id\\n}\\n```plaintext ## API Integration For programmatic access, use the REST API: ```bash\\n# Generate key\\ncurl -X POST http://localhost:9090/api/v1/ssh/generate \\\\ -H 
\\"Content-Type: application/json\\" \\\\ -d \'{ \\"key_type\\": \\"dynamickeypair\\", \\"user\\": \\"root\\", \\"target_server\\": \\"server.example.com\\", \\"ttl_seconds\\": 3600 }\' # Deploy key\\ncurl -X POST http://localhost:9090/api/v1/ssh/{key_id}/deploy # List keys\\ncurl http://localhost:9090/api/v1/ssh/keys # Get stats\\ncurl http://localhost:9090/api/v1/ssh/stats\\n```plaintext ## FAQ **Q: Can I use the same key for multiple servers?**\\nA: Currently, each key is tied to a specific server. Multi-server support is planned. **Q: What happens if the orchestrator crashes?**\\nA: Keys in memory are lost, but keys already deployed to servers remain until their expiration time. **Q: Can I extend the TTL of an existing key?**\\nA: No, you must generate a new key. This is by design for security. **Q: What\'s the maximum TTL?**\\nA: Configurable by admin, default maximum is 24 hours. **Q: Are private keys stored anywhere?**\\nA: Private keys exist only in memory during generation and are shown once to the user. They are never written to disk by the system. **Q: What happens if cleanup fails?**\\nA: The key remains in authorized_keys until the next cleanup run. You can trigger manual cleanup with `ssh cleanup`. **Q: Can I use this with non-root users?**\\nA: Yes, use `--user ` when generating the key. **Q: How do I know when my key will expire?**\\nA: Use `ssh get-key ` to see the exact expiration timestamp. ## Support For issues or questions: 1. Check orchestrator logs: `tail -f ./data/orchestrator.log`\\n2. Run diagnostics: `ssh stats`\\n3. Test connectivity: `ssh test server.example.com`\\n4. 
Review documentation: `SSH_KEY_MANAGEMENT.md` ## See Also - **Architecture**: `SSH_KEY_MANAGEMENT.md`\\n- **Implementation**: `SSH_IMPLEMENTATION_SUMMARY.md`\\n- **Configuration**: `config/ssh-config.toml.example`","breadcrumbs":"SSH Temporal Keys User Guide » Vault Integration","id":"1613","title":"Vault Integration"},"1614":{"body":"Version : 1.0.0 Last Updated : 2025-10-09 Target Audience : Developers, DevOps Engineers, System Administrators","breadcrumbs":"Plugin Integration Guide » Nushell Plugin Integration Guide","id":"1614","title":"Nushell Plugin Integration Guide"},"1615":{"body":"Overview Why Native Plugins? Prerequisites Installation Quick Start (5 Minutes) Authentication Plugin (nu_plugin_auth) KMS Plugin (nu_plugin_kms) Orchestrator Plugin (nu_plugin_orchestrator) Integration Examples Best Practices Troubleshooting Migration Guide Advanced Configuration Security Considerations FAQ","breadcrumbs":"Plugin Integration Guide » Table of Contents","id":"1615","title":"Table of Contents"},"1616":{"body":"The Provisioning Platform provides three native Nushell plugins that dramatically improve performance and user experience compared to traditional HTTP API calls: Plugin Purpose Performance Gain nu_plugin_auth JWT authentication, MFA, session management 20% faster nu_plugin_kms Encryption/decryption with multiple KMS backends 10x faster nu_plugin_orchestrator Orchestrator operations without HTTP overhead 50x faster","breadcrumbs":"Plugin Integration Guide » Overview","id":"1616","title":"Overview"},"1617":{"body":"Traditional HTTP Flow:\\nUser Command → HTTP Request → Network → Server Processing → Response → Parse JSON Total: ~50-100ms per operation Plugin Flow:\\nUser Command → Direct Rust Function Call → Return Nushell Data Structure Total: ~1-10ms per operation\\n```plaintext ### Key Features ✅ **Performance**: 10-50x faster than HTTP API\\n✅ **Type Safety**: Full Nushell type system integration\\n✅ **Pipeline Support**: Native Nushell data structures\\n✅ 
**Offline Capability**: KMS and orchestrator work without network\\n✅ **OS Integration**: Native keyring for secure token storage\\n✅ **Graceful Fallback**: HTTP still available if plugins not installed --- ## Why Native Plugins? ### Performance Comparison Real-world benchmarks from production workload: | Operation | HTTP API | Plugin | Improvement | Speedup |\\n|-----------|----------|--------|-------------|---------|\\n| **KMS Encrypt (RustyVault)** | ~50ms | ~5ms | -45ms | **10x** |\\n| **KMS Decrypt (RustyVault)** | ~50ms | ~5ms | -45ms | **10x** |\\n| **KMS Encrypt (Age)** | ~30ms | ~3ms | -27ms | **10x** |\\n| **KMS Decrypt (Age)** | ~30ms | ~3ms | -27ms | **10x** |\\n| **Orchestrator Status** | ~30ms | ~1ms | -29ms | **30x** |\\n| **Orchestrator Tasks List** | ~50ms | ~5ms | -45ms | **10x** |\\n| **Orchestrator Validate** | ~100ms | ~10ms | -90ms | **10x** |\\n| **Auth Login** | ~100ms | ~80ms | -20ms | 1.25x |\\n| **Auth Verify** | ~50ms | ~10ms | -40ms | **5x** |\\n| **Auth MFA Verify** | ~80ms | ~60ms | -20ms | 1.3x | ### Use Case: Batch Processing **Scenario**: Encrypt 100 configuration files ```nushell\\n# HTTP API approach\\nls configs/*.yaml | each { |file| http post http://localhost:9998/encrypt { data: (open $file) }\\n} | save encrypted/\\n# Total time: ~5 seconds (50ms × 100) # Plugin approach\\nls configs/*.yaml | each { |file| kms encrypt (open $file) --backend rustyvault\\n} | save encrypted/\\n# Total time: ~0.5 seconds (5ms × 100)\\n# Result: 10x faster\\n```plaintext ### Developer Experience Benefits **1. Native Nushell Integration** ```nushell\\n# HTTP: Parse JSON, check status codes\\nlet result = http post http://localhost:9998/encrypt { data: \\"secret\\" }\\nif $result.status == \\"success\\" { $result.encrypted\\n} else { error make { msg: $result.error }\\n} # Plugin: Direct return values\\nkms encrypt \\"secret\\"\\n# Returns encrypted string directly, errors use Nushell\'s error system\\n```plaintext **2. 
Pipeline Friendly** ```nushell\\n# HTTP: Requires wrapping, JSON parsing\\n[\\"secret1\\", \\"secret2\\"] | each { |s| (http post http://localhost:9998/encrypt { data: $s }).encrypted\\n} # Plugin: Natural pipeline flow\\n[\\"secret1\\", \\"secret2\\"] | each { |s| kms encrypt $s }\\n```plaintext **3. Tab Completion** ```nushell\\n# All plugin commands have full tab completion\\nkms \\n# → encrypt, decrypt, generate-key, status, backends kms encrypt --\\n# → --backend, --key, --context\\n```plaintext --- ## Prerequisites ### Required Software | Software | Minimum Version | Purpose |\\n|----------|----------------|---------|\\n| **Nushell** | 0.107.1 | Shell and plugin runtime |\\n| **Rust** | 1.75+ | Building plugins from source |\\n| **Cargo** | (included with Rust) | Build tool | ### Optional Dependencies | Software | Purpose | Platform |\\n|----------|---------|----------|\\n| **gnome-keyring** | Secure token storage | Linux |\\n| **kwallet** | Secure token storage | Linux (KDE) |\\n| **age** | Age encryption backend | All |\\n| **RustyVault** | High-performance KMS | All | ### Platform Support | Platform | Status | Notes |\\n|----------|--------|-------|\\n| **macOS** | ✅ Full | Keychain integration |\\n| **Linux** | ✅ Full | Requires keyring service |\\n| **Windows** | ✅ Full | Credential Manager integration |\\n| **FreeBSD** | ⚠️ Partial | No keyring integration | --- ## Installation ### Step 1: Clone or Navigate to Plugin Directory ```bash\\ncd /Users/Akasha/project-provisioning/provisioning/core/plugins/nushell-plugins\\n```plaintext ### Step 2: Build All Plugins ```bash\\n# Build in release mode (optimized for performance)\\ncargo build --release --all # Or build individually\\ncargo build --release -p nu_plugin_auth\\ncargo build --release -p nu_plugin_kms\\ncargo build --release -p nu_plugin_orchestrator\\n```plaintext **Expected output:** ```plaintext Compiling nu_plugin_auth v0.1.0 Compiling nu_plugin_kms v0.1.0 Compiling nu_plugin_orchestrator v0.1.0 
Finished release [optimized] target(s) in 2m 15s\\n```plaintext ### Step 3: Register Plugins with Nushell ```bash\\n# Register all three plugins\\nplugin add target/release/nu_plugin_auth\\nplugin add target/release/nu_plugin_kms\\nplugin add target/release/nu_plugin_orchestrator # On macOS, full paths:\\nplugin add $PWD/target/release/nu_plugin_auth\\nplugin add $PWD/target/release/nu_plugin_kms\\nplugin add $PWD/target/release/nu_plugin_orchestrator\\n```plaintext ### Step 4: Verify Installation ```bash\\n# List registered plugins\\nplugin list | where name =~ \\"auth|kms|orch\\" # Test each plugin\\nauth --help\\nkms --help\\norch --help\\n```plaintext **Expected output:** ```plaintext\\n╭───┬─────────────────────────┬─────────┬───────────────────────────────────╮\\n│ # │ name │ version │ filename │\\n├───┼─────────────────────────┼─────────┼───────────────────────────────────┤\\n│ 0 │ nu_plugin_auth │ 0.1.0 │ .../nu_plugin_auth │\\n│ 1 │ nu_plugin_kms │ 0.1.0 │ .../nu_plugin_kms │\\n│ 2 │ nu_plugin_orchestrator │ 0.1.0 │ .../nu_plugin_orchestrator │\\n╰───┴─────────────────────────┴─────────┴───────────────────────────────────╯\\n```plaintext ### Step 5: Configure Environment (Optional) ```bash\\n# Add to ~/.config/nushell/env.nu\\n$env.RUSTYVAULT_ADDR = \\"http://localhost:8200\\"\\n$env.RUSTYVAULT_TOKEN = \\"your-vault-token\\"\\n$env.CONTROL_CENTER_URL = \\"http://localhost:3000\\"\\n$env.ORCHESTRATOR_DATA_DIR = \\"/opt/orchestrator/data\\"\\n```plaintext --- ## Quick Start (5 Minutes) ### 1. 
Authentication Workflow ```nushell\\n# Login (password prompted securely)\\nauth login admin\\n# ✓ Login successful\\n# User: admin\\n# Role: Admin\\n# Expires: 2025-10-09T14:30:00Z # Verify session\\nauth verify\\n# {\\n# \\"active\\": true,\\n# \\"user\\": \\"admin\\",\\n# \\"role\\": \\"Admin\\",\\n# \\"expires_at\\": \\"2025-10-09T14:30:00Z\\"\\n# } # Enroll in MFA (optional but recommended)\\nauth mfa enroll totp\\n# QR code displayed, save backup codes # Verify MFA\\nauth mfa verify --code 123456\\n# ✓ MFA verification successful # Logout\\nauth logout\\n# ✓ Logged out successfully\\n```plaintext ### 2. KMS Operations ```nushell\\n# Encrypt data\\nkms encrypt \\"my secret data\\"\\n# vault:v1:8GawgGuP... # Decrypt data\\nkms decrypt \\"vault:v1:8GawgGuP...\\"\\n# my secret data # Check available backends\\nkms status\\n# {\\n# \\"backend\\": \\"rustyvault\\",\\n# \\"status\\": \\"healthy\\",\\n# \\"url\\": \\"http://localhost:8200\\"\\n# } # Encrypt with specific backend\\nkms encrypt \\"data\\" --backend age --key age1xxxxxxx\\n```plaintext ### 3. Orchestrator Operations ```nushell\\n# Check orchestrator status (no HTTP call)\\norch status\\n# {\\n# \\"active_tasks\\": 5,\\n# \\"completed_tasks\\": 120,\\n# \\"health\\": \\"healthy\\"\\n# } # Validate workflow\\norch validate workflows/deploy.k\\n# {\\n# \\"valid\\": true,\\n# \\"workflow\\": { \\"name\\": \\"deploy_k8s\\", \\"operations\\": 5 }\\n# } # List running tasks\\norch tasks --status running\\n# [ { \\"task_id\\": \\"task_123\\", \\"name\\": \\"deploy_k8s\\", \\"progress\\": 45 } ]\\n```plaintext ### 4. 
Combined Workflow ```nushell\\n# Complete authenticated deployment pipeline\\nauth login admin | if $in.success { auth verify } | if $in.active { orch validate workflows/production.k | if $in.valid { kms encrypt (open secrets.yaml | to json) | save production-secrets.enc } }\\n# ✓ Pipeline completed successfully\\n```plaintext --- ## Authentication Plugin (nu_plugin_auth) The authentication plugin manages JWT-based authentication, MFA enrollment/verification, and session management with OS-native keyring integration. ### Available Commands | Command | Purpose | Example |\\n|---------|---------|---------|\\n| `auth login` | Login and store JWT | `auth login admin` |\\n| `auth logout` | Logout and clear tokens | `auth logout` |\\n| `auth verify` | Verify current session | `auth verify` |\\n| `auth sessions` | List active sessions | `auth sessions` |\\n| `auth mfa enroll` | Enroll in MFA | `auth mfa enroll totp` |\\n| `auth mfa verify` | Verify MFA code | `auth mfa verify --code 123456` | ### Command Reference #### `auth login [password]` Login to provisioning platform and store JWT tokens securely in OS keyring. 
**Arguments:** - `username` (required): Username for authentication\\n- `password` (optional): Password (prompted if not provided) **Flags:** - `--url `: Control center URL (default: `http://localhost:3000`)\\n- `--password `: Password (alternative to positional argument) **Examples:** ```nushell\\n# Interactive password prompt (recommended)\\nauth login admin\\n# Password: ••••••••\\n# ✓ Login successful\\n# User: admin\\n# Role: Admin\\n# Expires: 2025-10-09T14:30:00Z # Password in command (not recommended for production)\\nauth login admin mypassword # Custom control center URL\\nauth login admin --url https://control-center.example.com # Pipeline usage\\nlet creds = { username: \\"admin\\", password: (input --suppress-output \\"Password: \\") }\\nauth login $creds.username $creds.password\\n```plaintext **Token Storage Locations:** - **macOS**: Keychain Access (`login` keychain)\\n- **Linux**: Secret Service API (gnome-keyring, kwallet)\\n- **Windows**: Windows Credential Manager **Security Notes:** - Tokens encrypted at rest by OS\\n- Requires user authentication to access (macOS Touch ID, Linux password)\\n- Never stored in plain text files #### `auth logout` Logout from current session and remove stored tokens from keyring. **Examples:** ```nushell\\n# Simple logout\\nauth logout\\n# ✓ Logged out successfully # Conditional logout\\nif (auth verify | get active) { auth logout echo \\"Session terminated\\"\\n} # Logout all sessions (requires admin role)\\nauth sessions | each { |sess| auth logout --session-id $sess.session_id\\n}\\n```plaintext #### `auth verify` Verify current session status and check token validity. 
**Returns:** - `active` (bool): Whether session is active\\n- `user` (string): Username\\n- `role` (string): User role\\n- `expires_at` (datetime): Token expiration\\n- `mfa_verified` (bool): MFA verification status **Examples:** ```nushell\\n# Check if logged in\\nauth verify\\n# {\\n# \\"active\\": true,\\n# \\"user\\": \\"admin\\",\\n# \\"role\\": \\"Admin\\",\\n# \\"expires_at\\": \\"2025-10-09T14:30:00Z\\",\\n# \\"mfa_verified\\": true\\n# } # Pipeline usage\\nif (auth verify | get active) { echo \\"✓ Authenticated\\"\\n} else { auth login admin\\n} # Check expiration\\nlet session = auth verify\\nif ($session.expires_at | into datetime) < (date now) { echo \\"Session expired, re-authenticating...\\" auth login $session.user\\n}\\n```plaintext #### `auth sessions` List all active sessions for current user. **Examples:** ```nushell\\n# List all sessions\\nauth sessions\\n# [\\n# {\\n# \\"session_id\\": \\"sess_abc123\\",\\n# \\"created_at\\": \\"2025-10-09T12:00:00Z\\",\\n# \\"expires_at\\": \\"2025-10-09T14:30:00Z\\",\\n# \\"ip_address\\": \\"192.168.1.100\\",\\n# \\"user_agent\\": \\"nushell/0.107.1\\"\\n# }\\n# ] # Filter recent sessions (last hour)\\nauth sessions | where created_at > ((date now) - 1hr) # Find sessions by IP\\nauth sessions | where ip_address =~ \\"192.168\\" # Count active sessions\\nauth sessions | length\\n```plaintext #### `auth mfa enroll ` Enroll in Multi-Factor Authentication (TOTP or WebAuthn). **Arguments:** - `type` (required): MFA type (`totp` or `webauthn`) **TOTP Enrollment:** ```nushell\\nauth mfa enroll totp\\n# ✓ TOTP enrollment initiated\\n#\\n# Scan this QR code with your authenticator app:\\n#\\n# ████ ▄▄▄▄▄ █▀█ █▄▀▀▀▄ ▄▄▄▄▄ ████\\n# ████ █ █ █▀▀▀█▄ ▀▀█ █ █ ████\\n# ████ █▄▄▄█ █ █▀▄ ▀▄▄█ █▄▄▄█ ████\\n# (QR code continues...)\\n#\\n# Or enter manually:\\n# Secret: JBSWY3DPEHPK3PXP\\n# URL: otpauth://totp/Provisioning:admin?secret=JBSWY3DPEHPK3PXP&issuer=Provisioning\\n#\\n# Backup codes (save securely):\\n# 1. 
ABCD-EFGH-IJKL\\n# 2. MNOP-QRST-UVWX\\n# 3. YZAB-CDEF-GHIJ\\n# (8 more codes...)\\n```plaintext **WebAuthn Enrollment:** ```nushell\\nauth mfa enroll webauthn\\n# ✓ WebAuthn enrollment initiated\\n#\\n# Insert your security key and touch the button...\\n# (waiting for device interaction)\\n#\\n# ✓ Security key registered successfully\\n# Device: YubiKey 5 NFC\\n# Created: 2025-10-09T13:00:00Z\\n```plaintext **Supported Authenticator Apps:** - Google Authenticator\\n- Microsoft Authenticator\\n- Authy\\n- 1Password\\n- Bitwarden **Supported Hardware Keys:** - YubiKey (all models)\\n- Titan Security Key\\n- Feitian ePass\\n- macOS Touch ID\\n- Windows Hello #### `auth mfa verify --code ` Verify MFA code (TOTP or backup code). **Flags:** - `--code ` (required): 6-digit TOTP code or backup code **Examples:** ```nushell\\n# Verify TOTP code\\nauth mfa verify --code 123456\\n# ✓ MFA verification successful # Verify backup code\\nauth mfa verify --code ABCD-EFGH-IJKL\\n# ✓ MFA verification successful (backup code used)\\n# Warning: This backup code cannot be used again # Pipeline usage\\nlet code = input \\"MFA code: \\"\\nauth mfa verify --code $code\\n```plaintext **Error Cases:** ```nushell\\n# Invalid code\\nauth mfa verify --code 999999\\n# Error: Invalid MFA code\\n# → Verify time synchronization on your device # Rate limited\\nauth mfa verify --code 123456\\n# Error: Too many failed attempts\\n# → Wait 5 minutes before trying again # No MFA enrolled\\nauth mfa verify --code 123456\\n# Error: MFA not enrolled for this user\\n# → Run: auth mfa enroll totp\\n```plaintext ### Environment Variables | Variable | Description | Default |\\n|----------|-------------|---------|\\n| `USER` | Default username | Current OS user |\\n| `CONTROL_CENTER_URL` | Control center URL | `http://localhost:3000` |\\n| `AUTH_KEYRING_SERVICE` | Keyring service name | `provisioning-auth` | ### Troubleshooting Authentication **\\"No active session\\"** ```nushell\\n# Solution: Login 
first\\nauth login \\n```plaintext **\\"Keyring error\\" (macOS)** ```bash\\n# Check Keychain Access permissions\\n# System Preferences → Security & Privacy → Privacy → Full Disk Access\\n# Add: /Applications/Nushell.app (or /usr/local/bin/nu) # Or grant access manually\\nsecurity unlock-keychain ~/Library/Keychains/login.keychain-db\\n```plaintext **\\"Keyring error\\" (Linux)** ```bash\\n# Install keyring service\\nsudo apt install gnome-keyring # Ubuntu/Debian\\nsudo dnf install gnome-keyring # Fedora\\nsudo pacman -S gnome-keyring # Arch # Or use KWallet (KDE)\\nsudo apt install kwalletmanager # Start keyring daemon\\neval $(gnome-keyring-daemon --start)\\nexport $(gnome-keyring-daemon --start --components=secrets)\\n```plaintext **\\"MFA verification failed\\"** ```nushell\\n# Check time synchronization (TOTP requires accurate time)\\n# macOS:\\nsudo sntp -sS time.apple.com # Linux:\\nsudo ntpdate pool.ntp.org\\n# Or\\nsudo systemctl restart systemd-timesyncd # Use backup code if TOTP not working\\nauth mfa verify --code ABCD-EFGH-IJKL\\n```plaintext --- ## KMS Plugin (nu_plugin_kms) The KMS plugin provides high-performance encryption and decryption using multiple backend providers. 
### Supported Backends | Backend | Performance | Use Case | Setup Complexity |\\n|---------|------------|----------|------------------|\\n| **rustyvault** | ⚡ Very Fast (~5ms) | Production KMS | Medium |\\n| **age** | ⚡ Very Fast (~3ms) | Local development | Low |\\n| **cosmian** | 🐢 Moderate (~30ms) | Cloud KMS | Medium |\\n| **aws** | 🐢 Moderate (~50ms) | AWS environments | Medium |\\n| **vault** | 🐢 Moderate (~40ms) | Enterprise KMS | High | ### Backend Selection Guide **Choose `rustyvault` when:** - ✅ Running in production with high throughput requirements\\n- ✅ Need ~5ms encryption/decryption latency\\n- ✅ Have RustyVault server deployed\\n- ✅ Require key rotation and versioning **Choose `age` when:** - ✅ Developing locally without external dependencies\\n- ✅ Need simple file encryption\\n- ✅ Want ~3ms latency\\n- ❌ Don\'t need centralized key management **Choose `cosmian` when:** - ✅ Using Cosmian KMS service\\n- ✅ Need cloud-based key management\\n- ⚠️ Can accept ~30ms latency **Choose `aws` when:** - ✅ Deployed on AWS infrastructure\\n- ✅ Using AWS IAM for access control\\n- ✅ Need AWS KMS integration\\n- ⚠️ Can accept ~50ms latency **Choose `vault` when:** - ✅ Using HashiCorp Vault enterprise\\n- ✅ Need advanced policy management\\n- ✅ Require audit trails\\n- ⚠️ Can accept ~40ms latency ### Available Commands | Command | Purpose | Example |\\n|---------|---------|---------|\\n| `kms encrypt` | Encrypt data | `kms encrypt \\"secret\\"` |\\n| `kms decrypt` | Decrypt data | `kms decrypt \\"vault:v1:...\\"` |\\n| `kms generate-key` | Generate DEK | `kms generate-key --spec AES256` |\\n| `kms status` | Backend status | `kms status` | ### Command Reference #### `kms encrypt [--backend ]` Encrypt data using specified KMS backend. 
**Arguments:** - `data` (required): Data to encrypt (string or binary) **Flags:** - `--backend `: KMS backend (`rustyvault`, `age`, `cosmian`, `aws`, `vault`)\\n- `--key `: Key ID or recipient (backend-specific)\\n- `--context `: Additional authenticated data (AAD) **Examples:** ```nushell\\n# Auto-detect backend from environment\\nkms encrypt \\"secret configuration data\\"\\n# vault:v1:8GawgGuP+emDKX5q... # RustyVault backend\\nkms encrypt \\"data\\" --backend rustyvault --key provisioning-main\\n# vault:v1:abc123def456... # Age backend (local encryption)\\nkms encrypt \\"data\\" --backend age --key age1xxxxxxxxx\\n# -----BEGIN AGE ENCRYPTED FILE-----\\n# YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+...\\n# -----END AGE ENCRYPTED FILE----- # AWS KMS\\nkms encrypt \\"data\\" --backend aws --key alias/provisioning\\n# AQICAHhwbGF0Zm9ybS1wcm92aXNpb25p... # With context (AAD for additional security)\\nkms encrypt \\"data\\" --backend rustyvault --key provisioning-main --context \\"user=admin,env=production\\" # Encrypt file contents\\nkms encrypt (open config.yaml) --backend rustyvault | save config.yaml.enc # Encrypt multiple files\\nls configs/*.yaml | each { |file| kms encrypt (open $file.name) --backend age | save $\\"encrypted/($file.name).enc\\"\\n}\\n```plaintext **Output Formats:** - **RustyVault**: `vault:v1:base64_ciphertext`\\n- **Age**: `-----BEGIN AGE ENCRYPTED FILE-----...-----END AGE ENCRYPTED FILE-----`\\n- **AWS**: `base64_aws_kms_ciphertext`\\n- **Cosmian**: `cosmian:v1:base64_ciphertext` #### `kms decrypt [--backend ]` Decrypt KMS-encrypted data. 
**Arguments:** - `encrypted` (required): Encrypted data (detects format automatically) **Flags:** - `--backend `: KMS backend (auto-detected from format if not specified)\\n- `--context `: Additional authenticated data (must match encryption context) **Examples:** ```nushell\\n# Auto-detect backend from format\\nkms decrypt \\"vault:v1:8GawgGuP...\\"\\n# secret configuration data # Explicit backend\\nkms decrypt \\"vault:v1:abc123...\\" --backend rustyvault # Age decryption\\nkms decrypt \\"-----BEGIN AGE ENCRYPTED FILE-----...\\"\\n# (uses AGE_IDENTITY from environment) # With context (must match encryption context)\\nkms decrypt \\"vault:v1:abc123...\\" --context \\"user=admin,env=production\\" # Decrypt file\\nkms decrypt (open config.yaml.enc) | save config.yaml # Decrypt multiple files\\nls encrypted/*.enc | each { |file| kms decrypt (open $file.name) | save $\\"configs/(($file.name | path basename) | str replace \'.enc\' \'\')\\"\\n} # Pipeline decryption\\nopen secrets.json | get database_password_enc | kms decrypt | str trim | psql --dbname mydb --password\\n```plaintext **Error Cases:** ```nushell\\n# Invalid ciphertext\\nkms decrypt \\"invalid_data\\"\\n# Error: Invalid ciphertext format\\n# → Verify data was encrypted with KMS # Context mismatch\\nkms decrypt \\"vault:v1:abc...\\" --context \\"wrong=context\\"\\n# Error: Authentication failed (AAD mismatch)\\n# → Verify encryption context matches # Backend unavailable\\nkms decrypt \\"vault:v1:abc...\\"\\n# Error: Failed to connect to RustyVault at http://localhost:8200\\n# → Check RustyVault is running: curl http://localhost:8200/v1/sys/health\\n```plaintext #### `kms generate-key [--spec ]` Generate data encryption key (DEK) using KMS envelope encryption. 
**Flags:** - `--spec `: Key specification (`AES128` or `AES256`, default: `AES256`)\\n- `--backend `: KMS backend **Examples:** ```nushell\\n# Generate AES-256 key\\nkms generate-key\\n# {\\n# \\"plaintext\\": \\"rKz3N8xPq...\\", # base64-encoded key\\n# \\"ciphertext\\": \\"vault:v1:...\\", # encrypted DEK\\n# \\"spec\\": \\"AES256\\"\\n# } # Generate AES-128 key\\nkms generate-key --spec AES128 # Use in envelope encryption pattern\\nlet dek = kms generate-key\\nlet encrypted_data = ($data | openssl enc -aes-256-cbc -K $dek.plaintext)\\n{ data: $encrypted_data, encrypted_key: $dek.ciphertext\\n} | save secure_data.json # Later, decrypt:\\nlet envelope = open secure_data.json\\nlet dek = kms decrypt $envelope.encrypted_key\\n$envelope.data | openssl enc -d -aes-256-cbc -K $dek\\n```plaintext **Use Cases:** - Envelope encryption (encrypt large data locally, protect DEK with KMS)\\n- Database field encryption\\n- File encryption with key wrapping #### `kms status` Show KMS backend status, configuration, and health. **Examples:** ```nushell\\n# Show current backend status\\nkms status\\n# {\\n# \\"backend\\": \\"rustyvault\\",\\n# \\"status\\": \\"healthy\\",\\n# \\"url\\": \\"http://localhost:8200\\",\\n# \\"mount_point\\": \\"transit\\",\\n# \\"version\\": \\"0.1.0\\",\\n# \\"latency_ms\\": 5\\n# } # Check all configured backends\\nkms status --all\\n# [\\n# { \\"backend\\": \\"rustyvault\\", \\"status\\": \\"healthy\\", ... },\\n# { \\"backend\\": \\"age\\", \\"status\\": \\"available\\", ... 
},\\n# { \\"backend\\": \\"aws\\", \\"status\\": \\"unavailable\\", \\"error\\": \\"...\\" }\\n# ] # Filter to specific backend\\nkms status | where backend == \\"rustyvault\\" # Health check in automation\\nif (kms status | get status) == \\"healthy\\" { echo \\"✓ KMS operational\\"\\n} else { error make { msg: \\"KMS unhealthy\\" }\\n}\\n```plaintext ### Backend Configuration #### RustyVault Backend ```bash\\n# Environment variables\\nexport RUSTYVAULT_ADDR=\\"http://localhost:8200\\"\\nexport RUSTYVAULT_TOKEN=\\"hvs.xxxxxxxxxxxxx\\"\\nexport RUSTYVAULT_MOUNT=\\"transit\\" # Transit engine mount point\\nexport RUSTYVAULT_KEY=\\"provisioning-main\\" # Default key name\\n```plaintext ```nushell\\n# Usage\\nkms encrypt \\"data\\" --backend rustyvault --key provisioning-main\\n```plaintext **Setup RustyVault:** ```bash\\n# Start RustyVault\\nrustyvault server -dev # Enable transit engine\\nrustyvault secrets enable transit # Create encryption key\\nrustyvault write -f transit/keys/provisioning-main\\n```plaintext #### Age Backend ```bash\\n# Generate Age keypair\\nage-keygen -o ~/.age/key.txt # Environment variables\\nexport AGE_IDENTITY=\\"$HOME/.age/key.txt\\" # Private key\\nexport AGE_RECIPIENT=\\"age1xxxxxxxxx\\" # Public key (from key.txt)\\n```plaintext ```nushell\\n# Usage\\nkms encrypt \\"data\\" --backend age\\nkms decrypt (open file.enc) --backend age\\n```plaintext #### AWS KMS Backend ```bash\\n# AWS credentials\\nexport AWS_REGION=\\"us-east-1\\"\\nexport AWS_ACCESS_KEY_ID=\\"AKIAXXXXX\\"\\nexport AWS_SECRET_ACCESS_KEY=\\"xxxxx\\" # KMS configuration\\nexport AWS_KMS_KEY_ID=\\"alias/provisioning\\"\\n```plaintext ```nushell\\n# Usage\\nkms encrypt \\"data\\" --backend aws --key alias/provisioning\\n```plaintext **Setup AWS KMS:** ```bash\\n# Create KMS key\\naws kms create-key --description \\"Provisioning Platform\\" # Create alias\\naws kms create-alias --alias-name alias/provisioning --target-key-id # Grant permissions\\naws kms create-grant --key-id 
--grantee-principal \\\\ --operations Encrypt Decrypt GenerateDataKey\\n```plaintext #### Cosmian Backend ```bash\\n# Cosmian KMS configuration\\nexport KMS_HTTP_URL=\\"http://localhost:9998\\"\\nexport KMS_HTTP_BACKEND=\\"cosmian\\"\\nexport COSMIAN_API_KEY=\\"your-api-key\\"\\n```plaintext ```nushell\\n# Usage\\nkms encrypt \\"data\\" --backend cosmian\\n```plaintext #### Vault Backend (HashiCorp) ```bash\\n# Vault configuration\\nexport VAULT_ADDR=\\"https://vault.example.com:8200\\"\\nexport VAULT_TOKEN=\\"hvs.xxxxxxxxxxxxx\\"\\nexport VAULT_MOUNT=\\"transit\\"\\nexport VAULT_KEY=\\"provisioning\\"\\n```plaintext ```nushell\\n# Usage\\nkms encrypt \\"data\\" --backend vault --key provisioning\\n```plaintext ### Performance Benchmarks **Test Setup:** - Data size: 1KB\\n- Iterations: 1000\\n- Hardware: Apple M1, 16GB RAM\\n- Network: localhost **Results:** | Backend | Encrypt (avg) | Decrypt (avg) | Throughput (ops/sec) |\\n|---------|---------------|---------------|----------------------|\\n| RustyVault | 4.8ms | 5.1ms | ~200 |\\n| Age | 2.9ms | 3.2ms | ~320 |\\n| Cosmian HTTP | 31ms | 29ms | ~33 |\\n| AWS KMS | 52ms | 48ms | ~20 |\\n| Vault | 38ms | 41ms | ~25 | **Scaling Test (1000 operations):** ```nushell\\n# RustyVault: ~5 seconds\\n0..1000 | each { |_| kms encrypt \\"data\\" --backend rustyvault } | length\\n# Age: ~3 seconds\\n0..1000 | each { |_| kms encrypt \\"data\\" --backend age } | length\\n```plaintext ### Troubleshooting KMS **\\"RustyVault connection failed\\"** ```bash\\n# Check RustyVault is running\\ncurl http://localhost:8200/v1/sys/health\\n# Expected: { \\"initialized\\": true, \\"sealed\\": false } # Check environment\\necho $env.RUSTYVAULT_ADDR\\necho $env.RUSTYVAULT_TOKEN # Test authentication\\ncurl -H \\"X-Vault-Token: $RUSTYVAULT_TOKEN\\" $RUSTYVAULT_ADDR/v1/sys/health\\n```plaintext **\\"Age encryption failed\\"** ```bash\\n# Check Age keys exist\\nls -la ~/.age/\\n# Expected: key.txt # Verify key format\\ncat ~/.age/key.txt | head 
-1\\n# Expected: # created: \\n# Line 2: # public key: age1xxxxx\\n# Line 3: AGE-SECRET-KEY-xxxxx # Extract public key\\nexport AGE_RECIPIENT=$(grep \\"public key:\\" ~/.age/key.txt | cut -d: -f2 | tr -d \' \')\\necho $AGE_RECIPIENT\\n```plaintext **\\"AWS KMS access denied\\"** ```bash\\n# Verify AWS credentials\\naws sts get-caller-identity\\n# Expected: Account, UserId, Arn # Check KMS key permissions\\naws kms describe-key --key-id alias/provisioning # Test encryption\\naws kms encrypt --key-id alias/provisioning --plaintext \\"test\\"\\n```plaintext --- ## Orchestrator Plugin (nu_plugin_orchestrator) The orchestrator plugin provides direct file-based access to orchestrator state, eliminating HTTP overhead for status queries and validation. ### Available Commands | Command | Purpose | Example |\\n|---------|---------|---------|\\n| `orch status` | Orchestrator status | `orch status` |\\n| `orch validate` | Validate workflow | `orch validate workflow.k` |\\n| `orch tasks` | List tasks | `orch tasks --status running` | ### Command Reference #### `orch status [--data-dir ]` Get orchestrator status from local files (no HTTP, ~1ms latency). **Flags:** - `--data-dir `: Data directory (default from `ORCHESTRATOR_DATA_DIR`) **Examples:** ```nushell\\n# Default data directory\\norch status\\n# {\\n# \\"active_tasks\\": 5,\\n# \\"completed_tasks\\": 120,\\n# \\"failed_tasks\\": 2,\\n# \\"pending_tasks\\": 3,\\n# \\"uptime\\": \\"2d 4h 15m\\",\\n# \\"health\\": \\"healthy\\"\\n# } # Custom data directory\\norch status --data-dir /opt/orchestrator/data # Monitor in loop\\nwhile true { clear orch status | table sleep 5sec\\n} # Alert on failures\\nif (orch status | get failed_tasks) > 0 { echo \\"⚠️ Failed tasks detected!\\"\\n}\\n```plaintext #### `orch validate [--strict]` Validate workflow KCL file syntax and structure. 
**Arguments:** - `workflow.k` (required): Path to KCL workflow file **Flags:** - `--strict`: Enable strict validation (warnings as errors) **Examples:** ```nushell\\n# Basic validation\\norch validate workflows/deploy.k\\n# {\\n# \\"valid\\": true,\\n# \\"workflow\\": {\\n# \\"name\\": \\"deploy_k8s_cluster\\",\\n# \\"version\\": \\"1.0.0\\",\\n# \\"operations\\": 5\\n# },\\n# \\"warnings\\": [],\\n# \\"errors\\": []\\n# } # Strict mode (warnings cause failure)\\norch validate workflows/deploy.k --strict\\n# Error: Validation failed with warnings:\\n# - Operation \'create_servers\': Missing retry_policy\\n# - Operation \'install_k8s\': Resource limits not specified # Validate all workflows\\nls workflows/*.k | each { |file| let result = orch validate $file.name if $result.valid { echo $\\"✓ ($file.name)\\" } else { echo $\\"✗ ($file.name): ($result.errors | str join \', \')\\" }\\n} # CI/CD validation\\ntry { orch validate workflow.k --strict echo \\"✓ Validation passed\\"\\n} catch { echo \\"✗ Validation failed\\" exit 1\\n}\\n```plaintext **Validation Checks:** - ✅ KCL syntax correctness\\n- ✅ Required fields present (`name`, `version`, `operations`)\\n- ✅ Dependency graph valid (no cycles)\\n- ✅ Resource limits within bounds\\n- ✅ Provider configurations valid\\n- ✅ Operation types supported\\n- ⚠️ Optional: Retry policies defined\\n- ⚠️ Optional: Resource limits specified #### `orch tasks [--status ] [--limit ]` List orchestrator tasks from local state. 
**Flags:** - `--status `: Filter by status (`pending`, `running`, `completed`, `failed`)\\n- `--limit `: Limit results (default: 100)\\n- `--data-dir `: Data directory **Examples:** ```nushell\\n# All tasks (last 100)\\norch tasks\\n# [\\n# {\\n# \\"task_id\\": \\"task_abc123\\",\\n# \\"name\\": \\"deploy_kubernetes\\",\\n# \\"status\\": \\"running\\",\\n# \\"priority\\": 5,\\n# \\"created_at\\": \\"2025-10-09T12:00:00Z\\",\\n# \\"progress\\": 45\\n# }\\n# ] # Running tasks only\\norch tasks --status running # Failed tasks (last 10)\\norch tasks --status failed --limit 10 # Pending high-priority tasks\\norch tasks --status pending | where priority > 7 # Monitor active tasks\\nwatch { orch tasks --status running | select name progress updated_at | table\\n} # Count tasks by status\\norch tasks | group-by status | each { |group| { status: $group.0, count: ($group.1 | length) }\\n}\\n```plaintext ### Environment Variables | Variable | Description | Default |\\n|----------|-------------|---------|\\n| `ORCHESTRATOR_DATA_DIR` | Data directory | `provisioning/platform/orchestrator/data` | ### Performance Comparison | Operation | HTTP API | Plugin | Latency Reduction |\\n|-----------|----------|--------|-------------------|\\n| Status query | ~30ms | ~1ms | **97% faster** |\\n| Validate workflow | ~100ms | ~10ms | **90% faster** |\\n| List tasks | ~50ms | ~5ms | **90% faster** | **Use Case: CI/CD Pipeline** ```nushell\\n# HTTP approach (slow)\\nhttp get http://localhost:9090/tasks --status running | each { |task| http get $\\"http://localhost:9090/tasks/($task.id)\\" }\\n# Total: ~500ms for 10 tasks # Plugin approach (fast)\\norch tasks --status running\\n# Total: ~5ms for 10 tasks\\n# Result: 100x faster\\n```plaintext ### Troubleshooting Orchestrator **\\"Failed to read status\\"** ```bash\\n# Check data directory exists\\nls -la provisioning/platform/orchestrator/data/ # Create if missing\\nmkdir -p provisioning/platform/orchestrator/data # Check permissions (must be 
readable)\\nchmod 755 provisioning/platform/orchestrator/data\\n```plaintext **\\"Workflow validation failed\\"** ```nushell\\n# Use strict mode for detailed errors\\norch validate workflows/deploy.k --strict # Check KCL syntax manually\\nkcl fmt workflows/deploy.k\\nkcl run workflows/deploy.k\\n```plaintext **\\"No tasks found\\"** ```bash\\n# Check orchestrator running\\nps aux | grep orchestrator # Start orchestrator if not running\\ncd provisioning/platform/orchestrator\\n./scripts/start-orchestrator.nu --background # Check task files\\nls provisioning/platform/orchestrator/data/tasks/\\n```plaintext --- ## Integration Examples ### Example 1: Complete Authenticated Deployment Full workflow with authentication, secrets, and deployment: ```nushell\\n# Step 1: Login with MFA\\nauth login admin\\nauth mfa verify --code (input \\"MFA code: \\") # Step 2: Verify orchestrator health\\nif (orch status | get health) != \\"healthy\\" { error make { msg: \\"Orchestrator unhealthy\\" }\\n} # Step 3: Validate deployment workflow\\nlet validation = orch validate workflows/production-deploy.k --strict\\nif not $validation.valid { error make { msg: $\\"Validation failed: ($validation.errors)\\" }\\n} # Step 4: Encrypt production secrets\\nlet secrets = open secrets/production.yaml\\nkms encrypt ($secrets | to json) --backend rustyvault --key prod-main | save secrets/production.enc # Step 5: Submit deployment\\nprovisioning cluster create production --check # Step 6: Monitor progress\\nwhile (orch tasks --status running | length) > 0 { orch tasks --status running | select name progress updated_at | table sleep 10sec\\n} echo \\"✓ Deployment complete\\"\\n```plaintext ### Example 2: Batch Secret Rotation Rotate all secrets in multiple environments: ```nushell\\n# Rotate database passwords\\n[\\"dev\\", \\"staging\\", \\"production\\"] | each { |env| # Generate new password let new_password = (openssl rand -base64 32) # Encrypt with environment-specific key let encrypted = kms 
encrypt $new_password --backend rustyvault --key $\\"($env)-main\\" # Save encrypted password { environment: $env, password_enc: $encrypted, rotated_at: (date now | format date \\"%Y-%m-%d %H:%M:%S\\") } | save $\\"secrets/db-password-($env).json\\" echo $\\"✓ Rotated password for ($env)\\"\\n}\\n```plaintext ### Example 3: Multi-Environment Deployment Deploy to multiple environments with validation: ```nushell\\n# Define environments\\nlet environments = [ { name: \\"dev\\", validate: \\"basic\\" }, { name: \\"staging\\", validate: \\"strict\\" }, { name: \\"production\\", validate: \\"strict\\", mfa_required: true }\\n] # Deploy to each environment\\n$environments | each { |env| echo $\\"Deploying to ($env.name)...\\" # Authenticate if production if $env.mfa_required? { if not (auth verify | get mfa_verified) { auth mfa verify --code (input $\\"MFA code for ($env.name): \\") } } # Validate workflow let validation = if $env.validate == \\"strict\\" { orch validate $\\"workflows/($env.name)-deploy.k\\" --strict } else { orch validate $\\"workflows/($env.name)-deploy.k\\" } if not $validation.valid { echo $\\"✗ Validation failed for ($env.name)\\" continue } # Decrypt secrets let secrets = kms decrypt (open $\\"secrets/($env.name).enc\\") # Deploy provisioning cluster create $env.name echo $\\"✓ Deployed to ($env.name)\\"\\n}\\n```plaintext ### Example 4: Automated Backup and Encryption Backup configuration files with encryption: ```nushell\\n# Backup script\\nlet backup_dir = $\\"backups/(date now | format date \\"%Y%m%d-%H%M%S\\")\\"\\nmkdir $backup_dir # Backup and encrypt configs\\nls configs/**/*.yaml | each { |file| let encrypted = kms encrypt (open $file.name) --backend age let backup_path = $\\"($backup_dir)/($file.name | path basename).enc\\" $encrypted | save $backup_path echo $\\"✓ Backed up ($file.name)\\"\\n} # Create manifest\\n{ backup_date: (date now), files: (ls $\\"($backup_dir)/*.enc\\" | length), backend: \\"age\\"\\n} | save 
$\\"($backup_dir)/manifest.json\\" echo $\\"✓ Backup complete: ($backup_dir)\\"\\n```plaintext ### Example 5: Health Monitoring Dashboard Real-time health monitoring: ```nushell\\n# Health dashboard\\nwhile true { clear # Header echo \\"=== Provisioning Platform Health Dashboard ===\\" echo $\\"Updated: (date now | format date \\"%Y-%m-%d %H:%M:%S\\")\\" echo \\"\\" # Authentication status let auth_status = try { auth verify } catch { { active: false } } echo $\\"Auth: (if $auth_status.active { \'✓ Active\' } else { \'✗ Inactive\' })\\" # KMS status let kms_health = kms status echo $\\"KMS: (if $kms_health.status == \'healthy\' { \'✓ Healthy\' } else { \'✗ Unhealthy\' })\\" # Orchestrator status let orch_health = orch status echo $\\"Orchestrator: (if $orch_health.health == \'healthy\' { \'✓ Healthy\' } else { \'✗ Unhealthy\' })\\" echo $\\"Active Tasks: ($orch_health.active_tasks)\\" echo $\\"Failed Tasks: ($orch_health.failed_tasks)\\" # Task summary echo \\"\\" echo \\"=== Running Tasks ===\\" orch tasks --status running | select name progress updated_at | table sleep 10sec\\n}\\n```plaintext --- ## Best Practices ### When to Use Plugins vs HTTP **✅ Use Plugins When:** - Performance is critical (high-frequency operations)\\n- Working in pipelines (Nushell data structures)\\n- Need offline capability (KMS, orchestrator local ops)\\n- Building automation scripts\\n- CI/CD pipelines **Use HTTP When:** - Calling from external systems (not Nushell)\\n- Need consistent REST API interface\\n- Cross-language integration\\n- Web UI backend ### Performance Optimization **1. Batch Operations** ```nushell\\n# ❌ Slow: Individual HTTP calls in loop\\nls configs/*.yaml | each { |file| http post http://localhost:9998/encrypt { data: (open $file.name) }\\n}\\n# Total: ~5 seconds (50ms × 100) # ✅ Fast: Plugin in pipeline\\nls configs/*.yaml | each { |file| kms encrypt (open $file.name)\\n}\\n# Total: ~0.5 seconds (5ms × 100)\\n```plaintext **2. 
Parallel Processing** ```nushell\\n# Process multiple operations in parallel\\nls configs/*.yaml | par-each { |file| kms encrypt (open $file.name) | save $\\"encrypted/($file.name).enc\\" }\\n```plaintext **3. Caching Session State** ```nushell\\n# Cache auth verification\\nlet $auth_cache = auth verify\\nif $auth_cache.active { # Use cached result instead of repeated calls echo $\\"Authenticated as ($auth_cache.user)\\"\\n}\\n```plaintext ### Error Handling **Graceful Degradation:** ```nushell\\n# Try plugin, fallback to HTTP if unavailable\\ndef kms_encrypt [data: string] { try { kms encrypt $data } catch { http post http://localhost:9998/encrypt { data: $data } | get encrypted }\\n}\\n```plaintext **Comprehensive Error Handling:** ```nushell\\n# Handle all error cases\\ndef safe_deployment [] { # Check authentication let auth_status = try { auth verify } catch { echo \\"✗ Authentication failed, logging in...\\" auth login admin auth verify } # Check KMS health let kms_health = try { kms status } catch { error make { msg: \\"KMS unavailable, cannot proceed\\" } } # Validate workflow let validation = try { orch validate workflow.k --strict } catch { error make { msg: \\"Workflow validation failed\\" } } # Proceed if all checks pass if $auth_status.active and $kms_health.status == \\"healthy\\" and $validation.valid { echo \\"✓ All checks passed, deploying...\\" provisioning cluster create production }\\n}\\n```plaintext ### Security Best Practices **1. Never Log Decrypted Data** ```nushell\\n# ❌ BAD: Logs plaintext password\\nlet password = kms decrypt $encrypted_password\\necho $\\"Password: ($password)\\" # Visible in logs! # ✅ GOOD: Use directly without logging\\nlet password = kms decrypt $encrypted_password\\npsql --dbname mydb --password $password # Not logged\\n```plaintext **2. 
Use Context (AAD) for Critical Data** ```nushell\\n# Encrypt with context\\nlet context = $\\"user=(whoami),env=production,date=(date now | format date \\"%Y-%m-%d\\")\\"\\nkms encrypt $sensitive_data --context $context # Decrypt requires same context\\nkms decrypt $encrypted --context $context\\n```plaintext **3. Rotate Backup Codes** ```nushell\\n# After using backup code, generate new set\\nauth mfa verify --code ABCD-EFGH-IJKL\\n# Warning: Backup code used\\nauth mfa regenerate-backups\\n# New backup codes generated\\n```plaintext **4. Limit Token Lifetime** ```nushell\\n# Check token expiration before long operations\\nlet session = auth verify\\nlet expires_in = (($session.expires_at | into datetime) - (date now))\\nif $expires_in < 5min { echo \\"⚠️ Token expiring soon, re-authenticating...\\" auth login $session.user\\n}\\n```plaintext --- ## Troubleshooting ### Common Issues Across Plugins **\\"Plugin not found\\"** ```bash\\n# Check plugin registration\\nplugin list | where name =~ \\"auth|kms|orch\\" # Re-register if missing\\ncd provisioning/core/plugins/nushell-plugins\\nplugin add target/release/nu_plugin_auth\\nplugin add target/release/nu_plugin_kms\\nplugin add target/release/nu_plugin_orchestrator # Restart Nushell\\nexit\\nnu\\n```plaintext **\\"Plugin command failed\\"** ```nushell\\n# Enable debug mode\\n$env.RUST_LOG = \\"debug\\" # Run command again to see detailed errors\\nkms encrypt \\"test\\" # Check plugin version compatibility\\nplugin list | where name =~ \\"kms\\" | select name version\\n```plaintext **\\"Permission denied\\"** ```bash\\n# Check plugin executable permissions\\nls -l provisioning/core/plugins/nushell-plugins/target/release/nu_plugin_*\\n# Should show: -rwxr-xr-x # Fix if needed\\nchmod +x provisioning/core/plugins/nushell-plugins/target/release/nu_plugin_*\\n```plaintext ### Platform-Specific Issues **macOS Issues:** ```bash\\n# \\"cannot be opened because the developer cannot be verified\\"\\nxattr -d 
com.apple.quarantine target/release/nu_plugin_auth\\nxattr -d com.apple.quarantine target/release/nu_plugin_kms\\nxattr -d com.apple.quarantine target/release/nu_plugin_orchestrator # Keychain access denied\\n# System Preferences → Security & Privacy → Privacy → Full Disk Access\\n# Add: /usr/local/bin/nu\\n```plaintext **Linux Issues:** ```bash\\n# Keyring service not running\\nsystemctl --user status gnome-keyring-daemon\\nsystemctl --user start gnome-keyring-daemon # Missing dependencies\\nsudo apt install libssl-dev pkg-config # Ubuntu/Debian\\nsudo dnf install openssl-devel # Fedora\\n```plaintext **Windows Issues:** ```powershell\\n# Credential Manager access denied\\n# Control Panel → User Accounts → Credential Manager\\n# Ensure Windows Credential Manager service is running # Missing Visual C++ runtime\\n# Download from: https://aka.ms/vs/17/release/vc_redist.x64.exe\\n```plaintext ### Debugging Techniques **Enable Verbose Logging:** ```nushell\\n# Set log level\\n$env.RUST_LOG = \\"debug,nu_plugin_auth=trace\\" # Run command\\nauth login admin # Check logs\\n```plaintext **Test Plugin Directly:** ```bash\\n# Test plugin communication (advanced)\\necho \'{\\"Call\\": [0, {\\"name\\": \\"auth\\", \\"call\\": \\"login\\", \\"args\\": [\\"admin\\", \\"password\\"]}]}\' \\\\ | target/release/nu_plugin_auth\\n```plaintext **Check Plugin Health:** ```nushell\\n# Test each plugin\\nauth --help # Should show auth commands\\nkms --help # Should show kms commands\\norch --help # Should show orch commands # Test functionality\\nauth verify # Should return session status\\nkms status # Should return backend status\\norch status # Should return orchestrator status\\n```plaintext --- ## Migration Guide ### Migrating from HTTP to Plugin-Based **Phase 1: Install Plugins (No Breaking Changes)** ```bash\\n# Build and register plugins\\ncd provisioning/core/plugins/nushell-plugins\\ncargo build --release --all\\nplugin add target/release/nu_plugin_auth\\nplugin add 
target/release/nu_plugin_kms\\nplugin add target/release/nu_plugin_orchestrator # Verify HTTP still works\\nhttp get http://localhost:9090/health\\n```plaintext **Phase 2: Update Scripts Incrementally** ```nushell\\n# Before (HTTP)\\ndef encrypt_config [file: string] { let data = open $file let result = http post http://localhost:9998/encrypt { data: $data } $result.encrypted | save $\\"($file).enc\\"\\n} # After (Plugin with fallback)\\ndef encrypt_config [file: string] { let data = open $file let encrypted = try { kms encrypt $data --backend rustyvault } catch { # Fallback to HTTP if plugin unavailable (http post http://localhost:9998/encrypt { data: $data }).encrypted } $encrypted | save $\\"($file).enc\\"\\n}\\n```plaintext **Phase 3: Test Migration** ```nushell\\n# Run side-by-side comparison\\ndef test_migration [] { let test_data = \\"test secret data\\" # Plugin approach let start_plugin = date now let plugin_result = kms encrypt $test_data let plugin_time = ((date now) - $start_plugin) # HTTP approach let start_http = date now let http_result = (http post http://localhost:9998/encrypt { data: $test_data }).encrypted let http_time = ((date now) - $start_http) echo $\\"Plugin: ($plugin_time)ms\\" echo $\\"HTTP: ($http_time)ms\\" echo $\\"Speedup: (($http_time / $plugin_time))x\\"\\n}\\n```plaintext **Phase 4: Gradual Rollout** ```nushell\\n# Use feature flag for controlled rollout\\n$env.USE_PLUGINS = true def encrypt_with_flag [data: string] { if $env.USE_PLUGINS { kms encrypt $data } else { (http post http://localhost:9998/encrypt { data: $data }).encrypted }\\n}\\n```plaintext **Phase 5: Full Migration** ```nushell\\n# Replace all HTTP calls with plugin calls\\n# Remove fallback logic once stable\\ndef encrypt_config [file: string] { let data = open $file kms encrypt $data --backend rustyvault | save $\\"($file).enc\\"\\n}\\n```plaintext ### Rollback Strategy ```nushell\\n# If issues arise, quickly rollback\\ndef rollback_to_http [] { # Remove plugin 
registrations plugin rm nu_plugin_auth plugin rm nu_plugin_kms plugin rm nu_plugin_orchestrator # Restart Nushell exec nu\\n}\\n```plaintext --- ## Advanced Configuration ### Custom Plugin Paths ```nushell\\n# ~/.config/nushell/config.nu\\n$env.PLUGIN_PATH = \\"/opt/provisioning/plugins\\" # Register from custom location\\nplugin add $\\"($env.PLUGIN_PATH)/nu_plugin_auth\\"\\nplugin add $\\"($env.PLUGIN_PATH)/nu_plugin_kms\\"\\nplugin add $\\"($env.PLUGIN_PATH)/nu_plugin_orchestrator\\"\\n```plaintext ### Environment-Specific Configuration ```nushell\\n# ~/.config/nushell/env.nu # Development environment\\nif ($env.ENV? == \\"dev\\") { $env.RUSTYVAULT_ADDR = \\"http://localhost:8200\\" $env.CONTROL_CENTER_URL = \\"http://localhost:3000\\"\\n} # Staging environment\\nif ($env.ENV? == \\"staging\\") { $env.RUSTYVAULT_ADDR = \\"https://vault-staging.example.com\\" $env.CONTROL_CENTER_URL = \\"https://control-staging.example.com\\"\\n} # Production environment\\nif ($env.ENV? == \\"prod\\") { $env.RUSTYVAULT_ADDR = \\"https://vault.example.com\\" $env.CONTROL_CENTER_URL = \\"https://control.example.com\\"\\n}\\n```plaintext ### Plugin Aliases ```nushell\\n# ~/.config/nushell/config.nu # Auth shortcuts\\nalias login = auth login\\nalias logout = auth logout\\nalias whoami = auth verify | get user # KMS shortcuts\\nalias encrypt = kms encrypt\\nalias decrypt = kms decrypt # Orchestrator shortcuts\\nalias status = orch status\\nalias tasks = orch tasks\\nalias validate = orch validate\\n```plaintext ### Custom Commands ```nushell\\n# ~/.config/nushell/custom_commands.nu # Encrypt all files in directory\\ndef encrypt-dir [dir: string] { ls $\\"($dir)/**/*\\" | where type == file | each { |file| kms encrypt (open $file.name) | save $\\"($file.name).enc\\" echo $\\"✓ Encrypted ($file.name)\\" }\\n} # Decrypt all files in directory\\ndef decrypt-dir [dir: string] { ls $\\"($dir)/**/*.enc\\" | each { |file| kms decrypt (open $file.name) | save (echo $file.name | str replace 
\'.enc\' \'\') echo $\\"✓ Decrypted ($file.name)\\" }\\n} # Monitor deployments\\ndef watch-deployments [] { while true { clear echo \\"=== Active Deployments ===\\" orch tasks --status running | table sleep 5sec }\\n}\\n```plaintext --- ## Security Considerations ### Threat Model **What Plugins Protect Against:** - ✅ Network eavesdropping (no HTTP for KMS/orch)\\n- ✅ Token theft from files (keyring storage)\\n- ✅ Credential exposure in logs (prompt-based input)\\n- ✅ Man-in-the-middle attacks (local file access) **What Plugins Don\'t Protect Against:** - ❌ Memory dumping (decrypted data in RAM)\\n- ❌ Malicious plugins (trust registry only)\\n- ❌ Compromised OS keyring\\n- ❌ Physical access to machine ### Secure Deployment **1. Verify Plugin Integrity** ```bash\\n# Check plugin signatures (if available)\\nsha256sum target/release/nu_plugin_auth\\n# Compare with published checksums # Build from trusted source\\ngit clone https://github.com/provisioning-platform/plugins\\ncd plugins\\ncargo build --release --all\\n```plaintext **2. Restrict Plugin Access** ```bash\\n# Set plugin permissions (only owner can execute)\\nchmod 700 target/release/nu_plugin_* # Store in protected directory\\nsudo mkdir -p /opt/provisioning/plugins\\nsudo chown $(whoami):$(whoami) /opt/provisioning/plugins\\nsudo chmod 755 /opt/provisioning/plugins\\nmv target/release/nu_plugin_* /opt/provisioning/plugins/\\n```plaintext **3. Audit Plugin Usage** ```nushell\\n# Log plugin calls (for compliance)\\ndef logged_encrypt [data: string] { let timestamp = date now let result = kms encrypt $data { timestamp: $timestamp, action: \\"encrypt\\" } | save --append audit.log $result\\n}\\n```plaintext **4. 
Rotate Credentials Regularly** ```nushell\\n# Weekly credential rotation script\\ndef rotate_credentials [] {\\n # Re-authenticate\\n auth logout\\n auth login admin\\n\\n # Rotate KMS keys (if supported)\\n kms rotate-key --key provisioning-main\\n\\n # Re-encrypt stored secrets under the new key\\n ls secrets/*.enc | each { |file|\\n let plain = (kms decrypt (open $file.name))\\n kms encrypt $plain | save --force $file.name\\n }\\n}\\n```plaintext --- ## FAQ **Q: Can I use plugins without RustyVault/Age installed?** A: Yes, the authentication and orchestrator plugins work independently. The KMS plugin requires at least one backend configured (Age is easiest for local development). **Q: Do plugins work in CI/CD pipelines?** A: Yes. For headless environments (no OS keyring), use environment variables for auth or file-based tokens. ```bash\\n# CI/CD example\\nexport CONTROL_CENTER_TOKEN=\\"jwt-token-here\\"\\nkms encrypt \\"data\\" --backend age\\n```plaintext **Q: How do I update plugins?** A: Rebuild and re-register: ```bash\\ncd provisioning/core/plugins/nushell-plugins\\ngit pull\\ncargo build --release --all\\nplugin add --force target/release/nu_plugin_auth\\nplugin add --force target/release/nu_plugin_kms\\nplugin add --force target/release/nu_plugin_orchestrator\\n```plaintext **Q: Can I use multiple KMS backends simultaneously?** A: Yes, specify `--backend` for each operation: ```nushell\\nkms encrypt \\"data1\\" --backend rustyvault\\nkms encrypt \\"data2\\" --backend age\\nkms encrypt \\"data3\\" --backend aws\\n```plaintext **Q: What happens if a plugin crashes?** A: Nushell isolates plugin crashes. The command fails with an error, but Nushell continues running. Check logs with `$env.RUST_LOG = \\"debug\\"`. **Q: Are plugins compatible with older Nushell versions?** A: Plugins require Nushell 0.107.1+. For older versions, use the HTTP API. **Q: How do I back up MFA enrollment?** A: Save backup codes securely (password manager, encrypted file). The QR code can be re-scanned from the same secret. 
```nushell\\n# Save backup codes\\nauth mfa enroll totp | save mfa-backup-codes.txt\\nkms encrypt (open mfa-backup-codes.txt) | save mfa-backup-codes.enc\\nrm mfa-backup-codes.txt\\n```plaintext **Q: Can plugins work offline?** A: Partially: - ✅ `kms` with Age backend (fully offline)\\n- ✅ `orch` status/tasks (reads local files)\\n- ❌ `auth` (requires control center)\\n- ❌ `kms` with RustyVault/AWS/Vault (requires network) **Q: How do I troubleshoot plugin performance?** A: Use Nushell\'s timing: ```nushell\\ntimeit { kms encrypt \\"data\\" }\\n# 5ms 123μs 456ns timeit { http post http://localhost:9998/encrypt { data: \\"data\\" } }\\n# 52ms 789μs 123ns\\n```plaintext --- ## Related Documentation - **Security System**: `/Users/Akasha/project-provisioning/docs/architecture/ADR-009-security-system-complete.md`\\n- **JWT Authentication**: `/Users/Akasha/project-provisioning/docs/architecture/JWT_AUTH_IMPLEMENTATION.md`\\n- **Config Encryption**: `/Users/Akasha/project-provisioning/docs/user/CONFIG_ENCRYPTION_GUIDE.md`\\n- **RustyVault Integration**: `/Users/Akasha/project-provisioning/RUSTYVAULT_INTEGRATION_SUMMARY.md`\\n- **MFA Implementation**: `/Users/Akasha/project-provisioning/docs/architecture/MFA_IMPLEMENTATION_SUMMARY.md`\\n- **Nushell Plugins Reference**: `/Users/Akasha/project-provisioning/docs/user/NUSHELL_PLUGINS_GUIDE.md` --- **Version**: 1.0.0\\n**Maintained By**: Platform Team\\n**Last Updated**: 2025-10-09\\n**Feedback**: Open an issue or contact ","breadcrumbs":"Plugin Integration Guide » Architecture Benefits","id":"1617","title":"Architecture Benefits"},"1618":{"body":"Complete guide to authentication, KMS, and orchestrator plugins.","breadcrumbs":"NuShell Plugins Guide » Nushell Plugins for Provisioning Platform","id":"1618","title":"Nushell Plugins for Provisioning Platform"},"1619":{"body":"Three native Nushell plugins provide high-performance integration with the provisioning platform: nu_plugin_auth - JWT authentication and MFA operations 
nu_plugin_kms - Key management (RustyVault, Age, Cosmian, AWS, Vault) nu_plugin_orchestrator - Orchestrator operations (status, validate, tasks)","breadcrumbs":"NuShell Plugins Guide » Overview","id":"1619","title":"Overview"},"162":{"body":"Edit the generated configuration: # Edit with your preferred editor\\n$EDITOR workspace/infra/my-infra/settings.k Example configuration: import provisioning.settings as cfg # Infrastructure settings\\ninfra_settings = cfg.InfraSettings { name = \\"my-infra\\" provider = \\"local\\" # Start with local provider environment = \\"development\\"\\n} # Server configuration\\nservers = [ { hostname = \\"dev-server-01\\" cores = 2 memory = 4096 # MB disk = 50 # GB }\\n]","breadcrumbs":"First Deployment » Step 2: Edit Configuration","id":"162","title":"Step 2: Edit Configuration"},"1620":{"body":"Performance Advantages : 10x faster than HTTP API calls (KMS operations) Direct access to Rust libraries (no HTTP overhead) Native integration with Nushell pipelines Type safety with Nushell\'s type system Developer Experience : Pipeline friendly - Use Nushell pipes naturally Tab completion - All commands and flags Consistent interface - Follows Nushell conventions Error handling - Nushell-native error messages","breadcrumbs":"NuShell Plugins Guide » Why Native Plugins?","id":"1620","title":"Why Native Plugins?"},"1621":{"body":"","breadcrumbs":"NuShell Plugins Guide » Installation","id":"1621","title":"Installation"},"1622":{"body":"Nushell 0.107.1+ Rust toolchain (for building from source) Access to provisioning platform services","breadcrumbs":"NuShell Plugins Guide » Prerequisites","id":"1622","title":"Prerequisites"},"1623":{"body":"cd /Users/Akasha/project-provisioning/provisioning/core/plugins/nushell-plugins # Build all plugins at once\\ncargo build --release --all # Or build individually\\ncargo build --release -p nu_plugin_auth\\ncargo build 
--release -p nu_plugin_kms\\ncargo build --release -p nu_plugin_orchestrator\\n```plaintext ### Register with Nushell ```bash\\n# Register all plugins\\nplugin add target/release/nu_plugin_auth\\nplugin add target/release/nu_plugin_kms\\nplugin add target/release/nu_plugin_orchestrator # Verify registration\\nplugin list | where name =~ \\"provisioning\\"\\n```plaintext ### Verify Installation ```bash\\n# Test auth commands\\nauth --help # Test KMS commands\\nkms --help # Test orchestrator commands\\norch --help\\n```plaintext --- ## Plugin: nu_plugin_auth Authentication plugin for JWT login, MFA enrollment, and session management. ### Commands #### `auth login [password]` Login to provisioning platform and store JWT tokens securely. **Arguments**: - `username` (required): Username for authentication\\n- `password` (optional): Password (prompts interactively if not provided) **Flags**: - `--url `: Control center URL (default: `http://localhost:9080`)\\n- `--password `: Password (alternative to positional argument) **Examples**: ```nushell\\n# Interactive password prompt (recommended)\\nauth login admin # Password in command (not recommended for production)\\nauth login admin mypassword # Custom URL\\nauth login admin --url http://control-center:9080 # Pipeline usage\\n\\"admin\\" | auth login\\n```plaintext **Token Storage**:\\nTokens are stored securely in OS-native keyring: - **macOS**: Keychain Access\\n- **Linux**: Secret Service (gnome-keyring, kwallet)\\n- **Windows**: Credential Manager **Success Output**: ```plaintext\\n✓ Login successful\\nUser: admin\\nRole: Admin\\nExpires: 2025-10-09T14:30:00Z\\n```plaintext --- #### `auth logout` Logout from current session and remove stored tokens. 
**Examples**: ```nushell\\n# Simple logout\\nauth logout # Pipeline usage (conditional logout)\\nif (auth verify | get active) { auth logout }\\n```plaintext **Success Output**: ```plaintext\\n✓ Logged out successfully\\n```plaintext --- #### `auth verify` Verify current session and check token validity. **Examples**: ```nushell\\n# Check session status\\nauth verify # Pipeline usage\\nauth verify | if $in.active { echo \\"Session valid\\" } else { echo \\"Session expired\\" }\\n```plaintext **Success Output**: ```json\\n{ \\"active\\": true, \\"user\\": \\"admin\\", \\"role\\": \\"Admin\\", \\"expires_at\\": \\"2025-10-09T14:30:00Z\\", \\"mfa_verified\\": true\\n}\\n```plaintext --- #### `auth sessions` List all active sessions for current user. **Examples**: ```nushell\\n# List sessions\\nauth sessions # Filter by date\\nauth sessions | where created_at > (date now | date to-timezone UTC | into string)\\n```plaintext **Output Format**: ```json\\n[ { \\"session_id\\": \\"sess_abc123\\", \\"created_at\\": \\"2025-10-09T12:00:00Z\\", \\"expires_at\\": \\"2025-10-09T14:30:00Z\\", \\"ip_address\\": \\"192.168.1.100\\", \\"user_agent\\": \\"nushell/0.107.1\\" }\\n]\\n```plaintext --- #### `auth mfa enroll ` Enroll in MFA (TOTP or WebAuthn). **Arguments**: - `type` (required): MFA type (`totp` or `webauthn`) **Examples**: ```nushell\\n# Enroll TOTP (Google Authenticator, Authy)\\nauth mfa enroll totp # Enroll WebAuthn (YubiKey, Touch ID, Windows Hello)\\nauth mfa enroll webauthn\\n```plaintext **TOTP Enrollment Output**: ```plaintext\\n✓ TOTP enrollment initiated Scan this QR code with your authenticator app: ████ ▄▄▄▄▄ █▀█ █▄▀▀▀▄ ▄▄▄▄▄ ████ ████ █ █ █▀▀▀█▄ ▀▀█ █ █ ████ ████ █▄▄▄█ █ █▀▄ ▀▄▄█ █▄▄▄█ ████ ... Or enter manually:\\nSecret: JBSWY3DPEHPK3PXP\\nURL: otpauth://totp/Provisioning:admin?secret=JBSWY3DPEHPK3PXP&issuer=Provisioning Backup codes (save securely):\\n1. ABCD-EFGH-IJKL\\n2. 
MNOP-QRST-UVWX\\n...\\n```plaintext --- #### `auth mfa verify --code ` Verify MFA code (TOTP or backup code). **Flags**: - `--code ` (required): 6-digit TOTP code or backup code **Examples**: ```nushell\\n# Verify TOTP code\\nauth mfa verify --code 123456 # Verify backup code\\nauth mfa verify --code ABCD-EFGH-IJKL\\n```plaintext **Success Output**: ```plaintext\\n✓ MFA verification successful\\n```plaintext --- ### Environment Variables | Variable | Description | Default |\\n|----------|-------------|---------|\\n| `USER` | Default username | Current OS user |\\n| `CONTROL_CENTER_URL` | Control center URL | `http://localhost:9080` | --- ### Error Handling **Common Errors**: ```nushell\\n# \\"No active session\\"\\nError: No active session found\\n→ Run: auth login # \\"Invalid credentials\\"\\nError: Authentication failed: Invalid username or password\\n→ Check username and password # \\"Token expired\\"\\nError: Token has expired\\n→ Run: auth login # \\"MFA required\\"\\nError: MFA verification required\\n→ Run: auth mfa verify --code # \\"Keyring error\\" (macOS)\\nError: Failed to access keyring\\n→ Check Keychain Access permissions # \\"Keyring error\\" (Linux)\\nError: Failed to access keyring\\n→ Install gnome-keyring or kwallet\\n```plaintext --- ## Plugin: nu_plugin_kms Key Management Service plugin supporting multiple backends. ### Supported Backends | Backend | Description | Use Case |\\n|---------|-------------|----------|\\n| `rustyvault` | RustyVault Transit engine | Production KMS |\\n| `age` | Age encryption (local) | Development/testing |\\n| `cosmian` | Cosmian KMS (HTTP) | Cloud KMS |\\n| `aws` | AWS KMS | AWS environments |\\n| `vault` | HashiCorp Vault | Enterprise KMS | ### Commands #### `kms encrypt [--backend ]` Encrypt data using KMS. 
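Conceptually, every backend produces a prefixed ciphertext that `kms decrypt` can later recognize. A toy shell sketch of that contract (not the plugin's real cryptography; `base64` stands in for the backend cipher, and the `toy:v1:` prefix mirrors formats like `vault:v1:...`):

```shell
# Toy model of the encrypt/decrypt contract (NOT real crypto):
# the ciphertext carries a backend prefix so decrypt can
# auto-detect which backend produced it.
toy_encrypt() { printf 'toy:v1:%s' "$(printf '%s' "$1" | base64)"; }
toy_decrypt() {
  case "$1" in
    toy:v1:*) printf '%s' "${1#toy:v1:}" | base64 -d ;;
    *) echo "unknown backend prefix" >&2; return 1 ;;
  esac
}

ct=$(toy_encrypt "secret data")
toy_decrypt "$ct"   # round-trips the original plaintext
```

The real plugin replaces the base64 step with the selected backend's cipher; only the prefix-dispatch idea carries over.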
**Arguments**: - `data` (required): Data to encrypt (string or binary) **Flags**: - `--backend `: KMS backend (`rustyvault`, `age`, `cosmian`, `aws`, `vault`)\\n- `--key `: Key ID or recipient (backend-specific)\\n- `--context `: Additional authenticated data (AAD) **Examples**: ```nushell\\n# Auto-detect backend from environment\\nkms encrypt \\"secret data\\" # RustyVault\\nkms encrypt \\"data\\" --backend rustyvault --key provisioning-main # Age (local encryption)\\nkms encrypt \\"data\\" --backend age --key age1xxxxxxxxx # AWS KMS\\nkms encrypt \\"data\\" --backend aws --key alias/provisioning # With context (AAD)\\nkms encrypt \\"data\\" --backend rustyvault --key provisioning-main --context \\"user=admin\\"\\n```plaintext **Output Format**: ```plaintext\\nvault:v1:abc123def456...\\n```plaintext --- #### `kms decrypt [--backend ]` Decrypt KMS-encrypted data. **Arguments**: - `encrypted` (required): Encrypted data (base64 or KMS format) **Flags**: - `--backend `: KMS backend (auto-detected if not specified)\\n- `--context `: Additional authenticated data (AAD, must match encryption) **Examples**: ```nushell\\n# Auto-detect backend\\nkms decrypt \\"vault:v1:abc123def456...\\" # RustyVault explicit\\nkms decrypt \\"vault:v1:abc123...\\" --backend rustyvault # Age\\nkms decrypt \\"-----BEGIN AGE ENCRYPTED FILE-----...\\" --backend age # With context\\nkms decrypt \\"vault:v1:abc123...\\" --backend rustyvault --context \\"user=admin\\"\\n```plaintext **Output**: ```plaintext\\nsecret data\\n```plaintext --- #### `kms generate-key [--spec ]` Generate data encryption key (DEK) using KMS. 
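A DEK's size follows directly from the `--spec` value: AES128 is 16 bytes, AES256 is 32 bytes. A small shell sketch that checks this locally, with `/dev/urandom` as a stand-in key source (the real key material comes from the KMS, not this script):

```shell
# DEK sizes implied by --spec (AES128 = 16 bytes, AES256 = 32 bytes).
dek_bytes() {
  case "$1" in
    AES128) echo 16 ;;
    AES256) echo 32 ;;
    *) echo "unsupported spec" >&2; return 1 ;;
  esac
}

# Generate a random local stand-in DEK of the right size (NOT from KMS)
spec=AES256
key=$(head -c "$(dek_bytes "$spec")" /dev/urandom | base64)
echo "$key" | base64 -d | wc -c   # decoded length: 32 bytes for AES256
```

The decoded length of the base64 `plaintext` field returned by any backend should match these sizes, which makes this a quick sanity check.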
**Flags**: - `--spec `: Key specification (`AES128` or `AES256`, default: `AES256`)\\n- `--backend `: KMS backend **Examples**: ```nushell\\n# Generate AES-256 key\\nkms generate-key # Generate AES-128 key\\nkms generate-key --spec AES128 # Specific backend\\nkms generate-key --backend rustyvault\\n```plaintext **Output Format**: ```json\\n{ \\"plaintext\\": \\"base64-encoded-key\\", \\"ciphertext\\": \\"vault:v1:encrypted-key\\", \\"spec\\": \\"AES256\\"\\n}\\n```plaintext --- #### `kms status` Show KMS backend status and configuration. **Examples**: ```nushell\\n# Show status\\nkms status # Filter to specific backend\\nkms status | where backend == \\"rustyvault\\"\\n```plaintext **Output Format**: ```json\\n{ \\"backend\\": \\"rustyvault\\", \\"status\\": \\"healthy\\", \\"url\\": \\"http://localhost:8200\\", \\"mount_point\\": \\"transit\\", \\"version\\": \\"0.1.0\\"\\n}\\n```plaintext --- ### Environment Variables **RustyVault Backend**: ```bash\\nexport RUSTYVAULT_ADDR=\\"http://localhost:8200\\"\\nexport RUSTYVAULT_TOKEN=\\"your-token-here\\"\\nexport RUSTYVAULT_MOUNT=\\"transit\\"\\n```plaintext **Age Backend**: ```bash\\nexport AGE_RECIPIENT=\\"age1xxxxxxxxx\\"\\nexport AGE_IDENTITY=\\"/path/to/key.txt\\"\\n```plaintext **HTTP Backend (Cosmian)**: ```bash\\nexport KMS_HTTP_URL=\\"http://localhost:9998\\"\\nexport KMS_HTTP_BACKEND=\\"cosmian\\"\\n```plaintext **AWS KMS**: ```bash\\nexport AWS_REGION=\\"us-east-1\\"\\nexport AWS_ACCESS_KEY_ID=\\"...\\"\\nexport AWS_SECRET_ACCESS_KEY=\\"...\\"\\n```plaintext --- ### Performance Comparison | Operation | HTTP API | Plugin | Improvement |\\n|-----------|----------|--------|-------------|\\n| Encrypt (RustyVault) | ~50ms | ~5ms | **10x faster** |\\n| Decrypt (RustyVault) | ~50ms | ~5ms | **10x faster** |\\n| Encrypt (Age) | ~30ms | ~3ms | **10x faster** |\\n| Decrypt (Age) | ~30ms | ~3ms | **10x faster** |\\n| Generate Key | ~60ms | ~8ms | **7.5x faster** | --- ## Plugin: nu_plugin_orchestrator Orchestrator 
operations plugin for status, validation, and task management. ### Commands #### `orch status [--data-dir ]` Get orchestrator status from local files (no HTTP). **Flags**: - `--data-dir `: Data directory (default: `provisioning/platform/orchestrator/data`) **Examples**: ```nushell\\n# Default data dir\\norch status # Custom dir\\norch status --data-dir ./custom/data # Pipeline usage\\norch status | if $in.active_tasks > 0 { echo \\"Tasks running\\" }\\n```plaintext **Output Format**: ```json\\n{ \\"active_tasks\\": 5, \\"completed_tasks\\": 120, \\"failed_tasks\\": 2, \\"pending_tasks\\": 3, \\"uptime\\": \\"2d 4h 15m\\", \\"health\\": \\"healthy\\"\\n}\\n```plaintext --- #### `orch validate