Configuration System Usage Guide

Practical guide for using the provisioning platform configuration system across common scenarios.

Quick Start (5 Minutes)

For Local Development

# 1. Enter configuration system directory
cd provisioning/.typedialog/provisioning/platform

# 2. Generate solo configuration (interactive)
nu scripts/configure.nu orchestrator solo --backend cli

# 3. Export to TOML
nu scripts/generate-configs.nu orchestrator solo

# 4. Start orchestrator
cd ../../..   # back up to provisioning/
ORCHESTRATOR_CONFIG=platform/config/orchestrator.solo.toml cargo run --bin orchestrator

For Team Staging

# 1. Generate multiuser configuration
cd provisioning/.typedialog/provisioning/platform
nu scripts/configure.nu control-center multiuser --backend web

# 2. Export configuration
nu scripts/generate-configs.nu control-center multiuser

# 3. Start with Docker Compose
cd ../../..   # back up to provisioning/
docker-compose -f platform/infrastructure/docker/docker-compose.multiuser.yml up -d

For Production Enterprise

# 1. Generate enterprise configuration
cd provisioning/.typedialog/provisioning/platform
nu scripts/configure.nu orchestrator enterprise --backend web

# 2. Export configuration
nu scripts/generate-configs.nu orchestrator enterprise

# 3. Deploy to Kubernetes
cd ../../..   # back up to provisioning/
kubectl apply -f platform/infrastructure/kubernetes/namespace.yaml
kubectl apply -f platform/infrastructure/kubernetes/

Scenario 1: Single Developer Setup

Goal: Set up a local orchestrator for development testing
Time: 5-10 minutes
Requirements: Nushell, Nickel, Rust toolchain

Step 1: Interactive Configuration

cd provisioning/.typedialog/provisioning/platform
nu scripts/configure.nu orchestrator solo --backend cli

Form Fields:

  • Workspace name: dev-workspace (default)
  • Workspace path: /home/username/provisioning/data/orchestrator (change to your path)
  • Server host: 127.0.0.1 (localhost only)
  • Server port: 9090 (default)
  • Storage backend: filesystem (selected by default)
  • Logging level: debug (recommended for dev)

Step 2: Validate Configuration

# Typecheck the generated Nickel
nickel typecheck configs/orchestrator.solo.ncl

# Should output: "✓ Type checking successful"
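
To sweep every composed config at once, a small loop works (a sketch; adjust the glob if your layout differs):

# Typecheck all composed Nickel configs in one pass
for f in configs/*.ncl; do
  nickel typecheck "$f" || echo "FAILED: $f"
done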

Step 3: Export to TOML

# Generate TOML from Nickel
nu scripts/generate-configs.nu orchestrator solo

# Output: provisioning/platform/config/orchestrator.solo.toml
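
The exported file should mirror the answers from Step 1. A rough sketch of what to expect, run from the repository root (section and key names are illustrative; the authoritative shape comes from schemas/orchestrator.ncl):

# Inspect the exported TOML
head -n 20 provisioning/platform/config/orchestrator.solo.toml
# Expect something along these lines:
# [orchestrator.server]
# host = "127.0.0.1"
# port = 9090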

Step 4: Start the Service

cd ../../../..   # back up to the repository root
ORCHESTRATOR_CONFIG=provisioning/platform/config/orchestrator.solo.toml cargo run --bin orchestrator

Expected Output:

[INFO] Orchestrator starting...
[INFO] Server listening on 127.0.0.1:9090
[INFO] Storage backend: filesystem
[INFO] Ready to accept requests

Step 5: Test the Service

In another terminal:

# Check health
curl http://localhost:9090/health

# Submit a workflow
curl -X POST http://localhost:9090/api/workflows \
  -H "Content-Type: application/json" \
  -d '{"name": "test-workflow", "steps": []}'

Iteration: Modify Configuration

To change configuration:

Option A: Re-run Interactive Form

cd provisioning/.typedialog/provisioning/platform
nu scripts/configure.nu orchestrator solo --backend cli
# Answer with new values
nu scripts/generate-configs.nu orchestrator solo
# Restart service

Option B: Edit TOML Directly

# Edit the file directly
vi provisioning/platform/config/orchestrator.solo.toml
# Change values as needed
# Restart service

Option C: Environment Variable Override

# No file changes needed
export ORCHESTRATOR_SERVER_PORT=9999
export ORCHESTRATOR_LOG_LEVEL=info

ORCHESTRATOR_CONFIG=provisioning/platform/config/orchestrator.solo.toml cargo run --bin orchestrator

Scenario 2: Team Collaboration Setup

Goal: Set up a shared team environment with PostgreSQL and RBAC
Time: 20-30 minutes
Requirements: Docker, Docker Compose, PostgreSQL running

Step 1: Interactive Configuration

cd provisioning/.typedialog/provisioning/platform

# Configure Control Center with RBAC
nu scripts/configure.nu control-center multiuser --backend web

Important Fields:

  • Database backend: postgres (for persistent storage)
  • Database host: postgres.provisioning.svc.cluster.local in Kubernetes, or localhost for a local setup
  • Database password: Generate strong password (store in .env file, don't hardcode)
  • JWT secret: Generate 256-bit random string
  • MFA required: false (optional for team environments)
  • Default role: viewer (least privilege)

Step 2: Create Environment File

# Create .env for secrets
cat > provisioning/platform/.env << 'EOF'
DB_PASSWORD=generate-strong-password-here
JWT_SECRET=generate-256-bit-random-base64-string
SURREALDB_PASSWORD=another-strong-password
EOF

# Protect the file
chmod 600 provisioning/platform/.env
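
Rather than editing placeholders by hand, the values can be generated inline (one approach; openssl rand -base64 32 yields a 256-bit value):

# Populate .env with freshly generated secrets
cat > provisioning/platform/.env <<EOF
DB_PASSWORD=$(openssl rand -base64 24)
JWT_SECRET=$(openssl rand -base64 32)
SURREALDB_PASSWORD=$(openssl rand -base64 24)
EOF
chmod 600 provisioning/platform/.env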

Step 3: Export Configurations

# Export all three services for team setup
nu scripts/generate-configs.nu control-center multiuser
nu scripts/generate-configs.nu orchestrator multiuser
nu scripts/generate-configs.nu mcp-server multiuser

Step 4: Start Services with Docker Compose

cd ../../../..   # back up to the repository root

# Generate Docker Compose from Nickel template
nu provisioning/.typedialog/provisioning/platform/scripts/render-docker-compose.nu multiuser

# Start all services
docker-compose -f provisioning/platform/infrastructure/docker/docker-compose.multiuser.yml \
  --env-file provisioning/platform/.env \
  up -d

Verify Services:

# Check all services are running
docker-compose -f provisioning/platform/infrastructure/docker/docker-compose.multiuser.yml ps

# Check logs for errors
docker-compose -f provisioning/platform/infrastructure/docker/docker-compose.multiuser.yml logs -f control-center

# Test Control Center UI
open http://localhost:8080
# Log in with the default credentials (or those configured during initial setup)
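
A scripted health check avoids clicking through the UI. The /health path is confirmed for the orchestrator earlier in this guide; for Control Center it is an assumption, so adjust to your actual route:

# Probe services on the default ports used in this guide
curl -fsS http://localhost:9090/health && echo "orchestrator OK"
curl -fsS http://localhost:8080/health && echo "control-center OK (path assumed)"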

Step 5: Create Team Users and Roles

# Access PostgreSQL to set up users
docker-compose exec postgres psql -U provisioning -d provisioning

-- Create users
INSERT INTO users (username, email, role) VALUES
  ('alice@company.com', 'alice@company.com', 'admin'),
  ('bob@company.com', 'bob@company.com', 'operator'),
  ('charlie@company.com', 'charlie@company.com', 'developer');

-- Create RBAC assignments
INSERT INTO role_assignments (user_id, role) VALUES
  ((SELECT id FROM users WHERE username='alice@company.com'), 'admin'),
  ((SELECT id FROM users WHERE username='bob@company.com'), 'operator'),
  ((SELECT id FROM users WHERE username='charlie@company.com'), 'developer');
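
To confirm the rows landed, join the two tables from the shell (table and column names follow the inserts above):

docker-compose exec postgres psql -U provisioning -d provisioning -c \
  "SELECT u.username, r.role FROM users u JOIN role_assignments r ON r.user_id = u.id;"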

Step 6: Team Access

Admin (Alice):

  • Full platform access
  • Can create/modify users
  • Can manage all workflows and policies

Operator (Bob):

  • Execute and manage workflows
  • View logs and metrics
  • Cannot modify policies or users

Developer (Charlie):

  • Read-only access to workflows
  • Cannot execute or modify
  • Can view logs

Scenario 3: Production Enterprise Deployment

Goal: Deploy the complete platform to Kubernetes with HA and monitoring
Time: 1-2 hours (includes infrastructure setup)
Requirements: Kubernetes cluster, kubectl, Helm (optional)

Step 1: Pre-Deployment Checklist

# Verify Kubernetes access
kubectl cluster-info

# Create namespace
kubectl create namespace provisioning

# Verify persistent volumes available
kubectl get pv

# Check node resources
kubectl top nodes
# Minimum 16 CPU, 32GB RAM across cluster

Step 2: Interactive Configuration (Enterprise Mode)

cd provisioning/.typedialog/provisioning/platform

nu scripts/configure.nu orchestrator enterprise --backend web
nu scripts/configure.nu control-center enterprise --backend web
nu scripts/configure.nu mcp-server enterprise --backend web

Critical Enterprise Settings:

  • Deployment mode: enterprise
  • Replicas: Orchestrator (3), Control Center (2), MCP Server (1-2)
  • Storage:
    • Orchestrator: surrealdb_cluster with 3 nodes
    • Control Center: postgres with HA
  • Security:
    • Auth: jwt (required)
    • TLS: true (required)
    • MFA: true (required)
  • Monitoring: All enabled
  • Logging: JSON format with 365-day retention
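
Once Step 5 has exported the TOML configurations, it is worth spot-checking that these settings actually landed (the key names in the grep are assumptions; match them to your schema):

# Grep the enterprise config for the security-critical keys
grep -E 'replicas|tls|mfa|auth' \
  provisioning/platform/config/orchestrator.enterprise.toml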

Step 3: Generate Secrets

# Generate secure values
JWT_SECRET=$(openssl rand -base64 32)
DB_PASSWORD=$(openssl rand -base64 32)
SURREALDB_PASSWORD=$(openssl rand -base64 32)
ADMIN_PASSWORD=$(openssl rand -base64 16)

# Create Kubernetes secret
kubectl create secret generic provisioning-secrets \
  -n provisioning \
  --from-literal=jwt-secret="$JWT_SECRET" \
  --from-literal=db-password="$DB_PASSWORD" \
  --from-literal=surrealdb-password="$SURREALDB_PASSWORD" \
  --from-literal=admin-password="$ADMIN_PASSWORD"

# Verify secret created
kubectl get secrets -n provisioning
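
To confirm a value round-trips without echoing it, decode one key and check only its length:

# Expect 44: the length of the base64 text produced by openssl rand -base64 32
kubectl get secret provisioning-secrets -n provisioning \
  -o jsonpath='{.data.jwt-secret}' | base64 -d | wc -c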

Step 4: TLS Certificate Setup

# Generate self-signed certificate (for testing)
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout provisioning.key \
  -out provisioning.crt \
  -subj "/CN=provisioning.example.com"

# Create TLS secret in Kubernetes
kubectl create secret tls provisioning-tls \
  -n provisioning \
  --cert=provisioning.crt \
  --key=provisioning.key

# For production: Use cert-manager or real certificates
# kubectl create secret tls provisioning-tls \
#   -n provisioning \
#   --cert=/path/to/cert.pem \
#   --key=/path/to/key.pem
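
If cert-manager is the chosen route, the issuer is a one-time setup. A minimal sketch, assuming cert-manager is already installed; the email and ingress class are placeholders:

kubectl apply -f - <<'EOF'
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com              # placeholder
    privateKeySecretRef:
      name: letsencrypt-prod-key
    solvers:
      - http01:
          ingress:
            class: nginx                  # placeholder
EOF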

Step 5: Export Configurations

# Export TOML configurations
nu scripts/generate-configs.nu orchestrator enterprise
nu scripts/generate-configs.nu control-center enterprise
nu scripts/generate-configs.nu mcp-server enterprise

Step 6: Create ConfigMaps for Configuration

# Create ConfigMaps with exported TOML (run from the repository root)
cd ../../../..
kubectl create configmap orchestrator-config \
  -n provisioning \
  --from-file=provisioning/platform/config/orchestrator.enterprise.toml

kubectl create configmap control-center-config \
  -n provisioning \
  --from-file=provisioning/platform/config/control-center.enterprise.toml

kubectl create configmap mcp-server-config \
  -n provisioning \
  --from-file=provisioning/platform/config/mcp-server.enterprise.toml
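
Confirm the ConfigMaps exist and carry the TOML payloads:

kubectl get configmaps -n provisioning
kubectl describe configmap orchestrator-config -n provisioning | head -n 20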

Step 7: Deploy Infrastructure

# Deploy in order of dependencies
kubectl apply -f provisioning/platform/infrastructure/kubernetes/namespace.yaml
kubectl apply -f provisioning/platform/infrastructure/kubernetes/resource-quota.yaml
kubectl apply -f provisioning/platform/infrastructure/kubernetes/rbac.yaml
kubectl apply -f provisioning/platform/infrastructure/kubernetes/network-policy.yaml

# Deploy storage (PostgreSQL, SurrealDB); kubectl apply -f takes one path per flag
for f in provisioning/platform/infrastructure/kubernetes/{postgres,surrealdb}-*.yaml; do
  kubectl apply -f "$f"
done

# Wait for databases to be ready
kubectl wait --for=condition=ready pod -l app=postgres -n provisioning --timeout=300s
kubectl wait --for=condition=ready pod -l app=surrealdb -n provisioning --timeout=300s

# Deploy platform services
for f in provisioning/platform/infrastructure/kubernetes/{orchestrator,control-center,mcp-server}-*.yaml; do
  kubectl apply -f "$f"
done

# Deploy monitoring stack
for f in provisioning/platform/infrastructure/kubernetes/{prometheus,grafana,loki}-*.yaml; do
  kubectl apply -f "$f"
done

# Deploy ingress
kubectl apply -f provisioning/platform/infrastructure/kubernetes/platform-ingress.yaml

Step 8: Verify Deployment

# Check all pods are running
kubectl get pods -n provisioning

# Check services
kubectl get svc -n provisioning

# Wait for all pods ready
kubectl wait --for=condition=Ready pods --all -n provisioning --timeout=600s

# Check ingress
kubectl get ingress -n provisioning

Step 9: Access the Platform

# Get Ingress IP
kubectl get ingress -n provisioning

# Configure DNS (or use /etc/hosts for testing)
echo "INGRESS_IP provisioning.example.com" | sudo tee -a /etc/hosts

# Access services
# Orchestrator: https://orchestrator.provisioning.example.com/api
# Control Center: https://control-center.provisioning.example.com
# MCP Server: https://mcp.provisioning.example.com
# Grafana: https://grafana.provisioning.example.com (admin/password)
# Prometheus: https://prometheus.provisioning.example.com (internal)

Step 10: Post-Deployment Configuration

# Create database schema
kubectl exec -it -n provisioning deployment/postgres -- psql -U provisioning -d provisioning -f /schema.sql

# Initialize Grafana dashboards
kubectl cp grafana-dashboards provisioning/grafana-0:/var/lib/grafana/dashboards/

# Configure alerts
kubectl apply -f provisioning/platform/infrastructure/kubernetes/prometheus-alerts.yaml

Common Tasks

Change Configuration Value

Without Service Restart (Environment Variable):

# Override specific value via environment variable
export ORCHESTRATOR_LOG_LEVEL=debug
export ORCHESTRATOR_SERVER_PORT=9999

# Service uses overridden values
ORCHESTRATOR_CONFIG=config.toml cargo run --bin orchestrator

With Service Restart (TOML Edit):

# Edit TOML directly
vi provisioning/platform/config/orchestrator.solo.toml

# Restart service
pkill -f "cargo run --bin orchestrator"
ORCHESTRATOR_CONFIG=config.toml cargo run --bin orchestrator

With Validation (Regenerate from Form):

# Re-run interactive form to regenerate
cd provisioning/.typedialog/provisioning/platform
nu scripts/configure.nu orchestrator solo --backend cli

# Validation ensures consistency
nu scripts/generate-configs.nu orchestrator solo

# Restart service with validated config

Add Team Member

In Kubernetes PostgreSQL:

kubectl exec -it -n provisioning deployment/postgres -- psql -U provisioning -d provisioning

-- Create user (crypt() and gen_salt() require the pgcrypto extension)
INSERT INTO users (username, email, password_hash, role, created_at) VALUES
  ('newuser@company.com', 'newuser@company.com', crypt('password', gen_salt('bf')), 'developer', now());

-- Assign role
INSERT INTO role_assignments (user_id, role, granted_by, granted_at) VALUES
  ((SELECT id FROM users WHERE username='newuser@company.com'), 'developer', 1, now());

Scale Service Replicas

In Kubernetes:

# Scale orchestrator from 3 to 5 replicas
kubectl scale deployment orchestrator -n provisioning --replicas=5

# Verify scaling
kubectl get deployment orchestrator -n provisioning
kubectl get pods -n provisioning | grep orchestrator
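
For load-driven scaling, a HorizontalPodAutoscaler can replace manual scaling (a sketch; the thresholds are illustrative):

# Autoscale between 3 and 10 replicas, targeting 70% CPU
kubectl autoscale deployment orchestrator -n provisioning \
  --min=3 --max=10 --cpu-percent=70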

Monitor Service Health

# Check pod status
kubectl describe pod orchestrator-0 -n provisioning

# Check service logs
kubectl logs -f deployment/orchestrator -n provisioning --all-containers=true

# Check resource usage
kubectl top pods -n provisioning

# Check service metrics (via Prometheus)
kubectl port-forward -n provisioning svc/prometheus 9091:9091
open http://localhost:9091
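
With the port-forward in place, Prometheus can also be queried from the shell; the built-in up metric reports target health regardless of custom instrumentation:

# List scrape targets and their health (requires jq)
curl -s 'http://localhost:9091/api/v1/query?query=up' | \
  jq '.data.result[] | {job: .metric.job, up: .value[1]}'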

Backup Configuration

# Backup current TOML configs
tar -czf configs-backup-$(date +%Y%m%d).tar.gz provisioning/platform/config/

# Backup Kubernetes manifests
kubectl get all -n provisioning -o yaml > k8s-backup-$(date +%Y%m%d).yaml

# Backup database
kubectl exec -n provisioning deployment/postgres -- pg_dump -U provisioning provisioning | gzip > db-backup-$(date +%Y%m%d).sql.gz
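
The matching restore path streams each artifact back (a sketch; substitute the actual backup date for YYYYMMDD):

# Restore the TOML configs
tar -xzf configs-backup-YYYYMMDD.tar.gz

# Restore the database dump into the running postgres pod
gunzip -c db-backup-YYYYMMDD.sql.gz | \
  kubectl exec -i -n provisioning deployment/postgres -- \
  psql -U provisioning -d provisioning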

Troubleshoot Configuration Issues

# Check Nickel syntax errors
nickel typecheck provisioning/.typedialog/provisioning/platform/configs/orchestrator.solo.ncl

# Validate TOML syntax
nickel export --format toml provisioning/.typedialog/provisioning/platform/configs/orchestrator.solo.ncl

# Check TOML is valid for Rust
ORCHESTRATOR_CONFIG=provisioning/platform/config/orchestrator.solo.toml cargo run --bin orchestrator -- --validate-config

# Check environment variable overrides
echo $ORCHESTRATOR_SERVER_PORT
echo $ORCHESTRATOR_LOG_LEVEL

# Examine actual config loaded (if service logs it)
ORCHESTRATOR_CONFIG=config.toml cargo run --bin orchestrator 2>&1 | grep -i "config\|configuration"

Configuration File Locations

provisioning/.typedialog/provisioning/platform/
├── forms/                          # User-facing interactive forms
│   ├── orchestrator-form.toml
│   ├── control-center-form.toml
│   └── fragments/                  # Reusable form sections
│
├── values/                         # User input files (gitignored)
│   ├── orchestrator.solo.ncl
│   ├── orchestrator.enterprise.ncl
│   └── (auto-generated by TypeDialog)
│
├── configs/                        # Composed Nickel configs
│   ├── orchestrator.solo.ncl       # Base + mode overlay + user input + validation
│   ├── control-center.multiuser.ncl
│   └── (4 services × 4 modes = 16 files)
│
├── schemas/                        # Type definitions
│   ├── orchestrator.ncl
│   ├── control-center.ncl
│   └── common/                     # Shared schemas
│
├── defaults/                       # Default values
│   ├── orchestrator-defaults.ncl
│   └── deployment/solo-defaults.ncl
│
├── validators/                     # Business rules
│   ├── orchestrator-validator.ncl
│   └── (per-service validators)
│
├── constraints/
│   └── constraints.toml           # Min/max values (single source of truth)
│
├── templates/                      # Deployment templates
│   ├── docker-compose/
│   │   ├── platform-stack.solo.yml.ncl
│   │   └── (4 modes)
│   └── kubernetes/
│       ├── orchestrator-deployment.yaml.ncl
│       └── (11 templates)
│
└── scripts/                        # Automation
    ├── configure.nu                # Interactive TypeDialog
    ├── generate-configs.nu         # Nickel → TOML export
    ├── validate-config.nu          # Typecheck Nickel
    ├── render-docker-compose.nu    # Templates → Docker Compose
    └── render-kubernetes.nu        # Templates → Kubernetes

TOML output location:

provisioning/platform/config/
├── orchestrator.solo.toml          # Consumed by orchestrator service
├── control-center.enterprise.toml  # Consumed by control-center service
└── (4 services × 4 modes = 16 files)

Tips & Best Practices

1. Use Version Control

# Commit TOML configs to track changes
git add provisioning/platform/config/*.toml
git commit -m "Update orchestrator enterprise config: increase worker threads to 16"

# Do NOT commit Nickel source files in values/
echo "provisioning/.typedialog/provisioning/platform/values/*.ncl" >> .gitignore

2. Test Before Production Deployment

# Test in solo mode first
nu scripts/configure.nu orchestrator solo
cargo run --bin orchestrator

# Then test in staging (multiuser mode)
nu scripts/configure.nu orchestrator multiuser
docker-compose -f docker-compose.multiuser.yml up

# Finally deploy to production (enterprise)
nu scripts/configure.nu orchestrator enterprise
# Then Kubernetes deployment

3. Document Custom Configurations

# Add comments to configurations
# In values/*.ncl or configs/*.ncl:

# Custom configuration for high-throughput testing
# - Increased workers from 4 to 8
# - Increased queue.max_concurrent_tasks from 5 to 20
# - Lowered logging level from debug to info
{
  orchestrator = {
    # Worker threads increased for testing parallel task processing
    server.workers = 8,
    queue.max_concurrent_tasks = 20,
    logging.level = "info",
  },
}

4. Secrets Management

Never hardcode secrets in configuration files:

# WRONG - Don't do this
[orchestrator.security]
jwt_secret = "hardcoded-secret-exposed-in-git"

# RIGHT - Use environment variables
export ORCHESTRATOR_SECURITY_JWT_SECRET="actual-secret-from-vault"

# TOML references it:
[orchestrator.security]
jwt_secret = "${JWT_SECRET}"  # Loaded at runtime

5. Monitor Changes

# Track configuration changes over time
git log --oneline provisioning/platform/config/

# See what changed
git diff <commit1> <commit2> provisioning/platform/config/orchestrator.solo.toml

Version: 1.0
Last Updated: 2025-01-05
Status: Production Ready