# Provisioning Platform Configuration Examples
Production-ready reference configurations demonstrating different deployment scenarios and best practices.
## Purpose
Examples provide:
- **Real-world configurations** - Complete, tested setups ready for production use
- **Best practices** - Recommended patterns, values, and architectural approaches
- **Learning resource** - How to use the configuration system effectively
- **Starting point** - Copy, customize, and deploy for your environment
- **Documentation** - Detailed inline comments explaining every configuration option
## Quick Start
Choose your deployment mode and get started immediately:
```bash
# Solo development (local, single developer)
nickel export --format toml orchestrator-solo.ncl > orchestrator.toml
# Team collaboration (PostgreSQL, RBAC, audit logging)
nickel export --format toml control-center-multiuser.ncl > control-center.toml
# Production enterprise (HA, SurrealDB cluster, full monitoring)
nickel export --format toml full-platform-enterprise.ncl > platform.toml
```
## Example Configurations by Mode
### 1. orchestrator-solo.ncl
**Deployment Mode**: Solo (Single Developer)
**Resource Requirements**:
- CPU: 2 cores
- RAM: 4 GB
- Disk: 50 GB (local data)
**Configuration Highlights** (sketched in Nickel after this list):
- **Workspace**: Local `dev-workspace` at `/home/developer/provisioning/data/orchestrator`
- **Server**: Localhost binding (127.0.0.1:9090), 2 workers, 128 connections max
- **Storage**: Filesystem backend (no external database required)
- **Queue**: 3 max concurrent tasks (minimal for development)
- **Batch**: 2 parallel limit with frequent checkpointing (every 50 operations)
- **Logging**: Debug level, human-readable text format, concurrent stdout + file output
- **Security**: Auth disabled, CORS allows all origins, no TLS
- **Monitoring**: Health checks only (metrics disabled), resource tracking disabled
- **Features**: Experimental features enabled for testing and iteration
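A minimal Nickel sketch of these solo-mode highlights (field names follow this README's descriptions and may differ from the actual schema; see `orchestrator-solo.ncl` for the authoritative version):
```nickel
{
  orchestrator = {
    workspace = { name = "dev-workspace", path = "/home/developer/provisioning/data/orchestrator" },
    server = { host = "127.0.0.1", port = 9090, workers = 2, max_connections = 128 },
    storage = { backend = 'filesystem },              # no external database required
    queue = { max_concurrent_tasks = 3 },
    logging = { level = "debug", format = "text" },
    security = { auth_enabled = false, tls_enabled = false },   # assumed field names
  },
}
```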
**Ideal For**:
- ✅ Single developer local development
- ✅ Quick prototyping and experimentation
- ✅ Learning the provisioning platform
- ✅ CI/CD local testing without external services
**Key Advantages**:
- No external dependencies (database-free)
- Fast startup (<10 seconds)
- Minimal resource footprint
- Verbose debug logging for troubleshooting
- Zero security overhead
**Key Limitations**:
- Localhost-only (not accessible remotely)
- Limited concurrency (3 concurrent tasks max)
- No persistence across restarts (if using `:memory:` storage)
- No audit logging
**Usage**:
```bash
# Export to TOML and run
nickel export --format toml orchestrator-solo.ncl > orchestrator.solo.toml
ORCHESTRATOR_CONFIG=orchestrator.solo.toml cargo run --bin orchestrator
# With TypeDialog interactive configuration
nu ../../scripts/configure.nu orchestrator solo --backend cli
```
**Customization Examples**:
```toml
# Increase concurrency for testing (still development-friendly)
queue.max_concurrent_tasks = 5
# Reduce debug noise for cleaner logs
logging.level = "info"
# Change workspace location
workspace.path = "/path/to/my/workspace"
```
---
### 2. orchestrator-enterprise.ncl
**Deployment Mode**: Enterprise (Production High-Availability)
**Resource Requirements**:
- CPU: 8+ cores (recommended 16)
- RAM: 16+ GB (recommended 32+ GB)
- Disk: 500+ GB (SurrealDB cluster)
**Configuration Highlights** (see the sketch after this list):
- **Workspace**: `production` workspace at `/var/lib/provisioning/orchestrator` with multi-workspace support enabled
- **Server**: All interfaces binding (0.0.0.0:9090), 16 workers, 4096 connections max
- **Storage**: SurrealDB cluster (3 nodes) for distributed storage and high availability
- **Queue**: 100 max concurrent tasks, 5 retry attempts, 2-hour timeout for long-running operations
- **Batch**: 50 parallel limit with frequent checkpointing (every 1000 operations) and automatic cleanup
- **Logging**: Info level, JSON structured format for log aggregation
  - Standard logs: 500MB files, kept 30 versions (90 days)
  - Audit logs: 200MB files, kept 365 versions (1 year)
- **Security**: JWT authentication required, specific CORS origins, TLS 1.3 mandatory, 10,000 RPS rate limit
- **Extensions**: Auto-load from OCI registry with daily refresh, 10 concurrent initializations
- **Monitoring**:
  - Metrics every 10 seconds
  - Profiling at 10% sample rate
  - Resource tracking with CPU/memory/disk alerts
  - Health checks every 30 seconds
- **Features**: Audit logging, task history, performance tracking all enabled
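A condensed Nickel sketch of the enterprise highlights above (illustrative field names; consult `orchestrator-enterprise.ncl` and the schemas for the exact structure):
```nickel
{
  orchestrator = {
    server = { host = "0.0.0.0", port = 9090, workers = 16, max_connections = 4096 },
    storage = {
      backend = 'surrealdb,
      # hypothetical cluster endpoint list; the real example may differ
      connection_string = "surrealdb://node1:8000,node2:8000,node3:8000",
    },
    queue = { max_concurrent_tasks = 100, retry_attempts = 5, task_timeout = 7200000 },  # 2 h in ms
    security = { auth_enabled = true, tls_enabled = true, rate_limit.requests_per_second = 10000 },
    monitoring = { enabled = true, metrics_interval = 10, health_check_interval = 30 },
  },
}
```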
**Ideal For**:
- Production deployments with SLAs
- High-throughput, mission-critical workloads
- Multi-team environments requiring audit trails
- Large-scale infrastructure deployments
- Compliance and governance requirements
**Key Advantages**:
- High availability (3 SurrealDB replicas with failover)
- Production security (JWT + TLS 1.3 mandatory)
- Full observability (metrics, profiling, audit logs)
- High throughput (100 concurrent tasks)
- Extension management via OCI registry
- Automatic rollback and recovery capabilities
**Key Limitations**:
- Requires SurrealDB cluster setup and maintenance
- Resource-intensive (8+ CPU, 16+ GB RAM minimum)
- More complex initial setup and configuration
- Requires secrets management (JWT keys, TLS certificates)
- Network isolation and load balancing setup required
**Environment Variables Required**:
```bash
export JWT_SECRET="<secure-random-256-bit-string>"
export SURREALDB_PASSWORD="<database-password>"
```
**Usage**:
```bash
# Deploy standalone with SurrealDB
nickel export --format toml orchestrator-enterprise.ncl > orchestrator.enterprise.toml
ORCHESTRATOR_CONFIG=orchestrator.enterprise.toml cargo run --bin orchestrator
# Deploy to Kubernetes with all enterprise infrastructure
nu ../../scripts/render-kubernetes.nu enterprise --namespace production
kubectl apply -f provisioning/platform/infrastructure/kubernetes/*.yaml
```
**Customization Examples**:
```toml
# Adjust concurrency for your specific infrastructure
queue.max_concurrent_tasks = 50 # Scale down if resource-constrained
# Change SurrealDB cluster endpoints
storage.surrealdb_url = "surrealdb://node1:8000,node2:8000,node3:8000"
# Modify audit log retention for compliance
logging.outputs[1].rotation.max_backups = 2555 # 7 years for HIPAA compliance
# Increase rate limiting for high-frequency integrations
security.rate_limit.requests_per_second = 20000
```
---
### 3. control-center-multiuser.ncl
**Deployment Mode**: MultiUser (Team Collaboration & Staging)
**Resource Requirements**:
- CPU: 4 cores
- RAM: 8 GB
- Disk: 100 GB (PostgreSQL data + logs)
**Configuration Highlights** (sketched after this list):
- **Server**: All interfaces binding (0.0.0.0:8080), 4 workers, 256 connections max
- **Database**: PostgreSQL with connection pooling (min 5, max 20 connections)
- **Auth**: JWT with 8-hour token expiration (aligned with team workday)
- **RBAC**: 4 pre-defined roles with granular permissions
  - `admin`: Infrastructure lead with full access (`*` permissions)
  - `operator`: Operations team - execute, manage, view workflows and policies
  - `developer`: Development team - read-only workflow and policy access
  - `viewer`: Minimal read-only for non-technical stakeholders
- **MFA**: Optional per-user (TOTP + email methods available, not globally required)
- **Password Policies**: 12-character minimum, requires uppercase/lowercase/digits, 90-day rotation, history count of 3
- **Session Policies**: 8-hour maximum duration, 1-hour idle timeout, 3 concurrent sessions per user
- **Rate Limiting**: 1000 RPS global, 100 RPS per-user, 20 burst requests
- **CORS**: Allows localhost:3000 (dev), control-center.example.com, orchestrator.example.com
- **Logging**: Info level, JSON format, 200MB files kept 15 versions (90 days retention)
- **Features**: Audit logging enabled, policy enforcement enabled
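A Nickel sketch of the session and rate-limiting policies described above (field names are illustrative, not the platform's exact keys):
```nickel
{
  control_center = {
    auth = { jwt_expiration_hours = 8 },
    policies.session = { max_duration_hours = 8, idle_timeout_minutes = 60, max_concurrent = 3 },
    rate_limit = { global_rps = 1000, per_user_rps = 100, burst = 20 },
  },
}
```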
**Ideal For**:
- Team collaboration (2-50 engineers)
- Staging environments before production
- Development team operations
- RBAC with different access levels
- Compliance-light environments (SOC2 optional)
**Key Advantages**:
- Team-friendly security (optional MFA, reasonable password policy)
- RBAC supports different team roles and responsibilities
- Persistent storage (PostgreSQL) maintains state across restarts
- Audit trail for basic compliance
- Flexible session management (multiple concurrent sessions)
- Good balance of security and usability
**Key Limitations**:
- Requires PostgreSQL database setup
- Single replica (not HA by default)
- More complex than solo mode
- RBAC requires careful role definition
**Environment Variables Required**:
```bash
export DB_PASSWORD="<postgresql-password>"
export JWT_SECRET="<secure-random-string>"
```
**Usage**:
```bash
# Generate and deploy
nickel export --format toml control-center-multiuser.ncl > control-center.multiuser.toml
CONTROL_CENTER_CONFIG=control-center.multiuser.toml cargo run --bin control-center
# With Docker Compose for team
nu ../../scripts/render-docker-compose.nu multiuser
docker-compose -f docker-compose.multiuser.yml up -d
# Access the UI
# http://localhost:8080 (or your configured domain)
```
**RBAC Quick Reference**:
| Role | Intended Users | Key Permissions |
| ------ | ---------------- | ----------------- |
| admin | Infrastructure leads | All operations: full access |
| operator | Operations engineers | Execute workflows, manage tasks, view policies |
| developer | Application developers | View workflows, view policies (read-only) |
| viewer | Non-technical (PM, QA) | View workflows only (minimal read) |
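Expressed in Nickel, the four roles look roughly like this (permission strings are illustrative, not the exact identifiers the platform uses):
```nickel
{
  rbac.roles = {
    admin = { description = "Infrastructure lead", permissions = ["*"] },
    operator = {
      description = "Operations team",
      permissions = ["workflow.execute", "task.manage", "policy.view"],
    },
    developer = { description = "Development team", permissions = ["workflow.view", "policy.view"] },
    viewer = { description = "Read-only stakeholder", permissions = ["workflow.view"] },
  },
}
```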
**Customization Examples**:
```nickel
# Require MFA globally for higher security
mfa.required = true
# Add custom role for auditors
rbac.roles.auditor = {
description = "Compliance auditor",
permissions = ["audit.view", "orchestrator.view"],
}
# Adjust for larger team (more concurrent sessions)
policies.session.max_concurrent = 5
# Stricter password policy for regulated industry
policies.password = {
min_length = 16,
require_special_chars = true,
expiration_days = 60,
history_count = 8,
}
```
---
### 4. full-platform-enterprise.ncl
**Deployment Mode**: Enterprise Integrated (Complete Platform)
**Resource Requirements**:
- CPU: 16+ cores (3 replicas × 4 cores each + infrastructure)
- RAM: 32+ GB (orchestrator 12GB + control-center 4GB + databases 12GB + monitoring 4GB)
- Disk: 1+ TB (databases, logs, metrics, artifacts)
**Services Configured** (combined sketch after these lists):
**Orchestrator Section**:
- SurrealDB cluster (3 nodes) for distributed workflow storage
- 100 concurrent tasks with 5 retry attempts
- Full audit logging and monitoring
- JWT authentication with configurable token expiration
- Extension loading from OCI registry
- High-performance tuning (16 workers, 4096 connections)
**Control Center Section**:
- PostgreSQL HA backend for policy/RBAC storage
- Full RBAC (4 roles with 7+ permissions each)
- MFA required (TOTP + email methods)
- SOC2 compliance enabled with audit logging
- Strict password policy (16+ chars, special chars required)
- 30-minute session idle timeout for security
- Per-user rate limiting (100 RPS)
**MCP Server Section**:
- Claude integration for AI-powered provisioning
- Full MCP capability support (tools, resources, prompts, sampling)
- Orchestrator and Control Center integration
- Read-only filesystem access with 10MB file limit
- JWT authentication
- Advanced audit logging (all requests logged except sensitive data)
- 100 RPS rate limiting with 20-request burst
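Structurally, the full-platform example combines all three services in a single Nickel record, roughly as follows (most fields elided; names are illustrative):
```nickel
{
  orchestrator = { storage.backend = 'surrealdb, queue.max_concurrent_tasks = 100 },
  control_center = { database.backend = 'postgres, mfa.required = true },
  mcp_server = { integration.claude.model = "claude-3-opus-20240229" },
}
```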
**Global Configuration**:
```nickel
let deployment_mode = "enterprise"
let namespace = "provisioning"
let domain = "provisioning.example.com"
let environment = "production"
```
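These bindings are then interpolated throughout the rest of the record; a minimal sketch of the pattern (field names illustrative):
```nickel
let domain = "provisioning.example.com" in
let namespace = "provisioning" in
{
  orchestrator.security.cors_allowed_origins = ["https://control-center.%{domain}"],
  control_center.server.public_url = "https://control-center.%{domain}",
  deployment.namespace = namespace,
}
```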
**Infrastructure Components** (when deployed to Kubernetes):
- Load Balancer (Nginx) - TLS termination, CORS, rate limiting
- 3x Orchestrator replicas - Distributed processing
- 2x Control Center replicas - Policy management
- 1-2x MCP Server replicas - AI integration
- PostgreSQL HA - Primary/replica setup
- SurrealDB cluster - 3 nodes with replication
- Prometheus - Metrics collection
- Grafana - Visualization and dashboards
- Loki - Log aggregation
- Harbor - Private OCI image registry
**Ideal For**:
- Production deployments with full SLAs
- Enterprise compliance requirements (SOC2, HIPAA)
- Multi-team organizations
- AI/LLM integration for provisioning
- Large-scale infrastructure management (1000+ resources)
- High-availability deployments with 99.9%+ uptime requirements
**Key Advantages**:
- Complete service integration (no missing pieces)
- Production-grade HA setup (3 replicas, load balancing)
- Full compliance and audit capabilities
- AI/LLM integration via MCP Server
- Comprehensive monitoring and observability
- Clear separation of concerns per service
- Global variables for easy parameterization
**Key Limitations**:
- Complex setup requiring multiple services
- Resource-intensive (16+ CPU, 32+ GB RAM minimum)
- Requires Kubernetes or advanced Docker Compose setup
- Multiple databases to maintain (PostgreSQL + SurrealDB)
- Network setup complexity (TLS, CORS, rate limiting)
**Environment Variables Required**:
```bash
# Database credentials
export DB_PASSWORD="<postgresql-password>"
export SURREALDB_PASSWORD="<surrealdb-password>"
# Security
export JWT_SECRET="<secure-random-256-bit-string>"
export KMS_KEY="<kms-encryption-key-for-secrets>"
# AI/LLM integration
export CLAUDE_API_KEY="<anthropic-api-key>"
export CLAUDE_MODEL="claude-3-opus-20240229"
# TLS certificates (for production)
export TLS_CERT="<certificate-path>"
export TLS_KEY="<key-path>"
```
**Architecture Diagram**:
```text
┌──────────────────────────────────────────────┐
│ Nginx Load Balancer (TLS, CORS, Rate Limit)  │
│ https://orchestrator.example.com             │
│ https://control-center.example.com           │
│ https://mcp.example.com                      │
└──────────┬──────────────────────┬────────────┘
           │                      │
    ┌──────▼──────┐      ┌────────▼────────┐
    │ Orchestrator│      │ Control Center  │
    │ (3 replicas)│      │  (2 replicas)   │
    └──────┬──────┘      └────────┬────────┘
           │                      │
    ┌──────▼──────┐      ┌────────▼────────┐      ┌─────────────────┐
    │  SurrealDB  │      │  PostgreSQL HA  │      │   MCP Server    │
    │   Cluster   │      │ Primary/Replica │      │ (1-2 replicas)  │
    │  (3 nodes)  │      └─────────────────┘      │  ↓ Claude API   │
    └─────────────┘                               └─────────────────┘

┌──────────────────────────────────────────────────┐
│         Observability Stack (Optional)           │
├────────────────┬────────────────┬────────────────┤
│   Prometheus   │    Grafana     │      Loki      │
│   (Metrics)    │  (Dashboards)  │     (Logs)     │
└────────────────┴────────────────┴────────────────┘
```
**Usage**:
```bash
# Export complete configuration
nickel export --format toml full-platform-enterprise.ncl > platform.toml
# Extract individual service configs if needed
# (Each service extracts its section from platform.toml)
# Deploy to Kubernetes with all enterprise infrastructure
nu ../../scripts/render-kubernetes.nu enterprise --namespace production
# Apply all manifests
kubectl create namespace production
kubectl apply -f provisioning/platform/infrastructure/kubernetes/*.yaml
# Or deploy with Docker Compose for single-node testing
nu ../../scripts/render-docker-compose.nu enterprise
docker-compose -f docker-compose.enterprise.yml up -d
```
**Customization Examples**:
```nickel
# Adjust deployment domain
let domain = "my-company.com"
let namespace = "infrastructure"
# Scale for higher throughput
orchestrator.queue.max_concurrent_tasks = 200
orchestrator.security.rate_limit.requests_per_second = 50000
# Add HIPAA compliance
control_center.policies.compliance.hipaa.enabled = true
control_center.policies.audit.retention_days = 2555 # 7 years
# Custom MCP Server model
mcp_server.integration.claude.model = "claude-3-sonnet-20240229"
# Enable caching for performance
mcp_server.features.enable_caching = true
mcp_server.performance.cache_ttl = 7200
```
---
## Deployment Mode Comparison Matrix
| Feature | Solo | MultiUser | Enterprise |
| --------- | ------ | ----------- | ----------- |
| **Ideal For** | Dev | Team/Staging | Production |
| **Storage** | Filesystem | PostgreSQL | SurrealDB Cluster |
| **Replicas** | 1 | 1 | 3+ (HA) |
| **Max Concurrency** | 3 tasks | 5-10 | 100 |
| **Security** | None | RBAC + JWT | Full + MFA + SOC2 |
| **Monitoring** | Health check | Basic | Full (Prom+Grafana) |
| **Setup Time** | <5 min | 15 min | 30+ min |
| **Min CPU** | 2 | 4 | 16 |
| **Min RAM** | 4GB | 8GB | 32GB |
| **Audit Logs** | No | 90 days | 365 days |
| **TLS Required** | No | No | Yes |
| **Compliance** | None | Basic | SOC2 + HIPAA ready |
---
## Getting Started Guide
### Step 1: Choose Your Deployment Mode
- **Solo**: Single developer working locally → use `orchestrator-solo.ncl`
- **Team**: 2-50 engineers, staging environment → use `control-center-multiuser.ncl`
- **Production**: Full enterprise deployment → use `full-platform-enterprise.ncl`
### Step 2: Export Configuration to TOML
```bash
# Start with solo mode
nickel export --format toml orchestrator-solo.ncl > orchestrator.toml
# Validate the export
cat orchestrator.toml | head -20
```
### Step 3: Validate Configuration
```bash
# Typecheck the Nickel configuration
nickel typecheck orchestrator-solo.ncl
# Validate using provided script
nu ../../scripts/validate-config.nu orchestrator-solo.ncl
```
### Step 4: Customize for Your Environment
Edit the exported `.toml` or the `.ncl` file:
```bash
# Option A: Edit TOML directly (simpler)
vi orchestrator.toml # Change workspace path, port, etc.
# Option B: Edit Nickel and re-export (type-safe)
vi orchestrator-solo.ncl
nickel export --format toml orchestrator-solo.ncl > orchestrator.toml
```
### Step 5: Deploy
```bash
# Docker Compose
ORCHESTRATOR_CONFIG=orchestrator.toml docker-compose up -d
# Direct Rust execution
ORCHESTRATOR_CONFIG=orchestrator.toml cargo run --bin orchestrator
# Kubernetes
nu ../../scripts/render-kubernetes.nu solo
kubectl apply -f provisioning/platform/infrastructure/kubernetes/*.yaml
```
---
## Common Customizations
### Changing Domain/Namespace
In any `.ncl` file at the top:
```nickel
let domain = "your-domain.com"
let namespace = "your-namespace"
let environment = "your-env"
```
### Increasing Resource Limits
For higher throughput:
```toml
queue.max_concurrent_tasks = 200 # Default: 100
security.rate_limit.requests_per_second = 50000 # Default: 10000
server.workers = 32 # Default: 16
```
### Enabling Compliance Features
For regulated environments:
```toml
policies.compliance.soc2.enabled = true
policies.compliance.hipaa.enabled = true
policies.audit.retention_days = 2555 # 7 years
```
### Custom Logging
For troubleshooting:
```toml
logging.level = "debug" # Default: info
logging.format = "text" # Default: json (use text for development)
logging.outputs[0].level = "debug" # stdout level
```
---
## Validation & Testing
### Syntax Validation
```bash
# Typecheck all examples
for f in *.ncl; do
echo "Checking $f..."
nickel typecheck "$f"
done
```
### Configuration Export
```bash
# Export to TOML
nickel export --format toml orchestrator-solo.ncl | head -30
# Export to JSON
nickel export --format json full-platform-enterprise.ncl | jq '.orchestrator.server'
```
### Load in Rust Application
```bash
# With dry-run flag (if supported)
ORCHESTRATOR_CONFIG=orchestrator.solo.toml cargo run --bin orchestrator -- --validate
# Or simply attempt startup
ORCHESTRATOR_CONFIG=orchestrator.solo.toml timeout 5 cargo run --bin orchestrator
```
---
## Troubleshooting
### "Type mismatch" Error
**Cause**: Field value doesn't match expected type
**Fix**: Check the schema for correct type. Common issues:
- Use `true`/`false` not `"true"`/`"false"` for booleans
- Use `9090` not `"9090"` for numbers
- Use record syntax `{ key = value }` not `{ "key": value }`
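For example, contrasting the incorrect and correct forms (field names illustrative):
```nickel
# Wrong: JSON-style syntax with quoted booleans/numbers
# { "monitoring": { "enabled": "true" }, "server": { "port": "9090" } }

# Correct Nickel
{
  monitoring.enabled = true,   # boolean literal, not "true"
  server.port = 9090,          # number literal, not "9090"
}
```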
### Port Already in Use
**Fix**: Change the port in your configuration:
```toml
server.port = 9999 # Instead of 9090
```
### Database Connection Errors
**Fix**: For multiuser/enterprise modes:
- Ensure PostgreSQL is running: `docker-compose up -d postgres`
- Verify credentials in environment variables
- Check network connectivity
- Validate connection string format
### Import Not Found
**Fix**: Ensure all relative paths in imports are correct:
```nickel
# Correct (relative to examples/)
let defaults = import "../defaults/orchestrator-defaults.ncl" in
# Wrong (absolute path)
let defaults = import "/full/path/to/defaults.ncl" in
```
---
## Best Practices
1. **Start Small**: Begin with solo mode, graduate to multiuser, then enterprise
2. **Environment Variables**: Never hardcode secrets, use environment variables
3. **Version Control**: Keep examples in Git with clear comments
4. **Validation**: Always typecheck and export before deploying
5. **Documentation**: Add comments explaining non-obvious configuration choices
6. **Testing**: Deploy to staging first, validate all services before production
7. **Monitoring**: Enable metrics and logging from day one for easier troubleshooting
8. **Backups**: Regular backups of database state and configurations
---
## Adding New Examples
### Create a Custom Example
```bash
# Copy an existing example as template
cp orchestrator-solo.ncl orchestrator-custom.ncl
# Edit for your use case
vi orchestrator-custom.ncl
# Validate
nickel typecheck orchestrator-custom.ncl
# Export and test
nickel export --format toml orchestrator-custom.ncl > orchestrator.custom.toml
```
### Naming Convention
- **Service + Mode**: `{service}-{mode}.ncl` (orchestrator-solo.ncl)
- **Scenario**: `{service}-{scenario}.ncl` (orchestrator-high-throughput.ncl)
- **Full Stack**: `full-platform-{mode}.ncl` (full-platform-enterprise.ncl)
---
## See Also
- **Parent README**: `../README.md` - Complete configuration system overview
- **Schemas**: `../schemas/` - Type definitions and validation rules
- **Defaults**: `../defaults/` - Base configurations for composition
- **Scripts**: `../scripts/` - Automation for configuration workflow
- **Forms**: `../forms/` - Interactive TypeDialog form definitions
---
**Version**: 2.0
**Last Updated**: 2025-01-05
**Status**: Production Ready - All examples tested and validated
## Using Examples
### View Example
```bash
cat provisioning/.typedialog/provisioning/platform/examples/orchestrator-solo.ncl
```
### Copy and Customize
```bash
# Start with solo example
cp examples/orchestrator-solo.ncl values/orchestrator.solo.ncl
# Edit for your environment
vi values/orchestrator.solo.ncl
# Validate
nu scripts/validate-config.nu values/orchestrator.solo.ncl
```
### Generate from Example
```bash
# Use example as base, regenerate with TypeDialog
nu scripts/configure.nu orchestrator solo --backend web
```
## Example Structure
Each example is a complete Nickel configuration:
```nickel
# orchestrator-solo.ncl
{
orchestrator = {
workspace = { },
server = { },
storage = { },
queue = { },
monitoring = { },
},
}
```
## Configuration Elements
### Workspace Configuration
- **name** - Workspace identifier
- **path** - Directory path
- **enabled** - Enable/disable flag
- **multi_workspace** - Support multiple workspaces
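A minimal sketch of these workspace fields (values borrowed from the solo example; `multi_workspace` shown as an assumption):
```nickel
{
  workspace = {
    name = "dev-workspace",
    path = "/home/developer/provisioning/data/orchestrator",
    enabled = true,
    multi_workspace = false,   # assumed default for solo mode
  },
}
```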
### Server Configuration
- **host** - Bind address (127.0.0.1 for solo, 0.0.0.0 for public)
- **port** - Listen port
- **workers** - Thread count (mode-dependent)
- **keep_alive** - Connection keep-alive timeout
- **max_connections** - Connection limit
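For example (illustrative values; `keep_alive` unit assumed to be seconds):
```nickel
{
  server = {
    host = "0.0.0.0",        # public binding; use "127.0.0.1" for solo mode
    port = 9090,
    workers = 4,
    keep_alive = 75,         # assumed seconds
    max_connections = 256,
  },
}
```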
### Storage Configuration
- **backend** - `'filesystem | 'rocksdb | 'surrealdb | 'postgres`
- **path** - Local storage path (filesystem/rocksdb)
- **connection_string** - DB URL (surrealdb/postgres)
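The two common variants side by side (the SurrealDB URL is a placeholder):
```nickel
# Local development: filesystem backend, no database required
{ storage = { backend = 'filesystem, path = "/home/user/provisioning/data" } }

# Production alternative (commented out, since a Nickel file holds one expression):
# { storage = { backend = 'surrealdb, connection_string = "surrealdb://db-host:8000" } }
```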
### Queue Configuration (Orchestrator)
- **max_concurrent_tasks** - Concurrent task limit
- **retry_attempts** - Retry count
- **retry_delay** - Delay between retries (ms)
- **task_timeout** - Task execution timeout (ms)
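A sketch with development-friendly values (all durations in milliseconds, per the list above):
```nickel
{
  queue = {
    max_concurrent_tasks = 10,
    retry_attempts = 3,
    retry_delay = 5000,       # 5 s between retries
    task_timeout = 600000,    # 10 minute task timeout
  },
}
```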
### Monitoring Configuration (Optional)
- **enabled** - Enable metrics collection
- **metrics_interval** - Collection frequency (seconds)
- **health_check_interval** - Health check frequency
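For example (interval units in seconds, per the list above):
```nickel
{
  monitoring = {
    enabled = true,
    metrics_interval = 30,         # collect metrics every 30 s
    health_check_interval = 30,    # health checks every 30 s
  },
}
```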
## Creating New Examples
### 1. Start with Existing Example
```bash
cp examples/orchestrator-solo.ncl examples/orchestrator-custom.ncl
```
### 2. Modify for Your Use Case
```nickel
# Update configuration values
orchestrator.server.workers = 8 # More workers
orchestrator.queue.max_concurrent_tasks = 20 # Higher concurrency
```
### 3. Validate Configuration
```bash
nickel typecheck examples/orchestrator-custom.ncl
nickel eval examples/orchestrator-custom.ncl
```
### 4. Document Purpose
Add comments explaining:
- Use case (deployment scenario)
- Resource requirements
- Expected load
- Customization needed
### 5. Save as Reference
```bash
mv examples/orchestrator-custom.ncl examples/orchestrator-{scenario}.ncl
```
## Best Practices for Examples
1. **Clear documentation** - Explain the use case at the top
2. **Realistic values** - Use production-appropriate configurations
3. **Complete configuration** - Include all required sections
4. **Inline comments** - Explain non-obvious choices
5. **Validated** - Typecheck all examples before committing
6. **Organized** - Group by service and deployment mode
## Example Naming Convention
- **Service-mode**: `{service}-{mode}.ncl` (orchestrator-solo.ncl)
- **Scenario**: `{service}-{scenario}.ncl` (orchestrator-gpu-intensive.ncl)
- **Full stack**: `full-platform-{mode}.ncl` (full-platform-enterprise.ncl)
## Customizing Examples
### For Your Environment
```nickel
# orchestrator-solo.ncl (customized)
{
orchestrator = {
workspace = {
name = "my-workspace", # Your workspace name
path = "/home/user/projects/workspace", # Your path
},
server = {
host = "127.0.0.1", # Keep local for solo
port = 9090,
},
storage = {
backend = 'filesystem, # No external DB needed
path = "/home/user/provisioning/data", # Your path
},
},
}
```
### For Different Resources
```nickel
# orchestrator-multiuser.ncl (customized for team)
{
orchestrator = {
server = {
host = "0.0.0.0", # Public binding
port = 9090,
workers = 4, # Team concurrency
},
queue = {
max_concurrent_tasks = 10, # Team workload
},
},
}
```
## Testing Examples
```bash
# Typecheck example
nickel typecheck examples/orchestrator-solo.ncl
# Evaluate and view
nickel eval examples/orchestrator-solo.ncl | head -20
# Export to TOML
nickel export --format toml examples/orchestrator-solo.ncl > test.toml
```
---
**Version**: 1.0.0
**Last Updated**: 2025-01-05