Development Workflow Guide

This document outlines the recommended development workflows, coding practices, testing strategies, and debugging techniques for the provisioning project.

Table of Contents

  1. Overview
  2. Development Setup
  3. Daily Development Workflow
  4. Code Organization
  5. Testing Strategies
  6. Debugging Techniques
  7. Integration Workflows
  8. Collaboration Guidelines
  9. Quality Assurance
  10. Best Practices

Overview

The provisioning project employs a multi-language, multi-component architecture requiring specific development workflows to maintain consistency, quality, and efficiency.

Key Technologies:

  • Nushell: Primary scripting and automation language
  • Rust: High-performance system components
  • KCL: Configuration language and schemas
  • TOML: Configuration files
  • Jinja2: Template engine

Development Principles:

  • Configuration-Driven: Never hardcode, always configure
  • Hybrid Architecture: Rust for performance, Nushell for flexibility
  • Test-First: Comprehensive testing at all levels
  • Documentation-Driven: Code and APIs are self-documenting
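
For example, the hybrid principle usually surfaces as thin Nushell wrappers around Rust binaries. A minimal sketch, assuming a status subcommand and --json flag that are illustrative rather than the orchestrator's actual CLI:

# Nushell wrapper around a Rust binary (hypothetical subcommand and flag)
def orchestrator-status []: nothing -> record {
    ^provisioning-orchestrator status --json | from json
}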

Development Setup

Initial Environment Setup

1. Clone and Navigate:

# Clone repository
git clone https://github.com/company/provisioning-system.git
cd provisioning-system

# Navigate to workspace
cd workspace/tools

2. Initialize Workspace:

# Initialize development workspace
nu workspace.nu init --user-name $USER --infra-name dev-env

# Check workspace health
nu workspace.nu health --detailed --fix-issues

3. Configure Development Environment:

# Create user configuration
cp workspace/config/local-overrides.toml.example workspace/config/$USER.toml

# Edit configuration for development
$EDITOR workspace/config/$USER.toml

4. Set Up Build System:

# Navigate to build tools
cd src/tools

# Check build prerequisites
make info

# Perform initial build
make dev-build

Tool Installation

Required Tools:

# Install Nushell
cargo install nu

# Install KCL
cargo install kcl-cli

# Install additional tools
cargo install cross          # Cross-compilation
cargo install cargo-audit    # Security auditing
cargo install cargo-watch    # File watching

Optional Development Tools:

# Install development enhancers
cargo install nu_plugin_tera    # Template plugin
brew install sops               # Secrets management (distributed as a Go binary, not a cargo crate)
brew install k9s                # Kubernetes management

IDE Configuration

VS Code Setup (.vscode/settings.json):

{
  "files.associations": {
    "*.nu": "shellscript",
    "*.k": "kcl",
    "*.toml": "toml"
  },
  "nushell.shellPath": "/usr/local/bin/nu",
  "rust-analyzer.cargo.features": "all",
  "editor.formatOnSave": true,
  "editor.rulers": [100],
  "files.trimTrailingWhitespace": true
}

Recommended Extensions:

  • Nushell Language Support
  • Rust Analyzer
  • KCL Language Support
  • TOML Language Support
  • Better TOML

Daily Development Workflow

Morning Routine

1. Sync and Update:

# Sync with upstream
git pull origin main

# Update workspace
cd workspace/tools
nu workspace.nu health --fix-issues

# Check for updates
nu workspace.nu status --detailed

2. Review Current State:

# Check current infrastructure
provisioning show servers
provisioning show settings

# Review workspace status
nu workspace.nu status

Development Cycle

1. Feature Development:

# Create feature branch
git checkout -b feature/new-provider-support

# Start development environment
cd workspace/tools
nu workspace.nu init --workspace-type development

# Begin development
$EDITOR workspace/extensions/providers/new-provider/nulib/provider.nu

2. Incremental Testing:

# Test syntax during development
nu --check workspace/extensions/providers/new-provider/nulib/provider.nu

# Run unit tests
nu workspace/extensions/providers/new-provider/tests/unit/basic-test.nu

# Integration testing
nu workspace.nu tools test-extension providers/new-provider

3. Build and Validate:

# Quick development build
cd src/tools
make dev-build

# Validate changes
make validate-all

# Test distribution
make test-dist

Testing During Development

Unit Testing:

# Add test examples to functions
def create-server [name: string]: nothing -> record {
    # @test: "test-server" -> {name: "test-server", status: "created"}
    # Implementation here
}

Integration Testing:

# Test with real infrastructure
nu workspace/extensions/providers/new-provider/nulib/provider.nu \
    create-server test-server --dry-run

# Test with workspace isolation
PROVISIONING_WORKSPACE_USER=$USER provisioning server create test-server --check

End-of-Day Routine

1. Commit Progress:

# Stage changes
git add .

# Commit with descriptive message
git commit -m "feat(provider): add new cloud provider support

- Implement basic server creation
- Add configuration schema
- Include unit tests
- Update documentation"

# Push to feature branch
git push origin feature/new-provider-support

2. Workspace Maintenance:

# Clean up development data
nu workspace.nu cleanup --type cache --age 1d

# Backup current state
nu workspace.nu backup --auto-name --components config,extensions

# Check workspace health
nu workspace.nu health

Code Organization

Nushell Code Structure

File Organization:

Extension Structure:
├── nulib/
│   ├── main.nu              # Main entry point
│   ├── core/                # Core functionality
│   │   ├── api.nu           # API interactions
│   │   ├── config.nu        # Configuration handling
│   │   └── utils.nu         # Utility functions
│   ├── commands/            # User commands
│   │   ├── create.nu        # Create operations
│   │   ├── delete.nu        # Delete operations
│   │   └── list.nu          # List operations
│   └── tests/               # Test files
│       ├── unit/            # Unit tests
│       └── integration/     # Integration tests
└── templates/               # Template files
    ├── config.j2            # Configuration templates
    └── manifest.j2          # Manifest templates

Function Naming Conventions:

# Use kebab-case for commands
def create-server [name: string]: nothing -> record { ... }
def validate-config [config: record]: nothing -> bool { ... }

# Use snake_case for internal functions
def get_api_client []: nothing -> record { ... }
def parse_config_file [path: string]: nothing -> record { ... }

# Use descriptive prefixes
def check-server-status [server: string]: nothing -> string { ... }
def get-server-info [server: string]: nothing -> record { ... }
def list-available-zones []: nothing -> list<string> { ... }

Error Handling Pattern:

def create-server [
    name: string
    --dry-run          # perform a dry run instead of creating the server
]: nothing -> record {
    # 1. Validate inputs
    if ($name | str length) == 0 {
        error make {
            msg: "Server name cannot be empty"
            label: {
                text: "empty name provided"
                span: (metadata $name).span
            }
        }
    }

    # 2. Check prerequisites
    let config = try {
        get-provider-config
    } catch {
        error make {msg: "Failed to load provider configuration"}
    }

    # 3. Perform operation
    if $dry_run {
        return {action: "create", server: $name, status: "dry-run"}
    }

    # 4. Return result
    {server: $name, status: "created", id: (generate-id)}
}
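
A dry run exercises the validation path without touching the provider:

create-server "web-01" --dry-run
# => {action: create, server: web-01, status: dry-run}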

Rust Code Structure

Project Organization:

src/
├── lib.rs                   # Library root
├── main.rs                  # Binary entry point
├── config/                  # Configuration handling
│   ├── mod.rs
│   ├── loader.rs            # Config loading
│   └── validation.rs        # Config validation
├── api/                     # HTTP API
│   ├── mod.rs
│   ├── handlers.rs          # Request handlers
│   └── middleware.rs        # Middleware components
└── orchestrator/            # Orchestration logic
    ├── mod.rs
    ├── workflow.rs          # Workflow management
    └── task_queue.rs        # Task queue management

Error Handling:

use anyhow::{Context, Result};
use thiserror::Error;

#[derive(Error, Debug)]
pub enum ProvisioningError {
    #[error("Configuration error: {message}")]
    Config { message: String },

    #[error("Network error: {source}")]
    Network {
        #[from]
        source: reqwest::Error,
    },

    #[error("Validation failed: {field}")]
    Validation { field: String },
}

pub fn create_server(name: &str) -> Result<ServerInfo> {
    let config = load_config()
        .context("Failed to load configuration")?;

    validate_server_name(name)
        .context("Server name validation failed")?;

    let server = provision_server(name, &config)
        .context("Failed to provision server")?;

    Ok(server)
}

KCL Schema Organization

Schema Structure:

# Base schema definitions
schema ServerConfig:
    name: str
    plan: str
    zone: str
    tags?: {str: str} = {}

    check:
        len(name) > 0, "Server name cannot be empty"
        plan in ["1xCPU-2GB", "2xCPU-4GB", "4xCPU-8GB"], "Invalid plan"

# Provider-specific extensions
schema UpCloudServerConfig(ServerConfig):
    template?: str = "Ubuntu Server 22.04 LTS (Jammy Jellyfish)"
    storage?: int = 25

    check:
        storage >= 10, "Minimum storage is 10GB"
        storage <= 2048, "Maximum storage is 2TB"

# Composition schemas
schema InfrastructureConfig:
    servers: [ServerConfig]
    networks?: [NetworkConfig] = []
    load_balancers?: [LoadBalancerConfig] = []

    check:
        len(servers) > 0, "At least one server required"

Testing Strategies

Test-Driven Development

TDD Workflow:

  1. Write Test First: Define expected behavior
  2. Run Test (Fail): Confirm test fails as expected
  3. Write Code: Implement minimal code to pass
  4. Run Test (Pass): Confirm test now passes
  5. Refactor: Improve code while keeping tests green
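
A minimal Nushell sketch of this cycle, using a hypothetical normalize-name helper (all names here are illustrative):

# Step 1: write the test first (it fails until normalize-name exists)
use std assert

def test_normalize_name [] {
    assert ((normalize-name "Web 01") == "web-01")
}

# Step 3: the minimal implementation that makes the test pass
def normalize-name [name: string]: nothing -> string {
    $name | str downcase | str replace --all " " "-"
}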

Nushell Testing

Unit Test Pattern:

# Function with embedded test
def validate-server-name [name: string]: nothing -> bool {
    # @test: "valid-name" -> true
    # @test: "" -> false
    # @test: "name-with-spaces" -> false

    if ($name | str length) == 0 {
        return false
    }

    if ($name | str contains " ") {
        return false
    }

    # Reject any character outside lowercase alphanumerics and hyphens
    $name =~ '^[a-z0-9-]+$'
}

# Separate test file
# tests/unit/server-validation-test.nu
use std assert

def test_validate_server_name [] {
    # Valid cases
    assert (validate-server-name "valid-name")
    assert (validate-server-name "server123")

    # Invalid cases
    assert (not (validate-server-name ""))
    assert (not (validate-server-name "name with spaces"))
    assert (not (validate-server-name "name@with!special"))

    print "✅ validate-server-name tests passed"
}

Integration Test Pattern:

# tests/integration/server-lifecycle-test.nu
use std assert

def test_complete_server_lifecycle [] {
    # Setup
    let test_server = "test-server-" + (date now | format date "%Y%m%d%H%M%S")

    try {
        # Test creation
        let create_result = (create-server $test_server --dry-run)
        assert ($create_result.status == "dry-run")

        # Test validation
        let validate_result = (validate-server-config $test_server)
        assert $validate_result

        print $"✅ Server lifecycle test passed for ($test_server)"
    } catch { |e|
        print $"❌ Server lifecycle test failed: ($e.msg)"
        exit 1
    }
}

Rust Testing

Unit Testing:

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_validate_server_name() {
        assert!(validate_server_name("valid-name"));
        assert!(validate_server_name("server123"));

        assert!(!validate_server_name(""));
        assert!(!validate_server_name("name with spaces"));
        assert!(!validate_server_name("name@special"));
    }

    #[tokio::test]
    async fn test_server_creation() {
        let config = test_config();
        let result = create_server("test-server", &config).await;

        assert!(result.is_ok());
        let server = result.unwrap();
        assert_eq!(server.name, "test-server");
        assert_eq!(server.status, "created");
    }
}

Integration Testing:

#[cfg(test)]
mod integration_tests {
    use super::*;
    use testcontainers::*;

    #[tokio::test]
    async fn test_full_workflow() {
        // Setup test environment
        let docker = clients::Cli::default();
        let postgres = docker.run(images::postgres::Postgres::default());

        let config = TestConfig {
            database_url: format!("postgresql://localhost:{}/test",
                                 postgres.get_host_port_ipv4(5432))
        };

        // Test complete workflow
        let workflow = create_workflow(&config).await.unwrap();
        let result = execute_workflow(workflow).await.unwrap();

        assert_eq!(result.status, WorkflowStatus::Completed);
    }
}

KCL Testing

Schema Validation Testing:

# Test KCL schemas
kcl test kcl/

# Validate specific schemas
kcl check kcl/server.k --data test-data.yaml

# Test with examples
kcl run kcl/server.k -D name="test-server" -D plan="2xCPU-4GB"

Test Automation

Continuous Testing:

# Watch for changes and run tests
cargo watch -x test -x check

# Watch Nushell files
find . -name "*.nu" | entr -r nu tests/run-all-tests.nu

# Automated testing in workspace
nu workspace.nu tools test-all --watch

Debugging Techniques

Debug Configuration

Enable Debug Mode:

# Environment variables
export PROVISIONING_DEBUG=true
export PROVISIONING_LOG_LEVEL=debug
export RUST_LOG=debug
export RUST_BACKTRACE=1

# Workspace debug
export PROVISIONING_WORKSPACE_USER=$USER

Nushell Debugging

Debug Techniques:

# Debug prints
def debug-server-creation [name: string] {
    print $"🐛 Creating server: ($name)"

    let config = get-provider-config
    print $"🐛 Config loaded: ($config | to json)"

    let result = try {
        create-server-api $name $config
    } catch { |e|
        print $"🐛 API call failed: ($e.msg)"
        $e
    }

    print $"🐛 Result: ($result | to json)"
    $result
}

# Conditional debugging
def create-server [name: string] {
    if $env.PROVISIONING_DEBUG? == "true" {
        print $"Debug: Creating server ($name)"
    }

    # Implementation
}

# Interactive debugging
def debug-interactive [] {
    print "🐛 Entering debug mode..."
    print "Available commands: $env.PATH"
    print "Current config: " (get-config | to json)

    # Drop into interactive shell
    nu --interactive
}

Error Investigation:

# Comprehensive error handling
def safe-server-creation [name: string] {
    try {
        create-server $name
    } catch { |e|
        # Log error details
        {
            timestamp: (date now | format date "%Y-%m-%d %H:%M:%S"),
            operation: "create-server",
            input: $name,
            error: $e.msg,
            debug: $e.debug?,
            env: {
                user: $env.USER,
                workspace: $env.PROVISIONING_WORKSPACE_USER?,
                debug: $env.PROVISIONING_DEBUG?
            }
        } | save --append logs/error-debug.json

        # Re-throw with context
        error make {
            msg: $"Server creation failed: ($e.msg)",
            label: {text: "failed here", span: $e.span?}
        }
    }
}

Rust Debugging

Debug Logging:

use tracing::{debug, info, warn, error, instrument};

#[instrument]
pub async fn create_server(name: &str) -> Result<ServerInfo> {
    debug!("Starting server creation for: {}", name);

    let config = load_config()
        .map_err(|e| {
            error!("Failed to load config: {:?}", e);
            e
        })?;

    info!("Configuration loaded successfully");
    debug!("Config details: {:?}", config);

    let server = provision_server(name, &config).await
        .map_err(|e| {
            error!("Provisioning failed for {}: {:?}", name, e);
            e
        })?;

    info!("Server {} created successfully", name);
    Ok(server)
}

Interactive Debugging:

// Use debugger breakpoints
#[cfg(debug_assertions)]
{
    println!("Debug: server creation starting");
    dbg!(&config);
    // Add breakpoint here in IDE
}

Log Analysis

Log Monitoring:

# Follow all logs
tail -f workspace/runtime/logs/$USER/*.log

# Filter for errors
grep -i error workspace/runtime/logs/$USER/*.log

# Monitor specific component
tail -f workspace/runtime/logs/$USER/orchestrator.log | grep -i workflow

# Structured log analysis
jq 'select(.level == "ERROR")' workspace/runtime/logs/$USER/structured.jsonl

Debug Log Levels:

# Different verbosity levels
PROVISIONING_LOG_LEVEL=trace provisioning server create test
PROVISIONING_LOG_LEVEL=debug provisioning server create test
PROVISIONING_LOG_LEVEL=info provisioning server create test

Integration Workflows

Existing System Integration

Working with Legacy Components:

# Test integration with existing system
provisioning --version                    # Legacy system
src/core/nulib/provisioning --version    # New system

# Test workspace integration
PROVISIONING_WORKSPACE_USER=$USER provisioning server list

# Validate configuration compatibility
provisioning validate config
nu workspace.nu config validate

API Integration Testing

REST API Testing:

# Test orchestrator API
curl -X GET http://localhost:9090/health
curl -X GET http://localhost:9090/tasks

# Test workflow creation
curl -X POST http://localhost:9090/workflows/servers/create \
  -H "Content-Type: application/json" \
  -d '{"name": "test-server", "plan": "2xCPU-4GB"}'

# Monitor workflow
curl -X GET http://localhost:9090/workflows/batch/status/workflow-id

Database Integration

SurrealDB Integration:

# Test database connectivity
use core/nulib/lib_provisioning/database/surreal.nu
let db = (connect-database)
(test-connection $db)

# Workflow state testing
let workflow_id = (create-workflow-record "test-workflow")
let status = (get-workflow-status $workflow_id)
assert ($status.status == "pending")

External Tool Integration

Container Integration:

# Test with Docker
docker run --rm -v $(pwd):/work provisioning:dev provisioning --version

# Test with Kubernetes
kubectl apply -f manifests/test-pod.yaml
kubectl logs test-pod

# Validate in different environments
make test-dist PLATFORM=docker
make test-dist PLATFORM=kubernetes

Collaboration Guidelines

Branch Strategy

Branch Naming:

  • feature/description - New features
  • fix/description - Bug fixes
  • docs/description - Documentation updates
  • refactor/description - Code refactoring
  • test/description - Test improvements

Workflow:

# Start new feature
git checkout main
git pull origin main
git checkout -b feature/new-provider-support

# Regular commits
git add .
git commit -m "feat(provider): implement server creation API"

# Push and create PR
git push origin feature/new-provider-support
gh pr create --title "Add new provider support" --body "..."

Code Review Process

Review Checklist:

  • Code follows project conventions
  • Tests are included and passing
  • Documentation is updated
  • No hardcoded values
  • Error handling is comprehensive
  • Performance considerations addressed

Review Commands:

# Test PR locally
gh pr checkout 123
cd src/tools && make ci-test

# Run specific tests
nu workspace/extensions/providers/new-provider/tests/run-all.nu

# Check code quality
cargo clippy -- -D warnings
find . -name "*.nu" -exec nu --check {} \;

Documentation Requirements

Code Documentation:

# Function documentation
def create-server [
    name: string        # Server name (must be unique)
    plan: string        # Server plan (e.g., "2xCPU-4GB")
    --dry-run           # Show what would be created without doing it
]: nothing -> record {  # Returns server creation result
    # Creates a new server with the specified configuration
    #
    # Examples:
    #   create-server "web-01" "2xCPU-4GB"
    #   create-server "test" "1xCPU-2GB" --dry-run

    # Implementation
}
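
Nushell renders these parameter comments directly into the generated help text, so the documentation stays next to the code (output abridged):

help create-server
# Usage:
#   > create-server {flags} <name> <plan>
#
# Flags:
#   --dry-run: Show what would be created without doing it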

Communication

Progress Updates:

  • Daily standup participation
  • Weekly architecture reviews
  • PR descriptions with context
  • Issue tracking with details

Knowledge Sharing:

  • Technical blog posts
  • Architecture decision records
  • Code review discussions
  • Team documentation updates

Quality Assurance

Code Quality Checks

Automated Quality Gates:

# Pre-commit hooks
pre-commit install

# Manual quality check
cd src/tools
make validate-all

# Security audit
cargo audit

Quality Metrics:

  • Code coverage > 80%
  • No critical security vulnerabilities
  • All tests passing
  • Documentation coverage complete
  • Performance benchmarks met
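
These gates can be scripted. For instance, a line-coverage gate for the Rust components might use cargo-llvm-cov (an assumed tooling choice, not a project requirement):

# Install the coverage tool once
cargo install cargo-llvm-cov

# Fail the build if line coverage drops below 80%
cargo llvm-cov --fail-under-lines 80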

Performance Monitoring

Performance Testing:

# Benchmark builds
make benchmark

# Performance profiling
cargo flamegraph --bin provisioning-orchestrator

# Load testing
ab -n 1000 -c 10 http://localhost:9090/health

Resource Monitoring:

# Monitor during development
nu workspace/tools/runtime-manager.nu monitor --duration 5m

# Check resource usage
du -sh workspace/runtime/
df -h

Best Practices

Configuration Management

Never Hardcode:

# Bad
def get-api-url [] { "https://api.upcloud.com" }

# Good
def get-api-url [] {
    get-config-value "providers.upcloud.api_url" "https://api.upcloud.com"
}

Error Handling

Comprehensive Error Context:

def create-server [name: string] {
    try {
        validate-server-name $name
    } catch { |e|
        error make {
            msg: $"Invalid server name '($name)': ($e.msg)",
            label: {text: "server name validation failed", span: $e.span?}
        }
    }

    try {
        provision-server $name
    } catch { |e|
        error make {
            msg: $"Server provisioning failed for '($name)': ($e.msg)",
            help: "Check provider credentials and quota limits"
        }
    }
}

Resource Management

Clean Up Resources:

def with-temporary-server [name: string, action: closure] {
    let server = (create-server $name)

    let result = try {
        do $action $server
    } catch { |e|
        # Clean up on error, then re-raise
        delete-server $name
        error make {msg: $"Action failed on temporary server '($name)': ($e.msg)"}
    }

    # Clean up on success
    delete-server $name
    $result
}
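
Usage then guarantees cleanup on both success and failure (run-smoke-tests is a placeholder for the caller's logic):

with-temporary-server "smoke-test-box" { |server|
    run-smoke-tests $server    # hypothetical test runner
}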

Testing Best Practices

Test Isolation:

def test-with-isolation [test_name: string, test_action: closure] {
    let test_workspace = $"test-($test_name)-(date now | format date '%Y%m%d%H%M%S')"

    # Set up isolated environment
    $env.PROVISIONING_WORKSPACE_USER = $test_workspace
    nu workspace.nu init --user-name $test_workspace

    # Nushell's try/catch has no finally clause, so capture the outcome,
    # clean up unconditionally, then report
    let outcome = try {
        do $test_action
        {ok: true, msg: ""}
    } catch { |e|
        {ok: false, msg: $e.msg}
    }

    # Clean up test environment
    nu workspace.nu cleanup --user-name $test_workspace --type all --force

    if $outcome.ok {
        print $"✅ Test ($test_name) passed"
    } else {
        print $"❌ Test ($test_name) failed: ($outcome.msg)"
        exit 1
    }
}
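
For example, wrapping a dry-run creation test:

use std assert

test-with-isolation "server-create-dry-run" {
    let result = (create-server "iso-test" --dry-run)
    assert ($result.status == "dry-run")
}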

This development workflow provides a comprehensive framework for efficient, quality-focused development while maintaining the project’s architectural principles and ensuring smooth collaboration across the team.