# Integration Guide

This document explains how the new project structure integrates with existing systems and covers API compatibility and versioning, database migration strategies, deployment considerations, and monitoring and observability.

## Table of Contents

  1. Overview
  2. Existing System Integration
  3. API Compatibility and Versioning
  4. Database Migration Strategies
  5. Deployment Considerations
  6. Monitoring and Observability
  7. Legacy System Bridge
  8. Migration Pathways
  9. Troubleshooting Integration Issues

## Overview

Provisioning has been designed with integration as a core principle, ensuring seamless compatibility between new development-focused components and existing production systems while providing clear migration pathways.

**Integration Principles**:

  • Backward Compatibility: All existing APIs and interfaces remain functional
  • Gradual Migration: Systems can be migrated incrementally without disruption
  • Dual Operation: New and legacy systems operate side-by-side during transition
  • Zero Downtime: Migrations occur without service interruption
  • Data Integrity: All data migrations are atomic and reversible

**Integration Architecture**:

```plaintext
Integration Ecosystem
┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│   Legacy Core   │ ←→ │  Bridge Layer   │ ←→ │   New Systems   │
│                 │    │                 │    │                 │
│ - ENV config    │    │ - Compatibility │    │ - TOML config   │
│ - Direct calls  │    │ - Translation   │    │ - Orchestrator  │
│ - File-based    │    │ - Monitoring    │    │ - Workflows     │
│ - Simple logging│    │ - Validation    │    │ - REST APIs     │
└─────────────────┘    └─────────────────┘    └─────────────────┘
```

## Existing System Integration

### Command-Line Interface Integration

**Seamless CLI Compatibility**:

```bash
# All existing commands continue to work unchanged
./core/nulib/provisioning server create web-01 2xCPU-4GB
./core/nulib/provisioning taskserv install kubernetes
./core/nulib/provisioning cluster create buildkit

# New commands available alongside existing ones
./src/core/nulib/provisioning server create web-01 2xCPU-4GB --orchestrated
nu workspace/tools/workspace.nu health --detailed
```

**Path Resolution Integration**:

```nushell
# Automatic path resolution between systems
use workspace/lib/path-resolver.nu

# Resolves to workspace path if available, falls back to core
let config_path = (path-resolver resolve_path "config" "user" --fallback-to-core)

# Seamless extension discovery
let provider_path = (path-resolver resolve_extension "providers" "upcloud")
```

### Configuration System Bridge

**Dual Configuration Support**:

```nushell
# Configuration bridge supports both ENV and TOML
def get-config-value-bridge [key: string, default: string = ""] -> string {
    # Try new TOML configuration first
    let toml_value = try {
        get-config-value $key
    } catch { null }

    if $toml_value != null {
        return $toml_value
    }

    # Fall back to ENV variable (legacy support)
    let env_key = $"PROVISIONING_($key | str replace --all '.' '_' | str upcase)"
    let env_value = ($env | get -i $env_key)

    if $env_value != null {
        return $env_value
    }

    # Use default if provided
    if $default != "" {
        return $default
    }

    # Error with helpful migration message
    error make {
        msg: $"Configuration not found: ($key)",
        help: $"Migrate from ($env_key) environment variable to ($key) in config file"
    }
}
```
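
A minimal usage sketch (the key name and default below are illustrative, not keys defined by the project):

```nushell
# Resolved from TOML first, then from PROVISIONING_ORCHESTRATOR_URL, then from the supplied default
let orchestrator_url = (get-config-value-bridge "orchestrator.url" "http://localhost:9090")
```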

### Data Integration

**Shared Data Access**:

```nushell
# Unified data access across old and new systems
def get-server-info [server_name: string] -> record {
    # Try new orchestrator data store first
    let orchestrator_data = try {
        get-orchestrator-server-data $server_name
    } catch { null }

    if $orchestrator_data != null {
        return $orchestrator_data
    }

    # Fall back to legacy file-based storage
    let legacy_data = try {
        get-legacy-server-data $server_name
    } catch { null }

    if $legacy_data != null {
        return ($legacy_data | migrate-to-new-format)
    }

    error make {msg: $"Server not found: ($server_name)"}
}
```

### Process Integration

**Hybrid Process Management**:

```nushell
# Orchestrator-aware process management
def create-server-integrated [
    name: string,
    plan: string,
    --orchestrated: bool = false
] -> record {
    if $orchestrated and (check-orchestrator-available) {
        # Use new orchestrator workflow
        return (create-server-workflow $name $plan)
    } else {
        # Use legacy direct creation
        return (create-server-direct $name $plan)
    }
}

def check-orchestrator-available [] -> bool {
    try {
        (http get "http://localhost:9090/health" | get status) == "ok"
    } catch {
        false
    }
}
```
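
A short usage sketch of the hybrid entry point: when the orchestrator health endpoint is unreachable, the call falls back to the legacy direct path automatically.

```nushell
# Uses the orchestrator workflow when available, otherwise legacy direct creation
create-server-integrated "web-01" "2xCPU-4GB" --orchestrated
```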

## API Compatibility and Versioning

### REST API Versioning

**API Version Strategy**:

- **v1**: Legacy compatibility API (existing functionality)
- **v2**: Enhanced API with orchestrator features
- **v3**: Full workflow and batch operation support

**Version Header Support**:

```bash
# API calls with version specification
curl -H "API-Version: v1" http://localhost:9090/servers
curl -H "API-Version: v2" http://localhost:9090/workflows/servers/create
curl -H "API-Version: v3" http://localhost:9090/workflows/batch/submit
```

### API Compatibility Layer

**Backward Compatible Endpoints**:

```rust
// Rust API compatibility layer
#[derive(Debug, Serialize, Deserialize)]
struct ApiRequest {
    version: Option<String>,
    #[serde(flatten)]
    payload: serde_json::Value,
}

async fn handle_versioned_request(
    headers: HeaderMap,
    req: ApiRequest,
) -> Result<ApiResponse, ApiError> {
    let api_version = headers
        .get("API-Version")
        .and_then(|v| v.to_str().ok())
        .unwrap_or("v1");

    match api_version {
        "v1" => handle_v1_request(req.payload).await,
        "v2" => handle_v2_request(req.payload).await,
        "v3" => handle_v3_request(req.payload).await,
        _ => Err(ApiError::UnsupportedVersion(api_version.to_string())),
    }
}

// V1 compatibility endpoint
async fn handle_v1_request(payload: serde_json::Value) -> Result<ApiResponse, ApiError> {
    // Transform request to legacy format
    let legacy_request = transform_to_legacy_format(payload)?;

    // Execute using legacy system
    let result = execute_legacy_operation(legacy_request).await?;

    // Transform response to v1 format
    Ok(transform_to_v1_response(result))
}
```

### Schema Evolution

**Backward Compatible Schema Changes**:

```kcl
# API schema with version support
schema ServerCreateRequest:
    # V1 fields (always supported)
    name: str
    plan: str
    zone?: str = "auto"

    # V2 additions (optional for backward compatibility)
    orchestrated?: bool = false
    workflow_options?: WorkflowOptions

    # V3 additions
    batch_options?: BatchOptions
    dependencies?: [str] = []

    # Version constraints
    api_version?: str = "v1"

    check:
        len(name) > 0, "Name cannot be empty"
        plan in ["1xCPU-2GB", "2xCPU-4GB", "4xCPU-8GB", "8xCPU-16GB"], "Invalid plan"

# Conditional validation based on API version
schema WorkflowOptions:
    wait_for_completion?: bool = true
    timeout_seconds?: int = 300
    retry_count?: int = 3

    check:
        timeout_seconds > 0, "Timeout must be positive"
        retry_count >= 0, "Retry count must be non-negative"
```

### Client SDK Compatibility

**Multi-Version Client Support**:

```nushell
# Nushell client with version support
def "client create-server" [
    name: string,
    plan: string,
    --api-version: string = "v1",
    --orchestrated: bool = false
] -> record {
    let endpoint = match $api_version {
        "v1" => "/servers",
        "v2" => "/workflows/servers/create",
        "v3" => "/workflows/batch/submit",
        _ => (error make {msg: $"Unsupported API version: ($api_version)"})
    }

    let request_body = match $api_version {
        "v1" => {name: $name, plan: $plan},
        "v2" => {name: $name, plan: $plan, orchestrated: $orchestrated},
        "v3" => {
            operations: [{
                id: "create_server",
                type: "server_create",
                config: {name: $name, plan: $plan}
            }]
        },
        _ => (error make {msg: $"Unsupported API version: ($api_version)"})
    }

    http post --headers {
        "Content-Type": "application/json",
        "API-Version": $api_version
    } $"http://localhost:9090($endpoint)" $request_body
}
```

## Database Migration Strategies

### Database Architecture Evolution

**Migration Strategy**:

```plaintext
Database Evolution Path
┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│  File-based     │ → │   SQLite        │ → │   SurrealDB     │
│  Storage        │    │   Migration     │    │   Full Schema   │
│                 │    │                 │    │                 │
│ - JSON files    │    │ - Structured    │    │ - Graph DB      │
│ - Text logs     │    │ - Transactions  │    │ - Real-time     │
│ - Simple state  │    │ - Backup/restore│    │ - Clustering    │
└─────────────────┘    └─────────────────┘    └─────────────────┘
```

### Migration Scripts

**Automated Database Migration**:

```nushell
# Database migration orchestration
def migrate-database [
    --from: string = "filesystem",
    --to: string = "surrealdb",
    --backup-first: bool = true,
    --verify: bool = true
] -> record {
    if $backup_first {
        print "Creating backup before migration..."
        let backup_result = (create-database-backup $from)
        print $"Backup created: ($backup_result.path)"
    }

    print $"Migrating from ($from) to ($to)..."

    match [$from, $to] {
        ["filesystem", "sqlite"] => migrate_filesystem_to_sqlite,
        ["filesystem", "surrealdb"] => migrate_filesystem_to_surrealdb,
        ["sqlite", "surrealdb"] => migrate_sqlite_to_surrealdb,
        _ => (error make {msg: $"Unsupported migration path: ($from) → ($to)"})
    }

    if $verify {
        print "Verifying migration integrity..."
        let verification = (verify-migration $from $to)
        if not $verification.success {
            error make {
                msg: $"Migration verification failed: ($verification.errors)",
                help: "Restore from backup and retry migration"
            }
        }
    }

    print $"Migration from ($from) to ($to) completed successfully"
    {from: $from, to: $to, status: "completed", migrated_at: (date now)}
}
```
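
For example, the default path (backup first, then verify) for moving from file-based storage to SurrealDB is a single call:

```nushell
# Backup and verification are enabled by default
migrate-database --from filesystem --to surrealdb
```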

**File System to SurrealDB Migration**:

```nushell
def migrate_filesystem_to_surrealdb [] -> record {
    # Initialize SurrealDB connection
    let db = (connect-surrealdb)

    # Migrate server data
    let server_files = (ls data/servers/*.json)
    mut migrated_servers = []

    for server_file in $server_files {
        let server_data = (open $server_file.name | from json)

        # Transform to new schema
        let server_record = {
            id: $server_data.id,
            name: $server_data.name,
            plan: $server_data.plan,
            zone: ($server_data.zone? | default "unknown"),
            status: $server_data.status,
            ip_address: $server_data.ip_address?,
            created_at: $server_data.created_at,
            updated_at: (date now),
            metadata: ($server_data.metadata? | default {}),
            tags: ($server_data.tags? | default [])
        }

        # Insert into SurrealDB
        let insert_result = try {
            query-surrealdb $"CREATE servers:($server_record.id) CONTENT ($server_record | to json)"
        } catch { |e|
            print $"Warning: Failed to migrate server ($server_data.name): ($e.msg)"
        }

        $migrated_servers = ($migrated_servers | append $server_record.id)
    }

    # Migrate workflow data
    migrate_workflows_to_surrealdb $db

    # Migrate state data
    migrate_state_to_surrealdb $db

    {
        migrated_servers: ($migrated_servers | length),
        migrated_workflows: (migrate_workflows_to_surrealdb $db).count,
        status: "completed"
    }
}
```

### Data Integrity Verification

**Migration Verification**:

```nushell
def verify-migration [from: string, to: string] -> record {
    print "Verifying data integrity..."

    let source_data = (read-source-data $from)
    let target_data = (read-target-data $to)

    mut errors = []

    # Verify record counts
    if $source_data.servers.count != $target_data.servers.count {
        $errors = ($errors | append "Server count mismatch")
    }

    # Verify key records
    for server in $source_data.servers {
        let target_server = ($target_data.servers | where id == $server.id | first)

        if ($target_server | is-empty) {
            $errors = ($errors | append $"Missing server: ($server.id)")
        } else {
            # Verify critical fields
            if $target_server.name != $server.name {
                $errors = ($errors | append $"Name mismatch for server ($server.id)")
            }

            if $target_server.status != $server.status {
                $errors = ($errors | append $"Status mismatch for server ($server.id)")
            }
        }
    }

    {
        success: ($errors | length) == 0,
        errors: $errors,
        verified_at: (date now)
    }
}
```

## Deployment Considerations

### Deployment Architecture

**Hybrid Deployment Model**:

```plaintext
Deployment Architecture
┌─────────────────────────────────────────────────────────────────┐
│                    Load Balancer / Reverse Proxy               │
└─────────────────────┬───────────────────────────────────────────┘
                      │
    ┌─────────────────┼─────────────────┐
    │                 │                 │
┌───▼────┐      ┌─────▼─────┐      ┌───▼────┐
│Legacy  │      │Orchestrator│      │New     │
│System  │ ←→   │Bridge      │  ←→  │Systems │
│        │      │            │      │        │
│- CLI   │      │- API Gate  │      │- REST  │
│- Files │      │- Compat    │      │- DB    │
│- Logs  │      │- Monitor   │      │- Queue │
└────────┘      └────────────┘      └────────┘
```

### Deployment Strategies

**Blue-Green Deployment**:

```bash
# Blue-Green deployment with integration bridge
# Phase 1: Deploy new system alongside existing (Green environment)
cd src/tools
make all
make create-installers

# Install new system without disrupting existing
./packages/installers/install-provisioning-2.0.0.sh \
    --install-path /opt/provisioning-v2 \
    --no-replace-existing \
    --enable-bridge-mode

# Phase 2: Start orchestrator and validate integration
/opt/provisioning-v2/bin/orchestrator start --bridge-mode --legacy-path /opt/provisioning-v1

# Phase 3: Gradual traffic shift
# Route 10% traffic to new system
nginx-traffic-split --new-backend 10%

# Validate metrics and gradually increase
nginx-traffic-split --new-backend 50%
nginx-traffic-split --new-backend 90%

# Phase 4: Complete cutover
nginx-traffic-split --new-backend 100%
/opt/provisioning-v1/bin/orchestrator stop
```

**Rolling Update**:

```nushell
def rolling-deployment [
    --target-version: string,
    --batch-size: int = 3,
    --health-check-interval: duration = 30sec
] -> record {
    let nodes = (get-deployment-nodes)
    let batches = ($nodes | chunks $batch_size)

    mut deployment_results = []

    for batch in $batches {
        print $"Deploying to batch: ($batch | get name | str join ', ')"

        # Deploy to batch
        for node in $batch {
            deploy-to-node $node $target_version
        }

        # Wait for health checks
        sleep $health_check_interval

        # Verify batch health
        let batch_health = ($batch | each { |node| check-node-health $node })
        let healthy_nodes = ($batch_health | where healthy == true | length)

        if $healthy_nodes != ($batch | length) {
            # Rollback batch on failure
            print $"Health check failed, rolling back batch"
            for node in $batch {
                rollback-node $node
            }
            error make {msg: "Rolling deployment failed at batch"}
        }

        print $"Batch deployed successfully"
        $deployment_results = ($deployment_results | append {
            batch: $batch,
            status: "success",
            deployed_at: (date now)
        })
    }

    {
        strategy: "rolling",
        target_version: $target_version,
        batches: ($deployment_results | length),
        status: "completed",
        completed_at: (date now)
    }
}
```

### Configuration Deployment

**Environment-Specific Deployment**:

```bash
# Development deployment
PROVISIONING_ENV=dev ./deploy.sh \
    --config-source config.dev.toml \
    --enable-debug \
    --enable-hot-reload

# Staging deployment
PROVISIONING_ENV=staging ./deploy.sh \
    --config-source config.staging.toml \
    --enable-monitoring \
    --backup-before-deploy

# Production deployment
PROVISIONING_ENV=prod ./deploy.sh \
    --config-source config.prod.toml \
    --zero-downtime \
    --enable-all-monitoring \
    --backup-before-deploy \
    --health-check-timeout 5m
```

### Container Integration

**Docker Deployment with Bridge**:

```dockerfile
# Multi-stage Docker build supporting both systems
FROM rust:1.70 as builder
WORKDIR /app
COPY . .
RUN cargo build --release

FROM ubuntu:22.04 as runtime
WORKDIR /app

# Install both legacy and new systems
COPY --from=builder /app/target/release/orchestrator /app/bin/
COPY legacy-provisioning/ /app/legacy/
COPY config/ /app/config/

# Bridge script for dual operation
COPY bridge-start.sh /app/bin/

ENV PROVISIONING_BRIDGE_MODE=true
ENV PROVISIONING_LEGACY_PATH=/app/legacy
ENV PROVISIONING_NEW_PATH=/app/bin

EXPOSE 8080
CMD ["/app/bin/bridge-start.sh"]
```

**Kubernetes Integration**:

```yaml
# Kubernetes deployment with bridge sidecar
apiVersion: apps/v1
kind: Deployment
metadata:
  name: provisioning-system
spec:
  replicas: 3
  selector:
    matchLabels:
      app: provisioning-system
  template:
    metadata:
      labels:
        app: provisioning-system
    spec:
      containers:
      - name: orchestrator
        image: provisioning-system:2.0.0
        ports:
        - containerPort: 8080
        env:
        - name: PROVISIONING_BRIDGE_MODE
          value: "true"
        volumeMounts:
        - name: config
          mountPath: /app/config
        - name: legacy-data
          mountPath: /app/legacy/data

      - name: legacy-bridge
        image: provisioning-legacy:1.0.0
        env:
        - name: BRIDGE_ORCHESTRATOR_URL
          value: "http://localhost:9090"
        volumeMounts:
        - name: legacy-data
          mountPath: /data

      volumes:
      - name: config
        configMap:
          name: provisioning-config
      - name: legacy-data
        persistentVolumeClaim:
          claimName: provisioning-data
```

## Monitoring and Observability

### Integrated Monitoring Architecture

**Monitoring Stack Integration**:

```plaintext
Observability Architecture
┌─────────────────────────────────────────────────────────────────┐
│                    Monitoring Dashboard                         │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐           │
│  │   Grafana   │  │  Jaeger     │  │  AlertMgr   │           │
│  └─────────────┘  └─────────────┘  └─────────────┘           │
└─────────────┬───────────────┬───────────────┬─────────────────┘
              │               │               │
   ┌──────────▼──────────┐   │   ┌───────────▼───────────┐
   │     Prometheus      │   │   │      Jaeger           │
   │   (Metrics)         │   │   │    (Tracing)          │
   └──────────┬──────────┘   │   └───────────┬───────────┘
              │               │               │
┌─────────────▼─────────────┐ │ ┌─────────────▼─────────────┐
│        Legacy             │ │ │        New System         │
│      Monitoring           │ │ │       Monitoring          │
│                           │ │ │                           │
│ - File-based logs        │ │ │ - Structured logs         │
│ - Simple metrics         │ │ │ - Prometheus metrics      │
│ - Basic health checks    │ │ │ - Distributed tracing     │
└───────────────────────────┘ │ └───────────────────────────┘
                              │
                    ┌─────────▼─────────┐
                    │   Bridge Monitor  │
                    │                   │
                    │ - Integration     │
                    │ - Compatibility   │
                    │ - Migration       │
                    └───────────────────┘
```

### Metrics Integration

**Unified Metrics Collection**:

```nushell
# Metrics bridge for legacy and new systems
def collect-system-metrics [] -> record {
    let legacy_metrics = collect-legacy-metrics
    let new_metrics = collect-new-metrics
    let bridge_metrics = collect-bridge-metrics

    {
        timestamp: (date now),
        legacy: $legacy_metrics,
        new: $new_metrics,
        bridge: $bridge_metrics,
        integration: {
            compatibility_rate: (calculate-compatibility-rate $bridge_metrics),
            migration_progress: (calculate-migration-progress),
            system_health: (assess-overall-health $legacy_metrics $new_metrics)
        }
    }
}

def collect-legacy-metrics [] -> record {
    let log_files = (ls logs/*.log)
    let process_stats = (get-process-stats "legacy-provisioning")

    {
        active_processes: $process_stats.count,
        log_file_sizes: ($log_files | get size | math sum),
        last_activity: (get-last-log-timestamp),
        error_count: (count-log-errors "last 1h"),
        performance: {
            avg_response_time: (calculate-avg-response-time),
            throughput: (calculate-throughput)
        }
    }
}

def collect-new-metrics [] -> record {
    let orchestrator_stats = try {
        http get "http://localhost:9090/metrics"
    } catch {
        {status: "unavailable"}
    }

    {
        orchestrator: $orchestrator_stats,
        workflow_stats: (get-workflow-metrics),
        api_stats: (get-api-metrics),
        database_stats: (get-database-metrics)
    }
}
```

### Logging Integration

**Unified Logging Strategy**:

```nushell
# Structured logging bridge
def log-integrated [
    level: string,
    message: string,
    --component: string = "bridge",
    --legacy-compat: bool = true
] {
    let log_entry = {
        timestamp: (date now | format date "%Y-%m-%d %H:%M:%S%.3f"),
        level: $level,
        component: $component,
        message: $message,
        system: "integrated",
        correlation_id: (generate-correlation-id)
    }

    # Write to structured log (new system)
    $log_entry | to json | save --append logs/integrated.jsonl

    if $legacy_compat {
        # Write to legacy log format
        let legacy_entry = $"[($log_entry.timestamp)] [($level)] ($component): ($message)"
        $legacy_entry | save --append logs/legacy.log
    }

    # Send to monitoring system
    send-to-monitoring $log_entry
}
```

### Health Check Integration

**Comprehensive Health Monitoring**:

```nushell
def health-check-integrated [] -> record {
    let health_checks = [
        {name: "legacy-system", check: (check-legacy-health)},
        {name: "orchestrator", check: (check-orchestrator-health)},
        {name: "database", check: (check-database-health)},
        {name: "bridge-compatibility", check: (check-bridge-health)},
        {name: "configuration", check: (check-config-health)}
    ]

    let results = ($health_checks | each { |check|
        let result = try {
            do $check.check
        } catch { |e|
            {status: "unhealthy", error: $e.msg}
        }

        {name: $check.name, result: $result}
    })

    let healthy_count = ($results | where result.status == "healthy" | length)
    let total_count = ($results | length)

    {
        overall_status: (if $healthy_count == $total_count { "healthy" } else { "degraded" }),
        healthy_services: $healthy_count,
        total_services: $total_count,
        services: $results,
        checked_at: (date now)
    }
}
```

## Legacy System Bridge

### Bridge Architecture

**Bridge Component Design**:

```nushell
# Legacy system bridge module
export module bridge {
    # Bridge state management
    export def init-bridge [] -> record {
        let bridge_config = get-config-section "bridge"

        {
            legacy_path: ($bridge_config.legacy_path? | default "/opt/provisioning-v1"),
            new_path: ($bridge_config.new_path? | default "/opt/provisioning-v2"),
            mode: ($bridge_config.mode? | default "compatibility"),
            monitoring_enabled: ($bridge_config.monitoring? | default true),
            initialized_at: (date now)
        }
    }

    # Command translation layer
    export def translate-command [
        legacy_command: list<string>
    ] -> list<string> {
        match $legacy_command {
            ["provisioning", "server", "create", $name, $plan, ...$args] => {
                let new_args = ($args | each { |arg|
                    match $arg {
                        "--dry-run" => "--dry-run",
                        "--wait" => "--wait",
                        $zone if ($zone | str starts-with "--zone=") => $zone,
                        _ => $arg
                    }
                })

                ["provisioning", "server", "create", $name, $plan] ++ $new_args ++ ["--orchestrated"]
            },
            _ => $legacy_command  # Pass through unchanged
        }
    }

    # Data format translation
    export def translate-response [
        legacy_response: record,
        target_format: string = "v2"
    ] -> record {
        match $target_format {
            "v2" => {
                id: ($legacy_response.id? | default (generate-uuid)),
                name: $legacy_response.name,
                status: $legacy_response.status,
                created_at: ($legacy_response.created_at? | default (date now)),
                metadata: ($legacy_response | reject name status created_at),
                version: "v2-compat"
            },
            _ => $legacy_response
        }
    }
}
```
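
As a quick illustration of the translation layer, a legacy server-create invocation is rewritten with the `--orchestrated` flag appended, while unrecognized commands pass through unchanged (assuming the `bridge` module above is in scope):

```nushell
bridge translate-command ["provisioning", "server", "create", "web-01", "2xCPU-4GB", "--dry-run"]
# => ["provisioning", "server", "create", "web-01", "2xCPU-4GB", "--dry-run", "--orchestrated"]
```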

### Bridge Operation Modes

**Compatibility Mode**:

```nushell
# Full compatibility with legacy system
def run-compatibility-mode [] {
    print "Starting bridge in compatibility mode..."

    # Intercept legacy commands
    let legacy_commands = monitor-legacy-commands

    for command in $legacy_commands {
        let translated = (bridge translate-command $command)

        try {
            let result = (execute-new-system $translated)
            let legacy_result = (bridge translate-response $result "v1")
            respond-to-legacy $legacy_result
        } catch { |e|
            # Fall back to legacy system on error
            let fallback_result = (execute-legacy-system $command)
            respond-to-legacy $fallback_result
        }
    }
}
```

**Migration Mode**:

```nushell
# Gradual migration with traffic splitting
def run-migration-mode [
    --new-system-percentage: int = 50
] {
    print $"Starting bridge in migration mode (($new_system_percentage)% new system)"

    let commands = monitor-all-commands

    for command in $commands {
        let route_to_new = ((random int 1..100) <= $new_system_percentage)

        if $route_to_new {
            try {
                execute-new-system $command
            } catch {
                # Fall back to legacy on failure
                execute-legacy-system $command
            }
        } else {
            execute-legacy-system $command
        }
    }
}
```

## Migration Pathways

### Migration Phases

**Phase 1: Parallel Deployment**

- Deploy new system alongside existing
- Enable bridge for compatibility
- Begin data synchronization
- Monitor integration health

**Phase 2: Gradual Migration**

- Route increasing traffic to new system
- Migrate data in background
- Validate consistency
- Address integration issues

**Phase 3: Full Migration**

- Complete traffic cutover
- Decommission legacy system
- Clean up bridge components
- Finalize data migration

### Migration Automation

**Automated Migration Orchestration**:

```nushell
def execute-migration-plan [
    migration_plan: string,
    --dry-run: bool = false,
    --skip-backup: bool = false
] -> record {
    let plan = (open --raw $migration_plan | from yaml)

    if not $skip_backup {
        create-pre-migration-backup
    }

    mut migration_results = []

    for phase in $plan.phases {
        print $"Executing migration phase: ($phase.name)"

        if $dry_run {
            print $"[DRY RUN] Would execute phase: ($phase)"
            continue
        }

        let phase_result = try {
            execute-migration-phase $phase
        } catch { |e|
            print $"Migration phase failed: ($e.msg)"

            if ($phase.rollback_on_failure? | default false) {
                print "Rolling back migration phase..."
                rollback-migration-phase $phase
            }

            error make {msg: $"Migration failed at phase ($phase.name): ($e.msg)"}
        }

        $migration_results = ($migration_results | append $phase_result)

        # Wait between phases if specified
        if "wait_seconds" in $phase {
            sleep ($phase.wait_seconds * 1sec)
        }
    }

    {
        migration_plan: $migration_plan,
        phases_completed: ($migration_results | length),
        status: "completed",
        completed_at: (date now),
        results: $migration_results
    }
}
```
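
A minimal sketch of a plan file this command could consume; the phase names mirror the migration phases above, and only the fields read by `execute-migration-plan` (`phases`, `name`, `rollback_on_failure`, `wait_seconds`) carry meaning here:

```nushell
# Build a hypothetical three-phase plan and save it as YAML for execute-migration-plan
{
    phases: [
        {name: "parallel-deployment", rollback_on_failure: true, wait_seconds: 60},
        {name: "gradual-migration", rollback_on_failure: true, wait_seconds: 300},
        {name: "full-migration", rollback_on_failure: false}
    ]
} | to yaml | save migration-plan.yaml

# Validate readiness, then execute the plan
validate-migration-readiness
execute-migration-plan migration-plan.yaml
```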

**Migration Validation**:

```nushell
def validate-migration-readiness [] -> record {
    let checks = [
        {name: "backup-available", check: (check-backup-exists)},
        {name: "new-system-healthy", check: (check-new-system-health)},
        {name: "database-accessible", check: (check-database-connectivity)},
        {name: "configuration-valid", check: (validate-migration-config)},
        {name: "resources-available", check: (check-system-resources)},
        {name: "network-connectivity", check: (check-network-health)}
    ]

    let results = ($checks | each { |check|
        {
            name: $check.name,
            result: (do $check.check),
            timestamp: (date now)
        }
    })

    let failed_checks = ($results | where result.status != "ready")

    {
        ready_for_migration: ($failed_checks | length) == 0,
        checks: $results,
        failed_checks: $failed_checks,
        validated_at: (date now)
    }
}
```

## Troubleshooting Integration Issues

### Common Integration Problems

#### API Compatibility Issues

**Problem**: Version mismatch between client and server

```bash
# Diagnosis
curl -H "API-Version: v1" http://localhost:9090/health
curl -H "API-Version: v2" http://localhost:9090/health

# Solution: Check supported versions
curl http://localhost:9090/api/versions

# Update client API version
export PROVISIONING_API_VERSION=v2
```

#### Configuration Bridge Issues

**Problem**: Configuration not found in either system

```nushell
# Diagnosis
def diagnose-config-issue [key: string] -> record {
    let toml_result = try {
        get-config-value $key
    } catch { |e| {status: "failed", error: $e.msg} }

    let env_key = $"PROVISIONING_($key | str replace --all '.' '_' | str upcase)"
    let env_result = try {
        $env | get $env_key
    } catch { |e| {status: "failed", error: $e.msg} }

    {
        key: $key,
        toml_config: $toml_result,
        env_config: $env_result,
        migration_needed: ($toml_result.status == "failed" and $env_result.status != "failed")
    }
}

# Solution: Migrate configuration
def migrate-single-config [key: string] {
    let diagnosis = (diagnose-config-issue $key)

    if $diagnosis.migration_needed {
        let env_value = $diagnosis.env_config
        set-config-value $key $env_value
        print $"Migrated ($key) from environment variable"
    }
}
```

#### Database Integration Issues

**Problem**: Data inconsistency between systems

```nushell
# Diagnosis and repair
def repair-data-consistency [] -> record {
    let legacy_data = (read-legacy-data)
    let new_data = (read-new-data)

    mut inconsistencies = []

    # Check server records
    for server in $legacy_data.servers {
        let new_server = ($new_data.servers | where id == $server.id | first)

        if ($new_server | is-empty) {
            print $"Missing server in new system: ($server.id)"
            create-server-record $server
            $inconsistencies = ($inconsistencies | append {type: "missing", id: $server.id})
        } else if $new_server != $server {
            print $"Inconsistent server data: ($server.id)"
            update-server-record $server
            $inconsistencies = ($inconsistencies | append {type: "inconsistent", id: $server.id})
        }
    }

    {
        inconsistencies_found: ($inconsistencies | length),
        repairs_applied: ($inconsistencies | length),
        repaired_at: (date now)
    }
}
```

### Debug Tools

**Integration Debug Mode**:

```bash
# Enable comprehensive debugging
export PROVISIONING_DEBUG=true
export PROVISIONING_LOG_LEVEL=debug
export PROVISIONING_BRIDGE_DEBUG=true
export PROVISIONING_INTEGRATION_TRACE=true

# Run with integration debugging
provisioning server create test-server 2xCPU-4GB --debug-integration
```

**Health Check Debugging**:

```nushell
def debug-integration-health [] -> record {
    print "=== Integration Health Debug ==="

    # Check all integration points
    let legacy_health = try {
        check-legacy-system
    } catch { |e| {status: "error", error: $e.msg} }

    let orchestrator_health = try {
        http get "http://localhost:9090/health"
    } catch { |e| {status: "error", error: $e.msg} }

    let bridge_health = try {
        check-bridge-status
    } catch { |e| {status: "error", error: $e.msg} }

    let config_health = try {
        validate-config-integration
    } catch { |e| {status: "error", error: $e.msg} }

    print $"Legacy System: ($legacy_health.status)"
    print $"Orchestrator: ($orchestrator_health.status)"
    print $"Bridge: ($bridge_health.status)"
    print $"Configuration: ($config_health.status)"

    {
        legacy: $legacy_health,
        orchestrator: $orchestrator_health,
        bridge: $bridge_health,
        configuration: $config_health,
        debug_timestamp: (date now)
    }
}
```

This integration guide provides a framework for integrating new development components with existing production systems while preserving reliability, maintaining compatibility, and offering clear migration pathways.