
🔧 Advanced Features & Extensibility Guide

Version: 1.0 · Date: 2025-11-19 · Audience: Advanced users, DevOps engineers, platform teams


Table of Contents

  1. Custom Deployment Patterns
  2. Extending the Service Catalog
  3. Terraform Code Generation
  4. Service Mesh Integration
  5. Advanced Scaling
  6. Security & RBAC
  7. Monitoring & Metrics
  8. Multi-Region Deployment
  9. Custom Health Checks
  10. Plugin Architecture

Custom Deployment Patterns

Creating New Patterns

The service catalog supports custom deployment patterns. Add patterns to configs/services-catalog.toml:

[pattern.custom-staging]
name = "Custom Staging"
description = "Staging environment with monitoring"

[pattern.custom-staging.services]
syntaxis-cli = true
syntaxis-api = true
surrealdb = true
nats = true

[pattern.custom-staging.metadata]
environment = "staging"
monitoring = "prometheus"
logging = "elk"

Programmatic Pattern Generation

use provctl_bridge::ServiceCatalog;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let catalog = ServiceCatalog::from_file("configs/services-catalog.toml")?;

    // Generate for custom pattern
    let k8s = catalog.generate_kubernetes("custom-staging")?;
    let docker = catalog.generate_docker_compose("custom-staging")?;

    // Save to files
    std::fs::write("k8s-staging.yaml", k8s)?;
    std::fs::write("docker-compose-staging.yml", docker)?;

    Ok(())
}

Pattern-Specific Overrides

[pattern.high-availability]
name = "High Availability"
description = "HA configuration with replication"

[pattern.high-availability.services]
syntaxis-api = { enabled = true, replicas = 3, affinity = "spread" }
surrealdb = { enabled = true, replicas = 2, backup = true }
nats = { enabled = true, replicas = 2, clustering = true }

[pattern.high-availability.features]
tls = true
autoscaling = true
backup_retention_days = 30
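
When a pattern entry is an inline table, generators can read the override fields from the parsed pattern data. Below is a minimal illustrative helper (not part of the catalog API), assuming the pattern is exposed as a toml::Value and the field names mirror the example above:

use toml::Value;

/// Illustrative helper: read the `replicas` override for one service in a
/// pattern table, treating plain `name = true` entries as a single replica.
fn replicas_for(pattern: &Value, service: &str) -> i64 {
    pattern
        .get("services")
        .and_then(|s| s.get(service))
        .and_then(|svc| svc.get("replicas"))
        .and_then(|r| r.as_integer())
        .unwrap_or(1)
}

With the high-availability pattern above, `replicas_for(&pattern, "syntaxis-api")` returns 3.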

Extending the Service Catalog

Adding New Services

[service.prometheus]
name = "prometheus"
display_name = "Prometheus Monitoring"
description = "Metrics collection and alerting"
type = "monitoring"
external_service = true

[service.prometheus.binary]
name = "prometheus"
installed_to = "~/.local/bin/prometheus"

[service.prometheus.server]
default_host = "127.0.0.1"
default_port = 9090
scheme = "http"
port_configurable = true

[service.prometheus.usage]
description = "Prometheus time-series database"
basic_command = "prometheus --config.file=prometheus.yml"
health_check_endpoint = "/-/healthy"
health_check_method = "GET"
health_check_expected_status = 200
examples = [
    "prometheus --config.file=prometheus.yml",
    "curl http://127.0.0.1:9090/-/healthy",
    "curl http://127.0.0.1:9090/api/v1/query?query=up"
]

[service.prometheus.metadata]
platform_support = ["linux", "macos"]
min_memory_mb = 512
min_disk_space_mb = 500
internet_required = false
user_interaction = "optional"
background_service = true

[service.prometheus.dependencies]
requires = []
optional = ["surrealdb"]
conflicts = []

[service.prometheus.configuration]
config_location = "~/.config/prometheus"
tls_support = true
authentication = "optional"
clustering = false
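
After adding the entry, it can be sanity-checked through the same catalog API used elsewhere in this guide. A minimal sketch, assuming the `[service.prometheus.server]` block deserializes into fields named as in the TOML:

use provctl_bridge::ServiceCatalog;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let catalog = ServiceCatalog::from_file("configs/services-catalog.toml")?;

    // Confirm the new entry parsed and print where it will listen
    let prometheus = catalog
        .get_service("prometheus")
        .ok_or("prometheus not found in catalog")?;

    if let Some(server) = &prometheus.server {
        println!("prometheus: {}:{}", server.default_host, server.default_port);
    }

    Ok(())
}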

Extending Rust Types

Modify provctl-bridge/src/catalog.rs to add new fields:

/// Extended service definition with monitoring
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ServiceDefinition {
    // ... existing fields ...

    /// Monitoring configuration
    #[serde(default)]
    pub monitoring: Option<MonitoringConfig>,

    /// Backup configuration
    #[serde(default)]
    pub backup: Option<BackupConfig>,

    /// Custom labels and annotations
    #[serde(default)]
    pub labels: Option<HashMap<String, String>>,
}

/// Monitoring configuration
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct MonitoringConfig {
    pub metrics_port: u16,
    pub metrics_path: String,
    pub scrape_interval: String,
}

/// Backup configuration
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct BackupConfig {
    pub enabled: bool,
    pub frequency: String,
    pub retention_days: u32,
    pub destination: String,
}
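
Because the new fields are optional and marked `#[serde(default)]`, existing catalog files keep parsing unchanged; a service only needs the extra tables when it opts in. A quick round-trip check, assuming the `toml` crate is used for deserialization:

#[cfg(test)]
mod tests {
    use super::MonitoringConfig;

    #[test]
    fn monitoring_fragment_parses() {
        let fragment = r#"
            metrics_port = 9090
            metrics_path = "/metrics"
            scrape_interval = "15s"
        "#;

        let cfg: MonitoringConfig =
            toml::from_str(fragment).expect("valid monitoring fragment");
        assert_eq!(cfg.metrics_port, 9090);
        assert_eq!(cfg.metrics_path, "/metrics");
    }
}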

Terraform Code Generation

Add Terraform Generation Method

impl ServiceCatalog {
    /// Generate Terraform code from deployment pattern
    pub fn generate_terraform(&self, pattern_name: &str) -> Result<String, Box<dyn std::error::Error>> {
        let pattern = self
            .pattern
            .as_ref()
            .and_then(|p| p.get(pattern_name))
            .ok_or(format!("Pattern '{}' not found", pattern_name))?;

        // Pattern services are defined as a TOML table of
        // `name = true` or `name = { enabled = true, ... }` entries
        let services = pattern
            .get("services")
            .and_then(|v| v.as_table())
            .ok_or("Pattern missing services table")?;

        let mut output = String::from(
            r#"# Generated Terraform configuration from syntaxis catalog
terraform {
  required_version = ">= 1.0"
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.0"
    }
  }
}

provider "kubernetes" {
  config_path = "~/.kube/config"
}

# Syntaxis namespace
resource "kubernetes_namespace" "syntaxis" {
  metadata {
    name = "syntaxis"
  }
}

"#
        );

        // Generate resources for each enabled service in the pattern
        for (name_str, entry) in services {
            let enabled = entry
                .as_bool()
                .or_else(|| entry.get("enabled").and_then(|e| e.as_bool()))
                .unwrap_or(false);
            if !enabled {
                continue;
            }
            if let Some(svc) = self.get_service(name_str) {
                output.push_str(&self.generate_terraform_deployment(name_str, svc)?);
                if svc.server.is_some() || svc.web.is_some() {
                    output.push_str(&self.generate_terraform_service(name_str, svc)?);
                }
            }
        }

        Ok(output)
    }

    fn generate_terraform_deployment(
        &self,
        name: &str,
        service: &ServiceDefinition,
    ) -> Result<String, Box<dyn std::error::Error>> {
        let memory = service
            .metadata
            .as_ref()
            .map(|m| m.min_memory_mb)
            .unwrap_or(256);

        Ok(format!(
            r#"resource "kubernetes_deployment" "{name}" {{
  metadata {{
    name      = "{name}"
    namespace = kubernetes_namespace.syntaxis.metadata[0].name
    labels = {{
      app = "{name}"
    }}
  }}

  spec {{
    replicas = 1

    selector {{
      match_labels = {{
        app = "{name}"
      }}
    }}

    template {{
      metadata {{
        labels = {{
          app = "{name}"
        }}
      }}

      spec {{
        container {{
          name  = "{name}"
          image = "{name}:latest"

          resources {{
            requests = {{
              cpu    = "100m"
              memory = "{memory}Mi"
            }}
            limits = {{
              cpu    = "500m"
              memory = "{memory_limit}Mi"
            }}
          }}
        }}
      }}
    }}
  }}
}}

"#,
            name = name,
            memory = memory,
            memory_limit = memory * 2
        ))
    }

    fn generate_terraform_service(
        &self,
        name: &str,
        service: &ServiceDefinition,
    ) -> Result<String, Box<dyn std::error::Error>> {
        let port = if let Some(server) = &service.server {
            server.default_port
        } else if let Some(web) = &service.web {
            web.default_port
        } else {
            return Ok(String::new());
        };

        Ok(format!(
            r#"resource "kubernetes_service" "{name}" {{
  metadata {{
    name      = "{name}"
    namespace = kubernetes_namespace.syntaxis.metadata[0].name
  }}

  spec {{
    selector = {{
      app = "{name}"
    }}

    port {{
      port        = {port}
      target_port = {port}
    }}

    type = "ClusterIP"
  }}
}}

"#,
            name = name,
            port = port
        ))
    }
}

Usage Example

# Generate Terraform for the production pattern.
# The program links against provctl-bridge, so place it in a crate that
# depends on it (e.g. as src/bin/generate_terraform.rs) instead of
# compiling it with bare rustc.
cat > src/bin/generate_terraform.rs << 'EOF'
use provctl_bridge::ServiceCatalog;
use std::fs;

fn main() {
    let catalog = ServiceCatalog::from_file(
        "configs/services-catalog.toml"
    ).expect("Failed to load catalog");

    let terraform = catalog.generate_terraform("production")
        .expect("Failed to generate Terraform");

    fs::write("main.tf", terraform)
        .expect("Failed to write Terraform code");

    println!("Generated main.tf");
    println!("Run: terraform init && terraform apply");
}
EOF

cargo run --bin generate_terraform
terraform init
terraform apply

Service Mesh Integration

Istio Integration Example

Add to TOML:

[service.syntaxis-api.mesh]
enabled = true
mtls_mode = "STRICT"
circuit_breaker = true
connection_pool = "unlimited"
timeout = "30s"
retries = 3

Generate Istio Configuration

pub fn generate_istio_config(&self, namespace: &str) -> Result<String, Box<dyn std::error::Error>> {
    // Namespace-wide strict mTLS is expressed with a PeerAuthentication
    // resource (interpolated via format!, so {namespace} is filled in)
    let mut output = format!(
        r#"apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: {namespace}
spec:
  mtls:
    mode: STRICT
---
"#
    );

    for service_name in self.service_names() {
        if let Some(service) = self.get_service(&service_name) {
            if let Some(server) = &service.server {
                output.push_str(&format!(
                    r#"apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: {}
  namespace: {}
spec:
  hosts:
  - {}
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: {}.{}.svc.cluster.local
        port:
          number: {}
    timeout: 30s
    retries:
      attempts: 3
      perTryTimeout: 10s
---
"#,
                    service_name, namespace, service_name, service_name, namespace, server.default_port
                ));
            }
        }
    }

    Ok(output)
}
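
The `circuit_breaker` and `connection_pool` flags in the mesh TOML are not covered by the VirtualService above; they map naturally to an Istio DestinationRule. A hedged sketch of a companion method you could add (the pool and ejection values are hard-coded for brevity, adjust them to whatever the flags actually carry):

pub fn generate_istio_destination_rules(&self, namespace: &str) -> Result<String, Box<dyn std::error::Error>> {
    let mut output = String::new();

    for service_name in self.service_names() {
        if let Some(service) = self.get_service(&service_name) {
            if service.server.is_none() {
                continue;
            }
            // Connection pooling plus simple outlier detection acting as
            // the circuit breaker: eject a host after consecutive 5xx errors
            output.push_str(&format!(
                r#"apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: {name}
  namespace: {namespace}
spec:
  host: {name}.{namespace}.svc.cluster.local
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100
    outlierDetection:
      consecutive5xxErrors: 5
      interval: 30s
      baseEjectionTime: 60s
---
"#,
                name = service_name,
                namespace = namespace
            ));
        }
    }

    Ok(output)
}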

Advanced Scaling

Horizontal Pod Autoscaling

[pattern.production-autoscaling]
name = "Production with Autoscaling"

[pattern.production-autoscaling.services]
syntaxis-api = { enabled = true, replicas = 2, max_replicas = 10, cpu_threshold = 70 }
surrealdb = { enabled = true, replicas = 1, max_replicas = 3 }
nats = { enabled = true, replicas = 2, max_replicas = 5 }

Generate HPA Configuration

pub fn generate_hpa_config(&self, pattern_name: &str) -> Result<String, Box<dyn std::error::Error>> {
    let pattern = self.pattern
        .as_ref()
        .and_then(|p| p.get(pattern_name))
        .ok_or("Pattern not found")?;

    let mut output = String::new();

    for service_name in self.service.keys() {
        if let Some(hpa_config) = pattern.get("services")
            .and_then(|s| s.get(service_name))
            .and_then(|s| s.get("max_replicas"))
        {
            if let Some(max_replicas) = hpa_config.as_u64() {
                output.push_str(&format!(
                    r#"apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: {}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {}
  minReplicas: 1
  maxReplicas: {}
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
---
"#,
                    service_name, service_name, max_replicas
                ));
            }
        }
    }

    Ok(output)
}

Security & RBAC

RBAC Configuration

[pattern.production-secure]
name = "Production Secure"

[pattern.production-secure.security]
network_policy = true
pod_security_policy = true
rbac_enabled = true
secret_management = "sealed-secrets"

Generate RBAC Resources

pub fn generate_rbac_config(&self, namespace: &str) -> Result<String, Box<dyn std::error::Error>> {
    let rbac = format!(
        r#"apiVersion: v1
kind: ServiceAccount
metadata:
  name: syntaxis
  namespace: {}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: syntaxis-role
  namespace: {}
rules:
- apiGroups: [""]
  resources: ["pods", "pods/logs"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: syntaxis-rolebinding
  namespace: {}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: syntaxis-role
subjects:
- kind: ServiceAccount
  name: syntaxis
  namespace: {}
"#,
        namespace, namespace, namespace, namespace
    );

    Ok(rbac)
}
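
The `network_policy = true` flag above can be handled the same way. A minimal sketch that emits a default-deny ingress policy for the namespace plus an allowance for traffic inside it (adapt the selectors to your own labels):

pub fn generate_network_policy(&self, namespace: &str) -> Result<String, Box<dyn std::error::Error>> {
    // Deny all ingress by default, then allow pod-to-pod traffic
    // originating from the same namespace
    Ok(format!(
        r#"apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: {ns}
spec:
  podSelector: {{}}
  policyTypes:
  - Ingress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: {ns}
spec:
  podSelector: {{}}
  ingress:
  - from:
    - podSelector: {{}}
  policyTypes:
  - Ingress
"#,
        ns = namespace
    ))
}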

Monitoring & Metrics

Prometheus Integration

[service.syntaxis-api.metrics]
enabled = true
port = 9090
path = "/metrics"
scrape_interval = "15s"

[[service.syntaxis-api.alerts]]
name = "HighErrorRate"
condition = "rate(errors_total[5m]) > 0.05"
duration = "5m"
severity = "critical"

[[service.syntaxis-api.alerts]]
name = "HighLatency"
condition = "histogram_quantile(0.95, latency_ms) > 1000"
duration = "5m"
severity = "warning"

Generate Prometheus Config

pub fn generate_prometheus_scrape_config(&self) -> Result<String, Box<dyn std::error::Error>> {
    let mut output = String::from(
        r#"global:
  scrape_interval: 15s
  evaluation_interval: 15s

scrape_configs:
"#
    );

    for service_name in self.service_names() {
        if let Some(service) = self.get_service(&service_name) {
            if let Some(server) = &service.server {
                output.push_str(&format!(
                    r#"  - job_name: '{}'
    static_configs:
      - targets: ['{}:{}']
"#,
                    service_name, service_name, server.default_port
                ));
            }
        }
    }

    Ok(output)
}
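
The [[service.*.alerts]] entries above can be turned into a Prometheus rules file in the same style. A sketch, assuming `alerts` is exposed on ServiceDefinition as an optional list with name, condition, duration, and severity fields (that field is not part of the struct shown earlier and would need to be added the same way monitoring and backup were):

pub fn generate_prometheus_alert_rules(&self) -> Result<String, Box<dyn std::error::Error>> {
    let mut output = String::from("groups:\n- name: syntaxis\n  rules:\n");

    for service_name in self.service_names() {
        if let Some(service) = self.get_service(&service_name) {
            // `alerts` is an assumed Option<Vec<...>> field mirroring the TOML keys
            for alert in service.alerts.iter().flatten() {
                output.push_str(&format!(
                    "  - alert: {}\n    expr: {}\n    for: {}\n    labels:\n      severity: {}\n",
                    alert.name, alert.condition, alert.duration, alert.severity
                ));
            }
        }
    }

    Ok(output)
}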

Multi-Region Deployment

Regional Pattern Configuration

[pattern.multi-region]
name = "Multi-Region"
description = "Deployed across multiple regions"

[pattern.multi-region.regions.us-east]
provider = "aws"
region = "us-east-1"
services = ["syntaxis-api", "surrealdb"]
replica_count = 2

[pattern.multi-region.regions.eu-west]
provider = "aws"
region = "eu-west-1"
services = ["syntaxis-api", "surrealdb"]
replica_count = 2

[pattern.multi-region.regions.ap-south]
provider = "aws"
region = "ap-south-1"
services = ["syntaxis-api", "surrealdb"]
replica_count = 1

Generate Multi-Region Terraform

pub fn generate_multi_region_terraform(
    &self,
    pattern_name: &str,
) -> Result<HashMap<String, String>, Box<dyn std::error::Error>> {
    let pattern = self.pattern
        .as_ref()
        .and_then(|p| p.get(pattern_name))
        .ok_or("Pattern not found")?;

    let regions = pattern.get("regions")
        .and_then(|r| r.as_table())
        .ok_or("No regions defined")?;

    let mut configs = HashMap::new();

    for (region_name, region_config) in regions {
        let provider = region_config.get("provider")
            .and_then(|p| p.as_str())
            .unwrap_or("aws");

        let tf = format!(
            r#"terraform {{
  required_providers {{
    {} = {{
      source = "hashicorp/{}"
    }}
  }}
}}

provider "{}" {{
  region = "{}"
}}

module "syntaxis" {{
  source = "./modules/syntaxis"
  region = "{}"
}}
"#,
            provider, provider, provider, region, region
        );

        configs.insert(region_name.to_string(), tf);
    }

    Ok(configs)
}
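
A short usage sketch that writes one Terraform root module per region (the directory layout is illustrative):

use provctl_bridge::ServiceCatalog;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let catalog = ServiceCatalog::from_file("configs/services-catalog.toml")?;
    let configs = catalog.generate_multi_region_terraform("multi-region")?;

    // e.g. regions/us-east/main.tf, regions/eu-west/main.tf, ...
    for (region, tf) in &configs {
        let dir = format!("regions/{region}");
        std::fs::create_dir_all(&dir)?;
        std::fs::write(format!("{dir}/main.tf"), tf)?;
    }
    Ok(())
}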

Custom Health Checks

Advanced Health Check Configuration

[service.syntaxis-api.health_checks]

[[service.syntaxis-api.health_checks.http]]
name = "api_health"
endpoint = "/health"
method = "GET"
expected_status = 200
timeout = "5s"
interval = "10s"

[[service.syntaxis-api.health_checks.http]]
name = "api_ready"
endpoint = "/ready"
method = "GET"
expected_status = 200
timeout = "5s"

[[service.syntaxis-api.health_checks.tcp]]
name = "api_port"
host = "127.0.0.1"
port = 3000
timeout = "5s"

[[service.syntaxis-api.health_checks.grpc]]
name = "api_grpc"
endpoint = "127.0.0.1:3000"
service = "syntaxis.Health"
timeout = "5s"

[[service.syntaxis-api.health_checks.exec]]
name = "api_exec"
command = ["syntaxis-api", "--health-check"]
timeout = "5s"
expected_exit_code = 0

Generate Custom Health Check Configs

pub fn generate_health_check_config(&self, service_name: &str) -> Result<String, Box<dyn std::error::Error>> {
    let service = self.get_service(service_name)
        .ok_or("Service not found")?;

    // Generate health check scripts, configs, etc.
    // This can output Kubernetes probes, Docker health checks, etc.

    Ok(format!(
        r#"livenessProbe:
  httpGet:
    path: /health
    port: {}
  initialDelaySeconds: 10
  periodSeconds: 10
  timeoutSeconds: 5

readinessProbe:
  httpGet:
    path: /ready
    port: {}
  initialDelaySeconds: 5
  periodSeconds: 5
  timeoutSeconds: 5
"#,
        service.server.as_ref().map(|s| s.default_port).unwrap_or(0),
        service.server.as_ref().map(|s| s.default_port).unwrap_or(0)
    ))
}
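
The same catalog data can drive Docker Compose healthchecks. A hedged sketch that probes the HTTP /health endpoint from the example above (generate_compose_healthcheck is illustrative, not an existing method):

pub fn generate_compose_healthcheck(&self, service_name: &str) -> Result<String, Box<dyn std::error::Error>> {
    let service = self.get_service(service_name)
        .ok_or("Service not found")?;

    let port = service.server.as_ref().map(|s| s.default_port).unwrap_or(0);

    // Docker Compose healthcheck block probing the HTTP health endpoint
    Ok(format!(
        r#"healthcheck:
  test: ["CMD", "curl", "-f", "http://127.0.0.1:{port}/health"]
  interval: 10s
  timeout: 5s
  retries: 3
"#,
        port = port
    ))
}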

Plugin Architecture

Plugin System Design

/// Plugin trait for extending catalog functionality
pub trait CatalogPlugin {
    /// Plugin name
    fn name(&self) -> &str;

    /// Plugin version
    fn version(&self) -> &str;

    /// Process catalog before generation
    fn process_catalog(&self, catalog: &mut ServiceCatalog) -> Result<(), Box<dyn std::error::Error>>;

    /// Generate additional code
    fn generate_code(&self, catalog: &ServiceCatalog) -> Result<HashMap<String, String>, Box<dyn std::error::Error>>;

    /// Validate catalog against plugin rules
    fn validate(&self, catalog: &ServiceCatalog) -> Result<(), Vec<String>>;
}

/// Plugin registry
pub struct PluginRegistry {
    plugins: Vec<Box<dyn CatalogPlugin>>,
}

impl PluginRegistry {
    pub fn new() -> Self {
        Self {
            plugins: Vec::new(),
        }
    }

    pub fn register(&mut self, plugin: Box<dyn CatalogPlugin>) {
        self.plugins.push(plugin);
    }

    pub fn process_catalog(&self, catalog: &mut ServiceCatalog) -> Result<(), Box<dyn std::error::Error>> {
        for plugin in &self.plugins {
            plugin.process_catalog(catalog)?;
        }
        Ok(())
    }

    pub fn validate(&self, catalog: &ServiceCatalog) -> Result<(), Vec<String>> {
        let mut errors = Vec::new();

        for plugin in &self.plugins {
            if let Err(plugin_errors) = plugin.validate(catalog) {
                errors.extend(plugin_errors);
            }
        }

        if errors.is_empty() {
            Ok(())
        } else {
            Err(errors)
        }
    }
}

Example Plugin: Security Validator

pub struct SecurityPlugin;

impl CatalogPlugin for SecurityPlugin {
    fn name(&self) -> &str {
        "security"
    }

    fn version(&self) -> &str {
        "1.0.0"
    }

    fn process_catalog(&self, _catalog: &mut ServiceCatalog) -> Result<(), Box<dyn std::error::Error>> {
        Ok(())
    }

    fn generate_code(&self, catalog: &ServiceCatalog) -> Result<HashMap<String, String>, Box<dyn std::error::Error>> {
        let mut code = HashMap::new();

        // Generate RBAC configuration (generate_rbac_for_services and
        // generate_network_policies are plugin helpers not shown here)
        let rbac = self.generate_rbac_for_services(catalog)?;
        code.insert("rbac.yaml".to_string(), rbac);

        // Generate network policies
        let netpol = self.generate_network_policies(catalog)?;
        code.insert("network-policies.yaml".to_string(), netpol);

        Ok(code)
    }

    fn validate(&self, catalog: &ServiceCatalog) -> Result<(), Vec<String>> {
        let mut errors = Vec::new();

        for service in catalog.service.values() {
            // Check that all services have metadata
            if service.metadata.is_none() {
                errors.push(format!("Service {} missing metadata", service.name));
            }
        }

        if errors.is_empty() {
            Ok(())
        } else {
            Err(errors)
        }
    }
}
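
Wiring it together: register the plugin, let it process the catalog, and surface validation errors before generating anything. A minimal sketch, assuming the registry, plugin, and catalog types above are in scope:

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut catalog = ServiceCatalog::from_file("configs/services-catalog.toml")?;

    let mut registry = PluginRegistry::new();
    registry.register(Box::new(SecurityPlugin));

    // Plugins may mutate the catalog first, then all of them validate it
    registry.process_catalog(&mut catalog)?;
    if let Err(errors) = registry.validate(&catalog) {
        for error in &errors {
            eprintln!("validation error: {error}");
        }
        return Err("catalog failed plugin validation".into());
    }

    Ok(())
}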

Best Practices

Catalog Maintenance

  1. Version Control: Keep catalog in git
  2. Schema Validation: Validate against schema before deploying
  3. Testing: Test all pattern changes
  4. Documentation: Document custom patterns and extensions
  5. Review Process: Require review for catalog changes

Performance Optimization

  1. Lazy Loading: Load only needed services
  2. Caching: Cache parsed catalogs (see the sketch after this list)
  3. Parallel Generation: Generate multiple patterns in parallel
  4. Incremental Updates: Support partial catalog updates
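
For the caching point above, a minimal sketch using std::sync::OnceLock so the catalog file is parsed once per process (path and error handling are illustrative):

use std::sync::OnceLock;
use provctl_bridge::ServiceCatalog;

static CATALOG: OnceLock<ServiceCatalog> = OnceLock::new();

/// Parse the catalog on first use, then reuse the cached copy.
fn catalog() -> &'static ServiceCatalog {
    CATALOG.get_or_init(|| {
        ServiceCatalog::from_file("configs/services-catalog.toml")
            .expect("failed to load services catalog")
    })
}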

Security Best Practices

  1. Secrets Management: Use external secret management
  2. RBAC: Implement least privilege principle
  3. Network Policies: Restrict service communication
  4. Pod Security: Use pod security standards
  5. Image Security: Scan and sign container images

Conclusion

The syntaxis catalog system is designed to be extensible and flexible. Use these advanced features to customize deployments for your specific needs while maintaining consistency and manageability.

For questions or contributions, see the project documentation.


Status: Ready for Advanced Usage · Last Updated: 2025-11-19