# Infrastructure-Specific Extension Development
This guide focuses on creating extensions tailored to specific infrastructure requirements, business needs, and organizational constraints.
## Table of Contents
1. [Overview](#overview)
2. [Infrastructure Assessment](#infrastructure-assessment)
3. [Custom Taskserv Development](#custom-taskserv-development)
4. [Provider-Specific Extensions](#provider-specific-extensions)
5. [Multi-Environment Management](#multi-environment-management)
6. [Integration Patterns](#integration-patterns)
7. [Real-World Examples](#real-world-examples)
## Overview
Infrastructure-specific extensions address unique requirements that generic modules cannot cover:
- **Company-specific applications and services**
- **Compliance and security requirements**
- **Legacy system integrations**
- **Custom networking configurations**
- **Specialized monitoring and alerting**
- **Multi-cloud and hybrid deployments**
## Infrastructure Assessment
### Identifying Extension Needs
Before creating custom extensions, assess your infrastructure requirements:
#### 1. Application Inventory
```bash
# Document existing applications
cat > infrastructure-assessment.yaml << EOF
applications:
  - name: "legacy-billing-system"
    type: "monolith"
    runtime: "java-8"
    database: "oracle-11g"
    integrations: ["ldap", "file-storage", "email"]
    compliance: ["pci-dss", "sox"]
  - name: "customer-portal"
    type: "microservices"
    runtime: "nodejs-16"
    database: "postgresql-13"
    integrations: ["redis", "elasticsearch", "s3"]
    compliance: ["gdpr", "hipaa"]

infrastructure:
  - type: "on-premise"
    location: "datacenter-primary"
    capabilities: ["kubernetes", "vmware", "storage-array"]
  - type: "cloud"
    provider: "aws"
    regions: ["us-east-1", "eu-west-1"]
    services: ["eks", "rds", "s3", "cloudfront"]

compliance_requirements:
  - "PCI DSS Level 1"
  - "SOX compliance"
  - "GDPR data protection"
  - "HIPAA safeguards"

network_requirements:
  - "air-gapped environments"
  - "private subnet isolation"
  - "vpn connectivity"
  - "load balancer integration"
EOF
```
#### 2. Gap Analysis
```bash
# Analyze what standard modules don't cover
./provisioning/core/cli/module-loader discover taskservs > available-modules.txt
# Create gap analysis
cat > gap-analysis.md << EOF
# Infrastructure Gap Analysis
## Standard Modules Available
$(cat available-modules.txt)
## Missing Capabilities
- [ ] Legacy Oracle database integration
- [ ] Company-specific LDAP authentication
- [ ] Custom monitoring for legacy systems
- [ ] Compliance reporting automation
- [ ] Air-gapped deployment workflows
- [ ] Multi-datacenter replication
## Custom Extensions Needed
1. **oracle-db-taskserv**: Oracle database with company settings
2. **company-ldap-taskserv**: LDAP integration with custom schema
3. **compliance-monitor-taskserv**: Automated compliance checking
4. **airgap-deployment-cluster**: Air-gapped deployment patterns
5. **company-monitoring-taskserv**: Custom monitoring dashboard
EOF
```
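The gap analysis above can be partially automated by diffing the modules you need against the modules the loader discovers. A minimal sketch with stand-in module names (the file contents here are assumptions; in practice `needed-modules.txt` would come from your assessment and `available-modules.txt` from the discover command):

```shell
# Stand-in data: required capabilities and discovered modules (hypothetical names)
printf '%s\n' oracle-db company-ldap compliance-monitor kubernetes > needed-modules.txt
printf '%s\n' kubernetes postgres redis > available-modules.txt

# comm(1) requires sorted input; lines present only in needed-modules.txt
# (column 1) become the custom-extension backlog
sort needed-modules.txt -o needed-modules.txt
sort available-modules.txt -o available-modules.txt
comm -23 needed-modules.txt available-modules.txt > missing-modules.txt
cat missing-modules.txt
```

With the sample data above, `missing-modules.txt` lists `company-ldap`, `compliance-monitor`, and `oracle-db`, while `kubernetes` is filtered out because a standard module already provides it.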
### Requirements Gathering
#### Business Requirements Template
```kcl
"""
Business Requirements Schema for Custom Extensions
Use this template to document requirements before development
"""
schema BusinessRequirements:
    """Document business requirements for custom extensions"""

    # Project information
    project_name: str
    stakeholders: [str]
    timeline: str
    budget_constraints?: str

    # Functional requirements
    functional_requirements: [FunctionalRequirement]

    # Non-functional requirements
    performance_requirements: PerformanceRequirements
    security_requirements: SecurityRequirements
    compliance_requirements: [str]

    # Integration requirements
    existing_systems: [ExistingSystem]
    required_integrations: [Integration]

    # Operational requirements
    monitoring_requirements: [str]
    backup_requirements: [str]
    disaster_recovery_requirements: [str]

schema FunctionalRequirement:
    id: str
    description: str
    priority: "high" | "medium" | "low"
    acceptance_criteria: [str]

schema PerformanceRequirements:
    max_response_time: str
    throughput_requirements: str
    availability_target: str
    scalability_requirements: str

schema SecurityRequirements:
    authentication_method: str
    authorization_model: str
    encryption_requirements: [str]
    audit_requirements: [str]
    network_security: [str]

schema ExistingSystem:
    name: str
    type: str
    version: str
    api_available: bool
    integration_method: str

schema Integration:
    target_system: str
    integration_type: "api" | "database" | "file" | "message_queue"
    data_format: str
    frequency: str
    direction: "inbound" | "outbound" | "bidirectional"
```
## Custom Taskserv Development
### Company-Specific Application Taskserv
#### Example: Legacy ERP System Integration
```bash
# Create company-specific taskserv
mkdir -p extensions/taskservs/company-specific/legacy-erp/kcl
cd extensions/taskservs/company-specific/legacy-erp/kcl
```
Create `legacy-erp.k`:
```kcl
"""
Legacy ERP System Taskserv
Handles deployment and management of company's legacy ERP system
"""
import provisioning.lib as lib
import provisioning.dependencies as deps
import provisioning.defaults as defaults
# ERP system configuration
schema LegacyERPConfig:
    """Configuration for legacy ERP system"""

    # Application settings
    erp_version: str = "12.2.0"
    installation_mode: "standalone" | "cluster" | "ha" = "ha"

    # Database configuration
    database_type: "oracle" | "sqlserver" = "oracle"
    database_version: str = "19c"
    database_size: str = "500Gi"
    database_backup_retention: int = 30

    # Network configuration
    erp_port: int = 8080
    database_port: int = 1521
    ssl_enabled: bool = True
    internal_network_only: bool = True

    # Integration settings
    ldap_server: str
    file_share_path: str
    email_server: str

    # Compliance settings
    audit_logging: bool = True
    encryption_at_rest: bool = True
    encryption_in_transit: bool = True
    data_retention_years: int = 7

    # Resource allocation
    app_server_resources: ERPResourceConfig
    database_resources: ERPResourceConfig

    # Backup configuration
    backup_schedule: str = "0 2 * * *"  # Daily at 2 AM
    backup_retention_policy: BackupRetentionPolicy

    check:
        erp_port > 0 and erp_port < 65536, "ERP port must be valid"
        database_port > 0 and database_port < 65536, "Database port must be valid"
        data_retention_years > 0, "Data retention must be positive"
        len(ldap_server) > 0, "LDAP server required"

schema ERPResourceConfig:
    """Resource configuration for ERP components"""
    cpu_request: str
    memory_request: str
    cpu_limit: str
    memory_limit: str
    storage_size: str
    storage_class: str = "fast-ssd"

schema BackupRetentionPolicy:
    """Backup retention policy for ERP system"""
    daily_backups: int = 7
    weekly_backups: int = 4
    monthly_backups: int = 12
    yearly_backups: int = 7

# Environment-specific resource configurations
erp_resource_profiles = {
    "development": {
        app_server_resources = {
            cpu_request = "1"
            memory_request = "4Gi"
            cpu_limit = "2"
            memory_limit = "8Gi"
            storage_size = "50Gi"
            storage_class = "standard"
        }
        database_resources = {
            cpu_request = "2"
            memory_request = "8Gi"
            cpu_limit = "4"
            memory_limit = "16Gi"
            storage_size = "100Gi"
            storage_class = "standard"
        }
    },
    "production": {
        app_server_resources = {
            cpu_request = "4"
            memory_request = "16Gi"
            cpu_limit = "8"
            memory_limit = "32Gi"
            storage_size = "200Gi"
            storage_class = "fast-ssd"
        }
        database_resources = {
            cpu_request = "8"
            memory_request = "32Gi"
            cpu_limit = "16"
            memory_limit = "64Gi"
            storage_size = "2Ti"
            storage_class = "fast-ssd"
        }
    }
}

# Taskserv definition
schema LegacyERPTaskserv(lib.TaskServDef):
    """Legacy ERP Taskserv Definition"""
    name: str = "legacy-erp"
    config: LegacyERPConfig
    environment: "development" | "staging" | "production"

# Dependencies for legacy ERP
legacy_erp_dependencies: deps.TaskservDependencies = {
    name = "legacy-erp"

    # Infrastructure dependencies
    requires = ["kubernetes", "storage-class"]
    optional = ["monitoring", "backup-agent", "log-aggregator"]
    conflicts = ["modern-erp"]

    # Services provided
    provides = ["erp-api", "erp-ui", "erp-reports", "erp-integration"]

    # Resource requirements
    resources = {
        cpu = "8"
        memory = "32Gi"
        disk = "2Ti"
        network = True
        privileged = True  # Legacy systems often need privileged access
    }

    # Health checks
    health_checks = [
        {
            command = "curl -k https://localhost:8080/health"
            interval = 60
            timeout = 30
            retries = 3
        },
        {
            command = "sqlplus system/password@localhost:1521/XE <<< 'SELECT 1 FROM DUAL;'"
            interval = 300
            timeout = 60
            retries = 2
        }
    ]

    # Installation phases
    phases = [
        {
            name = "pre-install"
            order = 1
            parallel = False
            required = True
        },
        {
            name = "database-setup"
            order = 2
            parallel = False
            required = True
        },
        {
            name = "application-install"
            order = 3
            parallel = False
            required = True
        },
        {
            name = "integration-setup"
            order = 4
            parallel = True
            required = False
        },
        {
            name = "compliance-validation"
            order = 5
            parallel = False
            required = True
        }
    ]

    # Compatibility
    os_support = ["linux"]
    arch_support = ["amd64"]
    timeout = 3600  # 1 hour for legacy system deployment
}

# Default configuration
legacy_erp_default: LegacyERPTaskserv = {
    name = "legacy-erp"
    environment = "production"
    config = {
        erp_version = "12.2.0"
        installation_mode = "ha"
        database_type = "oracle"
        database_version = "19c"
        database_size = "1Ti"
        database_backup_retention = 30
        erp_port = 8080
        database_port = 1521
        ssl_enabled = True
        internal_network_only = True

        # Company-specific settings
        ldap_server = "ldap.company.com"
        file_share_path = "/mnt/company-files"
        email_server = "smtp.company.com"

        # Compliance settings
        audit_logging = True
        encryption_at_rest = True
        encryption_in_transit = True
        data_retention_years = 7

        # Production resources
        app_server_resources = erp_resource_profiles.production.app_server_resources
        database_resources = erp_resource_profiles.production.database_resources
        backup_schedule = "0 2 * * *"
        backup_retention_policy = {
            daily_backups = 7
            weekly_backups = 4
            monthly_backups = 12
            yearly_backups = 7
        }
    }
}

# Export for provisioning system
{
    config: legacy_erp_default,
    dependencies: legacy_erp_dependencies,
    profiles: erp_resource_profiles
}
```
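The default `BackupRetentionPolicy` above keeps a bounded set of restore points; a quick tally of what it retains at steady state:

```shell
# Restore points retained under the defaults: 7 daily + 4 weekly + 12 monthly + 7 yearly
daily=7 weekly=4 monthly=12 yearly=7
total=$((daily + weekly + monthly + yearly))
echo "$total retained backups"
```

This prints `30 retained backups`, which is the worst-case number of backup artifacts to budget storage for per ERP instance.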
### Compliance-Focused Taskserv
Create `compliance-monitor.k`:
```kcl
"""
Compliance Monitoring Taskserv
Automated compliance checking and reporting for regulated environments
"""
import provisioning.lib as lib
import provisioning.dependencies as deps
schema ComplianceMonitorConfig:
    """Configuration for compliance monitoring system"""

    # Compliance frameworks
    enabled_frameworks: [ComplianceFramework]

    # Monitoring settings
    scan_frequency: str = "0 0 * * *"  # Daily
    real_time_monitoring: bool = True

    # Reporting settings
    report_frequency: str = "0 0 * * 0"  # Weekly
    report_recipients: [str]
    report_format: "pdf" | "html" | "json" = "pdf"

    # Alerting configuration
    alert_severity_threshold: "low" | "medium" | "high" = "medium"
    alert_channels: [AlertChannel]

    # Data retention
    audit_log_retention_days: int = 2555  # 7 years
    report_retention_days: int = 365

    # Integration settings
    siem_integration: bool = True
    siem_endpoint?: str

    check:
        audit_log_retention_days >= 2555, "Audit logs must be retained for at least 7 years"
        len(report_recipients) > 0, "At least one report recipient required"

schema ComplianceFramework:
    """Compliance framework configuration"""
    name: "pci-dss" | "sox" | "gdpr" | "hipaa" | "iso27001" | "nist"
    version: str
    enabled: bool = True
    custom_controls?: [ComplianceControl]

schema ComplianceControl:
    """Custom compliance control"""
    id: str
    description: str
    check_command: str
    severity: "low" | "medium" | "high" | "critical"
    remediation_guidance: str

schema AlertChannel:
    """Alert channel configuration"""
    type: "email" | "slack" | "teams" | "webhook" | "sms"
    endpoint: str
    severity_filter: [str]  # Subset of: "low", "medium", "high", "critical"

# Taskserv definition
schema ComplianceMonitorTaskserv(lib.TaskServDef):
    """Compliance Monitor Taskserv Definition"""
    name: str = "compliance-monitor"
    config: ComplianceMonitorConfig

# Dependencies
compliance_monitor_dependencies: deps.TaskservDependencies = {
    name = "compliance-monitor"

    requires = ["kubernetes"]
    optional = ["monitoring", "logging", "backup"]
    provides = ["compliance-reports", "audit-logs", "compliance-api"]

    # Resource requirements
    resources = {
        cpu = "500m"
        memory = "1Gi"
        disk = "50Gi"
        network = True
        privileged = False
    }

    # Health checks
    health_checks = [
        {
            command = "curl -f http://localhost:8080/health"
            interval = 30
            timeout = 10
            retries = 3
        },
        {
            command = "compliance-check --dry-run"
            interval = 300
            timeout = 60
            retries = 1
        }
    ]

    # Compatibility
    os_support = ["linux"]
    arch_support = ["amd64", "arm64"]
}

# Default configuration with common compliance frameworks
compliance_monitor_default: ComplianceMonitorTaskserv = {
    name = "compliance-monitor"
    config = {
        enabled_frameworks = [
            {
                name = "pci-dss"
                version = "3.2.1"
                enabled = True
            },
            {
                name = "sox"
                version = "2002"
                enabled = True
            },
            {
                name = "gdpr"
                version = "2018"
                enabled = True
            }
        ]
        scan_frequency = "0 */6 * * *"  # Every 6 hours
        real_time_monitoring = True
        report_frequency = "0 0 * * 1"  # Weekly on Monday
        report_recipients = ["compliance@company.com", "security@company.com"]
        report_format = "pdf"
        alert_severity_threshold = "medium"
        alert_channels = [
            {
                type = "email"
                endpoint = "security-alerts@company.com"
                severity_filter = ["medium", "high", "critical"]
            },
            {
                type = "slack"
                endpoint = "https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX"
                severity_filter = ["high", "critical"]
            }
        ]
        audit_log_retention_days = 2555
        report_retention_days = 365
        siem_integration = True
        siem_endpoint = "https://siem.company.com/api/events"
    }
}

# Export configuration
{
    config: compliance_monitor_default,
    dependencies: compliance_monitor_dependencies
}
```
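The `2555` in the `audit_log_retention_days` check is not arbitrary; it is the 7-year retention floor expressed in days:

```shell
# 7-year audit retention floor, expressed in days (365 days/year, leap days ignored)
years=7
min_days=$((years * 365))
echo "$min_days"
```

This prints `2555`, matching the schema's check constraint, so any tightening of the retention policy should keep the constant and the comment in sync.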
## Provider-Specific Extensions
### Custom Cloud Provider Integration
When working with specialized or private cloud providers:
```bash
# Create custom provider extension
mkdir -p extensions/providers/company-private-cloud/kcl
cd extensions/providers/company-private-cloud/kcl
```
Create `provision_company-private-cloud.k`:
```kcl
"""
Company Private Cloud Provider
Integration with company's private cloud infrastructure
"""
import provisioning.defaults as defaults
import provisioning.server as server
schema CompanyPrivateCloudConfig:
    """Company private cloud configuration"""

    # API configuration
    api_endpoint: str = "https://cloud-api.company.com"
    api_version: str = "v2"
    auth_token: str

    # Network configuration
    management_network: str = "10.0.0.0/24"
    production_network: str = "10.1.0.0/16"
    dmz_network: str = "10.2.0.0/24"

    # Resource pools
    compute_cluster: str = "production-cluster"
    storage_cluster: str = "storage-cluster"

    # Compliance settings
    encryption_required: bool = True
    audit_all_operations: bool = True

    # Company-specific settings
    cost_center: str
    department: str
    project_code: str

    check:
        len(api_endpoint) > 0, "API endpoint required"
        len(auth_token) > 0, "Authentication token required"
        len(cost_center) > 0, "Cost center required for billing"

schema CompanyPrivateCloudServer(server.Server):
    """Server configuration for company private cloud"""

    # Instance configuration
    instance_class: "standard" | "compute-optimized" | "memory-optimized" | "storage-optimized" = "standard"
    instance_size: "small" | "medium" | "large" | "xlarge" | "2xlarge" = "medium"

    # Storage configuration
    root_disk_type: "ssd" | "nvme" | "spinning" = "ssd"
    root_disk_size: int = 50
    additional_storage?: [CompanyCloudStorage]

    # Network configuration
    network_segment: "management" | "production" | "dmz" = "production"
    security_groups: [str] = ["default"]

    # Compliance settings
    encrypted_storage: bool = True
    backup_enabled: bool = True
    monitoring_enabled: bool = True

    # Company metadata
    cost_center: str
    department: str
    project_code: str
    environment: "dev" | "test" | "staging" | "prod" = "prod"

    check:
        root_disk_size >= 20, "Root disk must be at least 20GB"
        len(cost_center) > 0, "Cost center required"
        len(department) > 0, "Department required"

schema CompanyCloudStorage:
    """Additional storage configuration"""
    size: int
    type: "ssd" | "nvme" | "spinning" | "archive" = "ssd"
    mount_point: str
    encrypted: bool = True
    backup_enabled: bool = True

# Instance size configurations
instance_specs = {
    "small": {
        vcpus = 2
        memory_gb = 4
        network_performance = "moderate"
    },
    "medium": {
        vcpus = 4
        memory_gb = 8
        network_performance = "good"
    },
    "large": {
        vcpus = 8
        memory_gb = 16
        network_performance = "high"
    },
    "xlarge": {
        vcpus = 16
        memory_gb = 32
        network_performance = "high"
    },
    "2xlarge": {
        vcpus = 32
        memory_gb = 64
        network_performance = "very-high"
    }
}

# Provider defaults
company_private_cloud_defaults: defaults.ServerDefaults = {
    lock = False
    time_zone = "UTC"
    running_wait = 20
    running_timeout = 600  # Private cloud may be slower

    # Company-specific OS image
    storage_os_find = "name: company-ubuntu-20.04-hardened | arch: x86_64"

    # Network settings
    network_utility_ipv4 = True
    network_public_ipv4 = False  # Private cloud, no public IPs

    # Security settings
    user = "company-admin"
    user_ssh_port = 22
    fix_local_hosts = True

    # Company metadata
    labels = "provider: company-private-cloud, compliance: required"
}

# Export provider configuration
{
    config: CompanyPrivateCloudConfig,
    server: CompanyPrivateCloudServer,
    defaults: company_private_cloud_defaults,
    instance_specs: instance_specs
}
```
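The `instance_specs` table doubles vCPUs and memory at each size step, which makes right-sizing a simple "smallest size that fits" lookup. A hypothetical helper (the `pick_size` function and its inlined size:vCPU table are illustrations, not part of the provider API):

```shell
# Pick the smallest instance size from the table above with at least the requested vCPUs
pick_size() {
    req=$1
    for entry in small:2 medium:4 large:8 xlarge:16 2xlarge:32; do
        size=${entry%%:*}
        vcpus=${entry##*:}
        if [ "$vcpus" -ge "$req" ]; then
            echo "$size"
            return 0
        fi
    done
    return 1  # nothing large enough
}

pick_size 6   # → large
```

A workload needing 6 vCPUs skips `small` (2) and `medium` (4) and lands on `large` (8), the first size that satisfies the requirement.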
## Multi-Environment Management
### Environment-Specific Configuration Management
Create environment-specific extensions that handle different deployment patterns:
```bash
# Create environment management extension
mkdir -p extensions/clusters/company-environments/kcl
cd extensions/clusters/company-environments/kcl
```
Create `company-environments.k`:
```kcl
"""
Company Environment Management
Standardized environment configurations for different deployment stages
"""
import provisioning.cluster as cluster
import provisioning.server as server
schema CompanyEnvironment:
    """Standard company environment configuration"""

    # Environment metadata
    name: str
    type: "development" | "testing" | "staging" | "production" | "disaster-recovery"
    region: str
    availability_zones: [str]

    # Network configuration
    vpc_cidr: str
    subnet_configuration: SubnetConfiguration

    # Security configuration
    security_profile: SecurityProfile

    # Compliance requirements
    compliance_level: "basic" | "standard" | "high" | "critical"
    data_classification: "public" | "internal" | "confidential" | "restricted"

    # Resource constraints
    resource_limits: ResourceLimits

    # Backup and DR configuration
    backup_configuration: BackupConfiguration
    disaster_recovery_configuration?: DRConfiguration

    # Monitoring and alerting
    monitoring_level: "basic" | "standard" | "enhanced"
    alert_routing: AlertRouting

schema SubnetConfiguration:
    """Network subnet configuration"""
    public_subnets: [str]
    private_subnets: [str]
    database_subnets: [str]
    management_subnets: [str]

schema SecurityProfile:
    """Security configuration profile"""
    encryption_at_rest: bool
    encryption_in_transit: bool
    network_isolation: bool
    access_logging: bool
    vulnerability_scanning: bool

    # Access control
    multi_factor_auth: bool
    privileged_access_management: bool
    network_segmentation: bool

    # Compliance controls
    audit_logging: bool
    data_loss_prevention: bool
    endpoint_protection: bool

schema ResourceLimits:
    """Resource allocation limits for environment"""
    max_cpu_cores: int
    max_memory_gb: int
    max_storage_tb: int
    max_instances: int

    # Cost controls
    max_monthly_cost: int
    cost_alerts_enabled: bool

schema BackupConfiguration:
    """Backup configuration for environment"""
    backup_frequency: str
    retention_policy: {str: int}
    cross_region_backup: bool
    encryption_enabled: bool

schema DRConfiguration:
    """Disaster recovery configuration"""
    dr_region: str
    rto_minutes: int  # Recovery Time Objective
    rpo_minutes: int  # Recovery Point Objective
    automated_failover: bool

schema AlertRouting:
    """Alert routing configuration"""
    business_hours_contacts: [str]
    after_hours_contacts: [str]
    escalation_policy: [EscalationLevel]

schema EscalationLevel:
    """Alert escalation level"""
    level: int
    delay_minutes: int
    contacts: [str]

# Environment templates
environment_templates = {
    "development": {
        type = "development"
        compliance_level = "basic"
        data_classification = "internal"
        security_profile = {
            encryption_at_rest = False
            encryption_in_transit = False
            network_isolation = False
            access_logging = True
            vulnerability_scanning = False
            multi_factor_auth = False
            privileged_access_management = False
            network_segmentation = False
            audit_logging = False
            data_loss_prevention = False
            endpoint_protection = False
        }
        resource_limits = {
            max_cpu_cores = 50
            max_memory_gb = 200
            max_storage_tb = 10
            max_instances = 20
            max_monthly_cost = 5000
            cost_alerts_enabled = True
        }
        monitoring_level = "basic"
    },
    "production": {
        type = "production"
        compliance_level = "critical"
        data_classification = "confidential"
        security_profile = {
            encryption_at_rest = True
            encryption_in_transit = True
            network_isolation = True
            access_logging = True
            vulnerability_scanning = True
            multi_factor_auth = True
            privileged_access_management = True
            network_segmentation = True
            audit_logging = True
            data_loss_prevention = True
            endpoint_protection = True
        }
        resource_limits = {
            max_cpu_cores = 1000
            max_memory_gb = 4000
            max_storage_tb = 500
            max_instances = 200
            max_monthly_cost = 100000
            cost_alerts_enabled = True
        }
        monitoring_level = "enhanced"
        disaster_recovery_configuration = {
            dr_region = "us-west-2"
            rto_minutes = 60
            rpo_minutes = 15
            automated_failover = True
        }
    }
}

# Export environment templates
{
    templates: environment_templates,
    schema: CompanyEnvironment
}
```
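The production template's `rpo_minutes = 15` has a direct operational consequence: data must be replicated or snapshotted at least that often, or the objective cannot be met. A quick back-of-the-envelope check:

```shell
# An RPO of 15 minutes implies replication at least every 15 minutes,
# i.e. this many replication cycles per day
rpo_minutes=15
per_day=$((24 * 60 / rpo_minutes))
echo "$per_day"
```

This prints `96`; worth keeping in mind when sizing replication bandwidth and snapshot storage for the DR region.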
## Integration Patterns
### Legacy System Integration
Create integration patterns for common legacy system scenarios:
```bash
# Create integration patterns
mkdir -p extensions/taskservs/integrations/legacy-bridge/kcl
cd extensions/taskservs/integrations/legacy-bridge/kcl
```
Create `legacy-bridge.k`:
```kcl
"""
Legacy System Integration Bridge
Provides standardized integration patterns for legacy systems
"""
import provisioning.lib as lib
import provisioning.dependencies as deps
schema LegacyBridgeConfig:
    """Configuration for legacy system integration bridge"""

    # Bridge configuration
    bridge_name: str
    integration_type: "api" | "database" | "file" | "message-queue" | "etl"

    # Legacy system details
    legacy_system: LegacySystemInfo

    # Modern system details
    modern_system: ModernSystemInfo

    # Data transformation configuration
    data_transformation: DataTransformationConfig

    # Security configuration
    security_config: IntegrationSecurityConfig

    # Monitoring and alerting
    monitoring_config: IntegrationMonitoringConfig

schema LegacySystemInfo:
    """Legacy system information"""
    name: str
    type: "mainframe" | "as400" | "unix" | "windows" | "database" | "file-system"
    version: str

    # Connection details
    connection_method: "direct" | "vpn" | "dedicated-line" | "api-gateway"
    endpoint: str
    port?: int

    # Authentication
    auth_method: "password" | "certificate" | "kerberos" | "ldap" | "token"
    credentials_source: "vault" | "config" | "environment"

    # Data characteristics
    data_format: "fixed-width" | "csv" | "xml" | "json" | "binary" | "proprietary"
    character_encoding: str = "utf-8"

    # Operational characteristics
    availability_hours: str = "24/7"
    maintenance_windows: [MaintenanceWindow]

schema ModernSystemInfo:
    """Modern system information"""
    name: str
    type: "microservice" | "api" | "database" | "event-stream" | "file-store"

    # Connection details
    endpoint: str
    api_version?: str

    # Data format
    data_format: "json" | "xml" | "avro" | "protobuf"

    # Authentication
    auth_method: "oauth2" | "jwt" | "api-key" | "mutual-tls"

schema DataTransformationConfig:
    """Data transformation configuration"""
    transformation_rules: [TransformationRule]
    error_handling: ErrorHandlingConfig
    data_validation: DataValidationConfig

schema TransformationRule:
    """Individual data transformation rule"""
    source_field: str
    target_field: str
    transformation_type: "direct" | "calculated" | "lookup" | "conditional"
    transformation_expression?: str

schema ErrorHandlingConfig:
    """Error handling configuration"""
    retry_policy: RetryPolicy
    dead_letter_queue: bool = True
    error_notification: bool = True

schema RetryPolicy:
    """Retry policy configuration"""
    max_attempts: int = 3
    initial_delay_seconds: int = 5
    backoff_multiplier: float = 2.0
    max_delay_seconds: int = 300

schema DataValidationConfig:
    """Data validation configuration"""
    schema_validation: bool = True
    business_rules_validation: bool = True
    data_quality_checks: [DataQualityCheck]

schema DataQualityCheck:
    """Data quality check definition"""
    name: str
    check_type: "completeness" | "uniqueness" | "validity" | "consistency"
    threshold: float = 0.95
    action_on_failure: "warn" | "stop" | "quarantine"

schema IntegrationSecurityConfig:
    """Security configuration for integration"""
    encryption_in_transit: bool = True
    encryption_at_rest: bool = True

    # Access control
    source_ip_whitelist?: [str]
    api_rate_limiting: bool = True

    # Audit and compliance
    audit_all_transactions: bool = True
    pii_data_handling: PIIHandlingConfig

schema PIIHandlingConfig:
    """PII data handling configuration"""
    pii_fields: [str]
    anonymization_enabled: bool = True
    retention_policy_days: int = 365

schema IntegrationMonitoringConfig:
    """Monitoring configuration for integration"""
    metrics_collection: bool = True
    performance_monitoring: bool = True

    # SLA monitoring
    sla_targets: SLATargets

    # Alerting
    alert_on_failures: bool = True
    alert_on_performance_degradation: bool = True

schema SLATargets:
    """SLA targets for integration"""
    max_latency_ms: int = 5000
    min_availability_percent: float = 99.9
    max_error_rate_percent: float = 0.1

schema MaintenanceWindow:
    """Maintenance window definition"""
    day_of_week: int  # 0=Sunday, 6=Saturday
    start_time: str  # HH:MM format
    duration_hours: int

# Taskserv definition
schema LegacyBridgeTaskserv(lib.TaskServDef):
    """Legacy Bridge Taskserv Definition"""
    name: str = "legacy-bridge"
    config: LegacyBridgeConfig

# Dependencies
legacy_bridge_dependencies: deps.TaskservDependencies = {
    name = "legacy-bridge"

    requires = ["kubernetes"]
    optional = ["monitoring", "logging", "vault"]
    provides = ["legacy-integration", "data-bridge"]

    resources = {
        cpu = "500m"
        memory = "1Gi"
        disk = "10Gi"
        network = True
        privileged = False
    }

    health_checks = [
        {
            command = "curl -f http://localhost:8080/health"
            interval = 30
            timeout = 10
            retries = 3
        },
        {
            command = "integration-test --quick"
            interval = 300
            timeout = 120
            retries = 1
        }
    ]

    os_support = ["linux"]
    arch_support = ["amd64", "arm64"]
}

# Export configuration
{
    config: LegacyBridgeTaskserv,
    dependencies: legacy_bridge_dependencies
}
```
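The `RetryPolicy` defaults above (5 s initial delay, 2.0 multiplier, 300 s cap) describe a capped exponential backoff. A sketch of the delay sequence, extended past the default `max_attempts = 3` to show where the cap kicks in:

```shell
# Delay sequence for initial_delay_seconds=5, backoff_multiplier=2.0,
# max_delay_seconds=300 (integer arithmetic since the multiplier is 2)
delay=5
for attempt in 1 2 3 4 5 6 7; do
    printf 'attempt %d: wait %ds\n' "$attempt" "$delay"
    delay=$((delay * 2))
    [ "$delay" -gt 300 ] && delay=300
done
```

The waits grow as 5, 10, 20, 40, 80, 160 and then clamp at 300 seconds, so a misbehaving legacy endpoint is never hammered faster than once per five minutes once backoff saturates.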
## Real-World Examples
### Example 1: Financial Services Company
```bash
# Financial services specific extensions
mkdir -p extensions/taskservs/financial-services/{trading-system,risk-engine,compliance-reporter}/kcl
```
### Example 2: Healthcare Organization
```bash
# Healthcare specific extensions
mkdir -p extensions/taskservs/healthcare/{hl7-processor,dicom-storage,hipaa-audit}/kcl
```
### Example 3: Manufacturing Company
```bash
# Manufacturing specific extensions
mkdir -p extensions/taskservs/manufacturing/{iot-gateway,scada-bridge,quality-system}/kcl
```
### Usage Examples
#### Loading Infrastructure-Specific Extensions
```bash
# Load company-specific extensions
cd workspace/infra/production
module-loader load taskservs . [legacy-erp, compliance-monitor, legacy-bridge]
module-loader load providers . [company-private-cloud]
module-loader load clusters . [company-environments]
# Verify loading
module-loader list taskservs .
module-loader validate .
```
#### Using in Server Configuration
```kcl
# Import loaded extensions
import .taskservs.legacy-erp.legacy-erp as erp
import .taskservs.compliance-monitor.compliance-monitor as compliance
import .providers.company-private-cloud as private_cloud

# Configure servers with company-specific extensions
company_servers: [server.Server] = [
    {
        hostname = "erp-prod-01"
        title = "Production ERP Server"

        # Use company private cloud
        # Provider-specific configuration goes here

        taskservs = [
            {
                name = "legacy-erp"
                profile = "production"
            },
            {
                name = "compliance-monitor"
                profile = "default"
            }
        ]
    }
]
```
This guide has walked through the lifecycle of infrastructure-specific extensions: assessing requirements, building custom taskservs and providers, managing multi-environment configurations, bridging legacy systems, and loading the results into your infrastructure.