# ⚙️ Provisioning Integration

Deploying VAPORA via Provisioning Taskservs & KCL

**Version**: 0.1.0 | **Status**: Specification (VAPORA v1.0 Deployment) | **Purpose**: How Provisioning creates and manages VAPORA infrastructure


## 🎯 Objective

Provisioning is the deployment engine for VAPORA:

  • Defines infrastructure with KCL schemas (not Helm)
  • Creates a taskserv for each VAPORA component
  • Runs batch workflows for complex operations
  • Scales agents dynamically
  • Monitors health and triggers rollbacks

## 📁 VAPORA Workspace Structure

```text
provisioning/vapora-wrksp/
├── workspace.toml                  # Workspace definition
├── kcl/                            # KCL Infrastructure-as-Code
│   ├── cluster.k                   # K8s cluster (nodes, networks)
│   ├── services.k                  # Microservices (backend, agents)
│   ├── storage.k                   # SurrealDB + Rook Ceph
│   ├── agents.k                    # Agent pools + scaling
│   └── multi-ia.k                  # LLM Router + providers
├── taskservs/                      # Taskserv definitions
│   ├── vapora-backend.toml         # API backend
│   ├── vapora-frontend.toml        # Web UI
│   ├── vapora-agents.toml          # Agent runtime
│   ├── vapora-mcp-gateway.toml     # MCP plugins
│   └── vapora-llm-router.toml      # Multi-IA router
├── workflows/                      # Batch operations
│   ├── deploy-full-stack.yaml
│   ├── scale-agents.yaml
│   ├── upgrade-vapora.yaml
│   └── disaster-recovery.yaml
└── README.md                       # Setup guide
```

## 🏗️ KCL Schemas

### 1. Cluster Definition (cluster.k)

```kcl
import kcl_plugin.kubernetes as k

# VAPORA Cluster
cluster = k.Cluster {
    name = "vapora-cluster"
    version = "1.30"

    network = {
        cni = "cilium"              # Network plugin
        serviceMesh = "istio"       # Service mesh
        ingressController = "istio-gateway"
    }

    storage = {
        provider = "rook-ceph"
        replication_factor = 3
        storage_classes = [
            { name = "ssd", type = "nvme" },
            { name = "hdd", type = "sata" },
        ]
    }

    nodes = [
        # Control plane
        {
            role = "control-plane"
            count = 3
            instance_type = "t3.medium"
            resources = { cpu = "2", memory = "4Gi" }
        },
        # Worker nodes for agents (scalable)
        {
            role = "worker"
            count = 5
            instance_type = "t3.large"
            resources = { cpu = "4", memory = "8Gi" }
            labels = { workload = "agents", tier = "compute" }
            taints = []
        },
        # Worker nodes for data
        {
            role = "worker"
            count = 3
            instance_type = "t3.xlarge"
            resources = { cpu = "8", memory = "16Gi" }
            labels = { workload = "data", tier = "storage" }
        },
    ]

    addons = [
        "metrics-server",
        "prometheus",
        "grafana",
    ]
}
```
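
The worker pools above carry `workload`/`tier` labels; presumably Provisioning pins each service onto its pool with standard Kubernetes node selectors. A minimal sketch of what that targeting could look like for the agent runtime (the Deployment shape is an assumption; only the labels and image come from the schemas in this document):

```yaml
# Sketch only: assumes Provisioning renders plain Deployments and targets the
# "agents" worker pool via the labels declared on the worker nodes in cluster.k.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vapora-agents
  namespace: vapora-agents
spec:
  replicas: 3
  selector:
    matchLabels:
      app: vapora-agents
  template:
    metadata:
      labels:
        app: vapora-agents
    spec:
      nodeSelector:
        workload: agents      # matches the compute worker pool
        tier: compute
      containers:
        - name: agents
          image: vapora/agents:0.1.0
```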

### 2. Services Definition (services.k)

```kcl
import kcl_plugin.kubernetes as k

services = [
    # Backend API
    {
        name = "vapora-backend"
        namespace = "vapora-system"
        replicas = 3
        image = "vapora/backend:0.1.0"
        port = 8080
        resources = {
            requests = { cpu = "1", memory = "2Gi" }
            limits = { cpu = "2", memory = "4Gi" }
        }
        env = [
            { name = "DATABASE_URL", value = "surrealdb://surrealdb-0.vapora-system:8000" },
            { name = "NATS_URL", value = "nats://nats-0.vapora-system:4222" },
        ]
    },

    # Frontend
    {
        name = "vapora-frontend"
        namespace = "vapora-system"
        replicas = 2
        image = "vapora/frontend:0.1.0"
        port = 3000
        resources = {
            requests = { cpu = "500m", memory = "512Mi" }
            limits = { cpu = "1", memory = "1Gi" }
        }
    },

    # Agent Runtime
    {
        name = "vapora-agents"
        namespace = "vapora-agents"
        replicas = 3
        image = "vapora/agents:0.1.0"
        port = 8089
        resources = {
            requests = { cpu = "2", memory = "4Gi" }
            limits = { cpu = "4", memory = "8Gi" }
        }
        # Autoscaling
        hpa = {
            min_replicas = 3
            max_replicas = 20
            target_cpu = "70"
        }
    },

    # MCP Gateway
    {
        name = "vapora-mcp-gateway"
        namespace = "vapora-system"
        replicas = 2
        image = "vapora/mcp-gateway:0.1.0"
        port = 8888
    },

    # LLM Router
    {
        name = "vapora-llm-router"
        namespace = "vapora-system"
        replicas = 2
        image = "vapora/llm-router:0.1.0"
        port = 8899
        env = [
            { name = "CLAUDE_API_KEY", valueFrom = "secret:vapora-secrets:claude-key" },
            { name = "OPENAI_API_KEY", valueFrom = "secret:vapora-secrets:openai-key" },
            { name = "GEMINI_API_KEY", valueFrom = "secret:vapora-secrets:gemini-key" },
        ]
    },
]
```
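
The `valueFrom = "secret:vapora-secrets:claude-key"` shorthand is not expanded in this spec; presumably it renders to a standard Kubernetes `secretKeyRef`. A sketch of the assumed expansion:

```yaml
# Assumed expansion of valueFrom = "secret:<secret-name>:<key>"
env:
  - name: CLAUDE_API_KEY
    valueFrom:
      secretKeyRef:
        name: vapora-secrets
        key: claude-key
```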

### 3. Storage Definition (storage.k)

```kcl
import kcl_plugin.kubernetes as k

storage = {
    # SurrealDB StatefulSet
    surrealdb = {
        name = "surrealdb"
        namespace = "vapora-system"
        replicas = 3
        image = "surrealdb/surrealdb:1.8"
        port = 8000
        storage = {
            size = "50Gi"
            storage_class = "rook-ceph"
        }
    },

    # Redis cache
    redis = {
        name = "redis"
        namespace = "vapora-system"
        replicas = 1
        image = "redis:7-alpine"
        port = 6379
        storage = {
            size = "20Gi"
            storage_class = "ssd"
        }
    },

    # NATS JetStream
    nats = {
        name = "nats"
        namespace = "vapora-system"
        replicas = 3
        image = "nats:2.10-scratch"
        port = 4222
        storage = {
            size = "30Gi"
            storage_class = "rook-ceph"
        }
    },
}
```
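
For the stateful components, the `storage` block presumably becomes a `volumeClaimTemplate` backed by the Rook Ceph storage class. A sketch for SurrealDB under that assumption (only image, replica count, size, and class come from `storage.k`; the rest of the manifest is illustrative):

```yaml
# Sketch only: assumed StatefulSet rendering of the surrealdb storage block.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: surrealdb
  namespace: vapora-system
spec:
  serviceName: surrealdb
  replicas: 3
  selector:
    matchLabels:
      app: surrealdb
  template:
    metadata:
      labels:
        app: surrealdb
    spec:
      containers:
        - name: surrealdb
          image: surrealdb/surrealdb:1.8
          ports:
            - containerPort: 8000
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: rook-ceph
        resources:
          requests:
            storage: 50Gi
```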

### 4. Agent Pools (agents.k)

```kcl
agents = {
    architect = {
        role_id = "architect"
        replicas = 2
        max_concurrent = 1
        container = {
            image = "vapora/agents:architect-0.1.0"
            resources = { cpu = "4", memory = "8Gi" }
        }
    },

    developer = {
        role_id = "developer"
        replicas = 5          # Can scale to 20
        max_concurrent = 2
        container = {
            image = "vapora/agents:developer-0.1.0"
            resources = { cpu = "4", memory = "8Gi" }
        }
        hpa = {
            min_replicas = 5
            max_replicas = 20
            target_queue_depth = 10  # Scale when queue > 10
        }
    },

    reviewer = {
        role_id = "code-reviewer"
        replicas = 3
        max_concurrent = 2
        container = {
            image = "vapora/agents:reviewer-0.1.0"
            resources = { cpu = "2", memory = "4Gi" }
        }
    },

    # ... other 9 roles
}
```
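
`target_queue_depth` has no direct equivalent in a CPU-based HPA; presumably it is exposed as a custom per-pod metric (e.g. via a Prometheus metrics adapter) and consumed by an `autoscaling/v2` HorizontalPodAutoscaler. A sketch for the developer pool under that assumption (the metric name and adapter setup are assumptions, the replica bounds come from `agents.k`):

```yaml
# Sketch only: queue-depth-based autoscaling expressed as an autoscaling/v2 HPA.
# Assumes a metrics adapter exposes "queue_depth" as a per-pod custom metric.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: vapora-agents-developer
  namespace: vapora-agents
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: vapora-agents-developer
  minReplicas: 5
  maxReplicas: 20
  metrics:
    - type: Pods
      pods:
        metric:
          name: queue_depth
        target:
          type: AverageValue
          averageValue: "10"   # scale up when average queue depth exceeds 10
```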

## 🛠️ Taskservs Definition

### Example: Backend Taskserv

```toml
# taskservs/vapora-backend.toml

[taskserv]
name = "vapora-backend"
type = "service"
version = "0.1.0"
description = "VAPORA REST API backend"

[source]
repository = "ssh://git@repo.jesusperez.pro:32225/jesus/Vapora.git"
branch = "main"
path = "vapora-backend/"

[build]
runtime = "rust"
build_command = "cargo build --release"
binary_path = "target/release/vapora-backend"
dockerfile = "Dockerfile.backend"

[deployment]
namespace = "vapora-system"
replicas = 3
image = "vapora/backend:${version}"
image_pull_policy = "Always"

[ports]
http = 8080
metrics = 9090

[resources]
requests = { cpu = "1000m", memory = "2Gi" }
limits = { cpu = "2000m", memory = "4Gi" }

[health_check]
path = "/health"
interval_secs = 10
timeout_secs = 5
failure_threshold = 3

dependencies = [
    "surrealdb",      # must exist
    "nats",           # must exist
    "redis",          # optional
]

[scaling]
min_replicas = 3
max_replicas = 10
target_cpu_percent = 70
target_memory_percent = 80

[environment]
DATABASE_URL = "surrealdb://surrealdb-0:8000"
NATS_URL = "nats://nats-0:4222"
REDIS_URL = "redis://redis-0:6379"
RUST_LOG = "debug,vapora=trace"

[secrets]
JWT_SECRET = "secret:vapora-secrets:jwt-secret"
DATABASE_PASSWORD = "secret:vapora-secrets:db-password"
```
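
The `[health_check]` table presumably maps onto standard liveness/readiness probes on the `http` port. A sketch of the assumed rendering (path, interval, timeout, and threshold copied from the taskserv above):

```yaml
# Assumed probe rendering of [health_check]
livenessProbe:
  httpGet:
    path: /health
    port: 8080
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 3
readinessProbe:
  httpGet:
    path: /health
    port: 8080
  periodSeconds: 10
  timeoutSeconds: 5
```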

## 🔄 Workflows (Batch Operations)

### Deploy Full Stack

```yaml
# workflows/deploy-full-stack.yaml

apiVersion: provisioning/v1
kind: Workflow
metadata:
  name: deploy-vapora-full-stack
  namespace: vapora-system
spec:
  description: "Deploy complete VAPORA stack from scratch"

  steps:
    # Step 1: Create cluster
    - name: create-cluster
      task: provisioning.cluster
      params:
        config: kcl/cluster.k
      timeout: 1h
      on_failure: abort

    # Step 2: Install operators (Istio, Prometheus, Rook)
    - name: install-addons
      task: provisioning.addon
      depends_on: [create-cluster]
      params:
        addons: [istio, prometheus, rook-ceph]
      timeout: 30m

    # Step 3: Deploy data layer
    - name: deploy-data
      task: provisioning.deploy-taskservs
      depends_on: [install-addons]
      params:
        taskservs: [surrealdb, redis, nats]
      timeout: 30m

    # Step 4: Deploy core services
    - name: deploy-core
      task: provisioning.deploy-taskservs
      depends_on: [deploy-data]
      params:
        taskservs: [vapora-backend, vapora-llm-router, vapora-mcp-gateway]
      timeout: 30m

    # Step 5: Deploy frontend
    - name: deploy-frontend
      task: provisioning.deploy-taskservs
      depends_on: [deploy-core]
      params:
        taskservs: [vapora-frontend]
      timeout: 15m

    # Step 6: Deploy agent pools
    - name: deploy-agents
      task: provisioning.deploy-agents
      depends_on: [deploy-core]
      params:
        agents: [architect, developer, reviewer, tester, documenter, devops, monitor, security, pm, decision-maker, orchestrator, presenter]
        initial_replicas: { architect: 2, developer: 5, ... }
      timeout: 30m

    # Step 7: Verify health
    - name: health-check
      task: provisioning.health-check
      depends_on: [deploy-agents, deploy-frontend]
      params:
        services: all
        timeout: 5m
      on_failure: rollback

    # Step 8: Initialize database
    - name: init-database
      task: provisioning.run-migrations
      depends_on: [health-check]
      params:
        sql_files: [migrations/*.surql]
      timeout: 10m

    # Step 9: Configure ingress
    - name: configure-ingress
      task: provisioning.configure-ingress
      depends_on: [init-database]
      params:
        gateway: istio-gateway
        hosts:
          - vapora.example.com
      timeout: 10m

  rollback_on_failure: true
  on_completion:
    - name: notify-slack
      task: notifications.slack
      params:
        webhook: "${SLACK_WEBHOOK}"
        message: "VAPORA deployment completed successfully!"
```
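
The `configure-ingress` step only names the gateway and hosts; with Istio as the mesh, it presumably ends up creating a `Gateway`/`VirtualService` pair. A sketch of that output (routing rules and resource names are assumptions; the host and service ports come from this spec):

```yaml
# Sketch only: assumed Istio objects produced by the configure-ingress step.
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: vapora-gateway
  namespace: vapora-system
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - vapora.example.com
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: vapora
  namespace: vapora-system
spec:
  hosts:
    - vapora.example.com
  gateways:
    - vapora-gateway
  http:
    - match:
        - uri:
            prefix: /api
      route:
        - destination:
            host: vapora-backend
            port:
              number: 8080
    - route:
        - destination:
            host: vapora-frontend
            port:
              number: 3000
```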

### Scale Agents

```yaml
# workflows/scale-agents.yaml

apiVersion: provisioning/v1
kind: Workflow
spec:
  description: "Dynamically scale agent pools based on queue depth"

  steps:
    - name: check-queue-depth
      task: provisioning.query
      params:
        query: "SELECT queue_depth FROM agent_health WHERE role = '${AGENT_ROLE}'"
      outputs: [queue_depth]

    - name: decide-scaling
      task: provisioning.evaluate
      params:
        condition: |
          if queue_depth > 10 and current_replicas < max_replicas:
            scale_to = min(current_replicas + 2, max_replicas)
            action = "scale_up"
          elif queue_depth < 2 and current_replicas > min_replicas:
            scale_to = max(current_replicas - 1, min_replicas)
            action = "scale_down"
          else:
            action = "no_change"
      outputs: [action, scale_to]

    - name: execute-scaling
      task: provisioning.scale-taskserv
      when: action != "no_change"
      params:
        taskserv: "vapora-agents-${AGENT_ROLE}"
        replicas: "${scale_to}"
      timeout: 5m
```
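
The workspace tree also lists `workflows/upgrade-vapora.yaml`, which is not shown here. A minimal sketch following the same Workflow conventions (step names and the `provisioning.upgrade-taskserv` task identifier are assumptions, not part of the spec):

```yaml
# workflows/upgrade-vapora.yaml (sketch only; task/step names are illustrative)
apiVersion: provisioning/v1
kind: Workflow
metadata:
  name: upgrade-vapora
  namespace: vapora-system
spec:
  description: "Rolling upgrade of VAPORA services with automatic rollback"

  steps:
    - name: upgrade-backend
      task: provisioning.upgrade-taskserv
      params:
        taskserv: vapora-backend
        image: "vapora/backend:${TARGET_VERSION}"
      timeout: 20m
      on_failure: rollback

    - name: upgrade-frontend
      task: provisioning.upgrade-taskserv
      depends_on: [upgrade-backend]
      params:
        taskserv: vapora-frontend
        image: "vapora/frontend:${TARGET_VERSION}"
      timeout: 15m
      on_failure: rollback

    - name: post-upgrade-health
      task: provisioning.health-check
      depends_on: [upgrade-frontend]
      params:
        services: all
        timeout: 5m
      on_failure: rollback

  rollback_on_failure: true
```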

## 🎯 CLI Usage

```bash
cd provisioning/vapora-wrksp

# 1. Create cluster
provisioning cluster create --config kcl/cluster.k

# 2. Deploy full stack
provisioning workflow run workflows/deploy-full-stack.yaml

# 3. Check status
provisioning health-check --services all

# 4. Scale agents
provisioning taskserv scale vapora-agents-developer --replicas 10

# 5. Monitor
provisioning dashboard open          # Grafana dashboard
provisioning logs tail -f vapora-backend

# 6. Upgrade
provisioning taskserv upgrade vapora-backend --image vapora/backend:0.3.0

# 7. Rollback
provisioning taskserv rollback vapora-backend --to-version 0.1.0
```

## 🎯 Implementation Checklist

  • KCL schemas (cluster, services, storage, agents)
  • Taskserv definitions (5 services)
  • Workflows (deploy, scale, upgrade, disaster-recovery)
  • Namespace creation + RBAC
  • PVC provisioning (Rook Ceph)
  • Service discovery (DNS, load balancing)
  • Health checks + readiness probes
  • Logging aggregation (ELK or similar)
  • Secrets management (RustyVault integration)
  • Monitoring (Prometheus metrics export)
  • Documentation + runbooks

## 📊 Success Metrics

  • Full VAPORA stack deployed in < 1 hour
  • All services healthy post-deployment
  • Agent pools scale automatically
  • Rollback works if deployment fails
  • Monitoring captures all metrics
  • Scaling decisions made in < 1 min


**Version**: 0.1.0 | **Status**: Integration Specification Complete | **Purpose**: Provisioning deployment of VAPORA infrastructure