# Woodpecker CI/CD Guide for VAPORA Provisioning

Complete reference for understanding, running, and troubleshooting VAPORA's Woodpecker CI/CD pipelines.

---

## Overview

VAPORA uses five integrated Woodpecker CI/CD pipelines for complete deployment automation. These pipelines are self-hosted alternatives to GitHub Actions, providing full control over infrastructure and execution environment.

### Pipeline Architecture

```
Push to Repository
        ↓
[Validate & Build] - Generates artifacts
        ↓
   ├── Manual Promotion → [Deploy to Docker]
   │         ↓
   │    Health checks → Services running locally
   │
   └── Manual Promotion → [Deploy to Kubernetes] with dry-run
             ↓
        Review changes
             ↓
        Actual deployment
             ↓
[Health Check] (automatic every 15min/6hr)
        ↓
[Rollback] if issues detected
```

---

## Quick Reference

### Pipeline Files

```
.woodpecker/
├── validate-and-build.yml   # Validates configs, generates artifacts
├── deploy-docker.yml        # Deploys to Docker Compose
├── deploy-kubernetes.yml    # Deploys to Kubernetes
├── health-check.yml         # Continuous monitoring (scheduled)
├── rollback.yml             # Safe deployment rollback
├── SETUP.md                 # Installation and configuration
└── WOODPECKER_GUIDE.md      # This file
```

### Pipeline Triggers

| Pipeline | Trigger | Branch | Manual |
|----------|---------|--------|--------|
| **validate-and-build** | Push, PR | main/develop | Yes |
| **deploy-docker** | Manual promotion | main/develop | Yes |
| **deploy-kubernetes** | Manual promotion | main/develop | Yes |
| **health-check** | Cron (15min, 6hr) | Any | Yes |
| **rollback** | Manual promotion | main/develop | Yes |

### Environment Variables

All pipelines use:

```bash
ARTIFACTS_DIR=provisioning/artifacts   # Generated configs
LOG_DIR=provisioning/logs              # Pipeline logs
VAPORA_NAMESPACE=vapora                # K8s namespace
```

---

## Workflows in Detail

### 1. Validate & Build (validate-and-build.yml)

**Purpose**: Validate all configurations and generate deployment artifacts

**Triggers**:
- Push to `main` or `develop` branches (if provisioning files change)
- Manual promotion from Woodpecker UI
- Pull requests affecting provisioning

**Execution Flow**:

```
setup
└─ prepare: Create directories, display info
      ↓
install_dependencies
└─ install_tools: Install Rust, Nushell, Nickel, jinja2, yq
      ↓
validate_solo/multiuser/enterprise (parallel)
└─ validate_*: Run mode-specific validation
      ↓
build_artifacts
├─ install_tools: Reinstall tools (cached layer)
├─ build_artifacts: Run CI pipeline to generate outputs
├─ verify_artifacts: Validate JSON, YAML, TOML formats
└─ generate_manifest: Create README documenting outputs
      ↓
publish
└─ publish_artifacts: Display artifact summary
```

**Duration**: ~5 minutes

**Outputs**:

```
provisioning/artifacts/
├── config-solo.json
├── config-multiuser.json
├── config-enterprise.json
├── vapora-solo.toml/yaml
├── vapora-multiuser.toml/yaml
├── vapora-enterprise.toml/yaml
├── configmap.yaml
├── deployment.yaml
├── docker-compose.yml
└── README.md
```

**Usage**:

```bash
# Automatic (on push)
git commit -m "Update provisioning config"
git push origin main

# Manual (from Woodpecker UI)
1. Go to repository → Latest build
2. Click "Promote" button
3. Select "validate-and-build" pipeline
4. Click "Promote"
```

**Expected Output**:

```
✓ Solo configuration validated
✓ Multiuser configuration validated
✓ Enterprise configuration validated
✓ JSON outputs validated
✓ YAML outputs validated
✓ TOML outputs validated
✓ Manifests generated
✓ Artifacts ready for deployment
```
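The `verify_artifacts` format checks can also be reproduced locally against a downloaded artifacts directory. A minimal sketch, assuming `jq`, `yq`, and Python 3.11+ (for `tomllib`) are available; file names are taken from the listing above, so adjust them if the generated names differ:

```bash
#!/usr/bin/env bash
# Sketch: spot-check the generated artifact formats locally.
set -euo pipefail
cd provisioning/artifacts

# JSON configs must parse cleanly.
for f in config-solo.json config-multiuser.json config-enterprise.json; do
  jq empty "$f" && echo "✓ $f is valid JSON"
done

# YAML manifests and the compose file must parse cleanly.
for f in configmap.yaml deployment.yaml docker-compose.yml; do
  yq eval '.' "$f" > /dev/null && echo "✓ $f is valid YAML"
done

# TOML configs must parse cleanly (tomllib is stdlib in Python 3.11+).
for f in vapora-solo.toml vapora-multiuser.toml vapora-enterprise.toml; do
  python3 -c "import tomllib, sys; tomllib.load(open(sys.argv[1], 'rb'))" "$f" \
    && echo "✓ $f is valid TOML"
done
```

This is not the pipeline's exact script, but it catches the same class of problems before promoting a deployment.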
---

### 2. Deploy to Docker (deploy-docker.yml)

**Purpose**: Deploy VAPORA to Docker Compose for local/staging testing

**Triggers**:
- Manual promotion from a completed validate-and-build build
- Manual promotion from Woodpecker UI

**Execution Flow**:

```
setup
└─ prepare: Display deployment info
      ↓
install_dependencies
└─ install_tools: Install tools + Docker
      ↓
download_artifacts
└─ fetch_latest_artifacts: Get configs from workspace
      ↓
validate_docker_config
└─ validate_compose: Validate docker-compose.yml format
      ↓
deploy_docker_compose
├─ pull_images: Download container images
└─ compose_up: Start services with docker compose
      ↓
health_checks
├─ verify_services: Check HTTP endpoints
└─ collect_logs: Gather service logs
      ↓
verify_endpoints
└─ test_endpoints: Test API calls to running services
      ↓
generate_report
└─ create_deployment_report: Generate deployment summary
      ↓
publish
└─ publish_results & notify_slack
```

**Duration**: ~3 minutes

**Service Endpoints** (after deployment):

```
- Backend API:  http://localhost:8001
- Frontend UI:  http://localhost:3000
- Agents:       http://localhost:8002
- LLM Router:   http://localhost:8003
- SurrealDB:    http://localhost:8000
- Health:       http://localhost:8001/health
```

**Usage**:

```bash
# From Woodpecker UI (after validate-and-build)
1. Go to completed validate-and-build build
2. Click "Promote"
3. Select "deploy-docker"
4. Click "Promote"

# Monitor via Woodpecker UI
1. Go to repository → Active builds
2. Watch deploy-docker build progress
3. Check each stage for logs
```

**Local Testing**:

```bash
# Download artifacts from Woodpecker workspace
# Extract provisioning/artifacts/docker-compose.yml

# Start services
docker compose -f docker-compose.yml up -d

# Check health
curl http://localhost:8001/health

# View logs
docker compose logs -f backend

# Stop services
docker compose down
```

**Verification**:
- ✓ Backend responds at port 8001
- ✓ Frontend accessible at port 3000
- ✓ Agents running at port 8002
- ✓ LLM Router at port 8003
- ✓ SurrealDB at port 8000
- ✓ Health endpoint returns 200 OK

---

### 3. Deploy to Kubernetes (deploy-kubernetes.yml)

**Purpose**: Deploy VAPORA to Kubernetes with dry-run validation

**Triggers**:
- Manual promotion from a completed validate-and-build
- Manual promotion from Woodpecker UI

**Execution Flow**:

```
setup
└─ prepare: Display deployment info
      ↓
install_dependencies
└─ install_tools: Install kubectl, tools
      ↓
configure_kubernetes
├─ setup_kubeconfig_staging/production: Decode kubeconfig
└─ verify_cluster: Test cluster access
      ↓
validate_manifests
├─ validate_kubernetes_manifests: Check manifest validity
└─ dry_run_validation: Kubernetes dry-run check
      ↓
create_namespace
├─ ensure_namespace: Create vapora namespace
└─ setup_rbac: Configure service accounts
      ↓
deploy_configmap
└─ apply_configmap: Deploy configuration
      ↓
deploy_services (with monitoring)
├─ apply_deployments: Deploy all three services
├─ monitor_rollout_backend: Wait for backend ready
├─ monitor_rollout_agents: Wait for agents ready
└─ monitor_rollout_llm_router: Wait for router ready
      ↓
verify_deployment
├─ check_pods: Verify pod status
├─ check_services: Verify service endpoints
├─ collect_logs: Gather deployment logs
└─ annotate_deployment: Add metadata
      ↓
generate_report
└─ create_deployment_report: Generate summary
      ↓
publish
└─ publish_results & notify_slack
```

**Duration**: ~5-10 minutes (includes rollout waits)
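For orientation, the `deploy_configmap` and `deploy_services` stages above boil down to roughly the kubectl sequence below. This is a sketch rather than the pipeline's exact script: the deployment names `vapora-agents` and `vapora-llm-router` are assumed from the service list, and the manifest paths are taken from the artifacts listing.

```bash
# Sketch of the apply-and-monitor sequence (assumed names/paths).
kubectl apply -n vapora -f provisioning/artifacts/configmap.yaml
kubectl apply -n vapora -f provisioning/artifacts/deployment.yaml

# Wait for each rollout, mirroring the monitor_rollout_* steps.
for d in vapora-backend vapora-agents vapora-llm-router; do
  kubectl rollout status deployment/"$d" -n vapora --timeout=300s
done
```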
**Deployment Options**:

```bash
# Via Woodpecker UI Promotion
1. Select environment: staging or production
2. Select deployment mode: solo, multiuser, enterprise
3. Set dry_run: true (first), then false (actual)
4. Set rollout_timeout: 300 (seconds)
```

**Dry-Run Usage** (Recommended):

```bash
# Step 1: Promote with dry-run enabled
Mode: enterprise
Environment: staging
Dry-run: true
Rollout timeout: 300

# Step 2: Review dry-run output in logs
# Check proposed changes to deployments

# Step 3: If satisfied, promote again with dry-run disabled
Dry-run: false

# Step 4: Monitor rollout
# Watch rollout status and pod health
```

**Verification Commands** (after deployment):

```bash
# Check deployments
kubectl get deployments -n vapora

# Check pods
kubectl get pods -n vapora -o wide

# Check services
kubectl get services -n vapora

# View logs
kubectl logs -f deployment/vapora-backend -n vapora

# Check events
kubectl get events -n vapora --sort-by='.lastTimestamp'

# Port forward for local testing
kubectl port-forward -n vapora svc/vapora-backend 8001:8001
curl http://localhost:8001/health

# Check rollout history
kubectl rollout history deployment/vapora-backend -n vapora
```

**Deployment Modes**:

| Mode | Replicas | Resources | Use Case |
|------|----------|-----------|----------|
| **solo** | 1 | Minimal | Development, testing |
| **multiuser** | 2 | Standard | Team/staging environments |
| **enterprise** | 3 | Optimized | Production with HA |

---

### 4. Health Check & Monitoring (health-check.yml)

**Purpose**: Continuous health monitoring across Docker and Kubernetes

**Triggers**:
- Schedule: Every 15 minutes (quick check)
- Schedule: Every 6 hours (comprehensive diagnostics)
- Manual promotion from Woodpecker UI

**Execution Flow**:

```
setup
└─ prepare: Display check info
      ↓
install_dependencies
└─ install_tools: Install kubectl, Docker tools
      ↓
configure_kubernetes
└─ setup_kubeconfig: Configure cluster access
      ↓
health_check_docker (if available)
├─ check_docker_containers: Container status
├─ check_docker_endpoints: HTTP health checks
└─ collect_docker_diagnostics: System resource info
      ↓
health_check_kubernetes
├─ check_k8s_deployments: Deployment replica status
├─ check_k8s_services: Service endpoints
├─ check_k8s_events: Recent cluster events
└─ collect_pod_logs: Application logs
      ↓
analyze_health
├─ generate_health_report: Create summary
└─ check_health_status: Determine overall status
      ↓
publish
└─ publish_reports & notify_slack
```

**Duration**: ~5 minutes

**Checked Resources** (Docker):
- Container status (Up/Down)
- HTTP endpoints (8001, 8002, 8003, 3000, 8000)
- Network connectivity
- Resource usage

**Checked Resources** (Kubernetes):
- Deployment replica status
- Pod readiness conditions
- Service availability
- ConfigMap data
- Recent cluster events
- Pod logs (last 100 lines)

**Reports Generated**:

```
provisioning/logs/health-checks/
├── docker-containers.log
├── docker-endpoints.log
├── docker-diagnostics.log
├── k8s-deployments.log
├── k8s-services.log
├── k8s-events.log
├── k8s-diagnostics.log
├── pods/
│   ├── backend.log
│   ├── agents.log
│   └── llm-router.log
└── HEALTH_REPORT.md
```

**Manual Trigger**:

```bash
# From Woodpecker UI
1. Click "Promote" on any completed build
2. Select "health-check" pipeline
3. Click "Promote"

# View results
1. Wait for build to complete
2. Check "Artifacts" for health reports
3. Review pod logs for errors
```

**Alert Conditions**:
- ❌ Pod in CrashLoopBackOff state
- ❌ Endpoint not responding
- ❌ Service not running
- ❌ Recent error events in cluster
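Between scheduled runs, the same alert conditions can be checked ad hoc from a workstation. The sketch below assumes the Docker endpoints listed earlier (adjust the paths to whatever each service actually serves) and reuses the `SLACK_WEBHOOK_ALERTS` secret described under Configuration & Secrets; posting to Slack is optional.

```bash
#!/usr/bin/env bash
# Sketch: ad-hoc version of the conditions the health-check pipeline alerts on.
set -u
failures=()

# HTTP endpoints checked by the Docker health stage.
for url in http://localhost:8001/health http://localhost:8002 \
           http://localhost:8003 http://localhost:3000 http://localhost:8000; do
  curl -fsS --max-time 5 "$url" > /dev/null || failures+=("endpoint down: $url")
done

# Pods stuck in a bad state (CrashLoopBackOff shows in the STATUS column).
bad_pods=$(kubectl get pods -n vapora --no-headers 2>/dev/null \
  | grep -E 'CrashLoopBackOff|ImagePullBackOff|Error|Pending' || true)
[ -n "$bad_pods" ] && failures+=("unhealthy pods: see kubectl get pods -n vapora")

if [ "${#failures[@]}" -gt 0 ]; then
  printf '%s\n' "${failures[@]}"
  # Optional Slack alert, only if the webhook URL is exported locally.
  if [ -n "${SLACK_WEBHOOK_ALERTS:-}" ]; then
    curl -fsS -X POST -H 'Content-Type: application/json' \
      -d "{\"text\":\"VAPORA health check: ${#failures[@]} issue(s) detected\"}" \
      "$SLACK_WEBHOOK_ALERTS" > /dev/null
  fi
  exit 1
fi
echo "✓ All health checks passed"
```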
---

### 5. Rollback Deployment (rollback.yml)

**Purpose**: Safe deployment rollback with pre-checks and verification

**Triggers**:
- Manual promotion only (safety feature)

**Execution Flow**:

```
pre_rollback_checks
└─ verify_environment: Confirm rollback parameters
      ↓
install_dependencies
└─ install_tools: Install kubectl, tools
      ↓
configure_kubernetes
└─ setup_kubeconfig: Configure target cluster
      ↓
store_deployment_history
└─ snapshot_current_state: Backup current deployments
      ↓
kubernetes_rollback
├─ perform_rollback: Execute kubectl rollout undo
├─ verify_rollback: Check rollback status
└─ check_pod_health: Verify pod readiness
      ↓
docker_rollback_guide
├─ generate_docker_guide: Create manual instructions
└─ store_docker_state: Backup docker-compose.yml
      ↓
post_rollback_verification
└─ generate_rollback_report: Create summary
      ↓
publish
└─ publish_artifacts & notify_slack
```

**Duration**: ~3-5 minutes

**Rollback Parameters**:

```yaml
Target:
  - kubernetes    # Automatic K8s rollback
  - docker        # Guided Docker rollback

Environment:
  - staging       # Staging cluster
  - production    # Production cluster

Deployment:
  - all           # Rollback all services
  - backend       # Rollback specific service
  - agents
  - llm-router

Revision:
  - 0             # Previous revision (default)
  - 1, 2, 3...    # Specific revision number
```

**Usage**:

```bash
# Kubernetes Rollback (Automatic)
1. Go to Woodpecker UI
2. Click "Promote"
3. Select "rollback" pipeline
4. Set:
   - Target: kubernetes
   - Environment: production
   - Deployment: all
   - Revision: 0 (previous)
5. Click "Promote"
6. Monitor rollout status

# Docker Rollback (Manual Guide)
1. Follow generated DOCKER_ROLLBACK_GUIDE.md
2. Execute git/docker commands as instructed
3. Verify services running with health checks
```

**Verification After Rollback**:

```bash
# Kubernetes
kubectl get pods -n vapora
kubectl logs -f deployment/vapora-backend -n vapora
kubectl rollout history deployment/vapora-backend -n vapora

# Docker
docker compose ps
docker compose logs -f
curl http://localhost:8001/health
```

**Rollback History**:

```bash
# View deployment revisions
kubectl rollout history deployment/vapora-backend -n vapora

# Output example:
REVISION  CHANGE-CAUSE
1         <none>
2         Deployment rolled out
3         Deployment rolled out

# Find the working revision and use that number
```
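The pipeline's `perform_rollback` step wraps `kubectl rollout undo`. If Woodpecker itself is unavailable, the same rollback can be done by hand; a sketch, using the `vapora-backend` deployment name from the examples above (the `--to-revision` number is illustrative — pick it from the rollout history):

```bash
# Roll back to the previous revision (equivalent of Revision: 0).
kubectl rollout undo deployment/vapora-backend -n vapora

# Or pin a specific revision found via `kubectl rollout history`.
kubectl rollout undo deployment/vapora-backend -n vapora --to-revision=2

# Verify the rollback the same way the pipeline does.
kubectl rollout status deployment/vapora-backend -n vapora --timeout=300s
kubectl get pods -n vapora
```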
---

## Integration Patterns

### Pattern 1: Automatic Validation on Every Push

```
Developer pushes feature branch
      ↓
Git webhook triggers Woodpecker
      ↓
[Validate & Build] runs automatically
      ↓
Artifacts generated in workspace
      ↓
Build completes (visible in Woodpecker UI)
```

### Pattern 2: Staging Deployment

```
1. Merge PR to develop branch
      ↓
2. [Validate & Build] runs automatically
      ↓
3. In Woodpecker UI → Promote to deploy-kubernetes
   - Mode: multiuser
   - Environment: staging
   - Dry-run: true
      ↓
4. Review dry-run output
      ↓
5. Promote again with dry-run: false
      ↓
6. [Health Check] runs (automatic within 15min)
      ↓
7. Staging live
```

### Pattern 3: Production Deployment

```
1. Code review approved
      ↓
2. Merge PR to main branch
      ↓
3. [Validate & Build] runs automatically
      ↓
4. In Woodpecker UI → Promote to deploy-kubernetes
   - Mode: enterprise
   - Environment: production
   - Dry-run: true
      ↓
5. **CAREFULLY** review all changes
      ↓
6. Promote again with dry-run: false
      ↓
7. [Health Check] monitoring starts (every 6 hours)
      ↓
8. Production deployment complete
```

### Pattern 4: Emergency Rollback

```
1. Production issue detected
      ↓
2. [Health Check] alerts in Slack (if configured)
      ↓
3. In Woodpecker UI → Promote to rollback
   - Target: kubernetes
   - Environment: production
   - Deployment: all
   - Revision: 0 (previous)
      ↓
4. Monitor rollout status
      ↓
5. Services restored
      ↓
6. Investigate root cause
      ↓
7. Plan corrected deployment
```

---

## Configuration & Secrets

### Secrets Required

```bash
# Kubernetes kubeconfigs (base64 encoded)
KUBE_CONFIG_STAGING       # For staging deployments
KUBE_CONFIG_PRODUCTION    # For production deployments

# Optional: Slack notifications
SLACK_WEBHOOK             # General notifications
SLACK_WEBHOOK_ALERTS      # Critical alerts only
```

### Adding Secrets in Woodpecker UI

1. Go to repository → Settings → Secrets
2. Click "Add secret"
3. Enter name: `KUBE_CONFIG_STAGING`
4. Paste the base64-encoded kubeconfig value
5. Click "Add"
6. Repeat for other secrets

### Encoding Kubeconfig

```bash
# Get kubeconfig and encode
cat ~/.kube/config | base64

# Verify locally before adding: decode to a file and point kubectl at it
echo "base64_value_here" | base64 -d > /tmp/kubeconfig-test
kubectl --kubeconfig=/tmp/kubeconfig-test cluster-info
```

### Environment Variables Available in Pipelines

```bash
# Woodpecker System Variables
CI_BUILD_LINK       # Link to build in UI
CI_COMMIT_SHA       # Full commit hash
CI_COMMIT_BRANCH    # Branch name
CI_COMMIT_AUTHOR    # Commit author

# Pipeline-Defined Variables
ARTIFACTS_DIR       # provisioning/artifacts
LOG_DIR             # provisioning/logs
VAPORA_NAMESPACE    # vapora (K8s namespace)
```
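As an example of how these variables are typically consumed, the `annotate_deployment` step in the Kubernetes pipeline can record build metadata on a deployment. A sketch of what such a step might run (the annotation keys here are illustrative, not the pipeline's actual ones):

```bash
# Sketch: tag the deployment with the commit and build that produced it.
kubectl annotate deployment/vapora-backend -n "$VAPORA_NAMESPACE" --overwrite \
  vapora.dev/commit="$CI_COMMIT_SHA" \
  vapora.dev/branch="$CI_COMMIT_BRANCH" \
  vapora.dev/build="$CI_BUILD_LINK"
```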
---

## Monitoring & Troubleshooting

### Checking Build Status

**Via Woodpecker UI**:
1. Go to repository page
2. See "Active builds" and "Previous builds"
3. Click a build to see pipeline execution
4. Click a stage to see detailed logs

**Via Terminal**:

```bash
# If using woodpecker-cli
woodpecker-cli build list -r owner/repo

# View specific build
woodpecker-cli build view -r owner/repo -b <build-number>

# Watch build live
woodpecker-cli build watch -r owner/repo -b <build-number>
```

### Accessing Logs

**From Woodpecker UI**:
1. Click build → see stages
2. Click stage → see full logs
3. Scroll through logs or search

**From Workspace**:

```bash
# Logs persisted in workspace (visible as artifacts)
provisioning/logs/
├── validate-solo.log
├── build.log
├── docker/
├── kubernetes/
└── health-checks/
```

### Common Issues

#### Issue 1: "Pipeline not triggering"

**Symptoms**: Push doesn't start validate-and-build

**Diagnose**:
1. Check webhook in GitHub settings
2. Verify repository authorized in Woodpecker
3. Check file paths match `trigger.paths.include`
4. Review Woodpecker logs: `WOODPECKER_LOG_LEVEL=debug`

**Fix**:

```bash
# Manually re-authorize in Woodpecker UI
# Settings → Repositories → VAPORA → Activate

# Test webhook
curl -X POST https://your-woodpecker/hook \
  -H "X-GitHub-Event: push" \
  -d '{"ref":"refs/heads/main"}'
```

#### Issue 2: "Secret not found"

**Symptoms**: Stage fails with "secret not found"

**Diagnose**:
1. Go to repository → Settings → Secrets
2. Verify secret exists and name matches exactly
3. Check secret value is not empty

**Fix**:

```bash
# Re-add secret in UI
# Make sure spelling is exact (case-sensitive)

# Test secret locally
echo "secret_value" | base64 -d
```

#### Issue 3: "Kubeconfig decode error"

**Symptoms**: `base64: invalid input` during kubectl setup

**Diagnose**:
1. Check if base64 value is valid
2. Test decode locally

**Fix**:

```bash
# Test locally first: decode to a file and point kubectl at it
echo "kube_config_base64_value" | base64 -d > /tmp/kubeconfig-test
kubectl --kubeconfig=/tmp/kubeconfig-test cluster-info

# If invalid, re-encode
cat ~/.kube/config | base64

# Add to Woodpecker secret
```

#### Issue 4: "Deployment timeout"

**Symptoms**: Waiting for pod readiness times out

**Diagnose**:
1. Check pod logs: `kubectl logs -n vapora <pod-name>`
2. Check pod events: `kubectl describe pod -n vapora <pod-name>`
3. Check resource constraints

**Fix**:

```bash
# Increase timeout in deploy-kubernetes.yml
rollout_timeout: 600  # 10 minutes

# Check pod logs for errors
kubectl logs -n vapora deployment/vapora-backend --tail=50

# Check resource availability
kubectl top nodes
kubectl top pods -n vapora
```

#### Issue 5: "Docker connection failed"

**Symptoms**: `Cannot connect to Docker daemon` in deploy-docker

**Diagnose**:
1. Check Docker socket mounted
2. Verify Docker daemon running

**Fix**:

```bash
# Verify socket mounted in agent
docker exec woodpecker-agent ls -la /var/run/docker.sock

# Test Docker access
docker ps

# Restart Docker if needed
sudo systemctl restart docker
```

---

## Performance Tuning

### Parallel Validation

Validation stages run in parallel (solo, multiuser, enterprise):

```yaml
validate_solo:
  depends_on: [install_dependencies]
  # Runs while multiuser and enterprise also run

validate_multiuser:
  depends_on: [install_dependencies]
  # All three in parallel, not sequential
```

**Impact**: Validation is roughly 3× faster than running the three modes sequentially

### Caching

Tool installation caches automatically:

```bash
# First run: downloads and installs
- cargo install nu --locked

# Subsequent runs: uses cached Docker layer
```

### Workspace Cleanup

The workspace persists between builds. To reclaim space:

1. Delete old workspace volumes
2. Configure a retention policy in Woodpecker
3. Use `docker volume prune` carefully

---

## Security Considerations

### Secret Management

✅ **Best Practices**:
- Store all sensitive values as secrets
- Use environment-specific secrets (staging vs prod)
- Rotate secrets quarterly
- Never log secret values
- Use unique kubeconfigs per environment

❌ **Anti-Patterns**:
- Hardcoding secrets in YAML
- Using the same secret for all environments
- Storing secrets in git history
- Logging secret values during debug

### RBAC & Access Control

```bash
# Kubernetes: Limit service account permissions
kubectl create serviceaccount vapora-deployer -n vapora

# Assign minimal necessary permissions
kubectl create role vapora-deployer -n vapora \
  --verb=get,list,watch,create,update,patch \
  --resource=deployments,configmaps,pods

# Bind role to service account
kubectl create rolebinding vapora-deployer -n vapora \
  --role=vapora-deployer \
  --serviceaccount=vapora:vapora-deployer
```
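To confirm the role grants only what is intended, `kubectl auth can-i` can be run against the service account (impersonation requires cluster-admin-level rights on the caller's side); a quick sketch:

```bash
# Should succeed: a permission the deployer needs
kubectl auth can-i update deployments \
  --as=system:serviceaccount:vapora:vapora-deployer -n vapora

# Should be denied: verbs and resources outside the granted set
kubectl auth can-i delete deployments \
  --as=system:serviceaccount:vapora:vapora-deployer -n vapora
kubectl auth can-i create secrets \
  --as=system:serviceaccount:vapora:vapora-deployer -n vapora
```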
### Pipeline Execution

- Pipelines run in isolated Docker containers
- Limited to the workspace directory
- No access to host filesystem (unless mounted)
- Network isolation between stages possible

---

## Advanced Topics

### Custom Pipeline Parameters

Use Woodpecker promotions to pass parameters:

```yaml
deploy-kubernetes.yml:
  environment:
    - Deploy_Environment    # Read from promotion UI
    - Rollback_Target
    - Rollback_Revision
```

### Multi-Agent Setup

Deploy multiple agents for distributed execution:

```bash
# Agent 1: Docker builds
- WOODPECKER_FILTER_LABELS=type:docker

# Agent 2: Kubernetes operations
- WOODPECKER_FILTER_LABELS=type:kubernetes

# In pipeline, require specific agent labels:
- type:kubernetes
```

### Conditional Execution

Skip stages based on conditions:

```yaml
deploy-production:
  when:
    evaluate: 'return build.Deploy_Environment == "production"'
  # Only runs if Deploy_Environment is production
```

---

## Comparison with GitHub Actions

### Feature Comparison

| Feature | Woodpecker | GitHub Actions |
|---------|------------|----------------|
| **Hosting** | Self-hosted | GitHub-hosted |
| **Infrastructure Control** | ✓ Full control | Limited |
| **YAML Syntax** | Similar but different | GitHub-specific |
| **PR Integration** | Limited | Native |
| **Manual Dispatch** | Via promotions | workflow_dispatch |
| **Secrets Management** | Built-in UI | GitHub secrets |
| **Artifact Storage** | Workspace + volumes | Actions API |
| **Cost (self-hosted)** | Infrastructure only | GitHub minutes quota |
| **Dry-run Support** | ✓ First-class | Manual pattern |

### When to Choose Woodpecker

✓ Want to self-host CI/CD
✓ Need full infrastructure control
✓ Prefer to avoid vendor lock-in
✓ Have compliance/data residency requirements
✓ Want unified CI/CD across multiple repositories

### When to Choose GitHub Actions

✓ Want GitHub-hosted runners
✓ Prefer tight GitHub integration
✓ Want PR comments and status checks
✓ Don't want infrastructure overhead

---

## Support & Resources

- **Woodpecker Documentation**: https://woodpecker-ci.org/docs
- **VAPORA Repository**: https://github.com/your-org/vapora
- **GitHub Actions Guide**: `./../.github/GITHUB_ACTIONS_GUIDE.md`
- **Nushell Scripts**: `provisioning/scripts/*.nu`

---

## Quick Start Checklist

- [ ] Install Woodpecker server
- [ ] Configure GitHub OAuth app
- [ ] Authorize VAPORA repository
- [ ] Add `KUBE_CONFIG_STAGING` secret
- [ ] Add `KUBE_CONFIG_PRODUCTION` secret
- [ ] Test: Push to feature branch
- [ ] Verify: validate-and-build completes
- [ ] Test: Promote to deploy-docker
- [ ] Test: Promote to deploy-kubernetes (dry-run)
- [ ] Configure: Slack webhooks (optional)
- [ ] Document: Team runbooks

---

**Generated**: 2026-01-12
**Status**: Production-ready
**Pipelines**: 5 comprehensive workflows
**Documentation**: Complete reference guide